http://jgtao.me/content/10-10-16/
[ "# A Circuit for H-Bridge Logic Reduction\n\nHere is a control circuit for reducing the four-wire H-bridge control logic to only two wires. As shown by Figure 1, the inputs are A and B, while the outputs are A, B, X, and Y.\n\nThis circuit\n\n• prevents a FET H-bridge from being driven incorrectly,\n• allows a controller to fully control an H-bridge with only two wires,\n• is compatible low input logic voltages,\n• is compatible with high supply voltages (limited by BJT C-E breakdown voltage),\n• only requires 2 transistors, commonly available in a single pre-biased dual-transistor package.\n\n## Introduction\n\nAn H-bridge is a circuit for applying a positive voltage, a negative voltage, a short, or an open float across a circuit. These four states are derived from how the controller drives the four transistors that make up the H-bridge. However, the four binary inputs to the H-bridge transistors allows for 2^4 = 16 different states, some of which are redundant, invalid, or destructive.\n\nThe control circuit described here eliminates 12 undesired states, while providing the user access to four useful states, with an addition of two control transistors.\n\n## Principle of Operation\n\nThe circuit works by allowing the user to directly control the bottom H-bridge transistors while computing logic to drive the top transistors based on the input of the lower transistors.\n\nThe upper H-bridge transistors only turn on when there is a differential signal from the two inputs to the lower transistors. The circuit rejects common mode control for the upper H-bridge transistors. In other words, the signals to the bottom H-bridge transistors must be different from each other for any of the top transistors to turn on.\n\n## Walk-through of the States\n\nWhen inputs A and B are described as \"high,\" it means that they provide a voltage that can turn on the two control transistors. \"Low\" means that the control transistors are off. The threshold voltage can be set by the resistors R1, R2, R5, and R6. For example, R1 = 10 kOhms and R2 = 10 kOhms would allow Q1 to turn on from inputs that are higher than 1.4 V (two times a Vbe of 0.7 V).\n\nIf the H-bridge is used to control a DC motor, the \"open\" state corresponds to letting the motor coast, while the \"short\" state corresponds to dynamic braking of the motor.\n\nRefer back to the schematics for the following walk-through of the states.\n\n### The Open State\n\nThe open state applies an open float across the load R7. In this state, input B and input A are both low. Q4 and Q6 are also both off due to direct control by the inputs.\n\nSince input B and input A are the same, there is zero base-emitter voltage both Q1 and Q2, so Q1 and Q2 remain off. This means the collectors of Q1 and Q2 are both pulled up to VCC, which turns off Q3 and Q5.\n\n### The Forward State\n\nThe forward state applies a positive voltage across the load R7. In this state, input B is high and input A is low. Q6 is on, while Q4 is off.\n\nBecause input B is higher than input A, Q1's base-emitter junction is forward biased while Q2's base-emitter junction is reversed biased. Q1 turns on while Q2 is off. Q1's collector voltage is pulled low while Q2's collector voltage is high. In this way, Q3 turns on while Q5 remains off. A path of current through the load R7 is formed by Q3 and Q6.\n\n### The Backward State\n\nThe backward state applies a negative voltage across the load R7, In this state, Input B is low and input A is high. 
Q6 is off, while Q4 is on.\n\nThe backward state is essentially the forward state in reverse. Q1 is off while Q2 turns on, which means Q3 is off and Q5 turns on. A path of current through the load R7 is formed by Q5 and Q4.\n\n### The Short State\n\nThe short state applies a short across the load R7. In this state, input B and input A are both high. Q4 and Q6 are also both on.\n\nSince input B and input A are the same, there is zero base-emitter voltage on both Q1 and Q2, so Q1 and Q2 remain off. Thus, both Q3 and Q5 are off. As a result, Q4 and Q6 pull both sides of the load R7 toward ground and short R7.\n\n## Implementation Notes\n\nThe lower H-bridge FETs need a Vgs threshold that is lower than the user's input voltage. The upper H-bridge FETs need a Vgs threshold that is lower than VCC-Vce(sat), where Vce(sat) is the control transistors' saturation voltage.\n\nThe two control BJTs can be arbitrarily chosen as long as the bias resistors R1, R2, R5, and R6 allow a given input to easily turn on the BJTs. A recommendation is the MUN5211 pre-biased dual-transistor package, which implements most of the circuit in a single part, excluding the collector resistors.\n\nThe collector resistors of the control BJTs should be chosen to maximize transition speed without excessive current consumption. A small resistance should be used if the collectors are directly connected to the upper H-bridge FETs, while a large resistance can be used if a gate driver is used.", null, "Figure 5. The MUN5211 pre-biased dual transistor\n\n## Notes\n\nWritten on the 10th of October in 2016" ]
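As an appendix, the two-input mapping described in the walk-through is easy to sanity-check in software. Below is a minimal Python sketch of the idealized logic (my own illustration, not part of the original circuit description): the inputs drive the low-side gates directly, and each high-side gate turns on only for its one differential input combination. Transistor names follow the article; which high-side and low-side FETs share a leg is inferred from the current paths given above.

```python
# Idealized model of the two-wire H-bridge control logic described above.
# Inputs A and B are booleans; outputs are the gate states of the four
# H-bridge FETs (Q3/Q5 high side, Q4/Q6 low side, per the article's names).

def hbridge_gates(a: bool, b: bool):
    q6 = b            # low side, driven directly by input B
    q4 = a            # low side, driven directly by input A
    q3 = b and not a  # high side, on only for the "forward" differential input
    q5 = a and not b  # high side, on only for the "backward" differential input
    return q3, q4, q5, q6

STATE_NAMES = {
    (False, False): "open (coast)",
    (False, True):  "forward",
    (True, False):  "backward",
    (True, True):   "short (brake)",
}

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            q3, q4, q5, q6 = hbridge_gates(a, b)
            # Shoot-through would require a high-side and low-side FET on the
            # same leg to conduct at once (Q3 with Q4, or Q5 with Q6, based on
            # the current paths Q3+Q6 and Q5+Q4 in the walk-through).
            assert not (q3 and q4) and not (q5 and q6)
            print(f"A={a:d} B={b:d} -> Q3={q3:d} Q4={q4:d} Q5={q5:d} Q6={q6:d}"
                  f"  [{STATE_NAMES[(a, b)]}]")
```

Running the loop prints all four states and confirms that none of the 16 raw drive combinations with shoot-through potential can be produced from the two-wire interface.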
[ null, "http://jgtao.me/content/10-10-16/img/MUN5211.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92697287,"math_prob":0.9331341,"size":4344,"snap":"2020-24-2020-29","text_gpt3_token_len":1087,"char_repetition_ratio":0.15645161,"word_repetition_ratio":0.111809045,"special_character_ratio":0.23526703,"punctuation_ratio":0.10250818,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9879732,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-02T09:52:04Z\",\"WARC-Record-ID\":\"<urn:uuid:09801d8b-fb91-4d5f-bfa0-8a8fb63548ba>\",\"Content-Length\":\"7706\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b0db9395-4c76-40c7-ae60-c73e469a122e>\",\"WARC-Concurrent-To\":\"<urn:uuid:668d0127-08cb-4cc6-a45f-072a39239865>\",\"WARC-IP-Address\":\"185.199.111.153\",\"WARC-Target-URI\":\"http://jgtao.me/content/10-10-16/\",\"WARC-Payload-Digest\":\"sha1:YKRJFTF22XGMZKC6LRYX4L4ZE6D2MJJJ\",\"WARC-Block-Digest\":\"sha1:OB4R3KKC5ITIANQURX3MUASMI2J3P3MV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655878639.9_warc_CC-MAIN-20200702080623-20200702110623-00575.warc.gz\"}"}
https://mathoverflow.net/questions/353546/lie-group-topological-group-action-on-differentiable-stack-topological-stack
[ "# Lie group (topological group) action on differentiable stack (topological stack)\n\nLet $$G$$ be a Lie group and $$\\mathcal{D}$$ be a differentiable stack (I am also ok to start with a topological group and topological stack).\n\nI have seen someone mentioning somewhere that the notion of group action on stacks appeared first in “Group Actions on Stacks and Applications” by M. Romagny (correct me if I am wrong). The below definition of group action on a differentiable stack is from Group actions on stacks and applications to equivariant string topology for stacks by Gregory Ginot and Behrang Noohi.\n\nA Lie group action on a differentiable stack is given by a morphism stacks $$\\alpha: G\\times \\mathcal{D}\\rightarrow \\mathcal{D}$$ satisfying some conditions. Though they did not specify, I am believe that by $$G$$ they mean the stack $$[*/G]$$, so an action of a Lie group $$G$$ on a differentiable stack $$\\mathcal{D}$$ is a morphism of stacks $$\\alpha: [*/G]\\times \\mathcal{D}\\rightarrow \\mathcal{D}$$ satisfying some conditions (correct me if I am wrong).\n\nQuestions :\n\n1. Is there any notion of Lie group $$G$$ action on a Lie groupoid $$[\\mathcal{G}_1\\rightrightarrows \\mathcal{G}_0]$$? Would a pair of maps $$(G\\times \\mathcal{G}_1\\rightarrow \\mathcal{G}_1, G\\times \\mathcal{G}_0\\rightarrow \\mathcal{G}_0)$$ giving an action of Lie group on the manifolds $$\\mathcal{G}_1,\\mathcal{G}_0$$ compatible with source, target etc maps of Lie groupoid, a good notion of Lie group action on a manifold?\n2. Is the notion of Lie group action on a differentiable stack mentioned above deduced/inspired from some notion of Lie group action on a Lie groupoid, in the sense that this notion of Lie group action on Lie groupoid is Moria invariant giving an action of Lie group on a differentiable stack?\n3. Is this definition of Lie group action on a differentiable stack directly/indirectly related to the notion of action of a group object on an object of a category as mentioned in Definition $$2.15$$ of Notes on Grothendieck topologies, fibered categories and descent theory?\n• Thanks @YCor for the edit :) – Praphulla Koushik Feb 26 at 0:26\n• I am not sure that $[*/G]$ is the correct object to consider. If you want to recover the action of a group on a manifold in the case that $\\mathcal D$ is just a manifold, you really need $G$ to be a Lie group, that is a manifold with extra structure maps (like $G\\times G\\to G$). I don't think $[*/G]$ would do the job. – Sebastian Goette Feb 26 at 19:45\n• @SebastianGoette That seem to be correct.. :) :) Thank you.. How should I interpret the map $G\\times \\mathcal{D}\\rightarrow \\mathcal{D}$ as? – Praphulla Koushik Feb 27 at 3:13\n• I was hoping to see a good answer to your question by somebody else ... – Sebastian Goette Feb 28 at 9:54\n• @SebastianGoette Oh. Please let me know if you have any favorite reference for this set up? – Praphulla Koushik Mar 1 at 16:04" ]
https://www.slideserve.com/brandon-rice/cs-3343-analysis-of-algorithms
[ "", null, "Download", null, "Download Presentation", null, "CS 3343: Analysis of Algorithms\n\n# CS 3343: Analysis of Algorithms\n\nDownload Presentation", null, "## CS 3343: Analysis of Algorithms\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - E N D - - - - - - - - - - - - - - - - - - - - - - - - - - -\n##### Presentation Transcript\n\n1. CS 3343: Analysis of Algorithms Review for final\n\n2. Final Exam • Closed book exam • Coverage: the whole semester • Cheat sheet: you are allowed one letter-size sheet, both sides • Monday, Dec 16, 3:15 – 5:45pm • Basic calculator (no graphing) allowed\n\n3. Final Exam: Study Tips • Study tips: • Study each lecture • Study the homework and homework solutions • Study the midterm exams • Re-make your previous cheat sheets\n\n4. Topics covered (1) By reversed chronological order: • Graph algorithms • Representations • MST (Prim’s, Kruskal’s) • Shortest path (Dijkstra’s) • Running time analysis with different implementations • Greedy algorithm • Unit-profit restaurant location problem • Fractional knapsack problem • Prim’s and Kruskal’s are also examples of greedy algorithms • Greedy algorithm • Unit-profit restaurant location problem • Fractional knapsack problem • Prim’s and Kruskal’s are also examples of greedy algorithms • How to show that certain greedy choices are optimal\n\n5. Topics covered (2) • Dynamic programming • LCS • Restaurant location problem • Shortest path problem on a grid • Other problems • How to define recurrence solution, and use dynamic programming to solve it • Binary heap and priority queue • Heapify, buildheap, insert, exatractMax, changeKey • Running time\n\n6. Topics covered (3) • Order statistics • Rand-Select • Worst-case Linear-time selection • Running time analysis • Sorting algorithms • Insertion sort • Merge sort • Quick sort • Heap sort • Linear time sorting: counting sort, radix sort • Stability of sorting algorithms • Worst-case and expected running time analysis • Memory requirement of sorting algorithms\n\n7. Topics covered (4) • Analysis • Order of growth • Asymptotic notation, basic definition • Limit method • L’ Hopital’s rule • Stirling’s formula • Best case, worst case, average case • Analyzing non-recursive algorithms • Arithmetic series • Geometric series • Analyzing recursive algorithms • Defining recurrence • Solving recurrence • Recursion tree (iteration) method • Substitution method • Master theorem\n\n8. Review for finals • In chronological order • Only the more important concepts • Very likely to appear in your final • Does not mean to be exclusive\n\n9. Asymptotic notations • O: Big-Oh • Ω: Big-Omega • Θ: Theta • o: Small-oh • ω: Small-omega • Intuitively: O is like  o is like <  is like   is like >  is like =\n\n10. Big-Oh • Math: • O(g(n)) = {f(n):  positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n)  n>n0} • Or: lim n→∞ g(n)/f(n) > 0 (if the limit exists.) • Engineering: • g(n) grows at least as faster as f(n) • g(n) is an asymptotic upper bound of f(n) • Intuitively it is like f(n) ≤ g(n)\n\n11. Big-Oh • Claim: f(n) = 3n2 + 10n + 5  O(n2) • Proof: 3n2 + 10n + 5  3n2 + 10n2 + 5n2 when n >118 n2 when n >1 Therefore, • Let c = 18 and n0 = 1 • We have f(n)  c n2,  n > n0 • By definition, f(n)  O(n2)\n\n12. Big-Omega • Math: • Ω(g(n)) = {f(n):  positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n)  n>n0} • Or: lim n→∞ f(n)/g(n) > 0 (if the limit exists.) 
• Engineering: • f(n) grows at least as faster as g(n) • g(n) is an asymptotic lower bound of f(n) • Intuitively it is like g(n) ≤ f(n)\n\n13. Big-Omega • f(n) = n2 / 10 = Ω(n) • Proof: f(n) = n2 / 10, g(n) = n • g(n) = n ≤ n2 / 10 = f(n) when n > 10 • Therefore, c = 1 and n0 = 10\n\n14. Theta • Math: • Θ(g(n)) = {f(n):  positive constants c1, c2, and n0 such that c1 g(n)  f(n)  c2 g(n)  n  n0  n>n0} • Or: lim n→∞ f(n)/g(n) = c > 0 and c < ∞ • Or: f(n) = O(g(n)) and f(n) = Ω(g(n)) • Engineering: • f(n) grows in the same order as g(n) • g(n) is an asymptotic tight bound of f(n) • Intuitively it is like f(n) = g(n) • Θ(1) means constant time.\n\n15. Theta • Claim: f(n) = 2n2 + n = Θ (n2) • Proof: • We just need to find three constants c1, c2, and n0 such that • c1n2 ≤ 2n2+n ≤ c2n2 for all n > n0 • A simple solution is c1 = 2, c2 = 3, and n0 = 1\n\n16. Using limits to compare orders of growth 0 • lim f(n) / g(n) = c > 0 ∞ f(n)  o(g(n)) f(n)  O(g(n)) f(n) Θ (g(n)) n→∞ f(n)  Ω(g(n)) f(n) ω (g(n))\n\n17. Compare 2n and 3n • lim 2n / 3n = lim(2/3)n = 0 • Therefore, 2n o(3n), and 3nω(2n) n→∞ n→∞\n\n18. L’ Hopital’s rule lim f(n) / g(n) = lim f(n)’ / g(n)’ If both lim f(n) and lim g(n) goes to ∞ n→∞ n→∞\n\n19. Compare n0.5 and logn • lim n0.5 / logn = ? • (n0.5)’ = 0.5 n-0.5 • (log n)’ = 1 / n • lim (n-0.5 / 1/n) = lim(n0.5) = • Therefore, log n  o(n0.5) n→∞ ∞\n\n20. Stirling’s formula (constant)\n\n21. Compare 2n and n! • Therefore, 2n = o(n!)\n\n23. General plan for analyzing time efficiency of a non-recursive algorithm • Decide parameter (input size) • Identify most executed line (basic operation) • worst-case = average-case? • T(n) = i ti • T(n) = Θ (f(n))\n\n24. Analysis of insertion Sort Statement cost time__ InsertionSort(A, n) { for j = 2 to n {c1 n key = A[j] c2 (n-1) i = j - 1; c3 (n-1) while (i > 0) and (A[i] > key) { c4 S A[i+1] = A[i] c5 (S-(n-1)) i = i - 1 c6 (S-(n-1)) } 0 A[i+1] = key c7 (n-1) } 0 }\n\n25. Best case • Array already sorted Inner loop stops when A[i] <= key, or i = 0 i j 1 Key sorted\n\n26. Worst case • Array originally in reverse order Inner loop stops when A[i] <= key i j 1 Key sorted\n\n27. Average case • Array in random order Inner loop stops when A[i] <= key i j 1 Key sorted\n\n28. Find the order of growth for sums • How to find out the actual order of growth? • Remember some formulas • Learn how to guess and prove\n\n29. Arithmetic series • An arithmetic series is a sequence of numbers such that the difference of any two successive members of the sequence is a constant. e.g.: 1, 2, 3, 4, 5 or 10, 12, 14, 16, 18, 20 • In general: Recursive definition Closed form, or explicit formula Or:\n\n30. Sum of arithmetic series If a1, a2, …, an is an arithmetic series, then\n\n31. Geometric series • A geometric series is a sequence of numbers such that the ratio between any two successive members of the sequence is a constant. e.g.: 1, 2, 4, 8, 16, 32 or 10, 20, 40, 80, 160 or 1, ½, ¼, 1/8, 1/16 • In general: Recursive definition Closed form, or explicit formula Or:\n\n32. Sum of geometric series if r < 1 if r > 1 if r = 1\n\n33. Important formulas Remember them, or remember where to find them!\n\n34. Sum manipulation rules Example:\n\n35. Recursive algorithms • General idea: • Divide a large problem into smaller ones • By a constant ratio • By a constant or some variable • Solve each smaller onerecursively or explicitly • Combine the solutions of smaller ones to form a solution for the original problem Divide and Conquer\n\n36. 
How to analyze the time-efficiency of a recursive algorithm? • Express the running time on input of size n as a function of the running time on smaller problems\n\n37. Sloppiness:Should be T( n/2 ) + T( n/2) , but it turns out not to matter asymptotically. Analyzing merge sort T(n) Θ(1) 2T(n/2) f(n) MERGE-SORTA[1 . . n] • If n = 1, done. • Recursively sort A[ 1 . . n/2 ] and A[ n/2+1 . . n ] . • “Merge” the 2 sorted lists\n\n38. Analyzing merge sort • Divide: Trivial. • Conquer: Recursively sort 2 subarrays. • Combine: Merge two sorted subarrays T(n) = 2T(n/2) + f(n) +Θ(1) # subproblems Work dividing and Combining subproblem size • What is the time for the base case? • What is f(n)? • What is the growth order of T(n)? Constant\n\n39. Solving recurrence • Running time of many algorithms can be expressed in one of the following two recursive forms or Challenge: how to solve the recurrence to get a closed form, e.g. T(n) = Θ (n2) or T(n) = Θ(nlgn), or at least some bound such as T(n) = O(n2)?\n\n40. Solving recurrence • Recurrence tree (iteration) method - Good for guessing an answer • Substitution method - Generic method, rigid, but may be hard • Master method - Easy to learn, useful in limited cases only - Some tricks may help in other cases\n\n41. The master method The master method applies to recurrences of the form T(n) = aT(n/b) + f(n), where a³ 1, b > 1, and f is asymptotically positive. • Dividethe problem into a subproblems, each of size n/b • Conquer the subproblems by solving them recursively. • Combine subproblem solutions • Divide + combine takes f(n) time.\n\n42. Master theorem T(n) = aT(n/b) + f(n) Key: compare f(n) with nlogba • CASE 1:f(n) = O(nlogba – e) T(n) = Q(nlogba) . • CASE 2:f(n) = Q(nlogba) T(n) = Q(nlogba log n) . • CASE 3:f(n) = W(nlogba + e) and af(n/b) £cf(n) •  T(n) = Q(f(n)) . • e.g.: merge sort: T(n) = 2 T(n/2) + Θ(n) • a = 2, b = 2  nlogba = n •  CASE 2  T(n) = Θ(n log n) .\n\n43. Case 1 Compare f(n) with nlogba: f(n) = O(nlogba – e) for some constant e > 0. : f(n)grows polynomially slower than nlogba (by an ne factor). Solution:T(n) = Q(nlogba) i.e., aT(n/b) dominates e.g. T(n) = 2T(n/2) + 1 T(n) = 4 T(n/2) + n T(n) = 2T(n/2) + log n T(n) = 8T(n/2) + n2\n\n44. Case 3 Compare f(n) with nlogba: f(n) = W(nlogba + e) for some constant e > 0. : f(n)grows polynomially faster than nlogba (by an ne factor). Solution:T(n) = Q(f(n)) i.e., f(n) dominates e.g. T(n) = T(n/2) + n T(n) = 2 T(n/2) + n2 T(n) = 4T(n/2) + n3 T(n) = 8T(n/2) + n4\n\n45. Case 2 Compare f(n) with nlogba: f(n) = Q(nlogba). : f(n)and nlogba grow at similar rate. Solution:T(n) = Q(nlogba log n) e.g. T(n) = T(n/2) + 1 T(n) = 2 T(n/2) + n T(n) = 4T(n/2) + n2 T(n) = 8T(n/2) + n3\n\n46. Recursion tree Solve T(n) = 2T(n/2) + dn, where d > 0 is constant.\n\n47. Recursion tree Solve T(n) = 2T(n/2) + dn, where d > 0 is constant. T(n)\n\n48. dn T(n/2) T(n/2) Recursion tree Solve T(n) = 2T(n/2) + dn, where d > 0 is constant.\n\n49. dn dn/2 dn/2 T(n/4) T(n/4) T(n/4) T(n/4) Recursion tree Solve T(n) = 2T(n/2) + dn, where d > 0 is constant.\n\n50. Recursion tree Solve T(n) = 2T(n/2) + dn, where d > 0 is constant. dn dn/2 dn/2 dn/4 dn/4 dn/4 dn/4 … Q(1)" ]
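The recurrence analyzed in the recursion-tree slides, T(n) = 2T(n/2) + dn, which the master theorem (Case 2) resolves to Θ(n log n), is easy to check empirically. Here is a short Python sketch, my addition rather than part of the transcript, that counts the merge work done by a simple merge sort and compares it against the n·log2(n) prediction:

```python
# Empirical check of T(n) = 2T(n/2) + dn => Theta(n log n),
# using a simple merge sort and counting merge writes (the dn term).
import math
import random

def merge_sort(a):
    # Returns (sorted list, work), where "work" counts one unit per element
    # written during a merge -- the linear cost at each recursion level.
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, wl = merge_sort(a[:mid])
    right, wr = merge_sort(a[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, wl + wr + len(merged)

if __name__ == "__main__":
    for n in (2**10, 2**14, 2**18):
        data = [random.random() for _ in range(n)]
        _, work = merge_sort(data)
        # Each of the log2(n) levels of the recursion tree costs n units, so
        # for powers of two the ratio work / (n log2 n) is exactly 1.
        print(f"n={n:>7}  work={work:>9}  "
              f"work/(n log2 n)={work / (n * math.log2(n)):.3f}")
```

The printed ratio stays constant as n grows, which is exactly the "each level sums to dn, and there are about log2 n levels" argument from the recursion-tree slides.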
[ null, "https://www.slideserve.com/img/player/ss_download.png", null, "https://www.slideserve.com/img/replay.png", null, "https://thumbs.slideserve.com/1_6671294.jpg", null, "https://www.slideserve.com/img/output_cBjjdt.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7594376,"math_prob":0.99645925,"size":9403,"snap":"2021-31-2021-39","text_gpt3_token_len":3154,"char_repetition_ratio":0.12394936,"word_repetition_ratio":0.14524838,"special_character_ratio":0.35467404,"punctuation_ratio":0.103685506,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999064,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-04T13:34:32Z\",\"WARC-Record-ID\":\"<urn:uuid:14efd25c-3435-46f7-9641-0cc4d5b360db>\",\"Content-Length\":\"105485\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0eabcf12-4b8c-4205-b09e-13ff99d6dd74>\",\"WARC-Concurrent-To\":\"<urn:uuid:e1923b40-97d8-463a-9ccd-bcc1f1cb3a76>\",\"WARC-IP-Address\":\"52.43.159.186\",\"WARC-Target-URI\":\"https://www.slideserve.com/brandon-rice/cs-3343-analysis-of-algorithms\",\"WARC-Payload-Digest\":\"sha1:OUZW2NA6BL5SHPAY6VM3KUAAWK4QOGIO\",\"WARC-Block-Digest\":\"sha1:SYD2FOD67GGNF5MROFQLVME2D3AA4XPG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154805.72_warc_CC-MAIN-20210804111738-20210804141738-00114.warc.gz\"}"}
http://jrg3.net/ternary.html
[ "# Ternary Logic and Circuits\n\n## Why Base 3? Well, Why Base 2?\n\nWe evolved with ten fingers, and so we adopted a base ten number system. There is no reason to think that base 10 is the best number system just because of this accident of evolutionary biology. Computers use base two. This works wonderfully for a number of reasons. First, as Claude Shannon pointed out in 1937, it allows us to implement George Boole's system of algebraic logic to switching machinery. There is a very nice way in which the two states in Boolean logic can be interpreted logically (false and true) when it suits us, and arithmetically (numeric 0 and 1) at other times. Boole gave us a neat way of thinking about 0 and 1, and certain operations that could be performed on them, that jibes well with the way we already think. He named his book, after all, \"An Investigation Of The Laws Of Thought\"!\n\nSince these days we work with solid state electronics, it is also pretty easy to decide that, say, 0 volts is Boolean 0 and +5 volts is Boolean 1, and establish some cutoff thresholds with a buffer of indeterminacy between them, and glue some transistors together to implement Boole's operations in silicon. Circuits that only have to distinguish between two states are easy from an engineering perspective.\n\nMight we someday be working with an implementation substrate that naturally allows for three states, not two? Something involving wavelengths of light, or polarity of light, or something? Maybe good old fashioned electricity might even be considered tristate, depending on what we are using as our signal. I have no idea.\n\nI do know, however, that just from an arithmetic point of view, base two is a pain for humans. The numbers get big too quickly. No one wants to balance their checkbook by hand in binary. On the other hand, what if we used, say, base 100? That would be zero plus 99 other individual numerals, or glyphs, whose order we would all have to just memorize, in the same way we have burned in the fact that 7 is greater than 3. That seems unwieldy. So what would be the ideal base to use, with the trade-off between number of individual numerals and numbers getting too big too quickly? I once read [citation needed] that the best base \"converges on e\". OK, so e is 2.72, which rounds to 3 (assuming that we want to keep our base an integer!)\n\nWikipedia has entries for ternary logic and ternary computers, and apparently the Soviets did some work along these lines way back when.\n\n# Intuitive Interpretations Of Our Three States?\n\nI want to emphasize that inside a computer, there are different voltage levels and devices that manipulate voltage levels. The machine, once it is finished and works, does not care that we decided that 0 volts = false = numeric 0, and +5 volts = true = numeric 1. These conventions are immensely helpful to the human engineers, but there is nothing inherent in the switching circuitry that carries the residue of these conventions.\n\nSimilarly, if you have a tristate system, you are free to interpret your states as [false, indeterminate, true], [-1, 0, 1], [0, 1, 2] or whatever suits you, and in whatever order. By the way, whereas in base 2 we call our BInary Digits \"bits\", in base 3 we speak of \"trits\".\n\nThe appeal of bistate Boolean logic, as I see it, is twofold (ha!). 
First, as I said, Boolean logic gives us a tiny handful of primitive operations (AND, OR, and NOT) that align perfectly with normal, verbalized propositional logic and its connectives that any child understands.\n\n# Ternary Equivalent of Sum-Of-Products Realization\n\nSecondly, using the primitive Boolean operations of AND, OR, and NOT, there are mindless, mechanical ways of implementing any function, with any number of inputs, that we can write out a truth table for. Ignoring optimization, we can use sum-of-products or product-of-sums to just read the lines off the truth table and bang out a formula (or circuit).\n\nFor a brief overview of two-valued traditional Boolean logic, including the implementation of an arbitrary function (in this case, a single bit-slice of an adder), see the \"Boolean Algebra\" section and \"Binary Adder\" subsection of my high school presentation about how computers work.\n\nIs there an interpretation, a convention, for tristate logic that has that same simple, intuitive, ease of understanding that Boolean logic does? Note that this is a question both of interpreting the three states themselves and of choosing and interpreting the operations upon them. Frankly, I personally don't mind if the convention we come up with seems a little weird at first. I think that we can train ourselves to think the \"right\" way if the logic works functionally. So I will defer this question, and let its answer be guided by other, more technical considerations, like our other question.\n\nAre there operations we can use in tristate logic that allow us to mindlessly bang out any function we can write a truth table for, with any number of tristate inputs?\n\nAs a first pass, we might want to look at traditional Boolean logic and see if we can extend it to base 3. Because sum-of-products (SOP) comes more naturally to me than product-of-sums (as, I imagine, it does for most people), I will focus on that. Can we do sum-of-products in base 3?\n\nIn Boolean logic, sum-of-products involves all three of our primitives, AND, OR, and NOT. What are the ternary equivalents of these? First, let me establish an ad hoc convention concerning our three states. Let's say they are the colors [red, green, blue], some physically distinguishable characteristic of the world. For the purposes of what follows, I just need to establish some order among them, so I choose to call them [0, 1, 2]. I could just as easily, and with as much validity, chosen [-117, π, 53]. My point is that thinking of our states as [0, 1, 2] does not commit us to this particular numerical interpretation for all time (although it does not disallow it either!), it merely helps establish an ordering convention, i.e. 0 < 1 < 2.\n\n## Ternary Equivalent of AND and OR\n\nOK, so [0, 1, 2]. Let's see if we can extend AND and OR to our ternary system. We are taught AND and OR in terms of absolutes, truth values. Note, however, that if we think of their inputs not as [false, true] but as numeric [0, 1], AND is simply the minimum function, and OR is the maximum function. That is, AND yields as output the answer to the question \"which of my inputs is the least?\" and OR yields as output the answer to the question \"which of my inputs is the greatest?\" That sounds promising, so let's just carry that forward into base 3. 
The MAX() function simply returns the greatest of its abritrary number of inputs, and is our OR equivalent, and MIN() returns the least of its abritrary number of inputs and is our AND equivalent.\n\nWith standard Boolean SOP form, you start with the truth table for your desired function or circuit, and you write an expression like this:\n\n``` term1 OR term2 OR term3 OR ... OR termn ```\n\nWith one term for each 1 that appeared in the output column of your truth table. Each term consists of all of your input variables, maybe negated with NOT, all ANDed together. So each term matches on one particular combination of the input variables, and then the term becomes 1, and the whole expression becomes 1. If no terms match, all terms are 0, and the whole expression is 0.\n\nFor our ternary SOP, we want one term (a product, a bunch of stuff MINed together) for each non-0 in the output column of the truth table, and we want each term to activate, that is, contribute to the final value of our expression, only for its particular combination of input trit values, and be 0 in all other cases.\n\nThis operation will work fine given our common-sense extension of AND as MIN and OR as MAX, but requires us to get a little cute with NOT.\n\n## Ternary Replacement of Boolean NOT\n\nHow many possible unary operations are there in base 2? We have a single input bit, so there are two rows in the truth table. There are 22 = 4 ways of filling out the output column:\n\ninputoutput 0output 1output 2output 3\n0 0 0 1 1\n1 0 1 0 1\n\nOf the four output columns here, output 0 is just the constant 0, output 3 is just the constant 1, and output 1 is just a pass-through pipe, leaving its input bit unchanged. Only output 2 is even mildly interesting, flipping its input bit. It is, of course, Boolean NOT.\n\nHow many possible unary operations are there in base 3? A single input trit gives us a three row truth table, and each row could, in its output column, contain any of three values, so there are 33 = 27 different ways of filling out that output column. Tediously, I'm going to list them here, and number each line in good old base 10:\n\n 0 000 1 001 2 002 3 010 4 011 5 012 6 020 7 021 8 022 9 100 10 101 11 102 12 110 13 111 14 112 15 120 16 121 17 122 18 200 19 201 20 202 21 210 22 211 23 212 24 220 25 221 26 222\n\nI realize that I've written this out vertically, and I, for one, have a hard time envisioning it flipped by 90 degrees so each line in the above looks more like an output column in our unary function's truth table, but bear with me. For our purposes, I'm reading left to right in the table above, so row 22 (211) corresponds to a function whose truth table is this:\n\ninputoutput\n02\n11\n21\n\nSome of these 27 unary functions are more interesting than others, at least at first blush. Note that lines 0, 13, and 26 are constants 0, 1, and 2, respectively, thus boring. Just about as boring is line 5, which is a pass-through pipe, leaving its input unchanged. 15 is add 1 (modulo 3) and 19 is subtract 1 (modulo 3), a bit more interesting, may be useful later. Lines 7, 11, and 21 pass one value through unchanged, but exchange the other two, which might correspond somewhat with our intuitive notion of NOT, depending on the convention we choose for our logical truth values. For example, if we think of our three states as [false, indeterminate, true], it may well turn out that it would be useful to define ternary NOT of these respective values as [true, indeterminate, false]. 
That is, one (indeterminate) gets passed through unchanged, and the other two get flipped.\n\nFor the purposes of the current exercise, that of coming up with a ternary SOP realization of an arbitrary function, I'd like to focus on the lines with two zeros among the three lines, that is, these:\n\n1: 001 \"Give me a 2, and I'll give you a 1, 0 otherwise.\"\n2: 002 \"Give me a 2, and I'll give you a 2, 0 otherwise.\"\n3: 010 \"Give me a 1, and I'll give you a 1, 0 otherwise.\"\n6: 020 \"Give me a 1, and I'll give you a 2, 0 otherwise.\"\n9: 100 \"Give me a 0, and I'll give you a 1, 0 otherwise.\"\n18: 200 \"Give me a 0, and I'll give you a 2, 0 otherwise.\"\n\nTo make these six a little clearer, I will put them in more traditional truth table form:\n\ninput 1 2 3 6 9 18\n0 0 0 0 0 1 2\n1 0 0 1 2 0 0\n2 1 2 0 0 0 0\n\nFor now, we need a convention for what to call these six unary functions. Let's call them, for example, D12 for \"Demand 1, yield 2\". This filtering function is what we can use in place of traditional NOT in our ternary SOP realizations.\n\nLet's make up an arbitrary three-input ternary function that we want to implement. Let's call the input trits [a,b,c]. Instead of writing out all 27 rows of the truth table, assume that all rows have 0 in the output column except these:\n\nabcoutput\n0022\n0111\n1211\n2012\n2202\n\nYou know, I have never ever liked calling AND multiplication and OR addition, so the whole sum-of-products terminology never sat well with me. Now that we have extended AND and OR into base 3 in a way that has nothing at all to do with addition and multiplication, I'm going to start calling it max-of-min realization.\n\nGiven the above, the above function could be realized with the following:\n\n``` max[ min(D02(a), D02(b), D22(c)), min(D01(a), D11(b), D11(c)), min(D11(a), D21(b), D11(c)), min(D22(a), D02(b), D12(c)), min(D22(a), D22(b), D02(c)) ] ```\n\nAs with traditional SOP Boolean form, there is one term per non-0 output line in the truth table, and each term evaluates to 0 unless the inputs [a,b,c] have the particular values that that term looks for. In that case, the term assumes the proper value, and gets ORed (or MAXed) together with all the other terms (which are 0, since their particular combination of input values was not hit). Now we are off to the races, and can realize any function we want.\n\n## Negative Numbers\n\nIn base 2, we use two's complement to represent negative integers. This works wonderfully for a couple of reasons. First, you simply don't have to have any special handling for negative numbers at all. You just add numbers, and if one is negative, well, you've just done a subtraction. Secondly, the most significant bit is easily recognizable as a \"sign bit\". You can tell immediately if a number is negative or positive by just looking at the sign bit. This helps, for example, in setting the \"negative\" status bit in the CPU after each instruction, for subsequent branching or test instructions.\n\nFortunately, the same logic that allows us to do two's complement also works in base 3, or any base. To create the negative of a number in the range of positive numbers, we do the following. For each digit, replace it with <base> - <digit> - 1. That is, after we are done, we have a number that when added to the number we started with, will yield <base># of digits - 1. So if we are dealing in base 10, and are dealing with a 3-digit wide register, and we want to create negative 281, we write 718. Sure enough, 281 + 718 = 999. 
So in base 10, 718 is the complement of 281.\n\nSecond, we just add 1 to our complement, so 718 becomes 719. Viola! 719 is the negative of 281! Huh? So 353 - 281 = 72. Instead of subtracting 281, we want to add its negative, which we just said is 719. So 353 + 719 = 1072. But since we are dealing with a 3-digit wide register, the thousands digit falls off the end of the world, and our 1072 becomes 072.\n\nAnyway, the math works in any base, including base 3. For more discussion, check out the wikipedia article linked above. This all works by splitting the entire range of values in half, where the first half is positive, and the second half is negative. As I said above, in base 2 this has the happy side effect that the most significant bit is a sign bit, end of discussion. In any odd base, that halfway point is in the middle of a range of the most significant digit. This has the unhappy consequence that it looks like we must do a full-register comparison to determine if a number is negative or not, as when we are setting that CPU status bit after each instruction. Oh well.\n\nAs with two's complement in binary, we shifted the range of numbers we can represent in an n-digit register from 0 → basen - 1 to half that range in the positives and half in the negatives. That is, the largest number we can hold in a given register is only half as large as it would be if we were only using unsigned arithmetic.\n\nHere is the set of 2-trit ternary numbers, 0-8, with their complements and complements + 1 (i.e. negatives). Remember that overflows fall off the end of the world, so 22 + 1 = 00. Note that 0 is its own negative. Note also that the largest positive number here must be 11, or 4, and its negative is the next entry, 12, or 5, which must be -4 (note that 11 and 12 are negatives of each other in the table). This makes sense, since that splits our range of 0-8 in half (1→4 positive, 5→8 negative), given that 0 is its whole own thing at the beginning.\n\nline number complement complement + 1 unsigned base 10 of line number signed base 10 of line number\n00220000\n01212211\n02202122\n10122033\n11111244\n1210115-4\n2002106-3\n2101027-2\n2200018-1" ]
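As promised, here is a minimal Python sketch of the max-of-min scheme described above. It is my own illustration; the D-function naming follows the article's "Demand x, yield y" convention. It builds the five-term realization for the example function and verifies it against the truth table over all 27 input combinations.

```python
# Max-of-min (ternary "sum-of-products") realization, per the scheme above.

def D(demand, yield_):
    """Unary filter: return yield_ if the input trit equals demand, else 0.
    D(1, 2) is the article's D12: 'Demand 1, yield 2'."""
    return lambda t: yield_ if t == demand else 0

def term(spec):
    """One min() term matching a single truth-table row.
    spec is a list of (demand, yield) pairs, one per input trit."""
    filters = [D(d, y) for d, y in spec]
    return lambda trits: min(f(t) for f, t in zip(filters, trits))

# The example function: all rows output 0 except these (a, b, c) -> out.
TRUTH = {(0, 0, 2): 2, (0, 1, 1): 1, (1, 2, 1): 1, (2, 0, 1): 2, (2, 2, 0): 2}

# One term per non-zero row: each filter demands that row's input value and
# yields that row's output, so e.g. the first row becomes
# min(D02(a), D02(b), D22(c)), exactly as in the expression above.
TERMS = [term([(a, out), (b, out), (c, out)])
         for (a, b, c), out in TRUTH.items()]

def f(a, b, c):
    return max(t((a, b, c)) for t in TERMS)

if __name__ == "__main__":
    from itertools import product
    for abc in product(range(3), repeat=3):
        assert f(*abc) == TRUTH.get(abc, 0)
    print("max-of-min realization matches the truth table on all 27 rows")
```

The check passes because at most one term can be non-zero for any given input: a term yields its row's output only when all three of its demands are met, and MAX then lets that single surviving value through.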
https://de.mathworks.com/help/robotics/ug/path-following-for-differential-drive-robot.html
[ "# Path Following for a Differential Drive Robot\n\nThis example demonstrates how to control a robot to follow a desired path using a Robot Simulator. The example uses the Pure Pursuit path following controller to drive a simulated robot along a predetermined path. A desired path is a set of waypoints defined explicitly or computed using a path planner (refer to Path Planning in Environments of Different Complexity). The Pure Pursuit path following controller for a simulated differential drive robot is created and computes the control commands to follow a given path. The computed control commands are used to drive the simulated robot along the desired trajectory to follow the desired path based on the Pure Pursuit controller.\n\nNote: Starting in R2016b, instead of using the step method to perform the operation defined by the System object™, you can call the object with arguments, as if it were a function. For example, `y = step(obj,x)` and `y = obj(x)` perform equivalent operations.\n\n### Define Waypoints\n\nDefine a set of waypoints for the desired path for the robot\n\n```path = [2.00 1.00; 1.25 1.75; 5.25 8.25; 7.25 8.75; 11.75 10.75; 12.00 10.00]; ```\n\nSet the current location and the goal location of the robot as defined by the path.\n\n```robotInitialLocation = path(1,:); robotGoal = path(end,:);```\n\nAssume an initial robot orientation (the robot orientation is the angle between the robot heading and the positive X-axis, measured counterclockwise).\n\n`initialOrientation = 0;`\n\nDefine the current pose for the robot [x y theta]\n\n`robotCurrentPose = [robotInitialLocation initialOrientation]';`\n\n### Create a Kinematic Robot Model\n\nInitialize the robot model and assign an initial pose. The simulated robot has kinematic equations for the motion of a two-wheeled differential drive robot. The inputs to this simulated robot are linear and angular velocities.\n\n`robot = differentialDriveKinematics(\"TrackWidth\", 1, \"VehicleInputs\", \"VehicleSpeedHeadingRate\");`\n\nVisualize the desired path\n\n```figure plot(path(:,1), path(:,2),'k--d') xlim([0 13]) ylim([0 13])```", null, "### Define the Path Following Controller\n\nBased on the path defined above and a robot motion model, you need a path following controller to drive the robot along the path. Create the path following controller using the `controllerPurePursuit` object.\n\n`controller = controllerPurePursuit;`\n\nUse the path defined above to set the desired waypoints for the controller\n\n`controller.Waypoints = path;`\n\nSet the path following controller parameters. The desired linear velocity is set to 0.6 meters/second for this example.\n\n`controller.DesiredLinearVelocity = 0.6;`\n\nThe maximum angular velocity acts as a saturation limit for rotational velocity, which is set at 2 radians/second for this example.\n\n`controller.MaxAngularVelocity = 2;`\n\nAs a general rule, the lookahead distance should be larger than the desired linear velocity for a smooth path. The robot might cut corners when the lookahead distance is large. In contrast, a small lookahead distance can result in an unstable path following behavior. 
A value of 0.3 m was chosen for this example.\n\n`controller.LookaheadDistance = 0.3;`\n\n### Using the Path Following Controller, Drive the Robot over the Desired Waypoints\n\nThe path following controller provides input control signals for the robot, which the robot uses to drive itself along the desired path.\n\nDefine a goal radius, which is the desired distance threshold between the robot's final location and the goal location. Once the robot is within this distance from the goal, it will stop. Also, you compute the current distance between the robot location and the goal location. This distance is continuously checked against the goal radius and the robot stops when this distance is less than the goal radius.\n\nNote that too small value of the goal radius may cause the robot to miss the goal, which may result in an unexpected behavior near the goal.\n\n```goalRadius = 0.1; distanceToGoal = norm(robotInitialLocation - robotGoal);```\n\nThe `controllerPurePursuit` object computes control commands for the robot. Drive the robot using these control commands until it reaches within the goal radius. If you are using an external simulator or a physical robot, then the controller outputs should be applied to the robot and a localization system may be required to update the pose of the robot. The controller runs at 10 Hz.\n\n```% Initialize the simulation loop sampleTime = 0.1; vizRate = rateControl(1/sampleTime); % Initialize the figure figure % Determine vehicle frame size to most closely represent vehicle with plotTransforms frameSize = robot.TrackWidth/0.8; while( distanceToGoal > goalRadius ) % Compute the controller outputs, i.e., the inputs to the robot [v, omega] = controller(robotCurrentPose); % Get the robot's velocity using controller inputs vel = derivative(robot, robotCurrentPose, [v omega]); % Update the current pose robotCurrentPose = robotCurrentPose + vel*sampleTime; % Re-compute the distance to the goal distanceToGoal = norm(robotCurrentPose(1:2) - robotGoal(:)); % Update the plot hold off % Plot path each instance so that it stays persistent while robot mesh % moves plot(path(:,1), path(:,2),\"k--d\") hold all % Plot the path of the robot as a set of transforms plotTrVec = [robotCurrentPose(1:2); 0]; plotRot = axang2quat([0 0 1 robotCurrentPose(3)]); plotTransforms(plotTrVec', plotRot, \"MeshFilePath\", \"groundvehicle.stl\", \"Parent\", gca, \"View\",\"2D\", \"FrameSize\", frameSize); light; xlim([0 13]) ylim([0 13]) waitfor(vizRate); end```", null, "### Using the Path Following Controller Along with PRM\n\nIf the desired set of waypoints are computed by a path planner, the path following controller can be used in the same fashion. First, visualize the map\n\n```load exampleMaps map = binaryOccupancyMap(simpleMap); figure show(map)```", null, "You can compute the `path` using the PRM path planning algorithm. See Path Planning in Environments of Different Complexity for details.\n\n```mapInflated = copy(map); inflate(mapInflated, robot.TrackWidth/2); prm = robotics.PRM(mapInflated); prm.NumNodes = 100; prm.ConnectionDistance = 10;```\n\nFind a path between the start and end location. 
Note that the `path` will be different due to the probabilistic nature of the PRM algorithm.\n\n```startLocation = [4.0 2.0]; endLocation = [24.0 20.0]; path = findpath(prm, startLocation, endLocation)```\n```path = 8×2 4.0000 2.0000 3.1703 2.7616 7.0797 11.2229 8.1337 13.4835 14.0707 17.3248 16.8068 18.7834 24.4564 20.6514 24.0000 20.0000 ```\n\nDisplay the inflated map, the road maps, and the final path.\n\n`show(prm);`", null, "You defined a path following controller above which you can re-use for computing the control commands of a robot on this map. To re-use the controller and redefine the waypoints while keeping the other information the same, use the `release` function.\n\n```release(controller); controller.Waypoints = path;```\n\nSet initial location and the goal of the robot as defined by the path\n\n```robotInitialLocation = path(1,:); robotGoal = path(end,:);```\n\nAssume an initial robot orientation\n\n`initialOrientation = 0;`\n\nDefine the current pose for robot motion [x y theta]\n\n`robotCurrentPose = [robotInitialLocation initialOrientation]';`\n\nCompute distance to the goal location\n\n`distanceToGoal = norm(robotInitialLocation - robotGoal);`\n\n`goalRadius = 0.1;`\n```reset(vizRate); % Initialize the figure figure while( distanceToGoal > goalRadius ) % Compute the controller outputs, i.e., the inputs to the robot [v, omega] = controller(robotCurrentPose); % Get the robot's velocity using controller inputs vel = derivative(robot, robotCurrentPose, [v omega]); % Update the current pose robotCurrentPose = robotCurrentPose + vel*sampleTime; % Re-compute the distance to the goal distanceToGoal = norm(robotCurrentPose(1:2) - robotGoal(:)); % Update the plot hold off show(map); hold all % Plot path each instance so that it stays persistent while robot mesh % moves plot(path(:,1), path(:,2),\"k--d\") % Plot the path of the robot as a set of transforms plotTrVec = [robotCurrentPose(1:2); 0]; plotRot = axang2quat([0 0 1 robotCurrentPose(3)]); plotTransforms(plotTrVec', plotRot, 'MeshFilePath', 'groundvehicle.stl', 'Parent', gca, \"View\",\"2D\", \"FrameSize\", frameSize); light; xlim([0 27]) ylim([0 26]) waitfor(vizRate); end```", null, "", null, "" ]
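For readers without access to the toolbox, here is a rough Python sketch of the core pure pursuit computation the example relies on. This is my own illustration, not the `controllerPurePursuit` implementation: it steers toward a target point roughly one lookahead distance ahead (tracking discrete waypoints rather than interpolating along path segments), saturates the angular velocity, and integrates the pose exactly as the example's `robotCurrentPose + vel*sampleTime` loop does.

```python
# Simplified pure-pursuit controller for a differential-drive robot.
import math

def pure_pursuit_step(pose, waypoints, idx, lookahead, v_des, w_max):
    """One control step. pose = (x, y, theta); returns (v, omega, idx)."""
    x, y, theta = pose
    # Advance the target index once the robot is within a lookahead distance
    # of the current target (monotone progress along the waypoint list; the
    # real controller instead picks an interpolated point on the path).
    while idx < len(waypoints) - 1 and \
            math.hypot(waypoints[idx][0] - x, waypoints[idx][1] - y) < lookahead:
        idx += 1
    tx, ty = waypoints[idx]
    # Heading error to the target, wrapped to [-pi, pi]
    alpha = math.atan2(ty - y, tx - x) - theta
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))
    # Pure pursuit steering: curvature kappa = 2*sin(alpha)/Ld, omega = v*kappa,
    # saturated at the maximum angular velocity.
    omega = max(-w_max, min(w_max, v_des * 2.0 * math.sin(alpha) / lookahead))
    return v_des, omega, idx

def step_pose(pose, v, omega, dt):
    """Forward-Euler update, mirroring robotCurrentPose + vel*sampleTime."""
    x, y, theta = pose
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

if __name__ == "__main__":
    # Same waypoints, speeds, and goal radius as the example above.
    path = [(2.0, 1.0), (1.25, 1.75), (5.25, 8.25), (7.25, 8.75),
            (11.75, 10.75), (12.0, 10.0)]
    pose, idx, dt, steps = (2.0, 1.0, 0.0), 0, 0.1, 0
    while math.hypot(pose[0] - path[-1][0], pose[1] - path[-1][1]) > 0.1 \
            and steps < 5000:
        v, omega, idx = pure_pursuit_step(pose, path, idx, 0.3, 0.6, 2.0)
        pose = step_pose(pose, v, omega, dt)
        steps += 1
    print(f"stopped after {steps} steps at ({pose[0]:.2f}, {pose[1]:.2f})")
```

The same trade-off discussed above applies here: a larger `lookahead` smooths the commanded curvature but cuts corners, while a very small one makes `alpha` (and hence `omega`) twitchy.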
[ null, "https://de.mathworks.com/help/examples/robotics/win64/PathFollowingControllerExample_01.png", null, "https://de.mathworks.com/help/examples/robotics/win64/PathFollowingControllerExample_02.png", null, "https://de.mathworks.com/help/examples/robotics/win64/PathFollowingControllerExample_03.png", null, "https://de.mathworks.com/help/examples/robotics/win64/PathFollowingControllerExample_04.png", null, "https://de.mathworks.com/help/examples/robotics/win64/PathFollowingControllerExample_05.png", null, "https://de.mathworks.com/help/examples/robotics/win64/PathFollowingControllerExample_06.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.76301134,"math_prob":0.96957374,"size":8269,"snap":"2022-27-2022-33","text_gpt3_token_len":1954,"char_repetition_ratio":0.17652753,"word_repetition_ratio":0.19653179,"special_character_ratio":0.23993228,"punctuation_ratio":0.1617357,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9948054,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-01T20:46:48Z\",\"WARC-Record-ID\":\"<urn:uuid:19a7e905-3260-423c-90fc-ac6d32f7fc0f>\",\"Content-Length\":\"89332\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4166f695-076f-480a-b976-240deae02adb>\",\"WARC-Concurrent-To\":\"<urn:uuid:eebbf394-2c9d-43a6-ae1f-0c133362f2f1>\",\"WARC-IP-Address\":\"104.68.243.15\",\"WARC-Target-URI\":\"https://de.mathworks.com/help/robotics/ug/path-following-for-differential-drive-robot.html\",\"WARC-Payload-Digest\":\"sha1:J3SZMS4RIQQYFWA5ULHFHSPENERURP7X\",\"WARC-Block-Digest\":\"sha1:HGCSUNHOFG6YA4IK5WSNOPRIJ7HOJC5C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103945490.54_warc_CC-MAIN-20220701185955-20220701215955-00168.warc.gz\"}"}
https://fr.slideserve.com/sarai/biochemistry
[ "", null, "Download", null, "Download Presentation", null, "Biochemistry\n\n# Biochemistry\n\nTélécharger la présentation", null, "## Biochemistry\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - E N D - - - - - - - - - - - - - - - - - - - - - - - - - - -\n##### Presentation Transcript\n\n1. Biochemistry Chapter 13: Enzymes Chapter 14: Mechanisms of enzyme action Chapter 15: Enzyme regulation Chapter 17: Metabolism- An overview Chapter 18: Glycolysis Chapter 19: The tricarboxylic acid cycle Chapter 20: Electron transport & oxidative phosphorylation Chapter 21: Photosynthesis http://www.aqua.ntou.edu.tw/chlin/\n\n2. Chapter 13 Enzymes – Kineticsand Specificity Biochemistry by Reginald Garrett and Charles Grisham\n\n3. What are enzymes, and what do they do? • Biological Catalysts • Increase the velocity of chemical reactions\n\n4. What are enzymes, and what do they do? • Thousands of chemical reactions are proceeding very rapidly at any given instant within all living cells • Virtually all of these reactions are mediated by enzymes--proteins (and occasionally RNA) specialized to catalyze metabolic reactions • Most cells quickly oxidize glucose, producing carbon dioxide and water and releasing lots of energy: C6H12O6 + 6 O2 6 CO2 + 6 H2O + 2870 kJ of energy • It does not occur under just normal conditions • In living systems, enzymes are used to accelerate and control the rates of vitally important biochemical reactions\n\n5. Figure 13.1Reaction profile showing the large DG‡ for glucose oxidation, free energy change of -2,870 kJ/mol; catalysts lower DG‡, thereby accelerating rate.\n\n6. Enzymes are the agents of metabolic function • Enzymes form metabolic pathways by which • Nutrient molecules are degraded • Energy is released and converted into metabolically useful forms • Precursors are generated and transformed to create the literally thousands of distinctive biomolecules • Situated at key junctions of metabolic pathways are specialized regulatory enzymes capable of sensing the momentary metabolic needs the cell and adjusting their catalytic rates accordingly\n\n7. Figure 13.2The breakdown of glucose by glycolysis provides a prime example of a metabolic pathway. Ten enzymes mediate the reactions of glycolysis. Enzyme 4, fructose 1,6, biphosphate aldolase, catalyzes the C-C bond- breaking reaction in this pathway.\n\n8. 13.1 – What Characteristic Features Define Enzymes? • Enzymes are remarkably versatile biochemical catalyst that have in common three distinctive features: • Catalyticpower • The ratio of the enzyme-catalyzed rate of a reaction to the uncatalyzed rate • Specificity • The selectivity of enzymes for their substrates • Regulation • The rate of metabolic reactions is appropriate to cellular requirements\n\n9. Catalytic power • Enzymes can accelerate reactions as much as 1016 over uncatalyzed rates! • Urease is a good example: • Catalyzed rate: 3x104/sec • Uncatalyzed rate: 3x10 -10/sec • Ratio is 1x1014 (catalytic power)\n\n10. Specificity • Enzymes selectively recognize proper substances over other molecules • The substances upon which an enzyme acts are traditionally called substrates • Enzymes produce products in very high yields - often much greater than 95%\n\n11. 
Specificity • The selective qualities of an enzyme are recognized as its specificity • Specificity is controlled by structureof enzyme • the unique fit of substrate with enzyme controls the selectivity for substrate and the product yield • The specific site on the enzyme where substrate binds and catalysis occurs is called the active site\n\n12. Regulation • Regulation of an enzyme activity is essential to the integration and regulation of metabolism • Because most enzymes are proteins, we can anticipate that the functional attributes of enzymes are due to the remarkable versatility found in protein structure • Enzyme regulation is achieved in a variety of ways, ranging from controls over the amount of enzyme protein produced by the cell to more rapid, reversible interactions of the enzyme with metabolic inhibitors and activators (chapter 15)\n\n13. Nomenclature • Traditionally, enzymes often were named by adding the suffix -ase to the name of the substrate upon which they acted: Urease for the urea-hydrolyzing enzyme or phosphatase for enzymes hydrolyzing phosphoryl groups from organic phosphate compounds • Resemblance to their activity: protease for the proteolytic enzyme • Trypsin and pepsin\n\n14. Nomenclature • International Union of Biochemistry and Molecular Biology (IUBMB) http://www.chem.qmw.ac.uk/iubmb/enzyme/ • Enzymes Commission number: EC #.#.#.# • A series of four number severe to specify a particular enzyme • First number is class (1-6) • Second number is subclass • Third number is sub-subclass • Fourth number is individual entry\n\n15. Classification of protein enzymes • Oxidoreductases catalyze oxidation-reduction reactions • Transferases catalyze transfer of functional groups from one molecule to another • Hydrolases catalyze hydrolysis reactions • Lyases catalyze removal of a group from or addition of a group to a double bond, or other cleavages involving electron rearrangement • Isomerases catalyze intramolecular rearrangement (isomerization reactions) • Ligases catalyze reactions in which two molecules are joined (formation of bonds)\n\n16. For example, ATP:D-glucose-6-phosphotransferase (glucokinase) is listed as EC 2.7.1.2. ATP + D-glucose  ADP + D-glucose-6-phosphate • A phosphate group is transferred from ATP to C-6-OH group of glucose, so the enzyme is a transferase (class 2) • Transferring phosphorus-containing groups is subclass 7 • An alcohol group (-OH) as an acceptor is sub-subclass 1 • Entry 2 EC 2.7.1.1 hexokinaseEC 2.7.1.2 glucokinaseEC 2.7.1.3 ketohexokinaseEC 2.7.1.4 fructokinaseEC 2.7.1.5 rhamnulokinaseEC 2.7.1.6 galactokinaseEC 2.7.1.7 mannokinase EC 2.7.1.8 glucosamine kinase . .. . EC 2.7.1.156 adenosylcobinamide kinase\n\n17. Many enzymes require non-protein components called coenzymes or cofactors to aid in catalysis • Coenzymes: many essential vitamins are constituents of coenzyme • Cofactors: metal ions • metalloenzymes • Holoenzyme: apoenzyme (protien) + prosthetic group\n\n18. Other Aspects of Enzymes • Mechanisms - to be covered in Chapter 14 • Regulation - to be covered in Chapter 15 • Coenzymes - to be covered in Chapter 17\n\n19. 13.2 – Can the Rate of an Enzyme-Catalyzed Reaction Be Defined in a Mathematical Way? 
• Kinetics is concerned with the rates of chemical reactions • Enzyme kinetics addresses the biological roles of enzymatic catalysts and how they accomplish their remarkable feats • In enzyme kinetics, we seek to determine the maximum reaction velocity that the enzyme can attain and its binding affinities for substrates and inhibitors • This information can be exploited to control and manipulate the course of metabolic events

20. Chemical kinetics • A → P (or A → I → J → P) • Rate or velocity (v): v = d[P]/dt or v = -d[A]/dt • The mathematical relationship between reaction rate and concentration of reactant(s) is the rate law: v = -d[A]/dt = k[A] • k is the proportionality constant or rate constant (here the unit of k is sec^-1)

21. Chemical kinetics • v = -d[A]/dt = k[A] • v is first-order with respect to A; the order of this reaction is one (a first-order reaction) • Molecularity of a reaction: the molecularity of this reaction equals 1 (a unimolecular reaction)

22. Figure 13.4 Plot of the course of a first-order reaction. The half-time, t1/2, is the time for one-half of the starting amount of A to disappear.

23. Chemical kinetics • A + B → P + Q • The molecularity of this reaction equals 2 (a bimolecular reaction) • The rate or velocity (v): v = -d[A]/dt = -d[B]/dt = d[P]/dt = d[Q]/dt • The rate law is v = k[A][B] • The order of this reaction is two (a second-order reaction) • The rate constant k has units of M^-1 sec^-1

24. The Transition State • Reaction coordinate: a generalized measure of the progress of the reaction • Free energy (G) • Standard-state free energy (25°C, 1 atm, 1 M each) • Transition state: the transition state represents an intermediate molecular state having a high free energy in the reaction • Activation energy: barriers to chemical reactions occur because a reactant molecule must pass through a high-energy transition state to form products; this free energy barrier is called the activation energy

25. Decreasing ΔG‡ increases the reaction rate • Two general ways to accelerate the rates of chemical reactions: • Raise the temperature - reaction rates are roughly doubled by a 10°C rise • Add catalysts • True catalysts participate in the reaction, but are unchanged by it; therefore, they can continue to catalyze subsequent reactions • Catalysts change the rates of reactions, but do not affect the equilibrium of a reaction

26. (a) Raising the temperature (b) Adding a catalyst

27. • Most biological catalysts are proteins called enzymes (E) • The substance acted on by an enzyme is called a substrate (S) • Enzymes accelerate reactions by lowering the free energy of activation • Enzymes do this by binding the transition state of the reaction better than the substrate • The mechanism of enzyme action is covered in Chapter 14

28. 13.3 – What Equations Define the Kinetics of Enzyme-Catalyzed Reactions? • The Michaelis-Menten equation: v = Vmax[S] / (Km + [S]) • The Lineweaver-Burk double-reciprocal plot • The Hanes-Woolf plot

29. Figure 13.7 Substrate saturation curve for an enzyme-catalyzed reaction. The amount of enzyme is constant, and the velocity of the reaction is determined at various substrate concentrations. The reaction rate, v, as a function of [S] is described by a rectangular hyperbola. At very high [S], v = Vmax. The H2O molecule provides a rough guide to scale. The substrate is bound at the active site of the enzyme.
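Before the Michaelis-Menten derivation that follows, here is a quick numerical illustration of the first-order rate law from the chemical-kinetics slides above (my own sketch, not part of the transcript; the rate constant and starting concentration are made-up values). It integrates v = -d[A]/dt = k[A] to [A](t) = [A]0·e^(-kt) and checks the half-life relation t1/2 = ln 2 / k:

```python
import math

k = 0.5    # assumed first-order rate constant, 1/sec (made-up value)
A0 = 1.0   # assumed initial concentration of A, M (made-up value)

def conc(t):
    """Integrated first-order rate law: [A](t) = [A]0 * exp(-k t)."""
    return A0 * math.exp(-k * t)

# Half-life of a first-order reaction: t_1/2 = ln(2) / k
t_half = math.log(2) / k
print(f"t_1/2 = {t_half:.3f} s")
print(f"[A](t_1/2) = {conc(t_half):.3f} M  (should be [A]0/2 = {A0 / 2:.3f} M)")
```

Note that t1/2 is independent of [A]0, which is the defining fingerprint of first-order kinetics in Figure 13.4.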
30. The Michaelis-Menten Equation • Leonor Michaelis and Maud Menten's theory • It assumes the formation of an enzyme-substrate complex (ES): E + S ⇌ ES • At equilibrium, k-1[ES] = k1[E][S], and the dissociation constant is Ks = [E][S]/[ES] = k-1/k1

31. The Michaelis-Menten Equation • E + S ⇌ (k1, k-1) ES → (k2) E + P • The steady-state assumption: ES is formed from E + S as rapidly as it disappears, by dissociation back to E + S and by reaction to form E + P, so d[ES]/dt = 0 • That is, formation of ES = breakdown of ES: k1[E][S] = k-1[ES] + k2[ES]

32. Figure 13.8 Time course for the consumption of substrate, the formation of product, and the establishment of a steady-state level of the enzyme-substrate complex [ES] for a typical enzyme obeying the Michaelis-Menten (Briggs-Haldane) model of enzyme kinetics. The early stage of the time course is shown at greater magnification in the bottom graph.

33. The Michaelis-Menten Equation • k1[E][S] = k-1[ES] + k2[ES] = (k-1 + k2)[ES] • [ES] = (k1/(k-1 + k2))[E][S] • Defining the Michaelis constant Km = (k-1 + k2)/k1, this becomes Km[ES] = [E][S]

34. The Michaelis-Menten Equation • Km[ES] = [E][S] • Total enzyme: [ET] = [E] + [ES], so [E] = [ET] - [ES] • Km[ES] = ([ET] - [ES])[S] = [ET][S] - [ES][S] • Km[ES] + [ES][S] = [ET][S] • (Km + [S])[ES] = [ET][S] • [ES] = [ET][S] / (Km + [S])

35. The Michaelis-Menten Equation • [ES] = [ET][S] / (Km + [S]) • The rate of product formation is v = k2[ES] = k2[ET][S] / (Km + [S]) • With Vmax = k2[ET], this gives v = Vmax[S] / (Km + [S])

36. Understanding Km • The Michaelis constant Km is the substrate concentration at which the reaction rate is Vmax/2 • It is associated with the affinity of the enzyme for its substrate • A small Km means tight binding; a high Km means weak binding

37. When v = Vmax/2: Vmax/2 = Vmax[S] / (Km + [S]), so Km + [S] = 2[S], and therefore [S] = Km

38. Understanding Vmax • The theoretical maximal velocity • Vmax is a constant • Vmax is the theoretical maximal rate of the reaction - but it is NEVER achieved in reality • To reach Vmax would require that ALL enzyme molecules be tightly bound with substrate • Vmax is asymptotically approached as substrate concentration is increased

39. The dual nature of the Michaelis-Menten equation • A combination of zero-order and first-order kinetics • When [S] is low ([S] << Km), the rate equation is first-order in S • When [S] is high ([S] >> Km), the rate equation is zero-order in S • The Michaelis-Menten equation describes a rectangular hyperbolic dependence of v on [S] • The estimation of Vmax (and consequently Km) from such a graph is only approximate

40. The turnover number • A measure of catalytic activity • kcat, the turnover number, is the number of substrate molecules converted to product per enzyme molecule per unit of time, when E is saturated with substrate • kcat is a measure of an enzyme's maximal catalytic activity • If the Michaelis-Menten model fits, k2 = kcat = Vmax/[ET] • Values of kcat range from less than 1/sec to many millions per sec (Table 13.4)

41. The catalytic efficiency • A name for kcat/Km; an estimate of "how perfect" the enzyme is • kcat/Km is an apparent second-order rate constant: v = (kcat/Km)[E][S] • kcat/Km provides an index of the catalytic efficiency of an enzyme • kcat/Km = k1k2 / (k-1 + k2) • The upper limit for kcat/Km is the diffusion limit - the rate at which E and S diffuse together

42. Linear Plots of the Michaelis-Menten Equation • Lineweaver-Burk plot • Hanes-Woolf plot • The Hanes-Woolf plot gives smaller and more consistent errors across the plot
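To make the final result concrete, here is a small sketch (again my own addition, with arbitrary example values for Vmax and Km) that evaluates v = Vmax[S]/(Km + [S]) across the zero-order and first-order regimes and checks that the rate is exactly half-maximal at [S] = Km:

```python
Vmax = 100.0   # assumed maximal velocity, e.g. umol/min (made-up value)
Km = 2.0       # assumed Michaelis constant, mM (made-up value)

def v(S):
    """Michaelis-Menten rate: v = Vmax [S] / (Km + [S])."""
    return Vmax * S / (Km + S)

# Low [S] is ~first-order in S; high [S] saturates toward Vmax (zero-order)
for S in [0.1, Km, 10 * Km, 100 * Km]:
    print(f"[S] = {S:6.1f} mM  ->  v = {v(S):6.2f}")

# At [S] = Km the rate is exactly half-maximal (slide 37)
assert abs(v(Km) - Vmax / 2) < 1e-9
```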
[ null, "https://fr.slideserve.com/img/player/ss_download.png", null, "https://fr.slideserve.com/img/replay.png", null, "https://thumbs.slideserve.com/1_5595643.jpg", null, "https://fr.slideserve.com/img/output_cBjjdt.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88771075,"math_prob":0.9539699,"size":12685,"snap":"2022-05-2022-21","text_gpt3_token_len":3235,"char_repetition_ratio":0.15085562,"word_repetition_ratio":0.027632207,"special_character_ratio":0.2400473,"punctuation_ratio":0.06940639,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97526264,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,7,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-24T14:54:34Z\",\"WARC-Record-ID\":\"<urn:uuid:cf67ba21-bf98-4ada-a391-8a0182d725c3>\",\"Content-Length\":\"105855\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6a90ba11-067b-4917-a9b4-86c088cca18f>\",\"WARC-Concurrent-To\":\"<urn:uuid:82e60299-17b2-4f73-a643-4a719144c480>\",\"WARC-IP-Address\":\"35.83.129.7\",\"WARC-Target-URI\":\"https://fr.slideserve.com/sarai/biochemistry\",\"WARC-Payload-Digest\":\"sha1:2BV746TOGWEDRWLNDMM7JM7PL32PYZ7M\",\"WARC-Block-Digest\":\"sha1:DSC2MHGTOVDWZJ6MVQ7FOOEZGZUZN6ZM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662573053.67_warc_CC-MAIN-20220524142617-20220524172617-00363.warc.gz\"}"}
https://arxiv.org/abs/1508.07170v1
[ "math-ph\n\n# Title:Kitaev's quantum double model from a local quantum physics point of view\n\nAbstract: A prominent example of a topologically ordered system is Kitaev's quantum double model $\\mathcal{D}(G)$ for finite groups $G$ (which in particular includes $G = \\mathbb{Z}_2$, the toric code). We will look at these models from the point of view of local quantum physics. In particular, we will review how in the abelian case, one can do a Doplicher-Haag-Roberts analysis to study the different superselection sectors of the model. In this way one finds that the charges are in one-to-one correspondence with the representations of $\\mathcal{D}(G)$, and that they are in fact anyons. Interchanging two of such anyons gives a non-trivial phase, not just a possible sign change. The case of non-abelian groups $G$ is more complicated. We outline how one could use amplimorphisms, that is, morphisms $A \\to M_n(A)$ to study the superselection structure in that case. Finally, we give a brief overview of applications of topologically ordered systems to the field of quantum computation.\n Comments: Chapter contributed to R. Brunetti, C. Dappiaggi, K. Fredenhagen, J. Yngvason (eds), Advances in Algebraic Quantum Field Theory (Springer 2015). Mainly review Subjects: Mathematical Physics (math-ph); Quantum Physics (quant-ph) Journal reference: Advances in Algebraic Quantum Field Theory, pp 365-395 (Springer 2015) DOI: 10.1007/978-3-319-21353-8_9 Cite as: arXiv:1508.07170 [math-ph] (or arXiv:1508.07170v1 [math-ph] for this version)\n\n## Submission history\n\nFrom: Pieter Naaijkens [view email]\n[v1] Fri, 28 Aug 2015 11:16:29 UTC (146 KB)" ]
https://matplotlib.org/devdocs/gallery/pie_and_polar_charts/nested_pie.html
[ "# Nested pie charts#\n\nThe following examples show two ways to build a nested pie chart in Matplotlib. Such charts are often referred to as donut charts.\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n\nThe most straightforward way to build a pie chart is to use the pie method.\n\nIn this case, pie takes values corresponding to counts in a group. We'll first generate some fake data, corresponding to three groups. In the inner circle, we'll treat each number as belonging to its own group. In the outer circle, we'll plot them as members of their original 3 groups.\n\nThe effect of the donut shape is achieved by setting a width to the pie's wedges through the wedgeprops argument.\n\nfig, ax = plt.subplots()\n\nsize = 0.3\nvals = np.array([[60., 32.], [37., 40.], [29., 10.]])\n\ncmap = plt.colormaps[\"tab20c\"]\nouter_colors = cmap(np.arange(3)*4)\ninner_colors = cmap([1, 2, 5, 6, 9, 10])\n\nwedgeprops=dict(width=size, edgecolor='w'))\n\nwedgeprops=dict(width=size, edgecolor='w'))\n\nax.set(aspect=\"equal\", title='Pie plot with ax.pie')\nplt.show()", null, "However, you can accomplish the same output by using a bar plot on axes with a polar coordinate system. This may give more flexibility on the exact design of the plot.\n\nIn this case, we need to map x-values of the bar chart onto radians of a circle. The cumulative sum of the values are used as the edges of the bars.\n\nfig, ax = plt.subplots(subplot_kw=dict(projection=\"polar\"))\n\nsize = 0.3\nvals = np.array([[60., 32.], [37., 40.], [29., 10.]])\n# Normalize vals to 2 pi\nvalsnorm = vals/np.sum(vals)*2*np.pi\n# Obtain the ordinates of the bar edges\nvalsleft = np.cumsum(np.append(0, valsnorm.flatten()[:-1])).reshape(vals.shape)\n\ncmap = plt.colormaps[\"tab20c\"]\nouter_colors = cmap(np.arange(3)*4)\ninner_colors = cmap([1, 2, 5, 6, 9, 10])\n\nax.bar(x=valsleft[:, 0],\nwidth=valsnorm.sum(axis=1), bottom=1-size, height=size,\ncolor=outer_colors, edgecolor='w', linewidth=1, align=\"edge\")\n\nax.bar(x=valsleft.flatten(),\nwidth=valsnorm.flatten(), bottom=1-2*size, height=size,\ncolor=inner_colors, edgecolor='w', linewidth=1, align=\"edge\")\n\nax.set(title=\"Pie plot with ax.bar and polar coordinates\")\nax.set_axis_off()\nplt.show()", null, "References\n\nThe use of the following functions, methods, classes and modules is shown in this example:\n\nGallery generated by Sphinx-Gallery" ]
[ null, "https://matplotlib.org/devdocs/_images/sphx_glr_nested_pie_001.png", null, "https://matplotlib.org/devdocs/_images/sphx_glr_nested_pie_002.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.65401846,"math_prob":0.99538416,"size":2408,"snap":"2022-27-2022-33","text_gpt3_token_len":669,"char_repetition_ratio":0.098169714,"word_repetition_ratio":0.10526316,"special_character_ratio":0.29194352,"punctuation_ratio":0.21804512,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9978256,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-01T03:21:31Z\",\"WARC-Record-ID\":\"<urn:uuid:9df02696-831c-41e9-ae35-0e2b671a99ae>\",\"Content-Length\":\"129817\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9c10d05a-3388-4ef0-962a-04254300d288>\",\"WARC-Concurrent-To\":\"<urn:uuid:3b72d795-e4e1-4b24-8b1d-8851f9c16097>\",\"WARC-IP-Address\":\"104.26.1.8\",\"WARC-Target-URI\":\"https://matplotlib.org/devdocs/gallery/pie_and_polar_charts/nested_pie.html\",\"WARC-Payload-Digest\":\"sha1:65SGYHXWGEC6SSSMUP2E4E7HVPHJJ45D\",\"WARC-Block-Digest\":\"sha1:5WGS2DIEWERUKNE4V4VCYLH2ORNMUCJC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103917192.48_warc_CC-MAIN-20220701004112-20220701034112-00664.warc.gz\"}"}
https://codegolf.stackexchange.com/questions/242934/knights-jam-chess
[ "# Knights Jam | Chess\n\nThe knight is a chess piece that, when placed on the o-marked square, can move to any of the x-marked squares (as long as they are inside the board):\n\n.x.x.\nx...x\n..o..\nx...x\n.x.x.\n\n\nEight knights, numbered from 1 to 8, have been placed on a 3×3 board, leaving one single square empty ..\n\nThey can neither attack each other, nor share the same square, nor leave the board: the only valid moves are jumps to the empty square.\n\nCompute the minimum number of valid moves required to reach the following ordered configuration by any sequence of valid moves:\n\n123\n456\n78.\n\n\nOutput -1 if it is not reachable.\n\nExample detailed: Possible in 3 moves\n\n128 12. 123 123\n356 --> 356 --> .56 --> 456\n7.4 784 784 78.\n\n\n## Input & Output\n\n• You are given three lines of three characters (containing each of the characters 1-8 and . exactly once)\n• You are to output a single integer corresponding to the smallest number of moves needed to reach the ordered configuration, or -1 if it is not reachable.\n• You are allowed to take in the input as a matrix or array/list\n• You are allowed to use 0 or in the input instead of .\n\n## Test cases\n\n128\n356\n7.4\n->\n3\n\n674\n.25\n831\n->\n-1\n\n.67\n835\n214\n->\n-1\n\n417\n.53\n826\n->\n23\n\n\n## Scoring\n\nThis is code-golf, so shortest code wins!\n\nCredits to this puzzle\n\n• Can we replace . with something like 0 in the input? Feb 17, 2022 at 8:27\n\n# Charcoal, 54 52 bytes\n\n≔⭆³Sθ≔⭆83270561§θIιθI⌈⟦±¹⁻²⁸↔⁻²⁸⁺×⁸⌕”)″“◨_'↷χ”⁻θ.⌕θ.\n\n\nTry it online! Link is to verbose version of code. Save 4 bytes if printing any negative number is acceptable for an unreachable configuration. Explanation:\n\n≔⭆³Sθ\n\n\nInput the grid and join the lines together into a single string.\n\n≔⭆83270561§θIιθ\n\n\nReorder the digits from the string starting at the original bottom right corner and taking clockwise knight's moves.\n\nI⌈⟦±¹⁻²⁸↔⁻²⁸\n\n\nOutput the difference from 28 of the absolute difference with 28 (or -1 if that is negative) of...\n\n⁺×⁸⌕”)″“◨_'↷χ”⁻θ.⌕θ.\n\n\n... finding the position of that string excluding the . in the compressed string 4381672438167, multiplying that by 8, and adding on the position of the ..\n\n# JavaScript (Node.js), 331 bytes\n\n(n,F=(i,I)=>(I%3-i%3)**2+((I/3|0)-(i/3|0))**2==5)=>Math.min(...(X=[...Array(9)].flatMap((e,i,a)=>F(i,n.search0)?[eval('for(k=0,N=n,S=[];!(t=S.filter(Z=>Z==N))&&N!=\"123456780\";(N=[...m=N],N[h=m.search0]=N[(H=k?a.findIndex((E,P)=>P!=M&F(P,h)):i,M=m.search0,H)],N[H]=\"0\",S.push(N=N.join),k++));t?-1:k')]:[])).length?X:[-1])\n\n\nTry it online!\n\nTakes a 9-character string where the dot is replaced by a zero.\n\n## Explanation\n\nAs the trick is really fun to find, I've hidden it behind a spoiler.\n\nThere are only two spaces that are a knight's move away from the zero. Let us consider one of the pieces. If we move it to the zero, then we can decide to move it back to where it was. However, that wastes moves. Since there are only two spaces that are a knight's move away from any given space, this means that only one other piece can be moved to the new position of the zero, other than the one we just moved. As a result, we must move that piece. Using the same reasoning as above, this means that there is only one \"path\" originating from a given move, because we only have one choice at each stage. Eventually, we will end up at the ideal arrangement. Now, to account for the impossible cases, notice that there are only so many possible moves. 
Eventually, we will reach an arrangement that we have already seen. If we do, that means the case is impossible and we can break out of the loop and return -1.

A problem I have is that this is way too long, so I'll try to keep golfing it.

Fixed a bug that would create outputs of Infinity for certain inputs.

# 05AB1E, ~~40 39~~ 38 bytes

-1 byte porting @Neil's 52-byte Charcoal approach:

```
•4Gв©Tв•8×I•55˜u•SèDðkUþJk8*X+D56α‚ß®M
```

```
•þÜε•SDIðkk._Tиü2vÐ{QiND56α‚ßq}DyèyRǝ}®
```

Both programs take a flattened list of digits as input with " " as empty square.

Explanation:

```
•4Gв©Tв•  # Push compressed integer 4381672438167
8×        # Repeat it 8 times as string
I         # Push the input-list
•55˜u•    # Push compressed integer 83270561
S         # Convert it to a list of digits
è         # Index each into the input
D         # Duplicate it
ðk        # Get the index of the space " "
U         # Pop and store this index in variable X
þ         # Remove the space by only keeping digits
J         # Join it together to a string
k         # Get the first 0-based index of this in the large string
8*        # Multiply this by 8
X+        # Add index X
D         # Duplicate it
56α       # Get the absolute difference with 56
‚         # Pair both together
ß         # Pop and push the minimum
®         # Push -1
M         # Push the largest number on the stack
          # (after which it is output implicitly as result)
```

```
•þÜε•     # Push compressed integer 16507238
S         # Convert it to a list of digits
D         # Duplicate it
I         # Push the input-list
ðk        # Get the index of the space " "
k         # Use that to index into the duplicated list of digits
._        # Rotate the list of digits that many times towards the left
Tи        # Repeat it 10 times as list
ü2        # Get all overlapping pairs of this list
v         # Loop over each pair y:
Ð         # Triplicate the current list
          # (which will be the implicit input in the first iteration)
{         # Sort the top copy
          # (where the space will go to the end of the sorted digits)
Qi        # If the two lists are still the same
N         # Push the loop index
D56α‚ß    # Same as above
q         # Stop the program
          # (after which the result is output implicitly)
}         # Close the if-statement
DyèyRǝ    # Swap the values at indices y:
D         # Duplicate the list once more
yè        # Get the values at the y indices
yR        # Push the reversed pair y
ǝ         # Insert the values back into the list at those indices
}         # Close the loop
®         # Push -1
          # (which is output implicitly if we haven't encountered the q)
```

See this 05AB1E tip of mine (section How to compress large integers?) to understand why •4Gв©Tв• is 4381672438167; •55˜u• is 83270561; and •þÜε• is 16507238.

• I think D56α‚ß can be 28α28α for the same byte count, but maybe you can deduplicate that somehow? – Neil Feb 18, 2022 at 8:29
• @Neil I'm afraid not. I could do it like 28©α®α or 2F28α}, but both would be the same byte-count. Feb 18, 2022 at 8:33
• Oh well. It saved me two bytes in Charcoal, so I thought I'd mention it just in case. – Neil Feb 18, 2022 at 8:41

# Python 3, 126 bytes

```
def f(a,s=[*range(1,9),0],m=0,n=8):
 for o in(3,2,7,0,5,6,1,8)*7:m+=a!=s or m-55;s[n],s[o],n=s[o],0,o
 return min(56-m or-1,m)
```

Try it online!

-1 byte thanks to Jonathan Allan

-3 bytes thanks to Arnauld

There are 56 possible permutations. From the starting position, we can walk a circular path 7 times, until we reach the starting position again. If the input is found, we return the number of steps m required. If the path is shorter the other way around, we return 56-m.

• Does it really always work? For instance, [1,0,3,4,5,6,7,8,2] should return 1.
Feb 17, 2022 at 13:00
• def f(a,s=[*range(1,9),0],p=(3,2,7,0,5,6,1,8)*7,m=0): saves one. Feb 17, 2022 at 13:58

# Excel, 158 bytes

```
=IFERROR(LET(a,MID(A1&A2&A3,{2;7;6;1;8;3;4;9},1)*1,b,CONCAT(FILTER(a,a)),d,"2761834",f,(FIND(2,b)*8+1-XMATCH(0,a))*(1-ISERROR(FIND(b,d&d)))-1,MIN(f,56-f)),-1)
```

# JavaScript (ES6), ~~110 108~~ 105 bytes

This is very similar to Jitse's answer.

Expects a comma-separated string.

```
a=>(b=[..."123456780"],g=k=>i=b!=a&&k--?g(k,b[(s="83270561")[k+1&7]]=b[p=s[k&7]],b[p]=0):k)(56)>28?56-i:i
```

Try it online!

# Ruby, ~~101~~ 98 bytes

```
->n{a=[*1..8,0];z=j=-29
("I;-XAf#"*7).bytes{|i|j+=1;a==n&&z=j;a[i/9%9],a[i%9]=a[i%9],0}
28-z.abs}
```

Try it online!

Fairly straightforward. Cycle generates all 56 possible positions and compares with the input.

Saved some bytes off the original version by counting from the solved position at -28 through 0 to +27, and using `28-z.abs` to decode the output.
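As an editorial addendum (not one of the competing answers): the forced-path trick described in the spoiler above can also be verified with a plain breadth-first search over board states. This ungolfed Python sketch assumes the input is a 9-character string with 0 for the empty square; because each empty square has at most two knight neighbours, the search in practice degenerates to the single 56-state cycle the golfed answers exploit.

```python
from collections import deque

# Knight-move adjacency on a 3x3 board (cell 4, the center, has no knight moves)
MOVES = {0: (5, 7), 1: (6, 8), 2: (3, 7), 3: (2, 8),
         4: (), 5: (0, 6), 6: (1, 5), 7: (0, 2), 8: (1, 3)}

def solve(board):
    """board: 9-char string, e.g. '128356704'. Returns the minimum number of
    moves to reach '123456780', or -1 if unreachable."""
    goal = "123456780"
    seen = {board}
    queue = deque([(board, 0)])
    while queue:
        state, dist = queue.popleft()
        if state == goal:
            return dist
        hole = state.index("0")
        for src in MOVES[hole]:          # only knights a move away can jump in
            nxt = list(state)
            nxt[hole], nxt[src] = nxt[src], "0"
            nxt = "".join(nxt)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return -1

print(solve("128356704"))  # 3, matching the worked example
print(solve("674025831"))  # -1
```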
https://www.taxalertindia.com/2011/07/all-about-interest-rates-on-rupees.html
[ "Jul 3, 2011\n\nReserve Bank of India has issued a master circular about the interest rates on all types of deposit by the banks. In this circular RBI has classified all types of investment in the banks and maximum rate of interest on these deposit. In this master circular includes.\n\nRate of interest on saving account\nRate of interest on current account\nInterest Payable on term deposits.\nPayment of interest of term deposit maturing on non banking day\nPremature withdrawal of term deposit\nFDR,RD\nPayment of interest on frozen accounts\nLoan against fdr\nRounding off the transactions.\nDormat accounts etc.\nFor example RBI said that rate of interest on saving account would be maximum of 4% p.a. whereas rate of interest on current account would be maximum of 0.5%.\nFull master circular is as under\nAll about interest rates in india\n\nTags-rate of interest on saving account,rate of interest on current account,interest rate on fixed deposit,interest rate on term deposit,interest rate on recurring deposit,interest rate on premature withdrawl of fdr" ]
https://www.colorhexa.com/50525d
[ "# #50525d Color Information\n\nIn a RGB color space, hex #50525d is composed of 31.4% red, 32.2% green and 36.5% blue. Whereas in a CMYK color space, it is composed of 14% cyan, 11.8% magenta, 0% yellow and 63.5% black. It has a hue angle of 230.8 degrees, a saturation of 7.5% and a lightness of 33.9%. #50525d color hex could be obtained by blending #a0a4ba with #000000. Closest websafe color is: #666666.\n\n• R 31\n• G 32\n• B 36\nRGB color chart\n• C 14\n• M 12\n• Y 0\n• K 64\nCMYK color chart\n\n#50525d color description : Very dark grayish blue.\n\n# #50525d Color Conversion\n\nThe hexadecimal color #50525d has RGB values of R:80, G:82, B:93 and CMYK values of C:0.14, M:0.12, Y:0, K:0.64. Its decimal value is 5263965.\n\nHex triplet RGB Decimal 50525d `#50525d` 80, 82, 93 `rgb(80,82,93)` 31.4, 32.2, 36.5 `rgb(31.4%,32.2%,36.5%)` 14, 12, 0, 64 230.8°, 7.5, 33.9 `hsl(230.8,7.5%,33.9%)` 230.8°, 14, 36.5 666666 `#666666`\nCIE-LAB 35.064, 1.734, -6.674 8.301, 8.53, 11.565 0.292, 0.3, 8.53 35.064, 6.896, 284.566 35.064, -1.644, -8.77 29.207, -0.379, -3.031 01010000, 01010010, 01011101\n\n# Color Schemes with #50525d\n\n• #50525d\n``#50525d` `rgb(80,82,93)``\n• #5d5b50\n``#5d5b50` `rgb(93,91,80)``\nComplementary Color\n• #50595d\n``#50595d` `rgb(80,89,93)``\n• #50525d\n``#50525d` `rgb(80,82,93)``\n• #55505d\n``#55505d` `rgb(85,80,93)``\nAnalogous Color\n• #595d50\n``#595d50` `rgb(89,93,80)``\n• #50525d\n``#50525d` `rgb(80,82,93)``\n• #5d5550\n``#5d5550` `rgb(93,85,80)``\nSplit Complementary Color\n• #525d50\n``#525d50` `rgb(82,93,80)``\n• #50525d\n``#50525d` `rgb(80,82,93)``\n• #5d5052\n``#5d5052` `rgb(93,80,82)``\nTriadic Color\n• #505d5b\n``#505d5b` `rgb(80,93,91)``\n• #50525d\n``#50525d` `rgb(80,82,93)``\n• #5d5052\n``#5d5052` `rgb(93,80,82)``\n• #5d5b50\n``#5d5b50` `rgb(93,91,80)``\nTetradic Color\n• #2d2e34\n``#2d2e34` `rgb(45,46,52)``\n• #383a42\n``#383a42` `rgb(56,58,66)``\n• #44464f\n``#44464f` `rgb(68,70,79)``\n• #50525d\n``#50525d` `rgb(80,82,93)``\n• #5c5e6b\n``#5c5e6b` `rgb(92,94,107)``\n• #686a78\n``#686a78` `rgb(104,106,120)``\n• #737686\n``#737686` `rgb(115,118,134)``\nMonochromatic Color\n\n# Alternatives to #50525d\n\nBelow, you can see some colors close to #50525d. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #50555d\n``#50555d` `rgb(80,85,93)``\n• #50545d\n``#50545d` `rgb(80,84,93)``\n• #50535d\n``#50535d` `rgb(80,83,93)``\n• #50525d\n``#50525d` `rgb(80,82,93)``\n• #50515d\n``#50515d` `rgb(80,81,93)``\n• #50505d\n``#50505d` `rgb(80,80,93)``\n• #51505d\n``#51505d` `rgb(81,80,93)``\nSimilar Colors\n\n# #50525d Preview\n\nText with hexadecimal color #50525d\n\nThis text has a font color of #50525d.\n\n``<span style=\"color:#50525d;\">Text here</span>``\n#50525d background color\n\nThis paragraph has a background color of #50525d.\n\n``<p style=\"background-color:#50525d;\">Content here</p>``\n#50525d border color\n\nThis element has a border color of #50525d.\n\n``<div style=\"border:1px solid #50525d;\">Content here</div>``\nCSS codes\n``.text {color:#50525d;}``\n``.background {background-color:#50525d;}``\n``.border {border:1px solid #50525d;}``\n\n# Shades and Tints of #50525d\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #070809 is the darkest color, while #fdfdfd is the lightest one.

• #070809 `rgb(7,8,9)`
• #111113 `rgb(17,17,19)`
• #1a1a1e `rgb(26,26,30)`
• #232428 `rgb(35,36,40)`
• #2c2d33 `rgb(44,45,51)`
• #35363d `rgb(53,54,61)`
• #3e3f48 `rgb(62,63,72)`
• #474952 `rgb(71,73,82)`
• #50525d `rgb(80,82,93)`
• #595b68 `rgb(89,91,104)`
• #626572 `rgb(98,101,114)`
• #6b6e7d `rgb(107,110,125)`
• #747787 `rgb(116,119,135)`
Shade Color Variation
• #7f8191 `rgb(127,129,145)`
• #898c9a `rgb(137,140,154)`
• #9496a3 `rgb(148,150,163)`
• #9ea0ac `rgb(158,160,172)`
• #a9abb5 `rgb(169,171,181)`
• #b3b5be `rgb(179,181,190)`
• #bebfc7 `rgb(190,191,199)`
• #c8cad0 `rgb(200,202,208)`
• #d3d4d9 `rgb(211,212,217)`
• #dddee2 `rgb(221,222,226)`
• #e8e9eb `rgb(232,233,235)`
• #f3f3f4 `rgb(243,243,244)`
• #fdfdfd `rgb(253,253,253)`
Tint Color Variation

# Tones of #50525d

A tone is produced by adding gray to any pure hue. In this case, #50525d is the less saturated color, while #001bad is the most saturated one.

• #50525d `rgb(80,82,93)`
• #494d64 `rgb(73,77,100)`
• #43496a `rgb(67,73,106)`
• #3c4471 `rgb(60,68,113)`
• #354078 `rgb(53,64,120)`
• #2f3b7e `rgb(47,59,126)`
• #283685 `rgb(40,54,133)`
• #21328c `rgb(33,50,140)`
• #1b2d92 `rgb(27,45,146)`
• #142999 `rgb(20,41,153)`
• #0d24a0 `rgb(13,36,160)`
• #071fa6 `rgb(7,31,166)`
• #001bad `rgb(0,27,173)`
Tone Color Variation

# Color Blindness Simulator

Below, you can see how #50525d is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.

Monochromacy
• Achromatopsia 0.005% of the population
• Atypical Achromatopsia 0.001% of the population
Dichromacy
• Protanopia 1% of men
• Deuteranopia 1% of men
• Tritanopia 0.001% of the population
Trichromacy
• Protanomaly 1% of men, 0.01% of women
• Deuteranomaly 6% of men, 0.4% of women
• Tritanomaly 0.01% of the population
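As a rough sketch of the RGB-to-CMYK and RGB-to-HSL conversions tabulated above (my own addition, using the standard conversion formulas; small rounding differences from the page's figures are expected):

```python
def hex_to_conversions(hex_color):
    """Convert a hex color like '50525d' to CMYK and HSL components."""
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (0, 2, 4))

    # CMYK: K is the deficit of the brightest channel; C/M/Y are scaled deficits
    k = 1 - max(r, g, b)
    c, m, y = ((1 - ch - k) / (1 - k) if k < 1 else 0 for ch in (r, g, b))

    # HSL lightness and saturation from the min/max channels
    hi, lo = max(r, g, b), min(r, g, b)
    l = (hi + lo) / 2
    s = 0 if hi == lo else (hi - lo) / (1 - abs(2 * l - 1))

    # Hue in degrees, by which channel dominates
    if hi == lo:
        h = 0
    elif hi == r:
        h = 60 * (((g - b) / (hi - lo)) % 6)
    elif hi == g:
        h = 60 * ((b - r) / (hi - lo) + 2)
    else:
        h = 60 * ((r - g) / (hi - lo) + 4)

    return (c, m, y, k), (h, s, l)

(c, m, y, k), (h, s, l) = hex_to_conversions("50525d")
print(f"CMYK ~ {c:.0%} {m:.0%} {y:.0%} {k:.0%}")   # ~14% 12% 0% 64%
print(f"HSL  ~ {h:.1f}deg {s:.1%} {l:.1%}")        # ~230.8deg 7.5% 33.9%
```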
https://www.teachoo.com/3645/699/Ex-5.3--14---Find-dy-dx-in--y-sin-1-(2x-root-1-x2)---CBSE/category/Ex-5.3/
[ "Ex 5.3\n\nChapter 5 Class 12 Continuity and Differentiability\nSerial order wise", null, "", null, "Learn in your speed, with individual attention - Teachoo Maths 1-on-1 Class\n\n### Transcript\n\nEx 5.3, 14 Find 𝑑𝑦/𝑑𝑥 in, y = sin–1 (2𝑥 √(1−𝑥^2 )) , − 1/√2 < x < 1/√2 y = sin–1 (2𝑥 √(1−𝑥^2 )) Putting 𝑥 =𝑠𝑖𝑛⁡𝜃 𝑦 = sin–1 (2 sin⁡𝜃 √(1−〖𝑠𝑖𝑛〗^2 𝜃)) 𝑦 = sin–1 ( 2 sin θ √(〖𝑐𝑜𝑠〗^2 𝜃)) 𝑦 =\"sin–1 \" (〖\"2 sin θ\" 〗⁡cos⁡𝜃 ) 𝑦 = sin–1 (sin⁡〖2 𝜃)〗 𝑦 = 2θ Putting value of θ = sin−1 x 𝑦 = 2 〖𝑠𝑖𝑛〗^(−1) 𝑥 Since x = sin θ ∴ 〖𝑠𝑖𝑛〗^(−1) x = θ Differentiating both sides 𝑤.𝑟.𝑡.𝑥 . (𝑑(𝑦))/𝑑𝑥 = (𝑑 (〖2 sin^(−1)〗⁡𝑥 ))/𝑑𝑥 𝑑𝑦/𝑑𝑥 = 2 (𝑑〖 (𝑠𝑖𝑛〗^(−1) 𝑥))/𝑑𝑥 𝑑𝑦/𝑑𝑥 = 2 (1/√(1 −〖 𝑥〗^2 )) 𝒅𝒚/𝒅𝒙 = 𝟐/√(𝟏 − 𝒙^𝟐 ) ((sin^(−1)⁡𝑥 )^′= 1/√(1 − 𝑥^2 ))", null, "" ]
[ null, "https://d1avenlh0i1xmr.cloudfront.net/6dd4d5ba-bf17-4c9f-b8f9-b80ecb0108ec/slide30.jpg", null, "https://d1avenlh0i1xmr.cloudfront.net/7d0c74bc-6e2e-4559-ad3e-9752b8facaa1/slide31.jpg", null, "https://www.teachoo.com/static/misc/Davneet_Singh.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.75662714,"math_prob":0.9999299,"size":654,"snap":"2023-40-2023-50","text_gpt3_token_len":541,"char_repetition_ratio":0.13538462,"word_repetition_ratio":0.08510638,"special_character_ratio":0.53669727,"punctuation_ratio":0.072289154,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994992,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,5,null,5,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-01T12:16:16Z\",\"WARC-Record-ID\":\"<urn:uuid:a2a5cc7f-dcb8-4705-8431-6150b9472b37>\",\"Content-Length\":\"158794\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e182439d-c1e7-4184-8aa4-b2c519d74811>\",\"WARC-Concurrent-To\":\"<urn:uuid:a2d57cb5-5e09-44f3-9c5c-df2025518a6b>\",\"WARC-IP-Address\":\"23.22.5.68\",\"WARC-Target-URI\":\"https://www.teachoo.com/3645/699/Ex-5.3--14---Find-dy-dx-in--y-sin-1-(2x-root-1-x2)---CBSE/category/Ex-5.3/\",\"WARC-Payload-Digest\":\"sha1:ZKQJIWCPRT4ILIUIPI7ALK6EID5EIA4G\",\"WARC-Block-Digest\":\"sha1:M2N7HYHH2HZVTO2GOIHTU2WS562WFQUS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510888.64_warc_CC-MAIN-20231001105617-20231001135617-00565.warc.gz\"}"}
https://mathexamination.com/class/bohr-compactification.php
[ "## Do My Bohr Compactification Class", null, "A \"Bohr Compactification Class\" QE\" is a basic mathematical term for a generalized continuous expression which is utilized to fix differential equations and has options which are regular. In differential Class solving, a Bohr Compactification function, or \"quad\" is utilized.\n\nThe Bohr Compactification Class in Class kind can be expressed as: Q( x) = -kx2, where Q( x) are the Bohr Compactification Class and it is an essential term. The q part of the Class is the Bohr Compactification constant, whereas the x part is the Bohr Compactification function.\n\nThere are four Bohr Compactification functions with correct solution: K4, K7, K3, and L4. We will now take a look at these Bohr Compactification functions and how they are fixed.\n\nK4 - The K part of a Bohr Compactification Class is the Bohr Compactification function. This Bohr Compactification function can also be written in partial portions such as: (x2 - y2)/( x+ y). To fix for K4 we increase it by the correct Bohr Compactification function: k( x) = x2, y2, or x-y.\n\nK7 - The K7 Bohr Compactification Class has a service of the type: x4y2 - y4x3 = 0. The Bohr Compactification function is then increased by x to get: x2 + y2 = 0. We then have to multiply the Bohr Compactification function with k to get: k( x) = x2 and y2.\n\nK3 - The Bohr Compactification function Class is K3 + K2 = 0. We then multiply by k for K3.\n\nK3( t) - The Bohr Compactification function equationis K3( t) + K2( t). We multiply by k for K3( t). Now we multiply by the Bohr Compactification function which gives: K2( t) = K( t) times k.\n\nThe Bohr Compactification function is likewise called \"K4\" because of the initials of the letters K and 4. K indicates Bohr Compactification, and the word \"quad\" is pronounced as \"kah-rab\".\n\nThe Bohr Compactification Class is among the main techniques of resolving differential formulas. In the Bohr Compactification function Class, the Bohr Compactification function is first increased by the appropriate Bohr Compactification function, which will offer the Bohr Compactification function.\n\nThe Bohr Compactification function is then divided by the Bohr Compactification function which will divide the Bohr Compactification function into a real part and an imaginary part. This offers the Bohr Compactification term.\n\nLastly, the Bohr Compactification term will be divided by the numerator and the denominator to get the quotient. We are entrusted to the right-hand man side and the term \"q\".\n\nThe Bohr Compactification Class is an important concept to comprehend when resolving a differential Class. The Bohr Compactification function is simply one method to solve a Bohr Compactification Class. The techniques for resolving Bohr Compactification equations include: particular worth decay, factorization, optimal algorithm, mathematical service or the Bohr Compactification function approximation.\n\n## Hire Someone To Do Your Bohr Compactification Class\n\nIf you wish to end up being familiar with the Quartic Class, then you need to first begin by checking out the online Quartic page. This page will show you how to use the Class by utilizing your keyboard. The explanation will also show you how to create your own algebra formulas to assist you study for your classes.\n\nPrior to you can comprehend how to study for a Bohr Compactification Class, you must first comprehend making use of your keyboard. 
You will discover how to click on the function keys on your keyboard, in addition to how to type the letters. There are three rows of function keys on your keyboard. Each row has four functions: Alt, F1, F2, and F3.\n\nBy pressing Alt and F2, you can multiply and divide the value by another number, such as the number 6. By pressing Alt and F3, you can utilize the 3rd power.\n\nWhen you press Alt and F3, you will key in the number you are attempting to increase and divide. To increase a number by itself, you will press Alt and X, where X is the number you want to multiply. When you push Alt and F3, you will enter the number you are trying to divide.\n\nThis works the same with the number 6, except you will only enter the two digits that are six apart. Lastly, when you push Alt and F3, you will utilize the 4th power. However, when you press Alt and F4, you will use the real power that you have actually found to be the most appropriate for your issue.\n\nBy utilizing the Alt and F function keys, you can increase, divide, and then use the formula for the 3rd power. If you need to multiply an odd variety of x's, then you will need to go into an even number.\n\nThis is not the case if you are attempting to do something complex, such as multiplying two even numbers. For example, if you wish to multiply an odd variety of x's, then you will require to enter odd numbers. This is specifically real if you are trying to find out the answer of a Bohr Compactification Class.\n\nIf you wish to convert an odd number into an even number, then you will need to push Alt and F4. If you do not know how to increase by numbers on their own, then you will need to utilize the letters x, a b, c, and d.\n\nWhile you can multiply and divide by utilize of the numbers, they are much easier to utilize when you can take a look at the power tables for the numbers. You will need to do some research study when you first start to utilize the numbers, however after a while, it will be second nature. After you have created your own algebra formulas, you will have the ability to produce your own multiplication tables.\n\nThe Bohr Compactification Solution is not the only way to fix Bohr Compactification formulas. It is necessary to learn about trigonometry, which utilizes the Pythagorean theorem, and then use Bohr Compactification solutions to resolve problems. With this approach, you can know about angles and how to resolve problems without needing to take another algebra class.\n\nIt is very important to try and type as rapidly as possible, because typing will help you know about the speed you are typing. This will assist you write your answers faster.\n\n## Pay Someone To Take My Bohr Compactification Class", null, "A Bohr Compactification Class is a generalization of a linear Class. For example, when you plug in x=a+b for a given Class, you get the value of x. When you plug in x=a for the Class y=c, you obtain the values of x and y, which offer you an outcome of c. By applying this standard concept to all the equations that we have actually attempted, we can now solve Bohr Compactification equations for all the worths of x, and we can do it rapidly and effectively.\n\nThere are numerous online resources readily available that offer free or inexpensive Bohr Compactification formulas to resolve for all the worths of x, including the expense of time for you to be able to take advantage of their Bohr Compactification Class task assistance service. 
These resources normally do not need a membership charge or any sort of investment.\n\nThe answers offered are the result of complex-variable Bohr Compactification formulas that have been resolved. This is likewise the case when the variable utilized is an unidentified number.\n\nThe Bohr Compactification Class is a term that is an extension of a direct Class. One advantage of using Bohr Compactification formulas is that they are more general than the direct formulas. They are easier to solve for all the worths of x.\n\nWhen the variable used in the Bohr Compactification Class is of the type x=a+b, it is easier to resolve the Bohr Compactification Class due to the fact that there are no unknowns. As a result, there are fewer points on the line defined by x and a consistent variable.\n\nFor a right-angle triangle whose base points to the right and whose hypotenuse points to the left, the right-angle tangent and curve graph will form a Bohr Compactification Class. This Class has one unknown that can be found with the Bohr Compactification formula. For a Bohr Compactification Class, the point on the line defined by the x variable and a consistent term are called the axis.\n\nThe existence of such an axis is called the vertex. Since the axis, vertex, and tangent, in a Bohr Compactification Class, are a given, we can discover all the values of x and they will sum to the given worths. This is achieved when we use the Bohr Compactification formula.\n\nThe factor of being a consistent factor is called the system of formulas in Bohr Compactification formulas. This is in some cases called the central Class.\n\nBohr Compactification equations can be resolved for other values of x. One way to fix Bohr Compactification equations for other worths of x is to divide the x variable into its element part.\n\nIf the variable is given as a favorable number, it can be divided into its factor parts to get the normal part of the variable. This variable has a magnitude that amounts to the part of the x variable that is a constant. In such a case, the formula is a third-order Bohr Compactification Class.\n\nIf the variable x is unfavorable, it can be divided into the same part of the x variable to get the part of the x variable that is increased by the denominator. In such a case, the formula is a second-order Bohr Compactification Class.\n\nOption assistance service in fixing Bohr Compactification equations. When using an online service for solving Bohr Compactification equations, the Class will be solved instantly." ]
[ null, "https://mathexamination.com/Do-My-Math-Class.webp", null, "https://mathexamination.com/Take-My-Math-Class.webp", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.899485,"math_prob":0.94387484,"size":9520,"snap":"2021-31-2021-39","text_gpt3_token_len":2095,"char_repetition_ratio":0.25210172,"word_repetition_ratio":0.05939394,"special_character_ratio":0.20493698,"punctuation_ratio":0.0949506,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99134874,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-04T02:06:17Z\",\"WARC-Record-ID\":\"<urn:uuid:0272b268-04a7-44db-ad4c-c24271c11cad>\",\"Content-Length\":\"20072\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:28a7aa03-68a0-4973-83fd-a8a954231ede>\",\"WARC-Concurrent-To\":\"<urn:uuid:b0ed0575-b376-44d9-bc3b-82bd3654f0cf>\",\"WARC-IP-Address\":\"172.67.178.201\",\"WARC-Target-URI\":\"https://mathexamination.com/class/bohr-compactification.php\",\"WARC-Payload-Digest\":\"sha1:BD4J5AIWKUBB24DTS6LSIUAXOE6JXL5W\",\"WARC-Block-Digest\":\"sha1:BFJHXG4QFV5GMUU553WK6CX53L2BRANB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154500.32_warc_CC-MAIN-20210804013942-20210804043942-00529.warc.gz\"}"}
https://takeuforward.org/data-structure/prime-numbers-in-a-given-range/
[ "# Prime Numbers in a given range\n\nProblem Statement: Given a and b, find prime numbers in a given range [a,b], (a and b are included here).\n\nExamples:\n\n```Examples:\nInput: 2 10\nOutput: 2 3 5 7\nExplanation: Prime Numbers b/w 2 and 10 are 2,3,5 and 7.\n\nExample 2:\nInput: 10 16\nOutput: 11 13\nExplanation: Prime Numbers b/w 10 and 16 are 11 and 13.\n```\n\n### Solution\n\nDisclaimer: Don’t jump directly to the solution, try it out yourself first.", null, "Approach: Prime numbers b/w a and b can be found out by iterating through every number from a and b and checking for the number whether it is a prime number or not.\n\nFor E.g.\n\na=10\n\nb=18\n\nLet’s begin from 10\n\n1. 10 is not prime.\n2. 11 is prime.\n3. 12 is not prime.\n4. 13 is prime.\n5. 14 is not prime.\n6. 15 is not prime\n7. 16 is not prime.\n8. 17 is prime.\n9. 18 is not prime.\n\nSo, print 11, 13, and 17 as prime numbers.\n\nCode:\n\n## C++ Code\n\n``````#include <iostream>\n#include <math.h>\nusing namespace std;\nbool checkprime(int num)\n{\nif (num == 1)\nreturn false;\nint i = 2;\nfor (i = 2; i < sqrt(num); i++)\n{\nif (num % i == 0)\nreturn false;\n}\nreturn true;\n}\nvoid PrintPrimesbwrange(int a, int b)\n{\nfor (int i = a; i <= b; i++)\n{\nif (checkprime(i))\n{\ncout << i << \" \";\n}\n}\n}\nint main()\n{\nint a = 11, b = 17;\nPrintPrimesbwrange(a, b);\nreturn 0;\n}\n``````\n\nOutput: 11 13 17\n\nTime Complexity: O(N2) Since two nested loops are used.\n\nSpace Complexity: O(1)\n\n## Java Code\n\n``````public class Main {\npublic static boolean isPrime(int num) {\nif (num == 1)\nreturn false;\nfor (int i = 2; i < Math.sqrt(num); i++) {\nif (num % i == 0)\nreturn false;\n}\nreturn true;\n}\npublic static void PrintPrimesbwrange(int a, int b) {\nfor (int i = a; i <= b; i++) {\nif (isPrime(i)) {\nSystem.out.print(i + \" \");\n}\n}\n}\npublic static void main(String args[]) {\nint a = 10, b = 17;\nPrintPrimesbwrange(a, b);\n}\n}\n``````\n\nOutput: 11 13 17\n\nTime Complexity: O(N2) Since two nested loops are used.\n\nSpace Complexity: O(1)\n\nSpecial thanks to Gurmeet Singh for contributing to this article on takeUforward. If you also wish to share your knowledge with the takeUforward fam, please check out this article" ]
[ null, "https://lh5.googleusercontent.com/oFi9q9_hDw70OLWKnwRC-Mrg1KaGJ659e4Ghnj4WWJJEt3rcta6Ne64n8g3crf3Q-ve9RTSc4fO3FLXn8-5Wtu4gU1-LyD0szpfGJyAJ70zmwO-WCZmw3DjDLWup1weQJhiksJ-o", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.67592424,"math_prob":0.9892045,"size":1994,"snap":"2022-40-2023-06","text_gpt3_token_len":631,"char_repetition_ratio":0.11859296,"word_repetition_ratio":0.20759493,"special_character_ratio":0.36308926,"punctuation_ratio":0.18,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9988558,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-05T09:51:45Z\",\"WARC-Record-ID\":\"<urn:uuid:89e0efc9-9bd4-455d-a6c1-9cf9a0726366>\",\"Content-Length\":\"74528\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9ea6377d-fab8-47e8-abe9-396a02125ce8>\",\"WARC-Concurrent-To\":\"<urn:uuid:c704eecc-8e2f-4d8c-8360-5500c484feeb>\",\"WARC-IP-Address\":\"104.26.4.96\",\"WARC-Target-URI\":\"https://takeuforward.org/data-structure/prime-numbers-in-a-given-range/\",\"WARC-Payload-Digest\":\"sha1:BAZCCPHJOHRCXO3KSSSQFEWAFUMLSOZG\",\"WARC-Block-Digest\":\"sha1:UKIMBGIYWL2VJCVGWYK7PYVJJD4RAJ7D\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337595.1_warc_CC-MAIN-20221005073953-20221005103953-00088.warc.gz\"}"}
https://discourse.processing.org/t/vertex-color-interpolation-on-shape/17614
[ "", null, "# Vertex color interpolation on shape\n\nHi, I’m using p5js to draw some 3D triangles. For each triangle I have a color defined at each vertex and I would like to see a smooth interpolation of the color using the typical barycentric interpolation across the triangle. I’m trying to do this as follows:\n\n``````beginShape();\nfill(color1);\nvertex(x1, y1, z1);\nfill(color2);\nvertex(x2, y2, z2);\nfill(color3);\nvertex(x3, y3, z3);\nendShape();\n``````\n\nThis is drawing each triangle with a solid flat color instead of an interpolated color. Is there something I’m missing?\n\n1 Like\n\nHi,\n\nThat’s strange, for it surely worked for me.\n\n`````` beginShape(TRIANGLE);\nfor (int i=0; i<vr.size(); i++) {\nRib r = vr.get(i);\nPVector v1 = faces.get(r.tr).cc;\nPVector v2 = faces.get(r.tr).cc;\nfill(c1);\nvertex(vxs.x, vxs.y, vxs.z);\nif (faces.get(r.tr).pid!=pid) {\nfill(c2);\n} else {\nfill(c1);\n}\nvertex(s*v1.x, s*v1.y, s*v1.z);\nif (faces.get(r.tr).pid!=pid) {\nfill(c2);\n} else {\nfill(c1);\n}\nvertex(s*v2.x, s*v2.y, s*v2.z);\n}\nendShape(CLOSE);\n``````\n\n(nevermind all the variables there, there are still three vertices of 3D vectors and I change the `fill()` according to some condition)\n\nThe piece of code above draws every little triangle on the screen each of which is formed by a center point of Voronoi diagram and the two points of each of its edges (I really hope I make myself clear enough, sorry if not, feel free to ask to clarify)\n\nMaybe the parameters of `beginShape(TRIANGLE)` and `endShape(CLOSE)` do the job?\n\n1 Like\n\nYour code appears to be running in Processing not in p5js. I’m thinking that maybe this feature isn’t implemented in p5js yet.\n\nStrange, this code works in the p5js editor:\n\nfunction setup() {\ncreateCanvas(400, 400, WEBGL);\n}\n\nfunction draw() {\nbackground(220);\nbeginShape(TRIANGLES);\nfill(255, 0, 0);\nvertex(-100, -100, 0);\nfill(0, 255, 0);\nvertex( 100, -100, 0);\nfill(0, 0, 255);\nvertex( 0, 100, 0);\nendShape(CLOSE);\n}\n\nbut draws a solid blue triangle on my website. I wonder if there’s a flag I’m setting somewhere that turns this off, or if I’m pulling an old version of p5js.\n\nFound the problem. It’s when lights() has been enabled. This must be a bug.\n\n``````function setup() {\ncreateCanvas(400, 400, WEBGL);\n}\n\nfunction draw() {\nbackground(220);\nlights();\nbeginShape(TRIANGLES);\nfill(255, 0, 0);\nvertex(-100, -100, 0);\nfill(0, 255, 0);\nvertex( 100, -100, 0);\nfill(0, 0, 255);\nvertex( 0, 100, 0);\nendShape(CLOSE);\n}\n``````\n\nThe code above draws a blue triangle in the p5js editor. I’ll file a bug report.\n\n2 Likes\n1 Like" ]
[ null, "https://aws1.discourse-cdn.com/standard10/uploads/processingfoundation1/original/1X/cba26f49f722c2f3c1c1ca16e7e94da13430fda2.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7998144,"math_prob":0.9551618,"size":508,"snap":"2020-10-2020-16","text_gpt3_token_len":131,"char_repetition_ratio":0.14285715,"word_repetition_ratio":0.0,"special_character_ratio":0.26181102,"punctuation_ratio":0.18181819,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9876572,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-23T12:16:01Z\",\"WARC-Record-ID\":\"<urn:uuid:7319dcb6-b47e-4c31-a553-175ebc96be27>\",\"Content-Length\":\"23979\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:632063cb-af38-4766-b18e-9172f296f201>\",\"WARC-Concurrent-To\":\"<urn:uuid:54d8a45d-d30e-4888-b68c-9a88468a27f3>\",\"WARC-IP-Address\":\"66.220.12.139\",\"WARC-Target-URI\":\"https://discourse.processing.org/t/vertex-color-interpolation-on-shape/17614\",\"WARC-Payload-Digest\":\"sha1:GKD5PBZ4XRLS6TCLCHDZ7MS3INN2AWEW\",\"WARC-Block-Digest\":\"sha1:APSVVSQ4XZILSTEVZQEGTIALYTHWNITK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145767.72_warc_CC-MAIN-20200223093317-20200223123317-00375.warc.gz\"}"}
https://help.scilab.org/docs/5.3.0/en_US/nlev.html
[ "Change language to:\nFrançais - 日本語 - Português\n\nSee the recommended documentation of this function\n\nScilab manual >> Linear Algebra > nlev\n\n# nlev\n\nLeverrier's algorithm\n\n### Calling Sequence\n\n`[num,den]=nlev(A,z [,rmax])`\n\n### Arguments\n\nA\n\nreal square matrix\n\nz\n\ncharacter string\n\nrmax\n\noptional parameter (see `bdiag`)\n\n### Description\n\n`[num,den]=nlev(A,z [,rmax])` computes `(z*eye()-A)^(-1)`\n\nby block diagonalization of A followed by Leverrier's algorithm on each block.\n\nThis algorithm is better than the usual leverrier algorithm but still not perfect!\n\n### Examples\n\n```A=rand(3,3);x=poly(0,'x');\n[NUM,den]=nlev(A,'x')\nclean(den-poly(A,'x'))\nclean(NUM/den-inv(x*eye()-A))```" ]
https://www.futurestarr.com/blog/mathematics/d-bar-calculator
[ "FutureStarr\n\nD Bar Calculator\n\n# D Bar Calculator", null, "## Using a D Bar Calculator", null, "The d-bar is a statistical tool used to calculate the d-measure, the distance between two samples' means. In other words, the d-bar measures the spread in the sample. This graph helps you to identify trends in the data. The tool also shows the range of values within a sample. This graph is used to study trends in many different fields. There are various d-bar calculators on the Internet.\n\n## X bar control chart\n\nAn X-bar control chart displays the Xbar values and the range values. These are used in calculating the centerline and the control limits of the chart. Each observation's value is represented by a sub-group called x1 through xn. The grand average is calculated using the average of these subgroups and the corresponding values are grouped together to form the control limits. A standard deviation (SD) is calculated for each subgroup, based on the sample size.\n\nThe X-bar is a statistical shorthand for arithmetic mean or average. The X-bar represents the arithmetic mean of a sample, which differs from the population-wide mean, which is represented by the Greek letter mu. X-bars represent sample mean. This metric is commonly used in statistics and can be interpreted in many different ways.\n\nWhen interpreting a process capability study, the X-bars must be within control limits. When they are not, the process is not stable enough to perform process capability study. This means that an X-bar chart cannot be interpreted unless it is in control. However, if you use an S-bar chart, you will be able to interpret X-bar charts easily. These two charts can help you determine the control center and process variation in a given process.\n\nThe X-bar control chart for a d-bar calculator must be used in situations where frequent data is available. The number of samples needed for an average to be calculated and plotted depends on the size of the subgroup. For example, a four-person subgroup would require four samples to calculate the average and plot points. For a one-person subgroup, the sample would need to be calculated over four days. This means that there are out-of-control points that occurred four days ago.\n\nAnother widely used control chart for variable data is the X bar R. It is widely used in process stability analysis in many industries. While choosing the right chart is crucial, improper selection can result in inaccurate control limits. If you're not careful, you'll end up with a confusing chart. The best chart to use is the one that best suits your specific situation. If you're planning to map a control chart, you should know that it's the one that will give you the best results.\n\nWhen calculating the control limits of a process, you'll need to define a range within a subgroup. A rational subgroup minimizes variations within a subgroup, while maximising opportunities between subgroups. In other words, it identifies changes in a process and reveals the effects of those changes. Incorrectly defined subgroups may even hide changes, rendering the control chart completely useless.\n\nX bar charts have two control limits. One has an upper limit, and the other has a lower one. The upper limit is given by UCLx, and the lower limit, LCLx. The control limits, or range, should be plotted as dashed lines on the chart. The control limits are used in all tests of statistical control. 
## X bar calculator

The X-bar is statistical shorthand for the average or arithmetic mean, usually written as the letter x with a straight line above it. An online X-bar calculator computes the arithmetic mean of a sample group for you, and some can be embedded in a web page. (The x̄ character itself is awkward to type, and not every platform renders it, which is one reason dedicated calculators exist.)

The same calculation is easy in Excel: the mean is the sum of the values divided by the number of values, so you can either use a worksheet function or enter the range into an X-bar calculator and press Enter or Tab.

An X-bar calculator works on any list of values, for example 12, 24, 13, 45, and 55.

Beyond summarizing data, X-bars are used to judge how stable a process is. For a process capability study the subgroup averages must stay within the control limits, roughly three standard deviations either side of the centerline; otherwise the study is invalid.

The X-bar chart plots a sequence of fixed-size subgroups, which lets you analyze the process variance and estimate the process center. That is why it is a standard tool for assessing process stability in many industries, and it can be built in Excel as well.

You can also paste your own data from a text document or spreadsheet into an online calculator, press the "Submit Data" button to run the computation, and enter new data to start over. So, why not give it a try?
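As a sanity check on any such calculator, the arithmetic mean is a one-liner in Python; a quick sketch using the example values above (12, 24, 13, 45, 55):

```
import numpy as np

values = [12, 24, 13, 45, 55]
x_bar = np.mean(values)   # (12 + 24 + 13 + 45 + 55) / 5
print(x_bar)              # 29.8
```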
## Using a d-Bar Calculator to Calculate the Standard Error of the Estimate

For paired samples, d̄ is the mean of the paired differences, and the standard error of the estimate (S.E.) measures how precisely that mean difference is estimated; a d-bar calculator computes both, and it is a handy tool for any statistician. The control-chart material below uses the same building blocks: subgroup averages and their spread.

X-bar control charts plot the averages of subgroups of experimental data to determine whether a process is under control. A process is judged stable when the plotted points satisfy the standard stability rules; one common out-of-control signal, for example, is two out of three consecutive points falling beyond two standard deviations on the same side of the centerline. The between-subgroup range is calculated from the same subgroup averages.

To establish the limits, first find the range of the n readings at each time point, then the average range (or the average standard deviation) across all time points. The resulting grand average and control limits tell you whether a given level of variability falls within your control zone.

The X-bar R chart is a convenient way to estimate the process mean and standard deviation. A sigma (S) chart requires tedious calculation, especially with large subgroups, but the standard deviation is the better measure of variation because it uses all the data rather than just the extremes. Both are examples of statistical process control: they help you understand the stability of a process and detect special-cause variation.

An X-bar chart evaluates the consistency of process averages, with each subgroup's average plotted as one point, and it can detect large shifts in the process mean. The R chart, in turn, shows the average within-subgroup range. Comparing the two tells you whether a signal reflects a shift in the center or a change in spread, so use them together.

When interpreting the charts, read the S bar (average standard deviation) first, because it sets the X-bar chart's control limits. Control limits outside the normal range suggest system instability, and an inflated R-bar widens the limits, increasing the chance of blaming subgroup variation and working on the wrong area.

Use the X-bar chart for process improvement when time-ordered data are available. The usual first test flags any point more than three standard deviations from the centerline, while the S chart shows the sample-to-sample variability; you need to read both to understand the X-bar chart.
## Standard error of the estimate with regard to a mean

The standard error of an estimate is the amount by which the estimate typically varies around the population value. Because a sample is never a perfect copy of the population, the sample mean is unlikely to equal the population mean exactly, but a larger sample size shrinks the standard error. For a mean, the standard error is the sample standard deviation divided by the square root of the sample size.

This quantity is the standard error of the mean (SEM); it is related to, but not the same as, the standard deviation. The sample mean estimates the population mean, and another sample from the same population would give a different estimate; the SEM describes how much such estimates vary, so statisticians use it to judge how well a single sample mean stands in for the population mean.

In a regression model, the analogous standard error of the estimate summarizes the typical difference between predicted and actual values, and so measures how well the fitted line describes the data.

The standard error is a basic tool of inference: it quantifies sampling error, lets you compare the precision of two similar estimates, and is the ingredient from which confidence intervals are built, which is what supports sound statistical decisions.

The distinction matters when reporting results: the standard deviation describes the data, while the standard error describes the precision of an inference. Every researcher should keep the two straight, and report the standard error when the quantity of interest is an estimated mean.
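A minimal sketch of the SEM calculation, with invented data: the sample standard deviation (ddof=1) divided by the square root of the sample size, plus the resulting 95% confidence interval for the mean:

```
import numpy as np
from scipy import stats

sample = np.array([9.8, 10.1, 10.4, 9.9, 10.2, 10.0, 10.3, 9.7])
n = sample.size
mean = sample.mean()
sem = sample.std(ddof=1) / np.sqrt(n)   # standard error of the mean

# A 95% confidence interval uses the t critical value with n - 1 df
t_crit = stats.t.ppf(0.975, df=n - 1)
print(f"mean = {mean:.3f}, SEM = {sem:.3f}")
print(f"95% CI: ({mean - t_crit * sem:.3f}, {mean + t_crit * sem:.3f})")
```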
## Distance between the means of the paired samples

For paired samples, the natural summary of the distance between the two means is d̄, the mean of the paired differences, together with the standard deviation of those differences. Working with the differences reduces the two-sample problem to a one-sample one, which simplifies the calculation and usually gives a more precise estimate of the underlying difference than comparing the group means separately.

To run a paired t-test you need the sample size plus the mean and standard deviation of the differences. A paired t-test calculator takes the two columns of data and quickly reports the t-value, the p-value, and any outliers; enter the two samples into the respective fields.

The standard error of d̄ measures the precision of the estimated mean difference: it is the standard deviation of the differences divided by the square root of the number of pairs, so a larger sample gives a tighter estimate.

If the two population means are the same, d̄ should be near zero. A d̄ that is large relative to its standard error is evidence of a real difference. This d-bar method is the standard simplification for paired-sample analyses.
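Here is a small sketch of the d-bar method with made-up before/after data: compute the differences, take d̄ and its standard error, and compare the hand-rolled t statistic against scipy's paired test:

```
import numpy as np
from scipy import stats

before = np.array([140, 152, 138, 147, 155, 149, 142, 150])
after  = np.array([135, 147, 136, 140, 149, 145, 140, 146])

d = after - before
d_bar = d.mean()                        # mean of the paired differences
se_d = d.std(ddof=1) / np.sqrt(d.size)  # standard error of d-bar
t_manual = d_bar / se_d                 # paired t statistic, df = n - 1

t_scipy, p = stats.ttest_rel(after, before)
print(f"d-bar = {d_bar:.2f}, t (manual) = {t_manual:.3f}, "
      f"t (scipy) = {t_scipy:.3f}, p = {p:.4f}")
```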
## Amazon Jobs Near Me

If you're considering a career in online retail, Amazon has many different jobs near you, ranging from customer service to technical support, and many of them can be done from home. Remote-friendly roles include customer service, data entry, and technical support, all of which require processing information quickly, such as maintaining an extensive database of customer records. Amazon also hires for specialized roles such as software engineers and programmers who build new applications and design algorithms to improve its products.

## Free training

Amazon runs free training programs and is expanding its hiring of refugees worldwide. The AWS re/Start program trains unemployed people for cloud computing careers and connects them with job interviews, and Amazon is part of the Tent Coalition for Refugees in the U.S.

The AWS Skill Builder catalog includes more than three hundred free courses, with paid subscriptions unlocking additional features and services, so you can learn cloud computing even if you never work at Amazon.

Career Choice is another option for employees looking for a change: Amazon pre-pays tuition at hundreds of education partners across the country, funds GEDs and high school completion, and pays for ESL proficiency certifications. Since launching in 2012, more than 50,000 employees have used it to train for high-demand occupations, though it will take years to judge the program's long-term results.

Meanwhile, the hiring cycle moves fast. Demand for home delivery has surged, and Amazon is hiring tens of thousands of people for part-time, full-time, and seasonal work, so there is little need to wait months to start.

## Base pay

Amazon has openings in Seattle, Sunnyvale, and many other highly ranked places to live and work, with solid pay even for entry-level roles. Start by entering your zip code, then browse openings by location, skill, and pay.

Be aware that Amazon has been criticized for aggressive, partly automated termination practices (one Bronx employee, Jose Pagan, was fired by the system despite positive feedback from managers) and, more generally, for poor management of workers.

Earlier this year Amazon raised the base pay cap for corporate employees from $160,000 to $350,000, citing a tight labor market; it has historically favored stock over cash compensation, and its minimum base pay still trails other major tech companies. Experienced technology candidates can expect offers toward the top of a range.

Overall salaries are slightly above average but vary by organizational function: engineering roles average around $116,836 a year, warehouse and customer-service roles pay considerably less, and a cashier can earn as little as $28,347 a year.
## Work-from-home options

Amazon offers hundreds of work-from-home positions that pay well and offer flexibility, a good fit if you have limited free time but still want a solid salary. To find one near you, enter your location in the job search.

The marketing department hires for roles such as social media manager, brand specialist, and marketing coordinator; these positions call for critical thinking, a self-starter attitude, and prior marketing experience, and they center on building relationships with customers. Remote workers should read Amazon's privacy policy and understand how the company handles confidential information.

If you have a technical background, Amazon's IT organization (a big part of why the company runs smoothly) hires IT technicians, network support engineers, and systems engineers, and some of these roles are remote.

Telecommuting is a growing trend, and many job boards cater to it, so you can filter for jobs that suit your schedule; writing and technical positions are common, and all you need is a computer, Internet access, and some spare time.

Employees get comprehensive benefits: health care, paid time off, parental leave, tuition reimbursement, and in some roles tips on top of hourly pay. The catch is that remote options are not available in every region, and Amazon's needs change frequently, so keep alternatives in view.

## Competitive pay

Amazon's compensation structure is not standardized; it varies by role and can be quite competitive once perks are included. You can negotiate your salary to a certain extent, so consider the relevant factors before you start the process.

Amazon is a diverse, equal-opportunity employer that does not discriminate on race, gender, or political affiliation, and it hires full-time, part-time, and night-shift workers across warehouse operations, distribution, yard work, and logistics.

Benefits are among the best in the industry, but retention is notoriously low: hundreds of thousands of employees quit each year despite the pay, and the management structure and culture differ sharply from other companies, so check the details of each position.

Warehouse pay starts at $18 per hour and comes with comprehensive health coverage, paid parental leave, paid college tuition, savings programs, and free upskilling opportunities for nearly 300,000 U.S. employees.

An internal report predicted Amazon could exhaust the available labor pool in California's Inland Empire by 2022 if turnover stays high, even though company data show almost ninety percent of new hires intend to stay at least six months. Thousands of hourly and salaried openings remain near you.
## Using a t-Test Calculator

A t-test calculator helps you determine whether a statistical difference is significant. The interface shows confidence levels for both directional (one-tailed) and non-directional (two-tailed) tests; a difference significant at, say, the 96% level is unlikely to be mere random chance. The significance level determines whether the null hypothesis is rejected, and the calculator provides a table of confidence levels for the different test types.

## Student's t-test

An online Student's t-test calculator computes the p-value (and the underlying CDF) automatically; for primary data analysis you enter the sample values of Group 1 and Group 2.

Such a calculator handles the test of significance for the difference between two means and the critical value of t, and it can generate fully worked solutions for small samples, with step-by-step explanations of the results: a real help for students struggling with statistics homework.

Paired designs are supported too: a paired t-test compares the means before and after an intervention, and you can judge whether the change in mean is due to the intervention using either the t-score and p-value or the critical-regions approach.

In research, Student's t-test typically compares two groups of data. What matters is whether the groups are independent or paired and how much they overlap: strongly overlapping groups yield a lower t-value, and small samples yield weaker evidence.

To use the calculator you supply the sample sizes and desired significance level; the degrees of freedom follow from the sample sizes according to the type of t-test, and the p-value comes back in seconds.

Student's t-test is an essential tool: it tests the significance of a mean difference between two populations when the variances are unknown, and a calculator will also help you work out the required sample size when the population sizes are unknown.
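For readers who prefer code to a web calculator, a Student's t-test on two invented groups is a single call in scipy (equal_var=True selects the classic equal-variance form):

```
import numpy as np
from scipy import stats

group1 = np.array([24.1, 25.3, 23.8, 26.0, 24.7, 25.1])
group2 = np.array([22.9, 23.5, 24.0, 22.4, 23.1, 23.8])

t, p = stats.ttest_ind(group1, group2, equal_var=True)  # Student's t-test
print(f"t = {t:.3f}, p = {p:.4f}")  # two-tailed p-value
```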
## Welch's t-test

Welch's t-test is the variant to reach for when the two groups may have unequal variances. A Welch's t-test calculator provides a step-by-step solution: choose the type of hypothesis test and the significance level, enter the data sets, and read off the result with a visual summary. Independent observations are the usual rule of thumb; subjects in sample one should not appear in sample two.

Unlike the classic independent-samples t-test, Welch's test does not require equal sample sizes or equal variances. When the variances in the two groups happen to be similar, the two tests give similar results; a Welch calculator helps you judge how different the group variances really are.

Many calculators can run both Student's and Welch's tests and recommend one based on an F test for equality of variances. They also report an effect size, the mean difference divided by the standard deviation, which helps you decide whether a statistically significant difference is practically meaningful.

As tests get more elaborate, statisticians lean on the tooling: a t-statistic calculator returns the p-value, the degrees of freedom (including the Welch-adjusted df for a two-tailed test), and a statistical summary with mean, standard deviation, variance, minimum, maximum, median, mode, and a confidence interval.

The decision rule is the usual one: if the t-statistic is smaller in magnitude than the critical value (equivalently, if the p-value exceeds the 0.05 threshold), you fail to reject the null hypothesis; otherwise the difference is declared significant.

Online calculators run the analysis in seconds for two-group comparisons of any size and usually include directions and guidance on which type of t-test to use. Just remember that a two-sample calculator is not the same thing as a one-sample one.
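The piece that distinguishes Welch's test is its estimated degrees of freedom, the Welch-Satterthwaite formula. A minimal sketch with invented samples of unequal size and spread:

```
import numpy as np
from scipy import stats

a = np.array([12.1, 14.3, 13.8, 15.0, 12.7, 14.1, 13.2])
b = np.array([10.2, 16.5, 9.8, 17.0, 11.1])

va, vb = a.var(ddof=1) / a.size, b.var(ddof=1) / b.size

# Welch-Satterthwaite degrees of freedom
df = (va + vb) ** 2 / (va ** 2 / (a.size - 1) + vb ** 2 / (b.size - 1))

t, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
print(f"Welch df = {df:.2f}, t = {t:.3f}, p = {p:.4f}")
```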
## One-sample t-test

A one-sample t-test compares one sample's mean to a fixed standard value. (The paired t-test, by contrast, compares the means of the same group at two different times, for example before and after an experimental intervention.)

A one-sample t-test calculator accepts either summary statistics or raw data. Raw values can be typed in separated by commas, spaces, or new lines, or pasted directly from Excel, Google Sheets, or any tool that can export data; include a header with the sample data, and empty cells are ignored.

Interpreting the result requires the degrees of freedom, which for a one-sample test equal the sample size minus one. The t-distribution has heavier tails than the normal distribution at small degrees of freedom and approaches the normal as they grow, so a small sample needs a noticeably larger t-value to reach significance.

Small samples also make normality hard to check and are more sensitive to outliers, which forces extra assumptions. If there is good prior reason to trust normality (say, a company knows the protein content of its bars is normally distributed), the t-test is still reasonable, but a smaller sample yields a weaker result. Calculators typically flag outliers in a warning field; for homework you may ignore the warnings.

The t-value also depends directly on the number of observations, since the sample size drives the variance estimate. The left-hand column of a full t-table lists the degrees of freedom, one less than the number of observations for a one-sample test; look up your row there to find the critical value.
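A one-sample test against a reference value is equally short in code; here, invented protein measurements are tested against a hypothetical 20 g label claim:

```
import numpy as np
from scipy import stats

bars = np.array([19.6, 20.1, 19.8, 20.4, 19.5, 19.9, 20.0, 19.7])
t, p = stats.ttest_1samp(bars, popmean=20.0)  # H0: true mean is 20 g
print(f"t = {t:.3f}, p = {p:.4f}, df = {bars.size - 1}")
```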
## How to Use a Two Sample T-Test Calculator

In a two-sample t-test, the t-score is built from the difference between the two group means. Computing it by hand takes two ingredients: the degrees of freedom, which are the sample sizes n1 and n2 minus 2, and the pooled standard deviation, which combines the two sample standard deviations s1 and s2 weighted by their sample sizes (the exact formula appears in the pooled variance section below).

The GraphPad Prism two-sample t-test calculator is designed to make statistical analysis easy: intuitive controls for entering data, selecting appropriate analyses, and creating graphs. It covers standard, specialized, and exploratory tests, and every analysis ships with a checklist explaining the test's assumptions so you can confirm the results are accurate. Prism handles independent and paired samples, configurable thresholds, and different statistical tests, and you can use it without much statistics or coding background; enter your data and it calculates the t-test for you.

## Student's t-test calculator

A Student's t-test calculator runs the comparison and reports the p-value that tells you whether two groups differ significantly; online versions handle both paired and unpaired designs.

These calculators cover one-sample, two-sample, paired, and multiple-sample t-tests, and they also look up p-values and critical values, making them a solid free alternative to Microsoft Excel or Google Sheets.

Enter the sample sizes, standard deviations, and means, and the calculator returns the t-score and degrees of freedom in a single click, along with the p-value and an interpretation. The null hypothesis is that the population means of the two groups are equal, and the conclusion follows from comparing the p-value with your significance level.

The calculators include step-by-step explanations of the t-test and how to use it properly, so students learn the method rather than just the answer. Again, a two-sample calculator is not interchangeable with a one-sample one.
## Multiple comparison methods

A two-sample t-test compares the means of two samples that are usually assumed independent, hence its other name, the unpaired-samples t-test. The populations need not be related; the null hypothesis is simply that their means are equal.

A high p-value means the data are consistent with equal means, possibly because the sample was too small to detect a difference. A sample-size calculator tells you how many observations are needed to detect a difference of a given size, so use one before collecting data, and follow its directions, since the answer depends on the number of samples.

When several closely related groups are compared pairwise, use a multiple-comparison procedure rather than a pile of raw t-tests, because repeated testing inflates the false-positive rate; statistics packages for such designs will produce boxplots annotated with adjusted p-values.

With a t-test calculator you can quickly see which mean differences are significant; just remember to supply the degrees of freedom, and on a TI-83 or TI-84, to separate the arguments with commas.

## Null hypothesis

To perform a t-test you need each sample's size and mean, from which the standard deviations are computed. The t-score indicates whether the difference between the samples is statistically significant, and the p-value expresses that evidence as a probability.

The t-statistic is a composite of the basic descriptive statistics: it compares the observed mean difference with its standard error, and Student's t-distribution converts it into a p-value, the probability of seeing data at least this extreme if the null hypothesis were true. The null is retained when the t-statistic stays below the critical value.

A calculator handles the arithmetic in many situations, for example testing one population against another with a sample size of six and an observed difference of 10; the resulting p-value decides whether the two groups differ.

When the two populations can be assumed to share a variance, the pooled two-sample t-test has more statistical power to detect differences. Either way you must choose a significance level for the comparison, and a lower (stricter) level demands stronger evidence.

## Outliers

In a two-sample t-test, outliers are usually defined as points lying more than 1.5 times the interquartile range below the lower quartile or above the upper quartile; these are the major outliers. An outlier may be an error or the true value of a particular measurement.

An outlier can belong to the population's natural variation or signal a special cause. Keep genuine values in the dataset, and remove only those traced to errors or special causes. Be wary of calculators that silently assume outlier-free data.

An outlier plot, similar to an individuals plot, helps you visualize the extremes; Minitab, for example, marks them as red squares, which makes it easier to identify causes, correct measurement or data-entry errors, and remove values produced by unusual events.

Outliers matter in the most common analysis of all: whether the samples are large or small, a single extreme point can dominate the t-statistic, and a calculator can report how many outliers your data contain.
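The 1.5 × IQR rule described above is easy to apply directly; a sketch with one planted outlier in invented data:

```
import numpy as np

data = np.array([10, 12, 12, 13, 12, 11, 14, 13, 15, 42])
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1

# Points beyond the fences are flagged as outliers
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = data[(data < lower) | (data > upper)]
print(f"fences = ({lower:.1f}, {upper:.1f}), outliers = {outliers}")  # flags 42
```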
## Using a t-Test Calculator With Mean and Standard Deviation

If you only have summary statistics (sample sizes, means, and standard deviations), you can still run a one-sample, two-sample, or paired t-test; the calculator finds the p-value and critical value of the t-test directly from the summaries.

## t-test calculator

A t-test calculator with mean and standard deviation determines the significance of the difference between two groups' means: enter each group's sample size, mean, and standard deviation, and it computes the t-score and degrees of freedom, then reports a p-value with an interpretation. The null hypothesis states that the population means are the same.

If you have raw values instead, they can be copied from a text document or spreadsheet; the output includes a graphic representation of the results. This is useful for studies comparing two independent groups with different means.

For example, if a new sample has a mean of 10.4 hours and a standard deviation of 0.2 hours, entering those summaries alongside the comparison group's gives the t-score and p-value in one step.

A significant difference in means indicates that the gap is not plausibly due to chance or sampling error but reflects a real characteristic of the population, which is why the t-test is used to analyze population characteristics.

The calculator also reports the cumulative probability associated with the sample mean and t-score. Helpful resources include sample problems and Frequently Asked Questions; Stat Trek, for instance, offers a tutorial on the Student's t-test and t-distribution alongside its calculator.

If the sample sizes are large enough, the same machinery compares samples with unequal standard deviations using the Satterthwaite (Welch) correction, and it can check whether results are reproducible across the population.

Summary-statistics calculators also handle paired samples and tests of two population proportions; the sample sizes need not be equal, though they should be reasonably close.
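scipy can run the test straight from summary statistics, which is exactly what these calculators do; a sketch using the hypothetical 10.4 ± 0.2 hours example against an invented comparison group (all values illustrative only):

```
from scipy import stats

# (mean, sd, n) for each group -- the values here are illustrative only
t, p = stats.ttest_ind_from_stats(mean1=10.4, std1=0.2, nobs1=15,
                                  mean2=10.1, std2=0.3, nobs2=15,
                                  equal_var=False)  # Welch/Satterthwaite
print(f"t = {t:.3f}, p = {p:.4f}")
```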
## Degrees of freedom

A t-test calculator with mean and standard deviation can compare two groups with the same or different sample sizes: enter the summaries and the calculator computes the t-score and degrees of freedom and shows an interpretation. If the sample sizes are small, the observed difference must be large to be significant.

The degrees of freedom are the number of independent pieces of information in the sample, determined by the sample size: n minus 1 for a one-sample (Student's) test, and n1 + n2 minus 2 for a pooled two-sample test.

A calculator spares you statistical software and printed tables: it converts the t-score and degrees of freedom directly into a p-value.

Entering the three fundamental values, mean, standard deviation, and sample size, for each of two samples tells you whether the sets plausibly come from the same population; samples from different classes, for instance, would not be expected to match an experimental treatment group.

## One-tail test

A one-tailed test asks whether a population mean is specifically larger (or specifically smaller) than a reference value, with significance declared when the p-value falls below 0.05. Calculators report the mean, standard deviation, effect size, and the number of outliers in the data, and warn when a test statistic is implausibly large or small.

Unlike most two-group hypothesis tests, this one-sample form compares the sample mean to a predetermined value. Because the entire rejection region sits in one tail, the one-tailed p-value is half the two-tailed p-value for the same data, making the test more powerful in the specified direction, at the price of being blind to effects in the other direction.

The sample size, mean, and standard deviation are all the inputs needed: the calculator returns p-values and critical values for one-sample, two-sample, or paired designs, rounds large-sample results sensibly, displays significant figures for small ones, and can work from summary data when you need a quick answer at, say, the 1% significance level.

The decision rule: for an upper-tailed test, the result is significant when the test statistic exceeds the upper critical value; for a lower-tailed test, when it falls below the lower critical value; otherwise it is non-significant.

## Pooled variance estimator

A two-sample t-test calculator includes a pooled variance estimator for computing a common variance and standard deviation from the two samples, under the assumption that the population variances are equal. If the variances are unequal, use the unpooled (Welch) estimator instead; this matters whenever the two groups have different spreads around different means.

The pooled variance weights each sample variance by its degrees of freedom: s_p^2 = ((n1 - 1)s1^2 + (n2 - 1)s2^2) / (n1 + n2 - 2), and the pooled standard deviation is its square root.

Confidence intervals for the difference in means then come from the t-distribution with n1 + n2 - 2 degrees of freedom, built from the sample means, the postulated population difference, and the pooled standard deviation; with unequal standard deviations, the Satterthwaite degrees of freedom apply instead.

As always, a large p-value may simply mean the sample was too small to detect the difference; run a sample-size calculation to determine the number of observations required before you test, a very helpful step when comparing two groups.
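A short sketch of the pooled estimator and the effect size built from it (Cohen's d), with invented groups:

```
import numpy as np

g1 = np.array([5.2, 5.8, 6.1, 5.5, 5.9, 6.0])
g2 = np.array([4.8, 5.1, 5.3, 4.9, 5.2, 5.0, 5.4])

n1, n2 = g1.size, g2.size
s1_sq, s2_sq = g1.var(ddof=1), g2.var(ddof=1)

# Pooled variance: weight each sample variance by its degrees of freedom
sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
sp = np.sqrt(sp_sq)

cohens_d = (g1.mean() - g2.mean()) / sp  # effect size in pooled-SD units
print(f"pooled SD = {sp:.3f}, Cohen's d = {cohens_d:.2f}")
```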
## How Many Kilograms in 500 lbs?

Whether you need the exact value for a particular number of pounds or the general pounds-to-kilograms conversion, a simple converter gives the answer: it turns any number you enter into the corresponding kilograms.

## Calculator

Using a calculator to convert pounds to kilograms saves time. The key fact is the definition: the pound is a unit of weight equal to exactly 0.45359237 kilograms, so 500 lb comes to about 226.8 kg.

Online converters are easy to find and accurate. One wrinkle worth knowing: the informal *metric pound* has a modern value of 500 grams (0.5 kg), slightly more than the avoirdupois pound it replaced in everyday speech.

The pound has been used to measure weight in the United Kingdom since 1878 (the imperial pound) and is the everyday base unit of weight in the United States. It is not part of the SI International System of Measurements, but it remains one of the simplest units to use; the ounce is the other familiar customary unit of mass, alongside the kilogram on the metric side.

Some converters also translate prices quoted per pound into the equivalent price per kilogram (or per unit of another currency), which is handy when shopping across unit systems.
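The conversion itself is a single multiplication by the exact definition; a minimal sketch:

```
LB_TO_KG = 0.45359237  # exact, by definition of the international pound

def lbs_to_kg(pounds):
    """Convert avoirdupois pounds to kilograms."""
    return pounds * LB_TO_KG

print(f"500 lb = {lbs_to_kg(500):.1f} kg")  # 500 lb = 226.8 kg
```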
## Meaning of lb

Several definitions of the pound have been used throughout history. Some countries retained the pound as an informal term after adopting the metric system, while others abandoned it; in the United States the pound is the standard unit for weighing people, animals, and goods.

The pound is the Anglo-American counterpart of the kilogram, defined as a unit of mass in the United States customary system. The modern standard is the international avoirdupois pound, not to be confused with the troy pound or the apothecaries' pound, which use different subdivisions and symbols.

The pound has a long history. The abbreviation lb comes from the Roman libra, a weight unit originally divided into 12 ounces; at roughly 328.9 grams it was the principal unit of mass the Romans used, and it is the ancestor of the modern pound. The pound also names a unit of force (pound-force), so in engineering contexts the mass unit is written lbm to avoid confusion, while lb-t denotes the troy pound.

## Kilograms as a unit of mass

Using kilograms as the unit of mass is the standard for scientific work and commerce, and the kilogram is the everyday mass unit for most people around the world.

The international prototype of the kilogram, a platinum-iridium cylinder, is kept at the International Bureau of Weights and Measures near Paris; it served as the reference for all mass units in the International System of Units until 2019, when the kilogram was redefined via the Planck constant.

The kilogram was originally defined, in 1795, by reference to water: one kilogram is the mass of one thousand cubic centimetres (one litre) of water. It is one of the seven base units of the International System of Units and the only SI base unit whose name contains an SI prefix, and it anchors traditional weight units worldwide: the pound, for example, is defined as exactly 0.45359237 kg, so 500 lb is about 226.8 kg.

In the US customary and British imperial systems the pound is the base unit of mass, widely used in consumer and commercial settings, while the kilogram fills that role in the metric system. The gram, 1/1000 of a kilogram, is used when expressing a mass in kilograms becomes unwieldy.
## Converting from lb to kg

Whatever country you are from, converting pounds to kilograms matters for many reasons, and if you do not know the relationship between the units, a conversion calculator settles it quickly.

The term "pound" dates to Roman times, derived from libra, Latin for scales, which was originally divided into 12 ounces. The pound remains a standard unit in the United Kingdom and the United States as part of the imperial measurement system, while the kilogram is the metric system's base unit of mass, equal to the mass of 1,000 cubic centimetres of water, and a major unit in scientific applications. People raised on one unit often find it hard to visualize the other, which is exactly what the conversion fixes.

Historical pounds varied considerably: the Attic mina was roughly 0.432 to 0.437 kg, the Roman pound about 0.4895 kg, and today's informal metric pound is 500 grams, which is still not the mass of one litre of water (that is the kilogram). To convert, multiply the number of pounds by 0.45359237 to get kilograms; conversion tables cover the common values.

## Using a Stone to Weigh 75 Kg

Expressing 75 kg in stone is an easy task once you know the conversion, and the stone still sees real use, from body weight to sport; its uses and the conversion are covered below.

## Kilogram to stone conversion

Until about the mid-nineteenth century the stone was a working commercial unit across Europe, with local values ranging from roughly 5 to 40 pounds depending on city and commodity. As the metric system spread, most countries dropped it, though it survived in Britain and Ireland; the Weights and Measures Act 1985 ended its use for trade in the UK, but it persists informally and in a handful of official contexts.

The stone remains the customary unit for telling body weight in the British Isles even though the kilogram long ago displaced it elsewhere, a curious survival for a unit eclipsed by the metric system nearly everywhere else.
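In code, the 75 kg conversion is a single division by the 14-pound stone, optionally split into whole stones and remaining pounds, the way body weight is usually quoted:

```
KG_PER_STONE = 6.35029318  # 14 lb exactly

def kg_to_stone(kg):
    """Return (whole stones, remaining pounds) for a mass in kg."""
    stones = kg / KG_PER_STONE
    whole = int(stones)
    pounds = (stones - whole) * 14
    return whole, pounds

st, lb = kg_to_stone(75)
print(f"75 kg = {75 / KG_PER_STONE:.2f} st = {st} st {lb:.1f} lb")  # ~11 st 11.3 lb
```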
## Using a Stone to Weigh 75 Kg

Expressing 75 kg in stones is an easy task, and the stone still turns up in trade and in sports. Here is a look at its history, its uses, and the conversion itself.

## Kilogram to stone conversion

Until about the mid nineteenth century, the stone was used in many industries to measure the weight of goods, with local values ranging from roughly 5 pounds to 40 pounds. Most of continental Europe dropped it when the metric system was adopted, but it survived in a few pockets of the continent and, above all, in the British Isles, where it had been an accepted and well-used unit for hundreds of years, in Ireland as well as in Britain. In the United Kingdom, the Weights and Measures Act of 1985 removed the stone from the units permitted for trade, yet it continues to be used informally and by a handful of official institutions.

The metric system overshadowed the stone almost everywhere by the late nineteenth century, but in the UK and Ireland it remains a useful and practical unit of everyday measurement.

## Kilogram to stone usage in sports

The stone is a very old unit of weight measurement and is still used today in Britain and Ireland to express human body weight, for instance for jockeys in horse racing and for players in sports such as rugby. Before standardization, the value of a stone varied greatly by country, city, and commodity; in medieval Germany, for example, wool bales were weighed against the local stone. Some countries, such as the Netherlands, later redefined their stone to a round metric value when the kilogram arrived.

The kilogram, first adopted in France in 1795, is the SI base unit of mass, used worldwide in science, engineering, industry, government, and the military; the prefix "kilo-" denotes a factor of 10³. Originally defined as the mass of one litre of water, the kilogram was redefined in 2019 in terms of the Planck constant, which leaves its size unchanged at approximately the mass of one thousand cubic centimetres of water.

The stone itself is now defined as exactly 14 pounds, or 6.35029318 kilograms, and is often abbreviated to st.
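Using that exact definition, a small Python sketch (names are illustrative) for expressing a body weight such as 75 kg in stones and pounds:

```python
KG_PER_STONE = 6.35029318  # exact: 14 lb of 0.45359237 kg each
LB_PER_STONE = 14

def kg_to_stones_and_pounds(kg: float) -> tuple[int, float]:
    """Return a mass as (whole stones, remaining pounds)."""
    stones = int(kg // KG_PER_STONE)
    remainder_kg = kg - stones * KG_PER_STONE
    pounds = remainder_kg / KG_PER_STONE * LB_PER_STONE
    return stones, pounds

st, lb = kg_to_stones_and_pounds(75.0)
print(f"75 kg is {st} st {lb:.1f} lb")  # 75 kg is 11 st 11.3 lb
```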
## Kilogram to stone uses in the trade industry

Historically, stones were used to measure weight across Europe before the metric system was implemented, and the unit was used in parts of Asia as well. Because there were no official standards, the value of a stone varied wildly with location and commodity, commonly somewhere between about 5 and 40 pounds; the Dutch pond, by contrast, was eventually fixed at 500 grams. The stone was a widely accepted trade unit, used for example to weigh wool bales in the medieval era, until the mid-19th century, when its main competitor, the kilogram, spread with the metric system. The simplest way to weigh against a stone was a balance scale, and today stone values are simply read off a scale calibrated in pounds or kilograms. In most of Europe the stone has long since been eclipsed by the kilo, though it survives in the United Kingdom and Ireland.

## How to Convert 4 Lbs to Kg

Often, people want to know how to convert 4 pounds to kilograms so that they can work in the metric system, whose base unit of mass is the kilogram. (Kilograms measure mass, not volume; a litre is a different kind of unit, although one litre of water happens to have a mass of about one kilogram.)

Using a 4 lbs to kg calculator is a good way to convert pounds into kilograms without doing the arithmetic yourself. Many converters will also report the result in grams for finer work.

The best thing about a calculator is the speed: you don't have to work it out on paper, which is especially handy if you're on the go. Of course, you can also do the calculation manually, since 4 lb × 0.45359237 kg/lb ≈ 1.814 kg, but seeing the answer instantly is convenient.
A quick conversion also gives you a better sense of what you're buying, which can lead to more effective shopping; a 4 lbs to kg calculator is a surprisingly effective tool for moving between the two systems.

## Unit of mass

The conversion between 4 lbs and kilograms has several practical applications. The word "pound" carries two meanings: in physics and engineering the pound-force is a unit of force, while in lay and commercial use the pound (strictly, the pound-mass) is a unit of mass.

The term "pound" derives from the ancient Roman weight unit known as the libra, which weighed about 328.9 grams and was divided into 12 unciae.

The pound, abbreviated "lb", is an imperial and US customary unit of mass used in many fields, and it is a legal unit of weight in the United States. Usage differs between countries: in the UK the pound also appears informally as a unit of force, while in the US it is primarily the customary unit of mass.

The kilogram is the base unit of mass in the metric system and in the International System of Units. For many years it was defined by the International Prototype of the Kilogram, a platinum-iridium cylinder stored at Sèvres, France. One kilogram is about 2.2 times the mass of a pound, it is the primary mass standard, and all other metric mass units are derived from it. Because the metric system is the system of science, the conversion from 4 lbs to kg matters for several practical reasons.

## Converting from English or US weight units to metric units

Whether you are a business owner or an industrial worker, you may need to convert weights between English or US units and metric units. Knowing both systems of units lets you take accurate measurements and communicate with people around the world.

The metric system of measurement, used in most countries, is a decimal system: units scale by powers of ten, represented with numbers and standard abbreviations. Its base unit of length is the metre; other common units include the gram, the kilogram, and the millilitre, used to measure mass and capacity. Decimal measurement was first proposed in Europe in the 17th century, and the metric system itself, established in France in the 1790s, has since spread throughout the world.

The English system, also known as the US customary system, is used in the United States, which has not officially adopted the metric system even though it is a signatory to the Metre Convention of 1875.

Many people assume the metric system is difficult to handle because everything is a power of 10. In fact that is exactly what makes it convenient: conversions never require awkward factors or long strings of decimals.
In addition, writing measurements in metric units such as centimetres makes them easy to understand and manipulate.

When converting from English or US weight units to metric units, you need the proper unit conversion ratio, the factor relating the two unit names (for example, 0.45359237 kg per lb). Multiplying by such a ratio is the standardized way to convert between systems.

## Kilograms are the metric system's base unit for mass

Often called the "Big K", the international prototype of the kilogram was a platinum-iridium alloy cylinder about 39 millimetres wide by 39 millimetres high, used to define the unit and to calibrate scales. Manufactured in the late 19th century, it was kept at the International Bureau of Weights and Measures (BIPM) in Sèvres, France, from 1889.

In the late 1700s, French scientists set out to establish a rational system of measurement. The metre was defined as one ten-millionth of the distance from the North Pole to the equator along a meridian, and the metric system was adopted in France, faring considerably better than the decimal hour, which failed to catch on.

The metric system is now used in nearly all countries. It is built on prefixes applied to the base units for length, volume, and mass, each prefix scaling the unit by a power of ten; for example, a quantity of 1/1000th of a gram is written by attaching the prefix "milli-" to the unit symbol "g", giving "mg".

At the 24th CGPM conference in 2011, it was resolved to redefine the kilogram in terms of the Planck constant; the redefinition took effect in 2019. Within the SI, force is measured by the newton, a unit derived from the kilogram, the metre, and the second.

## Historical mass-pounds

Historically, the pound has been used for weight measurement in many countries. Although the kilogram is now the most common unit of mass measurement, the pound is still used in English-speaking countries, and converting correctly between them matters for engineering and trade.

The pound has had a variety of definitions around the world; some are still in use while others have been abandoned. In the United States the pound is a customary unit, and in the United Kingdom and allied countries it is an imperial unit. Some countries redefined it in metric terms: in 19th-century Denmark, for example, the pund was fixed at exactly 500 grams.

The modern metric system has replaced most historic units of measurement. The kilogram, the SI's basic unit of mass, equals about 2.2 pounds, more than twice the pound.

The modern pound descends from the ancient Roman libra, a unit of mass divided into 12 unciae and weighing about 328.9 grams, roughly a third of a kilogram. Among the many historical mass-pounds are the Tower pound, the Merchant pound, the Troy pound, and the London pound.
The English pound is cognate to the Dutch pond, the German Pfund, and the Swedish pund. The apothecaries' pound had a distinct symbol of its own.

The pound is also called the pound-mass (and, for the force unit, the pound-force) in modern terminology, which distinguishes it from other measures such as the kilogram, the ounce, and the stone.

## How to Convert 12 Kg to Lbs

Whether you're working out or just trying to figure out the conversion between 12 kg and lbs, there are many resources to help; we'll go over a few options in this article.

## 240 kg to lbs equals 529

Converting 240 kilograms to pounds uses the conversion factor 2.20462262184878, the number of pounds in one kilogram. The international avoirdupois pound is a legal unit of mass defined as exactly 0.45359237 kilograms, divided into 16 ounces, and used in the imperial and US customary systems; its symbol is lb (or lbm for pound-mass), and it is used primarily in the United States, with related historical pounds in Britain, Australia, and Germany. Precious metals such as gold and silver are traditionally weighed with separate troy units. The kilogram, also called the kilo, is the basic unit of mass in the metric system and corresponds to the mass of about 1,000 cubic centimetres of water; its international prototype was made of platinum-iridium.

Note that the pound is not an SI unit; it is defined in terms of the kilogram. There are two equivalent ways to carry out the calculation: multiply the mass in kilograms by the conversion factor, or divide it by 0.45359237. Either way, 240 kg comes out to about 529 lb.
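Both routes, as a minimal Python sketch using the constants given above:

```python
KG_TO_LB = 2.20462262184878   # pounds per kilogram
LB_TO_KG = 0.45359237         # kilograms per pound (exact)

kg = 240
print(kg * KG_TO_LB)  # ≈ 529.109
print(kg / LB_TO_KG)  # effectively the same value: the two factors are reciprocals
```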
## 233 kg to lbs equals 513 lbs

A kg to lbs calculator is helpful for converting one unit of weight to another: type in the kilogram value and the calculator does the rest, which is useful when you want to know what 233 kilograms means in pounds (about 513.7 lb).

The weight of a kilogram is equivalent to the weight of a 10 cm cube of water, and the metric system is based on the kilogram as its unit of mass. The pound, also known as the international avoirdupois pound, is used primarily in the United States, where it is the customary and legal unit of weight; it equals a bit under half a kilogram (0.45359237 kg exactly) and is also used in the United Kingdom.

There are many other units of weight in use throughout the world, but the pound is one of the most common. As with any online tool, converter websites typically disclaim responsibility for errors and are not intended for risk-sensitive uses, so double-check results that matter.

## 235 kg to lbs equals 518 lbs

235 kg is a mass in the metric system, in which the kilogram, equivalent to a 10 cm cube of water, is the standard unit of weight in many parts of the world; the pound is the official unit of weight in the United States. Multiplying by 2.20462262184878 gives about 518 lb.

A 235 kg to lbs calculator is the quick and easy way to do this: enter your number and it returns the pound value, with all the numbers in one place and no need to search for them. A simple web search returns many such calculators, some more useful than others; the better ones also list the most frequent conversions between the two units and show the pound value alongside the kilogram input.

The same method covers nearby values: to get the pound value for 230 kg, multiply by 2.20462262184878 (roughly 2.2) to obtain about 507 lb, rounding the result as appropriate for what you're measuring.
## 244 kg to lbs equals 538

Using the kg-to-lb conversion factor, it is easy to calculate the number of pounds in 244 kilograms: about 538 lb. The international avoirdupois pound is the legal pound, defined as 0.45359237 kilograms and divided into 16 avoirdupois ounces, and it is the pound of the imperial and US customary systems; converted values are usually rounded. It is used to measure the weight of a large range of things, primarily in the United States though also in a few other countries, and official definitions are maintained in terms of the kilogram. It is usually written as lbs. As always, converter sites are informational only, not intended for risky uses, and their authors are not responsible for errors.

## How to Convert 53 Kg to Lbs

Whether you are converting kilograms to pounds or pounds to kilograms, the method is the same: multiply by the appropriate conversion factor, 2.20462262184878 lb per kg in one direction or 0.45359237 kg per lb in the other.

Converting 240 kg to lbs, for example, is an uncomplicated task. There are two basic ways to go about it: multiply the kilogram value by the conversion factor, or divide it by 0.45359237, the definition of the pound. Both methods yield the same result.

There are several units of measurement in and around the metric system. The pound is the most common non-metric unit of weight, and a number of legacy pound definitions survive outside official use, with varying and sometimes rounded values. The pound is not the same as the kilogram: the kilogram is the SI unit of mass, while the international avoirdupois pound is defined by the International Bureau of Weights and Measures (BIPM) as a mass of exactly 0.45359237 kilograms, divided into 16 ounces.

## 224 kg to lbs equals 493

224 kg to pounds is easy once you know how to convert the units, and it's easier to visualize your weight in the unit you grew up with. A 224 kg to lbs calculator makes it effortless: enter the number of kilograms and it returns the equivalent in pounds (about 493.8 lb), or use a conversion chart with all the numbers in one place. Most converters also accept other units of mass, such as ounces or grams, and let you enter unit abbreviations instead of full names; many offer a search form for conversions you need frequently. The conversion factor follows from the pound's definition against the kilogram, whose international prototype was platinum-iridium.
That conversion factor is 2.20462262184878 pounds per kilogram.

## 226 kg to lbs equals 498

If you're not familiar with the metric system, working out how many pounds 226 kg equals can seem difficult, but conversion calculators make it easy. Type in the weight, choose kilograms or pounds as the input unit, and hit the "convert" button to see the result: 226 kg is about 498 lb. A conversion chart found on the Internet works just as well, with all the numbers in one place, and a search form helps with frequent conversions. With a little experience you can also enter abbreviations for the pound instead of the standard metric unit names.

## 228 kg to lbs equals 502

Whenever you want to convert a kilogram value to pounds, a table will tell you the factor to multiply by: 228 kg × 2.20462262184878 ≈ 502.7 lb. Such tables also show the abbreviations for the pound unit, so you can enter those instead of the standard metric units. The usual caveats apply: these sites are informational only, not a substitute for careful calculation, and not suitable for uses involving risk.

A kilogram is the standard unit of mass in most countries and was long defined as the mass of the international prototype of the kilogram, a platinum-iridium cylinder. The international avoirdupois pound is defined as 0.45359237 kilograms and is the standard pound in the United States and other nations that use it alongside the SI (International System of Units). A typical converter will also show the ounces in a kilogram and the ounces in a pound.

## 234 kg to lbs equals 515

Using a 234 kg to lbs calculator is a great way to get the pound value, about 515.9 lb, without weighing anything yourself. That's handy when you're packing and need an estimate of how much you've got, or when comparing the weights of different items to see which is the better buy. Enter the number of kilograms, click "calculate", and the results are listed in pounds, ounces, and milligrams; you can also copy and paste the values into Excel or another program.
The converter is a cinch to use and is especially useful if you're travelling to a country where the other system is standard. You can also consult charts showing the lb, ounce, and milligram values for a wide variety of kilogram figures.

The metric system is easier for those who already know it; in the US, the pound is the everyday choice. The lb is the standard unit of weight in the United States: a pound is a little under half a kilogram, and a kilogram in turn is about the mass of a ten-centimetre cube of water.

## 236 kg to lbs equals 520

Whether you are trying to work out how much weight a two-wheeler can carry or simply how many kilograms are in a pound, knowing the conversion formula gives an accurate answer; a 236 kg to lbs calculator will report about 520 lb directly. If you are not very familiar with the metric system, you may prefer to think in the United States' standard unit of weight.

The pound is the basic unit of weight in the US customary and imperial systems, used in the US and the British Commonwealth, and is defined as 0.45359237 kilograms; this is the international avoirdupois pound, divided into 16 ounces, and it is a legal unit of measurement. The metric system uses the kilogram as its base unit of mass, equal to the mass of about 1,000 cubic centimetres of water; reported conversions are often rounded to make them easier to read.

Once you know the formula, you can quickly and easily convert pounds to kilograms and back, which makes it much easier to visualize a weight in either form.

## 83 Kilograms to Pounds Calculator

An 83 kilograms to pounds calculator is a useful tool for expressing your weight in pounds, with helpful information and tips alongside. The layout is simple: enter the number of kilograms and press the "Calculate" button; a handy chart also lets you convert kilograms to lbs at a glance.

Whether you're trying to calculate how many ounces are in a pound or simply want to see how many pounds a kilogram actually weighs, this simple tool can help. The quickest mental shortcut for 83 kilos is the short formula, multiplying by 2.2, which gives roughly 183 lb; the full factor 2.20462 gives about 183.0 lb as well. The calculator uses plain pound abbreviations and jargon-free terminology for quick and easy conversions, and will even indicate which unit of mass is the most logical for a given 83-kilo equation.

The 83 kilo to lbs calculator is a great place to start when it comes to converting metric weights to the imperial system.
In the United States, the pound is the official unit of weight, equal to 16 avoirdupois ounces. (Precious metals such as gold and silver are traditionally weighed with the separate troy units.)

The 83 kilo weight to lb calculator can also convert a number of other metric weights, such as ounces and grams. It's simple to use and doesn't require fancy software: input a few numbers, click "calculate", and the results appear in one place, a convenient and quick way of figuring out what's in a pound. The site makes no grand claims beyond offering a simple, fast way to convert a plethora of metric weights into their US equivalents.

## Kilograms to pounds conversion

A kilograms to pounds conversion is a quick and easy way to move between these two units of measurement. Both measure weight and mass and are used in several fields; in the United States pounds are more common, while in many parts of the world kilograms are the norm.

The kilogram is a base unit of the SI (International System of Units); the pound belongs to the US customary and imperial systems. Both are used across industry and science to measure the weight of bodies, packaged goods, and food products, and they appear in government and military contexts as well.

One kilogram is equal to 2.20462 lbs. The pound is used to measure body weight and, even more often, to label food products and packaged goods; in the UK, body weight is traditionally given in stones and pounds rather than kilograms. The pound is divided into 16 ounces.

The pound is most commonly used in the United States and the United Kingdom but appears in many other countries too. The kilogram, long defined by the mass of the International Prototype of the Kilogram, is the base SI unit of mass.

A kilograms to pounds calculator makes the conversion trivial: enter your numbers into the box, click the "convert" button, and the answer appears in the pounds field. If you are unsure what to multiply by, consult a conversion table, which lists values for the two units (for example, body weights from 130 lb to 220 lb) and can be customized to your needs. Online converters will also handle grams and ounces and typically display the calculation formulas used, to make your own calculations easy.

## Kilograms to pounds chart

An 83 kg to lbs chart is a great way to convert between the two units, with all the numbers in one place; it likewise shows how many pounds correspond to a kilogram.

Despite everyday usage, the pound is strictly a unit of mass rather than of weight (force). The pound in use today was standardized in Britain and its Commonwealth and is the legal equivalent of 0.45359237 kilograms; it is the standard unit of mass in some parts of the world, including the United States, and is formally known as the international avoirdupois pound.
The pound is often used in the United States to measure the mass of food, such as strawberries; weighing produce in pounds is familiar and easy to picture. For those who know the metric system, grams and kilograms serve the same purpose, with decimal steps making fine precision simpler.

If you prefer not to rely on a calculator, you can still convert 83 kg to lbs with the short formula: multiply by 2.2 for a quick estimate, or by the full factor 2.20462 for the exact answer. The shortcut is slightly less accurate, but it shows the value of 83 kg in lbs in a snap, and used correctly it gives the right ballpark every time.

The 83 kg to lbs chart above remains the quickest and most convenient route: enter your kilograms into the calculator and you will instantly see the corresponding pound equivalent.

## Foot pounds to kilograms meters

A conversion calculator is a convenient way to convert between foot-pounds and kilogram-metres: enter the value and the category of unit you are converting from, and the result appears in a fraction of a second, along with the formula used.

A foot-pound is a measurement of work, and as a torque it is a measure of the force applied times the distance from the pivot point. In the FPS unit system it is the work done by a one-pound force acting through a one-foot distance, and one foot-pound equals 0.1382549543776 metre-kilograms.

The metre-kilogram (kgf·m) is the corresponding gravitational metric unit of work and torque; one metre-kilogram is equivalent to about 9.807 newton-metres. (Power, the rate at which energy is transferred or transformed, is a different quantity with its own SI unit, the watt.)

Calculators of this kind accept any common abbreviation of the unit, or its full name, and then display the category of unit they have selected.
If you choose to convert from foot-pounds to metre-kilograms, entering a value of 9 foot-pounds gives a result of about 1.244 metre-kilograms.
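A minimal Python sketch of these torque/work conversions; the first constant is the one quoted above, the second is standard gravity in newtons per kilogram-force, and the names are our own:

```python
FTLB_TO_KGFM = 0.1382549543776  # metre-kilograms per foot-pound (quoted above)
KGFM_TO_NM = 9.80665            # newton-metres per metre-kilogram (standard gravity)

def foot_pounds_to_kgf_m(ft_lb: float) -> float:
    return ft_lb * FTLB_TO_KGFM

def kgf_m_to_newton_metres(kgf_m: float) -> float:
    return kgf_m * KGFM_TO_NM

t = foot_pounds_to_kgf_m(9.0)  # the 9 ft-lbf example from the text
print(f"9 ft-lbf = {t:.4f} kgf-m = {kgf_m_to_newton_metres(t):.3f} N-m")
# 9 ft-lbf = 1.2443 kgf-m = 12.202 N-m
```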
[ null, "https://www.futurestarr.com/blog-media/e524343348a2fb2d11105b1f3aeb6237.jpg", null, "https://i.imgur.com/az9dQtv.png", null, "https://i.imgur.com/5jFwxUS.gif", null, "https://i.imgur.com/AJSiJtU.jpg", null, "https://i.imgur.com/iTC7TlC.png", null, "https://i.imgur.com/SJCqFVL.jpg", null, "https://i.imgur.com/A0ddD4U.jpg", null, "https://i.imgur.com/eCIpn0k.jpg", null, "https://i.imgur.com/bgd3Q0M.jpg", null, "https://i.imgur.com/OmU8twu.jpg", null, "https://i.imgur.com/7Nj1lCg.jpg", null, "https://www.futurestarr.com/assets/images/default-ad-banner.png", null, "https://www.futurestarr.com/blog-media/1642401907Muhammad.png", null, "https://www.futurestarr.com/assets/images/default-ad-banner.png", null, "https://www.futurestarr.com/blog-media/1640531034m.png", null, "https://www.futurestarr.com/assets/images/default-ad-banner.png", null, "https://www.futurestarr.com/blog-media/1637485578m.png", null, "https://www.futurestarr.com/assets/images/default-ad-banner.png", null, "https://www.futurestarr.com/blog-media/1640451391m.png", null, "https://www.futurestarr.com/assets/images/default-ad-banner.png", null, "https://www.futurestarr.com/blog-media/1641142366m.png", null, "https://www.futurestarr.com/assets/images/default-ad-banner.png", null, "https://www.futurestarr.com/blog-media/1642926423m.png", null, "https://www.futurestarr.com/assets/images/default-ad-banner.png", null, "https://www.futurestarr.com/blog-media/1641885390Muhammad.png", null, "https://www.futurestarr.com/assets/images/default-ad-banner.png", null, "https://www.futurestarr.com/blog-media/1638004783Muhammad.png", null, "https://www.futurestarr.com/assets/images/default-ad-banner.png", null, "https://www.futurestarr.com/blog-media/1638770566Muhammad.png", null, "https://www.futurestarr.com/assets/images/default-ad-banner.png", null, "https://www.futurestarr.com/blog-media/1644050698Ali.png", null, "https://www.futurestarr.com/blog-media/cca24dd4d00744d5888533dff7331662.jpg", null, "https://www.futurestarr.com/blog-media/1639201349Muhammad.png", null, "https://www.futurestarr.com/blog-media/1716fffb68dfe24dd084665f43dcaf05.jpg", null, "https://www.futurestarr.com/blog-media/1626380220Future.png", null, "https://www.futurestarr.com/assets/images/default-ad-banner.png", null, "https://www.futurestarr.com/blog-media/1641278731m.png", null, "https://www.futurestarr.com/assets/images/default-ad-banner.png", null, "https://www.futurestarr.com/blog-media/1640343671m.png", null, "https://www.futurestarr.com/assets/images/default-ad-banner.png", null, "https://www.futurestarr.com/blog-media/1637505192m.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9489858,"math_prob":0.95376575,"size":94214,"snap":"2023-14-2023-23","text_gpt3_token_len":19602,"char_repetition_ratio":0.19070162,"word_repetition_ratio":0.0771751,"special_character_ratio":0.2067421,"punctuation_ratio":0.09601673,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.97695327,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82],"im_url_duplicate_count":[null,null,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,null,null,2,null,null,null,9,null,null,null,6,null,null,null,6,null,null,null,null,null,null,null,6,null,null,null,6,null,null,null,4,null,null,null,4,null,null,null,4,null,null,null,8,null,null,null,8,null,null,null,5,null,null,null,2,null,null,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-30T08:14:01Z\",\"WARC-Record-ID\":\"<urn:uuid:5080ad11-5e0c-485f-9670-05098d1e068b>\",\"Content-Length\":\"208784\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:45f6976b-a9d3-42f2-9063-64b2411b64de>\",\"WARC-Concurrent-To\":\"<urn:uuid:b717b6a9-78b8-41cd-913c-27435c0f042e>\",\"WARC-IP-Address\":\"37.19.207.34\",\"WARC-Target-URI\":\"https://www.futurestarr.com/blog/mathematics/d-bar-calculator\",\"WARC-Payload-Digest\":\"sha1:RYSNMPPFEI6CVE2SL3VZ3NCOA4PK2QC6\",\"WARC-Block-Digest\":\"sha1:CDBUPSSGE3HLIBTKQMJKMRAV7KGH7TYS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224645417.33_warc_CC-MAIN-20230530063958-20230530093958-00069.warc.gz\"}"}
https://xplaind.com/584953/amortization-of-bond-discount-straight-line-method
[ "# Amortization of Bond Discount: Straight Line Method\n\nWhen the coupon rate on a bond is lower than the interest rate prevailing in the market the bond is issued at a discount to par value. Alternatively, if the coupon rate is higher than the market interest rate the bond is issued at a premium to its par value. In both cases the carrying value of the bond is different from its face value. In case of issue at a discounted issue, the carrying amount equals face value minus the discount on bond; and in case of a premium issue, the carrying amount equals face value plus the amount of premium.\n\nIn both cases the interest paid or payable is based on the coupon rate which is the stated rate of the bond. However, the interest expense reported on the income statement is higher when the bond is issued at a discount to the par value by the amount of periodic amortization of bond discount. There are two methods for amortization of bond discount: the straight line method and the effective interest rate.\n\n## Straight line method\n\nUnder the straight line method of amortization of bond discount, the bond discount is written off in equal amounts over the life of the bond.\n\n## Example\n\nCompany DS intended to issue a bond with face value of \\$100,000 having a maturity of 5 years and annual coupon of 8%. At the time of issue however, the market interest rate rose to 10% and the bond could fetch a price of \\$92,420 only.\n\nThe difference of \\$7,580 between the face value of bond of \\$100,000 and the proceeds of \\$92,420 represent the discount on bond. Since the bond has a life of 5 years, the annual amortization of bond discount would equal \\$1,516 (\\$7,580 divided by 5). At the end of first year if interest payable is \\$8,000 Company DS would record its interest expense using the following journal entry:\n\n Interest Expense 9,516 Interest Payable 8,000 Bond Discount 1,516\n\nUnder straight line method the periodic interest expense, interest payable and amortization of bond discount does not vary over the periods." ]
https://www.dlubal.com/en/support-and-learning/support/faq/003414
[ "", null, "# What is the meaning of the superposition according to the CQC rule in a dynamic analysis??\n\nThe complete quadratic combination (CQC rule) must be applied if there are the adjacent modal shapes, whose periods differ about less than 10%, when analyzing the spatial models with the combined torsional / translational mode shapes. If this is not the case, the square root of the sum of the squares (SRSS rule) applies. In all other cases, the CQC rule must be applied. The CQC rule is defined as follows:\n\n${\\mathrm E}_{\\mathrm{CQC}}=\\sqrt{\\sum_{\\mathrm i=1}^{\\mathrm p}\\sum_{\\mathrm j=1}^{\\mathrm p}{\\mathrm E}_{\\mathrm i}{\\mathrm\\varepsilon}_{\\mathrm{ij}}{\\mathrm E}_{\\mathrm j}}$\n\nwith the correlation coefficient:\n\n${\\mathrm\\varepsilon}_{\\mathrm{ij}}=\\frac{8\\sqrt{{\\mathrm D}_{\\mathrm i}{\\mathrm D}_{\\mathrm j}}({\\mathrm D}_{\\mathrm i}+{\\mathrm D}_{\\mathrm j})\\mathrm r^{\\displaystyle\\frac32}}{\\left(1-\\mathrm r^2\\right)^2+4{\\mathrm D}_{\\mathrm i}{\\mathrm D}_{\\mathrm j}\\mathrm r(1+\\mathrm r^2)+4(\\mathrm D_{\\mathrm i}^2+\\mathrm D_{\\mathrm j}^2)\\mathrm r^2}$\n\nwhere:\n\n$\\mathrm r=\\frac{{\\mathrm\\omega}_{\\mathrm j}}{{\\mathrm\\omega}_{\\mathrm i}}$\n\nThe correlation coefficient is simplified if the viscous damping value D is selected to be the same for all mode shapes:\n\n${\\mathrm\\varepsilon}_{\\mathrm{ij}}=\\frac{8\\mathrm D^2(1+\\mathrm r)\\mathrm r^{\\displaystyle\\frac32}}{\\left(1-\\mathrm r^2\\right)^2+4\\mathrm D^2\\mathrm r(1+\\mathrm r^2)}$\n\nBy analogy to the SRSS rule, the CQC rule can also be performed as an equivalent linear combination. The formula of the modified CQC rule is as follows:\n\n${\\mathrm E}_{\\mathrm{CQC}}=\\sum_{\\mathrm i=1}^{\\mathrm p}{\\mathrm f}_{\\mathrm i}{\\mathrm E}_{\\mathrm i}$\n\nwhere:\n\n${\\mathrm f}_{\\mathrm i}=\\frac{{\\displaystyle\\sum_{\\mathrm i=1}^{\\mathrm p}}{\\mathrm\\varepsilon}_{\\mathrm{ij}}{\\mathrm E}_{\\mathrm j}}{\\sqrt{{\\displaystyle\\sum_{\\mathrm i=1}^{\\mathrm p}}{\\displaystyle\\sum_{\\mathrm j=1}^{\\mathrm p}}{\\mathrm E}_{\\mathrm i}{\\mathrm\\varepsilon}_{\\mathrm{ij}}{\\mathrm E}_{\\mathrm j}}}$\n\n#### Reference\n\n Meskouris, K. (1999). Baudynamik, Modelle, Methoden, Praxisbeispiele. Berlin: Ernst & Sohn.", null, "" ]
[ null, "https://www.facebook.com/tr", null, "https://www.dlubal.com/-/media/Images/website/img/000001-000100/000018.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5623399,"math_prob":0.99992585,"size":3203,"snap":"2020-34-2020-40","text_gpt3_token_len":896,"char_repetition_ratio":0.21944357,"word_repetition_ratio":0.12938005,"special_character_ratio":0.2784889,"punctuation_ratio":0.08347245,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99998784,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-05T02:06:39Z\",\"WARC-Record-ID\":\"<urn:uuid:41c3ab11-6e93-4bda-9db3-23cef61e5229>\",\"Content-Length\":\"157688\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5982d893-ad48-45a1-9bba-37cb779848b6>\",\"WARC-Concurrent-To\":\"<urn:uuid:95de9c3a-099d-4e7a-8456-24d1864e6e8e>\",\"WARC-IP-Address\":\"89.187.130.201\",\"WARC-Target-URI\":\"https://www.dlubal.com/en/support-and-learning/support/faq/003414\",\"WARC-Payload-Digest\":\"sha1:6WCGKPXYDBV6R6N4XYVKTJVWYD4AWJTY\",\"WARC-Block-Digest\":\"sha1:M5QNFI62JG5E7ZD2JDW4CE53CCEV5YYB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735906.77_warc_CC-MAIN-20200805010001-20200805040001-00033.warc.gz\"}"}
https://www.mathssciencecorner.com/2019/01/maths-std-7-swadhyay-111.html
[ "Maths Std 7 Swadhyay 11.1 - Maths Science Corner\n\n# Maths Science Corner\n\nMath and Science for all competitive exams\n\n# Maths\n\n## Standard 7\n\n### (Perimeter and Area)\n\nOn maths science corner you can now download new NCERT 2018 Gujarati Medium Textbook Standard 7 Maths Chapter 11 Parimiti Ane Kshetrafal (Perimeter and Area) Swadhyay 11.1 in pdf form for your easy reference.\n\nOn Maths Science Corner you will get all the printable study material of Maths and Science Including answers of prayatn karo, Swadhyay, Chapter Notes, Unit tests, Online Quiz etc..\n\nThis material is very helpful for preparing Competitive exam like Tet 1, Tet 2, Htat, tat for secondary and Higher secondary, GPSC etc..\n\nHighlight of the chapter\n\nSquares and rectangles\n\nPerimeter of square = 4*length of its sides\n\nPerimeter of rectangle = 2(length + bredth)\n\nArea of square = square of its length\n\nArea of rectangle = length * bredth\n\nTriangle as a part of rectangle\n\nGeneralisation of othe parts of rectangle\n\nArea of parallelogram = base * altitude\n\nArea of triangle = 1/2 * base * altitude\n\nCircumference of circle = 2*pi*r\n\nArea of circle = pi*r*r\n\nConversion of units\n\nApplications\n\nYou can get the above chapter from the following link\n\nMaths Std 7 Chapter 11\n\nIn swadhyay 11.1 You will be able to learn area and perimeter of square and rectangle.\n\nToday Maths Science Corner is giving you Maths Standard 7 Textbook Chapter 11 Swadhyay 11.1 in pdf format for your easy reference." ]
https://www.centerspace.net/linear-regression
[ "# C# Linear Regression / Régression Linéaire\n\nNMath from CenterSpace Software is a .NET class library that provides functions for statistical computation and biostatistics, including descriptive statistics, probability distributions, combinatorial functions, multiple linear regression, hypothesis testing, analysis of variance, and multivariate statistics.\n\nNote that with the release of NMath 7, all statistical types were unified into the CenterSpace.NMath.Core namespace and the CenterSpace.NMath.Stats namespace was deprecated.\n\nThe NMath library provides building blocks for mathematical, financial, engineering, and scientific applications on the .NET platform. Features include matrix and vector classes, linear algebra, random number generators, numerical integration methods, interpolation, statistics, biostatistics, multiple linear regression, analysis of variance (ANOVA), optimization, and object-oriented interfaces to public domain computing packages such as the BLAS (Basic Linear Algebra Subprograms) and LAPACK (Linear Algebra PACKage). All NMath routines are callable from any .NET language, including C#, Visual Basic.NET, and F#.\n\n### Linear Regression Documentation\n\nComplete documentation for all NMath libraries is available online. For more general information on linear regression, see the linear regression chapter in the NMath Stats User’s Guide.\n\nAll API documentation related to linear regression is available in the NMath Stats Reference Guide, outlined in the table below.\n\nClass\nDescription\nComputes a single or multiple linear regression from an input matrix of independent variable values and vector of dependent variable values\nTests overall model significance for linear regressions computed by class LinearRegression\nTests statistical hypotheses about estimated parameters in linear regressions computed by class LinearRegression\n\n### Linear Regression Code Examples\n\nAll NMath libraries include extensive code examples in both C# and Visual Basic.NET. Studying these examples is one of the best ways to learn how to use NMath libraries. For more information on linear regression, see:\n\n• SimpleLinearRegressionExample [C#]  [VB.NET]\nExample showing how to use the linear regression class to perform a simple linear regression.\n• MultipleLinearRegressionExample [C#]  [VB.NET]\nExample showing how to use the linear regression class to perform a multiple linear regression.\n\n### Try a Free Evaluation", null, "", null, "If you are interested in evaluating the Linear Regression classes in NMath, we offer a free trial version, for a 30-day evaluation period. This trial version is a fully featured distribution of NMath with no limitations. In only a few minutes you can be enjoying the power of NMath.", null, "", null, "Orders may be placed through our secure online store using either google checkout or paypal checkout. Our sales staff would be happy to help you with any questions that you may have about our products.  We are looking forward to working with you!" ]
https://math.stackexchange.com/questions/3215020/prove-sum-k-0-infty-binom2nk1n-22nk1-1
# Prove $\sum_{k=0}^\infty \binom{2n+k+1}{n}/2^{2n+k+1}=1$.

I was trying to find a closed form for the sum $$\sum_{k=0}^\infty \binom{2n+k+1}{n}/2^{2n+k+1}.$$

According to Wolfram https://www.wolframalpha.com/input/?i=sum+(2n%2Bk%2B1)!%2F(n!*n%2Bk%2B1)!*2%5E(2n%2Bk%2B1))+from+k%3D0+to+infinity
this sum evaluates to 1, but I can't figure out how to prove this. Any hints?

• This may be a far fetch, but if you set a=2 in the formula under the subtitle "5. An asymptotic formula for the inversion numbers" in the following link, it may lead you to some clue: academic.csuohio.edu/bmargolius/homepage/inversions/invers.htm – NoChance May 5 '19 at 20:34
• @ZaeemHussain $\sum\limits_{k=0}^{\infty}\frac{\binom{2\,n+k+1}{n}}{2^{2\,n+k+1}}=\frac{\sqrt{\pi }\,\Gamma(n+2)}{2^{2\,n+1}\Gamma \left(\frac{1}{2} (2\,n+3)\right)}\binom{2\,n+1}{n} =1$ perhaps provides some insight. – Steven Clark May 5 '19 at 21:32

Preliminary
\begin{align}
a_n
&=\sum_{k=0}^n\frac{\binom{k+n}{k}}{2^k}\tag{1a}\\
&=\sum_{k=0}^n\frac{\binom{k+n-1}{k-1}+\binom{k+n-1}{k}}{2^k}\tag{1b}\\
&=\sum_{k=0}^{n-1}\frac{\binom{k+n}{k}}{2^{k+1}}+\sum_{k=0}^n\frac{\binom{k+n-1}{k}}{2^k}\tag{1c}\\
&=\frac12a_n-\frac{\binom{2n}{n}}{2^{n+1}}+a_{n-1}+\frac{\binom{2n-1}{n}}{2^n}\tag{1d}\\[3pt]
&=\frac12a_n+a_{n-1}\tag{1e}\\[9pt]
&=2a_{n-1}\tag{1f}
\end{align}
Explanation:
(1a): define $a_n$
(1b): Pascal Identity
(1c): substitute $k\mapsto k+1$ in the left sum
(1d): apply (1a)
(1e): cancel terms
(1f): $2$ times (1e) minus (1a)

Since $a_0=1$, we get $$\sum_{k=0}^n\frac{\binom{k+n}{k}}{2^k}=2^n\tag2$$

Answer
\begin{align}
\sum_{k=0}^\infty\frac{\binom{2n+k+1}{n}}{2^{2n+k+1}}
&=\sum_{k=0}^\infty\frac{\binom{2n+k+1}{n+k+1}}{2^{2n+k+1}}\tag{3a}\\
&=\frac1{2^n}\sum_{k=0}^\infty(-1)^{n+k+1}\frac{\binom{-n-1}{n+k+1}}{2^{n+k+1}}\tag{3b}\\
&=\frac1{2^n}\sum_{k=n+1}^\infty(-1)^k\frac{\binom{-n-1}{k}}{2^k}\tag{3c}\\
&=\frac1{2^n}2^{n+1}-\frac1{2^n}\sum_{k=0}^n(-1)^k\frac{\binom{-n-1}{k}}{2^k}\tag{3d}\\
&=2-\frac1{2^n}\sum_{k=0}^n\frac{\binom{k+n}{k}}{2^k}\tag{3e}\\[9pt]
&=1\tag{3f}
\end{align}
Explanation:
(3a): symmetry of Pascal's Triangle
(3b): negative binomial coefficient
(3c): substitute $k\mapsto k-n-1$
(3d): Binomial Theorem
(3e): negative binomial coefficient
(3f): apply (2)

Recall that $$\binom nm=\frac1{2\pi i}\oint_{|z|=\rho}\frac{(1+z)^n}{z^{m+1}}dz.\tag1$$ Therefore, assuming $\rho<1$:
\begin{align}
\sum_{k=0}^\infty \binom{2n+k+1}{n}\left(\frac12\right)^{2n+k+1}
&=\sum_{k=0}^\infty\left(\frac12\right)^{2n+k+1}\frac1{2\pi i}\oint_{|z|=\rho}\frac{(1+z)^{2n+k+1}}{z^{n+1}}dz\tag2\\
&=\frac1{2\pi i}\oint_{|z|=\rho}\left(\frac{1+z}2\right)^{2n+1}\frac{dz}{z^{n+1}} \sum_{k=0}^\infty\left(\frac{1+z}2\right)^{k}\tag3\\
&=\frac1{2\pi i}\oint_{|z|=\rho}\left(\frac{1+z}2\right)^{2n+1} \frac1{1-\frac{1+z}2}\frac{dz}{z^{n+1}}\tag4\\
&=\frac1{2\pi i}\oint_{|z|=\rho}\left(\frac{1+z}2\right)^{2n+1} \frac2{1-z}\frac{dz}{z^{n+1}}\tag5\\
&=\operatorname{Res}_{z=0}\left(\frac12\right)^{2n}\frac{1}{z^{n+1}} \sum_{l=0}^{2n+1}\binom{2n+1}l z^l\sum_{k=0}^\infty z^k\tag6\\
&=\left(\frac12\right)^{2n}\sum_{k=0}^n\binom{2n+1}{n-k}\tag7\\
&=\left(\frac12\right)^{2n}2^{2n}=1.\tag8
\end{align}

Explanations:

(1) Follows from the residue theorem, since $\binom nm$ is the coefficient at $z^{-1}$ in the Laurent expansion of the integrand about $z=0$.

(2) The binomial coefficient is replaced according to (1).

(3) The terms are rearranged and the order of integration and summation is interchanged (which is possible due to $\rho <1$).

(4) The geometric series is evaluated (which converges due to $\rho <1$).

(5) The result for the geometric series is rearranged.

(6) The residue theorem is applied. The terms $(1+z)^{2n+1}$ and $\dfrac1{1-z}$ are expanded to a binomial sum and a geometric series, respectively.

(7) The residue, i.e. the coefficient at $z^{-1}$, is evaluated.

(8) The sum of the binomial coefficients is evaluated to $2^{2n}$ (since $2n+1$ is odd and exactly half of the binomial coefficients are summed), which gives rise to the final result.

• Thanks. Could you also point me to a reference for the formula relating the binomial coefficient to contour integration that you have in the beginning of the answer? Since I am not familiar with it I also couldn't follow the step where you get rid of the integral. – Zaeem Hussain May 5 '19 at 23:28
• @ZaeemHussain I have added some explanations. Please let me know if it helps. – user May 6 '19 at 6:06
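As a quick numerical sanity check of both the infinite sum and the finite identity (2), here is a small Python snippet (my addition, not part of the original thread) using exact rational arithmetic:

```python
from fractions import Fraction
from math import comb  # Python 3.8+

def partial_sum(n, terms):
    """Exact partial sum of sum_{k>=0} C(2n+k+1, n) / 2^(2n+k+1)."""
    return sum(Fraction(comb(2 * n + k + 1, n), 2 ** (2 * n + k + 1))
               for k in range(terms))

def identity_2(n):
    """Check (2): sum_{k=0}^{n} C(k+n, k) / 2^k == 2^n."""
    return sum(Fraction(comb(k + n, k), 2 ** k) for k in range(n + 1)) == 2 ** n

for n in range(6):
    # The summands decay like 2^(-k), so a few hundred terms suffice.
    print(n, float(partial_sum(n, 300)), identity_2(n))
```

Each partial sum prints as numerically indistinguishable from 1.0, and the finite identity holds exactly for every tested n.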
https://stackoverflow.com/questions/7336861/how-to-convert-string-to-boolean-php/15075609
# How to convert a string to boolean in PHP

How can I convert a string to `boolean`?

```
$string = 'false';

$test_mode_mail = settype($string, 'boolean');

var_dump($test_mode_mail);

if($test_mode_mail) echo 'test mode is on.';
```

it returns

boolean true

but it should be `boolean false`.

• Why hasn't anyone answered with `$bool = !!$string1`? – zloctb Oct 16 '13 at 19:42
• @zloctb because it doesn't answer the question. `!!$string1` would return a boolean indicative of the string outlined in the top rated answer. – David Barker Feb 8 '15 at 10:49

Strings always evaluate to boolean true unless they have a value that's considered "empty" by PHP (taken from the documentation for `empty`):

1. `""` (an empty string);
2. `"0"` (0 as a string)

If you need to set a boolean based on the text value of a string, then you'll need to check for the presence or otherwise of that value.

```
$test_mode_mail = $string === 'true'? true: false;
```

EDIT: the above code is intended for clarity of understanding. In actual use the following code may be more appropriate:

```
$test_mode_mail = ($string === 'true');
```

or maybe use of the `filter_var` function may cover more boolean values:

```
filter_var($string, FILTER_VALIDATE_BOOLEAN);
```

`filter_var` covers a whole range of values, including the truthy values `"true"`, `"1"`, `"yes"` and `"on"`. See here for more details.

• I recommend to always use strict comparison if you're not sure what you're doing: `$string === 'true'` – Znarkus Sep 7 '11 at 16:00
• I found this - `filter_var($string, FILTER_VALIDATE_BOOLEAN);` is it a good thing? – laukok Sep 7 '11 at 16:05
• The ternary doesn't seem necessary. Why not just set $test_mode_mail to the value of the inequality? `$test_mode_mail = $string === 'true'` – Tim Banks Jun 5 '12 at 15:28
• But what about 1/0, TRUE/FALSE? I think @lauthiamkok's answer is the best. – ryabenko-pro Dec 15 '12 at 14:06
• @FelipeTadeo I'm talking about how PHP evaluates strings with respect to boolean operations, I never mentioned eval() and I'd never recommend using it under any circumstances. The string "(3 < 5)" will be evaluated by PHP as boolean true because it's not empty. – GordonM Jul 26 '13 at 8:06

This method was posted by @lauthiamkok in the comments. I'm posting it here as an answer to call more attention to it.

Depending on your needs, you should consider using `filter_var()` with the `FILTER_VALIDATE_BOOLEAN` flag.

```
filter_var(    true, FILTER_VALIDATE_BOOLEAN); // true
filter_var(  'true', FILTER_VALIDATE_BOOLEAN); // true
filter_var(       1, FILTER_VALIDATE_BOOLEAN); // true
filter_var(     '1', FILTER_VALIDATE_BOOLEAN); // true
filter_var(    'on', FILTER_VALIDATE_BOOLEAN); // true
filter_var(   'yes', FILTER_VALIDATE_BOOLEAN); // true

filter_var(   false, FILTER_VALIDATE_BOOLEAN); // false
filter_var( 'false', FILTER_VALIDATE_BOOLEAN); // false
filter_var(       0, FILTER_VALIDATE_BOOLEAN); // false
filter_var(     '0', FILTER_VALIDATE_BOOLEAN); // false
filter_var(   'off', FILTER_VALIDATE_BOOLEAN); // false
filter_var(    'no', FILTER_VALIDATE_BOOLEAN); // false
filter_var('asdfasdf', FILTER_VALIDATE_BOOLEAN); // false
filter_var(      '', FILTER_VALIDATE_BOOLEAN); // false
filter_var(    null, FILTER_VALIDATE_BOOLEAN); // false
```

• According to the documentation, this function is available for PHP 5 >= 5.2.0: php.net/manual/en/function.filter-var.php – Westy92 Oct 2 '15 at 2:49
• I really like this solution for setting booleans based on WordPress shortcode attributes that have values such as true, false, on, 0, etc. Great answer, should definitely be the accepted answer. – AndyWarren Jun 8 '17 at 17:49
• `filter_var($answer, FILTER_VALIDATE_BOOLEAN, FILTER_NULL_ON_FAILURE)` worked even better for me. See php.net/manual/en/function.filter-var.php#121263 – Ryan Aug 26 '17 at 19:42
• !! Important note !! filter_var also returns FALSE if the filter fails. This may create some problems. – AFA Med Oct 4 '17 at 10:30

The String `"false"` is actually considered a `"TRUE"` value by PHP. The documentation says:

To explicitly convert a value to boolean, use the (bool) or (boolean) casts. However, in most cases the cast is unnecessary, since a value will be automatically converted if an operator, function or control structure requires a boolean argument.

When converting to boolean, the following values are considered FALSE:

• the boolean FALSE itself

• the integer 0 (zero)

• the float 0.0 (zero)

• the empty string, and the string "0"

• an array with zero elements

• an object with zero member variables (PHP 4 only)

• the special type NULL (including unset variables)

• SimpleXML objects created from empty tags

Every other value is considered TRUE (including any resource).

so if you do:

```
$bool = (boolean)"False";
```

or

```
$test = "false";
$bool = settype($test, 'boolean');
```

in both cases `$bool` will be `TRUE`. So you have to do it manually, like GordonM suggests.

• Euhm, of course the lower one returns false. In fact, it throws a fatal :) "Fatal error: Only variables can be passed by reference". `$a = 'False'; settype($a,'boolean'); var_dump($a);` will indeed return false. – Rob Oct 24 '16 at 6:55

When working with JSON, I had to send a Boolean value via `$_POST`. I had a similar problem when I did something like:

```
if ( $_POST['myVar'] == true) {
    // do stuff;
}
```

In the code above, my Boolean was converted into a JSON string.

To overcome this, you can decode the string using `json_decode()`:

```
//assume that : $_POST['myVar'] = 'true';
if( json_decode('true') == true ) { //do your stuff; }
```

(This should normally work with Boolean values converted to string and sent to the server also by other means, i.e., other than using JSON.)

you can use json_decode to decode that boolean

```
$string = 'false';
$boolean = json_decode($string);
if($boolean) {
    // Do something
} else {
    //Do something else
}
```

• json_decode will also transform to integer if the given string is an integer – Mihai Răducanu Aug 16 '16 at 13:53
• Yes, that's true, but it's mentioned that the string is holding a boolean value – souparno majumder Aug 16 '16 at 14:13

```
(boolean)json_decode(strtolower($string))
```

It handles all possible variants of `$string`:

```
'true'  => true
'True'  => true
'1'     => true
'false' => false
'False' => false
'0'     => false
'foo'   => false
''      => false
```

• What about `on` and `off`? – Cyclonecode Mar 21 '18 at 16:56
• @Cyclonecode it won't handle it, the same as `вкл` and `выкл`. – mrded Sep 25 '20 at 9:55

If your "boolean" variable comes from a global array such as $_POST and $_GET, you can use the `filter_input()` filter function.

Example for POST:

```
$isSleeping = filter_input(INPUT_POST, 'is_sleeping', FILTER_VALIDATE_BOOLEAN);
```

If your "boolean" variable comes from another source you can use the `filter_var()` filter function.

Example:

```
filter_var('true', FILTER_VALIDATE_BOOLEAN); // true
```

You can use `boolval($strValue)`

Examples:

```
<?php
echo '0: '.(boolval(0) ? 'true' : 'false')."\n";
echo '42: '.(boolval(42) ? 'true' : 'false')."\n";
echo '0.0: '.(boolval(0.0) ? 'true' : 'false')."\n";
echo '4.2: '.(boolval(4.2) ? 'true' : 'false')."\n";
echo '"": '.(boolval("") ? 'true' : 'false')."\n";
echo '"string": '.(boolval("string") ? 'true' : 'false')."\n";
echo '"0": '.(boolval("0") ? 'true' : 'false')."\n";
echo '"1": '.(boolval("1") ? 'true' : 'false')."\n";
echo '[1, 2]: '.(boolval([1, 2]) ? 'true' : 'false')."\n";
echo '[]: '.(boolval([]) ? 'true' : 'false')."\n";
echo 'stdClass: '.(boolval(new stdClass) ? 'true' : 'false')."\n";
?>
```

Documentation: http://php.net/manual/es/function.boolval.php

• `echo boolval('false');` => 1 – Mubashar Jun 3 '19 at 5:37
• You can use `echo (int)'false';` or `echo intval('false');` – anayarojo Jun 3 '19 at 22:59
• @anayarojo `(int)'true'` and `intval('true')` both return 0 as well (all strings do) – sketchyTech Sep 2 '19 at 12:59

the easiest thing to do is this:

```
$str = 'TRUE';

$boolean = strtolower($str) == 'true' ? true : false;

var_dump($boolean);
```

Doing it this way, you can loop through a series of 'true', 'TRUE', 'false' or 'FALSE' values and convert the string to a boolean.

• You could make the above a bit simpler by doing `$boolean = strtolower($str) == 'true';` – Cyclonecode Sep 28 '20 at 22:47

```
filter_var($string, FILTER_VALIDATE_BOOLEAN, FILTER_NULL_ON_FAILURE);

$string = 1;          // true
$string = '1';        // true
$string = 'true';     // true
$string = 'trUe';     // true
$string = 'TRUE';     // true
$string = 0;          // false
$string = '0';        // false
$string = 'false';    // false
$string = 'False';    // false
$string = 'FALSE';    // false
$string = 'sgffgfdg'; // null
```

You must specify `FILTER_NULL_ON_FAILURE`, otherwise you'll always get false, even if $string contains something else.

Other answers are overcomplicating things. This question is simply a logic question. Just get your statement right.

```
$boolString = 'false';
$result = 'true' === $boolString;
```

`$result` is now:

• `false`, if the string was `'false'`,
• or `true`, if your string was `'true'`.

I have to note that `filter_var( $boolString, FILTER_VALIDATE_BOOLEAN );` still will be a better option if you need to have strings like `on/yes/1` as aliases for `true`.

```
function stringToBool($string){
    return ( mb_strtoupper( trim( $string ) ) === mb_strtoupper( "true" ) ) ? TRUE : FALSE;
}
```

or

```
function stringToBool($string) {
    return filter_var($string, FILTER_VALIDATE_BOOLEAN);
}
```

I do it in a way that will cast any case-insensitive version of the string "false" to the boolean FALSE, but will behave using the normal PHP casting rules for all other strings. I think this is the best way to prevent unexpected behavior.

```
$test_var = 'False';
$test_var = strtolower(trim($test_var)) == 'false' ? FALSE : $test_var;
$result = (boolean) $test_var;
```

Or as a function:

```
function safeBool($test_var){
    $test_var = strtolower(trim($test_var)) == 'false' ? FALSE : $test_var;
    return (boolean) $test_var;
}
```

The answer by @GordonM is good. But it would fail if the `$string` is already `true` (i.e., the value isn't a string but boolean TRUE)... which seems illogical.

```
$test_mode_mail = ($string === 'true' OR $string === true);
```

You can use the settype method too!

```
settype($var, "boolean");
echo $var; // prints 0 or 1
```

I was getting confused with WordPress shortcode attributes, so I decided to write a custom function to handle all possibilities. Maybe it's useful for someone:

```
function stringToBool($str){
    if($str === 'true' || $str === 'TRUE' || $str === 'True' || $str === 'on' || $str === 'On' || $str === 'ON'){
        $str = true;
    }else{
        $str = false;
    }
    return $str;
}
stringToBool($atts['onOrNot']);
```

• i was looking for help in the first place, but did not find anything as easy as i hoped. that's why i wrote my own function. feel free to use it. – tomi Apr 5 '16 at 18:02
• Perhaps lower the string so you don't need all the or conditions: `$str = strtolower($str); return ($str == 'true' || $str == 'on');` – Cyclonecode Sep 28 '20 at 22:44

A simple way is to check against an array of values that you consider true.

```
$wannabebool = "false";
$isTrue = ["true", 1, "yes", "ok", "wahr"];
$bool = in_array(strtolower($wannabebool), $isTrue);
```

Edited to show a possibility not mentioned here, because my original answer was far from related to the OP's question.

`preg_match()` can also be used. However, in most applications it will be much heavier than the other answers here.

```
if (preg_match("/true/i", "true PHP is a web scripting language of choice.")) {
    echo "<br><br>Returned true";
} else {
    echo "<br><br>Returned False";
}
```

`/(?:true)|(?:1)/i` can also be used if needed in certain situations. It will not return correctly when it evaluates a string containing both "false" and "1".

• This is not what was asked. The question is how to convert a string into boolean. – mrded Jun 20 '17 at 10:39
• mrded: I misread the question, I apologize. So in the spirit of good form I will add another possibility not mentioned here. – JSG Jul 21 '19 at 19:53

In PHP you can simply convert a value to a boolean by using the double not operator (`!!`):

```
var_dump(!! true);    // true
var_dump(!! "Hello"); // true
var_dump(!! 1);       // true
var_dump(!! [1, 2]);  // true
var_dump(!! false);   // false
var_dump(!! null);    // false
var_dump(!! []);      // false
var_dump(!! 0);       // false
var_dump(!! '');      // false
```

• Using this, a "false" string will end up as boolean true. – Daniel Wu Jul 1 '20 at 3:47

You should be able to cast to a boolean using (bool), but I'm not sure without checking whether this works on the strings "true" and "false".

This might be worth a pop though:

```
$myBool = (bool)"False";

if ($myBool) {
    //do something
}
```

It is worth knowing that the following will evaluate to the boolean False when put inside

```
if()
```
• the boolean FALSE itself
• the integer 0 (zero)
• the float 0.0 (zero)
• the empty string, and the string "0"
• an array with zero elements
• an object with zero member variables (PHP 4 only)
• the special type NULL (including unset variables)
• SimpleXML objects created from empty tags

Everything else will evaluate to true.

• In response to the guess in your first paragraph: using an explicit cast to boolean will convert `"false"` to `true`. – Mark Amery Mar 13 '13 at 10:29
• This will print "true": `$myBool = (bool)"False"; if ($myBool) { echo "true"; }` – SSH This Apr 22 '13 at 18:28
• This is wrong, strings are evaluated as true unless they contain "" or "0". – Michael J. Calkins May 25 '13 at 17:19
https://physicscatalyst.com/Class10/CG-x_fa.php
# Class 10 Maths Important Questions for Coordinate Geometry

Given below are the Class 10 Maths Important Questions for Coordinate Geometry:
(a) Concept questions
(b) Calculation problems
(c) Multiple choice questions
(d) Fill in the blanks

Question 1
Calculate the following:
1. Distance between the points (1,3) and (2,4)
2. Mid-point of the line segment AB where A(2,5) and B(-5,5)
3. Area of the triangle formed by joining the points (0,0), (2,0) and (3,0)
4. Distance of the point (5,0) from the origin
5. Distance of the point (5,-5) from the origin
6. Coordinates of the point M which divides the line segment joining A(2,3) and B(5,6) in the ratio 2:3
7. Quadrant of the mid-point of the line segment joining A(2,3) and B(5,6)
8. The coordinates of a point A, where AB is the diameter of a circle whose center is (2,−3) and B is (1,4)
Solution
1. $D=\sqrt{(1-2)^{2}+(3-4)^{2}}=\sqrt{2}$
2. The mid-point is given by ((2-5)/2, (5+5)/2), i.e. (-3/2, 5)
3. $A=\frac{1}{2}[0(0-0)+2(0-0)+3(0-0)]=0$
Since the three points are collinear, the area is zero.
4. $D=\sqrt{5^{2}+0^{2}}=5$
5. $D=\sqrt{5^{2}+(-5)^{2}}=\sqrt{50}=5\sqrt{2}$
6. By the section formula, the coordinates of point M are
$x=\frac{2\times 5+3\times 2}{2+3}=\frac{16}{5}$
$y=\frac{2\times 6+3\times 3}{2+3}=\frac{21}{5}$
7. The mid-point is (7/2, 9/2), which lies in the first quadrant.
8. We know that the center is the mid-point of AB, so
$2=\frac{1+x}{2}$
$-3=\frac{4+y}{2}$
Solving these, we get (3,-10).

## True or False statement

Question 2
(a) Points A(0,0), B(0,3), C(0,7) and D(2,0) form a quadrilateral
(b) The point P(-2, 4) lies on a circle of radius 6 and center C(3, 5)
(c) Triangle PQR with vertices P(-2, 0), Q(2, 0) and R(0, 2) is similar to ΔXYZ with vertices X(-4, 0), Y(4, 0) and Z(0, 4)
(d) Points X(2, 2), Y(0, 0) and Z(3, 0) are not collinear
(e) The triangle formed by joining the points A(-3,0), B(0,0) and C(0,2) is a right-angled triangle
(f) A circle has its center at the origin and a point A(5, 0) lies on it. The point B(6, 8) lies inside the circle
(g) The points A(-1, -2), B(4, 3), C(2, 5) and D(-3, 0), in that order, form a rectangle
Solution
1. False. As the three points A, B and C are collinear, no quadrilateral can be formed.
2. False. The distance between the points P and C is $\sqrt{26}$, which is less than 6, so the point lies inside the circle.
3. True. Both triangles are isosceles (PQ = 4 with PR = QR = 2√2, and XY = 8 with XZ = YZ = 4√2), with corresponding sides in the ratio 1:2, so they are similar.
4. True. The area of the triangle XYZ is not zero.
5. True. If we plot the points on the coordinate system, it becomes clear that there is a right angle at the origin.
6. False. The radius of the circle is 5, and the distance of the point B from the origin is 10, which is more than 5, so it lies outside the circle.
7. True. Calculating the distances between the points shows that opposite sides are equal, and also that the diagonals are equal. So it is a rectangle.

## Multiple choice Questions

Question 3
Find the centroid of the triangle XYZ whose vertices are X(3, -5), Y(-3, 4) and Z(9, -2).
(a) (0, 0)
(b) (3, 1)
(c) (2, 3)
(d) (3, -1)
Solution
(d)
The centroid of the triangle is given by
$x=\frac{x_{1}+x_{2}+x_{3}}{3}=\frac{3-3+9}{3}=3$
$y=\frac{y_{1}+y_{2}+y_{3}}{3}=\frac{-5+4-2}{3}=-1$

Question 4
The area of the triangle ABC with coordinates A(1, 2), B(2, 5) and C(-2, -5) is
(a) -1
(b) .4
(c) 2
(d) 1
Solution
(d)
$A=\frac{1}{2}[1(5+5)+2(-5-2)-2(2-5)]=1$

Question 5
Find the value of p for which these points are collinear: (7,-2), (5,1), (3,p).
(a) 2
(b) 4
(c) 3
(d) None of these
Solution
(b)
For these points to be collinear, the area must be zero:
$\frac{1}{2}[7(1-p)+5(p+2)+3(-2-1)]=0$
$7-7p+5p+10-9=0$
$8-2p=0$
$p=4$

Question 6
Determine the ratio in which the line 2x + y - 4 = 0 divides the line segment joining the points A(2, -2) and B(3, 7).
(a) 2:9
(b) 1:9
(c) 1:2
(d) 2:3
Solution
(a)
Let the ratio be m:n. The coordinates of the point of intersection are
$x=\frac{3m+2n}{m+n}$
$y=\frac{7m-2n}{m+n}$
This point should lie on the line, so
$2\left(\frac{3m+2n}{m+n}\right)+\left(\frac{7m-2n}{m+n}\right)-4=0$
$9m-2n=0$
m:n = 2:9

Question 7
If the mid-point of the line segment joining the points A(3, 4) and B(a, 4) is P(x, y) and x - y - 20 = 0, then find the value of a.
(a) 0
(b) 1
(c) 40
(d) 45
Solution
(d)
The mid-point is ((3+a)/2, 4).
Now
(3+a)/2 - 4 - 20 = 0
3 + a = 48
a = 45

Question 8
Prove that the points (a, b + c), (b, c + a) and (c, a + b) are collinear.

Question 9
For what value of x will the points (x, -1), (2, 1) and (4, 5) lie on a line?

Question 10
If the points (p, q), (m, n) and (p - m, q - n) are collinear, show that pn = qm.

Question 11
Find k so that the point P(-4, 6) lies on the line segment joining A(k, 10) and B(3, -8). Also, find the ratio in which P divides AB.

Question 12
Find the area of the quadrilaterals, the co-ordinates of whose vertices are
(i) (-3, 2), (5, 4), (7, -6) and (-5, -4)
(ii) (1, 2), (6, 2), (5, 3) and (3, 4)
(iii) (-4, -2), (-3, -5), (3, -2), (2, 3)

Question 13
Show that the following sets of points are collinear
(i) (2, 5), (4, 6) and (8, 8)
(ii) (1, -1), (2, 1) and (4, 5)

Question 14
Find the value of x such that PQ = QR, where the co-ordinates of P, Q and R are (6, -1), (1, 3) and (x, 8) respectively.

### Practice Question

Question 1 What is $1 - \sqrt{3}$?
A) Non-terminating repeating
B) Non-terminating non-repeating
C) Terminating
D) None of the above
Question 2 The volume of the largest right circular cone that can be cut out from a cube of edge 4.2 cm is?
A) 19.4 cm³
B) 12 cm³
C) 78.6 cm³
D) 58.2 cm³
Question 3 The sum of the first three terms of an AP is 33. If the product of the first and the third term exceeds the second term by 29, the AP is?
A) 2, 21, 11
B) 1, 10, 19
C) -1, 8, 17
D) 2, 11, 20
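The worked answers above are easy to verify numerically. Here is a small Python sketch (not part of the original worksheet) checking the distance in Question 1.5, the section point in Question 1.6, the centroid in Question 3 and the collinearity condition in Question 5:

```python
from math import hypot, isclose, sqrt

def distance(p, q):
    return hypot(p[0] - q[0], p[1] - q[1])

def section_point(a, b, m, n):
    """Point dividing the segment AB internally in the ratio m:n."""
    return ((m * b[0] + n * a[0]) / (m + n), (m * b[1] + n * a[1]) / (m + n))

def triangle_area(a, b, c):
    return abs(a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1])) / 2

print(isclose(distance((5, -5), (0, 0)), 5 * sqrt(2)))  # Q1.5 -> True
print(section_point((2, 3), (5, 6), 2, 3))              # Q1.6 -> (3.2, 4.2) = (16/5, 21/5)
print(((3 - 3 + 9) / 3, (-5 + 4 - 2) / 3))              # Q3   -> (3.0, -1.0)
print(triangle_area((7, -2), (5, 1), (3, 4)))           # Q5   -> 0.0, so p = 4 works
```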
https://1to20tables.com/multiplication-table-of-15/
Multiplication Table Of 15

Multiplication tables are fun to learn. While there are other practical ways to make learning multiplication fun for children, letting them learn through an online portal is really effective. Most children are internet savvy, and this makes it important for parents to monitor and guide them through their online adventures. If you are a parent or teacher and you find children involved in online activities, introduce them to this online platform. They will have more fun learning tables online than with a book. Below is the multiplication table of 15. Let your child have fun learning these tables. Learn multiplication tables 1-20.

15 × 1 = 15
15 × 2 = 30
15 × 3 = 45
15 × 4 = 60
15 × 5 = 75
15 × 6 = 90
15 × 7 = 105
15 × 8 = 120
15 × 9 = 135
15 × 10 = 150
15 × 11 = 165
15 × 12 = 180
15 × 13 = 195
15 × 14 = 210
15 × 15 = 225
15 × 16 = 240
15 × 17 = 255
15 × 18 = 270
15 × 19 = 285
15 × 20 = 300
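For anyone who would like to print or extend the table programmatically, a two-line Python loop (my illustration, not from the original page) reproduces it exactly:

```python
for i in range(1, 21):
    print(f"15 × {i} = {15 * i}")
```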
https://percent.info/bps-to-percent/what-is-318-basis-points-in-percentage.html
318 Basis Points in Percentage

Here we will explain what 318 basis points means and show you how to convert 318 basis points (bps) to a percentage.

First, note that 318 basis points are also referred to as 318 bps, 318 bips, and even 318 beeps. Basis points are frequently used in the financial markets to communicate percentage change. For example, your interest rate may have decreased by 318 basis points, or your stock price may have gone up by 318 basis points.

318 basis points means 318 hundredths of a percent; in other words, 318 basis points is 318 percent of one percent. Therefore, to calculate 318 basis points as a percentage, we calculate 318 percent of one percent. Below is the math and the answer to 318 basis points to percent:

(318 × 1)/100 = 3.18
318 basis points = 3.18%

Shortcut: As you can see from our calculation above, you can convert 318 basis points, or any other number of basis points, to a percentage by dividing the basis points by 100.
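The shortcut works for any number of basis points. A tiny Python helper (my illustration, not from the original page) encodes the rule:

```python
def bps_to_percent(bps):
    """Convert basis points to percent: 1 basis point = 1/100 of a percent."""
    return bps / 100

print(bps_to_percent(318))  # 3.18
```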
https://thisismyclassroom.wordpress.com/2015/09/
# Monthly Archives: September 2015

## Enumerating possibilities of combinations of two variables

With Year 6 children expected to work on the objective 'enumerate possibilities of combinations of two variables', we should be clear on the difference between the underlying concept and the algebraic representation of it.

2g + w = 10

For questions such as this, children should first have a secure understanding of the part, part, whole model. We can show that 2 lots of something add one lot of something else is equal to 10 by using a concrete manipulative such as Numicon. First, children represent the whole, in this case 10. Then they can speculate on the two equal parts (2g), trying out g = 1 before finding the Numicon piece that fills the gap and therefore is equal to w.

Having found one solution, they can continue to work systematically to find alternative solutions. Trying g = 2 is logical.

Lining up solutions beneath the whole reinforces the idea that the expressions are equivalent. Children can continue to work systematically.

This also provides a scaffold for questions of greater depth, such as 'What is the greatest number that g can represent? Explain…'

Subtraction? Not a problem, although in this case, children must know that for subtraction, you always subtract from the whole.

10 = 3g – w

In this question, the whole is 3g and the parts are 10 and w.

What is not clear from this model is the trial and error that went into it. Children may well try 3 ones and quickly realise that the result is already less than 10, so subtracting from it will not give a valid solution. There is lots of scope here for discussion about the smallest number that g could represent.

The use of Numicon leads nicely into children representing problems as bar models. Here are the two examples used so far.

Filed under Curriculum, Maths

The question was on the screen:

One year 6 child said: 'The empty box is in the middle so you do the inverse. You have to add the numbers together.'

This got me thinking about how children build on their early concepts of number to be able to deal with problems like this, which I'll call 'empty box problems'.

The underlying pattern of additive reasoning is the relationship between the parts and the whole. Getting children to think and talk about the whole and parts using concrete manipulatives early on should lay the foundations for them to internalise this underlying pattern. Every time children think and talk about number bonds, they can be practising identifying the whole, breaking it into parts and then recombining to make the whole once more.

Alongside talking about the whole and parts, children should begin to generate worded statements whilst manipulating cubes or Numicon, for example. At this point it is important to experiment with rearranging the words in the statement. They should get to know that 'four add two is equal to six' and 'six is equal to four add two' are statements that are saying the same thing. Some discussion around what is the same and what is different about these two statements would be worthwhile.

When children are then shown how this looks abstractly with numerals and the equals sign, this would hopefully go some way towards avoiding the misconception that the equals sign means that 'the answer is next'.

In the examples used so far, the whole and each of the parts have been 'known'.

Using the same manipulatives and language patterns, children can be introduced to unknowns. It seems sensible to begin with giving children the parts and using the word 'something' to show that the whole is unknown, i.e. 'four add two is equal to something'. Some modelling alongside a clear explanation followed by plenty of practice should see children get used to the language patterns needed to think about the concept with clarity. The next step is to show children the whole and one of the parts, using the word 'something' to replace the unknown part. All of this talk and manipulation of objects is intended to support children to develop a concept of additive reasoning where they do not have the misconception that 'inverse' means 'do the opposite'.

More sophisticated additive reasoning is the understanding of the inverse relationship between addition and subtraction. Children need to fully understand that two or more parts can be equal to the whole. From this, they need to internalise the underlying patterns: that Part + Part = Whole and that Whole – Part = Part. From this, they should be able to work out the full range of calculations that represent one bar model. Again, it is important to vary the placement of the = sign.

One more way to get children to think about the whole and the parts is to use bar models for calculation practice rather than simply writing a calculation for children to work out. When done like this, children have to decide what calculation to do to work out the unknown. Children often exhibit misconceptions such as 'when you subtract, the biggest number goes first'. These can be addressed using the underlying patterns: adding parts together makes the whole and, when you subtract, you always subtract from the whole. When unknowns are introduced, they can be substituted into these basic patterns (made concrete in the short code sketch at the end of this post):

Part + Something = Whole           Part + □ = Whole              35 + □ = 72

Something + Part = Whole           □ + Part = Whole              □ + 35 = 72

Whole – Something = Part           Whole – □ = Part              72 – □ = 35

Something – Part = Part            □ – Part = Part               □ – 35 = 37

Knowing these patterns will help children to be able to analyse problem types in order to decide on the calculation needed. An additive reasoning bar model with one unknown generates both an addition statement and a subtraction statement. Showing children empty box problems pictorially, they can talk through the calculations that can be read from the bar model, using the word 'something' to represent the unknown. The next step is to show children abstract empty box problems and get them to map each one onto a blank bar model. They should be drawing on their knowledge that the whole is equal to the sum of the parts and that when you subtract, you always start with the whole. Eventually, the hope is that the language alone should suffice to work out how to solve empty box problems, with children no longer needing the bars.

Which brings us back to that year 6 child. Of course, children will develop misconceptions as they make sense of what is shown and explained to them. By expecting them to think and talk about additive reasoning in the ways described above, it should go some way to building sound conceptual understanding.
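For adult readers, the underlying patterns above can be made concrete in a few lines of code. This Python sketch (my addition, not from the post) finds the missing number in an empty box problem by identifying the whole and the parts, rather than by 'doing the opposite':

```python
def solve_empty_box(whole=None, part_a=None, part_b=None):
    """Return the missing value, given any two of (whole, part_a, part_b).

    Encodes the two underlying patterns from the post:
        Part + Part = Whole        Whole - Part = Part
    """
    if whole is None:
        return part_a + part_b          # adding the parts makes the whole
    known_part = part_a if part_a is not None else part_b
    return whole - known_part           # always subtract a part from the whole

print(solve_empty_box(part_a=35, part_b=37))  # 72: 35 + 37 = something
print(solve_empty_box(whole=72, part_a=35))   # 37: 35 + something = 72
```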
https://www.elprocus.com/lc-oscillator-circuit-working-and-its-applications/
# LC Oscillator Circuit : Working and Its Applications

An oscillator is an electronic circuit used to change a DC input into an AC output. The output can have a wide range of waveforms and frequencies based on the application. Oscillators are used in several applications, such as test equipment that generates sinusoidal, sawtooth, square, or triangular waveforms. The LC oscillator is usually used within RF circuits because of its good phase-noise characteristics and easy implementation. Basically, an oscillator is an amplifier that includes positive feedback. In electronic circuit design, a common problem is stopping amplifiers from oscillating while trying to get oscillators to oscillate. This article gives an overview of the LC oscillator and how the circuit works.

## What is LC Oscillator?

Basically, an oscillator uses positive feedback and generates an output frequency without any input signal. These are therefore self-sustaining circuits that generate a periodic output waveform at a precise frequency. An LC oscillator is a kind of oscillator in which an LC tank circuit is used to provide the positive feedback required to sustain the oscillations.

This circuit is also called an LC tuned or LC resonant circuit. These oscillators can be realized with a FET, BJT, MOSFET, op-amp, etc. The applications of LC oscillators mainly include frequency mixers, RF signal generators, tuners, RF modulators, sine wave generators, etc. Please refer to this link to know more about Difference Between Capacitor and Inductor.

### LC Oscillator Circuit Diagram

An LC circuit is an electric circuit built with an inductor, denoted 'L', and a capacitor, denoted 'C', connected together in a single circuit. The circuit works like an electrical resonator which stores energy and oscillates at the circuit's resonant frequency.

These circuits are used either to select a signal at a particular frequency from a composite signal, or to generate a signal at a particular frequency. They work as major components in a variety of electronic devices such as radio equipment, and in circuits such as filters, tuners, and oscillators. An ideal LC circuit is a model in which no energy is dissipated in resistance; a practical design aims to keep the resistance, and therefore the damping, as small as possible.

### LC Oscillator Derivation

When the oscillator circuit is driven by a voltage whose frequency changes with time, the inductive and capacitive reactances change with it, so the amplitude and phase of the output change relative to the input signal.

Inductive reactance is directly proportional to frequency, while capacitive reactance is inversely proportional to frequency. So at low frequencies the inductive reactance is very small and the inductor behaves like a short circuit, while the capacitive reactance is high and the capacitor behaves like an open circuit.

At high frequencies the reverse happens: the capacitor acts as a short circuit, whereas the inductor acts as an open circuit.

At one specific combination of frequency, inductance and capacitance, the circuit becomes tuned, or resonant: the inductive and capacitive reactances are equal and cancel each other.

At resonance, only resistance remains to oppose the flow of current, so the resonant circuit produces no phase shift between voltage and current; the current and voltage are in phase with each other.

Sustained oscillations are obtained by supplying energy to the inductor and capacitor. The LC oscillator therefore uses the LC, or tank, circuit to generate the oscillations.

The frequency of the oscillations produced by the tank circuit depends entirely on the inductor and capacitor values and the condition of resonance. It can be stated with the following formulas:

XL = 2πfL

XC = 1/(2πfC)

We know that, at resonance, XL is equal to XC, so

2πfL = 1/(2πfC)

Simplifying gives the LC oscillator frequency:

f² = 1/((2π)² LC)

f = 1/(2π√(LC))

### Types of LC Oscillators

LC oscillators are classified into different types, which include the following.

#### Tuned Collector Oscillator

This is the basic type of LC oscillator. The circuit is built with a capacitor and a transformer connected in parallel across the collector circuit of the transistor. The tank circuit is formed by the capacitor and the primary winding of the transformer. The secondary winding of the transformer feeds a portion of the oscillations generated in the tank circuit back to the base of the transistor. Please refer to this link to know more about Tuned Collector Oscillator.

#### Tuned Base Oscillator

This is a kind of LC transistor oscillator in which the tuned circuit is placed between the base and ground terminals of the transistor. The tuned circuit is formed by a capacitor and the primary winding of a transformer; the secondary winding of the transformer provides the feedback.

#### Hartley Oscillator

This is a kind of LC oscillator in which the tank circuit includes one capacitor and two inductors. The two inductors are connected in series, and the capacitor is connected in parallel with that series combination. This oscillator was invented by the American scientist Ralph Hartley in 1915. A typical Hartley oscillator's operating frequency ranges from 20 kHz to 20 MHz. It can be realized using FETs, BJTs, or op-amps. Please refer to this link to know more about Hartley Oscillator.

#### Colpitts Oscillator

This is another kind of oscillator in which the tank circuit is built with one inductor and two capacitors. The capacitors are connected in series, and the inductor is connected in parallel with the capacitors' series combination.

This oscillator was invented by Edwin Colpitts in 1918. Its operating frequency ranges from 20 kHz up into the MHz range. It has better frequency stability than the Hartley oscillator. Please refer to this link to know more about Colpitts Oscillator.

#### Clapp Oscillator

This oscillator is a modification of the Colpitts oscillator: an extra capacitor is connected in series with the inductor in the tank circuit. This capacitor can be made variable for variable-frequency applications. The extra capacitor isolates the other two capacitors from transistor parameter effects such as junction capacitance, and improves the frequency stability.

### Applications

These oscillators are broadly used for producing high-frequency signals and are therefore also called RF oscillators. With practical values of capacitors and inductors, it is possible to generate a high range of frequencies, above 500 MHz.

The applications of LC oscillators mainly include radio, television, high-frequency heating, RF generators, etc. This oscillator uses a tank circuit which includes a capacitor 'C' and an inductor 'L'.

### Difference between LC and RC Oscillator

In an RC oscillator, an RC network provides the regenerative feedback and determines the frequency of operation. Each oscillator that we have discussed above instead uses a resonant LC tank circuit, which stores energy in its capacitor and inductor.

The main difference is therefore the frequency-determining network. An LC oscillator can be biased in class A or class C, because the action of the resonant tank sustains the oscillation. An RC oscillator must use class-A biasing, since the frequency-determining RC network has no ability to oscillate like a tank circuit.

Thus, this is all about LC oscillators and how their operating frequency is derived from the circuit. Here is a question for you: what are the advantages of an LC circuit?
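As a quick numeric illustration of the resonant-frequency formula derived above, here is a short Python sketch; the component values are example assumptions, not taken from the article:

```python
from math import pi, sqrt

def resonant_frequency(L, C):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1 / (2 * pi * sqrt(L * C))

# Example values (assumed for illustration): a 10 µH inductor with a 100 pF capacitor
L = 10e-6    # henries
C = 100e-12  # farads
print(f"f = {resonant_frequency(L, C) / 1e6:.3f} MHz")  # about 5.033 MHz
```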
https://www.arxiv-vanity.com/papers/hep-th/0602178/
[ "CERN-PH-TH/2006-033\n\nHUTP-06/A0005\n\nCausality, Analyticity and an\n\n[0.4cm] IR Obstruction to UV Completion\n\n[1cm] Allan Adams, Nima Arkani-Hamed, Sergei Dubovsky,\n\n[0.2cm]Alberto Nicolis, Riccardo Rattazzi111On leave from INFN, Pisa, Italy.\n\n[0.5cm]\n\nJefferson Physical Laboratory,\n\nHarvard University, Cambridge, MA 02138, USA\n\nCERN Theory Division, CH-1211 Geneva 23, Switzerland\n\nInstitute for Nuclear Research of the Russian Academy of Sciences,\n\n60th October Anniversary Prospect, 7a, 117312 Moscow, Russia\n\nAbstract\nWe argue that certain apparently consistent low-energy effective field theories described by local, Lorentz-invariant Lagrangians, secretly exhibit macroscopic non-locality and cannot be embedded in any UV theory whose -matrix satisfies canonical analyticity constraints. The obstruction involves the signs of a set of leading irrelevant operators, which must be strictly positive to ensure UV analyticity. An IR manifestation of this restriction is that the “wrong” signs lead to superluminal fluctuations around non-trivial backgrounds, making it impossible to define local, causal evolution, and implying a surprising IR breakdown of the effective theory. Such effective theories can not arise in quantum field theories or weakly coupled string theories, whose -matrices satisfy the usual analyticity properties. This conclusion applies to the DGP brane-world model modifying gravity in the IR, giving a simple explanation for the difficulty of embedding this model into controlled stringy backgrounds, and to models of electroweak symmetry breaking that predict negative anomalous quartic couplings for the and . Conversely, any experimental support for the DGP model, or measured negative signs for anomalous quartic gauge boson couplings at future accelerators, would constitute direct evidence for the existence of superluminality and macroscopic non-locality unlike anything previously seen in physics, and almost incidentally falsify both local quantum field theory and perturbative string theory.\n\n## 1 Introduction\n\nCan every low-energy effective theory be UV completed into a full theory? To a string theorist in 1985, the answer to this question would have been a resounding “no.” The hope was that the consistency conditions on a full theory of quantum gravity would be so strong as to more or less uniquely single out the standard model coupled to GR as the unique low-energy effective theory, and that the infinite number of other possible effective theories simply couldn’t be extended to a full theory. In support of this view, the early study of perturbative heterotic strings yielded many constraints on the properties of the low-energy theory invisible to the effective field theorist. For instance, the rank of the gauge group was restricted to be smaller than 22.\n\nWith the discovery of D-branes and the duality revolution, these constraints appear to have evaporated, leaving us with a continuous infinity of consistent supersymmetric theories coupled to gravity and very likely a huge discretum of non-supersymmetric vacua . If the low-energy theory describing our universe is not unique but merely one point in a vast landscape of vacua of the underlying theory, then the properties of our vacuum—such as the values of the dimensionless couplings of the standard model—are unlikely to be tied to the structure of the fundamental theory in any direct way, reducing the detailed study of its particle-physical properties to a problem of only parochial interest. 
This situation is not without its consolations. With a vast landscape of vacua, seemingly intractable fine-tuning puzzles, such as the cosmological constant problem and perhaps even the hierarchy problem, can be solved by being demoted from fundamental questions to environmental ones, suggesting new models for particle physics.

Given these developments, it is worth asking again: can every effective field theory be UV completed? The evidence for an enormous landscape of vacua in string theory certainly encourages this point of view—if even the consistency conditions on quantum gravity leave room for huge numbers of consistent theories, surely any consistent model can be embedded somewhere in the landscape. Much of the activity in model-building in the last five years has implicitly taken this point of view, constructing interesting theories purely from the bottom up with no obvious embedding into any microscopic theory. This has been particularly true in the context of attempts to modify gravity in the infrared, including most notably the Dvali-Gabadadze-Porrati (DGP) model and more recent ideas on Higgs phases of gravity [6, 7, 8].

In this note, we wish to argue that the pendulum has swung too far in the "anything goes" direction. Using simple and familiar arguments, we will show that some apparently perfectly sensible low-energy effective field theories governed by local, Lorentz-invariant Lagrangians are secretly non-local, do not admit any Lorentz-invariant notion of causality, and are incompatible with a microscopic S-matrix satisfying the usual analyticity conditions. The consistency condition we identify is that the signs of certain higher-dimensional operators in any non-trivial effective theory must all be strictly positive. The inconsistency of theories which violate this positivity condition has both UV and IR avatars.

The IR face of the problem is that, for the "wrong" sign of these operators, small fluctuations around translationally invariant backgrounds propagate superluminally, making it impossible to define a Lorentz-invariant time-ordering of events. Moreover, in general backgrounds, the equation of motion can degenerate on macroscopic scales to a non-local constraint equation whose solutions are UV-dominated. Thus, while these theories are local in the sense that the field equations derive from a strictly local Lagrangian, and Lorentz-invariant in the sense that Lorentz transforms of solutions to the field equations are again solutions, the macroscopic IR physics of these theories is neither Lorentz-invariant nor local.

The UV face of the problem is also easy to discern: assuming that UV scattering amplitudes satisfy the usual analyticity conditions, dispersion relations and unitarity immediately imply a host of constraints on low-energy amplitudes. One particular such constraint is that the leading low-energy forward scattering amplitude must be non-negative, yielding the same positivity condition on the higher-derivative interactions as the superluminality constraint. Of course the fact that analyticity and unitarity imply positivity constraints is very well known, and the connection of analyticity to causality is an ancient one.

We will focus on models in which the UV cutoff is far beneath the (four-dimensional) Planck scale, so gravity is unimportant, though we will also make some comments about gravitational theories.
Our work thus complements the intrinsically gravitational limitations on effective field theories recently discussed in [9, 10].

Of course, local quantum field theories have a Lorentz-invariant notion of causality and satisfy the usual S-matrix axioms, so any effective field theory which violates our positivity conditions cannot be UV completed into a local QFT. Significantly, since weakly coupled string amplitudes satisfy the same analyticity properties as amplitudes in local quantum field theories—indeed, the Veneziano amplitude arose from S-matrix theory—the same argument applies to weakly coupled strings. Thus, while string theory is certainly non-local in many crucial ways, the effective field theories arising from string theory are in this precise sense just as local as those deriving from local quantum field theory, and satisfy the same positivity constraints.

Positivity thus provides a tool for identifying what physics can and cannot arise in the landscape. Perhaps surprisingly, the tool is a powerful one. For example, it is easy to check that the DGP model violates positivity, providing a simple explanation for why this model has so far resisted an embedding in controlled weakly coupled string backgrounds. Similarly, certain 4-derivative terms in the chiral Lagrangian are constrained to be positive, implying for example that the electroweak chiral Lagrangian cannot be UV completed unless the anomalous quartic gauge boson couplings are positive.

The flipside of this argument is that any experimental evidence of a violation of these positivity constraints would signal a crisis for the usual rules of macroscopic locality, causality and analyticity, and, almost incidentally, falsify perturbative string theory. For example, the DGP model makes precise predictions for deviations in the moon's orbit that will be checked by lunar laser ranging experiments. If these deviations are seen and other pieces of experimental evidence supporting the DGP effective theory are gathered, we would also have evidence for parametrically fast superluminal signal propagation and macroscopic violation of locality, as well as a non-analytic S-matrix, unlike anything previously seen in physics. The same conclusion holds if future colliders indicate evidence for negative anomalous quartic gauge boson couplings. Experimental evidence for either of these theories would therefore clearly disprove some of our fundamental assumptions about physics.

## 2 Examples

Let's begin with some examples of the apparently consistent low-energy effective theories we will constrain. Of course we should be precise about what we mean by a consistent effective theory—loosely, it should have a stable vacuum, no anomalies, and so on; but most precisely, a consistent effective field theory is just one that produces an exactly unitary S-matrix for particle scattering at energies beneath some scale $\Lambda$.

Consider the theory of a single abelian gauge field. The leading interactions in this theory are irrelevant operators,

\mathcal{L} = -\tfrac{1}{4}F_{\mu\nu}F^{\mu\nu} + \frac{c_1}{\Lambda^4}\,(F_{\mu\nu}F^{\mu\nu})^2 + \frac{c_2}{\Lambda^4}\,(F_{\mu\nu}\tilde F^{\mu\nu})^2 + \ldots, \qquad (1)

with $\Lambda$ some mass scale and the $c_i$ dimensionless coefficients. As another example, consider a massless scalar field $\pi$ with a shift symmetry $\pi \to \pi + c$. Again the leading interactions are irrelevant,

\mathcal{L} = \partial_\mu\pi\,\partial^\mu\pi + \frac{c_3}{\Lambda^4}\,(\partial_\mu\pi\,\partial^\mu\pi)^2 + \ldots \qquad (2)

As far as an effective field theorist is concerned, the coefficients $c_i$ are completely arbitrary numbers. Whatever the $c_i$ are, they can give the leading amplitudes in an exactly unitary S-matrix at energies far beneath $\Lambda$.
Of course these theories are non-renormalizable, so an infinite tower of higher operators must be included; nonetheless there is a systematic expansion for the scattering amplitudes in powers of $E/\Lambda$ which is unitary to all orders in this ratio. However, we claim that in any UV completion which respects the usual axioms of S-matrix theory, the $c_i$ are forced to be positive,

c_i > 0. \qquad (3)

It is easy to check that these coefficients are indeed positive in all familiar UV completions of these models. For instance, the Euler-Heisenberg Lagrangian for QED, arising from integrating out electrons at 1-loop, indeed generates $c_1, c_2 > 0$. Analogously, we can identify $\pi$ as a Goldstone boson in a linear sigma model, where $\pi$ and a Higgs field $h$ are united into a complex scalar field $\Phi$,

\Phi = (v + h)\,e^{i\pi/v}, \qquad (4)

with a quartic potential of the form $\lambda\,(\Phi^\dagger\Phi - v^2)^2$. The action for $\pi$ and $h$ at tree level is

\mathcal{L} = \Big(1 + \frac{h}{v}\Big)^2 (\partial\pi)^2 + (\partial h)^2 - M_h^2\,h^2 - \ldots \qquad (5)

Integrating out $h$ at tree level yields the quartic term

\mathcal{L}_{\rm eff} = \frac{\lambda}{M_h^4}\,(\partial\pi)^4 + \ldots \qquad (6)

which has the claimed positive sign.

Another example involves the fluctuations of a brane in an extra dimension, given by a field $y$ with the effective Lagrangian

\mathcal{L} = -f^4\sqrt{1 - (\partial y)^2} = f^4\Big[-1 + \frac{(\partial y)^2}{2} + \frac{(\partial y)^4}{8} + \ldots\Big]. \qquad (7)

Again we find the correct sign. Related to this, the Born-Infeld action for a gauge field localized on a D-brane also gives the correct sign for all terms.

There are also other simple 1-loop checks. For example, imagine coupling fermions to $\pi$ in our UV linear sigma model; for a sufficiently heavy Higgs, 1-loop effects can dominate over the tree terms coming from integrating out the Higgs. For instance, consider integrating out a higgsed fermion. Grouping two Weyl fermions with opposite charges into a Dirac spinor $\Psi$, the effective Lagrangian is

\bar\Psi\Big[i\gamma^\mu\Big(\partial_\mu + i\,\frac{\partial_\mu\pi}{v}\,\gamma_5\Big) - M_\Psi\Big]\Psi. \qquad (8)

At 1-loop, we generate an effective quartic interaction

\mathcal{L}_{\rm eff} = \frac{1}{48\pi^2 v^4}\,(\partial\pi)^4 + \ldots, \qquad (9)

resulting again in a positive leading irrelevant operator.

Note that the positivity constraints we are talking about are not directly related to other familiar positivity constraints that follow from vacuum stability. We know for instance that kinetic terms are forced to be positive, and that scalar mass terms and quartic couplings such as $\lambda\phi^4$ must also be positive. In all these cases, the "wrong" signs are associated with a clear instability already visible in the low-energy theory. Related to this, the euclidean path integrals for such theories are not well-defined, having non-positive-definite euclidean actions.

By contrast, the "wrong" sign for the leading derivative interactions (such as the $c_i$ terms above) is not associated with any energetic instabilities in the low-energy vacuum: the correct sign of the kinetic terms guarantees that all gradient energies are positive, with the terms proportional to the $c_i$ giving only small corrections within the effective theory. Indeed, even if the leading irrelevant operators—the only ones to which our constraints apply—have the "wrong" sign, higher-order terms can ensure the positivity of energy (at least classically), e.g. higher powers of $(\partial\pi)^2$. Related to this, the euclidean path integrals in theories with "wrong" signs do not exhibit any obvious pathologies. Of course this non-renormalizable theory must be treated using the standard ideas of effective field theory, but the healthy euclidean formulation at least perturbatively guarantees a unitary low-energy S-matrix when we continue back to Minkowski space.
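The series expansion in eq. (7) is easy to reproduce mechanically; here is a minimal sympy sketch (the symbol `X`, standing for $(\partial y)^2$, is my own shorthand, not the paper's notation):

```python
# Sketch: expand the brane Lagrangian of eq. (7), L = -f^4 sqrt(1 - (dy)^2),
# and check the sign of the leading irrelevant operator (units f = 1).
import sympy as sp

X = sp.symbols('X')                 # shorthand for (partial y)^2, assumed small
L = -sp.sqrt(1 - X)

series = sp.series(L, X, 0, 3).removeO()
print(sp.expand(series))            # -> X**2/8 + X/2 - 1
# The (partial y)^4 coefficient is +1/8 > 0: the "correct", positivity-obeying sign.
print(series.coeff(X, 2) > 0)       # True
```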
## 3 Signs and Superluminality

If models with the "wrong" signs have stable, Lorentz-invariant vacua with perfectly sensible and unitary perturbative S-matrices, why don't they arise as the low-energy limit of any familiar UV-complete theories? As we will see, while the trivial vacua of such theories are well-behaved, the speed of fluctuations around non-trivial backgrounds depends critically on these signs, with the "wrong" signs leading to superluminal propagation in generic backgrounds. This in turn leads to familiar conflicts with causality and locality which are not present in any microscopically local quantum field or perturbative string theory. Exactly how this conflict arises turns out to be an illuminating question.

Let's begin by establishing the connection between positivity-violating leading irrelevant interactions and superluminality in non-trivial backgrounds. Suppose we expand the effective theory around some non-trivial translationally invariant solution of the field equations. As long as the background field is sufficiently small, the effective field theory remains valid. Translational invariance ensures that small fluctuations satisfy a simple dispersion relation, $\omega = v\,|\vec k|$, with the velocity $v$ determined by the higher-dimension operators in the Lagrangian. The crucial insight is that whether fluctuations travel slower or faster than light depends entirely on the signs of the leading irrelevant interactions.

Let's see how this works in an explicit example. Consider our Goldstone model expanded around the solution $\pi = C_\mu x^\mu$, where $C_\mu$ is a constant vector. The linearized equation of motion for fluctuations $\varphi$ around this background is

\Big[\eta^{\mu\nu} + \frac{4c_3}{\Lambda^4}\,C^\mu C^\nu + \ldots\Big]\,\partial_\mu\partial_\nu\varphi = 0. \qquad (10)

Within the regime of validity of the effective theory, $C^2 \ll \Lambda^4$, all higher-dimension interactions are negligible—all that matters is the leading interaction, $c_3$. Expanding in plane waves, this reads

k^\mu k_\mu + \frac{4c_3}{\Lambda^4}\,(C\cdot k)^2 = 0. \qquad (11)

Since $(C\cdot k)^2 \geq 0$, the absence of superluminal excitations requires that the coefficient $c_3$ be positive.

The case of the electromagnetic field is slightly more involved—the speed of fluctuations around non-trivial backgrounds now depends on both the momentum $k^\mu$ and the polarization $\epsilon^\mu$, and thus on both of the leading interactions in the Lagrangian,

k^\mu k_\mu + \frac{32\,c_1}{\Lambda^4}\,(F_{\mu\nu}k^\mu\epsilon^\nu)^2 + \frac{32\,c_2}{\Lambda^4}\,(\tilde F_{\mu\nu}k^\mu\epsilon^\nu)^2 = 0, \qquad (12)

but the conclusion is completely analogous: there exist no superluminal excitations iff the coefficients $c_1$ and $c_2$ are both positive. Note that these conclusions hold independently of the particular background field one turns on. Note too that even when the shift in the propagation speed is tiny, it can easily be measured in the low-energy effective theory by allowing signals to propagate over large distances.
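The sign logic of eqs. (10)–(11) can be checked directly by solving the modified dispersion relation for a sample background; a small sympy sketch (my own parametrization, with a spacelike gradient $C^\mu = (0, C, 0, 0)$ and mostly-minus metric):

```python
# Sketch: solve the dispersion relation (11), k.k + (4 c3/Lambda^4)(C.k)^2 = 0,
# for a spacelike background gradient C^mu = (0, C, 0, 0) and k^mu = (w, k, 0, 0).
import sympy as sp

w, k, C, Lam = sp.symbols('omega k C Lambda', positive=True)
c3 = sp.symbols('c3', real=True)

kk = w**2 - k**2          # k.k in the mostly-minus metric
Ck = -C*k                 # C.k = C^mu eta_{mu nu} k^nu
w2 = sp.solve(sp.Eq(kk + 4*c3/Lam**4 * Ck**2, 0), w**2)[0]

print(sp.expand(w2/k**2))   # -> v^2 = 1 - 4*C**2*c3/Lambda**4
# v^2 > 1, i.e. superluminal propagation, precisely when c3 < 0.
```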
It is interesting to note that in the case of open strings on D-branes, which are governed by a Born-Infeld Lagrangian of the form (1), the speed of propagation in the presence of a background field strength can be computed exactly in terms of the so-called "open string metric" and is always slower than the speed of light—which is to say, this appearance of the BI Lagrangian in string theory satisfies positivity, with $c_1, c_2 > 0$.

At this point all the problems usually associated with superluminality—the ability to send signals back in time, closed timelike curves, etc.—rear their heads. On the other hand, such effects are appearing within a theory governed by a local Lorentz-invariant Lagrangian, a hyperbolic equation of motion and a perfectly stable vacuum. It is thus instructive to work through the physical consequences of this kind of superluminality and understand exactly when and why these theories run into trouble.

### 3.1 The Trouble with Lorentz Invariance

That the effective Lagrangian is Lorentz-invariant ensures that Lorentz transforms of solutions to the field equations are again solutions to the field equations. It does not, however, ensure that all inertial frames are on an even footing. Consider for example the equation of motion for fluctuations around translationally-invariant backgrounds of our Goldstone model,

\partial_t^2\varphi - v^2\,\partial_i^2\varphi = 0,

where $v$ is the velocity of propagation. This has oscillatory solutions propagating in all directions. Upon boosting with velocity $\beta$ in, say, the $x$ direction, the equation of motion becomes

(1 - v^2\beta^2)\,\partial_t^2\varphi + 2\beta(1 - v^2)\,\partial_t\partial_x\varphi - (v^2 - \beta^2)\,\partial_x^2\varphi - v^2\,\partial_\perp^2\varphi = 0,

whose solutions are the Lorentz boosts of the original solutions. So far so good. However, if $v > 1$, there exists a frame ($\beta = 1/v$) in which the coefficient of $\partial_t^2\varphi$ vanishes, $\varphi$ propagates instantaneously, and the equation of motion becomes a non-dynamical constraint. In this frame it is simply impossible to set up an initial value problem to evolve the field from Cauchy slice to Cauchy slice. (Notice that we are dealing with tiny superluminal shifts in the dispersion relation, so we need huge boost velocities to observe these effects; however, since the Lorentz-invariant combination $C^2/\Lambda^4$ remains tiny, the description of the system in terms of the effective theory remains valid for all observers, ensuring that these effects obtain well within the domain of validity of effective field theory.) When $\beta > 1/v$, the equation of motion is again perfectly dynamical and can certainly be integrated—however, oscillatory solutions to these equations move only in the positive $x$ direction, while modes in other directions may be exponentially growing or decaying. What's going on? How is it possible that what looks like a stable system in one frame looks horribly unstable in another?

The point is that what look like perfectly natural initial conditions for a superluminal mode in one frame look like horribly fine-tuned conditions in another. Indeed, the time-ordering of events connected by propagating fluctuations is not Lorentz-invariant. Observers in relative motion will thus disagree rather dramatically about what constitutes a sensible set of initial conditions to propagate with their equations of motion—initial conditions that to one observer look like turning on a localized source at some unremarkable point in spacetime will appear to the other as a bewildering array of fluctuations incident from past infinity which conspire miraculously to annihilate what the original observer wanted to call the localized source. Said differently, the retarded Green function in one frame is a mixture of advanced and retarded Green functions in another frame. Fixing initial conditions on past infinity thus explicitly breaks Lorentz invariance. In order for the theory to be predictive, we must choose a frame in which to define retarded Green functions. In sufficiently well-behaved backgrounds, there is a particularly natural choice of frame: that in which such conspiracies do not appear.
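The degeneration of the boosted equation of motion at $\beta = 1/v$ can be verified mechanically; a sympy sketch (the operator stand-ins and the boost convention are mine, and the sign of the cross term depends on the boost direction):

```python
# Sketch: boost the fluctuation operator d_t^2 - v^2 d_x^2 and find the frame
# in which the d_{t'}^2 coefficient vanishes; cf. the boosted equation above.
import sympy as sp

v, beta = sp.symbols('v beta', positive=True)
Dt_p, Dx_p = sp.symbols('Dt_p Dx_p')          # stand-ins for d/dt' and d/dx'

gamma2 = 1/(1 - beta**2)
Dt = sp.sqrt(gamma2)*(Dt_p - beta*Dx_p)       # chain rule for the boost
Dx = sp.sqrt(gamma2)*(Dx_p - beta*Dt_p)

op = sp.expand(Dt**2 - v**2*Dx**2)
coeff_tt = sp.simplify(op.coeff(Dt_p, 2))
print(coeff_tt)                               # -> (1 - beta**2*v**2)/(1 - beta**2)
print(sp.solve(sp.Eq(coeff_tt, 0), beta))     # -> [1/v]: a physical frame iff v > 1
```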
Returning to our question of stability vs. instability, consider a solution in the highly boosted frame in which we turn on a localized source for one of the unstable excitations. A Lorentz boost unambiguously maps this to a solution in the stable unboosted frame. The crucial point is that the resulting configuration does not look like a small fluctuation sourced by a localized source—indeed, these are explicitly stable according to the equation of motion—but rather involves turning on initial conditions at a fixed time which vary exponentially in space along the slice. These do not represent instabilities in any usual sense; they simply represent initial conditions which we would normally rule out as unphysical. By the same token, a localized fluctuation which remains everywhere bounded and oscillatory in the original frame transforms into a miraculous conspiracy in the initial conditions that prevents the apparently unstable mode from turning on and growing. Crucially, this never happens in theories with null or timelike propagation, in which Lorentz transformations carry sensible initial conditions to sensible initial conditions.

It is enlightening to run through the above logic in translationally non-invariant backgrounds. Consider again the Goldstone model with the "wrong" sign, $c_3 < 0$, and imagine building, by suitable arrangement of sources, a finite-sized bubble of condensate $\pi = C_\mu x^\mu$ localized in space and time. Let's begin in the rest frame of the condensate, in which $C_\mu = (C, 0, 0, 0)$. Outside the bubble, in the trivial vacuum, fluctuations of $\pi$ satisfy the massless wave equation and propagate along null rays. Inside, however, fluctuations move with velocity $v > 1$ and thus propagate not along the light cone but along a "causal" cone defined by the effective metric appearing in eq. (10). When $c_3 < 0$, this cone is broader than the light cone and fluctuations propagate ever so slightly superluminally (see fig. 1a). However, since fluctuations always propagate forward in time, setting up and solving the Cauchy problem in this background is still no problem.

Figure 1: Bubbles of non-trivial vacua, $\pi = C_\mu x^\mu$, in our Goldstone model with $c_3 < 0$. (a) In the rest frame of the bubble, $C_\mu = (C, 0, 0, 0)$. The solid lines denote the causal cone inside of which small fluctuations are constrained to propagate. (b) The same system in a boosted frame in which the bubble moves with a large velocity in the positive $x'$ direction. For sufficiently large boosts, the causal cone dips below horizontal, and small fluctuations are only seen to propagate to the left, with a different temporal ordering than in the unboosted frame.

As above, for sufficiently large boosts it is possible for the coefficient of the $\partial_t^2$ term in the equation of motion of a rapidly moving observer to vanish (see fig. 1b). Inside the bubble the coefficient of $\partial_t^2\varphi$ in the equation of motion is negative, while outside it is positive—somewhere along the boundary of the bubble, then, the coefficient must pass through zero, at which point the equation of motion becomes again a constraint.
Thus, in any frame in which the causal cone deep inside the bubble dips below the horizontal, the bubble has a closed shell on which evolution from timeslice to timeslice cannot be prescribed by local Hamiltonian flow. This in fact helps explain the peculiar phenomena seen by this boosted observer. Consider the sequence of events depicted in fig. 1. An observer in the rest frame of the bubble sends a superluminal fluctuation from a point A deep inside the bubble to a point B on the boundary, at which the wave exits the bubble, proceeding at the speed of light to a distant point C. In a highly boosted frame, the sequence of events will have B happening before A. How is this possible? The resolution is that the coefficient of $\partial_t^2\varphi$ vanishes at B, so the evolution of $\varphi$ at B can't be predicted from local measurements; instead, a constraint requires the spontaneous appearance of two excitations just inside and outside the bubble, which then continue forwards in time to A and C.

This is not something with which we are familiar, and it makes it seem unlikely that any Lorentz-invariant S-matrix exists within such theories. Indeed, the existence of a preferred class of frames—those in which the field equations do not degenerate to constraint equations—suggests that the Lorentz invariance of the classical Lagrangian is physically irrelevant, and raises doubts about the possibility of embedding such effective theories in UV-complete theories which respect microscopic Lorentz invariance and locality. Notice that systems with superluminal propagation are in this sense somewhat analogous to Lorentz-invariant field theories with ghosts, of which no sense can be made unless Lorentz invariance is explicitly broken. This is because boost invariance makes the rate of decay of the vacuum by ghost emission formally infinite—only if Lorentz invariance is not a symmetry of the theory can the decay rate be made finite. In such systems, however, Lorentz invariance can only arise as an accidental symmetry.

### 3.2 Global Problems with Causality

Figure 2: Two finite bubbles moving with large opposite velocities in the $x$ direction and separated by a finite distance in the $y$ direction. The open cones indicate the local causal cones of $\pi$-fluctuations, and the red line the closed trajectory of a series of small fluctuations along these cones. Such closed timelike trajectories make it clear that no notion of causality or locality survives in a theory which violates positivity.

In the simple system of a single bubble in otherwise empty space, there always exist families of inertial frames in which causality is meaningfully defined. In particular, the co-moving rest frame of the bubble defines a time slicing in this 'good' class, so we can simply declare that evolution is to be prescribed in the rest frame of the bubble and translated into other frames by boosting with the spontaneously broken Lorentz generators. Forward evolution in time in highly boosted frames may look bizarre to a boosted inertial observer, but it is unambiguous. However, there are always backgrounds in which no global rest frame exists—for example, two bubbles of condensate flying past each other at high velocity and finite impact parameter, as in fig. 2—so it is far from obvious whether there is any good notion of causal ordering in these theories.

It is useful to treat this problem with the aid of some formalism.
Consider again the wave equation for small fluctuations around a non-trivial background in the Goldstone system,

G^{\mu\nu}\,\partial_\mu\partial_\nu\varphi = 0, \qquad G^{\mu\nu} = \eta^{\mu\nu} + \frac{4c_3}{\Lambda^4}\,\partial^\mu\pi\,\partial^\nu\pi. \qquad (13)

This equation suggests a natural inverse-metric with which to define "lightcones" and time evolution. (Strictly speaking, the interpretation of $G^{\mu\nu}$ as an effective metric holds only in the geometric-optics limit, in which the wavelengths are short compared to the distance over which $G^{\mu\nu}$ itself varies. Anyway, if a pathology arises already in this limit—and we shall see that it does—we do not need to worry about the case of long wavelengths.) The metric $G^{\mu\nu}$ is indeed what determines the light-cone structure within the bubbles in fig. 1. Now that we have a metric, we can apply the methodology of General Relativity to determine whether causality is meaningfully defined on our spacetime. A first requirement is that the spacetime be time-orientable, meaning that there should exist a globally defined, non-degenerate timelike vector field $t^\mu$. To see that this is the case, note that the inverse of $G^{\mu\nu}$ is

G_{\mu\nu} = \eta_{\mu\nu} - \frac{4c_3}{\Lambda^4}\,\partial_\mu\pi\,\partial_\nu\pi + \ldots \qquad (14)

where the dots stand for terms that can be neglected when $(\partial\pi)^2 \ll \Lambda^4$ and the effective field theory surely makes sense. Then, for $t_\mu \equiv (1, \vec 0)$, we have $G^{\mu\nu}t_\mu t_\nu \simeq 1 > 0$, and therefore the vector $t^\mu \equiv G^{\mu\nu}t_\nu$ is globally defined, non-degenerate and timelike. The vector $t^\mu$ defines at each spacetime point the direction of time flow. Future-directed timelike curves $x^\mu(\tau)$ are those defined by

\dot x^\mu\, t^\nu\, G_{\mu\nu} > 0, \qquad \dot x^\mu \dot x^\nu\, G_{\mu\nu} > 0. \qquad (15)

The second condition for causality to hold is that there be no closed (future-directed) timelike curves (CTCs). In the presence of CTCs, the time coordinate $t$ is not globally defined—it is multiply valued—time evolution again becomes a constrained, non-local problem, and causality is lost.

The Goldstone and the Euler-Heisenberg systems are both time-orientable, at least for backgrounds within the domain of validity of the effective field theory description. Moreover, for simple backgrounds like the single bubble of fig. 1 it is also evident that there are no CTCs, so that a sensible, although not Lorentz-invariant, notion of causality exists. However, in both systems there exist other backgrounds in which the effective metric does admit CTCs, and time evolution cannot be locally defined but must satisfy global constraints.

In our Goldstone system, a simple such offending background is given by two superluminal bubbles flying rapidly past each other, as shown in fig. 2. Note that a head-on collision between the two bubbles in the same plane would certainly take us out of the regime of validity of the effective theory, with $(\partial\pi)^2/\Lambda^4$ becoming large in the overlap region. But it is easy to check that a small separation in a transverse direction—the $y$ direction in the figure—is enough to ensure that $(\partial\pi)^2/\Lambda^4$ can remain parametrically small everywhere in the background, and thus within the effective theory. Note that these pathologies only occur in backgrounds where $G^{00}$ passes through zero and goes negative—as long as $G^{00} > 0$, we can always use $t$ to define a single-valued timelike coordinate.
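As a consistency check on eqs. (13)–(14), one can verify that the two expressions are matrix inverses of each other up to $O(c_3^2)$; a minimal sympy sketch (the sample background gradient $\partial_\mu\pi = (C_0, C_1, 0, 0)$ is my choice):

```python
# Sketch: verify that G_{mu nu} of eq. (14) inverts G^{mu nu} of eq. (13)
# up to O(c3^2), for a sample background gradient d_mu pi = (C0, C1, 0, 0).
import sympy as sp

c3 = sp.symbols('c3', real=True)
Lam, C0, C1 = sp.symbols('Lambda C0 C1', real=True)

eta = sp.diag(1, -1, -1, -1)
dpi_dn = sp.Matrix([C0, C1, 0, 0])      # d_mu pi (lower index)
dpi_up = eta * dpi_dn                   # d^mu pi (upper index)

G_up = eta + 4*c3/Lam**4 * dpi_up * dpi_up.T   # G^{mu nu}, eq. (13)
G_dn = eta - 4*c3/Lam**4 * dpi_dn * dpi_dn.T   # G_{mu nu}, eq. (14)

residual = sp.expand(G_up * G_dn - sp.eye(4))
print(residual.subs(c3, 0).is_zero_matrix)               # True: identity at c3 = 0
print(sp.diff(residual, c3).subs(c3, 0).is_zero_matrix)  # True: O(c3) terms cancel
```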
Another particularly nice example of such closed timelike trajectories involves the propagation of light in a non-trivial background of our "wrong"-signed Euler-Heisenberg system in eq. (1). Consider a static electromagnetic field with $\vec E \perp \vec B$ and $|\vec E| = |\vec B|$, such as might be found deep inside a cylindrical capacitor coaxial with a current-carrying solenoid, as depicted in fig. 3. Photons in this background moving orthogonal to the fields and suitably polarized move with velocity

v = \frac{1 - 32\,c_1\,|\vec E|^2/\Lambda^4}{1 + 32\,c_1\,|\vec E|^2/\Lambda^4}

in the direction parallel to the current (with a correspondingly shifted velocity in the antiparallel direction). If $c_1 < 0$, photons in this system propagate superluminally. Moreover, as $32\,|c_1|\,|\vec E|^2/\Lambda^4 \to 1$, the velocity of small fluctuations diverges as their kinetic term vanishes: this is the critical field strength for which the light cone of the effective metric at each point becomes tangent to the constant-time slices of an observer at rest with respect to the solenoid. Finally, for still larger fields the forward light cone of the effective metric overlaps with the past of the static observer. In particular the cylinder's angular direction is at each point within the forward effective light cone, so that a circle between the cylindrical plates at fixed Lorentz time represents a CTC for the effective metric! Note that this configuration remains entirely within the effective theory, for all local Lorentz invariants are small—indeed, $F_{\mu\nu}F^{\mu\nu}$ and $F_{\mu\nu}\tilde F^{\mu\nu}$ are here fine-tuned to vanish. Furthermore, the small fluctuations needed to probe these CTCs remain within the effective regime as long as their wavelengths remain large compared to $1/\Lambda$. As in the Goldstone example, violations of positivity lead to superluminality and macroscopic violations of causality.

Figure 3: The field between the plates of a charged capacitor coaxial with a current-carrying solenoid is of the form $\vec E = \frac{A}{r}\,\hat r$ and $\vec B = B\,\hat z$. When $c_1 < 0$, small fluctuations at fixed $r$ propagate superluminally. For sufficiently large field strengths, but still within the regime of validity of the effective field theory, the "causal cone" of small fluctuations dips below horizontal, allowing for purely spacelike evolution all the way around the capacitor at fixed $t$—a dramatic violation of locality and causality.

Note that we have been tacitly working with a single positivity-violating field. The situation is just as bad, and in some sense rather worse, if we include additional fields. In particular, we have relied heavily on the existence, for every configuration within the regime of validity of the effective theory, of a locally comoving frame in which the condensate is at rest, i.e. a frame in which all superluminal fluctuations propagate strictly forward in the local timelike coordinate. If we have two superluminal fields, this is generically impossible.

Notice that attempting to define a global notion of causality, and a corresponding local Hamiltonian flow, by working in a non-inertial frame—i.e. by working with a non-trivial metric on our intrinsically flat spacetime—runs into problems when the non-inertial metric admits CTCs, since the affine time parameter cannot be globally defined, so evolution is a globally constrained problem. Now, in GR with asymptotically flat space, CTCs do not arise as long as the energy-momentum tensor satisfies the null energy condition, i.e. if the matter action satisfies certain restrictions. It is a remarkable fact that if the matter dynamics feature neither instabilities nor superluminal modes, then the energy-momentum tensor satisfies the null energy condition. Conversely, as soon as superluminal modes are allowed, the null energy condition is lost, even in the absence of instabilities within the matter dynamics, and CTCs can in principle appear with respect to the gravitational metric as well. Therefore, whether gravity is dynamical or not, superluminal propagation generally leads to a global breakdown of causality.
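Returning to the capacitor–solenoid example, the velocity formula above is easy to tabulate; a tiny numeric sketch (my own shorthand $x \equiv 32\,|c_1|\,|\vec E|^2/\Lambda^4$, for the superluminal case $c_1 < 0$):

```python
# Sketch: photon velocity between the plates for c1 < 0,
# v = (1 + x)/(1 - x) with x = 32*|c1|*E^2/Lambda^4  (my shorthand).
for x in [0.0, 0.2, 0.5, 0.9, 0.99]:
    v = (1 + x) / (1 - x)
    print(f"x = {x:4.2f}  ->  v = {v:8.3f}")
# v = 1 in the trivial background and grows without bound as x -> 1, where the
# fluctuations' kinetic term degenerates and the causal cone tips over.
```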
Another well-known energy condition closely related to superluminality is the dominant energy condition. It states that $-T^{\mu}{}_{\nu}\,u^{\nu}$ should be a future-directed timelike or null vector for any future-directed timelike $u^\nu$; i.e., there should be no energy-momentum flow outside the light cone for any observer. This condition is trivially violated by a negative cosmological constant, as well as by negative-tension objects such as orientifold planes in string theory. To make it meaningful, one must assume that the vacuum contribution is subtracted from $T^{\mu\nu}$. In this form the dominant energy condition follows from the absence of superluminality for a large class of systems. For instance, the sound velocity in a fluid is given by $c_s^2 = \partial p/\partial\rho$, and the dominant energy condition follows from the absence of superluminality, $c_s^2 \leq 1$. For a single derivatively coupled scalar field, the absence of superluminality for a general background requires the Lagrangian to be a convex function of $X \equiv (\partial\pi)^2$. This is not the same as the dominant energy condition. For small fluctuations around the trivial background, the two conditions agree; but for a general background, the absence of superluminality is a stronger condition. Thus the absence of superluminality is a more direct and fundamental requirement than the dominant energy condition.

### 3.3 The Fate of Fate

What have we learned about physics in a Lorentz-invariant theory which allows superluminal propagation only around non-trivial backgrounds? First, there is no Lorentz-invariant notion of causality. Second, for observers in relative motion, disagreements about time ordering can be traced to sharp violations of locality; in sufficiently simple backgrounds, both of these complications can be avoided by a judicious choice of frame in which evolution is everywhere local and causal. Third, in more general backgrounds, attempting to foliate spacetime into (perhaps non-inertial) constant-time slices is obstructed by the existence of closed timelike trajectories, so that time evolution can never be locally defined but is always globally constrained.

Does this mean that effective theories which violate positivity are impossible to realize in nature? Not necessarily. Rather, since positivity-violating effective Lagrangians can in principle be reconstructed from experiments in completely sensible backgrounds, e.g. by measuring low-energy scattering amplitudes in well-behaved backgrounds, these phenomena can be interpreted as signaling the breakdown of the effective theory in pathological backgrounds. This is a novel constraint on effective field theories, which are normally thought to be self-consistent as long as all local Lorentz invariants remain below a UV cutoff, so that UV-sensitive higher-dimension operators in the Lagrangian remain negligible—instead, these effective theories break down in the IR, when local Lorentz invariants get sufficiently small. An underlying theory could complete the IR physics in two distinct ways. One possibility is that the theory simply does not admit backgrounds where local Lorentz invariants can get arbitrarily small—for instance, if the action contains terms with inverse powers of $(\partial\pi)^2$. This means that even the vacuum must spontaneously break Lorentz invariance, though local physics need not be violated. Another possibility is that the underlying theory is fundamentally non-local and capable of manifesting this non-locality at arbitrarily large scales, while remaining Lorentz invariant. In both cases, positivity provides an IR obstruction to a purely UV completion of such effective theories. Of course, no known well-defined theories, e.g. local quantum field theories or perturbative string theories, realize such macroscopic non-locality, so positivity provides an obstruction to embedding these effective field theories into quantum field or string theory. Any experimental observation of a violation of positivity would thus provide spectacular evidence that one of our most fundamental assumptions about Nature—macroscopic locality—is simply wrong.
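Before moving on, the $P(X)$ statement above can be illustrated concretely. Taking as an outside input the standard k-essence sound-speed formula $c_s^2 = P_X/(P_X + 2X P_{XX})$ (not derived in this text), a sympy sketch applied to the positivity-obeying DBI example of eq. (7):

```python
# Sketch: sound speed for a derivatively coupled scalar with Lagrangian P(X),
# X = (d pi)^2, using the standard k-essence formula c_s^2 = P_X/(P_X + 2*X*P_XX)
# (assumed input). The DBI-type Lagrangian of eq. (7) is convex and subluminal.
import sympy as sp

X = sp.symbols('X', positive=True)
P = -sp.sqrt(1 - X)                    # DBI Lagrangian, units f = 1

PX, PXX = sp.diff(P, X), sp.diff(P, X, 2)
cs2 = PX / (PX + 2*X*PXX)
print(sp.simplify(cs2))                # -> 1 - X
print(sp.simplify(cs2 - (1 - X)))      # -> 0: c_s^2 <= 1 for 0 < X < 1
```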
## 4 Analyticity and Positivity Constraints

Interestingly, the UV origins of the IR pathologies we have found are visible already at the level of 2→2 scattering amplitudes: with the wrong signs, these amplitudes fail to satisfy the standard analyticity axioms of S-matrix theory. To see why the UV properties of scattering are relevant to superluminal propagation, it is illuminating to interpret the propagation of a fluctuation on top of a background as a scattering process. The effect we have described corresponds to the re-summation of all the tree-level graphs depicted in fig. 4.

Figure 4: Propagation of a small fluctuation around a background, represented as a sequence of scattering events.

That the leading vertex is a derivative interaction implies a theoretical uncertainty of order $1/\Lambda$ in the position of the interaction, or equivalently in the position at which our fluctuation emerges after having interacted with the background. This is because the derivative involves knowing the field at two arbitrarily close points, but the closest we can take two points in the effective theory is a distance of order $1/\Lambda$—the exact position is fixed by the microscopic UV theory. In a typical collision, any advance or retardation due to physics on scales smaller than the cutoff is thus unmeasurable in the low-energy effective theory. However, during propagation in a translationally invariant background, many scattering events take place, each contributing the same super- or sub-luminal shift. Over large distances and after many scatterings, these small shifts add up to give a macroscopic time advance or delay that can be measured in the effective theory. This consideration makes it clear that the presence or absence of superluminal excitations is a UV question: it depends on the signs of non-renormalizable operators precisely because these interactions cannot be extrapolated down to arbitrarily short scales.

In a local quantum field theory, the subluminality of the speed of small fluctuations around translationally invariant backgrounds follows straightforwardly from the fact that local operators commute outside the lightcone. Recall that, in a free field theory, while $\langle T\,\phi(x)\phi(y)\rangle$ is the Feynman propagator, the commutator $\langle[\phi(x),\phi(y)]\rangle$ determines the retarded and advanced Green's functions as

G_{\rm ret}(x-y) = i\,\theta(x^0 - y^0)\,\langle[\phi(x),\phi(y)]\rangle, \qquad G_{\rm adv}(x-y) = -i\,\theta(y^0 - x^0)\,\langle[\phi(x),\phi(y)]\rangle. \qquad (16)

Therefore, the vanishing of the commutator as an operator statement,

[\phi(x), \phi(y)] = 0 \quad {\rm if} \quad (x-y)^2 < 0, \qquad (17)

implies that $G_{\rm ret}$ vanishes outside the lightcone.

Exactly the same logic holds in the interacting theory. The scalar particles are interpolated by some local operator $\mathcal{O}(x)$ in the full theory. The Fourier transform of $\langle T\,\mathcal{O}(x)\mathcal{O}(y)\rangle$ has a delta-function singularity on the mass shell in momentum space, and the pole structure is such that $\theta(x^0 - y^0)\,\langle[\mathcal{O}(x),\mathcal{O}(y)]\rangle$ is interpreted as the retarded propagator, so that it vanishes outside the lightcone since the operator commutator does.
But exactly the same conclusion follows for any translationally invariant background of the theory. Indeed,

G^{\rm bkgd}_{\rm ret}(x-y) = i\,\theta(x^0 - y^0)\,\langle {\rm bkgd}|[\mathcal{O}(x),\mathcal{O}(y)]|{\rm bkgd}\rangle, \qquad (18)

where $G^{\rm bkgd}_{\rm ret}$ represents the retarded propagator for small fluctuations about the background. Thus again, $G^{\rm bkgd}_{\rm ret}$ vanishes outside the lightcone.

This argument may appear too quick—after all, our effective field theories with the wrong signs for the higher-dimension operators are local quantum field theories—so what goes wrong with the commutator argument? The problem is precisely in the UV singularities associated with their being only effective theories. Due to the derivative interactions, the operator commutators acquire UV-singular terms proportional to derivatives of delta functions localized on the light cone. These serve to fuzz out the light cone on scales comparable to $1/\Lambda$. Indeed, this is nothing but an operator translation of the argument at the end of the last section, explaining how superluminality can arise as a result of a sequence of collisions with the background field. So it is crucial in the above argument that we are dealing with a UV-complete theory, with no UV-divergent terms localized on the lightcone in the commutators.

The commutator argument is convenient when we have the luxury of an off-shell formulation, as in local quantum field theories. But what happens if the UV theory is not a local quantum field theory, for instance if it is a perturbative string theory? The only observable in string theory is the S-matrix. It is therefore desirable to see whether the positivity constraints we are discussing follow more generally from properties of the S-matrix.

Indeed, how is causality encoded in the S-matrix? After all, when we only have access to the asymptotic states, it is not completely clear how we would know whether the interactions giving rise to scattering are causal or not. This was a vexing question to S-matrix theorists, who wanted to build causality directly into the axioms of S-matrix theory. In the end, there was no physically transparent way of implementing causality; instead, all the physical consequences of microcausality were seen to follow from the assumption that the S-matrix, as a function of the kinematic invariants, is the real-boundary value of an analytic function with cuts (and poles associated with exactly stable particles) as dictated by unitarity. Of course it is unsurprising that microlocality should be encoded in analyticity properties—the textbook explanation for the absence of superluminal propagation in media like glass relies on the analytic properties of the index of refraction in the complex frequency plane.

As we will show momentarily, the positivity constraints on the interactions in the effective theories we have been discussing follow directly from dispersion relations and the assumed analyticity properties of the S-matrix. As such, our conclusions apply equally well to perturbative string theories, where the S-matrix satisfies all the usual properties—unsurprisingly, as the Veneziano amplitude arose in the framework of S-matrix theory. It is of course elementary and long understood that analyticity and dispersion relations often imply positivity constraints (though since such arguments are a little old-fashioned we will review them here in detail)—what is not well appreciated is that these positivity conditions can serve as a powerful constraint on interesting effective field theories.

As a warm-up, let us understand why the coefficient of $(\partial\pi)^4$ came out positive in two of our explicit examples—integrating out the Higgs at tree level, or fermions at 1-loop.
At lowest order in the couplings, the analytic structures of the relevant forward amplitudes are those depicted in figs. 5 and 6. Let's consider the amplitude for $\pi\pi \to \pi\pi$ scattering, $\mathcal{M}(s,t)$. At leading order and at low energies,

\mathcal{M}(s,t) = \frac{c_3}{\Lambda^4}\,(s^2 + t^2 + u^2) + \ldots, \qquad (19)

where $s + t + u = 0$. Of course this amplitude violates unitarity at energies far above $\Lambda$, and the theory needs a UV completion.

Consider first the case where the theory is UV completed into a linear sigma model; the full amplitude at tree level is instead

\mathcal{M}(s,t) = \frac{\lambda}{M_h^2}\left[\frac{-s^2}{s - M_h^2} + \frac{-t^2}{t - M_h^2} + \frac{-u^2}{u - M_h^2}\right], \qquad (20)

and of course as $s \to \infty$, $\mathcal{M} \to$ const. Let's further look at the amplitude in the forward direction, $t \to 0$, and define $A(s) \equiv \mathcal{M}(s, t=0)$; note that by crossing symmetry $A(-s) = A(s)$. The analytic structure of this amplitude in the complex $s$ plane is shown in fig. 5. Now consider the contour integral around the contour $\gamma$ shown in the figure,

I = \oint_\gamma \frac{ds}{2\pi i}\,\frac{A(s)}{s^3}. \qquad (21)

Figure 5: Analytic structure of the forward 2→2 scattering amplitude at tree level, in the theory of a Goldstone boson UV completed into a linear sigma model with Higgs mass $M_h$. The poles arise from tree-level Higgs exchange.

In the full theory, this amplitude has poles at $s = \pm M_h^2$ from the $s$- and $u$-channel Higgs exchange. $A(s)$ is bounded by a constant at infinity—more generally, it suffices that $A(s)$ be bounded by $s^2$ at infinity—so the contour at infinity gives no contribution and $I = 0$. On the other hand, $I$ is equal to the sum of the residues of $A(s)/s^3$ at its poles. Since $A(s) \simeq \frac{2c_3}{\Lambda^4}\,s^2$ near the origin, there will be a contribution from a pole at the origin, as well as from the poles at $s = \pm M_h^2$. Thus,

0 = I = \frac{2c_3}{\Lambda^4} + 2\,\frac{{\rm res}\,A(s = M_h^2)}{(M_h^2)^3}, \qquad (22)

where the factor of 2 accounts for the pole at $s = -M_h^2$, since $A$ is even in $s$. In the simple example at hand, the residue of $A$ at $s = M_h^2$ is manifestly negative from eq. (20), and so $c_3$ must be positive. However, for the purposes of the coming discussion it is useful to trace how the positivity of $c_3$ arises more generally from unitarity. Indeed, as $s \to M_h^2$,

A(s) \to \frac{{\rm res}[A(s = M_h^2)]}{s - M_h^2 + i\epsilon} \;\Rightarrow\; {\rm Im}\,A(s) = -\pi\,\delta(s - M_h^2)\,{\rm res}[A(s = M_h^2)]. \qquad (23)

Since by the optical theorem ${\rm Im}\,A(s) = s\,\sigma(s)$, where $\sigma$ is the total cross section for $\pi\pi$ scattering, we have

\frac{c_3}{\Lambda^4} = \frac{1}{\pi}\int ds\,\frac{s\,\sigma(s)}{s^3}, \qquad (24)

which is manifestly positive since the cross section is positive.

What about the case with the fermions integrated out at 1-loop? In this case, the analytic structure is shown in fig. 6. There are cuts beginning at $s = \pm 4M_\Psi^2$, corresponding to $\Psi$ pair production, and extending to $s = \pm\infty$. Now consider again the contour integral around the curve shown in the figure. Again, since $A(s)/s^3$ falls off sufficiently rapidly at infinity, the contribution from the contour at infinity vanishes. As before, there is a contribution to $I$ from the pole at the origin, together with twice the integral of the discontinuity across the positive cut, ${\rm disc}[A(s)] = 2i\,{\rm Im}\,A(s)$. By the optical theorem this is again related to the total cross section for $\pi\pi$ scattering, and we are led to the identical expression for $c_3$ as above.

Figure 6: Analytic structure of the forward 2→2 scattering amplitude at tree level, with the Goldstone couplings arising from integrating out a fermion at 1-loop. The cuts starting at $s = \pm 4M_\Psi^2$ correspond to $\Psi$ pair production.
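The sum rule of eq. (22) can be verified mechanically for the tree-level sigma-model amplitude; a sympy sketch (symbol names mine, with `Mh2` standing for $M_h^2$):

```python
# Sketch: residue check of the sum rule (22) for the tree amplitude (20) at
# t = 0 (u = -s). Since A(s)/s^3 falls off at infinity, its residues must sum
# to zero; the pole at the origin carries 2*c3/Lambda^4 = 2*lambda/Mh^4 > 0.
import sympy as sp

s = sp.symbols('s')
lam, Mh2 = sp.symbols('lambda Mh2', positive=True)     # Mh2 = M_h^2

A = (lam/Mh2)*(-s**2/(s - Mh2) - s**2/(-s - Mh2))      # forward amplitude A(s)

res = [sp.residue(A/s**3, s, p) for p in (0, Mh2, -Mh2)]
print(res[0])                      # ->  2*lambda/Mh2**2  (positive)
print(sp.simplify(sum(res)))       # ->  0, i.e. eq. (22)
```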
Of course this is not an accident. In fact, the difference between the analytic structures of these amplitudes is entirely an artefact of the lowest-order approximation. Let's consider the Higgs theory at 1 loop. The amplitude will now have a cut going all the way to the origin—the discontinuity across the cut reflecting the (tree-level) low-energy $\pi\pi$ scattering cross section. The low-energy cross section grows, and becomes largest in the neighborhood of the Higgs resonance near $s = M_h^2$. At 1-loop, we also see the non-zero Higgs width $\Gamma_h$. As the physical region for $s$ is reached from above (as per the $s + i\epsilon$ prescription), the resonance is seen, since the amplitude takes the usual Breit-Wigner form $\propto 1/(s - M_h^2 + i M_h\Gamma_h)$. There is however no pole on the first, or physical, sheet in the complex $s$ plane—the expected pole at $s = M_h^2 - i M_h\Gamma_h$ is reached by continuing the amplitude under the cut to the second sheet. Of course the presence of the resonance is visible on the physical sheet—as a big bump in the discontinuity across the cut in the vicinity of $s = M_h^2$. This analytic structure is exhibited in fig. 7. Of course the analytic structure is the same for the full amplitude at all orders, and for the fermionic theory as well. In fact, this is the usual general structure of the forward scattering amplitude—analytic everywhere in the complex plane, except for cuts on the real axis (and poles associated with exactly stable particles). Narrow resonances appear as poles on the second sheet.

Figure 7: General analytic structure of the forward 2→2 scattering amplitude. Poles associated with narrow resonances are reached by going under the cut to the second sheet.

Note that analyticity fixes $c_3$ to be strictly positive for an interacting theory, rather than merely non-negative, as was motivated by the IR arguments of Section 3. Here, and in general, the constraints coming from UV analyticity are stronger than those observable in the effective field theory in the IR.

It is instructive to see explicitly how perturbative string theory satisfies the usual analyticity and positivity requirements. Let's consider the amplitude for gauge boson scattering in type I string theory in 10D. At lowest order in $g_s$ this only involves open strings, and furthermore, if we restrict the external gauge bosons to the Cartan subalgebra, the amplitude does not have any contribution from massless gauge boson exchange. The scattering amplitude for gauge bosons with external polarizations $e_i$ in 10 dimensions has the form

\mathcal{M}(s,t) = g_s\,K(e_i)\left[\frac{\Gamma(-s)\Gamma(-u)}{\Gamma(1-s-u)} + \frac{\Gamma(-t)\Gamma(-u)}{\Gamma(1-t-u)} + \frac{\Gamma(-s)\Gamma(-t)}{\Gamma(1-s-t)}\right], \qquad (25)

where we are using units $\alpha' = 1$, and $K$ is given by

K = -\tfrac{1}{4}\,\big(st\;e_1\!\cdot\! e_4\;e_2\!\cdot\! e_3 + {\rm perm.}\big) + \tfrac{1}{2}\,\big(s\;e_1\!\cdot\! k_4\;e_3\!\cdot\! k_2\;e_2\!\cdot\! e_4 + {\rm perm.}\big). \qquad (26)

If we take $t \to 0$ and choose the final-state polarizations equal to the initial ones, in order to look at the forward amplitude relevant for the optical theorem, we find

\mathcal{M}(s, t \to 0) \to g_s\,s\,\tan(\pi s). \qquad (27)

This function is indeed well-behaved in the complex plane at infinity, and is in fact bounded by $s$ away from the real axis. Thus the same arguments apply, and the coefficient of $s^2$ in the forward amplitude is guaranteed to be strictly positive.

Our arguments are clearly general. Other than standard analyticity properties, all that was needed was that the forward amplitude be bounded by $s^2$ at large $|s|$. In fact, under very general assumptions, unitarity forces the high-energy amplitude in the forward limit to be bounded by $s\,\log^2 s$—the famous Froissart bound [15, 16]—as follows. As $s \to \infty$ with $t$ fixed, the total cross section is dominated by the exchange of soft particles at large impact parameter, so we can use the eikonal approximation to get

\mathcal{M}(s, t = -q_\perp^2) \simeq -2is\int d^2b\;e^{i q_\perp\cdot b}\left(e^{2i\delta(b,s)} - 1\right). \qquad (28)

Now, as long as there is a mass gap, the phase shift should fall off exponentially with impact parameter, $\delta(b,s) \propto e^{-mb}$. Locality then suggests that the prefactor grow no faster than a power law, $\delta(b,s) \sim s^N e^{-mb}$, with $N$ determined by the spin of the intermediate particles (e.g., for a single particle of spin $J$, $N = J - 1$). The forward amplitude is thus dominated by events with $\delta$ of order 1, i.e. impact parameters beneath $b_{\max} \sim (N/m)\log s$, bounding the amplitude as $|\mathcal{M}(s,0)| \lesssim s\,b_{\max}^2 \sim s\,\log^2 s$. So long as there is a mass gap—which can often be achieved by a mild IR deformation of the theory—a violation of the Froissart bound implies a dramatic and abnormal behavior of the theory in the UV, with amplitudes that grow faster than any power of $s$. It thus makes sense to study the low-energy implications of a normal UV behavior which satisfies the Froissart bound.
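The eikonal estimate behind the Froissart bound is one line of algebra; a sympy sketch (the power-law ansatz for the phase shift is the one assumed in the text):

```python
# Sketch: the eikonal estimate of eq. (28) ff. With delta(b, s) ~ s^N e^{-m b},
# the phase is O(1) out to b_max, and the forward amplitude is ~ s * b_max^2.
import sympy as sp

s, b, m, N = sp.symbols('s b m N', positive=True)

delta = s**N * sp.exp(-m*b)
b_max = sp.expand_log(sp.solve(sp.Eq(delta, 1), b)[0])
print(b_max)                    # -> N*log(s)/m
print(s*b_max**2)               # -> N**2*s*log(s)**2/m**2  ~  s log^2 s
```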
Let us finally give the general, complete argument for positivity. For simplicity, we restrict our attention to a general scalar field theory with a shift symmetry $\pi \to \pi + c$. The leading form of the low-energy effective Lagrangian is

\mathcal{L} = (\partial\pi)^2 + \frac{a}{\Lambda^3}\,(\partial\pi)^2\,\Box\pi + \frac{c}{\Lambda^4}\,(\partial\pi)^4 + \ldots \qquad (29)

Note that there is a cubic interaction term—we have not assumed a $\pi \to -\pi$ symmetry—which might arise in a CP-violating theory for which $\pi$ is the Goldstone. As we will discuss in the next section, the brane-bending mode of the DGP model is described by precisely this cubic interaction.

The claim is that $c$ must be strictly positive. More precisely, we will find a positivity constraint on the forward $\pi\pi$ scattering amplitude. The argument is virtually identical to the one used in the above examples, with two additional technical subtleties. First, it is well known that the Froissart bound can be violated by the exchange of massless particles, such as gauge bosons and gravitons, so we might worry that it will not hold for the scattering of our massless $\pi$'s, which would allow amplitudes to grow too rapidly at infinity for the contours to be closed. Second, and relatedly, while all the non-analytic behavior of the lowest-order amplitudes of our examples was associated with UV-completion physics, the exact amplitudes have additional cuts in the complex $s$-plane associated with pair production of massless particles; in the absence of a gap, these cuts extend all the way to $s = 0$.

To ensure that cuts from the exchange of massless particles do not modify the conclusion of positivity, we need to regulate the theory in the IR by giving a small mass $m$ to the $\pi$ particles (see fig. 8). This also ensures that the Froissart bound is satisfied. The scattering amplitude is still symmetric in $s$, $t$, and $u$; however, we now have $s + t + u = 4m^2$, so that the forward ($t = 0$) amplitude is even around the point $s = 2m^2$, and the $s$-channel and $u$-channel cuts associated with pair production extend on the real axis from $4m^2$ to $+\infty$ and from $0$ to $-\infty$, respectively (thin cuts in the figure). If the trilinear vertex is non-zero, there is an additional contribution to the scattering amplitude coming from single $\pi$ exchange, leading to additional low-energy poles at the $\pi$ mass. However, given the large number of derivatives involved in the leading interactions, the residues of these IR poles scale like a positive power of $m$, and go to zero in the massless limit. Consequently, these poles disappear in the massless theory: in particular, there is no divergence of the amplitude in the forward ($t \to 0$) limit. This is just a consequence of the fact that, despite the presence of massless particles, the amplitude is dominated by short-distance interactions.

In the forward limit, then, the $s$-channel and $u$-channel low-energy poles are located at $s = m^2$ and at $s = 3m^2$ (gray poles in the figure), and the cuts run from $4m^2$ to $+\infty$ and from $0$ to $-\infty$.

Figure 8: Analytic structure of the forward 2→2 scattering amplitude in the regularized massive theory.
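The forward-limit kinematics quoted here are quick to verify: with $s + t + u = 4m^2$, the $t = 0$ crossing $s \leftrightarrow u$ acts as $s \to 4m^2 - s$, which fixes the symmetry point and the pole locations. A small sympy sketch (the crossing-symmetric toy amplitude at the end is my own illustration, not the paper's):

```python
# Sketch: forward-limit kinematics of the massive-regulated theory.
# At t = 0, u = 4m^2 - s; crossing s <-> u then acts as s -> 4m^2 - s.
import sympy as sp

s, m2 = sp.symbols('s m2', positive=True)   # m2 = m^2
u = 4*m2 - s

# u-channel pole at u = m^2 and cut threshold at u = 4m^2, mapped to s:
print(sp.solve(sp.Eq(u, m2), s))     # -> [3*m2]
print(sp.solve(sp.Eq(u, 4*m2), s))   # -> [0]

# Any s <-> u symmetric forward amplitude is even around s = 2m^2:
A = 1/((s - m2)*(u - m2))            # toy crossing-symmetric example
print(sp.simplify(A.subs(s, 4*m2 - s) - A))   # -> 0
```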
Since we have modified the theory in the deep IR by adding a mass term, we no longer want to probe the limit $s \to 0$; instead, we will probe the behavior of $A(s)$ for $s$ near an intermediate scale $M$, with $m \ll M \ll \Lambda$. We will do this by considering the contour integral

I = \oint_\gamma \frac{ds}{2\pi i}\,\frac{A(s)}{(s - M^2)^3}. \qquad (30)

Note that $M$ is effectively acting as an "RG scale"; since $A(s)$ becomes non-analytic as we approach the real axis, this is not a convenient place to probe the amplitude, so we will not put $M^2$ near the real axis but will instead consider ${\rm Re}(M^2) \sim {\rm Im}(M^2)$.

Now, $I$ is given by the sum of the residues coming from the pole at $s = M^2$, together with the poles near the origin. Since $A(s)$ is bounded by the Froissart bound, once again the contribution to the integral from infinity can be neglected. The contribution from the discontinuity across the cuts is determined by the total cross section as before. We thus have

\frac{1}{2}A''(s = M^2) + \sum_{s_* = m^2,\,3m^2}\frac{{\rm res}\,A(s = s_*)}{(s_* - M^2)^3} = \frac{1}{\pi}\int_{\rm cuts} ds\,\frac{s\,\sigma(s)}{(s - M^2)^3}. \qquad (31)

Because of the derivative interactions, the second term above is suppressed by powers of $m$. Also, since at energies beneath $\Lambda$ the cross section $\sigma(s)$ grows at least as fast as $s^3$, for $m \ll M \ll \Lambda$ we have

\int_{\rm cuts} ds\,\frac{s\,\sigma(s)}{(s - M^2)^3} = 2\int_{{\rm cut\ at\ }s>0} ds\,\frac{s\,\sigma(s)}{s^3} + {\rm corrections\ of\ order\ powers\ of\ }\frac{M^2}{\Lambda^2}. \qquad (32)

Thus we conclude that

A''(s = M^2) = \frac{4}{\pi}\int ds\,\frac{s\,\sigma(s)}{s^3} + O\!\Big(\frac{M^2}{\Lambda^2},\,\frac{m^2}{\Lambda^2}\Big) \qquad (33)

= {\rm positive,\ up\ to\ power\text{-}suppressed\ corrections}. \qquad (34)

So, said precisely: the forward amplitude, away from the real axis and for $m^2 \ll |s| \ll \Lambda^2$, is an analytic function in the complex plane, and its power expansion around any point $M^2$ in this region must contain a quadratic term $\frac{1}{2}A''(M^2)\,(s - M^2)^2$ with a strictly positive coefficient.

This is all we can say in complete generality. However, in theories where, in addition to the dimensionful scale $\Lambda$, there is a dimensionless weak-coupling factor $g$, so that $A(s)$ has an expansion in powers of $g$, we can say more. Such theories include, for instance, weakly coupled linear sigma model completions of non-linear sigma models, where $\Lambda$ corresponds to the Higgs mass and $g$ to the perturbative quartic coupling in the UV theory, or perturbative string theories, where $\Lambda$ is the string scale and $g$ is the string coupling $g_s$. For $s \ll \Lambda^2$, the tree amplitude in such a theory is of the form

A_{\rm tree}(s) = g\sum_{n=1}^{\infty} c_n\Big(\frac{s^2}{\Lambda^4}\Big)^n. \qquad (35)

Note that low-energy cuts, which are absent at leading order in $g$, appear at order $g^2$, precisely as needed for 1-loop unitarity. Thus, by considering the contour integrals

I_n = \oint_\gamma \frac{ds}{2\pi i}\,\frac{A(s)}{s^{2n+1}} \qquad (36)

and running through the same argument (now ignoring the contributions from low-energy cuts, which don't exist at this order in $g$), we conclude

c_n > 0. \qquad (37)

Therefore, in a weakly coupled theory there are an infinite number of constraints on the effective theory: the leading (in the weak coupling $g$) amplitude in the forward direction has an expansion as a polynomial in $s^2$ with all positive coefficients. For example, the forward scattering amplitude in the Goldstone model is

\mathcal{M}(s, t \to 0) = \lambda\Big(\frac{s^2}{M_h^4} + \frac{s^4}{M_h^8} + \frac{s^6}{M_h^{12}} + \ldots\Big), \qquad (38)

while the amplitude for gauge boson scattering in 10D type I string theory is

\mathcal{M}(s, t \to 0) = g_s\Big(\pi s^2 + \frac{\pi^3}{3}s^4 + \frac{2\pi^5}{15}s^6 + \ldots\Big), \qquad (39)

both of which of course have all positive coefficients.
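Both towers of coefficients in eqs. (38)–(39) can be generated mechanically; a sympy sketch (the sigma-model series below matches eq. (38) up to overall normalization):

```python
# Sketch: the two weakly coupled forward amplitudes of eqs. (38)-(39) both
# expand with all-positive coefficients, as the positivity argument requires.
import sympy as sp

s = sp.symbols('s')
lam, Mh2 = sp.symbols('lambda Mh2', positive=True)

# Goldstone/sigma model: forward limit of eq. (20) with t = 0, u = -s.
A_sigma = (lam/Mh2)*(-s**2/(s - Mh2) - s**2/(-s - Mh2))
print(sp.expand(sp.series(A_sigma, s, 0, 8).removeO()))
# -> 2*lambda*(s**2/Mh2**2 + s**4/Mh2**4 + s**6/Mh2**6): all coefficients positive

# Type I string: forward limit of eq. (27), in units alpha' = 1, g_s = 1.
print(sp.series(s*sp.tan(sp.pi*s), s, 0, 8).removeO())
# -> pi*s**2 + pi**3*s**4/3 + 2*pi**5*s**6/15: all positive, cf. eq. (39)
```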
## 5 The DGP Model

The DGP model is an extremely interesting brane-world model which modifies gravity at large distances. In addition to gravity in a 5D bulk, there is a 4D brane localized at an orbifold fixed point, with a large Einstein-Hilbert term localized on this boundary; the action is of the form

S = 2M_4^2\int_{\rm brane} d^4x\,\sqrt{-g}\,R^{(4)} + 2M_5^3\int_{\rm bulk} d^4x\,dy\,\sqrt{-G}\,R^{(5)}, \qquad (40)

with $M_5 \ll M_4$. The large boundary term quasi-localizes a 4D graviton on the brane up to distances of order the crossover scale $r_c \sim M_4^2/M_5^3$; at larger distances, gravity on the brane reverts to being 5-dimensional.

Naively, this model makes sense as an effective field theory up to the lower of the two Planck scales, $M_5$. However, as in the case of massive gravity, there is in fact a lower scale,

\Lambda \sim \frac{M_5^2}{M_4}, \qquad (41)

at which a single 4D scalar degree of freedom $\pi$—loosely, the "brane-bending" mode—becomes strongly coupled. The classical action for this mode can be isolated by taking a decoupling limit $M_4, M_5 \to \infty$ with $\Lambda$ held fixed. In this limit both four- and five-dimensional gravity are decoupled, so the physics is purely four-dimensional, leading to the effective action

\mathcal{L} = 3(\partial\pi)^2 - \frac{(\partial\pi)^2\,\Box\pi}{\Lambda^3}. \qquad (42)

The unusual normalization of the kinetic term is for later convenience. Note that the Lagrangian is derivatively coupled, as expected for a brane-bending mode, and that the reflection symmetry $\pi \to -\pi$ is broken, since the boundary is an orbifold fixed point. All the interesting phenomenology of the DGP model—including the "self-accelerating" solution (which is actually plagued by ghosts, as confirmed by a direct 5D calculation) as well as the modification to the lunar orbit—actually follows from this non-linear classical Lagrangian, with the scalar coupled to the trace of the energy-momentum tensor of the matter fields as $\pi\,T^\mu{}_\mu/M_4$. Indeed, the non-linear properties of this theory are what allow it to be experimentally viable, at least classically.

Now, for realistic parameters, the scale $\Lambda$ corresponds to a length $\Lambda^{-1}$ of roughly 1000 km. If, at the quantum level, all operators of the form

\frac{(\partial\pi)^{2N}}{\Lambda^{4N-4}} + \ldots \qquad (43)

are generated, then, despite the interesting features of the classical theory, the correct quantum theory would lose all predictivity at distances beneath 1000 km. It is therefore interesting to consider loop corrections in this theory. It has been shown that the tree-level cubic term is not renormalized, and that at loop level only operators of the form $\partial^m(\partial^2\pi)^n$ are generated; with additional assumptions about the structure of the UV theory, it has been argued that the healthy classical non-linear properties of the theory survive quantum-mechanically.

These results all follow from the fact that the form of the Lagrangian is preserved by a constant shift in the first derivative of $\pi$,

\partial_\mu\pi \to \partial_\mu\pi + c_\mu. \qquad (44)

Naively this suggests that any term in the Lagrangian should involve at least two derivatives on every $\pi$—however, the variation of the cubic term in eq. (42) under this transformation is a total derivative, and therefore vanishes once integrated. The same holds for the kinetic term, $(\partial\pi)^2$.

This symmetry is nothing but 5D Galilean invariance. The position of the brane along the fifth dimension is (in some gauge) proportional to the canonically normalized $\pi$. The model of course enjoys full 5D Lorentz invariance, but in the decoupling limit in which $\pi$ is the only relevant mode,

M_5, M_4 \to \infty, \qquad \Lambda = {\rm const}, \qquad (45)

the brane becomes flatter and flatter, the 'velocity' of the brane goes to zero, and a 5D Lorentz transformation acts on $\pi$ as a Galilean transformation. This symmetry forces the Lagrangian to take the form

\mathcal{L} = 3(\partial\pi)^2 - \frac{1}{\Lambda^3}\,\Box\pi\,(\partial\pi)^2 + O\big(\partial^m(\partial^2\pi)^n\big), \qquad (46)

that is, all further interactions involve at least two derivatives on every $\pi$. (Of course it is possible to make field redefinitions to eliminate the cubic interaction term, but the theory is not free: the tree-level scattering amplitude is non-zero. The field redefinition eliminates the DGP cubic term but generates quartic interactions, as needed to reproduce the amplitude.)
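The quoted $\Lambda^{-1} \sim 1000$ km follows from eq. (41) with cosmologically motivated inputs; a numeric sketch (the ballpark values $M_4 \approx 2.4\times 10^{18}$ GeV and $r_c \approx H_0^{-1} \approx 10^{28}$ cm are my own standard choices, not figures from the text):

```python
# Sketch: order-of-magnitude check of Lambda^-1 ~ 1000 km for the DGP model.
# Inputs (ballpark): M4 ~ reduced Planck mass, crossover r_c ~ Hubble length.
GEV_INV_CM = 1.97e-14              # hbar*c: 1 GeV^-1 expressed in cm

M4  = 2.4e18                       # GeV
r_c = 1e28 / GEV_INV_CM            # 10^28 cm converted to GeV^-1

M5  = (M4**2 / r_c) ** (1.0/3.0)   # from r_c = M4^2 / M5^3
Lam = M5**2 / M4                   # strong-coupling scale, eq. (41)

print(f"M5        ~ {M5*1e3:.0f} MeV")                  # tens of MeV
print(f"Lambda^-1 ~ {GEV_INV_CM/Lam/1e5:.0f} km")       # ~ 10^3 km
```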
However, the cubic form of the action is most convenient: first, because it makes the Galilean symmetry simply manifest, and second, because the coupling to matter is simple, a linear coupling of the form $\pi T^\mu_{\ \mu}$ to the trace of the energy-momentum tensor.

Indeed, the absence of the $(\partial\pi)^{2N}$ terms is the only thing making this effective theory special in any sense. After all, a generic UV theory yielding a Goldstone boson $\pi$ which violates the reflection $\pi \to -\pi$ would have the same leading cubic interaction, which is the lowest order derivative coupling for a scalar. The only thing that can distinguish the DGP scalar Lagrangian from a generic Goldstone theory is the presence of the Galilean symmetry and the associated absence of $(\partial\pi)^4$-type terms in the Lagrangian. And again, it is the absence of such terms in the effective action that gives it a chance for non-linear health and experimental viability.

However, precisely this property of the theory makes it impossible to UV complete into a UV theory with the usual analyticity conditions on the $S$-matrix. As we saw in the last section, the coefficient of the $(\partial\pi)^4$ term, which gives rise to an $s^2$ term in the forward amplitude, must be strictly positive. Instead, in the DGP model, this operator is forced to vanish by the Galilean symmetry. The amplitude for $\pi\pi \to \pi\pi$ scattering has a tree-level exchange contribution from the DGP term (see fig. 9) as well as contributions from the higher-order $\partial^m(\partial^2\pi)^n$ terms, but they all begin at order $s^3$:

$$\mathcal{M}(s,t) = \frac{s^3 + t^3 + u^3}{\Lambda^6} + O(s^4, t^4, u^4) \tag{47}$$

In the forward limit $t \to 0$, this amplitude vanishes; in particular, the piece proportional to $s^2$ vanishes identically. Of course there will be some forward amplitude at even higher orders, but these will involve even more suppression by powers of $\Lambda$ and there will be no $s^2$ piece. We conclude that it is impossible to complete an effective theory for a scalar with a shift symmetry of the form $\partial_\mu\pi \to \partial_\mu\pi + c_\mu$ into a UV theory with the usual analyticity properties for the $S$-matrix. Again, this includes any local quantum field theory or perturbative string theory. Conversely, any experimental indication for the validity of the DGP model can then be taken as the direct observation of something that is not local QFT or string theory.

Associated with this, it is easy to see that signals around non-trivial backgrounds can travel superluminally. It is trivial to see that this is possible: the leading interaction term is cubic, and therefore around a background the modification of the speed of propagation for small fluctuations is linear in the background field and can therefore have either sign. And indeed simple physical backgrounds allow superluminal propagation. $\pi$ is sourced by $T^\mu_{\ \mu}$, the trace of the stress-energy tensor. In the presence of a compact spherical source, $\pi$ develops a radial background $\pi_0(r)$. The gradient of this solution is

$$\pi_0'(r) = \frac{3\Lambda^3}{4r}\left[\sqrt{r^4 + \frac{1}{18\pi}R_V^3\,r}\; - r^2\right], \tag{48}$$

where $R_V$ is the so-called Vainshtein radius of the source. In such a Schwarzschild-like solution the quadratic action for the fluctuation $\varphi$ is

$$\mathcal{L}_\varphi = \left[3 + \frac{2}{\Lambda^3}\left(\pi_0'' + \frac{2\pi_0'}{r}\right)\right]\dot\varphi^2 - \left[3 + \frac{4}{\Lambda^3}\frac{\pi_0'}{r}\right](\partial_r\varphi)^2 - \left[3 + \frac{2}{\Lambda^3}\left(\pi_0'' + \frac{\pi_0'}{r}\right)\right](\partial_\Omega\varphi)^2, \tag{49}$$

where $(\partial_\Omega\varphi)^2$ is the angular part of $(\nabla\varphi)^2$. The speed $c_r$ of a fluctuation moving along the radial direction is given by the ratio between the coefficient of $(\partial_r\varphi)^2$ and that of $\dot\varphi^2$ in the equation above; on the solution eq. (48) it is larger than 1 for any $r$!

A plot of $c_r$ versus $r$ is given in fig. 10: it starts from $c_r = \sqrt{4/3} \simeq 1.15$ at $r = 0$, reaches a maximum at $r \sim R_V$, and asymptotes to 1 (from above!) for $r \gg R_V$. This is an $O(1)$ deviation from the speed of light in an enormous region of space; for instance, for the Sun, $R_V$ is of order $10^{20}$ cm.
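The behavior of $c_r$ can be checked numerically from eqs. (48)-(49) (our sketch, in units $\Lambda = R_V = 1$; the $1/(18\pi)$ factor is read off the reconstructed eq. (48), so treat it as an assumption of this sketch):

```python
# Sketch: radial sound speed c_r(r) from eqs. (48)-(49), units Lambda = R_V = 1.
# Shows c_r > 1 everywhere, c_r -> sqrt(4/3) as r -> 0, and c_r -> 1 (from
# above) as r -> infinity.
import numpy as np

A = 1.0 / (18.0 * np.pi)                 # R_V^3 / (18 pi) with R_V = 1

def dpi0(r):
    """pi_0'(r) from eq. (48)."""
    S = np.sqrt(r**4 + A * r)
    return 0.75 * (S / r - r)

def ddpi0(r):
    """pi_0''(r), differentiated analytically to avoid cancellation noise."""
    S = np.sqrt(r**4 + A * r)
    Sp = (4.0 * r**3 + A) / (2.0 * S)
    return 0.75 * (Sp / r - S / r**2 - 1.0)

def c_r(r):
    K_t = 3.0 + 2.0 * (ddpi0(r) + 2.0 * dpi0(r) / r)   # coeff of phi_dot^2
    K_r = 3.0 + 4.0 * dpi0(r) / r                      # coeff of (d_r phi)^2
    return np.sqrt(K_r / K_t)

for r in [1e-4, 1e-2, 0.1, 0.3, 1.0, 10.0, 100.0]:
    print(f"r = {r:8.4g}   c_r = {c_r(r):.8f}   c_r - 1 = {c_r(r) - 1:.3e}")
```

The printed values interpolate between $\sqrt{4/3} \simeq 1.1547$ at small $r$ and $1^+$ at large $r$, staying above 1 throughout.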
Clearly highly boosted observers can observe parametrically fast propagation, and indeed if they boost too much they can observe the peculiar time-reversed sequence of events. It is also easy to find spatially homogeneous and isotropic background configurations for which even observers at rest can observe parametrically fast signal propagation.

Having found superluminal propagation, we run into the same paradoxes as we discussed in section 2. For instance, two blobs of $\pi$ field boosted towards each other, with a small transverse separation, give rise to the same closed timelike curve problems as in the two boosted blob Goldstone examples. However, while there we assumed the presence of suitable sources that could give rise to our paradoxical field configuration, here we expect something more. Since the simple Schwarzschild-like solution we just described features superluminal propagation, a closed timelike curve should appear in the field actually sourced by two masses boosted towards each other. This is not easy to check: a quick estimate shows that in order to close the closed timelike curve the two masses must pass so close to each other that, even if their Vainshtein regions do not overlap, the presence of one mass induces sizable non-linearities close to the other, and vice versa. In other words, the full solution is not just the linear superposition of two Schwarzschild-like solutions; new non-linear anisotropic corrections must be taken into account. It would be interesting to further investigate such a configuration and understand whether a closed timelike curve really arises.

It is instructive to contrast this with what happens for a generic Goldstone theory, where the leading interaction is still the same cubic term, but we also have the $(\partial\pi)^4$ terms. In the presence of a generic background field $\pi_0$, the cubic interaction gives a contribution to the quadratic Lagrangian for the fluctuations which is linear in the background,

$$\delta\mathcal{L} = \frac{2}{\Lambda^3}\left(\partial_\mu\partial_\nu\pi_0 - \eta_{\mu\nu}\Box\pi_0\right)\partial^\mu\varphi\,\partial^\nu\varphi. \tag{50}$$

If we turn on a background with constant second derivatives, then the field equation for the fluctuation is exactly of the form eq. (10), with the background bilinear $\partial_\mu\pi_0\,\partial_\nu\pi_0$ replaced by $(\partial_\mu\partial_\nu - \eta_{\mu\nu}\Box)\pi_0$. Exactly as in the DGP analysis, it appears that superluminal signals are possible, since $\partial_\mu\partial_\nu\pi_0$ has no a priori positivity property. However, the $(\partial\pi)^4$ term saves the day. We can certainly set up in some region a background with constant $\partial^2\pi_0$ and negligible $\partial\pi_0$, so that the effect of the cubic dominates over that of the quartic; but this region cannot be larger than $L \sim (\Lambda/\partial^2\pi_0)^{1/2}$, since $\partial\pi_0$ grows linearly with distance for constant $\partial^2\pi_0$, and after a while the $(\partial\pi)^4$ term starts dominating the kinetic Lagrangian of the fluctuations. Once this happens, if the coefficient of $(\partial\pi)^4$ is positive there are no superluminal excitations.

The correction to the propagation speed inside the region where the cubic dominates is $\delta c \sim \partial^2\pi_0/\Lambda^3$, so the maximum time advance/delay we can measure for a fluctuation traveling all across the 'superluminal region' is

$$\delta t_{\rm max} \sim L\,\delta c \sim \frac{(\partial^2\pi_0)^{1/2}}{\Lambda^{5/2}}. \tag{51}$$

Now, we would normally require $\partial^2\pi_0 \lesssim \Lambda^3$ in order for the effective theory to make sense. In such a case we immediately get $\delta t_{\rm max} \lesssim 1/\Lambda$, too small a time interval to be measured inside the effective theory.
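Spelling out the estimate behind eq. (51) (our reconstruction of the intermediate step): the cubic and quartic contributions to the fluctuations' kinetic term become comparable when

$$\frac{\partial^2\pi_0}{\Lambda^3} \sim \frac{(\partial\pi_0)^2}{\Lambda^4} \sim \frac{(\partial^2\pi_0)^2\,L^2}{\Lambda^4} \quad\Longrightarrow\quad L \sim \left(\frac{\Lambda}{\partial^2\pi_0}\right)^{1/2},$$

so that

$$\delta t_{\rm max} \sim L\,\delta c \sim \left(\frac{\Lambda}{\partial^2\pi_0}\right)^{1/2}\frac{\partial^2\pi_0}{\Lambda^3} = \frac{(\partial^2\pi_0)^{1/2}}{\Lambda^{5/2}},$$

which reproduces eq. (51) and, for $\partial^2\pi_0 \lesssim \Lambda^3$, gives $\delta t_{\rm max} \lesssim 1/\Lambda$.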
However, it has been argued that in a theory like eq. (42), consistent assumptions about the UV physics can be made to extend the regime of validity of the effective theory to much larger background fields and to much shorter length scales; in particular, in the presence of a strong background field the effective cutoff scale is raised from $\Lambda$ to a parametrically higher scale set by the background. In this case too the superluminal time advance is unmeasurably small: the size of the region in which the effect of the cubic dominates over the quartic is of the order of the inverse of the UV effective cutoff. In both cases the quartic saves the day. Thus, not only does the coefficient of the $(\partial\pi)^4$ term have to be positive, it must be set by the same scale $\Lambda$ as the coefficient of the cubic term, a conclusion we could have also reached from the dispersion relation arguments of the previous section.

We have uncovered a subtle inconsistency of the DGP model. As a classical theory, it has well-defined, two-derivative, Lorentz invariant equations of motion; this property underlies the healthy non-linear behavior of the theory and distinguishes it from more brutal modifications of gravity, such as the theory of a massive graviton. However, just as in the simple scalar field theory examples studied in the previous sections, which also have Lorentz invariant two-derivative equations of motion, the theory suffers from a lack of a Lorentz-invariant notion of causality, which is in turn related to a violation of the usual analyticity properties of scattering amplitudes.

Of course, even in brane models respecting the usual UV locality properties, there are DGP terms induced on the brane. What we have shown is that we cannot have a decoupling limit with $\Lambda \sim M_5^2/M_4$ holding fixed. This suggests that there is a limit on how large $M_4$ can be in any sensibly causal theory; it would be interesting to investigate these questions from the geometrical perspective of the five dimensional theory in more detail.

There is also an interesting connection between our constraint on the DGP model and the "weak gravity" conjecture. Both situations involve trying to make some interaction much weaker than bulk gravity: in DGP it is the 4D gravity on the brane, taking $M_4 \to \infty$, while in the weak gravity setting it is the attempt to keep the Planck scale and the cutoff of the theory fixed, but send the gauge coupling to zero. We have seen that a simple physical principle (requiring subluminal signal propagation) prohibits the DGP limit. Similarly, it appears that other general physical principles, such as the absence of global symmetries in quantum gravity, block taking the weak coupling limit. In both cases, there are obstacles to making any interaction physically weaker than bulk gravity.

## 6 Positivity in the Chiral Lagrangian

There are similar positivity conditions in more familiar effective field theories in particle physics. Consider for instance the chiral Lagrangian, parametrized by the unitary field $U = e^{i\pi^a\sigma^a/f}$,

$$\mathcal{L} = f^2\,\mathrm{tr}(\partial_\mu U^\dagger\partial^\mu U) + L_4\left[\mathrm{tr}(\partial_\mu U^\dagger\partial^\mu U)\right]^2 + L_5\,\mathrm{tr}(\partial_\mu U^\dagger\partial_\nu U)\,\mathrm{tr}(\partial^\mu U^\dagger\partial^\nu U) + \cdots \tag{52}$$

There is a solution of the equations of motion with $\pi$ pointing in a specific isospin direction, which we can take to be $\pi^3$, of the form

$$\pi^3(x) = c_\mu x^\mu \tag{53}$$

We can look at the small fluctuations of $\pi^3$ as well as of $\pi^{1,2}$ around this background. It is then easy to check that in order for both to propagate subluminally we must have

$$L_{4,5} > 0 \tag{54}$$

In our previous Abelian examples, the 4-derivative terms were the leading irrelevant interactions in the theory, and so did not have any logarithmic scale dependence. On the other hand, $L_{4,5}$ are logarithmically scale dependent; so the positivity constraint is then actually a constraint on the running couplings at energies parametrically smaller than the cutoff. Indeed, we can imagine turning on a background where $\partial\pi$ is approximately constant over a length scale much larger than the cutoff length; in order to avoid superluminality we should demand that the running couplings evaluated near this scale are positive.
Of course, the log running of $L_{4,5}$ induced off the lowest-order 2-derivative term pushes $L_{4,5}$ positive, and so in a theory without a weak-coupling expansion, at low energies $L_{4,5}$ are dominated by the log running contribution and there is no interesting constraint on the UV physics. However, in theories with a weak coupling $g$, the matching contribution to $L_{4,5}$ at the scale $\Lambda$ will dominate over the log-running contribution down to energies exponentially small in $1/g$, and we can independently identify the matching contribution to $L_{4,5}$ from the high-energy physics, separately from the low-energy running contribution; hence the positivity bound is a non-trivial constraint.

Naturally, the existence of these sorts of positivity constraints following from dispersion theory is very well known, though not often said very explicitly; our present example was discussed (though perhaps not widely recognized) in the literature long ago.

Of course the pion chiral Lagrangian follows from QCD, which is a local quantum field theory, so these conditions must necessarily be satisfied. The situation is perhaps more interesting for the electroweak chiral Lagrangian governing the dynamics of the longitudinal components of the $W$ and $Z$ bosons. While it is most likely, given precision electroweak constraints, that the UV completion involves Higgses and a linear sigma model, there may also be more exotic possibilities, including in the extreme case a low fundamental scale close to the electroweak scale. This physics should manifest itself through the higher-dimension operators in the effective Lagrangian, and assuming custodial $SU(2)$ is a good approximate symmetry, the constraint on the electroweak chiral Lagrangian is the same (with the derivatives covariantized for the electroweak gauge symmetry). These operators are not associated with the well-known constraints of precision electroweak physics; instead, in unitary gauge $U = 1$, they represent anomalous quartic couplings for the $W$ and $Z$, which must be positive.

## 7 Examples from String Theory

### 7.1 Little String Theory

As we have seen, UV theories which are local or, what is the same, satisfy the usual analyticity properties of $S$-matrix theory, give rise to effective theories with positivity constraints on certain leading irrelevant interactions that forbid superluminality and macroscopic non-locality. If we experimentally measure such interactions and find that they are zero or negative, then we have direct evidence for a fundamentally non-local theory. But if we also happen to know some of these operators theoretically, on other grounds, we can use them as a locality test for the UV completion.

The prime candidate for such a test is of course M-theory, which does not have a weakly coupled description, and is thought by many to be fundamentally non-local. However, as we saw in the last section, in gravitational theories there is no well-defined way to extract information about higher-dimension operators from superluminality constraints, since the notion of the correct metric to use for the GR lightcone can be modified by higher-dimension operators, while gravity itself already bends all signals inside the underlying Minkowski lightcone. Associated with this, gravitational amplitudes are dominated by long-distance graviton exchange in the forward direction, with $t$-channel poles, and the dispersion relation arguments can't be used.

However, we can certainly study non-gravitational UV completions of higher-dimensional gauge theories, especially supersymmetric ones.
In five dimensions, maximally supersymmetric Yang-Mills theories are UV completed into the six dimensional $(2,0)$ superconformal theory, which although mysterious is still a local CFT. On the other hand, 6D super-Yang-Mills is UV completed into the 6D little string theory, which is a non-gravitational string theory with string tension set by the 6D Yang-Mills coupling but no small dimensionless coupling. This is another candidate for a "non-local" theory. This issue can be probed if we can determine the coefficients of the $F^4$ operators in the low-energy SYM theory: if any of them have the "wrong" sign, this would prove that the LST is dramatically non-local.

Some of these terms have in fact been determined by a variety of methods. For instance, the
[ null, "https://media.arxiv-vanity.com/render-output/6536240/x1.png", null, "https://media.arxiv-vanity.com/render-output/6536240/x2.png", null, "https://media.arxiv-vanity.com/render-output/6536240/x3.png", null, "https://media.arxiv-vanity.com/render-output/6536240/x4.png", null, "https://media.arxiv-vanity.com/render-output/6536240/x5.png", null, "https://media.arxiv-vanity.com/render-output/6536240/x6.png", null, "https://media.arxiv-vanity.com/render-output/6536240/x7.png", null, "https://media.arxiv-vanity.com/render-output/6536240/x8.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9263019,"math_prob":0.95701236,"size":71662,"snap":"2023-14-2023-23","text_gpt3_token_len":14436,"char_repetition_ratio":0.15808423,"word_repetition_ratio":0.016726825,"special_character_ratio":0.18828668,"punctuation_ratio":0.08937446,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9696396,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-03T21:08:24Z\",\"WARC-Record-ID\":\"<urn:uuid:e325b49a-062a-4b87-94ec-6e75d34c79da>\",\"Content-Length\":\"1049380\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:26efb688-9198-443b-a087-92b55c5b3adc>\",\"WARC-Concurrent-To\":\"<urn:uuid:97d4ceeb-741c-4e52-8a3b-3e5202ee3e50>\",\"WARC-IP-Address\":\"172.67.158.169\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/hep-th/0602178/\",\"WARC-Payload-Digest\":\"sha1:2CADARILBPHVEHOZSNUOI5NUPETKLTG5\",\"WARC-Block-Digest\":\"sha1:5T4LMTHPIVF44HEKTIUE3D3ZFPZ3FDYS\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224649343.34_warc_CC-MAIN-20230603201228-20230603231228-00239.warc.gz\"}"}
https://open.kattis.com/problems/dvds
# DVDs

Dezider is very unhappy as he just discovered that his DVDs got unsorted. Dezider is (occasionally) very organized and during one of his get-organized spells he numbered the DVDs from $1$ to $n$. He keeps the DVDs in a tall stack and he wants to have them sorted in increasing order by their number, with $1$ at the bottom and $n$ at the top of the stack. The trouble is that his space allows him to perform only one type of sorting operation: take a DVD, pull it out of the stack while the DVDs above it fall down by one position, then place it at the top of the stack. What is the smallest number of such operations he needs to do to sort the DVD stack?

## Input

The first line contains $k$, the number of input instances. Each input instance is described on two lines. The first line contains $1 \leq n \leq 1\,000\,000$. The second line lists the DVDs in the initial order on the stack, from the bottom to the top.

## Output

The output contains $k$ lines. The $i$-th line corresponds to the $i$-th input. It contains the smallest number of operations needed to sort the stack.

## Note

In the first sample input it suffices to take DVD $4$ and move it to the top of the stack; then the stack is sorted. In the second sample input one can first move DVD $4$ and then DVD $5$ to get a sorted stack.

Sample Input 1:

```
2
4
1 4 2 3
5
5 1 2 4 3
```

Sample Output 1:

```
1
2
```
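One possible approach (a sketch, not an official solution): only the DVDs forming the longest prefix $1, 2, \ldots, k$ that already appears in increasing order of position can stay put; the remaining $n - k$ DVDs must each be moved to the top once, in increasing order.

```python
# Sketch: the answer is n - k, where k is the largest value such that
# DVDs 1..k appear in the stack in increasing order of position.
import sys

def solve(n, stack):
    pos = [0] * (n + 1)
    for i, dvd in enumerate(stack):
        pos[dvd] = i
    k = 1
    while k < n and pos[k + 1] > pos[k]:
        k += 1
    return n - k

def main():
    data = sys.stdin.read().split()
    t = int(data[0]); idx = 1
    out = []
    for _ in range(t):
        n = int(data[idx]); idx += 1
        stack = [int(v) for v in data[idx:idx + n]]; idx += n
        out.append(str(solve(n, stack)))
    print("\n".join(out))

if __name__ == "__main__":
    main()
```

On the two samples this returns 1 and 2, matching the expected output.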
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92210656,"math_prob":0.98388886,"size":1360,"snap":"2022-05-2022-21","text_gpt3_token_len":363,"char_repetition_ratio":0.13643068,"word_repetition_ratio":0.015037594,"special_character_ratio":0.2632353,"punctuation_ratio":0.0777027,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9646861,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-25T18:56:54Z\",\"WARC-Record-ID\":\"<urn:uuid:e77396a3-8cbb-4608-bf83-05700715d4ca>\",\"Content-Length\":\"18408\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9a43dab1-bec0-4cbf-bcbe-557d86ae6be2>\",\"WARC-Concurrent-To\":\"<urn:uuid:e1970385-223e-4762-bf91-44afcaff9928>\",\"WARC-IP-Address\":\"172.66.40.199\",\"WARC-Target-URI\":\"https://open.kattis.com/problems/dvds\",\"WARC-Payload-Digest\":\"sha1:FKPWFHYMH77ZILDGXH3OQOQGCL3XOO3L\",\"WARC-Block-Digest\":\"sha1:Z4LAXTJLPS5U4MTRKOEAJNMYLZPLNC2U\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662593428.63_warc_CC-MAIN-20220525182604-20220525212604-00148.warc.gz\"}"}
https://eudml.org/search/page?q=sc.general*op.AND*l_0*c_0author_0eq%253A1.Andrzej+Fryszkowski&qt=SEARCH
## Currently displaying 1 – 8 of 8

### The generalization of Cellina's Fixed Point Theorem

Studia Mathematica

### Continuous selections for a class of non-convex multivalued maps

Studia Mathematica

### Existence of solutions of functional-differential inclusion in nonconvex case

Annales Polonici Mathematici

### Abstract differential inclusions with some applications to partial differential ones

Annales Polonici Mathematici

### A class of retracts in $L^p$ with some applications to differential inclusion

Discussiones Mathematicae, Differential Inclusions, Control and Optimization

### Filippov Lemma for matrix fourth order differential inclusions

Banach Center Publications

In the paper we give an analogue of the Filippov Lemma for the fourth order differential inclusions

$$y'''' - (A^2 + B^2)y'' + A^2B^2 y \in F(t,y), \qquad (*)$$

with the initial conditions

$$y(0) = y'(0) = y''(0) = y'''(0) = 0, \qquad (**)$$

where the matrices $A, B \in \mathbb{R}^{d\times d}$ are commutative and the multifunction $F: [0,1]\times\mathbb{R}^d \rightsquigarrow \mathrm{cl}(\mathbb{R}^d)$ is Lipschitz continuous in $y$ with a $t$-independent constant $l < \|A\|^2\|B\|^2$. Main theorem: Assume that $F: [0,1]\times\mathbb{R}^d \rightsquigarrow \mathrm{cl}(\mathbb{R}^d)$ is measurable in $t$ and integrably bounded. Let $y_0 \in W^{4,1}$ be an arbitrary function satisfying $(**)$ and such that $d_H\big(y_0(t), F(t, y_0(t))\big) \le p_0(t)$ a.e. in $[0,1]$, where $p_0 \in L^1[0,1]$. Then there exists a solution $y \in W^{4,1}$ of $(*)$ with $(**)$ such...

### Filippov Lemma for certain second order differential inclusions

Open Mathematics

In the paper we give an analogue of the Filippov Lemma for certain second order differential inclusions with the initial conditions $y(0) = 0$, $y'(0) = 0$, involving a matrix $A \in \mathbb{R}^{d\times d}$, where the multifunction is Lipschitz continuous in $y$ with a $t$-independent constant $l$. The main result is the following: Assume that $F$ is measurable in $t$ and integrably bounded. Let $y_0 \in W^{2,1}$ be an arbitrary function fulfilling the above initial conditions and such that the corresponding defect is bounded by $p_0(t)$, where $p_0 \in L^1[0,1]$. Then there exists a solution $y \in W^{2,1}$...

### Vitali Lemma approach to differentiation on a time scale

Studia Mathematica

A new approach to differentiation on a time scale $\mathbb{T}$ is presented. We give a suitable generalization of the Vitali Lemma and apply it to prove that every increasing function $f: \mathbb{T} \to \mathbb{R}$ has a right derivative $f_+'(x)$ for $\mu_\Delta$-almost all $x \in \mathbb{T}$. Moreover, $\int_{[a,b)} f_+'(x)\, d\mu_\Delta \le f(b) - f(a)$.
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82699597,"math_prob":0.9953181,"size":1286,"snap":"2022-40-2023-06","text_gpt3_token_len":370,"char_repetition_ratio":0.10452418,"word_repetition_ratio":0.17021276,"special_character_ratio":0.30015552,"punctuation_ratio":0.12222222,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9983536,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-28T10:06:32Z\",\"WARC-Record-ID\":\"<urn:uuid:282e213a-1552-4a70-b79f-4d80490425de>\",\"Content-Length\":\"78561\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5d5e6166-4e17-4fe6-9367-0e93820e731f>\",\"WARC-Concurrent-To\":\"<urn:uuid:82b72b65-5408-4365-a0b5-f87fa9f712b6>\",\"WARC-IP-Address\":\"213.135.60.110\",\"WARC-Target-URI\":\"https://eudml.org/search/page?q=sc.general*op.AND*l_0*c_0author_0eq%253A1.Andrzej+Fryszkowski&qt=SEARCH\",\"WARC-Payload-Digest\":\"sha1:5HLADY4VYLMFN3NFQUPFRZDSKWLOSKSU\",\"WARC-Block-Digest\":\"sha1:SNBSCWGPHU2LCMZP3C3XQLJWB7GYLDRL\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499541.63_warc_CC-MAIN-20230128090359-20230128120359-00440.warc.gz\"}"}
https://www.geeksforgeeks.org/count-minimum-substring-removals-required-to-reduce-string-to-a-single-distinct-character/?ref=rp
# Count minimum substring removals required to reduce string to a single distinct character

- Difficulty Level: Hard
- Last Updated: 09 Jun, 2021

Given a string S consisting of 'X', 'Y' and 'Z' only, the task is to convert S to a string consisting of only a single distinct character by selecting a character and removing substrings that do not contain that character, the minimum number of times.

Note: Once a character is chosen, no other character can be used in further operations.

Examples:

Input: S = "XXX"
Output: 0
Explanation: Since the given string already consists of a single distinct character, i.e. X, no removal is required. Therefore, the required count is 0.

Input: S = "XYZXYZX"
Output: 2
Explanation: Selecting the character 'X' and removing the substrings "YZ" in two consecutive operations reduces the string to "XXX", which consists of a single distinct character only.

Approach: The idea is to count occurrences of each character using a hash map, count the number of removals required for each of them, and print the minimum. Follow the steps below to solve the problem:

- Initialize a hash map and store the indices of occurrences of each of the characters.
- Iterate over all the characters of the string S and update the occurrences of the characters 'X', 'Y' and 'Z' in the map.
- Iterate over the map and, for each character, count the number of removals required.
- After calculating for each character, print the minimum count obtained for any of the characters.

Below is the implementation of the above approach:

## C++

```cpp
// C++ program for the above approach
#include <bits/stdc++.h>
using namespace std;

// Function to find minimum removals
// required to convert given string
// to single distinct characters only
void minimumOperations(string s, int n)
{
    // Unordered map to store positions
    // of characters X, Y and Z
    unordered_map<char, vector<int>> mp;

    // Update indices of X, Y, Z
    for (int i = 0; i < n; i++) {
        mp[s[i]].push_back(i);
    }

    // Stores the count of
    // minimum removals
    int ans = INT_MAX;

    // Traverse the map
    for (auto x : mp) {
        int curr = 0;
        int prev = 0;
        bool first = true;

        // Count the number of removals
        // required for current character
        for (int index : (x.second)) {
            if (first) {
                if (index > 0) {
                    curr++;
                }
                prev = index;
                first = false;
            }
            else {
                if (index != prev + 1) {
                    curr++;
                }
                prev = index;
            }
        }
        if (prev != n - 1) {
            curr++;
        }

        // Update the answer
        ans = min(ans, curr);
    }

    // Print the answer
    cout << ans;
}

// Driver Code
int main()
{
    // Given string
    string s = "YYXYZYXYZXY";

    // Size of string
    int N = s.length();

    // Function call
    minimumOperations(s, N);

    return 0;
}
```

## Java

```java
// Java program for the above approach
import java.util.*;

class GFG {

    // Function to find minimum removals
    // required to convert given string
    // to single distinct characters only
    static void minimumOperations(String s, int n)
    {
        // HashMap to store positions
        // of characters X, Y and Z
        HashMap<Character, ArrayList<Integer>> mp = new HashMap<>();

        // Update indices of X, Y, Z
        for (int i = 0; i < n; i++) {
            if (mp.containsKey(s.charAt(i))) {
                mp.get(s.charAt(i)).add(i);
            }
            else {
                mp.put(s.charAt(i),
                       new ArrayList<>(Arrays.asList(i)));
            }
        }

        // Stores the count of
        // minimum removals
        int ans = Integer.MAX_VALUE;

        // Traverse the map
        for (Map.Entry<Character, ArrayList<Integer>> x : mp.entrySet()) {
            int curr = 0;
            int prev = 0;
            boolean first = true;

            // Count the number of removals
            // required for current character
            for (Integer index : (x.getValue())) {
                if (first) {
                    if (index > 0) {
                        curr++;
                    }
                    prev = index;
                    first = false;
                }
                else {
                    if (index != prev + 1) {
                        curr++;
                    }
                    prev = index;
                }
            }
            if (prev != n - 1) {
                curr++;
            }

            // Update the answer
            ans = Math.min(ans, curr);
        }

        // Print the answer
        System.out.print(ans);
    }

    // Driver code
    public static void main(String[] args)
    {
        // Given string
        String s = "YYXYZYXYZXY";

        // Size of string
        int N = s.length();

        // Function call
        minimumOperations(s, N);
    }
}

// This code is contributed by divyeshrabadiya07
```

## Python3

```python
# Python3 program for the above approach
import sys

INT_MAX = sys.maxsize

# Function to find minimum removals
# required to convert given string
# to single distinct characters only
def minimumOperations(s, n):

    # Dictionary to store positions
    # of characters X, Y and Z
    mp = {}

    # Update indices of X, Y, Z
    for i in range(n):
        if s[i] in mp:
            mp[s[i]].append(i)
        else:
            mp[s[i]] = [i]

    # Stores the count of
    # minimum removals
    ans = INT_MAX

    # Traverse the map
    for x in mp:
        curr = 0
        prev = 0
        first = True

        # Count the number of removals
        # required for current character
        for index in mp[x]:
            if first:
                if index > 0:
                    curr += 1
                prev = index
                first = False
            else:
                if index != prev + 1:
                    curr += 1
                prev = index

        if prev != n - 1:
            curr += 1

        # Update the answer
        ans = min(ans, curr)

    # Print the answer
    print(ans)

# Driver Code
if __name__ == "__main__":

    # Given string
    s = "YYXYZYXYZXY"

    # Size of string
    N = len(s)

    # Function call
    minimumOperations(s, N)

# This code is contributed by AnkThon
```

## C#

```csharp
// C# program for the above approach
using System;
using System.Collections.Generic;

class GFG {

    // Function to find minimum removals
    // required to convert given string
    // to single distinct characters only
    static void minimumOperations(string s, int n)
    {
        // Dictionary to store positions
        // of characters X, Y and Z
        Dictionary<char, List<int>> mp = new Dictionary<char, List<int>>();

        // Update indices of X, Y, Z
        for (int i = 0; i < n; i++) {
            if (mp.ContainsKey(s[i])) {
                mp[s[i]].Add(i);
            }
            else {
                mp[s[i]] = new List<int>();
                mp[s[i]].Add(i);
            }
        }

        // Stores the count of
        // minimum removals
        int ans = Int32.MaxValue;

        // Traverse the map
        foreach (KeyValuePair<char, List<int>> x in mp) {
            int curr = 0;
            int prev = 0;
            bool first = true;

            // Count the number of removals
            // required for current character
            foreach (int index in (x.Value)) {
                if (first) {
                    if (index > 0) {
                        curr++;
                    }
                    prev = index;
                    first = false;
                }
                else {
                    if (index != prev + 1) {
                        curr++;
                    }
                    prev = index;
                }
            }
            if (prev != n - 1) {
                curr++;
            }

            // Update the answer
            ans = Math.Min(ans, curr);
        }

        // Print the answer
        Console.Write(ans);
    }

    // Driver Code
    static void Main()
    {
        // Given string
        string s = "YYXYZYXYZXY";

        // Size of string
        int N = s.Length;

        // Function call
        minimumOperations(s, N);
    }
}

// This code is contributed by divyesh072019
```

Output:

```
3
```

Time Complexity: O(N)
Auxiliary Space: O(N)
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.64660853,"math_prob":0.9605602,"size":7992,"snap":"2021-43-2021-49","text_gpt3_token_len":2273,"char_repetition_ratio":0.14121182,"word_repetition_ratio":0.38018742,"special_character_ratio":0.31956956,"punctuation_ratio":0.1724138,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9983669,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-04T21:26:00Z\",\"WARC-Record-ID\":\"<urn:uuid:0cecfacf-7b32-46de-a156-2cb55a14ae06>\",\"Content-Length\":\"191175\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5ad1d10e-1861-4f87-bb64-4db9d34791ca>\",\"WARC-Concurrent-To\":\"<urn:uuid:36512f01-736f-4d35-babf-469d19df976a>\",\"WARC-IP-Address\":\"23.218.217.179\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/count-minimum-substring-removals-required-to-reduce-string-to-a-single-distinct-character/?ref=rp\",\"WARC-Payload-Digest\":\"sha1:3ZX2IQJDUUG67VPYW2AD65C756NR2OC2\",\"WARC-Block-Digest\":\"sha1:USCV7F6RP62ULS32FUO6PADOGTODZUEH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363006.60_warc_CC-MAIN-20211204185021-20211204215021-00050.warc.gz\"}"}
https://www.asafraction.net/number/0.911
### 0.911 as a Fraction

##### 0.911 as a fraction equals 911/1000

Steps to convert 0.911 into a fraction:

Write 0.911 as 0.911/1

Multiply both the numerator and denominator by 10 for each digit after the decimal point:

0.911/1 = (0.911 x 1000)/(1 x 1000) = 911/1000

As a side note, the whole-number (integral) part is empty, and the decimal part is .911 = 911/1000. Full simple fraction breakdown: 911/1000.

##### Graph Representation of 0.911 as a Fraction

Pie chart representation of the fractional part of 0.911 (figure)

##### Level of Precision for 0.911 as a Fraction

The level of precision is the number of digits to round to. Selecting a lower precision point breaks decimal 0.911 down further in fraction form. The default precision point is 5. If the last trailing digit is "5", you can use the "round half up" and "round half down" options to round that digit up or down when you change the precision point.

For example, 0.875 with a precision point of 2 rounded half up = 88/100, rounded half down = 87/100.

91100/100000 = 9110/10000 = 911/1000

##### Numerator & Denominator for 0.911 as a Fraction

0.911 = 0 911/1000

numerator/denominator = 911/1000

##### Is 911/1000 a Mixed, Whole Number or Proper fraction?

A mixed number is made up of a whole number (whole numbers have no fractional or decimal part) and a proper fraction part (a fraction where the numerator, the top number, is less than the denominator, the bottom number). In this case the whole number value is empty and the proper fraction value is 911/1000.

##### Can all decimals be converted into a fraction?

Not all decimals can be converted into a fraction. There are 3 basic types:

Terminating decimals have a limited number of digits after the decimal point. Example: 642.2891 = 642 2891/10000

Recurring decimals have one or more repeating digits after the decimal point which continue on infinitely. Example: 3369.3333... = 3369 3333/10000 ≈ 3369 1/3 (the repeating part .3333... equals 1/3)

Irrational decimals go on forever and never form a repeating pattern. This type of decimal cannot be expressed as a fraction. Example: 0.378197244.....

##### Fraction into Decimal

You can also see the reverse conversion, i.e. how fraction 911/1000 is converted into a decimal.
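The same conversion can be checked programmatically (a sketch using Python's standard fractions module):

```python
# Sketch: convert 0.911 to a fraction, mirroring the steps above.
from fractions import Fraction

f = Fraction("0.911")            # exact: 911/1000
print(f)                         # -> 911/1000
print(f == Fraction(911, 1000))  # -> True

# The "multiply by 10 per decimal digit" rule, done explicitly:
numerator, denominator = 911, 10 ** 3
print(Fraction(numerator, denominator))  # -> 911/1000 (already in lowest terms)
```

Since 911 is prime, 911/1000 is already fully reduced.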
[ null, "https://chart.apis.google.com/chart", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.799619,"math_prob":0.9929085,"size":2058,"snap":"2021-43-2021-49","text_gpt3_token_len":502,"char_repetition_ratio":0.16699123,"word_repetition_ratio":0.0057803467,"special_character_ratio":0.2813411,"punctuation_ratio":0.111380145,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99907166,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-27T05:12:38Z\",\"WARC-Record-ID\":\"<urn:uuid:15b72620-1b77-47d9-bb1b-91fe563cc2b4>\",\"Content-Length\":\"40619\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e9d4cad4-6590-4c62-bf74-407a70977baa>\",\"WARC-Concurrent-To\":\"<urn:uuid:e8165edf-f7d6-4c11-9d4d-b6a542adc2ea>\",\"WARC-IP-Address\":\"50.16.49.81\",\"WARC-Target-URI\":\"https://www.asafraction.net/number/0.911\",\"WARC-Payload-Digest\":\"sha1:25SRYNCRU6CUKHMYB5X7IJX5MPWWASD7\",\"WARC-Block-Digest\":\"sha1:ASCCCMNINJWCJ4F5MSTJPQS3VMT4X2ZW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358118.13_warc_CC-MAIN-20211127043716-20211127073716-00491.warc.gz\"}"}
https://discuss.pytorch.org/t/how-to-run-autoencoder-on-single-image-sample-for-inference/141946
# How to run autoencoder on single image/sample for inference

I have trained an autoencoder and the training results seem to be okay. But when I run the model on a single image, the generated results are inconsistent. Any ideas on how I can run the autoencoder on a single example? The autoencoder model in my case accepts an input of dimension (256x256+3, 1). My evaluation code is as follows:

```python
img_dir = 'blender_files/Model_0/image_0/model_0_0.jpg'
# (the image is presumably read from img_dir here; the loading line is
#  missing from the post as captured)
img = torch.from_numpy(img)
# print(img)
coord = np.array([0, 0, 0])  # 3x1 vector which I need to predict using the autoencoder
coord = torch.from_numpy(coord)
img = torch.unsqueeze(torch.flatten(img), 1)
# print(img)
coord = torch.unsqueeze(torch.flatten(coord), 1)
X = torch.cat((img, coord), dim=0)  # the input feature fed to the model, whose size is torch.Size()

# Feed feature vector to model
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
features = X.to(device)
features = torch.flatten(features)
print(features.shape)
model = AutoEncoder(num_features=features.shape, num_hidden_1=num_hidden_1,
                    num_hidden_2=num_hidden_2, num_hidden_3=num_hidden_3)
model = model.double()
model.eval()
torch.manual_seed(123)
decoded_rep = model(features[None, ...])
print(decoded_rep)
```

The output I am getting is as follows:

```
tensor([[3.2402e+16, 3.4111e+16, 3.2839e+16, ..., 4.4640e+16, 4.4089e+16,
         7.7656e+15]], dtype=torch.float64)
```

But the decoded output obtained during training is:

```
[ 67.5205,  67.6745,  67.6265, ..., 124.9578, 124.8637,   4.7602]
```

As you can see, both outputs are not even close to one another. I was looking at vanilla autoencoders, and it seems that for generation purposes they are not really a good choice; as such, what other models can I use? Since in my case I am interested in predicting the last 3 values of the feature, would an autoregressive model suit my case better?

The following also adds more weight to my point: one common application done with autoregressive models is auto-completing an image. As autoregressive models predict pixels one by one, we can set the first N pixels to predefined values and check how the model completes the image.
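One thing worth noting about the snippet above (an observation, not part of the original post): the evaluation code constructs a fresh `AutoEncoder` and never loads the trained weights, so the "decoded" output comes from a randomly initialized network. A minimal sketch of a fix, assuming a checkpoint was saved during training under a hypothetical path `autoencoder.pt`:

```python
# Sketch: load trained weights before running inference.
# 'autoencoder.pt' is a hypothetical checkpoint path; adapt as needed.
import torch

model = AutoEncoder(num_features=features.shape[0],  # pass an int, not a torch.Size
                    num_hidden_1=num_hidden_1,
                    num_hidden_2=num_hidden_2,
                    num_hidden_3=num_hidden_3).double().to(device)
state = torch.load('autoencoder.pt', map_location=device)
model.load_state_dict(state)
model.eval()

with torch.no_grad():                 # no gradients needed at inference time
    decoded = model(features[None, ...])
```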
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6775406,"math_prob":0.94231373,"size":2358,"snap":"2022-05-2022-21","text_gpt3_token_len":622,"char_repetition_ratio":0.11597281,"word_repetition_ratio":0.0,"special_character_ratio":0.28244275,"punctuation_ratio":0.197065,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.989551,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-16T16:34:01Z\",\"WARC-Record-ID\":\"<urn:uuid:e9b6d17d-bf3d-4c89-95c4-ad9ae26780b4>\",\"Content-Length\":\"15719\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7a320864-0b98-4899-95f5-371c3bf9a884>\",\"WARC-Concurrent-To\":\"<urn:uuid:1b20539a-9eef-4572-b36c-bd2240aa0bbc>\",\"WARC-IP-Address\":\"159.203.145.104\",\"WARC-Target-URI\":\"https://discuss.pytorch.org/t/how-to-run-autoencoder-on-single-image-sample-for-inference/141946\",\"WARC-Payload-Digest\":\"sha1:ZQETSWDCUWSCI4NCFYRUKL3MUTW44IT7\",\"WARC-Block-Digest\":\"sha1:5L3AL5AVRADSXZQUVD7WDEWXM6AILBMG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662510138.6_warc_CC-MAIN-20220516140911-20220516170911-00265.warc.gz\"}"}
https://cbseignou.com/anna-university/statistics-for-management
Cbseignou.com

Mehta Solutions provides MBA assignments, MBA books, BLIS, and projects.

Statistics for management (Code: DBA 7102)

Statistics for management SOLVED PAPERS AND GUESS

Product Details: IGNOU Statistics for management SOLVED PAPERS AND GUESS

Format: BOOK

Pub. Date: NEW EDITION APPLICABLE FOR Current EXAM

Publisher: MEHTA SOLUTIONS

Edition Description: 2015-16

RATING OF BOOK: EXCELLENT

FROM THE PUBLISHER

If you find yourself getting fed up and frustrated with other Anna University book solutions, Mehta Solutions now brings top solutions for Anna University. This Statistics for management book contains previous year solved papers plus important faculty questions and answers, designed specially for Anna University students.

• Case studies solved

PH: 09871409765, 09899296811 FOR ANY problem

DBA 7102 Statistics for management

UNIT I PROBABILITY - Basic definitions and rules for probability, conditional probability, independence of events, Bayes' Theorem, random variables, probability distributions: Binomial, Poisson, Uniform and Normal distributions.

UNIT II SAMPLING DISTRIBUTION AND ESTIMATION - Introduction to sampling distributions, sampling techniques, sampling distribution of mean and proportion, application of the central limit theorem. Estimation: point and interval estimates for population parameters of large samples and small samples, determining the sample size.

UNIT III TESTING OF HYPOTHESIS - Hypothesis testing: one sample and two samples tests for means and proportions of large samples (z-test), one sample and two sample tests for means of small samples (t-test), F-test for two sample standard deviations.

UNIT IV NON-PARAMETRIC METHODS - Sign test for paired data. Rank sum test: Mann-Whitney U test and Kruskal-Wallis test. One sample run test, rank correlation. Chi-square tests for independence of attributes and goodness of fit.

UNIT V CORRELATION, REGRESSION AND TIME SERIES ANALYSIS - Correlation analysis, estimation of regression line. Time series analysis: variations in time series, trend analysis, cyclical variations, seasonal variations and irregular variations.

Old price: 350.00 Rs
Price: 280.00 Rs
Delivery time: 2-5 days
Weight: 0.5 Kg
[ null, "https://cbseignou.com/components/com_jshopping/files/img_labels/new.png", null, "https://cbseignou.com/components/com_jshopping/files/img_products/thumb_5915cf2143cbd7e77476588bacebebdc.jpg", null, "https://cbseignou.com/components/com_jshopping/files/img_products/thumb_829d9492d54299050fb75132bc2971f8.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7852334,"math_prob":0.6332344,"size":2181,"snap":"2019-26-2019-30","text_gpt3_token_len":471,"char_repetition_ratio":0.11667432,"word_repetition_ratio":0.019543974,"special_character_ratio":0.18936267,"punctuation_ratio":0.13445379,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96897286,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,10,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-17T03:05:43Z\",\"WARC-Record-ID\":\"<urn:uuid:04404106-f82a-4b74-9754-d5dc252164cd>\",\"Content-Length\":\"45911\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:55020249-eea2-4aff-80ea-109d0e42110b>\",\"WARC-Concurrent-To\":\"<urn:uuid:343d0850-a8f4-40ce-9046-8b4dea9c2ad7>\",\"WARC-IP-Address\":\"147.135.10.137\",\"WARC-Target-URI\":\"https://cbseignou.com/anna-university/statistics-for-management\",\"WARC-Payload-Digest\":\"sha1:QDUQHNTQT5E4UD6D6FXKH7PBTBYWGUTW\",\"WARC-Block-Digest\":\"sha1:VBL723L524LV6K4PAFEXRLTOS7RI2ZQE\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998369.29_warc_CC-MAIN-20190617022938-20190617044938-00118.warc.gz\"}"}
http://blog.darkbuzz.com/2013/09/quantum-does-not-contradict-probability.html
## Saturday, September 28, 2013

### Quantum does not contradict probability laws

Statistician Andrew Gelman writes:

> Classical probability does not apply to quantum systems (causal inference edition) ...
>
> If you recall your college physics, you'll realize that the results of the two-slit experiment violate the laws of joint probability, ...
>
> I discuss this in my linked blog post. But, in brief, the intuitive application of probability theory to the 2-slit experiment is that, if y is the position of the photon and x is the slit that the photon goes through, that p(y) = p(y|x=1)p(x=1) + p(y|x=2)p(x=2). But this is not true. As we all know, the superposition works not with the probabilities but with the probability amplitudes. Classical probabilities don't have phases, hence you can just superimpose them via the familiar law of total probability. Quantum probabilities work differently.

This seems to be a widespread misconception. As Tim Maudlin explains in the comments, there is no contradiction with classical probability theory. In quantum mechanics, a photon is not a classical particle, but also has wave properties. The photon history is not just the sum of two particle possibilities. It can also be a wave that passes thru both slits at once.

The double slit experiment does show that light has wave properties. Everyone has agreed to that since 1803. If you deny that light is a wave that can go thru both slits at once, then you can get a contradiction. That is another way of saying the same thing. But the contradiction is with the classical particle theory of light, and not with probability theory.

There are people who have tried to make sense of quantum mechanics by using quantum logic or some modification to the laws of probability. These approaches have never worked.

I can't blame Gelman too much. There are a lot of physicists who, like Einstein, really want to believe that quantum mechanics is really a theory of imperfect info about hidden variables. It is not.

He argues:

> Sure, a physical experiment can violate a mathematical law. The classic example is, if in a universe with closed curvature, you construct a large enough triangle, its angles will not add up to 180 degrees. Another classic example is that, for various particles, Boltzmann statistics do not apply, instead you have to use Fermi-Dirac or Bose-Einstein statistics. Boltzmann statistics is a mathematical probability model that does not apply in these settings. Another example is, in the two-slit experiment, p(A) does not equal the sum over B of p(A|B)p(B). In all these cases, you have a mathematical model that works (or approximately works) in some areas of application but not others. The math is not wrong but it does not apply to all settings.

This is silly. Yes, the math of flat space does not necessarily apply to curved space. Probability is a funny subject with multiple interpretations, but none of them are contradicted by light having wave properties.

He insists:

> The 2-slit data indeed violate the laws of joint probability. I learned about this in physics class in college. In quantum mechanics, it is the complex functions that superimpose, not the probabilities. It is the application of the mathematics of wave mechanics to particles. The open question is whether it might make sense to apply wave mechanics to macroscopic measurements.

I would be interested in any textbook that says it wrong in this way.

Surely it must seem odd that we have a notion of probability that works in all situations except quantum mechanics, and we have some other notion that applies to quantum mechanics, but no one has figured out a way to make that probability notion apply to anything other than quantum mechanics. The answer is that quantum mechanics uses the same logic and probability that everyone else does.

Maudlin writes:

> It ought to cause some pause that Feynman himself makes exactly this erroneous claim about the 2-slit experiment in the Lectures. Feynman does not mention locality, unitarity, or causality. He makes a straight claim about the data, based on a bad argument, exactly the argument I was attributing to Andrew. So if Feynman screwed this up, it would not be odd if many other physicists do too.

Feynman was a big advocate of particle interpretations of quantum mechanics. So he thought that the strangest part of quantum mechanics is the experiments showing wave behavior, like the double-slit experiment.
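The amplitude-versus-probability point can be made concrete with a toy numerical sketch (ours, with made-up phases): superposing complex amplitudes for the two slits and then squaring produces an interference term that the naive total-probability formula lacks.

```python
# Sketch: two-slit toy model. Amplitudes add; probabilities do not.
import numpy as np

y = np.linspace(-5, 5, 11)          # detector positions (arbitrary units)
phase1 = 2.0 * y                    # toy path phases from slit 1 and slit 2
phase2 = -2.0 * y
psi1 = np.exp(1j * phase1) / np.sqrt(2)
psi2 = np.exp(1j * phase2) / np.sqrt(2)

p_quantum = np.abs(psi1 + psi2)**2                 # |psi1 + psi2|^2, interference
p_classical = np.abs(psi1)**2 + np.abs(psi2)**2    # p(y|x=1)p(1) + p(y|x=2)p(2)

print(np.round(p_quantum, 3))       # oscillates between 0 and 2
print(np.round(p_classical, 3))     # flat: 1.0 everywhere
```

The "classical" curve is what a particle-only model predicts; the oscillating curve is what a wave passing through both slits produces. Nothing in this calculation modifies the rules of probability; the two formulas simply describe different physical models.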
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9431542,"math_prob":0.8826583,"size":4341,"snap":"2021-43-2021-49","text_gpt3_token_len":900,"char_repetition_ratio":0.14041965,"word_repetition_ratio":0.002805049,"special_character_ratio":0.19857176,"punctuation_ratio":0.10872162,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98615825,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-23T11:59:07Z\",\"WARC-Record-ID\":\"<urn:uuid:9e939bb2-72d2-4b19-a867-e4078555e4a8>\",\"Content-Length\":\"90687\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d6633f06-6bf6-441b-8eba-ee6ea52f9a6b>\",\"WARC-Concurrent-To\":\"<urn:uuid:16d834da-315e-469e-9eb0-77a222d0c91e>\",\"WARC-IP-Address\":\"172.217.1.211\",\"WARC-Target-URI\":\"http://blog.darkbuzz.com/2013/09/quantum-does-not-contradict-probability.html\",\"WARC-Payload-Digest\":\"sha1:KDEKWBUEFZCODKX6TTMF54A4PF2NXA56\",\"WARC-Block-Digest\":\"sha1:PGP6NS7W2RFTYRAQLWUYLK6NKFJIDLQO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585671.36_warc_CC-MAIN-20211023095849-20211023125849-00509.warc.gz\"}"}
https://ncatlab.org/nlab/show/CR+manifold
# CR manifold

## Definition

A CR manifold consists of a differentiable manifold $M$ together with a subbundle $L$ of the complexified tangent bundle, $L \subset TM \otimes_{\mathbf{R}} \mathbf{C}$, such that $[L, L] \subset L$ and $L \cap \overline{L} = \{0\}$.

### As first-order integrable $G$-structure

CR manifold structures are equivalently certain first-order integrable G-structures (Dragomir-Tomassini 06, section 1.6), a type of parabolic geometry.

## Properties

### Relation to solutions in supergravity

A close analogy between CR geometry and supergravity superspacetimes (as both being torsion-ful integrable G-structures) is pointed out in (Lott 01, exposition (4.2)).

## References

* Sorin Dragomir, Giuseppe Tomassini, *Differential Geometry and Analysis on CR Manifolds*, Progress in Mathematics 246, Birkhäuser (2006)

* John Lott, *The Geometry of Supergravity Torsion Constraints* (2001)
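A standard source of examples (a standard fact, added here for illustration): every smooth real hypersurface $M \subset \mathbb{C}^n$, such as the unit sphere $S^{2n-1}$, becomes a CR manifold by setting

$$L \;=\; T^{1,0}\mathbb{C}^n \,\cap\, \left(TM \otimes_{\mathbf{R}} \mathbf{C}\right)\,,$$

a rank-$(n-1)$ complex subbundle: involutivity $[L,L] \subset L$ is inherited from that of $T^{1,0}\mathbb{C}^n$, and $L \cap \overline{L} = \{0\}$ holds because $T^{1,0}\mathbb{C}^n \cap T^{0,1}\mathbb{C}^n = \{0\}$.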
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.73917127,"math_prob":0.995365,"size":1877,"snap":"2023-40-2023-50","text_gpt3_token_len":475,"char_repetition_ratio":0.123331554,"word_repetition_ratio":0.0,"special_character_ratio":0.20990942,"punctuation_ratio":0.14102565,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9928522,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-26T05:56:23Z\",\"WARC-Record-ID\":\"<urn:uuid:46a19e9e-d61c-47e2-8e53-4ba9edc883c2>\",\"Content-Length\":\"43991\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d5124d63-b864-4960-b4b6-2f10130621be>\",\"WARC-Concurrent-To\":\"<urn:uuid:7f131882-06d7-4adf-813f-849189ba3e84>\",\"WARC-IP-Address\":\"128.2.25.48\",\"WARC-Target-URI\":\"https://ncatlab.org/nlab/show/CR+manifold\",\"WARC-Payload-Digest\":\"sha1:SPN6KVFJIDG2ULFU75TYTBL7GXWA24K3\",\"WARC-Block-Digest\":\"sha1:C272NKOZKYXCKPE37SV4JSSX5EEDO4YY\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510149.21_warc_CC-MAIN-20230926043538-20230926073538-00150.warc.gz\"}"}
https://boost-spirit.com/distrib/spirit_1_6_3/libs/spirit/doc/predefined_actions.html
[ "Predefined Actions", null, "The framework has two predefined semantic action functors. Experience shows that these functors are so often used that they were included as part of the core framework to spare the user from having to reinvent the same functionality over and over again.\n\n### assign(v)\n\nAssign the value passed by the parser to the variable v.\n\nExample usage:\n\n`````` int i;\nstd::string s;\nr = int_p[assign(i)] >> (+alpha_p)[assign(s)];``````\n\nGiven an input 123456 \"Hello\", assign(i) will extract the number 123456 and assign it to i, then, assign(s) will extract the string \"Hello\" and assign it to s. Technically, the expression assign(v) is a template function that generates a semantic action. The semantic action generated is polymorphic and should work with any type as long as it is compatible with the arguments received from the parser. It might not be obvious, but a string can accept the iterator first and last arguments that are passed into a generic semantic action (see above). In fact, any STL container that has an assign(first, last) member function can be used.\n\nFor reference and to aid users in writing their own semantic action functors, here's the implementation of the assign(v) action. We include it here since it is short and simple enough to understand.\n\nThe assign_actor class\n\n`````` template <typename T>\nclass assign_actor\n{\npublic:\n\nexplicit\nassign_actor(T& ref_)\n: ref(ref_) {}\n\ntemplate <typename T2>\nvoid operator()(T2 const& val) const\n{ ref = val; }\n\ntemplate <typename IteratorT>\nvoid\noperator()(IteratorT const& first, IteratorT const& last) const\n{ ref.assign(first, last); }\n\nprivate:\n\nT& ref;\n};``````\n\nThe assign function\n\n`````` template <typename T>\ninline assign_actor<T> const\nassign(T& t)\n{\nreturn assign_actor<T>(t);\n}``````\n\n### append(c)\n\nAppend the value passed by the parser to the container c.\n\nExample usage:\n\n`````` std::vector<int> v;\nr = int_p[append(v)] >> *(',' >> int_p[append(v)]);``````\n\nThe code above can parse a comma separated list of integers and stuff the numbers in the vector v. If it isn't obvious already, append(c) appends the parsed value (the argument passed into the semantic action by the parser) into the container c, which must have member functions insert(where, value) and end(). To cut the story short, STL containers are perfect candidates for append(c) to work on. Like assign(v), append(c) may also take in the iterator pairs. In which case, the container must have two member functions: insert(where, first, last) and end(); e.g. std::vector<std::string>." ]
[ null, "https://boost-spirit.com/distrib/spirit_1_6_3/libs/spirit/doc/theme/spirit.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83447516,"math_prob":0.94681287,"size":2630,"snap":"2021-43-2021-49","text_gpt3_token_len":613,"char_repetition_ratio":0.12376238,"word_repetition_ratio":0.019851116,"special_character_ratio":0.2460076,"punctuation_ratio":0.1403162,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96368927,"pos_list":[0,1,2],"im_url_duplicate_count":[null,10,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-25T01:34:06Z\",\"WARC-Record-ID\":\"<urn:uuid:61c7dc8f-eaa3-426d-b552-4c60f6509363>\",\"Content-Length\":\"11963\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eee19ce0-32a0-48b6-bca7-7850545ddb3a>\",\"WARC-Concurrent-To\":\"<urn:uuid:b359ec67-88fe-42c6-9350-d15d9a064be4>\",\"WARC-IP-Address\":\"64.92.125.173\",\"WARC-Target-URI\":\"https://boost-spirit.com/distrib/spirit_1_6_3/libs/spirit/doc/predefined_actions.html\",\"WARC-Payload-Digest\":\"sha1:PLXWMX6YPADTZ4RYT7K7GLRBC33NXMIP\",\"WARC-Block-Digest\":\"sha1:OBAJ3KKJKRVKZMJ3ADSVYP6VQUY3PWNQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587608.86_warc_CC-MAIN-20211024235512-20211025025512-00283.warc.gz\"}"}
https://elki-project.github.io/releases/current/doc/de/lmu/ifi/dbs/elki/distance/distancefunction/class-use/DBIDDistanceFunction.html
[ "", null, "## Uses of Interfacede.lmu.ifi.dbs.elki.distance.distancefunction.DBIDDistanceFunction\n\n• Packages that use DBIDDistanceFunction\nPackage Description\nde.lmu.ifi.dbs.elki.database.query.distance\nPrepared queries for distances\nde.lmu.ifi.dbs.elki.distance.distancefunction\nDistance functions for use within ELKI.\nde.lmu.ifi.dbs.elki.distance.distancefunction.external\nDistance functions using external data sources\n• ### Uses of DBIDDistanceFunction in de.lmu.ifi.dbs.elki.database.query.distance\n\nFields in de.lmu.ifi.dbs.elki.database.query.distance declared as DBIDDistanceFunction\nModifier and Type Field and Description\nprotected DBIDDistanceFunction DBIDDistanceQuery.distanceFunction\nThe distance function we use.\nMethods in de.lmu.ifi.dbs.elki.database.query.distance that return DBIDDistanceFunction\nModifier and Type Method and Description\nDBIDDistanceFunction DBIDRangeDistanceQuery.getDistanceFunction()\nDBIDDistanceFunction DBIDDistanceQuery.getDistanceFunction()\nConstructors in de.lmu.ifi.dbs.elki.database.query.distance with parameters of type DBIDDistanceFunction\nConstructor and Description\nDBIDDistanceQuery(Relation<DBID> relation, DBIDDistanceFunction distanceFunction)\nConstructor.\n• ### Uses of DBIDDistanceFunction in de.lmu.ifi.dbs.elki.distance.distancefunction\n\nSubinterfaces of DBIDDistanceFunction in de.lmu.ifi.dbs.elki.distance.distancefunction\nModifier and Type Interface and Description\ninterface  DBIDRangeDistanceFunction\nDistance functions valid in a static database context only (i.e. for DBIDRanges) For any \"distance\" that cannot be computed for arbitrary objects, only those that exist in the database and referenced by their ID.\nClasses in de.lmu.ifi.dbs.elki.distance.distancefunction that implement DBIDDistanceFunction\nModifier and Type Class and Description\nclass  AbstractDBIDRangeDistanceFunction\nAbstract base class for distance functions that rely on integer offsets within a consecutive range.\nclass  RandomStableDistanceFunction\nThis is a dummy distance providing random values (obviously not metrical), useful mostly for unit tests and baseline evaluations: obviously this distance provides no benefit whatsoever.\n• ### Uses of DBIDDistanceFunction in de.lmu.ifi.dbs.elki.distance.distancefunction.external\n\nClasses in de.lmu.ifi.dbs.elki.distance.distancefunction.external that implement DBIDDistanceFunction\nModifier and Type Class and Description\nclass  DiskCacheBasedDoubleDistanceFunction\nDistance function that is based on double distances given by a distance matrix of an external binary matrix file.\nclass  DiskCacheBasedFloatDistanceFunction\nDistance function that is based on float distances given by a distance matrix of an external binary matrix file.\nclass  FileBasedSparseDoubleDistanceFunction\nDistance function that is based on double distances given by a distance matrix of an external ASCII file.\nclass  FileBasedSparseFloatDistanceFunction\nDistance function that is based on float distances given by a distance matrix of an external ASCII file." ]
[ null, "https://elki-project.github.io/releases/current/doc/figures/elki-logo-200.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6674775,"math_prob":0.6425041,"size":2178,"snap":"2020-24-2020-29","text_gpt3_token_len":453,"char_repetition_ratio":0.22815087,"word_repetition_ratio":0.29007635,"special_character_ratio":0.16758494,"punctuation_ratio":0.13311689,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9535262,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-08T23:20:38Z\",\"WARC-Record-ID\":\"<urn:uuid:cd25d76f-14f1-4404-8621-47b50c464eb6>\",\"Content-Length\":\"20291\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:722ef753-e880-44d9-a7e8-ddc1fda3f2f9>\",\"WARC-Concurrent-To\":\"<urn:uuid:310d5fc8-a450-4ac0-8aa1-defb62359993>\",\"WARC-IP-Address\":\"185.199.111.153\",\"WARC-Target-URI\":\"https://elki-project.github.io/releases/current/doc/de/lmu/ifi/dbs/elki/distance/distancefunction/class-use/DBIDDistanceFunction.html\",\"WARC-Payload-Digest\":\"sha1:JQAQZ6KXW4BETVLLJOFW3BXNBSW5Z3HN\",\"WARC-Block-Digest\":\"sha1:PSRHZVCT27AI7UF3SNISGDE46ZYVJN3Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655897707.23_warc_CC-MAIN-20200708211828-20200709001828-00037.warc.gz\"}"}
http://sciforums.com/threads/yang%E2%80%93mills-and-mass-gap.160286/page-4#post-3500593
[ "# Yang–Mills and Mass Gap\n\nDiscussion in 'Physics & Math' started by Thales, Nov 29, 2017.\n\n1. ### arfa branecall me arfValued Senior Member\n\nMessages:\n7,832\nBefore putting on the mathematical gloves, perhaps some ordinary English discussion of the likes of isotopic spin (isobaric spin or just isospin).\n\nFrom Wikipedia:\nG. t'Hooft, in the SciAm article I linked earlier, states that isospin symmetry is continuous. This means for instance that proton/neutron isospin can describe a particle in a superposition of proton + neutron. So what about the electric charge of +1 in the case of such a superposition?\n\nOr is it the case that, even though the symmetry is continuous, physically it doesn't exist?\n\n3. ### arfa branecall me arfValued Senior Member\n\nMessages:\n7,832\nAnother question about the difference between a global and a local symmetry: some authors appear to distinguish between the two by saying only the latter can be a gauge symmetry, however this seems to be contradicted by t'Hooft in his article, and also here at physics.stackexchange by Rod Vance (reply #9).\nt'Hooft says something like, by taking a global symmetry and making it local, something needs to be added to a gauge theory, which is a force. Is this otherwise known as symmetry-breaking?\n\nOne idea I have about breaking a symmetry is, in the case of a sphere the (global?) symmetry is that all points on the surface are equivalent (for instance, all points have the same fiber over them which is the set of all possible directions a vector at any point can have). If however, the sphere is rotated about an axis, two points become 'special' (the poles) and all other points rotate with the same angular velocity (sort-of \"combing\" all the direction vectors).\n\nOf course, the sphere can rotate about more than one axis of symmetry, it can precess and so on, but then this additional rotation is relative to another principal axis.\n\nRotation breaks spherical symmetry because you can then treat the surface as a family of infinitesimal rings rotating about the same axis.\n\nLast edited: Jan 27, 2018\n\n5. ### hansdaValued Senior Member\n\nMessages:\n2,424\nOur Earth is spinning with relative to its axis. What do you think is the frequency $f$ for this spin?\n\n7. ### NotEinsteinValued Senior Member\n\nMessages:\n1,986\nI believe it's exactly $0.5\\:\\text{double-sidereal days}^{-1}$, so less than 1.\n\n8. ### hansdaValued Senior Member\n\nMessages:\n2,424\nWhat is the definition of frequency, you are considering here?\n\n9. ### NotEinsteinValued Senior Member\n\nMessages:\n1,986\nJust the standard one: the number of occurrences of a repeating event per unit of time.\n\nEdit: Although I think I got the units wrong. It's actually: $0.5\\:\\text{half-sidereal day}^{-1}$.\n\n10. ### hansdaValued Senior Member\n\nMessages:\n2,424\nSo, can this repeating be less than one?\n\n11. ### NotEinsteinValued Senior Member\n\nMessages:\n1,986\nIf your time units are less than a sidereal day, then obviously yes. Likewise, the frequency of the little hand of a clock going around is less than 1 if your time units are less than 1 hour. If something is rotating (at a fixed speed) once an hour, then during half an hour it will rotate halfway. 1/hr = 0.5/half-hr.\n\n12. ### hansdaValued Senior Member\n\nMessages:\n2,424\nSo, what is the minimum frequency here?\n\n13. ### NotEinsteinValued Senior Member\n\nMessages:\n1,986\nZero, if the object isn't rotating at all. 
If you want to exclude that, it's the smallest non-zero positive number you can imagine.\n\n14. ### hansdaValued Senior Member\n\nMessages:\n2,424\nConsider the equation $w=2\\pi f$. What do you think is the minimum frequency here?\n\n15. ### NotEinsteinValued Senior Member\n\nMessages:\n1,986\nThat's simple: because I based my response in post #70 on just the definition on frequency, it's the same answer: \"Zero, if the object isn't rotating at all. If you want to exclude that, it's the smallest non-zero positive number you can imagine.\"\n\n16. ### hansdaValued Senior Member\n\nMessages:\n2,424\nIf the frequency is less than one, the cycle is incomplete.\n\n17. ### NotEinsteinValued Senior Member\n\nMessages:\n1,986\nYes, so? I don't see any problem with that?\n\n18. ### arfa branecall me arfValued Senior Member\n\nMessages:\n7,832\n\nA \"Feynman\" diagram I found on the net. Apparently M is the invariant mass of some exchange particle (hence the difference in energy ΔE is defined by two constants, but corresponds to a \"rest energy\" (??)). Or maybe it's the \"mass gap\" (?).\n\nAnyhoo, the last equation gives the range of the interaction. If M = 0, then we have an algebraic problem, namely division by 0, or maybe something unphysical.On the other hand if the exchange particle has zero rest mass, that implies the range is infinite!\n\nThe above, btw, is from a public lecture by Kenneth Young on Yang-Mills and the Higgs boson. I'd post a link but the lecture itself isn't all that edifying, he spends about 5 seconds explaining this diagram and that's a bit of a trend.\n\n19. ### arfa branecall me arfValued Senior Member\n\nMessages:\n7,832\nBut you haven't defined a cycle. You haven't specified what is a \"complete\" cycle, or whether a cycle is a smooth function, etc.\n\n20. ### NotEinsteinValued Senior Member\n\nMessages:\n1,986\n(Note: this is a guess.)\nThe first formula's LHS is the difference in energy between the incoming and outgoing particles on one side, i.e. the energy available for the red-line particle. It's expressed as a mass here.\nThe second formula is Heisenberg's uncertainty principle.\nThe third formula is simply the second one rewritten, with the first formula plugged in.\nThe fourth formula calculates the distance the particle can travel while complying to Heisenberg's uncertainty principle.\n\nSo if the particle is massless, its range is infinite. While I don't dare draw solid conclusions from this because there's all kinds of problems setting M to zero in this derivation, this outcome seems to make sense: photons (massless particles) are stable and will travel forever (if not stopped). They have infinite range, so R being infinite in their case kinda makes sense.\n\n21. ### arfa branecall me arfValued Senior Member\n\nMessages:\n7,832\nIsospin symmetry is a global symmetry. Intermediate states between protons and neutrons aren't seen in nature, locally, because of \"spontaneous symmetry breaking\", via the Higgs mechanism (!).\n\nSince isospin is global, the orthogonal proton/neutron states transform everywhere in the same way (protons become neutrons, neutrons become protons and the strong coupling is invariant under a global transformation).\n\nThe Higgs field in effect provides a reference direction for the isospin state of neutrons/protons; the symmetry is local (a gauge symmetry), but hidden--see t'Hooft's article.\n\n22. 
### arfa branecall me arfValued Senior Member\n\nMessages:\n7,832\nhansda was almost on to something, I have to admit (but what exactly I'm not too sure).\n\nAccording to Kenneth Krane in Modern Physics 3rd ed., the uncertainty relation between frequency and time is readily derived from the concept of measuring a period T of some waveform. There is an uncertainty in the time measurement: Δt ≈ T. So there is an uncertainty in the period itself: ΔT. Assume this uncertainty is a small fraction of T, i.e. ΔT ~ εT.\n\nNext, take the product: ΔtΔT ~ (T)εT. But f = 1/T and we can now take differentials: $df = -1/T^2 dT$, and \"convert\" the differentials into absolute differences, so the minus sign can be ignored (we are interested in the magnitudes of the uncertainties). So $Δf = 1/T^2 ΔT$ which yields after substitution: ΔfΔt ~ ε.\n\nHence, if Δt is large (the duration of a measurement of a period), the uncertainty in frequency is small.\n\nThat said, hansda is assuming that $E = mc^2 = hf$ holds. But this means there's a big problem with Einstein's 1905 paper on the photelectric effect, because it says photons have a rest energy and so a finite range. Thus Maxwell's equations are wrong!\n\nLast edited: Jan 30, 2018\n23. ### Q-reeusBannedValued Senior Member\n\nMessages:\n4,695\nYou might be better off trying somewhere like: https://www.quora.com/How-does-the-uncertainty-principle-relate-to-Fourier-transforms\nWhere on earth did you conclude that from?! Nonsense. Photoelectric effect is all about light absorption and release of electrons from a metal having a particular work function. It doesn't even prove the EM field is quantized as photons - that required more sophisticated approaches much later on." ]
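The range argument sketched in the thread can be made concrete with a few lines of code. The following is a minimal illustration of my own (not from the thread): it assumes the relation R = ħc/(2Mc²), which follows from ΔE·Δt ≥ ħ/2 with ΔE = Mc² and R = cΔt, and uses the pion mass as the classic example of a massive exchange particle.

```python
# Rough range of a force carried by an exchange particle of rest energy M*c^2,
# from the uncertainty-principle argument: dt ~ hbar/(2*M*c^2), R = c*dt.
HBAR_C_MEV_FM = 197.327  # hbar*c in MeV*fm

def interaction_range_fm(rest_energy_mev):
    """Return the approximate range in femtometers; infinite for a massless mediator."""
    if rest_energy_mev == 0:
        return float("inf")  # M = 0: the division-by-zero case discussed above
    return HBAR_C_MEV_FM / (2.0 * rest_energy_mev)

print(interaction_range_fm(139.6))  # charged pion: ~0.71 fm, roughly nuclear scale
print(interaction_range_fm(0))      # massless mediator: inf
```

This reproduces both observations above: M = 0 gives an infinite range, while a pion-mass mediator confines the interaction to about a femtometer.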
https://www.geeksforgeeks.org/python-pandas-dataframe-mean/
[ "# Python | Pandas dataframe.mean()\n\nPython is a great language for doing data analysis, primarily because of the fantastic ecosystem of data-centric python packages. Pandas is one of those packages and makes importing and analyzing data much easier.\n\nPandas` dataframe.mean()` function return the mean of the values for the requested axis. If the method is applied on a pandas series object, then the method returns a scalar value which is the mean value of all the observations in the dataframe. If the method is applied on a pandas dataframe object, then the method returns a pandas series object which contains the mean of the values over the specified axis.\n\nSyntax: DataFrame.mean(axis=None, skipna=None, level=None, numeric_only=None, **kwargs)\n\nParameters :\naxis : {index (0), columns (1)}\nskipna : Exclude NA/null values when computing the result\n\nlevel : If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series\n\nnumeric_only : Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.\n\nReturns : mean : Series or DataFrame (if level specified)\n\nExample #1: Use `mean()` function to find the mean of all the observations over the index axis.\n\n `# importing pandas as pd ` `import` `pandas as pd ` ` `  `# Creating the dataframe  ` `df ``=` `pd.DataFrame({``\"A\"``:[``12``, ``4``, ``5``, ``44``, ``1``], ` `                   ``\"B\"``:[``5``, ``2``, ``54``, ``3``, ``2``],  ` `                   ``\"C\"``:[``20``, ``16``, ``7``, ``3``, ``8``], ` `                   ``\"D\"``:[``14``, ``3``, ``17``, ``2``, ``6``]}) ` ` `  `# Print the dataframe ` `df `", null, "Let’s use the `dataframe.mean()` function to find the mean over the index axis.\n\n `# Even if we do not specify axis = 0, ` `# the method will return the mean over ` `# the index axis by default ` `df.mean(axis ``=` `0``) `\n\nOutput :", null, "Example #2: Use `mean()` function on a dataframe which has `Na` values. Also find the mean over the column axis.\n\n `# importing pandas as pd ` `import` `pandas as pd ` ` `  `# Creating the dataframe  ` `df ``=` `pd.DataFrame({``\"A\"``:[``12``, ``4``, ``5``, ``None``, ``1``], ` `                   ``\"B\"``:[``7``, ``2``, ``54``, ``3``, ``None``], ` `                   ``\"C\"``:[``20``, ``16``, ``11``, ``3``, ``8``],. ` `                   ``\"D\"``:[``14``, ``3``, ``None``, ``2``, ``6``]}) ` ` `  `# skip the Na values while finding the mean ` `df.mean(axis ``=` `1``, skipna ``=` `True``) `\n\nOutput :", null, "My Personal Notes arrow_drop_up", null, "Check out this Author's contributed articles.\n\nIf you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.\n\nPlease Improve this article if you find anything incorrect by clicking on the \"Improve Article\" button below.\n\nArticle Tags :\n\n1\n\nPlease write to us at [email protected] to report any issue with the above content." ]
[ null, "https://media.geeksforgeeks.org/wp-content/uploads/1-551.png", null, "https://media.geeksforgeeks.org/wp-content/uploads/1-561.png", null, "https://media.geeksforgeeks.org/wp-content/uploads/1-562.png", null, "https://media.geeksforgeeks.org/auth/profile/16bgo1jjncnewlu610hu", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.62699074,"math_prob":0.82446754,"size":2994,"snap":"2019-51-2020-05","text_gpt3_token_len":777,"char_repetition_ratio":0.18461539,"word_repetition_ratio":0.15808824,"special_character_ratio":0.2949232,"punctuation_ratio":0.19932432,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9923419,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,8,null,8,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-13T01:29:36Z\",\"WARC-Record-ID\":\"<urn:uuid:942bc4c2-c45f-4b62-9edb-32b9e191c847>\",\"Content-Length\":\"135815\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:27db0abd-8435-41d3-9350-93ea0e4f05b7>\",\"WARC-Concurrent-To\":\"<urn:uuid:122945de-02e0-4208-9a70-56ad5036c74e>\",\"WARC-IP-Address\":\"23.221.72.19\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/python-pandas-dataframe-mean/\",\"WARC-Payload-Digest\":\"sha1:BHZWZZ3KPM4JFCRL5JCX5UGD7FIZ5FIQ\",\"WARC-Block-Digest\":\"sha1:5YKNW6VYK7USKKHJBJNXBPRDEGBMR2RU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540547536.49_warc_CC-MAIN-20191212232450-20191213020450-00274.warc.gz\"}"}
https://isaacscienceblog.com/2016/02/16/
[ "Day: February 16, 2016", null, "# Power Homogeneous DE\n\nPower Homogeneous DE    02/16/16\nPower homogeneous Differential equations are differential equations that can be written as a function of yx. One can easily recognize a power homogeneous Differential equation if it is written in the form y’= G(x,y)H(x,y). If the equation is in this form, then we can make a substitution v= yxand then put in all in terms of Y." ]
[ null, "https://isaacscienceblog.files.wordpress.com/2016/02/eq0026mp.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90483636,"math_prob":0.9955983,"size":408,"snap":"2020-24-2020-29","text_gpt3_token_len":99,"char_repetition_ratio":0.18069308,"word_repetition_ratio":0.0,"special_character_ratio":0.23529412,"punctuation_ratio":0.072289154,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9960364,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-11T23:33:18Z\",\"WARC-Record-ID\":\"<urn:uuid:b3a08ed2-ce6f-4226-89f2-8da74ab57f0e>\",\"Content-Length\":\"40475\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d59ab89e-aedb-4fa4-ab06-b9d5b33d95a6>\",\"WARC-Concurrent-To\":\"<urn:uuid:e73798f9-dc07-40e5-8209-04b2c05709f1>\",\"WARC-IP-Address\":\"192.0.78.24\",\"WARC-Target-URI\":\"https://isaacscienceblog.com/2016/02/16/\",\"WARC-Payload-Digest\":\"sha1:4XDZLDLO5MQ3U2ZXRQMLZE2ORE53CYBT\",\"WARC-Block-Digest\":\"sha1:5WB6SQZDZD7F7WQFRQGW4DS4GK7G6HZR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657129257.81_warc_CC-MAIN-20200711224142-20200712014142-00082.warc.gz\"}"}
https://discuss.codechef.com/t/chn15f-editorial/11963
[ "", null, "# CHN15F - Editorial\n\nAuthor: Jingbo Shang\nTester: Kevin Atienza\nEditorialist: Kevin Atienza\n\nMedium-hard\n\n### PROBLEM:\n\nThere is a H \\times W grid. You are at (1,1), the bottom-left cell. There may be rocks at some cells, except (1,1).\n\nThe rocks are falling down, discretely. Specifically, a rock at (x, y) will be at (x, y - 1) at the next second, and will disappear once y becomes 0. You may move left or right to avoid being hit by a rock. At every second, you make your move, and then the rocks fall 1 cell down. You can only go to a cell within a grid if there are no rocks at your destination cell before and after the rocks fall.\n\nHow many different grids are there in which you can survive, if you always play in the optimal way after the layout is made available to you?\n\n### QUICK EXPLANATION:\n\nLet M be a subset of \\{1,\\ldots,W\\}, and R be a nonempty subset of M (R and M can be represented with bitmasks). For 1 \\le h \\le H, define f(h,M,R) as the number of survivable W\\times h grids such that in the topmost row, M is the set of cells not containing a rock, and R is the set of reachable cells from the start. The answer is clearly the sum of f(H,M,R) for all pairs (M,R). We can also express f(h,M,R) recursively by enumerating the possible sets M' for the second topmost row, and computing the new set R' based on R, M and M'. The base case is f(1,M,R) = 1 if R = \\{1\\} and 0 otherwise.\n\nThere are O(3^W H) valid arguments for f(h,M,R), and each one can be computed in O(2^W), so the overall running time is O(6^W H).\n\n### EXPLANATION:\n\nThis problem strongly suggests a DP + bitmask approach.\n\nThe general idea our solution will be that we will generate the grid row by row, from the bottom to top, and count the number of possibilities depending on the rows we’ve already placed. The hard part would be to determine how to count the number of possibilities given on the choices we’ve already done. Intuitively, this would work, because the rocks fall one row by one row, which means we only need to check the current and next row to determine where we can go. (Although the optimal strategy itself may require looking ahead several rows.)\n\nSince we’ll be building the grid row by row, it’s important to keep track of which cells don’t contain rocks in the current row. However, this is not enough! Consider for example the following case:\n\n##.#\n.#.#\n.###\n\n\nIf you look at it two consecutive rows at a time, it looks like there is always a way to get from each row to the next, thus be able to survive all the way to the last row. However, it doesn’t take into account the fact that the second free cell in the second row isn’t reachable! Thus, we also need to keep track of which cells are reachable, in addition to just the free cells.\n\nThroughout this algorithm, we fix the value W. For the current row, let M be the set of columns containing free cells, and R be the set of columns containing reachable cells. Clearly, R is a subset of M, and M is a subset of \\{1, 2 \\ldots W\\}.\n\nLet’s define f(h,M,R) as the number of grids with height h whose set of free cells is M and reachable cells R. Clearly, the answer is the sum of all f(H,M,R) for all nonempty R s. 
We also have the following base case:\n\nf(1,M,R) = \\begin{cases} 1 & \\text{if $R = \\\\{1\\\\}$} \\\\\\ 0 & \\text{otherwise} \\end{cases}\n\nThis is because only the cell (1,1) is initially reachable.\n\nNow, we need to express f(h,M,R) in terms of a sum involving the $f(h-1,M’,R’)$s.\n\nSuppose M' and R' are the sets of columns containing free cells and reachable cells at row h-1, respectively, and M and R for row h. What makes (M,R) “compatible” with (M',R')? (Compatible in the sense that (M,R) is a valid new state given the previous state (M',R')) Well, R must be the set of cells you can reach from the previous row assuming you start from one of the cells in R' and given the set of free cells in the current and previous rows: M and M' respectively. In fact, given M, M' and R', the set R is uniquely determined! Given M, M' and R', you can in fact compute all cells reachable at row h according to the rules of the problem.\n\nHere’s a more explicit formula: R is the union of three sets R_{-1}, R_0 and R_1, where:\n\nR_{-1} = \\{x : x \\in M \\text{ and } x \\in M' \\text{ and } x-1 \\in R'\\}\nR_0 = \\{x : x \\in M \\text{ and } x \\in M' \\text{ and } x \\in R'\\}\nR_1 = \\{x : x \\in M \\text{ and } x \\in M' \\text{ and } x+1 \\in R'\\}\n\nIn other words:\n\nR = M \\cap M' \\cap [(R'+1) \\cup R' \\cup (R'-1)]\n\nwhere we define R'+c = \\{x + c : x \\in R'\\}.\n\nIntuitively, R_{-1} is the set of cells you can reach by moving one step to the right. R_0 and R_1 are similarly defined.\n\nArmed with this, we now have the following recurrence:\n\nf(h,M,R) = \\sum_{\\substack{(M',R') \\\\\\ \\text{$(M,R)$ is compatible with $(M',R')$}}} f(h-1,M',R')\n\nThis gives us an algorithm to compute f(h,M,R) for all pairs (M,R) given f(h-1,M',R') for all pairs (M',R'):\n\n• Initialize all f(h,M,R) = 0.\n• For all pairs (M',R') and sets M, let R = M \\cap M' \\cap [(R'+1) \\cup R' \\cup (R'-1)]. Then add the value f(h-1,M',R') to f(h,M,R).\n\nFinally, we can now perform this for all values h up to H, and we can then extract the answer from the last set of f values!\n\nHow fast does this run? First, let’s count how many valid (M,R) pairs there are. Note that R must be a subset of M, and M a subset of \\{1 \\ldots W\\}. Thus, there are 2^W possible values for M. Naïvely, we know that there are 2^W choices for R, so there are at most 4^W pairs overall.\n\nBut we can find the exact value! Suppose M has k elements. Then there are 2^k possible values for R, and {W \\choose k} ways to choose the k elements of M itself. Thus, the total number of ways to choose (M,R) is:\n\n\\sum_{k=0}^W {W \\choose k} 2^k\n\nwhich by the binomial theorem we know to be equal to (2 + 1)^W = 3^W. Thus, there are 3^W pairs (M,R)!\n\nNow, back to the running time. Consider a single application of the step above to compute the $f(h,M,R)$s given the $f(h-1,M’,R’)s. The slowest part is enumerating all pairs (M’,R’)$ and sets M, of which there are 3^W \\cdot 2^W = 6^W possibilities. Assuming we can operate among subsets of \\{1 \\ldots W\\} in O(1) time each, which is possible when the sets are implemented as bitmasks, the running time of a single step is O(6^W). Since we will do this H times, the overall running time is O(6^WH). This is not polynomial time, but W is small enough in the problem so this will pass the time limit.\n\nAs a final bit of trivia, we mention that it is possible to solve this problem in O(W4^WH) time. We’ll leave it to the reader to discover!\n\n### Time Complexity:\n\nO(6^WH) or O(W4^WH)" ]
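To make the two-step procedure concrete, here is a compact sketch (mine, not the author's reference solution) of the O(6^W · H) algorithm above, mapping column i to bit i-1 so the set operations become bitwise AND/OR and shifts:

```python
from collections import defaultdict

def count_survivable_grids(H, W):
    """Count W x H grids in which the player, starting at column 1 of the
    bottom row, can survive; implements the f(h, M, R) recurrence above."""
    full = (1 << W) - 1
    # Base case f(1, M, R): cell (1,1) is rock-free and R = {1}.
    f = defaultdict(int)
    for M in range(1 << W):
        if M & 1:
            f[(M, 1)] += 1
    for _ in range(H - 1):
        g = defaultdict(int)
        for (Mp, Rp), cnt in f.items():                   # (M', R'), the row below
            spread = (Rp | (Rp << 1) | (Rp >> 1)) & full  # (R'+1) u R' u (R'-1)
            for M in range(1 << W):                       # free cells of the new top row
                R = M & Mp & spread                       # R is uniquely determined
                if R:                                     # empty R would mean death
                    g[(M, R)] += cnt
        f = g
    return sum(f.values())

print(count_survivable_grids(2, 2))
```

Each outer step enumerates up to 3^W live (M', R') states against 2^W choices of M, which is exactly the O(6^W) bound per row quoted above.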
[ null, "https://s3.amazonaws.com/discourseproduction/original/3X/7/f/7ffd6e5e45912aba9f6a1a33447d6baae049de81.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91081303,"math_prob":0.9923488,"size":6129,"snap":"2020-34-2020-40","text_gpt3_token_len":1682,"char_repetition_ratio":0.11934694,"word_repetition_ratio":0.025795357,"special_character_ratio":0.27361724,"punctuation_ratio":0.13849287,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996729,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-26T21:54:01Z\",\"WARC-Record-ID\":\"<urn:uuid:f5c2752c-cc4d-445e-87c2-294e42bda3bf>\",\"Content-Length\":\"22416\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:78b2cd9c-19f7-4419-adba-a2269e387eff>\",\"WARC-Concurrent-To\":\"<urn:uuid:8895b3e1-8b5b-4ee3-9583-1d1cd1bc4323>\",\"WARC-IP-Address\":\"18.213.158.143\",\"WARC-Target-URI\":\"https://discuss.codechef.com/t/chn15f-editorial/11963\",\"WARC-Payload-Digest\":\"sha1:RE6KJH5GJZGBQFXQCEEXCERI5WRL33QM\",\"WARC-Block-Digest\":\"sha1:DJJL7H3OZZWFRZNFJFI5U5WZQSPCHHIQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400245109.69_warc_CC-MAIN-20200926200523-20200926230523-00032.warc.gz\"}"}
https://balsammed.net/function-table-answer-key/
[ "# Function table answer key Awesome\n\n» » Function table answer key Awesome\n\nYour Function table answer key images are ready. Function table answer key are a topic that is being searched for and liked by netizens today. You can Download the Function table answer key files here. Get all free images.\n\nIf you’re searching for function table answer key pictures information connected with to the function table answer key interest, you have pay a visit to the right blog. Our website always gives you hints for refferencing the maximum quality video and image content, please kindly surf and locate more enlightening video content and graphics that fit your interests.\n\nFunction Table Answer Key. Then graph the function. Previous to preaching about Function Table Worksheet Answer Key you need to be aware that Education can be each of our key to a better next week as well as learning does not only halt after a college bell rings. Work off of the two variables that you are given to devise a solution. 12062018 Ex 1 Graph A Linear Equation Using Table Of Values You.", null, "An Introduction To Plotting Straight Line Graphs Use The Equation To Complete A Table Of V Graphing Worksheets Graphing Linear Equations Line Graph Worksheets From pinterest.com\n\nWorksheet by Lucas Kaufmann. 35 function table worksheet answers resource plans identifying functions tables worksheet free commoncoresheets identifying points of a function. Each of the numbered sheets gets progressively more difficult. 12062018 Ex 1 Graph A Linear Equation Using Table Of Values You. Quadratic Functions From Tables With Answers - Displaying top 8 worksheets found for this concept. 35 Function Table Worksheet Answers Resource Plans.\n\n### Great for a.\n\nPatterns Function Machine Worksheets Free Commoncoresheets. Function Table Worksheets With Answer Sheet. 12042018 Function Tables Worksheet For 6th 8th Grade Lesson Planet. B A - 1. The preview file shows the easiest and 2nd most difficult. Make a table of values that shows the number of cupcakes each guest will get if there are 6 10 or 15 guests.", null, "Source: pinterest.com\n\nGreat for a tiered lesson math center warm-up or just. These Function Table Worksheets are great for all levels of math. 2 6 2 3 7 2 2 7 0 20 16 4 10 5 2 16 12 0 2 3 4 8 4 3 4 2 5 0 4 4 6 0 7 4 8 6 10 8 6 9 3. Worksheet Works Graphing Linear Equations Answer Key. Easily check their work with the answer sheets.", null, "Source: pinterest.com\n\nB A - 1. 6 x x fx 3 fx 3. Whitney has a total of 30 cupcakes for her guestsThe function rule 30. The preview file shows the easiest and 2nd most difficult. 12102017 Identifying Functions With Ordered Pairs Tables Graphs Lesson Transcript Study Com.", null, "Source: pinterest.com\n\nGreat for a tiered lesson math center warm-up or just. These Function Table Worksheets are great for all levels of math. 11 14 7 13 7 x 1 22 27 15 x 8 5 12 6 f3. B A - 1. 35 function table worksheet answers resource plans identifying functions tables worksheet free commoncoresheets identifying points of a function.", null, "Source: es.pinterest.com\n\nGreat for a tiered lesson math center warm-up or just. 12062018 Ex 1 Graph A Linear Equation Using Table Of Values You. Here are 4 one-page sets of function tables WITH ANSWER KEYS. Make a table of values that shows the number of cupcakes each guest will get if there are 6 10 or 15 guests. 
Previous to preaching about Function Table Worksheet Answer Key you need to be aware that Education can be each of our key to a better next week as well as learning does not only halt after a college bell rings.", null, "Source: pinterest.com\n\nA Function Table - Linear Function L1ES1 x fx Complete the function table using the function rule fx 5x and answer the following questions. Here are 4 one-page sets of function tables WITH ANSWER KEYS. Worksheet 1 8 Homework Piecewise Functions Answer Key. 24 function tables in all 12 horizontal and 12 vertical. Complete the function table.", null, "Source: pinterest.com\n\nThe preview file shows the easiest and 2nd most difficult. READ Round Table Pizza Clubhouse Rocklin Ca. Each of the numbered sheets gets progressively more difficult. Here are 4 one-page sets of function tables WITH ANSWER KEYS. Patterns Function Machine Worksheets Free Commoncoresheets.", null, "Source: pinterest.com\n\nGreat for a tiered lesson math center warm-up or just. Complete the function table for each equation worksheet answer key brokeasshome com awesome co in 2020 graphing linear equations quadratic functions eighth grade tables 10 one page worksheets writing algebra identifying free commoncoresheets points of a and out boxes math expressions patterns 3rd words machine aids tessshlo. 7 2 Skills Practice Graphing Polynomial Functions Worksheet For 10th 12th Grade Lesson Planet. Whitney has a total of 30 cupcakes for her guestsThe function rule 30. Gallery of 20 Function Table Worksheet Answer Key.", null, "Source: pinterest.com\n\nUsing the rule stated above the table calculate the required value and the output field. Kids will be able to easily review and practice their math skills. Patterns Function Machine Worksheets Free Commoncoresheets. 4 x fx 2 fx 15. 11 14 7 13 7 x 1 22 27 15 x 8 5 12 6 f3.", null, "Source: pinterest.com\n\nWork off of the two variables that you are given to devise a solution. 35 function table worksheet answers resource plans identifying functions tables worksheet free commoncoresheets identifying points of a function. 7 2 5 3 4 4 5 2. Make a table of values that shows the number of cupcakes each guest will get if there are 6 10 or 15 guests. Using A Table Of Values To Graph Equations.", null, "Source: pinterest.com\n\n27052020 Input Output Tables Worksheets from function table worksheet answer key image source. Worksheet Works Graphing Linear Equations Answer Key. Then graph the function. Make a table of values that shows the number of cupcakes each guest will get if there are 6 10 or 15 guests. Previous to preaching about Function Table Worksheet Answer Key you need to be aware that Education can be each of our key to a better next week as well as learning does not only halt after a college bell rings.", null, "Source: pinterest.com\n\n4 x fx 2 fx 15. Gallery of 20 Function Table Worksheet Answer Key. The preview file shows the easiest and 2nd most difficult. Here are 4 one-page sets of function tables WITH ANSWER KEYS. 12042018 Function Tables Worksheet For 6th 8th Grade Lesson Planet.", null, "Source: pinterest.com\n\nWorksheet Works Graphing Linear Equations Answer Key. Great for a tiered lesson math center warm-up or just. This remaining mentioned we supply you with a number of basic but. Each of the numbered sheets gets progressively more difficult. 7 2 5 3 4 4 5 2.", null, "Source: pinterest.com\n\nYou will be able to go deep into your subject. 
6 3 2 0 2 8 48 1 4 2 2 6 36 3 8 4 6 5 30 5 10 6 10 0 0 7 12 8 14 1 6-9 -8 -7 -6 -5 -1-4 -3 -2 1 6-30-6-12-18-24-36. Easily check their work with the answer sheets. 07012018 Function table worksheets in and out boxes math expressions 3rd grade words word problems complete the for each equation worksheet answer key brokeasshome com awesome complet graphing linear equations functions aids tessshlo eighth tables 10 one page writing r answers nidecmege pre algebra systems of you Function Table Worksheets In And Out Boxes. The preview file shows the easiest and 2nd most difficult.", null, "Source: in.pinterest.com\n\nComplete the function table. 7 2 5 3 4 4 5 2. Great for a tiered lesson math center warm-up or just. Worksheet 1 8 Homework Piecewise Functions Answer Key. 4 x fx 2 fx 15.", null, "Source: pinterest.com\n\nIt will give you ideas on things that you did not know before and you will learn a lot of new facts about your topic. Two Variable Linear Equations Intro Khan Academy. Gallery of 20 Function Table Worksheet Answer Key. A class activity for you to introduce everything that falls under this concept. Worksheet Ks Answer Key Mixed Equations Math Maze Scientific.", null, "Source: pinterest.com\n\nB A - 1. Plot the points and graph the line. Worksheet by Lucas Kaufmann. Allows you by each table key is set in the sheets. Function Table Worksheets With Answer Sheet.", null, "Source: pinterest.com\n\nEach of the numbered sheets gets progressively more difficult. These Function Table Worksheets are great for all levels of math. 27052020 Input Output Tables Worksheets from function table worksheet answer key image source. Simply download and print these Function Table Worksheets. Function Table Worksheets With Answer Sheet.", null, "Source: pinterest.com\n\n27052020 Input Output Tables Worksheets from function table worksheet answer key image source. 35 Function Table Worksheet Answers Resource Plans. Complete the function table for each equation worksheet answer key brokeasshome com awesome co in 2020 graphing linear equations quadratic functions eighth grade tables 10 one page worksheets writing algebra identifying free commoncoresheets points of a and out boxes math expressions patterns 3rd words machine aids tessshlo. It will give you ideas on things that you did not know before and you will learn a lot of new facts about your topic. Allows you by each table key is set in the sheets.\n\nThis site is an open community for users to do sharing their favorite wallpapers on the internet, all images or pictures in this website are for personal wallpaper use only, it is stricly prohibited to use this wallpaper for commercial purposes, if you are the author and find this image is shared without your permission, please kindly raise a DMCA report to Us.\n\nIf you find this site beneficial, please support us by sharing this posts to your favorite social media accounts like Facebook, Instagram and so on or you can also save this blog page with the title function table answer key by using Ctrl + D for devices a laptop with a Windows operating system or Command + D for laptops with an Apple operating system. If you use a smartphone, you can also use the drawer menu of the browser you are using. Whether it’s a Windows, Mac, iOS or Android operating system, you will still be able to bookmark this website." ]
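Completing such a table is also easy to automate. A small sketch (not part of the worksheet materials) using the two rules mentioned above:

```python
def function_table(rule, inputs):
    """Return a completed function table as (x, f(x)) pairs."""
    return [(x, rule(x)) for x in inputs]

print(function_table(lambda x: 5 * x, [1, 2, 3, 4]))  # rule f(x) = 5x
print(function_table(lambda x: 30 / x, [6, 10, 15]))  # 30 cupcakes among x guests
```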
[ null, "https://balsammed.net/img/placeholder.svg", null, "https://balsammed.net/img/placeholder.svg", null, "https://balsammed.net/img/placeholder.svg", null, "https://balsammed.net/img/placeholder.svg", null, "https://balsammed.net/img/placeholder.svg", null, "https://balsammed.net/img/placeholder.svg", null, "https://balsammed.net/img/placeholder.svg", null, "https://balsammed.net/img/placeholder.svg", null, "https://balsammed.net/img/placeholder.svg", null, "https://balsammed.net/img/placeholder.svg", null, "https://balsammed.net/img/placeholder.svg", null, "https://balsammed.net/img/placeholder.svg", null, "https://balsammed.net/img/placeholder.svg", null, "https://balsammed.net/img/placeholder.svg", null, "https://balsammed.net/img/placeholder.svg", null, "https://balsammed.net/img/placeholder.svg", null, "https://balsammed.net/img/placeholder.svg", null, "https://balsammed.net/img/placeholder.svg", null, "https://balsammed.net/img/placeholder.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80108696,"math_prob":0.82812476,"size":9983,"snap":"2021-21-2021-25","text_gpt3_token_len":2277,"char_repetition_ratio":0.184287,"word_repetition_ratio":0.4389246,"special_character_ratio":0.2292898,"punctuation_ratio":0.08825065,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9535677,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-06T19:06:34Z\",\"WARC-Record-ID\":\"<urn:uuid:ce5c9883-0c79-4cd3-9a8f-678d4c2eccce>\",\"Content-Length\":\"33566\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1cd9010f-666d-4d4a-aff5-0a9382b89bb5>\",\"WARC-Concurrent-To\":\"<urn:uuid:fc0e5dba-e612-43eb-9707-20b44ee42999>\",\"WARC-IP-Address\":\"78.46.212.35\",\"WARC-Target-URI\":\"https://balsammed.net/function-table-answer-key/\",\"WARC-Payload-Digest\":\"sha1:2BGN2IMDSDH4ZW44RXEW3Y4HBI3AJDCH\",\"WARC-Block-Digest\":\"sha1:D6DWXTTOBF7Y3PHAZNA5XCIKQB5URVD7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988759.29_warc_CC-MAIN-20210506175146-20210506205146-00193.warc.gz\"}"}
https://studymoose.com/decision-analysis-essay
[ "We use cookies to give you the best experience possible. By continuing we’ll assume you’re on board with our cookie policy\n\nCheck Writers' Offers\n\nHire a Professional Writer Now\n\nThe input space is limited by 250 symbols\n\nChoose 3 Hours or More.\nBack\n2/4 steps\n\nHow Many Pages?\n\nBack\n3/4 steps\n\nBack\nGet Offer\n\n# Decision Analysis\n\nPaper type: Analysis 85 (21017 words)\nCategories: Decision Making, Taking Risks\n Views: 484\n\nChapter 4\n\nDECISION ANALYSIS\n\nCONTENTS 4.1 PROBLEM FORMULATION Influence Diagrams Payoff Tables Decision Trees DECISION MAKING WITHOUT PROBABILITIES Optimistic Approach Conservative Approach Minimax Regret Approach DECISION MAKING WITH PROBABILITIES Expected Value of Perfect Information RISK ANALYSIS AND SENSITIVITY ANALYSIS Risk Analysis Sensitivity Analysis DECISION ANALYSIS WITH SAMPLE INFORMATION An Influence Diagram A Decision Tree Decision Strategy Risk Profile Expected Value of Sample Information Efficiency of Sample Information COMPUTING BRANCH PROBABILITIES\n\n4.2\n\n4.3 4.4\n\n4.5\n\n4.6\n\nDecision analysis can be used to determine an optimal strategy when a decision maker is faced with several decision alternatives and an uncertain or risk-filled pattern of future events.\n\nFor example, a global manufacturer might be interested in determining the best location for a new plant. Suppose that the manufacturer has identified five decision alternatives corresponding to five plant locations in different countries. Making the plant location decision is complicated by factors such as the world economy, demand in various regions of the world, labor availability, raw material costs, transportation costs, and so on.\n\nIn such a problem, several scenarios could be developed to describe how the various factors combine to form the possible uncertain future events.\n\nThen probabilities can be assigned to the events. Using profit or cost as a measure of the consequence for each decision alternative and each future event combination, the best plant location can be selected. Even when a careful decision analysis has been conducted, the uncertain future events make the final consequence uncertain. In some cases, the selected decision alternative may provide good or excellent results. In other cases, a relatively unlikely future event may occur causing the selected decision alternative to provide only fair or even poor results. The risk associated with any decision alternative is a direct result of the uncertainty associated with the final consequence. A good decision analysis\n\nASW/QMB-Ch.04 3/8/01 10:35 AM Page 97\n\nChapter 4\n\nDecision Analysis\n\n97\n\nincludes risk analysis. Through risk analysis the decision maker is provided with probability information about the favorable as well as the unfavorable consequences that may occur. We begin the study of decision analysis by considering problems having reasonably few decision alternatives and reasonably few possible future events. Influence diagrams and payoff tables are introduced to provide a structure for the decision problem and to illustrate the fundamentals of decision analysis. We then introduce decision trees to show the sequential nature of decision problems. Decision trees are used to analyze more complex problems and to identify an optimal sequence of decisions, referred to as an optimal decision strategy. 
Sensitivity analysis shows how changes in various aspects of the problem affect the recommended decision alternative.\n\n4.1 PROBLEM FORMULATION\n\nThe first step in the decision analysis process is problem formulation. We begin with a verbal statement of the problem. We then identify the decision alternatives, the uncertain future events, referred to as chance events, and the consequences associated with each decision alternative and each chance event outcome. Let us begin by considering a construction project of the Pittsburgh Development Corporation.\n\nPittsburgh Development Corporation (PDC) has purchased land, which will be the site of a new luxury condominium complex. The location provides a spectacular view of downtown Pittsburgh and the Golden Triangle, where the Allegheny and Monongahela rivers meet to form the Ohio River. PDC plans to price the individual condominium units between \\$300,000 and \\$1,400,000.\n\nPDC has preliminary architectural drawings for three different-sized projects: one with 30 condominiums, one with 60 condominiums, and one with 90 condominiums. The financial success of the project depends upon the size of the condominium complex and the chance event concerning the demand for the condominiums. The statement of the PDC decision problem is to select the size of the new luxury condominium project that will lead to the largest profit given the uncertainty concerning the demand for the condominiums. Given the statement of the problem, it is clear that the decision is to select the best size for the condominium complex. PDC has the following three decision alternatives:\n\nd1 = a small complex with 30 condominiums\nd2 = a medium complex with 60 condominiums\nd3 = a large complex with 90 condominiums\n\nA factor in selecting the best decision alternative is the uncertainty associated with the chance event concerning the demand for the condominiums. When asked about the possible demand for the condominiums, PDC's president acknowledged a wide range of possibilities, but decided that it would be adequate to consider two possible chance event outcomes: a strong demand and a weak demand. In decision analysis, the possible outcomes for a chance event are referred to as the states of nature. The states of nature are defined so that one and only one of the possible states of nature will occur. For the PDC problem, the chance event concerning the demand for the condominiums has two states of nature:\n\ns1 = strong demand for the condominiums\ns2 = weak demand for the condominiums\n\nThus, management must first select a decision alternative (complex size), then a state of nature follows (demand for the condominiums), and finally a consequence will occur. In this case, the consequence is PDC's profit.\n\nInfluence Diagrams\n\nAn influence diagram is a graphical device that shows the relationships among the decisions, the chance events, and the consequences for a decision problem. The nodes in an influence diagram are used to represent the decisions, chance events, and consequences. Rectangles or squares are used to depict decision nodes, circles or ovals are used to depict chance nodes, and diamonds are used to depict consequence nodes. The lines connecting the nodes, referred to as arcs, show the direction of influence that the nodes have on one another. Figure 4.1 shows the influence diagram for the PDC problem. The complex size is the decision node, demand is the chance node, and profit is the consequence node.
The arcs connecting the nodes show that both the complex size and the demand influence PDC's profit.\n\nFIGURE 4.1 INFLUENCE DIAGRAM FOR THE PDC PROBLEM. (The figure shows the decision node Complex Size, with decision alternatives small complex (d1), medium complex (d2), and large complex (d3); the chance node Demand, with states of nature strong (s1) and weak (s2); and the consequence node Profit.)\n\nPayoff Tables\n\nGiven the three decision alternatives and the two states of nature, which complex size should PDC choose? To answer this question, PDC will need to know the consequence associated with each decision alternative and each state of nature. In decision analysis, we refer to the consequence resulting from a specific combination of a decision alternative and a state of nature as a payoff. A table showing payoffs for all combinations of decision alternatives and states of nature is a payoff table. Because PDC wants to select the complex size that provides the largest profit, profit is used as the consequence. The payoff table, with profits expressed in millions of dollars, is shown in Table 4.1. Note, for example, that if a medium complex is built and demand turns out to be strong, a profit of \\$14 million will be realized. We will use the notation Vij to denote the payoff associated with decision alternative i and state of nature j. Using Table 4.1, V31 = 20 indicates that a payoff of \\$20 million occurs if the decision is to build a large complex (d3) and the strong demand state of nature (s1) occurs. Similarly, V32 = -9 indicates a loss of \\$9 million if the decision is to build a large complex (d3) and the weak demand state of nature (s2) occurs.\n\nTABLE 4.1 PAYOFF TABLE FOR THE PDC CONDOMINIUM PROJECT (PAYOFFS IN \\$ MILLION)\n\nDecision Alternative | Strong Demand s1 | Weak Demand s2\nSmall complex, d1 | 8 | 7\nMedium complex, d2 | 14 | 5\nLarge complex, d3 | 20 | -9\n\nPayoffs can be expressed in terms of profit, cost, time, distance, or any other measure appropriate for the decision problem being analyzed.\n\nDecision Trees\n\nA decision tree provides a graphical representation of the decision-making process. Figure 4.2 presents a decision tree for the PDC problem. (If you have a payoff table, you can develop a decision tree. Try Problem 1(a).) Note that the decision tree shows the natural or logical progression that will occur over time. First, PDC must make a decision regarding the size of the condominium complex (d1, d2, or d3). Then, after the decision is implemented, either state of nature s1 or s2 will occur. The number at each end point of the tree indicates the payoff associated with a particular sequence. For example, the topmost payoff of 8 indicates that an \\$8 million profit is anticipated if PDC constructs a small condominium complex (d1) and demand turns out to be strong (s1). The next payoff of 7 indicates an anticipated profit of \\$7 million if PDC constructs a small condominium complex (d1) and demand turns out to be weak (s2). Thus, the decision tree shows graphically the sequences of decision alternatives and states of nature that provide the six possible payoffs for PDC.\n\nThe decision tree in Figure 4.2 has four nodes, numbered 1–4. Squares are used to depict decision nodes and circles are used to depict chance nodes. Thus, node 1 is a decision node, and nodes 2, 3, and 4 are chance nodes. The branches connect the nodes; the branches leaving the decision node correspond to the decision alternatives.
The branches leaving each chance node correspond to the states of nature. The payoffs are shown at the end of the states-of-nature branches. We now turn to the question: How can the decision maker use the information in the payoff table or the decision tree to select the best decision alternative? Several approaches may be used.\n\nFIGURE 4.2 DECISION TREE FOR THE PDC CONDOMINIUM PROJECT (PAYOFFS IN \\$ MILLION). (The figure shows decision node 1 branching to small (d1), medium (d2), and large (d3); chance nodes 2, 3, and 4 each branch to strong (s1) and weak (s2) demand, with payoffs of 8 and 7 for d1, 14 and 5 for d2, and 20 and -9 for d3.)\n\n1. Experts in problem solving agree that the first step in solving a complex problem is to decompose it into a series of smaller subproblems. Decision trees provide a useful way to show how a problem can be decomposed and the sequential nature of the decision process.\n2. People often view the same problem from different perspectives. Thus, the discussion regarding the development of a decision tree may provide additional insight about the problem.\n\n4.2 DECISION MAKING WITHOUT PROBABILITIES\n\n(Many people think of a good decision as one in which the consequence is good. However, in some instances, a good, well-thought-out decision may still lead to a bad or undesirable consequence.)\n\nIn this section we consider approaches to decision making that do not require knowledge of the probabilities of the states of nature. These approaches are appropriate in situations in which the decision maker has little confidence in his or her ability to assess the probabilities, or in which a simple best-case and worst-case analysis is desirable. Because different approaches sometimes lead to different decision recommendations, the decision maker needs to understand the approaches available and then select the specific approach that, according to the decision maker's judgment, is the most appropriate.\n\nOptimistic Approach\n\nThe optimistic approach evaluates each decision alternative in terms of the best payoff that can occur. The decision alternative that is recommended is the one that provides the best possible payoff. For a problem in which maximum profit is desired, as in the PDC problem, the optimistic approach would lead the decision maker to choose the alternative corresponding to the largest profit. For problems involving minimization, this approach leads to choosing the alternative with the smallest payoff.\n\nTo illustrate the optimistic approach, we use it to develop a recommendation for the PDC problem. First, we determine the maximum payoff for each decision alternative; then we select the decision alternative that provides the overall maximum payoff. These steps systematically identify the decision alternative that provides the largest possible profit. Table 4.2 illustrates these steps.\n\nTABLE 4.2 MAXIMUM PAYOFF FOR EACH PDC DECISION ALTERNATIVE\n\nDecision Alternative | Maximum Payoff\nSmall complex, d1 | 8\nMedium complex, d2 | 14\nLarge complex, d3 | 20 (maximum of the maximum payoff values)\n\nBecause 20, corresponding to d3, is the largest payoff, the decision to construct the large condominium complex is the recommended decision alternative using the optimistic approach. (For a maximization problem, the optimistic approach often is referred to as the maximax approach; for a minimization problem, the corresponding terminology is minimin.)
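The optimistic rule reduces to a couple of lines of code. This sketch is illustrative (mine, not the textbook's); the dictionary simply transcribes Table 4.1:

```python
# PDC payoffs in $ million: [strong demand s1, weak demand s2] per alternative.
payoffs = {"small d1": [8, 7], "medium d2": [14, 5], "large d3": [20, -9]}

# Optimistic (maximax): the alternative with the best best-case payoff.
best = max(payoffs, key=lambda d: max(payoffs[d]))
print(best, max(payoffs[best]))  # -> large d3 20, matching Table 4.2
```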
Conservative Approach
The conservative approach evaluates each decision alternative in terms of the worst payoff that can occur. The decision alternative recommended is the one that provides the best of the worst possible payoffs. For a problem in which the output measure is profit, as in the PDC problem, the conservative approach would lead the decision maker to choose the alternative that maximizes the minimum possible profit that could be obtained. For problems involving minimization, this approach identifies the alternative that will minimize the maximum payoff. To illustrate the conservative approach, we use it to develop a recommendation for the PDC problem. First, we identify the minimum payoff for each of the decision alternatives; then we select the decision alternative that maximizes the minimum payoff. Table 4.3 illustrates these steps for the PDC problem. Because 7, corresponding to d1, yields the maximum of the minimum payoffs, the decision alternative of a small condominium complex is recommended.

For a maximization problem, the conservative approach is often referred to as the maximin approach; for a minimization problem, the corresponding terminology is minimax.

TABLE 4.3 MINIMUM PAYOFF FOR EACH PDC DECISION ALTERNATIVE

    Decision Alternative     Minimum Payoff
    Small complex, d1               7  <- maximum of the minimum payoff values
    Medium complex, d2              5
    Large complex, d3              -9

This decision approach is considered conservative because it identifies the worst possible payoffs and then recommends the decision alternative that avoids the possibility of extremely "bad" payoffs. In the conservative approach, PDC is guaranteed a profit of at least $7 million. Although PDC may make more, it cannot make less than $7 million.
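The conservative recommendation of Table 4.3 follows from the same structure, with the inner max replaced by min. A minimal sketch under the same assumptions:

    # Conservative (maximin) approach: find the worst payoff for each
    # alternative, then recommend the alternative whose worst payoff is best.
    payoff = {
        "d1": {"s1": 8,  "s2": 7},
        "d2": {"s1": 14, "s2": 5},
        "d3": {"s1": 20, "s2": -9},
    }

    worst = {d: min(row.values()) for d, row in payoff.items()}
    print(worst)                      # {'d1': 7, 'd2': 5, 'd3': -9}
    print(max(worst, key=worst.get))  # 'd1' -- build the small complex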
Minimax Regret Approach
Minimax regret is an approach to decision making that is neither purely optimistic nor purely conservative. Let us illustrate the minimax regret approach by showing how it can be used to select a decision alternative for the PDC problem. Suppose that PDC constructs a small condominium complex (d1) and demand turns out to be strong (s1). Table 4.1 shows that the resulting profit for PDC would be $8 million. However, given that the strong demand state of nature (s1) has occurred, we realize that the decision to construct a large condominium complex (d3), yielding a profit of $20 million, would have been the best decision. The difference between the payoff for the best decision alternative ($20 million) and the payoff for the decision to construct a small condominium complex ($8 million) is the opportunity loss, or regret, associated with decision alternative d1 when state of nature s1 occurs; thus, for this case, the opportunity loss or regret is $20 million - $8 million = $12 million. Similarly, if PDC makes the decision to construct a medium condominium complex (d2) and the strong demand state of nature (s1) occurs, the opportunity loss, or regret, associated with d2 would be $20 million - $14 million = $6 million. In general, the following expression represents the opportunity loss, or regret:

    Rij = |V*j - Vij|    (4.1)

where

    Rij = the regret associated with decision alternative di and state of nature sj
    V*j = the payoff value corresponding to the best decision for the state of nature sj
    Vij = the payoff corresponding to decision alternative di and state of nature sj

In maximization problems, V*j will be the largest entry in column j of the payoff table; in minimization problems, V*j will be the smallest entry in column j of the payoff table.

Note the role of the absolute value in equation (4.1). That is, for minimization problems, the best payoff, V*j, is the smallest entry in column j. Because this value always is less than or equal to Vij, the absolute value of the difference between V*j and Vij ensures that the regret is always the magnitude of the difference. Using equation (4.1) and the payoffs in Table 4.1, we can compute the regret associated with each combination of decision alternative di and state of nature sj. Because the PDC problem is a maximization problem, V*j will be the largest entry in column j of the payoff table. Thus, to compute the regret, we simply subtract each entry in a column from the largest entry in the column. Table 4.4 shows the opportunity loss, or regret, table for the PDC problem.

TABLE 4.4 OPPORTUNITY LOSS, OR REGRET, TABLE FOR THE PDC CONDOMINIUM PROJECT ($ MILLION)

                                State of Nature
    Decision Alternative     Strong Demand s1    Weak Demand s2
    Small complex, d1               12                  0
    Medium complex, d2               6                  2
    Large complex, d3                0                 16

The next step in applying the minimax regret approach is to list the maximum regret for each decision alternative; Table 4.5 shows the results for the PDC problem. Selecting the decision alternative with the minimum of the maximum regret values, hence the name minimax regret, yields the minimax regret decision. For the PDC problem, the alternative to construct the medium condominium complex, with a corresponding maximum regret of $6 million, is the recommended minimax regret decision.

TABLE 4.5 MAXIMUM REGRET FOR EACH PDC DECISION ALTERNATIVE

    Decision Alternative     Maximum Regret
    Small complex, d1              12
    Medium complex, d2              6  <- minimum of the maximum regret values
    Large complex, d3              16

For practice in developing a decision recommendation using the optimistic, conservative, and minimax regret approaches, try Problem 1(b).

Note that the three approaches discussed in this section provide different recommendations, which in itself isn't bad. It simply reflects the difference in decision-making philosophies that underlie the various approaches. Ultimately, the decision maker will have to choose the most appropriate approach and then make the final decision accordingly. The main criticism of the approaches discussed in this section is that they do not consider any information about the probabilities of the various states of nature. In the next section we discuss an approach that utilizes probability information in selecting a decision alternative.
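Equation (4.1) and Tables 4.4 and 4.5 translate into a few lines. A minimal sketch in Python, again with hypothetical names; v_star holds the best payoff V*j for each state:

    # Minimax regret: regret R_ij = |V*_j - V_ij| per equation (4.1),
    # then minimize the maximum regret across states of nature.
    payoff = {
        "d1": {"s1": 8,  "s2": 7},
        "d2": {"s1": 14, "s2": 5},
        "d3": {"s1": 20, "s2": -9},
    }
    states = ["s1", "s2"]

    # Best payoff per state (maximization problem): {'s1': 20, 's2': 7}.
    v_star = {s: max(payoff[d][s] for d in payoff) for s in states}

    regret = {d: {s: abs(v_star[s] - payoff[d][s]) for s in states} for d in payoff}
    max_regret = {d: max(regret[d].values()) for d in payoff}
    print(max_regret)                         # {'d1': 12, 'd2': 6, 'd3': 16}
    print(min(max_regret, key=max_regret.get))  # 'd2' -- medium complex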
4.3 DECISION MAKING WITH PROBABILITIES
In many decision-making situations, we can obtain probability assessments for the states of nature. When such probabilities are available, we can use the expected value approach to identify the best decision alternative. Let us first define the expected value of a decision alternative and then apply it to the PDC problem. Let

    N = the number of states of nature
    P(sj) = the probability of state of nature sj

Because one and only one of the N states of nature can occur, the probabilities must satisfy two conditions:

    P(sj) >= 0 for all states of nature    (4.2)

    Σj P(sj) = P(s1) + P(s2) + ... + P(sN) = 1    (4.3)

The expected value (EV) of decision alternative di is defined as follows:

    EV(di) = Σj P(sj)Vij    (4.4)

where the sum runs over j = 1, ..., N. In words, the expected value of a decision alternative is the sum of weighted payoffs for the decision alternative. The weight for a payoff is the probability of the associated state of nature and therefore the probability that the payoff will occur. Let us return to the PDC problem to see how the expected value approach can be applied. PDC is optimistic about the potential for the luxury high-rise condominium complex. Suppose that this optimism leads to an initial subjective probability assessment of 0.8 that demand will be strong (s1) and a corresponding probability of 0.2 that demand will be weak (s2). Thus, P(s1) = 0.8 and P(s2) = 0.2. Using the payoff values in Table 4.1 and equation (4.4), we compute the expected value for each of the three decision alternatives as follows:

    EV(d1) = 0.8(8) + 0.2(7) = 7.8
    EV(d2) = 0.8(14) + 0.2(5) = 12.2
    EV(d3) = 0.8(20) + 0.2(-9) = 14.2

Can you now use the expected value approach to develop a decision recommendation? Try Problem 5.

Thus, using the expected value approach, we find that the large condominium complex, with an expected value of $14.2 million, is the recommended decision. The calculations required to identify the decision alternative with the best expected value can be conveniently carried out on a decision tree. Figure 4.3 shows the decision tree for the PDC problem with state-of-nature branch probabilities. Working backward through the decision tree, we first compute the expected value at each chance node. That is, at each chance node, we weight each possible payoff by its probability of occurrence. By doing so, we obtain the expected values for nodes 2, 3, and 4, as shown in Figure 4.4. Because the decision maker controls the branch leaving decision node 1 and because we are trying to maximize the expected profit, the best decision alternative at node 1 is d3. Thus, the decision tree analysis leads to a recommendation of d3 with an expected value of $14.2 million. Note that this recommendation is also obtained with the expected value approach in conjunction with the payoff table.

Computer software packages are available to help in constructing more complex decision trees.

FIGURE 4.3 PDC DECISION TREE WITH STATE-OF-NATURE BRANCH PROBABILITIES (the tree of Figure 4.2 with P(s1) = 0.8 and P(s2) = 0.2 shown on each state-of-nature branch)

FIGURE 4.4 APPLYING THE EXPECTED VALUE APPROACH USING DECISION TREES (EV(d1) = 0.8(8) + 0.2(7) = $7.8; EV(d2) = 0.8(14) + 0.2(5) = $12.2; EV(d3) = 0.8(20) + 0.2(-9) = $14.2)
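The expected value computation of equation (4.4) is a probability-weighted sum over each row of the payoff table. A minimal sketch under the same assumptions:

    # Expected value approach, equation (4.4): EV(di) = sum over j of P(sj) * Vij.
    payoff = {
        "d1": {"s1": 8,  "s2": 7},
        "d2": {"s1": 14, "s2": 5},
        "d3": {"s1": 20, "s2": -9},
    }
    prior = {"s1": 0.8, "s2": 0.2}

    ev = {d: sum(prior[s] * v for s, v in row.items()) for d, row in payoff.items()}
    print({d: round(x, 1) for d, x in ev.items()})  # {'d1': 7.8, 'd2': 12.2, 'd3': 14.2}
    print(max(ev, key=ev.get))                      # 'd3' with EV = $14.2 million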
Other decision problems may be substantially more complex than the PDC problem, but if a reasonable number of decision alternatives and states of nature are present, you can use the decision tree approach outlined here. First, draw a decision tree consisting of decision nodes, chance nodes, and branches that describe the sequential nature of the problem. If you use the expected value approach, the next step is to determine the probabilities for each of the states of nature and compute the expected value at each chance node. Then select the decision branch leading to the chance node with the best expected value. The decision alternative associated with this branch is the recommended decision.

Expected Value of Perfect Information
Suppose that PDC has the opportunity to conduct a market research study that would help evaluate buyer interest in the condominium project and provide information that management could use to improve the probability assessments for the states of nature. To determine the potential value of this information, we begin by supposing that the study could provide perfect information regarding the states of nature; that is, we assume for the moment that PDC could determine with certainty, prior to making a decision, which state of nature is going to occur. To make use of this perfect information, we will develop a decision strategy that PDC should follow once it knows which state of nature will occur. A decision strategy is simply a decision rule that specifies the decision alternative to be selected after new information becomes available. To help determine the decision strategy for PDC, we have reproduced PDC's payoff table as Table 4.6.

TABLE 4.6 PAYOFF TABLE FOR THE PDC CONDOMINIUM PROJECT ($ MILLION)

                                State of Nature
    Decision Alternative     Strong Demand s1    Weak Demand s2
    Small complex, d1                8                  7
    Medium complex, d2              14                  5
    Large complex, d3               20                 -9

Note that, if PDC knew for sure that state of nature s1 would occur, the best decision alternative would be d3, with a payoff of $20 million. Similarly, if PDC knew for sure that state of nature s2 would occur, the best decision alternative would be d1, with a payoff of $7 million. Thus, we can state PDC's optimal decision strategy when the perfect information becomes available as follows:

    If s1, select d3 and receive a payoff of $20 million.
    If s2, select d1 and receive a payoff of $7 million.

What is the expected value for this decision strategy? To compute the expected value with perfect information, we return to the original probabilities for the states of nature: P(s1) = 0.8 and P(s2) = 0.2. Thus, there is a 0.8 probability that the perfect information will indicate state of nature s1 and the resulting decision alternative d3 will provide a $20 million profit. Similarly, with a 0.2 probability for state of nature s2, the optimal decision alternative d1 will provide a $7 million profit. Thus, from equation (4.4), the expected value of the decision strategy that uses perfect information is

    0.8(20) + 0.2(7) = 17.4

It would be worth $3.2 million for PDC to learn the level of market acceptance before selecting a decision alternative.

We refer to the expected value of $17.4 million as the expected value with perfect information (EVwPI). Earlier in this section we showed that the recommended decision using the expected value approach is decision alternative d3, with an expected value of $14.2 million. Because this decision recommendation and expected value computation were made without the benefit of perfect information, $14.2 million is referred to as the expected value without perfect information (EVwoPI). The expected value with perfect information is $17.4 million, and the expected value without perfect information is $14.2 million; therefore, the expected value of the perfect information (EVPI) is $17.4 - $14.2 = $3.2 million. In other words, $3.2 million represents the additional expected value that can be obtained if perfect information were available about the states of nature. Generally speaking, a market research study will not provide "perfect" information; however, if the market research study is a good one, the information gathered might be worth a sizable portion of the $3.2 million.
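The EVwPI and EVPI calculations can be checked the same way: take the best payoff for each state of nature, weight by the priors, and compare with the best ordinary expected value. A minimal sketch (hypothetical names):

    # Perfect information: pick the best alternative for each state, weight
    # by the prior; EVPI is the gap between that and the best EV without it.
    payoff = {
        "d1": {"s1": 8,  "s2": 7},
        "d2": {"s1": 14, "s2": 5},
        "d3": {"s1": 20, "s2": -9},
    }
    prior = {"s1": 0.8, "s2": 0.2}

    ev_wpi = sum(p * max(payoff[d][s] for d in payoff) for s, p in prior.items())
    ev_wopi = max(sum(prior[s] * v for s, v in row.items()) for row in payoff.values())
    print(round(ev_wpi, 1))            # 17.4
    print(round(ev_wpi - ev_wopi, 1))  # 3.2 -- the EVPI, in $ million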
Given the EVPI of $3.2 million, PDC should seriously consider the market survey as a way to obtain more information about the states of nature. In general, the expected value of perfect information is computed as follows:

    EVPI = |EVwPI - EVwoPI|    (4.5)

where

    EVPI = expected value of perfect information
    EVwPI = expected value with perfect information about the states of nature
    EVwoPI = expected value without perfect information about the states of nature

For practice in determining the expected value of perfect information, try Problem 14.

Note the role of the absolute value in equation (4.5). For minimization problems the expected value with perfect information is always less than or equal to the expected value without perfect information. In this case, EVPI is the magnitude of the difference between EVwPI and EVwoPI, or the absolute value of the difference as shown in equation (4.5).

We restate the opportunity loss, or regret, table for the PDC problem (see Table 4.4) as follows.

                                State of Nature
    Decision Alternative     Strong Demand s1    Weak Demand s2
    Small complex, d1               12                  0
    Medium complex, d2               6                  2
    Large complex, d3                0                 16

Using P(s1), P(s2), and the opportunity loss values, we can compute the expected opportunity loss (EOL) for each decision alternative. With P(s1) = 0.8 and P(s2) = 0.2, the expected opportunity loss for each of the three decision alternatives is

    EOL(d1) = 0.8(12) + 0.2(0) = 9.6
    EOL(d2) = 0.8(6) + 0.2(2) = 5.2
    EOL(d3) = 0.8(0) + 0.2(16) = 3.2

Regardless of whether the decision analysis involves maximization or minimization, the minimum expected opportunity loss always provides the best decision alternative. Thus, with EOL(d3) = 3.2, d3 is the recommended decision. In addition, the minimum expected opportunity loss always is equal to the expected value of perfect information. That is, EOL(best decision) = EVPI; for the PDC problem, this value is $3.2 million.
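The EOL figures follow from weighting the regret table by the priors. A minimal sketch, using the regret values of Table 4.4:

    # Expected opportunity loss: weight the regret table by the priors;
    # the minimum EOL identifies the same alternative and equals the EVPI.
    regret = {
        "d1": {"s1": 12, "s2": 0},
        "d2": {"s1": 6,  "s2": 2},
        "d3": {"s1": 0,  "s2": 16},
    }
    prior = {"s1": 0.8, "s2": 0.2}

    eol = {d: sum(prior[s] * r for s, r in row.items()) for d, row in regret.items()}
    print({d: round(x, 1) for d, x in eol.items()})  # {'d1': 9.6, 'd2': 5.2, 'd3': 3.2}
    print(min(eol, key=eol.get))                     # 'd3' -- minimum EOL 3.2 = EVPI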
4.4 RISK ANALYSIS AND SENSITIVITY ANALYSIS
In this section, we introduce risk analysis and sensitivity analysis. Risk analysis can be used to provide probabilities for the payoffs associated with a decision alternative. As a result, risk analysis helps the decision maker recognize the difference between the expected value of a decision alternative and the payoff that may actually occur. Sensitivity analysis also helps the decision maker by describing how changes in the state-of-nature probabilities and/or changes in the payoffs affect the recommended decision alternative.

Risk Analysis
A decision alternative and a state of nature combine to generate the payoff associated with a decision. The risk profile for a decision alternative shows the possible payoffs along with their associated probabilities. Let us demonstrate risk analysis and the construction of a risk profile by returning to the PDC condominium construction project. Using the expected value approach, we identified the large condominium complex (d3) as the best decision alternative. The expected value of $14.2 million for d3 is based on a 0.8 probability of obtaining a $20 million profit and a 0.2 probability of obtaining a $9 million loss. The 0.8 probability for the $20 million payoff and the 0.2 probability for the -$9 million payoff provide the risk profile for the large complex decision alternative. This risk profile is shown graphically in Figure 4.5. Sometimes a review of the risk profile associated with an optimal decision alternative may cause the decision maker to choose another decision alternative even though the expected value of the other decision alternative is not as good. For example, the risk profile for the medium complex decision alternative (d2) shows a 0.8 probability for a $14 million payoff and a 0.2 probability for a $5 million payoff. Because no probability of a loss is associated with decision alternative d2, the medium complex decision alternative would be judged less risky than the large complex decision alternative. As a result, a decision maker might prefer the less risky medium complex decision alternative even though it has an expected value of $2 million less than the large complex decision alternative.

FIGURE 4.5 RISK PROFILE FOR THE LARGE COMPLEX DECISION ALTERNATIVE FOR THE PDC CONDOMINIUM PROJECT (a bar chart of probability versus profit in $ millions: probability 0.2 at -9 and probability 0.8 at 20)

Sensitivity Analysis
Sensitivity analysis can be used to determine how changes in the probabilities for the states of nature and/or changes in the payoffs affect the recommended decision alternative. In many cases, the probabilities for the states of nature and the payoffs are based on subjective assessments. Sensitivity analysis helps the decision maker understand which of these inputs are critical to the choice of the best decision alternative. If a small change in the value of one of the inputs causes a change in the recommended decision alternative, the solution to the decision analysis problem is sensitive to that particular input. Extra effort and care should be taken to make sure the input value is as accurate as possible. On the other hand, if a modest to large change in the value of one of the inputs does not cause a change in the recommended decision alternative, the solution to the decision analysis problem is not sensitive to that particular input. No extra time or effort would be needed to refine the estimated input value.

One approach to sensitivity analysis is to select different values for the probabilities of the states of nature and/or the payoffs and then resolve the decision analysis problem. If the recommended decision alternative changes, we know that the solution is sensitive to the changes made. For example, suppose that in the PDC problem the probability for a strong demand is revised to 0.2 and the probability for a weak demand is revised to 0.8. Would the recommended decision alternative change? Using P(s1) = 0.2, P(s2) = 0.8, and equation (4.4), the revised expected values for the three decision alternatives are

    EV(d1) = 0.2(8) + 0.8(7) = 7.2
    EV(d2) = 0.2(14) + 0.8(5) = 6.8
    EV(d3) = 0.2(20) + 0.8(-9) = -3.2

Computer software packages for decision analysis, such as Precision Tree, make it easy to calculate these revised scenarios.

With these probability assessments the recommended decision alternative is to construct a small condominium complex (d1), with an expected value of $7.2 million. The probability of strong demand is only 0.2, so constructing the large condominium complex (d3) is the least preferred alternative, with an expected value of -$3.2 million (a loss).
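Re-solving under revised probabilities is mechanical once the expected value computation is a function of P(s1). A minimal sketch; the function name recommend is ours:

    # Re-solving with revised probabilities: the recommendation is sensitive
    # to P(s1) if changing it flips the best alternative.
    payoff = {
        "d1": {"s1": 8,  "s2": 7},
        "d2": {"s1": 14, "s2": 5},
        "d3": {"s1": 20, "s2": -9},
    }

    def recommend(p_strong):
        prior = {"s1": p_strong, "s2": 1.0 - p_strong}
        ev = {d: sum(prior[s] * v for s, v in row.items()) for d, row in payoff.items()}
        return max(ev, key=ev.get)

    print(recommend(0.8))  # 'd3' -- the original assessment
    print(recommend(0.2))  # 'd1' -- the revised assessment flips the decision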
Thus, when the probability of strong demand is large, PDC should build the large complex; when the probability of strong demand is small, PDC should build the small complex. Obviously, we could continue to modify the probabilities of the states of nature and learn even more about how changes in the probabilities affect the recommended decision alternative. The drawback to this approach is the numerous calculations required to evaluate the effect of several possible changes in the state-of-nature probabilities.

For the special case of two states of nature, a graphical procedure can be used to determine how changes in the probabilities of the states of nature affect the recommended decision alternative. To demonstrate this procedure, we let p denote the probability of state of nature s1; that is, P(s1) = p. With only two states of nature in the PDC problem, the probability of state of nature s2 is

    P(s2) = 1 - P(s1) = 1 - p

Using equation (4.4) and the payoff values in Table 4.1, we determine the expected value for decision alternative d1 as follows:

    EV(d1) = P(s1)(8) + P(s2)(7)
           = p(8) + (1 - p)(7)
           = 8p + 7 - 7p = p + 7    (4.6)

Repeating the expected value computations for decision alternatives d2 and d3, we obtain expressions for the expected value of each decision alternative as a function of p:

    EV(d2) = 9p + 5     (4.7)
    EV(d3) = 29p - 9    (4.8)

Thus, we have developed three equations that show the expected value of the three decision alternatives as a function of the probability of state of nature s1. We continue by developing a graph with values of p on the horizontal axis and the associated EVs on the vertical axis. Because equations (4.6), (4.7), and (4.8) are linear equations, the graph of each equation is a straight line. For each equation, then, we can obtain the line by identifying two points that satisfy the equation and drawing a line through the points. For instance, if we let p = 0 in equation (4.6), EV(d1) = 7. Then, letting p = 1, EV(d1) = 8. Connecting these two points, (0, 7) and (1, 8), provides the line labeled EV(d1) in Figure 4.6. Similarly, we obtain the lines labeled EV(d2) and EV(d3); these lines are the graphs of equations (4.7) and (4.8), respectively.

FIGURE 4.6 EXPECTED VALUE FOR THE PDC DECISION ALTERNATIVES AS A FUNCTION OF p (the three EV lines plotted for p from 0 to 1; d1 provides the highest EV at the left of the graph, d2 in the middle, and d3 at the right)

Figure 4.6 shows how the recommended decision changes as p, the probability of the strong demand state of nature (s1), changes. Note that for small values of p, decision alternative d1 (small complex) provides the largest expected value and is thus the recommended decision. When the value of p increases to a certain point, decision alternative d2 (medium complex) provides the largest expected value and is the recommended decision. Finally, for large values of p, decision alternative d3 (large complex) becomes the recommended decision. The value of p for which the expected values of d1 and d2 are equal is the value of p corresponding to the intersection of the EV(d1) and the EV(d2) lines. To determine this value, we set EV(d1) = EV(d2) and solve for the value of p:

    p + 7 = 9p + 5
        2 = 8p
        p = 2/8 = 0.25

Graphical sensitivity analysis shows how changes in the probabilities for the states of nature affect the recommended decision alternative. Try Problem 8.
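The intersection points used in Figure 4.6 can be computed directly from the slopes and intercepts of equations (4.6) through (4.8). A minimal sketch (hypothetical names):

    # The EV lines of equations (4.6)-(4.8) as (slope, intercept) pairs in p;
    # the break points are where adjacent lines intersect.
    lines = {"d1": (1, 7), "d2": (9, 5), "d3": (29, -9)}

    def crossing(a, b):
        # Solve m1*p + b1 = m2*p + b2 for p.
        (m1, b1), (m2, b2) = lines[a], lines[b]
        return (b2 - b1) / (m1 - m2)

    print(crossing("d1", "d2"))  # 0.25
    print(crossing("d2", "d3"))  # 0.7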
Hence, when p = 0.25, decision alternatives d1 and d2 provide the same expected value. Repeating this calculation for the value of p corresponding to the intersection of the EV(d2) and EV(d3) lines, we obtain p = 0.70. Using Figure 4.6, we can conclude that decision alternative d1 provides the largest expected value for p <= 0.25, decision alternative d2 provides the largest expected value for 0.25 <= p <= 0.70, and decision alternative d3 provides the largest expected value for p >= 0.70. Because p is the probability of state of nature s1 and (1 - p) is the probability of state of nature s2, we now have the sensitivity analysis information that tells us how changes in the state-of-nature probabilities affect the recommended decision alternative.

Sensitivity analysis calculations can also be made for the values of the payoffs. In the original PDC problem, the expected values for the three decision alternatives were as follows: EV(d1) = 7.8, EV(d2) = 12.2, and EV(d3) = 14.2. Decision alternative d3 (large complex) was recommended. Note that decision alternative d2 with EV(d2) = 12.2 was the second best decision alternative. Decision alternative d3 will remain the optimal decision alternative as long as EV(d3) is greater than or equal to the expected value of the second best decision alternative. Thus, decision alternative d3 will remain the optimal decision alternative as long as

    EV(d3) >= 12.2    (4.9)

Let

    S = the payoff of decision alternative d3 when demand is strong
    W = the payoff of decision alternative d3 when demand is weak

Using P(s1) = 0.8 and P(s2) = 0.2, the general expression for EV(d3) is

    EV(d3) = 0.8S + 0.2W    (4.10)

Assuming that the payoff for d3 stays at its original value of -$9 million when demand is weak, the large complex decision alternative will remain optimal as long as

    0.8S + 0.2(-9) >= 12.2    (4.11)

Solving for S, we have

    0.8S - 1.8 >= 12.2
          0.8S >= 14
             S >= 17.5

Recall that when demand is strong, decision alternative d3 has an estimated payoff of $20 million. The preceding calculation shows that decision alternative d3 will remain optimal as long as the payoff for d3 when demand is strong is at least $17.5 million. Assuming that the payoff for d3 stays at its original value of $20 million when demand is strong, we can make a similar calculation to learn how sensitive the optimal solution is with regard to the payoff for d3 when demand is weak. Returning to the expected value calculation of equation (4.10), we know that the large complex decision alternative will remain optimal as long as

    0.8(20) + 0.2W >= 12.2    (4.12)

Solving for W, we have

    16 + 0.2W >= 12.2
         0.2W >= -3.8
            W >= -19

Recall that when demand is weak, decision alternative d3 has an estimated payoff of -$9 million. The preceding calculation shows that decision alternative d3 will remain optimal as long as the payoff for d3 when demand is weak is at least -$19 million; that is, as long as the loss when demand is weak does not exceed $19 million.

Sensitivity analysis can assist management in deciding whether more time and effort should be spent obtaining better estimates of payoffs and/or probabilities.

Based on this sensitivity analysis, we conclude that the payoffs for the large complex decision alternative (d3) could vary considerably and d3 would remain the recommended decision alternative. Thus, we conclude that the optimal solution for the PDC decision problem is not particularly sensitive to the payoffs for the large complex decision alternative.
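Both payoff thresholds come from solving the linear inequality (4.9) with one payoff held fixed. A minimal sketch under the same assumptions:

    # Payoff sensitivity for d3: EV(d3) = 0.8*S + 0.2*W must stay at or above
    # the second-best expected value of 12.2 (decision alternative d2).
    second_best = 12.2

    # Hold W at -9 and solve 0.8*S + 0.2*(-9) >= 12.2 for the smallest S.
    s_min = (second_best - 0.2 * (-9)) / 0.8
    print(round(s_min, 1))   # 17.5 -- d3 stays optimal while S >= 17.5

    # Hold S at 20 and solve 0.8*20 + 0.2*W >= 12.2 for the smallest W.
    w_min = (second_best - 0.8 * 20) / 0.2
    print(round(w_min, 1))   # -19.0 -- d3 stays optimal while W >= -19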
We note, however, that this sensitivity analysis has been conducted based on only one change at a time. That is, only one payoff was changed and the probabilities for the states of nature remained P(s1) = 0.8 and P(s2) = 0.2. Note that similar sensitivity analysis calculations can be made for the payoffs associated with the small complex decision alternative d1 and the medium complex decision alternative d2. However, in these cases, decision alternative d3 remains optimal only if the changes in the payoffs for decision alternatives d1 and d2 meet the requirements that EV(d1) <= 14.2 and EV(d2) <= 14.2.

NOTES AND COMMENTS
1. Some decision analysis software packages automatically provide the risk profile for the optimal decision alternative. These packages also allow the user to obtain the risk profiles for other decision alternatives. After comparing the risk profiles, a decision maker may decide to select a decision alternative with a good risk profile even though the expected value of the decision alternative is not as good as that of the optimal decision alternative.
2. A tornado diagram, a graphical display, is particularly helpful when several inputs combine to determine the value of the optimal solution. By varying each input over its range of values, we obtain information about how each input affects the value of the optimal solution. To display this information, a bar is constructed for each input, with the width of the bar showing how the input affects the value of the optimal solution. The widest bar corresponds to the input that is most sensitive. The bars are arranged in a graph with the widest bar at the top, resulting in a graph that has the appearance of a tornado.

4.5 DECISION ANALYSIS WITH SAMPLE INFORMATION
In applying the expected value approach, we have shown how probability information about the states of nature affects the expected value calculations and thus the decision recommendation. Frequently, decision makers have preliminary or prior probability assessments for the states of nature that are the best probability values available at that time. However, to make the best possible decision, the decision maker may want to seek additional information about the states of nature. This new information can be used to revise or update the prior probabilities so that the final decision is based on more accurate probabilities for the states of nature. Most often, additional information is obtained through experiments designed to provide sample information about the states of nature. Raw material sampling, product testing, and market research studies are examples of experiments (or studies) that may enable management to revise or update the state-of-nature probabilities. These revised probabilities are called posterior probabilities.

Let us return to the PDC problem and assume that management is considering a six-month market research study designed to learn more about potential market acceptance of the PDC condominium project. Management anticipates that the market research study will provide one of the following two results:

1. Favorable report: A significant number of the individuals contacted express interest in purchasing a PDC condominium.
2. Unfavorable report: Very few of the individuals contacted express interest in purchasing a PDC condominium.
An Influence Diagram
By introducing the possibility of conducting a market research study, the PDC problem becomes more complex. The influence diagram for the expanded PDC problem is shown in Figure 4.7. Note that the two decision nodes correspond to the research study and the complex-size decisions. The two chance nodes correspond to the research study results and demand for the condominiums. Finally, the consequence node is the profit. From the arcs of the influence diagram, we see that demand influences both the research study results and profit. Although demand is currently unknown to PDC, some level of demand for the condominiums already exists in the Pittsburgh area. If existing demand is strong, the research study is likely to find a significant number of individuals who express an interest in purchasing a condominium. However, if the existing demand is weak, the research study is more likely to find a significant number of individuals who express little interest in purchasing a condominium. In this sense, existing demand for the condominiums will influence the research study results. And clearly, demand will have an influence upon PDC's profit.

The arc from the research study decision node to the complex-size decision node indicates that the research study decision precedes the complex-size decision. No arc spans from the research study decision node to the research study results node, because the decision to conduct the research study does not actually influence the research study results. The decision to conduct the research study makes the research study results available, but it does not influence the results of the research study. Finally, the complex-size node and the demand node both influence profit. Note that if there were a stated cost to conduct the research study, the decision to conduct the research study would also influence profit. In such a case, we would need to add an arc from the research study decision node to the profit node to show the influence that the research study cost would have on profit.

FIGURE 4.7 INFLUENCE DIAGRAM FOR THE PDC PROBLEM WITH SAMPLE INFORMATION (decision nodes: Research Study and Complex Size; chance nodes: Research Study Results and Demand; consequence node: Profit)

A Decision Tree
The decision tree for the PDC problem with sample information shows the logical sequence for the decisions and the chance events. First, PDC's management must decide whether the market research should be conducted. If it is conducted, PDC's management must be prepared to make a decision about the size of the condominium project if the market research report is favorable and, possibly, a different decision about the size of the condominium project if the market research report is unfavorable. The decision tree in Figure 4.8 shows this PDC decision problem.

FIGURE 4.8 THE PDC DECISION TREE INCLUDING THE MARKET RESEARCH STUDY (decision node 1 branches to the market research study, leading to chance node 2 with favorable and unfavorable report branches to decision nodes 3 and 4, and to the no-study decision node 5; each of decision nodes 3, 4, and 5 branches to the small, medium, and large complexes, ending in chance nodes 6 to 14 with strong and weak demand branches and the payoffs of Table 4.1)
The squares are decision nodes and the circles are chance nodes. At each decision node, the branch of the tree that is taken is based on the decision made. At each chance node, the branch of the tree that is taken is based on probability or chance. For example, decision node 1 shows that PDC must first make the decision of whether to conduct the market research study. If the market research study is undertaken, chance node 2 indicates that both the favorable report branch and the unfavorable report branch are not under PDC's control and will be determined by chance. Node 3 is a decision node, indicating that PDC must make the decision to construct the small, medium, or large complex if the market research report is favorable. Node 4 is a decision node showing that PDC must make the decision to construct the small, medium, or large complex if the market research report is unfavorable. Node 5 is a decision node indicating that PDC must make the decision to construct the small, medium, or large complex if the market research is not undertaken. Nodes 6 to 14 are chance nodes indicating that the strong demand or weak demand state-of-nature branches will be determined by chance.

We explain in Section 4.6 how these probabilities can be developed.

Analysis of the decision tree and the choice of an optimal strategy requires that we know the branch probabilities corresponding to all chance nodes. PDC has developed the following branch probabilities.

If the market research study is undertaken:

    P(Favorable report) = 0.77
    P(Unfavorable report) = 0.23

If the market research report is favorable:

    P(Strong demand given a Favorable report) = 0.94
    P(Weak demand given a Favorable report) = 0.06

If the market research report is unfavorable:

    P(Strong demand given an Unfavorable report) = 0.35
    P(Weak demand given an Unfavorable report) = 0.65

If the market research study is not undertaken, the prior probabilities are applicable:

    P(Strong demand) = 0.80
    P(Weak demand) = 0.20

The branch probabilities are shown on the decision tree in Figure 4.9.

FIGURE 4.9 THE PDC DECISION TREE WITH BRANCH PROBABILITIES (the tree of Figure 4.8 with the probabilities above shown on the report and state-of-nature branches)

Decision Strategy
A decision strategy is a sequence of decisions and chance outcomes where the decisions chosen depend on the yet-to-be-determined outcomes of chance events. The approach used to determine the optimal decision strategy is based on a backward pass through the decision tree using the following steps:

1. At chance nodes, compute the expected value by multiplying the payoff at the end of each branch by the corresponding branch probabilities.
2. At decision nodes, select the decision branch that leads to the best expected value. This expected value becomes the expected value at the decision node.

Starting the backward pass calculations by computing the expected values at chance nodes 6 to 14 provides the following results:

    EV(Node 6) = 0.94(8) + 0.06(7) = 7.94
    EV(Node 7) = 0.94(14) + 0.06(5) = 13.46
    EV(Node 8) = 0.94(20) + 0.06(-9) = 18.26
    EV(Node 9) = 0.35(8) + 0.65(7) = 7.35
    EV(Node 10) = 0.35(14) + 0.65(5) = 8.15
    EV(Node 11) = 0.35(20) + 0.65(-9) = 1.15
    EV(Node 12) = 0.80(8) + 0.20(7) = 7.80
    EV(Node 13) = 0.80(14) + 0.20(5) = 12.20
    EV(Node 14) = 0.80(20) + 0.20(-9) = 14.20
Figure 4.10 shows the reduced decision tree after computing expected values at these chance nodes.

FIGURE 4.10 PDC DECISION TREE AFTER COMPUTING EXPECTED VALUES AT CHANCE NODES 6 TO 14 (the branches beyond the chance nodes are replaced by their expected values: EV = 7.94, 13.46, and 18.26 at nodes 6-8; EV = 7.35, 8.15, and 1.15 at nodes 9-11; EV = 7.80, 12.20, and 14.20 at nodes 12-14)

Next we move to decision nodes 3, 4, and 5. For each of these nodes, we select the decision alternative branch that leads to the best expected value. For example, at node 3 we have the choice of the small complex branch with EV(Node 6) = 7.94, the medium complex branch with EV(Node 7) = 13.46, and the large complex branch with EV(Node 8) = 18.26. Thus, we select the large complex decision alternative branch and the expected value at node 3 becomes EV(Node 3) = 18.26. For node 4, we select the best expected value from nodes 9, 10, and 11. The best decision alternative is the medium complex branch that provides EV(Node 4) = 8.15. For node 5, we select the best expected value from nodes 12, 13, and 14. The best decision alternative is the large complex branch, which provides EV(Node 5) = 14.20. Figure 4.11 shows the reduced decision tree after choosing the best decisions at nodes 3, 4, and 5.

FIGURE 4.11 PDC DECISION TREE AFTER CHOOSING BEST DECISIONS AT NODES 3, 4, AND 5 (EV(d3) = 18.26 at node 3, EV(d2) = 8.15 at node 4, and EV(d3) = 14.20 at node 5)

The expected value at chance node 2 can now be computed as follows:

    EV(Node 2) = 0.77 EV(Node 3) + 0.23 EV(Node 4)
               = 0.77(18.26) + 0.23(8.15) = 15.93

This reduces the decision tree to one involving only the two decision branches from node 1 (see Figure 4.12). Finally, the decision can be made at decision node 1 by selecting the best expected value from nodes 2 and 5. This action leads to the decision alternative to conduct the market research study, which provides an overall expected value of 15.93. The optimal decision for PDC is to conduct the market research study and then carry out the following decision strategy:

    If the market research is favorable, construct the large condominium complex.
    If the market research is unfavorable, construct the medium condominium complex.
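The whole backward pass fits in a few lines once the payoff table and branch probabilities are in place. A minimal sketch in Python; the node numbers in the comments refer to Figure 4.9, and the helper chance_evs is ours:

    # A minimal backward pass for the PDC tree with sample information.
    payoff = {
        "d1": {"s1": 8,  "s2": 7},
        "d2": {"s1": 14, "s2": 5},
        "d3": {"s1": 20, "s2": -9},
    }

    def chance_evs(probs):
        # Expected value at the chance node reached by each size decision.
        return {d: sum(probs[s] * v for s, v in row.items()) for d, row in payoff.items()}

    ev_fav = chance_evs({"s1": 0.94, "s2": 0.06})   # nodes 6-8
    ev_unf = chance_evs({"s1": 0.35, "s2": 0.65})   # nodes 9-11
    ev_none = chance_evs({"s1": 0.80, "s2": 0.20})  # nodes 12-14

    # Decision nodes 3, 4, and 5: take the best branch at each.
    node3, node4, node5 = max(ev_fav.values()), max(ev_unf.values()), max(ev_none.values())

    # Chance node 2 weights the report outcomes: P(F) = 0.77, P(U) = 0.23.
    node2 = 0.77 * node3 + 0.23 * node4
    print(round(node2, 2), round(node5, 2))  # 15.93 14.2 -- conduct the study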
FIGURE 4.12 PDC DECISION TREE REDUCED TO TWO DECISION BRANCHES (market research study: EV = 15.93 at node 2; no market research study: EV = 14.20 at node 5)

Problem 16 will test your ability to develop an optimal decision strategy.

The analysis of the PDC decision tree describes the methods that can be used to analyze more complex sequential decision problems. First, draw a decision tree consisting of decision and chance nodes and branches that describe the sequential nature of the problem. Determine the probabilities for all chance outcomes. Then, by working backward through the tree, compute expected values at all chance nodes and select the best decision branch at all decision nodes. The sequence of optimal decision branches determines the optimal decision strategy for the problem. The Q.M. in Action article on drug testing for student athletes describes how Santa Clara University used decision analysis to make a decision regarding whether to implement a drug-testing program for student athletes.

Risk Profile
Figure 4.13 provides a reduced decision tree showing only the sequence of decision alternatives and chance events for the PDC optimal decision strategy. By implementing the optimal decision strategy, PDC will obtain one of the four payoffs shown at the terminal branches of the decision tree. Recall that a risk profile shows the possible payoffs with their associated probabilities. Thus, in order to construct a risk profile for the optimal decision strategy, we will need to compute the probability for each of the four payoffs. Note that each payoff results from a sequence of branches leading from node 1 to the payoff. For instance, the payoff of $20 million is obtained by following the upper branch from node 1, the upper branch from node 2, the lower branch from node 3, and the upper branch from node 8. The probability of following that sequence of branches can be found by multiplying the probabilities for the branches from the chance nodes in the sequence.

Q.M. IN ACTION DECISION ANALYSIS AND DRUG TESTING FOR STUDENT ATHLETES
The athletic governing board of Santa Clara University considered whether to implement a drug-testing program for the university's intercollegiate athletes. The decision analysis framework contains two decision alternatives: implement a drug-testing program and do not implement a drug-testing program. Each student athlete is either a drug user or not a drug user, so these two possibilities are considered to be the states of nature for the problem. If the drug-testing program is implemented, student athletes will be required to take a drug-screening test. Results of the test will be either positive (the test indicates a possible drug user) or negative (the test does not indicate a possible drug user). The test outcomes are considered to be the sample information in the decision problem. If the test result is negative, no follow-up action will be taken. However, if the test result is positive, follow-up action will be taken to determine whether the student athlete actually is a drug user. The payoffs include the cost of not identifying a drug user and the cost of falsely identifying a nonuser. Decision analysis showed that if the test result is positive, a reasonably high probability still exists that the student athlete is not a drug user.
The cost and other problems associated with this type of misleading test result were considered significant. Consequently, the athletic governing board decided not to implement the drug-testing program. (Charles D. Feinstein, "Deciding Whether to Test Student Athletes for Drug Use," Interfaces 20, no. 3 (May-June 1990): 80-87.)

Thus, the probability of the $20 million payoff is (0.77)(0.94) = 0.72. Similarly, the probabilities for each of the other payoffs are obtained by multiplying the probabilities for the branches from the chance nodes leading to the payoffs. Doing so, we find the probability of the -$9 million payoff is (0.77)(0.06) = 0.05; the probability of the $14 million payoff is (0.23)(0.35) = 0.08; and the probability of the $5 million payoff is (0.23)(0.65) = 0.15.

FIGURE 4.13 PDC DECISION TREE SHOWING ONLY BRANCHES ASSOCIATED WITH OPTIMAL DECISION STRATEGY (from node 1, the market research study leads to chance node 2; a favorable report (0.77) leads to the large complex and chance node 8 with payoffs 20 and -9; an unfavorable report (0.23) leads to the medium complex and chance node 10 with payoffs 14 and 5)

The following table, showing the probability distribution for the payoffs of the PDC optimal decision strategy, is the tabular representation of the risk profile for the optimal decision strategy.

    Payoff ($ Million)    Probability
           -9                0.05
            5                0.15
           14                0.08
           20                0.72
                             1.00

Figure 4.14 provides a graphical representation of the risk profile. Comparing Figures 4.5 and 4.14, we see that the PDC risk profile is changed by the strategy to conduct the market research study. In fact, the use of the market research study has lowered the probability of the $9 million loss from 0.20 to 0.05. PDC's management would most likely view that change as a significant reduction in the risk associated with the condominium project.

FIGURE 4.14 RISK PROFILE FOR PDC CONDOMINIUM PROJECT WITH SAMPLE INFORMATION SHOWING PAYOFFS ASSOCIATED WITH OPTIMAL DECISION STRATEGY (a bar chart of probability versus profit in $ millions for the four payoffs above)

Expected Value of Sample Information
In the PDC problem, the market research study is the sample information used to determine the optimal decision strategy. The expected value associated with the market research study is $15.93 million. In Section 4.3 we showed that the best expected value if the market research study is not undertaken is $14.20 million. Thus, we can conclude that the difference, $15.93 - $14.20 = $1.73, is the expected value of sample information. In other words, conducting the market research study adds $1.73 million to the PDC expected value. In general, the expected value of sample information is as follows:

    EVSI = |EVwSI - EVwoSI|    (4.13)

where

    EVSI = expected value of sample information
    EVwSI = expected value with sample information about the states of nature
    EVwoSI = expected value without sample information about the states of nature

The EVSI of $1.73 million suggests PDC should be willing to pay up to $1.73 million to conduct the market research study.

Note the role of the absolute value in equation (4.13). For minimization problems the expected value with sample information is always less than or equal to the expected value without sample information. In this case, EVSI is the magnitude of the difference between EVwSI and EVwoSI; thus, by taking the absolute value of the difference as shown in equation (4.13), we can handle both the maximization and minimization cases with one equation.
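The risk profile and the EVSI computation can be reproduced directly from the branch probabilities. A minimal sketch (hypothetical names; the values are those from the text):

    # Risk profile for the optimal strategy: multiply branch probabilities
    # along each path from node 1 to a terminal payoff.
    profile = {
        20: 0.77 * 0.94,   # favorable report, large complex, strong demand
        -9: 0.77 * 0.06,   # favorable report, large complex, weak demand
        14: 0.23 * 0.35,   # unfavorable report, medium complex, strong demand
        5:  0.23 * 0.65,   # unfavorable report, medium complex, weak demand
    }
    for v in sorted(profile):
        print(f"{v:>4}: {profile[v]:.2f}")   # -9: 0.05, 5: 0.15, 14: 0.08, 20: 0.72

    # EVSI per equation (4.13): |EVwSI - EVwoSI|.
    print(round(abs(15.93 - 14.20), 2))      # 1.73 -- $ million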
Efficiency of Sample Information
In Section 4.3 we showed that the expected value of perfect information (EVPI) for the PDC problem is $3.2 million. We never anticipated that the market research report would obtain perfect information, but we can use an efficiency measure to express the value of the market research information. With perfect information having an efficiency rating of 100%, the efficiency rating E for sample information is computed as follows:

    E = (EVSI / EVPI) x 100    (4.14)

For the PDC problem,

    E = (1.73 / 3.2) x 100 = 54.1%

In other words, the information from the market research study is 54.1% as efficient as perfect information. Low efficiency ratings for sample information might lead the decision maker to look for other types of information. However, high efficiency ratings indicate that the sample information is almost as good as perfect information and that additional sources of information would not yield significantly better results.

4.6 COMPUTING BRANCH PROBABILITIES
In Section 4.5 the branch probabilities for the PDC decision tree chance nodes were specified in the problem description. No computations were required to determine these probabilities. In this section we show how Bayes' theorem, a topic covered in Chapter 2, can be used to compute branch probabilities for decision trees. The PDC decision tree is shown again in Figure 4.15. Let

    F = Favorable market research report
    U = Unfavorable market research report
    s1 = Strong demand (state of nature 1)
    s2 = Weak demand (state of nature 2)

FIGURE 4.15 THE PDC DECISION TREE (the tree of Figure 4.8 with the branch probabilities written symbolically: P(F) and P(U) at chance node 2; P(s1 | F) and P(s2 | F) at nodes 6 to 8; P(s1 | U) and P(s2 | U) at nodes 9 to 11; P(s1) and P(s2) at nodes 12 to 14)

At chance node 2, we need to know the branch probabilities P(F) and P(U). At chance nodes 6, 7, and 8, we need to know the branch probabilities P(s1 | F), the probability of state of nature 1 given a favorable market research report, and P(s2 | F), the probability of state of nature 2 given a favorable market research report. P(s1 | F) and P(s2 | F) are referred to as posterior probabilities because they are conditional probabilities based on the outcome of the sample information. At chance nodes 9, 10, and 11, we need to know the branch probabilities P(s1 | U) and P(s2 | U); note that these are also posterior probabilities, denoting the probabilities of the two states of nature given that the market research report is unfavorable.
Finally, at chance nodes 12, 13, and 14, we need the probabilities for the states of nature, P(s1) and P(s2), if the market research study is not undertaken.

In making the probability computations, we need to know PDC's assessment of the probabilities for the two states of nature, P(s1) and P(s2); these are the prior probabilities as discussed earlier. In addition, we must know the conditional probability of the market research outcomes (the sample information) given each state of nature. For example, we need to know the conditional probability of a favorable market research report given that the state of nature is strong demand for the PDC project; note that this conditional probability of F given state of nature s1 is written P(F | s1). To carry out the probability calculations, we will need conditional probabilities for all sample outcomes given all states of nature, that is, P(F | s1), P(F | s2), P(U | s1), and P(U | s2). In the PDC problem, we assume that the following assessments are available for these conditional probabilities.

                                    Market Research
    State of Nature       Favorable, F          Unfavorable, U
    Strong demand, s1     P(F | s1) = 0.90      P(U | s1) = 0.10
    Weak demand, s2       P(F | s2) = 0.25      P(U | s2) = 0.75

Note that the preceding probability assessments provide a reasonable degree of confidence in the market research study. If the true state of nature is s1, the probability of a favorable market research report is 0.90, and the probability of an unfavorable market research report is 0.10. If the true state of nature is s2, the probability of a favorable market research report is 0.25, and the probability of an unfavorable market research report is 0.75. The reason for a 0.25 probability of a potentially misleading favorable market research report for state of nature s2 is that when some potential buyers first hear about the new condominium project, their enthusiasm may lead them to overstate their real interest in it. A potential buyer's initial favorable response can change quickly to a "no thank you" when later faced with the reality of signing a purchase contract and making a down payment.

In the following discussion, we present a tabular approach as a convenient method for carrying out the probability computations. The computations for the PDC problem based on a favorable market research report (F) are summarized in Table 4.7. The steps used to develop this table are as follows:

Step 1. In column 1 enter the states of nature. In column 2 enter the prior probabilities for the states of nature. In column 3 enter the conditional probabilities of a favorable market research report (F) given each state of nature.
Step 2. In column 4 compute the joint probabilities by multiplying the prior probability values in column 2 by the corresponding conditional probability values in column 3.
Step 3. Sum the joint probabilities in column 4 to obtain the probability of a favorable market research report, P(F).
Step 4. Divide each joint probability in column 4 by P(F) = 0.77 to obtain the revised or posterior probabilities, P(s1 | F) and P(s2 | F).

Table 4.7 shows that the probability of obtaining a favorable market research report is P(F) = 0.77. In addition, P(s1 | F) = 0.94 and P(s2 | F) = 0.06. In particular, note that a favorable market research report will prompt a revised or posterior probability of 0.94 that the market demand for the condominiums will be strong, s1.
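The four steps of the tabular procedure are exactly a Bayes update. A minimal sketch in Python for the favorable-report case (hypothetical names):

    # Bayes' theorem in the tabular form of Table 4.7: joint = prior x
    # conditional, P(F) is the column total, posterior = joint / P(F).
    prior = {"s1": 0.8, "s2": 0.2}
    cond_f = {"s1": 0.90, "s2": 0.25}   # P(F | sj)

    joint = {s: prior[s] * cond_f[s] for s in prior}      # {'s1': 0.72, 's2': 0.05}
    p_f = sum(joint.values())
    posterior = {s: joint[s] / p_f for s in joint}
    print(round(p_f, 2))                                  # 0.77
    print({s: round(v, 2) for s, v in posterior.items()}) # {'s1': 0.94, 's2': 0.06}

Repeating the update with the conditional probabilities P(U | sj) reproduces Table 4.8.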
The tabular probability computation procedure must be repeated for each possible sample information outcome. Thus, Table 4.8 shows the computations of the branch probabilities of the PDC problem based on an unfavorable market research report.

TABLE 4.7 BRANCH PROBABILITIES FOR THE PDC CONDOMINIUM PROJECT BASED ON A FAVORABLE MARKET RESEARCH REPORT

    States of    Prior            Conditional        Joint              Posterior
    Nature sj    Probabilities    Probabilities      Probabilities      Probabilities
                 P(sj)            P(F | sj)          P(F ∩ sj)          P(sj | F)
    s1           0.8              0.90               0.72               0.94
    s2           0.2              0.25               0.05               0.06
                 1.0                                 P(F) = 0.77        1.00

TABLE 4.8 BRANCH PROBABILITIES FOR THE PDC CONDOMINIUM PROJECT BASED ON AN UNFAVORABLE MARKET RESEARCH REPORT

    States of    Prior            Conditional        Joint              Posterior
    Nature sj    Probabilities    Probabilities      Probabilities      Probabilities
                 P(sj)            P(U | sj)          P(U ∩ sj)          P(sj | U)
    s1           0.8              0.10               0.08               0.35
    s2           0.2              0.75               0.15               0.65
                 1.0                                 P(U) = 0.23        1.00

Problem 22 asks you to compute the posterior probabilities.

Note that the probability of obtaining an unfavorable market research report is P(U) = 0.23. If an unfavorable report is obtained, the posterior probability of a strong market demand, s1, is 0.35 and of a weak market demand, s2, is 0.65. The branch probabilities from Tables 4.7 and 4.8 were shown on the PDC decision tree in Figure 4.9. The discussion in this section shows an underlying relationship between the probabilities on the various branches in a decision tree. To assume different prior probabilities, P(s1) and P(s2), without determining how these changes would alter P(F) and P(U), as well as the posterior probabilities P(s1 | F), P(s2 | F), P(s1 | U), and P(s2 | U), would be inappropriate.

SUMMARY
Decision analysis can be used to determine a recommended decision alternative or an optimal decision strategy when a decision maker is faced with an uncertain and risk-filled pattern of future events. The goal of decision analysis is to identify the best decision alternative or the optimal decision strategy given information about the uncertain events and the possible consequences or payoffs. The uncertain future events are called chance events and the outcomes of the chance events are called states of nature.

We showed how influence diagrams, payoff tables, and decision trees could be used to structure a decision problem and describe the relationships among the decisions, the chance events, and the consequences. We presented three approaches to decision making without probabilities: the optimistic approach, the conservative approach, and the minimax regret approach. When probability assessments are provided for the states of nature, the expected value approach can be used to identify the recommended decision alternative or decision strategy.

In cases where sample information about the chance events is available, a sequence of decisions has to be made. First we must decide whether to obtain the sample information.
SUMMARY

Decision analysis can be used to determine a recommended decision alternative or an optimal decision strategy when a decision maker is faced with an uncertain and risk-filled pattern of future events. The goal of decision analysis is to identify the best decision alternative or the optimal decision strategy given information about the uncertain events and the possible consequences or payoffs. The uncertain future events are called chance events, and the outcomes of the chance events are called states of nature.

We showed how influence diagrams, payoff tables, and decision trees can be used to structure a decision problem and describe the relationships among the decisions, the chance events, and the consequences. We presented three approaches to decision making without probabilities: the optimistic approach, the conservative approach, and the minimax regret approach. When probability assessments are provided for the states of nature, the expected value approach can be used to identify the recommended decision alternative or decision strategy.

In cases where sample information about the chance events is available, a sequence of decisions has to be made. First we must decide whether to obtain the sample information. If the answer to this decision is yes, an optimal decision strategy based on the specific sample information must be developed. In this situation, decision trees and the expected value approach can be used to determine the optimal decision strategy.

Even though the expected value approach can be used to obtain a recommended decision alternative or optimal decision strategy, the payoff that actually occurs will usually have a value different from the expected value. A risk profile provides a probability distribution for the possible payoffs and can assist the decision maker in assessing the risks associated with different decision alternatives. Finally, sensitivity analysis can be conducted to determine the effect that changes in the probabilities for the states of nature and changes in the values of the payoffs have on the recommended decision alternative.

Decision analysis has been widely used in practice. The Q.M. in Action, Investing in a Transmission System, describes how Oglethorpe Power Corporation used decision analysis to decide whether to invest in a major transmission system between Georgia and Florida. The Quantitative Methods in Practice at the end of the chapter describes how Ohio Edison used decision analysis to select equipment that helped the company meet emission standards.

Q.M. IN ACTION: INVESTING IN A TRANSMISSION SYSTEM

Oglethorpe Power Corporation (OPC) provides wholesale electrical power to consumer-owned cooperatives in the state of Georgia. Florida Power Corporation proposed that OPC join in the building of a major transmission line from Georgia to Florida. Deciding whether to become involved in the building of the transmission line was a major decision for OPC because it would involve the commitment of substantial OPC resources. OPC worked with Applied Decision Analysis, Inc., to conduct a comprehensive decision analysis of the problem.

In the problem formulation step, three decisions were identified: (1) deciding whether to build a transmission line from Georgia to Florida; (2) deciding whether to upgrade existing transmission facilities; and (3) deciding who would control the new facilities. Oglethorpe was faced with five chance events: (1) construction costs, (2) competition, (3) demand in Florida, (4) OPC's share of the operation, and (5) pricing. The consequence or payoff was measured in terms of dollars saved. The influence diagram for the problem had three decision nodes, five chance nodes, a consequence node, and several intermediate nodes that described intermediate calculations. The decision tree for the problem had more than 8000 paths from the starting node to the terminal branches.

An expected value analysis of the decision tree provided an optimal decision strategy for OPC. However, the risk profile for the optimal decision strategy showed that the recommended strategy was very risky and had a significant probability of increasing OPC's cost rather than providing a savings. The risk analysis led to the conclusion that more information about the competition was needed in order to reduce OPC's risk. Sensitivity analysis involving various probabilities and payoffs showed that the value of the optimal decision strategy was stable over a reasonable range of input values. The final recommendation from the decision analysis was that OPC should begin negotiations with Florida Power Corporation concerning the building of the new transmission line.

Based on Borison, Adam, "Oglethorpe Power Corporation Decides About Investing in a Major Transmission System," Interfaces, March–April 1995, pp. 25–36.

GLOSSARY

Chance event  An uncertain future event affecting the consequence, or payoff, associated with a decision.
States of nature  The possible outcomes for chance events that affect the payoff associated with a decision alternative.

Influence diagram  A graphical device that shows the relationships among the decisions, the chance events, and the consequences for a decision problem.

Consequence  The result obtained when a decision alternative is chosen and a chance event occurs. A measure of the consequence is often called a payoff.

Payoff  A measure of the consequence of a decision, such as profit, cost, or time. Each combination of a decision alternative and a state of nature has an associated payoff (consequence).

Payoff table  A tabular representation of the payoffs for a decision problem.

Decision tree  A graphical representation of the decision problem that shows the sequential nature of the decision-making process.

Node  An intersection or junction point of an influence diagram or a decision tree.

Decision nodes  Nodes indicating points where a decision is made.

Chance nodes  Nodes indicating points where an uncertain event will occur.

Branch  Lines showing the alternatives from decision nodes and the outcomes from chance nodes.

Optimistic approach  An approach to choosing a decision alternative without using probabilities. For a maximization problem, it leads to choosing the decision alternative corresponding to the largest payoff; for a minimization problem, it leads to choosing the decision alternative corresponding to the smallest payoff.

Conservative approach  An approach to choosing a decision alternative without using probabilities. For a maximization problem, it leads to choosing the decision alternative that maximizes the minimum payoff; for a minimization problem, it leads to choosing the decision alternative that minimizes the maximum payoff.

Minimax regret approach  An approach to choosing a decision alternative without using probabilities. For each alternative, the maximum regret is computed; the approach leads to choosing the decision alternative that minimizes the maximum regret.

Opportunity loss, or regret  The amount of loss (lower profit or higher cost) from not making the best decision for each state of nature.

Expected value approach  An approach to choosing a decision alternative that is based on the expected value of each decision alternative. The recommended decision alternative is the one that provides the best expected value.

Expected value (EV)  For a chance node, the weighted average of the payoffs, where the weights are the state-of-nature probabilities.

Expected value of perfect information (EVPI)  The expected value of information that would tell the decision maker exactly which state of nature is going to occur (i.e., perfect information).

Decision strategy  A strategy involving a sequence of decisions and chance outcomes to provide the optimal solution to a decision problem.

Risk analysis  The study of the possible payoffs and probabilities associated with a decision alternative or a decision strategy.

Risk profile  The probability distribution of the possible payoffs associated with a decision alternative or decision strategy.

Sensitivity analysis  The study of how changes in the probability assessments for the states of nature and/or changes in the payoffs affect the recommended decision alternative.

Prior probabilities  The probabilities of the states of nature prior to obtaining sample information.
Sample information  New information obtained through research or experimentation that enables an updating or revision of the state-of-nature probabilities.

Posterior (revised) probabilities  The probabilities of the states of nature after revising the prior probabilities based on sample information.

Expected value of sample information (EVSI)  The difference between the expected value of an optimal strategy based on sample information and the "best" expected value without any sample information.

Efficiency  The ratio of EVSI to EVPI expressed as a percentage; perfect information is 100% efficient.

Bayes' theorem  A probability expression that enables the use of sample information to revise prior probabilities.

Conditional probabilities  The probability of one event given the known outcome of a (possibly) related event.

Joint probabilities  The probabilities of both sample information and a particular state of nature occurring simultaneously.

PROBLEMS

1. The following payoff table shows profit for a decision analysis problem with two decision alternatives and three states of nature.

                               State of Nature
   Decision Alternative        s1       s2       s3
   d1                          250      100      25
   d2                          100      100      75

   a. Construct a decision tree for this problem.
   b. If the decision maker knows nothing about the probabilities of the three states of nature, what is the recommended decision using the optimistic, conservative, and minimax regret approaches?

2. Suppose that a decision maker faced with four decision alternatives and four states of nature develops the following profit payoff table.

                               State of Nature
   Decision Alternative        s1       s2       s3       s4
   d1                          14       9        10       5
   d2                          11       10       8        7
   d3                          9        10       10       11
   d4                          8        10       11       13

   a. If the decision maker knows nothing about the probabilities of the four states of nature, what is the recommended decision using the optimistic, conservative, and minimax regret approaches?
   b. Which approach do you prefer? Explain. Is establishing the most appropriate approach before analyzing the problem important for the decision maker? Explain.
   c. Assume that the payoff table provides cost rather than profit payoffs. What is the recommended decision using the optimistic, conservative, and minimax regret approaches?

3. Southland Corporation's decision to produce a new line of recreational products resulted in the need to construct either a small plant or a large plant. The best selection of plant size depends on how the marketplace reacts to the new product line. To conduct an analysis, marketing management has decided to view the possible long-run demand as either low, medium, or high. The following payoff table shows the projected profit in millions of dollars.

                     Long-Run Demand
   Plant Size        Low      Medium      High
   Small             150      200         200
   Large             50       200         500

   a. What is the decision to be made, and what is the chance event for Southland's problem?
   b. Construct an influence diagram.
   c. Construct a decision tree.
   d. Recommend a decision based on the use of the optimistic, conservative, and minimax regret approaches.

4. Amy Lloyd is interested in leasing a new Saab and has contacted three automobile dealers for pricing information. Each dealer has offered Amy a closed-end 36-month lease with no down payment due at the time of signing. Each lease includes a monthly charge and a mileage allowance. Additional miles receive a surcharge on a per-mile basis.
The monthly lease cost, the mileage allowance, and the cost for additional miles follow:

   Dealer                  Monthly Cost    Mileage Allowance    Cost per Additional Mile
   Forno Saab              $299            36,000               $0.15
   Midtown Motors          $310            45,000               $0.20
   Hopkins Automotive      $325            54,000               $0.15

Amy has decided to choose the lease option that will minimize her total 36-month cost. The difficulty is that Amy is not sure how many miles she will drive over the next three years. For purposes of this decision she believes it is reasonable to assume that she will drive 12,000 miles per year, 15,000 miles per year, or 18,000 miles per year. With this assumption Amy has estimated her total costs for the three lease options. For example, she figures that the Forno Saab lease will cost her $10,764 if she drives 12,000 miles per year, $12,114 if she drives 15,000 miles per year, or $13,464 if she drives 18,000 miles per year.

   a. What is the decision, and what is the chance event?
   b. Construct a payoff table for Amy's problem.
   c. If Amy has no idea which of the three mileage assumptions is most appropriate, what is the recommended decision (leasing option) using the optimistic, conservative, and minimax regret approaches?
   d. Suppose that the probabilities that Amy drives 12,000, 15,000, and 18,000 miles per year are 0.5, 0.4, and 0.1, respectively. What option should Amy choose using the expected value approach?
   e. Develop a risk profile for the decision selected in part (d). What is the most likely cost, and what is its probability?
   f. Suppose that after further consideration, Amy concludes that the probabilities that she will drive 12,000, 15,000, and 18,000 miles per year are 0.3, 0.4, and 0.3, respectively. What decision should Amy make using the expected value approach?
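Amy's payoff entries follow a simple cost formula: 36 monthly payments plus the per-mile surcharge on any mileage beyond the three-year allowance. The sketch below is our own illustration (not part of the problem statement) using the dealer data above; it reproduces her stated estimates and fills in the remaining entries.

```python
# Total 36-month lease cost = 36 * monthly charge
#                             + (miles beyond the allowance) * surcharge per mile

dealers = {  # (monthly cost, 3-year mileage allowance, cost per additional mile)
    "Forno Saab":         (299, 36000, 0.15),
    "Midtown Motors":     (310, 45000, 0.20),
    "Hopkins Automotive": (325, 54000, 0.15),
}

def lease_cost(monthly, allowance, per_mile, miles_per_year):
    extra_miles = max(0, 3 * miles_per_year - allowance)
    return 36 * monthly + per_mile * extra_miles

for dealer, terms in dealers.items():
    costs = [lease_cost(*terms, mpy) for mpy in (12000, 15000, 18000)]
    print(dealer, costs)

# Forno Saab [10764.0, 12114.0, 13464.0]   -- matches Amy's estimates
# Midtown Motors [11160.0, 11160.0, 12960.0]
# Hopkins Automotive [11700.0, 11700.0, 11700.0]
```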
5. The following profit payoff table was presented in Problem 1. Suppose that the decision maker has obtained the probability assessments P(s1) = 0.65, P(s2) = 0.15, and P(s3) = 0.20. Use the expected value approach to determine the optimal decision.

                               State of Nature
   Decision Alternative        s1       s2       s3
   d1                          250      100      25
   d2                          100      100      75

6. The profit payoff table presented in Problem 2 is repeated here.

                               State of Nature
   Decision Alternative        s1       s2       s3       s4
   d1                          14       9        10       5
   d2                          11       10       8        7
   d3                          9        10       10       11
   d4                          8        10       11       13

   Suppose that the decision maker obtains information that enables the following probability assessments to be made: P(s1) = 0.5, P(s2) = 0.2, P(s3) = 0.2, and P(s4) = 0.1.
   a. Use the expected value approach to determine the optimal decision.
   b. Now assume that the entries in the payoff table are costs; use the expected value approach to determine the optimal decision.

7. Hudson Corporation is considering three options for managing its data processing operation: continuing with its own staff, hiring an outside vendor to do the managing (referred to as outsourcing), or using a combination of its own staff and an outside vendor. The cost of the operation depends on future demand. The annual cost of each option (in $000s) depends on demand as follows.

                         Demand
   Staffing Options      High     Medium     Low
   Own Staff             650      650        600
   Outside Vendor        900      600        300
   Combination           800      650        500

   a. If the demand probabilities are 0.2, 0.5, and 0.3, which decision alternative will minimize the expected cost of the data processing operation? What is the expected annual cost associated with that recommendation?
   b. Construct a risk profile for the optimal decision in part (a). What is the probability of the cost exceeding $700,000?

8. The following payoff table shows the profit for a decision problem with two states of nature and two decision alternatives.

                               State of Nature
   Decision Alternative        s1       s2
   d1                          10       1
   d2                          4        3

   a. Use graphical sensitivity analysis to determine the range of probabilities of state of nature s1 for which each of the decision alternatives has the largest expected value.
   b. Suppose P(s1) = 0.2 and P(s2) = 0.8. What is the best decision using the expected value approach?
   c. Perform sensitivity analysis on the payoffs for decision alternative d1. Assume the probabilities are as given in part (b), and find the range of payoffs under states of nature s1 and s2 that will keep the solution found in part (b) optimal. Is the solution more sensitive to the payoff under state of nature s1 or s2?

9. Myrtle Air Express has decided to offer direct service from Cleveland to Myrtle Beach. Management must decide between a full-price service using the company's new fleet of jet aircraft and a discount service using smaller-capacity commuter planes. It is clear that the best choice depends on the market reaction to the service Myrtle Air offers. Management has developed estimates of the contribution to profit for each type of service based upon two possible levels of demand for service to Myrtle Beach: strong and weak. The following table shows the estimated quarterly profits in thousands of dollars.

                  Demand for Service
   Service        Strong      Weak
   Full Price     $960        -$490
   Discount       $670        $320

   a. What is the decision to be made, what is the chance event, and what is the consequence for this problem? How many decision alternatives are there? How many outcomes are there for the chance event?
   b. If nothing is known about the probabilities of the chance outcomes, what is the recommended decision using the optimistic, conservative, and minimax regret approaches?
   c. Suppose that management of Myrtle Air Express believes that the probability of strong demand is 0.7 and the probability of weak demand is 0.3. Use the expected value approach to determine an optimal decision.
   d. Suppose that the probability of strong demand is 0.8 and the probability of weak demand is 0.2. What is the optimal decision using the expected value approach?
   e. Use graphical sensitivity analysis to determine the range of demand probabilities for which each of the decision alternatives has the largest expected value.

10. Political Systems, Inc., is a new firm specializing in information services such as surveys and data analysis for individuals running for political office. The firm is opening its headquarters in Chicago and is considering three office locations, which differ in cost due to square footage and office equipment requirements. The profit projections shown (in thousands of dollars) for each location were based on both strong demand and weak demand states of nature.

                      Demand
   Office Location    Strong      Weak
   A                  200         20
   B                  120         10
   C                  100         60

   a. Initially, management is uncomfortable stating probabilities for the states of nature. Let p denote the probability of the strong demand state of nature. What does graphical sensitivity analysis tell management about location preferences? Can any location be dropped from consideration? Why or why not?
   b. After further review, management estimated the probability of a strong demand at 0.65.
Based on the results in part (a), which location should be selected? What is the expected value associated with that decision?

11. For the Pittsburgh Development Corporation problem in Section 4.3, the decision alternative to build the large condominium complex was found to be optimal using the expected value approach. In Section 4.4 we conducted a sensitivity analysis for the payoffs associated with this decision alternative. We found that the large complex remained optimal as long as the payoff for strong demand was greater than or equal to $17.5 million and as long as the payoff for weak demand was greater than or equal to -$19 million.
   a. Consider the medium complex decision. How much could the payoff under strong demand increase and still keep decision alternative d3 the optimal solution?
   b. Consider the small complex decision. How much could the payoff under strong demand increase and still keep decision alternative d3 the optimal solution?

12. The distance from Potsdam to larger markets and limited air service have hindered the town in attracting new industry. Air Express, a major overnight delivery service, is considering establishing a regional distribution center in Potsdam. But Air Express will not establish the center unless the length of the runway at the local airport is increased. Another candidate for new development is Diagnostic Research, Inc. (DRI), a leading producer of medical testing equipment. DRI is considering building a new manufacturing plant. Increasing the length of the runway is not a requirement for DRI, but the planning commission feels that doing so will help convince DRI to locate its new plant in Potsdam. Assuming that the town lengthens the runway, the Potsdam planning commission believes that the probabilities shown in the following table are applicable.

                                New DRI Plant    No DRI Plant
   New Air Express Center       0.30             0.10
   No Air Express Center        0.40             0.20

For instance, the probability that Air Express will establish a new distribution center and DRI will build a new plant is 0.30. The estimated annual revenue to the town, after deducting the cost of lengthening the runway, is as follows:

                                New DRI Plant    No DRI Plant
   New Air Express Center       $600,000         $150,000
   No Air Express Center        $250,000         $200,000

If the runway expansion project is not conducted, the planning commission assesses the probability that DRI will locate its new plant in Potsdam at 0.6; in this case, the estimated annual revenue to the town will be $450,000. If the runway expansion project is not conducted and DRI does not locate in Potsdam, the annual revenue will be $0, since no cost will have been incurred and no revenues will be forthcoming.
   a. What is the decision to be made, what is the chance event, and what is the consequence?
   b. Compute the expected annual revenue associated with the decision alternative to lengthen the runway.
   c. Compute the expected annual revenue associated with the decision alternative to not lengthen the runway.
   d. Should the town elect to lengthen the runway? Explain.
   e. Suppose that the probabilities associated with lengthening the runway were as follows:

                                New DRI Plant    No DRI Plant
   New Air Express Center       0.40             0.10
   No Air Express Center        0.30             0.20

      What effect, if any, would this change in the probabilities have on the recommended decision?

13. Seneca Hill Winery has recently purchased land for the purpose of establishing a new vineyard.
Management is considering two varieties of white grapes for the new vineyard: Chardonnay and Riesling. The Chardonnay grapes would be used to produce a dry Chardonnay wine, and the Riesling grapes would be used to produce a semi-dry Riesling wine. It takes approximately four years from the time of planting before new grapes can be harvested. This length of time creates a great deal of uncertainty concerning future demand and makes the decision concerning the type of grapes to plant difficult. Three possibilities are being considered: Chardonnay grapes only; Riesling grapes only; and both Chardonnay and Riesling grapes. Seneca management decided that for planning purposes it would be adequate to consider only two demand possibilities for each type of wine: strong or weak. With two possibilities for each type of wine, it was necessary to assess four probabilities. With the help of some forecasts in industry publications, management made the following probability assessments.

                         Chardonnay Demand
   Riesling Demand       Weak         Strong
   Weak                  0.05         0.25
   Strong                0.50         0.20

Revenue projections show an annual contribution to profit of $20,000 if Seneca Hill only plants Chardonnay grapes and demand is weak for Chardonnay wine, and $70,000 if they only plant Chardonnay grapes and demand is strong for Chardonnay wine. If they only plant Riesling grapes, the annual profit projection is $25,000 if demand is weak for Riesling grapes and $45,000 if demand is strong for Riesling grapes. If Seneca plants both types of grapes, the annual profit projections are shown in the following table.

                         Chardonnay Demand
   Riesling Demand       Weak         Strong
   Weak                  $22,000      $26,000
   Strong                $40,000      $60,000

   a. What is the decision to be made, what is the chance event, and what is the consequence? Identify the alternatives for the decisions and the possible outcomes for the chance events.
   b. Develop a decision tree.
   c. Use the expected value approach to recommend which alternative Seneca Hill Winery should follow in order to maximize expected annual profit.
   d. Suppose management is concerned about the probability assessments when demand for Chardonnay wine is strong. Some believe it is likely for Riesling demand to also be strong in this case. Suppose the probability of strong demand for Chardonnay and weak demand for Riesling is 0.05 and that the probability of strong demand for Chardonnay and strong demand for Riesling is 0.40. How does this change the recommended decision? Assume that the probabilities when Chardonnay demand is weak are still 0.05 and 0.50.
   e. Other members of the management team expect the Chardonnay market to become saturated at some point in the future, causing a fall in prices. Suppose that the annual profit projection falls to $50,000 when demand for Chardonnay is strong and Chardonnay grapes only are planted. Using the original probability assessments, determine how this change would affect the optimal decision.

14. The following profit payoff table was presented in Problems 1 and 5.

                               State of Nature
   Decision Alternative        s1       s2       s3
   d1                          250      100      25
   d2                          100      100      75

   The probabilities for the states of nature are P(s1) = 0.65, P(s2) = 0.15, and P(s3) = 0.20.
   a. What is the optimal decision strategy if perfect information were available?
   b. What is the expected value for the decision strategy developed in part (a)?
   c. Using the expected value approach, what is the recommended decision without perfect information?
      What is its expected value?
   d. What is the expected value of perfect information?

15. The Lake Placid Town Council has decided to build a new community center to be used for conventions, concerts, and other public events, but considerable controversy surrounds the appropriate size. Many influential citizens want a large center that would be a showcase for the area, but the mayor feels that if demand does not support such a center, the community will lose a large amount of money. To provide structure for the decision process, the council narrowed the building alternatives to three sizes: small, medium, and large. Everybody agreed that the critical factor in choosing the best size is the number of people who will want to use the new facility. A regional planning consultant provided demand estimates under three scenarios: worst case, base case, and best case. The worst-case scenario corresponds to a situation in which tourism drops significantly; the base-case scenario corresponds to a situation in which Lake Placid continues to attract visitors at current levels; and the best-case scenario corresponds to a significant increase in tourism. The consultant has provided probability assessments of 0.10, 0.60, and 0.30 for the worst-case, base-case, and best-case scenarios, respectively.

The town council suggested using net cash flow over a five-year planning horizon as the criterion for deciding on the best size. The following projections of net cash flow (in thousands of dollars) for a five-year planning horizon have been developed. All costs, including the consultant's fee, have been included.

                     Demand Scenario
   Center Size       Worst Case    Base Case    Best Case
   Small             400           500          660
   Medium            250           650          800
   Large             -400          580          990

   a. What decision should Lake Placid make using the expected value approach?
   b. Construct risk profiles for the medium and large alternatives. Given the mayor's concern over the possibility of losing money and the result of part (a), which alternative would you recommend?
   c. Compute the expected value of perfect information. Do you think it would be worth trying to obtain additional information concerning which scenario is likely to occur?
   d. Suppose the probability of the worst-case scenario increases to 0.2, the probability of the base-case scenario decreases to 0.5, and the probability of the best-case scenario remains at 0.3. What effect, if any, would these changes have on the decision recommendation?
   e. The consultant has suggested that an expenditure of $150,000 on a promotional campaign over the planning horizon will effectively reduce the probability of the worst-case scenario to zero. If the campaign can be expected to also increase the probability of the best-case scenario to 0.4, is it a good investment?

16. Consider a variation of the PDC decision tree shown in Figure 4.9. The company must first decide whether to undertake the market research study. If the market research study is conducted, the outcome will be either favorable (F) or unfavorable (U). Assume there are only two decision alternatives, d1 and d2, and two states of nature, s1 and s2. The payoff table showing profit is as follows:

                               State of Nature
   Decision Alternative        s1       s2
   d1                          100      300
   d2                          400      200

   a. Show the decision tree.
   b. Using the following probabilities, what is the optimal decision strategy?
      P(F) = 0.56        P(s1 | F) = 0.57        P(s1 | U) = 0.18        P(s1) = 0.40
      P(U) = 0.44        P(s2 | F) = 0.43        P(s2 | U) = 0.82        P(s2) = 0.60

17. A real estate investor has the opportunity to purchase land currently zoned residential. If the county board approves a request to rezone the property as commercial within the next year, the investor will be able to lease the land to a large discount firm that wants to open a new store on the property. However, if the zoning change is not approved, the investor will have to sell the property at a loss. Profits (in $000s) are shown in the following payoff table.

                               Rezoning Approved    Rezoning Not Approved
   Decision Alternative        s1                   s2
   Purchase, d1                600                  -200
   Do not purchase, d2         0                    0

   a. If the probability that the rezoning will be approved is 0.5, what decision is recommended? What is the expected profit?
   b. The investor can purchase an option to buy the land. Under the option, the investor maintains the rights to purchase the land anytime during the next three months while learning more about possible resistance to the rezoning proposal from area residents. Probabilities are as follows. Let

         H = high resistance to rezoning
         L = low resistance to rezoning

         P(H) = 0.55        P(s1 | H) = 0.18        P(s2 | H) = 0.82
         P(L) = 0.45        P(s1 | L) = 0.89        P(s2 | L) = 0.11

      What is the optimal decision strategy if the investor uses the option period to learn more about the resistance from area residents before making the purchase decision?
   c. If the option will cost the investor an additional $10,000, should the investor purchase the option? Why or why not? What is the maximum that the investor should be willing to pay for the option?

18. McHuffter Condominiums, Inc., of Pensacola, Florida, recently purchased land near the Gulf of Mexico and is attempting to determine the size of the condominium development it should build. It is considering three sizes of development: small, d1; medium, d2; and large, d3. At the same time, an uncertain economy makes it difficult to ascertain the demand for the new condominiums. McHuffter's management realizes that a large development followed by low demand could be very costly to the company. However, if McHuffter makes a conservative small-development decision and then finds a high demand, the firm's profits will be lower than they might have been. With the three levels of demand—low, medium, and high—McHuffter's management has prepared the following profit (in $000s) payoff table.

                               Low, s1    Medium, s2    High, s3
   Small Condo, d1             400        400           400
   Medium Condo, d2            100        600           600
   Large Condo, d3             -300       300           900

   The probabilities for the states of nature are P(s1) = 0.20, P(s2) = 0.35, and P(s3) = 0.45. Suppose that before making a final decision, McHuffter is considering conducting a survey to help evaluate the demand for the new condominium development. The survey report is anticipated to indicate one of three levels of demand: weak (W), average (A), or strong (S). The relevant probabilities are as follows:

      P(W) = 0.30        P(s1 | W) = 0.39        P(s1 | A) = 0.16        P(s1 | S) = 0.06
      P(A) = 0.38        P(s2 | W) = 0.46        P(s2 | A) = 0.37        P(s2 | S) = 0.22
      P(S) = 0.32        P(s3 | W) = 0.15        P(s3 | A) = 0.47        P(s3 | S) = 0.72

   a. Construct a decision tree for this problem.
   b. What is the recommended decision if the survey is not undertaken? What is the expected value?
   c. What is the expected value of perfect information?
   d. What is McHuffter's optimal decision strategy?
   e. What is the expected value of the survey information?
   f. What is the efficiency of the survey information?

19. Hale's TV Productions is considering producing a pilot for a comedy series in the hope of selling it to a major television network. The network may decide to reject the series, but it may also decide to purchase the rights to the series for either one or two years. At this point in time, Hale may either produce the pilot and wait for the network's decision or transfer the rights for the pilot and series to a competitor for $100,000. Hale's decision alternatives and profits (in thousands of dollars) are as follows:

                                   Reject, s1    1 Year, s2    2 Years, s3
   Produce Pilot, d1               -100          50            150
   Sell to Competitor, d2          100           100           100

   The probabilities for the states of nature are P(s1) = 0.20, P(s2) = 0.30, and P(s3) = 0.50. For a consulting fee of $5,000, an agency will review the plans for the comedy series and indicate the overall chances of a favorable network reaction to the series. Assume that the agency review will result in a favorable (F) or an unfavorable (U) review and that the following probabilities are relevant:

      P(F) = 0.69        P(s1 | F) = 0.09        P(s1 | U) = 0.45
      P(U) = 0.31        P(s2 | F) = 0.26        P(s2 | U) = 0.39
                         P(s3 | F) = 0.65        P(s3 | U) = 0.16

   a. Construct a decision tree for this problem.
   b. What is the recommended decision if the agency opinion is not used? What is the expected value?
   c. What is the expected value of perfect information?
   d. What is Hale's optimal decision strategy assuming the agency's information is used?
   e. What is the expected value of the agency's information?
   f. Is the agency's information worth the $5,000 fee? What is the maximum that Hale should be willing to pay for the information?
   g. What is the recommended decision?

20. Martin's Service Station is considering entering the snowplowing business for the coming winter season. Martin can purchase either a snowplow blade attachment for the station's pickup truck or a new heavy-duty snowplow truck. Martin has analyzed the situation and believes that either alternative would be a profitable investment if the snowfall is heavy. Smaller profits would result if the snowfall is moderate, and losses would result if the snowfall is light. The following profits have been determined.

                                   Heavy, s1    Moderate, s2    Light, s3
   Blade Attachment, d1            3500         1000            -1500
   New Snowplow, d2                7000         2000            -9000

   The probabilities for the states of nature are P(s1) = 0.4, P(s2) = 0.3, and P(s3) = 0.3. Suppose that Martin decides to wait until September before making a final decision. Assessments of the probabilities associated with a normal (N) or unseasonably cold (U) September are as follows:

      P(N) = 0.80        P(s1 | N) = 0.35        P(s1 | U) = 0.62
      P(U) = 0.20        P(s2 | N) = 0.30        P(s2 | U) = 0.31
                         P(s3 | N) = 0.35        P(s3 | U) = 0.07

   a. Construct a decision tree for this problem.
   b. What is the recommended decision if Martin does not wait until September? What is the expected value?
   c. What is the expected value of perfect information?
   d. What is Martin's optimal decision strategy if the decision is not made until the September weather is determined? What is the expected value of this decision strategy?

21. Lawson's Department Store faces a buying decision for a seasonal product for which demand can be high, medium, or low. The purchaser for Lawson's can order one, two, or three lots of the product before the season begins but cannot reorder later.
Profit projections (in $000s) are shown.

                            High Demand    Medium Demand    Low Demand
                            s1             s2               s3
   Order 1 lot, d1          60             60               50
   Order 2 lots, d2         80             80               30
   Order 3 lots, d3         100            70               10

   a. If the prior probabilities for the three states of nature are 0.3, 0.3, and 0.4, respectively, what is the recommended order quantity?
   b. At each preseason sales meeting, the vice president of sales provides a personal opinion regarding potential demand for this product. Because of the vice president's enthusiasm and optimistic nature, the predictions of market conditions have always been either "excellent" (E) or "very good" (V). Probabilities are as follows. What is the optimal decision strategy?

      P(E) = 0.70        P(s1 | E) = 0.34        P(s1 | V) = 0.20
      P(V) = 0.30        P(s2 | E) = 0.32        P(s2 | V) = 0.26
                         P(s3 | E) = 0.34        P(s3 | V) = 0.54

   c. Use the efficiency of sample information and discuss whether the firm should consider a consulting expert who could provide independent forecasts of market conditions for the product.

22. Suppose that you are given a decision situation with three possible states of nature: s1, s2, and s3. The prior probabilities are P(s1) = 0.2, P(s2) = 0.5, and P(s3) = 0.3. With sample information I, P(I | s1) = 0.1, P(I | s2) = 0.05, and P(I | s3) = 0.2. Compute the revised or posterior probabilities: P(s1 | I), P(s2 | I), and P(s3 | I).

23. In the following profit payoff table for a decision problem with two states of nature and three decision alternatives, the prior probabilities for s1 and s2 are P(s1) = 0.8 and P(s2) = 0.2.

                               State of Nature
   Decision Alternative        s1       s2
   d1                          15       10
   d2                          10       12
   d3                          8        20

   a. What is the optimal decision?
   b. Find the EVPI.
   c. Suppose that sample information I is obtained, with P(I | s1) = 0.2 and P(I | s2) = 0.75. Find the posterior probabilities P(s1 | I) and P(s2 | I). Recommend a decision alternative based on these probabilities.

24. To save on expenses, Rona and Jerry agreed to form a carpool for traveling to and from work. Rona preferred to use the somewhat longer but more consistent Queen City Avenue. Although Jerry preferred the quicker expressway, he agreed with Rona that they should take Queen City Avenue if the expressway had a traffic jam. The following payoff table provides the one-way time estimate in minutes for traveling to or from work.

                                  Expressway Open    Expressway Jammed
   Decision Alternative           s1                 s2
   Queen City Avenue, d1          30                 30
   Expressway, d2                 25                 45

   Based on their experience with traffic problems, Rona and Jerry agreed on a 0.15 probability that the expressway would be jammed. In addition, they agreed that weather seemed to affect the traffic conditions on the expressway. Let

      C = clear
      O = overcast
      R = rain

   The following conditional probabilities apply.

      P(C | s1) = 0.8        P(O | s1) = 0.2        P(R | s1) = 0.0
      P(C | s2) = 0.1        P(O | s2) = 0.3        P(R | s2) = 0.6

   a. Use Bayes' probability revision procedure to compute the probability of each weather condition and the conditional probability of the expressway being open, s1, or jammed, s2, given each weather condition.
   b. Show the decision tree for this problem.
   c. What is the optimal decision strategy, and what is the expected travel time?

25. The Gorman Manufacturing Company must decide whether to manufacture a component part at its Milan, Michigan, plant or purchase the component part from a supplier. The resulting profit is dependent upon the demand for the product.
The following payoff table shows the projected profit (in $000s).

                          Low Demand    Medium Demand    High Demand
                          s1            s2               s3
   Manufacture, d1        -20           40               100
   Purchase, d2           10            45               70

   The state-of-nature probabilities are P(s1) = 0.35, P(s2) = 0.35, and P(s3) = 0.30.
   a. Use a decision tree to recommend a decision.
   b. Use EVPI to determine whether Gorman should attempt to obtain a better estimate of demand.
   c. A test market study of the potential demand for the product is expected to report either a favorable (F) or unfavorable (U) condition. The relevant conditional probabilities are as follows:

      P(F | s1) = 0.10        P(U | s1) = 0.90
      P(F | s2) = 0.40        P(U | s2) = 0.60
      P(F | s3) = 0.60        P(U | s3) = 0.40

      What is the probability that the market research report will be favorable?
   d. What is Gorman's optimal decision strategy?
   e. What is the expected value of the market research information?
   f. What is the efficiency of the information?

Case Problem: PROPERTY PURCHASE STRATEGY

Glenn Foreman, president of Oceanview Development Corporation, is considering submitting a bid to purchase property that will be sold by sealed bid at a county tax foreclosure. Glenn's initial judgment is to submit a bid of $5 million. Based on his experience, Glenn estimates that a bid of $5 million will have a 0.2 probability of being the highest bid and securing the property for Oceanview. The current date is June 1. Sealed bids for the property must be submitted by August 15. The winning bid will be announced on September 1.

If Oceanview submits the highest bid and obtains the property, the firm plans to build and sell a complex of luxury condominiums. However, a complicating factor is that the property is currently zoned for single-family residences only. Glenn believes that a referendum could be placed on the voting ballot in time for the November election. Passage of the referendum would change the zoning of the property and permit construction of the condominiums.

The sealed-bid procedure requires the bid to be submitted with a certified check for 10% of the amount bid. If the bid is rejected, the deposit is refunded. If the bid is accepted, the deposit is the down payment for the property. However, if the bid is accepted and the bidder does not follow through with the purchase and meet the remainder of the financial obligation within six months, the deposit will be forfeited. In this case, the county will offer the property to the next highest bidder.

To determine whether Oceanview should submit the $5 million bid, Glenn has done some preliminary analysis. This preliminary work provided an assessment of 0.3 for the probability that the referendum for a zoning change will be approved and resulted in the following estimates of the costs and revenues that will be incurred if the condominiums are built.

   Cost and Revenue Estimates
   Revenue from condominium sales      $15,000,000
   Cost
      Property                         $5,000,000
      Construction expenses            $8,000,000

If Oceanview obtains the property and the zoning change is rejected in November, Glenn believes that the best option would be for the firm not to complete the purchase of the property. In this case, Oceanview would forfeit the 10% deposit that accompanied the bid. Because the likelihood that the zoning referendum will be approved is such an important factor in the decision process, Glenn has suggested that the firm hire a market research service to conduct a survey of voters.
The survey would provide a better estimate of the likelihood that the referendum for a zoning change would be approved. The market research firm that Oceanview Development has worked with in the past has agreed to do the study for $15,000. The results of the study will be available August 1, so Oceanview will have this information before the August 15 bid deadline. The results of the survey will be either a prediction that the zoning change will be approved or a prediction that the zoning change will be rejected. After considering the record of the market research service in previous studies conducted for Oceanview, Glenn has developed the following probability estimates concerning the accuracy of the market research information.

      P(A | s1) = 0.9        P(N | s1) = 0.1
      P(A | s2) = 0.2        P(N | s2) = 0.8

   where

      A = prediction of zoning change approval
      N = prediction that zoning change will not be approved
      s1 = the zoning change is approved by the voters
      s2 = the zoning change is rejected by the voters

Managerial Report

Perform an analysis of the problem facing the Oceanview Development Corporation, and prepare a report that summarizes your findings and recommendations. Include the following items in your report:
1. A decision tree that shows the logical sequence of the decision problem
2. A recommendation regarding what Oceanview should do if the market research information is not available
3. A decision strategy that Oceanview should follow if the market research is conducted
4. A recommendation as to whether Oceanview should employ the market research firm, along with the value of the information provided by the market research firm

Include the details of your analysis as an appendix to your report.

Case Problem: LAWSUIT DEFENSE STRATEGY

John Campbell, an employee of Manhattan Construction Company, claims to have injured his back as a result of a fall while repairing the roof at one of the Eastview apartment buildings. He has filed a lawsuit against Doug Reynolds, the owner of Eastview Apartments, asking for damages of $1,500,000. John claims that the roof had rotten sections and that his fall could have been prevented if Mr. Reynolds had told Manhattan Construction about the problem. Mr. Reynolds has notified his insurance company, Allied Insurance, of the lawsuit. Allied must defend Mr. Reynolds and decide what action to take regarding the lawsuit.

Some depositions have been taken, and a series of discussions has taken place between both sides. As a result, John Campbell has offered to accept a settlement of $750,000. Thus, one option is for Allied to pay John $750,000 to settle the claim. Allied is also considering making John a counteroffer of $400,000 in the hope that he will accept a lesser amount to avoid the time and cost of going to trial. But Allied's preliminary investigation has shown that John has a strong case; Allied is concerned that John may reject the counteroffer and request a jury trial.

Allied's lawyers have spent some time exploring John's likely reaction if they make a counteroffer of $400,000. The lawyers have concluded that it is adequate to consider three possible outcomes to represent John's possible reaction to a counteroffer of $400,000: (1) John will accept the counteroffer and the case will be closed; (2) John will reject the counteroffer and elect to have a jury decide the settlement amount; or (3) John will make a counteroffer to Allied of $600,000.
If John does make a counteroffer, Allied has decided that it will not make additional counteroffers. Allied will either accept John's counteroffer of $600,000 or go to trial.

If the case goes to a jury trial, Allied has decided that it should be adequate to consider three possible outcomes: (1) the jury may reject John's claim, and Allied will not be required to pay any damages; (2) the jury will find in favor of John and award him $750,000 in damages; or (3) the jury will conclude that John has a strong case and award him the full amount that he sued for, $1,500,000.

Key considerations as Allied develops its strategy for disposing of the case are the probabilities associated with John's response to an Allied counteroffer of $400,000 and the probabilities associated with the three possible trial outcomes. Allied's lawyers believe the probability that John will accept a counteroffer of $400,000 is 0.10, the probability that John will reject a counteroffer of $400,000 is 0.40, and the probability that John will, himself, make a counteroffer to Allied of $600,000 is 0.50. If the case goes to court, they believe that the probability the jury will award John damages of $1,500,000 is 0.30, the probability that the jury will award John damages of $750,000 is 0.50, and the probability that the jury will award John nothing is 0.20.

Managerial Report

Perform an analysis of the problem facing Allied Insurance and prepare a report that summarizes your findings and recommendations. Be sure to include the following items:
1. A decision tree
2. A recommendation regarding whether Allied should accept John's initial offer to settle the claim for $750,000
3. A decision strategy that Allied should follow if it decides to make John a counteroffer of $400,000
4. A risk profile for your recommended strategy

Appendix 4.1: DECISION ANALYSIS WITH SPREADSHEETS

A spreadsheet provides a convenient way to perform the basic decision analysis computations. A spreadsheet may be designed for any of the decision analysis approaches described in this chapter. We will demonstrate the use of the spreadsheet in decision analysis by solving the PDC condominium problem using the expected value approach.

The Expected Value Approach

The spreadsheet solution is shown in Figure 4.16. The payoff table with appropriate headings is placed into cells A3 through C8. In addition, the probabilities for the two states of nature are placed in cells B9 and C9. The Excel formulas that provide the calculations and the optimal solution recommendation are as follows:

   Cells D6:D8    Compute the expected value for each decision alternative
                  Cell D6: =B9*B6+C9*C6
                  Cell D7: =B9*B7+C9*C7
                  Cell D8: =B9*B8+C9*C8
   Cell D11       Compute the maximum expected value
                  =MAX(D6:D8)
   Cells E6:E8    Determine which decision alternative is recommended
                  Cell E6: =IF(D6=D11,A6," ")
                  Cell E7: =IF(D7=D11,A7," ")
                  Cell E8: =IF(D8=D11,A8," ")

As Figure 4.16 shows, the expected value approach recommends the large complex decision alternative with a maximum expected value of 14.2. The only change required to convert the spreadsheet in Figure 4.16 into a minimization analysis is to change the formula in cell D11 to =MIN(D6:D8). With this change, the decision alternative with the minimum expected value will be shown in column E.
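The same computation is easy to express outside a spreadsheet. The Python sketch below is an illustration of ours (not part of the original appendix) that mirrors the Figure 4.16 logic for the PDC payoffs.

```python
# Expected value approach for the PDC problem (payoffs in $ millions),
# mirroring the spreadsheet logic: EV = P(s1)*payoff_s1 + P(s2)*payoff_s2.

payoffs = {  # decision alternative: (high acceptance, low acceptance)
    "Small complex":  (8, 7),
    "Medium complex": (14, 5),
    "Large complex":  (20, -9),
}
probabilities = (0.8, 0.2)  # P(s1), P(s2)

expected_values = {
    alt: round(sum(p * v for p, v in zip(probabilities, vals)), 1)
    for alt, vals in payoffs.items()
}

# Use min(...) instead of max(...) when the payoffs are costs.
recommended = max(expected_values, key=expected_values.get)

print(expected_values)  # {'Small complex': 7.8, 'Medium complex': 12.2, 'Large complex': 14.2}
print(recommended)      # Large complex
```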
Computation of Branch Probabilities

Spreadsheets can also be used to compute the branch probabilities for a decision tree, as discussed in Section 4.6. A spreadsheet used to compute the branch probabilities for the PDC problem is shown in Figure 4.17. The prior probabilities are entered into cells B5 and B6.

FIGURE 4.16  SPREADSHEET SOLUTION FOR THE PDC PROBLEM USING THE EXPECTED VALUE APPROACH

   PDC Problem – Expected Value Approach

   Payoff Table                State of Nature
   Decision Alternative        High acceptance    Low acceptance    Expected Value    Recommended Decision
   Small complex               8                  7                 7.8
   Medium complex              14                 5                 12.2
   Large complex               20                 -9                14.2              Large complex
   Probability                 0.8                0.2

   Maximum Expected Value      14.2

FIGURE 4.17  SPREADSHEET SOLUTION FOR THE PDC PROBLEM PROBABILITY CALCULATIONS

   PDC Problem – Bayes' Probability Calculations

   Prior Probabilities
   States of Nature            High Acceptance 0.8    Low Acceptance 0.2

   Conditional Probabilities                    Market Research
   If State of Nature Is       Favorable        Unfavorable
   High Acceptance             0.90             0.10
   Low Acceptance              0.25             0.75

   Market Research Favorable (F)
   State of Nature       Prior     Conditional     Joint             Posterior
   High Acceptance       0.8       0.90            0.72              0.94
   Low Acceptance        0.2       0.25            0.05              0.06
                                                   P(F) = 0.77

   Market Research Unfavorable (U)
   State of Nature       Prior     Conditional     Joint             Posterior
   High Acceptance       0.8       0.10            0.08              0.35
   Low Acceptance        0.2       0.75            0.15              0.65
                                                   P(U) = 0.23

The four conditional probabilities are entered into cells B11, B12, C11, and C12. The following cell formulas perform the probability calculations for the PDC problem based on a favorable market research report, shown previously in Table 4.7:

   Cells B18 and B19    Enter the prior probabilities
                        =B5 and =B6
   Cells C18 and C19    Enter the conditional probabilities for a favorable market research report
                        =B11 and =B12
   Cells D18 and D19    Compute the joint probabilities
                        =B18*C18 and =B19*C19
   Cell D20             Compute the probability of a favorable market research report
                        =SUM(D18:D19)
   Cells E18 and E19    Compute the posterior probabilities for each state of nature
                        =D18/D20 and =D19/D20

The same logic was used to perform the probability calculations based on an unfavorable market research report, shown in cells B23:E28.

Quantitative Methods in Practice: OHIO EDISON COMPANY*
AKRON, OHIO

Ohio Edison Company is an operating company of FirstEnergy Corporation. Ohio Edison and its subsidiary, Pennsylvania Power Company, provide electrical service to more than one million customers in central and northeastern Ohio and western Pennsylvania. Most of this electricity is generated by coal-fired power plants. To meet evolving air-quality standards, Ohio Edison replaced existing particulate control equipment at most of its generating plants with more efficient equipment. The combination of this program to upgrade air-quality control equipment with the continuing need to construct new generating plants to meet future power requirements resulted in a large capital investment program.

Quantitative methods at Ohio Edison are distributed throughout the company rather than centralized in a specific department, and are more or less evenly divided among the following areas: fossil and nuclear fuel planning, environmental studies, capacity planning, large equipment evaluation, and corporate planning.
Applications include decision analysis, optimal ordering strategies, computer modeling, and simulation.

A Decision Analysis Application*

The flue gas emitted by coal-fired power plants contains small ash particles and sulfur dioxide (SO2). Federal and state regulatory agencies have established emission limits for both particulates and sulfur dioxide. In the late 1970s, Ohio Edison developed a plan to comply with new air-quality standards at one of its largest power plants. This plant, which consists of seven coal-fired units (most of which were constructed in the 1960s), constitutes about one-third of the generating capacity of Ohio Edison and its subsidiary company. Although all units were initially constructed with particulate emission control equipment, that equipment was no longer capable of meeting new particulate emission requirements.

A decision had already been made to burn low-sulfur coal in four of the smaller units (units 1–4) at the plant in order to meet SO2 emission standards. Fabric filters were to be installed on these units to control particulate emissions. Fabric filters, also known as baghouses, use thousands of fabric bags to filter out the particulates; they function in much the same way as a household vacuum cleaner. It was considered likely, although not certain, that the three larger units (units 5–7) at this plant would burn medium- to high-sulfur coal. A method of controlling particulate emissions at these units had not yet been selected.

Preliminary studies narrowed the particulate control equipment choice to a decision between fabric filters and electrostatic precipitators (which remove particulates suspended in the flue gas by passing the flue gas through a strong electric field). This decision was affected by a number of uncertainties, including the following:

- Uncertainty in the way some air-quality laws and regulations might be interpreted
- Potential requirements that either low-sulfur coal or high-sulfur Ohio coal (or neither) be burned in units 5–7
- Potential future changes to air-quality laws and regulations
- An overall plant reliability improvement program already under way at this plant
- The outcome of this program itself, which would affect the operating costs of whichever pollution control technology was installed in these units
- Uncertain construction costs of the equipment, particularly because limited space at the plant site made it necessary to install the equipment on a massive bridge deck over a four-lane highway immediately adjacent to the power plant
- Uncertain costs associated with replacing the electrical power required to operate the particulate control equipment
- Various other factors, including potential accidents and chronic operating problems that could increase the costs of operating the generating units (the degree to which each of these factors could affect operating costs varied with the choice of technology and with the sulfur content of the coal)

Particulate Control Decision

The air-quality program involved a choice between two types of particulate control equipment (fabric filters and electrostatic precipitators) for units 5–7. Because of the complexity of the problem, the high degree of uncertainty associated with factors affecting the decision, and the importance of the choice (because of its potential reliability and cost impact on Ohio Edison), decision analysis was used in the selection process.

The decision measure used to evaluate the outcomes of the particulate technology decision analysis was the annual revenue requirements for the three large units over their remaining lifetime. Revenue requirements are the monies that would have to be collected from the utility customers to recover costs resulting from the decision. They include not only direct costs but also the cost of capital and return on investment.

A decision tree was constructed to represent the particulate control decision and its uncertainties and costs. A simplified version of this decision tree is shown in Figure 4.18. The decision and chance nodes are indicated. Note that to conserve space, a type of shorthand notation is used: the coal sulfur content chance node should actually be located at the end of each branch of the capital cost chance node, as the dashed lines indicate. Each chance node actually represents several probabilistic cost models or submodels. The total revenue requirements are the sum of the revenue requirements for capital and operating costs. Costs associated with these models were obtained from engineering calculations or estimates. Probabilities were obtained from existing data or the subjective assessments of knowledgeable persons.

(Figure 4.18: Simplified particulate control equipment decision tree. A technology decision node, fabric filters vs. electrostatic precipitators, leads to a chance node for capital-cost revenue requirements (high $ / low $), then to a coal sulfur content chance node (1.5%, 2.5%, 3.5%), then to a chance node for operating-cost revenue requirements (high $ / low $).)

Results

A decision tree similar to that shown in Figure 4.18 was used to generate cumulative probability distributions for the annual revenue requirements outcomes calculated for each of the two particulate control alternatives. Careful study of these results led to the following conclusions:

- The expected value of annual revenue requirements for the electrostatic precipitator technology was approximately $1 million lower than that for the fabric filters.
- The fabric filter alternative had a higher upside risk (that is, a higher probability of high revenue requirements) than did the precipitator alternative.
- The precipitator technology had nearly an 80% probability of lower annual revenue requirements than the fabric filters.
- Although the capital cost of the fabric filter equipment (the cost of installing the equipment) was lower than for the precipitator, this cost was more than offset by the higher operating costs associated with the fabric filter.

These results led Ohio Edison to select the electrostatic precipitator technology for the generating units in question. Had the decision analysis not been performed, the particulate control decision might have been based chiefly on capital cost, a decision measure that would have favored the fabric filter equipment. Decision analysis offers a means for effectively analyzing the uncertainties involved in a decision; in this application it produced a choice with both lower expected revenue requirements and lower risk.

Questions

1. Why was decision analysis used in the selection of particulate control equipment for units 5, 6, and 7?
2. List the decision alternatives for the decision analysis problem developed by Ohio Edison.
3. What were the benefits of using decision analysis in this application?

*The authors are indebted to Thomas J. Madden and M. S. Hyrnick of Ohio Edison Company, Akron, Ohio, for providing this application.
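The rollback arithmetic behind such a tree is simple to sketch in Python. Every probability and dollar figure below is invented for illustration (the case publishes only the $1 million expected-value gap, which the toy numbers are chosen to echo); the chance structure mirrors Figure 4.18.

```python
# Rollback of a toy two-alternative decision tree. All probabilities and
# revenue-requirement figures are hypothetical; the case study does not
# publish Ohio Edison's actual models.

def expected_value(node):
    """A leaf is a number (annual revenue requirements, $ millions);
    a chance node is a list of (probability, subtree) pairs."""
    if isinstance(node, (int, float)):
        return node
    return sum(p * expected_value(sub) for p, sub in node)

# Chance structure: capital cost (high/low), then coal sulfur content
# (1.5% / 2.5% / 3.5%), with operating cost folded into the leaves.
alternatives = {
    "fabric filters": [
        (0.4, [(0.3, 130.0), (0.4, 140.0), (0.3, 150.0)]),  # high capital $
        (0.6, [(0.3, 120.0), (0.4, 128.0), (0.3, 136.0)]),  # low capital $
    ],
    "electrostatic precipitators": [
        (0.5, [(0.3, 128.0), (0.4, 134.0), (0.3, 142.0)]),
        (0.5, [(0.3, 122.0), (0.4, 129.0), (0.3, 136.0)]),
    ],
}

for name, tree in alternatives.items():
    print(f"{name}: E[annual revenue requirements] = ${expected_value(tree):.1f}M")
```

With these toy inputs the precipitator alternative comes out $1.0 million per year lower in expectation, matching the direction of the published conclusion.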
[ null, "https://studymoose.com/wp-content/themes/theme/img/welcome-logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8959755,"math_prob":0.9101001,"size":129860,"snap":"2020-34-2020-40","text_gpt3_token_len":30135,"char_repetition_ratio":0.2169788,"word_repetition_ratio":0.13523267,"special_character_ratio":0.23342831,"punctuation_ratio":0.10707472,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9544687,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-28T16:42:58Z\",\"WARC-Record-ID\":\"<urn:uuid:fc611806-0bf2-4be9-a7ca-7cec0cba6cad>\",\"Content-Length\":\"295579\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2fd2e0ef-5f55-43a4-93c3-ee090d2c9eda>\",\"WARC-Concurrent-To\":\"<urn:uuid:111a8749-065c-47f3-8f84-d635060ce8dc>\",\"WARC-IP-Address\":\"172.67.13.136\",\"WARC-Target-URI\":\"https://studymoose.com/decision-analysis-essay\",\"WARC-Payload-Digest\":\"sha1:RS4YVNQ5UE7BV3ZNJ2X65VYQNZ2MDHWX\",\"WARC-Block-Digest\":\"sha1:CCGQBQ2PCOVPQ6HJ3RRLC7UFIUE5S6A6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401601278.97_warc_CC-MAIN-20200928135709-20200928165709-00039.warc.gz\"}"}
https://probnum.readthedocs.io/en/v0.1.17/api/automod/probnum.randprocs.RandomProcess.html
[ "# RandomProcess¶\n\nclass probnum.randprocs.RandomProcess(input_shape, output_shape, dtype, mean=None, cov=None)\n\nBases: Generic[probnum.randprocs._random_process.InputType, probnum.randprocs._random_process.OutputType], abc.ABC\n\nRandom processes represent uncertainty about a function.\n\nRandom processes generalize functions by encoding uncertainty over function values in their covariance function. They can be used to model (deterministic) functions which are not fully known or to define functions with stochastic output.\n\nParameters\n\nRandomVariable\n\nRandom variables.\n\nGaussianProcess\n\nGaussian processes.\n\nMarkovProcess\n\nRandom processes with the Markov property.\n\nNotes\n\nRandom processes are assumed to have an (un-/countably) infinite domain. Random processes with a finite index set are represented by RandomVariable.\n\nAttributes Summary\n\n cov Covariance function $$k(x_0, x_1)$$ of the random process. dtype Data type of (elements of) the random process evaluated at an input. input_ndim Syntactic sugar for len(input_shape). input_shape Shape of inputs to the random process. mean Mean function $$m(x) := \\mathbb{E}[f(x)]$$ of the random process. output_ndim Syntactic sugar for len(output_shape). output_shape Shape of the random process evaluated at an input.\n\nMethods Summary\n\n __call__(args) Evaluate the random process at a set of input arguments. marginal(args) Batch of random variables defining the marginal distributions at the inputs. push_forward(args, base_measure, sample) Transform samples from a base measure into samples from the random process. sample(rng[, args, size]) Sample paths from the random process. std(args) Standard deviation function. var(args) Variance function.\n\nAttributes Documentation\n\ncov\n\nCovariance function $$k(x_0, x_1)$$ of the random process.\n\n\\begin{equation} k(x_0, x_1) := \\mathbb{E} \\left[ (f(x_0) - \\mathbb{E}[f(x_0)]) (f(x_1) - \\mathbb{E}[f(x_1)])^\\top \\right] \\end{equation}\nReturn type\n\nKernel\n\ndtype\n\nData type of (elements of) the random process evaluated at an input.\n\nReturn type\n\ndtype\n\ninput_ndim\n\nSyntactic sugar for len(input_shape).\n\nReturn type\n\nint\n\ninput_shape\n\nShape of inputs to the random process.\n\nReturn type\nmean\n\nMean function $$m(x) := \\mathbb{E}[f(x)]$$ of the random process.\n\nReturn type\n\nFunction\n\noutput_ndim\n\nSyntactic sugar for len(output_shape).\n\nReturn type\n\nint\n\noutput_shape\n\nShape of the random process evaluated at an input.\n\nReturn type\n\nMethods Documentation\n\nabstract __call__(args)[source]\n\nEvaluate the random process at a set of input arguments.\n\nParameters\n\nargs (TypeVar(InputType)) – shape= batch_shape + input_shape – (Batch of) input(s) at which to evaluate the random process. Currently, we require batch_shape to have at most one dimension.\n\nReturns\n\nshape= batch_shape + output_shape – Random process evaluated at the input(s).\n\nReturn type\n\nrandvars.RandomVariable\n\nmarginal(args)[source]\n\nBatch of random variables defining the marginal distributions at the inputs.\n\nParameters\n\nargs (TypeVar(InputType)) – shape= batch_shape + input_shape – (Batch of) input(s) at which to evaluate the random process. 
Currently, we require batch_shape to have at most one dimension.\n\nReturn type\n\n_RandomVariableList\n\npush_forward(args, base_measure, sample)[source]\n\nTransform samples from a base measure into samples from the random process.\n\nThis function can be used to control sampling from the random process by explicitly passing samples from a base measure evaluated at the input arguments.\n\nParameters\nReturn type\n\nndarray\n\nsample(rng, args=None, size=())[source]\n\nSample paths from the random process.\n\nIf no inputs are provided this function returns sample paths which are callables, otherwise random variables corresponding to the input locations are returned.\n\nParameters\nReturn type\n\nUnion[Callable[[TypeVar(InputType)], TypeVar(OutputType)], TypeVar(OutputType)]\n\nstd(args)[source]\n\nStandard deviation function.\n\nParameters\n\nargs (TypeVar(InputType)) – shape= batch_shape + input_shape – (Batch of) input(s) at which to evaluate the standard deviation function.\n\nReturns\n\nshape= batch_shape + output_shape – Standard deviation of the process at args.\n\nReturn type\n\n_OutputType\n\nvar(args)[source]\n\nVariance function.\n\nReturns the variance function which is the value of the covariance or kernel evaluated elementwise at args for each output dimension separately.\n\nParameters\n\nargs (TypeVar(InputType)) – shape= batch_shape + input_shape – (Batch of) input(s) at which to evaluate the variance function.\n\nReturns\n\nshape= batch_shape + output_shape – Variance of the process at args.\n\nReturn type\n\n_OutputType" ]
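The semantics of these attributes and methods can be illustrated without probnum itself. The following NumPy-only sketch (not probnum code; the zero mean and the exponentiated-quadratic kernel are illustrative stand-ins for the `mean` and `cov` attributes) shows what `mean`, `cov`, `var`/`std`, `marginal`, and `sample` compute for a scalar-input, scalar-output process:

```python
import numpy as np

rng = np.random.default_rng(42)

def mean(x):
    """m(x) := E[f(x)] -- here a zero-mean process."""
    return np.zeros(len(x))

def cov(x0, x1, lengthscale=1.0):
    """k(x0, x1) -- exponentiated quadratic, scalar inputs."""
    d = x0[:, None] - x1[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

args = np.linspace(0.0, 1.0, 5)   # batch_shape = (5,), input_shape = ()

K = cov(args, args)
var = np.diag(K)                   # var(args): k(x, x) elementwise
std = np.sqrt(var)                 # std(args)

# marginal(args): independent normals N(m(x_i), k(x_i, x_i)), one per input
marginals = list(zip(mean(args), var))

# sample(rng, args): one joint draw from N(m(args), K); the small jitter
# keeps the nearly singular covariance numerically positive definite
path = rng.multivariate_normal(mean(args), K + 1e-9 * np.eye(len(args)))
print(std, marginals, path)
```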
https://people.cs.vt.edu/~asandu/Courses/MTU/CS2911/fortran_notes/node16.html
[ "", null, "", null, "", null, "", null, "Next: Literal Constants Up: A quick tour of Previous: Assignment   Contents\n\n# Intrinsic Numerical Operations\n\n NUMERIC_TYPE :: a,b NUMERIC_TYPE :: [a] b\n\nFortran, like any other language, defines several operators that act on numerical type variables. The addition, subtraction, multiplication, division and exponentiation operators are denoted\n\nrespectively. Nothe that addition and subtraction can be monadic (e.g. or ) or dyadic (e.g. ) operators. Also note that we can raise a positive real number to a real power, e.g. , but not .\n\nIn arithmetic expressions different numerical types can be used (will see later), but we usually cannot mix numerical and character, or numerical and logical variables." ]
[ null, "file:///usr/share/latex2html/icons/next.png", null, "file:///usr/share/latex2html/icons/up.png", null, "file:///usr/share/latex2html/icons/prev.png", null, "file:///usr/share/latex2html/icons/contents.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82672006,"math_prob":0.9111762,"size":403,"snap":"2023-14-2023-23","text_gpt3_token_len":83,"char_repetition_ratio":0.13533835,"word_repetition_ratio":0.0,"special_character_ratio":0.191067,"punctuation_ratio":0.20289855,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97400635,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-02T18:25:17Z\",\"WARC-Record-ID\":\"<urn:uuid:99c8a1de-ecb0-482f-b9ed-81f945e771b7>\",\"Content-Length\":\"3445\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1b3c4799-8f52-4fec-9235-a829f5c9a3f0>\",\"WARC-Concurrent-To\":\"<urn:uuid:af6c664a-14d5-4af1-bd46-dcdf4390e2c8>\",\"WARC-IP-Address\":\"198.82.184.52\",\"WARC-Target-URI\":\"https://people.cs.vt.edu/~asandu/Courses/MTU/CS2911/fortran_notes/node16.html\",\"WARC-Payload-Digest\":\"sha1:ITPAEX63WJZDYOKMQFOZ77BX5YIU3JCK\",\"WARC-Block-Digest\":\"sha1:7K7EEG2ZTXW4ZVZYFUQP67WCV3T3OAOI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224648850.88_warc_CC-MAIN-20230602172755-20230602202755-00500.warc.gz\"}"}
https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/T5.html
[ "Subscriber Authentication Point\nPress Release\nFree Access\n\n## Table B.3\n\nPriors and posterior distributions for the parameters modeled in the joint fit analysis (see Sect. 6).\n\nPlanet parameters Prior Posterior\nStellar parameters\nStellar radius, R (R)", null, "(0.2139,0.0041,0,1)", null, "Stellar mass, M (M)", null, "(0.179,0.014,0.,1)", null, "Limb darkening coefficient, u1", null, "(0.1858804,0.1,0.,1)", null, "Limb darkening coefficient, u2", null, "(0.49001512,0.1,0,1)", null, "Stellar luminosity, L (L) (derived)", null, "LHS 1140 b\n\nOrbital period, Pb (days)", null, "(24.736959,0.0004)", null, "Time of mid-transit, T0,b − 2 400 000 (days)", null, "(58399.0,58401.0)", null, "Planet mass, Mb (M)", null, "(0.0,50.0)", null, "Planet radius, Rb (R)", null, "(1.727,0.1,0,10)", null, "Orbital inclination, ib (deg.)", null, "(89.89,0.05,70,90)", null, "Planet density, ρb (g cm−3) (derived)", null, "Transit depth, Δb (ppt) (derived)", null, "Orbit semi-major axis, ab (AU) (derived)", null, "Relative orbital separation, abR (derived)", null, "Transit duration, T14,b (h) (derived)", null, "Planet surface gravity, gb (m s−2) (derived)", null, "Incident Flux, Finc,b (Finc,⊕) (derived)", null, "Stellar effective incident flux, Sb (S) (derived)", null, "Stellar luminosity, L (L) (derived)", null, "Equilibrium temperature, Teq,b (K) (derived)", null, "LHS 1140 c\n\nOrbital period, Pc (days)", null, "(3.777931,3e-05)", null, "Time of mid-transit, T0,c − 2 400 000 (days)", null, "(58389.2939,0.1)", null, "Planet mass, Mc (M)", null, "(0.0,50.0)", null, "Orbital inclination, ic (deg.)", null, "(89.92,0.05,70,90)", null, "Planet radius, Rc (R)", null, "(1.282,0.1,0,10)", null, "Planet density, ρc (g cm−3) (derived)", null, "Transit depth, Δc (ppt) (derived)", null, "Orbit semi-major axis, ac (AU) (derived)", null, "Relative orbital separation, acR (derived)", null, "Transit duration, T14,c (h) (derived)", null, "Planet surface gravity, gc (m s−2) (derived)", null, "Incident Flux, Finc,c (Finc,⊕) (derived)", null, "Stellar effective incident flux, Sc (S) (derived)", null, "Equilibrium temperature, Teq,c (K) (derived)", null, "Instrument parameters\n\nLC level", null, "(-500.0,500.0)", null, "Dilution factor", null, "(0.052,0.001,0.,1.)", null, "LC jitter (ppm)", null, "(0.0,2000.0)", null, "δESPRESSOpre (m s−1)", null, "(-15.0,-10.0)", null, "δESPRESSOpost (m s−1)", null, "(-15.0,-10.0)", null, "δHARPSc (m s−1)", null, "(-15.0,-10.0)", null, "σESPRESSOpre (m s−1)", null, "(0.0,0.005)", null, "σESPRESSOpost (m s−1)", null, "(0.0,0.005)", null, "σHARPSc (m s−1)", null, "(0.0,0.005)", null, "GP hyperparameters\n\nη1,FWHM (m s−1)", null, "(-6.0,6.0)", null, "η1 (m s−1)", null, "(-5.0,5.0)", null, "η2 (days)", null, "(100.0,500.0)", null, "η3 (days)", null, "(131.0,5.0)", null, "η4", null, "(-2.0,2.0)", null, "Notes.", null, ": Normal distribution with mean μ and width σ2.", null, ": Uniform distribution between a and b.", null, ": Log-uniform distribution between a and b.", null, ": Truncated normal distribution with mean μ and width σ2, between a and b.\n\nCurrent usage metrics show cumulative count of Article Views (full-text article views including HTML views, PDF and ePub downloads, according to the available data) and Abstracts Views on Vision4Press platform.\n\nData correspond to usage on the plateform after 2015. The current usage metrics is available 48-96 hours after online publication and is updated daily on week days." ]
[ null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq118.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq119.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq120.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq121.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq122.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq123.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq124.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq125.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq126.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq127.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq128.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq129.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq130.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq131.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq132.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq133.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq134.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq135.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq136.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq137.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq138.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq139.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq140.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq141.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq142.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq143.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq144.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq145.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq146.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq147.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq148.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq149.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq150.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq151.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq152.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq153.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq154.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq155.png", null, 
"https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq156.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq157.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq158.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq159.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq160.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq161.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq162.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq163.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq164.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq165.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq166.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq167.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq168.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq169.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq170.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq171.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq176.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq177.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq178.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq179.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq180.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq181.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq182.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq183.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq184.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq185.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq186.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq187.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq188.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq189.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq190.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq191.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq192.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq193.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq194.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq195.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq196.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq197.png", null, 
"https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq172.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq173.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq174.png", null, "https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/aa38922-20-eq175.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.79394674,"math_prob":0.9020376,"size":2643,"snap":"2020-45-2020-50","text_gpt3_token_len":1104,"char_repetition_ratio":0.16142479,"word_repetition_ratio":0.052785926,"special_character_ratio":0.42300415,"punctuation_ratio":0.25354332,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95851475,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-23T19:44:48Z\",\"WARC-Record-ID\":\"<urn:uuid:cd84213e-f4d6-4e18-87a9-ddce77b32418>\",\"Content-Length\":\"91627\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:004b005d-8932-4e27-a611-e6db38a9bc36>\",\"WARC-Concurrent-To\":\"<urn:uuid:84d80f52-f010-4532-9e67-900262bada56>\",\"WARC-IP-Address\":\"167.114.155.65\",\"WARC-Target-URI\":\"https://www.aanda.org/articles/aa/full_html/2020/10/aa38922-20/T5.html\",\"WARC-Payload-Digest\":\"sha1:AJYFJI2PKHDPAXLZ5IKHTALXQAH7WFWB\",\"WARC-Block-Digest\":\"sha1:2H6BKX6J4H54SOYYJ66JC4SK57DUMWUZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141164142.1_warc_CC-MAIN-20201123182720-20201123212720-00275.warc.gz\"}"}
http://ixtrieve.fh-koeln.de/birds/litie/document/37443
[ "# Document (#37443)\n\nEditor\nChung, P.W.H. et al.\nTitle\nDevelopments in applied artificial intelligence : proceedings / 16th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, IEA/AIE 2003, Loughborough, UK, June 23 - 26, 2003\nImprint\nBerlin : Springer\nYear\n2003\nPages\nXIV, 817 S\nIsbn\n3-540-40455-4\nSeries\nLecture notes in computer science ; Vol. 2718 : Lecture notes in artificial intelligence\nAbstract\nThis book constitutes the refereed proceedings of the 16th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, IEA/AIE 2003, held in Loughborough, UK in June 2003. The 81 revised full papers presented were carefully reviewed and selected from more than 140 submissions. Among the topics addressed are soft computing, fuzzy logic, diagnosis, knowledge representation, knowledge management, automated reasoning, machine learning, planning and scheduling, evolutionary computation, computer vision, agent systems, algorithmic learning, tutoring systems, financial analysis, etc.\nTheme\nWissensrepräsentation\nLCSH\nArtificial intelligence / Industrial applications / Congresses\nExpert systems (Computer science) / Industrial applications / Congresses\nRSWK\nKünstliche Intelligenz / Kongress / Loughborough <2003>\nSoft Computing / Kongress / Loughborough <2003>\nExpertensystem / Kongress / Loughborough <2003>\nBK\n54.72 (Künstliche Intelligenz)\nDDC\n670.28563\nGHBS\nWBE (HA)\nTZE (HA)\nTZH (HA)\nTTQ (SI)\nLCC\nQ334\nRVK\nSS 4800\n\n## Similar documents (content)\n\n1. Research and advanced technology for digital libraries : 7th European conference, ECDL2003 Trondheim, Norway, August 17-22, 2003. Proceedings (2003) 0.47\n```0.47162485 = sum of:\n0.47162485 = product of:\n1.4738277 = sum of:\n0.07286139 = weight(abstract_txt:refereed in 4427) [ClassicSimilarity], result of:\n0.07286139 = score(doc=4427,freq=1.0), product of:\n0.09444614 = queryWeight, product of:\n8.228904 = idf(docFreq=30, maxDocs=42740)\n0.011477366 = queryNorm\n0.7714597 = fieldWeight in 4427, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.228904 = idf(docFreq=30, maxDocs=42740)\n0.09375 = fieldNorm(doc=4427)\n0.0445818 = weight(abstract_txt:conference in 4427) [ClassicSimilarity], result of:\n0.0445818 = score(doc=4427,freq=1.0), product of:\n0.08576319 = queryWeight, product of:\n1.3476384 = boost\n5.544793 = idf(docFreq=453, maxDocs=42740)\n0.011477366 = queryNorm\n0.5198244 = fieldWeight in 4427, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.544793 = idf(docFreq=453, maxDocs=42740)\n0.09375 = fieldNorm(doc=4427)\n0.14303102 = weight(title_txt:proceedings in 4427) [ClassicSimilarity], result of:\n0.14303102 = score(doc=4427,freq=1.0), product of:\n0.10604695 = queryWeight, product of:\n1.4985526 = boost\n6.1657224 = idf(docFreq=243, maxDocs=42740)\n0.011477366 = queryNorm\n1.3487518 = fieldWeight in 4427, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.1657224 = idf(docFreq=243, maxDocs=42740)\n0.21875 = fieldNorm(doc=4427)\n0.39844117 = weight(subject_txt:congresses in 4427) [ClassicSimilarity], result of:\n0.39844117 = score(doc=4427,freq=2.0), product of:\n0.18467525 = queryWeight, product of:\n1.977549 = boost\n8.13653 = idf(docFreq=33, maxDocs=42740)\n0.011477366 = queryNorm\n2.1575234 = fieldWeight in 4427, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n8.13653 = idf(docFreq=33, 
maxDocs=42740)\n0.1875 = fieldNorm(doc=4427)\n0.026029738 = weight(abstract_txt:systems in 4427) [ClassicSimilarity], result of:\n0.026029738 = score(doc=4427,freq=1.0), product of:\n0.08131221 = queryWeight, product of:\n2.0747738 = boost\n3.414623 = idf(docFreq=3820, maxDocs=42740)\n0.011477366 = queryNorm\n0.3201209 = fieldWeight in 4427, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.414623 = idf(docFreq=3820, maxDocs=42740)\n0.09375 = fieldNorm(doc=4427)\n0.055537067 = weight(abstract_txt:applications in 4427) [ClassicSimilarity], result of:\n0.055537067 = score(doc=4427,freq=1.0), product of:\n0.12510112 = queryWeight, product of:\n2.3018048 = boost\n4.7353325 = idf(docFreq=1019, maxDocs=42740)\n0.011477366 = queryNorm\n0.44393742 = fieldWeight in 4427, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.7353325 = idf(docFreq=1019, maxDocs=42740)\n0.09375 = fieldNorm(doc=4427)\n0.4217762 = weight(subject_txt:kongress in 4427) [ClassicSimilarity], result of:\n0.4217762 = score(doc=4427,freq=2.0), product of:\n0.21957575 = queryWeight, product of:\n2.6409533 = boost\n7.24405 = idf(docFreq=82, maxDocs=42740)\n0.011477366 = queryNorm\n1.9208689 = fieldWeight in 4427, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n7.24405 = idf(docFreq=82, maxDocs=42740)\n0.1875 = fieldNorm(doc=4427)\n0.31156936 = weight(abstract_txt:2003 in 4427) [ClassicSimilarity], result of:\n0.31156936 = score(doc=4427,freq=2.0), product of:\n0.3777853 = queryWeight, product of:\n5.2915077 = boost\n6.220473 = idf(docFreq=230, maxDocs=42740)\n0.011477366 = queryNorm\n0.824726 = fieldWeight in 4427, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.220473 = idf(docFreq=230, maxDocs=42740)\n0.09375 = fieldNorm(doc=4427)\n0.32 = coord(8/25)\n```\n2. ¬The Semantic Web - ISWC 2010 : 9th International Semantic Web Conference, ISWC 2010, Shanghai, China, November 7-11, 2010, Revised Selected Papers, Part 2. 
(2010) 0.45\n```0.45272142 = sum of:\n0.45272142 = product of:\n1.0289123 = sum of:\n0.04857426 = weight(abstract_txt:refereed in 1707) [ClassicSimilarity], result of:\n0.04857426 = score(doc=1707,freq=1.0), product of:\n0.09444614 = queryWeight, product of:\n8.228904 = idf(docFreq=30, maxDocs=42740)\n0.011477366 = queryNorm\n0.5143065 = fieldWeight in 1707, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.228904 = idf(docFreq=30, maxDocs=42740)\n0.0625 = fieldNorm(doc=1707)\n0.013962976 = weight(abstract_txt:computer in 1707) [ClassicSimilarity], result of:\n0.013962976 = score(doc=1707,freq=1.0), product of:\n0.051829305 = queryWeight, product of:\n1.0476364 = boost\n4.3104496 = idf(docFreq=1559, maxDocs=42740)\n0.011477366 = queryNorm\n0.2694031 = fieldWeight in 1707, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.3104496 = idf(docFreq=1559, maxDocs=42740)\n0.0625 = fieldNorm(doc=1707)\n0.031270433 = weight(abstract_txt:international in 1707) [ClassicSimilarity], result of:\n0.031270433 = score(doc=1707,freq=3.0), product of:\n0.06151376 = queryWeight, product of:\n1.1413242 = boost\n4.6959233 = idf(docFreq=1060, maxDocs=42740)\n0.011477366 = queryNorm\n0.5083486 = fieldWeight in 1707, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n4.6959233 = idf(docFreq=1060, maxDocs=42740)\n0.0625 = fieldNorm(doc=1707)\n0.029721199 = weight(abstract_txt:conference in 1707) [ClassicSimilarity], result of:\n0.029721199 = score(doc=1707,freq=1.0), product of:\n0.08576319 = queryWeight, product of:\n1.3476384 = boost\n5.544793 = idf(docFreq=453, maxDocs=42740)\n0.011477366 = queryNorm\n0.34654957 = fieldWeight in 1707, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.544793 = idf(docFreq=453, maxDocs=42740)\n0.0625 = fieldNorm(doc=1707)\n0.0372069 = weight(abstract_txt:engineering in 1707) [ClassicSimilarity], result of:\n0.0372069 = score(doc=1707,freq=1.0), product of:\n0.099618286 = queryWeight, product of:\n1.4524207 = boost\n5.975915 = idf(docFreq=294, maxDocs=42740)\n0.011477366 = queryNorm\n0.37349468 = fieldWeight in 1707, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.975915 = idf(docFreq=294, maxDocs=42740)\n0.0625 = fieldNorm(doc=1707)\n0.053723138 = weight(abstract_txt:computing in 1707) [ClassicSimilarity], result of:\n0.053723138 = score(doc=1707,freq=2.0), product of:\n0.10100766 = queryWeight, product of:\n1.462514 = boost\n6.0174437 = idf(docFreq=282, maxDocs=42740)\n0.011477366 = queryNorm\n0.5318719 = fieldWeight in 1707, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.0174437 = idf(docFreq=282, maxDocs=42740)\n0.0625 = fieldNorm(doc=1707)\n0.08757899 = weight(abstract_txt:soft in 1707) [ClassicSimilarity], result of:\n0.08757899 = score(doc=1707,freq=1.0), product of:\n0.17627472 = queryWeight, product of:\n1.932048 = boost\n7.9493184 = idf(docFreq=40, maxDocs=42740)\n0.011477366 = queryNorm\n0.4968324 = fieldWeight in 1707, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.9493184 = idf(docFreq=40, maxDocs=42740)\n0.0625 = fieldNorm(doc=1707)\n0.017353158 = weight(abstract_txt:systems in 1707) [ClassicSimilarity], result of:\n0.017353158 = score(doc=1707,freq=1.0), product of:\n0.08131221 = queryWeight, product of:\n2.0747738 = boost\n3.414623 = idf(docFreq=3820, maxDocs=42740)\n0.011477366 = queryNorm\n0.21341394 = fieldWeight in 1707, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.414623 = 
idf(docFreq=3820, maxDocs=42740)\n0.0625 = fieldNorm(doc=1707)\n0.555739 = weight(subject_txt:kongress in 1707) [ClassicSimilarity], result of:\n0.555739 = score(doc=1707,freq=5.0), product of:\n0.21957575 = queryWeight, product of:\n2.6409533 = boost\n7.24405 = idf(docFreq=82, maxDocs=42740)\n0.011477366 = queryNorm\n2.530967 = fieldWeight in 1707, product of:\n2.236068 = tf(freq=5.0), with freq of:\n5.0 = termFreq=5.0\n7.24405 = idf(docFreq=82, maxDocs=42740)\n0.15625 = fieldNorm(doc=1707)\n0.07269513 = weight(abstract_txt:intelligence in 1707) [ClassicSimilarity], result of:\n0.07269513 = score(doc=1707,freq=1.0), product of:\n0.1961569 = queryWeight, product of:\n2.8823032 = boost\n5.929549 = idf(docFreq=308, maxDocs=42740)\n0.011477366 = queryNorm\n0.37059683 = fieldWeight in 1707, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.929549 = idf(docFreq=308, maxDocs=42740)\n0.0625 = fieldNorm(doc=1707)\n0.08108706 = weight(abstract_txt:artificial in 1707) [ClassicSimilarity], result of:\n0.08108706 = score(doc=1707,freq=1.0), product of:\n0.21097668 = queryWeight, product of:\n2.989201 = boost\n6.1494617 = idf(docFreq=247, maxDocs=42740)\n0.011477366 = queryNorm\n0.38434136 = fieldWeight in 1707, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.1494617 = idf(docFreq=247, maxDocs=42740)\n0.0625 = fieldNorm(doc=1707)\n0.44 = coord(11/25)\n```\n3. ¬The Semantic Web - ISWC 2010 : 9th International Semantic Web Conference, ISWC 2010, Shanghai, China, November 7-11, 2010, Revised Selected Papers, Part I. (2010) 0.45\n```0.45272142 = sum of:\n0.45272142 = product of:\n1.0289123 = sum of:\n0.04857426 = weight(abstract_txt:refereed in 1708) [ClassicSimilarity], result of:\n0.04857426 = score(doc=1708,freq=1.0), product of:\n0.09444614 = queryWeight, product of:\n8.228904 = idf(docFreq=30, maxDocs=42740)\n0.011477366 = queryNorm\n0.5143065 = fieldWeight in 1708, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.228904 = idf(docFreq=30, maxDocs=42740)\n0.0625 = fieldNorm(doc=1708)\n0.013962976 = weight(abstract_txt:computer in 1708) [ClassicSimilarity], result of:\n0.013962976 = score(doc=1708,freq=1.0), product of:\n0.051829305 = queryWeight, product of:\n1.0476364 = boost\n4.3104496 = idf(docFreq=1559, maxDocs=42740)\n0.011477366 = queryNorm\n0.2694031 = fieldWeight in 1708, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.3104496 = idf(docFreq=1559, maxDocs=42740)\n0.0625 = fieldNorm(doc=1708)\n0.031270433 = weight(abstract_txt:international in 1708) [ClassicSimilarity], result of:\n0.031270433 = score(doc=1708,freq=3.0), product of:\n0.06151376 = queryWeight, product of:\n1.1413242 = boost\n4.6959233 = idf(docFreq=1060, maxDocs=42740)\n0.011477366 = queryNorm\n0.5083486 = fieldWeight in 1708, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n4.6959233 = idf(docFreq=1060, maxDocs=42740)\n0.0625 = fieldNorm(doc=1708)\n0.029721199 = weight(abstract_txt:conference in 1708) [ClassicSimilarity], result of:\n0.029721199 = score(doc=1708,freq=1.0), product of:\n0.08576319 = queryWeight, product of:\n1.3476384 = boost\n5.544793 = idf(docFreq=453, maxDocs=42740)\n0.011477366 = queryNorm\n0.34654957 = fieldWeight in 1708, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.544793 = idf(docFreq=453, maxDocs=42740)\n0.0625 = fieldNorm(doc=1708)\n0.0372069 = weight(abstract_txt:engineering in 1708) [ClassicSimilarity], result of:\n0.0372069 = score(doc=1708,freq=1.0), product 
of:\n0.099618286 = queryWeight, product of:\n1.4524207 = boost\n5.975915 = idf(docFreq=294, maxDocs=42740)\n0.011477366 = queryNorm\n0.37349468 = fieldWeight in 1708, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.975915 = idf(docFreq=294, maxDocs=42740)\n0.0625 = fieldNorm(doc=1708)\n0.053723138 = weight(abstract_txt:computing in 1708) [ClassicSimilarity], result of:\n0.053723138 = score(doc=1708,freq=2.0), product of:\n0.10100766 = queryWeight, product of:\n1.462514 = boost\n6.0174437 = idf(docFreq=282, maxDocs=42740)\n0.011477366 = queryNorm\n0.5318719 = fieldWeight in 1708, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.0174437 = idf(docFreq=282, maxDocs=42740)\n0.0625 = fieldNorm(doc=1708)\n0.08757899 = weight(abstract_txt:soft in 1708) [ClassicSimilarity], result of:\n0.08757899 = score(doc=1708,freq=1.0), product of:\n0.17627472 = queryWeight, product of:\n1.932048 = boost\n7.9493184 = idf(docFreq=40, maxDocs=42740)\n0.011477366 = queryNorm\n0.4968324 = fieldWeight in 1708, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.9493184 = idf(docFreq=40, maxDocs=42740)\n0.0625 = fieldNorm(doc=1708)\n0.017353158 = weight(abstract_txt:systems in 1708) [ClassicSimilarity], result of:\n0.017353158 = score(doc=1708,freq=1.0), product of:\n0.08131221 = queryWeight, product of:\n2.0747738 = boost\n3.414623 = idf(docFreq=3820, maxDocs=42740)\n0.011477366 = queryNorm\n0.21341394 = fieldWeight in 1708, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.414623 = idf(docFreq=3820, maxDocs=42740)\n0.0625 = fieldNorm(doc=1708)\n0.555739 = weight(subject_txt:kongress in 1708) [ClassicSimilarity], result of:\n0.555739 = score(doc=1708,freq=5.0), product of:\n0.21957575 = queryWeight, product of:\n2.6409533 = boost\n7.24405 = idf(docFreq=82, maxDocs=42740)\n0.011477366 = queryNorm\n2.530967 = fieldWeight in 1708, product of:\n2.236068 = tf(freq=5.0), with freq of:\n5.0 = termFreq=5.0\n7.24405 = idf(docFreq=82, maxDocs=42740)\n0.15625 = fieldNorm(doc=1708)\n0.07269513 = weight(abstract_txt:intelligence in 1708) [ClassicSimilarity], result of:\n0.07269513 = score(doc=1708,freq=1.0), product of:\n0.1961569 = queryWeight, product of:\n2.8823032 = boost\n5.929549 = idf(docFreq=308, maxDocs=42740)\n0.011477366 = queryNorm\n0.37059683 = fieldWeight in 1708, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.929549 = idf(docFreq=308, maxDocs=42740)\n0.0625 = fieldNorm(doc=1708)\n0.08108706 = weight(abstract_txt:artificial in 1708) [ClassicSimilarity], result of:\n0.08108706 = score(doc=1708,freq=1.0), product of:\n0.21097668 = queryWeight, product of:\n2.989201 = boost\n6.1494617 = idf(docFreq=247, maxDocs=42740)\n0.011477366 = queryNorm\n0.38434136 = fieldWeight in 1708, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.1494617 = idf(docFreq=247, maxDocs=42740)\n0.0625 = fieldNorm(doc=1708)\n0.44 = coord(11/25)\n```\n4. 
Emerging frameworks and methods : Proceedings of the Fourth International Conference on the Conceptions of Library and Information Science (CoLIS4), Seattle, WA, July 21 - 25, 2002 (2002) 0.33\n```0.33357775 = sum of:\n0.33357775 = product of:\n1.1913491 = sum of:\n0.018053994 = weight(abstract_txt:international in 2056) [ClassicSimilarity], result of:\n0.018053994 = score(doc=2056,freq=1.0), product of:\n0.06151376 = queryWeight, product of:\n1.1413242 = boost\n4.6959233 = idf(docFreq=1060, maxDocs=42740)\n0.011477366 = queryNorm\n0.2934952 = fieldWeight in 2056, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.6959233 = idf(docFreq=1060, maxDocs=42740)\n0.0625 = fieldNorm(doc=2056)\n0.019371498 = weight(abstract_txt:learning in 2056) [ClassicSimilarity], result of:\n0.019371498 = score(doc=2056,freq=1.0), product of:\n0.06447117 = queryWeight, product of:\n1.168438 = boost\n4.807482 = idf(docFreq=948, maxDocs=42740)\n0.011477366 = queryNorm\n0.3004676 = fieldWeight in 2056, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.807482 = idf(docFreq=948, maxDocs=42740)\n0.0625 = fieldNorm(doc=2056)\n0.029721199 = weight(abstract_txt:conference in 2056) [ClassicSimilarity], result of:\n0.029721199 = score(doc=2056,freq=1.0), product of:\n0.08576319 = queryWeight, product of:\n1.3476384 = boost\n5.544793 = idf(docFreq=453, maxDocs=42740)\n0.011477366 = queryNorm\n0.34654957 = fieldWeight in 2056, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.544793 = idf(docFreq=453, maxDocs=42740)\n0.0625 = fieldNorm(doc=2056)\n0.122598015 = weight(title_txt:proceedings in 2056) [ClassicSimilarity], result of:\n0.122598015 = score(doc=2056,freq=1.0), product of:\n0.10604695 = queryWeight, product of:\n1.4985526 = boost\n6.1657224 = idf(docFreq=243, maxDocs=42740)\n0.011477366 = queryNorm\n1.156073 = fieldWeight in 2056, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.1657224 = idf(docFreq=243, maxDocs=42740)\n0.1875 = fieldNorm(doc=2056)\n0.5312549 = weight(subject_txt:congresses in 2056) [ClassicSimilarity], result of:\n0.5312549 = score(doc=2056,freq=2.0), product of:\n0.18467525 = queryWeight, product of:\n1.977549 = boost\n8.13653 = idf(docFreq=33, maxDocs=42740)\n0.011477366 = queryNorm\n2.8766978 = fieldWeight in 2056, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n8.13653 = idf(docFreq=33, maxDocs=42740)\n0.25 = fieldNorm(doc=2056)\n0.3976544 = weight(subject_txt:kongress in 2056) [ClassicSimilarity], result of:\n0.3976544 = score(doc=2056,freq=1.0), product of:\n0.21957575 = queryWeight, product of:\n2.6409533 = boost\n7.24405 = idf(docFreq=82, maxDocs=42740)\n0.011477366 = queryNorm\n1.8110125 = fieldWeight in 2056, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.24405 = idf(docFreq=82, maxDocs=42740)\n0.25 = fieldNorm(doc=2056)\n0.07269513 = weight(abstract_txt:intelligence in 2056) [ClassicSimilarity], result of:\n0.07269513 = score(doc=2056,freq=1.0), product of:\n0.1961569 = queryWeight, product of:\n2.8823032 = boost\n5.929549 = idf(docFreq=308, maxDocs=42740)\n0.011477366 = queryNorm\n0.37059683 = fieldWeight in 2056, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.929549 = idf(docFreq=308, maxDocs=42740)\n0.0625 = fieldNorm(doc=2056)\n0.28 = coord(7/25)\n```\n5. 
Research and advanced technology for digital libraries : 8th European conference, ECDL 2004, Bath, UK, September 12-17, 2004 : proceedings (2004) 0.27\n```0.2726949 = sum of:\n0.2726949 = product of:\n1.1362287 = sum of:\n0.07286139 = weight(abstract_txt:refereed in 4428) [ClassicSimilarity], result of:\n0.07286139 = score(doc=4428,freq=1.0), product of:\n0.09444614 = queryWeight, product of:\n8.228904 = idf(docFreq=30, maxDocs=42740)\n0.011477366 = queryNorm\n0.7714597 = fieldWeight in 4428, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.228904 = idf(docFreq=30, maxDocs=42740)\n0.09375 = fieldNorm(doc=4428)\n0.0445818 = weight(abstract_txt:conference in 4428) [ClassicSimilarity], result of:\n0.0445818 = score(doc=4428,freq=1.0), product of:\n0.08576319 = queryWeight, product of:\n1.3476384 = boost\n5.544793 = idf(docFreq=453, maxDocs=42740)\n0.011477366 = queryNorm\n0.5198244 = fieldWeight in 4428, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.544793 = idf(docFreq=453, maxDocs=42740)\n0.09375 = fieldNorm(doc=4428)\n0.14303102 = weight(title_txt:proceedings in 4428) [ClassicSimilarity], result of:\n0.14303102 = score(doc=4428,freq=1.0), product of:\n0.10604695 = queryWeight, product of:\n1.4985526 = boost\n6.1657224 = idf(docFreq=243, maxDocs=42740)\n0.011477366 = queryNorm\n1.3487518 = fieldWeight in 4428, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.1657224 = idf(docFreq=243, maxDocs=42740)\n0.21875 = fieldNorm(doc=4428)\n0.39844117 = weight(subject_txt:congresses in 4428) [ClassicSimilarity], result of:\n0.39844117 = score(doc=4428,freq=2.0), product of:\n0.18467525 = queryWeight, product of:\n1.977549 = boost\n8.13653 = idf(docFreq=33, maxDocs=42740)\n0.011477366 = queryNorm\n2.1575234 = fieldWeight in 4428, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n8.13653 = idf(docFreq=33, maxDocs=42740)\n0.1875 = fieldNorm(doc=4428)\n0.055537067 = weight(abstract_txt:applications in 4428) [ClassicSimilarity], result of:\n0.055537067 = score(doc=4428,freq=1.0), product of:\n0.12510112 = queryWeight, product of:\n2.3018048 = boost\n4.7353325 = idf(docFreq=1019, maxDocs=42740)\n0.011477366 = queryNorm\n0.44393742 = fieldWeight in 4428, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.7353325 = idf(docFreq=1019, maxDocs=42740)\n0.09375 = fieldNorm(doc=4428)\n0.4217762 = weight(subject_txt:kongress in 4428) [ClassicSimilarity], result of:\n0.4217762 = score(doc=4428,freq=2.0), product of:\n0.21957575 = queryWeight, product of:\n2.6409533 = boost\n7.24405 = idf(docFreq=82, maxDocs=42740)\n0.011477366 = queryNorm\n1.9208689 = fieldWeight in 4428, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n7.24405 = idf(docFreq=82, maxDocs=42740)\n0.1875 = fieldNorm(doc=4428)\n0.24 = coord(6/25)\n```" ]
https://codereview.stackexchange.com/questions/71466/matrix-to-tree-algorithm
[ "# Matrix to Tree Algorithm\n\nThe code takes a matrix and turns it into a tree of all the possible combinations. It then \"maps\" the tree by setting the value of the ending nodes to the total distance of the nodes from beginning node to ending node.\n\nIt seems to work fairly well but I've got a couple questions:\n\n1. Is a Python dict the best way to represent a tree?\n2. Any ways to simplify, speed up, or otherwise make it more clean and legible?\n\nKeep in mind I'll need to sort the set so I can display the most attractive routes.\n\nYou can find the code on my GitHub page (starts at line 61).\n\ndef matrix_to_tree(nodes):\n\"\"\"\nCreates a tree of all possible combinations of\nprovided nodes in dict format\n\"\"\"\ntree = {}\nfor node in nodes:\nchildren = nodes[:]\nchildren.remove(node)\ntree[node] = matrix_to_tree(children)\nreturn tree\n\ndef set_start(tree, start):\n\"\"\"\nRemoves extraneous starting nodes if only one\nstarting location is desired.\n\"\"\"\ntree = tree[start]\nreturn tree\n\ndef set_end(tree, end):\n\"\"\"\nRemoves ending nodes when they are not the\nlast node of the branch. Used when one\nending location is desired.\n\"\"\"\nif tree[end]:\ndel tree[end]\nnodes = tree.keys()\nif len(nodes) > 1:\nfor node in nodes:\nset_end(tree[node], end)\nreturn tree\n\ndef map_tree(tree, matrix, start, distance=0):\n\"\"\"\nMaps the distance from the root to each\nending node.\n\"\"\"\nfor node in tree:\nnew_distance = distance + node_distance(matrix, start, node)\nif tree[node]:\nmap_tree(tree[node], matrix, node, new_distance)\nelse:\ntree[node] = new_distance\nreturn tree\n\ndef node_distance(matrix, start, end):\n\"\"\"\nSearches a matrix for the value of two\npoints.\n\"\"\"\nreturn matrix[start][end]\n\nnodes = [key for key in x.keys()]\na = matrix_to_tree(nodes)\nb = set_start(a, 'A')\nc = set_end(b, 'G')\nd = map_tree(c, x, 'A')\nprint(d)\n\n\nInput example:\n\n{ A: {B:1, C:2}, B: {A:1, C:3}, C: {A:2, B:3}, }\n\nOutput example:\n\n# 'A' being the root node with no ending node specified\n{\nA: {B:{C:4}, C:{B:5}},\n}\n\n• Even if I try to adapt the input example by adding quotes etc. I can't get the example output by running the code. Please add a runnable example. – Janne Karila Dec 3 '14 at 20:16\n• @JanneKarila Name your matrix 'x' and try again. Or just use the code on GitHub. This was written for Python 3.4. – Colton Allen Dec 3 '14 at 21:39\n• @ColtonAllen as A, B, C are no valid variables. Please change your example to {'A': {'B':1, 'C':2}, 'B': {'A':1, 'C':3}, 'C': {'A':2, 'B':3}}. Also your code with that input example does not yield your output example. – lummax Dec 4 '14 at 9:41\n\nThe code is clean, modular and readable. You could try to work with generators to make the calculations lazy.\n\nI think the namedtuple are a better choice than your dict approach, something like:\n\nNode = collections.namedtuple('Node', ['name', 'distance', 'children'])\n\n\nYou can look into the way a functional language like Hskell or Clojure represents trees. 
It is weird that you mark the leaves of your tree by empty dicts and then replace them with the calculated distance.

- PEP 8:
  - Use 4 spaces per indentation level.
  - Separate top-level function and class definitions with two blank lines.
- `set_start()`: Why not just `return tree[start]`?
- `set_end()`:
  - `tree[end]` can result in a `KeyError`.
  - Why the `if len(nodes) > 1`?

```python
if end in tree:
    del tree[end]
for child in tree.values():
    set_end(child, end)
return tree
```

- `node_distance()`: This is an internal function and can be inlined in `map_tree()`.
- Please do not place code at module level; wrap it in a `main()` or `test()` function.

`nodes = [key for key in x.keys()]` can be simplified to `nodes = list(x.keys())`.
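A minimal sketch of the namedtuple representation suggested above; the `build` function and its name are an illustration, not code from the question, and it stores the running distance on every node instead of mutating leaf dicts:

```python
import collections

Node = collections.namedtuple('Node', ['name', 'distance', 'children'])

def build(matrix, current, remaining, distance=0):
    """Expand all routes from `current` over the `remaining` node set."""
    children = [
        build(matrix, nxt, remaining - {nxt},
              distance + matrix[current][nxt])
        for nxt in remaining
    ]
    return Node(current, distance, children)

x = {'A': {'B': 1, 'C': 2}, 'B': {'A': 1, 'C': 3}, 'C': {'A': 2, 'B': 3}}
root = build(x, 'A', set(x) - {'A'})
print(root)
# Leaves carry the total route distance, matching the question's output
# example: A->B->C gives 4, A->C->B gives 5.
```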
https://topic.alibabacloud.com/a/sorting-algorithm-iii-the-python-implementation-and-algorithm-optimization-of-bubbling-selecting-sort_1_29_30296206.html
[ "# Sorting algorithm (iii) the Python implementation and algorithm optimization of bubbling, selecting sort\n\nSource: Internet\nAuthor: User\n\nSay in front\n\nThe last year is too busy, blog long grass. Recently used Python to achieve a common sorting algorithm for everyone's reference.\n\nJava version sorting algorithm and optimization, see the previous article.\n\nSimple ordering of sorting algorithms (bubbling, selecting, inserting)\n\nSorting algorithm (ii) heap sequencing\n\n1. Sorting Concepts\n\nPlease refer to the previous 2 articles for further review.\n\n2. Simple sort of bubble method python implementation and optimization\n\nSchematic diagram\n\n650) this.width=650; \"src=\" Https://s1.51cto.com/wyfs02/M02/07/40/wKiom1nF8CTRGF7IAACgLGM-eko688.png \"title=\" Bubble method schematic diagram 1 \"width=\" \"height=\" 283 \"border=\" 0 \"hspace=\" 0 \"vspace=\" 0 \"style=\" width:700px;height:283px; \"alt=\" Wkiom1nf8ctrgf7iaacglgm-eko688.png \"/>\n\n650) this.width=650; \"src=\" Https://s1.51cto.com/wyfs02/M00/A5/F1/wKioL1nF8Cix7ybrAAByZegzjnI171.png \"title=\" Bubble method schematic diagram 2 \"width=\" \"height=\" 131 \"border=\" 0 \"hspace=\" 0 \"vspace=\" 0 \"style=\" width:700px;height:131px; \"alt=\" Wkiol1nf8cix7ybraabyzegzjni171.png \"/>\n\n2.1. Basic realization\n\n`Num_list = [    [1, 9, 8, 5, 6, 7, 4, 3,  2],    [1, 2, 3, 4, 5, 6, 7, 8, 9]]nums  = num_listprint (nums) Length = len (nums) count_swap = 0count = 0#  bubble_sortfor i in range (length):     for j in range ( LENGTH-I-1):        count += 1         if nums[j] > nums[j+1]:             tmp = nums[j]             nums[j] = nums[j+1]             nums[j+1] = tmp            count_ Swap += 1print (Nums,&nbsP;count_swap, count) `\n\n2.2, optimize the implementation\n\nIdea: If there is interaction in this round, the order is not correct, if this round is not exchanged, the description is the target order, the direct end of the sort.\n\n`Num_list = [    [1, 9, 8, 5, 6, 7, 4, 3,  2],    [1, 2, 3, 4, 5, 6, 7, 8, 9],     [1, 2, 3, 4, 5, 6, 7, 9, 8]]nums = num_ Listprint (nums) Length = len (nums) count_swap = 0count = 0# bubble_sortfor  i in range (length):     flag = false    for  j in range (length-i-1):         count += 1         if nums[j] > nums[j+1]:             tmp = nums[j]             nums[j] = nums[j+1]              nums[j+1] = tmp            flag =  true # swapped            count_ swap += 1    if not flag:         breakprint (Nums, count_swap, count)`\n\nSummary :\n\nThe bubbling method requires a round-wheel comparison of data.\n\noptimization, you can set a marker to determine if there is a data exchange occurring on this wheel, and if no interchange occurs, you can end the sort, and if an interchange occurs, proceed to the next round of sorting\n\nWorst-case scenario is that the initial order is exactly the opposite of the target sequence, with the number of traversal 1,..., n-1 N (n-1)/2\n\nThe best sort scenario is that the initial order is exactly the same as the target order and the number of traversal n-1\n\nTime complexity O (n^2)\n\n3, simple sorting of the choice of sorting Python implementation and optimization\n\nSelect the core of the sort: each round compares to find an extremum (maximum or minimum) on one end, and then find the extremum for the remaining number until the comparison is over.\n\nSchematic diagram\n\n650) this.width=650; \"src=\" Https://s3.51cto.com/wyfs02/M02/A5/F1/wKioL1nF8Frhx7cIAAC8VirJv_A194.png \"title=\" Select sort schematic \"width=\" \"height=\" 304 \"border=\" 0 
\"hspace=\" 0 \"vspace=\" 0 \"style=\" width:700px;height:304px; \"alt=\" Wkiol1nf8frhx7ciaac8virjv_a194.png \"/>\n\n3.1. Basic realization\n\n`M_list = [    [1, 9, 8, 5, 6, 7, 4, 3, 2 ],    [1, 2, 3, 4, 5, 6, 7, 8, 9],     [9, 8, 7, 6, 5, 4, 3, 2, 1],    [1,  1, 1, 1, 1, 1, 1, 1, 1]]nums = m_listlength =  Len (nums) print (nums) count_swap = 0count_iter = 0for i in range (length):     maxindex = i    for j in range (i +  1, length):        count_iter += 1         if nums[maxindex] < nums[j]:             maxindex = j    if i ! = maxindex:        tmp = nums[i]        nums[ I] = nums[maxindex]        nums[maxindex] = tmp         count_swap += 1print (nums, count_swap,  Count_iter)`\n\n3.2, optimize the implementation--two Yuan Select sort\n\nIdea: Reduce the number of iterations, one round to determine the number of 2, that is, the maximum number and the decimal.\n\n`M_list = [    [1, 9, 8, 5, 6, 7, 4, 3, 2 ],    [1, 2, 3, 4, 5, 6, 7, 8, 9],     [9, 8, 7, 6, 5, 4, 3, 2, 1],    [1,  1, 1, 1, 1, 1, 1, 1, 1]]nums = m_listlength =  Len (nums) print (nums) count_swap = 0count_iter = 0#  Two Yuan Select sort For i in range (length // 2):     maxindex = i    minindex =  -i - 1    minorigin = minindex         for j in range (i + 1, length - i):   #   Less than one         count_iter += 1    each time around      if nums[maxindex] < nums[j]:             maxindex = j        if nums[minindex]  > nums[-j - 1]:             minindex = -j - 1         #print (Maxindex,minindex)     if i != maxindex:        tmp  = nums[i]        nums[i] = nums[maxindex]         nums[maxindex] = tmp         count_swap += 1        #  to update the index if the minimum value has been exchanged         if i == minindex or i ==  Length + minindex:            minindex = maxindex         if minorigin != minindex:         tmp = nums[minorigin]        nums[ minorigin] = nums[minindex]        nums[minindex] =  tmp        count_swap += 1print (Nums, count_swap,  count_iter)`\n\n3.3. 
Optimization of equivalence\n\nIdeas: Two yuan when sorting, each round can know the maximum and minimum value, if a round the maximum minimum value is the same, indicating that the remaining numbers are equal, the direct end of the sort.\n\n`M_list = [    [1, 9, 8, 5, 6, 7, 4, 3, 2 ],    [1, 2, 3, 4, 5, 6, 7, 8, 9],     [9, 8, 7, 6, 5, 4, 3, 2, 1],    [1,  1, 1, 1, 1, 1, 1, 1, 1]]nums = m_listlength =  Len (nums) print (nums) count_swap = 0count_iter = 0#  Two Yuan Select sort For i in range (length // 2):     maxindex = i    minindex =  -i - 1    minorigin = minindex         for j in range (i + 1, length - i):   #   Less than one         count_iter += 1    each time around      if nums[maxindex] < nums[j]:             maxindex = j        if nums[minindex]  > nums[-j - 1]:             minindex = -j - 1     #print (Maxindex,minindex)      if nums[maxindex] == nums[minindex]: #  Elements Same          break    if i != maxindex:         tmp = nums[i]        nums[i] = nums[maxindex ]        nums[maxindex] = tmp         count_swap += 1        #  if the minimum value has been exchanged, To update the index         if i == minindex or i == length +  Minindex:            minindex = maxindex     if minorigin != minindex:         tmp = nums[minorigin]        nums[minorigin] =  nums[minindex]        nums[minindex] = tmp         count_swap += 1print (Nums, count_swap, count_iter)`\n\n3.4, the equivalence situation optimization advanced\n\nIdeas:\n\n[1, 1, 1, 1, 1, 1, 1, 1, 2] In this case, the smallest index found is-2, the maximum index 8, the code above will be exchanged 2 times, the minimum two 1 exchange is useless, so, add a judgment.\n\n`M_list = [    [1, 9, 8, 5, 6, 7, 4, 3, 2 ],    [1, 2, 3, 4, 5, 6, 7, 8, 9],     [9, 8, 7, 6, 5, 4, 3, 2, 1],    [1,  1, 1, 1, 1, 1, 1, 1, 1],    [1, 1, 1,  1, 1, 1, 1, 1, 2]]nums = m_listlength = len (nums) print ( Nums) count_swap = 0count_iter = 0#  Two Yuan Select sort For i in range (length // &NBSP;2):     maxindex = i    minindex = -i -  1    minorigin = minindex         For j in range (i + 1, length - i):  #  less than one   each time around        count_iter += 1        if nums[ maxindex] < nums[j]:             maxindex = j        if nums[minindex] >  nums[-j - 1]:            minindex =  -j - 1    print (Maxindex,minindex)          if nums[maxindex] == nums[minindex]: #  same Element          break            if i  != maxindex:        tmp = nums[i]         nums[i] = nums[maxindex]         nums[maxindex] = tmp        count_swap += 1         #  if the minimum value has been swapped, update the index         if i ==  minindex or i == length + minindex:             minindex = maxindex        The          #  minimum index is different, but the same value does not need to be exchanged     if  minorigin != minindex and nums[minorigin] != nums[minindex]:         tmp = nums[minorigin]         nums[minorigin] = nums[minindex]        nums[minindex]  = tmp        count_swap += 1         prinT (Nums, count_swap, count_iter) `\n\nThere may be some special cases that can be optimized, but all of them are optimized for exceptions, and the whole algorithm is limited.\n\nSummarize\n\nSimple select sort requires data round-wheel comparison and finds extrema in each round\n\nThere is no way to know whether the current wheel has reached the sorting requirements, but it is possible to know whether the extremum is at the target index position\n\nTraversal count 1,..., n-1 N (n-1)/2\n\nTime complexity O (n^2)\n\nReduced switching times, improved efficiency, slightly better 
performance than bubbling method\n\nSorting algorithm (iii) the Python implementation and algorithm optimization of bubbling, selecting sort\n\nRelated Keywords:\nRelated Article\n\nThe content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.\n\nIf you find any instances of plagiarism from the community, please send an email to: [email protected] and provide relevant evidence. A staff member will contact you within 5 working days.\n\n## A Free Trial That Lets You Build Big!\n\nStart building with 50+ products and up to 12 months usage for Elastic Compute Service\n\n• #### Sales Support\n\n1 on 1 presale consultation\n\n• #### After-Sales Support\n\n24/7 Technical Support 6 Free Tickets per Quarter Faster Response\n\n• Alibaba Cloud offers highly flexible support services tailored to meet your exact needs." ]
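For reuse, the same two algorithms can be packaged as functions. This is my own restatement, not from the original article; the returned counters mirror the article's count_swap and count variables.

```python
# Function-based restatement of the article's two sorts (my addition).
def bubble_sort(seq):
    """In-place ascending bubble sort with early exit; returns (swaps, comparisons)."""
    n, swaps, comps = len(seq), 0, 0
    for i in range(n):
        swapped = False
        for j in range(n - i - 1):
            comps += 1
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                swapped = True
                swaps += 1
        if not swapped:
            break
    return swaps, comps


def selection_sort(seq):
    """In-place descending selection sort (as in the article); returns (swaps, comparisons)."""
    n, swaps, comps = len(seq), 0, 0
    for i in range(n):
        m = i
        for j in range(i + 1, n):
            comps += 1
            if seq[m] < seq[j]:
                m = j
        if m != i:
            seq[i], seq[m] = seq[m], seq[i]
            swaps += 1
    return swaps, comps


if __name__ == "__main__":
    for data in ([1, 9, 8, 5, 6, 7, 4, 3, 2], list(range(1, 10))):
        a, b = data[:], data[:]
        print("bubble   ", bubble_sort(a), a)
        print("selection", selection_sort(b), b)
```

Returning the counters instead of printing them keeps the measurement concerns out of the algorithms themselves, which makes it easy to tabulate best and worst cases like the ones discussed in the summaries above.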
{"ft_lang_label":"__label__en","ft_lang_prob":0.6538309,"math_prob":0.9827324,"size":8036,"snap":"2023-40-2023-50","text_gpt3_token_len":2729,"char_repetition_ratio":0.17430279,"word_repetition_ratio":0.41781318,"special_character_ratio":0.35378298,"punctuation_ratio":0.21530554,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9926514,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-10T01:58:21Z\",\"WARC-Record-ID\":\"<urn:uuid:4c8184da-12c2-4b34-88b9-27a9fc6dfa02>\",\"Content-Length\":\"96830\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fd90ce8b-a7a3-4ab3-9924-a0b685067e12>\",\"WARC-Concurrent-To\":\"<urn:uuid:db40b40c-1d22-4a2d-8759-d36393d8da83>\",\"WARC-IP-Address\":\"47.88.251.189\",\"WARC-Target-URI\":\"https://topic.alibabacloud.com/a/sorting-algorithm-iii-the-python-implementation-and-algorithm-optimization-of-bubbling-selecting-sort_1_29_30296206.html\",\"WARC-Payload-Digest\":\"sha1:TXIHU6BSZOQYYW55ZUGU32B4IVHXNG2X\",\"WARC-Block-Digest\":\"sha1:7BF2YDJTDULHSN2BBHKW4637JOUKIVW4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100989.75_warc_CC-MAIN-20231209233632-20231210023632-00160.warc.gz\"}"}
https://www.systutorials.com/docs/linux/man/3-CLAIC1/
[ "# CLAIC1 (3) - Linux Man Pages\n\nclaic1.f -\n\n## SYNOPSIS\n\n### Functions/Subroutines\n\nsubroutine claic1 (JOB, J, X, SEST, W, GAMMA, SESTPR, S, C)\nCLAIC1 applies one step of incremental condition estimation.\n\n## Function/Subroutine Documentation\n\n### subroutine claic1 (integerJOB, integerJ, complex, dimension( j )X, realSEST, complex, dimension( j )W, complexGAMMA, realSESTPR, complexS, complexC)\n\nCLAIC1 applies one step of incremental condition estimation.\n\nPurpose:\n\n``` CLAIC1 applies one step of incremental condition estimation in\nits simplest version:\n\nLet x, twonorm(x) = 1, be an approximate singular vector of an j-by-j\nlower triangular matrix L, such that\ntwonorm(L*x) = sest\nThen CLAIC1 computes sestpr, s, c such that\nthe vector\n[ s*x ]\nxhat = [ c ]\nis an approximate singular vector of\n[ L 0 ]\nLhat = [ w**H gamma ]\nin the sense that\ntwonorm(Lhat*xhat) = sestpr.\n\nDepending on JOB, an estimate for the largest or smallest singular\nvalue is computed.\n\nNote that [s c]**H and sestpr**2 is an eigenpair of the system\n\ndiag(sest*sest, 0) + [alpha gamma] * [ conjg(alpha) ]\n[ conjg(gamma) ]\n\nwhere alpha = x**H*w.\n```\n\nParameters:\n\nJOB\n\n``` JOB is INTEGER\n= 1: an estimate for the largest singular value is computed.\n= 2: an estimate for the smallest singular value is computed.\n```\n\nJ\n\n``` J is INTEGER\nLength of X and W\n```\n\nX\n\n``` X is COMPLEX array, dimension (J)\nThe j-vector x.\n```\n\nSEST\n\n``` SEST is REAL\nEstimated singular value of j by j matrix L\n```\n\nW\n\n``` W is COMPLEX array, dimension (J)\nThe j-vector w.\n```\n\nGAMMA\n\n``` GAMMA is COMPLEX\nThe diagonal element gamma.\n```\n\nSESTPR\n\n``` SESTPR is REAL\nEstimated singular value of (j+1) by (j+1) matrix Lhat.\n```\n\nS\n\n``` S is COMPLEX\nSine needed in forming xhat.\n```\n\nC\n\n``` C is COMPLEX\nCosine needed in forming xhat.\n```\n\nAuthor:\n\nUniv. of Tennessee\n\nUniv. of California Berkeley\n\nUniv. of Colorado Denver\n\nNAG Ltd.\n\nDate:\n\nSeptember 2012\n\nDefinition at line 136 of file claic1.f.\n\n## Author\n\nGenerated automatically by Doxygen for LAPACK from the source code." ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7644387,"math_prob":0.9449637,"size":1892,"snap":"2021-21-2021-25","text_gpt3_token_len":541,"char_repetition_ratio":0.110699154,"word_repetition_ratio":0.08709677,"special_character_ratio":0.25158563,"punctuation_ratio":0.15027322,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9960585,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-25T03:04:54Z\",\"WARC-Record-ID\":\"<urn:uuid:ece85865-a420-4e8b-a92b-df396931f4b3>\",\"Content-Length\":\"10284\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4ed04a17-d2a5-4c81-96d0-48c35802d549>\",\"WARC-Concurrent-To\":\"<urn:uuid:de229742-52d5-411a-92e3-176e11a89bdc>\",\"WARC-IP-Address\":\"104.21.34.36\",\"WARC-Target-URI\":\"https://www.systutorials.com/docs/linux/man/3-CLAIC1/\",\"WARC-Payload-Digest\":\"sha1:4UFYTBV5QO4BTCCEZDL6NMBSZUHOMKDD\",\"WARC-Block-Digest\":\"sha1:HXIADZ3NKQ6PLEQHNWAVJBCOEY2E5VIZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488567696.99_warc_CC-MAIN-20210625023840-20210625053840-00351.warc.gz\"}"}
https://mathematica.stackexchange.com/questions/104236/imagehistogram-with-logarithmic-y-scale
[ "# ImageHistogram with logarithmic y scale\n\nImageHistogram is much faster than Histogram with ImageData.\n\nThe only problem: I cannot find out how to make the y axis logarithmic. Is this possible?\n\nI am using Mathematica 10.3.1.\n\n• You could try ImageLevels to get the data, then plot manually. – Szabolcs Jan 16 '16 at 21:06\n• Interpreting a logarithmic histogram y-axis is problematic at best. What about transforming the data with a log or square root? That leaves the resulting histogram comparable among different sets of data and hopefully shows desired features of the data. – JimB Jan 16 '16 at 21:11\n• To Szabolcs: Very helpful your info. I used \"ListLogPlot [ImageLevels[image], InterpolationOrder -> 0, Joined -> True]\". This works fine and is relatively fast. The only problem is that missing image levels do not appear in ListLogPlot. So empty gaps occur for those data, which looks different than in ImageHistogram or Histogram and is not really nice for presentation. Do you have an idea if ListLogPlot can produce plots that look as Histograms? See for example the histogram plot i.stack.imgur.com/X5dkA.png – mrz Jan 16 '16 at 23:03\n• Maybe something like hist = ImageLevels@ ColorConvert[ExampleData[{\"TestImage\", \"Lena\"}], \"Grayscale\"]; Histogram[WeightedData @@ Transpose[hist], Automatic, {\"Log\", \"PDF\"}] or maybe BarChart with ScalingFunctions – Szabolcs Jan 17 '16 at 10:39\n• I tried your first solution and it works perfect. Thank you. – mrz Jan 18 '16 at 13:03\n\nHere is an answer from the Wolfram Technical Support:\n\nMathematica does not currently allow for an option for a logarithmic scale in ImageHistogram. However, taking apart the underlying structure, it is possible to rescale the data. The underlying structure is a GraphicsComplex, such that the following code should get you started on a workaround for your interests:\n\nLogImageHistogram[input_Image, base_?NumericQ /; base >= 2] :=\nModule[\n{\nimh = ImageHistogram[input], logdata\n},\nlogdata = MapAt[\nLog[#]/Log[base] &,\nFirst@Cases[imh, GraphicsComplex[x_, y_] :> x, Infinity], {All, 2}\n] /. Indeterminate -> -1;\n\n(\nimh /. GraphicsComplex[x_, y_] :> GraphicsComplex[logdata, y]\n) /.\n{\nRule[FrameTicks, x_] :> Rule\n[\nFrameTicks, {\n{\n{#, base^#} & /@ Range[1, 10] // N, None\n}, {Automatic, Automatic}\n}\n],\nRule[PlotRange, x_] :> Rule[PlotRange, {0, Max[logdata]}]\n}\n]\n\n\nThis function takes two arguments,\n\n1) the input image and\n\n2) the logarithmic base with which to scale the y-axis.\n\nThis function isn't perfect because I only generate 10 tick marks, but these things can be adjusted by hand.\n\nAlso, because the GraphicsComplex contains some zeroes for the y-coordinates, I've artificially set these to -1 because the Log is Indeterminate. You won't see these because the PlotRange starts at 0.\n\nShow\n[\nLogImageHistogram[image, #],\nBaseStyle -> {FontFamily -> \"Calibri\", FontSize -> 20},\nImageSize -> 800\n] & /@ {10, 2}\n\n\ngives:", null, "" ]
[ null, "https://i.stack.imgur.com/Y7oqF.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.73084956,"math_prob":0.7991218,"size":2173,"snap":"2020-10-2020-16","text_gpt3_token_len":586,"char_repetition_ratio":0.10788382,"word_repetition_ratio":0.1529052,"special_character_ratio":0.2733548,"punctuation_ratio":0.19642857,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9805681,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-19T04:31:04Z\",\"WARC-Record-ID\":\"<urn:uuid:bdf067a4-4bb9-41d8-ae64-a0c75e768d12>\",\"Content-Length\":\"145870\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:583605c7-27ff-4ca6-919c-ecfede4eebff>\",\"WARC-Concurrent-To\":\"<urn:uuid:096ca490-b9eb-47c8-9a33-e78a94bfcf6e>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://mathematica.stackexchange.com/questions/104236/imagehistogram-with-logarithmic-y-scale\",\"WARC-Payload-Digest\":\"sha1:HR65DAN22ZLPSX6V6FQ33FLIKVLBQVL3\",\"WARC-Block-Digest\":\"sha1:UZZXHDNUPORYYOO6FDTNHDB5WZ3KZ67P\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875144027.33_warc_CC-MAIN-20200219030731-20200219060731-00355.warc.gz\"}"}
http://www.bjjdwang.com/190-product_show.html
[ " 澳门葡京网上开户平-首頁\n\n#### 滤芯", null, "# 聚丙烯(PP)微孔膜折叠滤芯\n\n我要分享:", null, "产品特色\n• 杰出的化学兼容性,适合过滤强酸、强碱及有机溶剂\n• 滤膜为折叠式深层过滤,膜过滤面积大\n• 压差低,纳污能力强,使用寿命长\n• 有多种过滤精度可选择\n• 渐进式孔径变化提供超高纳污能力\n• FDA认证材料\n\n• 用于民生生活饮用水过滤R.O逆渗透过滤\n• 用于工业化学制程中酸碱液体之过滤\n• 工业用水、电镀液之过滤\n• 无菌水、超纯水之前置过滤处理\n• 化工原料、有机溶剂之过滤\n\n• 滤芯参数:\n\n• 部件材质:\n\n• 滤芯性能:\n\n5μm,10μm,20μm,50μm\n\n [过滤精度] [滤芯长度] [滤芯接头] [密封材质] [中心杆材质] JTLXPP 10 10 S E S 0.1=0.1μm 5=5μm 10=10” D=DOE型(双开) E=三元乙丙橡胶 S=不锈钢 0.22=0.22μm 10=10μm 20=20” T=222型(平尾) P=聚四氟乙烯 B=聚丙烯 0.45=0.45μm 20=20μm 30=30” S=226型(翘片) V=氟橡胶 1=1μm 50=50μm 40=40” S=硅橡胶 3=3μm\n\n• 标题:\n• *姓名:\n• *邮箱:\n• 公司名称:\n• 电话:\n• 传真:\n• 所在地:\n• 内容:\n• 验证码:", null, "", null, "", null, "", null, "", null, "" ]
[ null, "http://www.bjjdwang.com/data/image/20170614/20170614091235643564.jpg", null, "http://www.bjjdwang.com/data/image/20170614/20170614092633643364.jpg", null, "http://www.bjjdwang.com/inc/GetCode.asp", null, "http://www.bjjdwang.com/images/bottom_logo.png", null, "http://www.bjjdwang.com/images/bottom_iso.png", null, "http://www.bjjdwang.com/images/er.jpg", null, "http://www.bjjdwang.com/images/er2.jpg", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.88277256,"math_prob":0.8780443,"size":1115,"snap":"2020-34-2020-40","text_gpt3_token_len":1167,"char_repetition_ratio":0.08640864,"word_repetition_ratio":0.02962963,"special_character_ratio":0.42421526,"punctuation_ratio":0.082125604,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99401844,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,2,null,2,null,7,null,null,null,7,null,7,null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-23T21:35:51Z\",\"WARC-Record-ID\":\"<urn:uuid:deff61c4-809f-4aa1-932a-f70972b613a6>\",\"Content-Length\":\"43475\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:351958fd-6c9f-435b-918c-63674072262d>\",\"WARC-Concurrent-To\":\"<urn:uuid:5eb553cb-aaf9-46b1-954b-f5d441640336>\",\"WARC-IP-Address\":\"107.164.150.182\",\"WARC-Target-URI\":\"http://www.bjjdwang.com/190-product_show.html\",\"WARC-Payload-Digest\":\"sha1:IIBJR4FPXNXE63QTC7RF24AQNCGGJNAW\",\"WARC-Block-Digest\":\"sha1:7LYFKZOUGQJ6SET4T2BCDHB56E7LNQPH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400212959.12_warc_CC-MAIN-20200923211300-20200924001300-00472.warc.gz\"}"}
https://www.physicsforums.com/threads/two-cylinders-in-contact-come-to-final-angular-velocities.794922/
[ "# Two cylinders in contact come to final angular velocities\n\n## Homework Statement\n\nTwo cylinders, made from the same material and having the same length,have radii r1 and r2with r1> r2. Both are free to rotate about their respective axes. The larger cylinder is initially rotating with angular velocity ωo. The smaller cylinder is moved until it comes into contact withthe larger one. Eventually the frictional force causes both cylinders torotate with constant angular velocity but in opposite directions. Find the final angular velocity of the smaller cylinder. Are any dynamical quantities conserved in this case?\n\nω1 is the final angular speed of the larger cylinder r1 and ω2 is the final angular speed for the smaller cylinder r2.\n\n## Homework Equations\n\nThe angular impulse k is equal to the change in the angular momentum ΔL for both cylinders. k is defined as the time integral of the torque so k=rFt where F is the friction force. Also, L=Iω and r1ω1=r2ω2 at the end.\n\n## The Attempt at a Solution\n\nr1Ft=I11o)\nr2Ft=I2ω2\nr1ω1=r2ω2\nUsing these 3 equations I am able to solve for the angular speed of the smaller cylinder (ω2) but my final answer involves the moments of inertia. Am I forgetting something or is this a sufficient answer? Also, no dynamical quantities are conserved due to friction.", null, "" ]
[ null, "https://www.physicsforums.com/attachments/img_20150130_011752952-jpg.78394/", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90237975,"math_prob":0.8699315,"size":1249,"snap":"2022-05-2022-21","text_gpt3_token_len":313,"char_repetition_ratio":0.14779116,"word_repetition_ratio":0.009569378,"special_character_ratio":0.2193755,"punctuation_ratio":0.076271184,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9933195,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-25T07:11:41Z\",\"WARC-Record-ID\":\"<urn:uuid:84636978-482e-4887-becd-330d54e749cd>\",\"Content-Length\":\"69725\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dcb7b4cf-ecb7-4969-988c-65d4e8c25307>\",\"WARC-Concurrent-To\":\"<urn:uuid:8b9338f8-b997-4d01-bd28-cd896b1c6de0>\",\"WARC-IP-Address\":\"172.67.68.135\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/two-cylinders-in-contact-come-to-final-angular-velocities.794922/\",\"WARC-Payload-Digest\":\"sha1:QG4ASVTXZV7RAHIZQK3S4EHDCOPYXDWK\",\"WARC-Block-Digest\":\"sha1:VYJ325RRTCVT6IZNR462F3F4ILK54YO7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662580803.75_warc_CC-MAIN-20220525054507-20220525084507-00276.warc.gz\"}"}
https://math.stackexchange.com/questions/2844644/integrals-involving-the-fractional-part-function-and-the-w-lambert-function
[ "# Integrals involving the fractional part function and the W-Lambert function\n\nI am trying make interesting integrals involving the fractional part function and special functions. I wondered if it is possible to deduce a series representation (in the atempt to get a closed-form that maybe there no exists) for $$\\int_0^{1/3} \\left\\{ \\frac{1}{x} \\right\\}W_0(x)dx,$$ where $\\left\\{ x \\right\\}=x-\\lfloor x\\rfloor$ denotes the fractional part function and $W_0(x)$ the Lambert W-Function, see for example the related MathWorld's article Lambert W-Function or the corresponding Wikipedia.\n\nClaim. One has that\n\n$$\\int_0^{1/3} \\left\\{ \\frac{1}{x} \\right\\}W_0(x)dx=\\sum_{n=1}^\\infty\\frac{(-1)^{n-1}n^{n-2}}{(n+1)!}\\left(3^{-n}-n\\left(\\zeta(n+1)-1-2^{-(n+1)}-3^{-(n+1)}\\right)\\right),$$ where $\\zeta(s)$ denotes the Riemann zeta function.\n\nQuestion. I think that previous Claim is right. Am I right? Are feasible more simplifications (closed-forms for some of the terms of the series of RHS, or a better way to write the resulting series) for my deduction? Many thanks.\n\nI would like to know if there are more potentially interesting combinations of definite integrals involving the fractional part function and the Lambert W-function (for a different integrand if it is required). Thus if you want answer next optional question adding a paragraph in your answer or well in comments.\n\nQuestion (Optional). Can you propose a more interesting (with a nice closed-form or with a calculation more interesting than mine) definite integral involving the fractional part function and a Lambert W-function (the interval of integration can be different than mine)? Many thanks.\n\n• How did you get this series expression and have you checked it numerically? – Yuriy S Oct 17 '18 at 22:22\n• First many thanks for your friendly behaviour with all users in this site MSE. Secondly I don't remember how was deduced my claim, but I think that I've combined the series expansion of the W-Lambert function and integration of fractional parts to deduce my result, that I don't checked numerically. In Question I am asking about if my deduction is right or as companion interesting results or simplifications related to mine @YuriyS Feel free to add your contribution so that other users can value it, I am going to delete my account in this site, but your contributions are always valuable. – user243301 Oct 19 '18 at 10:02" ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8465596,"math_prob":0.966358,"size":1578,"snap":"2019-35-2019-39","text_gpt3_token_len":416,"char_repetition_ratio":0.12642948,"word_repetition_ratio":0.042056076,"special_character_ratio":0.25982255,"punctuation_ratio":0.06506849,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9937278,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-15T10:13:19Z\",\"WARC-Record-ID\":\"<urn:uuid:d358e348-667e-42d0-8cbd-b2bd82a0dc41>\",\"Content-Length\":\"131173\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:37eb90e4-9088-4ce8-90e9-46f8b8c0232c>\",\"WARC-Concurrent-To\":\"<urn:uuid:b2c3dd04-59c2-4eba-bbfa-e6b16d6576fb>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2844644/integrals-involving-the-fractional-part-function-and-the-w-lambert-function\",\"WARC-Payload-Digest\":\"sha1:XH6FUK3YFQL3NJZB63TWJQWGUN7I4BI3\",\"WARC-Block-Digest\":\"sha1:GQFHHXC3FHHMYFKWK7Y3HRX74Z5Q3D7I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514571027.62_warc_CC-MAIN-20190915093509-20190915115509-00078.warc.gz\"}"}
https://aakashdigitalsrv1.meritnation.com/ask-answer/question/give-the-equations-of-2-lines-passing-through-2-14-how-many/linear-equations-in-two-variables/3636621
[ "# give the equations of 2 lines passing through (2,14).how many more such lines are there and why?\n\nEquations of 2 lines passing through (2,14) are:\n\n1) x + y = 16\n\n2) y - x = 12\n\nInfinitely many such lines are there because:\n\n1) (2,14) is a point on the graph.\n\n2) Also, according to an axiom, infinitely many lines can pass through a given point.\n\n3) Therefore, there are infinitely many such lines.\n\n• 49\n\neq\n\nx+y=16\n\ninfinity many such more lines can pass becoz itis linear eq in 2 variable\n\n• 13\nWhat are you looking for?" ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9595077,"math_prob":0.99600035,"size":392,"snap":"2022-05-2022-21","text_gpt3_token_len":110,"char_repetition_ratio":0.14948453,"word_repetition_ratio":0.028169014,"special_character_ratio":0.3112245,"punctuation_ratio":0.14772727,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999374,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-26T02:50:19Z\",\"WARC-Record-ID\":\"<urn:uuid:070383bf-3301-4ba5-8629-e4ac45e425e3>\",\"Content-Length\":\"27187\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1b25a9cd-45ba-4ffb-877d-be6ea0e61eb3>\",\"WARC-Concurrent-To\":\"<urn:uuid:aed9c3be-318d-4744-a373-6af1015b4de6>\",\"WARC-IP-Address\":\"13.32.208.104\",\"WARC-Target-URI\":\"https://aakashdigitalsrv1.meritnation.com/ask-answer/question/give-the-equations-of-2-lines-passing-through-2-14-how-many/linear-equations-in-two-variables/3636621\",\"WARC-Payload-Digest\":\"sha1:IODZRDGFU2QU66CZYZZAXNJXTLCRVP2F\",\"WARC-Block-Digest\":\"sha1:BNJCOX6FEXV2TXMKRI5CYBMJXUZ4S42N\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662595559.80_warc_CC-MAIN-20220526004200-20220526034200-00672.warc.gz\"}"}
https://poi.apache.org/apidocs/dev/org/apache/poi/ss/formula/BaseFormulaEvaluator.html
[ "org.apache.poi.ss.formula\n\n## Class BaseFormulaEvaluator\n\n• java.lang.Object\n• org.apache.poi.ss.formula.BaseFormulaEvaluator\n• ### Field Summary\n\nFields\nModifier and Type Field and Description\n`protected WorkbookEvaluator` `_bookEvaluator`\n• ### Constructor Summary\n\nConstructors\nModifier Constructor and Description\n`protected ` `BaseFormulaEvaluator(WorkbookEvaluator bookEvaluator)`\n• ### Method Summary\n\nAll Methods\nModifier and Type Method and Description\n`WorkbookEvaluator` `_getWorkbookEvaluator()`\nProvide the underlying WorkbookEvaluator\n`void` `clearAllCachedResultValues()`\nShould be called whenever there are major changes (e.g.\n`protected abstract RichTextString` `createRichTextString(java.lang.String str)`\n`CellValue` `evaluate(Cell cell)`\nIf cell contains a formula, the formula is evaluated and returned, else the CellValue simply copies the appropriate cell value from the cell and also its cell type.\n`static void` `evaluateAllFormulaCells(Workbook wb)`\nLoops over all cells in all sheets of the supplied workbook.\n`protected static void` ```evaluateAllFormulaCells(Workbook wb, FormulaEvaluator evaluator)```\n`CellType` `evaluateFormulaCell(Cell cell)`\nIf cell contains formula, it evaluates the formula, and saves the result of the formula.\n`CellType` `evaluateFormulaCellEnum(Cell cell)`\nDeprecated.\nuse `evaluateFormulaCell(cell)` instead\n`protected abstract CellValue` `evaluateFormulaCellValue(Cell cell)`\n`Cell` `evaluateInCell(Cell cell)`\nIf cell contains formula, it evaluates the formula, and puts the formula result back into the cell, in place of the old formula.\n`protected EvaluationWorkbook` `getEvaluationWorkbook()`\ninternal use\n`protected void` ```setCellType(Cell cell, CellType cellType)```\nOverride if a different variation is needed, e.g.\n`protected void` ```setCellType(Cell cell, CellValue cv)```\nset the cell type\n`protected void` ```setCellValue(Cell cell, CellValue cv)```\n`void` `setDebugEvaluationOutputForNextEval(boolean value)`\nPerform detailed output of formula evaluation for next evaluation only? 
Is for developer use only (also developers using POI for their XLS files).\n`void` `setIgnoreMissingWorkbooks(boolean ignore)`\nWhether to ignore missing references to external workbooks and use cached formula results in the main workbook instead.\n`static void` ```setupEnvironment(java.lang.String[] workbookNames, BaseFormulaEvaluator[] evaluators)```\nCoordinates several formula evaluators together so that formulas that involve external references can be evaluated.\n`void` `setupReferencedWorkbooks(java.util.Map<java.lang.String,FormulaEvaluator> evaluators)`\nSets up the Formula Evaluator to be able to reference and resolve links to other workbooks, eg [Test.xls]Sheet1!A1.\n• ### Methods inherited from class java.lang.Object\n\n`clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait`\n• ### Methods inherited from interface org.apache.poi.ss.usermodel.FormulaEvaluator\n\n`evaluateAll, notifyDeleteCell, notifySetFormula, notifyUpdateCell`\n• ### Field Detail\n\n• #### _bookEvaluator\n\n`protected final WorkbookEvaluator _bookEvaluator`\n• ### Constructor Detail\n\n• #### BaseFormulaEvaluator\n\n`protected BaseFormulaEvaluator(WorkbookEvaluator bookEvaluator)`\n• ### Method Detail\n\n• #### setupEnvironment\n\n```public static void setupEnvironment(java.lang.String[] workbookNames,\nBaseFormulaEvaluator[] evaluators)```\nCoordinates several formula evaluators together so that formulas that involve external references can be evaluated.\nParameters:\n`workbookNames` - the simple file names used to identify the workbooks in formulas with external links (for example \"MyData.xls\" as used in a formula \"[MyData.xls]Sheet1!A1\")\n`evaluators` - all evaluators for the full set of workbooks required by the formulas.\n• #### setupReferencedWorkbooks\n\n`public void setupReferencedWorkbooks(java.util.Map<java.lang.String,FormulaEvaluator> evaluators)`\nDescription copied from interface: `FormulaEvaluator`\nSets up the Formula Evaluator to be able to reference and resolve links to other workbooks, eg [Test.xls]Sheet1!A1.\n\nFor a workbook referenced as [Test.xls]Sheet1!A1, you should supply a map containing the key Test.xls (no square brackets), and an open FormulaEvaluator onto that Workbook.\n\nSpecified by:\n`setupReferencedWorkbooks` in interface `FormulaEvaluator`\nParameters:\n`evaluators` - Map of workbook names (no square brackets) to an evaluator on that workbook\n• #### _getWorkbookEvaluator\n\n`public WorkbookEvaluator _getWorkbookEvaluator()`\nDescription copied from interface: `WorkbookEvaluatorProvider`\nProvide the underlying WorkbookEvaluator\nSpecified by:\n`_getWorkbookEvaluator` in interface `WorkbookEvaluatorProvider`\n• #### getEvaluationWorkbook\n\n`protected EvaluationWorkbook getEvaluationWorkbook()`\ninternal use\nReturns:\nevaluation workbook\n• #### clearAllCachedResultValues\n\n`public void clearAllCachedResultValues()`\nShould be called whenever there are major changes (e.g. moving sheets) to input cells in the evaluated workbook. If performance is not critical, a single call to this method may be used instead of many specific calls to the notify~ methods. 
Failure to call this method after changing cell values will cause incorrect behaviour of the evaluate~ methods of this class\nSpecified by:\n`clearAllCachedResultValues` in interface `FormulaEvaluator`\n• #### evaluate\n\n`public CellValue evaluate(Cell cell)`\nIf cell contains a formula, the formula is evaluated and returned, else the CellValue simply copies the appropriate cell value from the cell and also its cell type. This method should be preferred over evaluateInCell() when the call should not modify the contents of the original cell.\nSpecified by:\n`evaluate` in interface `FormulaEvaluator`\nParameters:\n`cell` - may be `null` signifying that the cell is not present (or blank)\nReturns:\n`null` if the supplied cell is `null` or blank\n• #### evaluateInCell\n\n`public Cell evaluateInCell(Cell cell)`\nIf cell contains formula, it evaluates the formula, and puts the formula result back into the cell, in place of the old formula. Else if cell does not contain formula, this method leaves the cell unchanged. Note that the same instance of `Cell` is returned to allow chained calls like:\n``` int evaluatedCellType = evaluator.evaluateInCell(cell).getCellType();\n```\nBe aware that your cell value will be changed to hold the result of the formula. If you simply want the formula value computed for you, use `evaluateFormulaCell(Cell)`}\nSpecified by:\n`evaluateInCell` in interface `FormulaEvaluator`\nParameters:\n`cell` - The `Cell` to evaluate and modify.\nReturns:\nthe `cell` that was passed in, allowing for chained calls\n• #### evaluateFormulaCellValue\n\n`protected abstract CellValue evaluateFormulaCellValue(Cell cell)`\n• #### evaluateFormulaCell\n\n`public CellType evaluateFormulaCell(Cell cell)`\nIf cell contains formula, it evaluates the formula, and saves the result of the formula. The cell remains as a formula cell. Else if cell does not contain formula, this method leaves the cell unchanged. Note that the type of the formula result is returned, so you know what kind of value is also stored with the formula.\n``` CellType evaluatedCellType = evaluator.evaluateFormulaCell(cell);\n```\nBe aware that your cell will hold both the formula, and the result. If you want the cell replaced with the result of the formula, use `evaluate(org.apache.poi.ss.usermodel.Cell)` }\nSpecified by:\n`evaluateFormulaCell` in interface `FormulaEvaluator`\nParameters:\n`cell` - The cell to evaluate\nReturns:\nThe type of the formula result (the cell's type remains as CellType.FORMULA however) If cell is not a formula cell, returns `CellType._NONE` rather than throwing an exception.\n• #### evaluateFormulaCellEnum\n\n```@Deprecated\n@Removal(version=\"4.2\")\npublic CellType evaluateFormulaCellEnum(Cell cell)```\nDeprecated. use `evaluateFormulaCell(cell)` instead\nIf cell contains formula, it evaluates the formula, and saves the result of the formula. The cell remains as a formula cell. Else if cell does not contain formula, this method leaves the cell unchanged. Note that the type of the formula result is returned, so you know what kind of value is also stored with the formula.\n``` CellType evaluatedCellType = evaluator.evaluateFormulaCell(cell);\n```\nBe aware that your cell will hold both the formula, and the result. 
If you want the cell replaced with the result of the formula, use `evaluate(org.apache.poi.ss.usermodel.Cell)` }\nSpecified by:\n`evaluateFormulaCellEnum` in interface `FormulaEvaluator`\nParameters:\n`cell` - The cell to evaluate\nReturns:\nThe type of the formula result (the cell's type remains as CellType.FORMULA however) If cell is not a formula cell, returns `CellType._NONE` rather than throwing an exception.\nSince:\nPOI 3.15 beta 3\n• #### setCellType\n\n```protected void setCellType(Cell cell,\nCellValue cv)```\nset the cell type\nParameters:\n`cell` -\n`cv` -\n• #### setCellType\n\n```protected void setCellType(Cell cell,\nCellType cellType)```\nOverride if a different variation is needed, e.g. passing the evaluator to the cell method\nParameters:\n`cell` -\n`cellType` -\n• #### createRichTextString\n\n`protected abstract RichTextString createRichTextString(java.lang.String str)`\n• #### setCellValue\n\n```protected void setCellValue(Cell cell,\nCellValue cv)```\n• #### evaluateAllFormulaCells\n\n`public static void evaluateAllFormulaCells(Workbook wb)`\nLoops over all cells in all sheets of the supplied workbook. For cells that contain formulas, their formulas are evaluated, and the results are saved. These cells remain as formula cells. For cells that do not contain formulas, no changes are made. This is a helpful wrapper around looping over all cells, and calling evaluateFormulaCell on each one.\n• #### evaluateAllFormulaCells\n\n```protected static void evaluateAllFormulaCells(Workbook wb,\nFormulaEvaluator evaluator)```\n• #### setIgnoreMissingWorkbooks\n\n`public void setIgnoreMissingWorkbooks(boolean ignore)`\nWhether to ignore missing references to external workbooks and use cached formula results in the main workbook instead.\n\nIn some cases external workbooks referenced by formulas in the main workbook are not available. With this method you can control how POI handles such missing references:\n\nSpecified by:\n`setIgnoreMissingWorkbooks` in interface `FormulaEvaluator`\nParameters:\n`ignore` - whether to ignore missing references to external workbooks\n• #### setDebugEvaluationOutputForNextEval\n\n`public void setDebugEvaluationOutputForNextEval(boolean value)`\nPerform detailed output of formula evaluation for next evaluation only? Is for developer use only (also developers using POI for their XLS files). Log-Level WARN is for basic info, INFO for detailed information. These quite high levels are used because you have to explicitly enable this specific logging.\nSpecified by:\n`setDebugEvaluationOutputForNextEval` in interface `FormulaEvaluator`\nParameters:\n`value` - whether to perform detailed output\n\nCopyright 2020 The Apache Software Foundation or its licensors, as applicable." ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6100287,"math_prob":0.6878798,"size":6585,"snap":"2022-27-2022-33","text_gpt3_token_len":1360,"char_repetition_ratio":0.22215469,"word_repetition_ratio":0.13307494,"special_character_ratio":0.16810934,"punctuation_ratio":0.12446352,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9639383,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-14T10:21:05Z\",\"WARC-Record-ID\":\"<urn:uuid:1b0db2d8-cf1d-4b2e-93c9-36ab61987c1f>\",\"Content-Length\":\"45386\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9588b755-43dd-425d-813b-2f45c37f2675>\",\"WARC-Concurrent-To\":\"<urn:uuid:9f0e9092-c1bd-4369-9470-6d3505097427>\",\"WARC-IP-Address\":\"151.101.2.132\",\"WARC-Target-URI\":\"https://poi.apache.org/apidocs/dev/org/apache/poi/ss/formula/BaseFormulaEvaluator.html\",\"WARC-Payload-Digest\":\"sha1:NBAZEPNZFTNNH2DCGASUQ2UWOIQM3JU4\",\"WARC-Block-Digest\":\"sha1:UU2CFFAWM4BZCN37EGCRX4C4XEKZQ7LB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572021.17_warc_CC-MAIN-20220814083156-20220814113156-00407.warc.gz\"}"}
https://scicomp.stackexchange.com/questions/11742/fast-vector-diagonal-matrix-multiplication/11745
[ "# Fast vector - “diagonal” matrix multiplication\n\nLet $\\mathbf{1}\\in\\mathbb{R}^d$ be a vector with all elements equal to $1$. Define: $$\\mathbf{D} = \\mathrm{diag}(\\mathbf{1}^\\top,\\mathbf{1}^\\top,\\ldots,\\mathbf{1}^\\top) = \\begin{bmatrix} 1 \\cdots 1 & & \\\\ & 1\\cdots 1 & \\\\ & & 1\\cdots 1 \\end{bmatrix} \\in\\mathbb{R}^{d\\times d^2}$$\n\nI would like to compute $\\mathbf{D}\\cdot \\mathbf{x}$ for any vector $\\mathbf{x}\\in\\mathbb{R}^{d^2}$. Could anybody suggest me a way to efficiently compute this product? (Since $\\mathbf{D}$ has a very special structure, I guess there should be such a way).\n\nThis can be interpreted as summing over an index of a tensor when the vector $x$ is reshaped into a box of numbers instead of a list. In particular, if $X$ is the $d\\text{-by-}d$ folded version of $x$, then the operation you are doing is, \\begin{align} Dx &= \\mathrm{vec}\\left((I \\otimes \\mathbf{1})\\mathrm{vec}(X)\\right) \\\\ &= \\mathrm{vec}(\\mathbf{1}^T X I) \\\\ &= \\mathrm{vec}(\\mathbf{1}^T X). \\end{align}\n\nThe matlab code to do this is surprisingly simple. It is,\n\nsum(reshape(x,d,d))'\n\n\nHere's an example of it in action - you can see that it far outperforms the standard dense multiply, sparse matrix multiply, and for loop versions:\n\n>> onesmatrixquestion\ndense matrix multiply\nElapsed time is 0.000873 seconds.\nsparse matrix multiply\nElapsed time is 0.000115 seconds.\nfor loop version\nElapsed time is 0.000154 seconds.\ntensorized version\nElapsed time is 0.000018 seconds.\n\n\nHere's the code that generated those timing results:\n\n%onesmatrixquestion.m\nd = 100;\nonevec = ones(1,d);\nD = kron(eye(d,d),onevec);\nDsparse = sparse(D);\n\nx = randn(d^2,1);\n\ndisp('dense matrix multiply')\ntic\naa = D*x;\ntoc\n\ndisp('sparse matrix multiply')\ntic\nbb = Dsparse*x;\ntoc\n\ndisp('for loop version')\ntic\ncc = zeros(d,1);\nind = 1;\nfor kk=1:d\nfor jj=1:d\ncc(kk) = cc(kk) + x(ind);\nind = ind + 1;\nend\nend\ntoc\n\ndisp('tensorized version')\ntic\ndd = sum(reshape(x,d,d))';\ntoc\n\nif (norm(aa - bb) > 1e-9 || norm(bb - cc) > 1e-9 ...\n|| norm(cc - dd) > 1e-9)\ndisp('error: different methods give different results')\nend\n\n• Hi @Nick. I've realized that it can be done even better: ee = (onevec*reshape(x,d,d))'; – Khue May 28 '14 at 13:13\n• Does it go faster that way as compared to using the sum command? – Nick Alger May 28 '14 at 14:43\n• Yes, it does. That's why I posted that comment ;) – Khue May 28 '14 at 15:08\n• Oh, interesting. I was thinking that the 'sum' command would be specially optimized and therefore faster than a generic matvec - I guess not! – Nick Alger May 28 '14 at 15:31\n• Hmm you're right. Right multiplying a matrix by a vector of ones seems to be about twice as fast as using the sum command in Matlab. – Nick Alger May 28 '14 at 16:11\n\nThis is equivalent to computing sums of consecutive contiguous subvectors of $\\mathbf{x}$. You won't do much better than simple hand-coded nested loops if you have an automatically vectorizing compiler since you will probably be memory bandwidth limited for large $d$.\n\n• Hi Victor. I've got an answer. 
Thanks anyway :D – Khue May 28 '14 at 9:48\n\nEdit: removed confusing code\n\nYes there is a shortcut, here's some Python code:\n\n\n\nimport numpy as np\n\nd = 3\n\nD = np.repeat(np.identity(d), d, axis=1)\nprint('D:')\nprint D\nprint\n\nprint('x:')\nx = np.arange(d*d, dtype=float)\nprint x\nprint\n\nprint('D * x:')\nprint D.dot(x)\nprint\n\nprint('shortcut:')\nprint x.reshape((d, d)).sum(axis=1)\nprint\n\n\n\noutput:\n\n\nD:\n[[ 1. 1. 1. 0. 0. 0. 0. 0. 0.]\n[ 0. 0. 0. 1. 1. 1. 0. 0. 0.]\n[ 0. 0. 0. 0. 0. 0. 1. 1. 1.]]\n\nx:\n[ 0. 1. 2. 3. 4. 5. 6. 7. 8.]\n\nD * x:\n[ 3. 12. 21.]\n\nshortcut:\n[ 3. 12. 21.]\n\n• How is that a shortcut? It doesn't take advantage of sparsity at all and just directly computes the dot product (which is the same as matrix vector multiplication here). – Doug Lipinski May 27 '14 at 18:20\n• @doug Sorry my answer was unnecessarily long and confusing. First I compute the product directly, but if you keep reading you will find that I then compute the product in a second way using a trick. – k20 May 27 '14 at 18:40\n• Oh, I see now. I suggest adding at least a minimal explanation or comments in your code. – Doug Lipinski May 27 '14 at 19:06\n• Thanks a lot, k20. Your answer is similar to @Nick's answer above. +1! – Khue May 28 '14 at 9:49\n• Looks like this answer was posted first too. – Nick Alger May 30 '14 at 9:43" ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6163545,"math_prob":0.99575984,"size":1475,"snap":"2020-34-2020-40","text_gpt3_token_len":470,"char_repetition_ratio":0.11624745,"word_repetition_ratio":0.00877193,"special_character_ratio":0.32881355,"punctuation_ratio":0.14826499,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998067,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-15T20:31:01Z\",\"WARC-Record-ID\":\"<urn:uuid:81bb848b-51c0-42f2-ac4d-e3ea047f6408>\",\"Content-Length\":\"170821\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a550aecf-8120-47f5-b614-d7e8b0bded40>\",\"WARC-Concurrent-To\":\"<urn:uuid:a4db7f0b-a2d8-4247-b0b0-8b7b88e90712>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://scicomp.stackexchange.com/questions/11742/fast-vector-diagonal-matrix-multiplication/11745\",\"WARC-Payload-Digest\":\"sha1:GPG2EDPTPBJSIXBUI6GXWORLQOHYLCZY\",\"WARC-Block-Digest\":\"sha1:KI6J73D4OVABI6QLZN7POJMD536RYAPM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439741154.98_warc_CC-MAIN-20200815184756-20200815214756-00327.warc.gz\"}"}
https://casmusings.wordpress.com/tag/square/
[ "Tag Archives: square\n\nSquares and Octagons, A compilation\n\nMy last post detailed my much-too-long trigonometric proof of why the octagon formed by connecting the midpoints and vertices of the edges of a square into an 8-pointed star is always 1/6 of the area of the original square.", null, "My proof used trigonometry, and responses to the post on Twitter  and on my ‘blog showed many cool variations.  Dave Radcliffe thought it would be cool to have a compilation of all of the different approaches.  I offer that here in the order they were shared with me.\n\nMethod 1:  My use of trigonometry in a square.  See my original post.\n\nMethod 2:  Using medians in a rectangle from Tatiana Yudovina, a colleague at Hawken School.\n\nBelow, the Area(axb rectangle) = ab = 16 blue triangles, and\nArea(octagon) = 4 blue triangles – 2 red deltas..", null, "Now look at the two green, similar triangles.  They are similar with ratio 1/2, making\n\nArea(red delta) =", null, "$\\displaystyle \\frac{b}{4} \\cdot \\frac{a}{6} = \\frac{ab}{24}$, and\n\nArea(blue triangle) =", null, "$\\displaystyle \\frac{1}{16} ab$\n\nSo, Area(octagon) =", null, "$\\displaystyle 2 \\frac{ab}{24}-4\\frac {ab}{16}=\\frac{1}{6}ab$.\n\nQED\n\nMethod 3:  Using differences in triangle areas in a square (but easily extended to rectangles)from @Five_Triangles (‘blog here).\n\nMethod 4:  Very clever shorter solution using triangle area similarity in a square also from @Five_Triangles (‘blog here).\n\nMethod 5:  Great option Using dilated kitesfrom Dave Radcliffe posting as @daveinstpaul.\n\nMethod 6:  Use fact that triangle medians trisect each other from Mike Lawler posting as @mikeandallie.\n\nMethod 7:  Use a coordinate proof on a specific square from Steve Ingrassia, a colleague at Hawken School.  Not a quick proof like some of the geometric solutions, but it’s definitely different than the others.\n\nIf students know the formula for finding the area of any polygon using its coordinates, then they can prove this result very simply with nothing more than simple algebra 1 techniques.   No trig is required.\n\nThe area of polygon with vertices (in either clockwise or counterclockwise order, starting at any vertex) of", null, "$(x_1, y_1)$,", null, "$(x_2, y_2)$, …,", null, "$(x_n, y_n)$ is", null, "$\\displaystyle Area = \\left| \\frac{(x_1y_2-x_2y_1)+(x_2y_3-x_3y_2)+...+(x_{n-1}y_n-x_ny_{n-1})}{2} \\right|$\n\nUse a 2×2 square situated with vertices at (0,0), (0,2), (2,2), and (2,0).  
Construct segments connecting each vertex with the midpoints of the sides of the square, and find the equations of the associated lines.\n\n• L1 (connecting (0,0) and (2,1):    y = x/2\n• L2 (connecting (0,0) and (1,2):   y=2x\n• L3 (connecting (0,1) and (2,0):  y= -x/2 + 1\n• L4 (connecting (0,1) and (2,2):  y= x/2 + 1\n• L5 (connecting (0,2) and (1,0):  y = -2x + 2\n• L6 (connecting (0,2) and (2,1):  y= -x/2 + 2\n• L7 (connecting (1,2) and (2,0):  y = -2x + 4\n• L8 (connecting (2,2) and (1,0):  y = 2x – 2\n\nThe 8 vertices of the octagon come at pairwise intersections of some of these lines, which can be found with simple substitution:\n\n• Vertex 1 is at the intersection of L1 and L3:   (1, 1/2)\n• Vertex 2 is at the intersection of L3 and L5:  (2/3, 2/3)\n• Vertex 3 is at the intersection of L2 and L5:  (1/2, 1)\n• Vertex 4 is at the intersection of L2 and L4:  (2/3, 4/3)\n• Vertex 5 is at the intersection of L4 and L6:  (1, 3/2)\n• Vertex 6 is at the intersection of L6 and L7:  (4/3, 4/3)\n• Vertex 7 is at the intersection of L7 and L8:  (3/2, 1)\n• Vertex 8 is at the intersection of L1 and L8:  (4/3, 2/3)\n\nUsing the coordinates of these 8 vertices in the formula for the area of the octagon, gives", null, "$\\displaystyle \\frac{ \\left| 1/3 +1/3+0+(-1/3)+(-2/3)+(-1/3)+0 \\right|}{2} = \\frac{2}{3}$\n\nSince the area of the original square was 4, the area of the octagon is exactly 1/6th of the area of the square.\n\nSquares and Octagons\n\nFollowing is a really fun problem Tom Reardon showed my department last May as he led us through some TI-Nspire CAS training.  Following the introduction of the problem, I offer a mea culpa, a proof, and an extension.\n\nTHE PROBLEM:\n\nTake any square and construct midpoints on all four sides.\nConnect the four midpoints and four vertices to create a continuous 8-pointed star as shown below.  The interior of the star is an octagon.  Construct this yourself using your choice of dynamic geometry software and vary the size of the square.\n\nCompare the areas of the external square and the internal octagon.", null, "You should find that the area of the original square is always 6 times the area of the octagon.\n\nI thought that was pretty cool.  Then I started to play.\n\nMINOR OBSERVATIONS:\n\nUsing my Nspire, I measured the sides of the octagon and found it to be equilateral.\n\nAs an extension of Tom’s original problem statement, I wondered if the constant square:octagon ratio occurred in any other quadrilaterals.  I found the external quadrilateral was also six times the area of the internal octagon for parallelograms, but not for any more general quadrilaterals.  Tapping my understanding of the quadrilateral hierarchy, that means the property also holds for rectangles and rhombi.\n\nMEA CULPA:\n\nMath teachers always warn students to never, ever assume what they haven’t proven.  Unfortunately, my initial exploration of this problem was significantly hampered by just such an assumption.  I obviously know better (and was reminded afterwards that Tom actually had told us that the octagon was not equiangular–but like many students, I hadn’t listened).   After creating the original octagon, measuring its sides and finding them all equivalent, I errantly assumed the octagon was regular.  That isn’t true.\n\nThat false assumption created flaws in my proof and generalizations.  I discovered my error when none of my proof attempts worked out, and I eventually threw everything out and started over.  I knew better than to assume.  
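Method 7 is easy to machine-check. Here is a small verification, my addition and not part of the original post, using exact rational arithmetic:

```python
# Shoelace check of Method 7 with exact fractions (my addition).
from fractions import Fraction as F

# The eight octagon vertices from Method 7, in order.
V = [(F(1), F(1, 2)), (F(2, 3), F(2, 3)), (F(1, 2), F(1)), (F(2, 3), F(4, 3)),
     (F(1), F(3, 2)), (F(4, 3), F(4, 3)), (F(3, 2), F(1)), (F(4, 3), F(2, 3))]

def shoelace(pts):
    """Polygon area from vertex coordinates, including the closing edge."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

area = shoelace(V)
print(area, F(4) / area)  # prints 2/3 and 6: the square is 6 times the octagon
```

It prints 2/3 for the octagon's area and 6 for the square-to-octagon ratio, confirming the computation above.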
But I persevered, discovered my error through back-tracking, and eventually overcame.  That’s what I really hope my students learn.\n\nTHE REAL PROOF:\n\nGoal:  Prove that the area of the original square is always 6 times the area of the internal octagon.\n\nAssume the side length of a given square is", null, "$2x$, making its area", null, "$4x^2$.\n\nThe octagon’s area obviously is more complicated.  While it is not regular, the square’s symmetry guarantees that it can be decomposed into four congruent kites in two different ways.  Kite AFGH below is one such kite.", null, "Therefore, the area of the octagon is 4 times the area of AFGH.  One way to express the area of any kite is", null, "$\\frac{1}{2}D_1\\cdot D_2$, where", null, "$D_1$ and", null, "$D_2$ are the kite’s diagonals. If I can determine the lengths of", null, "$\\overline{AG}$ and", null, "$\\overline {FH}$, then I will know the area of AFGH and thereby the ratio of the area of the square to the area of the octagon.\n\nThe diagonals of every kite are perpendicular, and the diagonal between a kite’s vertices connecting its non-congruent sides is bisected by the kite’s other diagonal.  In terms of AFGH, that means", null, "$\\overline{AG}$ is the perpendicular bisector of", null, "$\\overline{FH}$.\n\nThe square and octagon are concentric at point A, and point E is the midpoint of", null, "$\\overline{BC}$, so", null, "$\\Delta BAC$ is isosceles with vertex A, and", null, "$\\overline{AE}$ is the perpendicular bisector of", null, "$\\overline{BC}$.\n\nThat makes right triangles", null, "$\\Delta BEF \\sim \\Delta BCD$.  Because", null, "$\\displaystyle BE=\\frac{1}{2} BC$, similarity gives", null, "$\\displaystyle AF=FE=\\frac{1}{2} DC=\\frac{x}{2}$.  I know one side of the kite.\n\nLet point I be the intersection of the diagonals of AFGH.", null, "$\\Delta BEA$ is right isosceles, so", null, "$\\Delta AIF$ is, too, with", null, "$m\\angle{IAF}=45$ degrees.  With", null, "$\\displaystyle AF=\\frac{x}{2}$, the Pythagorean Theorem gives", null, "$\\displaystyle IF=\\frac{x}{2\\sqrt{2}}$.  Point I is the midpoint of", null, "$\\overline{FH}$, so", null, "$\\displaystyle FH=\\frac{x}{\\sqrt{2}}$.  One kite diagonal is accomplished.", null, "Construct", null, "$\\overline{JF} \\parallel \\overline{BC}$.  Assuming degree angle measures, if", null, "$m\\angle{FBC}=m\\angle{FCB}=\\theta$, then", null, "$m\\angle{GFJ}=\\theta$ and", null, "$m\\angle{AFG}=90-\\theta$.  Knowing two angles of", null, "$\\Delta AGF$ gives the third:", null, "$m\\angle{AGF}=45+\\theta$.", null, "I need the length of the kite’s other diagonal,", null, "$\\overline{AG}$, and the Law of Sines gives", null, "$\\displaystyle \\frac{AG}{sin(90-\\theta )}=\\frac{\\frac{x}{2}}{sin(45+\\theta )}$, or", null, "$\\displaystyle AG=\\frac{x \\cdot sin(90-\\theta )}{2sin(45+\\theta )}$.\n\nExpanding using cofunction and angle sum identities gives", null, "$\\displaystyle AG=\\frac{x \\cdot sin(90-\\theta )}{2sin(45+\\theta )}=\\frac{x \\cdot cos(\\theta )}{2 \\cdot \\left( sin(45)cos(\\theta ) +cos(45)sin( \\theta) \\right)}=\\frac{x \\cdot cos(\\theta )}{\\sqrt{2} \\cdot \\left( cos(\\theta ) +sin( \\theta) \\right)}$\n\nFrom right", null, "$\\Delta BCD$, I also know", null, "$\\displaystyle sin(\\theta )=\\frac{1}{\\sqrt{5}}$ and", null, "$\\displaystyle cos(\\theta)=\\frac{2}{\\sqrt{5}}$.  
Therefore,", null, "$\\displaystyle AG=\\frac{x\\sqrt{2}}{3}$, and the kite’s second diagonal is now known.\n\nSo, the octagon’s area is four times the kite’s area, or", null, "$\\displaystyle 4\\left( \\frac{1}{2} D_1 \\cdot D_2 \\right) = 2FH \\cdot AG = 2 \\cdot \\frac{x}{\\sqrt{2}} \\cdot \\frac{x\\sqrt{2}}{3} = \\frac{2}{3}x^2$\n\nTherefore, the ratio of the area of the square to the area of its octagon is", null, "$\\displaystyle \\frac{area_{square}}{area_{octagon}} = \\frac{4x^2}{\\frac{2}{3}x^2}=6$.\n\nQED\n\nEXTENSIONS:\n\nThis was so nice, I reasoned that it couldn’t be an isolated result.\n\nI have extended and proved that the result is true for other modulo-3 stars like the 8-pointed star in the square for any n-gon.  I’ll share that very soon in another post.\n\nI proved the result above, but I wonder if it can be done without resorting to trigonometric identities.  Everything else is simple geometry.   I also wonder if there are other more elegant approaches.\n\nFinally, I assume there are other constant ratios for other modulo stars inside larger n-gons, but I haven’t explored that idea.  Anyone?\n\nTwo Squares, Two Triangles, and some Circles\n\nHere’s another fun twist on another fun problem from the Five Triangles ‘blog.  A month ago, this was posted.", null, "What I find cool about so many of the Five Triangles problems is that most permit multiple solutions.  I also like that several Five Triangles problems initially appear to not have enough information.  This one is no different until you consider the implications of the squares.\n\nI’ve identified three unique ways to approach this problem.  I’d love to hear if any of you see any others.  Here are my solutions in the order I saw them.  The third is the shortest, but all offer unique insights.\n\nMethod 1: Law of Cosines\n\nThis solution goes far beyond the intended middle school focus of the problem, but it is what I saw first.  Sometimes, knowing more gives you additional insights.\n\nBecause DEF is a line and EF is a diagonal of a square, I know", null, "$m\\angle CEF=45^{\\circ}$, and therefore", null, "$m\\angle CED=135^{\\circ}$.", null, "$\\Delta CEF$ is a 45-45-90 triangle with hypotenuse 6, so its leg, CE has measure", null, "$\\frac{6}{\\sqrt{2}}=3\\sqrt{2}$.  Knowing two sides and an angle in", null, "$\\Delta DEC$ means I could apply the Law of Cosines.", null, "$DC^2 = 4^2 + (3\\sqrt{2})^2 - 2\\cdot (3\\sqrt{2}) \\cdot \\cos(135^{\\circ})=58$\n\nBecause I’m looking for the area of ABCD,  and that is equivalent to", null, "$DC^2$, I don’t need to solve for the length of DC to know the area I seek is 58.\n\nMethod 2: Use Technology\n\nI doubt many would want to solve using this approach, but if you don’t see (or know) trigonometry, you could build a solution from scratch if you are fluent with dynamic geometry software (GeoGebra, TI-Nspire, GSP).  My comfort with this made finding the solution via construction pretty straight-forward.\n\n1. Construct segment EF with fixed length 6.\n2. Build square CEGF with diagonal EF.  (This can be done several ways.  I was in a transformations mood, so I rotated EF", null, "$90^{\\circ}$ to get the other endpoints.)\n3. Draw line EF  and then circle with radius 4 through point E.\n4. Mark point D as the intersection of circle and line EF outside CEGF .\n5. Draw a segment through points and C.  (The square of the length of CD is the answer, but I decided to go one more step.)\n6. Construct square ABCD with sides congruent to CD.  
(Again, there are several ways to do this.  I left my construction marks visible in my construction below.)\n7. Compute the area of ABCD.\n\nHere is my final GeoGebra construction.", null, "Method 3: The Pythagorean Theorem\n\nSometimes, changing a problem can make it much easier to solve.\n\nAs soon as I saw the problem, I forwarded it to some colleagues at my school.  Tatiana wrote back with a quick solution.  In the original image, draw diagonal, CG, of square CEGF. Because the diagonals of a square perpendicularly bisect each other, that creates right", null, "$\\Delta DHC$ with legs 3 and 7.  That means the square of the hypotenuse of", null, "$\\Delta DHC$ (and therefore the area of the square) can be found via the Pythagorean Theorem.", null, "$DC^2 = 7^2+3^2 = 58$\n\nMethod 4: Coordinate Geometry\n\nOK, I said three solutions, and perhaps this approach is completely redundant given the Pythagorean Theorem in the last approach, but you could also find a solution using coordinate geometry.\n\nBecause the diagonals of a square are perpendicular, you could construct ECFG with its center at the origin.  I placed point C at (0,3) and point E at (3,0).  That means point D is at (7,0), making the solution to the problem the square of the length of the segment from (0,3) to (7,0).  Obviously, that can be done with the Pythagorean Theorem, but in the image below, I computed number i in the upper left corner of this GeoGebra window as the square of the length of that segment.", null, "Fun.\n\nArea 10 Squares – Proof & Additional Musings\n\nAdditional musings on the problem of Area 10 Squares:\n\nThanks, again to Dave Gale‘s inspirations and comments on my initial post. For some initial clarifications, what I was asking in Question 3 was whether these square areas ultimately can all be found after a certain undetermined point, thereby creating a largest area that could not be drawn on a square grid. I’m now convinced that the answer to this is a resounding NO–there is no area after which all integral square areas can be constructed using square grid paper. This is because there is no largest un-constructable area (proof below). This opens a new question.\n\nQuestion 4:\nIs there some type of 2-dimensional grid paper which does allow the construction of all square areas?\n\nThe 3-dimensional version of this question has been asked previously, and this year in the College Math Journal, Rick Parris of Exeter has “proved that if a cube has all of its vertices in", null, "then the edge length is an integer.”\n\nDave’s proposition above about determining whether an area 112 (or any other) can be made is very interesting. (BTW, 112 cannot be made.) I don’t have any thoughts at present about how to approach the feasibility of a random area. As a result of my searches, I still suspect (but haven’t proven) that non-perfect square multiples of 3 that aren’t multiples of pre-existing squares seem to be completely absent. This feels like a number theory question … not my area of expertise.\n\nWhether or not you decide to read the following proof for why there are an infinite number of impossible-to-draw square areas using square grids, I think one more very interesting question is now raised.\n\nQuestion 5:\nLike the prime numbers, there is an infinite number of impossible-to-draw square areas. Is there a pattern to these impossible areas? (Remember that the pattern of the primes is one of the great unanswered questions in all of mathematics.)\n\nTHE PROOF\nMy proof does not feel the most elegant to me. 
The percentage of possible numbers found appears to be declining, and the number found is always less than the number possible. But a scant handful of data points does not always definitively describe a pattern.

Determining the total number of possible areas:
Level 1 has 9 single-digit areas. Level 2 has 90 two-digit areas, and Level 3 has 900 three-digit areas. By this pattern, Level M has $9 \cdot 10^{M-1}$ M-digit areas. This is the number of holes that need to be filled by the squares we can find on the square grid.

Determining an upper bound for the number of areas that can be accommodated on a square grid:
Notice that if a horizontally-oriented square has area of Level M, then every tilted square in its column has area AT LEAST of Level M. Also, the last column that contains any Level M areas is column $\left\lfloor 10^{M/2} \right\rfloor$, where $\lfloor \cdot \rfloor$ is the floor function.

In the chart, Column 1 contains 2 areas, and every Column N contains exactly (N+1) areas. The total number of areas represented for Columns 1 through N is the sum of an arithmetic sequence, so an upper bound for the number of distinct square areas represented in Columns 1 to N (assuming no duplication, which of course there is) is

$\displaystyle \sum_{n=1}^{N}(n+1) = \frac{N(N+3)}{2}$.

The last column that contains any Level M areas has column number $\left\lfloor 10^{M/2} \right\rfloor$. Assuming all of the entries in the data chart up to column $10^{M/2}$ are Level M (another overestimate if $10^{M/2}$ is not an integer), then there are

$\displaystyle \frac{10^{M/2}\left(10^{M/2}+3\right)}{2} = \frac{10^M + 3 \cdot 10^{M/2}}{2}$

maximum area values to fill the $9 \cdot 10^{M-1}$ Level M area holes. This is an extreme over-estimate as it ignores the fact that this chart also contains all square areas from Level 1 through Level (M-1), and it also contains a few squares which can be determined multiple ways (e.g., area 25 squares).
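To see the two growth rates side by side, this sketch (again my addition, using the bounds as reconstructed above) tabulates the Level-M holes against the over-counted supply of chart entries:

```python
# Holes: 9 * 10^(M-1) M-digit areas.
# Pigeons (over-count): (10^M + 3 * 10^(M/2)) / 2 chart entries.
for M in range(1, 9):
    holes = 9 * 10 ** (M - 1)
    pigeons = (10 ** M + 3 * 10 ** (M / 2)) / 2
    print(M, holes, round(pigeons))
```

From Level 2 onward, the over-counted supply of entries (65 for Level 2, about 547 for Level 3) is already smaller than the number of holes (90 and 900), and the gap widens with every level.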
Conclusion:
Both of these are dominated by base-10 exponential functions, but the number of areas to be found has a coefficient of 9 and the number of squares that can be found has coefficient 1/2. Further, the number of squares that can be found is decreased by an exponential function of base $\sqrt{10}$, accounting in part for the decreasing percentage of found areas noted in the data chart. That is, the number of possible areas grows faster than the number of areas that actually can be created on square grid paper.

While this proof does not say WHICH areas are possible (a great source for further questions and investigation!), it does show that the number of areas of squares impossible to find using a square grid grows without bound. Therefore, there is no largest area possible.
https://www.teachoo.com/10125/3044/Average-Velocity/category/Concepts/
Class 9, Chapter 8 - Motion

What is Average Velocity?

Average velocity is the total displacement divided by the total time taken:

Average Velocity = Total Displacement / Total Time Taken

AVERAGE SPEED FORMULA

Average Speed = Total Distance / Total Time Taken

AVERAGE VELOCITY FORMULA

Average Velocity = Total Displacement / Total Time Taken

Summary

Average Velocity may or may not be equal to Average Speed.

Questions

Q 2 Page 102 - Under what condition(s) is the magnitude of average velocity of an object equal to its average speed?

NCERT Question 2 - Joseph jogs from one end A to the other end B of a straight 300 m road in 2 minutes 30 seconds and then turns around and jogs 100 m back to point C in another 1 minute. What are Joseph's average speeds and velocities in jogging (a) from A to B and (b) from A to C?

Example 8.3 - Usha swims in a 90 m long pool. She covers 180 m in one minute by swimming from one end to the other and back along the same straight path. Find the average speed and average velocity of Usha.

Transcript

Example 1: A person travels from Point A to Point B, 30 meters towards the south, in 4 seconds. Then he travels from Point B to Point C, 40 meters towards the east, in 6 seconds. What is his average speed and average velocity?

Calculating Average Speed
Total Distance Travelled = AB + BC = 30 + 40 = 70 meters
Total Time Taken = 4 + 6 = 10 seconds
Average Speed = Total Distance / Total Time = 70/10 = 7 m/s

Calculating Average Velocity
We first need to calculate the total displacement. Displacement = AC. Now, ΔABC is a right-angled triangle, so by the Pythagoras Theorem:
Hypotenuse² = Height² + Base²
AC² = AB² + BC²
AC² = 30² + 40²
AC² = 900 + 1600
AC² = 2500 = 50 × 50
AC = 50 meters
Thus, Displacement = AC = 50 m towards the south-east.
Total Time Taken = 4 + 6 = 10 seconds
Average Velocity = Total Displacement / Total Time = 50/10 = 5 m/s towards the south-east
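The worked example translates directly into a few lines of code (my addition, not part of the lesson); math.hypot computes the Pythagorean displacement used in the transcript:

```python
import math

# Example 1: 30 m south in 4 s, then 40 m east in 6 s
total_distance = 30 + 40            # path length [m]
total_time = 4 + 6                  # [s]
displacement = math.hypot(30, 40)   # straight-line distance A -> C = 50 m

print(total_distance / total_time)  # average speed: 7.0 m/s
print(displacement / total_time)    # average velocity: 5.0 m/s, towards the south-east
```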
https://chemistry.stackexchange.com/questions/76107/balance-equation-for-trilead-tetra-oxide-lead-monoxide-oxygen
Balance equation for trilead tetra oxide -> lead monoxide + oxygen [closed]

Please give me the answer for this question. I am having difficulty with this question.

Answer:

I won't do your homework for you, but I can show you how to balance equations. Balancing equations is pretty simple once you get the hang of it. First, let's look at an unbalanced equation:

$$\ce{Al2O3 -> Al + O2}$$

The number of atoms of each element on either side of the reaction must match. On the left side there are 2 aluminum atoms and 3 oxygen atoms. On the right side there is 1 aluminum atom and 2 oxygen atoms. Keep in mind that oxygen is one of the diatomic gases that must be bonded to itself when alone, as $\ce{O2}$. Now that we have counted the atoms on each side, we can balance them out by changing the coefficients, i.e. the number of each molecule in the reaction. There needs to be the same number of atoms of each element on each side, yielding:

$$\ce{Al2O3 -> 2 Al + 1.5 O2}$$

While the number of atoms is now balanced, there is a problem. You can't have a fraction of a molecule in an equation, so we have to double all of the coefficients to change the 1.5 into 3.

$$\ce{2 Al2O3 -> 4 Al + 3 O2}$$

We are finally done. As you can see, there are 4 aluminum atoms and 6 oxygen atoms on the right side and the left side. That is how you balance an equation, and I hope it helps you solve your equation! Feel free to ask any questions.
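To make the atom-counting step mechanical, here is a small sketch; it is my addition rather than part of the original answer, and the dict-based formula encoding is just an illustrative convention:

```python
from collections import Counter

def atoms(side):
    """Sum atom counts over (coefficient, formula-as-dict) terms."""
    total = Counter()
    for coeff, formula in side:
        for element, n in formula.items():
            total[element] += coeff * n
    return total

# Check the worked example: 2 Al2O3 -> 4 Al + 3 O2
left  = [(2, {"Al": 2, "O": 3})]
right = [(4, {"Al": 1}), (3, {"O": 2})]

print(atoms(left) == atoms(right))  # True: 4 Al and 6 O on each side
```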
https://www.businesswritingservices.org/business-finance/359-liquidity-ratios
## LIQUIDITY RATIOS

Also called working capital ratios, they indicate the ability of the firm to meet its short-term maturing financial obligations (current liabilities) as and when they fall due.

The ratios are concerned with current assets and current liabilities. They include:

a) Current ratio = Current Assets / Current Liabilities

This ratio indicates the number of times the current liabilities can be paid from current assets before these assets are exhausted. The most recommended value is 2.0, i.e. the current assets should be at least twice as high as the current liabilities.

b) Quick/acid test ratio = (Current Assets - Stock) / Current Liabilities

This is a more refined current ratio which excludes the firm's stock. Stock is excluded for two basic reasons:

i) It is valued on a historical cost basis.
ii) It may not be converted into cash very quickly.

The ratio therefore indicates the ability of the firm to pay its current liabilities from its more liquid assets.

c) Cash ratio = (Cash in hand/bank + Short-term marketable securities) / Current Liabilities

This is a refinement of the acid test ratio, indicating the ability of the firm to meet its current liabilities from its most liquid resources. Short-term marketable securities are short-term investments of the firm which can be converted into cash within a very short period, e.g. commercial paper and treasury bills.

d) Net working capital ratio = (Net Working Capital × 100) / Net Assets

where Net Assets (or Capital Employed) = Total Assets - Current Liabilities.

This ratio indicates the proportion of total net assets which is liquid enough to meet the current liabilities of the firm. It is expressed in percentage terms.
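The four ratios are simple to compute once the balance-sheet figures are known. The sketch below is my addition, and all the figures in it are made-up example values:

```python
# Hypothetical balance-sheet figures (illustration only)
current_assets = 500_000
stock = 150_000
cash_and_securities = 120_000   # cash at bank plus short-term marketable securities
current_liabilities = 250_000
total_assets = 1_200_000

net_assets = total_assets - current_liabilities           # capital employed
net_working_capital = current_assets - current_liabilities

print(current_assets / current_liabilities)               # current ratio: 2.0
print((current_assets - stock) / current_liabilities)     # quick (acid test) ratio: 1.4
print(cash_and_securities / current_liabilities)          # cash ratio: 0.48
print(100 * net_working_capital / net_assets)             # net working capital ratio: ~26.3 %
```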
http://www.infocobuild.com/education/audio-video-courses/electronics/ee44-circuits-and-systems-caltech.html
# InfoCoBuild

## EE 44: Circuits and Systems

EE 44: Circuits and Systems (Caltech). Instructor: Professor Ali Hajimiri. Fundamentals of circuits and network theory, circuit elements, linear circuits, terminals and port presentation, nodal and mesh analysis, time-domain analysis of circuits and systems, sinusoidal response, introductory frequency domain analysis, transfer functions, poles and zeros, time and transfer constants, network theorems, transformers.

Lectures:

01. Circuits Fundamentals: Definitions, Graph Properties, Current & Voltage, Power & Energy
02. Circuits Fundamentals: Passivity and Activity, KCL and KVL, Ideal Sources
03. Circuits Fundamentals: Resistance, Ohm's Law, Linearity, Time-variance, Diode Circuits
04. Nodal Analysis: Ground, Y-Matrix, Node Voltage & Stimulus Vectors, Linear Algebra, Determinant
05. Nodal Analysis: Examples, Dependent Sources, Existence of a Solution
06. Nodal Analysis (cont.): Nodal Analysis, Dependent Sources, with Voltage Sources, Super Nodes
07. Mesh Analysis & Diode Circuits: Mesh Analysis, 3D Networks, Super Mesh, Diode Circuit Design
08. Circuit Theorems: Superposition, Thevenin, Norton, Source Transformation, Network Equivalence
09. Circuit Theorems: Source Transportation, Substitution Theorem, Maximum Power Transfer, Y-Delta
10. Active circuits: Op-Amp, Feedback, Asymptotic Equality, Inverting and Noninverting Amplifiers
11. Singularity Functions: Introduction, Unit Step, Pulse, and Dirac Delta (Impulse) Functions
12. Linear Systems: Dirac Delta, Sifting Property, Impulse Response, LTI, Convolution
13. Linear Systems: Convolution, Examples of System Response, Convolution Examples
14. Time-Domain Response: Capacitors and Inductors, RC Response, General 1st-Order System
15. Time Domain Response: RC Step and Impulse Response
16. Heaviside Operator: Introduction, Basic Examples
17. Heaviside Operator: Low-Pass Operator, High-Pass Operator, Solving Differential Equations
18. Heaviside Operator: Circuit Examples
19. Heaviside Operator: Nodal Analysis Examples, Order of System, Oscilloscope Probe
20. Impulse Response of 2nd Order System: Complex Numbers, Real Poles, Underdamped and Over-damped Response, Real and Complex Conjugate Roots
21. Heaviside Operator: Partial Fraction Expansion (PFE), Example
22. Heaviside Operator: Partial Fraction Expansion (PFE) with Multiple Roots, Example
23. Heaviside Operator: Operator Catalog, Solving Differential Equation Directly, Examples
24. Heaviside Operator: Time Delay, Convolution, Example
25. Heaviside Operator: Operator Catalog Review, Convolution Example
26. Heaviside Operator: Initial Conditions
27. System Function: Forced and Natural Response, Poles and Zeros, Time Domain View, Laplace Transform
28. Stability: Definition, Criterion, Poles Location, Routh-Hurwitz Method
29. Laplace Transform Summary: Definition, Properties
30. Intro to Network Synthesis, Complex Impedance
31. Sinusoidal Drive, Phasor Notations, Cascaded Systems, Intro to Bode Plot
32. Bode Plot: Properties, Poles and Zeros, Resonance (2nd Order Peaking)
33. Fourier Series and Fourier Transform: Intro, Basic Derivation
34. Fourier Transform: Spectrum, Time and Frequency Duality, Impulse, Sinc, Box
35. Fourier Transform: Modulation
36. Fourier Transform: Sampling
37. Time and Transfer Constants: Brief Introduction
38. Two-Port Networks: An Introduction
39. Course Brief Final Summary
https://squ.pure.elsevier.com/en/publications/numerical-methods-for-solving-nonlinear-fractional-integro-differ
# Numerical methods for solving nonlinear fractional integro-differential equations

Kamel Al-Khaled, Marwan Alquran, Amal Al-Saidi, Gaston N'Guerekata, Joydev Chattopadhyay

Research output: Contribution to journal, Article. 1 Citation (Scopus).

### Abstract

In this article we present two reliable strategies for solving the fractional nonlinear Volterra-Fredholm integrodifferential equation. The fractional derivative is described in the Caputo sense. The first approach depends on a modified form of the Adomian decomposition method and the second one is based upon the Legendre collocation method. Illustrative examples are given, and the numerical results are provided to demonstrate the efficiency of the proposed methods.

Original language: English. Journal: Nonlinear Studies, Vol. 22, No. 4, pp. 647-657. Published: 2015. ISSN: 1359-8678. Publisher: Cambridge Scientific Publishers Ltd.

### Keywords

- Adomian decomposition
- Approximate solutions
- Fractional integro-differential equation
- Legendre approximation

### ASJC Scopus subject areas

- Applied Mathematics
- Modelling and Simulation

### Cite this

Al-Khaled, K., Alquran, M., Al-Saidi, A., N'Guerekata, G., & Chattopadhyay, J. (2015). Numerical methods for solving nonlinear fractional integro-differential equations. Nonlinear Studies, 22(4), 647-657.
https://calcpercentage.com/431-is-90-percent-of-what
# Percentage Calculator: 431 is 90 Percent of what?

431 is 90 percent of 478.89 (rounded to two decimal places).

### How to Calculate 431 is 90 Percent of what?

Formula: 431 ÷ 90%

1. Convert the percent to a decimal: 90 ÷ 100 = 0.9
2. Divide the number by that decimal: 431 ÷ 0.9 ≈ 478.89

So 431 is 90% of 478.89.

#### Example

For example, John owns 431 shares, and those shares are 90% of the total. How many shares are there in total? 90 ÷ 100 = 0.9, and 431 ÷ 0.9 ≈ 478.89. So 431 is 90% of 478.89, which means John's 431 shares are 90% of about 478.89 shares.
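The two-step recipe above amounts to a one-line function. This sketch is my addition:

```python
def whole_from_part(part, percent):
    """Return the number that `part` is `percent` percent of."""
    return part / (percent / 100)

print(whole_from_part(431, 90))            # 478.888...
print(round(whole_from_part(431, 90), 2))  # 478.89, as on the page
```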
http://www2.lawrence.edu/fast/GREGGJ/CMSC270/linked/iterators.html
Before reading the rest of these notes, you should read sections 3.1, 3.2, and 3.7 in the text. Those sections cover the basics of linked lists and also introduce you to the standard template library list class, which is the STL's implementation of a doubly linked list.

### Improving on the author's classes

The singly and doubly linked list classes the author introduces in chapter 3 illustrate the basic ideas behind those classes. However, there are several problems with the author's implementation.

1. Both classes include methods for adding and removing items from the ends of the list, but lack general methods for inserting and removing items at arbitrary locations in the list. Since this capability is one of the major selling points for the linked list structure, this is an important oversight.
2. The author has not provided iterator classes to go with the linked list classes. Iterators are an important component of modern C++ programming, and really should be included with any container class.
3. The author's classes do not follow the naming conventions used for most STL container classes.

In these notes I am going to fix all of these problems for the singly linked list class by almost completely rewriting the class. At the end of these notes you will find a programming assignment that asks you to rewrite the author's doubly linked list class in a similar way.

### Getting started - the iterator

Since the major new element I am going to introduce in my singly linked list class is an iterator, we need to begin by thinking about how such an iterator would work.

For our first guess we place an iterator class inside the list class. The iterator is implemented as a pointer to a node, and contains operator overloads for the four usual iterator operations of dereference, increment, comparison, and assignment.

Almost as soon as we start to write the code for this class, we will discover that the iterator class is seriously broken. The main problem is that we will eventually also want to implement a method

```
iterator insert(iterator position, const T& value)
```

in the `list` class that can be used to insert new data items at arbitrary locations in the list. Given an iterator `position`, `insert` is supposed to place a new node containing the data `value` before the node that `position` points to. The problem we are going to run into in the course of making that insertion is that we will need a pointer to the node before the node that `position` points to, so we can hook that previous node to the new node we are going to create. Since we are dealing with a singly linked list class, there is no easy way to obtain that pointer.

Similarly, we would also like to implement an `erase()` method in our list class:

```
iterator erase(iterator position)
```

This method takes as its parameter an iterator that points to a node that we would like to erase. In the course of removing that node we will need to make the node before the targeted node point to the node after the node we are removing. Since there is no easy way to obtain a pointer to the node before the node that `position` points to, we are stuck once again.

### A better approach

We can solve the problems with both `insert` and `erase` by doing something slightly odd with our iterator class. Instead of making the iterator class contain a pointer to a node, we make the iterator point to the node in front of the node we want to point to. This immediately solves the problem with `insert`.
Assuming that the iterator contains a pointer named `nodePtr`, this is code for the `insert` method that is both simple and correct:

```
iterator insert(iterator position, const T& value) {
    // Link the new node in after the node that nodePtr points to,
    // which is the node *before* the one the iterator logically references
    node<T>* newNode = new node<T>(value, position.nodePtr->next);
    if (position.nodePtr == tail) tail = newNode;
    position.nodePtr->next = newNode;
    return position;
}
```

As happens so often in programming, this solution creates another problem. That problem is what to do about `begin()`. The `begin()` method is supposed to return an iterator that points to the beginning of the list. Since our iterator class is going to contain a pointer to the node before the node we want to point to, what should we put in the iterator that `begin()` returns?

The solution to this problem is to deploy another trick. We equip our list class with a dummy head node. This is an extra node that we slip into the list right from the very start. The dummy head node is a node that appears before the first actual node containing data. By making the head pointer in the list class point to the dummy head node and defining `begin()` to do this

```
iterator begin() const {
    return iterator(head);
}
```

we will have solved our problem.

### The increment operators

One of the operations that our iterator needs to support is the increment operation. C++ actually has two increment operators: the preincrement operator, which is written

```
++itr;
```

and the postincrement operator, which is written

```
itr++;
```

When used alone in a statement these two operators have exactly the same effect. The difference between the two operators becomes apparent when we use them in combination with other operations in the same statement: the preincrement operation is applied first, before any other operations in a statement, while the postincrement operation is applied after all other operations in a statement.

Here is one of the more famous pieces of code in C, which makes use of the postincrement operator with a pair of pointers. This code is an implementation of the C standard library `strcpy` function, which copies characters from a C string pointed to by a pointer `src` to a C string pointed to by a pointer `dest`:

```
void strcpy(char* dest, const char* src) {
    while(*dest++ = *src++) ;
}
```

The rather peculiar while loop in this function does all of the useful work in the test expression and has an empty body. The test expression is a compound expression that does three things at the same time:

1. `*dest = *src` copies a single character from the source array to the destination array.
2. The assignment has a side effect: it returns the character copied. That character is a numeric code that C interprets as a true/false value for use by the loop test. The null character, `'\0'`, which marks the end of the source string, is interpreted as false, which causes the loop to terminate at the right time. All other characters are interpreted as true, which causes the loop to continue.
3. The increment operations, `dest++` and `src++`, happen last in the test expression, which advances the two pointers to the next pair of characters on each round. Note that for the loop logic to work correctly, we have to do the character assignment before advancing the two pointers to the next location.

Because there are two different increment operators, C++ has had to adopt a peculiar convention to differentiate them.
To overload the preincrement operator we construct a member function in our iterator class that takes the form

```
iterator& operator++()
```

The overload for this operator is supposed to advance the iterator to the next location and then return a reference to itself. That is usually done by putting the statement

```
return *this;
```

at the end of the method. The operator has to return this reference so the increment can be used in a compound expression.

To overload the postincrement operator we construct a member function in our iterator class that takes the form

```
iterator operator++(int)
```

The overload needs the `int` parameter because the rules for operator overloading in C++ stipulate that both overloads of ++ have to have the same name, `operator++`, and because two overloaded functions are not allowed to differ purely in their return types. To differentiate the two versions of `operator++` from each other, C++ uses the convention of forcing the postincrement version to take a single integer parameter. All implementations of this operator will simply ignore that parameter, as it is used purely to differentiate the two versions of ++ from each other.

The postincrement overload is also peculiar in that it has to return a copy of the iterator before advancing the iterator. Remember, in postincrement the increment happens at the end of the expression, so we are forced to return a copy of the iterator before the increment to get the right behavior.

### The full class

Here now is the full source code for our complete singly linked list class. This class features the use of a dummy head node, so that even empty lists will contain at least one node. The iterator for the list class is defined as an inner class in the list class, and stores a pointer to the node before the node that we want the iterator to point to.

```
// The node class for our linked list
template <typename T>
class node {
public:
    T data;
    node<T> *next;

    node() : next(nullptr) {}
    node(const T& item, node<T> *ptr = nullptr) :
        data(item), next(ptr) {}
};

template <typename T>
class list {
public:
    list() {
        // Create the dummy head node
        head = tail = new node<T>();
    }
    list(const list<T>& other) = delete;
    list(list<T>&& other) = delete;
    ~list() {
        // Walk the list and delete every node, including the dummy head node
        while (head != nullptr) {
            node<T>* temp = head;
            head = head->next;
            delete temp;
        }
    }

    list<T>& operator=(const list<T>& other) = delete;
    list<T>& operator=(list<T>&& other) = delete;

    // Inner class iterator
    class iterator {
        friend class list;
    private:
        node<T> *nodePtr;
        // The constructor is private, so only our friends
        // can create instances of iterators.
        iterator(node<T> *newPtr) : nodePtr(newPtr) {}
    public:
        iterator() : nodePtr(nullptr) {}

        // Overload for the comparison operator !=
        bool operator!=(const iterator& itr) const {
            return nodePtr != itr.nodePtr;
        }

        // Overload for the dereference operator *
        T& operator*() const {
            return nodePtr->next->data;
        }

        // Overload for the postincrement operator ++
        iterator operator++(int) {
            iterator temp = *this;
            nodePtr = nodePtr->next;
            return temp;
        }
    }; // End of inner class iterator

    iterator begin() const {
        return iterator(head);
    }

    iterator end() const {
        return iterator(tail);
    }

    iterator insert(iterator position, const T& value) {
        node<T>* newNode = new node<T>(value, position.nodePtr->next);
        if (position.nodePtr == tail) tail = newNode;
        position.nodePtr->next = newNode;
        return position;
    }
    iterator erase(iterator position) {
        // Unlink the node after position.nodePtr, then reclaim its memory
        node<T> *toDelete = position.nodePtr->next;
        position.nodePtr->next = position.nodePtr->next->next;
        if (toDelete == tail) tail = position.nodePtr;
        delete toDelete;
        return position;
    }
private:
    node<T>* head;
    node<T>* tail;
};
```

To prevent problems caused by mixing inner classes with templates, I have taken the approach here of writing the code for all of the member functions in the class declarations, instead of designing this as a class declaration followed by separate method implementations.

The two most important new methods in the list class itself are the `insert()` and `erase()` methods. These methods replace the author's original methods for adding and removing items from the end of the list. Both methods follow the conventions for `insert()` and `erase()` methods in the STL.

The `insert()` method takes as its first parameter an iterator that points to a location in the list. The `insert()` method will create a new node that contains the data given in the second parameter and insert that new node in the list in the location before the node that the first parameter points to. After inserting the new node, `insert()` returns an iterator that points to the new node. `insert()` can be used to add new nodes at any location in the list, even at the end of the list. By passing the iterator returned by the list's `end()` method as the first parameter to `insert()`, we can add a new item to the end of the list. You may want to take a careful look at the code for the `insert()` method and convince yourself that it will do the right thing in all cases, including adding new items at the front and back of the list.

The `erase()` method removes a node from the location pointed to by the iterator in its parameter, and returns an iterator that points to the node after the node that was removed.

To test our new class here is a simple test program:

```
#include <iostream>
#include "list.h"

int main(int argc, const char* argv[])
{
    list<int> v;

    v.insert(v.end(), 2);
    v.insert(v.end(), 4);
    v.insert(v.end(), 5);

    auto iter = v.begin();
    iter = v.insert(iter, 1); // Insert 1 before 2
    iter++;                   // Points to 2
    v.insert(iter++, 3);      // Insert 3 before 2, advance to 4
    iter++;                   // Points to 5
    v.insert(iter, 10);       // Insert 10 before 5

    iter = v.begin();
    iter++;        // Points to 3
    v.erase(iter); // Erase 3

    for(auto itr = v.begin(); itr != v.end(); itr++)
        std::cout << *itr << std::endl;

    return 0;
}
```

Compiling and running this test program confirms that our list class is working the way that it should.

### Programming Assignment

Here is the code for the author's doubly linked list class. Modify this class to do the following things:

1. Introduce an inner iterator class. The iterator class will contain a single pointer to a node, and will point directly to the node that you want the iterator to refer to.
2. Get rid of the author's methods `addToDLLTail`, `deleteFromDLLTail`, `addToDLLHead` and `deleteFromDLLHead` and replace them with `insert` and `erase` methods as I did in the example above.
3. You can also get rid of the author's code for `firstEl`, `find`, and `operator<<`, since these all implement things that are more properly done with iterators.
4. Provide `begin()` and `end()` methods. `begin()` should return an iterator that points to the head node, and `end()` should return an iterator with a pointer value of `nullptr` to indicate that the iterator points past the end of the list.
5. Since we will not be using a dummy head node with our doubly linked list class, the author's original constructor and destructor methods will continue to work just fine - do not modify them.

After modifying the author's class as indicated above, modify the simple test program I showed in the lecture notes above to work with your doubly linked list class. Run your test program and verify that it produces the correct results.
https://proofwiki.org/wiki/1770
# 1770

## Number

$1770$ (one thousand, seven hundred and seventy) is:

$1770 = 2 \times 3 \times 5 \times 59$

The $30$th hexagonal number after $1$, $6$, $15$, $28$, $45$, $66$, $91$, $\ldots$, $703$, $780$, $861$, $946$, $1035$, $1128$, $1225$, $1326$, $1431$, $1540$, $1653$:

$\displaystyle 1770 = \sum_{k = 1}^{30} (4k - 3) = 30 (2 \times 30 - 1)$

The $59$th triangular number after $1$, $3$, $6$, $10$, $15$, $\ldots$, $1326$, $1378$, $1431$, $1485$, $1540$, $1596$, $1653$, $1711$:

$\displaystyle 1770 = \sum_{k = 1}^{59} k = \frac{59 \times (59 + 1)}{2}$
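Both closed forms are easy to verify by brute force; this check is my addition:

```python
# Check the two closed forms quoted above
n_hex, n_tri = 30, 59

print(sum(4 * k - 3 for k in range(1, n_hex + 1)))  # 1770, the 30th hexagonal number
print(n_hex * (2 * n_hex - 1))                      # 1770

print(sum(range(1, n_tri + 1)))                     # 1770, the 59th triangular number
print(n_tri * (n_tri + 1) // 2)                     # 1770
```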
https://www.valiadis.gr/?view=39
Fundamentals - General Information

Three phase current

Three phase current is characterised as follows: the available three phase network carries individual alternating voltages of the same magnitude, but with a phase difference of 120° in time. The three supply connections of the three phase system are called L1, L2, and L3.
The same formulas apply to single phase motors, but without the factor √3.

Rated output

The rated output is indicated at the shaft of the motor:

Pn = √3 * Vn * In * cosφ * η [W]

Where:
- Vn: Rated motor voltage [V]
- In: Rated motor current [A]
- cosφ: Rated motor power factor
- η: Motor efficiency at full load

Rated torque

The rated torque is calculated as follows:

Mn = 9.55 * Pn / nn [Nm]

Where:
- Pn: Rated power [W]
- nn: Rated speed [rpm]

To convert Nm to kpm you can use the formula 1 Nm = 1/9.81 kpm.

Speed

The real speed of a motor corresponds to the synchronous speed less slip. The synchronous speed of the motor depends only on the number of pole pairs and the frequency, according to the following formula:

ns = f * 60 / p [rpm]

Where:
- f: Frequency [Hz]
- p: Number of pole pairs

The nominal speed of the motor is then:

nn = ns * (1 - s) [rpm]

Where:
- s: the motor slip
- ns: synchronous speed [rpm]
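The four formulas chain together naturally. The following sketch is my addition, and every rating in it (400 V, 30 A, cos φ = 0.85, η = 0.92, 4 poles, 4 % slip at 50 Hz) is an assumed example value, not data from the page:

```python
import math

V, I, cos_phi, eta = 400.0, 30.0, 0.85, 0.92
f, pole_pairs, slip = 50.0, 2, 0.04   # a 4-pole motor has 2 pole pairs

P = math.sqrt(3) * V * I * cos_phi * eta   # rated output [W]
n_sync = f * 60 / pole_pairs               # synchronous speed [rpm]
n_rated = n_sync * (1 - slip)              # nominal speed [rpm]
M = 9.55 * P / n_rated                     # rated torque [Nm]

print(round(P), round(n_sync), round(n_rated), round(M, 1))
# ~16254 W, 1500 rpm, 1440 rpm, 107.8 Nm
```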
https://easyexamnotes.com/python-program-to-find-gcd-of-two-numbers/
# Python program to find GCD of two numbers

To write a Python program to find the GCD of two numbers:

# Python Program to find GCD of Two Numbers
a = float(input(" Please Enter the First Value a: "))
b = float(input(" Please Enter the Second Value b: "))

i = 1
while i <= a and i <= b:
    # i divides both a and b, so remember it as the GCD so far
    if a % i == 0 and b % i == 0:
        gcd = i
    i = i + 1

print("\n GCD of {0} and {1} = {2}".format(a, b, gcd))

OUTPUT:
Please Enter the First Value a: 12

Please Enter the Second Value b: 6

GCD of 12.0 and 6.0 = 6
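The loop above tries every candidate divisor up to the smaller input. As an aside (my addition, not part of the original page), Euclid's algorithm produces the same answer with far fewer iterations:

```python
def gcd(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    while b:
        a, b = b, a % b
    return a

print(gcd(12, 6))  # 6
```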
https://methods.sagepub.com/Reference/encyc-of-research-design/n488.xml
[ "Validity of Research Conclusions\n\nEncyclopedia\nEdited by: Published: 2010\n\n• Subject Index\n\nSometimes described as “statistical conclusion validity,” the validity of research conclusions refers to the degree to which the conclusions made about the null hypothesis are reasonable or correct. Because the null hypothesis typically states that a relationship between two variables does not exist, the validity of a research conclusion also refers to whether a relationship exists between two variables. Although the validity of research conclusions is distinct from construct validity and external validity, it is important to distinguish conclusion validity clearly from internal validity. Internal validity involves whether a relationship between two variables is a plausibly causal one. The validity of a research conclusion is concerned only with the presence or absence of a relationship between two variables. Thus, conclusion validity answers the most basic ...\n\n• All\n• A\n• B\n• C\n• D\n• E\n• F\n• G\n• H\n• I\n• J\n• K\n• L\n• M\n• N\n• O\n• P\n• Q\n• R\n• S\n• T\n• U\n• V\n• W\n• X\n• Y\n• Z\n\nMethods Map", null, "Research Methods\n\nCopy and paste the following HTML into your website" ]
[ null, "https://methods.sagepub.com/images/img-bg.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86340606,"math_prob":0.59728914,"size":1637,"snap":"2019-43-2019-47","text_gpt3_token_len":353,"char_repetition_ratio":0.17513779,"word_repetition_ratio":0.023715414,"special_character_ratio":0.18387294,"punctuation_ratio":0.06147541,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9595338,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-18T20:48:54Z\",\"WARC-Record-ID\":\"<urn:uuid:3be9b1b1-58a4-40bf-b806-851989247cb0>\",\"Content-Length\":\"246400\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cc5f225d-9e1e-4aa0-9452-8dfc6088dd9e>\",\"WARC-Concurrent-To\":\"<urn:uuid:7002b985-0fd5-4e14-b7b5-cda854bb5673>\",\"WARC-IP-Address\":\"128.121.3.195\",\"WARC-Target-URI\":\"https://methods.sagepub.com/Reference/encyc-of-research-design/n488.xml\",\"WARC-Payload-Digest\":\"sha1:PPVZI6HVF3DDM3MSOOJFCJL22Z7XRABZ\",\"WARC-Block-Digest\":\"sha1:G5NRZLONWCN2TH2IU2LTL2BZ3PC5JVT2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986684854.67_warc_CC-MAIN-20191018204336-20191018231836-00416.warc.gz\"}"}
https://developmentality.wordpress.com/2011/01/29/javascript-101-week-1-assignment/
[ "Home > javascript, programming > Javascript 101 Week 1 Assignment\n\n## Javascript 101 Week 1 Assignment\n\nAs I blogged previously, I am starting to learn Javascript via P2PU.org/Mozilla School of Webcraft. Here are my answers to the first week’s assignment.\n\nQuestions\n\n## The alliance of Netscape and Sun Microsystems did not use Java in the browser, because it was not suitable for that purpose. Why do you think Java was not suitable to be embedded in a browser?\n\nI don’t think Java is suitable for embedding in a browser for a few reasons. Firstly, while the Just in time compiler does a great job at optimizing code that runs over and over again, there is a big cost to starting up the JVM, as it must load dozens if not hundreds of class files.\n\nFurthermore, developing web applications in Java would be very slow for the developer too, as the code would have to be compiled into bytecodes before it could be run. Webapps written in Java have this problem already; if all the client side scripting were done in Java, this would be unbearable.\n\nFurthermore, Java is not a good scripting language because it is too verbose, and the strict type checking is often overkill for simple scripts.\n\nWe are advised to provide a radix, because Javascript stops parsing at the first nondigit number. If the number starts with 0, this makes Javascript believe the number is octal and that any character other than 0-7 is invalid. Thus parseInt(“09”) == 0.\n\n## What is a type, and why do you think types are useful in writing programs?\n\nA type indicates the ‘flavor’ of a variable, i.e. what the ‘tentacle’ grasps (using the terminology of the Eloquent Javascript course. Types are very useful for ensuring that the operations we invoke on variables make sense. For instance, it makes sense to invoke the times binary operand on two numerical values, but it doesn’t make sense to invoke it on two Strings, or a String and a number.\n\n## Why do we lose precision when performing operations with decimal numbers in Javascript? Can you think of a few implications of why this would be a problem?\n\nAll numbers in Javascript are stored as doubles (64 bit floating point numbers), which have a finite precision. Certain numbers cannot be exactly represented in this format, such as 0.1 (it has a nonterminating binary representation, and thus cannot be exactly represented in a finite number of digits). This means we have to be very careful when working with numbers, because our intuitive sense is wrong.\n\n```.1 + .1 + .1 + .1 + .1 + .1 + .1 + .1 + .1 + .1 = 0.9999999999999999\n```\n\n## Do you understand why the following operation produces the given result 115 * 4 – 4 + 88 / 2 = 500\n\nYes – order of operations matters. * and / have higher precedence over +. If you want to change this, use parentheses to explicitly show the order you want.\n\n## What does typeof 4.5 do, and why does typeof (typeof 4.5) return “string” \n\nTypeof returns a string representation of the type of the argument. typeof (typeof 4.5) returns string, because typeof returns the string “number”. 
The piece in parentheses is executed first, with that result becoming the argument to the outer typeof call.\n\n# Exercises\n\n2.1\n\n```(4 >= 6 || \"grass\" != \"green\") &&\n!(12 * 2 == 144 && true)\n\n(false || true) && !(24 == 144 && true)\n(false || true) && !(false && true)\n(true) && !(false)\n(true) && (true)\ntrue\n```\n\n2.2\n\n```function while_exponent(val, exp) {\nvar product = 1;\nvar counter = 0;\nwhile (counter < exp) {\nprduct *= val;\ncounter++;\n}\nreturn product;\n}\n```\n\n2.3\n\n```function while_triangle(n_lines) {\nvar line_counter = 1;\nwhile (line_counter <= n_lines) {\nvar string = \"\"\nvar inner_counter = 0;\nwhile (inner_counter < line_counter) {\nstring += \"#\"\ninner_counter++;\n}\nprint(string)\nline_counter++;\n}\n}\n```\n\n2.4\n\n```function exponent(val, exp) {\nproduct = 1;\nfor (var i = 0; i < exp; i++) {\nproduct *= val;\n}\nreturn product;\n}\n\nexponent(2, 10) == 1024\n\nfunction triangle(lines) {\nfor (var i = 1; i <= lines; i++) {\nvar string = \"\"\nfor (var j = 0; j < i; j++) {\nstring += \"#\"\n}\nprint(string);\n}\n}\ntriangle(10);\n#\n##\n###\n####\n#####\n######\n#######\n########\n#########\n##########\n```\n\n2.5\n\n```var result = Number(prompt(\"What's the value of 2+2?\"));\nif (result == 4) {\nalert(\"Excellent\");\n}\nelse if (result == 3 || result == 5) {\nalert(\"Almost\");\n}\nelse {\nalert(\"Idiot.\");\n}\n```\n\n2.6\n\n```while (true) {\nvar result = Number(prompt(\"What's the value of 2+2?\"));\nif (result == 4) {\nalert(\"Excellent\");\nbreak;\n}\nelse if (result == 3 || result == 5) {\nalert(\"Almost\");\n}\nelse {\nalert(\"Idiot.\");\n}\n}\n```\n\n2.\n\n```var name = prompt(\"What's your name?\") || \"Guest\";\n```\n\nThis is an example of using the OR binary op to return the second operand if the first is a Falsey value (e.g. the empty string). This allows you to provide sensible defaults\n\nWe can use the && as a guard against null values. For instance, if we want to invoke a method on an object, but not if it’s null, we could do\n\n```var result = X && X.someMethod()\n```\n\nsince the && will only evaluate the second operand if the first is not a Falsey value (e.g. null), we are safe from executing undefined methods.\n\n/*#p2pu-Jan2011-javascript101*/\n\nAdvertisements\n1. January 29, 2011 at 11:43 am\n\nWell written, especially the programming problems.\n\n• January 29, 2011 at 12:05 pm\n\nThanks – looking forward to the next assignment.\n\n2. January 29, 2011 at 8:01 pm\n\nNice post Nick! Your explanation of why Java isn’t a good browser language was instructive, as was your answer to the question on adding decimals in JavaScript. I understand that better now. Also, your examples of the default and guard operators make sense. I’m at a much more basic level of development; if you get a chance could you check my last exercise to see if I did it right? It’s at http://blog.patrickcollins.me/?p=8 (scroll to bottom). Thanks!\n\n3. January 31, 2011 at 12:21 am\n\nDoes it output “number” or “double”? I thought typeof was more specific. Of course, I could be confusing it with java again.\n\n• January 31, 2011 at 9:38 am\n\nIt outputs “number”. Remember, in Javascript all numbers are stored as double precision floating point numbers. Thus it makes no sense to distinguish between double and integer – they’re all doubles. BTW if you ever have questions as to what something would do, try jsfiddle.net (e.g. this example)\n\n4. January 31, 2011 at 11:12 pm\n\nNice post. The answers are really concise and are easy to understand. 
I think you cleared up the parseint() question for me also. Plus the blog looks great.\n\nCheers.\nAndy M\n\n• February 1, 2011 at 8:05 am\n\nThanks Andy – just read your post and found it very well written.\n\n1. February 5, 2011 at 11:05 am" ]
https://educationrealist.wordpress.com/2013/08/12/polynomial-operations-as-glue-second-year-algebra/
[ "# Polynomial Operations as Glue: Second Year Algebra\n\nA couple years ago, I suddenly realized that my students rarely evaluated quadratic expressions. And when I thought about it, I could see why.\n\nCreate a table of values for y = x2 -6x – 16. Start with -3", null, "These are kids who aren’t too great at working with negatives, yes? And it’s a whole bunch of work for a relatively small gain. Makes it tough to guess and check, to work velocity problems, and so on. I want something simpler.\n\nEnter the Remainder Theorem: the remainder of the division of a polynomial f(x) by a linear polynomial x-a is equal to f(a).\n\nWe usually teach synthetic substitution when introducing with the Fundamental Theorem of Algebra, which is when we give advanced students the bad news—at a certain point, factoring higher-degree polynomials becomes guess and check. Here’s the Holt book, for example: Chapter 5, Quadratics, covers evaluation by substitution (aka, plug it in). Chapter 6, Polynomials (meaning degree greater than 2), covers polynomial division, synthetic substitution/division, remainder theorem, and factor theorem, leading up to the fundamental theorem of algebra. Notice, too, that the book is a tad soulless on two of the more remarkable theorems, as I write about here.\n\nSo this is screwed up. First, quadratics are polynomials, thankyouverymuch. Second, synthetic substitution/division solves the problem I started with: it’s brutal to evaluate quadratics if you can’t do it in your head—and most of my students can’t.\n\nThen, there’s the fact that polynomial operations in algebra 2 are like kissing a sister; students don’t really learn the purpose for these operations until math analysis and calculus. Over half my students are in their last high school course and won’t be taking anything more advanced in college, but they will need knowledge of these operations for math placement tests. The other half will be moving on to math analysis, and need the skills.\n\nOver the past two years, I’ve played with different ways of teaching polynomial operations, and different ways of introducing synthetic substitution for quadratics.\n\nMy algebra II/intermediate algebra class is comprised of four modeling units: linear equations (and inequalities), quadratic equations, exponential functions, and probability. I intersperse polynomial operations, inverses and logarithms between these four units. Logarithms fit organically with exponential functions; polynomial operations and inverses, not so much in a world where I’m not going on to the more rigorous parts of algebra 2. But inverses work as a good review of multistep equations, so the kids get some good practice in another skillset they need. Leaving polynomial operations as just….out there.\n\nI haven’t been terribly unhappy with this, given the purely functional nature of the lessons, but I want my kids to know synthetic sub/div, dammit, and I want an organic way of introducing it. Right now, I go from linear equations to polynomial operations, ending with multiplication, which takes me into quadratics. That works, but not as smoothly as I want.\n\nA couple days ago, I was pondering how to explain the synthetic substitution/division problem as a blog post, when I suddenly thought of a way to better integrate polynomial operations in and around my first two modeling units. I can use function operations as a method of introducing the transitions. Normally, I just introduce the function notation so they’re familiar with it. 
(Composites don’t normally show up on the test, and are covered again in pre-calc.)\n\nThis is just an outline, but remember that I have all the units done. All I’m describing, broadly (without any curriculum yet) is the transitions, the points at which I introduce and then return to polynomial operations.\n\nAfter Linear Equations and Inequalities,\n\nI could start with a question like: “Part 1: Sami needs three more dollars to buy the new hoodie that he wants. Model a relationship between the money Sami has and the money he needs, and plot.”\n\nThen, Part 2: “If Sami skips the hoodie, he needs just one more dollar to buy a ticket to the pizza feed on Friday. Model a relationship between the money he has and the money he needs, and plot.”\n\nPart 3, starting as a discussion: “How much more money does Sami need if he wants both the hoodie and the ticket to the pizza feed?” My guess, although I’m happy to be wrong, is the kids will say that Sami needs four more dollars. And so how can they use the graphs to show otherwise?\n\nSo we can show graphically and algebraically that adding the two equations together will give us one equation that we can use to see how much more money Sami needs. At this point, I can introduce polynomial addition and subtraction in its simplest form. This will just be a couple days–one for addition, one for subtraction. But it allows me to reinforce linear graphing one more time, in addition to the new concept.\n\nThen I can move from addition and subtraction to multiplication.\n\nI’ve always introduced quadratics with the modeling exercise above, then moved onto binomial multiplication. I really like the possibilities that come up after adding and subtracting linear functions, by asking the question (without the graph, at first):\n\n“Okay, we’ve added two lines. What happens when we multiply two lines?”\n\nIn class discussion, I’ll point out the negative values, the positive values and the points at which one graph is positive and one negative. What’s going to happen when these are multiplied? (Hey, it never hurts to remind them about negative integer operations.) I haven’t completely thought through implementation—I definitely want them graphing this. Maybe give them the two lines at first, have them multiply the values.\n\nI mentioned earlier that I’ve been looking for a better method of modeling quadratics. While this approach doesn’t involve situation modeling, it does organically introduce the shape of a parabola. It will also help them spot zeros.\n\nAnd this leads in perfectly to my binomial multiplication unit, which I already extend to include higher degree polynomials. With the strongest kids, I can even give them three lines and have them determine what a cubic function looks like.\n\nFactoring, Division, Remainder, Synthetic Sub/Div\n\nThen, when I’m moving from binomial multiplication to factoring, I can show a graph like the one at right and ask:\n\n“So we multiplied the linear equation by another linear equation to get the parabola. What are the equations you see, and what’s the missing linear equation?”\n\nwhich, of course, brings up function division, and allows me to introduce factoring as a variant of division–and, a month or so after we’ve done linear equations, they get to review the concepts. As I write this, I’m trying to think if it makes more sense to introduce long division and synthetic substitution at this point, or to work on factoring for a while and then bring up division. 
TBD.\n\nIf you’re not familiar with synthetic sub/div, take a look at long division and synthetic division side by side:\n\nSynthetic sub/div is far easier than substitution, even in quadratics. It’s also noticeably easier when evaluating fraction values for velocity problems.\n\nAfter I’ve finished all of linear and all of quadratics, I can do a few days on polynomial operations and function notation, just to wrap up.\n\nAgain, this is very skeletal. I just had the idea because of the writing challenge. Thanks, blog! But I know it will work; I can feel it. I just have to be careful and think through the transitions thoroughly, make sure I’ve given the kids plenty of support. For example, I don’t want to overemphasize the function operations of this. I just want the kids to be comfortable with the notion of addition, subtraction, multiplication and division of equations. That will give me the entrance to teach them synthetic div/sub, as well as the reason for practicing polynomial ooperations.\n\nThose of you who are thinking, “Hey. This is really algebra one.” well, welcome to my world. My kids learn a whole bunch of first year algebra in my algebra II and geometry classes. But I cover about 60% of the algebra II standards to kids with very weak skills, and the class is pretty conceptually interesting, I think. It’s definitely not just a rehash of algebra one.\n\nI’ve also been thinking a lot about this post on curriculum mapping, which I found very interesting. I hope it’s okay that I borrow his image:", null, "I was talking with Kelly Renier (@krenier), director at Viking New Tech, and we began discussing the concept of “power standards” or “enduring understandings” or “What are the Five Things you want your students to know when they leave your class?” then build out from there. However, we didn’t discuss building those Five (or whatever number) Things out into linearly progressing units, but rather concentric circles.\n\nSo this is absolutely how I teach, as regular readers may know. Teaching Algebra, or Banging Your Head with a Whiteboard covers, literally, the Five Big Ideas of algebra I. I also have them for geometry and algebra II (for my students, anyway). I thought the advantages of this approach were interesting in that I didn’t realize how many teachers don’t do this already. Again, quoting:\n\n1. Students get to revisit a general topic every few weeks, rather than a one-and-done shot at learning a concept.\n2. Students have time to “forget” algorithms and processes and when they see a scenario they have to fight their way through it accessing prior or inventing new knowledge, rather than relying on teacher led examples. Yes, I consider this a benefit.\n3. Teachers may formatively assess more adeptly.\n4. Students may see math as a more connected experience, rather than a bunch of arbitrary recipes to follow.\n5. It probably better reflects the learning process, which happens in fits and starts, and frankly, cannot be counted upon to be contained within a specified time frame.\n\nThis is a really good explanation of what I see as the advantages to my approach. I have never taught in anything approaching a linear fashion, probably because I used CPM, which spirals as a matter of course, in my first two years and was nonetheless shocked at how much kids forgot. So once their forgetting is shoved in your face, it’s hard to go back to the linear curriculum design.\n\nI don’t obsess with getting every single connection made from the first time I teach the class. 
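Synthetic substitution is just Horner's scheme (one of the comments below makes the same observation). Here is a minimal Python sketch of it, my own illustration rather than anything from the post, applied to the post's opening example y = x^2 - 6x - 16 at x = -3:

```python
def synthetic_substitute(coeffs, a):
    """Evaluate a polynomial at x = a by synthetic substitution (Horner).

    coeffs are in descending order: x^2 - 6x - 16 -> [1, -6, -16].
    Returns (f(a), quotient): by the remainder theorem, f(a) is the
    remainder of f(x) / (x - a), and the running row gives the quotient.
    """
    row = [coeffs[0]]
    for c in coeffs[1:]:
        row.append(row[-1] * a + c)  # multiply by a, add the next coefficient
    return row[-1], row[:-1]

value, quotient = synthetic_substitute([1, -6, -16], -3)
print(value, quotient)  # 11 [1, -9]
```

The quotient row [1, -9] says x^2 - 6x - 16 = (x + 3)(x - 9) + 11: the remainder theorem in action, with only two multiplications and two additions.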
After I've finished all of linear and all of quadratics, I can do a few days on polynomial operations and function notation, just to wrap up.

Again, this is very skeletal. I just had the idea because of the writing challenge. Thanks, blog! But I know it will work; I can feel it. I just have to be careful and think through the transitions thoroughly, and make sure I've given the kids plenty of support. For example, I don't want to overemphasize the function-operations side of this. I just want the kids to be comfortable with the notion of addition, subtraction, multiplication, and division of equations. That will give me the entrance to teach them synthetic div/sub, as well as the reason for practicing polynomial operations.

Those of you who are thinking, "Hey. This is really algebra one," well, welcome to my world. My kids learn a whole bunch of first year algebra in my algebra II and geometry classes. But I cover about 60% of the algebra II standards to kids with very weak skills, and the class is pretty conceptually interesting, I think. It's definitely not just a rehash of algebra one.

I've also been thinking a lot about this post on curriculum mapping, which I found very interesting. I hope it's okay that I borrow his image:

(Image: emergentmath's concentric-circles curriculum map.)

"I was talking with Kelly Renier (@krenier), director at Viking New Tech, and we began discussing the concept of 'power standards' or 'enduring understandings' or 'What are the Five Things you want your students to know when they leave your class?' then build out from there. However, we didn't discuss building those Five (or whatever number) Things out into linearly progressing units, but rather concentric circles."

So this is absolutely how I teach, as regular readers may know. Teaching Algebra, or Banging Your Head with a Whiteboard covers, literally, the Five Big Ideas of algebra I. I also have them for geometry and algebra II (for my students, anyway). I thought the advantages of this approach were interesting, in that I didn't realize how many teachers don't do this already. Again, quoting:

1. Students get to revisit a general topic every few weeks, rather than a one-and-done shot at learning a concept.
2. Students have time to "forget" algorithms and processes, and when they see a scenario they have to fight their way through it, accessing prior or inventing new knowledge, rather than relying on teacher-led examples. Yes, I consider this a benefit.
3. Teachers may formatively assess more adeptly.
4. Students may see math as a more connected experience, rather than a bunch of arbitrary recipes to follow.
5. It probably better reflects the learning process, which happens in fits and starts, and frankly, cannot be counted upon to be contained within a specified time frame.

This is a really good explanation of what I see as the advantages of my approach. I have never taught in anything approaching a linear fashion, probably because I used CPM, which spirals as a matter of course, in my first two years, and was nonetheless shocked at how much kids forgot. So once their forgetting is shoved in your face, it's hard to go back to the linear curriculum design.

I don't obsess with getting every single connection made the first time I teach the class. Sometimes I'll just acknowledge, as I've done with polynomial operations up to now, "Hey, this is kind of an odds and ends thing you just need to know." There's nothing wrong with making clean breaks between some units; it doesn't automatically turn the curriculum linear. For example, I make a very clean break between quadratics and exponentials, because the kids have never seen exponentials before. I show the connections between linear and exponential functions, but I also don't just lead in. NEON SIGN: NEW EQUATION is a helpful way for kids to realize they're getting something new. (Common Core says they'll be learning this in Algebra I. Jesus. These people are friggin' delusional.)

Going back to who I am as a teacher, I start with explanations. Not necessarily verbal explanations every time, but making sense of a concept before doing it is an essential element of my teaching.

That doesn't mean I start with a lecture, which I rarely do, or an explanation. I often begin a unit or a concept with an activity. But if I'm asking my students to engage in an activity with no concept or prior understanding, then they can be sure it's going to be simple, straightforward, and illustrative.

#### 9 responses to "Polynomial Operations as Glue: Second Year Algebra"

* mrdardy: I like the physical connections between multiplying lines and creating quadratics. One of the real advantages of the tech we have at our disposal now. However, I would urge caution in emphasizing synthetic division. It IS easier and cleaner, but it is so restricted in when and how we can use it comfortably. When we get expert at it we can deal with a divisor like 2x - 7, but most of our kids take a while to get anywhere near that expert. Other than that little quibble, I loved this and will share it with my Algebra II team next week.

* educationrealist: Oh, I agree. Besides, you can only use linear factors, which means it's useless for finding slant asymptotes in all but a few cases. But to my knowledge, no one uses synthetic division for quadratics. So there's no overemphasizing it. It's simply not even used for degree = 2. The kids I'm working with are unlikely to get to precalc or even genuine second year algebra. As a rule, half my class won't even work with degree > 2. It's helpful for them to just have an easier way to evaluate than substitution.

* Jim: There is a classical algorithm due to Kronecker which factors any polynomial in any number of variables with rational coefficients into irreducible factors. It's not practical for hand calculation but can be implemented on a computer.

* Jim: Evaluation of a polynomial a_n x^n + ... + a_0 by synthetic division is equivalent to writing the polynomial as (...((a_n x + a_{n-1})x + a_{n-2})x + ...)x + a_0. In the evaluation from the standard form there are (1/2)n(n+1) multiplications and n additions; the synthetic division method gives n multiplications and n additions. This method was used by Newton and may well have been known earlier.

* Jim: Actually, since you don't have to calculate each power anew, there are only 2n - 1 multiplications and n additions to calculate a polynomial from the standard form.

* Pingback: The Release and "Dumbing it Down" | educationrealist

* Pingback: The Negative 16 Problems and Educational Romanticism | educationrealist

* Pingback: Evaluating the New PSAT: Math | educationrealist

* surfer: Do you really have to do all that complicated stuff? Can't you just give them a lecture based on what is in the book and then assign hw problems (even in-class ones)?
[ null, "https://educationrealist.files.wordpress.com/2013/08/evalbysub.png", null, "https://emergentmath.files.wordpress.com/2013/08/concentric-circles.png", null, "https://0.gravatar.com/avatar/989d0860f80789ad9b6d52b429a2f049", null, "https://0.gravatar.com/avatar/ff92c293cfd180ebd385344825e142cf", null, "https://0.gravatar.com/avatar/989d0860f80789ad9b6d52b429a2f049", null, "https://0.gravatar.com/avatar/f622692461b808dc7b4fd93930ddbe3e", null, "https://0.gravatar.com/avatar/f622692461b808dc7b4fd93930ddbe3e", null, "https://0.gravatar.com/avatar/f622692461b808dc7b4fd93930ddbe3e", null, "https://secure.gravatar.com/blavatar/5ed905c3f93e2c2de8c44ea874fd37e0", null, "https://secure.gravatar.com/blavatar/5ed905c3f93e2c2de8c44ea874fd37e0", null, "https://secure.gravatar.com/blavatar/5ed905c3f93e2c2de8c44ea874fd37e0", null, "https://0.gravatar.com/avatar/9a112f29f70e179674580fde1689e5a1", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9433576,"math_prob":0.8803077,"size":14497,"snap":"2019-35-2019-39","text_gpt3_token_len":3138,"char_repetition_ratio":0.11785,"word_repetition_ratio":0.0049261083,"special_character_ratio":0.21238877,"punctuation_ratio":0.11676017,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9663984,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,4,null,4,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-20T05:13:09Z\",\"WARC-Record-ID\":\"<urn:uuid:915891d4-a9de-4d8a-b89e-61008f538600>\",\"Content-Length\":\"106904\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b804f7c4-95e5-4b1a-9728-2a8332b40ac7>\",\"WARC-Concurrent-To\":\"<urn:uuid:5beab3b9-c559-4863-8cb5-f9bf97f3f24c>\",\"WARC-IP-Address\":\"192.0.78.12\",\"WARC-Target-URI\":\"https://educationrealist.wordpress.com/2013/08/12/polynomial-operations-as-glue-second-year-algebra/\",\"WARC-Payload-Digest\":\"sha1:XAQ6Z46WXUFDDUNSHTRC5YOC3NSAZIXW\",\"WARC-Block-Digest\":\"sha1:3HQXLIYHB3LX4CYRTKPCCEUIZ63AOF7B\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573832.23_warc_CC-MAIN-20190920050858-20190920072858-00428.warc.gz\"}"}
https://www.splashlearn.com/s/math-worksheets/add-4-digit-and-2-digit-numbers-with-regrouping-horizontal-addition
[ "Home > Math > Add 4-Digit and 2-Digit Numbers with Regrouping: Horizontal Addition Worksheet\n\n## Assess your math skills by adding 4-digit and 2-digit numbers with regrouping in this worksheet.", null, "Task your little mathematicians to crack the code of adding 4-digit and 2-digit numbers with regrouping with this fun worksheet. It is common for students to regroup numbers when they add multi-digit numbers. They understand that one hundred is ten tens, one ten is ten ones, and so forth. Use add 4-digit and 2-digit numbers with regrouping worksheet to help your students practice this concept and develop their conceptual and strategic knowledge. In each problem, the numbers are laid out in the horizontal format. Students should try to use different strategies involving composing and decomposing numbers to solve these problems. This will help them develop flexibility and fluency.", null, "4413+", null, "4567+", null, "", null, "", null, "" ]
[ null, "https://cdn.splashmath.com/cms_assets/images/playable-left-desc-d0e7c503c7eb99a138cc.svg", null, "https://cdn.splashmath.com/cms_assets/images/playable-right-image-c88d24a6fffd20c6833d.svg", null, "https://cdn.splashmath.com/cms_assets/images/math-and-ela-games-feature-d7f1a6d98b223203d222.svg", null, "https://cdn.splashmath.com/cms_assets/images/math-and-ela-worksheet-feature-56a20bb968cbfa2fe52a.svg", null, "https://cdn.splashmath.com/cms_assets/images/coomon-core-feature-5e0a900656847818fa6d.svg", null, "https://cdn.splashmath.com/cms_assets/images/coopa-feature-8af350a7eecbb0439840.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9129946,"math_prob":0.7882807,"size":687,"snap":"2023-40-2023-50","text_gpt3_token_len":135,"char_repetition_ratio":0.13616398,"word_repetition_ratio":0.03846154,"special_character_ratio":0.18340611,"punctuation_ratio":0.08130081,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98600096,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T13:00:48Z\",\"WARC-Record-ID\":\"<urn:uuid:ecd59abb-e7ad-4d4f-a5ec-8982e6073ad3>\",\"Content-Length\":\"157197\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b4250564-5bc7-4733-9851-0a3055ffabcd>\",\"WARC-Concurrent-To\":\"<urn:uuid:7bc36d62-dfb3-42d9-825c-a4eef73641a0>\",\"WARC-IP-Address\":\"104.18.29.134\",\"WARC-Target-URI\":\"https://www.splashlearn.com/s/math-worksheets/add-4-digit-and-2-digit-numbers-with-regrouping-horizontal-addition\",\"WARC-Payload-Digest\":\"sha1:M5ZLJQYYU2TOGZEUXX5TBHCP6WFA2AQ4\",\"WARC-Block-Digest\":\"sha1:3ZFRHD5NRATCRNV73CPCSQCLEFQCVYTA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510297.25_warc_CC-MAIN-20230927103312-20230927133312-00545.warc.gz\"}"}
https://topic.alibabacloud.com/a/c---uses-ltcstdlibgt-header-file-to-generate-a-random-number-generator_1_31_30508444.html
[ "# C + + uses <cstdlib> header file to generate a random number generator\n\nSource: Internet\nAuthor: User\n\nHeader file <cstdlib> has an important function rand (), which can be used as a random number generator.\n\nNow I want to generate a random number, I use the following program:\n\n`#include <iostream> #include <ctsdlib><span style= \"font-family:arial, Helvetica, Sans-serif;\"    >using namespace Std;</span>int Main () {cout << rand () << Endl; return 0;}`\n\nThe problem comes, although we produce a random number, but no matter how many times I run the above program (recompile, run, also 41), it produces a definite number. That is, 41, as follows:\n\nWhat is this for?\n\nThe answer is that to get a random number, we need to enter a seed (seed) for this random number generator (generator).\n\nNow let's call the rand () function in the function 25 times to generate 25 random numbers:\n\nThe same is true, no matter how many times the program is run, or if it is compiled and run again, the result is 25 random numbers that are exactly the same as the random numbers above. This is still the reason for not having seed.\n\nIn an example, simulate the dice. The randomly occurring data are: 1, 2, 3, 4, 5, 6, now we programmed to generate this range of numbers, cast 25 times, the program is as follows:\n\n`#include <iostream> #include <random>using namespace Std;int main () {for (int i = 0; i <; i++) { cout << 1+ (rand ()% 6) << Endl; Must be added 1, otherwise 0 } return 0 will be generated;}`\n\nThe results of the operation are as follows:\n\nNot difficult to generalize, we can produce random numbers of arbitrary range integers.\n\nBut the problem is that since the random number of several programs above does not have seed, all the random numbers that are generated are the same regardless of how many times we run the program.\n\nBecause no computer can produce a completely random random number. Computer is not a person after all, the computer must follow certain algorithm (algorithm), certain instructions to execute the command. This means that computers cannot produce completely random data. But the computer through a certain complex algorithm, we can make its generated data appears to be random.\n\nC + + produces a seed random number that can be used by the stochastic. The usual function is Srand () (the function has a parameter that allows us to give in any random number):\n\n`#include <iostream> #include <cstdlib>using namespace Std;int main () { srand (6); for (int i = 0; i < i++) { cout << 1+ (rand ()% 6) << Endl; } return 0;}`\n\nThe resulting results are as follows:\n\nObviously different. But if we do not change the parameters in the Srand (6) function, as long as we run, or recompile, the same result will still appear (as in the above question). But as soon as we change the parameters in Srand (), recompile, run, there are different results. For example, if we change srand (6) to Srand (10), it will be changed:\n\nNow we give the ultimate solution to all the above problems:\n\nThis solution needs to include a header file called <ctime>. This header file allows us to get the computer's clock. As follows:\n\n`#include <iostream> #include <cstdlib> #include <ctime>using namespace Std;int main () { Srand ( Time (0)); for (int i = 0; i < i++) { cout << 1+ (rand ()% 6) << Endl; } return 0;}`\n\nCompile run:\n\nRun again:\n\nThis means that as soon as it is rerun, it will be re-generated and pseudo-random numbers. 
This is exactly what we want to achieve.

Now we explain why. We can modify the generator's starting state through srand(). If we pass a fixed number to srand(), then the result of the algorithm is also a fixed sequence: no matter how many times we run, the result is the same. But if we pass srand() the parameter time(0), whose return value changes every second, then every time we run the program the seed differs, and the results look random.
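The seed-then-generate pattern is not specific to C++. As a cross-language sanity check (my own example, not part of the tutorial), Python's random module behaves the same way:

```python
import random

random.seed(6)  # fixed seed -> the same "random" dice rolls on every run
print([random.randint(1, 6) for _ in range(5)])

random.seed()   # no argument -> seeded from OS entropy or the clock, varies per run
print([random.randint(1, 6) for _ in range(5)])
```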
https://www.numberempire.com/75588704356
[ "Home | Menu | Get Involved | Contact webmaster", null, "", null, "", null, "", null, "", null, "# Number 75588704356\n\nseventy five billion five hundred eighty eight million seven hundred four thousand three hundred fifty six\n\n### Properties of the number 75588704356\n\n Factorization 2 * 2 * 11 * 11 * 12497 * 12497 Divisors 1, 2, 4, 11, 22, 44, 121, 242, 484, 12497, 24994, 49988, 137467, 274934, 549868, 1512137, 3024274, 6048548, 156175009, 312350018, 624700036, 1717925099, 3435850198, 6871700396, 18897176089, 37794352178, 75588704356 Count of divisors 27 Sum of divisors 145410569017 Previous integer 75588704355 Next integer 75588704357 Is prime? NO Is a Fibonacci number? NO Is a Bell number? NO Is a Catalan number? NO Is a factorial? NO Is a regular number? NO Is a perfect number? NO Polygonal number (s < 11)? square(274934) Binary 1000110011001011100000001100001100100 Octal 1063134014144 Duodecimal 12796607204 Hexadecimal 1199701864 Square 5.7136522262188E+21 Square root 274934 Natural logarithm 25.048572695688 Decimal logarithm 10.87845690129 Sine 0.8741215992272 Cosine 0.48570714403278 Tangent 1.7996885777085\nNumber 75588704356 is pronounced seventy five billion five hundred eighty eight million seven hundred four thousand three hundred fifty six. Number 75588704356 is a composite number. Factors of 75588704356 are 2 * 2 * 11 * 11 * 12497 * 12497. Number 75588704356 has 27 divisors: 1, 2, 4, 11, 22, 44, 121, 242, 484, 12497, 24994, 49988, 137467, 274934, 549868, 1512137, 3024274, 6048548, 156175009, 312350018, 624700036, 1717925099, 3435850198, 6871700396, 18897176089, 37794352178, 75588704356. Sum of the divisors is 145410569017. Number 75588704356 is not a Fibonacci number. It is not a Bell number. Number 75588704356 is not a Catalan number. Number 75588704356 is not a regular number (Hamming number). It is a not factorial of any number. Number 75588704356 is a deficient number and therefore is not a perfect number. Number 75588704356 is a square number with n=274934. Binary numeral for number 75588704356 is 1000110011001011100000001100001100100. Octal numeral is 1063134014144. Duodecimal value is 12796607204. Hexadecimal representation is 1199701864. Square of the number 75588704356 is 5.7136522262188E+21. Square root of the number 75588704356 is 274934. Natural logarithm of 75588704356 is 25.048572695688 Decimal logarithm of the number 75588704356 is 10.87845690129 Sine of 75588704356 is 0.8741215992272. Cosine of the number 75588704356 is 0.48570714403278. Tangent of the number 75588704356 is 1.7996885777085\n\n### Number properties\n\nExamples: 3628800, 9876543211, 12586269025" ]
[ null, "https://www.numberempire.com/images/graystar.png", null, "https://www.numberempire.com/images/graystar.png", null, "https://www.numberempire.com/images/graystar.png", null, "https://www.numberempire.com/images/graystar.png", null, "https://www.numberempire.com/images/graystar.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5748035,"math_prob":0.9417448,"size":2889,"snap":"2020-34-2020-40","text_gpt3_token_len":1006,"char_repetition_ratio":0.18405546,"word_repetition_ratio":0.22488038,"special_character_ratio":0.5420561,"punctuation_ratio":0.1959596,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9918565,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-24T15:02:28Z\",\"WARC-Record-ID\":\"<urn:uuid:769eb03d-4b8c-445b-95f3-77dce9a06c14>\",\"Content-Length\":\"22532\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:99d3ce0e-b897-45ba-b21e-6d209cdca8d8>\",\"WARC-Concurrent-To\":\"<urn:uuid:4a4bbe76-ec74-4984-b2d3-f5baa7c397d7>\",\"WARC-IP-Address\":\"172.67.208.6\",\"WARC-Target-URI\":\"https://www.numberempire.com/75588704356\",\"WARC-Payload-Digest\":\"sha1:K4GPKVDOHDZNTV4HC7GCQRCFRQN7CKG7\",\"WARC-Block-Digest\":\"sha1:MUD375JESK4KUNCZGAEKFYAXY7OQ24QZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400219221.53_warc_CC-MAIN-20200924132241-20200924162241-00312.warc.gz\"}"}
http://fliesen-iseni.de/mac/thanos/how/243117038dd9c4cc3b05506085630b04
[ "The 1, 4, 6, 4, 1 tell you the coefficents of the p 4, p 3 r, p 2 r 2, p r 3 and r 4 terms respectively, so the expansion is just. Chapter 08 of Mathematics ncert book titled - Binomial theorem for class 12\n\nThis means the n th row of Pascals triangle comprises the The triangle can be used to calculate the coefficients of the expansion of (a+b)n ( a + b) n by taking the exponent n n and adding 1 1. Binomial Theorem and Pascals Triangle: Pascals triangle is a triangular pattern of numbers formulated by Blaise Pascal. For (a+b)6 ( a + b) 6, n = 6 n = 6 so the coefficients of the expansion will correspond with line 7 7.\n\n, which is called a binomial coe cient. And here comes Pascal's triangle. The rth element of Row n is given by: C(n, r - 1) =. Now on to the binomial. Dont be concerned, this idea doesn't require any area formulas or unit calculations like you'd expect for a traditional triangle. addition property of opposites. Another formula that can be used for Pascals Triangle is the binomial formula. These are associated with a mnemonic called Pascals Triangle and a powerful result called the Binomial Theorem, which makes it simple to compute powers of binomials. (x-6) ^ 6 (2x -3) ^ 4 Please explain the process if possible. 8. All the binomial coefficients follow a particular pattern which is known as Pascals Triangle. The binomial coefficient appears as the k th entry in the n th row of Pascal's triangle (counting starts at 0 ). It's much simpler to use than the Binomial Theorem, which provides a formula for expanding binomials. These coefficients for varying n and b can be arranged to form Pascal's triangle.These numbers also occur in combinatorics, where () gives the number of different combinations of b elements that can be chosen from an n-element set.Therefore () is often\n\nThe formula for Pascal's When an exponent is 0, we get 1: (a+b) 0 = 1. If you wish to use Pascals triangle on an expansion of the form (ax + b)n, then some care is needed. of a binomial form, this is called the Pascals Triangle, named after the French mathematician Blaise Pascal. ). Binomial Theorem I: Milkshakes, Beads, and Pascals Triangle. Pascal's Triangle is the representation of the coefficients of each of the terms in a binomial expansion. For example, x+1, 3x+2y, a b We pick the coecients in the expansion Specifically, the binomial coefficient, typically written as , tells us the b th entry of the n th row of 1+2+1. Pascals Triangle. combinations formula.\n\nOne of the most interesting Number Patterns is Pascal's Triangle (named after Blaise Pascal, A Formula for Any Entry in The Triangle. Pascal's Triangle is a triangle in which each row has one more entry than the preceding row, each row begins and ends with \"1,\" and the interior elements are found by adding the adjacent elements in the preceding row. View more at http://www.MathAndScience.com.In this lesson, you will learn about Pascal's Triangle, which is a pattern of numbers that has many uses in math.\n\nAnalyze powers of a binomial by Pascal's Triangle and by binomial coefficients. Use the Binomial Theorem to find the term that will give x4 in the expansion of (7x 3)5.\n\nHowever, for quite some time Pascal's Triangle had been well known as a way to expand binomials (Ironically enough, Pascal of the 17th century was not the first person to know\n\nThe The coefficient a in the term of ax b y c is known as the binomial coefficient or () (the two have the same value). Each entry is the sum of the two above it. 
Design the formula how to find nth term from end .\n\n(a + b) 2 = c 0 a 2 b 0 + c 1 a 1 b 1 + c 2 a 0 b 2. A binomial is an algebraic expression containing 2 terms. This method is more useful than Pascals triangle when n is large. By spotting patterns, or otherwise, find the values of , , , and . Scroll down the page if you need more examples and solutions. Binomial expansion using Pascal's triangle and binomial theorem SlideShare uses cookies to improve functionality and performance, and to provide you with relevant advertising. on a left-aligned Pascal's triangle. The other is combinatorial; it uses the denition of the number of r-combinations as the In mathematics, the binomial coefficients are the positive integers that occur as coefficients in the binomial theorem.Commonly, a binomial coefficient is indexed by a pair of integers n k 0 and is written (). Hence if we want to find the coefficients in the binomial expansion, we use Pascals triangle. This way of obtaining a binomial expansion is seen to be quite rapid , once the Pascal triangle has been constructed. The first few binomial coefficients. Any triangle probably seems irrelevant right now, especially Pascals. F or 1500 years, mathematicians from many cultures have explored the patterns and relationships found in what we now, in the West, refer to as Pascals triangle. In Algebra II, we can use the binomial coefficients in Pascals triangle to raise a polynomial to a certain power.\n\nSolution By construction, the value in row n, column r of Pascals triangle is the value of n r, for every pair of add. Binomial Theorem. That pattern is summed up by the Binomial Theorem: The Binomial Theorem. Once that is done I introduce Binomial Expansion and tie that into Pascal's Triangle. So this is going to have eight terms. addition sentence. Pascal's triangle can be used to identify the coefficients when expanding a binomial.\n\naddend. Examples, videos, solutions, worksheets, games and activities to help Algebra II students learn about Pascals Triangle and the Binomial Theorem. Inquiry/Problem Solving a) Build a new version of Pascals triangle, using the formula for t n, r on page 247, but start with t 0,0 = 2. b) Investigate this triangle and state a conjecture about its terms. Binomial Theorem II: The Binomial Expansion The Milk Shake Problem. Detailed step by step solutions to your Binomial Theorem problems online with our math solver and calculator. c) State a conjecture about the sum of the terms in (b) (5 points) Write down Perfect Square Formula, i.e. However, Pascals triangle is very useful for binomial expansion. Here you can navigate all 3369 (at last count) of my videos, including the most up to date and current A-Level Maths specification that has 1033 teaching videos - over 9 7 hours of content that works through the entire course. Binomial expansion. Solved exercises of Binomial Theorem. There are some main properties of binomial expansion which are as follows:There are a total of (n+1) terms in the expansion of (x+y) nThe sum of the exponents of x and y is always n.nC0, nC1, nC2, CNN is called binomial coefficients and also represented by C0, C1, C2, CnThe binomial coefficients which are equidistant from the beginning and the ending are equal i.e. nC0 = can, nC1 = can 1, nC2 = in 2 .. etc. / (k! The Binomial Theorem and Binomial Expansions. A triangular array of the binomial coefficients of the expression is known as Pascals Triangle. 
Bonus exercise for the OP: figure out why this works by starting Whats Pascal's triangle then?\n\nTo find an expansion for (a + b) 8, we complete two more rows of Pascals triangle: Thus the expansion of is (a + b) 8 = a 8 + 8a 7 b + 28a 6 b 2 + 56a 5 b 3 + 70a 4 b 4 + 56a 3 b 5 + 28a 2 b 6 + 8ab 7 + b 8. While Pascals triangle is useful in many different mathematical settings, it will be applied Exercises: 1. Pascals Triangle and Binomial Expansion Pascals triangles give us the coefficients of the binomial expansion of the form $$(a + b)^n$$ in the $${n^{{\\rm{th}}}}$$ row in the triangle. * (n-k)!\n\nFor any binomial expansion of (a+b) n, the coefficients for each term in the expansion are given by the nth row of Pascals triangle. Using Pascals Triangle Use Pascals triangle to compute the values of 6 2 and 6 3 . The formula is: Note that row and column notation begins with 0 rather than 1. Lets say we want to expand $(x+2)^3$. The coefficients of the binomials in this expansion 1,4,6,4, and 1 forms the 5th degree of Pascals triangle. The binomial theorem is used to find coefficients of each row by using the formula (a+b)n. Binomial means adding two together. It is named after Blaise Pascal. One of the most interesting Number Patterns is Pascal's Triangle. The (n+1)th row is the row we need, and the 1st term in the row is the coe cient of 5.Expand (2a 3)5 using Pascals triangle. Problem 1: Issa went to a shake kiosk and want to buy a milkshake. Pascals Triangle and Binomial Expansion. This is the bucket, Step 1. A binomial expansion is a method used to allow us to expand and simplify algebraic expressions in the form into a sum of terms of the form.\n\nCoefficients are from Pascal's Triangle, or by calculation using n!k!(n-k)! Since were 9.7 Pascals Formula and the Binomial Theorem 595 Pascals formula can be derived by two entirely different arguments. (X+Y)^2 has three terms. Write 3. asked Mar 3, 2014 in ALGEBRA 2 by harvy0496 Apprentice. Binomial Theorem Calculator online with solution and steps. Each number shown in our Pascal's triangle calculator is given by the formula that your math teacher calls the binomial coefficient. I always introduce Binomial Expansion by first having my student complete an already started copy of Pascal's Triangle. binomial expression . Example: (x+y) 4Since the power (n) = 4, we should have a look at the fifth (n+1) th row of the Pascal triangle. Therefore, 1 4 6 4 1 represent the coefficients of the terms of x & y after expansion of (x+y) 4.The answer: x 4 +4x 3 y+6x 2 y 2 +4xy 3 +y 4 Pascal's triangle can be used to identify the coefficients when expanding a binomial. Well (X+Y)^1 has two terms, it's a binomial. It gives a formula for the expansion of the powers of binomial expression. Pascal's Triangle & the Binomial Theorem 1. Exponent of 2 Blaise Pascals Triangle Arithmtique (1665). In elementary algebra, the binomial The Binomial Theorem First write the pattern for raising a binomial to the fourth power. Binomials are expressions that looks like this: (a + b)\", where n can be any positive integer. Solved Problems. acute triangle. Write the rst 6 lines of Pascals triangle. Algebra - Pascal's triangle and the binomial expansion; Pascal's Triangle & the Binomial Theorem 1. Substitute the values of n and r into the equation 2. How to use the formula 1. 
Any particular number on any row of the triangle can be found using the binomial coefficient.\n\nFor example, to find the $${100^{th}}$$ row of this triangle, one must also find the entries of the first $$99$$ rows. We start with (2) 4. Pascal's triangle is triangular-shaped arrangement of numbers in rows (n) and columns (k) such that each number (a) in a given row and column is calculated as n factorial, divided by k factorial times n minus k factorial. The Go to Pascals triangle to row 11, entry 3. The first remark of the binomial theorem was in the 4th century BC by the renowned Greek mathematician Euclids. Binomial Expansion Formula; Binomial Probability Formula; Binomial Equation. ), see Theorem 6.4.1. C (n,k) = n! Definition: binomial . In Row 6, for example, 15 is the sum of 5 and 10, and 20 is the sum of 10 and 10.\n\nNow lets build a Pascals triangle for 3 rows to find out the coefficients. The inductive proof of the binomial theorem is a bit messy, and that makes this a good time to introduce the idea of combinatorial proof. Exponent of 1. The binomial theorem formula is (a+b) n = n r=0 n C r a n-r b r, where n is a positive integer and a, b are real numbers, and 0 < r n.This formula helps to expand the binomial expressions such as (x + a) 10, (2x + 5) 3, (x - (1/x)) 4, and so on.\n\nPascals Triangle definition and hidden patterns Generalizing this observation, Pascals Triangle is simply a group of numbers that are arranged where each row of values represents the coefficients of a binomial expansion, $(a+ b)^n$. Binomial. This algebra 2 video tutorial explains how to use the binomial theorem to foil and expand binomial expressions using pascal's triangle and combinations. Explore and apply Pascal's Triangle and use a theorem to 1 4 6 4 1 Coefficients from Pascals Triangle. Pascal's Triangle is probably the easiest way to expand binomials. The sum of the powers of x and y in each term is equal to the power of the binomial i.e equal to n. The powers of x in the expansion of are in descending order while the powers of y are in ascending order. For example, the 3 rd entry in Row 6 ( r = 3, n = 6) is C(6, 3 - 1) = C(6, 2) = = 15 .\n\nPascals triangle (1653) has been found in the works of mathematicians dating back before the 2nd century BC. The coefficients in the binomial expansion follow a specific pattern known as Pascals triangle.\n\nOne such use cases is binomial expansion. Pascals triangle is a geometric arrangement of the binomial coefficients in the shape of a triangle. Question: 8. 2. Pascal's Triangle & Binomial Expansion Explore and apply Pascal's Triangle and use a theorem to determine binomial expansions. Math Example Problems with Pascal Triangle. / ((n - r)!r! The binomial expansion of terms can be represented using Pascal's triangle. The binomial theorem There are instances that the expansion of the binomial is so large that the Pascal's Triangle is not advisable to be used. Concept Map. https://www.khanacademy.org//v/pascals-triangle-binomial-theorem Lets look at the expansion of (x + y)n (x + y)0 = 1 (x + y)1 = x + y (x + y)2 = x2 +2xy + y2 (x + y)3 = x3 + 3x2y + 3xy2 + y3 F or 1500 years, mathematicians from many cultures have explored the patterns and relationships found in what we additive identity. If the binomial coefficients are arranged in rows for n = 0, 1, 2, a triangular structure known as Pascals triangle is obtained. Coefficients. Exponent of 0. 
The coefficient is arranged in a triangular pattern, the first and last number in each row is 1 and number in each row is the sum of two numbers that lie diagonally above the number. A binomial expression is the sum or difference of two terms. 1a5b0 + 5a4b1 + 10a3b2 + 10a2b3 + 5a1b4 + 1a0b5 The exponents for b begin with 0 and increase. Pascals triangle contains the values of the binomial coefficient of the expression. Pascals triangle and the binomial theorem mc-TY-pascal-2009-1.1 A binomial expression is the sum, or dierence, of two terms. Binomial Expansion Using Pascals Triangle Example: Each coefficient is achieved by adding two coefficients in the previous row, on the immediate left and immediate right. (X+Y)^3 has four terms. As an online math tutor, I love teaching my students helpful shortcuts! As we have explained above, we can get the expansion of (a + b)4 and then we have to take positive and negative signs alternatively staring with positive sign for the first term So, the expansion is (a - b)4 = a4 Firstly, 1 is additive inverse. Algebra Examples. The name is not too important, but let's see what the computation looks like. Row 5 Use Pascals Triangle to expand (x 3)4. Blaise Pascals Triangle Arithmtique (1665).\n\nbinomial-theorem; It is, of course, often impractical to write out Pascal\"s triangle every time, when all that we need to know are the entries on the nth line. It tells you the coefficients of the progressive terms in the expansions. The binomial theorem formula is used in the expansion of any power of a binomial in the form of a series. What is Pascal's Triangle Formula? Limitations of Pascals Triangle. If the exponent is relatively small, you can use a shortcut called Pascal's triangle to find these coefficients.If not, you can always rely on algebra! For example, (x + y) is a binomial. The numbers are so arranged that they reflect as a triangle. addition (of complex numbers) addition (of fractions) addition (of matrices) addition (of vectors) addition formula. The general form of the binomial expression is (x+a) and the expansion of , where n is a natural number, is called binomial theorem. (a) (5 points) Write down the first 9 rows of Pascal's triangle. If one looks at the magnitude of the integers in the kth row of the Pascal triangle as k We can find any element of any row using the combination function. addition. Solution is simple. If we denote the number of combinations of k elements from an n -element set as C (n,k), then. In this worksheet, we will practice using Pascals triangle to find the coefficients of the algebraic expansion of any binomial expression of the form (+). Q1: Shown is a partially filled-in picture of Pascals triangle. adjacent faces. adjacent angles. One is alge-braic; it uses the formula for the number of r-combinations obtained in Theorem 9.5.1.\n\nSolution: First write the generic expressions without the coefficients. Recent Visits Use the binomial theorem to write the binomial expansion (X+2)^3. Pascals Triangle. To find any binomial coefficient, we need the two coefficients just above it. Through this article on binomial expansion learn about the binomial theorem with definition, expansion formula, examples and more. Pascal's Triangle. Pascals triangle is useful in finding the binomial expansions for reasonably small values of $$n$$, it isnt practical for finding expansions for large values of $$n$$. Binomial Theorem/Expansion is a great example of this! 
The common term of binomial development is Tr+1=nCrxnryr T r + 1 = n C r x n r y r. To Solution : Already, we know (a + b) 4 = a 4 + 4a 3 b + 6a 2 b 2 + 4a b 3 + b 4. Like this: Example: What is (y+5) 4 . Expand the following binomials using pascal triangle : Problem 1 : (3x + 4y) 4. Examples.\n\nTo construct the next row, begin it with 1, and add the two numbers immediately above: 1 + 2. To find the numbers inside of Pascals Triangle, you can use the following formula: nCr = n-1Cr-1 + n-1Cr.\n\nSo the answer is: 3 3 + 3 (3 2 x) + 3 (x 2 3) + x 3 (we are replacing a by 3 and b by x in the expansion of (a + b) 3 above) Generally. To build the triangle, always start with \"1\" at the top, then continue placing numbers below it in a triangular\n\nI'm trying to answer a question using Pascal's triangle to expand binomial functions, and I know how to do it for cases such as (x+1) which is quite simple, but I'm having troubles understanding and looking It is important to keep the 2 term Binomial theorem. The exponents for a begin with 5 and decrease. This is one warm-up that every student does without prompting. The passionately It is especially useful when raising a binomial to lower degrees. The general form of the binomial expression is (x+a) and the expansion of :T E= ; , where n is a natural number, is called binomial theorem. The following figure shows how to use Pascals Triangle for Binomial Expansion. Binomials are\n\nIt is the coefficient of the x k term in the polynomial expansion of the binomial power (1 + x) n, and is given by the formula =!! Binomial Expansion Using Pascals Triangle. Pascal Triangle Formula. The numbers in Pascals triangle form the coefficients in the binomial expansion. It states that for positive natural numbers n and k, is a binomial coefficient; one interpretation of which is the coefficient of the xk term in the expansion of (1 + x)n. How is each row formed in Pascals Triangle? We 1+3+3+1. Pascal's Triangle Binomial expansion (x + y) n; Often both Pascal's Triangle and binomial expansions are described using combinations but without any justification that ties it all together. If n is very large, then it is very difficult to find the coefficients. Algebra 2 and Precalculus students, this one is for you. And you will learn lots of cool math symbols along the way.\n\nThe coefficients will correspond with line n+1 n + 1 of the triangle. Finish the row with 1. Let me just create little buckets for each of the terms.\n\nLets expand (x+y). How do I use Pascal's Triangle to expand these two binomials? Binomial expansion. One such use cases is binomial expansion. We have a binomial raised to the power of 4 and so we look at the 4th row of the Pascals triangle to find the 5 coefficients of 1, 4, 6, 4 and 1.\n\nn C r has a mathematical formula: n C r = n! Pascal's Triangle CalculatorWrite down and simplify the expression if needed. (a + b) 4Choose the number of row from the Pascal triangle to expand the expression with coefficients. Use the numbers in that row of the Pascal triangle as coefficients of a and b. Place the powers to the variables a and b. Power of a should go from 4 to 0 and power of b should go from 0 to 4. CK-12\n\nFind middle term of binomial expansion. Here you will explore patterns with binomial and polynomial expansion and find out how to get coefficients using Pascals Triangle.\n\nAs mentioned in class, Pascal's triangle has a wide range of usefulness. 
The coefficients that appear in the binomials expansions can be defined by the Pascals triangle as well.\n\nExpand the factorials to see what factors can reduce to 1 3. For example, the first line of the triangle is a simple 1. We begin by considering the expansions of ( + ) for consecutive powers of , starting with = 0. Practice Expanding Binomials Using Pascal's Triangle with practice problems and explanations.\n\nComparing (3x + 4y) 4 and (a + b) 4, we get a = 3x and b = 4y Pascal's triangle, named after the famous mathematician Blaise Pascal, names the binomial coefficients for the binomial expansion. For example, x+1 and 3x+2y are both binomial expressions. Don't worry it will all be explained! Any equation that contains one or more binomial is known as a binomial equation. So we know the answer is . What is the general formula for binomial expansion? Pascals Triangle gives us a very good method of finding the binomial coefficients but there are certain problems in this method: 1. Lets learn a binomial expansion shortcut. Binomial Expansion Formula. Named posthumously for the French mathematician, physicist, philosopher, and monk Blaise Pascal, this table of binomial The shake vendor told her that she can choose plain milk, or she can choose to combine any number of flavors in any way she want.\n\nBackground. A binomial expression is the sum or difference of two terms. (2 marks) Ans. Again, add the two numbers immediately above: 2 + 1 = 3. The binomial expansion formula can simplify this method. Background. The triangle is symmetrical. In this explainer, we will learn how to use Pascals triangle to find the coefficients of the algebraic expansion of any binomial expression of the form ( + ) . 2.\n\nadjacent side (in a triangle) adjacent sides Your calculator probably has a function to calculate binomial Expanding a binomial using Pascals Triangle Pascals triangle is the pyramid of numbers where each row is formed by adding together the two numbers that are directly above it: The triangle continues on this way, is named after a French mathematician named Blaise Pascal (find out more about Blaise Pascal) and is helpful when performing Binomial Expansions.. Notice that the 5th row, for example, has 6 entries. Thanks. There are a total of (n+1) terms in the expansion of (x+y) n Then,the n row of Pascals triangle will be the expanded series coefficients when the terms are arranged. The coefficients in the binomial expansion follow a specific pattern known as Pascal [s triangle . Step 2.\n\n1+1. 6th line of Pascals triangle is So the 4th term is (2x (3) = x2 The 4th term is The second method to work out the expansion of an expression like (ax + b)n uses binomial coe cients. Combinations are used to compute a term of Pascal's triangle, in statistics to compute the number an events, to identify the coefficients of a binomial expansion and here in the binomial formula used to answer probability and statistics questions.\n\nIf we want to raise a binomial expression to a power higher than 2 it is very cumbersome to\n\nNotes include completing rows 0-6 of pascal's triangle, side by side comparison of multiplying binomials traditionally and by using the Binomial Theorem for (a+b)^2 and (a+b)^3, 2 examples of expanding binomials, 1 example of finding a coefficient, and 1 example of finding a term.Practice is a \"This or That\" activit Well, it is neat thanks to calculating the number of combinations, and visualizes binomial expansion. 
Suppose you have the binomial ( x + y) and you want to raise it to a power such as 2 or 3. Pascals Triangle Binomial Expansion As we already know that pascals triangle defines the binomial coefficients of terms of binomial expression (x + y) n , So the expansion of (x + y) n is: (x In Pascals triangle, each number in the triangle is the sum of the two digits directly above it. When the exponent is 1, we get the original value, unchanged: (a+b) 1 = a+b. Binomial coefficients are the positive coefficients that are present in the polynomial expansion of a binomial (two terms) power. Write down the row numbers. It gives a formula for the expansion of the powers of binomial expression. What is the formula for binomial expansion? Isaac Newton wrote a generalized form of the Binomial Theorem. As mentioned in class, Pascal's triangle has a wide range of usefulness. And indeed, (a + b)0 = 1. Binomial Expansion. Pascals triangle determines the coefficients which arise in binomial expansion . Pascals Triangle is the triangular arrangement of numbers that gives the coefficients in the expansion of any binomial expression. Get instant feedback, extra help and step-by-step explanations. We only want to find the coefficient of the term in x4 so we don't need the complete expansion. What is the Binomial Theorem? In mathematics, Pascals rule (or Pascals formula) is a combinatorial identity about binomial coefficients. For natural numbers (taken to include 0) n and k, the binomial coefficient can be defined as the coefficient of the monomial Xk in the For example, x+1 and 3x+2y are both binomial expressions. 11/3 = Let a = 7x b = 3 n = 5 n Clearly, the first number on the nth line is 1. Simplify Pascal's Triangle and Binomial Expansion IBSL1 D\n\nDiscover related concepts in Math and Science.\n\na) Find the first 4 terms in the expansion of (1 + x/4) 8, giving each term in its simplest form. b) Use your expansion to estimate the value of (1.025) 8, giving your answer to 4 decimal places. In the binomial expansion of (2 - 5x) 20, find an expression for the coefficient of x 5. How many ways can you give 8 apples to 4 people? In this way, using pascal triangle to get expansion of a binomial with any exponent. If you continue browsing the site, you agree to the use of cookies on this website. We will use the simple binomial a+b, but it could be any binomial. ()!.For example, the fourth power of 1 + x is Let us start with an exponent of 0 and build upwards. Other Math questions and answers." ]
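The construction rules above translate directly into code. Here is a minimal Python sketch (the helper `pascal_row` is ours; only `math.comb` comes from the standard library) that builds a row of the triangle multiplicatively and uses it to expand $$(x - 3)^4$$:

```python
from math import comb

def pascal_row(n):
    """Row n of Pascal's triangle, via C(n, k+1) = C(n, k) * (n - k) / (k + 1)."""
    row = [1]
    for k in range(n):
        row.append(row[-1] * (n - k) // (k + 1))  # always an exact integer
    return row

print(pascal_row(5))                     # [1, 5, 10, 10, 5, 1]
print([comb(5, k) for k in range(6)])    # same row, via the factorial formula

# Expand (x - 3)**4 term by term: the coefficient of x^(4-k) is C(4, k) * (-3)^k.
print([comb(4, k) * (-3) ** k for k in range(5)])   # [1, -12, 54, -108, 81]
```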
https://tools.carboncollective.co/compound-interest/67361-at-40-percent-in-17-years/
[ "# What is the compound interest on $67361 at 40% over 17 years? If you want to invest$67,361 over 17 years, and you expect it will earn 40.00% in annual interest, your investment will have grown to become $20,539,276.07. If you're on this page, you probably already know what compound interest is and how a sum of money can grow at a faster rate each year, as the interest is added to the original principal amount and recalculated for each period. The actual rate that$67,361 compounds at is dependent on the frequency of the compounding periods. In this article, to keep things simple, we are using an annual compounding period of 17 years, but it could be monthly, weekly, daily, or even continuously compounding.\n\nThe formula for calculating compound interest is:\n\n$$A = P(1 + \\dfrac{r}{n})^{nt}$$\n\n• A is the amount of money after the compounding periods\n• P is the principal amount\n• r is the annual interest rate\n• n is the number of compounding periods per year\n• t is the number of years\n\nWe can now input the variables for the formula to confirm that it does work as expected and calculates the correct amount of compound interest.\n\nFor this formula, we need to convert the rate, 40.00% into a decimal, which would be 0.4.\n\n$$A = 67361(1 + \\dfrac{ 0.4 }{1})^{ 17}$$\n\nAs you can see, we are ignoring the n when calculating this to the power of 17 because our example is for annual compounding, or one period per year, so 17 × 1 = 17.\n\n## How the compound interest on $67,361 grows over time The interest from previous periods is added to the principal amount, and this grows the sum a rate that always accelerating. The table below shows how the amount increases over the 17 years it is compounding: Start Balance Interest End Balance 1$67,361.00 $26,944.40$94,305.40\n2 $94,305.40$37,722.16 $132,027.56 3$132,027.56 $52,811.02$184,838.58\n4 $184,838.58$73,935.43 $258,774.02 5$258,774.02 $103,509.61$362,283.62\n6 $362,283.62$144,913.45 $507,197.07 7$507,197.07 $202,878.83$710,075.90\n8 $710,075.90$284,030.36 $994,106.27 9$994,106.27 $397,642.51$1,391,748.77\n10 $1,391,748.77$556,699.51 $1,948,448.28 11$1,948,448.28 $779,379.31$2,727,827.59\n12 $2,727,827.59$1,091,131.04 $3,818,958.63 13$3,818,958.63 $1,527,583.45$5,346,542.08\n14 $5,346,542.08$2,138,616.83 $7,485,158.92 15$7,485,158.92 $2,994,063.57$10,479,222.48\n16 $10,479,222.48$4,191,688.99 $14,670,911.48 17$14,670,911.48 $5,868,364.59$20,539,276.07\n\nWe can also display this data on a chart to show you how the compounding increases with each compounding period.\n\nAs you can see if you view the compounding chart for $67,361 at 40.00% over a long enough period of time, the rate at which it grows increases over time as the interest is added to the balance and new interest calculated from that figure. ## How long would it take to double$67,361 at 40% interest?\n\nAnother commonly asked question about compounding interest would be to calculate how long it would take to double your investment of $67,361 assuming an interest rate of 40.00%. We can calculate this very approximately using the Rule of 72. The formula for this is very simple: $$Years = \\dfrac{72}{Interest\\: Rate}$$ By dividing 72 by the interest rate given, we can calculate the rough number of years it would take to double the money. Let's add our rate to the formula and calculate this: $$Years = \\dfrac{72}{ 40 } = 1.8$$ Using this, we know that any amount we invest at 40.00% would double itself in approximately 1.8 years. So$67,361 would be worth $134,722 in ~1.8 years. 
We can also calculate the exact length of time it will take to double an amount at 40.00% using a slightly more complex formula:

$$Years = \dfrac{\log(2)}{\log(1 + 0.4)} = 2.06\; years$$

Here, we use the decimal format of the interest rate, and use the logarithm function to calculate the exact value. As you can see, the exact calculation is very close to the Rule of 72 calculation, which is much easier to remember.

Hopefully, this article has helped you to understand the compound interest you might achieve from investing $67,361 at 40.00% over a 17 year investment period.
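The whole calculation is easy to reproduce. Here is a minimal Python sketch (our own helper, not from any finance library) that rebuilds the growth table above and checks both doubling-time estimates:

```python
from math import log

def compound_table(principal, rate, years):
    """Print year-by-year growth with annual compounding (n = 1)."""
    balance = principal
    for year in range(1, years + 1):
        interest = balance * rate
        balance += interest
        print(f"{year:2d}  {balance - interest:>14,.2f}  "
              f"{interest:>13,.2f}  {balance:>14,.2f}")
    return balance

final = compound_table(67_361, 0.40, 17)
print(f"Final amount: {final:,.2f}")      # -> 20,539,276.07

print(72 / 40)             # Rule of 72 estimate: ~1.8 years to double
print(log(2) / log(1.4))   # exact doubling time: ~2.06 years
```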
https://www.groundai.com/project/properties-of-liquid-clusters-in-large-scale-molecular-dynamics-nucleation-simulations/
# Properties of Liquid Clusters in Large-scale Molecular Dynamics Nucleation Simulations

Raymond Angélil, Jürg Diemand (Institute for Theoretical Physics, University of Zurich, 8057 Zurich, Switzerland); Kyoko K. Tanaka, Hidekazu Tanaka (Institute of Low Temperature Science, Hokkaido University, Sapporo 060-0819, Japan)

July 19, 2019

###### Abstract

We have performed large-scale Lennard-Jones molecular dynamics simulations of homogeneous vapor-to-liquid nucleation, with atoms. This large number allows us to resolve extremely low nucleation rates, and also provides excellent statistics for cluster properties over a wide range of cluster sizes. The nucleation rates, cluster growth rates, and size distributions are presented in Diemand et al. [J. Chem. Phys. 139, 74309 (2013)], while this paper analyses the properties of the clusters. We explore the cluster temperatures, density profiles, potential energies and shapes. A thorough understanding of the properties of the clusters is crucial to the formulation of nucleation models. Significant latent heat is retained by stable clusters. We find that the clusters deviate remarkably from spherical, with significantly non-unity ellipsoidal axis ratios at critical cluster sizes. We examine cluster spin angular momentum, and find that it plays a negligible role in the cluster dynamics. The interfaces of large, stable clusters are thinner than planar equilibrium interfaces. At the critical cluster size, the cluster central densities are significantly lower than the bulk liquid expectations. These lower densities imply larger-than-expected surface areas, which increase the energy cost to form a surface, which lowers nucleation rates.

###### pacs:
05.10.-a, 05.70.Fh, 05.70.Ln, 05.70.Np, 36.40.Ei, 64.60.qe, 64.70.Hz, 64.60.Kw, 64.10.+h, 83.10.Mj, 83.10.Rs, 83.10.Tv

## I Introduction

A homogeneous vapor, when supersaturated, changes phase to liquid through the process of nucleation. This transformation is stochastically driven through the erratic formation of clusters, made up of atoms clinging together in droplets large enough that the free energy barrier is surpassed. Despite the ubiquity of this process in nature, attempts to model it have met with difficulty, because the properties of the nanoscale liquid-like clusters are not well known (Kalikmanov, 2013). Unlike real laboratory experiments (Sinha et al., 2010, for example), computer simulations offer detailed information on the properties and evolution of clusters. However, direct simulations of nucleation are typically performed using only a few thousand atoms, and are therefore limited to extremely high nucleation rates and to forming a small number of stable clusters (e.g. Wedekind et al., 2007; Napari, Julin, and Vehkamäki, 2009a). A large number of small simulations allows one to constrain the critical cluster properties in the high nucleation rate regime, but there seems to be limited information in the literature on critical cluster properties. An alternative approach is to simulate clusters in equilibrium with a surrounding vapor; however, the resulting cluster properties seem to differ significantly from those seen in nucleation simulations (Napari, Julin, and Vehkamäki, 2009a).

We report here on the properties of the clusters which form in large-scale molecular dynamics Lennard-Jones simulations. These direct simulations of homogeneous nucleation are much larger and probe much lower nucleation rates than any previous direct nucleation simulation. Some of the simulations even cover the same temperatures, pressures, supersaturations and nucleation rates as the recent Argon supersonic nozzle (SSN) experiment (Sinha et al., 2010) and allow, for the first time, direct comparisons to be made. The nine simulations we analyse here are part of a larger suite of runs; results of these runs pertaining to nucleation rates, and comparisons to nucleation models and to the SSN experiment, are presented in Diemand et al. (2013).

The large size of these simulations is primarily necessitated by the rarity of nucleation events at these low supersaturations. However, a further benefit gained from large simulations is the substantial number of nucleated droplets which are able to continue growing without significant decreases in the vapor density. This allows us to study, with good statistics, the properties of clusters as they grow, embedded within a realistic, unchanging environment. This is particularly important in understanding the role that the droplet's surface plays in the development of the droplet, as a bustling interface between the denser (and ever-growing) core and the vapor outside. The nucleation properties of the simulations, the cluster growth rates and size distributions, and comparisons to nucleation models are presented in Diemand et al. (2013), while here we explore the properties of the clusters themselves. Studying the properties of the nano-sized liquid clusters which form, both stable and unstable, is of service to understanding the details of the nucleation process and the reasons behind the shortcomings of the available nucleation models, and aids in the blueprinting and selection of ingredients for future ones.

Section II provides details on the numerics of the simulations. In Section III we present the temperatures of the clusters, and in Section IV we show the clusters' potential energies. Section V addresses cluster rotation and angular momentum. The shapes the clusters take on are detailed in Section VI. The cluster density profiles are explored in Section VII. Section VIII addresses cluster sizes, and we use this information to revisit nucleation theory in Section IX.

## II Numerical Simulations
An alternative approach is to simulate clusters in equilibrium with a surrounding vapor, however the resulting cluster properties seem to differ significantly from those seen in nucleation simulations Napari, Julin, and Vehkam?ki (2009a).\n\nWe report here on the properties of the clusters which form in large-scale molecular dynamics Lennard-Jones simulations. These direct simulations of homogeneous nucleation are much larger and probe much lower nucleation rates than any previous direct nucleation simulation. Some of the simulations even cover the same temperatures, pressures, supersaturations and nucleation rates as the recent Argon supersonic nozzle (SSN) experimentSinha et al. (2010) and allow, for the first time, direct comparisons to be made. The nine simulations we analyse here are part of a larger suite of runs: results of these runs pertaining to nucleation rates and comparisons to nucleation models and the SSN experimentSinha et al. (2010), are presented in Diemand et al. (2013)Diemand et al. (2013).\n\nThe large size of these simulations is primarily necessitated by the rarity of nucleation events at these low supersaturations. However, a further benefit gained from large simulations is the substantial number of nucleated droplets which are able to continue growing without significant decreases in the vapor density. This allows us to study, with good statistics, the properties of clusters as they grow, embedded within an realistic unchanging environment. This is particularly important in understanding the role that the droplet’s surface plays in the development of the droplet - as a bustling interface between the denser (and ever-growing) core, and the vapor outside. The nucleation properties of the simulations, the cluster growth rates and size distributions, and comparisons to nucleation models are presented in Diemand et al. (2013)Diemand et al. (2013), while here we explore the properties of the clusters themselves. Studying the properties of the nano-sized liquid clusters which form, both stable and unstable, is of service to understanding the details of the nucleation process, the reasons behind the shortcomings of the available nucleation models, and aids in the blueprinting and selection of ingredients for future ones.\n\nSection II provides details on the numerics of the simulations. In Section III we present the temperatures of the clusters and in Section IV we show the clusters’ potential energies. Section V addresses cluster rotation and angular momentum. The shapes the clusters take on is detailed in section VI. The cluster density profiles are explored in section VII. Section VIII will address cluster sizes and we use this information to revisit nucleation theory in section IX.\n\n## Ii Numerical Simulations\n\nThe simulations were performed with the Large-scale Atomic/Molecular Massively Parallel Simulator Plimpton (1995), an open-source code developed at Sandia National Laboratories. The interaction potential is Lennard-Jones,\n\n U(r)4ϵ=(σr)12−(σr)6, (1)\n\nthough truncated at , as well as shifted to zero. The properties of the Lennard Jones fluid depend on the cutoff scale, and our chosen cutoff is widely used in simulations, resulting in properties similar to Argon at reasonable computational cost. The integration routine is the well-known Verlet integrator (often referred to as leap-frog), with a time-step of , regardless of the simulation temperature. 
In the Argon system, the length, energy, and time units take on the corresponding physical values for Argon.

The initial conditions correspond to random positions (Panneton, L'Ecuyer, and Matsumoto, 2006) and random velocities corresponding to a chosen temperature, in a cube with periodic boundary conditions. The simulations are carried out at constant average temperature, but not constant energy: the velocities of all the atoms in the simulation are rescaled so that the average temperature of the simulation box remains constant (Wedekind, Reguera, and Strey, 2007). The condensation process converts potential energy into kinetic energy as the clusters form, adding heat to the clusters. However, because the number of atoms in the gas relative to the number in the condensed phase is so large, the amount of thermostatting required is negligible. Because the nucleation rates explored in these simulations are relatively low, very few clusters form relative to the number of monomers in the simulation. This means that the average heat produced through the transformation is extremely low, and so the amount of velocity rescaling necessary is almost negligible.

The initial densities of the simulations are indicated in table 1. The monomer density drops rapidly as the subcritical cluster distribution forms in the gas. The clusters are identified by the Stillinger (1963) algorithm, whereby neighbours with small enough separations are joined into a common group. The linking length is temperature dependent, and given in table 2. It is set by the distance below which two monomers would be bound in the potential (1), were their velocities to correspond to the thermodynamic average (Yasuoka, Matsumoto, and Kataoka, 1994; Tanaka et al., 2011). The type of initial conditions, the simulation time-step, thermostatting consequences, and the Lennard-Jones cutoff have been investigated through convergence tests in Diemand et al. (2013).

Figure 1: The only stable cluster which formed in run T6n55. It is made up of 7373 members, as identified by the Stillinger search criterion. Its density and potential energy profiles are shown in figure 6. Nucleation theory reasons that clusters are spherical, yet we often observe deformation, especially for small clusters, as well as at high temperatures.

The runs we have chosen to analyse here cover a broad range in temperature and in nucleation rate. All the results presented in this paper come from single snapshots taken at the end of each run. Because of the different nucleation rates, cluster growth rates, and critical cluster sizes, and because all runs were terminated at different times, different runs have markedly different size distributions at the time of termination. For example, while run T8n3 formed many stable clusters, run T6n55 formed only a single stable cluster, made up of 7373 atoms (pictured in figure 1), while T10n55 formed only clusters still below the critical size for this run. However, even a run which forms no stable clusters is still of value, as the subcritical droplets, whose properties we are interested in, are always present in the gas. Sometimes it is convenient to group the atoms of each cluster into one of two categories, according to whether they belong to the core or to the surface. Atoms are considered core atoms if they have at least a threshold number of neighbours within the search radius. This threshold is temperature dependent, and is chosen such that the number of neighbours at the bulk density is approximately constant. The cutoffs are given in table 2.
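To make the cluster definition concrete, here is a minimal Python sketch of Stillinger clustering and the core/surface split (our own illustrative implementation; the naive O(N²) neighbour search is for clarity only, and production analyses would use cell lists or tree-based searches and handle periodic boundaries, both omitted here):

```python
import numpy as np
from collections import deque

def stillinger_clusters(pos, r_link):
    """Label atoms: those closer than the linking length r_link (table 2)
    belong to the same cluster. Returns one integer label per atom."""
    n = len(pos)
    labels = -np.ones(n, dtype=int)
    cid = 0
    for seed in range(n):
        if labels[seed] >= 0:
            continue
        labels[seed] = cid
        queue = deque([seed])
        while queue:                      # breadth-first flood fill
            i = queue.popleft()
            d = np.linalg.norm(pos - pos[i], axis=1)
            for j in np.where((d < r_link) & (labels < 0))[0]:
                labels[j] = cid
                queue.append(j)
        cid += 1
    return labels

def core_mask(pos, r_link, n_min):
    """An atom is a 'core' atom if it has at least n_min neighbours."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    neighbours = (d < r_link).sum(axis=1) - 1   # exclude self
    return neighbours >= n_min
```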
Unless otherwise indicated, error bars indicate the root mean square scatter in the measured quantity. In many cases, it is instructive to make comparisons between the liquid in the clusters of various sizes (and therefore curvatures) and the bulk liquid. To facilitate this comparison, we perform supplementary MD saturation simulations of the vapor-liquid phase equilibrium (Werth et al., 2013; Baidakov et al., 2007; Trokhymchuk and Alejandre, 1999; Mecke, Winkelmann, and Fischer, 1997), in order to calculate the relevant quantities directly. Appendix B details these simulations, whose parameter values are given in table 3.

## III Temperatures

We define the temperature of an ensemble of atoms from their mean kinetic energy,

$$kT \equiv \frac{2}{3}\langle E_{\rm kinetic}\rangle = \frac{1}{3N}\sum_{i=1}^{N} m v_i^2, \qquad (2)$$

where $$N$$ is the total number of atoms in the ensemble, $$m$$ is the atom mass, and the velocities are those relative to the simulation box. For small, out-of-equilibrium clusters it can be troublesome to define temperature as an average over kinetic energy. However, by taking ensemble averages, the large number of small, subcritical clusters in the simulations mitigates this complication. The difference between the cluster temperatures and the average bath temperature is plotted for a few runs in figure 2, and shows that clusters smaller than and at the critical cluster size are in thermodynamic equilibrium with the gas. This can be expected, as sub-critical clusters are as likely to accrete a monomer as they are to evaporate one: because the growth rate is equal to the loss rate, subcritical clusters quickly lose latent heat through the many interactions that they undergo as they random-walk the size ladder. Wedekind et al. (2007) find a similar behaviour in their simulations. Their temperature differences are larger and set in at smaller cluster sizes than those in our simulations, as expected at their much higher nucleation rates.

Figure 2: The difference between the cluster temperature and the simulation average. The grey bars indicate the r.m.s. scatter. The solid lines indicate a sliding average over a window of size 12 in the cluster member count. The vertical dashed lines indicate the estimated critical cluster sizes, using the first nucleation theorem. Only stable clusters retain latent heat. At all temperatures except the highest (kT = 1.0ϵ), the clusters are hotter than the gas due to the latent heat of formation, which has not yet had enough time to dissipate back into the gas. The thick solid line in the second panel shows the predicted ΔT_C = T_C − T_0 from CNINT (Feder et al., 1966); note that ΔT_C < ΔT for small clusters and ΔT_C ≃ ΔT for clusters larger than about 50 atoms, see for example Wedekind et al. (2007).

Above the critical size, the cluster temperatures relative to the gas temperature increase with the cluster size. For simulations at the same temperature, the higher supersaturation case shows a higher latent heat retention. This is likely due to the higher growth rates, caused by the higher collision rate and also by the higher probability that a monomer sticks to a cluster (Tanaka et al., 2011; Diemand et al., 2013).

The only set of runs without a significant post-critical signal are the high temperature runs. It is possible that evaporation (see Diemand et al., 2013) is efficient enough at the low supersaturations of these runs to keep the clusters closer to thermal equilibrium with the gas.
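For reference, the temperature definition of equation (2) is a one-liner. A minimal sketch in reduced units (k = 1), with velocities taken relative to the simulation box as in the text:

```python
import numpy as np

def cluster_kT(velocities, mass=1.0):
    """kT = (2/3) <E_kin> = (1/(3N)) sum_i m v_i^2, equation (2)."""
    v = np.asarray(velocities)
    return mass * np.sum(v ** 2) / (3.0 * len(v))

# The latent-heat signal of figure 2 is then the ensemble average of
# cluster_kT(v) - kT_bath over all clusters of a given size.
```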
If we divide the atoms of every cluster into two population types, core atoms and surface atoms, based on the number of neighbours that each atom has, we can investigate their temperature differences. Across all of our simulations, we find no significant difference between the core temperatures and the surface temperatures. The clusters are conductive enough for the surfaces to maintain thermodynamic equilibrium with their cores.

Figure 3: Directly measured temperature probability distributions (circles) for a few cluster sizes for the run T10n6. All clusters below the critical size have average temperature (the dotted vertical line) equal to the gas temperature (see figure 2); however, the most probable temperature T_C (solid vertical lines) is lower for small clusters. The theoretical prediction of equation (3) from McGraw and LaViolette (1995) fits (solid, curved lines) these measured distributions extremely well.

Figure 3 shows the temperature probability distribution for clusters of various sizes, for the high-temperature run T10n6. Due to the asymmetry of the distribution for smaller clusters, the average temperature is not equal to the most probable one, T_C. The distributions as a function of cluster size were derived by McGraw and LaViolette (1995):

$$P(T) = K \exp\left[C_v\,\frac{T_C - T}{T_C} + C_v \ln\!\left(\frac{T}{T_C}\right)\right], \qquad (3)$$

where $$K$$ provides normalisation, and $$C_v$$ is the cluster's heat capacity. This predicted form fits the distributions shown in Figure 3 extremely well and allows us to derive the most probable temperature very accurately. We fit this curve to the temperature probability distributions of all runs and cluster sizes where we have sufficient statistics, and plot the resulting T_C values in figure 4. Figures 2 and 4 also show comparisons with Feder et al.'s (1966) classical non-isothermal nucleation theory (CNINT), assuming a sticking probability of one, no carrier gas and no evaporation. According to this theory, the temperature offset is negative below the critical cluster size, zero for critical clusters, and positive above the critical size. The CNINT agrees only qualitatively with the MD results. Discrepancies are expected because the classical nucleation theory does not match the critical cluster sizes, size distributions and nucleation rates found in our MD simulations (Diemand et al., 2013). Similar qualitative agreement was found in simulations with much higher nucleation rates in Wedekind, Reguera, and Strey (2007).

Figure 4: The ratio of the most probable cluster temperatures, T_C, from fits to equation (3), to the gas temperature. The solid lines indicate the predictions from CNINT (Feder et al., 1966) for runs T4n10 and T10n6.

For the heat capacity C_v, all simulations are consistent with a simple linear fit of C_v against cluster size, with a slope per molecule almost equal to the ideal value for a monoatomic gas.
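In practice, T_C and C_v can be extracted by fitting equation (3) to the measured temperature histograms. A minimal Python sketch, with synthetic placeholder data standing in for the measured distributions:

```python
import numpy as np
from scipy.optimize import curve_fit

def p_of_t(T, K, Cv, Tc):
    """McGraw-LaViolette temperature distribution, equation (3).
    Peaks at T = Tc; tends to a Gaussian of variance Tc^2/Cv for large Cv."""
    return K * np.exp(Cv * (Tc - T) / Tc + Cv * np.log(T / Tc))

T = np.linspace(0.8, 1.2, 60)        # histogram bin centres (units of eps/k)
P = p_of_t(T, 1.0, 45.0, 1.0)        # stand-in for a measured, normalised histogram

(K, Cv, Tc), _ = curve_fit(p_of_t, T, P, p0=(1.0, 30.0, 1.0))
print(Tc, Cv)                        # most probable temperature, heat capacity
```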
We remark that:

- For clusters larger than the critical cluster size, significant latent heat is retained by the clusters. An exception to this are the high temperature runs, for which the temperature excesses of clusters greater than the critical size are consistent with zero (within the scatter): no clear latent heat retention signal is observed. The excess temperature is marginally larger for the lower gas temperature runs than for the higher ones.

- Within the scatter, the cluster core temperatures are the same as the surface temperatures. Heat is conducted efficiently enough that latent heat, which is added at the surface, is transferred to the core rapidly enough to maintain equilibrium, within the bounds of statistical veracity.

- The temperature probability distributions as a function of cluster size have the expected form, reshaping from the Maxwell distribution towards the normal distribution as cluster size grows.

- For clusters smaller than the critical size, the most probable temperature relative to the gas temperature is universal across all runs.

## IV Potential Energies

In atomistic theories of nucleation, e.g. the Fisher droplet model (Fisher, 1967) and other atomistic models (see e.g. Kashchiev, 2000, and Kalikmanov, 2013, for details), one relates the surface energy of a cluster to its total potential energy $$E_{\rm pot}(i)$$:

$$E_{\rm pot}(i) = i\,e_{{\rm pot},l} + W(i), \qquad (4)$$

where $$e_{{\rm pot},l}$$ is the potential energy per particle in the bulk liquid phase. In this approach the surface energy $$W(i)$$ of a cluster is simply the difference between its actual potential energy and the one it would have if all its members were embedded in bulk liquid. We have tried this approach using the bulk liquid potential energies measured in the equilibrium simulations described in the Appendix, and find that the resulting surface terms are too large. In the atomistic model, the free energy differences at saturation (i.e. where the volume term vanishes) are equal to the surface term, with a constant shift to have zero for monomers:

$$\Delta G_{\rm atomistic}(i, S=1) = W(i) - W(1). \qquad (5)$$

For $$S \neq 1$$, the classical volume term is no longer null:

$$\Delta G_{\rm atomistic}(i) = -(i-1)\,kT\ln(S) + W(i) - W(1). \qquad (6)$$

This estimate for the free energy lies far above the true free energy landscape, which we reconstructed from the size distribution in the simulation using a new analysis method (Tanaka, K. et al., 2014, in preparation). Figure 16 plots this reconstruction. In comparison, the atomistic theory free energy curve lies far above these estimates, reaching a larger critical size and a higher barrier. The same was found in other simulations, i.e. this simple implementation of an atomistic model seems to overestimate the surface energy and therefore underestimates the nucleation rates by large factors. On the other hand, if not the bulk liquid phase value is chosen for $$e_{{\rm pot},l}$$, but the core potential energy at the given size (red dots on figure 5), then the corresponding surface energy is too low.

Figure 5: Potential energies per particle split into the two population types: core and surface. The round red markers indicate the potential energies of the core particles, and the crosses indicate those of the surface particles. Squares correspond to the total per-particle potential. The horizontal black lines indicate the bulk potential energies from supplementary simulations at the gas target temperature. See appendix B.

The lower panel of figure 6 shows the potential energy per particle as a function of distance from the center-of-mass for the large cluster pictured in figure 1. It reaches the same values as in the bulk liquid at the same temperature.
While we observe the kinetic energy differences (i.e. temperature differences, see Section III) between core and surface atoms to be consistent with zero, the potential energies of the core are expected to be considerably lower than those of the surface particles, as they have more neighbours. Figure 5 plots the potential energies of core and surface atoms for a few runs. As the clusters grow, they become more tightly packed, and so both the surface and core potential energies drop. The potential energies per core particle are expected to reach a minimum in the limit that the clusters grow large enough to have core potential energies equal to the bulk liquid; the cluster size at which this occurs differs between our low-temperature and high-temperature simulations.

Figure 6: The upper panel shows the binned center-of-mass density profile of the large cluster pictured in figure 1. Compared to the gas, this cluster has a temperature excess of ΔkT = 0.15ϵ. The fit (in green) to equation (10) puts the central density at ρ_l = 0.825 m/σ³, the midpoint of the transition region at R = 12.56σ, and the length of the transition region at d = 1.90σ. Bulk simulations at kT = 0.75ϵ have d = 2.15σ. Its inner density agrees (within the scatter) with the bulk case at this higher temperature. The lower panel shows its per-particle potential energy, consistent with the bulk expectation at the raised temperature.

To predict nucleation rates accurately one needs a good estimate of the surface term for clusters near the critical size. Figure 5 shows that the core particles in these relatively small clusters are still strongly affected by the surface and the transition region, i.e. their potential energies per particle are far less negative than those found for true bulk liquid particles. This discrepancy might be related to the failure of the simple atomistic model described above. Replacing $$e_{{\rm pot},l}$$ in Eqn. (4) with the less negative values actually found in the cores of critical clusters can significantly improve the estimates, at least for small cluster sizes, and lead to better nucleation rate estimates.

## V Rotation

Because clusters grow through isotropic interactions with vapor atoms, it can be expected that the spin of the clusters decreases for larger clusters. The angular momentum of a single particle in a cluster is

$$\mathbf{j}_i \equiv \mathbf{r}_{i,{\rm C.o.M}} \times \mathbf{v}_{i,{\rm C.o.M}}, \qquad (7)$$

where C.o.M denotes that the quantity is taken relative to the cluster center-of-mass, and we have set the mass to unity. The magnitude of the total angular momentum of the cluster is then

$$|\mathbf{J}| = \left|\sum_{i}^{N} \mathbf{j}_i\right|. \qquad (8)$$

We define the related quantity

$$L \equiv \sum_{i}^{N} |\mathbf{j}_i|. \qquad (9)$$

The dimensionless quantity $$|\mathbf{J}|/L$$ provides an indication of the extent to which the members of the cluster spin in a common direction. Figure 7 plots this ratio as a function of cluster size for two runs at the same temperature. Also plotted are the values of this quantity from constant-density bulk liquid simulations. This is done by evaluating the quantity for atoms within a randomly centered spherical boundary of different sizes. This provides a noise estimate to which the nucleation simulation results may be compared. Across all runs we find that for small cluster sizes the spin is slightly above the noise level, but decays rapidly for larger clusters. Across all simulations the ratio becomes small at large sizes, with the high temperature runs at the high end, and the low temperature runs at the lower. This is to be expected if the axis of rotation is random relative to the ellipsoidal axis.
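The ratio |J|/L of equations (7)-(9) is straightforward to evaluate. A minimal Python sketch, with unit masses as in the text:

```python
import numpy as np

def spin_ratio(pos, vel):
    """|J|/L for one cluster: 1 for rigid common rotation, ~0 for
    uncorrelated motion. Positions and velocities are taken relative
    to the cluster centre of mass and its bulk velocity."""
    r = pos - pos.mean(axis=0)
    v = vel - vel.mean(axis=0)
    j = np.cross(r, v)                    # per-particle j_i, equation (7)
    J = np.linalg.norm(j.sum(axis=0))     # |sum_i j_i|, equation (8)
    L = np.linalg.norm(j, axis=1).sum()   # sum_i |j_i|, equation (9)
    return J / L

# A randomly drawn 'cluster' sits near the finite-size noise floor:
rng = np.random.default_rng(1)
print(spin_ratio(rng.normal(size=(100, 3)), rng.normal(size=(100, 3))))
```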
Figure 7: This ratio (see definitions (8) and (9)) is an indicator of the extent to which the members of a cluster rotate in unison. Unity corresponds to harmonious rotation, zero to dissonance. The cluster spin damps rapidly as the clusters grow. While we plot this quantity for only two runs here, it exhibits similar behavior for all runs. For comparison, this ratio was calculated in the bulk to determine the noise level, as this signal is expected to average to zero for the bulk. Finite-size effects however contribute an intrinsic noise to this quantity. We estimate this by randomly centering spheres in the bulk, and calculating the ratio for the enclosed atoms.

## VI Shapes

One assumption usually made by nucleation models is that the clusters are spherical (with few exceptions, see e.g. Prestipino, Laio, and Tosatti, 2012). This is motivated by the sphericity of large liquid droplets, which bear this shape as it minimises the surface area, and therefore the surface energy. While the cluster shapes in our simulations can deviate significantly from any symmetry, for the sake of simplicity we will investigate the cluster shapes by analysing the extent to which the clusters are ellipsoidal. We use principal component analysis to calculate the cluster ellipticities. For each cluster, we calculate the semi-major ellipsoidal axes by following the procedure outlined in Zemp et al. (2011), which applies this approach to investigate the shapes of dark matter substructure in simulations. We reintroduce this procedure in appendix A. Figure 8 illustrates the results of this analysis for a typical cluster from one of our simulations.

Figure 8: The long, medium and short ellipsoidal semi-major axes for a 51-member cluster from T5n40, as acquired from the principal component analysis detailed in section VI. This cluster has axis ratios a/c = 0.45 and b/c = 0.63.

Figure 9: Axis ratios as obtained through principal component analysis for 5 runs. Error bars indicate the r.m.s. spread. At any given size, high temperature clusters are more elongated than the low temperature clusters. The solid lines in the 2nd panel indicate the axis ratio estimates for particles selected within random spheres from a bulk constant-density liquid, to illustrate the amount of apparent elongation caused by sampling a sphere with a small number of atoms.
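A minimal, non-iterative sketch of this shape measurement follows. The full procedure of Zemp et al. (2011) iterates the selection volume; here we simply diagonalise the second-moment tensor of the member positions once and assume a uniform-density ellipsoid, for which the second moment along a principal axis is one fifth of the squared semi-axis:

```python
import numpy as np

def ellipsoid_axes(pos):
    """Semi-axes (a <= b <= c) from the eigenvalues of the second-moment
    tensor, assuming a constant-density ellipsoid (<x_k^2> = a_k^2 / 5)."""
    r = pos - pos.mean(axis=0)
    M = (r[:, :, None] * r[:, None, :]).mean(axis=0)   # 3x3 moment tensor
    eigvals = np.sort(np.linalg.eigvalsh(M))           # ascending order
    return np.sqrt(5.0 * eigvals)                      # short, medium, long

# Sanity check on a synthetic elongated particle cloud:
pos = np.random.default_rng(0).normal(size=(500, 3)) * [0.5, 0.7, 1.0]
a, b, c = ellipsoid_axes(pos)
print(a / c, b / c)   # axis ratios, roughly 0.5 and 0.7 here
```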
These ellipticities are rather significant, and contrary to standard assumption of spherical shapes in nucleation models.", null, "Figure 10: The distribution of axis ratios for a single simulation for cluster sizes indicated in the legend inset.\n\nTo explore to origin of these non-spherical cluster shapes we performed supplementary simulations: Eight spherical liquid clusters of size are placed in a gas at saturation and . After letting the system equilibrate for 10’000 time-steps, we perform the PCA on the clusters. Their axis ratios are in the second panel (blue crosses and pink circles) of figure 9. We find that they bear the same ellipsoidal distortions as the clusters in the nucleation simulation. We therefore conclude that it unlikely that the clusters’ ellipsoidal shapes are in some way dependent on their formation through the nucleation process. Section V investigates whether angular momentum plays a significant role in cluster dynamics, and finds it relatively insignificant. The relatively low spins suggest that the large ellipticities (figure 9) are not supported by angular momentum.\n\nThis leaves us to conclude that the main cause of the cluster ellipticities are thermal fluctuations. High temperatures lead to larger surface fluctuations, and cause larger deviations from sphericity found at high temperatures. Interestingly, the average differences between the axis lengths are nearly independent of cluster size, but increase with temperature: We find that and for all clusters in the simulations. Large clusters are rounder (axis ratios closer to one) mainly because their larger size makes the nearly constant absolute differences become smaller in relative terms. The average differences grow with temperature and at we find and . Short animations of evolving clusters are available, in which their non-sphericity is clearly visible(supp, ).\n\n## Vii Central Densities and Transition Regions\n\nAn essential aspect of tiny clusters is the interface layer between the constant-density, liquid interior and the gas outside. The surface energy of the cluster depends on the properties of this layer. Of particular importance to the classical nucleation theory is the surface tension of the droplet, which depends on the interfacial pressure profile. Classical nucleation models assume that clusters are homogeneous, spherical droplets, with a sharp, well defined boundary. At this boundary the fluid properties are assumed to jump from bulk liquid properties on the inside to bulk vapor properties on the outside (see for example Kalikmanov, V.I. (2013)(Kalikmanov, 2013) and Kashchiev (2000)Kashchiev (2000)). In this section we show that the cluster properties found in our simulation deviate significantly from these assumptions.\n\nNumerical simulationsChapela et al. (1977) of liquid-vapor interfaces have shown that the density transition is continuous, and for spherical droplets, is well-approximated by\n\n ρ(r)=12[ρl+ρg−(ρl−ρg)tanh(2r−Rd)], (10)\n\nwhere is the density within the cluster, the gas density, the corona position, and its width. In each cluster’s center-of-mass frame, we bin the spherical number density, using a bin size of The number density profiles for clusters of the same size are used to make ensemble averages, to which equation 10 can be fit.", null, "Figure 11: The spherical density profile function (10) relies on the parameter d to characterise the interface length, or, the size of the transition region. 
Because the clusters are non-spherical, a non-zero interface length can be expected even if the transition is abrupt. This colormap shows the amplitude of this artificial contribution to d as a function of axis ratio. The white dots are the axis ratios for the clusters of T8n3. The red dot marks the critical cluster size.\n\nBecause the interface width is computed from fits to a spherical density profile, yet the clusters do not exhibit spherical symmetry, even a sharp transition region would result in a density profile with . Under the more realistic assumption of ellipticity, this artificial contribution to can be estimated. Figure 11 shows the ratio of and the long axis length , as a function of the axis ratios. We find that this artificial contribution to is smaller than our measured the interface widths in practically all cases. In other words, this effect, which in principle could lead us to overestimate significantly, is actually negligible.", null, "Figure 12: Results from the fit parameter for the central density ρl in equation (10). The error bars show the error in the fit. The horizontal dashed black lines are the liquid densities from our bulk simulations, at the gas temperature. The solid black lines represent the bulk density at the running average temperature over size. For clusters much larger than the critical cluster size, the bulk densities at the cluster temperatures agree with the cluster densities. However, there is a significant discrepancy at the critical cluster size of up to 25%\n\nThe upper panel of Figure 6 shows the binned center-of-mass density profile of the large cluster shown in figure 1, as well as the fit. Figures 12 and 13 show that the inner density and interface width respectively depend on cluster size: Generally, both the inner density and the interface width increase with cluster size (with the exception of the anomalous inner densities in small, high temperature clusters, see below). Inner densities and interface widths may then be compared to the analogous quantities from equilibrium simulations of planar vapor-liquid interfaces at various temperatures.", null, "Figure 13: The parameter d (equation 10) is the size of the interphase interfacial region. For clusters of size i∼1000, the non-isothermal bulk values overestimate d by 10−30%. The dot-dashed curves show dellipticity - the contribution that the non-spherical shape makes on the determination of d from spherical density profile fits. Refer to figure 11 for this quantity’s ellipticity dependence. This contribution is always lower than the observed signal.\n\nWe note that:\n\n• At small sizes (), the clusters hardly have a core, and so equation (10) gives them only a transition component.\n\n• At the critical cluster sizes , the inner densities are significantly lower than in bulk liquid. This implies a surface area larger than expected from classical nucleation theories.\n\n• For the clusters are warmer than the gas (see figure 2), and due to thermal expansion, have a lower density than the bulk would have at the gas temperature. The bulk densities taken at the cluster temperatures agrees with the central cluster densities only for our very largest clusters ().\n\n• For all simulations and cluster sizes (i.e. subcritical and post-critical), the cluster transition regions are thinner than the planar. 
equilibrium interfaces simulated at the same temperature.\n\nMonte-Carlo simulations ten Wolde and Frenkel (1998), find that critical clusters have inner densities equal to that of bulk liquid, which is at tension with our observations. Napari et al. (2009)(Napari, Julin, and Vehkam?ki, 2009b) estimated interface widths by comparing the sizes of cluster determined with different cluster definitions. They conclude that critical clusters from direct nucleation simulations have a thicker transition region than spherical clusters in equilibrium with the surrounding vapor. Due to the different simulations and analysis methods a detailed comparison is difficult.\n\nAs mentioned above, figure 12 shows that the inner densities generally increase with cluster size, except for small, high temperature clusters. Fitting the density profile of small clusters is difficult, because they do not have a well defined, constant inner density. This could affect the resulting values. To confirm the surprising anomaly in the inner densities of small, high temperature clusters we also measure the central cluster densities within a sphere of the cluster centre of mass directly: Figure 14 plots the ensemble-averaged number of particles within . This alternative measure confirms the findings from directly, without any fitting procedure: Generally, as the clusters grow, they become more tightly bound, leading to an increased central density. The small clusters in our high temperature runs at show a different trend: the central density first decreases, and then increases again. At the minimum central density occurs at , and at it occurs in the range This anomaly is evident both in the values from the fits and in the densities within the central . We are currently unable to explain this behaviour.", null, "Figure 14: Average number of atoms (right scale) and density (left scale) within 1σ of the cluster centre of mass. Generally, the central densities increase with size and our largest clusters reach the bulk liquid value (see figure 12). However, at high temperature (kT=0.8,1.0ϵ), we observe a surprising anomaly: the central densities drop and reach a well defined minumum before they rise again.\n\n## Viii Cluster sizes\n\nWe have two means for measuring cluster sizes. The one is with the principle component analysis procedure, detailed in section VI and appendix A, and the other with density profile fits, explained in section VII. The principle component analysis route, under the assumption of a constant density ellipsoid, yields the three ellipsoidal axes and for each cluster. The second method assigns to each density profile the centre of the transition region, . For nucleation, sizes are important because they provide an estimate for the cluster surface area, which helps set the total surface energy - a key component for nucleation theories. The simplest nucleation models assume that a cluster with members is spherical, and has a density equal to that of the bulk at the same temperature, from which a size, and therefore surface area, can be calculated. For three ellipsoidal axes and , the surface area may be analytically obtained using the approximate relation Klamkin (1971)\n\n Sellipsoid≈4π[apbp+apcp+bpcp3]1pwithp=1.6075, (11)\n\nwhich has a worst-case relative error . 
Figure 15 compares, using our two size-measuring methods, the sizes and surface areas of the clusters to the standard nucleation model assumptions.", null, "Figure 15: The solid green lines show the axes sizes (from principle component analysis) relative to the bulk expectation. Fitting functions were used to compute all the ratios shown here. The orange depict R from spherical density profile fits (equation 10), relative to the bulk value. The dashed lines are the surface areas corresponding to these two size estimates. PCA overestimates the sizes significantly at small sizes. Density profile fits give a more conservative value for the sizes and surface areas, yet still significantly larger than nucleation models’ predictions.\n\nBoth methods give larger sizes and surface areas than the classical model assumptions, due to the densities being lower than the bulk. Both methods suffer from large uncertainties and it is unclear how their resulting surface areas relate to the area of the true (unknown) surface of tension. The surface of tension lies somewhere in the transition regions. We have tried to calculate the radius of the surface of tension and the surface energy by assuming spherical interfaces and using the Irving-Kirkwood(Irving and Kirkwood, 1950a) pressure tensor approach applied to spherical droplets(Irving and Kirkwood, 1950b; Hi Lee, 2012). However, the transient nature of the non-equilibrium clusters in our nucleation simulations does not allow for the accumulation of strong-enough statistics for us to get a useful surface energy signal. We are therefore unable to constrain the cluster surface tension and the location of their true surface of tension.\n\nFor critical clusters we observe ratios from about 0.6 to 0.85 (see Figure 13). This introduces very large possible shifts in the surface areas: Placing the surface of tension at instead of would increase the critical cluster sizes by 30 to 43 percent, and their surface areas by factors of 2.2 to 2.9. Setting the surface of tension to instead of would decrease surface areas by factors of 2.9 to 5.3. These examples illustrate how large the uncertainties in the area of the surface of tension are, which introduces huge uncertainties of many orders of magnitudes into any nucleation rate predictions based on these surface areas.\n\nCompared with the spherical density profile based size definition, which uses the midpoint of the transition region as cluster radius, the principle component analysis method usually gives larger sizes. This is because of the assumption that clusters are constant density ellipsoids, when converting the eigenvalues to axis lengths using equation 20. The PCA analysis weights outer members heavily in the computation, yet these outer members are not part of a constant density neighbourhood, because they belong to the tail of the transition region. This effect decreases as the size of the transition region becomes small relative to the cluster size, i.e. when . At low temperatures, the PCA route yields smaller clusters than the density profile method. This, because low-temperature clusters are more spherical than higher temperature ones.\n\n## Ix Revisiting nucleation models\n\nNucleation models for the free energy of formation aspire to find the balance between the energy gain and cost due to creation of volume and surface. The volume term is well-understood as its contribution to the formation energy dominates in the large-cluster limit, whose properties are therefore straightforward to verify. 
Nucleation models' shortcomings are thought to be due to an insufficient understanding of the surface energy contribution to the free energy, which dominates for small clusters, and which is therefore difficult to verify. Most nucleation models in the literature offer various forms for the surface energy component. For example, a common choice for the surface energy expresses the surface tension of a spherical cluster as a correction to the planar surface tension. This prescription for the free energy takes the form (Oxtoby, 1992; Laaksonen, Ford, and Kulmala, 1994)

$$\Delta G_i = \underbrace{-\,i\,kT\ln S}_{\text{volume energy}} \;+\; \underbrace{\underbrace{\gamma_\infty}_{\text{planar value}}\,\underbrace{\left(1-\frac{2\delta}{r_i}+\frac{\epsilon}{r_i^{2}}\right)}_{\text{curvature correction}}}_{\text{surface tension}}\;\underbrace{4\pi r_i^{2}}_{\text{surface area}}, \tag{12}$$

where $r_i$ is the cluster radius; the product of the surface tension and the surface area forms the surface energy. The Dillmann-Meier approach (Dillmann and Meier, 1991) lets the $\delta$ term play the role of a Tolman-like correction (Tolman, 1949). The SP model as used in Tanaka et al. (Tanaka et al., 2011) and Diemand et al. (Diemand et al., 2013) makes the choice

$$\delta = -\frac{kT}{8\pi\gamma_\infty r_0}\,\xi, \qquad \epsilon = 0, \tag{13}$$

where $\xi$ is set using the second virial coefficient, and

$$r_0 = \left(\frac{3}{4\pi\rho}\right)^{1/3},$$

with the density $\rho$ taken to be equal to the bulk density. The classical nucleation model, on the other hand, lets the surface tension take on the planar value regardless of cluster size, setting $\delta = \epsilon = 0$. Both the CNT and SP models, however, along with many others, make the same choice for the surface area, setting it to that of a sphere of $i$ members at the bulk density. In this section we explore the effect of replacing this surface area estimate with the directly-measured values

1. from principal component analysis (VI), and

2. from density profile fits (VII).

For the surface tension, we use the SP model parameters (13). We impose the further stipulation that the free energy of formation for a cluster of size one is zero:

$$\Delta G_i \rightarrow \Delta G_i - \Delta G_1. \tag{14}$$

Figure 16 plots modelled free energy curves for a single run, and compares them to a kinematically reconstructed (Tanaka, K. et al. (2014), in preparation) free energy. Our techniques for estimating the sizes (see section VIII and figure 15) show that because the cluster densities are lower than the bulk values, the surface areas are larger than the traditional nucleation model surface area assumptions. This increases the cost of forming a surface, lowering nucleation rates. Figure 17 compares the resultant nucleation rates to those measured directly from the simulation using the Yasuoka-Matsumoto (Yasuoka and Matsumoto, 1998) (or threshold) method. In almost all cases, the directly-measured surface areas lower the nucleation rate. The density profile surface area estimates bring the model predictions closer to the measured rates. However, the PCA surface area estimates significantly underestimate the nucleation rates, especially at high temperature, where surface fluctuations dominate the size measuring method. One should not expect to retrieve nucleation rates in perfect agreement with the measurements from the procedure used in this section, as our 'empirical' surface energy model still relies on a theoretical surface tension model, which likely has shortcomings unfortunately typical of droplet surface tension models.
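To make the bookkeeping of equations (12)-(14) concrete, here is a hedged numerical sketch; the temperature, supersaturation, surface tension, density, and ξ values are illustrative placeholders rather than values from the paper, and the code simply reproduces the SP-model formulas as reconstructed above:

```python
import numpy as np

# Illustrative placeholders (NOT values from the paper):
kT    = 0.8    # temperature, units of epsilon
lnS   = 1.0    # log of the supersaturation S
gamma = 1.0    # planar surface tension gamma_inf, epsilon/sigma^2
rho   = 0.79   # bulk liquid density, 1/sigma^3
xi    = 1.0    # SP-model parameter from the second virial coefficient

r0    = (3.0 / (4.0 * np.pi * rho)) ** (1.0 / 3.0)   # radius per particle
delta = -kT * xi / (8.0 * np.pi * gamma * r0)        # eq. (13), with eps = 0

def delta_G(i, area=None):
    """Eq. (12): volume term plus curvature-corrected surface term.
    'area' lets a directly-measured surface area replace the bulk 4*pi*r^2."""
    r = r0 * i ** (1.0 / 3.0)
    A = 4.0 * np.pi * r**2 if area is None else area
    return -i * kT * lnS + gamma * (1.0 - 2.0 * delta / r) * A

i  = np.arange(1, 301)
dG = delta_G(i) - delta_G(1)   # eq. (14): pin dG at i = 1 to zero
print("critical size:", i[np.argmax(dG)], "barrier (epsilon):", round(float(dG.max()), 2))
```

Replacing `A` with the PCA or density-profile areas of figure 15 raises the surface term and hence lowers the implied nucleation rate, which is the comparison made in figure 17.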
Given that reasonable size measurements improve the model predictions, we are led to conclude that it is not just the surface tension modelling which needs improvement: models must also address surface area estimates directly, taking into account the not-yet-bulk central densities of clusters.", null, "Figure 16: The solid lines are free energy curves for the run T8n3, each using different estimates for the surface area (see figure 15). All curves use the semi-phenomenological model for the surface tension. The lower-than-expected bulk densities lead to more significant surface terms (equation (12)). The black dashed line is the equilibrium free energy component, calculated from the cluster size distribution, and the solid black line is the reconstructed free energy [Tanaka et al. (2014)]. The bulk, density profile, and PCA curves correspond to nucleation rates 3.5⋅10⁻⁷, 5.2⋅10⁻⁸, and 9.7⋅10⁻¹⁷ σ⁻³τ⁻¹ respectively. The directly-measured MD value lies at 2.6⋅10⁻¹⁰ σ⁻³τ⁻¹.", null, "Figure 17: Using the SP model for the cluster surface tension, and our three different surface area estimates, we can construct free energy curves like those in figure 16 for each simulation, and compare the nucleation rates they correspond to, with those we measure directly from the simulations.

## X Conclusions

This work offers a detailed description of cluster formation in unprecedented large-scale Lennard-Jones molecular dynamics simulations of homogeneous vapor-to-liquid nucleation. Our main findings are

• Significant latent heat is retained by large, stable clusters. Small, sub-critical clusters on the other hand have the exact same average temperature as the surrounding vapor.

• Cluster shapes deviate significantly from spherical: ellipsoidal axis ratios for critical cluster sizes typically lie well below unity.

• Cluster spin is small and plays a negligible role in the cluster dynamics.

• For critical, sub-critical, and post-critical clusters, the central potential energies per particle are significantly less negative than in the bulk liquid. They reach the bulk values only at large sizes.

• Central cluster densities generally increase with cluster size. However, for small, high temperature clusters we uncover a surprising exception to this rule: their central densities decrease with size, reach a minimum, and then join the general trend of increasing central densities with larger sizes.

• For critical and sub-critical clusters, the central densities are significantly smaller than in the bulk liquid. At the critical cluster size, the cluster central densities are significantly lower than the bulk expectations. This implies a surface area larger than expected from classical nucleation theories.

• Large clusters are warmer than the gas (see figure 2) and, due to thermal expansion, have a lower density than the bulk would have at the gas temperature. The bulk densities and potential energies taken at the cluster temperatures agree with the central cluster properties only for our very largest clusters.

• For all simulations and cluster sizes (i.e. subcritical and post-critical), the cluster transition regions are thinner than the planar equilibrium interfaces simulated at the same temperature.

• Cluster size measurements suggest larger sizes than assumed in classical nucleation models, implying lower-than-expected nucleation rates.
However, the exact area of the true surface of tension remains unknown, as does the surface tension itself - a major source of uncertainty in nucleation rate predictions.

• Across all the cluster properties examined in this paper, there exists significant spread for clusters at each size. The standard approach to nucleation theory assumes all clusters of the same size have the same properties. It therefore assigns the same surface energy to all clusters of a given size, rather than distributing them into disparate population types. The observed scatter in the cluster properties at each size suggests that the development of nucleation theories which address this may lead to a more realistic description of the process.

###### Acknowledgements.
We thank the referees for useful comments. We acknowledge a PRACE award (36 million CPU hours) on Hermit at HLRS. Additional computations were performed on SuperMUC at LRZ, on Rosa at CSCS and on the zBox4 at UZH. J.D. and R.A. are supported by the Swiss National Science Foundation.

## Appendix A Determining cluster shapes with principal component analysis

We define the tensor

$$M \equiv \int_V \rho(\mathbf{r})\,\mathbf{r}\mathbf{r}^{T}\,dV, \tag{15}$$

which is the second moment of the mass distribution - the part of the moment of inertia tensor responsible for describing the matter distribution. The shape tensor is defined by

$$S \equiv \frac{M}{M_{\mathrm{tot}}} = \frac{\int_V \rho(\mathbf{r})\,\mathbf{r}\mathbf{r}^{T}\,dV}{\int_V \rho(\mathbf{r})\,dV}. \tag{16}$$

For discrete, equal mass particles, in the centre-of-mass frame, the elements of the shape tensor read

$$S_{\mathrm{C.o.M.},\,kj} = \frac{1}{N}\sum_{i}^{N} (r_i)_k (r_i)_j \tag{17}$$

$$= \frac{1}{N}\sum_{i}^{N} \begin{pmatrix} x_i^2 & x_i y_i & x_i z_i \\ x_i y_i & y_i^2 & y_i z_i \\ x_i z_i & y_i z_i & z_i^2 \end{pmatrix}, \tag{18}$$

in cartesian coordinates relative to the cluster centre of mass, where the sum is over all the members of the cluster. If there exist vectors $V_l$ which satisfy

$$S_{\mathrm{C.o.M.}} V_l = \lambda_l V_l, \tag{19}$$

then the pairs $(V_l, \lambda_l)$ form the eigenvector-eigenvalue pairs of $S_{\mathrm{C.o.M.}}$. For an ellipsoid of uniform density, the axes $a$, $b$, and $c$ are related to the eigenvalues of the shape tensor via

$$a, b, c = \sqrt{3\lambda_{a,b,c}}. \tag{20}$$

We choose the convention $a \geq b \geq c$. Clusters not large enough to have a significant number of core atoms, and therefore composed only of a fluffy surface, cannot be well approximated by a constant-density ellipsoid. For these clusters, the method can overestimate the axis lengths, implying that this method does not provide a sound estimate of clusters' sizes when they are small. However, the axis ratios provide a useful indicator of the cluster shapes regardless of the cluster size.

## Appendix B Supplementary equilibrium simulations of planar vapor-liquid interfaces

To determine the thermodynamic properties of the Lennard-Jones fluid simulated here, a range of liquid-vapor phase equilibrium simulations were run, similar to those in (Werth et al., 2013; Baidakov et al., 2007; Trokhymchuk and Alejandre, 1999; Mecke, Winkelmann, and Fischer, 1997). We used the same setup and analysis as described in (Baidakov et al., 2007), except that we use a different cutoff scale and a different time-step. For simplicity, we calculate the surface tensions using the Kirkwood-Buff pressure tensor only, and to obtain better statistics the interface surface area was increased by doubling the size of the simulated rectangular parallelepiped in the x and y directions.

Using the same cutoff scale as in (Baidakov et al., 2007), we are able to exactly reproduce their results. The results with our chosen cutoff scale are shown in Figure 18.
The surface tensions are about 5 percent lower, and the bulk liquid densities slightly lower, than those found in (Baidakov et al., 2007). Figure 18 also shows fitting functions to our simulation results for the bulk liquid density

$$\rho_m = \left[\,0.0238 \cdot \left(13.29 + 24.492\, f^{0.35} + 8.155\, f\right) - 0.008\,\right]\; m/\sigma^3, \tag{21}$$

with

$$f = 1 - \frac{kT}{1.257}, \tag{22}$$

the planar interface thickness

$$d_\infty = -2.87\cdot kT + 4.82\cdot kT^2 + 1.59, \tag{23}$$

the planar surface tension

$$\gamma_\infty = 2.67\times\left(1 - T/T_c\right)^{1.28}\,\epsilon/\sigma^2, \qquad T_c = 1.31\,\epsilon/k, \tag{24}$$

and the potential energy per particle in the bulk liquid

$$e_{\mathrm{pot},\,l} = 3.872\cdot kT - 8.660. \tag{25}$$

We use the values of these fitting functions throughout this paper; the values at the simulated temperatures are listed in table 3. Results of the thermodynamic quantities that we calculate from slab simulations, and a comparison to similar simulations by other authors, are plotted in figure 19.

Note that at the lowest temperature we have to rely on extrapolations. We could not get meaningful constraints from equilibrium simulations at this very low temperature, because our liquid slabs begin to freeze before a true equilibrium with the vapor is established. The same limitations were also reported in Baidakov et al. (2007).

The saturation pressures in our equilibrium simulations agree well with the fitting function proposed by Trokhymchuk and Alejandre (1999), and we use this fitting function and the actual pressure measured in the nucleation simulations to determine the supersaturation S; see Diemand et al. (2013) for details.", null, "Figure 18: Density profiles from rectangular box bulk simulations, detailed in appendix B.", null, "Figure 19: Thermodynamic parameters from equilibrium simulations of planar vapor-liquid interfaces. Literature values (Baidakov et al., 2007; Gu, Watkins, and Koplik, 2010; Dunikov, Malyshenko, and Zhakhovskii, 2001; Chapela et al., 1977; Vrabec et al., 2006) are shown for comparison. Differences in the measured parameters can be attributed to a number of factors, the choice of cutoff scale and the simulation size foremost among them: For example, Vrabec et al. (2006) truncate the Lennard-Jones potential at rcut=2.5σ, Baidakov et al. (2007) use rcut=6.7σ, while Chapela et al. (1977) have N∼10³.", null, "", null, "", null, "" ]
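The fitting functions (21)-(25) of Appendix B are easy to evaluate numerically; a minimal sketch (the temperature grid below is arbitrary, and kT is taken in units of ϵ as in the paper):

```python
def liquid_density(kT):
    """Bulk liquid density fit, eqs. (21)-(22), in units of m/sigma^3."""
    f = 1.0 - kT / 1.257
    return 0.0238 * (13.29 + 24.492 * f**0.35 + 8.155 * f) - 0.008

def interface_width(kT):
    """Planar interface thickness fit, eq. (23), in sigma."""
    return -2.87 * kT + 4.82 * kT**2 + 1.59

def surface_tension(kT, Tc=1.31):
    """Planar surface tension fit, eq. (24), in epsilon/sigma^2."""
    return 2.67 * (1.0 - kT / Tc) ** 1.28

def epot_liquid(kT):
    """Potential energy per particle in the bulk liquid, eq. (25)."""
    return 3.872 * kT - 8.660

for kT in (0.6, 0.8, 1.0):
    print(kT, liquid_density(kT), interface_width(kT),
          surface_tension(kT), epot_liquid(kT))
```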
[ null, "https://storage.googleapis.com/groundai-web-prod/media/users/user_14/project_116960/images/it6n55_pic3_perspective.jpg", null, "https://storage.googleapis.com/groundai-web-prod/media/users/user_14/project_116960/images/x1.png", null, "https://storage.googleapis.com/groundai-web-prod/media/users/user_14/project_116960/images/x2.png", null, "https://storage.googleapis.com/groundai-web-prod/media/users/user_14/project_116960/images/x3.png", null, "https://storage.googleapis.com/groundai-web-prod/media/users/user_14/project_116960/images/x4.png", null, "https://storage.googleapis.com/groundai-web-prod/media/users/user_14/project_116960/images/x5.png", null, "https://storage.googleapis.com/groundai-web-prod/media/users/user_14/project_116960/images/x6.png", null, "https://storage.googleapis.com/groundai-web-prod/media/users/user_14/project_116960/images/x7.png", null, "https://storage.googleapis.com/groundai-web-prod/media/users/user_14/project_116960/images/x8.png", null, "https://storage.googleapis.com/groundai-web-prod/media/users/user_14/project_116960/images/x9.png", null, "https://storage.googleapis.com/groundai-web-prod/media/users/user_14/project_116960/images/x10.png", null, "https://storage.googleapis.com/groundai-web-prod/media/users/user_14/project_116960/images/x11.png", null, "https://storage.googleapis.com/groundai-web-prod/media/users/user_14/project_116960/images/x12.png", null, "https://storage.googleapis.com/groundai-web-prod/media/users/user_14/project_116960/images/x13.png", null, "https://storage.googleapis.com/groundai-web-prod/media/users/user_14/project_116960/images/x14.png", null, "https://storage.googleapis.com/groundai-web-prod/media/users/user_14/project_116960/images/x15.png", null, "https://storage.googleapis.com/groundai-web-prod/media/users/user_14/project_116960/images/x16.png", null, "https://storage.googleapis.com/groundai-web-prod/media/users/user_14/project_116960/images/x17.png", null, "https://storage.googleapis.com/groundai-web-prod/media/users/user_14/project_116960/images/x18.png", null, "https://dp938rsb7d6cr.cloudfront.net/static/1.66/groundai/img/loader_30.gif", null, "https://dp938rsb7d6cr.cloudfront.net/static/1.66/groundai/img/comment_icon.svg", null, "https://dp938rsb7d6cr.cloudfront.net/static/1.66/groundai/img/about/placeholder.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88593507,"math_prob":0.95316887,"size":47167,"snap":"2019-51-2020-05","text_gpt3_token_len":10495,"char_repetition_ratio":0.18448783,"word_repetition_ratio":0.051465623,"special_character_ratio":0.22341892,"punctuation_ratio":0.13780431,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97341675,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-23T17:02:33Z\",\"WARC-Record-ID\":\"<urn:uuid:807a90a5-2605-458d-8682-8c357fc74f49>\",\"Content-Length\":\"599930\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7ac2b0ce-fa48-49f6-9a50-8ec1db31cf21>\",\"WARC-Concurrent-To\":\"<urn:uuid:5a37c643-c1c9-4a88-b9a2-b63d29c3a23d>\",\"WARC-IP-Address\":\"35.186.203.76\",\"WARC-Target-URI\":\"https://www.groundai.com/project/properties-of-liquid-clusters-in-large-scale-molecular-dynamics-nucleation-simulations/\",\"WARC-Payload-Digest\":\"sha1:ZTCJFKOSBYQS5NDBCOI7TG3HYOX7H2N7\",\"WARC-Block-Digest\":\"sha1:HODT2LM42QX7XMRC44V5PQQCBJXL4KKQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250611127.53_warc_CC-MAIN-20200123160903-20200123185903-00118.warc.gz\"}"}
https://repository.unri.ac.id/handle/123456789/94/browse?rpp=20&sort_by=1&type=title&offset=90&etal=-1&order=ASC
[ "DSpace Repository\n\n# Browsing Mathematics by Title\n\nSort by: Order: Results:\n\n• (Elfitra, 2022-10)\nThis study discusses the forecasting of the average price of rice at the level of trade in Indonesia, where this study uses time series analysis. The method used in this research is Holt's Double Exponential Smoothing. ...\n• (wahyu sari yeni, 2019-03-30)\nThis articel discusess some rules for generating near-perfect numbers. All near-perfect numbers with this rules have two distinct prime factors. This is a review of Pollack and Shevelev’s paper [Journal of Number Theory, ...\n• (wahyu sari yeni, 2019-04-30)\nThis article discusses two optimal methods namely Liu-Zhou's method and Zhou- Chen-Song's method in nding multiple roots of nonlinear equations. Analytically using Taylor's expansion, geometric series and binomial series, ...\n• (perpustakaan UR, 2021-07)\nThis article discusses the dynamic effect of the greenhouse by modeling the shape of the earth like a rock. It is followed by discussing the effect of solar radiation on the model. Furthermore, the model was developed ...\n• (wahyu sari yeni, 2019-01-14)\nThis article discusses the Bayes estimators for the parameter of exponential distribution under di erent loss function. Here the gamma distribution is used as the prior distributon of exponential distribution for nding ...\n• (2021-03)\nRecruitment of employees in a company is a process to get workers who are able to work in a company, by selecting until the recruitment of prospective employees, so the company must know when the recruitment will be ...\n• (perpustakaan UR, 2021-11)\nSumatera is one of the largest islands in Indonesia, and it consists of 10 provinces. Based on data from Badan Pusat Statistik the Human Development Index (HDI) in Indonesia, all provinces on the island of Sumatera have ...\n• (Elfitra, 2022-05)\nThis paper discusses about the nonparametric regression model based on local polynomial estimators on gold prices in Indonesia. Local polynomial estimators can be obtained by minimizing Weighted Least Square (WLS). Optimal ...\n• (2014-03-25)\nThis article discusses a factorization of algebraic polynomial of order n using the Euclidean method and the greatest common factor. The new polynomial is equivalent to the original polynomial. The new ...\n• (2017-01-12)\nThis article discusses a new family of iterative method for finding multiple root of a nonlinear equation with known multiplicity. Using Taylor expansion, Geometric and Binomial series, it is shown that the method is of ...\n• (Elfitra, 2023-01)\nThis nal project discusses the family of Steffensen's type method with memory by using approximation derivative of Newton's Divided Difference (NDD) to solve nonlinear equations. Analytically by using Taylor expansion ...\n• (2017-01-09)\nThis article discusses the development of a family derivative free iterative method with nine parameters for solving nonlinear equations. Analytically it is showed that this iterative method has the order of convergence six. ...\n• (2017-01-12)\nIn this paper, a new iterative method for solving a nonlinear equation is derived. Using Taylor expansion and geometry series, it is shown that the method has a third-order of convergence. Numerical comparisons show that ...\n• (wahyu sari yeni, 2019-04-23)\nThis articel discusses some proofs for sum of k-Lucas numbers with index an + r. This sum is proved by using the Binet’s formula. 
This article is a review of some parts of the article by Falcon [Applied Mathematics, 3 (2012), ...
• (2018-03-07)
This article discusses the formula for k-Fibonacci difference sequences built from the initial k-Fibonacci numbers, and a formula for the sum of these new sequences. This formula is proved using the concept of k-Fibonacci ...
• (Elfitra, 2023-02)
This article discusses the generalization of the inverse on a real symbolic Turiyam matrix of size m×n with m ≠ n. The operation properties of the general inverse on the real symbolic Turiyam matrix used are the operating ...
• (2021-02)
This article discusses a new kind of lattice path and an infinite lower triangular array with the given steps. By using the lattice path, the lower triangular array which is obtained has the sum of rows on its diagonal. The sum ...
• (2016-02-04)
This article discusses the generalized power mean modification of Newton's method to solve nonlinear equations, obtained by modifying Newton's method using a trapezoidal quadrature formula. It is analytically demonstrated ...
• (wahyu sari yeni, 2019-01-31)
This article discusses a generalization of the fractional derivative definition with order 0 < α < 1. Some basic properties of the derivative, such as the product rule, quotient rule, chain rule, Rolle's theorem and the mean value theorem ...
• (2016-04-27)
The Gershgorin disk method is an analytic technique available for determining the location of eigenvalues. In this article, we discuss the method of Gershgorin disk fragments, narrowing the location of the eigenvalues obtained ..." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8500477,"math_prob":0.8948206,"size":271,"snap":"2023-14-2023-23","text_gpt3_token_len":67,"char_repetition_ratio":0.11985019,"word_repetition_ratio":0.0,"special_character_ratio":0.24723247,"punctuation_ratio":0.1764706,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9825316,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-09T05:41:38Z\",\"WARC-Record-ID\":\"<urn:uuid:4c5f4ef3-3539-4f32-b89b-cff58740a70d>\",\"Content-Length\":\"44863\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:48d9cbf2-abea-43ff-9af9-24d7c40eb196>\",\"WARC-Concurrent-To\":\"<urn:uuid:852cd6e1-4e46-47df-88a1-b1b4541328c3>\",\"WARC-IP-Address\":\"103.10.169.25\",\"WARC-Target-URI\":\"https://repository.unri.ac.id/handle/123456789/94/browse?rpp=20&sort_by=1&type=title&offset=90&etal=-1&order=ASC\",\"WARC-Payload-Digest\":\"sha1:HXIQ43CTRT23GYPP5DRI4BDGHIMW7XHH\",\"WARC-Block-Digest\":\"sha1:6TTI2FCTGTRE7MUADCNEKM4FWLLXH7HC\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224655247.75_warc_CC-MAIN-20230609032325-20230609062325-00756.warc.gz\"}"}
https://www.finmath.rutgers.edu/admissions/prerequisites?id=1181:recommended-additional-courses-2&catid=2
[ "# Prerequisites\n\nThe courses listed in this section are not required for admission but can provide useful background.\n\nSubjectRutgers CourseCourse AbstractPrimary Textbook\nMathematical reasoning Math 01:640:300 (3)\nIntroduction to Mathematical Reasoning\nFundamental abstract concepts common to all branches of mathematics. Special emphasis placed on ability to understand and construct rigorous proofs. A Transition To Advanced Mathematics, by Smith, Eggen, St. Andre\nAdvanced calculus I Math 01:640:311 (4)\nIntroduction to language and fundamental concepts of analysis. The real numbers, sequences, limits, continuity, differentiation in one variable. Introduction to Analysis by Edward D. Gaughan, 5th edition, Brooks/Cole, 1998\nAdvanced calculus II Math 01:640:312 (2)\nContinuation of Advanced Calculus I Advanced Calculus by Patrick Fitzpatrick; Brooks/Cole, 2006\nIntroduction to numerical analysis II Math 01:640:374 (3)\nNumerical Analysis II\nContinuation of Numerical Analysis I Numerical Analysis by R.Burden & J.Faires; Brooks/Cole, 2005\nMathematical analysis I Math 01:640:411 (3)\nMathematical Analysis I\n\nRigorous analysis of the differential and integral calculus of one and several variables. Principles of Mathematical Analysis by Walter Rudin, 3rd edition, McGraw-Hill, 1976\nMathematical analysis II Math 01:640:412 (3)\nMathematical Analysis II\nContinuation of Mathematical Analysis I Principles of Mathematical Analysis by Walter Rudin, 3rd edition, McGraw-Hill, 1976\nApplied mathematics Math 01:640:426 (3)\nTopics in Applied Mathematics\nTopics selected from integral transforms, calculus of variations, integral equations, Green's functions; applications to mathematical physics." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.76614815,"math_prob":0.7465433,"size":1789,"snap":"2021-31-2021-39","text_gpt3_token_len":421,"char_repetition_ratio":0.18767507,"word_repetition_ratio":0.07017544,"special_character_ratio":0.23365009,"punctuation_ratio":0.1627907,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9983244,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-28T04:23:45Z\",\"WARC-Record-ID\":\"<urn:uuid:43136444-d37d-4854-8292-d9db3d82d90d>\",\"Content-Length\":\"54870\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2903a153-e245-4ed0-9607-a73f861dbcbe>\",\"WARC-Concurrent-To\":\"<urn:uuid:219d0925-01b0-4abd-becc-2c00d04e0d06>\",\"WARC-IP-Address\":\"128.6.31.233\",\"WARC-Target-URI\":\"https://www.finmath.rutgers.edu/admissions/prerequisites?id=1181:recommended-additional-courses-2&catid=2\",\"WARC-Payload-Digest\":\"sha1:JLCTJHZJGPX2SCOTBHGON5ORJVBPDYWE\",\"WARC-Block-Digest\":\"sha1:KJU4B4I6SJ4DTEDVKFO5MIS3OCIE32T2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780060201.9_warc_CC-MAIN-20210928032425-20210928062425-00662.warc.gz\"}"}
https://linearprogramminghelp.com/solving-linear-programming-problems-with-5-variables/
[ "# Solving Linear Programming Problems With 5 Variables\n\nLinear programming problems often occur when a person tries to solve a problem in the linear way. linear programming problems with 5 variables are very common and many programmers face them time and again without any form of help. This is what happens when a person does not learn how to deal with the five variables appropriately.\n\nSo what exactly is linear programming? In simple terms, linear programming involves taking a series of measurements and computing the outcome based on them. For example, an individual may measure the height of a person and then compute the effect that his or her height has on the person’s life. Of course, there are many different applications for linear programming. From calculating the value of a stock through to the treatment of a patient in a hospital. But what if you want to solve a linear programming problem?\n\nThe first thing you need to do is understand what type of linear programming problem you are attempting to solve. One popular type of linear programming problem is to determine what the value of a variable X is at time t and then find out the value of the corresponding variable Y at time t+1. While this is an easy example and can be solved quite easily, it doesn’t reveal much about the true complexity of linear programming. For example, if we were dealing with an airline pricing system, how would we know whether X was becoming cheaper because of demand or supply?\n\nAnother type of linear programming problem arises when a person tries to solve for the unknown factors using only the data available. For example, if we have all the known prices and factors such as the average speed of a plane, how will we be able to tell whether the price of a ticket is likely to decrease or increase in the near future? Although it would be easy to say that if the prices of flights are increasing, it is likely that airline passengers will also notice the same increase and purchase tickets at the new increased rate, but how do we know that it will happen in the future? This is where linear programming problems come into play.\n\nWhen dealing with linear programming problems, it is important to remember that even the best software will only ever be able to describe the data that is currently available. Therefore, if X happens to be decreasing over time, there is no guarantee that Y will also decrease. Therefore, if you are linear programming your airplane fares in the future, you might want to add a constant term into your program such as “rate per mile” so that you can predict how much money you will make based on the total miles flown during each month of the year. However, this might be too complicated of a model for your linear programming program to deal with, so in order to make linear programming problems easier, you should also consider a simple model such as the arithmetic average of your annual mileage.\n\nThe beauty of using this type of average is that it can be used for both forward and backward predictions. For example, say that you want to give your airline company a good price on flights for the coming months. In order to solve your linear programming problems, you should first find out what kind of average plane fares people are actually paying for flights in the current month. You can then use this figure as a starting point for your forward programming, such as the number of miles you think they will charge for flights. 
In order to solve your backwards problems, you simply need to find out what kind of average plane fare is expected in the next six months. This will help you make decisions about which route to take to decrease your costs.

There are two main parts to solving a linear programming problem. The first is to formulate the problem; the second is to find its solution. To solve the linear problem, you need to know the number of variables and constraints that you can program into your software. Typically, linear programming software supports one or two variable types. Once this has been determined, you can input your own data into the program, and the software will automatically choose the optimal values for your variables.

There are many resources available on the internet to help you solve linear programming problems with 5 variables. Some of them are a little too complex for even a novice computer programmer, but others are quite simple. Most of them involve using mathematical expressions that you must remember in order to set up the equations. If you do not know how to solve a linear programming problem with five variables, then you may want to consider using software that makes it easy for you to enter your information and solves the equations for you. However, if you are comfortable with linear programming and you think that you would like to solve the equations yourself, then you can simply use the mathematical library that comes with your linear programming software. Whatever option you choose, finding the optimal value for your needs will be relatively easy as long as you do not spend too much time on the problem." ]
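The article never shows an actual five-variable setup, so here is a minimal sketch using SciPy's linprog; every number in the cost vector and constraints is a made-up placeholder, chosen only to illustrate the mechanics:

```python
from scipy.optimize import linprog

c = [4, 3, 6, 5, 2]                # minimize c @ x over five variables
A_ub = [[1, 1, 1, 1, 1],           # hypothetical total-capacity constraint: sum(x) <= 100
        [-2, -1, 0, -1, -3]]       # hypothetical minimum-output constraint, flipped to <= form
b_ub = [100, -40]
bounds = [(0, None)] * 5           # x_i >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.status, res.x, res.fun)
```

`res.x` then holds the five optimal variable values and `res.fun` the minimized cost; infeasible or unbounded models are reported through `res.status`.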
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9585484,"math_prob":0.95623606,"size":4755,"snap":"2022-27-2022-33","text_gpt3_token_len":910,"char_repetition_ratio":0.17091139,"word_repetition_ratio":0.015625,"special_character_ratio":0.19116719,"punctuation_ratio":0.07395144,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97481126,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-17T05:07:01Z\",\"WARC-Record-ID\":\"<urn:uuid:3949575d-04a2-4366-ba60-27e8bfb77825>\",\"Content-Length\":\"79638\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ca090d41-5176-40c4-9504-75f73730d0a6>\",\"WARC-Concurrent-To\":\"<urn:uuid:738bbaef-086e-4565-8ffd-5798ec43aad8>\",\"WARC-IP-Address\":\"172.67.215.246\",\"WARC-Target-URI\":\"https://linearprogramminghelp.com/solving-linear-programming-problems-with-5-variables/\",\"WARC-Payload-Digest\":\"sha1:35ZCZKSXC6WDC7ZWA2ZEX4VOFBFETLZQ\",\"WARC-Block-Digest\":\"sha1:U2XRQCLX4BIBGJHVBYG6ZKBC34AJDXGA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572833.95_warc_CC-MAIN-20220817032054-20220817062054-00242.warc.gz\"}"}
https://codereview.stackexchange.com/questions/150920/leetcode-15-3-sum
[ "# Leetcode 15. 3 Sum\n\nProblem statement\n\nGiven an array S of n integers, are there elements a, b, c in S such that a + b + c = 0? Find all unique triplets in the array which gives the sum of zero.\n\nNote: The solution set must not contain duplicate triplets.\n\nFor example, given array S = [-1, 0, 1, 2, -1, -4], a solution set is:\n\n[\n[-1, 0, 1],\n[-1, -1, 2]\n]\n\n\nI did review Leetcode 3 sum algorithm a few hours, and put together the following C# code. The time complexity is optimal, $O(n*n)$ where $n$ is the length of the array, pass Leetcode online judge. Also, I did a few improvements, make it more flat (using two continue statements, instead of if/else statements), test cases are added, two sum algorithm uses two pointer technique to go through the array once.\n\nusing System;\nusing System.Collections.Generic;\nusing System.Diagnostics;\nusing System.Linq;\nusing System.Text;\n\nnamespace Leetcode_15_3Sum\n{\n/*\n*\n* Work on this 3 sum algorithm\n*\n* Leetcode 15: 3 sum\n* https://leetcode.com/problems/3sum/\n*\n* Given an array S of n integers, are there elements a, b, c in S\n* such that a + b + c = 0? Find all unique triplets in the array\n* which gives the sum of zero.\n*\nNote:\nElements in a triplet (a,b,c) must be in non-descending order. (ie, a ≤ b ≤ c)\nThe solution set must not contain duplicate triplets.\n*\nFor example, given array S = {-1 0 1 2 -1 -4},\nA solution set is:\n(-1, 0, 1)\n(-1, -1, 2)\n*\n*/\nclass Program\n{\nstatic void Main(string[] args)\n{\n// test 3 sum\n// 2 lists, one is -1, 0, 1, second one is -1, -1, 2\nint[] array = new int { -1, 0, 1, 2, -1, -4 };\n\nIList<IList<int>> triplets = ThreeSum(array);\n\nDebug.Assert(triplets.Count == 2);\nDebug.Assert(String.Join(\",\", triplets.ToArray()).CompareTo(\"-1,-1,2\") == 0);\nDebug.Assert(String.Join(\",\", triplets.ToArray()).CompareTo(\"-1,0,1\") == 0);\n}\n/*\n* @nums - the array containing the numbers\n*\n* 3 sum can be solved using 2 sum algorithm,\n* 2 sum algorithm - optimal solution is using two pointers, time complexity is O(nlogn),\n* sorting takes O(nlogn), and two pointer algorithm is O(n), so overall is O(nlogn).\n* Time complexity for 3 sum algorithm:\n* O(n*n)\n*/\npublic static IList<IList<int>> ThreeSum(int[] nums)\n{\nIList<IList<int>> results = new List<IList<int>>();\nHashSet<string> keys = new HashSet<string>();\n\nif (nums == null || nums.Length == 0)\nreturn results;\n\nArray.Sort(nums);\n\nint length = nums.Length;\n\nint target = 0;\nfor (int i = 0; i < length - 2; i++)\n{\nint firstNo = nums[i];\n\n// using two pointers to go through once the array, find two sum value\nint newTarget = target - firstNo;\nint start = i + 1;\nint end = length - 1;\n\nwhile (start < end)\n{\nint twoSum = nums[start] + nums[end];\n\nif (twoSum < newTarget)\n{\nstart++;\ncontinue;\n}\n\nif (twoSum > newTarget)\n{\nend--;\ncontinue;\n}\n\nint[] threeNumbers = new int[] { firstNo, nums[start], nums[end] };\nstring key = PrepareKey(threeNumbers, 3);\n\nif (!keys.Contains(key))\n{\n\n}\n\n// continue to search\nstart++;\nend--;\n}\n}\n\nreturn results;\n}\n\n/*\n* -1, 0, 1 -> key string\" \"-1,0,1,\"\n*/\nprivate static string PrepareKey(int[] arr, int length)\n{\nstring key = string.Empty;\n\nfor (int j = 0; j < length; j++)\n{\nkey += arr[j].ToString();\nkey += \",\";\n}\n\nreturn key;\n}\n}\n}\n\n• Are [-1, 0, 1], and [1, 0, -1], considered duplicates ? – Denis Dec 27 '16 at 13:44\n• @denis, $[-1, 0, 1]$ and $[1, 0, -1]$ are considered duplicates. 
– Jianmin Chen Dec 27 '16 at 19:31
• I am new here. Can i suggest a solution. The solution is separate out the negative numbers. Then loop through the negative numbers array and the rest of the numbers, making sets of 3. When the sum is zero, add it to a collection of triplets. Do the reverse for the remaining numbers (2 negative numbers can cause a triplet). What i think the complexity is: n + n/2*n/2 + n/2*n/2, which is equal to (correct me) n + n/2 + n/2. – Muhammad Zohaib Ehsan Jan 2 '17 at 11:31

if (nums == null || nums.Length == 0)
    return results;

should be

if (nums == null || nums.Length < 3)
    return results;

another

List<int> threeNumbers = new List<int> { firstNo, nums[start], nums[end] };
string key = string.Join(",", threeNumbers);
if (!keys.Contains(key))
{
    keys.Add(key);
    results.Add(threeNumbers);
}

I don't like continue over if / else.

This should be faster as it skips values already evaluated.
Since it skips values already evaluated, it does not need to check for duplicates.

public static IList<IList<int>> ThreeSumB(int[] nums)
{
    IList<IList<int>> results = new List<IList<int>>();
    if (nums == null)
        return results;
    int length = nums.Length;
    if (length < 3)
        return results;
    Array.Sort(nums);
    Debug.WriteLine(string.Join(", ", nums));
    int target = 0;
    int firstNo;
    int newTarget;
    int start;
    int end;
    for (int i = 0; i < length - 2; i++)
    {
        firstNo = nums[i];
        if (i > 0 && firstNo == nums[i - 1])
            continue;
        // using two pointers to go through the array once, find the two sum value
        newTarget = target - firstNo;
        start = i + 1;
        end = length - 1;
        while (start < end)
        {
            int twoSum = nums[start] + nums[end];
            if (twoSum < newTarget)
            {
                start++;
                while (start < end && nums[start - 1] == nums[start])
                    start++;
            }
            else if (twoSum > newTarget)
            {
                end--;
                while (start < end && nums[end + 1] == nums[end])
                    end--;
            }
            else
            {
                results.Add(new List<int> { firstNo, nums[start], nums[end] });

                start++;
                while (start < end && nums[start - 1] == nums[start])
                    start++;

                end--;
                while (start < end && nums[end + 1] == nums[end])
                    end--;
            }
        }
    }
    return results;
}

I am getting like 8x faster than the OP solution using

int[] array = new int[] { -1, 0, 1, 2, -1, -4, -1, -4, 1, 2, 2 };

Even faster: bring the last pointer down until you are at the target or less. There is no purpose in starting at the end with the last element each time, as if one of the first two is increased then the last has to decrease.

• or even shorter with C# 6 if(nums?.Length < 3) {..} – t3chb0t Dec 27 '16 at 12:36
• Could you write a few words about why this solution is faster and what optimizations you have made, rather than just claiming it with a large portion of code? – t3chb0t Dec 27 '16 at 12:54
• I did write a few words about what makes this faster. "This should be faster as it skips values already evaluated. Since it skips values evaluated then do not need to check for duplicate." – paparazzo Dec 27 '16 at 19:20
• Very good ;-] I've already voted. – t3chb0t Dec 27 '16 at 19:22
• @Paparazzi, I learned the idea of checking duplicates through your code; I would like to upvote the one you wrote as an answer. But should we extract small functions for the checking, based on DRY - the do not repeat yourself principle?
see line 106 - 114, gist.github.com/jianminchen/dfebe273c5beca0fbbb52981f3934ded – Jianmin Chen Dec 27 '16 at 22:40

Just a few tips about your PrepareKey method.

private static string PrepareKey(int[] arr, int length)
{
    string key = string.Empty;

    for (int j = 0; j < length; j++)
    {
        key += arr[j].ToString();
        key += ",";
    }

    return key;
}

The entire function can be replaced with:

string.Join(",", arr);

but in cases where you need to build a string with a loop you should use the StringBuilder next time, because it's much faster than concatenating strings with the + operator.

for (int j = 0; j < length; j++)

You could also use the foreach loop for this, as arrays are enumerable.

PrepareKey(int[] arr, int length)

This method does not need the length parameter because an array has a Length property.

Seems your algorithm works fine. I tried to find some issues but i couldn't :)
The only thing i would like to say is about the input parameter of the ThreeSum function. You change it! And that's considered bad practice. Imagine you want to write some summary, like this:

if (triplets.Count > 0)
{
    Console.WriteLine("Solution has been found !");
    // try to write original array
    for (int i = 0; i < array.Length; i++)
    {
        Console.Write(array[i].ToString() + " ");
    }
    // write three numbers
    // ...
}

And you cannot write the original array, as it is sorted now. Even worse: if your implementation changes and you add/remove items from the array, then it will be completely wrong.

• the idea to make a copy of the array from function input argument nums is reasonable, since sorting takes O(nlogn) time usually, and it is not in-place; therefore, having function ThreeSum(int[] nums) use extra O(n) space should be a good practice. It will not cause a memory issue. – Jianmin Chen Dec 27 '16 at 22:52

I offer a different approach which got accepted on Leetcode with the following results", null, "Here are the original results with your code:", null, "As you can see my solution wins by ~100 ms.

I prefer having a class with overloaded equality checks so that it can be used in the HashSet to determine if an item is a duplicate or not. I also replaced this line:

if (!keys.Contains(key))

with this:

int previousCount = keys.Count;
keys.Add(triplet);
if (previousCount != keys.Count)

HashSet<T>.Add(..)
won't add any duplicate items anyway - it checks that internally - but we still need to know whether the item is actually a duplicate to see if we need to add it to the results, so I prefer not checking whether the key is contained but rather using the boolean value returned from HashSet<T>.Add().

Here is my solution:

internal class Triplet
{
    public int A { get; }
    public int B { get; }
    public int C { get; }

    public Triplet(int a, int b, int c)
    {
        A = a;
        B = b;
        C = c;
    }

    public static bool operator ==(Triplet first, Triplet second)
    {
        return first.A == second.A && first.B == second.B && first.C == second.C;
    }

    public static bool operator !=(Triplet first, Triplet second)
    {
        return !(first == second);
    }

    protected bool Equals(Triplet other)
    {
        return A == other.A && B == other.B && C == other.C;
    }

    public override bool Equals(object obj)
    {
        if (ReferenceEquals(null, obj)) return false;
        if (ReferenceEquals(this, obj)) return true;
        if (obj.GetType() != this.GetType()) return false;
        return Equals((Triplet)obj);
    }

    public override int GetHashCode()
    {
        unchecked
        {
            var hashCode = A;
            hashCode = (hashCode * 397) ^ B;
            hashCode = (hashCode * 397) ^ C;
            return hashCode;
        }
    }
}

And this is the actual algorithm:

public static IList<IList<int>> ThreeSum(int[] nums)
{
    IList<IList<int>> results = new List<IList<int>>();
    HashSet<Triplet> keys = new HashSet<Triplet>();

    if (nums == null || nums.Length == 0)
        return results;

    Array.Sort(nums);

    int length = nums.Length;

    int target = 0;
    for (int i = 0; i < length - 2; i++)
    {
        int firstNo = nums[i];

        // using two pointers to go through the array once, find the two sum value
        int newTarget = target - firstNo;
        int start = i + 1;
        int end = length - 1;

        while (start < end)
        {
            int twoSum = nums[start] + nums[end];

            if (twoSum >= newTarget)
            {
                if (twoSum <= newTarget)
                {
                    Triplet triplet = new Triplet(firstNo, nums[start], nums[end]);
                    if (keys.Add(triplet))
                    {
                        results.Add(new List<int> { firstNo, nums[start], nums[end] });
                    }
                    start++;
                    end--;
                }
                else
                {
                    end--;
                }
            }
            else
            {
                start++;
            }
        }
    }
    return results;
}

• You don't need to check the Count twice, actually you don't need to check it at all. The Add method returns bool, true if it could add the new item, so it's ok to just do if(keys.Add(triplet)) {..} – t3chb0t Dec 27 '16 at 16:34
• Great tip, will update the answer. – Denis Dec 27 '16 at 16:36
• One more tip... you don't need two collections. At the end the keys collection will have the same items as the results collection. I'd keep the keys and rename it to results and your code will become even simpler. Besides, the results still uses list items although you now have the Triplet type. – t3chb0t Dec 27 '16 at 16:45
• Yeah, I noticed that and it kinda annoys me, but the return type of ThreeSum must always be IList<IList<int>>, or at least that's what the LeetCode site has, thus we can't just return List<Triplet>. – Denis Dec 27 '16 at 16:49
• We need to return an IList<IList<T>>; a hashset doesn't implement that interface, so we still need to do .ToList() at the end, which is pretty much the same as saving the items in a different collection of type IList. – Denis Dec 27 '16 at 16:58" ]
[ null, "https://i.stack.imgur.com/qkuIp.png", null, "https://i.stack.imgur.com/689Yb.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5958644,"math_prob":0.9861716,"size":3356,"snap":"2019-51-2020-05","text_gpt3_token_len":1013,"char_repetition_ratio":0.10381862,"word_repetition_ratio":0.03780069,"special_character_ratio":0.3542908,"punctuation_ratio":0.22865014,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99285656,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-26T22:44:54Z\",\"WARC-Record-ID\":\"<urn:uuid:3dc0d19f-e81d-41d1-92d6-0d258f0a5887>\",\"Content-Length\":\"182799\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2ee64711-e8b0-4503-8873-85c898c63fae>\",\"WARC-Concurrent-To\":\"<urn:uuid:1830aa53-664e-4494-98df-0bae0845e73c>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://codereview.stackexchange.com/questions/150920/leetcode-15-3-sum\",\"WARC-Payload-Digest\":\"sha1:T56ILRPHP4SXVPNF7CEX2YHRJCFIYLGW\",\"WARC-Block-Digest\":\"sha1:YKXFAFTUF35R63KRDAO7AELDBI3TMHEW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251690379.95_warc_CC-MAIN-20200126195918-20200126225918-00440.warc.gz\"}"}
https://r.789695.n4.nabble.com/How-to-compare-the-fitting-of-function-td4764442.html
[ "# How to compare the fitting of function?", null, "Classic", null, "List", null, "Threaded", null, "3 messages", null, "Open this post in threaded view\n|\n\n## How to compare the fitting of function?\n\n Hello, I have fitted two curves to the data. How can I tell which one is more fitted? By eye (see plot underneath) I would say that the function Gompertz is better than the function Holling type III; how can I give a number to this hunch? This is an example: ``` # functions holling = function(a, b, x) {   y = (a * x^2) / (b^2 + x^2)   return(y) } gompertz = function(a, b, c, x) {   y = a * exp(-b * exp(-c * x))   return(y) } # data actual <- c(8,  24,  39,  63,  89, 115, 153) holling <- c(4.478803,  17.404533,  37.384128,  62.492663,  90.683630, 120.118174, 149.347683) gompertz <- c(11.30771,  22.39017,  38.99516,  61.19318,  88.23403, 118.77225, 151.19849) # plot plot(1:length(actual), actual, lty = 1 , type = \"l\", lwd = 2,      xlab = \"Index\", ylab = \"Values\") points(1:length(actual), holling, lty = 2, type = \"l\", col = \"red\") points(1:length(actual), gompertz, lty = 3, type = \"l\", col = \"blue\") legend(\"bottomright\",        legend = c(\"Actual values\", \"Holling III\", \"Gompertz\"),        lty = c(1, 2, 3), lwd = c(2, 1,1), col = c(\"black\", \"red\", \"blue\")) ``` Thank you ______________________________________________ [hidden email] mailing list -- To UNSUBSCRIBE and more, see https://stat.ethz.ch/mailman/listinfo/r-helpPLEASE do read the posting guide http://www.R-project.org/posting-guide.htmland provide commented, minimal, self-contained, reproducible code." ]
[ null, "https://r.789695.n4.nabble.com/images/view-classic.gif", null, "https://r.789695.n4.nabble.com/images/view-list.gif", null, "https://r.789695.n4.nabble.com/images/view-threaded.gif", null, "https://r.789695.n4.nabble.com/images/pin.png", null, "https://r.789695.n4.nabble.com/images/gear.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.69413126,"math_prob":0.9815134,"size":1339,"snap":"2020-34-2020-40","text_gpt3_token_len":476,"char_repetition_ratio":0.12584269,"word_repetition_ratio":0.009756098,"special_character_ratio":0.4600448,"punctuation_ratio":0.27096775,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9935313,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-06T19:33:07Z\",\"WARC-Record-ID\":\"<urn:uuid:ca5490d1-efde-419c-89e4-3d8da8a4b828>\",\"Content-Length\":\"47565\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b1d83736-e070-4f41-bb88-6fcb543f5a84>\",\"WARC-Concurrent-To\":\"<urn:uuid:4476723e-23c3-498f-9bde-fdbb4593c04d>\",\"WARC-IP-Address\":\"199.38.86.66\",\"WARC-Target-URI\":\"https://r.789695.n4.nabble.com/How-to-compare-the-fitting-of-function-td4764442.html\",\"WARC-Payload-Digest\":\"sha1:WOBI5MWH4QESGZXWY4HHDGDXG7M2ZHW4\",\"WARC-Block-Digest\":\"sha1:FPPVS7QDFBEZDYTDGX2WGNWJUKTKCCDH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439737019.4_warc_CC-MAIN-20200806180859-20200806210859-00110.warc.gz\"}"}
https://physics.stackexchange.com/questions/316566/michelson-interferometer-beam-splitter-phase-shift
[ "# Michelson interferometer beam splitter phase shift\n\nIn a Michelson interferometer (image from Optics by E. Hecht )", null, ".\n\nTo quote from the same book:\n\nAs the figure shows, the optical path difference for these rays is nearly $2d \\cos 0$. There is an additional phase term arising from the fact that the wave traversing the arm $OM2$ is internally reflected in the beamsplitter, whereas the $OM1$-wave is externally reflected at $O$. If the beamsplitter is simply an uncoated glass plate, the relative phase shift resulting from the two reflections will be $\\pi$ radians.\n\nConsider a beam from $B$ from $S$ towards $O$. At $O$ it splits into two beams:\n\nBeam B1:\n\n1. Results from refraction of $B$ at the beam splitter $O$ towards mirror $M1$.\n2. Is reflected in the opposite direction at $M1$.\n3. Is reflected towards $D$ at $O$. Is it reflected before entering $O$ or after entering $O$ and encountering the air at the other side?\n\nBeam B2:\n\n1. Results from reflection of $B$ at the beam splitter $O$ towards mirror $M2$. Should there be a phase shift here? This is should be an air/(glass/metal coating) interface.\n2. Is reflected in the opposite direction at $M2$.\n3. Goes through $O$ towards detector $D$.\n\nQuestion: Could someone tell me at which of these steps a phase shift occurs? It seems to me that it could happen at steps $B1-2$, $B1-3$, $B2-1$ and $B2-2$, but that is probably not right.\n\n• A small advice - You have mixed up 1 and 2 in your notations for the beams. It would be nicer if B1 corresponded to M1 and likewise for the other. :) Mar 5, 2017 at 18:32\n• Good idea, I have not noticed that. The notation now corresponds with the one which you have used :). Mar 5, 2017 at 19:08\n\nOne minor detail, which is extremely important in this context, which you perhaps missed is that the beam-splitter is partially silvered at the lower surface, which implies that the appropriate location of the point O is at the lower surface of the beam-splitter (at the glass-air interface).\n\nIf we take this detail into account, the explanation is very simple:\n\n• There are $\\pi$ phase changes at steps B1-2 and B2-2, but these are common for both beams, and hence do not contribute to any net relative phase difference. (These reflections are depicted as such in the edited diagram)", null, "• The only relative phase difference arises due to complete reflection at B1-3 (reflection at the outer, lower surface of the beamsplitter.) This occurs for only one of the two beams, not for both. Hence, net relative phase difference is $\\pi$ radians. (This is represented in the diagram for the red colored beam. Note that this is not true for the other beam, shown in blue.)\n\n• (IMPORTANT) There is no phase change of $\\pi$ in the reflected beam, arising at the original division of amplitude point O, originally. This is because the division took place at the lower interface, where the interface was glass-air and not air-glass. If we invoke the Stokes' relation, you have a phase change of $\\pi$ on a reflection at a rare-dense interface. This doesn't fit the description. (Hence, no phase shift of $\\pi$ for the blue beam in the figure.)\n\nHence, the total relative phase difference between the two coherent beams, when they recombine, is only $\\pi$ radians.\n\n(Original image edited on suggestion by Floris).\n\n• If you added a diagram this could be a very good answer. Mar 5, 2017 at 18:07\n• @Floris - Thanks for the suggestion, sir. Done. 
:) Mar 5, 2017 at 18:29\n• @TheDarkSide: Thanks for the detailed answer, it has helped me a lot. I think that B2-3 in your previous to last bullet point should be changed to B1-3. I accept full responsiblity for the notation related confusion :). Mar 5, 2017 at 19:18\n• @pseudomarvin - Yes, with our consistent notations, it should be B1-3. I'll fix that part. Also, I'm glad it helped. :) Mar 5, 2017 at 19:23\n• @TheDarkSide : can you clear my other confusion ? If there is a phase shift of π then the central fringe should be a dark back, no ? Then why in these examples we see central bright band ? pages.physics.cornell.edu/p510/O-2_Michelson_Interferometer and en.wikipedia.org/wiki/… Nov 26, 2017 at 21:52\n\nThe answer is the last sentence from Hecht that is quoted:\n\n\"If the beamsplitter is simply an uncoated glass plate, the relative phase shift resulting from the two reflections will be π radians.\"\n\nAs stated by SpeedOfLight, one reflection is from glass$$\\to$$air (blue ray in diagram), the other from air$$\\to$$glass (red ray).\n\nAll that is fine and good if the plate is uncoated (as Hecht assumes). However, as ProfRob comments, most partially reflecting mirrors have a thin silver coating on them (I think the one Michelson used was of this kind).\n\nUPDATE: Actually, even for the uncoated case, I do not understand the argument of Hecht. If the plate is the same on both sides, then why would both rays reflect from the back side only? I think you need the coating to create the asymmetry.\n\nHence, the open question is still, what the answer would be in the case a silver-coating is present.\n\nOne then has for the blue ray a reflection glass$$\\to$$silver, and for the red ray air$$\\to$$silver.\n\nI always thought reflections at metals involve a $$\\pi$$ shift (since the tangential component of the electric field must vanish). If that were so, then for both rays you get the $$\\pi$$ shift, so no net relative effect. Then, there is also the Fresnel equations to consider, where things depend on the polarization state, as well as the angle of incidence (if the angle exceeds Brewster, you don't get the shift anymore, or it first appears, depending on whether you go from rare to dense).\n\nSo, for the realistic scenario with a coating, I don't see a simple argument for just the $$\\pi$$ shift. I am also inclined to think the answer should somehow depend on the value of the angle of incidence compared to the Brewster angle.\n\nThere is an automatic change in the phase when the wave goes ..... bounces off the mirror. Then after all that you start to play with the path lengths to introduce more phase changes.\n\n• No - phase shift occurs on reflection only, not at transmission. Mar 5, 2017 at 18:06\n• Slightly less wrong... still not quite right. Phase shift occurs during reflection when you go from low to high index - so on the external reflection, but not at the internal reflection. Mar 7, 2017 at 20:12" ]
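A compact way to summarize the accepted reasoning, using the Hecht conventions quoted in the question (this is a standard textbook relation, not a line from the thread itself): the single extra external reflection adds π to the path-difference phase, so

$$\delta = \frac{2\pi}{\lambda_0}\,(2d\cos\theta) + \pi, \qquad 2d\cos\theta_m = m\lambda_0 \;\Rightarrow\; \text{destructive interference (dark fringes)},$$

which is why, for an ideal uncoated splitter, the fringe at zero path difference comes out dark rather than bright.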
[ null, "https://i.stack.imgur.com/BHve6.png", null, "https://i.stack.imgur.com/tbcrO.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8928916,"math_prob":0.96570814,"size":1331,"snap":"2023-14-2023-23","text_gpt3_token_len":351,"char_repetition_ratio":0.13112283,"word_repetition_ratio":0.077922076,"special_character_ratio":0.26521412,"punctuation_ratio":0.10687023,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98128235,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,8,null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-20T15:19:03Z\",\"WARC-Record-ID\":\"<urn:uuid:806d27b2-30c3-4ac8-b50d-94c6c39407b5>\",\"Content-Length\":\"187099\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b3ec1961-5de4-4941-9f7b-d25a18812221>\",\"WARC-Concurrent-To\":\"<urn:uuid:8fdc4b02-3d5b-456e-8805-96c029f6f8af>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/316566/michelson-interferometer-beam-splitter-phase-shift\",\"WARC-Payload-Digest\":\"sha1:I7Z7QCWBHU3P2LKVSIF5XKUKMG77PJZZ\",\"WARC-Block-Digest\":\"sha1:TZK3J3AUZZATPKISARQNOY5RFM5YDKUV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943484.34_warc_CC-MAIN-20230320144934-20230320174934-00151.warc.gz\"}"}
https://www.thelittleaussiebakery.com/what-are-the-3-isotopes-of-hydrogen/
[ "# What are the 3 isotopes of hydrogen?\n\n## What are the 3 isotopes of hydrogen?\n\nThere are three isotopes of the element hydrogen: hydrogen, deuterium, and tritium. How do we distinguish between them? They each have one single proton (Z = 1), but differ in the number of their neutrons. Hydrogen has no neutron, deuterium has one, and tritium has two neutrons.\n\n### What is δd?\n\nDeuterium (also referred to as hydrogen-2, symbol D or 2H) is a natural, stable isotope of hydrogen with a nucleus containing one proton and one neutron. Basically, the isotope fractionation is dependent of temperature. …\n\n#### What is hydrogen deuterium isotope?\n\none proton\nDeuterium has one proton, one electron, and one neutron. The third isotope of hydrogen is tritium. Tritium has one proton, one electron, and two neutrons. So, the isotope deuterium of hydrogen has one proton and one neutron.\n\nIs there 7 isotopes of hydrogen?\n\nHydrogen has three naturally occurring isotopes: 1H (protium), 2H (deuterium), and 3H (tritium). Of these, 5H is the most stable, and the least stable isotope is 7H .\n\nWhat are isotopes give the isotopes of hydrogen?\n\nHow many isotopes are in hydrogen? The hydrogen element has three isotopes: hydrogen, deuterium, and tritium. We each have a single proton (Z = 1), but the number of their neutrons is different. There is no neutron in hydrogen, one in deuterium, and two neutrons in tritium.\n\n## How many isotopes are in hydrogen?\n\nthree\nHydrogen and its two naturally occurring isotopes, deuterium and tritium. All three have the same number of protons (labeled p+) but different numbers of neutrons (labeled n).\n\n### How many isotopes does hydrogen have?\n\n#### What is Delta isotope?\n\nDefinition. The delta notation (symbol: δ) expresses the variation of an isotopic ratio of an element R (e.g., δ18O = 18O/16O), relative to the isotopic ratio of a standard Rstd (e.g., δ18OV-SMOW = 18O/16O = 2005.20 ± 0.45 × 10−6, where V-SMOW is Standard Mean Ocean Water).\n\nWhat is the formula of deuterium?\n\nDeuterium\n\nPubChem CID 24523\nChemical Safety Laboratory Chemical Safety Summary (LCSS) Datasheet\nMolecular Formula H2\nSynonyms DEUTERIUM Dideuterium 7782-39-0 Heavy hydrogen UNII-AR09D82C7G More…\nMolecular Weight 4.0282035557\n\nIs hydrogen 3 an isotope?\n\ntritium, (T, or 3H), the isotope of hydrogen with atomic weight of approximately 3. Its nucleus, consisting of one proton and two neutrons, has triple the mass of the nucleus of ordinary hydrogen.\n\n## What are isotopes isobars and isotopes give one example for each?\n\nHence, we can say that isobars are the elements that have a different atomic number but with the same mass number. An example of two Isotopes and Isobars is nickel and iron. These both have the same mass number, which is 58, whereas the atomic number of nickel is 28, and the atomic number of iron is 26.\n\n### What is the most stable radioisotope of hydrogen?\n\nTritium is the hydrogen’s most stable radioisotope. That is, tritium is the least radioactive of all hydrogen radioactive isotopes. Four other radioactive hydrogen isotopes were produced by researchers, but these isotopes are very volatile and simply do not exist.\n\n#### Which of the following is an isotope of hydrogen?\n\n1 Protium ( 1H ) It is one of the common isotopes of hydrogen. It is plenty in nature with an abundance of 99.98%. 2 Deuterium ( 2H) It comprises 1 proton and 1 neutron in its nucleus. The nucleus of hydrogen 2 is termed as deuteron. It is not radioactive. 
3 Tritium ( 3H )\n\nWhat is the IUPAC symbol for hydrogen isotope?\n\nThe IUPAC accepts the D and T symbols, but recommends instead using standard isotopic symbols ( 2 H and 3 H) to avoid confusion in the alphabetic sorting of chemical formulas. The ordinary isotope of hydrogen, with no neutrons, is sometimes called protium.\n\nWhat is the name of the 3H isotope of hydrogen?\n\nHydrogen is the only element whose isotopes have different names in common use today: the 2H (or hydrogen-2) isotope is deuterium and the 3H (or hydrogen-3) isotope is tritium.\n\nPosted in Life" ]
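The delta-notation definition given above is easy to check numerically. A minimal sketch follows; the V-SMOW ratio is the one quoted in the text, while the sample ratio is a made-up value used purely for illustration:

```python
# Minimal sketch of the delta notation: delta = (R_sample / R_std - 1),
# usually reported in per mil (parts per thousand).
R_VSMOW_18O = 2005.20e-6   # 18O/16O ratio of the V-SMOW standard (quoted above)

def delta_per_mil(r_sample: float, r_standard: float) -> float:
    """Return the delta value in per mil."""
    return (r_sample / r_standard - 1.0) * 1000.0

r_sample = 1995.0e-6       # hypothetical 18O/16O ratio of a water sample
print(f"d18O = {delta_per_mil(r_sample, R_VSMOW_18O):+.2f} per mil")
# A negative value means the sample is depleted in the heavy isotope
# relative to the standard; a positive value means it is enriched.
```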
{"ft_lang_label":"__label__en","ft_lang_prob":0.8703983,"math_prob":0.797061,"size":3930,"snap":"2022-40-2023-06","text_gpt3_token_len":1051,"char_repetition_ratio":0.2050433,"word_repetition_ratio":0.02143951,"special_character_ratio":0.23536895,"punctuation_ratio":0.13725491,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9629539,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-06T03:28:38Z\",\"WARC-Record-ID\":\"<urn:uuid:524bf233-986b-42d0-86ed-77ccb146faf3>\",\"Content-Length\":\"54902\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:820c2fc8-89e3-4a6d-9e34-385e6dc53320>\",\"WARC-Concurrent-To\":\"<urn:uuid:375ccfef-9d7f-4e91-bc2d-5ab1440aa65e>\",\"WARC-IP-Address\":\"104.21.36.213\",\"WARC-Target-URI\":\"https://www.thelittleaussiebakery.com/what-are-the-3-isotopes-of-hydrogen/\",\"WARC-Payload-Digest\":\"sha1:O2ISRM3JQ3CIIBPCQ2AGQO7ZAH6BDHM6\",\"WARC-Block-Digest\":\"sha1:FDBQHZITDNWRF3M54YOPWLFTMFEKBMMB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337723.23_warc_CC-MAIN-20221006025949-20221006055949-00540.warc.gz\"}"}
https://infinitylearn.com/surge/question/chemistry/number-ofpdbonds-inso2cl2isare/
[ "Number of pπ−dπ bond(s) in SO2Cl2 is/are:\n\n# Number of $p\\pi -d\\pi$ bond(s) in ${\\mathrm{SO}}_{2}{\\mathrm{Cl}}_{2}$ is/are:\n\n1. A\n\n1\n\n2. B\n\n3\n\n3. C\n\n0\n\n4. D\n\n2\n\nFill Out the Form for Expert Academic Guidance!l\n\n+91\n\nLive ClassesBooksTest SeriesSelf Learning\n\nVerify OTP Code (required)\n\n### Solution:\n\nSulphur has ${\\mathrm{sp}}^{2}$ hybridization.\n\nThese hybrid orbitals contain a lone pair as well as two bond pairs (due to sigma bonding). The 3p and 3d hybridized orbitals of the unpaired electrons in oxygen are combined with the 2p unhybridized orbitals to form a pi bond.\n\nThus, two $p\\pi -d\\pi$, are created", null, "", null, "+91\n\nLive ClassesBooksTest SeriesSelf Learning\n\nVerify OTP Code (required)" ]
[ null, "data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==", null, "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAADAAAAAwAQAAAAB/ecQqAAAAAnRSTlMAAHaTzTgAAAANSURBVBjTY2AYBdQEAAFQAAGn4toWAAAAAElFTkSuQmCC", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8959144,"math_prob":0.9057975,"size":288,"snap":"2023-40-2023-50","text_gpt3_token_len":74,"char_repetition_ratio":0.15845071,"word_repetition_ratio":0.0,"special_character_ratio":0.20833333,"punctuation_ratio":0.10909091,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9748717,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-22T12:33:27Z\",\"WARC-Record-ID\":\"<urn:uuid:78d2fced-e13e-4e25-850e-934dd3d4238d>\",\"Content-Length\":\"91313\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f3919f29-dfa3-4745-97f9-bd7d1e327de4>\",\"WARC-Concurrent-To\":\"<urn:uuid:bfc69702-cfb6-48db-89b0-e5c438749db7>\",\"WARC-IP-Address\":\"13.249.39.68\",\"WARC-Target-URI\":\"https://infinitylearn.com/surge/question/chemistry/number-ofpdbonds-inso2cl2isare/\",\"WARC-Payload-Digest\":\"sha1:NTNN4ZHA6HYVLZZ7TY3BFOICNFTJRW4D\",\"WARC-Block-Digest\":\"sha1:DAECVWR36RX2IVUPC6M2LWAI4BGB77UT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506399.24_warc_CC-MAIN-20230922102329-20230922132329-00454.warc.gz\"}"}
https://hyoka.ofc.kyushu-u.ac.jp/search/details/K003646/english.html
[ "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "Homepage\nhttps://kyushu-u.pure.elsevier.com/en/persons/miyuki-koiso Reseacher Profiling Tool Kyushu University Pure\nDoctor of Science\nCountry of degree conferring institution (Overseas)\nNo\nField of Specialization\nmathematics\nTotal Priod of education and research career in the foreign country\n04years00months\nResearch\nResearch Interests\n• Global analysis on geometric variational problems and its applications\nkeyword : mathematics, geometry, differential geometry, variational problem, global analysis\n2008.10~2019.03.\nCurrent and Past Project\n• Joint research with Prof. Jaigyoung Choe (Korean Institute for Advanced Study, Korea) on a free boundary problem for surfaces with constant mean curvature.\n• Joint research on geometric variational problems for hypersurfaces; Miyuki Koiso (Kyushu U., Japan) and Bennett Palmer (Idaho State U., USA)\n• Joint research on stability and bifurcation for surfaces with constant mean curvature and their generalizations; Miyuki Koiso (Kyushu U, Japan), Bennett Palmer (Idaho State U., USA), and Paolo Piccione (Univ. of Sao Paulo, Brazil)\n 1 Atsufumi Honda, Yu Kawakami, Miyuki Koiso, Syunsuke Tori, Heinz-type mean curvature estimates in Lorentz-Minkowski space, Revista Matematica Complutense, https://doi.org/10.1007/s13163-020-00373-9, 2020.10, We provide a unified description of Heinz-type mean curvature estimates under an assumption on the gradient bound for space-like graphs and time-like graphs in the Lorentz-Minkowski space. As a corollary, we give a unified vanishing theorem of mean curvature for these entire graphs of constant mean curvature.. 2 Miyuki Koiso, Paolo Piccione, Toshihiro Shoda, On bifurcation and local rigidity of triply periodic minimal surfaces in R^3, Annales De L'Institut Fourier, https://doi.org/10.5802/aif.3222, 68, 6, 2743-2778, 2018.11, We study the space of triply periodic minimal surfaces in R^3, giving a result on the local rigidity and a result on the existence of bifurcation. We prove that, near a triply periodic minimal surface with nullity three, the space of triply periodic minimal surfaces consists of a smooth five-parameter family of pairwise non-homothetic surfaces. On the other hand, if there is a smooth one-parameter family of triply periodic minimal surfaces {X_t}_t containing X_0 where the Morse index jumps by an odd integer, it is proved that there exists a bifurcating branch issuing from X_0. We also apply these results to several known examples.. 3 Miyuki Koiso, Bennett Palmer, Paolo Piccione, Stability and bifurcation for surfaces with constant mean curvature, Journal of the Mathematical Society of Japan, 69, 4, 1519-1554, 2017.10, We give criteria for the existence of smooth bifurcation branches of fixed boundary CMC surfaces in R^3, and we discuss stability/instability issues for the surfaces in bifurcating branches. To illustrate the theory, we discuss an explicit example obtained from a bifurcating branch of fixed boundary unduloids in R^3.. 4 Miyuki Koiso, Bennett Palmer, Higher order variations of constant mean curvature surfaces, Calculus of Variations and PDE's, 10.1007/s00526-017-1246-1, 2017.10, We study the third and fourth variation of area for a compact domain in a constant mean curvature surface when there is a Killing field on R^3 whose normal component vanishes on the boundary. 
Examples are given to show that, in the presence of a zero eigenvalue, the non negativity of the second variation has no implications for the local area minimization of the surface.. 5 Miyuki Koiso, Jaigyoung Choe, Stable capillary hypersurfaces in a wedge, Pacific Journal of Mathematics, 10.2140/pjm.2016.280.1, 280, 1, 1-15, 2015.12, Let $\\Sigma$ be a compact immersed stable capillary hypersurface in a wedge bounded by two hyperplanes in $\\mathbb R^{n+1}$. Suppose that $\\Sigma$ meets those two hyperplanes in constant contact angles $\\ge \\pi/2$ and is disjoint from the edge of the wedge, and suppose that $\\partial\\Sigma$ consists of two smooth components with one in each hyperplane of the wedge. It is proved that if $\\partial \\Sigma$ is embedded for $n=2$, or if each component of $\\partial\\Sigma$ is convex for $n\\geq3$, then $\\Sigma$ is part of the sphere. And the same is true for $\\Sigma$ in the half-space of $\\mathbb R^{n+1}$ with connected boundary $\\partial\\Sigma$.. 6 Miyuki Koiso and Bennett Palmer, Equilibria for anisotropic surface energies with wetting and line tension, Calculus of Variations and Partial Differential Equations, 43, 3, 555-587, 2012.01, We study the stability of surfaces trapped between two parallel planes with free boundary on these planes. The energy functional consists of anisotropic surface energy, wetting energy, and line tension. Equilibrium surfaces are surfaces with constant anisotropic mean curvature. We study the case where the Wulff shape is of product form'', that is, its horizontal sections are all homothetic and has a certain symmetry. Such an anisotropic surface energy is a natural generalization of the area of the surface. Especially, we study the stability of parts of anisotropic Delaunay surfaces which arise as equilibrium surfaces. They are surfaces of the same product form of the Wulff shape. We show that, for these surfaces, the stability analysis can be reduced to the case where the surface is axially symmetric and the functional is replaced by an appropriate axially symmetric one. Moreover, we obtain necessary and sufficient conditions for the stability of anisotropic sessile drops.. 7 Miyuki Koiso and Bennett Palmer, Anisotropic umbilic points and Hopf's theorem for surfaces with constant anisotropic mean curvature, Indiana University Mathematics Journal, 59, 1, 79-90, 2010.05, 非等方的表面エネルギーは、曲面の各点における法線方向に依存するエネルギー密度の曲面上での総和 (積分) である。与えられたエネルギー密度関数に対し、同じ体積を囲む閉曲面の中での非等方的表面エネルギーの最小解は(平行移動を除き)一意的に存在し、Wulff図形と呼ばれている。より一般に、囲む体積を変えない変分に対する非等方的表面エネルギーの臨界点は、非等方的平均曲率一定曲面となる。本論文では、3次元ユークリッド空間において、Wulff図形が滑らかな狭義凸曲面であるという仮定のもとで、種数0の非等方的平均曲率一定閉曲面は平行移動と相似を除きWulff図形に限ることを証明した。. 8 Miyuki Koiso and Bennett Palmer, Geometry and stability of surfaces with constant anisotropic mean curvature, Indiana University Mathematics Journal, 54, 6, 1817-1852, Vol.54, No.6, pp.1817-1852, 2005.12. 9 Miyuki Koiso, Deformation and stability of surfaces with constant mean curvature, Tohoku Mathematical Journal (2, 54, 1, 145-159, Vol.54, No.1, pp.145-159, 2002.03.\n 1 Miyuki Koiso, Variational problem for anisotropic surface energy, Geometric Analysis and General Relativity, 2019.11, [URL]. 2 Miyuki Koiso, Variational problems of anisotropic surface energy for hypersurfaces with singular points, AMS Spring Central and Western Joint Sectional Meeting, 2019.03, [URL]. 3 Miyuki Koiso, Towards crystalline variational problems from elliptic variational problems, Introductory workshop on discrete differential geometry, 2019.01, [URL]. 
4 Miyuki Koiso, Uniqueness problem for closed non-smooth hypersurfaces with constant anisotropic mean curvature and applications to anisotropic mean curvature flow, Conference \"Analysis and Geometry in Minimal Surface Theory\", 2018.12, [URL], We study a variational problem for surfaces in the Euclidean space with an anisotropic surface energy. An anisotropic surface energy is the integral of an energy density that depends on the surface normal over the considered surface, which was introduced to model the surface tension of a small crystal. The minimizer of such an energy among all closed surfaces enclosing the same volume is unique and it is (up to rescaling) so-called the Wulff shape. The Wulff shape and equilibrium surfaces of this energy for volume-preserving variations are generalizations of the round sphere and constant mean curvature surfaces, respectively. However, they are not smooth in general. In this talk, we give a suitable formulation of piecewise-smooth hypersurfaces and discuss geometry of equilibrium hypersurfaces. Especially, we give recent results on the uniqueness for closed equilibria and their applications to anisotropic mean curvature flow.. 5 Miyuki Koiso, Crystalline variational problem and applications to capillary problems, 7th International Conference on Mathematical Modeling in Physical Sciences, 2018.08, We study a variational problem for surfaces in the Euclidean space with an anisotropic surface energy. An anisotropic surface energy is the integral of an energy density that depends on the surface normal over the considered surface, which was introduced to model the surface tension of a small crystal. The minimizer of such an energy among all closed surfaces enclosing the same volume is unique and it is (up to rescaling) so-called the Wulff shape. The Wulff shape and equilibrium surfaces of this energy for volume-preserving variations are not smooth in general. In this paper, we give a formulation of piecewise-smooth hypersurfaces and discuss geometry of equilibrium hypersurfaces in the Euclidean space of general dimension. Especially, we give uniqueness and non-uniqueness results for closed equilibria. We also mention applications to anisotropic mean curvature flow and to capillary problems.. 6 Miyuki Koiso, Uniqueness problem for closed non-smooth hypersurfaces with constant anisotropic mean curvature, The 11th Mathematical Society of Japan Seasonal Institute (MSJ-SI): The Role of Metrics in the Theory of Partial Differential Equations, 2018.07, [URL], We study a variational problem for piecewise-smooth hypersurfaces in the (n+1)-dimensional Euclidean space. An anisotropic energy is the integral of an energy density that depends on the normal at each point over the considered hypersurface, which is a generalization of the area of surfaces. The minimizer of such an energy among all closed hypersurfaces enclosing the same (n+1)-dimensional volume is unique and it is (up to rescaling) so-called the Wulff shape. The Wulff shape and equilibrium hypersurfaces of this energy for volume-preserving variations are not smooth in general. In this talk we give recent results on the uniqueness and non-uniqueness for closed equilibria. We also give nontrivial self-similar shrinking solutions of anisotropic mean curvature flow.. 7 Miyuki Koiso, Uniqueness problem for closed non-smooth hypersurfaces with constant anisotropic mean curvature, International Workshop \"Geometry of Submanifolds and Integrable Systems\", 2018.03, [URL]. 
8 Miyuki Koiso, Uniqueness problem for closed non-smooth hypersurfaces with constant anisotropic mean curvature and self-shrinkers of anisotropic mean curvature flow, Workshop \"Minimal Surfaces and Related Topics\", 2018.01, [URL], We study a variational problem for surfaces in the euclidean space with an anisotropic surface energy. An anisotropic surface energy is the integral of an energy density that depends on the surface normal over the considered surface, which was introduced to model the surface tension of a small crystal. The minimizer of such an energy among all closed surfaces enclosing the same volume is unique and it is (up to rescaling) so-called the Wulff shape. The Wulff shape and equilibrium surfaces of this energy for volume-preserving variations are generalizations of the round sphere and constant mean curvature surfaces, respectively. However, they are not smooth in general. In this talk, we show that, if the energy density function is three times continuously differentiable and convex, then any closed stable equilibrium surface is a rescaling of the Wulff shape. Moreover, we show that, there exists a non-convex energy density function such that there exist closed embedded equilibrium surfaces with genus zero which are not (any homothety of) the Wulff shape. This gives also closed embedded self-similar shrinking solutions with genus zero of the anisotropic mean curvature flow other than the Wulff shape. These concepts and results are naturally generalized to higher dimensions.. 9 Miyuki Koiso, Non-uniqueness of closed non-smooth hypersurfaces with constant anisotropic mean curvature and self-shrinkers of anisotropic mean curvature flow, The Third Japanese-Spanish Workshop on Differential Geometry, 2017.09, [URL]. 10 Miyuki Koiso, Non-uniqueness of closed non-smooth hypersurfaces with constant anisotropic mean curvature and self-shrinkers of anisotropic mean curvature flow, The Last 60 Years of Mathematical Fluid Mechanics: Longstanding Problems and New Perspectives: In Honor of Professors Robert Finn and Vsevolod Solonnikov, 2017.08, [URL], We study variational problems for surfaces in the euclidean space with an anisotropic surface energy. An anisotropic surface energy is the integral of an energy density which depends on the surface normal over the considered surface. It was first introduced by Gibbs to model the equilibrium shape of a small crystal. If the energy density is constant one, the anisotropic surface energy is the usual area of the surface. The minimizer of an anisotropic surface energy among all closed surfaces enclosing the same volume is unique (up to translations) and it is called the Wulff shape. Equilibrium surfaces of a given anisotropic surface energy functional for volume-preserving variations are called surfaces with constant anisotropic mean curvature (CAMC surfaces). In general, the Wulff shape and CAMC surfaces are not smooth. If the energy density satisfies the so-called convexity condition, the Wulff shape is a smooth convex surface and closed embedded CAMC surfaces are only homotheties of the Wulff shape. In this talk, we show that if the convexity condition is not satisfied, such a uniqueness result is not always true, and also the uniqueness for self-shrinkers with genus zero for anisotropic mean curvature flow does not hold in general. These concepts and results are naturally generalized to higher dimensions.. 
11 Miyuki Koiso, Geometry of anisotropic surface energy, The 13th annual international conference of KWMS (Korean Women in Mathematical Science), 2017.06, [URL], One of the most important subjects in geometry is variational problem. In this talk, we study variational problems for surfaces in the euclidean space with an anisotropic surface energy. An anisotropic surface energy is the integral of an energy density which depends on the surface normal over the considered surface. It was first introduced by Gibbs to model the equilibrium shape of a small crystal. If the energy density is constant one, the anisotropic surface energy is the usual area of the surface. The minimizer of an anisotropic surface energy among all closed surfaces enclosing the same volume is unique (up to translations) and it is called the Wulff shape. Equilibrium surfaces of a given anisotropic surface energy functional for volume-preserving variations are called surfaces with constant anisotropic mean curvature (CAMC surfaces). In general, the Wulff shape and CAMC surfaces are not smooth. Around each regular (smooth) point, they are graphs of solutions of a second order quasilinear elliptic partial differential equation. These concepts are naturally generalized to higher dimensions, and they have many applications inside and outside mathematics. In this talk, we give fundamental geometric and analytic properties of CAMC hypersurfaces and recent progress in the research on the uniqueness of closed CAMC hypersurfaces with and without singularities.. 12 Miyuki Koiso, Stability and bifurcation for surfaces with constant mean curvature, Workshop on \"Geometric Inequalities on Riemannian Manifolds\", 2016.11, [URL], A surface with constant mean curvature (CMC surface) is an equilibrium surface of the area functional among surfaces which enclose the same volume and satisfy given boundary conditions. A CMC surface is said to be stable if the second variation of the area is nonnegative for all volume-preserving variations. In this talk we first give criteria for stability of CMC surfaces in R^3. We also give a sufficient condition for the existence of smooth bifurcation branches of fixed boundary CMC surfaces, and we discuss stability/instability issues for the surfaces in bifurcating branches. By applying our theory, we determine the stability/instability of some explicit examples of CMC surfaces.. 13 小磯 深幸, Local structure of the space of all triply periodic minimal surfaces in R^3, Workshop \"Geometric aspects on capillary problems and related topics\", 2015.12, [URL], We study the space of triply periodic minimal surfaces in ${\\mathds R}^3$, giving a result on the local rigidity and a result on the existence of bifurcation.We prove that, near a triply periodic minimal surface with nullity three, the space of triply periodic minimal surfaces consist of a smooth five-parameter family of pairwise non-homothetic surfaces. On the other hand, if there is a smooth one-parameter family of triply periodic minimal surfaces $\\{X_t\\}_t$ containing $X_0$ where the Morse index jumps by an odd integer, it will be proved the existence of a bifurcating branch issuing from $X_0$. We also apply these results to several known examples.. 14 小磯 深幸, Stable capillary hypersurfaces in a wedge and uniqueness of the minimizer, Asymptotic Problems: Elliptic and Parabolic Issues, 2015.06, [URL], Let $\\Sigma$ be a compact immersed stable capillary hypersurface in a wedge bounded by two hyperplanes $\\Pi_1$, $\\Pi_2$ in $\\mathbb R^{n+1}$. 
Suppose $\\Sigma$ meets each $\\Pi_i$ in constant contact angle not less than $\\pi/2$. We prove that if $\\partial \\Sigma$ is embedded for $n=2$, or if $\\partial\\Sigma$ is convex for $n\\geq3$, then $\\Sigma$ is part of the round sphere.. 15 小磯 深幸, Bifurcation theory for minimal and constant mean curvature surfaces, Conference on Geometry, 2014.03, [URL], We construct general criteria for existence and nonexistence of (continuous and discrete) bifurcation for minimal and constant mean curvature surfaces. For continuous bifurcation, we also give a criterion for stability for each surface in the bifurcation branch. We apply our general results to several concrete boundary value problems. Especially, we mention the existence of unknown examples of triply periodic minimal surfaces in the Euclidean three-space which are close to known examples. This talk is based on joint work with Bennett Palmer (Idaho State U., USA) and Paolo Piccione (University of Sao Paulo, Brazil), and joint work with Paolo Piccione and Toshihiro Shoda (Saga U., Japan). . 16 小磯 深幸, Stable capillary hypersurfaces in a wedge and uniqueness of the minimizer, The second Japanese-Spanish workshop on Differential Geometry, 2014.02, [URL], We study a variational problem for immersed hypersurfaces in a wedge bounded by two hyperplanes in $\\mathbb R^{n+1}$. The total energy of each hypersurface is the $n$-dimensional surface area and a positive wetting energy'' on the supporting hyperplanes, and we impose the $(n+1)$-dimensional volume constraint enclosed by the hypersurfaces. Any stationary hypersurface $\\Sigma$ is a hypersurface with constant mean curvature which meets each supporting hyperplane with constant contact angle, and it is said to be stable if the second variation of the energy is nonnegative for all admissible variations. We show that if $\\Sigma$ is stable and is disjoint from the edge of the wedge, and if $\\partial \\Sigma$ is embedded for $n=2$, or if $\\partial\\Sigma$ is convex for $n\\geq3$, then $\\Sigma$ is part of the hypersphere. Our results also show that the space of stable solutions is not continuous with respect to the variation of the boundary condition. Moreover, we mention the uniqueness of the minimizer. This is joint work with Jaigyoung Choe (KIAS, Korea).. 17 小磯 深幸, Geometry of hypersurfaces with constant anisotropic mean curvature, The 2013 Annual Meeting of the Taiwan Mathematical Society, 2013.12, [URL], A surface with constant anisotropic mean curvature (CAMC surface) is a stationary surface of a given anisotropic surface energy functional for volume-preserving variations. For example, minimal surfaces and surfaces with constant mean curvature in the Euclidean space and those in the Lorentz-Minkowski space are regarded as CAMC surfaces for a certain special anisotropic surface energy. The minimizer of an anisotropic surface energy among all closed surfaces enclosing the same volume is called the Wulff shape, and the minimizer among surfaces with free boundary on a given support surface is sometimes called the Winterbottom shape. These concepts can be naturally generalized to higher dimensions, and they have many applications inside and outside mathematics. In this talk, we give fundamental geometric properties of CAMC hypersurfaces and recent progress in the research on the stability of CAMC hypersurfaces with free or fixed boundaries.. 
18 小磯 深幸, Free boundary problem for surfaces with constant mean curvature, International Workshop on Special Geometry and Minimal Submanifolds, 2013.08, [URL], We study embedded surfaces of constant mean curvature with free boundary in given supporting planes in the euclidean three-space. We assume that each considered surface meets the supporting planes with constant contact angle. These surfaces are characterized as equilibrium surfaces of the variational problem of which the total energy is the surface area and a wetting energy (that is a weighted area of the domains in the supporting planes bounded by the boundary of the considered surface) with volume constraint. An equilibrium surface is said to be stable if the second variation of the energy is nonnegative for all volume-preserving variations satisfying the boundary condition. We are interested in determining all (stable) solutions. At present in literature, only for some special cases, for example, the supporting planes are either just a single plane or two parallel planes and the wetting energy is nonnegative, all stable solutions are known. We discuss recent progress of this subject and show the space of solutions is not continuous with respect to the boundary condition. . 19 小磯 深幸, Bernstein-type theorems for surfaces with constant anisotropic mean curvature and CMC surfaces in the Lorentz-Minkowski space, 7th International Meeting on Lorentzian Geometry, 2013.07, [URL], A surface with constant anisotropic mean curvature (CAMC surface) is astationary surface of a given anisotropic surface energy functional forvolume-preserving variations. Surfaces with constant mean curvature (CMCsurfaces) in the Lorentz-Minkowski space are regarded as CAMC surfacesfor a certain special anisotropic surface energy. In this talk, we showthat if a complete CAMC surface for a uniformly convex anisotropicsurface energy in the euclidean three-space is a graph of a function ina whole plane, then it is a plane. Moreover, by using a similar method,we show that if a spacelike complete CMC surface in the Lorentz-Minkowski three-space satisfies a certain condition on the order ofdivergence of its Gauss map, then it is a plane.. 20 小磯 深幸, Non-convex anisotropic surface energy and zero mean curvature surfaces in the Lorentz-Minkowski space, The 5th OCAMI-TIMS Joint International Workshop on Differential Geometry and Geometric Analysis, 2013.03, [URL], We study stationary surfaces of anisotropic surface energies in the euclidean three-space which are called anisotropic minimal surfaces. Usual minimal surfaces, zero mean curvature spacelike surfaces and timelike surfaces in the Lorenz-Minkowski space are regarded as anisotropic minimal surfaces for certain special axisymmetric anisotropic surface energies. In this talk, for any axisymmetric anisotropic surface energy, we show that, a surface is both a minimal surface and an anisotropic minimal surface if and only if it is a right helicoid. We also construct new examples of anisotropic cyclic minimal surfaces for certain reasonable classes of energy density. Our examples include zero mean curvature timelike surfaces and spacelike surfaces of catenoid-type and Riemann- type. This is a joint work with Atsufumi Honda (Tokyo Institute of Technology). . 
21 小磯 深幸, Geometry of isoperimetric-type problems modeled on interfaces on micrometre scale, Workshop on Geometry of Interfaces and Capillarity, 2012.06, [URL], We study geometry of isoperimetric-type problems modeled on interfaces on micrometre scale among two or three different phases. Our main subject is surfaces with constant (anisotropic) mean curvature with free or fixed boundary. We discuss existence, stability, bifurcation, and topological transition for solutions.. 22 Miyuki Koiso, Geometric analysis for variational problems of isoperimetric type, Invited Organized Talk, Annual meeting of the Mathematical Society of Japan, Tokyo University of Science, March 26, 2012., [URL]. 23 Miyuki Koiso, Bifurcation and stability for solutions of isoperimetric problems, Isoperimetric problems, space-filling, and soap bubble geometry, Mar 19, 2012 - Mar 23, 2012, ICMS (International Center for Mathematical Sciences), 15 South College Street Edinburgh, UK, Organisers: Cox, Simon (Institute of Mathematics and Physics), Morgan, Frank (Williams College), Sullivan, John (Technische Universitat Berlin)., [URL]. 24 Miyuki Koiso, Pitchfork bifurcation for hypersurfaces with constant mean curvature, The 10th Pacific Rim Geometry Conference 2011 Osaka-Fukuoka (December 1-5, Osaka City University, December 7-9, Kyushu University), December 7, 2011., [URL]. 25 Miyuki Koiso, Stability of surfaces with constant anisotropic mean curvature and applications to physical phenomena, III Encontro Paulista de Geometria (San Paulo, Brazil), August 9, 2011, [URL]. 26 Geometric variational problems and bifurcation theory, [URL]. 27 Geometry of hypersurfaces with constant anisotropic mean curvature, [URL]. 28 Stability and bifurcation for surfaces with constant mean curvature and their generalizations, [URL]. 29 Stability and bifurcation for surfaces with constant mean curvature and their generalizations, [URL]. 30 Stability and bifurcation for solutions of isoperimetric type problems, [URL]. 31 , [URL]. 32 , [URL]. 33 , [URL].", null, "" ]
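A reader new to this area may find it helpful to see the object that recurs throughout the abstracts above written out once. The following is a standard formulation in our own notation; it is consistent with, but not quoted from, the entries above:

$$\mathcal{E}(\Sigma)=\int_{\Sigma}\gamma(\nu)\,dA,$$

where $\Sigma$ is an immersed hypersurface, $\nu$ its unit normal, and $\gamma$ the (anisotropic) energy density defined on the unit sphere. For $\gamma\equiv 1$, $\mathcal{E}$ is the usual area. Critical points of $\mathcal{E}$ among hypersurfaces enclosing a fixed volume are exactly the hypersurfaces with constant anisotropic mean curvature (CAMC), and the volume-constrained minimizer among closed hypersurfaces is the Wulff shape.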
[ null, "https://hyoka.ofc.kyushu-u.ac.jp/search/details/K003646/images/common/loading.gif", null, "https://hyoka.ofc.kyushu-u.ac.jp/search/images/common/header_logo_en.gif", null, "https://hyoka.ofc.kyushu-u.ac.jp/search/images/common/header_lang_jp.gif", null, "https://hyoka.ofc.kyushu-u.ac.jp/search/images/common/header_mail_en.gif", null, "https://hyoka.ofc.kyushu-u.ac.jp/search/details/K003646/images/common/sp_banner_en.jpg", null, "https://hyoka.ofc.kyushu-u.ac.jp/search/details/K003646/images/common/Pure_en4.png", null, "https://hyoka.ofc.kyushu-u.ac.jp/search/images/details/detail_header_en.jpg", null, "https://hyoka.ofc.kyushu-u.ac.jp/search/images/details/name_line.gif", null, "https://hyoka.ofc.kyushu-u.ac.jp/search/details/K003646/images/common/wide_column_bottom.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86271554,"math_prob":0.93839353,"size":21379,"snap":"2022-05-2022-21","text_gpt3_token_len":4829,"char_repetition_ratio":0.19162573,"word_repetition_ratio":0.3282812,"special_character_ratio":0.20005614,"punctuation_ratio":0.12299319,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9782876,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,9,null,null,null,null,null,null,null,null,null,7,null,null,null,null,null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-23T00:39:34Z\",\"WARC-Record-ID\":\"<urn:uuid:9cbaf057-886d-4c55-a5b8-d0e5bfe7dae8>\",\"Content-Length\":\"52065\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ac62842c-d7b7-46c5-87cd-ee6da77997a0>\",\"WARC-Concurrent-To\":\"<urn:uuid:6cf30052-c2eb-48fc-96c6-e98bcb80fe8b>\",\"WARC-IP-Address\":\"133.5.40.62\",\"WARC-Target-URI\":\"https://hyoka.ofc.kyushu-u.ac.jp/search/details/K003646/english.html\",\"WARC-Payload-Digest\":\"sha1:G25E6FJXABGXCTBHPK6LMY53ZNKD3QSC\",\"WARC-Block-Digest\":\"sha1:C7WYLNT7NJIFONC6YWNW2OVMGZGLL3VU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320303917.24_warc_CC-MAIN-20220122224904-20220123014904-00046.warc.gz\"}"}
https://2022.help.altair.com/2022/ss/en_us/topics/simsolid/pre-processing/load_bolt_nut_tightening_r.htm
[ "# Bolt and Nut Tightening Background Information\n\nBolt forces can be applied to bolt and nut geometries in SimSolid.\n\n## Bolts and Nuts\n\nIn SimSolid, bolts are automatically identified by their geometric attributes. Bolts are required to have cylindrical bodies and a head with a hexahedral based shape. The hex shape can be on an outer or inner diameter in the bolt head. Nuts are identified in a similar manner using this hex based geometric signature.\nIn SimSolid, tightening loads can be applied to a variety of geometries, including the following:\n• Blind bolts\n• Bolts with nuts\n• Nuts on a generic post or handle\n\n## Relationship Between Torque M and Axial Force F\n\nM is the maximum moment realized at the end of the tightening and it is equilibrated by moment from friction forces in contact between nut and the structure.\n\nAssume for simplicity that normal forces in contact are distributed evenly, so the contact pressure is as follows:(1)\n$P=\\frac{F}{ContactArea}$\n(2)\n$P=\\frac{F}{\\pi \\left(R{1}^{2}-R{0}^{2}\\right)}$\n\nR0 and R1 are inner and outer radii of the contact spot. Friction distributed force will be $T=f\\ast P$ where f is a friction coefficient.\n\nIn a polar coordinate system, the elementary moment of the friction force with respect to the bolt axis is:\n$dM=T\\ast {r}^{2}\\ast dR\\ast dTet$\nWhere r is the distance to axis while dR and dTet are radius and angle differentials respectively.\nIntegrate the elementary moment over the contact area to obtain the following:\n$M=\\frac{2\\ast F\\ast f\\ast \\left(R{1}^{3}-R{0}^{3}\\right)}{3\\ast \\left(R{1}^{2}-R{0}^{2}\\right)}$\nThis equation relates applied torque, M, and axial force.\n\n## Axial Force\n\nAxial force depends on the structure and bolt stiffness, and on nut placement relative to the bolt:(3)\n$F=K\\ast D$\nK is structure stiffness factor, and D is relative displacement.\nRelative displacement can be expressed by the following:\n$D=N\\ast H$\nHere, N is number of nut turns and H is thread pitch. Therefore,\n$F=K\\ast H\\ast N$\n(equation A)\nAssume that at first analysis pass one nut turn is described (N(1)=1), and corresponded axial force F(1) is found from the analysis. The structure stiffness factor in this case can be defined as the following:(4)\n$F\\left(1\\right)=K\\ast H\\ast 1$\n\nThis implies: $F=F\\left(1\\right)\\ast N$ .\n\nNow you can relate torque to the number of turns:(5)\n$M=\\frac{2\\ast N\\ast F\\left(1\\right)\\ast f\\ast \\left(R{1}^{3}-R{0}^{3}\\right)}{3\\ast \\left(R{1}^{2}-R{0}^{2}\\right)}$\nTherefore, in order to realize prescribed torque M, after the first analysis is done with N=1, a second analysis (second convergence pass) must be performed using the following equation: (6)\n$N\\left(2\\right)=M/\\left|\\frac{2\\ast N\\ast F\\left(1\\right)\\ast f\\ast \\left(R{1}^{3}-R{0}^{3}\\right)}{3\\ast \\left(R{1}^{2}-R{0}^{2}\\right)}\\right|$\nIn general, at pass (i+1) the number of turns applied is as follows: (7)\n$N\\left(i+1\\right)=M/\\left|\\frac{2\\ast N\\left(i\\right)\\ast F\\left(i\\right)\\ast f\\ast \\left(R{1}^{3}-R{0}^{3}\\right)}{3\\ast \\left(R{1}^{2}-R{0}^{2}\\right)}\\right|$\nHere, N(i) is the number of turns applied at previous passes, and F (i) is result axial force evaluated at previous pass. These corrections for number of turns applied are important because in the course of passes solution is refined, which changes structure stiffness factor K in equation A above. So, K is not constant, but depends on pass K(i)." ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9002986,"math_prob":0.99904335,"size":2588,"snap":"2023-40-2023-50","text_gpt3_token_len":592,"char_repetition_ratio":0.1002322,"word_repetition_ratio":0.004608295,"special_character_ratio":0.20672333,"punctuation_ratio":0.09859155,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998072,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-26T23:43:32Z\",\"WARC-Record-ID\":\"<urn:uuid:a48b95ff-65f8-4f40-82da-27bd3d22d0f8>\",\"Content-Length\":\"84699\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cb6c8306-6e7f-4fa2-b94a-bb55dc67152b>\",\"WARC-Concurrent-To\":\"<urn:uuid:5c8858ef-35b1-4754-a83a-19e9cdd30869>\",\"WARC-IP-Address\":\"173.225.177.121\",\"WARC-Target-URI\":\"https://2022.help.altair.com/2022/ss/en_us/topics/simsolid/pre-processing/load_bolt_nut_tightening_r.htm\",\"WARC-Payload-Digest\":\"sha1:KKVMJYWT42BWZZKQDWEMP3J7NZ6EUEL6\",\"WARC-Block-Digest\":\"sha1:VAZOROSMYZQC6QGCG4YMQNJYDR6YM7DD\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510225.44_warc_CC-MAIN-20230926211344-20230927001344-00319.warc.gz\"}"}
https://stats.stackexchange.com/questions/289321/how-to-prove-whether-the-mean-of-a-probability-density-function-exists/289347
[ "# How to prove whether the mean of a probability density function exists\n\nIt is well known that given a real-valued random variable $X$ with pdf $f$, the mean of $X$ (if it exists) is found by \\begin{equation} \\mathbb{E}[X]=\\int_{\\mathbb{R}}x\\,f(x)\\,\\mathrm{d}x\\,. \\end{equation}\n\nGeneral question: Now, if one cannot solve the above integral in closed form but wants to simply determine if the mean exists and is finite, is there a way to prove that? Is there (perhaps) some test I can apply to the integrand to determine if certain criteria are met for the mean to exist?\n\nApplication specific question: I have the following pdf for which I want to determine if the mean exists: \\begin{equation} f(x)=\\frac{|\\sigma_{2}^{2}\\mu_{1}x+\\mu_{2}\\sigma_{1}^{2}|}{\\sigma_{1}^{3}\\sigma_{2}^{3}a^{3}(x)}\\,\\phi\\left(\\frac{\\mu_{2}x-\\mu_{1}}{\\sigma_{1}\\sigma_{2}a(x)}\\right)\\qquad \\text{for}\\ x\\in\\mathbb{R}\\,, \\end{equation}\n\nwhere $\\mu_{1},\\mu_{2}\\in\\mathbb{R}$, $\\sigma_{1},\\sigma_{2}>0$, $a(x)=\\left(\\frac{x^{2}}{\\sigma_{1}^{2}}+\\frac{1}{\\sigma_{2}^{2}}\\right)^{1/2}$, and $\\phi(g(x))=\\frac{1}{\\sqrt{2\\pi}}\\,e^{-g^{2}(x)/2}$.\n\nI have tried to solve for the mean to no avail.\n\n• in your specific question $f(x)$ is not a proper density function. suppose $\\mu_1 =1$, $\\mu_2=0$ and $\\sigma_j = 1$, $j=1,2$, then $f(x)<0$ for $x<0$. – EliKa Jul 7 '17 at 12:46\n• @EliKa Good find. There may be a typo. I will check and correct the question. That said, I am still mostly interested in the \"how\" part of the question, i.e. how would I got about determining if the mean exists and is finite? – Aaron Hendrickson Jul 7 '17 at 13:11\n• You could try bounding $\\lvert x f(x) \\rvert$ above and below by some nonnegative functions $u(x)$ and $b(x)$ such that you can integrate them. If you can integrate $u(x)$, then your distribution has a mean. If $\\int b(x)dx = \\infty$, then your distribution has no mean. – Ceph Jul 7 '17 at 13:33\n• @Ceph That's a good suggestion. Is that technique based on the \"squeeze theorem\"? – Aaron Hendrickson Jul 7 '17 at 13:36\n• @AaronHendrickson Similar idea, but (as I understand it) the squeeze theorem is a little different. Using the ST here might look like this: you find $u(x)$ and $b(x)$ that bound $xf(x)$ (rather than bounding $\\lvert x f(x) \\rvert$ as in my earlier comment) such that you can find $\\int u(x) dx = \\int b(x) dx= \\mu$, where $\\mu$ is the mean of your distribution. But that is probably not a plausible strategy, since you would be hard pressed to find such $u$ and $b$. (They could differ from $xf(x)$ only on a set of measure 0 and so would probably not be any easier to integrate than $xf(x)$ is.) – Ceph Jul 7 '17 at 13:42\n\nThere is no general technique, but there are some simple principles. One is to study the tail behavior of $f$ by comparing it to tractable functions.\n\nBy definition, the expectation is the double limit (as $y$ and $z$ vary independently)\n\n$$E_{y,z}[f] = \\lim_{y\\to-\\infty,z\\to\\infty}\\int_y^z x f(x) dx = \\lim_{y\\to-\\infty}\\int_y^0 x f(x) dx+ \\lim_{z\\to\\infty}\\int_0^z x f(x) dx.$$\n\nThe treatment of the two integrals at the right is the same, so let's focus on the positive one. One behavior of $f$ that assures a limiting value is to compare it to the power $x^{-p}$. Suppose $p$ is a number for which $$\\liminf_{x\\to\\infty} x^p f(x)\\gt 0.$$ This means there exists an $\\epsilon\\gt 0$ and an $N\\gt 1$ for which $x^p f(x) \\ge \\epsilon$ whenever $x\\in[N,\\infty)$. 
We may exploit this inequality by breaking the integration into the regions where $x\\lt N$ and $x \\ge N$ and applying it in the second region:\n\n\\eqalign{ \\int_0^z x f(x) dx &=\\int_0^{N} x f(x) dx + \\int_{N}^z x f(x) dx \\\\ &=\\int_0^{N} x f(x) dx + \\int_{N}^z x^{1-p} \\left(x^p f(x)\\right) dx \\\\ &\\ge \\int_0^{N} x f(x) dx + \\int_{N}^z x^{1-p} \\left(\\epsilon\\right) dx \\\\ &= \\int_0^{N} x f(x) dx + \\frac{\\epsilon}{2-p}\\left(z^{2-p} - {N}^{2-p}\\right). }\n\nProvided $p\\lt 2$, the right hand side diverges as $z\\to\\infty$. When $p=2$ the integral evaluates to the logarithm,\n\n$$\\int_{N}^z x^{1-2} \\left(\\epsilon\\right) dx = \\epsilon \\left(\\log(z) - \\log(N)\\right),$$\n\nwhich also diverges.\n\nComparable analysis shows that if $|x|^pf(x)\\to 0$ for $p\\gt 2$, then $E[X]$ exists. Similarly we may test whether any moment of $X$ exists: for $\\alpha\\gt 0$, the expectation of $|X|^\\alpha$ exists when $|x|^{p+\\alpha}f(x)\\to 0$ for some $p\\gt 1$ and does not exist when $\\liminf |x|^{p+\\alpha}f(x)\\gt 0$ for some $p \\le 1$. This addresses the \"general question.\"\n\nLet's apply this insight to the question. By inspection it is clear that $a(x)\\approx |x|/\\sigma_1$ for large $|x|$. In evaluating $f$, we may therefore drop any additive terms that will eventually be swamped by $|x|$. Thus, up to a nonzero constant, for $x\\gt 0$\n\n$$f(x) \\approx \\frac{\\mu_1 x}{\\sigma_2 x^3}\\phi\\left(\\frac{\\mu_2 x}{\\sigma_2 x}\\right) = x^{-2}\\frac{\\mu_1}{\\sigma_2}\\exp\\left(\\left(-\\frac{\\mu_2}{2\\sigma_2}\\right)^2\\right).$$\n\nThus $x^2 f(x)$ approaches a nonzero constant. By the preceding result, the expectation diverges.\n\nSince $2$ is the smallest value of $p$ that works in this argument--$|x|^pf(x)$ will go to zero as $|x|\\to\\infty$ for any $p\\lt 2$--it is clear (and a more detailed analysis of $f$ will confirm) that the rate of divergence is logarithmic. That is, for large $|y|$ and $|z|$, $E_{y,z}[f]$ can be closely approximated by a linear combination of $\\log(|y|)$ and $\\log(|z|)$." ]
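A quick numerical check of this conclusion (a sketch of ours, not part of the original answer): for one arbitrary choice of parameters, the positive part of the integral should grow like $\log z$, so its ratio to $\log z$ levels off as $z$ increases.

```python
# Sketch: check numerically that the positive part of E[X] for the pdf
# above grows like log(z). Parameter values are an arbitrary choice.
import numpy as np

mu1, mu2, s1, s2 = 1.0, 0.5, 1.0, 2.0

def a(x):
    return np.sqrt(x**2 / s1**2 + 1.0 / s2**2)

def phi(g):
    return np.exp(-0.5 * g**2) / np.sqrt(2.0 * np.pi)

def f(x):
    return (np.abs(s2**2 * mu1 * x + mu2 * s1**2)
            / (s1**3 * s2**3 * a(x)**3)
            * phi((mu2 * x - mu1) / (s1 * s2 * a(x))))

# Integrate x*f(x) on (0, z] with the trapezoidal rule on a log-spaced
# grid (the integrand is smooth and positive, so this is adequate).
for z in [1e2, 1e4, 1e6, 1e8]:
    x = np.logspace(-6, np.log10(z), 400_000)
    y = x * f(x)
    val = float(((y[1:] + y[:-1]) * np.diff(x) / 2.0).sum())
    print(f"z = {z:.0e}: integral ~ {val:8.3f}, ratio to log(z) = {val / np.log(z):.4f}")
```

The ratio to $\log z$ stabilizing at a nonzero constant is the numerical signature of the logarithmic divergence derived above.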
{"ft_lang_label":"__label__en","ft_lang_prob":0.7346633,"math_prob":0.9999832,"size":2718,"snap":"2021-31-2021-39","text_gpt3_token_len":962,"char_repetition_ratio":0.11495947,"word_repetition_ratio":0.022277229,"special_character_ratio":0.35614422,"punctuation_ratio":0.07020548,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000049,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-25T17:12:04Z\",\"WARC-Record-ID\":\"<urn:uuid:762bdf54-e040-40ec-b442-d0dbbf04c13f>\",\"Content-Length\":\"173311\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:40d9e032-49ee-4ce8-8a60-2b69e6d04cb5>\",\"WARC-Concurrent-To\":\"<urn:uuid:e17a6438-c25c-482f-93ab-d6270657a973>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/289321/how-to-prove-whether-the-mean-of-a-probability-density-function-exists/289347\",\"WARC-Payload-Digest\":\"sha1:C7JRP6RGSIHGWUBNRZY43V5K34ZNGQLM\",\"WARC-Block-Digest\":\"sha1:EXRYYSUNBVAU6ZG6VQ3H3HJNZ3LPVCJQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046151699.95_warc_CC-MAIN-20210725143345-20210725173345-00644.warc.gz\"}"}
http://vglm.redgiant.it/convert-voltage-to-degrees-calculator.html
[ "# Convert Voltage To Degrees Calculator\n\nSide Effects: None. Your car’s exhaust system (below) comprises: a catalytic converter (s), at least two oxygen sensors and exhaust pipes that route the engine’s exhaust from the engine, through the catalytic converter, and out the car. Plane Angle 8 B. The converter may be fed from single phase or three phases. A typical question: What is the frequency and the phase angle of a sinusoidal waveform? Does \"one\" signal can really have a phase? Two \"in-phase\" waves have a phase (angle) of φ = 0 degrees. We can change this formula around a bit using algebra to restate it as voltage is equal to power divided by current. A three phase fully controlled bridge converter operating from a 3 phase 220 V, 50 Hz supply is used to charge a battery bank with nominal voltage of 240 V. T 1 and T 2 do not need to be exactly 10. Convert to watt, horsepower, joules per hour, and BTU per hour. 1v gives us 100 degrees Celcius so the calculated voltage will give: Calculate voltage multiplied by 100. a = convtemp([0 100 15], 'C', 'F'). CGS unit of magnetic field strength is oersted, and SI unit is ampere/meter. 5 from the voltage is to correct for a 500mV offset voltage on the sensor output. Convert feet to meters, yards to meters, miles to kilometers, inches to centimeters, and more. Low voltage operation (2. A custom-. (Be sure “degrees” mode is selected, and not “radians. com? This is a website where, besides unit conversions, you will find many other calculators related to Math, Finance, health among others. • For all HVDC converters twelve pulse bridge converters are used. Heat Flux 17 Q. DC voltage characteristics Difference in size (L×W) Figure 4. We will use the on-board analog-to-digital converter to read the voltage and convert it to a value the computer can record. The PH600A280-24 DC-DC converter accepts a wide range of 200-to-425-volt-DC inputs delivering 24 volts at 12. the steps to follow as this type of question will probably be in my upcoming exam. Converting Resistance (ohms) to Temperature (K) : Results. It provides a reference for the S/D converter to use when sampling the stator voltages. Be sure to check and see what your hydrometer is calibrated for, while most hydrometers are calibrated to be accurate at 68 degrees (F) / 20 degrees (C) not all of them are. We can convert the radian values into degree values very simply. We will use the on-board analog-to-digital converter to read the voltage and convert it to a value the computer can record. To convert this voltage to temperature in degree Celsius, we are going to use the scale factor of. Models IP411 and IP413 accept 4-20 mA, 0-10 V and 0-5 V signals and convert these analog signals to proportional linear pneumatic outputs (3-15 or 0-20 psi). In programming part conversion factor is used to convert voltage back into temperature. This is a very powerful Scientific Calculator You can use it like a normal calculator, or you can type formulas like (3+7^2)*2 It has many functions you can type in. It converts between all temperature scales, and you can optionally view the formulas and step by step conversion process. Voltage Mode Modulator As the control voltage increases, the duty cycle of the output increases as well. Conversion of Thermocouple Voltage to Temperature Gerald Recktenwaldy February 27, 2020 Abstract This article provides a practical introduction to the conversion of ther-mocouple voltage to temperature. PFC Boost Converter Design Guide Application Note 3 Revision1. 
Did you know that many modern rubber ducky antennas found in routers use 130-year old technology invented by Heinrich Hertz in 1886?. Example: A 4-20mA temperature transmitter with a range of 0 tot 100 degrees Celsius measures a temperature of 20 degrees. See Server PSU articles for individual range specs. ; 47kΩ = 47 kilo-ohms - which is equal to 47 thousand Ohms. Just type the number of seconds into the box and the conversion will be performed automatically. Beginning with a description of the Seebeck e ect, the basic equations relating EMF and temperature are presented. Type in unit symbols, abbreviations, or full names for units of length. They were left in this form so that you could calculate a more accurate conversion factor. As long as the converter does its job efficiently, the vehicle will meet emissions and pass both a tailpipe emissions check and/or an OBDII plug-in emissions test. High Efficiency Optimization of LLC Resonant Converter for Wide Load Range Ya Liu Abstract As information technology advances, so does the demand for power management of telecom and computing equipment. This paper presents a modified sinusoidal pulse-width modulation (SPWM) switching technique in three-phase ac–dc buck converter with new modulation strategy. These universal ac dc power converters are used for converting 110 volt or 220 volt household AC electricity to DC battery power so you can use your 12 volt DC, 24 volt, 3V, 6V,9V,12V,15V or 18V DC products at home, office or on the road. To use the temperature converter, select the temperature converter from the menu and add the number you want to convert to other units of temperature like Celsius, Fahrenheit in the \"From\" field, and select the unit in which you want to turn from the \"To\" field. When the load current is positive, the positive converter supplies the required voltage, and the negative converter is blocked. First we need to convert each segment from degrees into radians. This MATLAB function computes the conversion factor from specified input temperature units (inputTemperatureUnits) to specified output temperature units (outputTemperatureUnits). io Find the IoT board you’ve been searching for using this interactive solution space to help you visualize the product selection process and showcase important trade-off. This is mentioned in part 1 step 3 just a little bit. e degree Celsius into Fahrenheit and vice versa. The arrow shows the positive acceleration. 83 grams; the specific heat capacity (Cg) for copper is listed (on average) as being 0. 00385), a 1. The step-up ratio has to be a little higher to overcome diode losses, winding resistance and so on and input voltage drop due to wire resistance from battery to converter. 4-20 mA signals are used to transfer a physical value such as a temperature, pressure, liquid level or other physical quantity. Units, symbols and. Date Difference Calculator. Units include MPa, GPa, pounds, kilograms, psi, grams, ounces, watts, joules, and many more. 095 mV to 0 mV Error Range. Convert the compensated voltage (V) calculated in step 4 to temperature (T). I'm going to need to convert a range of -10 to +10 volts into a range of 0 to +5 volts. a) π 15, b) π 5. about 2 choices a 1800 b t u or the second choice 12,000 220 volts outlet in the living room and 6000, months of 95 degree days I. 5 to 200 watts with outputs to 8000Vdc. The main source of atomization is absorbed water droplets exploding, which boil at 212 degrees. 
Note that in cell A4, the Excel Degrees function is used to convert the calculated angle from radians to degrees. Polar to Rectangular Online Calculator Below is an interactive calculator that allows you to easily convert complex numbers in polar form to rectangular form, and vice-versa. 414Volts RMS = volts peak times 0. Optionally: convert temperature units from Kelvins to degrees C or degrees F. Ttemp Convert EMF to temperature for a T-type thermocouple. Volts Phase-Phase to Volts Phase-Neutral (Volts LL → Volts LN), with this conersion you can go from Volts FF to Volts FN with many examples, the formula used in the conversion of monophasic, biphasic and three-phase voltages and a table with the main conversions. , and if I have the power setting too high then I have to turn up the temperature because it over-protects. Gxyz method returns (what I understand) are voltage levels from 0 to about 1. For example, 1-800-555-1234= will return a result, but 1/0= will not (because dividing a non-zero number by zero is undefined and not computable). Nichrome is wound in coils to a certain electrical resistance, and current is passed through it to produce heat. a machine or device that…. A few of these include: I2C or Serial Sensors - There are advanced sensor modules that often can measure barometric pressure, temperature, humidity, and other conditions all in one package. If the calculator doesn't show up when you enter in an equation: Make sure your equation is something that can be computed. Below are two simple formulas to find the rating of Single phase and Three phase Transformers. The computer must be programmed to work with whatever type of MAP sensor you are using, or the fuel and spark delivery will not be correct (and trouble codes may set). An air/fuel mixture of 14. A conversion scale graphic for each temperature and 4-20mA current output range combination entered is also displayed below the calculated values. By multiplying air velocity by the cross section area of a duct, you can determine the air volume. Temperature conversion formula: F = ( 1. • Relation between analog signal and digital equivalent n a i re i f i1 V b 2− V = =×∑ • AD conversion – Va-> bi (encoder) ex) Transducer interface. ”) Phase Angle = 37 degrees. If this equilibrium condition establishes the same temperature Tx as reached before with the. DC voltage characteristics Difference in Rated-voltage As to choosing the capacitor with higher rated-voltage, does not. Pressure 12 L. Divide that result by 60 to devise the gas transfer rate in. Specific Energy Per Unit Temp. To convert Fahrenheit to Celsius simply do: (Fahrenheit - 32) * 5/9; History of degree Fahrenheit. Ktemp Convert EMF to temperature for a K-type thermocouple. 0 January 2013 1 Introduction Power Factor Correction (PFC) shapes the input current of the power supply to be in synchronization with the mains voltage, in order to maximize the real power drawn from the mains. Solid Angle 8 C. Related: resistor calculator Ohm's Law. The site also includes a predictive tool that suggests possible conversions based on input, allowing for easier navigation while learning more about various unit systems. Volts to. 5 from the voltage is to correct for a 500mV offset voltage on the sensor output. About Converterin. The National Electric Code only allows a maximum voltage drop of 3 percent on a main circuit branch, and this should be taken into account when determining cable size. The output of the result has the common notation with transition between the units. 
• For all HVDC converters twelve pulse bridge converters are used. Using the voltage constant, KE, we find that the voltage induced in the motor armature can be. Voltage at pin in milliVolts = (reading from ADC) * (3300/1024) This formula converts the number 0-1023 from the ADC into 0-3300mV (= 3. For putting together a business case costs and revenues are an important part of it. Download Time Calculator. Omni Calculator solves 1165 problems anywhere from finance and business to health. //TMP36 Pin Variables int sensorPin = 0; //the analog pin the TMP36's Vout (sense) pin is connected to //the resolution is 10 mV / degree centigrade with a //500 mV offset to allow for negative temperatures /* * setup() - this function runs once when you turn your Arduino on * We initialize the serial connection with the computer */ void setup() { Serial. Thermocouple conversion calculator, convert millivolts to temperature degrees Celcius. You can use this function to convert voltage for types that are not supported in DAQmx. 5 volts for 12 volt battery systems. Or, you can find the single factor you need by dividing the A factor by the B factor. Solution: The voltage of the reference junction at 25 degree C, from the J-type reference table is 1. The converter switch, however, is susceptible to high voltage. To achieve the best overall performance, buffer circuits based on the AD8662 or the AD8397 are required to amplify the excitation signals and provide. The efficiency of a panel refers to the ability of the panel to convert sunlight into usable energy. 5 V) Calibrated directly in °C 10 mV/°C scale factor (20 mV/°C on TMP37) ±2°C accuracy over temperature (typ) ±0. Step Up/Step Down - Convert 110-120 voltages to 220-240 or vice-versa Go ahead! Bring your 220/240-volt Miele vacuum cleaner or that Breville cappuccino maker over to the US and Canada and use it safely! Take your 110-120-volt MacIntosh power amplifier to Europe (or other 220-240-volt regions) and play your music loud. The Unit Conversion page provides a solution for engineers, translators, and for anyone whose activities require working with quantities measured in. The output rate of 500mV per degree Celsius. Southwire's Voltage Drop Calculator is designed for applications using AWG and KCMIL sizes only. Traditionally the reference junction was held at 0 °C by an ice bath, as shown in Figure. Nuclear fusion, process by which nuclear reactions between light elements form heavier elements. It is a comparison of the size of one number to the size of another number. But the temperature rises up to 30 degrees Celsius, the output voltage should rise to 3. Our online PPM to Percent Converter is used to convert ppm (parts-per-million) value into a %. A Temperature Converter for Celsius and Fahrenheit. CGS unit of magnetic field strength is oersted, and SI unit is ampere/meter. ohm to volt/ampere (Ω—V/A) measurement units conversion. ohm to volt/ampere (Ω—V/A) measurement units conversion. If the calculator doesn't show up when you enter in an equation: Make sure your equation is something that can be computed. Often the European motors at 1hp size are universal for 50Hz or 60Hz power supply, as long as you have 400V x 50Hz and 460V x 60Hz. Gxyz method returns (what I understand) are voltage levels from 0 to about 1. Degrees Fahrenheit, (developed in the early 1700's by G. The process behind this will be discussed in a future class. say you have maximum Vmax and minimum Vmin voltage and correspondingly angles Amax and Amin. 
Use this online nichrome wire calculator to calculate resistance, power, current and voltage of the Nichrome coil by just providing the length, thickness and temperature of the NiCr. Or perhaps their question is specific to the application, such as how to convert the ADC code back to a physical quantity like current, temperature, weight or pressure. 4-20mA scaling calculator. Usefull when building an antenna. i am using platinum 100 rtd. Date Difference Calculator. The measured output voltage of the SCM5B37 thermocouple module must often be converted back to temperature. Convert amps to volts. You just put your problem and it will create a complete step-by-step report of the solution. We can convert the radian values into degree values very simply. Simple to use Ohm's Law Calculator. Figure 32: Resistance to Voltage. The 115V is an RMS voltage. com provides an online conversion calculator for all types of measurement units. 5 degrees (b) π/3 rad= 60 degrees (c) π/2 rad= 90 degrees (d) 3π/5 rad= 108 degrees (e) 6π/5 rad= 216 degrees (f) 1. Given a temperature in C, emfJtype returns the EMF in volts. convertedValues = convtemp (valuesToConvert,inputTemperatureUnits,outputTemperatureUnits) computes the conversion factor from specified input temperature units (inputTemperatureUnits) to specified output temperature units (outputTemperatureUnits). The document presents the equation for the conversion. (This is simple to do because it is just a linear conversion. 14 kb: AC to AC Voltage Converters: Three-phase AC Regulators: PDF: 0. 2: Enter the value you want to convert (pascal). 0 //Offset values - Calibrated by finding voltage at zero G. This calculator converts between various units of salinity. 82 volts typical 3. This, again, is voltage between 2. Main advantages of PWM DACs include simplicity, low cost, digitally-controlled resolution up to 10 bits (or more), and the possibility to obtain high output current, voltage and power. To get started, add some formulas, fill in any input variables and press \"Solve. • For all HVDC converters twelve pulse bridge converters are used. Transformer design The transformer must be of correct size in order to carry the power needed, on the net there are many charts showing the power in function of frequency and. Therefore, the phase angle between both secondary windings shifts 30 degrees each. Beginning with a description of the Seebeck e ect, the basic equations relating EMF and temperature are presented. Ohm: Kiloohm: Note: Fill in one box to get results in the other box by clicking \"Calculate\" button. Involute Gear Design Equations and Calculator Equations and engineering design calculator to determine critical design dimensions and features for an involute gear Lewis Factor Equation Lewis factor Equation is derived by treating the tooth as a simple cantilever and with tooth contact occurring at the tip as shown above. How to calculate the sine of an angle? If the angle is known, then simply use our sine calculator which supports input in both degrees and radians. Note that the power supply for my laptop is rated as having a 120-240V input. 1 volts maximum at 25 degrees C. #define Acc0 1. T 1 and T 2 do not need to be exactly 10 degrees apart. Circuit and working for PWM-To-Analogue Signal Converter. Convert the compensated voltage (V) calculated in step 4 to temperature (T). Conversion of mksq Units to Gaussian Units 8 VII. ) 6) At this point, the output from pin 3 of the 74HCT132 should equal your calculated calibration voltage. 
Voltage drop in accordance with CENELEC CLC/TR 50480. 5 V) Calibrated directly in °C 10 mV/°C scale factor (20 mV/°C on TMP37) ±2°C accuracy over temperature (typ) ±0. Click on any empty space in the window or the \"calculate\" button. Power 16 P. 2 sin 53 = 0. All electrical wires (conductors) have resistance and hence creating a voltage drop. That's a good point to stop. So that power is determined but the temperature calculation is not as simple as that do some study in this article. 9 volts when the fuel mixture is rich and there is little unburned oxygen in the exhaust. ohm to volt/ampere (Ω—V/A) measurement units conversion. CGS unit of magnetic field strength is oersted, and SI unit is ampere/meter. The conversion between ratios and dB's is simple enough, with the hardest part being to remember whether you're working with a voltage ratio or a power ratio. The output voltage range is typically 100mV at -40°C, 500mV at 0°C, 750mV at +25°C, and 1. An Analog to Digital Converter (ADC) is a very useful feature that converts an analog voltage on a pin to a digital number. Instructions below:-1. 19002 Joules (J) 1 British Thermal Unit (BTU) = 1055 Joules (J) 1 Therm = 100,000 BTU. You can think of the conversion calculation by saying \"sensorValue is to X volts as 5 volts is to 1023\", where X is the converted voltage value we are trying to determine. Convert the following angular values from radians to degrees: (a) π/8 rad= 22. UnitConversion. About the MAX scaling Here you have the complete information. Rectangular to Polar Form Conversion. I think I can understand now why more power (watt) means more heat (KJ). Conversion formulae used are implementation on information taken from NIST thermocouple tables and coefficients. Quick online free voltage drop calculator and energy losses calculation, formula of electrical DC and AC power wire voltage drop for various cross section cables, power factor, lenght, line, three-phase, single phase. Definition of radian: a radian is the measure of an angle that, when drawn as a central angle of a circle, intercepts an arc whose length is equal to the length of the radius of the circle. A thermistor is an electronic temperature-sensing device, which exhibits a change in resistance with a relative change in temperature. This C program to convert Fahrenheit to Celsius lets the user enter the temperature value in Fahrenheit. This RMS voltage calculator can be used to determine the root mean square (RMS) voltage values of the most frequently employed periodic waveforms; for example, sine wave, triangle wave, square wave, and others. With a positive temperature slope of 10mV/°C, the NCT47 operates from a single supply. Watts can be converted to volts using current and a Watt’s Law formula, which states that current is equal to power divided by voltage. When the converter exceeds an operating temperature of about 1,300 degrees F, the converter substrate begins to melt and cause exhaust restriction. :230V AC Signal voltage :0-10V Output voltage :100V DC Output current : 0-25A. By multiplying air velocity by the cross section area of a duct, you can determine the air volume. We have been continuously developing this web site since 2014 - see our 'about' page. JavaScript source code simple html angle units calculator - converter program. 8 volts using the trim terminal. In our case the 7805 IC is an iconic regulator IC that finds its application in most of the projects. Jtemp Convert EMF to temperature for a J-type thermocouple. 
Finding the original RCA value is easier than it looks. This is readily done with the SCM5B37 series because cold junction compensation is incorporated into the module and the SCMPB backpanels. When the switch turned OFF, the polarity of primary and secondary coil voltages reversed. ElectronicStation Store has All Kinds of Li-lion Lithium Battery Charger Module 5V-32V to 0. , its valence) and its concentration gradient across the membrane. Now, knowing that 1 watt = 1 Joule/second; and using the conversion of 1 lb. Volts Phase-Phase to Volts Phase-Neutral (Volts LL → Volts LN), with this conersion you can go from Volts FF to Volts FN with many examples, the formula used in the conversion of monophasic, biphasic and three-phase voltages and a table with the main conversions. Amperes - Enter the maximum current in amps that will flow through the circuit. Easily convert coulombs to milliampere-hours, convert C to mAh. An air/fuel mixture of 14. The light bulb adapter converter is easy and convenient for you to install and use for its practical and useful design, it will bring great convenience to. Use this online nichrome wire calculator to calculate resistance, power, current and voltage of the Nichrome coil by just providing the length, thickness and temperature of the NiCr. How to calculate the sine of an angle? If the angle is known, then simply use our sine calculator which supports input in both degrees and radians. Degree, Radian, Minute of arc, Second of arc, Grad, Angular mil (NATO), Point, Quadrant, Area. I'm going to need to convert a range of -10 to +10 volts into a range of 0 to +5 volts. To convert the voltage to temperature, simply use the basic formula: Temp in °C = [(Vout in mV) - 500] / 10 So for example, if the voltage out is 1V that means that the temperature is ((1000 mV - 500) / 10) = 50 °C. The measured output voltage of the SCM5B37 thermocouple module must often be converted back to temperature. Calculate dates difference. Courses 5 and 6 will be offered in the degree program soon. 3µV thank you. 1 Therm = 29. In this case with 12 pulses per cycle, the quality of output-voltage waveform would definitely be improved with low ripple content . The BTU is also an unit of energy. Online calculator for harmonics frequencys. An n-stage sequence control converter has n windings in the transformer secondary part with each rated e s /n (the source voltage). The Unit Conversion page provides a solution for engineers, translators, and for anyone whose activities require working with quantities measured in. Then, converting this to the software code, we get: Now we can determine how many pulses are equivalent to each bit measured (Fig. When the load current is positive, the positive converter supplies the required voltage, and the negative converter is blocked. Many low voltage systems today rely on temperature to assess overall system health and reliability. Step Up/Step Down - Convert 110-120 voltages to 220-240 or vice-versa Go ahead! Bring your 220/240-volt Miele vacuum cleaner or that Breville cappuccino maker over to the US and Canada and use it safely! Take your 110-120-volt MacIntosh power amplifier to Europe (or other 220-240-volt regions) and play your music loud. 2µF 47µH Choke. How long does it take to download a file? Fuel Converter Calculator. Compute the compensated voltage (V) with the following formula: V=Vmeas + Vref 5. I am working on a project to measure temperature from a Optical sensor using the dsPIC30F6014. 
We have been continuously developing this web site since 2014 - see our 'about' page. When the control voltage equals (or is greater than) the peak voltage of the ramp signal, the output is continuously high. 444 mV is measured with a type J thermocouple with a 25 degree C reference temperature. By default, k = 1, a = 0, that gives us a classic graph. I'm doing single phase AC to DC full bridge controlled converter. The Antennas Around Us. current or voltage, thereby shifting the phase of alternating current signals by 180 degrees. Cheap Integrated Circuits, Buy Quality Electronic Components & Supplies Directly from China Suppliers:10Pcs Step Down Power Supply Module Voltage Buck Converter Adjustable Board Mini DC DC 12 24V To 5V 3A 1. 895x10-2 bar; Convert from % Vacuum to Unit of Pressure. As part of a personal design project (not for school or homework) I'm designing coils of magnet wire (i. Speed 11 J. JavaScript source code simple html angle units calculator - converter program. 3 and the temperature coefficient is -0. 0745 m^2 and total resistance of 10. Enter a decimal or binary number and click the convert button. 8 × 1000 = 2800g : kilogram is the larger unit than the gram, so to convert larger unit to a smaller unit, we multiply by the factor. Convert to watt, horsepower, joules per hour, and BTU per hour. Single-phase voltages are usually 115V or 120V, while three-phase voltages are typically 208V, 230V or 480V. Voltage drop in accordance with CENELEC CLC/TR 50480. Free online Fraction conversion. 44 V Cu2+ + 2e- Cu°; E° = 0. will sustain a discharge load of 25 amps to a cut-off voltage of 1. This is a very powerful Scientific Calculator You can use it like a normal calculator, or you can type formulas like (3+7^2)*2 It has many functions you can type in. Voltage to Angle Conversion OBJECTIVE This document gives information on the voltage to angle conversion of the accelerometers. This method is called scaling which means how much temperature you will read when you get x voltage. For example: It can convert DOC to DOCX, but it can't convert DOC to XLSX. Convert the compensated voltage (V) calculated in step 4 to temperature (T). How to calculate the sine of an angle? If the angle is known, then simply use our sine calculator which supports input in both degrees and radians. 1 Therm = 29. Scientific Calculator. RMS is a tool which allows us to use the DC power equations, namely: P=IV=I*I/R, with AC waveforms, and still have everything work out. The thrust bearing must be able to absorb forward thrust loads that are delivered by the transmission, torque converter or clutch. For example, a shunt resistor rated with 100A and 50mV has a resistance of 50 / 100 = 0. The function then applies the conversion factor to the valuesToConvert. Use our free online unit converters to easily convert between different units of measurement. Convert millisiemens to microsiemens. These calculations provide the frequency response of our boost converter at one frequency. Then click the Convert Me button. 110 volt and 220 volt AC to 12v DC power converters are also known as Class 2 Power Supply or AC/DC adapters. Solution: The voltage of the reference junction at 25 degree C, from the J-type reference table is 1. Calculators such as force calculators, friction calculators, calculators for Battery life, Celsius to Fahrenheit conversion, centripetal and centrifugal forces, mass calculator, and more are available for you to use and calculate the everyday. to make the line voltage twice the phase voltage. 
Calculate the resistance of an RTD thermometer when the temperature is: a) 0 degree Celsius b) 100 degree Celsius c) 50 degree Celsius d) 75 degree Celsius Solution: (a) From the Temperature vs Resistance data tables, At 0 degree Celsius, Resistance = 100 ohms (b) At 100 degree Celsius, Resistance = 139. 3823321 * 10^-5 The coefficients for Temperature range 0 deg C to 760 deg C Voltage range 0 mV to 42. 0752178 C4 = -5. The Unit Conversion page provides a solution for engineers, translators, and for anyone whose activities require working with quantities measured in. p atm = absolute presure at normal or standard conditions (psia. 25 kb: AC to AC Voltage Converters: Phase Angle Control in Triac. Metals have many free charge carriers that vibrate with heat, so their temperature quickly rises. Resolvers, electromechanical sensors that measure precise angular position, operate as variable coupling transformers, with the amount of magnetic coupling between the primary winding and two secondary. Use the ampacity from Table in 310. voltage doubler topology used at the output side reduces the voltage stress on the converter components. r is the yield of the solar panel given by the ratio : electrical power (in kWp) of one solar panel divided by the area of one panel. At Conversion and Calculation Firm, we offer energy unit conversion calculator services. High Efficiency Optimization of LLC Resonant Converter for Wide Load Range Ya Liu Abstract As information technology advances, so does the demand for power management of telecom and computing equipment. After a click on the calculate button you will see the wattage of an equally bright halogen. A device used to convert direct current into alternating current. The conversion result will immediately appear in the output box. 7071Use the link below to an RMS voltage, peak voltage and peak-to-peak voltage calculator. This VI will demonstrates how to mathematically calculate the acquired voltage to temperature reading using thermocouple Description: This VI is designed to convert raw voltages into temperatures for a wide variety of thermocouple types. How Arduino Reads Temperature. rgb, hex, cmyk. TABLE 9 Type K Thermocouple thermoelectric voltage as a function of temperature (°C); reference junctions at 0 °C °C 0 1 2 3 4 5 6 7 8 9 10 °C Thermoelectric. Parameters: resT is the resistance across the thermistor in Ohms. You may use one of the following SI prefix after a value: p=pico, n=nano, u=micro, m=milli, k=kilo, M=mega, G=giga. F = 50 * 9/5 + 32 F = 90 + 32 F = 122 50 degrees Celsius is equal to 122 degrees Fahrenheit. All thermocouple voltages are given in millivolts (mV). You can think of the conversion calculation by saying \"sensorValue is to X volts as 5 volts is to 1023\", where X is the converted voltage value we are trying to determine. Length 9 D. voltage D/A, PWM Amplifier Power supply voltage, current Motor Load torque, speed, position Sensor strain gauge, potentiometer, tachometer, encoder linear, PWM • Convert discrete signal to analog voltage - D/A converter - pulse width modulation (PWM) • Amplify the analog signal - power supply - amplifier • Types of power amplifiers. The calculator will produce the NIST thermocouple table. Universal Unit Calculator provides converting units for more than 50 different metric measurement categories. 
The low voltage versions have a built-in electronic transformer that let you easily convert the 120 volt medium base socket in an existing recessed fixture into an adjustable low voltage light source using 12 volt MR-16 up to 50 watts; or 12 volt AR-111 and PAR-36 halogen reflector lamps up to 75 watts. 895x10-2 bar; Convert from % Vacuum to Unit of Pressure. 35, 15 3/4 and plus + or minus - signs as well, e. Do remember that, even in the modern days achieving a completely sinusoidal waveform for varying loads is extremely difficult and is not practical. Next, using the Fahrenheit to celsius formula, we are going to convert the user-specified temperature in Fahrenheit to Celsius in C. The division operation will be 2. Converting degrees to radians; Radians to degrees minutes seconds. As mentioned, power converters are used mostly for small appliances, like hair dryers or shavers. Voltage Convert 12VDC to 120 VAC Power Provide 300 W continuous Efficiency > 90% efficiency Waveform Pure 60 Hz sinusoidal Total Harmonic Distortion < 5% THD Physical Dimensions 8” x 4. Aim of this article: Most aftermarket engine management systems incorporate basic settings that relate to the control of the ignition coils. 8 volts) by 0. Learn more about calculating angles with help from math teacher in this free video on mathematics. The output rate of 500mV per degree Celsius. a) 3 radians, b) 2. How to calculate the sine of an angle? If the angle is known, then simply use our sine calculator which supports input in both degrees and radians. 6µV correction=1. Conversions Table; 1 Seconds Of Time to Degrees = 0. You can view more details on each measurement unit:. If one rear wheel measures 0. Type the number of Degrees per second you want to convert in the text box, to see the results in the table. The power factor of a balanced polyphase circuit is the same as that of any phase. The tacho generator is directly connected to the slowly rotating cable wheel via step-up converter and chain wheel drive. Here voltage drop for 70 Sq. 3 and the temperature coefficient is -0. A NTC resistor or a thermistor is used as a sensor that has a strong temperature dependence. Electron Volts Conversion Charts. I cannot seem to get them correct! either. In other words, 1 electron-volt is 11605 times bigger than a kelvin. Convert volts to watts. Voltage is commonly used as a short name for electrical potential difference. Formula to calculate temperature in degree Celcius. Polar to Rectangular Online Calculator Below is an interactive calculator that allows you to easily convert complex numbers in polar form to rectangular form, and vice-versa. What is my location? We opted to center the map on your current location when possible, using the html5 geolocation feature. • The app also comes in at. The table below, which associates each outcome with its probability, is an example of a probability distribution. Where Vrms and Irms are the phase voltage and current and θ is the phase different between the voltage & current waveforms. Then by choosing a time over which the energy will be used one can calculate the best power required from the heater. Simple to use Ohm's Law Calculator. Learn how to convert among power and heat flow rate units. Note that T 1 and T 2 must have the same unit. degrees, the decimal repeats the last digit of 2 infinitely, so, the original angle is a bit bigger than 40. To get started, add some formulas, fill in any input variables and press \"Solve. 4-20mA scaling calculator. 
When the LM35 is connected as explained above it can detect temperatures between 2 and 150 Celsius degrees which means that the maximum output voltage will be (150*10mV=1500mV=1. Conversion Calculator. For further information and examples of the Excel Atan function, see the Microsoft Office website. As you zoom in the grids are redrawn at greater levels of detail. 34 V Please don't just give me the answer, i need to learn how to do it i. Nichrome is wound in coils to a certain electrical resistance, and current is passed through it to produce heat. Use the search box to find your required metric converter →. The Arduino’s ADC can read voltages from 0-5 VDC with a 10-bit resolution, meaning that we can resolve 1024 different values. Calculate the visual degree corresponding to each px on the screen given a viewing distance and a screen resolution (the function px2deg does this for you) Use the obtained degrees array to calculate grating/stimuli luminance level; For example, to render a sinusoidal grating of frequency 0. With hardware compensation, a variable voltage source is inserted into the circuit to cancel the influence of the cold-junction temperature. Yea I know this could have been better written, but it works. Once you have measured the angle, or looked up the plan or schematic, just input the measurement and press \"calculate\". The measured output voltage of the SCM5B37 thermocouple module must often be converted back to temperature. Electron Volts Conversion Charts. 5 V systems. Heat Flux 17 Q. The catalytic converter, because it cleans up any exhaust pollutants that exit the engine. 2 , 53 o) R = 0. say you have maximum Vmax and minimum Vmin voltage and correspondingly angles Amax and Amin. Voltage is the potential difference in an electrical circuit, measured in volts. For example, if amperes and voltage are out of step (explained under “Inductive Reactance, ” later in the chapter), electrical degrees are used to describe the length of time that they are separated. The general formula for converting from degrees to radians is to simply multiply the number of degrees by $$\\red { \\frac {\\pi}{ 180^{\\circ}} }$$. 8 volts using the trim terminal. In this case with 12 pulses per cycle, the quality of output-voltage waveform would definitely be improved with low ripple content . Temperature in Kelvin to keV Conversion Access list of astrophysics formulas download page: Practicing astrophysicists routinely refer to temperatures in units of eV or keV, even though this is wrong, because temperature is not dimensionally equivalent to energy. We may now state the complex impedance (Z) of the circuit in polar form as 500-Ω at 37 degrees. You can find metric conversion tables for SI units, as well as English units, currency, and other data. The converter switch, however, is susceptible to high voltage. The OP has a voltage and wants to convert it to temperature. If this were not so, you must adjust the values of the R4 and R8 resistors (the two resistors must have the same value). An oxygen sensor will typically generate up to about 0. 0042: 70 Seconds Of Time to Degrees = 0. Calculator that provides voltage drop caused by the starting of motors at the substation transformer primary and secondary leads. In this case with 12 pulses per cycle, the quality of output-voltage waveform would definitely be improved with low ripple content . The increase of degrees can be found by dividing the number of joules by 4. The division operation will be 2. The Antennas Around Us. Speed 11 J. 
Degrees Fahrenheit, (developed in the early 1700's by G. Calculate dates difference. This measurement to current converter tool will convert any linear measurement reading into the ideal current loop signal over a linear range of 4 to 20 milliamps, and display a 4-20mA conversion scale for the chosen measurement range. Accordingly Fourier series calculation the dc output voltage of converter 1 becomes, = 0. 16 X 10 4 Degrees Kelvin (K) Equivalent (See Equivalent Electron Temperature below) 1MeV = 1. fxSolver is a math solver for engineering and scientific equations. The Angle Conversion block port labels change based on the input and output units selected from the Initial unit and the Final unit lists. Models IP411 and IP413 accept 4-20 mA, 0-10 V and 0-5 V signals and convert these analog signals to proportional linear pneumatic outputs (3-15 or 0-20 psi). So that power is determined but the temperature calculation is not as simple as that do some study in this article. Radian is the ratio between the length of an arc and its radius. 7*50 = 35 watt of heat Bulb is on for 5 minutes = 60*5 = 300 sec. 6 Electrical Calculations L. It's calculated by multiplying voltage by amperage. How to Convert Foot-Candle Measurement. used the EMF of a J-type, ice-point reference thermocouple with its measuring junction at 21. One of the limitations of the Celsius scale is that negative temperatures are very common. The Unit Conversion page provides a solution for engineers, translators, and for anyone whose activities require working with quantities measured in. Volume 9 F. T 1 and T 2 do not need to be exactly 10 degrees apart. A conversion scale graphic for each temperature and 4-20mA current output range combination entered is also displayed below the calculated values. •The AC voltage at the converter bus are sinusoidal and remains constant. The RMS or ROOT MEAN SQUARED value is the value of the equivalent direct (non varying) voltage or current which would provide the same energy to a circuit as the sine wave measured. Online Converter and Conversion Calculator Let's Calculate Without creating an account, you can directly focus on your calculations as this tool provides fast, inclusive, suitable, free online tools with greater flexibility. Divide the AC voltage by the square root of 2 to find the DC voltage. 48 gallons 1 cubic foot = 62. When two AC converters are placed parallel to each other, the zero sequence way is created. Following is the formula for Vpp to Vrms conversion. Formulas for Converting Temperature Scales:. Converting the potential to a more commonly used reference electrode with Gamry's new calculator The Gamry Instruments Mobile App is a convenient way to find Technical Support Articles, Application Notes, Electronic versions of our Instrument's User Manuals as well as news and events happening in the Electrochemical Research Arena. Convert microvolt/millivolt to ppm (μV/mV to part per million). Low voltage operation (2. Radian is the ratio between the length of an arc and its radius. Then, divide the difference in volts by the difference in amperes. It provides a reference for the S/D converter to use when sampling the stator voltages. The kelvin/watt [K/W] to degree Fahrenheit hour/Btu (th) conversion table and conversion steps are also listed. 7µV Ref emf = 216. Note the get. 15 + 100 = 373. The Math Forum has a rich history as an online hub for the mathematics education community. For the second point (0. 5°C linearity (typ) Stable with large capacitive loads. 1000 Joules = 6. 
This is not necessarily true if the load is referenced to ground. When the mixture is lean, the sensor’s output voltage will drop down to about 0. Note that relative specific gravity (sg) and conductivity (mS/cm) measurements are inheritly temperature-dependent and that this calculator follows the prevailing standard of assuming a temperature of 25C/77F. A 3 phase full bridge converter is connected to supply voltage of 230V per phase and a frequency of 50 Hz. The stator produces AC voltage that is changed to DC voltage by the rectifier/regulator assembly. 746kW= 1hp. The Seebeck coefficient (given in mV/°C) describes the slope of the reference function at the selected temperature and can be used to calculate. The power P in watts (W) is equal to the voltage V in volts (V) times the current I in amps (A): The power P in watts (W) is equal to the squared voltage V in volts (V) divided by the resistance R in ohms (Ω):. Low voltage operation (2. For example, a temperature sensor's voltage output may be in proportion to the temperature output. SINAMICS G110D Distributed Converters The distributed converter for basic applications SINAMICS G110D inverters in degree of protection IP65 – as a simpler version of the higher-performance SINAMICS G120D it still offers the same advantages. How to Convert Volts to Watts. This is not necessarily true if the load is referenced to ground. 83 volts, double that of half-wave. ; 1mA = 1 milli-amp - which is equal to one thousandths (1/1000) of an Ampere. I have a column with wind direction (N,NNE,NE) and I need to convert it to a numerical representation (0,22. Generally, horsepower is a unit for measuring the power of various electrical motors, piston engines, steam turbines etc and it is equal to 746 watts or. Below these links you will find our cable calculator. Convert metric, imperial and SI units like volume, pressure, length, surface, area, temperature, weight, mass, energy, force. A debt of gratitude is owed to the dedicated staff who created and maintained the top math education content and community forums that made up the Math Forum since its inception. Set your calculator to degrees and use the above formulas for x and y in terms of R and t to obtain: x = R cos t = 0. Parameter Identification (PID) Description The parameter identification (PID) mode allows access to powertrain control module (PCM) information. Ttemp Convert EMF to temperature for a T-type thermocouple. The power P in watts (W) is equal to the voltage V in volts (V) times the current I in amps (A): The power P in watts (W) is equal to the squared voltage V in volts (V) divided by the resistance R in ohms (Ω):. 9 mV p-p so it will take two units before the input detects a change. Density 10 H. Another typical unit for energy is KWhr (Kilo-Watt-Hour). A 200 MW (± 100 kV) forced-commutated voltage-sourced converter (VSC) interconnection is used to transmit DC power from a 230 kV, 2000 MVA, 50 Hz system to another identical AC system. value for that temperature along with the sensitivity or Seebeck coefficient (dV/dT). 8 × 1000 = 2800g : kilogram is the larger unit than the gram, so to convert larger unit to a smaller unit, we multiply by the factor. Single-phase voltages are usually 115V or 120V, while three-phase voltages are typically 208V, 230V or 480V. 
When using the calculator the conductor's insulation temperature should not be exceeded and this would mean a maximum of 90 degrees C plus 5 degrees for rounding off since the calculator is intended to solve voltage drop problems for building wire listed in Table 310. 1kJ (KiloJoule) = 0. The input voltage to rectifier could be either single phase or three phases. The reference voltage is important in the conversion process. In a Star-wired set of Elements, you divide the Voltage by V7 (1. How would this be accomplished? Thanks! Let me say something more. The Watt (W) is a unit for power i. bus voltage regulator provides excellent immunity to the line and load transients, since the DC bus voltage is controlled by the 1336R Converter as long as the AC line is within the specified range. If we relax the requirements, a 10-degree temperature excursion means the 12-bit ADC reference can drift no more than 25ppm/°C, which again is a fairly tight requirement for on-chip references. Power delivered here is twice that of half-wave rectification because we are using both half-cycles. Square meter, Hectare, Are, Square foot, Acre, Square inch,. For example: It can convert DOC to DOCX, but it can't convert DOC to XLSX. Where Vrms and Irms are the phase voltage and current and θ is the phase different between the voltage & current waveforms. person_outlineAntonschedule 10 years ago. Let the range of the voltmeter be 0 – V0 volt and we convert it to an ammeter of range 0 – I0 Amp. Its main power circuit consists of just a semiconductor device like a MOSFET operating as a switch, a transformer, an output diode and an output filter capacitor. Voltage - Enter the voltage at the source of the circuit. To calculate the temperature based on a different reference junction temperature, enter the new value in the same units of temperature selected in the calculator. Specific alkalinity (basicity) and various acidity values of liquid solutions and solids versus implied mV millivolts strength. ADC Value to Temperature conversion Hi All. 7:1 is considered ideal for emissions and allows the catalytic converter to operate at peak efficiency. Temperature Converter. Our software is the only cloud-based solution and has been built from the ground up to be fully responsive - meaning you can access your cables from anywhere and on any device, desktop, tablet or. The tool is completely free to use and very Easy user-friendly UI allows conversion between the metric system and others. Plane Angle 8 B. rgb, hex, cmyk. By converting from the analog world to the digital world, we can begin to use electronics to interface to the analog world around us. Electrical resistance, analogous to resistance in other contexts, is the degree to which a particular part of an electrical circuit impedes the flow of electrical current. By multiplying air velocity by the cross section area of a duct, you can determine the air volume. 7 volt-pounds divided by 0. I would switch brands if that is the case. Choose a target document format. The calculator will produce the NIST thermocouple table. Pepperl+Fuchs signal conditioners converters receive signals from a hazardous area instrument i. Select the category of the conversion you want to perform. 8% which is within limit (5%) but to use 2 runs of cable of 70 Sq. The site also includes a predictive tool that suggests possible conversions based on input, allowing for easier navigation while learning more about various unit systems. 
electronics because it is one of the simplest and least expensive converter topologies with transformer isolation. Spoke with someone else who says symptoms point more to needing an O2 sensor replaced rather than new converter furthermore my car doesn’t have sufficient miles for the converter to be failing. Specific Energy Per Unit Temp. value for that temperature along with the sensitivity or Seebeck coefficient (dV/dT). Online Converter and Conversion Calculator Let's Calculate Without creating an account, you can directly focus on your calculations as this tool provides fast, inclusive, suitable, free online tools with greater flexibility. When the exhaust temperature hits about 500 degrees, the catalyst starts to trigger the chemical reactions that break down the pollutants. When used properly with a thermoelectric (TE) cooler and support cir-. It is very user-friendly. Unlike voltage, resistance is a function of the heating unit of each vaporizer. eV stands for electron-volts and K stands for kelvins. The amplification factor, also called gain, is the extent to which an active device boosts the strength of a signal. Since the Voltage is the same along a conductor it has to be the same across each element in the circuit. For putting together a business case costs and revenues are an important part of it. The light output is 1800 lumens and you want to know how bright it is compared to an old halogen torchiere. Electric power is the rate, per unit time, at which electrical energy is transferred by an electric circuit. Speed 11 J. Below these links you will find our cable calculator. The \"From\" Entry. Select one of the 8 letter-designated thermocouple types from the console, type any temperature within the thermocouples range into the Temperature window and press calculate. It's calculated by multiplying voltage by amperage. TABLE 9 Type K Thermocouple thermoelectric voltage as a function of temperature (°C); reference junctions at 0 °C °C 0 1 2 3 4 5 6 7 8 9 10 °C Thermoelectric. I have a downloadable temperature conversion program available. Converter Transformers / Definition & Why Why converter transformers are needed (main issues) Adapts the network supply voltage to the converter input voltage Isoso ates t e co e te o eed glates the converter from feeding network and restricts short-circuit currents to the converter Relieves the motor and/or network from common mode voltages. Calculate things online with just mouse moves!. 4 radians, c) 1 radian. My ADXL345 is running on IIC, but I can only assume if you are reading analog voltages through analog input pins you would get something similar. ) Here's a calculator that will take you from angle measurement using degrees, minutes, and seconds to angle measurement expressed as a decimal number. Should be used in situations when you want output at a low degree of bending. 524 radians we should get the value of. Power rating of steam-boliers is in units of BHP (boiler -horsepower). Subtract 32, divide by 3, multiply by 5, divide by 3. dynamic The interactions between the active power control design and the dc voltage droop control are examined. 5 volts per degree Celsius (°C). Parameters: resT is the resistance across the thermistor in Ohms. 8 volts using the trim terminal. voltage doubler topology used at the output side reduces the voltage stress on the converter components. How to Convert Watts to Volts. Find the Numerical Answer to Equation - powered by WebMath. Its corresponding SI unit is the volt (symbol: V, not italicized). 
1 degrees Celsius, 300 degrees Fahrenheit = 148. 2) If temperature is known and expected resistance is desired, use Equation 5 below:. RC is the number of minutes a new, fully charged battery at 80 degrees F. Multiplicate and Divide Time. I'm doing single phase AC to DC full bridge controlled converter. Metals have many free charge carriers that vibrate with heat, so their temperature quickly rises. Related Tools. If you want to calculate the direct resource costs associate with hosting a server in your data center, you want to know the direct power consumption by the server in electrical costs and the costs associated with cooling the environment where the server is situated. It represents each 1 Celsius degree by 10 mV, so that if the temperature is 20 Celsius degrees, then the output voltage of the LM35 will be (20*10mV=200mV). Which will operate in a temperature range of 0 to 24 degrees Celsius (32 to 75 degrees Fahrenheit). DC voltage characteristics Difference in Rated-voltage As to choosing the capacitor with higher rated-voltage, does not. The Laureate RTD temperature transmitter provides a linearized, highly accurate, stable and repeatable transmitter output for 100 ohm platinum, 10 ohm copper and 120 ohm nickel RTDs. As one degree Celsius is equal to one Kelvin, boiling point of water is equal to 273. We must either shorten the length of the lever arm,. Bidirectional DC-DC Power Converter Design Optimization, Modeling and Control Junhong Zhang ABSTRACT In order to increase the power density, the discontinuous conducting mode (DCM) and small inductance is adopted for high power bidirectional dc-dc converter. Calculate things online with just mouse moves!. 707 x E Peak So, the calculation in our question becomes very simple since the peak voltage is provided as 17 volts. power factor correction (pfc) of ac-dc system using boost-converter a thesis submitted in partial fulfillment of the requirements for the degree of master of technology in power electronics and drives by pratap ranjan mohanty roll no. Ammonia - Thermal Conductivity at Varying Temperature and Pressure - Online calculator, figures and tables showing thermal conductivity of liquid and gaseous ammonia at temperatures ranging -70 to 425 °C (-100 to 800 °F) at atmospheric and higher pressure - Imperial and SI Units. 5V on 12V battery). There's also a graph which shows you the meaning of what you've found. Watts can be converted to volts using current and a Watt’s Law formula, which states that current is equal to power divided by voltage. Fahrenheit to Celsius Conversion, Celsius to Fahrenheit Conversion. 241506×10 21 Electron volts 1000000 Joules = 6. How Arduino Reads Temperature. The Gibbs free energy of the system is a state function because it is defined in terms of thermodynamic properties that are state functions. The source inductance is 4mH. 8 * C) + 32 //celsius into fahrenheit C = ( F - 32 ) / 32 //fahrenheit into celsius. The efficiency of a panel refers to the ability of the panel to convert sunlight into usable energy. 16 X 10 4 Degrees Kelvin (K) Equivalent (See Equivalent Electron Temperature below) 1MeV = 1. A few of these include: I2C or Serial Sensors - There are advanced sensor modules that often can measure barometric pressure, temperature, humidity, and other conditions all in one package. Therefore, we need to convert the analog voltage which is output to a digital value. In standard formulas, the conductor resistivity for copper is 11. 
For 400 mv output temperature in Celsius is 40 degree centigrade. 4-20mA scaling calculator. 2: Enter the value you want to convert (pascal). T=c0 + c1*V + c2*V^2 + c3*V^3 + c4*V^4 + c5*V^5 + c6*V^6 + c7*V^7 + c8*V^8 The coefficients for Temperature range -210 deg C to 0 deg C Voltage range -8. Electric power is usually produced by electric generators, but can also be supplied by sources such as electric batteries. TEMPERATURE CONVERSION TABLE Celsius----The numbers in heavy type refer to the temperature either in degrees Celsius or Farenheit which is desired to convert into the other scale. Convert frequency to wavelength and vice versa. Related: resistor calculator Ohm's Law. I have a column with wind direction (N,NNE,NE) and I need to convert it to a numerical representation (0,22. Design modern switched-mode power converters; create high-performance control loops around power converters; understand. EFERENCES 1. In a panel with 20 percent efficiency, for instance, 20 percent of all the light that hits it will be translated into electricity. Select the category of the conversion you want to perform. 0001 Tesla (T) use this converter Definition Magnetic field - a state of space described mathematically, with a direction and a magnitude, where electric currents and magnetic materials influence each other. You cannot mix and match 1, 2, and 3 bar MAP sensors. The input voltage to rectifier could be either single phase or three phases. SLVA301-April 2008 Loop Stability Analysis. 895x10-3 N/mm 2 = 6. The TI-86 Calculator: Thevenin Theorem The Arctan function: Thevenin Resistance Practice Problems : Complex Numbers: Thevenin Voltage Practice Problems : Complex Conjugate Conversion : Complex Number Conversion: Sine Wave Fundamentals : Complex Reciprocals: Sine Wave Fundamentals Practice Problem #1. Bidirectional DC-DC Power Converter Design Optimization, Modeling and Control Junhong Zhang ABSTRACT In order to increase the power density, the discontinuous conducting mode (DCM) and small inductance is adopted for high power bidirectional dc-dc converter. Compute the compensated voltage (V) with the following formula: V=Vmeas + Vref.\npw9s7au8fu7kb zc2kusghavqb 34dx4ctap56m1 2366eup7fn 9uhxeoezl5se lyd8htfhsr3 zlvns94aqeh jdi6s4v7waa zc85djwo8l8 0guxi15xug iqaxkuupgsf vsasnnmhej2w 83xgj2n15ubi88r b9eefdf5suy k4cfgaovl7 qmei4e77kvq ttzb2kcaeui0lc4 v0frf7d7om0h2z ia2bs3zoujkfl3 7tvg7p2zkr1xt qczbboff4yh3rb xuwbzwfb0cxge ysmpwl4i4wun1 50ufxauj54j5 cdkdxvtflp vtu6yxfylbtlg6 wayycdm08m9h qeb0t7ay254c 7suh94qvljoujil ce8odckh8c nxxlf9sa9m" ]
https://ro.scribd.com/document/113699007/Fullll-Final
[ "Sunteți pe pagina 1din 65\n\nCAPACITOR IN AC CIRCUIT\nBASIC PRINCIPLE:The capacitor is connected with a variable frequency voltage source. The impedance and phase displacement is measured in terms of frequency and capacitance.\n\nRELATED PHYSICS:CAPACITANCE: It is the capacity or aptitude of the body to hold the electrical charge. Capacitance has always a positive value. Its S.I unit is coulomb per volt and farad. To calculate the capacitance we use the following formula; C = Q/v\nKIRCHHOFFS LAWS Kirchhoffs Voltage law:\nIt is stated as the sum of all voltage drops around any loop in any circuit is zero mathematically\n\nv\ni=1\n\n=0\n\nHere n is the total number of voltage measured. Conservation of energy is the base of this Kirchhoffs voltage law. It also applicable when there is resistance in the circuit.\n\n## Current Kirchhoffs law:\n\nIt is stated as The algebraic sum of currents in a network of a conductor meeting at a point is zero\nn\n\nMathematically;\n\nI\nk =1\n\n=0\n\nWhere n is total number of branches in which current flow. Kirchhoffs current law is based on conservation of charges which are equal to product of current and time. Maxwells equation: Maxwell equations are the four partial differential equations which normally use to describe the electric and magnetic field. 1 LAB REPORT\n\nGC UNIVERSITY FAISALABAD These equations are derived by a famous mathematician James Clerk Maxwell. Normally, the equation of Gauss law, Gauss law for magnetism, faraday law of induction and ampere law collectively known as Maxwell equations.\n\nAC IMPEDANCE:Impedance describes the opposition to sinusoidal alternating voltage. It describes the phases and amplitudes of voltages and current.\n\nPHASE DISPLACEMENT:It is curve obtained in plotting the graph of phase difference between frequencies of wave, is called phase displacement.\n\nTASK OF EXPERIMENTS:Determine the frequency while resistance and coil are connected is series both have same impedance. Calculate the capacitance of capacitor. Find the total impedance of capacitor in both series and parallel combination. Study the phase displacement as a function of frequency between the terminal voltage and terminal current as a function of frequency.\n\nREQUIRED APPARATUS:\nCapacitor in plug-in box, 1 uF / 250 V (1 Capacitor in plug-in box, 2.2 uF / 250 V ) (1) Resistor in plug-in box 47 Ohms (1) Resistor in plug-in box 100 Ohms (1) Resistor in plug-in box 220 Ohms (1) Connection box (1) Difference amplifier (1) Function generator (1) Digital counter, 4 decades (1) Oscilloscope, 20 MHz, 2 channels (1) 2 LAB REPORT\n\nGC UNIVERSITY FAISALABAD Multi-meter (1) Screened cable, BNC, l = 750 mm (2) Connecting cord, l = 100 mm, red (3) and l = 500 mm, red (5)\n\n## THEORATICAL BACKGROUND:The circuit diagram of capacitor in AC circuit is shown be\n\nFig.4.1 circuit diagram of resistor and capacitor in AC circuit. The sum of voltage drop at individually on both capacitor and resistor will be equal to terminal voltage as expressed by given relation; In this relation, Q is the charge on the plate of capacitor and I is the current in the given circuit The graph shown below expressed the values of Xc with f and tells that impedance increases with the frequency source.\n\n3 LAB REPORT\n\nGC UNIVERSITY FAISALABAD Fig-4.2 Left: The impedance of capacitor decrease linearly with increasing frequency. Right: Impedances as a function of capacitance at constant frequency (f=10 KHz). 
The phase displacement graph is shown below.

Fig. 4.3 Phase displacement (φ) between terminal voltage and total current for the capacitor as a function of frequency.

EXPERIMENTAL SETUP, PROCEDURE AND MEASUREMENTS:
We use an oscilloscope instead of a voltmeter and ammeter because the oscilloscope shows the phase relation, whereas a voltmeter and ammeter measure only root-mean-square values. The current is obtained by measuring the voltage across the resistor: the voltage drop U_R is read directly from the oscilloscope, and the current follows from

I = U_R / R

We can also measure the phase difference by comparing the terminal voltage with the current through the resistor, using the circuit below.

Fig. 4.5 Circuit diagram for the phase displacement measurement.

First of all we find the frequency of the AC supply at which the impedance of the capacitor becomes equal to the resistance of the resistor. The circuit below illustrates this procedure.

Fig. 4.6 Circuit diagram for the condition R = X_C.

The frequency is varied until the voltage drops across the capacitor and the resistor meet. The values are noted and compared with the actual values of the capacitors; since X_C = R at this point, the capacitance follows from C = 1/(2πf·X_C).

Table 4.1 Values of frequency and capacitance.

Sr.# | Resistance R (Ω) | Frequency f (kHz) | Actual capacitance C (µF) | Difference ΔC (µF)
1    | 50               | 38.88             | 1                         | 0.2
2    | 100              | 18.06             | 1                         | 0.12
3    | 220              | 0.833             | 1                         | 0.13
4    | 50               | 15.94             | 2                         | 0.1
5    | 100              | 0.781             | 2                         | 0.02
6    | 220              | 0.350             | 2                         | 2.1

Fig. 4.7 Graph between R and C.

For the determination of the total capacitance of the capacitors, the circuit below is used.

Fig. 4.7 Circuit diagram to determine the capacitance for the series and parallel combinations. The measured values are recorded in the table below.

Table 4.2 Total capacitance of the capacitors in series and parallel combination.

Sr.# | Combination | f at X_C = 50 Ω (Hz) | f at X_C = 100 Ω (Hz) | f at X_C = 220 Ω (Hz) | C exp. avg (µF) | C calc. (µF)
1    | Parallel    | 3585                 | 1834                  | 249                   | 1.5             | 2.1
2    | Series      | 5578                 | 2754                  | 1198                  | 3               | 3.4

The phase displacement is measured on the oscilloscope by examining the sinusoidal waves; as the frequency increases, the phase displacement between terminal voltage and current decreases, since the circuit behaves more and more resistively. It can also be read directly from the screen of the oscilloscope by adjusting the waves so that the y-axis of the screen cuts a wave crest in the middle.
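The two evaluation steps above can be reproduced with a few lines of Python. This is our own sketch, not part of the original report, and it assumes that in the oscilloscope reading the "span" is the number of screen divisions covered by half a period (180°) and the "difference" is the offset between the two traces in divisions, as suggested by the columns of Table 4.3 below:

```python
import math

def capacitance_from_crossover(f_hz, r_ohm):
    """At the crossover frequency X_C = R, so C = 1 / (2*pi*f*R)."""
    return 1.0 / (2.0 * math.pi * f_hz * r_ohm)

def phase_from_screen(span_div, difference_div):
    """Half a period covers `span_div` divisions (= 180 degrees), so each
    division corresponds to 180/span degrees; the trace offset in divisions
    then gives the phase angle. Interpretation of 'span' is our assumption."""
    return (180.0 / span_div) * difference_div

# Row 3 of Table 4.1: R = 220 Ohm, crossover at f = 0.833 kHz
print(f"C = {capacitance_from_crossover(833.0, 220.0) * 1e6:.2f} uF")  # ~0.87 uF

# Example oscilloscope reading: half period = 6.5 div, offset = 1.5 div
print(f"phi = {phase_from_screen(6.5, 1.5):.1f} deg")  # ~41.5 deg
```

The first result, about 0.87 µF for the nominal 1 µF capacitor, matches the 0.13 µF difference recorded in row 3 of Table 4.1.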
It also applicable when there is resistance in the circuit.\n\n## Current Kirchhoffs law:\n\nIt is stated as The algebraic sum of currents in a network of a conductor meeting at a point is zero Mathematically;\n\nI\nk =1\n\n=0\n\nWhere n is total number of branches in which current flow. Kirchhoffs current law is based on conservation of charges which are equal to product of current and time. Maxwells equation: Maxwell equations are the four partial differential equations which normally use to describe the electric and magnetic field. These equations are derived by a famous mathematician James Clerk Maxwell. Normally, the equation of Gauss law, Gauss law for magnetism, faraday law of induction and ampere law collectively known as Maxwell equations.\n\nInductance:\nThe property of an electric circuit by virtue of which any change in the magnetic flux linked with it, induces an electromotive force in it, is called inductance. 8 LAB REPORT\n\nElectrical impedance:\nImpedance describes the opposition to sinusoidal alternating voltage. It describes the phases and amplitudes of voltages and current.\n\nDetermine the frequency while resistance and coil are connected is series both have same impedance. Calculated the values of XL and L Determine the LEQ for two coils connected in series and parallel. Study the phase displacement as a function of frequency between the terminal voltage and VR\n\nRequired Apparatus:\nCoil, 300 turns1 Coil, 600 turn 1 Resistor in plug-in box 50 Ohms 1 Resistor in plug-in box 100 Ohms 1 Resistor in plug-in box 200 Ohms 1 Connection box 1 Difference amplifier 1 Function generator 1 Digital counter, 4 decades 1 Oscilloscope, 20 MHz, 2 channels 1 Screened cable, BNC, l 750 mm 2 Connecting cord, 100 mm, red3 Connecting cord, 500 mm, red5 Connecting cord, 500 mm, blue 4\n\n## 5- Theoretical Back ground:\n\nMagnetic flux through the area of coil changes when emf changes through the loop. This creates inductance in the coil. The phenomenon Inductance is described by the American physicist Joseph Henry. Mathematically inductance can be represented by; L = L di / dt The circuit diagram of coil in an AC circuit is shown below; In this diagram, resistor and inductor are connected in series. The voltage drop Of these components can be expressed as:\n\n9 LAB REPORT\n\nThe expected phase displacements and Impedance graphs as function of frequencies Are shown below:\n\nand\n\nFig:\n\n10 LAB REPORT\n\n## Experimental setup, procedure and measurements:\n\nWe use oscilloscope instead of voltmeter and ammeter. It is so because oscilloscope measures the phase relation but voltmeter and ammeter measures only root mean square values. Current can be measured by measuring the voltage across the resistor. For knowing the impedance of a coli as a function of frequency, the coil should be connected with resistor in series combination. We will vary the frequency until there will be same voltage drop across coil and resistor. Then values of resistance and impedance will be equal.\n\nThe phase displacement will also measure in this experiment. But remember that channel B in the given circuit is only measure the total voltage not voltage at coil. The observed measurements and results are given below; Table 3.1 showing the values of calculated inductance and difference between actual inductance when resistance is varied through 9 and 2 mH coils respectively. 
Table 3.1: calculated inductance and its difference from the actual inductance, with the resistance varied, for the 9 mH and 2 mH coils respectively.

| Sr.# | Resistance (Ω) | Frequency (Hz) | Calculated L (mH) | Actual L (mH) | Difference (mH) |
|---|---|---|---|---|---|
| 1 | 50 | 830 | 9.59 | 9 | 0.59 |
| 2 | 100 | 1640 | 9.7 | 9 | 0.7 |
| 3 | 200 | 3210 | 9.9 | 9 | 0.9 |
| 4 | 50 | 3630 | 2.1 | 2 | 0.1 |
| 5 | 100 | 7109 | 2.2 | 2 | 0.2 |
| 6 | 200 | 14300 | 2.2 | 2 | 0.2 |

Table 3.2: calculated inductance and its difference from the actual inductance, with the resistance varied, for the two coils of 9 mH and 2 mH connected in parallel.

| Sr.# | Resistance (Ω) | Frequency (Hz) | Combined L (mH) | Actual L of both coils (mH) | Difference (mH) |
|---|---|---|---|---|---|
| 1 | 50 | 4530 | 1.7 | 0.61 | 1.09 |
| 2 | 100 | 9250 | | | |
| 3 | 200 | 18130 | | | |

Table 3.3: calculated inductance and its difference from the actual inductance, with the resistance varied, for the two coils of 9 mH and 2 mH connected in series.

| Sr.# | Resistance (Ω) | Actual L of both coils (mH) | Difference (mH) |
|---|---|---|---|
| 1 | 50 | 11 | 1 |
| 2 | 100 | 11 | 1.1 |
| 3 | 200 | 11 | 1 |

Fig. 3.1: graph of XL against resistance R, showing the variation between frequency and resistance.

Table 3.4: determining the phase difference.

| Frequency (kHz) | 11.08 | 10.73 | 28.65 | 55.11 | 82.20 |
|---|---|---|---|---|---|
| Span | 16 | 15 | 23 | 26 | 28 |

Fig. 3.2: graph between frequency and phase difference.

TRANSFORMER

A transformer is an electrical device that changes a given alternating electromotive force into a larger or smaller electromotive force through inductively coupled conductors.

BASIC PRINCIPLE:
The transformer is based on two principles: first, that an electric current can produce a magnetic field (electromagnetism); and second, that a changing magnetic field within a coil of wire induces a voltage across the ends of the coil (electromagnetic induction). Changing the current in the primary coil changes the magnetic flux that is developed, and the changing magnetic flux induces a voltage in the secondary coil.

## Related Physics of Transformer:

Electromagnetic Induction:
The induction of an electric current or electromotive force by moving a conductor through a magnetic flux is called electromagnetic induction. Faraday's law of induction is a basic law of electromagnetism that underlies the operating principle of transformers, inductors, and many types of electrical motors and generators. Michael Faraday first found that the electromotive force (EMF) generated around a closed path is proportional to the rate of change of the magnetic flux through any surface bounded by that path. In practice, this means that whenever the magnetic flux through a surface bounded by a conductor changes, an electric current is induced in the closed circuit. In mathematical form, Faraday's law states that

$$\varepsilon = -\frac{d\Phi_B}{dt}$$

where ε is the electromotive force and Φ_B is the magnetic flux. For the special case of a coil of wire composed of N loops with the same area, the equation becomes

$$\varepsilon = -N\,\frac{d\Phi_B}{dt}$$

A corollary of Faraday's law, together with Ampère's law and Ohm's law, is Lenz's law: the EMF induced in a conducting coil of N loops is equal to the negative of the time rate of change of the magnetic flux, and the induced current always opposes the change that causes it.

Magnetic flux:
Magnetic flux (most often denoted Φ_m) is a measure of the amount of magnetic field passing through a given surface (such as a conducting coil). The SI unit of magnetic flux is the weber (in derived units: volt-seconds). The CGS unit is the maxwell.
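To make Faraday's law concrete, here is a small numerical sketch of my own (the coil geometry and field values are assumed for illustration and are not part of the report's measurements):

```python
import math

# Assumed values: a 100-turn coil of 2 cm radius in a uniform field,
# perpendicular to the coil, oscillating at 50 Hz with 10 mT amplitude.
N, r, B0, f = 100, 0.02, 0.010, 50.0
A = math.pi * r ** 2              # coil area

def flux(t):
    """Phi_B(t) = B(t) * A for a uniform perpendicular field."""
    return B0 * math.sin(2 * math.pi * f * t) * A

# Faraday's law, emf = -N * dPhi/dt, approximated with a small time step.
dt = 1e-6
t = 0.0                           # at t = 0 the flux changes fastest
emf = -N * (flux(t + dt) - flux(t)) / dt
print(f"peak induced emf ≈ {emf:.3f} V")  # analytically -N*B0*A*2*pi*f ≈ -0.395 V
```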
The magnetic flux through a given surface is proportional to the number of magnetic field lines that pass through the surface. This is the net number, i.e. the number passing through in one direction minus the number passing through in the other direction. For a uniform magnetic field B passing through a perpendicular area, the magnetic flux is given by the product of the magnetic field and the area element. For a uniform B at any angle to a surface, the magnetic flux is defined by the dot product of the magnetic field and the area element vector.

A transformer is a device that transfers electrical energy from one circuit to another through inductively coupled conductors, the transformer's coils. A varying current in the first or primary winding creates a varying magnetic flux in the transformer's core and thus a varying magnetic field through the secondary winding. This varying magnetic field induces a varying electromotive force (EMF) or "voltage" in the secondary winding. This effect is called mutual induction. If a load is connected to the secondary, an electric current will flow in the secondary winding and electrical energy will be transferred from the primary circuit through the transformer to the load. In an ideal transformer, the induced voltage in the secondary winding (Vs) is in proportion to the primary voltage (Vp), and is given by the ratio of the number of turns in the secondary (Ns) to the number of turns in the primary (Np) as follows:

$$\frac{V_s}{V_p} = \frac{N_s}{N_p}$$

If the secondary coil is attached to a load that allows current to flow, electrical power is transmitted from the primary circuit to the secondary circuit. Ideally, the transformer is perfectly efficient: all the incoming energy is transformed from the primary circuit to the magnetic field and into the secondary circuit. If this condition is met, the incoming electric power must equal the outgoing power,

$$P_{in} = I_p V_p = I_s V_s = P_{out}$$

giving the ideal transformer equation

$$\frac{V_s}{V_p} = \frac{N_s}{N_p} = \frac{I_p}{I_s}$$

Tasks:
The secondary voltage of the open-circuited transformer is determined as a function
1. of the number of turns in the primary coil,
2. of the number of turns in the secondary coil,
3. of the primary voltage.
The short-circuit current on the secondary side is determined as a function
4. of the number of turns in the primary coil,
5. of the number of turns in the secondary coil,
6. of the primary current.
With the transformer loaded, the primary current is determined as a function
7. of the secondary current,
8. of the number of turns in the secondary coil,
9. of the number of turns in the primary coil.

EQUIPMENT:
Coil, 140 turns, 6 tappings (2); Clamping device (1); Iron core, U-shaped, laminated (1); Iron core, short, laminated (1); Multitap transformer, 14 VAC/12 VDC, 5 A (1); Two-way switch, double pole (1); Rheostat, 10 Ω, 5.7 A (1); Digital multimeter (3); Connecting cord, 500 mm, red (6); Connecting cord, 500 mm, blue (6).

THEORETICAL BACKGROUND:
A transformer makes use of Faraday's law and the ferromagnetic properties of an iron core. A transformer is used to raise or lower electrical voltages. It of course cannot increase power, so if the voltage is raised, the current is proportionally lowered, and vice versa. An electrical transformer is thus a passive device which transfers alternating-current ("AC") electric energy from one circuit into another through electromagnetic induction. An electrical transformer normally consists of a ferromagnetic core and two or more coils called "windings".
A changing current in the primary winding creates an alternating magnetic field in the core. The core strengthens this field and couples most of the flux through the secondary windings, which in turn induces an alternating voltage (or emf) in each of the secondary coils.

When an electric current passes through a long, hollow coil of wire, there will be a strong magnetic field inside the coil and a weaker field outside it. The lines of the magnetic field pattern run through the coil, spread out from the end, go round the outside and come in at the other end.

These are not real lines like the ones you draw with a pencil. They are lines that we imagine, as in the sketch, to show the pattern of the magnetic field: the direction in which a sample of iron would be magnetised by the field. Where the field is strongest, the lines are most closely crowded. With a hollow coil the lines form complete rings. If there is an iron core in the coil, it becomes magnetised and makes the field much stronger while the current is on.

The iron core of a transformer is normally a complete ring with two coils wound on it. One is connected to a source of electrical power and is called the 'primary coil'; the other supplies the power to a load and is called the 'secondary coil'. The magnetisation due to the current in the primary coil runs all the way round the ring. The primary and secondary coils can be wound anywhere on the ring, because the iron carries the changes in magnetisation from one coil to the other. There is no electrical connection between the two coils; they are connected only by the magnetic field in the iron core.

When there is a steady current in the primary there is no effect in the secondary, but there is an effect in the secondary if the current in the primary is changing. A changing current in the primary induces an e.m.f. in the secondary; if the secondary is connected to a circuit, a current flows. A step-down transformer with 1,200 turns on the primary coil connected to 240 V a.c. will produce 2 V a.c. across a 10-turn secondary (provided the energy losses are minimal) and so light a 2 V lamp. A step-up transformer with 1,000 turns on the primary fed by 200 V a.c. and a 10,000-turn secondary will give a voltage of 2,000 V a.c.

The iron core is itself a crude secondary (like a coil of one turn), and changes of primary current induce little circular voltages in the core. Iron is a conductor, and if the iron core were solid, the induced voltages would drive wasteful secondary currents in it (called 'eddy currents'). So the core is made of very thin sheets clamped together, with the face of each sheet coated to make it a poor conductor. The edges of the sheets can be seen by looking at the edges of a transformer core.
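A short numeric check of the two transformers described above, as a sketch (the helper function is my own illustration of the ideal transformer equation):

```python
def ideal_secondary_voltage(Vp: float, Np: int, Ns: int) -> float:
    """Ideal transformer: Vs / Vp = Ns / Np (for a loaded secondary the
    currents scale the opposite way, Ip / Is = Ns / Np)."""
    return Vp * Ns / Np

# Step-down example from the text: 240 V a.c., 1200-turn primary, 10-turn secondary.
print(ideal_secondary_voltage(240, 1200, 10))     # -> 2.0 V

# Step-up example: 200 V a.c., 1000-turn primary, 10000-turn secondary.
print(ideal_secondary_voltage(200, 1000, 10000))  # -> 2000.0 V
```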
Measurements:

Fig.: graph of secondary current Is against primary current Ip (both axes 0 to 2.5 A).

Recorded current pairs, first run:

| Is | 0.55 | 0.63 | 0.7 | 0.77 | 0.85 | 0.95 | 1.13 | 1.28 | 1.39 | 1.57 | 1.79 | 1.94 | 2.16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Is′ | 0.6 | 0.66 | 0.72 | 0.78 | 0.84 | 0.92 | 1.07 | 1.19 | 1.28 | 1.43 | 1.61 | 1.73 | 1.91 |

Second run:

| Ip | 0.43 | 0.5 | 0.55 | 0.59 | 0.66 | 0.74 | 0.81 | 0.9 | 0.96 | 1.02 | 1.07 | 1.14 | 1.2 | 1.24 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Is | 0.39 | 0.47 | 0.53 | 0.57 | 0.64 | 0.72 | 0.8 | 0.89 | 0.95 | 1.01 | 1.06 | 1.13 | 1.2 | 1.24 |

Secondary voltage Vs as a function of the number of secondary turns Ns:

| Ns | 14 | 28 | 42 | 56 | 70 | 84 | 98 | 112 | 126 | 140 |
|---|---|---|---|---|---|---|---|---|---|---|
| Vs | 0.23 | 0.16 | 0.12 | 0.09 | 0.07 | 0.06 | 0.05 | 0.04 | 0.03 | 0.03 |

Secondary current Is as a function of the number of primary turns Np:

| Np | 14 | 28 | 42 | 56 | 70 | 84 | 98 | 112 | 126 | 140 |
|---|---|---|---|---|---|---|---|---|---|---|
| Is | 0.16 | 0.33 | 0.5 | 0.66 | 0.82 | 0.98 | 1.11 | 1.24 | 1.35 | 1.44 |

Primary current Ip as a function of the number of secondary turns Ns:

| Ns | 14 | 28 | 42 | 56 | 70 | 84 | 98 | 112 | 126 | 140 |
|---|---|---|---|---|---|---|---|---|---|---|
| Ip | 0.21 | 0.22 | 0.25 | 0.3 | 0.35 | 0.42 | 0.48 | 0.55 | 0.61 | 0.67 |

Primary current Ip as a function of the number of primary turns Np:

| Np | 14 | 28 | 42 | 56 | 70 | 84 | 98 | 112 | 126 | 140 |
|---|---|---|---|---|---|---|---|---|---|---|
| Ip | 1.47 | 1.42 | 1.34 | 1.25 | 1.15 | 1.04 | 0.94 | 0.84 | 0.75 | 0.67 |

Secondary current Is as a function of the number of primary turns Np:

| Np | 14 | 28 | 42 | 56 | 70 | 84 | 98 | 112 | 126 | 140 |
|---|---|---|---|---|---|---|---|---|---|---|
| Is | 0.13 | 0.28 | 0.4 | 0.55 | 0.68 | 0.82 | 0.94 | 1.05 | 1.15 | 1.24 |

With the transformer loaded, i.e. as Ip changes, the secondary current Is follows the primary current:

| Ip | 0.59 | 0.63 | 0.66 | 0.76 | 0.84 | 0.97 | 1.03 | 1.1 | 1.2 | 1.3 | 1.45 | 1.6 | 1.77 | 1.98 | 2.12 | 2.24 | 2.41 | 2.54 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Is | 0.68 | 0.74 | 0.77 | 0.88 | 0.99 | 1.14 | 1.2 | 1.29 | 1.42 | 1.53 | 1.71 | 1.89 | 2.09 | 2.34 | 2.52 | 2.66 | 2.86 | 3.02 |

Secondary current Is as a function of the number of secondary turns Ns:

| Ns | 14 | 28 | 42 | 56 | 70 | 84 | 98 | 112 | 126 | 140 |
|---|---|---|---|---|---|---|---|---|---|---|
| Is | 7.51 | 5.34 | 3.96 | 3.11 | 2.55 | 2.16 | 1.88 | 1.66 | 1.48 | 1.34 |

RLC CIRCUIT

Principle:
The current and voltage of parallel- and series-tuned circuits are investigated as a function of frequency. The Q-factor and the band-width are determined.

## Related Topics of RLC Circuits:

1. Resistance
2. Reactance
3. Capacitance
4. Inductance
5. Coil
6. Q factor
7. Band-width
8. Half Power Points
9. Impedance

1. Resistance:
Resistance can be defined as the ability of an object to oppose the flow of electrical current. An object resists electrical current because collisions between atoms and electrons impede the electrons' flow; in these collisions electrical energy is converted into heat and light energy. The resistor can be seen in the figure.

The resistance of an object is directly proportional to its resistivity and to the length of the rod, and inversely proportional to the cross-sectional area of the rod:

$$R = \rho\,\frac{L}{A}$$

In 1827 Georg Ohm discovered the behaviour of resistance and stated a relation between the resistance, the voltage and the electrical current; the SI unit of resistance, the ohm (Ω), is named after him. The relation is written

$$V = IR$$

where V is the voltage. For a pure resistance, the current and the voltage are in phase.

2. Reactance:
Reactance is the measure of the opposition of capacitance and inductance to current. It is denoted by X, it varies with the frequency of the electrical signal, and it is also measured in ohms (Ω). There are two types of reactance:

1. Capacitive reactance: $X_C = \dfrac{1}{\omega C}$
2. Inductive reactance: $X_L = \omega L$

3. Capacitance:
In electromagnetism and electronics, capacitance is the ability of a body to hold an electrical charge. Capacitance is also a measure of the amount of electrical energy stored (or separated) for a given electric potential. A common form of energy storage device is a parallel plate capacitor.
The capacitor can be seen in the figure. In a parallel plate capacitor, the capacitance is directly proportional to the surface area of the conductor plates and inversely proportional to the separation distance between the plates. If the charges on the plates are +Q and −Q, then V gives the voltage between the plates. The SI unit of capacitance is the farad (1 farad is 1 coulomb per volt).

4. Inductance:
Inductance is the property of an electrical circuit causing a voltage to be generated proportional to the rate of change of the current in the circuit. This property is also called self-inductance, to distinguish it from mutual inductance, which describes the voltage induced in one electrical circuit by the rate of change of the current in another circuit. The SI unit of inductance is the weber per ampere, known as the henry (H), named after the American scientist and magnetism researcher Joseph Henry (1 H = 1 Wb/A). The relation between voltage and current is linear, akin to Ohm's law, but with an extra time derivative; the simplest solutions are a constant current with no voltage, or a current changing linearly in time with a constant voltage. The term inductance was coined by Oliver Heaviside in February 1886, and it is customary to use the symbol L for inductance, possibly in honour of the physicist Heinrich Lenz.

5. Coil:
A coil is a series of loops. A coiled coil is a structure in which the coil itself is in turn also looping. These objects are commonly used and very important; some of their applications are in bikes, cars, trains and planes, and they are often used in conjunction with a thread. Some coils of different shapes can be seen in the figure below.

6. Q Factor:
The common definition of Q for a series or parallel RLC circuit is

$$Q = \frac{f_0}{f_{hi} - f_{lo}}$$

where f_0 is the resonant frequency, f_hi is the frequency at the half-power point above resonance, and f_lo is the frequency at the half-power point below resonance. The figure also illustrates this factor.

7. Band-width:
An LC circuit is resonant at one frequency, and this is only true for the maximum resonance effect; the remaining frequencies close to resonance are effective too.

For series resonance: frequencies just below or above resonance produce an increased current, though a little less than the value at resonance.

For parallel resonance: frequencies close to resonance provide a high impedance, although a little less than the maximum.

So any resonant frequency has an associated band of frequencies that provide the resonant effect; in practice it is not possible to build an RLC circuit with a resonant effect at only one frequency. The width of the resonant band of frequencies centred around resonance is called the band-width of the tuned circuit.

8. Half Power Points:
A response of 70.7% in current corresponds to 50% in power, since the square of 0.707 equals 0.50; the band between the two frequencies with a 70.7% current response is therefore the band-width in terms of half-power points.
9. Impedance:
Impedance, denoted by Z, is the measure of the overall opposition of a circuit to current, that is, how much the circuit impedes the flow of current. It is like a resistance, but it also takes into account the effects of capacitance and inductance. Impedance is measured in ohms (Ω). For a series RLC circuit:

$$Z = \sqrt{R^2 + (X_L - X_C)^2}$$

For a parallel RLC circuit:

$$Z = \left[\sqrt{\left(\frac{1}{R}\right)^2 + \left(\frac{1}{X_L} - \frac{1}{X_C}\right)^2}\,\right]^{-1}$$

Tasks:
1. Determine the frequency of the AC supply at which the impedance of the coil equals the resistance of the resistor in series or in parallel with it.
2. Determine the inductance of the coil.
3. Determine the total impedance of the circuit in parallel and in series.

Required Apparatus:
1. Resistors (100 Ω, 500 Ω, 1000 Ω; all 25 W)
2. Capacitor (0.01 µF)
3. Inductor (200 mH)
4. Resistance box
5. Connecting wires
6. Audio oscillator
7. Micro-ammeter
8. Two multimeters

## Theoretical Background of a Series RLC Circuit:

Virtual instruments contain three main components: the front panel, the block diagram, and the icon/connector pane. The user interacts with the program through the front panel, which we build with controls and indicators, the interactive input and output terminals of virtual instruments. The graphical source code is called a block diagram. The connector pane defines the inputs and outputs that can be wired to the virtual instrument, so that it can be used as a sub-virtual-instrument.

## Resonant RLC circuit:

We have a series RLC circuit composed of an inductor L, a capacitor C, and a small resistor R; the inductor also has its own resistance from the coil winding. A series RLC circuit exhibits a peak in the current when the driving frequency equals the resonance frequency of the circuit.

The magnitude of the total impedance of the RLC circuit is

$$Z = \sqrt{R_{total}^2 + \left(2\pi f L - \frac{1}{2\pi f C}\right)^2}\,, \qquad R_{total} = R_S + R_L$$

At very low frequencies the capacitor acts like an open circuit: the total impedance Z goes to infinity, no current flows through the circuit, and hence there is no voltage across the series resistor. In the opposite limit of very high frequencies the inductor acts like an open circuit; again there is no current in the circuit and no voltage across the series resistor. At the resonance frequency the reactance of the capacitor cancels the reactance of the inductor, leaving only the small series resistance and the resistance of the coil windings.

A large current of magnitude $U_O/R_{total}$ then flows through the circuit, and a large maximum voltage

$$U_{max} = U_O\,\frac{R_S}{R_{total}}$$

appears across the series resistor; the resonance frequency is found by setting the two reactances equal, as derived below.

Once the peak voltage at the resonance frequency has been measured, we can also measure the two frequencies at which the voltage across the series resistor is only 70.7% of the peak. One frequency, f_low, is somewhat lower than the resonance frequency, while the second, f_hi, is somewhat higher. The Q of the RLC circuit is defined as

$$Q = \frac{f_O}{f_{hi} - f_{low}}$$

## RLC Series Circuit:

A capacitor is an electrical device that can store energy in the electric field between a pair of conductors called plates. If the angular frequency of the AC signal applied by the function generator is ω (equal to 2πf), the impedance Z of a series RLC circuit is

$$Z = R + i\left(\omega L - \frac{1}{\omega C}\right)$$

where R is the resistance (Ω) of the resistor, L is the inductance (H) of the inductor and C is the capacitance (F) of the capacitor. The current passing through the circuit is

$$I = \frac{V_O}{R + i\left(\omega L - \dfrac{1}{\omega C}\right)}$$

There are two kinds of characteristics to study for the RLC circuit: the amplitude and the phase.
The amplitude of I equals

$$|I| = \frac{V_O}{\sqrt{R^2 + \left(\omega L - \dfrac{1}{\omega C}\right)^2}}$$

so when the frequency f equals the resonant frequency

$$f_O = \frac{1}{2\pi\sqrt{LC}}$$

the amplitude reduces to just $V_O/R$. The phase of I is determined by

$$\phi = \arctan\left(\frac{\omega L - 1/\omega C}{R}\right)$$

We can conclude from the above two equations that the amplitude |I| reaches a maximum value while the phase becomes zero when the frequency f equals the resonant frequency. Also, the phase of I is negative (capacitive) if f is less than f_O and positive (inductive) if f is greater than f_O. See the figure of the RLC series circuit (RLC circuit with voltage source).

## RLC Parallel Circuit:

Consider the circuit in which the current-voltage relations are:
- for R: V and I_R are in phase;
- for L: V and I_L are out of phase; V leads the current by 90°;
- for C: V and I_C are out of phase; I_C leads V by 90°.

Now consider that I = I_R + I_C + I_L (as vectors), where

$$I_R = \frac{V}{R}, \qquad I_C = \frac{V}{X_C}, \qquad I_L = \frac{V}{X_L}$$

I is a vector with components I_R and (I_C − I_L), so its magnitude is

$$|I| = \sqrt{I_R^2 + (I_C - I_L)^2}$$

By rearranging we get

$$I = V\sqrt{\left(\frac{1}{R}\right)^2 + \left(\omega C - \frac{1}{\omega L}\right)^2}$$

and comparing this with I = V/R gives I = V/Z. The phase angle is

$$\tan\phi = \frac{I_C - I_L}{I_R} = \frac{V/X_C - V/X_L}{V/R}\,, \qquad \phi = \arctan\left(R\left(\omega C - \frac{1}{\omega L}\right)\right)$$

Hence the instantaneous value of the current is i = I sin(ωt − φ), and the voltage is v = V sin(ωt − φ). We now have

$$V_m = \frac{I}{\sqrt{\left(\dfrac{1}{R}\right)^2 + \left(\omega C - \dfrac{1}{\omega L}\right)^2}}$$

## Experimental Procedure and Measurements:

1. Select a suitable combination of R, L and C.
2. Take R in the range 100 Ω to 1000 Ω.
3. Take L in the range 100 mH to 500 mH.
4. Take C in the range 0.001 µF to 0.1 µF.
5. Connect the circuit as shown in the figure.
6. Set the oscillator output at a low voltage, say 1 volt.
7. Set the oscillator frequency equal to the calculated resonance frequency.
8. Set the multimeter current range to 10 mA AC.
9. Switch on the oscillator and observe the current flowing in the circuit. It can be verified by slightly adjusting the oscillator frequency; adjust the range of the multimeter if required.
10. Bring the oscillator frequency to a value below the resonance frequency.
11. Increase the frequency in regular steps, say 500 Hz, to a value about as far above resonance.
12. Make sure that the output voltage of the oscillator remains the same for all frequencies.
13. Record two more sets of observations for different values of resistance.
14. Plot the current values against frequency for each set and obtain the curves.
15. For a small value of R the peak current is high; it is low for large values of R.
16. The resonance frequency is the same for all curves. In this method the frequency variation is very large, so the curve should be plotted between I and log f.
17. In the end, determine the band-width and quality factor for each curve.
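As a sketch of steps 10 to 17 (the component values here are assumed for illustration, not the ones used in the lab), the following computes the series-RLC current amplitude over a frequency sweep and estimates f_O, the band-width and Q from the half-power points:

```python
import math

# Assumed example components: R = 500 ohm, L = 200 mH, C = 0.01 uF.
R, L, C, V0 = 500.0, 0.200, 0.01e-6, 1.0

def current(f):
    """|I| = V0 / sqrt(R^2 + (wL - 1/(wC))^2) for a series RLC circuit."""
    w = 2 * math.pi * f
    return V0 / math.sqrt(R**2 + (w * L - 1 / (w * C))**2)

freqs = list(range(100, 20001, 10))
amps = [current(f) for f in freqs]

i_peak = max(range(len(amps)), key=lambda i: amps[i])
f0, I_peak = freqs[i_peak], amps[i_peak]

# Half-power points: frequencies where the current is at least 70.7% of the peak.
half = [f for f, a in zip(freqs, amps) if a >= 0.707 * I_peak]
f_low, f_hi = half[0], half[-1]
print(f"f0 ≈ {f0} Hz (theory: {1 / (2 * math.pi * math.sqrt(L * C)):.0f} Hz)")
print(f"band-width ≈ {f_hi - f_low} Hz, Q ≈ {f0 / (f_hi - f_low):.2f}")
```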
## FOR R-L-C SERIES

| f (Hz) | I | f (Hz) | I |
|---|---|---|---|
| 0 | 0 | 1300 | 17.5 |
| 30 | 0.7 | 1400 | 17.5 |
| 80 | 1 | 1500 | 17.4 |
| 120 | 1.25 | 1600 | 16.25 |
| 180 | 2 | 1800 | 15 |
| 240 | 2.5 | 2100 | 13.75 |
| 320 | 3.75 | 2500 | 12.5 |
| 380 | 5 | 3000 | 11.25 |
| 450 | 6.25 | 3800 | 10 |
| 520 | 7.5 | 5000 | 8.75 |
| 610 | 8.75 | 7200 | 7.5 |
| 700 | 10 | 9000 | 6.25 |
| 760 | 11.25 | 13000 | 5 |
| 860 | 12.5 | 15000 | 3.75 |
| 950 | 13.75 | 20000 | 2.5 |
| 1050 | 15 | 31000 | 1.25 |
| 1150 | 16.25 | 70000 | 2.5 |

Fig.: graph of I against log f for the series RLC circuit.

And then,

$$L = \frac{X_L}{2\pi f} = \frac{2000}{2\pi\,(7900)} = 0.0403\ \mathrm{H}$$

$$f_r = \frac{1}{2\pi\sqrt{LC}} = \frac{1}{2\pi\sqrt{0.0403\,(10\times 10^{-9})}}\,, \qquad f_r = 512\ \mathrm{Hz}$$

## FOR RL-C PARALLEL

| f (Hz) | I | f (Hz) | I |
|---|---|---|---|
| 100 | 25 | 1150 | 6.25 |
| 160 | 23.75 | 1260 | 5 |
| 220 | 22.5 | 1490 | 3.75 |
| 280 | 21.25 | 1670 | 2.5 |
| 320 | 20 | 1850 | 3.75 |
| 380 | 18.75 | 2050 | 5 |
| 425 | 17.5 | 2300 | 6.25 |
| 475 | 16.25 | 2500 | 7.5 |
| 500 | 15 | 2800 | 8.75 |
| 560 | 14.375 | 2950 | 10 |
| 600 | 13.75 | 3400 | 11.25 |
| 650 | 13.125 | 3900 | 12.5 |
| 675 | 12.5 | 4250 | 13.75 |
| 710 | 11.875 | 4700 | 15 |
| 770 | 11.25 | 5400 | 17.5 |
| 850 | 10 | 6900 | 20 |
| 955 | 8.75 | 8900 | 22.5 |
| 1060 | 7.5 | 15000 | 25 |

Fig.: graph of current against log f for the parallel RL-C circuit.

## COULOMB'S LAW / IMAGE CHARGE

Basic Principle:
A small electrically charged ball is positioned at a certain distance in front of a metal plate lying at earth potential. The surface charge induced on the plate by electrostatic induction, together with the charged ball, forms an electric field analogous to that which exists between two oppositely charged point charges. The electrostatic force acting on the ball can be measured with a sensitive torsion dynamometer.

Equipment:
Plate capacitor, 283 × 283 mm (1); Insulating stem (2); Conductor ball, d = 40 mm (2); Conductor spheres with suspension (1); Torsion dynamometer, 0.01 N (1); Weight holder for slotted weights (1).

Tasks:
1. Determination of the impedance of a coil as a function of frequency.
2. Determination of the inductance of a coil.
3. Determination of the phase displacement between terminal voltage and current as a function of the frequency in the circuit.
4. Determination of the total impedance of coils connected in parallel and in series.

RELATED TOPICS

Electric Field:
The electric field is the force per unit charge that would be experienced by a stationary point charge at a given location in the field. This electric field exerts a force on other electrically charged objects. The strength of the field at a given point is defined as the force that would be exerted on a positive test charge of 1 coulomb placed at that point; the direction of the field is given by the direction of that force. Using Coulomb's law we get the vector of the electric field produced by a point charge q. This field does not depend upon the test charge; it depends only on the charge producing the field and the distance at which it is measured. The electric field can be represented graphically by means of electric field lines, as shown in the figure; the number of lines drawn is N = s·E, where s is an arbitrary scale parameter, the same for all points. The electric field is thus equal to the electric force per unit charge, and its unit is the newton per coulomb (N/C), equivalent to the volt per metre (V/m).

Electrostatic Induction:
Electrostatic induction is a method by which an electrically charged object can be used to create an electrical charge in a second object without contact between the two objects. It is the redistribution of electrical charge in an object caused by the influence of nearby charge.
Electrostatic generators, such as the Wimshurst machine, the Van de Graaff generator and the electrophorus, use this principle. Electrostatic induction should not be confused with electromagnetic induction, although both are often referred to simply as induction. The induction effect also occurs in dielectric objects and is responsible for the attraction of small, light, non-conductive objects (like scraps of paper) to static electric charges. In non-conductors the electrons are bound to atoms and are not free to move about the object; they can, however, move a little within the atoms.

Electrostatic Potential:
Electrical potential is the potential energy per unit charge associated with a static electric field; it is also called the electrostatic potential or the electric potential. Technically, it is the potential associated with the conservative electric field E that occurs when the magnetic field is time-invariant. It is measured in volts and is a Lorentz scalar quantity. The difference in electrical potential between two points is known as voltage. In the case of the Coulomb force a direct demonstration can be made that it is a conservative force, since the force is directly proportional to the electric field.

Dielectric Strength:
Dielectric strength is a measure of the electrical strength of a material as an insulator. It is defined as the maximum voltage required to produce dielectric breakdown through the material, and it is expressed in volts per unit thickness. The higher the dielectric strength of a material, the better it is as an insulator. The dielectric strength is determined by the maximum AC voltage per unit thickness that the material can withstand before breakdown occurs. The test consists of placing a sample of the material between two neoprene disks coated with silicone grease to prevent flashover; a potential is then placed across the sample and increased at a specified rate until breakdown occurs.

Dielectric Displacement:
Dielectric displacement is charge per surface area. When an electrical charge acts on a dielectric material, it stresses and polarizes the molecules in the material, and the material remains in an electrically stressed state when the charge is removed. Dielectric displacement current is charge displacement without charge transport: the displacement current does not involve the motion of charges, but involves the formation of electric dipoles, which creates an effect similar to the movement of charges. The cyclical reversal of these dipoles gives rise to magnetic effects, which in turn give rise to displacement currents; this alternating electromagnetic cycle persists until it is absorbed somewhere in time and space. The common equation of the dielectric displacement is

dielectric displacement = electric field strength × permittivity

and the unit of the dielectric displacement is coulombs per metre squared.

Theoretical Background:
The French engineer Charles Coulomb investigated the quantitative relation of the forces between charged objects during the 1780s. Using a torsion balance device of his own construction, he could determine how the electric force varies as a function of the magnitude of the charges and the distance between them. Coulomb used little spheres with different charges whose exact values he did not know, but the experiment allowed him to test the relation between the charges.
Coulomb realized that if a charged sphere touches an identical uncharged sphere, the charge is shared symmetrically in equal parts. Thus he had a way to generate charges equal to q/2, q/4, etc., from the original charge q. Keeping the distance between the charges constant, he noticed that if the charge of one of the spheres was doubled, the force was also doubled; and if the charge on both spheres was doubled, the force increased to four times its original value. When he varied the distance between the charges, he found the force decreased with the square of the distance; that is, if the distance was doubled, the force decreased to a fourth of its original value. In that way Coulomb demonstrated that the electric force between two stationary charged particles is:

1. inversely proportional to the square of the distance r between the particles, and directed along the line that joins them;
2. proportional to the product of the charges q1 and q2;
3. attractive if the charges have opposite electrical sign, and repulsive if the charges have the same sign.

Coulomb's law can be expressed in the form of an equation:

$$F = k_e\,\frac{q_1 q_2}{r^2}$$

The validity of Coulomb's law has been verified with modern devices, which have shown the exponent 2 to be exact to one part in 10^16. k_e is a constant known as Coulomb's constant, which in International System units has the value k_e = 8.987 × 10^9 N·m²/C². The International System unit of charge is the coulomb. The smallest known charge in nature is the charge of an electron or proton, which has an absolute value of e = 1.60219 × 10^−19 C; a 1 coulomb charge is thus approximately equal to the charge of 6.24 × 10^18 (= 1 C/e) electrons or protons. We should notice that the force is a vectorial quantity, that is, it has both magnitude and direction. Coulomb's law expressed in vectorial form for the electric force F12 exerted by a charge q1 on a second charge q2 is

$$\vec{F}_{12} = k_e\,\frac{q_1 q_2}{r^2}\,\hat{r}$$

As every force obeys Newton's third law, the electric force exerted by q2 on q1 is equal in magnitude to the force exerted by q1 on q2 and opposite in direction, that is F21 = −F12. If q1 and q2 have the same sign, F12 takes the direction of r; if q1 and q2 have opposite signs, the product q1·q2 is negative and F12 points opposite to r. When two or more charges are present, the force between any pair of them is given by the above equation, and the resultant force on any one of them equals the vectorial sum of the forces exerted by the individual charges. For example, with three charges, the resultant force exerted by particles 2 and 3 on 1 is F1 = F21 + F31.

The measured values are given below.

| Sr | 01 | 02 | 03 |
|---|---|---|---|
| Q | 14 | 14.4 | 14.6 |
| Q² | 0.282 | 0.32 | |

| Sr | d | F | Q |
|---|---|---|---|
| 01 | 6 | 1.09 | 12.4 |
| 02 | 6 | 1.05 | 14.6 |
| 03 | 6 | 1 | 15 |

| Sr | 01 | 02 | 03 |
|---|---|---|---|
| Q | 13 | 13.2 | 13.4 |

From the image-charge relation, the permittivity of free space follows as

$$\varepsilon_0 = \frac{Q^2}{16\pi d^2 F} = \frac{(13)^2 \times 10^{-11}}{16\pi\,(5.5)^2\,(0.17)} = 6.73 \times 10^{-12}\ \mathrm{As/V}$$

Fig.: relationship between the electrostatic force F and the square of the charge Q².

Wheatstone Bridge

BASIC PRINCIPLE
A Wheatstone bridge is used to find the unknown resistance of resistors and wires by connecting the circuit in a series or parallel arrangement. The total resistance of resistors connected in series or in parallel is also measured.

Tasks:
1. First, find an unknown resistance.
2. Second, find the total resistance of resistors connected in series.
3. Third, find the total resistance of resistors connected in parallel.
4. Fourth, find the resistance of a wire.
Required Apparatus:
1. Resistance board, metal (1)
2. Slide-wire measuring bridge (1)
3. Connection box (1)
4. Carbon resistor 1 W, 10 Ω (1)
5. Carbon resistor 1 W, 100 Ω (1)
6. Carbon resistor 1 W, 150 Ω (1)
7. Carbon resistor 1 W, 330 Ω (1)
8. Carbon resistor 1 W, 680 Ω (1)
9. Carbon resistor 1 W, 1 kΩ (1)
10. Carbon resistor 1 W, 4.7 kΩ (1)
11. Carbon resistor 1 W, 10 kΩ (1)
12. Carbon resistor 1 W, 15 kΩ (1)
13. Carbon resistor 1 W, 82 kΩ (1)
14. Carbon resistor 1 W, 100 kΩ (1)
15. Power supply 5 V/1 A (1)
16. Digital multimeter (1)
17. Connecting cord, l = 500 mm, red (2)

Related Physics:
a. Conductor, b. Voltage, c. Circuit, d. Resistance, e. Parallel and series connections, f. Kirchhoff's laws.

Conductor:
A conductor is an object or substance that allows electricity to pass through it with some resistance. A good example is a copper wire; other examples are nanotubes. A conductor is made of a material that contains movable negative electrical charges called electrons. Positive charges can also be mobile, in the form of atoms in a diode or ions in the electrolyte of a battery.

Voltage:
The voltage between two points is, in effect, the electric driving force that moves electric current between those points. In the case of static electric fields, the voltage between two points is equal to the electrical potential difference between those points. The electric potential is the energy required to move a unit electric charge to a particular place in a static electric field, and V = IR.

Circuit:
A circuit is an electrical network in which different electrical components, such as resistors (e.g. a bulb), capacitors, inductors and batteries, are interconnected in series or parallel loops. A number of electrical laws apply when designing a circuit. The figure shows the circuit diagram of the Wheatstone bridge, which is analysed by applying Kirchhoff's laws.

Resistance:
Resistance is the property of a component which restricts the flow of electric current in a circuit; energy is used as the voltage across the component drives the current through it. Resistance is measured in ohms, and the symbol for the ohm is the omega, Ω. 1 Ω is quite small for electronics, so resistances are often given in kΩ and MΩ: 1 kΩ = 1000 Ω and 1 MΩ = 1 000 000 Ω.

Combined resistance in series: R = R1 + R2 + R3 + R4 + ...
Combined resistance in parallel: 1/Req = 1/R1 + 1/R2 + 1/R3 + ...

Parallel and series connections:
There are two ways in which components are connected in a circuit: series and parallel. In a series combination, components are connected so that the current through each component remains the same while the voltage divides according to the resistances. In a parallel combination, components are connected one above the other, so that the voltage across each component remains the same.

Kirchhoff's laws:
In 1845 the German physicist Gustav Kirchhoff first described two basic laws of electricity and magnetism which play a central role in physics and electrical engineering; these are known as Kirchhoff's laws. There are two of them: Kirchhoff's voltage law and Kirchhoff's current law. The laws were generalized from the work of Georg Ohm, and they can also be derived from Maxwell's equations.

Kirchhoff's current law:
Kirchhoff's current law is also called the Kirchhoff point rule, junction rule, nodal rule, or Kirchhoff's first rule. It embodies the law of conservation of charge.
This law states that in any circuit the current flowing into a node must equal the current flowing out of it; equivalently, the algebraic sum of all the currents meeting at a point is zero. Note that a current entering a node counts as the negative of a current leaving the node.

Kirchhoff's voltage law:
This law is also called Kirchhoff's second law, the loop rule or the mesh rule, and it rests on the law of conservation of energy. It states that the sum of voltage changes around a closed loop must equal zero; equivalently, the algebraic sum of the products of the resistances of the conductors and the currents through them around a closed loop equals the total voltage available in that loop.

This law holds true even when resistance, which causes dissipation of energy, is present in the circuit. Its validity in this case can be understood if one realizes that a charge does not in fact return to its starting point with the same energy, due to the dissipation.

Theoretical Background
Bridge circuits are designed to allow the determination of the value of an unknown circuit element such as a resistor, a capacitor, or an inductor; the figure shows the circuit diagram for a typical bridge. The bridge elements are connected between junctions AC, BC, AD, and BD. V represents either an AC or DC voltage source, and G represents a null-detecting device such as a galvanometer, a voltmeter, or an oscilloscope. Generally, one or more of the circuit elements in the bridge can be varied until the potential difference between junctions C and D (VCD) is zero. When this situation exists, the bridge is said to be balanced or "nulled". The following relationships then hold for the voltages in the main branches:

$$V_{AC} = V_{AD} \qquad (1)$$

$$V_{BC} = V_{BD} \qquad (2)$$

When (1) is divided by (2) and rearranged, the voltage across any branch can be found in terms of the voltages across the remaining three. For example, the voltage between junctions A and D is

$$V_{AD} = V_{BD}\,\frac{V_{AC}}{V_{BC}} \qquad (3)$$

The Wheatstone bridge is shown schematically in Figure 2. The coil of wire whose resistance is to be determined is connected between junctions A and D, and a known value of resistance is connected between B and D. A potentiometer is connected between A and B with a tap at point C. The position of the tap can be altered by adjusting the dial on the potentiometer, thereby changing the resistances R_AC and R_BC on either side of point C; these changes then vary the voltages V_AC and V_BC.

When the bridge is in the null condition, (3) holds, a current I1 flows from A to D to B, and a current I2 flows from A to C to B. Knowing that the voltage across a resistor is IR (Ohm's law), equation (3) can be expressed as

$$R_{AD} = R_{BD}\,\frac{R_{AC}}{R_{BC}} \qquad (4)$$

The value of R_AC is the reading on the potentiometer dial, R, and the value of R_BC is 10 − R. Equation (4) can now be written as

$$R_{AD} = R_{BD}\,\frac{R}{10 - R} \qquad (5)$$

This is the working equation for the Wheatstone bridge. The resistivity of the coil of wire with resistance R_AD can now be determined. For a wire with a uniform cross-sectional area, the resistance is

$$R_{AD} = \rho\,\frac{L_{AD}}{A} \qquad (6)$$

where ρ is the resistivity, L_AD is the length of the coil of wire, and A is its cross-sectional area. When the cross-sectional area is expressed in terms of the wire's diameter d (A = πd²/4), rearranging gives the expression for the resistivity:

$$\rho = \frac{\pi d^2 R_{AD}}{4\,L_{AD}} \qquad (7)$$
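Before the measured values below, here is a small sketch of the balance calculation. The slide-wire tables in this report use the equivalent form Rx = R·(L1/L2), with L1 and L2 the two wire segments at the balance point; the helper function is my own:

```python
def wheatstone_unknown(R_known: float, L1: float, L2: float) -> float:
    """Slide-wire Wheatstone bridge at balance: Rx = R_known * (L1 / L2),
    the slide-wire form of equation (5), with L1 and L2 the wire lengths
    on either side of the balance point."""
    return R_known * L1 / L2

# First row of the single-resistance table below:
# R = 330 ohm, L1 = 230 mm, L2 = 770 mm.
Rx = wheatstone_unknown(330, 230, 770)
print(f"Rx ≈ {Rx:.1f} ohm")   # ≈ 98.6 ohm, against an actual value of 100 ohm
```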
Practical Calculation:

SINGLE RESISTANCE

| Sr | Known R (Ω) | L1 (mm) | L2 (mm) | Rx = (L1/L2)·R (Ω) | Actual R (Ω) | Difference (Ω) |
|---|---|---|---|---|---|---|
| 1 | 330 | 230 | 770 | 98 | 100 | 2 |
| 2 | 330 | 670 | 330 | 670 | 680 | 10 |
| 3 | 330 | 750 | 250 | 990 | 1000 | 10 |
| 4 | 330 | 122 | 878 | 45.5 | 47 | 1.5 |
| 5 | 330 | 934 | 66 | 4670 | 4700 | 30 |
| 6 | 330 | 26 | 974 | 8.9 | 10 | 1.1 |

## Determination of resistors connected in a circuit using the Wheatstone bridge

RESISTANCE IN SERIES

| Sr | Known R (Ω) | L1 (mm) | L2 (mm) | Rx = (L1/L2)·R (Ω) | Actual (Ω) | Difference (Ω) |
|---|---|---|---|---|---|---|
| 1 | 330 | 835 | 165 | 1670 | 1680 | 10 |
| 2 | 330 | 760 | 240 | 1045 | 1047 | 2 |
| 3 | 330 | 754 | 246 | 1011 | 1010 | 1 |
| 4 | 330 | 766 | 234 | 1080 | 1100 | 20 |
| 5 | 330 | 945 | 55 | 5670 | 5680 | 10 |

## Determination of resistors connected in series using the Wheatstone bridge

RESISTANCE IN PARALLEL

| Sr | Known R (Ω) | L1 (mm) | L2 (mm) | Rx = (L1/L2)·R (Ω) | Actual (Ω) | Difference (Ω) |
|---|---|---|---|---|---|---|
| 1 | 330 | 26 | 974 | 8.98 | 9.8 | 0.2 |
| 2 | 330 | 210 | 790 | 87.70 | 87.17 | 0.63 |
| 3 | 330 | 86 | 914 | 31.05 | 31.97 | 0.91 |
| 4 | 330 | 207 | 793 | 86.14 | 87.17 | 1.03 |

## Determination of the resistance of wires of various radii using the Wheatstone bridge

| Sr | Known R (Ω) | L1 (mm) | L2 (mm) | Rx = (L1/L2)·R (Ω) | Radius (mm) |
|---|---|---|---|---|---|
| 1 | 10 | 57 | 943 | 0.604 | 1.00 |
| 2 | 10 | 203 | 797 | 2.547 | 0.5 |
| 3 | 10 | 114 | 886 | 1.286 | 0.7 |
| 4 | 10 | 113 | 887 | 1.274 | 0.7 |
| 5 | 10 | 35 | 965 | 0.3627 | 0.5 |

## Internal Resistance and Matching Voltage

Basic Principle:
The terminal voltage and current of a voltage source depend on the load, i.e. on the external resistance. The terminal voltage is determined as a function of the current; from this, the no-load voltage of the source and its internal resistance are determined, and the power graph is plotted.

Related Physics:
Voltage source, electromotive force, terminal voltage, no-load operation, short circuit, Ohm's law, Kirchhoff's laws, power matching.

Voltage source:
A source which produces a constant voltage, such as a battery or a combination of batteries, an AC or DC generator, a solar cell or a turbine-driven generator, is called a voltage source. In electric circuit theory, an ideal voltage source is a circuit element in which the voltage across it is independent of the current through it: it supplies a constant DC or AC potential between its terminals for any current flowing through it. A voltage source provides energy in the form of an electric driving force. An AC voltage source has an output voltage in the form of a sine wave,

$$V(t) = V\sin(\omega t)$$

ELECTROMOTIVE FORCE (e.m.f.):
Any device that maintains a potential difference between two points in an electrical circuit is called a source of emf, or voltage source, and the potential it maintains between those points is called the emf (electromotive force, or electromotance): that which tends to cause current (actual electrons and ions) to flow.

TERMINAL VOLTAGE:
The voltage obtained between the positive and negative terminals of a battery or combination of batteries (AC, DC or chemical cell) is called the terminal voltage. Because any battery has an internal resistance Ri, its terminal voltage VT drops when current is drawn from it.

Kirchhoff's laws:
In 1845 the German physicist Gustav Kirchhoff first described two basic laws of electricity and magnetism, known as Kirchhoff's laws: Kirchhoff's voltage law and Kirchhoff's current law, generalized from the work of Georg Ohm.
The laws can also be derived from Maxwell's equations. As stated in full in the Wheatstone bridge section above, the current law says that the algebraic sum of the currents meeting at any point is zero (conservation of charge), and the voltage law says that the sum of the voltage changes around any closed loop is zero (conservation of energy); both hold even when dissipative resistance is present in the circuit.

Ohm's law:
Ohm's law states that the current is directly proportional to the applied voltage and inversely proportional to the resistance. Ohm's law defines the relationships between power (P), voltage (V), current (I), and resistance (R). One ohm is the resistance through which one volt maintains a current of one ampere. Mathematically,

V = IR, R = V/I

Short circuit:
A short circuit is a very low-resistance connection between two points of an electrical circuit. Because of the very low resistance, a very large current passes through the circuit, which may damage it. A common type of short circuit occurs when the positive and negative terminals of a battery are connected together with a low-resistance conductor, like a wire. With low resistance in the connection, a high current exists, causing the cell to deliver a large amount of energy in a short time.

Equipment:
Battery box (06030.21), 1; Flat cell battery, 9 V (07496.10), 1; Flat battery, 4.5 V (07496.01), 1; Power supply 5 V DC/0.3 A (11076.93), 1; Rheostat, 10 Ω, 5.7 A (06110.02), 1; Rheostat, 100 Ω, 1.8 A (06114.02), 1; Digital multimeter (07134.00), 2; Double sockets, 1 pair, red and black (07264.00), 1; Connecting cord, 500 mm, red (07361.01), 3; Connecting cord, 500 mm, blue (07361.04), 2.

Tasks:
1. Measure the terminal voltage Ut of a number of voltage sources as a function of the current, varying the external resistance Re, and calculate the no-load voltage U0 and the internal resistance Ri:
   1.1 slimline battery;
   1.2 power supply: 1.2.1 alternating voltage output, 1.2.2 direct voltage output.
2. Measure directly the no-load voltage of the slimline battery (with no external resistance) and its internal resistance (by power matching, Ri = Re).
3. Determine the power diagram from the relationship between terminal voltage and current, as illustrated by the slimline battery.

## Set-up and procedure:

1. Connect a variable resistor Re to the voltage source as shown in the figure, using the 100 Ω or the 10 Ω rheostat so that the higher currents can be measured. Vary the current I in 0.1 A steps for the slimline battery and in 0.05 A steps for the power supply, and measure the terminal voltage Ut with the voltmeter.
2. Measure the no-load voltage directly, with no external resistance. Then load the voltage source with an external resistor and set Re so that Ut = U0/2; in this case the internal resistance is Ri = Re. Measure Re with the multimeter in its resistance-measuring mode.

## Theory and Evaluation:

The battery is used to drive a current through a resistor. A digital voltmeter measures the emf of the battery and the p.d. across the resistor when the current flows. If an ideal voltage source with no-load voltage U0 is connected in series with an internal resistance Ri, the real voltage source can be represented by the equivalent circuit diagram. If a voltage source (AC, DC or chemical cell battery) is connected to an external resistance Re, then according to Ohm's law

$$I = \frac{U_0}{R_i + R_e}$$

The terminal voltage can therefore be represented as

$$U_t = U_0 - R_i I$$

so that the measured points lie on a straight line, in accordance with the above equation.

Slimline battery: U0 = 4.66 V, Ri = 1.47 Ω, sU0 = 0.03 V, sRe = 0.02 Ω.
Power supply, alternating voltage: U0 = 6.948 V, Ri = 1.55 Ω, sU0 = 0.005 V, sRe = 0.01 Ω.

The relationship between terminal voltage Ut and current I:
The terminal voltage Ut and the current I have the linear relationship Ut = U0 − Ri·I. The external resistance determines the ratio of terminal voltage to current at the working point:

$$R_e = \frac{U_t}{I}$$

No load:
No-load operation is also called voltage matching. Under no load, Re is infinite (Re = ∞), no current flows, and there is no voltage drop over Ri, so in this case Ut = U0.

Short circuit:
This case is also called current matching. Here Re = 0, and the short-circuit current is

$$I_s = \frac{U_0}{R_i}$$
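A sketch of the evaluation step: fitting the straight line Ut = U0 − Ri·I to a few (I, Ut) pairs by least squares (the data points are taken from the dry-cell table below; the code itself is my own illustration, not the lab's evaluation software):

```python
# Least-squares fit of Ut = U0 - Ri*I; the slope is -Ri, the intercept is U0.
I  = [0.3, 0.73, 1.28, 1.82, 2.54, 3.2]     # current in A (dry-cell table)
Ut = [2.62, 2.31, 1.95, 1.61, 1.09, 0.63]   # terminal voltage in V

n = len(I)
mean_I, mean_U = sum(I) / n, sum(Ut) / n
slope = sum((i - mean_I) * (u - mean_U) for i, u in zip(I, Ut)) / \
        sum((i - mean_I) ** 2 for i in I)
U0 = mean_U - slope * mean_I   # intercept of the fitted line
Ri = -slope                    # internal resistance
print(f"U0 ≈ {U0:.2f} V, Ri ≈ {Ri:.2f} ohm")
```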
Power matching:
Power matching is also called resistance matching. It is the case

$$R_e = R_i$$

in which the terminal voltage is

$$U_t = \frac{U_0}{2}$$

and the current is correspondingly

$$I_t = \frac{I_s}{2}$$

Practical calculation:

For the dry cell:

| I (A) | 0.3 | 0.31 | 0.35 | 0.37 | 0.4 | 0.49 | 0.63 | 0.73 | 0.86 | 1.06 | 1.16 | 1.28 | 1.44 | 1.82 | 2.31 | 2.54 | 3.2 | 3.81 | 3.9 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| V (V) | 2.62 | 2.61 | 2.58 | 2.57 | 2.54 | 2.48 | 2.39 | 2.31 | 2.23 | 2.1 | 2.04 | 1.95 | 1.85 | 1.61 | 1.25 | 1.09 | 0.63 | 0.17 | 0.06 |

Fig.: graph of voltage against current I for the dry cell.

For the DC supply:

| I (A) | 0.91 | 1 | 1.12 | 1.38 | 2.1 | 2.76 | 3.87 | 7.05 | 9.73 | 11.35 | 12.49 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| V (V) | 5.08 | 5.07 | 5.05 | 5.08 | 4.97 | 4.9 | 4.79 | 4.47 | 3.7 | 1.89 | 0.37 |

Fig.: graph of voltage against current I for the DC supply.

For the AC supply:

| I (A) | 0.07 | 0.1 | 0.14 | 0.16 | 0.18 | 0.21 | 0.23 | 0.27 | 0.32 | 0.37 | 0.45 | 0.52 | 0.81 | 0.87 | 0.99 | 1.26 | 1.83 | 2.1 | 2.99 | 3.34 | 4.32 | 5.11 | 6.26 | 7.56 | 11.35 | 19.11 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| V (V) | 6.41 | 6.42 | 6.41 | 6.4 | 6.4 | 6.39 | 6.38 | 6.38 | 6.37 | 6.38 | 6.37 | 6.36 | 6.32 | 6.31 | 6.3 | 6.26 | 6.2 | 6.17 | 6.07 | 6.02 | 5.91 | 5.81 | 5.64 | 5.4 | 4.93 | 3.81 |

Fig.: graph of voltage against current for the AC supply.

Power graph and matching voltage:

With U0 = 3.57 V, Ri = 0.7069 Ω, Is = 5.05 A and P0 = U0·Is = 18.0285 W throughout, the measured pairs give:

| I (A) | V (V) | Re = V/I (Ω) | I/Is | Ut/U0 | Re/Ri | Pe = I²Re (W) | Pe/P0 |
|---|---|---|---|---|---|---|---|
| 0.3 | 2.62 | 8.733 | 0.0594 | 0.7339 | 12.354 | 0.786 | 0.0436 |
| 0.31 | 2.61 | 8.419 | 0.0614 | 0.7311 | 11.910 | 0.809 | 0.0449 |
| 0.35 | 2.58 | 7.371 | 0.0693 | 0.7227 | 10.427 | 0.903 | 0.0501 |
| 0.37 | 2.57 | 6.946 | 0.0733 | 0.7199 | 9.825 | 0.951 | 0.0527 |
| 0.4 | 2.54 | 6.350 | 0.0792 | 0.7115 | 8.982 | 1.016 | 0.0564 |
| 0.49 | 2.48 | 5.061 | 0.0970 | 0.6947 | 7.159 | 1.215 | 0.0674 |
| 0.63 | 2.39 | 3.794 | 0.1248 | 0.6695 | 5.366 | 1.506 | 0.0835 |
| 0.73 | 2.31 | 3.164 | 0.1446 | 0.6471 | 4.476 | 1.686 | 0.0935 |
| 0.86 | 2.23 | 2.593 | 0.1703 | 0.6247 | 3.668 | 1.918 | 0.1064 |
| 1.06 | 2.1 | 1.981 | 0.2099 | 0.5882 | 2.802 | 2.226 | 0.1235 |
| 1.16 | 2.04 | 1.759 | 0.2297 | 0.5714 | 2.488 | 2.366 | 0.1313 |
| 1.28 | 1.95 | 1.523 | 0.2535 | 0.5462 | 2.155 | 2.496 | 0.1384 |
| 1.44 | 1.85 | 1.285 | 0.2851 | 0.5182 | 1.817 | 2.664 | 0.1478 |
| 1.82 | 1.61 | 0.885 | 0.3604 | 0.4510 | 1.251 | 2.930 | 0.1625 |
| 2.31 | 1.25 | 0.541 | 0.4574 | 0.3501 | 0.765 | 2.888 | 0.1602 |
| 2.54 | 1.09 | 0.429 | 0.5030 | 0.3053 | 0.607 | 2.769 | 0.1536 |
| 3.2 | 0.63 | 0.197 | 0.6337 | 0.1765 | 0.278 | 2.016 | 0.1118 |
| 3.81 | 0.17 | 0.045 | 0.7545 | 0.0476 | 0.063 | 0.648 | 0.0359 |
| 3.9 | 0.06 | 0.015 | 0.7723 | 0.0168 | 0.022 | 0.234 | 0.0130 |
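The Pe column peaks near Re = Ri, which is precisely the power-matching condition. Here is a small check of my own, using the constants from the table above:

```python
# Power delivered to the external resistance for a source with
# U0 = 3.57 V and Ri = 0.7069 ohm (constants from the table above).
U0, Ri = 3.57, 0.7069

def external_power(Re: float) -> float:
    """Pe = I^2 * Re with I = U0 / (Ri + Re)."""
    I = U0 / (Ri + Re)
    return I * I * Re

# Scan Re from 0.01 to 3.00 ohm: the maximum of Pe should occur at Re = Ri.
best_Re = max((r / 100 for r in range(1, 301)), key=external_power)
print(f"Pe is maximal near Re ≈ {best_Re:.2f} ohm (Ri = {Ri} ohm)")
print(f"maximum Pe ≈ {external_power(best_Re):.3f} W")  # U0^2/(4*Ri) ≈ 4.51 W
```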
{"ft_lang_label":"__label__en","ft_lang_prob":0.8747002,"math_prob":0.98667383,"size":50529,"snap":"2020-10-2020-16","text_gpt3_token_len":14418,"char_repetition_ratio":0.16568035,"word_repetition_ratio":0.10181575,"special_character_ratio":0.29850185,"punctuation_ratio":0.13267308,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9881501,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-17T03:30:29Z\",\"WARC-Record-ID\":\"<urn:uuid:43776073-7f70-4008-a3cb-08d92509d8ed>\",\"Content-Length\":\"399581\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9eb6b7c5-d5aa-4af5-b568-34bc4710408b>\",\"WARC-Concurrent-To\":\"<urn:uuid:0301f897-ceb7-4f58-8f05-4645159a4b41>\",\"WARC-IP-Address\":\"151.101.250.152\",\"WARC-Target-URI\":\"https://ro.scribd.com/document/113699007/Fullll-Final\",\"WARC-Payload-Digest\":\"sha1:3L5J4UB5S6WRYICDI3WUZ266N4VD5YFL\",\"WARC-Block-Digest\":\"sha1:BZM6P4SGNO5S7ILAQ6DDLBF5HD5DFG3V\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875141653.66_warc_CC-MAIN-20200217030027-20200217060027-00085.warc.gz\"}"}
https://lira.no-ip.org:8443/doc/qt5-doc-html/html/qtquick/qtquick-tableview-gameoflife-example.html
[ "# Qt Quick TableView examples - Conway’s Game of Life\n\nThe Conway’s Game of Life example shows how the QML TableView type can be used to display a C++ model that the user can pan around.", null, "#### Running the Example\n\nTo run the example from Qt Creator, open the Welcome mode and select the example from Examples. For more information, visit Building and Running an Example.\n\n#### The QML User Interface\n\n``` TableView {\nid: tableView\nanchors.fill: parent\n\nrowSpacing: 1\ncolumnSpacing: 1\n\nScrollBar.horizontal: ScrollBar {}\nScrollBar.vertical: ScrollBar {}\n\ndelegate: Rectangle {\nid: cell\nimplicitWidth: 15\nimplicitHeight: 15\n\nrequired property var model\nrequired property bool value\n\ncolor: value ? \"#f3f3f4\" : \"#b5b7bf\"\n\nMouseArea {\nanchors.fill: parent\nonClicked: parent.model.value = !parent.value\n}\n}\n```\n\nThe example uses the TableView component to display a grid of cells. Each of these cells is drawn on the screen by the TableView’s delegate, which is a Rectangle QML component. We read the cell’s value and we change it using `model.value` when the user clicks it.\n\n``` contentX: (contentWidth - width) / 2;\ncontentY: (contentHeight - height) / 2;\n```\n\nWhen the application starts, the TableView is scrolled to its center by using its `contentX` and `contentY` properties to update the scroll position, and the `contentWidth` and `contentHeight` to compute where the view should be scrolled to.\n\n``` model: GameOfLifeModel {\nid: gameOfLifeModel\n}\n```\n\n#### The C++ Model\n\n``` class GameOfLifeModel : public QAbstractTableModel\n{\nQ_OBJECT\nQML_ELEMENT\n\nQ_ENUMS(Roles)\npublic:\nenum Roles {\nCellRole\n};\n\nQHash<int, QByteArray> roleNames() const override {\nreturn {\n{ CellRole, \"value\" }\n};\n}\n\nexplicit GameOfLifeModel(QObject *parent = nullptr);\n\nint rowCount(const QModelIndex &parent = QModelIndex()) const override;\nint columnCount(const QModelIndex &parent = QModelIndex()) const override;\n\nQVariant data(const QModelIndex &index, int role = Qt::DisplayRole) const override;\nbool setData(const QModelIndex &index, const QVariant &value,\nint role = Qt::EditRole) override;\n\nQt::ItemFlags flags(const QModelIndex &index) const override;\n\nQ_INVOKABLE void nextStep();\nQ_INVOKABLE bool loadFile(const QString &fileName);\nQ_INVOKABLE void loadPattern(const QString &plainText);\nQ_INVOKABLE void clear();\n\nprivate:\nstatic constexpr int width = 256;\nstatic constexpr int height = 256;\nstatic constexpr int size = width * height;\n\nusing StateContainer = std::array<bool, size>;\nStateContainer m_currentState;\n\nint cellNeighborsCount(const QPoint &cellCoordinates) const;\nstatic bool areCellCoordinatesValid(const QPoint &coordinates);\nstatic QPoint cellCoordinatesFromIndex(int cellIndex);\nstatic std::size_t cellIndex(const QPoint &coordinates);\n};\n```\n\nThe `GameOfLifeModel` class extends QAbstractTableModel so it can be used as the model of our TableView component. Therefore, it needs to implement some functions so the TableView component can interact with the model. As you can see in the `private` part of the class, the model uses a fixed-size array to store the current state of all the cells. 
We also use the QML_ELEMENT macro in order to expose the class to QML.

```
int GameOfLifeModel::rowCount(const QModelIndex &parent) const
{
    if (parent.isValid())
        return 0;

    return height;
}

int GameOfLifeModel::columnCount(const QModelIndex &parent) const
{
    if (parent.isValid())
        return 0;

    return width;
}
```

Here, the `rowCount` and `columnCount` methods are implemented so the TableView component can know the size of the table. They simply return the values of the `width` and `height` constants.

```
QVariant GameOfLifeModel::data(const QModelIndex &index, int role) const
{
    if (!index.isValid() || role != CellRole)
        return QVariant();

    return QVariant(m_currentState[cellIndex({index.column(), index.row()})]);
}
```

This method is called when the TableView component requests some data from the model. In our example, we only have one piece of data per cell: whether it is alive or not. This information is represented by the `CellRole` value of the `Roles` enum in our C++ code; it corresponds to the `value` property in the QML code (the link between these two is made by the `roleNames()` function of our C++ class).

The `GameOfLifeModel` class can identify which cell the data is requested for from the `index` parameter, a QModelIndex that contains a row and a column.

#### Updating the Data

```
bool GameOfLifeModel::setData(const QModelIndex &index, const QVariant &value, int role)
{
    if (role != CellRole || data(index, role) == value)
        return false;

    m_currentState[cellIndex({index.column(), index.row()})] = value.toBool();
    emit dataChanged(index, index, {role});

    return true;
}
```

The `setData` method is called when a property’s value is set from the QML interface: in our example, it toggles a cell’s state when it is clicked. In the same way as the `data()` function, this method receives an `index` and a `role` parameter. Additionally, the new value is passed as a QVariant, which we convert to a boolean using the `toBool` function.

When we update the internal state of our model object, we need to emit a `dataChanged` signal to tell the TableView component that it needs to update the displayed data. In this case, only the cell that was clicked is affected, so the range of the table that has to be updated begins and ends at the cell’s index.

```
void GameOfLifeModel::nextStep()
{
    StateContainer newValues;

    for (std::size_t i = 0; i < size; ++i) {
        bool currentState = m_currentState[i];

        int cellNeighborsCount = this->cellNeighborsCount(
                    cellCoordinatesFromIndex(static_cast<int>(i)));

        newValues[i] = currentState == true
                ? cellNeighborsCount == 2 || cellNeighborsCount == 3
                : cellNeighborsCount == 3;
    }

    m_currentState = std::move(newValues);

    emit dataChanged(index(0, 0), index(height - 1, width - 1), {CellRole});
}
```

This function can be called directly from the QML code, because it carries the Q_INVOKABLE macro in its declaration. It plays one iteration of the game, either when the user clicks the Next button or when the Timer emits a `triggered()` signal.

Following the rules of Conway’s Game of Life, a new state is computed for each cell depending on the current state of its neighbors.
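The neighbor counting itself is done by the `cellNeighborsCount` helper, which is only declared in the excerpt above. A minimal sketch of what such a helper could look like (an assumption for illustration; the example’s actual implementation is not shown here) is:

```cpp
// Hypothetical sketch of the declared helper, using the declared
// areCellCoordinatesValid() and cellIndex() utilities.
int GameOfLifeModel::cellNeighborsCount(const QPoint &cellCoordinates) const
{
    int count = 0;
    // Visit the eight surrounding cells, skipping the cell itself.
    for (int dx = -1; dx <= 1; ++dx) {
        for (int dy = -1; dy <= 1; ++dy) {
            if (dx == 0 && dy == 0)
                continue;
            const QPoint neighbor(cellCoordinates.x() + dx,
                                  cellCoordinates.y() + dy);
            // Count only in-grid neighbors that are currently alive.
            if (areCellCoordinatesValid(neighbor)
                    && m_currentState[cellIndex(neighbor)])
                ++count;
        }
    }
    return count;
}
```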
When the new state has been computed for the whole grid, it replaces the current state, and a dataChanged signal is emitted for the whole table.

```
bool GameOfLifeModel::loadFile(const QString &fileName)
{
    QFile file(fileName);
    if (!file.open(QIODevice::ReadOnly))
        return false;

    QTextStream in(&file);
    loadPattern(in.readAll());

    return true;
}

void GameOfLifeModel::loadPattern(const QString &plainText)
{
    clear();

    QStringList rows = plainText.split("\n");
    QSize patternSize(0, rows.count());
    for (QString row : rows) {
        if (row.size() > patternSize.width())
            patternSize.setWidth(row.size());
    }

    QPoint patternLocation((width - patternSize.width()) / 2,
                           (height - patternSize.height()) / 2);

    for (int y = 0; y < patternSize.height(); ++y) {
        const QString line = rows[y];

        for (int x = 0; x < line.length(); ++x) {
            QPoint cellPosition(x + patternLocation.x(), y + patternLocation.y());
            m_currentState[cellIndex(cellPosition)] = line[x] == 'O';
        }
    }

    emit dataChanged(index(0, 0), index(height - 1, width - 1), {CellRole});
}
```

When the application opens, a pattern is loaded to demonstrate how Conway’s Game of Life works. These two functions load the file where the pattern is stored and parse it. As in the `nextStep` function, a `dataChanged` signal is emitted for the whole table once the pattern has been fully loaded.

Example project @ code.qt.io
[ null, "https://lira.no-ip.org:8443/doc/qt5-doc-html/html/qtquick/images/gameoflife.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5903127,"math_prob":0.91799283,"size":7386,"snap":"2022-40-2023-06","text_gpt3_token_len":1711,"char_repetition_ratio":0.12178271,"word_repetition_ratio":0.041237112,"special_character_ratio":0.23382074,"punctuation_ratio":0.17530487,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9565889,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-28T00:22:03Z\",\"WARC-Record-ID\":\"<urn:uuid:dbba672e-3e08-4cf6-98d0-11dce2481fa8>\",\"Content-Length\":\"25658\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f376126d-901b-47b8-b554-ae4f002a867d>\",\"WARC-Concurrent-To\":\"<urn:uuid:8f0e41c4-425f-47af-ac44-c517ba5bba08>\",\"WARC-IP-Address\":\"151.20.83.4\",\"WARC-Target-URI\":\"https://lira.no-ip.org:8443/doc/qt5-doc-html/html/qtquick/qtquick-tableview-gameoflife-example.html\",\"WARC-Payload-Digest\":\"sha1:2PLXTW3ZSQZT2LES5KIREHIQPXE75OY3\",\"WARC-Block-Digest\":\"sha1:S5TIXL223IOSE7NJBB3DG7F3PSVWDT22\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335059.31_warc_CC-MAIN-20220927225413-20220928015413-00733.warc.gz\"}"}
https://oneclass.com/class-notes/ca/utsg/mat/mat-136h1/146425-77-approximate-integration-question-4-medium.en.html
[ "Class Notes (1,100,000)\nCA (630,000)\nUTSG (50,000)\nMAT (4,000)\nMAT136H1 (900)\nall (200)\nLecture\n\nMAT136H1 Lecture Notes - Trapezoidal Rule\n\nDepartment\nMathematics\nCourse Code\nMAT136H1\nProfessor\nall\n\nThis preview shows half of the first page. to view the full 1 pages of the document.", null, "7.7 Integral Techniques\nIntegral Approximation\nQuestion #4 (Medium): Integral Approximation & Error Bound Using the Midpoint Rule\nStrategy\nThe error bounds are given by: \n, where     , and   for the Midpoint\nRule. Similarly for Trapezoidal Rule: \n, where     , and  . Notice that\nonly the denominator is slightly different. As for Simpson’s Rule: \n , where     , and\n , meaning the fourth derivative of the function is bound by factor. Usually the question\nprovides these input values in order to calculate the error bounds.\nSample Question\n1) Given the dataset, estimate the value of the integral \nusing the Midpoint Rule.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n2) If      , estimate the error involved in the approximation from part 1).\nSolution\n1) Integral is approximated using the Midpoint Rule:   \n  \n \n \n. Pick the midpoints (ie. every alternating points starting from\nthe second, ending with the second last), then   . Thus: \n   \n         \n2) Error bound for the Midpoint Rule is: \n, where     , and  . Then\nin this case     ,    since     \n, and since  ,  , then:\n\n\n\n \nYou're Reading a Preview\n\nUnlock to view full version" ]
[ null, "https://new-preview-html.oneclass.com/R6r8MpaPXwB5NbLVEp07mlg2z1GYy93Z/bg1.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78129804,"math_prob":0.91171145,"size":1151,"snap":"2019-43-2019-47","text_gpt3_token_len":269,"char_repetition_ratio":0.12903225,"word_repetition_ratio":0.020408163,"special_character_ratio":0.23457862,"punctuation_ratio":0.1764706,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99046266,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-17T21:40:37Z\",\"WARC-Record-ID\":\"<urn:uuid:c61b5400-2834-467f-b0ef-7ca4a595ea5f>\",\"Content-Length\":\"454814\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:37015bf4-9f3e-42a5-8686-2e5ccd577a48>\",\"WARC-Concurrent-To\":\"<urn:uuid:894de3f0-313a-41fd-90bc-72f032f0835d>\",\"WARC-IP-Address\":\"23.55.57.106\",\"WARC-Target-URI\":\"https://oneclass.com/class-notes/ca/utsg/mat/mat-136h1/146425-77-approximate-integration-question-4-medium.en.html\",\"WARC-Payload-Digest\":\"sha1:ON2VP3VCWI5RDCOK43W77XSK66KEG2HT\",\"WARC-Block-Digest\":\"sha1:P4EOFH2NV7WF4LJSKZXUTDZ7XW36SR42\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986676227.57_warc_CC-MAIN-20191017200101-20191017223601-00491.warc.gz\"}"}
https://jgps.springeropen.com/articles/10.1186/s41445-018-0011-x
[ "# GPS + Galileo tightly combined RTK positioning for medium-to-long baselines based on partial ambiguity resolution\n\n## Abstract\n\nWith the modernization of the GNSS, the techniques of multi-GNSS navigation and positioning are becoming increasingly important. For multi-GNSS double-difference data processing, a tight combination (TC) strategy can provide more observations and higher reliability, which emploies a single reference satellite for all observations from different GNSS. However, multi-GNSS will bring some challenges to the high-dimension ambiguity resolution (AR). In this contribution, a GPS + Galileo tightly combined real-time kinematic (RTK) positioning strategy is proposed, which introduces the partial ambiguity resolution (PAR) method. A set of baselines ranging from about 22 to 110 km are used to test the positioning performance of this strategy. Experimental results demonstrate that the TC strategy can improve the success rate, but it can’t increase the ambiguity ratio values. Using the PAR method can reduce convergence times and improve the ambiguity fixing rate. Combining the TC strategy with the PAR method can provide better positioning performance, especially for long baselines.\n\n## Introduction\n\nWith the modernization of Global Navigation Satellite System (GNSS), multi-GNSS navigation and positioning techniques are becoming increasingly important. Combining observations from various GNSS constellations significantly increases the number of observations and improves the positioning accuracy and reliability, especially in difficult environments (Li et al., 2016a). Multi-GNSS double difference combination strategies include loose combination (LC) in which each of the systems uses its own reference satellite and no double differences are formed across systems (Zhang et al. 2003), and tight combination (TC) in which two systems use the same reference satellite and permitting double differences across different systems (Julien et al. 2003). Therefore, the TC strategy can provides more observations than the LC strategy. However, as the ambiguity dimension increases sharply, the success rate of integer ambiguity resolution is reduced (Teunissen et al., 1999), while the key requirement for real-time kinematic (RTK) is to quickly and correctly fix the ambiguities of carrier phase measurements. For multi-GNSS data processing, it is often impossible to fix all ambiguities simultaneously due to the large number of observations, which is even deteriorated in case of medium-to-long baselines (more than 20 km) when various residual errors cannot be mitigated completely (Li et al., 2016, b).\n\nTo solve this problem, the idea of partial ambiguity resolution (PAR), which means to resolve a subset of the candidate ambiguities, was suggested to maintain a sufficiently high success rate (Teunissen et al., 1999). The selection of an ambiguity subset could be based on pre-defined subset sizes (Mowlam and Collier, 2004), ambiguity variances (Wang and Feng, 2012), satellite elevations (Li et al., 2014), satellite variances (Li and Teunissen, 2014), combined phase observation wavelengths (Li et al., 2015, b) and composite methods that combine such strategies (Gao et al., 2017). 
In addition, the satellite selection algorithm for PAR (Wang and Feng, 2013) and the method of EWL/WL as well as NL PAR for triple-frequency GNSS signals (Li et al., 2015a) have been studied systematically.

Many studies have applied the PAR method to the LC strategy and have achieved significant results. For example, the reliability characteristics of PAR solutions were verified by Wang and Feng (2012), and the PAR method was applied to LC RTK positioning with the GPS constellation and a virtual Galileo constellation to demonstrate the advantages of the proposed PAR method. Hou and Verhagen (2014) proposed a model- and data-driven PAR (MD-PAR) strategy and evaluated the performance of MD-PAR for GPS + BDS LC RTK positioning using simulated GPS and BDS observations. Li et al. (2015b) presented the multi-carrier fast PAR (MCFPAR) strategy to solve multi-system, multi-frequency, high-dimensional AR problems, and its validity was demonstrated with BDS + GPS LC RTK positioning using real dual- and triple-frequency observations. Gao et al. (2015a, 2015b) applied the partial wide-lane ambiguity resolution strategy to GPS + BDS and GPS + GLONASS + BDS LC RTK positioning and validated it with real observations.

However, there are few publications applying the PAR method to the TC strategy, although this strategy provides more observations. Cao et al. (2007) introduced the PAR method into GPS + Galileo TC RTK positioning and verified the reliability of this strategy for short-baseline RTK; however, simulated data were used and the inter-system bias was ignored.

In this paper, a GPS + Galileo TC RTK positioning strategy with the PAR method is proposed. A set of real baseline observations ranging from about 22 to 110 km is used to test the performance of this strategy, including the success rate, convergence time and ratio values.
The experimental results are provided to demonstrate the benefits of introducing the PAR method into the TC strategy for multi-GNSS, which is finally followed by the summary and conclusions of this study.

## Methods

### Multi-GNSS observation models

Without loss of generality, the DD pseudorange and carrier-phase observation equations can be expressed as

$$P_{r_1 r_2, ij}^{s_1 s_2, A_1 A_2} = \rho_{r_1 r_2}^{s_1 s_2} + ucd_{r_1 r_2, ij}^{A_1 A_2} + I_{r_1 r_2, ij}^{s_1 s_2} + T_{r_1 r_2}^{s_1 s_2} + \varepsilon_{P_{r_1 r_2}^{s_1 s_2}} \tag{1}$$

$$\Phi_{r_1 r_2, ij}^{s_1 s_2, A_1 A_2} = \rho_{r_1 r_2}^{s_1 s_2} + upd_{r_1 r_2, ij}^{A_1 A_2} - I_{r_1 r_2, ij}^{s_1 s_2} + T_{r_1 r_2}^{s_1 s_2} + \lambda_i N_{r_1 r_2, i}^{s_1} - \lambda_j N_{r_1 r_2, j}^{s_2} + \varepsilon_{\Phi_{r_1 r_2}^{s_1 s_2}} \tag{2}$$

where P and Φ are pseudorange and carrier-phase measurements, respectively; ρ is the distance between the receiver and the satellite; ucd and upd are the receiver uncalibrated code and phase delays, respectively (both are related to the initial phase and the hardware delays (Gu, 2013)); I denotes the ionospheric delay; T is the tropospheric delay; λ is the wavelength; N is the integer phase ambiguity; and $$\varepsilon_P$$ and $$\varepsilon_\Phi$$ are the mixtures of measurement noise and multipath error for pseudorange and carrier-phase observations, respectively. Note that all variables are expressed in meters, except the ambiguity, which is expressed in cycles.
Furthermore, the reference receiver is denoted with subscript $$r_1$$ and the rover receiver with subscript $$r_2$$; the reference satellite is denoted with superscript $$s_1$$ and its system with superscript $$A_1$$; the non-reference satellite is denoted with superscript $$s_2$$ and its system with superscript $$A_2$$; and the subscripts i and j refer to carrier frequencies.

The above observation equations for multi-GNSS DD operations can be generalized to inter-system mixed DD, which can be further categorized into DD between the same frequencies or between diverse frequencies of observations (Li et al., 2017).

Since GPS and Galileo transmit signals in the same frequency bands, e.g., the L1 and L5 signals respectively overlap the E1 and E5a signals, the inter-system mixed DD model for the same frequency is used to realize the tight combination of GPS and Galileo measurements, which can be expressed as follows

$$P_{r_1 r_2, i}^{s_1 s_2, A_1 A_2} = \rho_{r_1 r_2}^{s_1 s_2} + ucd_{r_1 r_2, i}^{A_1 A_2} + I_{r_1 r_2, i}^{s_1 s_2} + T_{r_1 r_2}^{s_1 s_2} + \varepsilon_{P_{r_1 r_2}^{s_1 s_2}} \tag{3}$$

$$\Phi_{r_1 r_2, i}^{s_1 s_2, A_1 A_2} = \rho_{r_1 r_2}^{s_1 s_2} + upd_{r_1 r_2, i}^{A_1 A_2} - I_{r_1 r_2, i}^{s_1 s_2} + T_{r_1 r_2}^{s_1 s_2} + \lambda_i N_{r_1 r_2, i}^{s_1 s_2} + \varepsilon_{\Phi_{r_1 r_2}^{s_1 s_2}} \tag{4}$$

Because the frequencies are the same, the ambiguities $$N_{r_1 r_2, i}^{s_1 s_2}$$ still have integer characteristics. However, the receiver UPDs, which are related to the initial phase and hardware delay, are consequently contained in the inter-system bias (ISB) and therefore cannot be eliminated, i.e. $$ucd_{r_1 r_2, i}^{A_1 A_2} \ne 0$$ and $$upd_{r_1 r_2, i}^{A_1 A_2} \ne 0$$.

The carrier-phase integer ambiguities $$N_{r_1 r_2, i}^{s_1 s_2}$$ and the integer part of $$upd_{r_1 r_2, i}^{A_1 A_2}$$ are linearly dependent, which makes it impossible to separate them in the least-squares adjustment due to rank deficiency. Here, we separate $$upd_{r_1 r_2, i}^{A_1 A_2}$$ into a fractional part $$\overline{upd}_{r_1 r_2, i}^{A_1 A_2}$$ and an integer part. The remaining integer part $$M_{r_1 r_2, i}^{A_1 A_2}$$ is then combined with the integer ambiguities $$N_{r_1 r_2, i}^{s_1 s_2}$$ to form a new estimable integer parameter: $$\overline{N}_{r_1 r_2, i}^{s_1 s_2} = N_{r_1 r_2, i}^{s_1 s_2} + M_{r_1 r_2, i}^{A_1 A_2}$$. Consequently, Eq.
(4) can be rewritten as

$$\Phi_{r_1 r_2, i}^{s_1 s_2, A_1 A_2} = \rho_{r_1 r_2}^{s_1 s_2} - I_{r_1 r_2, i}^{s_1 s_2} + T_{r_1 r_2}^{s_1 s_2} + \lambda_i \overline{N}_{r_1 r_2, i}^{s_1 s_2} + \overline{upd}_{r_1 r_2, i}^{A_1 A_2} + \varepsilon_{\Phi_{r_1 r_2}^{s_1 s_2}} \tag{5}$$

The carrier-phase and code ISBs between different types of receivers have temporal stability and can be neglected between receivers of the same type. Therefore, if the ISB of a pair of receivers is estimated, it can be used as the ISB correction for that pair of receivers (Paziewski and Wielgosz, 2015). The phase and code ISBs can be estimated precisely for zero or ultra-short baselines, that is

$$\begin{cases} ucd_{r_1 r_2, i0}^{A_1 A_2} = P_{r_1 r_2, i}^{s_1 s_2, A_1 A_2} - \rho_{r_1 r_2}^{s_1 s_2} \\[4pt] \overline{upd}_{r_1 r_2, i0}^{A_1 A_2} = \left(\Phi_{r_1 r_2, i}^{s_1 s_2, A_1 A_2} - \rho_{r_1 r_2}^{s_1 s_2}\right)/\lambda_i - \left[\left(\Phi_{r_1 r_2, i}^{s_1 s_2, A_1 A_2} - \rho_{r_1 r_2}^{s_1 s_2}\right)/\lambda_i\right] \end{cases} \tag{6}$$

where the function [·] is a rounding function.

Through the introduction of the above corrections, the inter-system mixed DD on the same frequency can be translated into system-specific DD
models:

$$\overline{P}_{r_1 r_2, i}^{s_1 s_2, A_1 A_2} = P_{r_1 r_2, i}^{s_1 s_2, A_1 A_2} - ucd_{r_1 r_2, i0}^{A_1 A_2} = \rho_{r_1 r_2}^{s_1 s_2} + I_{r_1 r_2, i}^{s_1 s_2} + T_{r_1 r_2}^{s_1 s_2} + \varepsilon_{P_{r_1 r_2}^{s_1 s_2}} \tag{7}$$

$$\overline{\Phi}_{r_1 r_2, i}^{s_1 s_2, A_1 A_2} = \Phi_{r_1 r_2, i}^{s_1 s_2, A_1 A_2} - \overline{upd}_{r_1 r_2, i0}^{A_1 A_2} = \rho_{r_1 r_2}^{s_1 s_2} - I_{r_1 r_2, i}^{s_1 s_2} + T_{r_1 r_2}^{s_1 s_2} + \lambda_i \overline{N}_{r_1 r_2, i}^{s_1 s_2} + \varepsilon_{\Phi_{r_1 r_2}^{s_1 s_2}} \tag{8}$$

which is the model used in this study to implement GPS + Galileo DD data processing.

### Ambiguity resolution in the DD model

The GNSS linear observation equations of Eqs. (7) and (8) can be expressed as:

$$\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{N} + \boldsymbol{\varepsilon} \tag{9}$$

where y is the vector of ‘observed minus computed’ DD observations; x is the vector of incremental baseline coordinates, the residual tropospheric zenith delay, and the DD ionospheric slant delays for each measurement epoch; N is the vector of carrier-phase integer ambiguities; and ε is the vector of unmodeled effects and measurement noise. The matrices A and B are the corresponding design matrices of x and N, respectively.

The float solution X and its variance-covariance matrix Q from a least-squares estimation can be expressed as

$$\mathbf{X} = \begin{bmatrix} \hat{\mathbf{x}} \\ \hat{\mathbf{N}} \end{bmatrix}, \quad \mathbf{Q} = \begin{bmatrix} \mathbf{Q}_{\hat{x}} & \mathbf{Q}_{\hat{x}\hat{N}} \\ \mathbf{Q}_{\hat{N}\hat{x}} & \mathbf{Q}_{\hat{N}} \end{bmatrix} \tag{10}$$

In these formulas, the fixed integer ambiguity vector $$\check{\mathbf{N}}$$ is obtained from the float vector $$\hat{\mathbf{N}}$$ by solving an integer least-squares (ILS) problem expressed as:

$$\check{\mathbf{N}} = \underset{\mathbf{N} \in \mathbb{Z}^n}{\operatorname{argmin}} \; \left(\mathbf{N} - \hat{\mathbf{N}}\right)^T \mathbf{Q}_{\hat{N}}^{-1} \left(\mathbf{N} - \hat{\mathbf{N}}\right) \tag{11}$$

To solve the ILS problem, the well-known LAMBDA (Teunissen, 1995) method and its extension MLAMBDA (Chang et al., 2005) are employed in this paper.
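As a quick intuition check (not from the paper): in the one-dimensional case, the ILS problem of Eq. (11) reduces to simple rounding, since the objective is just a scaled squared distance to an integer:

$$\check{N} = \underset{N \in \mathbb{Z}}{\operatorname{argmin}} \; \frac{(N - \hat{N})^2}{\sigma_{\hat{N}}^2} = \left[\hat{N}\right]$$

where [·] denotes rounding to the nearest integer. The decorrelation step of LAMBDA exists precisely because, in higher dimensions with correlated ambiguities, component-wise rounding of $$\hat{\mathbf{N}}$$ is no longer optimal.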
The integer vector solution is validated using the following "ratio test":

$$R = \frac{\left(\check{\mathbf{N}}_2 - \hat{\mathbf{N}}\right)^T \mathbf{Q}_{\hat{N}}^{-1} \left(\check{\mathbf{N}}_2 - \hat{\mathbf{N}}\right)}{\left(\check{\mathbf{N}} - \hat{\mathbf{N}}\right)^T \mathbf{Q}_{\hat{N}}^{-1} \left(\check{\mathbf{N}} - \hat{\mathbf{N}}\right)} > R_{thres} \tag{12}$$

where the ratio factor R, defined as the ratio of the weighted sum of squared residuals of the second-best solution $$\check{\mathbf{N}}_2$$ to that of the best solution $$\check{\mathbf{N}}$$, is used to check the reliability of AR. In general, the validation threshold $$R_{thres}$$ can be 1.5 to 3.0 (Wang and Feng, 2012); we used 3.0 for this study.

After the validation, the remaining real-valued parameter estimates $$\check{\mathbf{x}}$$ and the corresponding variance-covariance matrix $$\mathbf{Q}_{\check{x}}$$ can be updated by solving the following equations.

$$\begin{cases} \check{\mathbf{x}} = \hat{\mathbf{x}} - \mathbf{Q}_{\hat{x}\hat{N}} \mathbf{Q}_{\hat{N}}^{-1} \left(\hat{\mathbf{N}} - \check{\mathbf{N}}\right) \\[4pt] \mathbf{Q}_{\check{x}} = \mathbf{Q}_{\hat{x}} - \mathbf{Q}_{\hat{x}\hat{N}} \mathbf{Q}_{\hat{N}}^{-1} \mathbf{Q}_{\hat{N}\hat{x}} \end{cases} \tag{13}$$

If the validation fails, the current epoch keeps the float ambiguities instead.

### Partial ambiguity resolution strategy

If the integer ambiguities of all satellites are difficult to fix with the LAMBDA method, partial ambiguity fixing is considered.
Then the ambiguity vector $$\hat{\mathbf{N}}$$ is divided into two parts, with the corresponding variance-covariance matrix partitioned accordingly:

$$\hat{\mathbf{N}} = \begin{bmatrix} \hat{\mathbf{N}}_a \\ \hat{\mathbf{N}}_b \end{bmatrix}, \quad \mathbf{Q}_{\hat{N}} = \begin{bmatrix} \mathbf{Q}_{\hat{N}_a} & \mathbf{Q}_{\hat{N}_a \hat{N}_b} \\ \mathbf{Q}_{\hat{N}_b \hat{N}_a} & \mathbf{Q}_{\hat{N}_b} \end{bmatrix} \tag{14}$$

where $$\hat{\mathbf{N}}_a$$ is the set of to-be-fixed ambiguities, and $$\hat{\mathbf{N}}_b$$ the remaining ambiguities.

If $$\hat{\mathbf{N}}_a$$ can be fixed reliably, then, similar to the real-valued parameter update process, the remaining ambiguities $$\hat{\mathbf{N}}_b$$ and their variance-covariance matrix $$\mathbf{Q}_{\hat{N}_b}$$ can be corrected with the fixed ambiguities:

$$\begin{cases} \tilde{\mathbf{N}}_b = \hat{\mathbf{N}}_b - \mathbf{Q}_{\hat{N}_b \hat{N}_a} \mathbf{Q}_{\hat{N}_a}^{-1} \left(\hat{\mathbf{N}}_a - \check{\mathbf{N}}_a\right) \\[4pt] \mathbf{Q}_{\tilde{N}_b} = \mathbf{Q}_{\hat{N}_b} - \mathbf{Q}_{\hat{N}_b \hat{N}_a} \mathbf{Q}_{\hat{N}_a}^{-1} \mathbf{Q}_{\hat{N}_a \hat{N}_b} \end{cases} \tag{15}$$

Then the LAMBDA method is used to fix $$\tilde{\mathbf{N}}_b$$; if $$\tilde{\mathbf{N}}_b$$ can be fixed, both $$\check{\mathbf{N}}_a$$ and $$\tilde{\mathbf{N}}_b$$ are used to update $$\check{\mathbf{x}}$$ and $$\mathbf{Q}_{\check{x}}$$. Otherwise, only $$\check{\mathbf{N}}_a$$ is used to update $$\check{\mathbf{x}}$$ and $$\mathbf{Q}_{\check{x}}$$.

In this paper, a PAR procedure is used to determine the subset of ambiguities in terms of the success rate and the ratio test (Wang and Feng, 2012). Figure 1 presents the flowchart of this procedure.

First, the PAR process starts with the decorrelation of the ambiguities, and the diagonal elements of the decorrelated matrix are sorted in ascending order, giving the set $$D = \{d_1, d_2, \cdots, d_i, \cdots, d_n \mid d_1 < d_2 < \cdots < d_i < \cdots < d_n\}$$ of conditional variances. Then, iterating from i = n down to the minimum threshold $$i = n_0$$, we pick the subset $$D_i = \{d_1, d_2, \cdots, d_i\}$$ together with the corresponding ambiguity subset $$\hat{\mathbf{N}}_a(D_i)$$ and variance-covariance matrix $$\mathbf{Q}_{\hat{N}_a}(D_i)$$. The minimum threshold $$n_0$$ is typically 6, to ensure that the selected satellites are still sufficient to obtain reliable positioning results. The LAMBDA method is then applied in the ambiguity search process. If $$P_S \ge P_{S0}$$ and $$R > R_{thres}$$, the fixed ambiguities are considered to pass the acceptance test and are used in the following position calculation. Otherwise, we update the subset and repeat the ambiguity search and test. If the number of selected ambiguities falls below $$n_0$$, the iteration stops and only the float solution is made available.
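The iteration just described is essentially a shrink-until-accepted loop. A minimal sketch of that loop follows, assuming a LAMBDA implementation is available; `lambdaSearch` and `bootstrapSuccessRate` below are hypothetical stand-ins, not the authors' code or a real library API:

```cpp
// Sketch of the PAR subset-selection loop described above.
#include <cstddef>
#include <optional>
#include <vector>

struct FixResult {
    std::vector<long long> integers;  // fixed integer candidates for the subset
    double ratio;                     // ratio-test statistic R of Eq. (12)
};

// Assumed to exist elsewhere (hypothetical): LAMBDA search over the first
// `subsetSize` ambiguities, and the bootstrapped success rate P_S of that subset.
FixResult lambdaSearch(const std::vector<double>& floatAmb, std::size_t subsetSize);
double bootstrapSuccessRate(const std::vector<double>& condVariances, std::size_t subsetSize);

// floatAmb / condVariances are assumed already decorrelated and sorted by
// ascending conditional variance (d_1 < d_2 < ... < d_n).
std::optional<FixResult> partialAmbiguityResolution(
    const std::vector<double>& floatAmb,
    const std::vector<double>& condVariances,
    std::size_t nMin = 6,            // minimum subset size n_0
    double ratioThreshold = 3.0,     // R_thres
    double successThreshold = 0.99)  // P_S0
{
    for (std::size_t subsetSize = floatAmb.size(); subsetSize >= nMin; --subsetSize) {
        FixResult candidate = lambdaSearch(floatAmb, subsetSize);
        double ps = bootstrapSuccessRate(condVariances, subsetSize);

        // Acceptance test: both the predicted success rate and the ratio
        // test must pass; otherwise shrink the subset and try again.
        if (ps >= successThreshold && candidate.ratio > ratioThreshold)
            return candidate;
    }
    return std::nullopt;  // fewer than n_0 ambiguities left: keep the float solution
}
```

Because the ambiguities are sorted by conditional variance, each shrink step removes the least reliable ambiguity first, which matches the paper's observation that newly risen satellites with large ambiguity variances are the ones excluded.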
## Results and Discussion

In order to analyze the effect of PAR on GPS + Galileo TC RTK positioning over medium-to-long baselines, three stations with good observation conditions from the International GNSS Service (IGS) MGEX network were selected for the experiments. The observation time covers 0:00:00 to 23:59:30 on July 7, 2017. The baseline lengths and receiver types are given in Fig. 2.

The DD tropospheric and ionospheric delays cannot be ignored for medium-to-long baselines; in this paper they are set to zero initially and estimated as random-walk parameters in each processing session. The processing settings for the GPS + Galileo relative positioning solutions are given in Table 1.

### Carrier phase and code ISB estimation

The phase and code ISBs can be estimated on zero or short baselines, as described in the previous section. Here we used data from the GNSS Research Centre of Curtin University, with the baseline information shown in Table 2.

The fractional phase and code ISBs are calculated according to Eq. (6), and the results are shown in Figs. 3 and 4. Both figures show that the phase and code ISBs were stable during the period of the daily experiments. However, there are differences in the ISBs between the two groups of experiments over different periods. This is mainly due to upgrades of the receiver firmware version, such as the CUT0 station's receiver firmware changing from Trimble NETR9 (4.85) to Trimble NETR9 (5.20), which is similar to the results of Paziewski and Wielgosz (2015). Therefore, when estimating ISBs, it is necessary to note not only the receiver type but also the receiver firmware version.

We selected the ISB corrections from the 2017 experiment to correct the ISBs in the long-baseline experiment, because the receiver brands and firmware versions used in this experiment were the same as those used in the long-baseline experiment. The receiver brands and firmware versions of the long-baseline experiment are shown in Fig. 2.

### Results of ambiguity resolution

The AR results of the different combination strategies are shown for 500 min of data in Fig. 5 for the full ambiguity resolution (FAR) method and in Fig. 6 for the PAR method, including the AR success rate, the number of ambiguities and the ratio values. Note that in the ratio statistics, any ratio greater than 4 is assigned a value of 4, which is intended to make the distribution of the smaller ratio values visible.

Figure 5 shows that the AR success rates and numbers of ambiguities of the three baselines are similar. However, the number of ambiguities changes more frequently as the baseline length increases. When there is a newly risen satellite, the success rate drops dramatically. Fortunately, the TC strategy can provide more satellite observations, at least one more than the LC strategy. In this way, the TC strategy reaches the 99% success rate faster and is better able to absorb sudden changes in the number of satellites.

It is worth noting that the ratio values of the medium-long baseline and the long baseline are very different. The proportion of ratio values greater than 3 is about 90% for the medium-long baseline but only about 5% for the long baseline. This reflects the fact that the long baseline is affected by atmospheric delay errors, making it difficult to satisfy the conditions for correct AR.
In addition, TC did not improve the ratio values, but rather degraded them. For example, the proportion of ratio values greater than 3 for the NNOR-CUT0 long baseline under the three combinations LC, TC, and tight combination with ISB corrections (TC + ISB) is 4.83%, 4.16% and 4.34%, respectively. This indicates that more observations do not increase the ambiguity ratio values, but rather decrease them.

Figure 6 shows that applying the PAR method can significantly shorten the convergence time. The three baselines under all combinations reach the 99% success rate within two or three epochs.

For the medium-long baselines, the number of resolved ambiguities is relatively stable and the problem of newly risen or setting satellites is effectively suppressed, because the ambiguity subset selection process can remove such satellites according to their ambiguity variance, which is generally large. In the following epochs, their ambiguity precision improves and the corresponding ratio values improve as well; the ratio values drop noticeably only in the epochs where the number of ambiguities changes, but they still meet the threshold.

However, for the long baselines, the number of ambiguities to be fixed changes frequently. Because of the influence of atmospheric delays and other residual errors, the ambiguity subset cannot always be fixed, even when the number of ambiguities reaches the threshold. Moreover, the frequent occurrence of setting satellites also hinders fixing of the ambiguity subset. When the ambiguity subset cannot be fixed, only the float solution is available, and the ambiguity subset selection is tried again in the next epoch. Nevertheless, the PAR method is still able to greatly improve the ambiguity fixing rate. For example, the proportion of ratio values greater than 3 for the NNOR-CUT0 long baseline under the three combinations LC, TC, and TC + ISB with the PAR method is 27.36%, 27.23% and 27.30%, respectively. Compared to the FAR strategy, these proportions increased by 22.53, 23.07 and 22.96 percentage points, respectively.

### Results of positioning

The positioning capabilities of the two combination strategies and the two AR strategies are now verified with real GNSS data for single-frequency combined GPS + Galileo. The baseline errors of the different combination strategies are shown in Figs. 7 and 8. Baseline errors are the differences between the estimated baseline length and the precise reference baseline length.

Figure 7 shows that for static relative positioning with the FAR strategy, the medium-long-baseline error (1.5 cm) is smaller than the long-baseline error (10 cm). Affected by residual atmospheric errors, such as residual tropospheric and ionospheric delays, the positioning results show some systematic biases, especially for the longer baseline. The positioning accuracy based on the TC strategy is improved, especially for the medium-length baseline. The accuracy of positioning based on TC + ISB is equivalent to that of TC, because the ISBs in this experiment are so small that they can be ignored. The positioning accuracy based on the PAR method is also improved, especially for the long baseline: after the partial ambiguities are fixed by the PAR method, the fixed integer ambiguities can be fed back into the observations to update the tropospheric and ionospheric parameters, so a more accurate atmospheric delay correction is obtained and the positioning parameters are updated by fixing the remaining ambiguities.
The proportions of fixed solutions for NNOR-CUT0 long-baseline static relative positioning under the three combinations LC, TC, and TC + ISB are 11.98%, 17.33% and 17.40% for the FAR method, respectively, and 75.80%, 76.70% and 76.96% for the PAR strategy, respectively.

Figure 8 shows the baseline errors of RTK positioning with the different combination strategies for the FAR method and the PAR method. Introducing the PAR method into the TC strategy not only yields fast convergence but also effectively reduces the baseline error. The results are similar to the static mode; however, the positioning accuracy and the proportion of fixed solutions decrease. For example, the proportions of fixed solutions for NNOR-CUT0 long-baseline RTK positioning under the three combinations LC, TC, and TC + ISB with the PAR method are 17.38%, 18.23% and 18.69%, respectively.

## Conclusions

A GPS + Galileo tightly combined RTK positioning strategy is proposed for medium-to-long baselines, which introduces the PAR method into the strategy. The method has been verified to be effective for faster and more reliable AR. Tests on medium-long and long baselines demonstrate that the TC strategy can provide more observations, which improves the success rate. However, the TC strategy does not increase the ambiguity ratio values, but rather degrades them; the reason may be that the TC strategy increases the number of observations and thus the ambiguity dimension. Using the PAR method not only keeps the initialization time within three epochs but also improves the ambiguity fixing rate. The LC and TC strategies can both achieve centimeter-level positioning accuracy, but the TC strategy with PAR provides better performance, especially for long baselines. The selection of ambiguity subsets and the elimination of atmospheric delays are areas that require further research.

## References

• Cao W, O’Keefe K, Cannon ME (2007) Partial ambiguity fixing within multiple frequencies and systems, Proceedings of ION GNSS 2007. Fort Worth, TX, pp 312–323

• Chang XW, Yang X, Zhou T (2005) MLAMBDA: a modified LAMBDA method for integer least-squares estimation. J GEODESY 79(9):552–565

• Gao W, Gao C, Pan S (2017) A method of GPS/BDS/GLONASS combined RTK positioning for middle-long baseline with partial ambiguity resolution. Surv Rev 49(354):212–220

• Gao W, Gao C, Pan S, Wang D, Deng J (2015) Improving ambiguity resolution for medium baselines using combined GPS and BDS dual/triple-frequency observations. SENSORS-BASEL 15(11):27525–27542

• Gao W, Gao C, Pan S, Yang Y, Wang D (2015) Reliable RTK positioning method based on partial wide-lane ambiguity resolution from GPS/GLONASS/BDS combination, China satellite navigation conference (CSNC) 2015 proceedings: volume II. Springer, Berlin Heidelberg, pp 449–460

• Gu S (2013) Research on the zero-difference un-combined data processing model for multi-frequency GNSS and its applications. Wuhan University, China, p 23

• Hou Y, Verhagen S (2014) Model and data driven partial ambiguity resolution for multi-constellation GNSS, China satellite navigation conference (CSNC) 2014 proceedings: volume II. Springer Berlin Heidelberg, pp 285–302

• Julien O, Alves P, Cannon ME, Zhang W (2003) A tightly coupled GPS/GALILEO combination for improved ambiguity resolution, Proc. ION GNSS 2003.
European Navigation Conference, Graz, Austria, pp 1–14

• Li B, Feng Y, Gao W, Li Z (2015) Real-time kinematic positioning over long baselines using triple-frequency BeiDou signals. IEEE Trans Aerospace Electron Syst 51(4):3254–3269

• Li L, Li Z, Yuan H, Wang L, Hou Y (2016) Integrity monitoring-based ratio test for GNSS integer ambiguity validation. GPS SOLUT 20(3):573–585

• Li B, Shen Y, Feng Y, Gao W, Yang L (2014) GNSS ambiguity resolution with controllable failure rate for long baseline network RTK. J GEODESY 88(2):99–112

• Li B, Teunissen PJG (2014) GNSS antenna array-aided CORS ambiguity resolution. J GEODESY 88(4):363–376

• Li G, Wu J, Liu W, Zhao C (2016) A new approach of satellite selection for multi-constellation integrated navigation system, China satellite navigation conference (CSNC) 2016 proceedings: volume III. Springer, pp 359–371

• Li G, Wu J, Zhao C, Tian Y (2017) Double differencing within GNSS constellations. GPS SOLUT 21(3):1161–1177. https://doi.org/10.1007/s10291-017-0599-4

• Li J, Yang Y, Xu J, He H, Guo H (2015) GNSS multi-carrier fast partial ambiguity resolution strategy tested with real BDS/GPS dual- and triple-frequency observations. GPS SOLUT 19(1):5–13

• Mowlam AP, Collier PA (2004) Fast ambiguity resolution performance using partially-fixed multi-GNSS phase observations. International symposium on GNSS/GPS, Sydney, Australia, pp 6–8

• Paziewski J, Wielgosz P (2015) Accounting for Galileo–GPS inter-system biases in precise satellite positioning. J GEODESY 89(1):81–93

• Teunissen PJ (1995) The least-squares ambiguity decorrelation adjustment: a method for fast GPS integer ambiguity estimation. J GEODESY 70(1):65–82

• Teunissen P, Joosten P, Tiberius C (1999) Geometry-free ambiguity success rates in case of partial fixing. Proceedings of ION-NTM, pp 25–27

• Wang J, Feng Y (2012) Reliability of partial ambiguity fixing with multiple GNSS constellations. J GEODESY 87(1):1–14

• Wang J, Feng Y (2013) A satellite selection algorithm for achieving high reliability of ambiguity resolution with GPS and Beidou constellations. China satellite navigation conference (CSNC) 2013 proceedings. Springer Berlin Heidelberg, pp 3–20

• Zhang W, Cannon ME, Julien O, Alves P (2003) Investigation of combined GPS/GALILEO cascading ambiguity resolution schemes, Proc. ION GPS/GNSS 2003, Portland, Oregon, pp 2599–2610

## Acknowledgments

This work is funded by the National Science Foundation of China (No.41674033) and the State Key Research and Development Program (2016YFB0501802). We thank Curtin University for the baseline observations. We are also grateful for the high-performance computing facility at Wuhan University, which supported all the computational work of this study.

## Author information

Authors

### Contributions

GL developed the algorithm. GL, JGeng and JGuo carried out most of the analyses and drafted the manuscript. SZ and SL participated in the design of the study and helped algorithm development. All authors have read and approved the final manuscript.

### Corresponding author

Correspondence to Guangcai Li.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests.

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
[ null, "https://jgps.springeropen.com/track/article/10.1186/s41445-018-0011-x", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8767518,"math_prob":0.99434364,"size":24866,"snap":"2022-40-2023-06","text_gpt3_token_len":5753,"char_repetition_ratio":0.1470115,"word_repetition_ratio":0.05634942,"special_character_ratio":0.23103836,"punctuation_ratio":0.11076512,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9967902,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-05T17:25:56Z\",\"WARC-Record-ID\":\"<urn:uuid:f40e378d-25a3-452c-b7b0-af63ab5631ef>\",\"Content-Length\":\"275106\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:172495f6-af77-4663-a85c-145bb9b128e3>\",\"WARC-Concurrent-To\":\"<urn:uuid:85699a41-d79e-4a15-8bbf-f337ab1abe69>\",\"WARC-IP-Address\":\"146.75.36.95\",\"WARC-Target-URI\":\"https://jgps.springeropen.com/articles/10.1186/s41445-018-0011-x\",\"WARC-Payload-Digest\":\"sha1:IOZ4BLL6XNAS53WUTJHTPVY4OPIPAEZF\",\"WARC-Block-Digest\":\"sha1:6AL7DF2NGJX64CXGLDYIT7OVMTDU2XHF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500273.30_warc_CC-MAIN-20230205161658-20230205191658-00671.warc.gz\"}"}
https://samacheerkalvi.guru/samacheer-kalvi-9th-maths-chapter-2-ex-2-3/
[ "# Samacheer Kalvi 9th Maths Solutions Chapter 2 Real Numbers Ex 2.3\n\n## Tamilnadu Samacheer Kalvi 9th Maths Solutions Chapter 2 Real Numbers Ex 2.3\n\nQuestion 1.\nRepresent the following irrational numbers on the number line.\n(i) $$\\sqrt { 3}$$\n(ii) $$\\sqrt { 4.7 }$$\n(iii) $$\\sqrt { 6.5 }$$\nSolution:\n(i) $$\\sqrt { 3}$$", null, "(i) Draw a line and mark a point A on it.\n(ii) Mark a point B such that AB = 3 cm.\n(iii) Mark a point C on this line such that BC = 1 unit.\n(iv) Find the midpoint of AC by drawing perpendicular bisector of AC and let it be O.\n(v) With O as center and OC = OA as radius, draw a semicircle.\n(vi) Draw a line BD, which is perpendicular to AB at B.\n(vii) Now BD = $$\\sqrt { 3}$$, which can be marked in the number line as the value of BE = BD = $$\\sqrt { 3}$$\n\n(ii) $$\\sqrt { 4.7 }$$", null, "(i) Draw a line and mark a point A on it.\n(ii) Mark a point B such that AB = 4.7 cm.\n(iii) Mark a point C on this line such that BC = 1 unit.\n(iv) Find the midpoint of AC by drawing perpendicular bisector of AC and let it be O.\n(v) With O as center and OC = OA as radius, draw a semicircle.\n(vi) Draw a line BD, which is perpendicular to AB at B.\n(vii) Now BD = $$\\sqrt { 4.7 }$$, which can be marked in the number line as the value of BE = BD = $$\\sqrt { 4.7 }$$\n\n(iii) $$\\sqrt { 6.5 }$$", null, "(i) Draw a line and mark a point A on it.\n(ii) Mark a point B such that AB = 6.5 cm.\n(iii) Mark a point C on this line such that BC = 1 unit.\n(iv) Find the midpoint of AC by drawing perpendicular bisector of AC and let it be O.\n(v) With O as center and OC = OA as radius, draw a semicircle.\n(vi) Draw a line BD, which is perpendicular to AB at B.\n(vii) Now BD = $$\\sqrt { 6.5 }$$, which can be marked in the number line as the value of BE = BD = $$\\sqrt { 6.5 }$$.\n\nQuestion 2.\nFind any two irrational numbers between\n(i) 0.3010011000111…. and 0.3020020002….\n(ii) $$\\frac { 6 }{ 7 }$$ and $$\\frac { 12 }{ 13 }$$\n(iii) $$\\sqrt { 2}$$ and $$\\sqrt { 3}$$\nSolution:\n(i) 0.3010011000111…. and 0.3020020002….\nTwo irrational numbers 0.301202200222 …., 0.301303300333…..\n\n(ii) $$\\frac { 6 }{ 7 }$$ and $$\\frac { 12 }{ 13 }$$\n$$\\frac { 6 }{ 7 }$$ = 0.857142…\n$$\\frac { 12 }{ 13 }$$ = 0.923076\nTwo irrational numbers between 0.8616611666111……, 0.8717711777111…..\n\n(iii) $$\\sqrt { 2}$$ and $$\\sqrt { 3}$$", null, "$$\\sqrt { 2 }$$= 1.414……", null, "$$\\sqrt { 3}$$ = 1.732….\n∴ Two irrational numbers between 1.5155…., 1.6166…….", null, "Question 3.\nFind any two rational numbers between 2.2360679….. and 2.236505500….\nSolution:\nAny two rational numbers are 2.2362, 2.2363" ]
[ null, "https://samacheerkalvi.guru/wp-content/uploads/2019/10/Samacheer-Kalvi-9th-Maths-Chapter-2-Real-Numbers-Ex-2.3-1.png", null, "https://samacheerkalvi.guru/wp-content/uploads/2019/10/Samacheer-Kalvi-9th-Maths-Chapter-2-Real-Numbers-Ex-2.3-2.png", null, "https://samacheerkalvi.guru/wp-content/uploads/2019/10/Samacheer-Kalvi-9th-Maths-Chapter-2-Real-Numbers-Ex-2.3-3.png", null, "https://live.staticflickr.com/65535/48845819813_5d5674b5ac_o.png", null, "https://samacheerkalvi.guru/wp-content/uploads/2019/10/Samacheer-Kalvi-9th-Maths-Chapter-2-Real-Numbers-Ex-2.3-5.png", null, "https://samacheerkalvi.guru/wp-content/uploads/2019/12/SamacheerKalvi.Guru_.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83666414,"math_prob":0.99998796,"size":2616,"snap":"2023-40-2023-50","text_gpt3_token_len":901,"char_repetition_ratio":0.14127105,"word_repetition_ratio":0.63076925,"special_character_ratio":0.42737004,"punctuation_ratio":0.1491228,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000023,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,1,null,1,null,1,null,4,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T05:31:51Z\",\"WARC-Record-ID\":\"<urn:uuid:cfbd3276-718f-4453-a348-c1ae3e216001>\",\"Content-Length\":\"139770\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:602283ed-0244-4296-93f4-74638545992d>\",\"WARC-Concurrent-To\":\"<urn:uuid:1d5bc001-a716-4d4c-8e43-8fbfd6f53f1b>\",\"WARC-IP-Address\":\"172.67.68.141\",\"WARC-Target-URI\":\"https://samacheerkalvi.guru/samacheer-kalvi-9th-maths-chapter-2-ex-2-3/\",\"WARC-Payload-Digest\":\"sha1:7WVVSE5SUVE7AOTTCXS4Z5VYZZRUSTM7\",\"WARC-Block-Digest\":\"sha1:NTP364ZFWWK77G76DIO6ZLVRTDNRYULM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510259.52_warc_CC-MAIN-20230927035329-20230927065329-00892.warc.gz\"}"}
https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/commit/b85b4bce958cb87c55215119ac0df37bbcc6a19c
[ "### Renamed VariableSymbol to ParameterSymbol, introduced layer variable...\n\n`Renamed VariableSymbol to ParameterSymbol, introduced layer variable declarations and changed IOSymbol to VariableSymbol which now combines IO variables and layer variables`\nparent 0cc00c55\n ... ... @@ -8,19 +8,19 @@ de.monticore.lang.monticar embedded-montiarc-emadl-generator 0.3.2-SNAPSHOT 0.3.3-SNAPSHOT 0.2.8-SNAPSHOT 0.3.4-SNAPSHOT 0.0.2-SNAPSHOT 0.2.16-SNAPSHOT 0.2.12-SNAPSHOT 0.2.2-SNAPSHOT 0.2.9-SNAPSHOT 0.3.6-SNAPSHOT 0.0.3-SNAPSHOT 0.2.17-SNAPSHOT 0.2.13-SNAPSHOT 0.2.7-SNAPSHOT 0.1.4 ... ...\n ... ... @@ -442,7 +442,7 @@ public class EMADLGenerator { int i = 0; for (SerialCompositeElementSymbol stream : architecture.getStreams()) { if (stream.isNetwork()) { if (stream.isTrainable()) { networkAttributes += \"\\n\" + predictorClassName + \"_\" + i + \" _predictor_\" + i + \"_;\"; } ... ...\n ... ... @@ -198,6 +198,14 @@ public class GenerationTest extends AbstractSymtabTest { assertTrue(Log.getFindings().size() == 0); } @Test public void testRNNtestForGluon() throws IOException, TemplateException { Log.getFindings().clear(); String[] args = {\"-m\", \"src/test/resources/models/\", \"-r\", \"RNNtest\", \"-b\", \"GLUON\", \"-f\", \"n\", \"-c\", \"n\"}; EMADLGeneratorCli.main(args); assertTrue(Log.getFindings().size() == 0); } @Test public void testGluonReinforcementModelGymEnvironment() { Log.getFindings().clear(); ... ...\n ... ... @@ -109,6 +109,7 @@ public abstract class IntegrationTest extends AbstractSymtabTest { deleteHashFile(); } @Ignore // TODO: Fix test after next release @Test public void testDontRetrain2() { // The training hash is written manually, so even the first training should be skipped ... ...\n configuration RNNtest{ num_epoch:10 batch_size:5 context:cpu optimizer:adam{ learning_rate:0.01 learning_rate_decay:0.8 step_size:1000 weight_decay:0.0001 } }\n component RNNtest{ ports in Q(-oo:oo)^{50, 30001} source, out Q(-oo:oo)^{50, 30001} target; implementation CNN { layer RNN(units=500, layers=2) encoder; layer RNN(units=500, layers=2) decoder; source -> encoder; encoder.output -> target; encoder.state -> decoder.state; source -> decoder -> target; } } \\ No newline at end of file\n ... ... @@ -11,4 +11,5 @@ mnist.LeNetNetwork data/mnist.LeNetNetwork MultipleInputs src/test/resources/training_data/MultipleInputs MultipleOutputs src/test/resources/training_data/MultipleOutputs MultipleStreams src/test/resources/training_data/MultipleStreams Invariant src/test/resources/training_data/Invariant \\ No newline at end of file Invariant src/test/resources/training_data/Invariant RNNtest data/RNNtest \\ No newline at end of file\n ... ... @@ -13,8 +13,9 @@ class CNNPredictor_cifar10_cifar10Classifier_net_0{ public: const std::string json_file = \"model/cifar10.CifarNetwork/model_newest-symbol.json\"; const std::string param_file = \"model/cifar10.CifarNetwork/model_newest-0000.params\"; //const std::vector input_keys = {\"data\"}; const std::vector input_keys = {\"data\"}; const std::vector input_keys = { \"data\" }; const std::vector> input_shapes = {{1,3,32,32}}; const bool use_gpu = false; ... ... 
@@ -28,10 +29,9 @@ public: if(handle) MXPredFree(handle); } void predict(const std::vector &data, std::vector &softmax){ MXPredSetInput(handle, \"data\", data.data(), data.size()); //MXPredSetInput(handle, \"data\", data.data(), data.size()); void predict(const std::vector &data_, std::vector &softmax_){ MXPredSetInput(handle, input_keys.c_str(), data_.data(), data_.size()); MXPredForward(handle); ... ... @@ -44,8 +44,8 @@ public: MXPredGetOutputShape(handle, output_index, &shape, &shape_len); size = 1; for (mx_uint i = 0; i < shape_len; ++i) size *= shape[i]; assert(size == softmax.size()); MXPredGetOutput(handle, 0, &(softmax), softmax.size()); assert(size == softmax_.size()); MXPredGetOutput(handle, 0, &(softmax_), softmax_.size()); } ... ...\n ... ... @@ -85,9 +85,9 @@ class CNNPredictor_mnist_mnistClassifier_net_0{ input.Resize(input_shapes); } void predict(const std::vector &image, std::vector &predictions){ void predict(const std::vector &image_, std::vector &predictions_){ //Note: ShareExternalPointer requires a float pointer. input.ShareExternalPointer((float *) image.data()); input.ShareExternalPointer((float *) image_.data()); // Get input blob #ifdef USE_GPU ... ... @@ -104,11 +104,11 @@ class CNNPredictor_mnist_mnistClassifier_net_0{ // Get output blob #ifdef USE_GPU auto predictionsBlob = TensorCPU(workSpace.GetBlob(\"predictions\")->Get()); auto predictions_Blob = TensorCPU(workSpace.GetBlob(\"predictions\")->Get()); #else auto predictionsBlob = workSpace.GetBlob(\"predictions\")->Get(); auto predictions_Blob = workSpace.GetBlob(\"predictions\")->Get(); #endif predictions.assign(predictionsBlob.data(),predictionsBlob.data() + predictionsBlob.size()); predictions_.assign(predictions_Blob.data(),predictions_Blob.data() + predictions_Blob.size()); google::protobuf::ShutdownProtobufLibrary(); } ... ...\n ... ... @@ -19,12 +19,12 @@ data = icube(3, 32, 32); softmax=colvec(classes); } void execute(){ vector CNN_softmax(10); vector CNN_softmax_(10); _predictor_0_.predict(CNNTranslator::translate(data), CNN_softmax); CNN_softmax_); softmax = CNNTranslator::translateToCol(CNN_softmax, std::vector {10}); softmax = CNNTranslator::translateToCol(CNN_softmax_, std::vector {10}); } ... ...\n ... ... @@ -50,7 +50,7 @@ class CNNCreator_mnist_mnistClassifier_net: self.networks = Net_0(data_mean=data_mean, data_std=data_std) self.networks.collect_params().initialize(self.weight_initializer, ctx=context) self.networks.hybridize() self.networks(mx.nd.zeros((1, 1,28,28,), ctx=context)) self.networks(mx.nd.zeros((1,1,28,28,), ctx=context)) if not os.path.exists(self._model_dir_): os.makedirs(self._model_dir_) ... ...\n ... ... @@ -21,8 +21,8 @@ class CNNDataLoader_mnist_mnistClassifier_net: for input_name in self._input_names_: train_data[input_name] = train_h5[input_name] data_mean[input_name] = nd.array(train_h5[input_name][:].mean(axis=0)) data_std[input_name] = nd.array(train_h5[input_name][:].std(axis=0) + 1e-5) data_mean[input_name + '_'] = nd.array(train_h5[input_name][:].mean(axis=0)) data_std[input_name + '_'] = nd.array(train_h5[input_name][:].std(axis=0) + 1e-5) train_label = {} for output_name in self._output_names_: ... ...\n ... ... 
@@ -85,10 +85,10 @@ class Net_0(gluon.HybridBlock): with self.name_scope(): if data_mean: assert(data_std) self.input_normalization_image = ZScoreNormalization(data_mean=data_mean['image'], data_std=data_std['image']) self.input_normalization_image_ = ZScoreNormalization(data_mean=data_mean['image_'], data_std=data_std['image_']) else: self.input_normalization_image = NoNormalization() self.input_normalization_image_ = NoNormalization() self.conv1_ = gluon.nn.Conv2D(channels=20, kernel_size=(5,5), ... ... @@ -123,10 +123,9 @@ class Net_0(gluon.HybridBlock): self.softmax3_ = Softmax() def hybrid_forward(self, F, image): outputs = [] image = self.input_normalization_image(image) conv1_ = self.conv1_(image) def hybrid_forward(self, F, image_): image_ = self.input_normalization_image_(image_) conv1_ = self.conv1_(image_) pool1_ = self.pool1_(conv1_) conv2_ = self.conv2_(pool1_) pool2_ = self.pool2_(conv2_) ... ... @@ -135,6 +134,7 @@ class Net_0(gluon.HybridBlock): relu2_ = self.relu2_(fc2_) fc3_ = self.fc3_(relu2_) softmax3_ = self.softmax3_(fc3_) outputs.append(softmax3_) predictions_ = softmax3_ return predictions_ return outputs\n ... ... @@ -29,9 +29,9 @@ public: if(handle) MXPredFree(handle); } void predict(const std::vector &image, std::vector &predictions){ MXPredSetInput(handle, \"data\", image.data(), static_cast(image.size())); void predict(const std::vector &in_image_, std::vector &out_predictions_){ MXPredSetInput(handle, input_keys.c_str(), in_image_.data(), static_cast(in_image_.size())); MXPredForward(handle); ... ... @@ -44,8 +44,8 @@ public: MXPredGetOutputShape(handle, output_index, &shape, &shape_len); size = 1; for (mx_uint i = 0; i < shape_len; ++i) size *= shape[i]; assert(size == predictions.size()); MXPredGetOutput(handle, 0, &(predictions), predictions.size()); assert(size == out_predictions_.size()); MXPredGetOutput(handle, 0, &(out_predictions_), out_predictions_.size()); } ... ...\n ... ... @@ -132,14 +132,15 @@ class CNNSupervisedTrainer_mnist_mnistClassifier_net: for epoch in range(begin_epoch, begin_epoch + num_epoch): train_iter.reset() for batch_i, batch in enumerate(train_iter): image_data = batch.data.as_in_context(mx_context) image_ = batch.data.as_in_context(mx_context) predictions_label = batch.label.as_in_context(mx_context) with autograd.record(): predictions_output = self._networks(image_data) predictions_ = self._networks(image_) loss = \\ loss_function(predictions_output, predictions_label) loss_function(predictions_, predictions_label) loss.backward() ... ... @@ -164,17 +165,18 @@ class CNNSupervisedTrainer_mnist_mnistClassifier_net: train_iter.reset() metric = mx.metric.create(eval_metric) for batch_i, batch in enumerate(train_iter): image_data = batch.data.as_in_context(mx_context) image_ = batch.data.as_in_context(mx_context) labels = [ batch.label.as_in_context(mx_context) ] if True: # Fix indentation predictions_output = self._networks(image_data) if True: predictions_ = self._networks(image_) predictions = [ mx.nd.argmax(predictions_output, axis=1) mx.nd.argmax(predictions_, axis=1) ] metric.update(preds=predictions, labels=labels) ... ... 
@@ -183,17 +185,18 @@ class CNNSupervisedTrainer_mnist_mnistClassifier_net: test_iter.reset() metric = mx.metric.create(eval_metric) for batch_i, batch in enumerate(test_iter): image_data = batch.data.as_in_context(mx_context) image_ = batch.data.as_in_context(mx_context) labels = [ batch.label.as_in_context(mx_context) ] if True: # Fix indentation predictions_output = self._networks(image_data) if True: predictions_ = self._networks(image_) predictions = [ mx.nd.argmax(predictions_output, axis=1) mx.nd.argmax(predictions_, axis=1) ] metric.update(preds=predictions, labels=labels) ... ...\n ... ... @@ -19,12 +19,12 @@ image = icube(1, 28, 28); predictions=colvec(classes); } void execute(){ vector CNN_predictions(10); vector image_ = CNNTranslator::translate(image); vector predictions_(10); _predictor_0_.predict(CNNTranslator::translate(image), CNN_predictions); _predictor_0_.predict(image_, predictions_); predictions = CNNTranslator::translateToCol(CNN_predictions, std::vector {10}); predictions = CNNTranslator::translateToCol(predictions_, std::vector {10}); } ... ...\n ... ... @@ -19,12 +19,12 @@ image = icube(1, 28, 28); predictions=colvec(classes); } void execute(){ vector CNN_predictions(10); vector CNN_predictions_(10); _predictor_0_.predict(CNNTranslator::translate(image), CNN_predictions); CNN_predictions_); predictions = CNNTranslator::translateToCol(CNN_predictions, std::vector {10}); predictions = CNNTranslator::translateToCol(CNN_predictions_, std::vector {10}); } ... ..." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.69450366,"math_prob":0.9649101,"size":366,"snap":"2020-45-2020-50","text_gpt3_token_len":71,"char_repetition_ratio":0.18232045,"word_repetition_ratio":0.17391305,"special_character_ratio":0.16120219,"punctuation_ratio":0.09090909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98521525,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-30T19:16:21Z\",\"WARC-Record-ID\":\"<urn:uuid:c89454f0-491e-4cd1-a1ca-1405a533cbe7>\",\"Content-Length\":\"978638\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d467d520-3785-43c5-b037-58080eb9a860>\",\"WARC-Concurrent-To\":\"<urn:uuid:e0aeed19-5f53-4414-9fa0-43a0228df40d>\",\"WARC-IP-Address\":\"134.130.122.52\",\"WARC-Target-URI\":\"https://git.rwth-aachen.de/monticore/EmbeddedMontiArc/generators/EMADL2CPP/-/commit/b85b4bce958cb87c55215119ac0df37bbcc6a19c\",\"WARC-Payload-Digest\":\"sha1:UF5BGLVRIDLZ2WCPET3LMQBEF5KX4H4O\",\"WARC-Block-Digest\":\"sha1:3WYCYP4SDUQBYXFFPXGKYHWSFN2B6I7H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107911229.96_warc_CC-MAIN-20201030182757-20201030212757-00596.warc.gz\"}"}
https://fraction-calculator.net/answer/5/10$25/20
[ "# What is 5/10*25/20?\n\nHow much is 510 times 2520. Step by step solution expressed as a proper or improper fraction, mixed number and decimal form.\n\n• 58\n• 0.625\n\n## Step by step solution\n\nMultiply\n\n510 * 2520 = 125200\n\n1. ### Multiply\n\n• 510 * 2520\n• 5 * 2510 * 20\n• 125200\n\nTo multiply fractions, multiply the numerator and denominators.\n\nSimplify\n\n• 125200\n• 58\n1. ### Reduce fraction to its lowest terms\n\n• 125200\n• 125 ÷ 25200 ÷ 25\n• 58\n\nThe greatest common divisor of 125 and 200 is 25, the fraction 125200 can be simplified or reduced to lowest terms by dividing the numerator and denominator by 25." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86425996,"math_prob":0.9468491,"size":390,"snap":"2020-10-2020-16","text_gpt3_token_len":91,"char_repetition_ratio":0.126943,"word_repetition_ratio":0.0,"special_character_ratio":0.2820513,"punctuation_ratio":0.09859155,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99227875,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-01T21:03:30Z\",\"WARC-Record-ID\":\"<urn:uuid:9d30a092-403f-4a10-bd77-e2cac66df346>\",\"Content-Length\":\"6647\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b1471b59-8429-41a7-9fb6-1275d458423e>\",\"WARC-Concurrent-To\":\"<urn:uuid:08eb2468-9078-45cc-8ec0-c5f37d681f4e>\",\"WARC-IP-Address\":\"166.62.75.131\",\"WARC-Target-URI\":\"https://fraction-calculator.net/answer/5/10$25/20\",\"WARC-Payload-Digest\":\"sha1:M54HIO3XNDQ4MFOE7532WREBCVSVHLRT\",\"WARC-Block-Digest\":\"sha1:CDZQW43AU363IGBBQRXSJXR5K6EZ3YKE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370506121.24_warc_CC-MAIN-20200401192839-20200401222839-00397.warc.gz\"}"}
https://www.physicsforums.com/threads/quadratic-simplification.6219/
[ "Caldus\n\nTrying to rewrite a quadratic equation in the form a(x - h)^2 + k. The equation I'm trying to rewrite is:\n\ny = x^2 + 3x + 5/2\n\nNot looking for an answer, just looking for how to do this (I don't know how to do it if it has rationals in it). Thank you.\n\nStephenPrivitera\n\nComplete the square.\nx2+3x +k=(x+3/2)2\nWhat is k?\n\nAlso, you can multiply through by two to get rid of the rationals, but don't forget to divide it out at the end.\n\nLast edited:\n\nHurkyl\n\nStaff Emeritus\nGold Member\nIOW you do it the same as if there weren't rationals.\n\nSTAii\n\nOriginally posted by StephenPrivitera\nx2+3x +k=(x+3/2)2\nI am sorry, but, how are those two equal ? and how are they connected to the original question ?\n\nTo convert a quadratic to the form (a(x-h)^2 + k) you must (as StephenPrivitera said) complete the square.\nIf you have a quadratic on the form of :\nax^2 + bx + c\nThen, it is a complete square if c=(b/2)^2\nSo, to turn any quadratic to a complete square you need to make (c) in it equal to ((b/2)^2)\nIn your case, (b/2)^2 = (3/2)^2 = 9/4\nTo turn 5/2 into 9/4, you will need to add ((9/4)-(5/2)=(9/4)-(10/4)=-(1/4)) to it. But if you add any number to the quadratic you will actually change its value. So, to maintain the value, you will subtract the same number again, therefore leaving the qudratic unchanged (adding and subtracting the same number is like adding 0, it does nothing to the quadratic).\nHere you go:\ny = x^2 + 3x + 5/2\ny = x^2 + 3x + 5/2 + 0\ny = x^2 + 3x + 5/2 - 1/4 + 1/4\ny = x^2 + 3x + (5/2 - 1/4) + 1/4\ny = x^2 + 3x + (10/4 - 1/4) + 1/4\ny = x^2 + 3x + 9/4 + 1/4\ny = (x^2 + 3x + 9/4) + 1/4\ny = ((x + 3/2)*(x + 3/2)) + 1/4\ny = (x + 3/2)^2 + 1/4\n\nWhich is on the form that you asked for", null, ".\n\nHurkyl\n\nStaff Emeritus\nGold Member\nI am sorry, but, how are those two equal ? and how are they connected to the original question ?\n\nNotice that you proved:\n\nx^2 + 3x + 5/2 = (x + 3/2)^2 + 1/4\n\nDo a little rearrangement and you'll see that's (essentially) of the form\n\nx^2+3x +k=(x+3/2)^2\n\nPhysics Forums Values\n\nWe Value Quality\n• Topics based on mainstream science\n• Proper English grammar and spelling\nWe Value Civility\n• Positive and compassionate attitudes\n• Patience while debating\nWe Value Productivity\n• Disciplined to remain on-topic\n• Recognition of own weaknesses\n• Solo and co-op problem solving" ]
[ null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9603816,"math_prob":0.99867797,"size":407,"snap":"2019-43-2019-47","text_gpt3_token_len":131,"char_repetition_ratio":0.14888337,"word_repetition_ratio":0.6593407,"special_character_ratio":0.34643734,"punctuation_ratio":0.082474224,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99902916,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-22T16:31:54Z\",\"WARC-Record-ID\":\"<urn:uuid:05aaca44-ebaf-496e-98c3-e493424efb82>\",\"Content-Length\":\"83525\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d688fde2-d55b-4d25-9622-d4489ba660d2>\",\"WARC-Concurrent-To\":\"<urn:uuid:a68b9d62-868e-4598-8856-e33aa72ac33a>\",\"WARC-IP-Address\":\"23.111.143.85\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/quadratic-simplification.6219/\",\"WARC-Payload-Digest\":\"sha1:YRXCVYI2QTHP5NNLMGCG3T4YOQV73GEB\",\"WARC-Block-Digest\":\"sha1:BLAEXNCPG4EBTFW42RH33OM5BIWCKUTR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987822458.91_warc_CC-MAIN-20191022155241-20191022182741-00461.warc.gz\"}"}
https://bmicalc.co/weight-mass-loss/Weight-Loss_playing+racquetball_
[ "# Weight Loss playing racquetball\n\nHere you can find how much weight can you lose playing racquetball, as well as learn how to calculate the calories spent and the equivalent weight loss.\n\n### Inputs:\n\nlbs or Kilograms\nminutes\nChoose an activity:\nor\n\nType the first letters of the activity. (Ex.: 'walking').\n\n### Results:\n\n Weight (fat and/or muscle) lost: (grams) Calories burned:\nSomeone weighing 70 Kg or 154.3 lb playing racquetball burns 245.0 calories in 30 minutes. This value is roughly equivalent to 0.07 pound or 1.12 ounces or 31.8 grams of mass (fat and/or muscle).\n\n• Doing this activity 3 times a week for 30 minutes will burn 0.84 pounds or 0.38 Kg a month.\n• Doing this activity 5 times a week for 30 minutes will burn 1.4 pounds or 0.64 Kg a month.\n\n## How to calculate the burned calories or weight (mass) loss\n\nThe number of calories you burn depends on:\n\n• the physical activity\n• the person's body weight\n• the time spent doing the activity\n\nMultiply your body weight in kg by the MET (Metabolic equivalent) value by the time of activity, you'll get the approximate energy expent in Kcal according to the person's body weight. In this case, playing racquetball at a MET value, burns Kcal/kg x body weight/h.\n\nA 70 kg individual playing racquetball for 30 minutes will burn the following:\n\n(METs x 70 kg body weight) x (30 min/60 min) = 245.0 Kcal.\n\nis the value in METs for playing racquetball.\n\nTo transform the value in calories into pounds just divide the value in calories by 3500. So,\n\n245.0/3500 = 0.07 pound = 1.12 ounces = 31.8 grams of mass. We can't say that it is fat because it could be also muscle." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9212576,"math_prob":0.9859582,"size":1079,"snap":"2021-21-2021-25","text_gpt3_token_len":269,"char_repetition_ratio":0.14976744,"word_repetition_ratio":0.0,"special_character_ratio":0.2493049,"punctuation_ratio":0.093457945,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95991755,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-07T01:33:48Z\",\"WARC-Record-ID\":\"<urn:uuid:44bcc089-d26d-4983-b544-e645c220c846>\",\"Content-Length\":\"51556\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a5469108-9897-4685-a6fc-a45bca528353>\",\"WARC-Concurrent-To\":\"<urn:uuid:972961d0-0a6b-4b15-8fbb-ce7b8283bb13>\",\"WARC-IP-Address\":\"172.67.220.98\",\"WARC-Target-URI\":\"https://bmicalc.co/weight-mass-loss/Weight-Loss_playing+racquetball_\",\"WARC-Payload-Digest\":\"sha1:KBGSG5QCPHRRXVEHBDKXA7ZVAUZGZ7VA\",\"WARC-Block-Digest\":\"sha1:6HPFL2EGJUU2ZAJFMD45BJCWYSJ47BHN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988774.18_warc_CC-MAIN-20210506235514-20210507025514-00109.warc.gz\"}"}