URL — string, length 15 to 1.68k
text_list — sequence, length 1 to 199
image_list — sequence, length 1 to 199
metadata — string, length 1.19k to 3.08k
https://metanumbers.com/156529
[ "# 156529 (number)\n\n156,529 (one hundred fifty-six thousand five hundred twenty-nine) is an odd six-digit composite number following 156528 and preceding 156530. In scientific notation, it is written as 1.56529 × 10^5. The sum of its digits is 28. It has a total of 2 prime factors and 4 positive divisors. There are 155,376 positive integers (up to 156529) that are relatively prime to 156529.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity: Odd\n• Number length: 6\n• Sum of digits: 28\n• Digital root: 1\n\n## Name\n\n• Short name: 156 thousand 529\n• Full name: one hundred fifty-six thousand five hundred twenty-nine\n\n## Notation\n\n• Scientific notation: 1.56529 × 10^5\n• Engineering notation: 156.529 × 10^3\n\n## Prime Factorization of 156529\n\nPrime factorization: 157 × 997\n\n156529 is a composite number.\n\n• ω(n) = 2 — total number of distinct prime factors\n• Ω(n) = 2 — total number of prime factors\n• rad(n) = 156529 — product of the distinct prime factors\n• λ(n) = 1 — the parity of Ω(n), such that λ(n) = (-1)^Ω(n)\n• μ(n) = 1 — returns 1 if n has an even number of prime factors (and is square-free), −1 if n has an odd number of prime factors (and is square-free), 0 if n has a squared prime factor\n• Λ(n) = 0 — returns log(p) if n is a power p^k of a prime p (for any k >= 1), else 0\n\nThe prime factorization of 156,529 is 157 × 997. 
Since it has a total of 2 prime factors, 156,529 is a composite number.\n\n## Divisors of 156529\n\n4 divisors\n\n Even divisors 0 4 4 0\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 4 Total number of the positive divisors of n σ(n) 157684 Sum of all the positive divisors of n s(n) 1155 Sum of the proper positive divisors of n A(n) 39421 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 395.637 Returns the nth root of the product of n divisors H(n) 3.9707 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 156,529 can be divided by 4 positive divisors (out of which 0 are even, and 4 are odd). The sum of these divisors (counting 156,529) is 157,684, the average is 39,421.\n\n## Other Arithmetic Functions (n = 156529)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 155376 Total number of positive integers not greater than n that are coprime to n λ(n) 12948 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 14354 Total number of primes less than or equal to n r2(n) 16 The number of ways n can be represented as the sum of 2 squares\n\nThere are 155,376 positive integers (less than 156,529) that are coprime with 156,529. 
And there are approximately 14,354 prime numbers less than or equal to 156,529.\n\n## Divisibility of 156529\n\n m n mod m 2 3 4 5 6 7 8 9 1 1 1 4 1 2 1 1\n\n156,529 is not divisible by any number less than or equal to 9.\n\n## Classification of 156529\n\n• Arithmetic\n• Semiprime\n• Deficient\n\n• Polite\n\n• Square Free\n\n### Other numbers\n\n• LucasCarmichael\n\n## Base conversion (156529)\n\nBase System Value\n2 Binary 100110001101110001\n3 Ternary 21221201101\n4 Quaternary 212031301\n5 Quinary 20002104\n6 Senary 3204401\n8 Octal 461561\n10 Decimal 156529\n12 Duodecimal 76701\n20 Vigesimal jb69\n36 Base36 3cs1\n\n## Basic calculations (n = 156529)\n\n### Multiplication\n\nn×y\n n×2 313058 469587 626116 782645\n\n### Division\n\nn÷y\n n÷2 78264.5 52176.3 39132.2 31305.8\n\n### Exponentiation\n\nny\n n2 24501327841 3835168345623889 600315065972161721281 93966716961556502070393649\n\n### Nth Root\n\ny√n\n 2√n 395.637 53.8929 19.8906 10.9375\n\n## 156529 as geometric shapes\n\n### Circle\n\n Diameter 313058 983501 7.69732e+10\n\n### Sphere\n\n Volume 1.60647e+16 3.07893e+11 983501\n\n### Square\n\nLength = n\n Perimeter 626116 2.45013e+10 221365\n\n### Cube\n\nLength = n\n Surface area 1.47008e+11 3.83517e+15 271116\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 469587 1.06094e+10 135558\n\n### Triangular Pyramid\n\nLength = n\n Surface area 4.24375e+10 4.51979e+14 127805\n\n## Cryptographic Hash Functions\n\nmd5 a54e3548fa3b4b53857a6177b9b73e8a f3b19c951809683d8ebc487a37a36b36a19acfc5 c5eff768b20b3f1b40d256a2ea1ea4b9028bbd9eff4c5f5ea224deb01ed1c848 715fbd82cecadced3358f4fc842a8d90c1804e80b565472fc48964d6a416e3c7baa8a2d41f5c884c954493c70bd08e5ac25a80972cc599c53bb707a9a5d37e71 ebce2005860a7ca1121ccfd6d548ce41c53fa598" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.62182534,"math_prob":0.9772478,"size":4667,"snap":"2021-43-2021-49","text_gpt3_token_len":1631,"char_repetition_ratio":0.12030882,"word_repetition_ratio":0.032210834,"special_character_ratio":0.45296764,"punctuation_ratio":0.0747782,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9952584,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-02T18:33:41Z\",\"WARC-Record-ID\":\"<urn:uuid:29ae8402-c529-48a1-8184-dc7e18df8d61>\",\"Content-Length\":\"39999\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c9bb87f1-f983-438e-941a-abefe0c97eaa>\",\"WARC-Concurrent-To\":\"<urn:uuid:3560589c-8b3b-4481-aef6-c8bb0692e307>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/156529\",\"WARC-Payload-Digest\":\"sha1:JHIOHREFP3BS3KR4EOIU4IYVEWTO5OKJ\",\"WARC-Block-Digest\":\"sha1:FWVY2WTUQ7P72SWI7NDNPNOGYJEJG4GX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362287.26_warc_CC-MAIN-20211202175510-20211202205510-00113.warc.gz\"}"}
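For a semiprime n = p · q (the case of the record above, with p = 157 and q = 997), the quoted divisor-count, divisor-sum, aliquot-sum, and totient values all follow from closed forms, which a short sketch can verify:

```python
# Cross-check the arithmetic functions quoted for n = 156529 = 157 * 997.
# For a semiprime n = p * q with distinct primes p, q:
#   tau(n)   = 4                      (divisors are 1, p, q, pq)
#   sigma(n) = (1 + p) * (1 + q)      (sum of all positive divisors)
#   phi(n)   = (p - 1) * (q - 1)      (Euler totient)

p, q = 157, 997
n = p * q

divisors = [1, p, q, n]
tau = len(divisors)
sigma = sum(divisors)
aliquot = sigma - n          # sum of the proper divisors
phi = (p - 1) * (q - 1)

print(n, tau, sigma, aliquot, phi)
# 156529 4 157684 1155 155376 — matching the record above
```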
https://answers.everydaycalculation.com/compare-fractions/84-8-and-14-63
[ "Solutions by everydaycalculation.com\n\n## Compare 84/8 and 14/63\n\n1st number: 10 4/8, 2nd number: 14/63\n\n84/8 is greater than 14/63\n\n#### Steps for comparing fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 8 and 63 is 504\n\nNext, find the equivalent fraction of both fractional numbers with denominator 504\n2. For the 1st fraction, since 8 × 63 = 504,\n84/8 = 84 × 63/8 × 63 = 5292/504\n3. Likewise, for the 2nd fraction, since 63 × 8 = 504,\n14/63 = 14 × 8/63 × 8 = 112/504\n4. Since the denominators are now the same, the fraction with the bigger numerator is the greater fraction\n5. 5292/504 > 112/504 or 84/8 > 14/63\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:\nAndroid and iPhone/ iPad\n\nand" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8502146,"math_prob":0.99144095,"size":881,"snap":"2022-40-2023-06","text_gpt3_token_len":319,"char_repetition_ratio":0.18928164,"word_repetition_ratio":0.0,"special_character_ratio":0.44721907,"punctuation_ratio":0.07253886,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9910695,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-05T18:47:50Z\",\"WARC-Record-ID\":\"<urn:uuid:bb044350-8e18-445f-80dc-0acd6f75fb30>\",\"Content-Length\":\"7697\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a87db103-5c62-4897-a538-41372f5d26e9>\",\"WARC-Concurrent-To\":\"<urn:uuid:aacc189c-87d9-4f2c-be5f-d0563820a3a7>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/compare-fractions/84-8-and-14-63\",\"WARC-Payload-Digest\":\"sha1:QMLXX4W3HU5HDXTLJPAKY4JYMGGPQNCI\",\"WARC-Block-Digest\":\"sha1:PKUXYHMKZ5VJQ643TMWPBDYEIQXK32AF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337663.75_warc_CC-MAIN-20221005172112-20221005202112-00206.warc.gz\"}"}
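The comparison steps on the page above (rewrite both fractions over the least common denominator, then compare numerators) can be reproduced in a few lines, with `fractions.Fraction` as an independent check:

```python
from math import lcm
from fractions import Fraction

# Compare 84/8 and 14/63 by scaling both to the LCM denominator,
# mirroring the steps on the page.
a_num, a_den = 84, 8
b_num, b_den = 14, 63

common = lcm(a_den, b_den)             # LCM of 8 and 63 -> 504
a_scaled = a_num * (common // a_den)   # 84 * 63 -> 5292
b_scaled = b_num * (common // b_den)   # 14 * 8  -> 112

print(common, a_scaled, b_scaled)      # 504 5292 112
print(a_scaled > b_scaled)             # True: 84/8 > 14/63

# Sanity check with exact rational arithmetic:
assert (Fraction(84, 8) > Fraction(14, 63)) == (a_scaled > b_scaled)
```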
https://biz.libretexts.org/Courses/Kwantlen_Polytechnic_University/BUSI1215_Organizational_Behaviour/09%3A_Leading_People_Within_Organizations/9.8%3A_Conclusion
[ "$$\\newcommand{\\vecs}{\\overset { \\rightharpoonup} {\\mathbf{#1}} }$$ $$\\newcommand{\\vecd}{\\overset{-\\!-\\!\\rightharpoonup}{\\vphantom{a}\\smash {#1}}}$$$$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$$$\\newcommand{\\AA}{\\unicode[.8,0]{x212B}}$$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9592598,"math_prob":1.000003,"size":932,"snap":"2022-40-2023-06","text_gpt3_token_len":163,"char_repetition_ratio":0.1487069,"word_repetition_ratio":0.0,"special_character_ratio":0.17167382,"punctuation_ratio":0.1118421,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96362203,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-25T05:27:54Z\",\"WARC-Record-ID\":\"<urn:uuid:303dc392-8ad4-4697-9ea4-3da8e574f939>\",\"Content-Length\":\"98207\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:43399a98-e750-4421-bed9-4e1adce32ec3>\",\"WARC-Concurrent-To\":\"<urn:uuid:cce90b4f-a8db-49c0-a315-a2cf77d6738b>\",\"WARC-IP-Address\":\"18.160.46.83\",\"WARC-Target-URI\":\"https://biz.libretexts.org/Courses/Kwantlen_Polytechnic_University/BUSI1215_Organizational_Behaviour/09%3A_Leading_People_Within_Organizations/9.8%3A_Conclusion\",\"WARC-Payload-Digest\":\"sha1:D6REWLJWXSL7Z4NCDT6ZMF2FMXKZR3LY\",\"WARC-Block-Digest\":\"sha1:MR4R6FZSL574VB2BSZEN4EYXCOHKULTE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334514.38_warc_CC-MAIN-20220925035541-20220925065541-00401.warc.gz\"}"}
https://www.sanfoundry.com/basic-electrical-engineering-questions-answers-generation-alternating-emf/
[ "# Basic Electrical Engineering Questions and Answers – Generation of an Alternating EMF\n\n«\n»\n\nThis set of Basic Electrical Engineering Multiple Choice Questions & Answers (MCQs) focuses on “Generation of an Alternating EMF”.\n\n1. Which, among the following, is the correct expression for alternating emf generated?\na) e=2Blvsin(θ)\nb) e=2B2lvsin(θ)\nc) e=Blvsin(θ)\nd) e=4Blvsin(θ)\n\nExplanation: The correct expression for alternating emf generated is e=Blvsin(θ). Where B stands for magnetic field density, l is the length of each of the parallel sides v is the velocity with which the conductor is moved and θ is the angle between the velocity and the length.\n\n2. What should theta be in order to get maximum emf?\na) 00\nb) 900\nc) 1800\nd) 450\n\nExplanation: The value of θ should be 900 in order to get maximum emf because e = Blvsin(θ) and sin is maximum when θ is 900.\n\n3. Calculate the maximum emf when the velocity is 10m/s, the length is 3m and the magnetic field density is 5T.\na) 150V\nb) 100V\nc) 300V\nd) 0V\n\nExplanation: We know that: emax=Bvl\nSubstituting the values from the given question, we get e=150V.\n\n4. When a coil is rotated in a magnetic field, the emf induced in it?\na) Is maximum\nb) Is minimum\nc) Continuously varies\nd) Remains constant\n\nExplanation: When a coil is rotated in a magnetic field, cross sectional area varies due to which the number of flux lines crossing it varies, which causes the emf to vary continuously.\n\n5. emf is zero if the angle between velocity and length is _____\na) 00\nb) 900\nc) 2700\nd) 450\n\nExplanation: If the angle between velocity and length is zero, sinθ=0\nSo, e=Bvlsinθ = 0.\n\n6. In an A.C. generator, increase in number of turns in the coil _________\na) Increases emf\nb) Decreases emf\nc) Makes the emf zero\nd) Maintains the emf at a constant value\n\nExplanation: In an A.C. 
generator, the emf increases as the number of turns in the coil increases because the emf is directly proportional to the number of turns.\n\n7. The number of cycles that occur in one second is termed as ___________\na) Waveform\nb) Frequency\nc) Amplitude\nd) Period\n\nExplanation: The number of cycles that occur in one second is known as the frequency. It is the reciprocal of the time period.\n\nSanfoundry Global Education & Learning Series – Basic Electrical Engineering.\n\nTo practice all areas of Basic Electrical Engineering, here is complete set of 1000+ Multiple Choice Questions and Answers.", null, "" ]
[ null, "data:image/svg+xml,%3Csvg%20xmlns=%22http://www.w3.org/2000/svg%22%20viewBox=%220%200%20150%20150%22%3E%3C/svg%3E", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8240558,"math_prob":0.9921012,"size":2749,"snap":"2022-27-2022-33","text_gpt3_token_len":772,"char_repetition_ratio":0.11876138,"word_repetition_ratio":0.12083333,"special_character_ratio":0.2648236,"punctuation_ratio":0.12164579,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99786526,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-07T18:45:29Z\",\"WARC-Record-ID\":\"<urn:uuid:72cb0f88-7ec0-42a1-8634-8d2060832f75>\",\"Content-Length\":\"151908\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7d8315ec-145d-4920-a05b-5c21905bc975>\",\"WARC-Concurrent-To\":\"<urn:uuid:c5f09bda-6d8c-451c-b34d-7987f8398f32>\",\"WARC-IP-Address\":\"104.25.131.119\",\"WARC-Target-URI\":\"https://www.sanfoundry.com/basic-electrical-engineering-questions-answers-generation-alternating-emf/\",\"WARC-Payload-Digest\":\"sha1:WW5THWP4S44DTOVRNNYB2AX5ZR7X3SZZ\",\"WARC-Block-Digest\":\"sha1:VC7GO5WDBLUPU7BUGO7NELUO4JEAAL3V\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570692.22_warc_CC-MAIN-20220807181008-20220807211008-00304.warc.gz\"}"}
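The quiz's formula e = Blv·sin(θ) and the numbers in question 3 (B = 5 T, l = 3 m, v = 10 m/s) can be checked directly; `alternating_emf` is a hypothetical helper name, not part of the quiz:

```python
import math

# e = B * l * v * sin(theta) for a single conductor; emf peaks at theta = 90
# degrees and vanishes at theta = 0, as questions 2 and 5 state.
def alternating_emf(B, l, v, theta_deg):
    return B * l * v * math.sin(math.radians(theta_deg))

e_max = alternating_emf(5, 3, 10, 90)
print(e_max)                          # 150.0 V, matching answer (a) of Q3

print(alternating_emf(5, 3, 10, 0))  # 0.0 — no emf when sin(theta) = 0
```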
http://www.youdao.com/example/blng/eng/material_balance/
[ "go top 返回词典\n• So the simulated annealing algorithm is introduced to solving the material balance equation.\n\n因此本文引入模拟退火算法求解物质平衡方程\n\ndict.cnki.net\n\n• The method is derived by using material balance equation in combination with formation pressure.\n\n方法考虑地层压力采用物质平衡方程得到\n\ndict.cnki.net\n\n• Based on the special geologic and hydrodynamic features, the conventional material balance equation is modified.\n\n根据特殊地质水动力学特征常规物质平衡方程进行了改进。\n\ndict.cnki.net\n\n• Add fuel stream to the heat and material balance.\n\n增设燃料达到热平衡物料平衡\n\nwww.websaru.net\n\n• The general material balance equation was applied widely in the field, but few studies about material balance equation of gas injection flooding has been conducted.\n\n常规物质平衡方程油田已广泛应用然而油田物质平衡方程研究较少\n\ndict.cnki.net\n\n• The specifications should provide a reasonable material balance.\n\n应该提供一个合理物料平衡控制标准。\n\nwww.websaru.net\n\n• Filtered process mathematical model includes material balance equations of solid filter material and liquid sewage respectively.\n\n污水过滤过程数学模型包括固体相和液体污水物料衡算方程\n\ndict.cnki.net\n\n• Using the equilibrium constant method, the thermal and material balance calculation, and has set up the calculation model of straight Claus sulfur recovery process.\n\n应用平衡常数物料热量建立了直流法克劳斯硫磺回收工艺计算模型\n\ndict.cnki.net\n\n• Based on the material balance, energy balance and reaction kinetics, a dynamic model, having the form of a nonlinear distributed parameter has been presented for catalytic cracking process.\n\n基于物料算、热量反应动力学建立催化裂化过程动态模型具有非线性分布参数模型形式\n\ndict.cnki.net\n\n• After the material balance for any a mixing pool, the analytical solutions of concentration on the evaporating surface, the distillate rate and the residue rate obtained.\n\n每个混合进行物料衡算,可以得到蒸发壁面上浓度变化轻、馏分流量含量解析\n\ndict.cnki.net\n\n• Material balance equation of gas reservoir is an important method in gas reservoir engineering.\n\n气藏物质平衡方程序气藏工程中的重要方法\n\nwww.chemyq.com\n\n• Material balance method is most used in reserve calculation of gas reservoirs, and its 
calculating results are accurate relatively.\n\n物质平衡气藏储量计算中用得最多计算较为准确的一种方法。\n\ndict.cnki.net\n\n• Material balance equation for gas reservoir is an important method in gas reservoir engineering.\n\n气藏物质平衡方程气藏工程中的重要方法\n\ndict.cnki.net\n\n• This paper presents the dynamic evaluation method for calculation of water energy, based on the performances and material balance theory.\n\n将气藏一具体地质开发特征物质平衡理论结合给出水体能量动态评价方法\n\nwww.dictall.com\n\n• The breakthrough curve inside the bed under another operation is predicted by the adsorption rate equation, the adsorption isotherm equation and the material balance equation.\n\n吸附速率方程吸附等温方程物料算相结合预测其他条件下穿透曲线\n\ndict.cnki.net\n\n• When calculating with material balance equation, the accuracy of the calculation result is often affected by the error of PVT parameters.\n\n采用物质平衡方程计算常常由于PVT参数误差影响最终计算结果精度\n\ndict.cnki.net\n\n• Large deviation will appear if material balance equation of the gas reservoir for calculating OIP is used directly in this production scheme.\n\n开采方式下,直接采用气藏物质平衡方程进行地质储量计算出现很大偏差\n\nwww.chemyq.com\n\n物质平衡方式不定态公式等油藏工程理论基础,得出期采收率油藏见时间计算公式\n\nwww.chemyq.com\n\n• Material balance method is one of the methods used commonly in estimating gas reserves in a reservoir.\n\n物质平衡方法计算气藏储量常用方法之一\n\nwww.chemyq.com\n\n• On the basis of the material balance equation, this paper develops a method to predict the formation pressure using cumulative production data for normal pressure confined gas reservoir.\n\n正常压力系统封闭气藏物质平衡方程序基础建立一种利用累积产气量计算气藏地层压力方法\n\ndict.cnki.net\n\n• Traditional well-test models for liquid flow are not consistent with material balance equation.\n\n传统试井模型物质平衡方程都是一致的。\n\ndict.cnki.net\n\n• Utilizing the optimal principle and the material balance equation, with the inheritance arithmetic, this paper has programmed to solve the geological reserves .\n\n利用最优化原理物质平衡方程结合遗传算法编程求解了水驱气藏地质储量\n\ndict.youdao.com\n\n• Analyzing methods based on material balance are currently important tools for performance analysis and reserve 
calculation of oil and gas reservoirs.\n\n物质平衡原理为基础动态分析方法现今油气藏动态分析储量核实重要工具\n\nwww.cngascn.com\n\n• Methods the effect factors of the steady producing for single gas well production was discussed by using the model of material balance with compensating as well as the parameters of the model.\n\n方法采用具有补给物质平衡模型确定气井单井稳产水平,探讨影响单井稳产水平因素及其模型中的反映。\n\ndict.cnki.net\n\n• The model includes the calculation of material balance, the vapor and liquid equilibrium compositions, the interfacial area of mass transfer, mass transfer rate.\n\n模型包括物料、相平衡计算面积计算速率计算等。\n\ndict.cnki.net\n\n• The continuous suspension crystallization process for xylene separation was optimized by calculation of solid-liquid phase equilibrium, material balance and energy balance.\n\n采用混合二甲苯二组分物系的液相平衡方程及物料热量方程对混合二甲苯连续悬浮结晶工艺进行模拟计算。\n\nwww.dictall.com\n\n• Material balance equation of gas reservoirs is an important tool in gas reservoir engineering. It can be used to determine the original gas-in-place of the gas reservoirs.\n\n气藏物质平衡方程序可以确定气藏原始地质储量,气藏工程中的重要方法\n\ndict.cnki.net\n\n• We can learn from above that the study of material balance method during gas injection flooding is very important.\n\n因此,研究开采过程中的物质平衡方程序具有十分重要的意义。\n\ndanci.911cha.com\n\n• The material balance equation for gas reservoir has been used to determine reserves and OGIP of gas reservoir, judge its driving type and predict its production performance.\n\n气藏物质平衡方程气藏工程重要方法可以确定气藏原始地质储量可采储量,也可以判断气藏驱动类型预测气藏开发动态\n\ndict.cnki.net\n\n• And using material balance with time equation to calculate recoverable reservoir controlled by single well.\n\n采用物质平衡时间方程分析法求取单井控制储量\n\ndict.cnki.net\n\n\\$firstVoiceSent\n- 来自原声例句" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6833623,"math_prob":0.97252864,"size":5166,"snap":"2020-45-2020-50","text_gpt3_token_len":1904,"char_repetition_ratio":0.21619527,"word_repetition_ratio":0.02519685,"special_character_ratio":0.14711575,"punctuation_ratio":0.073305674,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9718071,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-05T03:19:54Z\",\"WARC-Record-ID\":\"<urn:uuid:442d355c-396a-4c7c-b9bd-432d7bdb2373>\",\"Content-Length\":\"133836\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e2ebcf72-4562-44d0-9cb0-8c3d32630e76>\",\"WARC-Concurrent-To\":\"<urn:uuid:4c09a67a-a12f-4975-82a6-eb2eee109d8f>\",\"WARC-IP-Address\":\"103.72.47.245\",\"WARC-Target-URI\":\"http://www.youdao.com/example/blng/eng/material_balance/\",\"WARC-Payload-Digest\":\"sha1:FQMFJX3TLN6WNVAKJ7VZS24JXVIJZSCR\",\"WARC-Block-Digest\":\"sha1:QRCPBEIKUKWZPX4MNOBPFGWGIBMKYP7Y\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141746033.87_warc_CC-MAIN-20201205013617-20201205043617-00084.warc.gz\"}"}
https://www.w3definitions.com/mcq/number-systems-and-codes/
[ "# Number Systems And Codes MCQs\n\n## 3428 is the decimal value for which of the following binary coded decimal (BCD) groupings?\n\n• A. 11010001001000\n• B. 11010000101000\n• C. 011010010000010\n• D. 110100001101010\n\n## A BCD code that represents each digit of a decimal number by a binary number derived by adding 3 to its 4-bit true binary value is _________.\n\n• A. 9's complement code\n• B. excess-3 code\n• C. 8421 code\n• D. gray code\n\n## A binary code that progresses such that only one bit changes between two successive codes is _________.\n\n• A. 9's complement code\n• B. excess-3 code\n• C. 8421 code\n• D. gray code\n\n## A binary number’s value changes most drastically when the ____ is changed.\n\n• A. LSB\n• B. duty cycle\n• C. MSB\n• D. frequency\n\n## Base 10 refers to which number system?\n\n• A. binary coded decimal\n• B. decimal\n• C. octal\n\n• A. 201\n• B. 2001\n• C. 20\n• D. 210\n\n• A. 125\n• B. 12.5\n• C. 90.125\n• D. 9.125\n\n• A. 5B\n• B. 5F\n• C. 5A\n• D. 5C\n\n## Convert the decimal number 151.75 to binary.\n\n• A. 10000111.11\n• B. 11010011.01\n• C. 00111100.00\n• D. 10010111.11\n\n• A. decimal\n• C. binary\n• D. octal\n\n## Sample-and-hold circuits in ADCs are designed to:\n\n• A. sample and hold the output of the binary counter during the conversion process\n• B. stabilize the ADCs threshold voltage during the conversion process\n• C. stabilize the input analog signal during the conversion process\n• D. sample and hold the ADC staircase waveform during the conversion process\n\n• A. excess-3\n• B. gray\n• C. multibit\n• D. minival\n\n## The binary coded decimal (BCD) code is a system that represents each of the 10 decimal digits as a(n) ____________.\n\n• A. 4-bit binary code\n• B. 8-bit binary code\n• C. 16-bit binary code\n• D. ASCII code\n\n• A. 8\n• B. 4\n• C. 1\n• D. 2\n\n• A. 1\n• B. 2\n• C. 3\n• D. 4\n\n• A. 191\n• B. 1911\n• C. 19\n• D. 
19111\n\n## What is the difference between binary coding and binary coded decimal?\n\n• A. Binary coding is pure binary.\n• B. BCD is pure binary.\n• C. Binary coding has a decimal format.\n• D. BCD has no decimal format.\n\n• A. 327.375\n• B. 12166\n• C. 1388\n• D. 1476" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7260484,"math_prob":0.9105126,"size":2736,"snap":"2021-31-2021-39","text_gpt3_token_len":770,"char_repetition_ratio":0.1954612,"word_repetition_ratio":0.14401622,"special_character_ratio":0.33369884,"punctuation_ratio":0.15837938,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99689883,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-23T21:28:04Z\",\"WARC-Record-ID\":\"<urn:uuid:8f58b118-b222-48e9-8062-55015719caeb>\",\"Content-Length\":\"82735\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a64bfa25-81fe-4879-bcb8-f635b0f53f31>\",\"WARC-Concurrent-To\":\"<urn:uuid:40c30d49-7918-4306-abfb-603fe4f4bc8a>\",\"WARC-IP-Address\":\"23.111.187.131\",\"WARC-Target-URI\":\"https://www.w3definitions.com/mcq/number-systems-and-codes/\",\"WARC-Payload-Digest\":\"sha1:FDELX4NGFW6PBHGSMCW46ZZOO5NVY5Z3\",\"WARC-Block-Digest\":\"sha1:JQVMWJP4UVFQCMO36HK3PDVNPXK3DGDW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057447.52_warc_CC-MAIN-20210923195546-20210923225546-00665.warc.gz\"}"}
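Several of the BCD questions above can be verified with a short sketch; `to_bcd` and `to_excess3` are hypothetical helper names:

```python
# 8421 BCD encodes each decimal digit as its own 4-bit binary code.
def to_bcd(n):
    return ''.join(format(int(d), '04b') for d in str(n))

# Excess-3 adds 3 to each digit before encoding (question 2).
def to_excess3(n):
    return ''.join(format(int(d) + 3, '04b') for d in str(n))

bcd = to_bcd(3428)
print(bcd)              # 0011010000101000
print(bcd.lstrip('0'))  # 11010000101000 -> option B of question 1

print(to_excess3(3428)) # 0110011101011011
```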
http://textbooks.math.gatech.edu/ila/solution-sets.html
[ "##### Objectives\n1. Understand the relationship between the solution set of Ax = b and the solution set of Ax = 0.\n2. Understand the difference between the solution set and the column span.\n3. Recipes: parametric vector form, write the solution set of a homogeneous system as a span.\n4. Pictures: solution set of a homogeneous system, solution set of an inhomogeneous system, the relationship between the two.\n5. Vocabulary words: homogeneous/inhomogeneous, trivial solution.\n\nIn this section we will study the geometry of the solution set of any matrix equation Ax = b.\n\nThe equation is easier to solve when b = 0, so we start with this case.\n\n##### Definition\n\nA system of linear equations of the form Ax = 0 is called homogeneous.\n\nA system of linear equations of the form Ax = b for b ≠ 0 is called inhomogeneous.\n\nA homogeneous system is just a system of linear equations where all constants on the right side of the equals sign are zero.\n\nA homogeneous system always has the solution x = 0. This is called the trivial solution. Any nonzero solution is called nontrivial.\n\n##### Observation\n\nThe equation Ax = 0 has a nontrivial solution if and only if there is a free variable, that is, if and only if A has a column without a pivot position.\n\n##### Observation\n\nWhen we row reduce the augmented matrix for a homogeneous system of linear equations, the last column will be zero throughout the row reduction process. 
We saw this in the last example:\n\nSo it is not really necessary to write augmented matrices when solving homogeneous systems.\n\nWhen the homogeneous equation does have nontrivial solutions, it turns out that the solution set can be conveniently expressed as a span.\n\n##### Parametric Vector Form (homogeneous case)\n\nConsider the following matrix in reduced row echelon form:\n\nThe matrix equation corresponds to the system of equations\n\nWe can write the parametric form as follows:\n\nWe wrote the redundant equations and in order to turn the above system into a vector equation:\n\nThis vector equation is called the parametric vector form of the solution set. Since and are allowed to be anything, this says that the solution set is the set of all linear combinations of and In other words, the solution set is\n\nHere is the general procedure.\n\n##### Recipe: Parametric vector form (homogeneous case)\n\nLet be an matrix. Suppose that the free variables in the homogeneous equation are, for example, and\n\n1. Find the reduced row echelon form of\n2. Write the parametric form of the solution set, including the redundant equations Put equations for all of the in order.\n3. 
Make a single vector equation from these equations by making the coefficients of and into vectors and respectively.\n\nThe solutions to will then be expressed in the form\n\nfor some vectors in and any scalars This is called the parametric vector form of the solution.\n\nIn this case, the solution set can be written as\n\nWe emphasize the following fact in particular.\n\nThe set of solutions to a homogeneous equation is a span.\n\nSince there were two variables in the above example, the solution set is a subset of Since one of the variables was free, the solution set is a line:\n\nIn order to actually find a nontrivial solution to in the above example, it suffices to substitute any nonzero value for the free variable For instance, taking gives the nontrivial solution Compare to this important note in Section 1.3.\n\nSince there were three variables in the above example, the solution set is a subset of Since two of the variables were free, the solution set is a plane.\n\nThere is a natural question to ask here: is it possible to write the solution to a homogeneous matrix equation using fewer vectors than the one given in the above recipe? We will see in example in Section 2.5 that the answer is no: the vectors from the recipe are always linearly independent, which means that there is no way to write the solution with fewer vectors.\n\nAnother natural question is: are the solution sets for inhomogeneuous equations also spans? As we will see shortly, they are never spans, but they are closely related to spans.\n\nThere is a natural relationship between the number of free variables and the “size” of the solution set, as follows.\n\n##### Dimension of the solution set\n\nThe above examples show us the following pattern: when there is one free variable in a consistent matrix equation, the solution set is a line, and when there are two free variables, the solution set is a plane, etc. 
The number of free variables is called the dimension of the solution set.\n\nWe will develop a rigorous definition of dimension in Section 2.7, but for now the dimension will simply mean the number of free variables. Compare with this important note in Section 2.5.\n\nIntuitively, the dimension of a solution set is the number of parameters you need to describe a point in the solution set. For a line only one parameter is needed, and for a plane two parameters are needed. This is similar to how the location of a building on Peachtree Street—which is like a line—is determined by one number and how a street corner in Manhattan—which is like a plane—is specified by two numbers.\n\n# Subsection2.4.2Inhomogeneous Systems\n\nRecall that a matrix equation is called inhomogeneous when\n\nIn the above example, the solution set was all vectors of the form\n\nwhere is any scalar. The vector is also a solution of take We call a particular solution.\n\nIn the solution set, is allowed to be anything, and so the solution set is obtained as follows: we take all scalar multiples of and then add the particular solution to each of these scalar multiples. Geometrically, this is accomplished by first drawing the span of which is a line through the origin (and, not coincidentally, the solution to ), and we translate, or push, this line along The translated line contains and is parallel to it is a translate of a line.\n\nIn the above example, the solution set was all vectors of the form\n\nwhere and are any scalars. 
In this case, a particular solution is the vector p.

In the previous example and the example before it, the parametric vector form of the solution set of Ax = b was exactly the same as the parametric vector form of the solution set of Ax = 0 (from this example and this example, respectively), plus a particular solution.

##### Key Observation

If Ax = b is consistent, the set of solutions to Ax = b is obtained by taking one particular solution p of Ax = b and adding all solutions of Ax = 0.

In particular, if Ax = b is consistent, the solution set is a translate of a span.

The parametric vector form of the solutions of Ax = b is just the parametric vector form of the solutions of Ax = 0, plus a particular solution p.

It is not hard to see why the key observation is true. If p is a particular solution, then Ap = b, and if x is a solution to the homogeneous equation Ax = 0, then

A(p + x) = Ap + Ax = b + 0 = b,

so p + x is another solution of Ax = b. On the other hand, if we start with any solution x to Ax = b, then x − p is a solution to Ax = 0, since

A(x − p) = Ax − Ap = b − b = 0.

See the interactive figures in the next subsection for visualizations of the key observation.

##### Dimension of the solution set

As in this important note, when there is one free variable in a consistent matrix equation, the solution set is a line—this line does not pass through the origin when the system is inhomogeneous—and when there are two free variables, the solution set is a plane (again not through the origin when the system is inhomogeneous), etc.

Again compare with this important note in Section 2.5.

# Subsection 2.4.3 Solution Sets and Column Spans

To every matrix A we have now associated two completely different geometric objects, both described using spans.

• The solution set: for a fixed b, this is the set of all x such that Ax = b.
  • This is a span if b = 0, and it is a translate of a span if b ≠ 0 (and Ax = b is consistent).
  • It is a subset of R^n.
  • It is computed by solving a system of equations: usually by row reducing and finding the parametric vector form.
• The span of the columns of A: this is the set of all b such that Ax = b is consistent.
  • This is always a span.
  • It is a subset of R^m.
  • It is not computed by solving a system of equations: row reduction plays no role.

Do not confuse these two geometric constructions! In the first the question is which x's work for a given b, and in the second the question is which b's work for some x.
https://jov.arvojournals.org/article.aspx?articleid=2279458
Journal of Vision, June 2014, Volume 14, Issue 7
Methods

A formula for human retinal ganglion cell receptive field density as a function of visual field location

Andrew B. Watson

Journal of Vision June 2014, Vol. 14, 15. doi: https://doi.org/10.1167/14.7.15
© ARVO (1962-2015); The Authors (2016-present)

Abstract

In the human eye, all visual information must traverse the retinal ganglion cells. The most numerous subclass, the midget retinal ganglion cells, are believed to underlie spatial pattern vision. Thus the density of their receptive fields imposes a fundamental limit on the spatial resolution of human vision. This density varies across the retina, declining rapidly with distance from the fovea. Modeling spatial vision of extended or peripheral targets thus requires a quantitative description of midget cell density throughout the visual field. Through an analysis of published data on human retinal topography of cones and ganglion cells, as well as analysis of prior formulas, we have developed a new formula for midget retinal ganglion cell density as a function of position in the monocular or binocular visual field.

Introduction

The spatial resolution of human photopic vision is limited by optical blur, and beyond that by spatial sampling by retinal neurons. The initial sampling is by the inner segments of the cone photoreceptors, and subsequent resampling of their signals is performed, via various interneurons, by the retinal ganglion cells (RGC).
These are the output cells of the human eye, and consequently their properties limit the signal that travels to the rest of the brain. One class of these cells, the midget retinal ganglion cells (mRGC), is the most numerous; near the fovea they appear to sample a single cone, while in peripheral retina they gather signals from multiple cones (Ahmad, Klug, Herr, Sterling, & Schein, 2003; Dacey, 1993; Dacey & Petersen, 1992; Goodchild, Ghosh, & Martin, 1996; Kolb & Dekorver, 1991; Schein, 1988). In consequence the mRGC likely set an upper bound on the spatial resolution of human vision, especially at low temporal frequencies (Hirsch & Curcio, 1989; Merigan & Eskin, 1986; Merigan & Katz, 1990; Rossi & Roorda, 2010; Thibos, Cheney, & Walsh, 1987).

From a sampling point of view, the critical metric of the mRGC lattice is the local density or spacing of adjacent mRGC receptive fields (mRGCf). Because this spacing varies across the visual field, and because of its fundamental role in modeling human visual spatial processing, it would be valuable to have a formula for mRGCf spacing as a function of location in the visual field.

An earlier formula for mRGCf density was developed by Drasdo, Millican, Katholi, and Curcio (2007). While this formula was an important contribution, it was largely based on psychophysical results (acuity vs. eccentricity). Since we would like to use our formula to make psychophysical predictions, we sought to develop a formula based only on anatomical data. Barten (1999) also produced a formula for RGC density along an average meridian, but did not provide a derivation for his result.

Dacey (1993) also provided a figure depicting estimated average midget ganglion cell density as a function of eccentricity. However, his estimates are based on highly variable estimates of dendritic field size and an assumption of unit coverage (the product of density and field area) throughout the visual field.
Also, separate estimates for the four meridians are not provided. Nonetheless, we show in the Discussion that our new formula is consistent with his empirical results.

The approach that we have taken is to seek a simple analytic formula that approximately satisfies known or probable anatomical constraints.

Curcio, Sloan, Kalina, and Hendrickson (1990) measured the distribution of cone photoreceptors across the retina in a set of eight human eyes. Consistent with earlier fragmentary reports (Osterberg, 1935), they found that density declined rapidly with eccentricity. They also described substantial meridional asymmetries, and large individual differences in peak density.

In a second paper six of those eyes, along with one additional eye, were used to measure the distribution of retinal ganglion cells (Curcio & Allen, 1990). This distribution also varies markedly with eccentricity, but unlike the cone distribution, it does not peak at the fovea. This is because in a central retinal zone the ganglion cell bodies are displaced centrifugally some distance from the inner segments of the cones to which they are connected through the bipolar cells, and thus from their receptive fields. This displacement zone continues up to eccentricities of around 13°–17°, depending on the meridian (Drasdo et al., 2007). The extent of ganglion cell displacement as a function of eccentricity of the cell body has been measured in primate (Schein, 1988; Wassle, Grunert, Rohrenbeck, & Boycott, 1990) and human (Drasdo et al., 2007). In human these displacements are as large as 2.2°. As a result, the peak RGC density occurs some 4°–5° away from the center of the fovea. Thus the local density of the cell bodies does not reflect the local density of the RGC receptive fields (RGCf).

However, the RGC distribution combined with several other constraints does allow a plausible reconstruction of the distribution of RGCf.
The constraints that we consider are:

1. Along a given meridian, the cumulative distributions of RGC and RGCf must agree outside the displacement zone.
2. In the fovea, it is likely that each cone connects (via bipolars) to exactly two mRGC (Kolb & Dekorver, 1991).
3. Near the fovea midgets constitute most but not all of the ganglion cells. The ratio RGC/mRGC is given as about 1.12 by Drasdo et al. (2007).
4. The hypothetical distribution of RGCf must be consistent with the measured distribution of RGC outside of the displacement zone.

Making use of these constraints, this report derives a new formula for RGC density as a function of eccentricity along the four principal meridians, and more generally of position on the retina or in the visual field in degree coordinates.

The derivation relies on transformations from retinal coordinates in mm to degrees, based on a model eye (Drasdo & Fowler, 1974) and an assumed offset between optical and visual axes (Charman, 1991), which are described in Appendix 6. We also place in Appendix 5 a number of formulas relating various metrics of points in a hexagonal sampling lattice, such as spacing, density, and row spacing. We also provide with this report (see Appendix 2) a supplementary file of Mathematica functions that implement our formulas (Wolfram Research Inc.), as well as an interactive demonstration that computes RGCf density and spacing at a selected visual field location (see Appendix 1).

Conventions regarding meridians, locations, and the visual center

In retinal anatomy, locations are often specified as a distance from a retinal center along one of the four principal meridians. Because the topography is not radially symmetric, or even bilaterally symmetric, measurements differ along the four meridians. The naming of the meridians is a possible source of confusion, since different conventions are typically used for retinal anatomy and visual fields.
Specifically, the nasal retina (the part nearest the nose) images the temporal visual field (the part away from the nose), and vice versa, in both eyes. We avoid this confusion by always referring to visual field locations, even when discussing anatomy. We order the meridians temporal, superior, nasal, inferior, consistent with increasing polar angle in the visual field of the right eye, and assign them indexes of 1–4. When cartesian coordinates are used, positive x-coordinates refer to the temporal visual field of either eye, or the right binocular visual field.

A further possible confusion is that in the right eye, the temporal visual field is the right visual field, while in the left eye, it is the left visual field. Thus when we compute binocular visual fields, we combine the temporal field of the right eye with the nasal field of the left eye, and vice versa.

A final possible confusion is the definition of the retinal center. Remarkably, there appears to be no consistent term for this concept. What we want is the "visual center," defined essentially as the intersection of the visual axis with the retina. It is not the same as the "fovea," which is an area, not a point. It may well correspond to the point of highest cone density, and is operationally defined as the retinal location that images a fixated point, sometimes called the "preferred retinal locus of fixation" or PRLF (Rossi & Roorda, 2010). We will assume that anatomical measurements and visual field locations are referenced to this common visual center.

Notation

In Appendix 3 we provide a more complete review of notation, but here we introduce some general conventions. The symbol r will indicate eccentricity in degrees, d will indicate density in deg^−2, and s will indicate spacing (of adjacent cells or receptive fields) in degrees. We use subscripts g, m, and c to denote RGC, mRGC, and cones respectively, and gf and mf to denote RGC and mRGC receptive fields.
A particular meridian will be indicated by an integer index k.

Cone densities

Curcio et al. (1990) measured cone photoreceptor density across the retina in eight human eyes. Average data are provided in tables of cone densities in cones/mm^2 in the four principal meridians at each of 34 eccentricities in mm (Curcio, 2006). Using the conversion formulas described in Appendix 6, we have converted the densities to cones/deg^2 as a function of eccentricity in degrees, as shown in Figure 1. Writing dc(r, k) for the cone density at eccentricity r degrees along meridian k, we note that the foveal peak is dc(0, 1) = dc(0) = 14,804.6 cones/deg^2. This peak density is plotted at the upper left on this log-log plot.

Figure 1. Cone density as a function of eccentricity (Curcio et al., 1990). Foveal density is indicated by the line segment at the upper left. The gap in the temporal meridian corresponds to the blind spot.

RGC densities

Curcio and Allen (1990) measured local density of RGC cell bodies in seven retinas (including six of those used above to measure cone densities). Curcio (2013) has provided tables of average RGC densities in RGC/mm^2 in the four principal meridians at each of 35 eccentricities in millimeters. We have again converted these values to densities in RGC/deg^2 as a function of eccentricity in degrees, and the results are plotted in Figure 2. We omit one point in the inferior meridian with a density below 1. The peak density of about 2,375 RGC/deg^2 occurs not at the foveal center but at an eccentricity of about 3.7°. This is because, as noted above, ganglion cell bodies within a displacement zone extending out as far as 17° are displaced centrifugally from their cone inputs.
Thus the RGC densities within this zone cannot be used directly as an estimate of the densities of the RGCf.

Figure 2. RGC density as a function of eccentricity in four meridians (Curcio & Allen, 1990). The gap in the temporal meridian corresponds to the blind spot.

The first constraint noted above is that along a given meridian, the cumulative distributions of RGC and RGCf must agree at the limit of the displacement zone. In Figure 3 we show the estimated cumulative number of RGC as a function of eccentricity along each meridian for eccentricities up to 20°. For each meridian, the counts assume a radially symmetric density function, and are produced through linear interpolation of the data shown in Figure 2. To compute cumulative counts as a function of eccentricity, we integrate density at eccentricity r multiplied by 2πr to account for the increasing area. The circular point on each curve marks the cumulative count at the approximate limit of the displacement zone (11° for temporal, 17° for the others). When we construct a candidate function for the density of RGCf, its cumulative value must approximately agree at these points.
In other words, the total number of receptive fields must equal the total number of cell bodies within the displacement zone.

Figure 3. Cumulative total number of RGC along four meridians.

Foveal density of RGC receptive fields

If each foveal cone drives exactly two midget retinal ganglion cells, as specified in our second constraint, then the foveal density of midget retinal ganglion cell receptive fields must be twice the cone density,

d_mf(0) = 2 d_c(0).   (1)

If midget cells constitute a fraction f(r) of all ganglion cells at eccentricity r, then

d_mf(r) = f(r) d_gf(r),   (2)

and thus

d_gf(0) = 2 d_c(0) / f(0).

As noted below, f(0)^−1 has been estimated as 1.12 (Drasdo et al., 2007) or 1.09 (Dacey, 1993). For reasons outlined below, here we adopt the value of 1.12, in which case dgf(0) = 2 × 1.12 × 14,804.6 = 33,163.2 deg^−2.

Density of RGC receptive fields

We now attempt to discover a function that will describe RGCf density as a function of eccentricity and satisfy the constraints outlined in the Introduction. For each candidate function, we optimized the several parameters with respect to an error function consisting of the sum of the squared errors between empirical and computed log densities for eccentricities outside the exclusion zone, and the weighted squared log of the ratio of empirical and computed cumulative counts within the exclusion zone (see points in Figure 3). This ensures a reasonable fit to peripheral RGC densities, and to the cumulative counts. We used log densities in the fit to accommodate the very wide range of densities, and to avoid giving the larger densities undue influence in the fit.
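The foveal value quoted above can be verified with a line of arithmetic. This sketch simply re-multiplies the quantities given in the text; the small difference from the quoted 33,163.2 deg^−2 presumably reflects rounding of the peak cone density.

```python
dc0 = 14804.6        # peak cone density, cones/deg^2 (Curcio et al., 1990)
mrgcf_per_cone = 2   # two midget RGCf per foveal cone (constraint 2)
rgc_per_mrgc = 1.12  # f(0)^-1, from Drasdo et al. (2007)

# Foveal RGCf density: twice the cone density, scaled up by the
# inverse midget fraction.
dgf0 = mrgcf_per_cone * rgc_per_mrgc * dc0
print(round(dgf0, 1))  # ~33,162.3 deg^-2
```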
We explored a wide range of functions, leading to the best-fitting one described below.

Since the work of Aubert and Foerster (1857) it has been observed that many measures of visual resolution decline in an approximately linear fashion with eccentricity, at least up to the eccentricity of the blind spot (Strasburger, Rentschler, & Juttner, 2011). Because resolution may depend on receptive field spacing, and since density is proportional to the inverse of spacing squared (see Appendix 5), this suggests that density might vary with eccentricity as

d_gf(r) = d_gf(0) (1 + r/r_2)^−2,

where d_gf(0) is the density at r = 0, and r_2 is the eccentricity at which density is reduced by a factor of four (and spacing is doubled). By itself, this did not provide a good fit, especially at larger eccentricities. However, we found that a simple modification, the addition of an exponential, yielded an acceptable fit. The new function is given by

d_gf(r, k) = d_gf(0) [ a_k (1 + r/r_{2,k})^−2 + (1 − a_k) exp(−r/r_{e,k}) ],   (4)

where a_k is the weighting of the first term, and r_{e,k} is the scale factor of the exponential. The meridian is indicated by the index k. We have fit this expression separately for each meridian and optimized parameters relative to the error function described above. The results are shown in Figure 4. For each meridian, we show the average RGC densities reported by Curcio and Allen (1990), along with the fitted function. The vertical gray line in each figure shows the assumed limit of the displacement zone. Note that only data points outside the displacement zone are used in the fit. The estimated parameters, predicted cell counts, and fitting error are given in Table 1.

Figure 4. Ganglion cell density as a function of eccentricity in four meridians. Data points are from Curcio and Allen (1990). The solid curve is the fit of Equation 4 to the points outside the displacement zone. The dashed gray line indicates the approximate limit of the displacement zone. The density at eccentricity of zero is indicated by the line segment at the upper left.

Table 1. Parameters and error for fits of Equation 4 in four meridians. Note: Also shown are measured and predicted cumulative counts along each meridian within the displacement zone. The next-to-last column shows the fitting error outside the displacement zone. The last column indicates the assumed limit of the displacement zone.

Meridian   k   a        r_2      r_e     Data count (×1000)   Model count (×1000)   Error   r_z
Temporal   1   0.9851   1.058    22.14   485.1                485.7                 0.23    11
Superior   2   0.9935   1.035    16.35   526.1                528.9                 0.12    17
Nasal      3   0.9729   1.084    7.633   660.9                661.1                 0.01    17
Inferior   4   0.996    0.9932   12.13   449.3                452.1                 0.93    17

The fits are good for three of the four meridians. Both the peripheral densities and the cumulative counts are in close agreement. The agreement is less good for the inferior meridian, largely due to the unusual distribution of the far peripheral densities. The anomalous bump at around 60° and subsequent rapid decline are difficult to fit with simple analytic functions.
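The fitted density function and the Table 1 parameters are straightforward to evaluate. The Python sketch below is illustrative only (the paper's own supplement is in Mathematica); the function name and dictionary layout are ours, and the foveal density is computed from the values quoted in the text.

```python
import math

DC0 = 14804.6          # peak cone density, cones/deg^2
DGF0 = 2 * 1.12 * DC0  # foveal RGCf density, deg^-2

# (a, r_2, r_e) per meridian, from Table 1.
PARAMS = {
    "temporal": (0.9851, 1.0580, 22.14),
    "superior": (0.9935, 1.0350, 16.35),
    "nasal":    (0.9729, 1.0840, 7.633),
    "inferior": (0.9960, 0.9932, 12.13),
}

def rgcf_density(r, meridian):
    """RGC receptive-field density (deg^-2) at eccentricity r (deg):
    a double-exponential-style sum of a power-law and an exponential term."""
    a, r2, re = PARAMS[meridian]
    return DGF0 * (a * (1 + r / r2) ** -2 + (1 - a) * math.exp(-r / re))
```

At r = 0 the bracketed term is a + (1 − a) = 1, so the formula returns the foveal density for every meridian, as it should.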
For comparison, in Figure 5 we show the RGCf density formula in the four meridians, replotted from Figure 4.

Figure 5. RGCf density formula as a function of eccentricity for four meridians, replotted from Figure 4.

RGC displacement

One test of our formula for the density of RGCf is to compare the predicted density of RGC within the displacement zone with the actual densities measured by Curcio and Allen (1990). To generate these predictions we make use of measured displacements from cone inner segment to retinal ganglion cell body in six human retinas along the horizontal meridian (Drasdo et al., 2007). For the separate nasal and temporal meridians, they provided a mathematical formula to describe the average displacement as a function of RGC eccentricity. This can be converted to a function of cone inner segment eccentricity. A difficulty with this function is that it does not provide the correct result at an eccentricity of zero. It should indicate the displacement corresponding to the RGC closest to the fovea, but instead yields a value of 0. Rather than use their formula directly, we have instead constructed a new formula to describe displacement as a function of cone inner segment eccentricity. We constructed this new function to be a reasonable fit to the function of Drasdo, but to also yield a displacement of h(0) > 0° at 0° eccentricity. There is some disagreement about the value of h(0). Drasdo stated that the first RGC are located 0.15 to 0.2 mm (0.53°–0.71°) from the fovea, whereas Curcio provides nonzero RGC densities at eccentricities as small as about 0.2° (see Figure 2).
We have assumed a value of 0.5°, but a value of 0.3° gives very similar results.

The displacement function we used is the probability density function of the generalized Gamma distribution (multiplied by the gain δ), given by

[Equation 5: image not recovered; the generalized Gamma density of (r − μ), with parameters α, β, γ listed in Table 2.]

We attach no particular significance to the function or its parameters. It serves only as a device to displace hypothetical RGC cell bodies, as will be discussed below. This new function is shown in Figure 6, along with the corresponding functions provided by Drasdo for the two meridians. The parameters are provided in Table 2. Note that only three of the parameters are independent; the other two are constrained by the value of h(0) and by the peak value, which we set to the maximum of the fitted values.

Figure 6. Modeled displacement functions (solid curve) and points from the formula of Drasdo et al. (2007).

Table 2. Parameters of the displacement function (Equation 5 and Figure 6).

Meridian   α        β (deg)   γ         δ        μ (deg)
Temporal   1.8938   2.4598    0.91565   14.904   −0.09386
Nasal      2.4607   1.7463    0.77754   15.111   −0.15933

Next we generated a population of RGC with eccentricities based on our density formula (Equation 4). Eccentricities were random within annuli of 0.05°. We then displaced each RGC centrifugally according to the displacement function (Equation 5, Figure 6). The density of the displaced cells was then computed. Results are shown in Figure 7. Note that this test can only be conducted on the two horizontal meridians, along which displacement was measured. The actual and predicted densities are in reasonable agreement. In one case the peak densities are too low, and in the other too high, but the height and shape of the distributions are approximated.
Note that the displacements and densities were estimated from a different (but overlapping) set of eyes, and thus complete agreement is not expected.

Figure 7. Modeled and measured density of displaced retinal ganglion cells in temporal (left) and nasal (right) meridians.

Midget RGCf density

Midget cells make up most but not all of the retinal ganglion cells, and their proportion varies with eccentricity. In general, the density of midget retinal ganglion cell receptive fields is given by

d_mf(r) = f(r) d_gf(r),   (6)

where f(r) is the fraction of retinal ganglion cells that are midgets, as introduced in Equation 2. Two estimates of f(r) have been provided in the literature.

Dacey (1993) estimated mRGC dendritic field diameters at various eccentricities in whole mounts of human retina. Careful examination of dendritic trees at several midperipheral and peripheral locations indicated a coverage (dendritic field area × density) of approximately 1. By assuming that this coverage remained constant at 1 throughout the retina, Dacey estimated the density of midget cells as the inverse of dendritic field area. The ratio of that density to Curcio and Allen's (1990) estimates of RGC density (beyond about 3.5°) provided an estimate of the fraction of RGC that are midget cells. These estimates of Dacey are shown as the points in Figure 8. They range from over 95% near the fovea to less than 50% in the far periphery.

Figure 8. Estimated fraction of RGC that are midget cells as a function of eccentricity.

A second estimate of f(r) was provided by Drasdo et al. (2007) as the formula

f(r) = f(0) (1 + r/r_m)^−1,   (7)

where f(0) = 1/1.12 = 0.8928 and r_m = 41.03°. The formula is shown by the curve in Figure 8.
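The midget fraction is a one-line function; this sketch simply encodes the constants quoted in the text (f(0) = 0.8928, r_m = 41.03°), and the function name is ours.

```python
def midget_fraction(r, f0=0.8928, rm=41.03):
    """Fraction of RGC that are midget cells at eccentricity r (deg),
    per the Drasdo et al. (2007) formula quoted in the text."""
    return f0 / (1 + r / rm)
```

The fraction starts near 0.89 at the fovea, halves at r = r_m, and falls below 0.5 in the far periphery, matching the qualitative range of the Dacey points in Figure 8.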
This formula was derived from an iterative fit of a more elaborate expression involving both psychophysical measures and anatomical measures (Drasdo et al., 2007). Drasdo's and Dacey's measures agree generally that the fraction declines with eccentricity, and roughly agree in foveal and peripheral asymptotes.

One problem with Dacey's method is that from about 3.5° to 17° it includes both RGC densities and dendritic field measurements of displaced cells. Hence his estimates at those eccentricities may not reflect the fraction of receptive fields belonging to midget cells. For this reason, and until better estimates can be obtained, we adopt Drasdo's formula to compute the density of mRGCf. Combining Equations 2, 4, 6, and 7, we have

d_mf(r, k) = 2 d_c(0) (1 + r/r_m)^−1 [ a_k (1 + r/r_{2,k})^−2 + (1 − a_k) exp(−r/r_{e,k}) ].   (8)

This formula is plotted in Figure 9 for the four meridians.

Figure 9. mRGCf density formula as a function of eccentricity (Equation 8).

mRGCf spacing

On the assumption of hexagonal packing (Equation A4), the spacing of adjacent midget receptive fields is given by

s_mf(r, k) = sqrt( 2 / (√3 d_mf(r, k)) ).   (9)

Spacing at a binocular horizontal eccentricity can be computed by averaging densities at corresponding eccentricities in temporal and nasal meridians and converting to spacing. We can also compute the "mean" spacing, the average of all four meridians, by averaging densities and converting to spacing. These formulas for spacing are plotted in Figure 10. We show the individual meridians and the "Horizontal" and "Mean" versions.

Figure 10. Formula for mRGCf spacing.

Note that the midget RGC are composed of approximately equal numbers of "on" and "off" center cells (we neglect reports of asymmetry between the two types). Typically we are concerned with the spacing within one class, in which case density is halved and the spacings should be multiplied by √2.
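The steps above, from eccentricity to midget receptive-field spacing, can be chained into a small pipeline. This is an illustrative sketch, not the paper's supplement: the parameters come from Table 1 and the constants quoted in the text, and the leading factor 2·d_c(0) follows from the identity f(0)·d_gf(0) = 2·d_c(0).

```python
import math

DC0 = 14804.6  # peak cone density, cones/deg^2
RM = 41.03     # midget-fraction scale constant, deg

# (a, r_2, r_e) per meridian, from Table 1.
PARAMS = {
    "temporal": (0.9851, 1.0580, 22.14),
    "superior": (0.9935, 1.0350, 16.35),
    "nasal":    (0.9729, 1.0840, 7.633),
    "inferior": (0.9960, 0.9932, 12.13),
}

def mrgcf_density(r, meridian):
    """Midget RGCf density (deg^-2) at eccentricity r (deg):
    the RGCf density shape times the declining midget fraction."""
    a, r2, re = PARAMS[meridian]
    shape = a * (1 + r / r2) ** -2 + (1 - a) * math.exp(-r / re)
    return 2 * DC0 * shape / (1 + r / RM)

def spacing_from_density(d):
    """Spacing (deg) of adjacent fields in a hexagonal lattice of density d."""
    return math.sqrt(2 / (math.sqrt(3) * d))

def on_or_off_spacing(r, meridian):
    """Spacing within the on- or off-center mosaic alone: density is
    halved, so spacing grows by sqrt(2)."""
    return math.sqrt(2) * spacing_from_density(mrgcf_density(r, meridian))
```

At r = 0 the density reduces to 2·d_c(0), i.e., twice the cone density, consistent with the second constraint.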
In Figure 11 we show the formula scaled in this way for eccentricities between 0° and 10°, and we express the spacings in arcmin.

Figure 11. Formula for on or off mRGCf spacing for eccentricities 0°–10°.

Averaged across meridians, the computed spacing of mRGCf is very nearly linear (R^2 = 0.9997). This allows us to write the following simple approximation for average spacing in either on- or off-center mosaic, where both s and r are expressed in degrees:

[Equation 10: image not recovered; a linear function of eccentricity r.]

This convenient result is due to the form of Equation 4, and the fact that the second exponential term does not have much effect for eccentricities less than 30°.

Extension to arbitrary retinal locations

To this point we have developed formulas describing spacing as a function of eccentricity along each of the four principal meridians. We would like to extend these formulas to describe spacing at an arbitrary point {x, y} in the retina. To do this we make the assumption that within any one quadrant of the retina the iso-spacing contours are ellipses. This is consistent with the idea that spacing changes smoothly with the angle of a ray extending from the visual center. An example is shown in Figure 12.
The eccentricities at the intersections of the ellipse with the two enclosing meridians are r_x and r_y.

Figure 12. A hypothetical iso-spacing curve in one quadrant, including a point {x, y} and points on the two enclosing meridians.

Under the ellipse assumption, we can write

(x/r_x)^2 + (y/r_y)^2 = 1,   (11)

and because they are on an iso-spacing curve,

s(r_x, k_x) = s(r_y, k_y).   (12)

Given numerical values for x and y, we can solve Equations 11 and 12 together to find numerical solutions for r_x and r_y, and then we can compute

s(x, y) = s(r_x, k_x).   (13)

To avoid solving a system of equations, we have found that the following approximation works well. Let r_xy be the radial eccentricity of the point {x, y},

r_xy = sqrt(x^2 + y^2).   (14)

Then we compute

[Equation 15: image not recovered; the approximation combines the two meridional spacing functions evaluated at r_xy.]

This approximation is always within 1.7% of the value obtained by Equations 10 through 12. Equation 15 is easily generalized to work for arbitrary retinal quadrants. Since the sign of the horizontal coordinate is arbitrary, we define positive x values to mean the temporal visual field. This corresponds to the right visual field for the right eye, and the left visual field for the left eye.

Extension to arbitrary binocular visual field locations

Equation 15 computes mRGC spacing at locations specified in visual field coordinates in one eye. In psychophysical modeling of natural vision, it is useful to compute spacing at locations specified in the binocular visual field. To do this we compute the spacing at corresponding visual field locations in the two eyes, convert them to densities (Equation 2), compute their mean, and convert back to spacing (Equation A4). After simplifying, we obtain the following result for binocular spacing s_B:

s_B = √2 s_L s_R / sqrt(s_L^2 + s_R^2),   (16)

where s_L and s_R are the monocular spacings at the corresponding locations in the left and right eyes. With this function we can compute a plot of the Nyquist frequency over the binocular visual field, as shown in Figure 13.
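The binocular combination procedure just described (convert spacings to densities, average, convert back) can be checked numerically. This sketch assumes the hexagonal-lattice relation d = 2/(√3 s^2); under that procedure the combined spacing reduces algebraically to √2·s_L·s_R/√(s_L^2 + s_R^2). The function names are ours.

```python
import math

def density_from_spacing(s):
    """Hexagonal-lattice density (deg^-2) for spacing s (deg)."""
    return 2 / (math.sqrt(3) * s ** 2)

def spacing_from_density(d):
    """Inverse of density_from_spacing."""
    return math.sqrt(2 / (math.sqrt(3) * d))

def binocular_spacing(s_left, s_right):
    """Combine monocular spacings by averaging the two densities and
    converting the mean density back to a spacing."""
    d_mean = 0.5 * (density_from_spacing(s_left) + density_from_spacing(s_right))
    return spacing_from_density(d_mean)
```

For equal inputs the result is unchanged; for unequal inputs it matches the closed form above.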
In this calculation we have divided the value by √2, based on the assumption of overlapping lattices of on- and off-center cells. The peak value is 65.4 cycles/deg. Because there is some ambiguity about the best way to combine the densities of the two eyes, in our supplementary materials we also provide functions that compute the maximum density, or the total density of the two eyes.\nFigure 13\n\nNyquist frequency over the binocular visual field based on the density formula and assuming separate and equal on- and off-center populations.\nFigure 13\n\nNyquist frequency over the binocular visual field based on the density formula and assuming separate and equal on- and off-center populations.", null, "Discussion\nRatio of midget RGC and cones\nIn Figure 14 we plot the ratio between mRGCf densities computed from Equation 8 and cone densities reported by Curcio et al. (1990). Although we did not impose this as a constraint in estimating a function for the density of ganglion cells, the ratio remains close to 2 for the central several degrees. This is roughly consistent with Dacey's report that up to about 4° mRGC dendritic fields remain at a minimum size appropriate to connection with a single bipolar, and thereby to a single cone. Beyond about 6°, he found that dendritic fields enlarge and begin to show clusters suggesting input from multiple cones, and consistent with the decline in the ratio in Figure 14. Our result is also consistent with Schein's estimate that the ratio remains constant out to about 2.5° in primates (Schein, 1988).\nFigure 14\n\nRatio of mRGCf to cones as a function of eccentricity based on the Watson formula for mRGCf density.\nFigure 14\n\nRatio of mRGCf to cones as a function of eccentricity based on the Watson formula for mRGCf density.", null, "Comparison with Drasdo et al. (2007)\nIn Figure 15 we compare mRGCF spacing computed from our formula to densities computed from the formula of Drasdo et al. 
(2007), converted to spacings by Equation A4. While there is considerable agreement, there are some significant discrepancies. In particular, the curves for the superior and inferior meridians are nearly interchanged in the two formulas. Our formula is consistent with Curcio and Allen (1990), who clearly show higher density (smaller spacing) in superior versus inferior meridians beyond about 6° (see Figure 2), while Drasdo's formula shows the opposite, for unknown reasons. We also note that the Drasdo formula is defined only up to 30°, while ours extends at least as far as 90°.

Figure 15

Comparison of formulas of Watson and Drasdo et al. (2007) (dashed).

Comparison with dendritic field diameter

Dacey measured dendritic field diameters of midget ganglion cells in human retina (Dacey, 1993; Dacey & Petersen, 1992). In Figure 16 we have reproduced his Figure 4B, which shows diameter as a function of eccentricity. The filled points are for the temporal quadrant, open points are for the other quadrants. Near the fovea, where mRGC connect to a single cone, we do not expect any relationship between field diameter and cell spacing. However, in the periphery, Dacey found that within either the on- or off-center lattice, spacing and diameter were about equal. This allows us to compare his measures of diameter to our spacing formula for equivalent eccentricities and quadrants. Formula values for the temporal meridian and the mean of the other three meridians are shown by the colored curves in Figure 16. The computed spacing is for either the on- or off-center lattice. The agreement is reasonable, especially considering the sizable scatter in measurements of diameter.

Figure 16

Dendritic field diameter as a function of eccentricity for human mRGC as measured by Dacey (1993). Filled points are for the temporal meridian; open points are for other quadrants. Gray curve is Dacey's estimate of the mean. Red and blue curves show spacing calculated from our formula for the temporal meridian or the mean of the other meridians.

Comparison with acuity

Rossi and Roorda (2010) provide estimates of letter acuity, expressed as minimum angle of resolution (MAR), for five observers viewing targets under adaptive optics conditions. The targets were at nasal visual field locations between 0° and 2.5°. In Figure 17 we compare their results with the computed row spacing, assuming separate on- and off-cell lattices. We use separate lattices on the assumption that, at least near the fovea, both an on and an off midget cell are required to signal the signed value of local contrast. In addition, where each cone drives one on and one off midget, we know the two midgets have the same receptive field location. Specifically, the function returned by Equation 9 is multiplied by 60 √2 √3/2 = 30 √6 to reflect the halved density, conversion to row spacing, and conversion to arcmin. The agreement is excellent. This is not surprising, since Rossi and Roorda previously showed good agreement with the formula of Drasdo et al.
(2007), to which ours is similar for small eccentricities.

Figure 17

Human letter acuity (points) along the nasal meridian of five observers from Rossi and Roorda (2010) and row spacing from our formula for the same meridian (line).

Estimates of peripheral acuity are complicated by the possibility of aliasing. Anderson, Mullen, and Hess (1991) attempted to bypass this problem by using direction discrimination of drifting gratings. Their results are plotted in Figure 18, along with calculations of Nyquist frequency of the on- or off-center mRGCf lattice from Equations 8 and A3. The agreement is reasonable. One caveat regarding the comparison at r = 0 is that these data were collected with Gabor targets that extended (at half height) well over 0.5°, so that performance may reflect the average spacing over that area. The precise relationship between mRGCf spacing and acuity is beyond the scope of this paper (Anderson & Thibos, 1999); here we only point to the general agreement in both the shape and absolute level of the calculations.

Figure 18

Human grating acuity (points) from Anderson et al. (1991) and calculated Nyquist frequency of midget RGC.

Comparison with Sjöstrand

In a series of papers Sjöstrand, Popovic, and colleagues measured human RGC densities at eccentricities from about 2° to 34° along the vertical meridian in sectioned human retinas (Popovic & Sjöstrand, 2001, 2005; Sjöstrand, Olsson, Popovic, & Conradi, 1999; Sjöstrand, Popovic, Conradi, & Marshall, 1999). From these densities, using their own estimates of displacement, they inferred RGC spacing at various eccentricities.
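The density-to-spacing conversions at issue in these comparisons are simple hexagonal-lattice relations (reviewed in Appendix 5); a minimal sketch (function names are ours), checked against the Table A2 values:

```python
import math

SQRT3 = math.sqrt(3.0)

def point_spacing(density):
    """Point spacing S (deg) of a hexagonal lattice with density D (deg^-2): S = sqrt(2/(sqrt(3) D))."""
    return math.sqrt(2.0 / (SQRT3 * density))

def row_spacing(density):
    """Row spacing R (deg); rows lie sqrt(3)/2 times the point spacing apart."""
    return (SQRT3 / 2.0) * point_spacing(density)

def nyquist(density):
    """Nyquist frequency N (cycles/deg) supported by the row spacing: N = 1/(2R)."""
    return 1.0 / (2.0 * row_spacing(density))

# Check against Table A2: the peak on-center mRGCf density is half the total
# mRGCf density of 29,609.2 deg^-2, giving ~0.5299 arcmin spacing and
# ~65.37 cycles/deg Nyquist.
d_on = 29609.2 / 2.0
s_arcmin = 60.0 * point_spacing(d_on)
n_peak = nyquist(d_on)
```

The point spacing is 2/√3 ≈ 1.155 times the row spacing, which is exactly the factor at issue in the Sjöstrand comparison below.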
Their formula for conversion from density to spacing actually yields the row spacing (Equation A1), not the spacing between cells (Equation A4), which is 2/√3 larger. Even taking this into account, their values are only about 0.75 times those computed from our formula for the mean of superior and inferior meridians. However, their values are also discrepant with Drasdo's formula (Figure 6) and with spacing estimated from Dacey's estimates of mRGC field diameter (Figure 16). Some part of this discrepancy may arise from their formula for displacement, which though similar in form is only half the magnitude of ours or that of Drasdo, who has also commented on this discrepancy (Drasdo et al., 2007).

Popovic and Sjöstrand (2005) measured acuity of three observers at eccentricities between 5.8° and 26.4° in both eyes, one of which was subsequently enucleated. Ganglion cell densities and spacings were measured along the vertical meridian. Acuity (MAR) was measured using high-pass resolution perimetry. They found MAR was approximately proportional to RGC spacing over the full range of eccentricities. The constant of proportionality was rather large (4.24), especially compared to the value of 1 we have used in Figures 17 and 18. They note that correcting for the low contrast of their target (0.25), and considering only spacing in one class (on or off) of midget cells as we have done, would lower the constant to 1.43. The remaining discrepancy may be due to their low estimates of spacing, as noted above. In general, their results support the notion that psychophysical resolution is governed by mRGC spacing.

Midget fraction

Perhaps the least secure element of our formula is the midget fraction, the function describing the fraction of all ganglion cells that are midget as a function of eccentricity. As noted above (Figure 8), there are discrepancies between available estimates.
We have adopted the formula of Drasdo, but it is unclear whether that is accurate, especially at very large eccentricities, where it continues to descend to values as low as 0.25. In the periphery, where his estimates are arguably most accurate, Dacey's estimates appear to level off at about 0.5. Until more definitive estimates are available, we will have to acknowledge the speculative nature of this element.

Variability

I have based my formula on average densities of cones and retinal ganglion cells (Curcio & Allen, 1990; Curcio et al., 1990). Curcio et al. (1990) note a very large variation in peak cone density in their set of nine eyes, ranging from 98,200 to 324,100 mm−2 (7,385 to 24,372 deg−2) (coefficient of variation ∼0.46). When two anomalous eyes are excluded, the lower bound only increases to 166,000 mm−2 (12,483 deg−2). This variation largely disappeared at eccentricities beyond 1°, so that the total number of cones within a radius of about 3.6° (or over the entire retina) was nearly constant (coefficient of variation ∼0.1). However, more recent density estimates from in vivo measurement show a fairly consistent coefficient of variation (∼0.2) regardless of eccentricity (Song, Chui, Zhong, Elsner, & Burns, 2011). These latter authors have also shown an up to 25% decrement in density with age, primarily at eccentricities less than 1.6°. Curcio's data, and our formula, are consistent with the data for their younger group of observers.

Ganglion cell densities also show sizable individual differences (Curcio & Allen, 1990). However, perhaps in contrast to cones, the total number varies considerably, from 0.71 to 1.54 million cells over a set of six eyes. This variation seems to be consistent across the retina. Variation at the fovea cannot be directly determined because of the displacement of the RGC.

Beyond these variations between individuals and with respect to age, there may be additional sources of variability and measurement error.
Thus while a formula for the average may be useful, it is important to note that individuals may differ considerably from these computed values.

Conclusions

We have derived a mathematical formula for the density of receptive fields of human retinal ganglion cells as a function of position in the monocular or binocular visual field. Densities can also be computed for the receptive fields of the midget subclass of ganglion cells. Both spacing and position are expressed in degrees. The formula has several advantages over existing formulas, which are based on psychophysics, limited to small eccentricities, confined to specific meridians, or are inaccurate in the foveal region. Since the midget retinal ganglion cells provide the primary limit on human visual spatial resolution across the visual field, this formula may be useful in the modeling of human spatial vision.

Supplementary Materials

Acknowledgments

I thank Albert Ahumada, Jeffrey Mulligan, Dennis Dacey, Heinz Wässle, Joy Hirsch, Neville Drasdo, Tony Movshon, and Denis Pelli and two anonymous referees for comments on earlier versions of the manuscript. I thank Christine Curcio for providing the cone density and retinal ganglion cell data, and Ethan Rossi for providing the acuity data in Figure 17. This work was supported by the NASA Space Human Factors Research Project WBS 466199.

Commercial relationships: none.
Corresponding author: Andrew B. Watson.
Email: [email protected].
Address: NASA Ames Research Center, Moffett Field, CA, USA.

References

Ahmad K. M. Klug K. Herr S. Sterling P. Schein S. (2003). Cell density ratios in a foveal patch in macaque retina. Visual Neuroscience, 20 (2), 189–209.
Anderson R. S. Thibos L. N. (1999). Relationship between acuity for gratings and for tumbling-E letters in peripheral vision. Journal of the Optical Society of America A, 16 (10), 2321–2333, http://josaa.osa.org/abstract.cfm?URI=josaa-16-10-2321.
Anderson S. J. Mullen K. T. Hess R. F. (1991).
Human peripheral spatial resolution for achromatic and chromatic stimuli: Limits imposed by optical and retinal factors. Journal of Physiology, 442, 47–64, http://www.ncbi.nlm.nih.gov/pubmed/1798037.
Aubert H. R. Foerster C. F. R. (1857). Beiträge zur Kenntniss des indirecten Sehens. (I). Untersuchungen über den Raumsinn der Retina [Translation: Contributions to the knowledge of indirect vision: I. Studies on the sense of space of the retina]. Archiv für Ophthalmologie, 3, 1–37.
Barten P. G. J. (1999). Contrast sensitivity of the human eye and its effects on image quality. Bellingham, WA: SPIE Optical Engineering Press.
Charman W. N. (1991). Optics of the human eye. In Cronly-Dillon J. (Ed.), Visual optics and instrumentation (pp. 1–26). Boca Raton: CRC Press.
Curcio C. (2013). Curcio_JCompNeurol1990_GCtopo_F6.xls.
Curcio C. A. Allen K. A. (1990). Topography of ganglion cells in human retina. Journal of Comparative Neurology, 300 (1), 5–25.
Curcio C. A. Sloan K. R. Kalina R. E. Hendrickson A. E. (1990). Human photoreceptor topography. Journal of Comparative Neurology, 292 (4), 497–523, http://www.ncbi.nlm.nih.gov/pubmed/2324310.
Dacey D. M. (1993). The mosaic of midget ganglion cells in the human retina. Journal of Neuroscience, 13 (12), 5334–5355, http://www.ncbi.nlm.nih.gov/pubmed/8254378.
Dacey D. M. Petersen M. R. (1992). Dendritic field size and morphology of midget and parasol ganglion cells of the human retina. Proceedings of the National Academy of Sciences, USA, 89 (20), 9666–9670, http://www.ncbi.nlm.nih.gov/pubmed/1409680.
Drasdo N. Fowler C. W. (1974). Non-linear projection of the retinal image in a wide-angle schematic eye. British Journal of Ophthalmology, 58 (8), 709–714, http://www.ncbi.nlm.nih.gov/pubmed/4433482.
Drasdo N. Millican C. L. Katholi C. R. Curcio C. A. (2007). The length of Henle fibers in the human retina and a model of ganglion receptive field density in the visual field.
Vision Research, 47 (22), 2901– 2911, http://www.ncbi.nlm.nih.gov/pubmed/17320143.\nEmsley H. H. (1952). Visual optics (5th ed.). London: Hatton Press.\nGoodchild A. K. Ghosh K. K. Martin P. R. (1996). Comparison of photoreceptor spatial density and ganglion cell morphology in the retina of human, macaque monkey, cat, and the marmoset Callithrix jacchus. Journal of Comparative Neurology, 366 (1), 55– 75, http://www.ncbi.nlm.nih.gov/pubmed/8866846.\nHirsch J. Curcio C. A. (1989). The spatial resolution capacity of human foveal retina. Vision Research, 29 (9), 1095– 1101.\nKolb H. Dekorver L. (1991). Midget ganglion cells of the parafovea of the human retina: A study by electron microscopy and serial section reconstructions. Journal of Comparative Neurology, 303 (4), 617– 636, http://www.ncbi.nlm.nih.gov/pubmed/1707423.\nMerigan W. H. Eskin T. A. (1986). Spatio-temporal vision of macaques with severe loss of P-beta retinal ganglion cells. Vision Research, 26, 1751– 1761.\nMerigan W. H. Katz L. M. (1990). Spatial resolution across the macaque retina. Vision Research, 30 (7), 985– 991.\nOsterberg G. A. (1935). Topography of the layer of rods and cones in the human retina. Acta Ophthalmologica, 6 (Suppl. VI), 1– 97.\nPopovic Z. Sjöstrand J. (2005). The relation between resolution measurements and numbers of retinal ganglion cells in the same human subjects. Vision Research, 45 (17), 2331– 2338.\nPopovic Z. Sjöstrand J. (2001). Resolution, separation of retinal ganglion cells, and cortical magnification in humans. Vision Research, 41 (10), 1313– 1319.\nRossi E. A. Roorda A. (2010). The relationship between visual resolution and cone spacing in the human fovea. Nature Neuroscience, 13 (2), 156– 157, http://dx.doi.org/10.1038/nn.2465.\nSchein S. J. (1988). Anatomy of macaque fovea and spatial densities of neurons in foveal representation. Journal of Comparative Neurology, 269 (4), 479– 505, http://www.ncbi.nlm.nih.gov/pubmed/3372725.\nSjöstrand J. Olsson V. Popovic Z. 
Conradi N. (1999). Quantitative estimations of foveal and extra-foveal retinal circuitry in humans. Vision Research, 39 (18), 2987–2998.
Sjöstrand J. Popovic Z. Conradi N. Marshall J. (1999). Morphometric study of the displacement of retinal ganglion cells subserving cones within the human fovea. Graefe's Archive for Clinical and Experimental Ophthalmology, 237 (12), 1014–1023.
Song H. Chui T. Y. P. Zhong Z. Elsner A. E. Burns S. A. (2011). Variation of cone photoreceptor packing density with retinal eccentricity and age. Investigative Ophthalmology & Visual Science, 52 (10), 7376–7384, http://www.iovs.org/content/52/10/7376.
Strasburger H. Rentschler I. Jüttner M. (2011). Peripheral vision and pattern recognition: A review. Journal of Vision, 11 (5): 13, 1–82, http://www.journalofvision.org/content/11/5/13, doi:10.1167/11.5.13.
Thibos L. Cheney F. Walsh D. (1987). Retinal limits to the detection and resolution of gratings. Journal of the Optical Society of America A, 4 (8), 1524–1529.
Wässle H. Grünert U. Röhrenbeck J. Boycott B. B. (1990). Retinal ganglion cell density and cortical magnification factor in the primate. Vision Research, 30 (11), 1897–1911, http://www.ncbi.nlm.nih.gov/pubmed/2288097.
Wolfram Research Inc. (2013). Mathematica (Version 9.0). Champaign, IL: Author.

Appendix 1: Demonstration

As a supplement to this paper we provide an interactive calculator that returns density and spacing at arbitrary visual field locations. The demonstration requires the use of the Wolfram CDF player, available at https://www.wolfram.com/cdf-player/. An illustration of the calculator is shown in Figure 19.

Figure 19

Demonstration of Retinal Topography Calculator.

Appendix 2: Mathematica tables and functions

As a supplement we provide a Mathematica Notebook that contains a number of tables and functions derived and used in this report.
The Notebook is a text file, but is most readable using the free Wolfram CDF Player, available at https://www.wolfram.com/cdf-player/.

Appendix 3: Notation

The following is a table of notation used in this report.

Table A1

Notation.

r: Eccentricity in deg relative to the visual axis [deg]
k: Meridian index
d(r, k): Density of cells or receptive fields at eccentricity r along meridian k [deg−2]
s(r, k): Spacing between receptive fields at eccentricity r along meridian k [deg]
c, g, m, gf, mf: Subscripts to indicate cones, RGC, mRGC, RGCf, and mRGCf
h(r): Displacement of RGC from RGCf at eccentricity r [deg]
f(r): Fraction of RGC that are midget, as a function of eccentricity [dimensionless]
Δk: Offset between optic and visual axis along the specified meridian [mm]
r′: Eccentricity in deg relative to the optic axis [deg]
rmm: Eccentricity in mm relative to the visual axis [mm]
r′mm: Eccentricity in mm relative to the optic axis [mm]
N: Nyquist frequency of hexagonal lattice [cycles/deg]
S: Point spacing of hexagonal lattice [deg]
R: Row spacing of hexagonal lattice [deg]
D: Point density of hexagonal lattice [deg−2]

Appendix 4: Formula parameters and useful numbers

Table A2

Formula parameters and useful numbers.

Peak cone density: 14,804.6 deg−2
Peak RGCf density: 33,162.3 deg−2
Peak mRGCf density: 29,609.2 deg−2
Minimum on-center mRGCf (or cone) spacing: 0.5299 arcmin
Peak on-center mRGCf (or cone) Nyquist: 65.37 cycles/deg
f(0), midget fraction at zero eccentricity: 1/1.12 = 0.8928
rm, scale factor for decline in midget fraction with eccentricity: 41.03 deg

Appendix 5: Density, spacing, row spacing, and Nyquist frequency

At least in the fovea, both photoreceptors and midget ganglion cell receptive fields form an approximately hexagonal lattice.
Because we will make use of these formulas in the text, we review here the relationships among various metrics of a hexagonal lattice of points: S (deg) = the spacing between adjacent points, R (deg) = the spacing between rows of points, D (deg−2) = the density of points, and N (cycles/deg) = the Nyquist frequency of the lattice (the highest frequency that can be supported by a particular row spacing). These formulas will allow us to convert among these metrics. Then

R = √(√3/(2D))  (A1)

S = (2/√3) R  (A2)

N = 1/(2R) = √(D/(2√3))  (A3)

S = √(2/(√3 D))  (A4)

Appendix 6: Conversion formulas for retinal dimensions

Conversion of eccentricities in millimeters to degrees

Eccentricity is defined as distance from a visual center. Anatomical measurements of the retina often express eccentricity in millimeters. We would like to convert these measurements to degrees of visual angle (deg) relative to the visual axis. Drasdo and Fowler (1974) used a model eye to compute conversions from retinal distances in millimeters to degrees. Their presentation does not, however, provide analytical expressions of the relevant quantities, so we must derive them from the figures. In their figure 2 they show a "curve showing computed relationship between retinal arc lengths and visual angles from the optic axis." We have extracted the contour and fit it with a third order polynomial [equation]. In this equation, rmm refers to distance in mm, while r′ is the comparable measurement in degrees. The prime marking indicates a measurement relative to the optic axis.

We use this equation to translate optical eccentricities in degrees to millimeters. It is plotted in Figure A1a, along with a linear function with slope 0.268. The linear approximation is acceptable up to angles of 40°. We also fit to the transpose of the contour data, to obtain an inverse function [equation].

Figure A1

Relation between retinal distance from the optic axis in millimeters and degrees. The linear approximations shown as red dashed lines have slopes of 0.268 and 3.731, respectively.

We use this function, shown in Figure A1b, to convert optical eccentricities in millimeters to degrees. Dacey (1993) used a second order polynomial here, but its fit is poor at very small (0 mm) or large (22 mm) eccentricities. The linear approximation shown in the figure has a slope of 1/0.268 = 3.731.

Conversion of mm2 to deg2

The preceding formulas convert eccentricities (distances) from millimeters to degrees. We also need to convert local areas from mm2 to deg2. In their figure 5, Drasdo and Fowler (1974) show "variation of retinal area per solid degree with peripheral angle from the optic axis." We have fit this with a polynomial [equation], where a is the ratio of areas mm2/deg2. We have used this to convert cell densities in mm−2 to deg−2. The function is illustrated in Figure A2.

Figure A2

Area ratio as a function of angular distance from the optic axis.

Visual versus optical axis

Drasdo and Fowler's (1974) angular measurements are relative to the optic axis. Measurements of cone densities and psychophysical predictions are usually expressed relative to the visual axis. According to Charman (1991), quoting Emsley (1952), the optical axis intersects the retina 1.5 mm nasal and 0.5 mm superior to the visual axis. In the visual field, the optical center is thus 1.5 mm temporal of the visual center.
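One plausible way to implement this correction is to shift into optic-axis coordinates, convert to degrees, and subtract the offset in degrees. The sketch below is illustrative only: the helper names are ours, and the linear approximation (slope 3.731 deg/mm) stands in for the polynomial inverse conversion, which is not reproduced here.

```python
def mm_to_deg_linear(r_mm):
    """Linear stand-in for the inverse conversion (slope 3.731 deg/mm, acceptable to ~40 deg)."""
    return 3.731 * r_mm

def visual_mm_to_visual_deg(r_mm, delta_mm, mm_to_deg=mm_to_deg_linear):
    """Shift by the optic-visual axis offset (mm), convert to degrees,
    then subtract the offset expressed in degrees."""
    return mm_to_deg(r_mm + delta_mm) - mm_to_deg(delta_mm)

# With a purely linear conversion the offset cancels exactly, which is why the
# correction only matters where the true conversion is nonlinear (beyond ~40 deg).
r_deg = visual_mm_to_visual_deg(2.0, 1.5)  # 3.731 * 2.0 = 7.462
```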
To convert eccentricities from millimeters relative to the visual axis to degrees relative to the visual axis, we use the following approximation", null, "where m is the index of the meridian, and the corresponding offsets Δ from optic to visual axis in mm are given by", null, "We remind that primed eccentricities are relative to the optic axis. The relation between visual millimeters and visual degrees for the four meridians is shown in Figure A3. Note that without the correction of visual-optical offset, all curves would be superimposed and look like Figure A1b above.\nFigure A3\n\nRelation between distance from the visual axis in millimeters and degrees, with correction for offset between visual and optic axes.\nFigure A3\n\nRelation between distance from the visual axis in millimeters and degrees, with correction for offset between visual and optic axes.", null, "In Figure A4 we show the relation between eccentricities in degrees computed with and without the correction for the offset. Each curve is a parametric plot, where the millimeter argument is relative to either optical or visual axis. The correction is only significant above 40°.\nFigure A4\n\nEccentricities in degree computed with and without correction for offset between visual and optic axes.\nFigure A4\n\nEccentricities in degree computed with and without correction for offset between visual and optic axes.", null, "Figure 1\n\nCone density as a function of eccentricity (Curcio et al., 1990). Foveal density is indicated by the line segment at the upper left. The gap in the temporal meridian corresponds to the blind spot.\nFigure 1\n\nCone density as a function of eccentricity (Curcio et al., 1990). Foveal density is indicated by the line segment at the upper left. The gap in the temporal meridian corresponds to the blind spot.", null, "Figure 2\n\nRGC density as a function of eccentricity in four meridians (Curcio & Allen, 1990). 
The gap in the temporal meridian corresponds to the blind spot.\nFigure 2\n\nRGC density as a function of eccentricity in four meridians (Curcio & Allen, 1990). The gap in the temporal meridian corresponds to the blind spot.", null, "Figure 3\n\nCumulative total number of RGC along four meridians.\nFigure 3\n\nCumulative total number of RGC along four meridians.", null, "Figure 4\n\nGanglion cell density as a function of eccentricity in four meridians. Data points are from Curcio and Allen (1990). The solid curve is the fit of Equation 4 to the points outside the displacement zone. The dashed gray line indicates the approximate limit of the displacement zone. The density at eccentricity of zero is indicated by the line segment at the upper left.\nFigure 4\n\nGanglion cell density as a function of eccentricity in four meridians. Data points are from Curcio and Allen (1990). The solid curve is the fit of Equation 4 to the points outside the displacement zone. The dashed gray line indicates the approximate limit of the displacement zone. The density at eccentricity of zero is indicated by the line segment at the upper left.", null, "Figure 5\n\nRGCf density formula as a function of eccentricity for four meridians, replotted from Figure 4.\nFigure 5\n\nRGCf density formula as a function of eccentricity for four meridians, replotted from Figure 4.", null, "Figure 6\n\nModeled displacement functions (solid curve) and points from the formula of Drasdo et al. (2007).\nFigure 6\n\nModeled displacement functions (solid curve) and points from the formula of Drasdo et al. 
(2007).", null, "Figure 7\n\nModeled and measured density of displaced retinal ganglion cells in temporal (left) and nasal (right) meridians.\nFigure 7\n\nModeled and measured density of displaced retinal ganglion cells in temporal (left) and nasal (right) meridians.", null, "Figure 8\n\nEstimated fraction of RGC that are midget cells as a function of eccentricity.\nFigure 8\n\nEstimated fraction of RGC that are midget cells as a function of eccentricity.", null, "Figure 9\n\nmRGCf density formula as a function of eccentricity (Equation 8).\nFigure 9\n\nmRGCf density formula as a function of eccentricity (Equation 8).", null, "Figure 10\n\nFormula for mRGCf spacing.\nFigure 10\n\nFormula for mRGCf spacing.", null, "Figure 11\n\nFormula for on or off mRGCf spacing for eccentricities 0°–10°.\nFigure 11\n\nFormula for on or off mRGCf spacing for eccentricities 0°–10°.", null, "Figure 12\n\nA hypothetical iso-spacing curve in one quadrant, including a point {x, y} and points on the two enclosing meridians.\nFigure 12\n\nA hypothetical iso-spacing curve in one quadrant, including a point {x, y} and points on the two enclosing meridians.", null, "Figure 13\n\nNyquist frequency over the binocular visual field based on the density formula and assuming separate and equal on- and off-center populations.\nFigure 13\n\nNyquist frequency over the binocular visual field based on the density formula and assuming separate and equal on- and off-center populations.", null, "Figure 14\n\nRatio of mRGCf to cones as a function of eccentricity based on the Watson formula for mRGCf density.\nFigure 14\n\nRatio of mRGCf to cones as a function of eccentricity based on the Watson formula for mRGCf density.", null, "Figure 15\n\nComparison of formulas of Watson and Drasdo et al. (2007) (dashed).\nFigure 15\n\nComparison of formulas of Watson and Drasdo et al. 
(2007) (dashed).", null, "Figure 16\n\nDendritic field diameter as a function of eccentricity for human mRGC as measured by Dacey (1993). Filled points are for the temporal meridian; open points are for other quadrants. Gray curve is Dacey's estimate of the mean. Red and blue curves show spacing calculated from our formula for the temporal meridian or the mean of the other meridians.\nFigure 16\n\nDendritic field diameter as a function of eccentricity for human mRGC as measured by Dacey (1993). Filled points are for the temporal meridian; open points are for other quadrants. Gray curve is Dacey's estimate of the mean. Red and blue curves show spacing calculated from our formula for the temporal meridian or the mean of the other meridians.", null, "Figure 17\n\nHuman letter acuity (points) along the nasal meridian of five observers from Rossi and Roorda (2010) and row spacing from our formula for the same meridian (line).\nFigure 17\n\nHuman letter acuity (points) along the nasal meridian of five observers from Rossi and Roorda (2010) and row spacing from our formula for the same meridian (line).", null, "Figure 18\n\nHuman grating acuity (points) from Anderson et al. (1991) and calculated Nyquist frequency of midget RGC.\nFigure 18\n\nHuman grating acuity (points) from Anderson et al. (1991) and calculated Nyquist frequency of midget RGC.", null, "Figure 19\n\nDemonstration of Retinal Topography Calculator.\nFigure 19\n\nDemonstration of Retinal Topography Calculator.", null, "Figure A1\n\nRelation between retinal distance from the optic axis in millimeters and degrees. The linear approximations shown as red dashed lines have slopes of 0.268 and 3.731, respectively.\nFigure A1\n\nRelation between retinal distance from the optic axis in millimeters and degrees. 
The linear approximations shown as red dashed lines have slopes of 0.268 and 3.731, respectively.

Figure A2. Area ratio as a function of angular distance from the optic axis.

Figure A3. Relation between distance from the visual axis in millimeters and degrees, with correction for offset between visual and optic axes.

Figure A4. Eccentricities in degrees computed with and without correction for offset between visual and optic axes.

Table 1. Parameters and error for fits of Equation 4 in four meridians. Note: Also shown are measured and predicted cumulative counts along each meridian within the displacement zone. The next-to-last column shows the fitting error outside the displacement zone. The last column indicates the assumed limit of the displacement zone.

| Meridian | k | a | r2 | re | Data count (× 1000) | Model count (× 1000) | Error | rz |
|---|---|---|---|---|---|---|---|---|
| Temporal | 1 | 0.9851 | 1.058 | 22.14 | 485.1 | 485.7 | 0.23 | 11 |
| Superior | 2 | 0.9935 | 1.035 | 16.35 | 526.1 | 528.9 | 0.12 | 17 |
| Nasal | 3 | 0.9729 | 1.084 | 7.633 | 660.9 | 661.1 | 0.01 | 17 |
| Inferior | 4 | 0.996 | 0.9932 | 12.13 | 449.3 | 452.1 | 0.93 | 17 |

Table 2. Parameters of the displacement function (Equation 5 and Figure 6).

| Meridian | α | β (deg) | γ | δ | μ (deg) |
|---|---|---|---|---|---|
| Temporal | 1.8938 | 2.4598 | 0.91565 | 14.904 | −0.09386 |
| Nasal | 2.4607 | 1.7463 | 0.77754 | 15.111 | −0.15933 |

Table A1. Notation.

| Symbol | Definition | Unit |
|---|---|---|
| r | Eccentricity relative to the visual axis | deg |
| k | Meridian index | |
| d(r, k) | Density of cells or receptive fields at eccentricity r along meridian k | deg−2 |
| s(r, k) | Spacing between receptive fields at eccentricity r along meridian k | deg |
| c, g, m, gf, mf | Subscripts to indicate cones, RGC, mRGC, RGCf, and mRGCf | |
| h(r) | Displacement of RGC from RGCf at eccentricity r | deg |
| f(r) | Fraction of RGC that are midget, as a function of eccentricity | dimensionless |
| Δk | Offset between optic and visual axis along the specified meridian | mm |
| r′ | Eccentricity relative to the optic axis | deg |
| rmm | Eccentricity relative to the visual axis | mm |
| r′mm | Eccentricity relative to the optic axis | mm |
| N | Nyquist frequency of hexagonal lattice | cycles/deg |
| S | Point spacing of hexagonal lattice | deg |
| R | Row spacing of hexagonal lattice | deg |
| D | Point density of hexagonal lattice | deg−2 |

Table A2. Formula parameters and useful numbers.

| Item | Value | Unit |
|---|---|---|
| Peak cone density | 14,804.6 | deg−2 |
| Peak RGCf density | 33,162.3 | deg−2 |
| Peak mRGCf density | 29,609.2 | deg−2 |
| Minimum on-center mRGCf (or cone) spacing | 0.5299 | arcmin |
| Peak on-center mRGCf (or cone) Nyquist | 65.37 | cycles/deg |
| f(0), midget fraction at zero eccentricity | 1/1.12 = 0.8928 | |
| rm, scale factor for decline in midget fraction with eccentricity | 41.03 | deg |

Supplement 1
Supplement 2" ]
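The lattice quantities in Tables A1 and A2 are tied together by standard hexagonal-lattice relations. Assuming the usual conventions (row spacing R = (sqrt(3)/2)S, Nyquist frequency N = 1/(2R), point density D = 2/(sqrt(3)S^2)), the tabulated peak Nyquist and peak density can be recovered from the minimum spacing:

```python
import math

# Hexagonal-lattice relations between point spacing S, row spacing R,
# Nyquist frequency N, and point density D (assumed relations; symbols
# as in Table A1).
S = 0.5299 / 60                # minimum on-center spacing, arcmin -> deg
R = math.sqrt(3) / 2 * S       # row spacing, deg
N = 1 / (2 * R)                # Nyquist frequency, cycles/deg
D = 2 / (math.sqrt(3) * S**2)  # point density, deg^-2

print(round(N, 2))  # 65.37, the peak on-center Nyquist in Table A2
print(round(D, 1))  # ~14804, close to the peak cone density in Table A2
```

The close agreement (65.37 cycles/deg and ~14,804 deg−2 versus the tabulated 65.37 and 14,804.6) suggests the table entries were derived from exactly these lattice formulas.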
https://electronics.stackexchange.com/questions/301239/mosfet-conventional-current-flow-direction-in-the-circuit/301252
[ "# MOSFET conventional current flow direction in the circuit

I am working on a project to control heavy loads (up to 10 amperes) with an Arduino. I found a circuit built using a P-channel MOSFET and a P-type transistor. I am confused about the flow of current through the circuit. I uploaded the diagram; is the conventional current flow drawn correctly? Also, what about the current through the red box (the gate of the MOSFET): what will IL be? If the load current is up to 10 amperes, does it affect my Arduino digital pin? If you have any recommendations regarding the circuit, please share them.

• To the OP, Olin was rather harsh. I guess I would just say that the black arrows do not help us. We all know which way the current is flowing. And we only care about conventional current. Nobody cares if electrons are flowing the opposite way. It is better not to even mention it. – mkeith Apr 23 '17 at 19:14

MOSFET gates are very high impedance, so no current (or almost no current) flows into them in steady-state conditions.

During switching on/off there is indeed current flowing to/from the gate as it charges/discharges and reaches its required Vgs level. But this is only a transient condition. If your load is switched only from time to time, its steady-state condition is no current flowing to/from the MOSFET gate.

1. If you plan to control inductive loads such as motors, use a flyback diode across the load terminals to avoid destroying the P-MOSFET due to inductive voltage spikes at its drain when the load is turned off.

2. Decouple the +12V supply rail with a big capacitor to avoid destroying the P-MOSFET due to inductive voltage spikes at its source when the load is turned off.

3. Due to the high currents involved, consider using an optocoupler instead of a BJT, to fully isolate the 12V circuit from the Arduino.

4. Consider using a logic-level N-MOSFET instead of a BJT for T1.
If you decide to keep the BJT, then add a base resistor to limit the current into the base. Also, add a pull-down resistor at the base to ensure the BJT is cut off when the Arduino pin is at high impedance (something that can happen when the Arduino is off or when it is starting up, before the pin is configured as OUTPUT).

• Agree with item 4. I would keep the BJT, because I like BJTs, but use a resistor in series with the base, and a pulldown on the base. The value of the series resistor can be around 10x the value of R1. Just as a general guide. – mkeith Apr 23 '17 at 19:11

Your diagram is right (but difficult to read due to the blocky arrows). The current through the red element is substantial only when the load current is switched on or off. The current "?" exists during the state transition, because the MOSFET is controlled by voltage. The current is needed because there exists a substantial internal capacitance between the gate and the drain & source. That current charges the capacitance when the load current is turned on. The capacitance gets discharged through R1 when the load current is turned off.

T1 is not P-type, but NPN.

The red element can be a wire. Often a small resistor is used to damp unwanted radio-frequency oscillations that are common in fast pulse circuits built with no precautions.

If this circuit is properly realized, I1 is only a few milliamperes; the major part of Is goes to the load.
In other words, you can assume that the load behaves as if it were connected directly to +12V.

If the load is inductive (e.g., motor, relay, transformer), it can generate large voltage spikes when switched off, and you need to add a snubber to protect the rest of your circuit.

Since the voltages throughout the circuit weren't labeled or discussed, perhaps your question stems from a common misunderstanding. This is the misconception that circuitry is based on electric current, and that to understand circuits, we sketch in all the currents.

Actually, engineers and scientists view most circuits as voltage-controlled systems. Everything is powered by constant-voltage supplies, and the signalling is voltage-based. To understand a circuit, we sketch in all the voltages. Then, using Ohm's law, we can determine the currents if necessary (or even ignore them entirely, and instead concentrate on input/output voltages and load wattage).

For a nice animated view of voltages (and currents) inside circuits, try the little simulator at Falstad's site (a Java applet).

For example, the current in the gate wire of the PMOS remains zero, whether the transistor is on or off. MOS transistors are voltage-driven devices, and their gate current is usually irrelevant.

To analyze this circuit, notice that transistor T1 and resistor R1 form a voltage divider between 12V and 0V. When T1 is on, T1 forms a short circuit to ground, and it pulls down the PMOS gate to zero volts. When T1 is off, it acts like an open circuit, and then R1 pulls the PMOS gate up to 12V.

In other words, T1 and R1 have converted the Arduino's small output voltage into a 12V signal. This 12V signal then drives the PMOS transistor gate.

The PMOS transistor is wired as an inverter: when the voltage on the PMOS gate is zero, that transistor turns fully on, and when the voltage is 12V, it turns off. (Yes, it should handle 10 amps just fine.
If its on-resistance is low enough, it might not even need any heat-sink.)\n\nAlso, note that you'll need a resistor in series with the base of transistor T1. The transistor's input acts as a diode to ground, and this diode would short out your Arduino's output pin. (LEDs need a current-limiting resistor, and so does this transistor's base lead.) The added resistor should be around 10x larger than the value of R1 (so if R1 is 10K, then add a 100K resistor to the connection between the Arduino and T1.)" ]
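The "gate current flows only while switching" point from the answers above can be put into rough numbers. A minimal back-of-the-envelope sketch (all component values here are illustrative assumptions, not taken from the schematic; read the real ones off your diagram and the MOSFET datasheet):

```python
# Back-of-the-envelope numbers for the gate-drive transient.
# All values below are assumptions for illustration only.
V_GS = 12.0          # gate swing: T1 pulls the PMOS gate from 12 V down to 0 V
Q_g = 30e-9          # total gate charge, a typical power-MOSFET datasheet value, C
t_sw = 1e-6          # assumed time to slew the gate node during switching, s
R1 = 10e3            # assumed gate pull-up resistor value, ohms

C_eff = Q_g / V_GS   # crude effective gate capacitance, F
I_gate = Q_g / t_sw  # transient gate current while switching (zero at DC), A
tau_off = R1 * C_eff # turn-off: R1 recharges the gate toward 12 V, s

print(f"transient gate current ~ {I_gate * 1e3:.0f} mA")
print(f"turn-off time constant ~ {tau_off * 1e6:.0f} us")
```

For these assumed numbers the gate sees roughly 30 mA, and only during the microsecond-scale transition; in steady state the gate draws essentially nothing, so the 10 A load current never flows anywhere near the Arduino pin.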
https://zbmath.org/?q=an%3A1063.37068
[ "A new loop algebra and a corresponding integrable hierarchy, as well as its integrable coupling. (English) Zbl 1063.37068

Summary: A new type of loop algebra $\widetilde G_M$, $M = 1,2,\dots$, with a simple commutation operation just like that in the loop algebra $\widetilde A_1$, is constructed. With the help of the loop algebra $\widetilde G_M$, a new multicomponent integrable system, the M-AKNS-KN hierarchy, is worked out. As reduction cases, the M-AKNS hierarchy and the M-KN hierarchy are obtained, respectively. In addition, the system 1-AKNS-KN, a reduced case of the M-AKNS-KN hierarchy above, is a unified integrable model expressing both the AKNS hierarchy and the KN hierarchy. Likewise, the M-AKNS-KN hierarchy is a unified integrable model expressing the multicomponent AKNS hierarchy (M-AKNS) and the multicomponent KN hierarchy (M-KN). This article provides a simple method for obtaining multicomponent integrable hierarchies of soliton equations. Finally, we work out an integrable coupling of the M-AKNS-KN hierarchy.

MSC:
37K30 Relations of infinite-dimensional Hamiltonian and Lagrangian dynamical systems with infinite-dimensional Lie algebras and other algebraic structures" ]
https://crypto.stackexchange.com/questions/43748/how-would-you-explain-this-very-simple-add-modulo-asymetric-encryption
[ "# How would you explain this very simple add/modulo asymmetric encryption?

I am working on a presentation aimed at developers without a strong mathematical background (like me). I would like to explain how asymmetric encryption works with a not-fully-secure but working example.
I found one on Stack Overflow based on addition and modulo:

• given a range from 1 to M
• given a and b where a+b = M
• a is the public key, b is the private key
• x is the clear data (just a number)
• y is the encrypted data
• y = (x + a) mod M
• x = (y + b) mod M

Example:

M=10, a=3, b=7, x=4
y = (4+3) mod 10 = 7
(7 + 7) mod 10 = 4

The villain knows the algorithm and a, y, M.
We suppose the villain cannot guess b from a.
Reversing a modulo yields multiple candidates:

4 mod 3 = 1
7 mod 3 = 1
13 mod 3 = 1

meaning the villain cannot easily find

x+a

from

(x+a) mod M

Even though this is really simple, and I can "feel" the modulo "wrap-around", I could not find any way to explain WHY this works:

y = (x + a) mod M
x = (y + b) mod M

Could anyone try to explain it to me?

• Note: in crypto, we assume the adversary (villain) is smart. Knowing $a$, $M$, and given that $a+b=M$, the adversary can find $b$, thus the system is insecure. Hint at why $x=(y+b)\bmod M$ holds: in the expression $(y+b)\bmod M$, replace $y$ according to the given $y=(x+a)\bmod M$, then use the given $a+b=M$, and the properties of the $\bmod$ operator (which stands for the remainder of Euclidean division); the property you want is that $(((u\bmod M)+(v\bmod M))\bmod M)=((u+v)\bmod M)$.
– fgrieu
Feb 9 '17 at 16:16
• If you want one with a less obvious backdoor than $b = M-a$, you might want to try the same with modular multiplication; if $M$ is a large number, and $a \cdot b = 1 \bmod M$, then $y = (x \cdot a) \bmod M$ is "encryption", and $x = (y \cdot b) \bmod M$ is "decryption".
One way to select such a 'key pair' is by selecting $a, b$ and setting $M = a \cdot b - 1$. Of course, this doesn't really work; it's actually fairly easy to find $b$ given $a, M$ (which you need to highlight, lest anyone think this is actually usable), however it isn't as immediately obvious as $b = M-a$... Feb 9 '17 at 16:31
• Thanks both of you. I know how insecure modular addition is. But my goal is to explain the basic principles, not build a "real" crypto algorithm. @poncho I am still struggling, could you give the whole process? Feb 10 '17 at 8:30
• Whole process: key generation might be 'select $a, b$ at random, compute $M = ab - 1$, publish $M, a$ as your public key, keep $M, b$ as your private key; to encrypt the message $x$, you'd compute $y = ax \bmod M$; to decrypt the message $y$, you'd compute $x = by \bmod M$. This works because the result of an encryption followed by decryption is $b(ax \bmod M) \bmod M = abx \bmod M = (M+1)x \bmod M = x \bmod M$ (which is $x$ if the original plaintext satisfies $0 \le x < M$) Feb 10 '17 at 13:47
• I am struggling with the modular addition (sorry!). Given y = (x+a)%M and x = (y+b)%M, if I replace y I have x = ((x+a)%M+b)%M, and then what? Feb 10 '17 at 14:06

If you were to plot the numbers 0-M on a straight horizontal line in increasing order from left to right, your message would become a point on that line. When you apply the key, it moves the message over $a$ spaces to the right.
The modulo operator means that if your message goes too far off the right side of the line, then it wraps back around to the left side of the line.

If you were to apply the key $a$ again, your ciphertext message would move over another $a$ spaces to the right, which, assuming an appropriate modulus, implies that the result will not line up with the original plaintext message.

With this in mind, if you were to move the ciphertext message $b$ spaces forward instead, you would end up back at the original message.

If you wanted to use $a$ as a symmetric key, you would instead perform modular subtraction on the ciphertext using $a$. Basically, symmetric decryption "reverses" the transformation, while in the case of asymmetric encryption, the ciphertext is decrypted by going forward, as opposed to going backwards. This is what enables us to build public key encryption from non-invertible functions: we do not invert the transformation, we design it such that "going forward" (in a particular way) happens to "line up" and result in the original message.
As mentioned by @poncho in the comments, multiplication is often used as a way to demonstrate "asymmetric crypto", but it should be noted that neither addition nor multiplication is "strong enough"; in practice, modular exponentiation is what is used to achieve the effect.

-----0-----1-----2-----3-----4-----5-----6-----7-----8-----9-----10-----

Setting $a$ at 3:

-----0-----1-----2-----a-----4-----5-----6-----7-----8-----9-----10-----

After we add $x$ (which is 4) to $a$ (which is 3), $a$ has been encrypted, so we shall refer to the ciphertext as $b$ (as you did), which now has the value of $7$. The addition moves the value to the right on the number line:

 +x
+-----------------------+
| v
-----0-----1-----2-----a-----4-----5-----6-----b-----8-----9-----10-----

If we were to add $x$ to our ciphertext again, we would end up at $1$ ($11 \bmod 10 = 1$):

 +x
... -------+ +------------------ ...
v |
-----0-----1-----2-----3-----4-----5-----6-----b-----8-----9-----10-----

This obviously leads to incorrect decryption. To decrypt correctly, we can add the corresponding private key $y=6$ ($x + y = 10$) to make the ciphertext "go forward" to the correct plaintext, remembering to wrap around when we get to the end of the number line:

 +y
... -------------------+ +------------------ ...
v |
-----0-----1-----2-----3-----4-----5-----6-----b-----8-----9-----10-----

Remark: here $10$ and $0$ are the same number (because we are working mod 10)!

• This is a very good explanation, thank you very much! I think I will use your process for my presentation. I am also looking for something "more mathematical", meaning: given y = (a+x) mod M, what is the process to find back x with the help of b (i.e., the process to go from y = (a+x) mod M to x = (y+b) mod M)? Feb 10 '17 at 8:36" ]
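The whole discussion above fits in a few lines of Python. This is a toy demo only (both schemes are trivially breakable, as the comments point out); the multiplicative variant follows @poncho's recipe of picking a, b and setting M = a*b - 1:

```python
# Additive toy cipher from the question: keys satisfy a + b = M.
M = 10
a, b = 3, 7

def enc(x):
    return (x + a) % M

def dec(y):
    return (y + b) % M

# Why decryption works: (x + a + b) mod M = (x + M) mod M = x, since a + b = M.
assert dec(enc(4)) == 4
assert all(dec(enc(x)) == x for x in range(M))

# Multiplicative variant from the comments: choose a, b, set M = a*b - 1,
# so that a*b = M + 1, i.e. a*b = 1 (mod M).
a2, b2 = 7, 13
M2 = a2 * b2 - 1                 # 90
assert (a2 * b2) % M2 == 1       # b2 is the multiplicative inverse of a2 mod M2
assert all((b2 * ((x * a2) % M2)) % M2 == x for x in range(M2))
```

Running this silently passes every assertion: decrypting an encryption always returns the original message, for every message in the range, in both schemes.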
https://pyklip.readthedocs.io/en/latest/fm_spect.html
[ "# Spectrum Extraction using extractSpec FM

This document describes, with an example, how to use KLIP-FM to extract a spectrum, as described in Pueyo et al. (2016), which accounts for the effect of the companion signal in the reference library when measuring its spectrum.

## Set up:

Here we will just read in the dataset and grab the instrumental PSF. The example code here shows how it is done with GPI, but you will want to refer to the Instrument Tutorials for the instrument you are working with. As the code comments note, the units of your instrumental PSF matter: the code will return the spectrum relative to the input PSF model.

import glob
import numpy as np
import pyklip.instruments.GPI as GPI
import pyklip.fmlib.extractSpec as es
import pyklip.fm as fm
import pyklip.fakes as fakes
import matplotlib.pyplot as plt

files = glob.glob("path/to/dataset/*.fits")
dataset = GPI.GPIData(files, highpass=True)
# Need to specify a model PSF (either via this method, or any other way)
model_psfs = dataset.generate_psf_cube(20)
# in this case model_psfs has shape (N_lambda, 20, 20)
# The units of your model PSF are important: the returned spectrum will be
# relative to the input PSF model, see next example

###### Useful values based on dataset ######
N_frames = len(dataset.input)
N_cubes = np.size(np.unique(dataset.filenums))
nl = N_frames // N_cubes

### Calibrating stellar flux for GPI example:

Converting to contrast units for GPI data is done using the flux of the satellite spots. The GPI dataset object has an attribute spot_flux that represents the average peak flux of the four spots.
The normalization factor is computed by dividing the spot flux spectrum by the ratio between the stellar flux and the spot flux (stored in spot_ratio) and adjusting for the ratio between the peak and the sum of the spot PSF.

For any instrument, you can scale your model PSF by its respective calibration factors if the model PSF is not already scaled to the flux of the star. Alternatively, you can skip this step and calibrate your spectrum into astrophysical units at the very end.

GPI Example:

# First set up a PSF model and sums -- this is necessary for GPI because
# dataset.spot_flux contains peak values of the satellite spots and we
# have to correct for the full aperture.
PSF_cube = dataset.psfs
model_psf_sum = np.nansum(PSF_cube, axis=(1, 2))
model_psf_peak = np.nanmax(PSF_cube, axis=(1, 2))
# Now divide the sum by the peak for each wavelength slice
aper_over_peak_ratio = model_psf_sum / model_psf_peak

# star-to-spot calibration factor
band = dataset.prihdrs[0]['APODIZER'].split('_')[1]
spot_to_star_ratio = dataset.spot_ratio[band]

spot_peak_spectrum = \
    np.median(dataset.spot_flux.reshape(len(dataset.spot_flux) // nl, nl), axis=0)
calibfactor = aper_over_peak_ratio * spot_peak_spectrum / spot_to_star_ratio

# calibrated_PSF_model is the stellar flux in counts for each wavelength
calibrated_PSF_model = PSF_cube * calibfactor

This is your model_psf for generating the forward model; it will return the spectrum in contrast units relative to the star.

## Computing the forward model and recovering the spectrum with invert_spect_fmodel

We will use the ExtractSpec class to forward model the PSF of the planet and the invert_spect_fmodel function in pyklip.fmlib.extractSpec to recover the spectrum.
invert_spect_fmodel returns a spectrum in units relative to the input PSF.

These are the numbers you change:

###### parameters you specify ######
pars = (45, 222)  # replace with known separation and pa of companion
planet_sep, planet_pa = pars
numbasis = [50, ]  # "k_klip", this can be a list of any size.
# a forward model will be computed for each element.
num_k_klip = len(numbasis)  # how many k_klips running
maxnumbasis = 100  # Max components to be calculated
movement = 2.0  # aggressiveness for choosing reference library
stamp_size = 10.0  # how big of a stamp around the companion in pixels
# stamp will be stamp_size**2 pixels
numthreads = 4  # number of threads, machine specific
spectra_template = None  # a template spectrum, if you want

Generating the forward model with pyKLIP:

###### The forward model class ######
fm_class = es.ExtractSpec(dataset.input.shape,
                          numbasis,
                          planet_sep,
                          planet_pa,
                          calibrated_PSF_model,
                          np.unique(dataset.wvs),
                          stamp_size=stamp_size)

###### Now run KLIP! ######
fm.klip_dataset(dataset, fm_class,
                fileprefix="fmspect",
                annuli=[[planet_sep-stamp_size, planet_sep+stamp_size]],
                subsections=[[(planet_pa-stamp_size)/180.*np.pi,
                              (planet_pa+stamp_size)/180.*np.pi]],
                movement=movement,
                numbasis=numbasis,
                maxnumbasis=maxnumbasis,
                spectrum=spectra_template,
                save_klipped=True, highpass=True,
                outputdir="path/to/output")

# Forward model is stored in dataset.fmout, organized as follows:
# the klipped psf
klipped = dataset.fmout[:, :, -1, :]
# The rest is the forward model, with dimensions:
# [num_k_klip, N_frames, N_frames, stamp_size*stamp_size]
# If numbasis is a list, the first dimension will be the size of that list,
# with a forward model calculated at each value of numbasis.

Now you can recover the spectrum:

# If you want to scale your spectrum by a calibration factor:
units = "scaled"
scaling_factor = my_calibration_factor
# e.g., for GPI this could be the star-to-spot ratio
# otherwise, the defaults are:
units = "natural"  # (default) returned relative to input PSF model
scaling_factor = 1.0  # (default) not used if units is not set to "scaled"

exspect, fm_matrix = es.invert_spect_fmodel(dataset.fmout, dataset, units=units,
                                            scaling_factor=scaling_factor,
                                            method="leastsq")
# method indicates which matrix-inversion method to use; they all tend
# to yield similar results when things are well-behaved. The options are:
# "JB" matrix inversion adds up over all exposures, then inverts
# "leastsq" uses a least-squares solver
# "LP" inversion adds over frames and one wavelength axis, then inverts
# (LP is not generally recommended)

The units of the spectrum, FM matrix, and klipped data are all in raw data units in this example. Calibration of instrument and atmospheric transmission and of the stellar spectrum can be done via the input PSF model, optionally applying the scaling factor in invert_spect_fmodel.
It can also be done after extracting the spectrum.

## Simulating + recovering a simulated source

Example:

```python
# PSF model template for each cube observation -- copies of the PSF model:
inputpsfs = np.tile(calibrated_PSF_model, (N_cubes, 1, 1))
bulk_contrast = 1e-2
fake_psf = inputpsfs * bulk_contrast
fake_flux = bulk_contrast * np.ones(dataset.wvs.shape)
# for ll in range(N_cubes):
#     fake_flux[ll*nl:(ll+1)*nl] = exspect[0, :]
pa = planet_pa + 180  # inject the fake on the opposite side of the star

tmp_dataset = GPI.GPIData(files, highpass=False)
fakes.inject_planet(tmp_dataset.input, tmp_dataset.centers, fake_psf,
                    tmp_dataset.wcs, planet_sep, pa)

fm_class = es.ExtractSpec(tmp_dataset.input.shape,
                          numbasis,
                          planet_sep,
                          pa,
                          calibrated_PSF_model,
                          np.unique(dataset.wvs),
                          stamp_size=stamp_size)

fm.klip_dataset(tmp_dataset, fm_class,
                fileprefix="fakespect",
                annuli=[[planet_sep-stamp_size, planet_sep+stamp_size]],
                subsections=[[(pa-stamp_size)/180.*np.pi,
                              (pa+stamp_size)/180.*np.pi]],
                movement=movement,
                numbasis=numbasis,
                maxnumbasis=maxnumbasis,
                spectrum=spectra_template,
                save_klipped=True, highpass=True,
                outputdir="demo_output/")

fake_spect, fakefm = es.invert_spect_fmodel(tmp_dataset.fmout, tmp_dataset,
                                            method="leastsq", units="scaled",
                                            scaling_factor=2.0)
```

## Comparing the klipped data to the FM

You may want to look at how well your forward model represents the klipped data, measure residual error, etc.
All the information you need is in the output of invert_spect_fmodel: the spectrum and the FM matrix.

Recall that the klipped data is in fmout:

```python
klipped_data = tmp_dataset.fmout[:, :, -1, :]
# coadd the klipped stamps over exposures (shown here for the first
# numbasis value, index 0)
klipped_coadd = np.zeros((nl, int(stamp_size)**2))
for ll in range(N_cubes):
    klipped_coadd = klipped_coadd + klipped_data[0, ll*nl:(ll+1)*nl, :]
# turn it back into a 2D array at each wavelength
klipped_coadd.shape = [nl, int(stamp_size), int(stamp_size)]
# summed over each wavelength channel, but you can view them individually
plt.imshow(klipped_coadd.sum(axis=0), interpolation="nearest")
plt.colorbar()
```

Plot the forward model by taking the dot product with the extracted spectrum:

```python
k = 0  # choose which numbasis
fm_image_k = np.dot(fakefm[k, :, :], fake_spect[k].transpose())
# reshape the image back to 2D
fm_image_k = fm_image_k.reshape(nl, int(stamp_size), int(stamp_size))
# summed over each wavelength channel
plt.imshow(fm_image_k.sum(axis=0), interpolation="nearest")
plt.colorbar()
```

## Calculating Errorbars

One may want to calculate errorbars by injecting signals in an annulus at the same separation as the real signal and measuring the spread of the recovered spectra (looping through the procedure above):

```python
def recover_fake(files, position, fake_flux):
    # We need to create a new dataset each time.
    planet_sep, pa = position

    # PSF model template for each cube observation -- copies of the PSF model:
    inputpsfs = np.tile(calibrated_PSF_model, (N_cubes, 1, 1))
    fake_psf = inputpsfs * fake_flux[:, None, None]

    tmp_dataset = GPI.GPIData(files, highpass=False)
    fakes.inject_planet(tmp_dataset.input, tmp_dataset.centers, fake_psf,
                        tmp_dataset.wcs, planet_sep, pa)

    fm_class = es.ExtractSpec(tmp_dataset.input.shape,
                              numbasis,
                              planet_sep,
                              pa,
                              calibrated_PSF_model,
                              np.unique(dataset.wvs),
                              stamp_size=stamp_size)

    fm.klip_dataset(tmp_dataset, fm_class,
                    fileprefix="fakespect",
                    annuli=[[planet_sep-stamp_size, planet_sep+stamp_size]],
                    subsections=[[(pa-stamp_size)/180.*np.pi,
                                  (pa+stamp_size)/180.*np.pi]],
                    movement=movement,
                    numbasis=numbasis,
                    maxnumbasis=maxnumbasis,
                    spectrum=spectra_template,
                    save_klipped=True, highpass=True,
                    outputdir="demo_output/")
    fake_spect, fakefm = es.invert_spect_fmodel(tmp_dataset.fmout,
                                                tmp_dataset, method="leastsq",
                                                units="scaled", scaling_factor=2.0)
    del tmp_dataset
    return fake_spect

# This could take a long time to run.
# Define a set of PAs at which to inject fake sources:
npas = 11
pas = (np.linspace(planet_pa, planet_pa+360, num=npas+2) % 360)[1:-1]

# For numbasis index k, repeat the spectrum over each cube in the dataset:
input_spect = np.tile(exspect[k, :], N_cubes)
fake_spectra = np.zeros((npas, nl))
for p, pa in enumerate(pas):
    fake_spectra[p, :] = recover_fake(files, (planet_sep, pa), input_spect)
```

Other details, like the forward model or the klipped data for the injected signals, could also be useful.

If the real companion signal is too bright, the forward model may fail to capture all of the flux. It can be helpful to check whether the recovered spectra for the simulated signals are evenly distributed around the injected spectrum or are systematically biased toward lower flux:

```python
offset = exspect[k, :] - np.median(fake_spectra, axis=0)
```
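The 1-sigma errorbar per wavelength channel then follows from the spread of the recovered spectra across injection position angles. A minimal numpy sketch (using a random stand-in for `fake_spectra`, since running the full injection loop above is slow; the values here are purely illustrative):

```python
import numpy as np

# Stand-in for the (npas, nl) array of recovered fake spectra produced by
# the injection loop above (hypothetical values for illustration).
rng = np.random.default_rng(0)
npas, nl = 11, 37
input_spect = np.full(nl, 1e-2)  # the injected spectrum, in contrast units
fake_spectra = input_spect + 1e-4 * rng.standard_normal((npas, nl))

# 1-sigma errorbar per wavelength channel: spread across injection PAs
errorbar = np.std(fake_spectra, axis=0)

# systematic offset: injected spectrum vs. the median recovered spectrum
offset = input_spect - np.median(fake_spectra, axis=0)

assert errorbar.shape == (nl,) and offset.shape == (nl,)
```

A large `offset` relative to `errorbar` signals the systematic under-recovery discussed above, and `errorbar` is what you would attach to `exspect` channel by channel.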
https://staging.physicsclassroom.com/class/vectors/Lesson-2/Initial-Velocity-Components
Vectors - Motion and Forces in Two Dimensions - Lesson 2 - Projectile Motion

# Initial Velocity Components

It has already been stated and thoroughly discussed that the horizontal and vertical motions of a projectile are independent of each other. The horizontal velocity of a projectile does not affect how far (or how fast) a projectile falls vertically. Perpendicular components of motion are independent of each other. Thus, an analysis of the motion of a projectile demands that the two components of motion are analyzed independently of each other, being careful not to mix horizontal motion information with vertical motion information. That is, if analyzing the motion to determine the vertical displacement, one would use kinematic equations with vertical motion parameters (initial vertical velocity, final vertical velocity, vertical acceleration) and not horizontal motion parameters (initial horizontal velocity, final horizontal velocity, horizontal acceleration). It is for this reason that one of the initial steps of a projectile motion problem is to determine the components of the initial velocity.

### Determining the Components of a Velocity Vector

Earlier in this unit, the method of vector resolution was discussed. Vector resolution is the method of taking a single vector at an angle and separating it into two perpendicular parts. The two parts of a vector are known as components and describe the influence of that vector in a single direction. If a projectile is launched at an angle to the horizontal, then the initial velocity of the projectile has both a horizontal and a vertical component. The horizontal velocity component (vx) describes the influence of the velocity in displacing the projectile horizontally. The vertical velocity component (vy) describes the influence of the velocity in displacing the projectile vertically. Thus, the analysis of projectile motion problems begins by using the trigonometric methods discussed earlier to determine the horizontal and vertical components of the initial velocity.

Consider a projectile launched with an initial velocity of 50 m/s at an angle of 60 degrees above the horizontal. Such a projectile begins its motion with a horizontal velocity of 25 m/s and a vertical velocity of 43 m/s. These are known as the horizontal and vertical components of the initial velocity. These numerical values were determined by constructing a sketch of the velocity vector with the given direction and then using trigonometric functions to determine the sides of the velocity triangle:

vx = (50 m/s) • cos(60°) = 25 m/s
vy = (50 m/s) • sin(60°) = 43 m/s

(If necessary, review this method on an earlier page in this unit.)

All vector resolution problems can be solved in a similar manner. As a test of your understanding, utilize trigonometric functions to determine the horizontal and vertical components of the following initial velocity values. When finished, click the button to check your answers.

Practice A: A water balloon is launched with a speed of 40 m/s at an angle of 60 degrees to the horizontal.

Practice B: A motorcycle stunt person traveling 70 mi/hr jumps off a ramp at an angle of 35 degrees to the horizontal.

Practice C: A springboard diver jumps with a velocity of 10 m/s at an angle of 80 degrees to the horizontal.

### Try Some More!

Need more practice? Use the Velocity Components for a Projectile widget below to try some additional problems. Enter any velocity magnitude and angle with the horizontal. Use your calculator to determine the values of vx and vy. Then click the Submit button to check your answers.

As mentioned above, the point of resolving an initial velocity vector into its two components is to use the values of these two components to analyze a projectile's motion and determine such parameters as the horizontal displacement, the vertical displacement, the final vertical velocity, the time to reach the peak of the trajectory, the time to fall to the ground, etc. This process is demonstrated on the remainder of this page. We will begin with the determination of the time.

### Determination of the Time of Flight

The time for a projectile to rise vertically to its peak (as well as the time to fall from the peak) is dependent upon vertical motion parameters. The process of rising vertically to the peak of a trajectory is a vertical motion and is thus dependent upon the initial vertical velocity and the vertical acceleration (g = 9.8 m/s/s, down). The process of determining the time to rise to the peak is an easy process - provided that you have a solid grasp of the concept of acceleration. When first introduced, it was said that acceleration is the rate at which the velocity of an object changes. An acceleration value indicates the amount of velocity change in a given interval of time. To say that a projectile has a vertical acceleration of -9.8 m/s/s is to say that the vertical velocity changes by 9.8 m/s (in the - or downward direction) each second. For example, if a projectile is moving upwards with a velocity of 39.2 m/s at 0 seconds, then its velocity will be 29.4 m/s after 1 second, 19.6 m/s after 2 seconds, 9.8 m/s after 3 seconds, and 0 m/s after 4 seconds. For such a projectile with an initial vertical velocity of 39.2 m/s, it would take 4 seconds for it to reach the peak where its vertical velocity is 0 m/s.
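The arithmetic in this example can be checked with a few lines of Python (a sketch, using the values from the example above):

```python
g = 9.8      # m/s/s, magnitude of the downward vertical acceleration
v_iy = 39.2  # m/s, initial upward velocity from the example

# the vertical velocity drops by 9.8 m/s every second until it reaches 0
for t in range(5):
    print(t, "s:", round(v_iy - g * t, 1), "m/s")

t_peak = v_iy / g  # time to rise to the peak: 4.0 s
```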
With this notion in mind, it is evident that the time for a projectile to rise to its peak is a matter of dividing the vertical component of the initial velocity (viy) by the acceleration of gravity:

tup = viy / g

Once the time to rise to the peak of the trajectory is known, the total time of flight can be determined. For a projectile that lands at the same height at which it started, the total time of flight is twice the time to rise to the peak. Recall from the last section of Lesson 2 that the trajectory of a projectile is symmetrical about the peak. That is, if it takes 4 seconds to rise to the peak, then it will take 4 seconds to fall from the peak; the total time of flight is 8 seconds. The time of flight of a projectile is twice the time to rise to the peak:

ttotal = 2 • tup

### Determination of Horizontal Displacement

The horizontal displacement of a projectile is dependent upon the horizontal component of the initial velocity. As discussed in the previous part of this lesson, the horizontal displacement of a projectile can be determined using the equation

x = vix • t

If a projectile has a time of flight of 8 seconds and a horizontal velocity of 20 m/s, then the horizontal displacement is 160 meters (20 m/s • 8 s). If a projectile has a time of flight of 8 seconds and a horizontal velocity of 34 m/s, then the projectile has a horizontal displacement of 272 meters (34 m/s • 8 s). The horizontal displacement is dependent upon the only horizontal parameter that exists for projectiles - the horizontal velocity (vix).

### Determination of the Peak Height

A non-horizontally launched projectile with an initial vertical velocity of 39.2 m/s will reach its peak in 4 seconds. The process of rising to the peak is a vertical motion and is again dependent upon vertical motion parameters (the initial vertical velocity and the vertical acceleration). The height of the projectile at this peak position can be determined using the equation

y = viy • t + 0.5 • g • t²

where viy is the initial vertical velocity in m/s, g is the acceleration of gravity (-9.8 m/s/s) and t is the time in seconds it takes to reach the peak. This equation can be successfully used to determine the vertical displacement of the projectile through the first half of its trajectory (i.e., the peak height) provided that the algebra is properly performed and the proper values are substituted for the given variables. Special attention should be given to the facts that the t in the equation is the time up to the peak and that g has a negative value of -9.8 m/s/s.

### We Would Like to Suggest ...

Sometimes it isn't enough to just read about it. You have to interact with it! And that's exactly what you do when you use one of The Physics Classroom's Interactives. We would like to suggest that you combine the reading of this page with the use of our Projectile Motion Simulator. You can find it in the Physics Interactives section of our website. The simulator allows one to explore projectile motion concepts in an interactive manner. Change a height, change an angle, change a speed, and launch the projectile.

### Check Your Understanding

Answer the following questions and click the button to see the answers.

1. Aaron Agin is resolving velocity vectors into horizontal and vertical components. For each case, evaluate whether Aaron's diagrams are correct or incorrect. If incorrect, explain the problem or make the correction.

2. Use trigonometric functions to resolve the following velocity vectors into horizontal and vertical components. Then utilize kinematic equations to calculate the other motion parameters. Be careful with the equations; be guided by the principle that "perpendicular components of motion are independent of each other."

Answers, case 1 (9.5 m/s at 40 degrees):
A: vix = 9.5 m/s • cos(40°) = 7.28 m/s
B: viy = 9.5 m/s • sin(40°) = 6.11 m/s
C: tup = (6.11 m/s) / (9.8 m/s/s) = 0.623 s
D: ttotal = 2 • (0.623 s) = 1.25 s
E: x = 7.28 m/s • 1.25 s = 9.07 m
F: y = (6.11 m/s) • (0.623 s) + 0.5 • (-9.8 m/s/s) • (0.623 s)² = 1.90 m

Answers, case 2 (25 m/s at 60 degrees):
G: vix = 25 m/s • cos(60°) = 12.5 m/s
H: viy = 25 m/s • sin(60°) = 21.7 m/s
I: tup = (21.7 m/s) / (9.8 m/s/s) = 2.21 s
J: ttotal = 2 • 2.21 s = 4.42 s
K: x = 12.5 m/s • 4.42 s = 55.2 m
L: y = 21.7 m/s • 2.21 s + 0.5 • (-9.8 m/s/s) • (2.21 s)² = 23.9 m

Answers, case 3 (30 m/s at 30 degrees):
M: vix = 30 m/s • cos(30°) = 26.0 m/s
N: viy = 30 m/s • sin(30°) = 15.0 m/s
O: tup = (15.0 m/s) / (9.8 m/s/s) = 1.53 s
P: ttotal = 2 • 1.53 s = 3.06 s
Q: x = 26.0 m/s • 3.06 s = 79.5 m
R: y = 15.0 m/s • 1.53 s + 0.5 • (-9.8 m/s/s) • (1.53 s)² = 11.5 m

3. Utilize kinematic equations and projectile motion concepts to fill in the blanks in the following tables.
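The worked answers above can be reproduced with a short script (a sketch, shown for case 1's values of 9.5 m/s at 40 degrees):

```python
import math

def components(v, angle_deg):
    """Resolve a launch velocity into horizontal and vertical components."""
    a = math.radians(angle_deg)
    return v * math.cos(a), v * math.sin(a)

g = 9.8  # m/s/s, magnitude of the vertical acceleration

vix, viy = components(9.5, 40)
t_up = viy / g                            # time to rise to the peak
t_total = 2 * t_up                        # launch and landing at the same height
x = vix * t_total                         # horizontal displacement
y_peak = viy * t_up - 0.5 * g * t_up**2   # peak height

print(round(vix, 2), round(viy, 2))       # 7.28 6.11
print(round(t_up, 3), round(t_total, 2))  # 0.623 1.25
print(round(x, 2), round(y_peak, 2))      # 9.07 1.9
```

The same function handles cases 2 and 3 by changing the speed and angle passed to `components`.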
https://mesak.tw/code/javascript/11932/js-table-convert-to-object-part1
# [JS] Table to Object conversion practice, part 1

Given the following HTML table:

```html
<table border=1>
  <thead>
    <tr>
      <th>Company</th>
      <th>Contact</th>
      <th>Country</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Alfreds Futterkiste</td>
      <td>Maria Anders</td>
      <td>Germany</td>
    </tr>
    <tr>
      <td>Centro comercial Moctezuma</td>
      <td>Francisco Chang</td>
      <td>Mexico</td>
    </tr>
  </tbody>
</table>
```

First, collect the header texts with jQuery:

```js
let heads = [];
$('th').each(function (i, thNode) {
  heads.push($(thNode).text());
})
```

Then build one object per body row, keyed by the headers:

```js
let tableObject = []

$('tbody tr').each(function (i, trNode) {
  let data = {}
  $(trNode).find('td').each(function (index, tdNode) {
    data[heads[index]] = $(tdNode).text()
  })
  tableObject.push(data)
})
console.log(tableObject);
```

The same conversion in vanilla JS:

```js
let headData = Array.from(document.querySelectorAll('thead > tr > th')).map(n => n.innerText)

let tableObject = []
for (let trNode of document.querySelectorAll('tbody > tr')) {
  let data = {};
  Array.from(trNode.querySelectorAll('td')).forEach((n, index) => {
    data[headData[index]] = n.innerText
  })
  tableObject.push(data)
}
console.log('tableObject ', tableObject)
```
https://crypto.stackexchange.com/questions/8564/hmac-and-assumptions-on-the-cryptographic-hash
# HMAC and assumptions on the cryptographic hash

According to Wikipedia, a cryptographic hash function has the following properties:

1. Pre-image resistance: Given $h$, it's difficult to find any message $m$ such that $h = H(m)$.
2. Second pre-image resistance: Given $m_1$, it's difficult to find another $m_2$ such that $m_1 \ne m_2$ and $H(m_1) = H(m_2)$.
3. Collision resistance: It's difficult to find two distinct messages $m_1$ and $m_2$ such that $H(m_1) = H(m_2)$.

Assuming $H$ is a hash function, the following function $H'$ should — to my understanding — also be a hash function:

$$H'(m) = m_0 || H(m_1||m_2||\dots||m_k)$$

where $m_i$ is the $i$th byte of $m$.

$H'$ leaks the first byte of $m$, but even just leaking one byte, you still can't find a pre-image or any collisions.

When looking at HMAC, Wikipedia says that it takes a hash function (without making further assumptions on the hash function). Taking my hash function $H'$ (just let it leak $\operatorname{len}(key)$ bytes, but not the long message) for HMAC, HMAC would be insecure, since everybody now sees the key.

So maybe my $H'$ is not a cryptographic hash function after all — in which case my question is: why not?

Or $H'$ is a cryptographic hash function, but building an HMAC requires additional assumptions. What additional assumptions must be fulfilled by the hash function to be secure in HMAC? Do the three properties above imply some other properties I don't see?

Also, PBKDF2 takes a PRF, for which HMAC-SHA256 should be secure; but an HMAC built from a hash function that has only the three properties won't be a PRF, to my understanding. Again the same questions: are there more assumptions? Even more than on HMAC?

## Migrated from security.stackexchange.com, Jun 4 '13 at 12:53

This question came from our site for information security professionals.

• > In order for a hash function with n bits of output to be collision resistant, it must take at least $2^n$ work and storage to find a collision — don't you just need $2^{\frac{n}{2}}$ due to the birthday attack to find a collision? – So Morr Jun 4 '13 at 15:02
• In order for a hash function with $n$ bits of output to be collision resistant, it must take at least $2^{n/2}$ work and storage to find a collision. Presuming your hash function $H$ is collision resistant, it is not obvious $H'$ also is collision resistant. Finding two different messages that both share the same prefix, and collide in the last $n-8$ bits, requires less work and less storage than finding a collision in all $n$ bits of $H$. – Henrick Hellström Jun 4 '13 at 15:44

---

Before we jump into this question, you first need to know a bit about the internals of hash functions built with the Merkle-Damgård construction. Here's a pretty picture from Wikipedia:

[Figure: the Merkle-Damgård construction — message blocks fed one by one into the compression function $f$, chained from the IV to the final output.]

In this diagram, you see the compression function $f$ being fed the message blocks along with the output of the state of the previous compression block (or the IV). The final output is the result of the last compression function. (You can ignore the finalization step for our purposes.)

Now, let's focus on the original NMAC/HMAC paper. In it, the authors state:

> In the rest of this paper we will concentrate on iterated hash functions, except if stated otherwise.

This should be the first clue that your scheme is not one that will work with NMAC/HMAC: it's not iterated! Not all of it, at least. The fact that the first $n$ bytes are concatenated (leaked, what have you) means that your hash function's output is no longer solely the result of the compression function evaluated on the last block. This changes the construction of the scheme drastically.

In regular circumstances, the (almost implicit) assumption that the underlying hash function is iterated is not an unreasonable one at all: all of the popular hash functions of today are. (SHA3/Keccak is a bit of a special case. It's not clear one even needs the HMAC construction for it. But that's a topic for another question.)

For example, what do you do with the IV (initialization vector, or as Bellare et al call it, "initial variable")? Do you simply pass it along to the $H$ in $H'$? If so, then your scheme doesn't actually leak the key with NMAC, although it does leak $k \oplus \mathtt{opad}$ in HMAC. In case you're unfamiliar with NMAC, the basic idea of the scheme is to replace the IVs of the regular hash functions with the keys $k = (k_1, k_2)$. In the case of HMAC, the "new IVs" are (where $f$ is the compression function for the hash in question) $k_1 = f(k \oplus \mathtt{opad})$ and $k_2 = f(k \oplus \mathtt{ipad})$. But note that this, too, carries with it the implicit assumption that the starting state of the next $f$ in the chain is the previous evaluation of $f$.

Trying to discuss $H'$ in the context of HMAC is difficult, though. The primary issue is that your $H'$ doesn't have a clearly available compression function. Sure, $H$ (probably) has a compression function, but for $H'$, things are much less certain. Even if you attempted to define a compression function for $H'$, in order to be an iterated hash function, it would somehow have to leak the first $n$ bytes of the original message while simultaneously evaluating $H$ for the rest of the message.

Here is the problem, though: even if you created such a compression function, it would be insecure, naturally. Namely, it would no longer act as a pseudorandom function (PRF) (or, for Wikipedia's less thorough but perhaps easier explanation, see here).
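To make the "leaks part of the input" point concrete, here is a toy instantiation of the question's $H'$ (a sketch, with SHA-256 standing in for $H$):

```python
import hashlib

def H(m: bytes) -> bytes:
    return hashlib.sha256(m).digest()

def H_prime(m: bytes) -> bytes:
    # H'(m) = m_0 || H(m_1 || ... || m_k): the first byte passes
    # through to the output in the clear
    return m[:1] + H(m[1:])

tag = H_prime(b"secret input")
assert tag[:1] == b"s"     # the digest exposes the first input byte
assert len(tag) == 1 + 32

# Dropped into an HMAC-like construction, the outer call would be
# H_prime((key XOR opad) || inner_digest), so the tag would expose the
# first byte of key XOR opad -- exactly the leak the question describes.
```

No function whose output embeds input bytes verbatim can be a PRF: an adversary distinguishes it from a random function with a single query.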
In this relatively new paper, Bellare proves that HMAC is a PRF itself if the compression function of the underlying hash is a PRF. If $H'$ had a compression function, then it definitely would not be a PRF, since it (quite literally) leaks part of the input.\n\nThe prerequisite that the compression function is a PRF is quite a weak requirement, too. Given that $H'$ (even if you did somehow come up with a compression function for it) simply fails this requirement, the security proofs for HMAC do not cover it. Further, pretty much all of the security proofs given in favor of HMAC assume that the attacker doesn't know the key. Those proofs are possibly invalid if this assumption is invalid.\n\nSo, to answer your question directly, HMAC requires an iterated hash scheme whose compression function is, as best as we can tell, a PRF. A generic cryptographic hash function, at least using the definition Wikipedia gives, is not strong enough to guarantee a solid MAC. But the PRF requirement is relatively weak, as even MD5 (which is completely broken as far as collisions go) still appears to satisfy it.\n\nInspired by Henrick Hellström's comment, I think you need to dig a bit deeper into what “difficult” means.\n\nPre-image resistance means that given $h$, it's difficult to find $m$ such that $h = H(m)$. Intuitively speaking, your only chance is to have started with an $h$ that is in the relatively small set of already-computed hashes.\n\nNow suppose you have $h'$ and are looking for $m'$ such that $h' = H'(m)$. You have a better chance of finding $m'$ than you had of finding $m$, because if you had previously computed $H'(a||m'') = a||H(m'')$ such that $h' = b||H(m'')$, you can take $m' = b||m''$. With an alphabet of size $n$, $H'$ gives you $n$ times better chances to find a pre-image.\n\nThis is perhaps more visible if you think of $H'$ consisting of, say, the first 128 bits of the message followed by a 128-bit hash of the rest. 
Then $H'$ hashes are 256 bits long, but the preimage resistance of $H'$ is no more than that of a 128-bit hash, only half as strong as expected.

A natural question at this point is: what if you define $H''(m) = m_0 || H(m)$? (That is, calculate the same hash, but leak a fixed-size prefix of the message.) You do not get a better chance of finding a pre-computed pre-image. However, the amount of work required to find a pre-image by brute force is clearly less, because you can concentrate your efforts on messages with the prefix $m_0$. This is actually closer to the mathematical definition of difficulty (computational complexity of finding a pre-image) than the informal explanation above.

I'm not fully satisfied with this explanation. What about $H^\circ(m) = \mathbf{0} || H(m)$, where $\mathbf{0}$ is some constant string? This is obviously as good a hash as $H$, since it doesn't leak anything. Yet the amount of work required to find a pre-image is only as good as $H$, despite the longer hash, which is no different from the complaint against $H'$ above.

In any case, as far as I know, the Wikipedia article is a simplification: there is no general result that any hash function can be used to build an HMAC in this way. The security proofs of HMAC only apply to hash functions of a certain form, which includes all Merkle-Damgård hash functions, but not the oddball variants considered in this thread.

• The last part of this answer is the crucial bit: HMAC's security proof relies on iterated hash functions, specifically MD constructions. The construction in the question fails this criterion. – Reid Jun 4 '13 at 17:07
• For modern criteria for HMAC security, I recommend New Proofs for NMAC and HMAC: Security without Collision-Resistance, although it is hard reading and some of it goes over my head. – fgrieu Jun 4 '13 at 17:19
• @Reid I don't find this fully satisfactory: sure, the proof doesn't apply, but what causes the result not to hold?
– Gilles 'SO- stop being evil' Jun 4 '13 at 18:20
• @Gilles: The structure of the above construction is entirely different from usual hash schemes. How would you define the compression function for $H'$? And note that in order for NMAC's security proof to be relevant, the scheme needs to be iterated, so you would somehow have to define the compression function in such a way that the final state includes the first $n$ bytes of the message. Of course, the compression function needs to be a PRF, but leaking even the first byte of the message sounds like a death knell for that idea. – Reid Jun 4 '13 at 19:55
• @Reid Please write an answer that expands on this! I'd be very interested. – Gilles 'SO- stop being evil' Jun 4 '13 at 20:01
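Both observations in the thread above, the $k \oplus \mathtt{opad}$ leak when $H'$ is dropped into HMAC and the improved pre-image chances, can be checked mechanically. The sketch below is my own illustration, not from the answers: the 4-byte leaked prefix, SHA-256 standing in for $H$, and all names are arbitrary choices.

```python
import hashlib

N = 4    # bytes that H' leaks verbatim (illustrative choice)
B = 64   # SHA-256 block size, used by HMAC for key padding

def H_prime(m: bytes) -> bytes:
    """The flawed hash under discussion: emit the first N bytes of the
    input in the clear, then hash only the remainder."""
    return m[:N] + hashlib.sha256(m[N:]).digest()

def hmac_with(h, key: bytes, msg: bytes) -> bytes:
    """Textbook HMAC construction, parameterized by the hash function h."""
    k = key.ljust(B, b"\x00")
    inner = h(bytes(x ^ 0x36 for x in k) + msg)   # H(k xor ipad || m)
    return h(bytes(x ^ 0x5C for x in k) + inner)  # H(k xor opad || inner)

# 1. Key leak: the tag begins with (k xor opad) verbatim, so anyone who
# sees a single tag recovers the first N key bytes.
key = b"supersecretkey!!"
tag = hmac_with(H_prime, key, b"attack at dawn")
recovered = bytes(t ^ 0x5C for t in tag[:N])
assert recovered == key[:N]

# 2. Pre-image advantage: one precomputed SHA-256 of a tail yields an
# H'-preimage for 256**N distinct targets of the form b || digest.
tail = b"some previously hashed data"
digest = hashlib.sha256(tail).digest()
for prefix in (b"\x00" * N, b"abcd", b"\xff" * N):
    assert H_prime(prefix + tail) == prefix + digest
```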
https://crypto.stackexchange.com/questions/64783/inversion-in-gf28-aes
# Inversion in $GF(2^8)$ AES [duplicate]

I was wondering how you would calculate the S-box in AES. I found that you have to calculate the inverse of the polynomials in $$GF(2^8)$$. I found out that to calculate the inverse, you have to use the Extended Euclidean Algorithm. What I can't figure out is how do you apply this to a polynomial?

• This question has a detailed answer here: https://crypto.stackexchange.com/questions/34212/how-to-use-the-extended-euclidean-algorithm-to-invert-a-finite-field-element – kodlu Dec 11 '18 at 19:20
• Your question is a duplicate of the one in the comment above. – kodlu Dec 11 '18 at 19:21
• "... you have to use the Extended Euclidean Algorithm"; nope, there are a bunch of other ways to compute inverses. For one, you can take advantage of the fact that, in GF(256), $x^{-1} = x^{254}$. For another, to find the inverse of $x$, we can just search for a $y$ with $x \times y = 1$; this takes no more than 64k multiplications to do the full search for all possible inputs, which is trivial as a one-time effort... – poncho Dec 11 '18 at 19:53

One neat trick for the AES S-box is to use logarithms. In that specific field, the value 3 (0x03) is a multiplicative generator of the non-zero elements. Thus, if you have a "multiplication by 0x03" function, you can compute the mapping $$n \mapsto \mathrm{\texttt{03}}^n$$ for all $$0 \le n \le 254$$; this would be an "exponentiation table". From that table, you can make the inverse mapping as another table, which would be a "logarithm table". By going to logarithms, you transform multiplications into additions modulo 255, and divisions into subtractions modulo 255.
In particular, inversion becomes negation.

In Java code, this may look like this:

    /*
     * Make tmp[n] = pow(0x03, n) in the field.
     */
    int[] tmp = new int[255];
    for (int i = 0, x = 1; i < 255; i ++) {
        tmp[i] = x;

        /*
         * Multiply x by 0x03 in the field.
         */
        int x2 = x << 1;
        x2 ^= -(x2 >> 8) & 0x11B;
        x ^= x2;
    }

    /*
     * Make Sbox[] as the inverse table. We use the fact that
     * 1/pow(0x03,n) = pow(0x03,255-n) in the field (since 0x03 is
     * a generator of the multiplicative subgroup of size 255).
     */
    Sbox = new int[256];
    Sbox[0] = 0;
    Sbox[1] = 1;
    for (int i = 1; i < 255; i ++) {
        Sbox[tmp[i]] = tmp[255 - i];
    }

    /*
     * At that point, Sbox[i] contains the inverse of i.
     * Now we apply the affine transform in GF(2).
     */
    for (int i = 0; i < 256; i ++) {
        int x = Sbox[i];
        x |= x << 8;
        x ^= (x >> 4) ^ (x >> 5) ^ (x >> 6) ^ (x >> 7);
        Sbox[i] = (x ^ 0x63) & 0xFF;
    }

This kind of code can be convenient for recomputing the S-box and related tables during an initialization process (assuming that you prefer the extra startup cost over the size cost of embedding the tables in the code).

(Of course, table-based AES implementations are not constant-time; you should not do that in practice, etc.)
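Returning to the question's actual ask, the Extended Euclidean Algorithm applied to a polynomial can be sketched as follows (my own illustration, not part of the answer above). Polynomials over GF(2) are represented as integer bit masks, and `AES_POLY` is the AES modulus $x^8 + x^4 + x^3 + x + 1$:

```python
AES_POLY = 0x11B  # x^8 + x^4 + x^3 + x + 1, the AES modulus

def gf_mul(a: int, b: int) -> int:
    """Multiply two elements of GF(2^8), reducing modulo AES_POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= AES_POLY
        b >>= 1
    return r

def clmul(a: int, b: int) -> int:
    """Carry-less (polynomial) multiply over GF(2), with no reduction."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def poly_divmod(a: int, b: int):
    """Polynomial long division over GF(2): return (quotient, remainder)."""
    q = 0
    while a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def gf_inv(a: int) -> int:
    """Invert a in GF(2^8) via the extended Euclidean algorithm,
    maintaining the invariant s1 * a == r1 (mod AES_POLY) until r1 == 1."""
    if a == 0:
        raise ZeroDivisionError("0 has no inverse")
    r0, r1 = AES_POLY, a
    s0, s1 = 0, 1
    while r1 != 1:
        quot, rem = poly_divmod(r0, r1)
        r0, r1 = r1, rem
        s0, s1 = s1, s0 ^ clmul(quot, s1)
    return s1

# Sanity checks: x * x^{-1} = 1, and the known value {02}^{-1} = {8D}.
for a in (1, 2, 3, 0x53, 0xFF):
    assert gf_mul(a, gf_inv(a)) == 1
assert gf_inv(2) == 0x8D
```

The same `gf_inv` can be cross-checked against poncho's observation that $x^{-1} = x^{254}$ in GF(256).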
https://patents.google.com/patent/US7788051B2/en
# US7788051B2 - Method and apparatus for parallel loadflow computation for electrical power system - Google Patents

Method and apparatus for parallel loadflow computation for electrical power system

## Info

Publication number: US7788051B2 (other versions: US20070203658A1)
Authority: US (United States)
Application number: US 10/594,715
Inventor: Sureshchandra B. Patel
Original assignee: Patel Sureshchandra B
Priority: CA 2479603 (published as CA2479603A1); PCT/CA2005/001537 (published as WO2006037231A1)
Legal status: Active (granted)

## Classifications

- G06Q50/06: Data processing systems or methods specially adapted for specific business sectors; electricity, gas or water supply
- H02J3/06: Circuit arrangements for AC mains or AC distribution networks; controlling transfer of power between connected networks; controlling sharing of load between connected networks
- H02J2003/007: Simulating, e.g. planning, reliability check, modelling
- Y02E40/76: Computing methods or systems for efficient or low carbon management or operation of electric power systems
- Y02E60/76: Computer aided design [CAD]; simulation; modelling
- Y04S10/545: Computing methods or systems for efficient or low carbon management or operation of electric power systems
- Y04S40/22: Computer aided design [CAD]; simulation; modelling

## Abstract

A Gauss-Seidel-Patel Loadflow (GSPL) calculation method is invented, involving self-iteration over a node within a global iteration over the (n−1) nodes of an n-node power network. Also invented is a network decomposition technique, referred to as Suresh's diakoptics, that determines a sub-network for each node involving its directly connected nodes, referred to as level-1 nodes, and their directly connected nodes, referred to as level-2 nodes, wherein the level of outward connectivity for the local solution of a sub-network around a given node is to be determined experimentally.
Sub-networks are solved in parallel, and the local solutions of the sub-networks are related into a network-wide global solution using an invented technique. These inventions led to the invention of the best possible parallel computer: a server-processor/parallel-processors architecture wherein each of the parallel processors communicates only with the server processor, commonly shared memory locations, and its own private memory locations, but not with the other parallel processors.

## Description

TECHNICAL FIELD

The present invention relates to methods of loadflow computation in power flow control and voltage control in an electrical power system. It also relates to parallel computer architecture and distributed computing architecture.

BACKGROUND OF THE INVENTION

The present invention relates to power-flow/voltage control in utility/industrial power networks of the type that includes many power plants/generators interconnected through transmission/distribution lines to loads and motors. Each of these components of the power network is protected against unhealthy, or alternatively faulty, over/under-voltage, and/or overloaded damaging operating conditions. Such protection is automatic, operates without the consent of the power network operator, and takes an unhealthy component out of service by disconnecting it from the network. The time domain of operation of the protection is of the order of milliseconds.

The purpose of a utility/industrial power network is to meet the electricity demands of its various consumers 24 hours a day, 7 days a week, while maintaining the quality of electricity supply. Quality of electricity supply means that consumer demands are met at specified voltage and frequency levels, without overloaded or under/over-voltage operation of any of the power network components. The operation of a power network differs at different times due to changing consumer demands and the development of any faulty/contingency situation.
In other words, a healthy operating power network is constantly subjected to small and large disturbances. These disturbances could be consumer- or operator-initiated; initiated by overload and under/over-voltage alleviating functions, collectively referred to as security control functions, and by various optimization functions such as economic operation and minimization of losses; or caused by a fault/contingency incident.

For example, suppose a power network is operating healthily and meeting the quality electricity needs of its consumers. A fault occurs on a line, a transformer, or a generator, and the faulty component gets isolated from the rest of the healthy network by virtue of the automatic operation of its protection. Such a disturbance causes a change in the pattern of power flows in the network, which can cause overloading of one or more of the other components and/or over/under-voltage at one or more nodes in the rest of the network. This in turn can take one or more other components out of service by virtue of the operation of the associated protection, and the disturbance can trigger a chain reaction disintegrating the power network.
These involve: control of power flow over tie lines connecting other utility networks; turbine steam/water/gas input control to control the real power generated by each generator; the load shedding function, which curtails the load demands of consumers; excitation control of the reactive power generated by an individual generator, which essentially controls the generator terminal voltage; transformer taps, which control the connected node voltage; and switching in/out of capacitor/reactor banks, which controls reactive power at the connected node.

Control of an electrical power system involving power-flow control and voltage control is commonly performed according to the process shown in FIG. 4. The various steps entail the following.

• Step-10: Obtain on-line/simulated readings of the open/close status of all switches and circuit breakers, and read data of the maximum and minimum reactive power generation capability limits of PV-node generators, the maximum and minimum tap position limits of tap-changing transformers, and the maximum power carrying capability limits of transmission lines and transformers in the power network; or, alternatively, read data of the operating limits of power network components.
• Step-20: Obtain on-line readings of real and reactive power assignments/schedules/specifications/settings at PQ-nodes, real power and voltage magnitude assignments/schedules/specifications/settings at PV-nodes, and transformer turns ratios.
These assigned/set values are controllable and are called controlled variables/parameters.
• Step-30: The resulting voltage magnitudes and angles at power network nodes, power flows through various power network components, reactive power generations by generators, and turns ratios of transformers in the power network are determined by performance of a loadflow computation, for the assigned/set/given/known values of the controlled variables/parameters from step-20 or the previous process cycle's step-60.
• Step-40: The results of the loadflow computation of step-30 are evaluated for any overloaded power network components, such as transmission lines and transformers, and for over/under-voltages at different nodes in the power system.
• Step-50: If the system state is acceptable, implying no overloaded transmission lines and transformers and no over/under-voltages, the process branches to step-70; otherwise, it branches to step-60.
• Step-60: Change the controlled variables/parameters set in step-20, or as later set by the previous process cycle's step-60, and return to step-30.
• Step-70: Actually implement the corrected controlled variables/parameters to obtain secure/correct/acceptable operation of the power system.

Overload and under/over-voltage alleviation functions produce changes in controlled variables/parameters in step-60 of FIG. 5. In other words, controlled variables/parameters are assigned or changed to new values in step-60. This correction in controlled variables/parameters could even be optimized, in the case of simulation of all possible imaginable disturbances, including outage of a line and loss of generation, with corrective actions stored and made readily available for acting upon in case the simulated disturbance actually occurs in the power network.
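The Step-10 through Step-70 cycle above is, schematically, a feedback loop around the loadflow computation. The sketch below is purely illustrative; every callable is a hypothetical stand-in (a one-variable "network" whose flow equals its setting), not anything defined by the patent:

```python
def security_control(read_limits, read_settings, loadflow, find_violations,
                     adjust, implement):
    """Schematic of the Step-10..Step-70 cycle; all callables are
    application-specific stand-ins."""
    limits = read_limits()                            # Step-10
    settings = read_settings()                        # Step-20
    while True:
        state = loadflow(settings)                    # Step-30: loadflow
        violations = find_violations(state, limits)   # Step-40: evaluate
        if not violations:                            # Step-50: acceptable?
            break
        settings = adjust(settings, violations)       # Step-60: correct
    implement(settings)                               # Step-70: implement
    return settings

# Toy model: a single controlled setting whose resulting "flow" is the
# setting itself, with a flow limit of 1.0; each correction halves it.
final = security_control(
    read_limits=lambda: 1.0,
    read_settings=lambda: 2.0,
    loadflow=lambda s: s,
    find_violations=lambda flow, limit: ["overload"] if flow > limit else [],
    adjust=lambda s, v: 0.5 * s,
    implement=lambda s: None,
)
assert final == 1.0
```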
In fact, simulation of all possible imaginable disturbances is the modern practice, because corrective actions need to be taken before the operation of the individual protection of the power network components.

It is obvious that loadflow computation is consequently performed many times in a real-time operation and control environment, and, therefore, efficient and high-speed loadflow computation is necessary to provide corrective control under changing power system conditions, including an outage or failure of any of the power network components. Moreover, the loadflow computation must be highly reliable in order to yield a converged solution under a wide range of system operating conditions and network parameters. Failure to yield a converged loadflow solution creates a blind spot as to what exactly could be happening in the network, leading to potentially damaging operational and control decisions/actions in capital-intensive power utilities.

The power system control process shown in FIG. 5 is very general and elaborate. It includes control of power flows through network components and voltage control at network nodes. However, the control of voltage magnitude at connected nodes, within the reactive power generation capabilities of electrical machines, including generators, synchronous motors, and capacitor/inductor banks, and within the operating ranges of transformer taps, is normally an integral part of loadflow computation, as described in "LTC Transformers and MVAR Violations in the Fast Decoupled Load Flow, IEEE Trans., PAS-101, No. 9, pp. 3328-3332, September 1982." If under/over-voltage still exists in the results of the loadflow computation, other control actions, manual or automatic, may be taken in step-60 in the above and in FIG. 5.
For example, under-voltage can be alleviated by shedding some of the connected load.

The prior art and the present invention are described using the following symbols and terms:

Ypq = Gpq + jBpq : (p-q)th element of the nodal admittance matrix without shunts

Ypp = Gpp + jBpp : p-th diagonal element of the nodal admittance matrix without shunts

yp = gp + jbp : total shunt admittance at any node-p

Vp = ep + jfp = Vp∠θp : complex voltage of any node-p

Δθp, ΔVp : voltage angle and magnitude corrections

Δep, Δfp : real and imaginary components of voltage corrections

Pp + jQp : net nodal injected power, as calculated

ΔPp + jΔQp : nodal power residue or mismatch

RPp + jRQp : modified nodal power residue or mismatch

Φp : rotation or transformation angle

[RP] : vector of modified real power residues or mismatches

[RQ] : vector of modified reactive power residues or mismatches

[Yθ] : gain matrix of the P-θ loadflow sub-problem defined by eqn. (11)

[YV] : gain matrix of the Q-V loadflow sub-problem defined by eqn. (12)

m : number of PQ-nodes

k : number of PV-nodes

n = m + k + 1 : total number of nodes

q > p : q is a node adjacent to node-p, excluding the case q = p

[ ] : indicates that the enclosed variable symbol is a vector or a matrix

PQ-node : load node, where real power P and reactive power Q are specified

PV-node : generator node, where real power P and voltage magnitude V are specified

Bold-lettered symbols represent complex quantities in the description.

• Loadflow Computation: Each node in a power network is associated with four electrical quantities: voltage magnitude, voltage angle, real power, and reactive power. The loadflow computation involves the calculation/determination of the two unknown electrical quantities for the other two given/specified/scheduled/set/known electrical quantities at each node.
In other words, the loadflow computation involves determination of the unknown quantities in dependence on the given/specified/scheduled/set/known electrical quantities.
• Loadflow Model: a set of equations describing the physical power network and its operation for the purpose of loadflow computation. The term 'loadflow model' can alternatively be referred to as 'model of the power network for loadflow computation'. The process of writing mathematical equations that describe the physical power network and its operation is called mathematical modeling. If the equations do not describe/represent the power network and its operation accurately, the model is inaccurate, and the iterative loadflow computation method could be slow and unreliable in yielding a converged loadflow computation. There could be a variety of loadflow models depending on the organization of the set of equations describing the physical power network and its operation, including Decoupled Loadflow Models, Super Decoupled Loadflow (SDL) Models, the Fast Super Decoupled Loadflow (FSDL) Model, and the Novel Fast Super Decoupled Loadflow (NFSDL) Model.
• Loadflow Method: the sequence of steps used to solve a set of equations describing the physical power network and its operation for the purpose of loadflow computation is called a Loadflow Method, which term can alternatively be referred to as 'loadflow computation method' or 'method of loadflow computation'. One word for a set of equations describing the physical power network and its operation is: Model. In other words, the sequence of steps used to solve a Loadflow Model is a Loadflow Method. The loadflow method involves the definition/formation of a loadflow model and its solution. There could be a variety of loadflow methods depending on the loadflow model and the iterative scheme used to solve the model, including Decoupled Loadflow Methods, Super Decoupled Loadflow (SDL) Methods, the Fast Super Decoupled Loadflow (FSDL) Method, and the Novel Fast Super Decoupled Loadflow (NFSDL) Method.
All decoupled loadflow methods described in this application use either the successive (1θ, 1V) iteration scheme or the simultaneous (1V, 1θ) scheme, defined in the following.

Prior art methods of loadflow computation of the kind carried out as step-30 in FIG. 5 are the well-known Gauss-Seidel Loadflow (GSL) and Super Super Decoupled Loadflow (SSDL) methods. The GSL method is well known to be unable to converge to a high-accuracy solution because of its iteration scheme, which lacks self-iterations; this realization led to the invention of the Gauss-Seidel-Patel Loadflow (GSPL) method. The prior art methods will now be described.

Gauss-Seidel Loadflow: GSL

The complex power injected into node-p of a power network is given by the following relation:

$$P_p - jQ_p = V_p^* \sum_{q=1}^{n} Y_{pq} V_q = V_p^* Y_{pp} V_p + V_p^* \sum_{q>p} Y_{pq} V_q \qquad (1)$$

where

$$P_p = \operatorname{Re}\left\{ V_p^* \sum_{q=1}^{n} Y_{pq} V_q \right\} \qquad (2)$$

$$Q_p = -\operatorname{Im}\left\{ V_p^* \sum_{q=1}^{n} Y_{pq} V_q \right\} \qquad (3)$$

Here, "Re" means "real part of" and "Im" means "imaginary part of".

The Gauss-Seidel (GS) method is used to solve a set of simultaneous algebraic equations iteratively. The GSL method calculates the complex voltage at any node-p from relation (1), as given in relation (4):

$$V_p = \left[ \left\{ (PSH_p - jQSH_p)/V_p^* \right\} - \sum_{q>p} Y_{pq} V_q \right] / Y_{pp} \qquad (4)$$

Iteration Process

Iterations start with an experienced/reasonable/logical guess for the solution. The reference node, also referred to as the slack-node, has its voltage specified, so a starting voltage guess is made for the remaining (n−1) nodes in the n-node network. A node voltage value is immediately updated with its newly calculated value in the iteration process, in which one node voltage is calculated at a time using the latest updated values of the other node voltages. This one-node-voltage-at-a-time calculation process is iterated over the (n−1) nodes in an n-node network; the reference/slack-node voltage, being specified, is not required to be calculated.
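Before writing out the iteration, relations (1)-(3) can be sanity-checked on a toy two-node network (my own illustration, not from the patent; the impedance and voltages are arbitrary): the nodal injections $S_p = V_p \,\overline{\sum_q Y_{pq} V_q}$ must sum to the series-line losses.

```python
import cmath

z12 = 0.01 + 0.05j              # series line impedance (p.u.)
y = 1 / z12
Y = [[y, -y], [-y, y]]          # nodal admittance matrix, no shunts

V = [1.0 + 0.0j, 0.98 * cmath.exp(-0.02j)]   # assumed node voltages

# Relation (1): P_p - jQ_p = V_p^* sum_q Y_pq V_q, i.e. S_p = V_p * conj(I_p)
S = []
for p in range(2):
    I_p = sum(Y[p][q] * V[q] for q in range(2))
    S.append(V[p] * I_p.conjugate())

# The two injections must add up to the line's series loss, |I|^2 * z12:
I_line = y * (V[0] - V[1])
assert abs(sum(S) - abs(I_line) ** 2 * z12) < 1e-12
```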
Now, for iteration (r+1), the complex voltage calculation at node-p, equation (4), and the reactive power calculation at node-p, equation (3), become:

$$V_p^{(r+1)} = \left[ \left\{ (PSH_p - jQSH_p)/(V_p^*)^r \right\} - \sum_{q=1}^{p-1} Y_{pq} V_q^{(r+1)} - \sum_{q=p+1}^{n} Y_{pq} V_q^{r} \right] / Y_{pp} \qquad (5)$$

$$Q_p^{(r+1)} = -\operatorname{Im}\left\{ (V_p^*)^r \sum_{q=1}^{p-1} Y_{pq} V_q^{(r+1)} + (V_p^*)^r \sum_{q=p}^{n} Y_{pq} V_q^{r} \right\} \qquad (6)$$

Convergence

The iteration process is carried out until the changes in the real and imaginary parts of the set of (n−1) node voltages calculated in two consecutive iterations are all less than the specified tolerance ε, as shown in relations (7) and (8). The lower the value of the specified tolerance for the convergence check, the greater the solution accuracy.

$$|\Delta e_p^{(r+1)}| = |e_p^{(r+1)} - e_p^{r}| < \varepsilon \qquad (7)$$

$$|\Delta f_p^{(r+1)}| = |f_p^{(r+1)} - f_p^{r}| < \varepsilon \qquad (8)$$

Accelerated Convergence

The GS method being inherently slow to converge, it is characterized by the use of an acceleration factor applied to the difference in calculated node voltage between two consecutive iterations to speed up the iterative solution process. The accelerated value of the node-p voltage at iteration (r+1) is given by

$$V_p^{(r+1)}(\text{accelerated}) = V_p^{r} + \beta \left( V_p^{(r+1)} - V_p^{r} \right) \qquad (9)$$

where β is a real number called the acceleration factor, the value of which, for the best possible convergence for any given network, can be determined by trial solutions. The GS method is very sensitive to the choice of β; a wrong choice causes very slow convergence and even divergence.

Scheduled or Specified Voltage at a PV-Node

Of the four variables, real power PSHp and voltage magnitude VSHp are specified at a PV-node. If the reactive power calculated using VSHp at the PV-node is within the upper and lower generation capability limits of the PV-node generator, the generator is capable of holding the specified voltage at its terminal.
Therefore, the complex voltage calculated by relation (5), using the actually calculated reactive power Q_p in place of QSH_p, is adjusted to the specified voltage magnitude by relation (10).
V_p^(r+1) = (VSH_p V_p^(r+1))/|V_p^(r+1)|  (10)

Calculation Steps of Gauss-Seidel Loadflow (GSL) Method

The steps of the GSL loadflow computation method are shown in the flowchart of FIG. 1 a. Referring to the flowchart of FIG. 1 a, the different steps are elaborated in steps marked with similar numbers in the following. The words “Read system data” in Step-1 correspond to step-10 and step-20 in FIG. 5, and step-14, step-20, step-32, step-44, step-50 in FIG. 6. All other steps in the following correspond to step-30 in FIG. 5, and step-60, step-62, and step-64 in FIG. 6.

• 1. Read system data and assign an initial approximate solution. If a better solution estimate is not available, set the specified voltage magnitude at PV-nodes, 1.0 p.u. voltage magnitude at PQ-nodes, and all node angles equal to the slack-node angle; this is referred to as the flat-start.
• 2. Form the nodal admittance matrix, and initialize the iteration count r=1.
• 3. Scan all the nodes of the network, except the slack-node, whose voltage, having been specified, need not be calculated. Initialize the node count p=1, and initialize the maximum changes in the real and imaginary parts of the node voltage variables, DEMX=0.0 and DFMX=0.0.
• 4. Test for the type of one node at a time. For the slack-node go to step-12, for a PQ-node go to step-9, and for a PV-node follow the next step.
• 5. Compute Q_p^(r+1) from relation (6), for use as the imaginary part in determining the complex scheduled power at the PV-node, after adjusting its complex voltage to the specified value by relation (10).
• 6. If Q_p^(r+1) is greater than the upper reactive power generation capability limit of the PV-node generator, set QSH_p = the upper limit Q_p^max for use in relation (5), and go to step-9. If not, follow the next step.
• 7.
If Q_p^(r+1) is less than the lower reactive power generation capability limit of the PV-node generator, set QSH_p = the lower limit Q_p^min for use in relation (5), and go to step-9. If not, follow the next step.
• 8. Compute V_p^(r+1) from relation (5) using QSH_p = Q_p^(r+1), adjust its value for the specified voltage at the PV-node by relation (10), and go to step-10.
• 9. Compute V_p^(r+1) from relation (5).
• 10. Compute the changes in the real and imaginary parts of the node-p voltage by using relations (7) and (8), and replace the current values of DEMX and DFMX respectively in case either change is larger.
• 11. Calculate the accelerated value of V_p^(r+1) by using relation (9), and update the voltage by V_p^r = V_p^(r+1) for immediate use in the next node voltage calculation.
• 12. Check whether all n nodes are scanned. If p<n, increment p=p+1 and go to step-4. Otherwise follow the next step.
• 13. If DEMX and DFMX are not both less than the convergence tolerance (ε) specified for the purpose of the accuracy of the solution, advance the iteration count r=r+1 and go to step-3; otherwise follow the next step.
• 14. From the calculated and known values of the complex voltage at the different power network nodes, and the tap positions of the tap changing transformers, calculate the power flows through the power network components, and the reactive power generation at PV-nodes.

Decoupled Loadflow

In a class of decoupled loadflow methods, each method comprises a system of equations (11) and (12), differing in the definition of the elements of [RP], [RQ], [Yθ] and [YV]. It is a system of equations for the separate calculation of voltage angle and voltage magnitude corrections.
[RP] = [Yθ][Δθ]  (11)
[RQ] = [YV][ΔV]  (12)

Successive (1θ, 1V) Iteration Scheme

In this scheme, (11) and (12) are solved alternately with intermediate updating. Each iteration involves one calculation of [RP] and [Δθ] to update [θ], and then one calculation of [RQ] and [ΔV] to update [V].
The sequence of relations (13) to (16) depicts the scheme.
[Δθ] = [Yθ]⁻¹[RP]  (13)
[θ] = [θ] + [Δθ]  (14)
[ΔV] = [YV]⁻¹[RQ]  (15)
[V] = [V] + [ΔV]  (16)

The scheme involves the solution of the system of equations (11) and (12) in an iterative manner depicted in the sequence of relations (13) to (16). This scheme requires a mismatch calculation for each half-iteration, because [RP] and [RQ] are always calculated using the most recent voltage values; it is a block Gauss-Seidel approach. The scheme is block successive, which imparts increased stability to the solution process. This in turn improves convergence and increases the reliability of obtaining a solution.

Super Super Decoupled Loadflow: SSDL

This method is not very sensitive to the restriction applied to the nodal transformation angles; SSDL restricts the transformation angles to a maximum of −48 degrees, determined experimentally for the best possible convergence from nonlinearity considerations, as depicted by relations (19) and (20). However, it gives closely similar performance over a wide range of restriction applied to the transformation angles, say from −36 to −90 degrees.
RP_p = (ΔP_p Cos Φ_p + ΔQ_p Sin Φ_p)/V_p²  — for PQ-nodes  (17)
RQ_p = (ΔQ_p Cos Φ_p − ΔP_p Sin Φ_p)/V_p  — for PQ-nodes  (18)
Cos Φ_p = Absolute(B_pp/√(G_pp² + B_pp²)) ≧ Cos(−48°)  (19)
Sin Φ_p = −Absolute(G_pp/√(G_pp² + B_pp²)) ≧ Sin(−48°)  (20)
RP_p = ΔP_p/(K_p V_p²)  — for PV-nodes  (21)
K_p = Absolute(B_pp/Yθ_pp)  (22)
Yθ_pq = −Y_pq  — for branch r/x ratio ≤ 3.0
Yθ_pq = −(B_pq + 0.9(Y_pq − B_pq))  — for branch r/x ratio > 3.0
Yθ_pq = −B_pq  — for branches connected between two PV-nodes or a PV-node and the slack-node  (23)
YV_pq = −Y_pq  — for branch r/x ratio ≤ 3.0
YV_pq = −(B_pq + 0.9(Y_pq − B_pq))  — for branch r/x ratio > 3.0  (24)
Yθ_pp = Σ_{q>p} −Yθ_pq  and  YV_pp = b_p′ + Σ_{q>p} −YV_pq  (25)
b_p′ = (QSH_p Cos Φ_p − PSH_p Sin Φ_p)/V_s² − b_p Cos Φ_p  or
b_p′ = 2(QSH_p Cos Φ_p − PSH_p Sin Φ_p)/V_s²  (26)
where K_p, defined in relation (22), is initially restricted to a minimum value of 0.75, determined experimentally; however, the restriction is lowered to a minimum value of 0.6 when its average over all values less than 1.0 at PV-nodes is less than 0.6. The restriction on the factor K_p as stated above is system independent; however, it can be tuned for the best possible convergence for any given system. In the case of systems with only PQ-nodes and no PV-nodes, equations (23) and (24) simply become Yθ_pq = YV_pq = −Y_pq.

The branch admittance magnitude in (23) and (24) takes the same algebraic sign as its susceptance. The elements of the two gain matrices differ in that the diagonal elements of [YV] additionally contain the b′ values given by relations (25) and (26), and in respect of the elements corresponding to branches connected between two PV-nodes or a PV-node and the slack-node. Relations (19) and (20) with the inequality sign imply that the transformation angles are restricted to a maximum of −48 degrees for SSDL. The method consists of relations (11) to (26). In two simple variations of the SSDL method, one is to make YV_pq = Yθ_pq, and the other is to make Yθ_pq = YV_pq.

Calculation Steps of Super Super Decoupled Loadflow (SSDL) Method

The steps of the SSDL loadflow computation method are shown in the flowchart of FIG. 1 b. Referring to the flowchart of FIG. 1 b, the different steps are elaborated in steps marked with similar letters in the following. The words “Read system data” in Step-1 correspond to step-10 and step-20 in FIG. 5, and step-14, step-20, step-32, step-44, step-50 in FIG. 6. All other steps in the following correspond to step-30 in FIG.
5, and step-60, step-62, and step-64 in FIG. 6.

• a. Read system data and assign an initial approximate solution. If a better solution estimate is not available, set the voltage magnitude and angle of all nodes equal to those of the slack-node. This is referred to as the slack-start.
• b. Form the nodal admittance matrix, and initialize the iteration counts ITRP=ITRQ=r=0.
• c. Compute the cosine and sine of the nodal rotation angles using relations (19) and (20), and store them. If they are less than the cosine and sine of −48 degrees respectively, equate them, respectively, to those of −48 degrees.
• d. Form the (m+k)×(m+k) matrices [Yθ] and [YV] of (11) and (12) respectively, each in compact storage exploiting sparsity. The matrices are formed using relations (23), (24), (25), and (26). In the [YV] matrix, replace the diagonal elements corresponding to PV-nodes by a very large value (say, 10.0**10). In case [YV] is of dimension (m×m), this is not required to be performed. Factorize [Yθ] and [YV] using the same ordering of nodes regardless of node-types, and store them using the same indexing and addressing information. In case [YV] is of dimension (m×m), it is factorized using a different ordering than that of [Yθ].
• e. Compute the residues [ΔP] at PQ- and PV-nodes and [ΔQ] at PQ-nodes only. If all are less than the tolerance (ε), proceed to step-n. Otherwise follow the next step.
• f. Compute the vector of modified residues [RP] as in (17) for PQ-nodes, and using (21) and (22) for PV-nodes.
• g. Solve (13) for [Δθ] and update the voltage angles using [θ]=[θ]+[Δθ].
• h. Set the voltage magnitudes of PV-nodes equal to the specified values, and increment the iteration count ITRP=ITRP+1 and r=(ITRP+ITRQ)/2.
• i. Compute the residues [ΔP] at PQ- and PV-nodes and [ΔQ] at PQ-nodes only. If all are less than the tolerance (ε), proceed to step-n. Otherwise follow the next step.
• j. Compute the vector of modified residues [RQ] as in (18) for PQ-nodes only.
• k.
Solve (15) for [ΔV] and update the PQ-node voltage magnitudes using [V]=[V]+[ΔV]. While solving equation (15), skip all the rows and columns corresponding to PV-nodes.
• l. Calculate the reactive power generation at PV-nodes and the tap positions of the tap changing transformers. If the maximum and minimum reactive power generation capability or transformer tap position limits are violated, implement the violated physical limits and adjust the loadflow solution.
• m. Increment the iteration count ITRQ=ITRQ+1 and r=(ITRP+ITRQ)/2, and proceed to step-e.
• n. From the calculated and known values of voltage magnitude and voltage angle at the different power network nodes, and the tap positions of the tap changing transformers, calculate the power flows through the power network components, and the reactive power generation at PV-nodes.

SUMMARY OF THE INVENTION

It is a primary object of the present invention to improve the solution accuracy, convergence and efficiency of the prior art GSL and SSDL computation methods under a wide range of system operating conditions and network parameters, for use in power flow control and voltage control in the power system.

The above and other objects are achieved, according to the present invention, with the Gauss-Seidel-Patel Loadflow (GSPL), the prior art Super Super Decoupled Loadflow (SSDL), and their parallel versions PGSPL and PSSDL loadflow computation methods for an Electrical Power System.
In the context of voltage control, the inventive system of parallel loadflow computation for an Electrical Power System consisting of a plurality of electromechanical rotating machines, transformers and electrical loads connected in a network, each machine having a reactive power characteristic and an excitation element which is controllable for adjusting the reactive power generated or absorbed by the machine, and some of the transformers each having a tap changing element which is controllable for adjusting the turns ratio, or alternatively the terminal voltage, of the transformer, said system comprising:

• means for defining and solving a loadflow model of the power network characterized by the inventive PGSPL or PSSDL model, for providing an indication of the quantity of reactive power to be supplied by each generator, including the reference/slack node generator, and for providing an indication of the turns ratio of each tap-changing transformer, in dependence on the obtained-online or given/specified/set/known controlled network variables/parameters, and the physical limits of operation of the network components,
• means for machine control, connected to the said means for defining and solving the loadflow model and to the excitation elements of the rotating machines, for controlling the operation of the excitation elements of the machines to produce or absorb the amount of reactive power indicated by said means for defining and solving the loadflow model, in dependence on the set of obtained-online or given/specified/set controlled network variables/parameters, and the physical limits of the excitation elements,
• means for transformer tap position control, connected to said means for defining and solving the loadflow model and to the tap changing elements of the controllable transformers, for controlling the operation of the tap changing elements to adjust the turns ratios of the transformers indicated by the said means for defining and solving the loadflow model, in dependence on the set of obtained-online or
given/specified/set controlled network variables/parameters, and the operating limits of the tap-changing elements.

The method and system of voltage control according to the preferred embodiment of the present invention provide voltage control for the nodes connected to PV-node generators and tap changing transformers, for a network in which the real power assignments have already been fixed. The said voltage control is realized by controlling the reactive power generation and the transformer tap positions.

The inventive system of parallel loadflow computation can be used to solve a model of the Electrical Power System for voltage control. For this purpose, the real and reactive power assignments or settings at PQ-nodes, the real power and voltage magnitude assignments or settings at PV-nodes, the transformer turns ratios, the open/close status of all circuit breakers, the reactive capability characteristic or curve for each machine, the maximum and minimum tap position limits of the tap changing transformers, the operating limits of all other network components, and the impedance or admittance of all lines are supplied. The GSPL or SSDL loadflow equations are solved by a parallel iterative process until convergence. During this solution, the quantities which can vary are the real and reactive power at the reference/slack node, the reactive power set points for each PV-node generator, the transformer transformation ratios, and the voltages on all PQ-nodes, all being held within the specified ranges. When the iterative process converges to a solution, indications of the reactive power generation at PV-nodes and the transformer turns-ratios or tap-settings are provided. Based on the known reactive power capability characteristics of each PV-node generator, the determined reactive power values are used to adjust the excitation current to each generator to establish the reactive power set points.
The transformer taps are set in accordance with the turns ratio indication provided by the system of loadflow computation.

For voltage control, the system of parallel GSPL or SSDL computation can be employed either on-line or off-line. In off-line operation, the user can simulate and experiment with various sets of operating conditions and determine the reactive power generation and transformer tap setting requirements. An invented parallel computer system can implement any of the parallel loadflow computation methods. For on-line operation, the loadflow computation system is provided with data identifying the current real and reactive power assignments and transformer transformation ratios, the present status of all switches and circuit breakers in the network, and the machine characteristic curves, in steps-10 and -20 in FIG. 5, and steps 12, 20, 32, 44, and 50 in FIG. 6 described below. Based on this information, a model of the system provides the values for the corresponding node voltages, the reactive power set points for each machine, and the transformation ratio and tap changer position for each transformer.

Inventions include the Gauss-Seidel-Patel Loadflow (GSPL) method for the solution of complex simultaneous algebraic power injection equations, or any set of complex simultaneous algebraic equations arising in any other subject areas. Further inventions are a technique of decomposing a network into sub-networks for the solution of the sub-networks in parallel, referred to as Suresh's diakoptics, a technique of relating the solutions of the sub-networks into a network-wide solution, and the best possible parallel computer architecture ever invented to carry out the solution of the sub-networks in parallel.

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1 shows flow-charts of the prior art GSL and SSDL methods

FIG. 2 is a one-line diagram of the IEEE 14-node network and its decomposition into level-1 connectivity sub-networks

FIG. 3 shows flow-chart embodiments of the invented GSPL and PGSPL methods

FIG.
4 is a block diagram of the invented parallel computer architecture/organization

FIG. 5 prior art is a flow-chart of the overall controlling method for an electrical power system involving loadflow computation as a step, which can be executed using one of the invented loadflow computation methods of FIG. 3

FIG. 6 prior art is a flow-chart of the simple special case of a voltage control system in the overall controlling system of FIG. 5 for an electrical power system

FIG. 7 prior art is a one-line diagram of an exemplary 6-node power network having a reference/slack/swing node, two PV-nodes, and three PQ-nodes

DESCRIPTION OF A PREFERRED EMBODIMENT

A loadflow computation is involved as a step in power flow control and/or voltage control in accordance with FIG. 5 or FIG. 6. A preferred embodiment of the present invention is described with reference to FIG. 7 as directed to achieving voltage control.

FIG. 7 is a simplified one-line diagram of an exemplary utility power network to which the present invention may be applied. The fundamentals of one-line diagrams are described in section 6.11 of the text ELEMENTS OF POWER SYSTEM ANALYSIS, fourth edition, by William D. Stevenson, Jr., McGraw-Hill Company, 1982. In FIG. 7, each thick vertical line is a network node. The nodes are interconnected in a desired manner by transmission lines and transformers, each having its own impedance, which appears in the loadflow models. Two transformers in FIG. 7 are equipped with tap changers to control their turns ratios in order to control the terminal voltages of node-1 and node-2, where large loads are connected.

Node-6 is the reference node, alternatively referred to as the slack- or swing-node, representing the biggest power plant in the power network. Nodes-4 and -5 are PV-nodes where generators are connected, and nodes-1, -2, and -3 are PQ-nodes where loads are connected.
It should be noted that nodes-4, -5, and -6 each represent a power plant that contains many generators in parallel operation. The single generator symbol at each of the nodes-4, -5, and -6 is the equivalent of all the generators in each plant. The power network further includes controllable circuit breakers, located at each end of the transmission lines and transformers, and depicted by cross markings in the one-line diagram of FIG. 7. The circuit breakers can be operated, in other words opened or closed, manually by the power system operator, or the relevant circuit breakers operate automatically in consequence of unhealthy or faulty operating conditions. The operation of one or more circuit breakers modifies the configuration of the network. The arrows extending from certain nodes represent loads.

A goal of the present invention is to provide a reliable and computationally efficient loadflow computation that appears as a step in the power flow control and/or voltage control systems of FIG. 5 and FIG. 6. However, the preferred embodiment of loadflow computation as a step in the control of the terminal node voltages of PV-node generators and tap-changing transformers is illustrated in the flow diagram of FIG. 6, in which the present invention resides in function steps 60 and 62.

A short description of other possible embodiments of the present invention is also provided herein. The present invention relates to the control of utility/industrial power networks of the types including a plurality of power plants/generators and one or more motors/loads, and connected to other external utilities. In utility/industrial systems of this type, it is the usual practice to adjust the real and reactive power produced by each generator and each of the other sources, including synchronous condensers and capacitor/inductor banks, in order to optimize the real and reactive power generation assignments of the system.
Healthy or secure operation of the network can be shifted to optimized operation through corrective control produced by optimization functions without violation of security constraints. This is referred to as security constrained optimization of operation. Such an optimization is described in U.S. Pat. No. 5,081,591, dated Jan. 13, 1992: “Optimizing Reactive Power Distribution in an Industrial Power Network”, where the present invention can be embodied by replacing step nos. 56 and 66 each by a step of constant gain matrices [Yθ] and [YV], and replacing the steps of “Exercise Newton-Raphson Algorithm” by steps of “Exercise parallel GSPL or SSDL Computation” in place of steps 58 and 68. This is just to indicate the possible embodiment of the present invention in optimization functions, as in many others including the state estimation function. However, the invention is being claimed through a simplified embodiment without an optimization function, as in FIG. 6 in this application. The inventive steps-60 and -62 in FIG. 6 are different from the corresponding steps-56 and -58, which constitute the well known Newton-Raphson loadflow method, and were not inventive even in U.S. Pat. No. 5,081,591.

In FIG. 6, function step 10 provides stored impedance values of each network component in the system. This data is modified in a function step 12, which contains stored information about the open or close status of each circuit breaker. For each breaker that is open, function step 12 assigns a very high impedance to the associated line or transformer. The resulting data is then employed in a function step 14 to establish an admittance matrix for the power network. The data provided by function step 10 can be input by the computer operator from calculations based on measured values of impedance of each line and transformer, or on the basis of impedance measurements after the power network has been assembled.

Each of the transformers T1 and T2 in FIG.
7 is a tap changing transformer having a plurality of tap positions, each representing a given transformation ratio. An indication of the initially assigned transformation ratio for each transformer is provided by function step 20.

The indications provided by function steps 14 and 20 are supplied to a function step 60, in which the constant gain matrices [Yθ] and [YV] of any of the invented super decoupled loadflow models are constructed, factorized and stored. The gain matrices [Yθ] and [YV] are conventional tools employed for solving the decoupled loadflow model defined by equations (11) and (12) for a power system.

Indications of the initial reactive power, or Q, on each node, based on initial calculations or measurements, are provided by a function step 30, and these indications are used in function step 32 to assign a Q level to each generator and motor. Initially, the Q assigned to each machine can be the same as the indicated Q value for the node to which that machine is connected.

An indication of the measured real power, P, on each node is supplied by function step 40. Indications of the assigned/specified/scheduled/set generating plant loads, constituted by a known program, are provided by function step 42, which assigns the real power, P, load for each generating plant on the basis of the total P which must be generated within the power system. The value of P assigned to each power plant represents an economic optimum, and these values represent fixed constraints on the variations which can be made by the system according to the present invention. The indications provided by function steps 40 and 42 are supplied to function step 44, which adjusts the P distribution on the various plant nodes accordingly.
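The admittance-matrix formation of function steps 10, 12, and 14 described above can be sketched as follows. This is a minimal illustration under the assumption of simple series-impedance branches (no shunts or off-nominal transformer taps); the names `build_ybus`, `branches`, and `open_breakers` are illustrative, not from the patent.

```python
import numpy as np

def build_ybus(n_nodes, branches, open_breakers=frozenset()):
    """Sketch of function steps 10-14: form the nodal admittance matrix from
    branch impedance data (step 10), assigning a very high impedance to any
    branch whose circuit breaker is open (step 12), then assembling the
    admittance matrix (step 14)."""
    HIGH_IMPEDANCE = 1e10  # an open breaker effectively disconnects the branch
    Y = np.zeros((n_nodes, n_nodes), dtype=complex)
    for branch_id, (p, q, z) in enumerate(branches):
        if branch_id in open_breakers:
            z = HIGH_IMPEDANCE
        y = 1.0 / z
        Y[p, p] += y   # self admittances
        Y[q, q] += y
        Y[p, q] -= y   # mutual admittances
        Y[q, p] -= y
    return Y
```

With no shunt elements, each row of the resulting matrix sums to zero, and opening a breaker drives the corresponding mutual admittance to a negligible value.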
Function step 50 assigns an initial approximate or guess solution to begin the iterative method of loadflow computation, and reads the data file of operating limits on power network components, such as the maximum and minimum reactive power generation capability limits of the PV-node generators.

The indications provided by function steps 32, 44, 50 and 60 are supplied to function step 62, where the inventive parallel GSPL or SSDL loadflow computation is carried out, the results of which appear in function step 64. The loadflow computation yields the voltage magnitudes and voltage angles at PQ-nodes, the real and reactive power generation by the reference/slack/swing node generator, the voltage angles and reactive power generation indications at PV-nodes, and the transformer turns ratio or tap position indications for the tap changing transformers. The system stores in step 62 a representation of the reactive capability characteristic of each PV-node generator, and these characteristics act as constraints on the reactive power that can be calculated for each PV-node generator for indication in step 64. The indications provided in step 64 actuate the machine excitation control and transformer tap position control. All the loadflow computation methods using SSDL models can be used to effect efficient and reliable voltage control in power systems as in the process flow diagram of FIG. 6.
Gauss-Seidel-Patel Loadflow: GSPL

The Gauss-Seidel numerical method is well known to be unable to converge to a high accuracy solution; this problem has been resolved for the first time in the proposed Gauss-Seidel-Patel (GSP) numerical method.

The GSP method introduces the concept of self-iteration of each calculated variable until convergence, before proceeding to calculate the next. This is achieved by replacing relation (5) by relation (27), stated in the following, in which the self-iteration-(sr+1) over a node variable itself, within the global iteration-(r+1) over the (n−1) nodes of the n-node network, is depicted. During the self-iteration process only V_p changes, without affecting any of the terms involving V_q. At the start of the self-iteration V_p^sr = V_p^r, and at the convergence of the self-iteration V_p^(r+1) = V_p^(sr+1).

(V_p^(sr+1))^(r+1) = [{(PSH_p − jQSH_p)/((V_p*)^sr)^r} − Σ_{q=1 to p−1} Y_pq V_q^(r+1) − Σ_{q=p+1 to n} Y_pq V_q^r]/Y_pp  (27)

Self-Convergence

The self-iteration process is carried out until the changes in the real and imaginary parts of the node-p voltage calculated in two consecutive self-iterations are less than the specified tolerance. It has been possible to establish a relationship between the tolerance specification for self-convergence and the tolerance specification for global-convergence.
It is found sufficient for the self-convergence tolerance specification to be ten times the global-convergence tolerance specification.
|Δe_p^(sr+1)| = |e_p^(sr+1) − e_p^sr| < 10ε  (28)
|Δf_p^(sr+1)| = |f_p^(sr+1) − f_p^sr| < 10ε  (29)

For a global-convergence tolerance specification of 0.000001, it has been found sufficient to have a self-convergence tolerance specification of 0.00001 in order to obtain maximum real and reactive power mismatches of 0.0001 in the converged solution. However, for small networks under easy-to-solve conditions they could respectively be 0.00001 and 0.0001, or 0.000001 and 0.0001, and for large networks under difficult-to-solve conditions they sometimes need to be respectively 0.0000001 and 0.000001.

Network Decomposition Technique: Suresh's Diakoptics

A network decomposition technique referred to as Suresh's diakoptics involves determining a sub-network for each node, comprising its directly connected nodes, referred to as level-1 nodes, their directly connected nodes, referred to as level-2 nodes, and so on. The level of outward connectivity for the local solution of a sub-network around a given node is to be determined experimentally. This is particularly true for gain matrix based methods such as the Newton-Raphson (NR) and SSDL methods. Sub-networks can be solved by any of the known methods, including the Gauss-Seidel-Patel Loadflow (GSPL) method.

In the case of the GSPL method, only one level of outward connectivity around each node is found to be sufficient, giving a number of sub-networks equal to the number of nodes. Level-1 connectivity sub-networks for the IEEE 14-node network are shown in FIG. 2 b. The local solution of the equations of each sub-network could be iterated for an experimentally determined number of two or more iterations. However, a maximum of two iterations is found to be sufficient.
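The level-1 decomposition described above, together with the pruning of redundant sub-networks mentioned later for gain matrix based methods, might be sketched as follows. This is an illustrative reading of the text, with hypothetical names; note that in this sketch two identical sub-networks would both be retained.

```python
def level1_subnetworks(adjacency):
    """Suresh's diakoptics at level-1 connectivity (sketch): the sub-network
    formed around each node consists of the node itself plus its directly
    connected (level-1) nodes."""
    return {p: {p} | set(neighbours) for p, neighbours in adjacency.items()}

def prune_redundant(subnetworks):
    """Drop any sub-network strictly contained in another one, since its
    separate local solution would be redundant."""
    return {p: nodes for p, nodes in subnetworks.items()
            if not any(q != p and nodes < other
                       for q, other in subnetworks.items())}
```

For example, in a star network every level-1 sub-network around a leaf node lies inside the sub-network around the centre node, so only the latter survives pruning.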
In the case of the GSPL method, the processing load on the processors can be approximately equalized by assigning two or more smaller sub-networks to a single processor, to be solved separately in sequence.

Sometimes it is possible that the sub-network around a given node is part of the sub-network around another node, making it redundant, so that fewer than (m+k) sub-networks need local solution in the case of gain matrix based methods like SSDL. Level-1 connectivity sub-networks of the IEEE 14-node network for parallel solution by, say, the SSDL method are shown in FIG. 2 b. The local solution iteration over a sub-network is not required for gain matrix based methods like SSDL. FIG. 2 c shows the grouping of the non-redundant sub-networks of FIG. 2 b in an attempt to equalize the size of the sub-networks, reducing the number of processors without increasing the time for the solution of the whole network.

It should be noted that no two decomposed network parts contain the same set of nodes, or the same set of lines connected to nodes, though some of the same nodes could appear in two or more sub-networks.

Decomposing the network into level-1 connectivity sub-networks provides the maximum possible parallelism, and hopefully the fastest possible solution. However, the optimum outward level of connectivity for the sub-network around a node can be determined experimentally for the solution of large networks by a gain matrix based method like SSDL.

Relating Sub-Network Solutions to Get the Network-Wide Global Solution

Suresh's decomposition subroutine, run by the server-processor, decomposes the network into sub-networks, and a separate processor is assigned to solve each sub-network simultaneously in parallel. A node-p of the network could be contained in two or more sub-networks. Say a node-p is contained in, or part of, ‘q’ sub-networks. If the GSPL method is used, the voltage calculation for node-p is performed by each of the ‘q’ sub-networks.
Add the ‘q’ voltages calculated for node-p by the ‘q’ sub-networks and divide by ‘q’ to take the average, as given by relation (30).

Vp(r+1) = (Vp1(r+1) + Vp2(r+1) + Vp3(r+1) + . . . + Vpq(r+1))/q  (30)

If a gain matrix based method like SSDL is used, a voltage angle correction and a voltage magnitude correction for node-p are calculated by each of the ‘q’ sub-networks in which node-p is contained. Add the ‘q’ voltage angle corrections and the ‘q’ voltage magnitude corrections calculated for node-p by the ‘q’ sub-networks and divide by ‘q’ to take the average, as given by relations (31) and (32).

Δθp(r+1) = (Δθp1(r+1) + Δθp2(r+1) + Δθp3(r+1) + . . . + Δθpq(r+1))/q  (31)
ΔVp(r+1) = (ΔVp1(r+1) + ΔVp2(r+1) + ΔVp3(r+1) + . . . + ΔVpq(r+1))/q  (32)

Sometimes gain matrix based methods can be organized to calculate directly the real and imaginary components of the complex node voltages, or the GSPL-method can be decoupled into calculating the real (ep) and imaginary (fp) components of the complex voltage of node-p, each of which is contributed to by the ‘q’ sub-networks in which node-p is contained. Add the ‘q’ real parts of the voltages calculated for node-p by the ‘q’ sub-networks and divide by ‘q’. Similarly, add the ‘q’ imaginary parts of the voltages calculated for the same node-p by the ‘q’ sub-networks and divide by ‘q’ to take the average, as given by relations (33) and (34).

ep(r+1) = (ep1(r+1) + ep2(r+1) + ep3(r+1) + . . . + epq(r+1))/q  (33)
fp(r+1) = (fp1(r+1) + fp2(r+1) + fp3(r+1) + . . . + fpq(r+1))/q  (34)

Relations (30) to (34) can also, alternatively, be written as relations (35) to (39), in which each average is replaced by a signed root-mean-square of the ‘q’ contributions.

Vp(r+1) = √[(Re((Vp1(r+1))2) + Re((Vp2(r+1))2) + . . . + Re((Vpq(r+1))2))/q] + j√[(Im((Vp1(r+1))2) + Im((Vp2(r+1))2) + . . . + Im((Vpq(r+1))2))/q]  (35)
Δθp(r+1) = √[((Δθp1(r+1))2 + (Δθp2(r+1))2 + . . . + (Δθpq(r+1))2)/q]  (36)
ΔVp(r+1) = √[((ΔVp1(r+1))2 + (ΔVp2(r+1))2 + . . . + (ΔVpq(r+1))2)/q]  (37)
ep(r+1) = √[((ep1(r+1))2 + (ep2(r+1))2 + . . . + (epq(r+1))2)/q]  (38)
fp(r+1) = √[((fp1(r+1))2 + (fp2(r+1))2 + . . . + (fpq(r+1))2)/q]  (39)

Mathematically, the square of any positive or negative number is positive. Therefore, in the above relations, if the original (not-squared) value of a number is negative, the same algebraic sign is re-attached after squaring that number. Similarly, if the mean of the signed squared values turns out to be negative, a negative sign is attached after taking the square root of the unsigned number.

Parallel Computer Architecture

Suresh's diakoptics, together with the technique of relating sub-network solution estimates to get the global solution estimate, does not require any communication between the processors assigned to solve the sub-networks. All processors access the commonly shared memory, possibly through a separate port for each processor in a multi-port memory organization, to speed up the access.
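The contribute-then-average step of relations (30) to (34) can be sketched as below. This is a sequential simulation of what the parallel processors and the shared accumulator do; the names are illustrative assumptions.

```python
def global_estimate(sub_solutions):
    """Relation (30): average the 'q' voltage contributions per node.
    sub_solutions: list of dicts {node: complex_voltage}, one per sub-network."""
    acc, count = {}, {}
    for sol in sub_solutions:              # each worker's write to shared memory
        for node, v in sol.items():
            acc[node] = acc.get(node, 0j) + v
            count[node] = count.get(node, 0) + 1
    # server step: divide each accumulated element by its contribution count q
    return {node: acc[node] / count[node] for node in acc}

est = global_estimate([
    {1: 1.02 + 0.01j, 2: 0.98 - 0.02j},    # sub-network containing nodes 1, 2
    {1: 1.00 + 0.03j, 2: 1.00 + 0.00j},    # another sub-network with nodes 1, 2
])
```

Note that each sub-network only adds into the accumulator, so no worker ever needs to read another worker's result, which is what removes the inter-processor communication requirement.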
Each processor accesses the commonly shared memory to write the results of the local solution of its assigned sub-network, contributing to the generation of the network-wide or global solution estimate. The generation of the global solution estimate marks the end of an iteration. The next iteration begins with the processors accessing the latest global solution estimate for the local solution of their assigned sub-networks. That means only the beginning of the solution of the sub-networks by the assigned processors needs to be synchronized in each iteration, and this synchronization can be effected by the server-processor.

There are two possible approaches to achieving parallel processing for a problem. The first is to design and develop a solution technique for the best possible parallel processing of a problem, and then design a parallel computer organization/architecture to achieve it. The second is to design and develop parallel processing of a solution technique that can best be carried out on any existing available parallel computer. The inventions of this application follow the first approach. The trick is in breaking the large problem into small sub-problems, solving the sub-problems each on a separate processor simultaneously in parallel, and then relating the solutions of the sub-problems to obtain the global solution of the original whole problem. That is exactly what is achieved by the invention of Suresh's diakoptics, which breaks the large network into small sub-networks, solves the sub-networks each on a separate processor simultaneously in parallel, and then, by the technique of relating the solutions of the sub-networks, obtains the network-wide global solution of the original whole network.

The invented technique of parallel loadflow computation can best be carried out on the invented parallel computer architecture of FIG. 4. The main inventive feature of the architecture of FIG.
4 is that the processors are not required to communicate with each other, and that each processor is provided with private local main memory for the local solution of its sub-problem, contributing to the generation of the network-wide global solution in the commonly shared main memory. Other applications can be developed that can best be carried out using the parallel computer architecture of FIG. 4.

FIG. 4 is a generalized and simplified block diagram of a multiprocessor computer system comprising a few to thousands of processors, meaning the value of the number ‘n’ in ‘processor-n’ could range from small to the thousands. The invention of the server processor-array processor architecture of the computer of FIG. 4 comprises a multiprocessor system with processors and an input/output (I/O) adapter coupled, by a common bus arrangement, to the commonly shared main memory. One of the processors is the main/server processor coupled to the I/O adapter, of which there is only one in the system, coupled in turn to the I/O devices. The I/O adapter and I/O devices are not explicitly shown but are considered to be included in the block marked I/O unit. Similarly, dedicated cache memories, if required for each processor and the I/O adapter, are considered to be included in each processor block and in the block of the I/O unit. Each processor is also provided with its private local main memory for local processing, which is not visible or accessible to any other processor or to the I/O unit. FIG. 4 also explicitly depicts that no communication of any sort is required between processors, except that each processor communicates only with the main/server processor for control and coordination purposes. All connecting lines with arrows at each end indicate two-way asynchronous communication.

Distributed Computing Architecture

The parallel computer architecture depicted in FIG. 4 lends itself to a distributed computing architecture.
This is achieved when each processor and its associated memory, forming a self-contained computer in itself, is physically located at a network node or substation and communicates over communication lines with the commonly shared memory and the server processor, both located at the central station or load dispatch center of the power network. It is possible to have an input/output unit with the computer at each network node or substation, which can be used to read local sub-network data in parallel and communicate it over a communication line to the commonly shared memory for the formation and storage of network-wide global data at the central load dispatch center of the power network.

CONCLUSION

The inventions of Suresh's diakoptics, the technique of relating local solutions of sub-networks into a network-wide global solution, and the parallel computer architecture depicted in FIG. 4 afford an opportunity for the maximum possible parallelism with the minimum possible communication and synchronization requirements. Also, the parallel computer architecture and the parallel computer program are scalable, which is not possible with most of the parallel computers built so far. Moreover, these inventions provide a bridging and unifying model for parallel computation.

Calculation Steps for Parallel Gauss-Seidel-Patel Loadflow Method

The steps of the parallel Gauss-Seidel-Patel loadflow (PGSPL) computation method, using the invented parallel computer of FIG. 4, are shown in the flowchart of FIG. 3 b. Referring to the flowchart of FIG. 3 b, the different steps are elaborated under matching numbers in the following. The words “Read system data” in Step-1 correspond to step-10 and step-20 in FIG. 5, and step-14, step-20, step-32, step-44, step-50 in FIG. 6. All other steps in the following correspond to step-30 in FIG. 5, and step-60, step-62, and step-64 in FIG. 6.

• 21. Read system data and assign an initial approximate solution.
If a better solution estimate is not available, set the specified voltage magnitude at PV-nodes, 1.0 p.u. voltage magnitude at PQ-nodes, and all node angles equal to that of the slack-node, which is referred to as the flat-start. The solution guess is stored in a complex voltage vector, say V(I), where “I” takes values from 1 to n, the number of nodes in the whole network.
• 22. All processors simultaneously access the network-wide global data stored in the commonly shared memory, which can be under the control of the server-processor, to form and locally store the required admittance matrix for each sub-network.
• 23. Initialize a complex voltage vector, say VV(I) = CMPLX(0.0, 0.0), that receives solution contributions from the sub-networks.
• 24. All processors simultaneously access the network-wide global latest solution estimate vector V(I) available in the commonly shared memory to read into local processor memory the required elements of the vector V(I), and perform 2 iterations of the GSPL-method in parallel for each sub-network to calculate node voltages.
• 25. As soon as the 2 iterations are performed for a sub-network, its new local solution estimate is contributed to the vector VV(I) in the commonly shared memory, under the control of the server processor, without any need for synchronization. It is possible that a small sub-network has finished its 2 iterations and already contributed to the vector VV(I) while 2 iterations are still being performed for a larger sub-network.
• 26. Contribution from a sub-network to the vector VV(I) means that the complex voltage estimates calculated for the nodes of the sub-network are added to the corresponding elements of the vector VV(I).
After all sub-networks have finished their 2 iterations and contributed to the vector VV(I), each of its elements is divided by the number of contributions from all sub-networks to that element, or by the number of nodes directly connected to the node represented by the vector element, leading to the transformation of the vector VV(I) into the new network-wide global solution estimate. This operation is performed as indicated in relation (30) or (35). This step requires synchronization, in that the division operation on each element of the vector VV(I) can be performed only after all sub-networks have been solved and have made their contributions to the vector VV(I).
• 27. Find the maximum difference in the real and imaginary parts of [VV(I)−V(I)].
• 28. Calculate the accelerated value of VV(I) by relation (9) as VV(I) = V(I) + β[VV(I)−V(I)], and perform V(I) = VV(I).
• 29. If the maximum difference calculated in step-27 is not less than the solution accuracy tolerance specified as the stopping criterion for the iteration process, increment the iteration count and go to step-23; otherwise follow the next step.
• 30. From the calculated values of the complex voltages at the different power network nodes, and the tap positions of tap-changing transformers, calculate the power flows through the power network components and the reactive power generation at PV-nodes.

It can be seen that steps-22, -24, and -25 are performed in parallel, while the other steps are performed by the server-processor. However, with refined programming, it is possible to delegate some of the server-processor tasks to the parallel processors. For example, any assignment functions of step-21 and step-22 can be performed in parallel.
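Steps 21 to 30 can be summarized in the following sketch, which serially simulates the parallel sub-network solves. Here `solve_subnetwork` stands in for the 2 GSPL iterations each processor runs, `beta` is the acceleration factor of relation (9), and the toy relaxation in the usage example is purely illustrative; none of these names come from the patent's actual code.

```python
def pgspl_outer_loop(V, subnets, solve_subnetwork, beta=1.4,
                     tol=1e-6, max_iter=100):
    """V: dict {node: complex voltage} initial estimate; subnets: list of
    node lists; every node is assumed to appear in at least one sub-network."""
    for _ in range(max_iter):                    # global iteration r
        VV = {p: 0j for p in V}                  # step 23: zero accumulator
        count = {p: 0 for p in V}
        for nodes in subnets:                    # steps 24-26 (parallel in reality)
            local = solve_subnetwork(nodes, V)   # stand-in for 2 GSPL iterations
            for p, v in local.items():
                VV[p] += v
                count[p] += 1
        for p in VV:                             # step 26: average, relation (30)
            VV[p] /= count[p]
        diff = max(max(abs((VV[p] - V[p]).real),           # step 27
                       abs((VV[p] - V[p]).imag)) for p in V)
        V = {p: V[p] + beta * (VV[p] - V[p]) for p in V}   # step 28, relation (9)
        if diff < tol:                           # step 29: stopping criterion
            break
    return V

# Usage with a toy local solve that relaxes each node halfway toward a
# known answer (illustrative stand-in, not a real GSPL sub-network solve).
targets = {1: 1.0 + 0.0j, 2: 0.95 - 0.05j}
def toy_solve(nodes, V):
    return {p: V[p] + 0.5 * (targets[p] - V[p]) for p in nodes}

V0 = {1: 1.0 + 0.0j, 2: 1.0 + 0.0j}
sol = pgspl_outer_loop(V0, [[1, 2], [2]], toy_solve, beta=1.0)
```

The accumulator VV and the contribution counts are exactly what would live in the commonly shared memory, with only the averaging, convergence test, and acceleration done by the server-processor.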
Even the reading of system data can be performed in parallel, particularly in a distributed computing environment where each sub-network's data can be read in parallel by substation computers connected to operate in parallel.

Calculation Steps for Parallel Super Super Decoupled Loadflow Method

The steps of the Parallel Super Super Decoupled Loadflow (PSSDL) computation method using the invented parallel computer of FIG. 4 are given in the following without a flowchart.

• 41. Read system data and assign an initial approximate solution. If a better solution estimate is not available, set all node voltage magnitudes and all node angles equal to those of the slack-node, which is referred to as the slack-start. The solution guess is stored in voltage magnitude and angle vectors, say VM(I) and VA(I), where “I” takes values from 1 to n, the number of nodes in the whole network.
• 42. All processors simultaneously access the network-wide global data stored in the commonly shared memory, which can be under the control of the server-processor, to form and locally store the required admittance matrix for each sub-network. Form the gain matrices of the SSDL-method for each sub-network, factorize them, and store them locally in the memory associated with each processor.
• 43. Initialize vectors, say DVM(I) = 0.0 and DVA(I) = 0.0, that receive, respectively, the voltage magnitude correction and voltage angle correction contributions from the sub-networks.
• 44. Calculate the real and reactive power mismatches for all the nodes in parallel, and find the maximum real power mismatch and the maximum reactive power mismatch by the server-computer. If both maximum values are less than the specified convergence tolerance, go to step-49. Otherwise, follow the next step.
• 45.
All processors simultaneously access the network-wide global latest solution estimates VM(I) and VA(I) available in the commonly shared memory to read into local processor memory the required elements of the vectors VM(I) and VA(I), and perform 1 iteration of the SSDL-method in parallel for each sub-network to calculate node voltage magnitudes and node voltage angles.
• 46. As soon as the 1 iteration is performed for a sub-network, its new local solution correction estimates are contributed to the vectors DVM(I) and DVA(I) in the commonly shared memory, under the control of the server processor, without any need for synchronization. It is possible that a small sub-network has finished its 1 iteration and already contributed to the vectors DVM(I) and DVA(I) while the 1 iteration is still being performed for a larger sub-network.
• 47. Contribution from a sub-network to the vectors DVM(I) and DVA(I) means that the voltage magnitude and voltage angle corrections calculated for the nodes of the sub-network are added to the corresponding elements of the vectors DVM(I) and DVA(I). After all sub-networks have finished their 1 iteration and contributed to the vectors DVM(I) and DVA(I), each of their elements is divided by the number of contributions from all sub-networks to that element, or by the number of nodes directly connected to the node represented by the vector element, leading to the transformation of the vectors DVM(I) and DVA(I) into the new network-wide global solution correction estimates. This operation is performed as indicated in relations (31) and (32) or (36) and (37). This step requires synchronization, in that the division operation on each element of the vectors DVM(I) and DVA(I) can be performed only after all sub-networks have been solved and have made their contributions to the vectors DVM(I) and DVA(I).
• 48. Update the solution estimates VM(I) and VA(I), and proceed to step-43.
• 49.
From the calculated values of the complex voltages at the different power network nodes, and the tap positions of tap-changing transformers, calculate the power flows through the power network components and the reactive power generation at PV-nodes.

It can be seen that steps-42, -44, and -45 are performed in parallel, while the other steps are performed by the server-processor. However, with refined programming, it is possible to delegate some of the server-processor tasks to the parallel processors. For example, any assignment functions such as those in step-43 can be performed in parallel. Even the reading of system data can be performed in parallel, particularly in a distributed computing environment where each sub-network's data can be read in parallel by substation computers connected to operate in parallel.

General Statements

The system stores a representation of the reactive capability characteristic of each machine, and these characteristics act as constraints on the reactive power that can be calculated for each machine.

While the description above refers to particular embodiments of the present invention, it will be understood that many modifications may be made without departing from the spirit thereof. The accompanying claims are intended to cover such modifications as would fall within the true scope and spirit of the present invention.

The presently disclosed embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims in addition to the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

REFERENCES

Foreign Patent Document

• 1. U.S. Pat. No. 4,868,410 dated Sep. 19, 1989: “System of Load Flow Calculation for Electric Power System”
• 2. U.S. Pat. No. 5,081,591 dated Jan.
14, 1992: “Optimizing Reactive Power Distribution in an Industrial Power Network”
Published Pending Patent Applications
• 3. Canadian Patent Application Number: CA2107388 dated 9 Nov. 1993: “System of Fast Super Decoupled Loadflow Calculation for Electrical Power System”
• 4. International Patent Application Number: PCT/CA/2003/001312 dated 29 Aug. 2003: “System of Super Super Decoupled Loadflow Computation for Electrical Power System”
Other Publications
• 5. Stagg G. W. and El-Abiad A. H., “Computer Methods in Power System Analysis”, McGraw-Hill, New York, 1968
• 6. S. B. Patel, “Fast Super Decoupled Loadflow”, IEE Proceedings Part-C, Vol. 139, No. 1, pp. 13-20, January 1992
• 7. Shin-Der Chen, Jiann-Fuh Chen, “Fast loadflow using multiprocessors”, Electrical Power & Energy Systems, 22 (2000) 231-236

## Claims (4)

1. A method of forming/defining and solving a model of a power network to affect control of voltages and power flows in a power system, comprising the steps of:
obtaining on-line/simulated data of open/close status of switches and circuit breakers in the power network, and reading data of operating limits of components of the power network including PV-node, a generator-node where Real-Power-P and Voltage-Magnitude-V are given/assigned/specified/set, maximum and minimum reactive power generation capability limits of generators, and transformers tap position limits,
obtaining on-line readings of given/assigned/specified/set Real-Power-P and Reactive-Power-Q at PQ-nodes, Real-Power-P and voltage-magnitude-V at PV-nodes, voltage magnitude and angle at a reference/slack node, and transformer turns ratios, wherein said on-line readings are the controlled variables/parameters,
initiating loadflow computation with initial approximate/guess solution of the same voltage magnitude and angle as those of the reference/slack node for all the PQ-nodes and the PV-nodes, said initial approximate/guess solution is referred to as a
slack-start,
performing loadflow computation to calculate complex voltages or their real and imaginary components or voltage magnitude corrections and voltage angle corrections at nodes of the power network providing for calculation of power flow through different components of the power network, and to calculate reactive power generation and transformer tap-position indications,
decomposing the power network for performing said loadflow computation in parallel by a method referred to as Suresh's diakoptics that involves determining a sub-network for each node involving directly connected nodes referred to as level-1 nodes and directly connected nodes to level-1 nodes referred to as level-2 nodes, and a level of outward connectivity for local solution of a sub-power-network around a given node is determined experimentally,
initializing, at the beginning of each new iteration, a vector of dimension equal to the number of nodes in the power network with each element value zero,
solving all sub-networks in parallel using available solution estimate at the start of the iteration,
adding newly calculated solution estimates or corrections to the available solution estimate for a node resulting from different sub-networks, ‘q’ number of sub-networks, in which a node is contained, in a corresponding vector element that gets initialized zero at the beginning of each new iteration,
counting the number of additions and calculating new solution estimate or corrections to the available solution estimate by taking the average or root mean square value using any relevant relations (30) to (39) in the following depending on the loadflow computation method used, and
storing the new solution estimate at the end of the current iteration as initial available estimate for the next iteration,
wherein said Suresh's diakoptics method uses the following relations,
Vp(r+1) = (Vp1(r+1) + Vp2(r+1) + Vp3(r+1) + . . . + Vpq(r+1))/q  (30)
Δθp(r+1) = (Δθp1(r+1) + Δθp2(r+1) + Δθp3(r+1) + . . . + Δθpq(r+1))/q  (31)
ΔVp(r+1) = (ΔVp1(r+1) + ΔVp2(r+1) + ΔVp3(r+1) + . . . + ΔVpq(r+1))/q  (32)
ep(r+1) = (ep1(r+1) + ep2(r+1) + ep3(r+1) + . . . + epq(r+1))/q  (33)
fp(r+1) = (fp1(r+1) + fp2(r+1) + fp3(r+1) + . . . + fpq(r+1))/q  (34)
wherein relations (30) to (34) can also alternatively be written as relations (35) to (39) as below,
Vp(r+1) = √[(Re((Vp1(r+1))2) + Re((Vp2(r+1))2) + . . . + Re((Vpq(r+1))2))/q] + j√[(Im((Vp1(r+1))2) + Im((Vp2(r+1))2) + . . . + Im((Vpq(r+1))2))/q]  (35)
Δθp(r+1) = √[((Δθp1(r+1))2 + (Δθp2(r+1))2 + . . . + (Δθpq(r+1))2)/q]  (36)
ΔVp(r+1) = √[((ΔVp1(r+1))2 + (ΔVp2(r+1))2 + . . . + (ΔVpq(r+1))2)/q]  (37)
ep(r+1) = √[((ep1(r+1))2 + (ep2(r+1))2 + . . . + (epq(r+1))2)/q]  (38)
fp(r+1) = √[((fp1(r+1))2 + (fp2(r+1))2 + . . . + (fpq(r+1))2)/q]  (39)
wherein, square of any positive or negative number being positive, if the original not-squared value of any number is negative, the same algebraic sign is attached after squaring that number, and if the mean of squared values turns out to be a negative number, negative sign is attached after taking the square root of the unsigned number, Vp, θp are voltage magnitude and voltage angle at node-p, ep and fp are the real and imaginary parts of the complex voltage Vp of node-p, symbol Δ before any of defined electrical quantities defines the change in the value of electrical quantity, and superscript ‘r’ indicates the iteration count,
evaluating loadflow computation for any over loaded
components of the power network and for under/over voltage at any of the nodes of the power network,
correcting one or more controlled parameters and repeating the performing loadflow computation by decomposing, initializing, solving, adding, counting, storing, evaluating, and correcting steps until evaluating step finds no over loaded components and no under/over voltages in the power network, and
affecting a change in power flow through components of the power network and voltage magnitudes and angles at the nodes of the power network by actually implementing the finally obtained values of controlled variables/parameters after evaluating step finds a good power system or alternatively the power network without any overloaded components and under/over voltages, which finally obtained controlled variables/parameters however are stored for acting upon fast in case a simulated event actually occurs.
2. A method as defined in claim 1 wherein the loadflow computation method referred to as Gauss-Seidel-Patel Loadflow (GSPL) computation method is characterized in using self-iteration denoted by ‘sr’ within a network-wide/sub-network-wide global iteration depicted by ‘r’ in the GSPL model defined by equation (27) given in the following,
(Vp(sr+1))(r+1) = [{(PSHp − jQSHp)/((Vp*)(sr))(r)} − Σ(q=1 to p−1) Ypq Vq(r+1) − Σ(q=p+1 to n) Ypq Vq(r)]/Ypp  (27)
wherein, PSHp and QSHp are scheduled/specified/known/set real and reactive power, Vp is the complex node-p voltage, and Ypq and Ypp are off-diagonal and diagonal complex elements of the network admittance matrix.
3.
A method as defined in claim 1 wherein a parallel loadflow computation is performed using a parallel computer: a server processor-array processors architecture, wherein each of the array processors send communication to and receive communication from only the server processor, commonly shared memory locations, and each processor's private memory locations, but not among themselves.\n4. A multiprocessor computing apparatus for performing the said parallel loadflow computation as defined in claim 1 comprising in combination:\na plurality of processing units adapted to receive and process data, instructions and control signals, and connected to common system bus in parallel asynchronous fashion;\na plurality of local private main memory means for storing the data, instructions and control signals, each said main memory means being directly and asynchronously connected to each said processing unit;\ncommon shared memory coupled directly to said common system bus for sending/receiving the data, instructions and control signals asynchronously to/from each said processing unit, without providing inter-processor communications;\nI/O adapter/control unit coupled directly and asynchronously to a main/server processor, which is one of the said plurality of processing units;\nwherein said I/O adapter/control unit coupled directly and asynchronously to each of said plurality of processing units physically located at far distances in case of said multiprocessor computing apparatus organized for distributed processing.\nUS10/594,715 2004-10-01 2005-09-30 Method and apparatus for parallel loadflow computation for electrical power system Active US7788051B2 (en)\n\n## Priority Applications (3)\n\nApplication Number Priority Date Filing Date Title\nCA2479603 2004-10-01\nCA002479603A CA2479603A1 (en) 2004-10-01 2004-10-01 Sequential and parallel loadflow computation for electrical power system\nPCT/CA2005/001537 WO2006037231A1 (en) 2004-10-01 2005-09-30 System and method of parallel 
loadflow computation for electrical power system\n\n## Publications (2)\n\nPublication Number Publication Date\nUS20070203658A1 US20070203658A1 (en) 2007-08-30\nUS7788051B2 true US7788051B2 (en) 2010-08-31\n\n# Family\n\n## Family Applications (1)\n\nApplication Number Title Priority Date Filing Date\nUS10/594,715 Active US7788051B2 (en) 2004-10-01 2005-09-30 Method and apparatus for parallel loadflow computation for electrical power system\n\n## Country Status (3)\n\nCountry Link\nUS (1) US7788051B2 (en)\nCA (1) CA2479603A1 (en)\nWO (1) WO2006037231A1 (en)\n\n## Cited By (8)\n\n* Cited by examiner, † Cited by third party\nPublication number Priority date Publication date Assignee Title\nUS20100217550A1 (en) * 2009-02-26 2010-08-26 Jason Crabtree System and method for electric grid utilization and optimization\nUS20120078436A1 (en) * 2010-09-27 2012-03-29 Patel Sureshchandra B Method of Artificial Nueral Network Loadflow computation for electrical power system\nUS20140228976A1 (en) * 2013-02-12 2014-08-14 Nagaraja K. S. Method for user management and a power plant control system thereof for a power plant system\nUS8897923B2 (en) 2007-12-19 2014-11-25 Aclara Technologies Llc Achieving energy demand response using price signals and a load control transponder\nCN105205244A (en) * 2015-09-14 2015-12-30 国家电网公司 Closed loop operation simulation system based on electromechanical-electromagnetic hybrid simulation technology\nUS9891827B2 (en) 2013-03-11 2018-02-13 Sureshchandra B. 
Patel Multiprocessor computing apparatus with wireless interconnect and non-volatile random access memory\nUS10197606B2 (en) 2015-07-02 2019-02-05 Aplicaciones En Informática Avanzada, S.A System and method for obtaining the powerflow in DC grids with constant power loads and devices with algebraic nonlinearities\nUS10365310B2 (en) * 2014-06-12 2019-07-30 National Institute Of Advanced Industrial Science And Technology Impedance estimation device and estimation method for power distribution line\n\n## Families Citing this family (36)\n\n* Cited by examiner, † Cited by third party\nPublication number Priority date Publication date Assignee Title\nUS6998962B2 (en) * 2000-04-14 2006-02-14 Current Technologies, Llc Power line communication apparatus and method of using the same\nCA2400580A1 (en) * 2002-09-03 2004-03-03 Sureshchandra B. Patel Systems of advanced super decoupled load-flow computation for electrical power system\nUS7468657B2 (en) * 2006-01-30 2008-12-23 Current Technologies, Llc System and method for detecting noise source in a power line communications system\nUS9557723B2 (en) 2006-07-19 2017-01-31 Power Analytics Corporation Real-time predictive systems for intelligent energy monitoring and management of electrical power networks\nEP2076858A4 (en) * 2006-10-24 2011-05-04 Edsa Micro Corp Systems and methods for a real-time synchronized electrical power system simulator for \"what-if\" analysis and prediction over electrical power networks\nUS8180622B2 (en) 2006-10-24 2012-05-15 Power Analytics Corporation Systems and methods for a real-time synchronized electrical power system simulator for “what-if” analysis and prediction over electrical power networks\nUS9092593B2 (en) 2007-09-25 2015-07-28 Power Analytics Corporation Systems and methods for intuitive modeling of complex networks in a digital environment\nUS7795877B2 (en) 2006-11-02 2010-09-14 Current Technologies, Llc Power line communication and power distribution parameter measurement system and 
method\nUS20080143491A1 (en) * 2006-12-13 2008-06-19 Deaver Brian J Power Line Communication Interface Device and Method\nCN101849337A (en) 2007-05-07 2010-09-29 西门子公司 Method and device for determining load flow in an electrical supply grid\nUS8315742B2 (en) * 2007-08-27 2012-11-20 Sureshchandra Patel System and method of loadflow calculation for electrical power system\nUS7714592B2 (en) * 2007-11-07 2010-05-11 Current Technologies, Llc System and method for determining the impedance of a medium voltage power line\nUS20090289637A1 (en) * 2007-11-07 2009-11-26 Radtke William O System and Method for Determining the Impedance of a Medium Voltage Power Line\nUS8077049B2 (en) * 2008-01-20 2011-12-13 Current Technologies, Llc Method and apparatus for communicating power distribution event and location\nUS8566046B2 (en) * 2008-01-21 2013-10-22 Current Technologies, Llc System, device and method for determining power line equipment degradation\nUS20100023786A1 (en) * 2008-07-24 2010-01-28 Liberman Izidor System and method for reduction of electricity production and demand\nUS9099866B2 (en) 2009-09-01 2015-08-04 Aden Seaman Apparatus, methods and systems for parallel power flow calculation and power system simulation\nUS20120022713A1 (en) * 2010-01-14 2012-01-26 Deaver Sr Brian J Power Flow Simulation System, Method and Device\nUS9054531B2 (en) * 2011-07-19 2015-06-09 Carnegie Mellon University General method for distributed line flow computing with local communications in meshed electric networks\nCN102427229B (en) * 2011-10-18 2013-06-19 清华大学 Zero-injection-constraint electric power system state estimation method based on modified Newton method\nUS9941740B2 (en) 2011-12-12 2018-04-10 Mbh Consulting Ltd. Systems, apparatus and methods for quantifying and identifying diversion of electrical energy\nUS9122618B2 (en) * 2011-12-12 2015-09-01 Mbh Consulting Ltd. 
Systems, apparatus and methods for quantifying and identifying diversion of electrical energy\nCN102521452B (en) * 2011-12-14 2014-01-29 中国电力科学研究院 Computing system of large power grid closed loop\nCN102801165B (en) * 2012-08-13 2014-07-16 清华大学 Automatic voltage control method considering static security\nWO2014064570A1 (en) * 2012-10-23 2014-05-01 Koninklijke Philips N.V. Device and method for determining an individual power representation of operation states\nUS10128658B2 (en) 2013-06-17 2018-11-13 Carnegie Mellon University Autonomous methods, systems, and software for self-adjusting generation, demand, and/or line flows/reactances to ensure feasible AC power flow\nCA2827701A1 (en) * 2013-09-23 2015-03-23 Sureshchandra B. Patel Methods of patel decoupled loadlow computation for electrical power system\nCN103701123B (en) * 2014-01-10 2016-02-24 贵州电网公司信息通信分公司 A method for ungrounded Gaussian distribution network - Seidel method of calculating a three-phase flow\nCN104484234B (en) * 2014-11-21 2017-12-05 中国电力科学研究院 A computing method and system based on multi wavefront trend of gpu\nCN105068785B (en) * 2015-04-22 2018-04-10 清华大学 A parallel computing method and system\nCN105354422B (en) * 2015-11-12 2018-07-20 南昌大学 Speedy and strike the polar symmetric sparse matrix techniques based on Newton - Raphson method of trend\nCN105490266B (en) * 2015-12-24 2018-01-23 国网甘肃省电力公司电力科学研究院 Generator speed control system based on multi-variable fitting parameter optimization Modeling\nCN105760664A (en) * 2016-02-04 2016-07-13 南昌大学 Polar coordinate Newton method tide algorithm based on rectangular coordinate solution\nCN105786769A (en) * 2016-02-15 2016-07-20 南昌大学 Fast-data-reading-based application of sparse symmetric factor table method to polar-coordinate (PQ) decomposition method flow\nCN107064667A (en) * 2017-01-11 2017-08-18 国家电网公司 Improved-Gaussian-mixture-model-based evaluation system for power quality of electric railway load\nCN106816871A (en) * 2017-01-24 2017-06-09 
中国电力科学研究院 Power system state similarity analysis method

## Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3886330A (en) * 1971-08-26 1975-05-27 Westinghouse Electric Corp Security monitoring system and method for an electric power system employing a fast on-line loadflow computer arrangement
US4868410A (en) * 1986-09-10 1989-09-19 Mitsubishi Denki Kabushiki Kaisha System of load flow calculation for electric power system
US5081591A (en) * 1990-02-28 1992-01-14 Westinghouse Electric Corp. Optimizing reactive power distribution in an industrial power network
CA2107388A1 (en) * 1993-11-09 1995-05-10 Sureshchandra B. Patel Method of fast super decoupled loadflow computation for electrical power system
US5798939A (en) * 1995-03-31 1998-08-25 Abb Power T&D Company, Inc. System for optimizing power network design reliability
US6243244B1 (en) * 1996-12-04 2001-06-05 Energyline Systems, Inc. Method for automated reconfiguration of a distribution system using distributed control logic and communications
US6347027B1 (en) * 1997-11-26 2002-02-12 Energyline Systems, Inc. Method and apparatus for automated reconfiguration of an electric power distribution system with enhanced protection
US6182196B1 (en) * 1998-02-20 2001-01-30 Ati International Srl Method and apparatus for arbitrating access requests to a memory
US20030192039A1 (en) * 2002-04-05 2003-10-09 Mcconnell Richard G. Configuration management system & method
WO2004023622A2 (en) * 2002-09-03 2004-03-18 Sureshchandra Patel System of super super decoupled loadflow computation for electrical power system
US20080281474A1 (en) * 2002-09-03 2008-11-13 Patel Sureshchandra B System of Super Super Decoupled Loadflow Computation for Electrical Power System
US20060111860A1 (en) * 2002-11-06 2006-05-25 Aplicaciones En Informatica Avanzada, S.A. System and method for monitoring and managing electrical power transmission and distribution networks
WO2008025162A1 (en) * 2007-08-27 2008-03-06 Sureshchandra Patel System and method of loadflow calculation for electrical power system

## Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Allan, R.N. et al., "LTC Transformers and MVAR Violations in the Fast Decoupled Load Flow," IEEE Transactions on Power Apparatus and Systems, vol. PAS-101, Issue 9, Sep. 1982, pp. 3328-3332. *
Patel, S.B., "Super Super Decoupled Loadflow," Proceedings of the IEEE Toronto International Conference on Science and Technology for Humanity (TIC-STH 2009), Sep. 2009, pp. 652-659. *
Patel, S.B., "Transformation Based Fast Decoupled Loadflow," IEEE Region 10 International Conference on EC3-Energy, Computer, Communication and Control Systems, vol. 1, Aug. 28-30, 1991, pp. 183-187. *
Patel, S.B., "Fast super decoupled loadflow," IEE Proceedings on Generation, Transmission and Distribution, vol. 139, Issue 1, Jan. 1992, pp. 13-20. *
Van Amerongen, R.A.M., "A general-purpose version of the fast decoupled load flow," IEEE Transactions on Power Systems, vol. 4, Issue 2, May 1989, pp. 760-770.
*

## Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8897923B2 (en) 2007-12-19 2014-11-25 Aclara Technologies Llc Achieving energy demand response using price signals and a load control transponder
US20100217550A1 (en) * 2009-02-26 2010-08-26 Jason Crabtree System and method for electric grid utilization and optimization
US20120078436A1 (en) * 2010-09-27 2012-03-29 Patel Sureshchandra B Method of Artificial Nueral Network Loadflow computation for electrical power system
US8756047B2 (en) * 2010-09-27 2014-06-17 Sureshchandra B Patel Method of artificial nueral network loadflow computation for electrical power system
US20140228976A1 (en) * 2013-02-12 2014-08-14 Nagaraja K. S. Method for user management and a power plant control system thereof for a power plant system
US9891827B2 (en) 2013-03-11 2018-02-13 Sureshchandra B. Patel Multiprocessor computing apparatus with wireless interconnect and non-volatile random access memory
US10365310B2 (en) * 2014-06-12 2019-07-30 National Institute Of Advanced Industrial Science And Technology Impedance estimation device and estimation method for power distribution line
US10197606B2 (en) 2015-07-02 2019-02-05 Aplicaciones En Informática Avanzada, S.A System and method for obtaining the powerflow in DC grids with constant power loads and devices with algebraic nonlinearities
CN105205244A (en) * 2015-09-14 2015-12-30 国家电网公司 Closed loop operation simulation system based on electromechanical-electromagnetic hybrid simulation technology
CN105205244B (en) * 2015-09-14 2018-10-30 国家电网公司 Closed loop operation simulation system based on electromechanical-electromagnetic hybrid simulation technology

## Also Published As

Publication number Publication date
WO2006037231A8 (en) 2006-10-05
CA2479603A1 (en) 2006-04-01
WO2006037231A1 (en) 2006-04-13
US20070203658A1 (en) 2007-08-30

## Similar Documents

Publication Publication Date Title
Da Silva et al. Transmission network expansion planning under a tabu search approach
Ramírez-Rosado et al. New multiobjective tabu search algorithm for fuzzy optimal planning of power distribution systems
Dandachi et al. OPF for reactive pricing studies on the NGC system
Anderson Models for determining least-cost investments in electricity supply
Lee et al. Optimization method for reactive power planning by using a modified simple genetic algorithm
Braga et al. A multiyear dynamic approach for transmission expansion planning and long-term marginal costs computation
Parhizi et al. State of the art in research on microgrids: A review
Najafi et al. A framework for optimal planning in large distribution networks
Gandomkar et al. A genetic-based tabu search algorithm for optimal DG allocation in distribution networks
Larsson et al. Coordinated system protection scheme against voltage collapse using heuristic search and predictive control
Fisher et al. Optimal transmission switching
US7203622B2 (en) Value-based transmission asset maintenance management of electric power networks
Zhang et al. Design of wide-area damping controllers for interarea oscillations
Bruno et al. Unbalanced three-phase optimal power flow for smart grids
Franco et al. A network flow model for short-term hydro-dominated hydrothermal scheduling problems
Billinton et al. Reliability assessment of large electric power systems
Su et al. Optimal PV inverter reactive power control and real power curtailment to improve performance of unbalanced four-wire LV distribution networks
Momoh Electric power system applications of optimization
Wu et al. Critical review of external network modelling for online security analysis
Debs Modern power systems control and operation
US20120022713A1 (en) Power Flow Simulation System, Method and Device
Li et al. Genetic algorithms for optimal reactive power compensation on the national grid system
Berizzi et al. Enhanced security-constrained OPF with FACTS devices
Kumar et al. Recent philosophies of automatic generation control strategies in power systems
Yang et al. TCSC allocation based on line flow based equations via mixed-integer programming

## Legal Events

Date Code Title Description
STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, MICRO ENTITY (ORIGINAL EVENT CODE: M3552)

Year of fee payment: 8
https://zh.m.wikipedia.org/wiki/%E9%85%8D%E4%BD%8D
# Coordinate bond

(Redirected from 配位)

## Related concepts

### Conditions for formation

• First, a central atom or ion, which must have an empty orbital able to accept an electron pair;
• Second, a ligand; the atoms making up the ligand must be able to donate a lone pair of electrons (L.P.).

${\ce {NH3(g)}}$ $+$ ${\ce {BF3(g) -> NH3BF3(s)}}$

## Common compounds containing coordinate bonds

• Carbon monoxide ${\ce {CO}}$: of the three shared electron pairs between carbon and oxygen, one is a coordinate bond and two are ordinary covalent bonds.
• The ammonium ion ${\ce {NH4+}}$: the ${\ce {N}}$ atom is bonded to three of the ${\ce {H}}$ atoms by polar covalent bonds and to the fourth ${\ce {H}}$ by a coordinate bond, with the ${\ce {N}}$ atom providing the electron pair.
• The hydronium ion ${\ce {H3O+}}$
https://japanese.stackexchange.com/questions/19619/why-in-this-example-sentence-theres-no-difference-between-%E3%81%94%E3%81%A8%E3%81%AB-and-%E3%81%8A%E3%81%8D%E3%81%AB
# Why, in this example sentence, is there no difference between ごとに and おきに?

I am using "A Dictionary of Basic Japanese Grammar" to help me with my Japanese studies. As explained from page 128 onwards, I understood the difference between ごとに and おきに.

For example:

• 2日おきに = every third day
• 2日ごとに = every second day

The answer to "What is the difference between ごとに and おきに?" also backs this understanding.

Now, in the mentioned grammar book there is one last example where both forms yield the same meaning.

Quoting:

When a time expression precedes oki ni or goto ni, there is no difference in meaning, if an event takes place at one point in time:
[電車]{でんしゃ}は[五分]{ごふん}おきに/ごとに[出]{で}る = The train leaves every five minutes.

I don't get the difference between this "time expression" and the other ones like 二日. Can somebody give me a hint here?

• I don't think there is anything special about Japanese here. Time as measured in minutes/seconds/... is continuous; if measured as an amount of days, it's discrete. Let A and B be two points in time; then `there are days between A (eg Monday) and B (eg Thursday)` and `there are 5 minutes between A (eg 1:00am) and B (eg 1:05am)` use the same expressions in English as well. Nov 21, 2014 at 16:36
• @blutorange You can leave that as an answer, if you like. – user1478 Nov 21, 2014 at 17:37
• I must have read a paper on this topic years ago. In short, the difference pretty much depends on how you perceive the noun. If you view it as a mass noun, then ごとに and おきに are the same. If you view it as a countable noun, the case becomes more complicated, depending on whether two types of things or a single type is involved. For example, オリンピック and 年 are two types of things, so there are 4年 between two オリンピック. But we often think of days as time slots, which can be assigned to something. Therefore 2日おきに is more like 1日 between 2日, which involves only a single type of noun.
Nov 21, 2014 at 21:47
• An even more impressive example than the Olympic Games is imo 4年おきに閏年を入れる. Nov 21, 2014 at 22:59

`毎【ごと】に` means "every", so `2日ごとに` is "every second day".

On the other hand, `X置【お】きに` literally means "leaving (an amount of time/space/...) X (between each occurrence)". It comes from the verb `置く`, "to put", "to place", "to leave (sth. somewhere)".

Here is an article from NHK's 身近なことばの疑問にお答えします about ごとに and おきに.

So how come `おきに` sometimes means the same as `ごとに`, and sometimes not? Let's think about English for a moment; the same phenomenon happens in English as well.

Let A and B be two events separated by a certain amount of time. How much time is there between A and B? You might be tempted to answer `B minus A`, but there are two different answers, depending on how we count time:

(a) There are 5 minutes between A = 1:05 am and B = 1:10 am. B - A = 5 minutes. This is 5分おきに - leaving 5 minutes between A and B.

(b) There are 2 days between A = Monday and B = Thursday. B - A = 3 days, not(!) two. This is 2日おきに - leaving two days between A and B.

The difference between these two cases is that in (a), time as measured in hours, minutes, seconds etc. is considered continuous (uncountable) - there are 5 minutes, 5.3 minutes, 5.000321 minutes and so on. In scenario (b), time counted as weekdays or as a number of days is considered discrete (countable) - there is Monday and Tuesday, but nothing in between; we don't talk about `Monday and a half`.

To illustrate this point:

```
1 2 3 4 5
```

How many numbers are there between 1 and 5? Three, namely 2, 3, and 4.

```
The scale of a measuring cup (for water)

|---+---+---+---+---|
0   1   2   3   4   5  (deciliter)
```

How many milliliters are there between 1 dl and 5 dl? The answer is 400 ml = 4 dl, not 3 dl.

Conclusion:

Both `おきに` and `ごとに` have got only one basic meaning.
Depending on the noun they apply to, both can refer to the same (temporal) interval.

• 2日ごとに every 2 (week)days
• 2分ごとに every 2 minutes
• 2日おきに 2 (week)days between each occurrence
• 2分おきに 2 minutes between each occurrence

A day consists of 24 hours. Note the difference between `48 hours between two events` and `2 days between two events`. And would you say that there are 2880 minutes between Monday and Thursday?

If you were looking at a clock face with each minute marked individually, `2分おきに` might mean something different.

Note that this is not limited to temporal intervals: `一行おきに書く` "write on every other line", `5メートルおきに杭【くい】を立てる` "place stakes with a space of five meters in between".

To put it another way, you take the open interval (A,B). In the case of a continuous variable, everything just a split second after 1:00 am is part of that interval, and thus you get 5 minutes between 1:00 am and 1:05 am. But in the case of a discrete variable, you get less. Consider (Monday,Thursday): Monday midnight + 1 hour is not part of the interval, because you are counting only in days, not hours. There is only Monday and Tuesday, but nothing in between. So you get only two days inside the interval (Monday,Thursday), even though Thursday is 3 days after Monday.

• For the record, my (non-linguistic) explanation for the case Yang Muye mentioned would be the different length of the two types of things involved, `Olympic Games` and `years`. The Olympic Games take place over the course of a few weeks, much shorter than a year, so instead of year-time-slots, we think about the continuous time (mass noun) between two Olympic Games. Which happens to be approx. 4.0 (not 4) years, or ~208 weeks. Thus you can say 「オリンピックは4年おきに開催される」. Nov 21, 2014 at 22:21
• I forgot to mention that when `X` is viewed as slots, Xごとに is actually `every X on average`. Nov 21, 2014 at 22:48
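The discrete-versus-continuous counting described in (a) and (b) can be mimicked with Python's standard `datetime` module; the concrete dates below are arbitrary examples chosen so that the first date falls on a Monday:

```python
from datetime import date, datetime

# 2日おきに: whole days strictly between Monday and Thursday (discrete count)
monday = date(2014, 11, 17)
thursday = date(2014, 11, 20)
days_between = (thursday - monday).days - 1  # only Tuesday and Wednesday
print(days_between)  # 2

# 5分おきに: minutes between 1:00 am and 1:05 am (continuous measure)
t1 = datetime(2014, 11, 17, 1, 0)
t2 = datetime(2014, 11, 17, 1, 5)
print((t2 - t1).total_seconds() / 60)  # 5.0
```

The asymmetry in the subtraction (`- 1` for days, nothing for minutes) is exactly the difference between counting discrete slots inside an open interval and measuring a continuous span.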
https://plainmath.net/16640/solve-the-equation-%E2%88%927-%E2%88%923n-plus-3-equal-%E2%88%9221-plus-21n
Question

# Solve the equation −7(−3n + 3) = −21 + 21n

Equations and inequalities

Answer:

Distributing the −7 on the left-hand side of the given equation gives

$$-7(-3n+3) = -21+21n \;\to\; 21n-21 = -21+21n \;\to\; 21n = 21n$$

Both sides are identical, so the equation is an identity: every value of $n$ satisfies it (infinitely many solutions).
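The conclusion can be checked numerically with a small stand-alone sketch (plain Python, no computer-algebra system) that evaluates both sides over a range of integers:

```python
# Left- and right-hand sides of −7(−3n + 3) = −21 + 21n
lhs = lambda n: -7 * (-3 * n + 3)
rhs = lambda n: -21 + 21 * n

# The two sides agree for every value tried, consistent with an identity.
assert all(lhs(n) == rhs(n) for n in range(-100, 101))
print(lhs(5), rhs(5))  # 84 84
```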
https://www.prodekdecoracion.com.mx/category/bad-credit-installment-loans-indiana-2/
### Exactly what are the requirements to borrow a SELF Loan?

Exactly what are the requirements to borrow a SELF Loan? What is the current interest rate? How much am I able to borrow? So how [...]
https://calculator.academy/phasor-calculator/
# Phasor Calculator

Enter the real and imaginary parts of a rectangular form into the phasor calculator. The calculator will return the phasor angle of that expression.

## Phasor Formula

The following formula is used to convert a rectangular form to a phasor form.

P = arctan (y/x)

• Where P is the phasor angle
• y is the imaginary part of the rectangular form x + jy
• x is the real part of the rectangular form

How to calculate a phasor angle

1. First, determine the rectangular form

   This will be an expression of the form x + jy, where x is the real part and y is the imaginary part.

2. Next, separate x and y from the expression in step 1

   Take out the real and imaginary portions.

3. Calculate the phasor

   Enter the x and y components into the formula above.

## FAQ

What is rectangular form?

A rectangular form is a way to represent a vector in terms of a real part and an imaginary part.

What is a phasor?

A phasor is an angle that represents the angle between the x-axis and a vector.
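The three steps above can be sketched in a few lines of Python. Note that `math.atan2` is used rather than a literal `arctan(y/x)`, since plain arctan is only valid when x > 0; the function name here is illustrative, not part of the calculator:

```python
import math

def phasor_angle_deg(x, y):
    """Phase angle of the rectangular form x + jy, in degrees.

    atan2 returns the correct quadrant even when x is negative or zero,
    where a plain arctan(y/x) would fail or give the wrong angle.
    """
    return math.degrees(math.atan2(y, x))

# Example: 3 + j4 has a phase angle of about 53.13 degrees
print(round(phasor_angle_deg(3, 4), 2))  # 53.13
```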
https://www.meanings4all.com/2021/05/1-million-means-or-1-million-meaning.html
# 1 Million Means | 1 Million Meaning | 1 Million in Rupees, Lakhs and Crores

## What does 1 Million mean?

A million is the number 1 followed by 6 zeros.

In other words, 1 million is the product of a thousand and a thousand:

1,000 × 1,000 = 1,000,000

The word "million" is derived from the early Italian millione (milione in modern Italian), from mille, "thousand", plus the augmentative suffix -one. It is abbreviated as m.

• 1 million in figures = 1,000,000
• 1 million in numbers = 1,000,000
• Total zeros in a million = 6, i.e. 2 sets of 3 zeros
• 1 million in scientific notation = 1 × 10^6 = 10^6
• 1 million in Roman numerals = M

## International and Indian Numbering Systems

| International | Indian | Figures |
| --- | --- | --- |
| Unit | Unit | 1 |
| Ten | Ten | 10 |
| Hundred | Hundred | 100 |
| Thousand | Thousand | 1,000 |
| Ten Thousand | Ten Thousand | 10,000 |
| Hundred Thousand | Lakh | 100,000 |
| Million | Ten Lakh | 1,000,000 |
| Ten Million | Crore | 10,000,000 |
| Hundred Million | Ten Crore | 100,000,000 |
| Billion | Arab | 1,000,000,000 |
| Ten Billion | Ten Arab | 10,000,000,000 |
| Hundred Billion | Kharab | 100,000,000,000 |
| Trillion | Ten Kharab | 1,000,000,000,000 |

## 1 Million in Lakhs

From the table above, 1 million in the international numbering system equals 10 lakhs in the Indian numbering system, i.e. 1 Million = 10 Lakhs.

| Millions | Lakhs |
| --- | --- |
| 1 Million | 10 Lakhs |
| 2 Million | 20 Lakhs |
| 5 Million | 50 Lakhs |
| 10 Million | 100 Lakhs |
| 25 Million | 250 Lakhs |
| 50 Million | 500 Lakhs |
| 100 Million | 1000 Lakhs |
| 250 Million | 2500 Lakhs |
| 500 Million | 5000 Lakhs |
| 999 Million | 9990 Lakhs |

## 1 Million in Crores

From the table above, 10 million in the international numbering system equals 1 crore in the Indian numbering system, i.e. 10 Million = 1 Crore, or equivalently 1 Million = 10 Lakhs = 0.1 Crore.

| Millions | Lakhs | Crores |
| --- | --- | --- |
| 1 Million | 10 Lakhs | 0.1 Crores |
| 2 Million | 20 Lakhs | 0.2 Crores |
| 5 Million | 50 Lakhs | 0.5 Crores |
| 10 Million | 100 Lakhs | 1 Crore |
| 25 Million | 250 Lakhs | 2.5 Crores |
| 50 Million | 500 Lakhs | 5 Crores |
| 100 Million | 1000 Lakhs | 10 Crores |
| 250 Million | 2500 Lakhs | 25 Crores |
| 500 Million | 5000 Lakhs | 50 Crores |
| 999 Million | 9990 Lakhs | 99.9 Crores |

## 1 Million in Rupees | 1 Million Dollars in Rupees

For money conversion, we need to do the following 2 steps:

Step 1: As we already know, 1 million in the international numbering system equals 10 lakhs in the Indian numbering system (1M = 10L). So, convert the given millions into lakhs.

Step 2: Multiply the lakhs from Step 1 by the current dollar rate.

Let's understand this with examples.

a) 1 Million in Rupees | 1 Million Dollars in Rupees

Step 1: Converting millions to lakhs: 1 Million = 10 Lakhs

Step 2: On 1-Jan-2021, 1 USD = 73.092 INR

So, 1 Million USD = 10 Lakhs × 73.092 = 730.92 Lakhs INR = 7.3092 Crores = 7,30,92,000 INR

Hence, 1 Million Dollars = 7.3092 Crore Rupees

b) 10 Million in Rupees | 10 Million Dollars in Rupees

Step 1: Converting millions to lakhs: 10 Million = 100 Lakhs

Step 2: On 1-Jan-2021, 1 USD = 73.092 INR

So, 10 Million USD = 100 Lakhs × 73.092 = 7309.2 Lakhs INR = 73.092 Crores = 73,09,20,000 INR

Hence, 10 Million Dollars = 73.092 Crore Rupees

## 1 Million is equal to

• 1 Million is equal to 10 Lakhs
• 1 Million is equal to 0.1 Crores
• 1 Million is equal to 0.001 Billions
• 1 Million is equal to 0.001 Arab
• 1 Million is equal to 0.00001 Kharab
• 1 Million is equal to 0.000001 Trillion
• 1 Million is equal to 1,000 Thousands
• 1 Million is equal to 10,000 Hundreds

## The Word "Million" in Example Sentences

1. It was the million dollar question.
2. Land on Mars, a round-trip ticket - half a million dollars. It can be done.
3. There are hundreds of millions of videos on YouTube.
4. Helium is present in the atmosphere, of which it constitutes four parts in a million.
5. Compared with 90.5 million sq.
6. They sell 100 million gallons of crude oil annually.
7. I have five million dollars.
8. If you won 1 million dollars, what would you do?
9. There are more than millions of apps on the Play Store.
10. At least a hundred million websites are out there.
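The two-step rupee conversion described above is easy to automate. In this sketch the helper names are illustrative, and the exchange rate is the article's 1-Jan-2021 figure, an assumption that would need updating for real use:

```python
USD_TO_INR = 73.092  # rate quoted in the article for 1-Jan-2021, not a live value

def millions_to_lakhs(millions):
    return millions * 10   # 1 million = 10 lakhs

def millions_to_crores(millions):
    return millions / 10   # 10 million = 1 crore

def million_usd_to_inr(millions, rate=USD_TO_INR):
    # Step 1: millions -> lakhs; Step 2: multiply by the dollar rate.
    lakhs = millions_to_lakhs(millions)
    return lakhs * rate * 100_000  # 1 lakh = 100,000 rupees

print(millions_to_lakhs(1))    # 10
print(millions_to_crores(10))  # 1.0
print(million_usd_to_inr(1))   # 73092000.0, i.e. 7.3092 crore rupees
```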
https://www.colorhexa.com/dcd8fb
# #dcd8fb Color Information

In an RGB color space, hex #dcd8fb is composed of 86.3% red, 84.7% green and 98.4% blue. In a CMYK color space, it is composed of 12.4% cyan, 13.9% magenta, 0% yellow and 1.6% black. It has a hue angle of 246.9 degrees, a saturation of 81.4% and a lightness of 91.6%. The hex color #dcd8fb can be obtained by blending #ffffff with #b9b1f7. The closest websafe color is #ccccff.

RGB color chart (percent): R 86, G 85, B 98. CMYK color chart: C 12, M 14, Y 0, K 2.

#dcd8fb color description: Light grayish blue.

# #dcd8fb Color Conversion

The hexadecimal color #dcd8fb has RGB values of R:220, G:216, B:251 and CMYK values of C:0.12, M:0.14, Y:0, K:0.02. Its decimal value is 14473467.

Hex triplet: #dcd8fb
RGB: rgb(220,216,251)
RGB percent: rgb(86.3%,84.7%,98.4%)
CMYK: 12, 14, 0, 2
HSL: hsl(246.9,81.4%,91.6%)
HSV: 246.9°, 13.9, 98.4
Websafe: #ccccff
CIE-LAB: 87.627, 8.027, -16.55
XYZ: 71.48, 71.292, 101.256
xyY: 0.293, 0.292, 71.292
CIE-LCH: 87.627, 18.394, 295.875
CIE-LUV: 87.627, 0.092, -27.555
Hunter-Lab: 84.435, 3.353, -11.998
Binary: 11011100, 11011000, 11111011

# Color Schemes with #dcd8fb

Complementary color:
• #dcd8fb rgb(220,216,251)
• #f7fbd8 rgb(247,251,216)

Analogous colors:
• #d8e6fb rgb(216,230,251)
• #dcd8fb rgb(220,216,251)
• #eed8fb rgb(238,216,251)

Split complementary colors:
• #e6fbd8 rgb(230,251,216)
• #dcd8fb rgb(220,216,251)
• #fbeed8 rgb(251,238,216)

Triadic colors:
• #d8fbdc rgb(216,251,220)
• #dcd8fb rgb(220,216,251)
• #fbdcd8 rgb(251,220,216)

Tetradic colors:
• #d8f7fb rgb(216,247,251)
• #dcd8fb rgb(220,216,251)
• #fbdcd8 rgb(251,220,216)
• #f7fbd8 rgb(247,251,216)

Monochromatic colors:
• #9e93f4 rgb(158,147,244)
• #b2aaf6 rgb(178,170,246)
• #c7c1f9 rgb(199,193,249)
• #dcd8fb rgb(220,216,251)
• #f1effd rgb(241,239,253)
• #ffffff rgb(255,255,255)

# Alternatives to #dcd8fb

Below, you can see some colors close to #dcd8fb. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.

• #d8ddfb rgb(216,221,251)
• #d8dafb rgb(216,218,251)
• #d9d8fb rgb(217,216,251)
• #dcd8fb rgb(220,216,251)
• #dfd8fb rgb(223,216,251)
• #e2d8fb rgb(226,216,251)
• #e5d8fb rgb(229,216,251)

# #dcd8fb Preview

This text has a font color of #dcd8fb.

`<span style="color:#dcd8fb;">Text here</span>`

This paragraph has a background color of #dcd8fb.

`<p style="background-color:#dcd8fb;">Content here</p>`

This element has a border color of #dcd8fb.

`<div style="border:1px solid #dcd8fb;">Content here</div>`

CSS codes:

`.text {color:#dcd8fb;}`
`.background {background-color:#dcd8fb;}`
`.border {border:1px solid #dcd8fb;}`

# Shades and Tints of #dcd8fb

A shade is achieved by adding black to any pure hue, while a tint is created by mixing white into any pure color. In this example, #03010e is the darkest color and #fcfcff is the lightest one. From darkest to lightest:

#03010e, #070320, #0a0532, #0e0744, #120956, #150b67, #190c79, #1c0e8b, #20109d, #2412ae, #2714c0, #2b16d2, #2f17e4, #3c26e9, #4c38eb, #5c4aec, #6c5bee, #7c6df0, #8c7ff2, #9c91f4, #aca3f6, #bcb4f7, #ccc6f9, #dcd8fb, #eceafd, #fcfcff

# Tones of #dcd8fb

A tone is produced by adding gray to any pure hue. In this case, #e9e9ea is the least saturated color and #d9d5fe is the most saturated one:

#e9e9ea, #e7e7ec, #e6e5ee, #e5e4ef, #e4e2f1, #e2e0f3, #e1dff4, #e0ddf6, #dfdbf8, #dddaf9, #dcd8fb, #dbd6fd, #d9d5fe

# Color Blindness Simulator

Below, you can see how #dcd8fb is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.

Monochromacy
• Achromatopsia 0.005% of the population
• Atypical Achromatopsia 0.001% of the population

Dichromacy
• Protanopia 1% of men
• Deuteranopia 1% of men
• Tritanopia 0.001% of the population

Trichromacy
• Protanomaly 1% of men, 0.01% of women
• Deuteranomaly 6% of men, 0.4% of women
• Tritanomaly 0.01% of the population
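The RGB and CMYK values quoted above follow directly from the hex code; a minimal Python sketch of the conversion (the CMYK formula used here is the common normalized one, which matches the page's rounded figures):

```python
def hex_to_rgb(hex_color):
    """Parse a hex color like '#dcd8fb' into an (r, g, b) tuple of 0-255 ints."""
    h = hex_color.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_cmyk(r, g, b):
    """Convert 0-255 RGB channels to CMYK fractions in the range 0..1."""
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0          # pure black is a special case
    r_, g_, b_ = r / 255, g / 255, b / 255
    k = 1 - max(r_, g_, b_)                # black = 1 - brightest channel
    c = (1 - r_ - k) / (1 - k)
    m = (1 - g_ - k) / (1 - k)
    y = (1 - b_ - k) / (1 - k)
    return c, m, y, k

r, g, b = hex_to_rgb('#dcd8fb')
print(r, g, b)                                             # 220 216 251
c, m, y, k = rgb_to_cmyk(r, g, b)
print(round(c, 2), round(m, 2), round(y, 2), round(k, 2))  # 0.12 0.14 0.0 0.02
```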
http://newtonphysics.on.ca/einstein/chapter1.html
Einstein's Theory of Relativity Versus Classical Mechanics

Paul Marmet

"It follows from the theory of relativity that mass and energy are both different manifestations of the same thing - a somewhat unfamiliar conception for the average man. . . . the mass and energy in fact were equivalent." - Albert Einstein, "The Quotable Einstein", Princeton University Press, Princeton, New Jersey (1996); also in the Einstein film produced by Nova Television, 1979.

We must note that the equivalence of mass and energy is different from the principle of mass-energy conservation, which is not applied in Einstein's relativity (see Straumann).

(Last checked 2017/01/15 - The estate of Paul Marmet)

A German translation of this article is available.

Chapter One
The Physical Reality of Length Contraction.

1.1 - Introduction.

In this first chapter, we will show that it is possible to establish links between quantum mechanics and mass-energy conservation. These links will help us calculate the interatomic distances in molecules and in crystals as a function of their gravitational potential. We will show that the natural interatomic distance calculated using quantum mechanics leads to the length contraction (or dilation) predicted by relativity. This result will be obtained here without using the hypothesis of the constancy of the velocity of light. It will appear instead as a consequence of quantum mechanics when mass-energy conservation is taken into account.

Since length contraction appears as a consequence of quantum mechanical calculations, the physical reality of those predictions can be verified experimentally. We will show that the results of the most precise quantum mechanical experiments prove that the change of length is real. Two different experiments which have been found to give sufficient accuracy to verify this change of length will be described in detail. We will show that the dimensions of matter naturally change depending on their location in a gravitational potential.

1.2 - Mass-Energy Conservation at Macroscopic Scale.

The most reliable principle in physics seems to be the principle of mass-energy conservation: mass can be transformed into energy and vice versa. Without this principle, one would be able to create mass or energy from nothing. We do not believe that absolute creation from nothing is possible. Surprisingly, most scientists do not know that Einstein's general relativity is not compatible with the principle of mass-energy conservation [Ref].

In order to understand the fundamental implications related to mass-energy conservation, let us consider the following example. Suppose momentarily that the Earth is not moving around the Sun, but has been pushed away with a powerful rocket and has reached interstellar space at location P (see figure 1.1). It now has a negligible residual velocity with respect to the Sun and, except for the fact that the Sun has faded away, everything appears the same. The Earth is still made of about 10⁵⁰ atoms, its center contains iron, it is surrounded by oceans, deserts and cities, and the atmosphere is the same. The planet is still populated by about the same five billion people.

[Figure 1.1]

Let us assume that after a while, the planet starts falling slowly from P toward the Sun. Due to the solar attraction, the Earth accelerates until it reaches the distance of 150 million kilometers (from the Sun) corresponding to its normal orbit. At that moment, one can calculate that the Earth has reached a velocity of 42 km/s. This velocity is too large for the Earth to be in a stable orbit around the Sun as it normally is. It must be reduced to 30 km/s, the velocity for a stable orbit. The Earth must be slowed down.

It is decided that the velocity of the Earth can be reduced with the help of a strong rope attached to a group of stars at the center of our galaxy. The force produced by the rope will generate energy at the center of the galaxy while the Earth is slowed down to the desired velocity for a stable orbit around the Sun.

Knowing that the Earth has a mass of 5.97 × 10²⁴ kg, it is easy to calculate the amount of work transferred to the center of the galaxy. It corresponds to slowing down the Earth from 42 km/s to 30 km/s. This represents an amount of work equal to 2.6 × 10³³ joules. Therefore the Earth must get rid of 2.6 × 10³³ joules to go back to its normal orbit, and the center of the galaxy must absorb that same amount of energy. The rope used to slow down the Earth could then run a generator located at the center of the galaxy to produce 2.6 × 10³³ joules of energy.

However, due to the principle of mass-energy conservation, the energy carried to the center of the galaxy to slow down the Earth can be transformed into mass. Using the relation E = mc², we find that the mass corresponding to 2.6 × 10³³ joules of energy is equal to 2.9 × 10¹⁶ kg. This means that 29 billion million kilograms of mass have been transferred from the Earth to the center of the galaxy through the rope. This mass-energy is a very small fraction of the Earth's mass, but it must come from the Earth and be received at the center of the galaxy.

After the re-establishment of the Earth's orbit at one astronomical unit from the Sun, the inhabitants of the Earth find nothing changed. Other than the neighboring Sun, no difference can be noticed compared with when the Earth, still made of its initial 10⁵⁰ atoms, was away from the Sun. The question is: How can the Earth not lose one single atom or molecule while 29 billion million kilograms of mass have been lost and received at the center of the galaxy? There is only one logical answer.
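The figures quoted in this example (2.6 × 10³³ J of work and its 2.9 × 10¹⁶ kg mass equivalent) can be checked with a short computation; the sketch below assumes the textbook values used in the text:

```python
M_EARTH = 5.97e24      # kg, mass of the Earth
C = 2.998e8            # m/s, speed of light
v_fall = 42e3          # m/s, speed reached after falling from P
v_orbit = 30e3         # m/s, speed of a stable circular orbit

# Kinetic energy that the rope must extract to slow the Earth down
work = 0.5 * M_EARTH * (v_fall**2 - v_orbit**2)

# Mass equivalent of that energy, via E = m c^2
mass_equivalent = work / C**2

print(f"work extracted  = {work:.2e} J")              # about 2.6e33 J
print(f"mass equivalent = {mass_equivalent:.2e} kg")  # about 2.9e16 kg
```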
Since each atom on Earth was submitted to the force of the rope, each atom has lost mass in a proportion of approximately one part per one hundred million.

Note that this situation is equivalent to the formation of a hydrogen atom. When a proton and an electron come together to form a hydrogen atom, energy is released in the form of light. This light corresponds to the work transferred to the center of the galaxy in our problem.

1.3 - Mass-Energy Conservation at a Microscopic Scale.

The experiment described above takes place at a macroscopic scale. Each individual atom loses mass because a force acts on all atoms when the Earth decelerates in the Sun's gravitational potential. It is normally assumed that atoms have a constant mass. For example, we learn that the mass of the hydrogen atom is m_0 = 1.6727406 × 10⁻²⁷ kg. Can we have hydrogen atoms with less or more mass? From the thought experiment of section 1.2, we see that the principle of mass-energy conservation requires a transformation of mass into energy in each atom forming the Earth, since each of them has contributed to the energy transmitted to the center of the galaxy.

Let us study the following experiment. We first consider an individual hydrogen atom placed on a table on the first floor of a house in the gravitational field of the Earth, as shown in figure 1.2. The hydrogen atom is then attached to a fine (weightless) thread so that the atom can be lowered slowly to the basement of the house, while the experimenter remains on the first floor. When the atom is lowered, its weight produces a force F in the thread. That force is measured by the experimenter on the first floor. It is given by:

    F = m_0 g        (1.1)

[Figure 1.2]

The slow descent of the atom attached to the thread is stopped every time a measurement is made, which means that the kinetic energy is zero at the moment of the measurement. When the atom has traveled a vertical distance Δh, the observer on the first floor finds that the energy ΔE produced by the atom and transmitted through the thread to the first floor is:

    ΔE = F Δh        (1.2)

The work extracted from the descent of the atom is positive when the final position of the atom is below the first floor (Δh is positive). Then, according to the principle of mass-energy conservation, the energy produced at the first floor by the descent of the atom into the basement can be transformed into mass according to the relationship (see reference):

    E = m c²        (1.3)

The important point to retain about equation 1.3 is that the energy E is proportional to the mass, independently of the fact that the numerical value of the constant of proportionality happens to be equal to the square of the velocity of light. From equations 1.1, 1.2 and 1.3, the amount of mass Δm_f generated at the first floor by the descent is:

    Δm_f = ΔE/c² = m_0 g Δh / c²        (1.4)

This amount of mass (or energy) carried by the thread is generated by the weight of the atom which slowly moves down to the basement. When the hydrogen atom lies on the table, its mass is m_0. However, during its descent, it produces work (corresponding to the mass Δm_f generated at the first floor). The initial mass m_0 of the particle is now transferred into the mass-energy Δm_f generated at the first floor by the falling particle, plus the remaining mass m_b of the particle now in the basement. Using equation 1.4, we find:

    m_b = m_0 − Δm_f = m_0 (1 − gΔh/c²)        (1.5)

According to the principle of mass-energy conservation, the mass of the hydrogen atom in the basement is now different from its initial mass m_0 on the first floor. It is slightly smaller than m_0 and is now equal to m_b. Any variation of g with height is negligible and can be taken (with g) into account in equations 1.4 and 1.5.

Of course, the relative change of mass Δm_f/m_0 is extremely small.
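Equations 1.4 and 1.5 are easy to evaluate numerically. This sketch assumes a one-metre descent (Δh = 1 m) and g = 9.81 m/s², values not fixed by the text:

```python
M_H = 1.6727406e-27   # kg, mass of the hydrogen atom (value quoted in the text)
G_ACC = 9.81          # m/s^2, gravitational acceleration near the Earth's surface
C = 2.998e8           # m/s, speed of light
dh = 1.0              # m, vertical distance of the descent

dm_f = M_H * G_ACC * dh / C**2   # eq. 1.4: mass-energy delivered to the first floor
m_b = M_H - dm_f                 # eq. 1.5: remaining mass in the basement

print(f"mass delivered up the thread: {dm_f:.3e} kg")
print(f"relative change dm_f/m_0:     {dm_f / M_H:.3e}")  # about 1e-16 per metre
```

The relative change of about 10⁻¹⁶ per metre is the figure the text quotes as being far below what any weighing scale can resolve.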
(It was equally small in the case of the Earth falling back to its normal orbit, as seen above in section 1.2.) The change of mass given by equation 1.5 is so small that it cannot be verified using a weighing scale. However, this reduction of mass must exist; otherwise, mass-energy would be created from nothing. We will see below that this change of mass has actually been measured.

It was quite arbitrary for us to assume that the initial mass of hydrogen on the first floor is m_0. Physical tables do not mention all the experimental conditions in which an atom is measured. Furthermore, the accuracy of this value is quite insufficient to detect Δm_f (equation 1.5). A change of altitude of one meter near the Earth's surface gives a relative change of mass of the order of 10⁻¹⁶. Masses are not known with such accuracy.

At this point, we must recall that in the above reasoning, we have made a choice between the principle of mass-energy conservation and the concept of absolute identical mass in all frames. It is illogical to accept both principles simultaneously since they are not compatible. We have chosen to rely on the principle of mass-energy conservation, which is equivalent to not believing in "absolute creation from nothing" as defined in section 1.2. We must realize that without mass-energy conservation, not much of physics remains. Physics becomes magic.

1.4 - Mass Loss of the Electron.

There is a way to measure experimentally the mass difference between a hydrogen atom in the basement and one on the first floor. In equation 1.5, we see that a mass Δm_f appears and increases as the atom moves down in the gravitational field. Due to mass-energy conservation, the mass m_b of the atom moving down decreases by the same amount, that is:

    Δm_b = Δm_f        (1.6)

Since the hydrogen atom has lost a part of its mass due to the change of gravitational potential energy, we must expect (according to equation 1.5) that the electron as well as the proton in the atom have individually lost the same relative mass. Let us calculate the relative change of mass of the electron (Δm_e/m_e) and of the proton inside the hydrogen atom due to its change of height. From equations 1.5 and 1.6, we have:

    Δm_e/m_e = gΔh / (c² − gΔh)        (1.7)

where

    Δm_e = Δm_b        (1.8)

When Δh is a few meters, equation 1.7 gives a relative change of mass of the order of 10⁻¹⁶. Consequently, the first order term gives an excellent approximation. Let us use:

    Δm_e/m_e ≈ gΔh/c²        (1.9)

The electron mass m_e (as well as the proton mass) is not constant and decreases continuously when the atom is moving down. Equation 1.7 shows that, independently of the mass of the particle, the relative change of mass is the same. This means that for the same change of altitude, the relative change of mass of the electron is the same as for the proton.

Due to the principle of mass-energy conservation, we must conclude that a hydrogen atom at rest has a less massive electron and a less massive proton at a lower altitude than at a higher altitude. The mass of an electron and of a proton can be tested very accurately in atomic physics. Quantum physics shows us how to calculate the exact structure of the hydrogen atom as a function of the electron and proton mass. From that, one can calculate the Bohr radius of an atom having a different mass. Fortunately, the Bohr radius can also be measured with extreme accuracy experimentally.

1.5 - Change of the Radius of the Electron Orbit.

It is shown in textbooks how quantum physics predicts the radius of the orbit of the electron in hydrogen for a given electronic state. This is given by the well-known Bohr equation:

    r_n = n²ℏ² / (Z k e² m_e)        (1.10)

where r_n is the radius of the Bohr orbit of the electron with principal quantum number n, m_e is the mass of the electron (actually, m_e is the reduced mass, but it is approximately the same as the electron mass), ℏ is the reduced Planck constant (h = 2πℏ), k is the Coulomb constant (1/4πε₀), e is the electronic charge and Z is the number of charges in the nucleus (Z = 1 corresponds to atomic hydrogen). Furthermore, when we choose n = 1 and Z = 1, r_n becomes a_0, which is called the Bohr radius. The Bohr radius is 5.291772 × 10⁻¹¹ m at the Earth's surface (for the case of R∞, for which the nucleus is very massive). Equation 1.10 illustrates a simple principle: the circumference of the electron orbit is exactly equal to (or a whole multiple of) the de Broglie wavelength of the electron orbiting the nucleus.

Since, as we have seen above, the electron mass m_e changes with its position in a gravitational potential, let us calculate (using Bohr's equation) the change of radius r_n caused by that change of electron mass. This is given by the partial derivative of r_n with respect to m_e. From equation 1.10 we find:

    Δr_n/r_n = −Δm_e/m_e        (1.11)

Equation 1.11 shows that any relative decrease of electron mass equals the same relative increase of the radius of the electron orbit. According to the principle of mass-energy conservation, the electron mass decreases when brought to a lower gravitational potential. Consequently, quantum physics (Bohr's equation) shows that the radius of the electron orbit in hydrogen must increase when the atom is at a lower altitude. Using equation 1.10, quantum physics gives us the possibility to predict the size of the electron orbit r_n in an atom for different values of electron mass.
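Equation 1.10 can be checked numerically for n = 1, Z = 1; standard CODATA-style values for the constants are assumed here (the text does not list them):

```python
HBAR = 1.054571817e-34    # J*s, reduced Planck constant
M_E = 9.1093837015e-31    # kg, electron mass
K_C = 8.9875517923e9      # N*m^2/C^2, Coulomb constant 1/(4*pi*eps0)
E_CH = 1.602176634e-19    # C, elementary charge

def bohr_radius(m_e, n=1, Z=1):
    """Equation 1.10: r_n = n^2 hbar^2 / (Z k e^2 m_e)."""
    return n**2 * HBAR**2 / (Z * K_C * E_CH**2 * m_e)

a0 = bohr_radius(M_E)
print(f"a0 = {a0:.6e} m")   # about 5.2918e-11 m, the Bohr radius

# Equation 1.11: a fractional decrease of the electron mass gives the same
# fractional increase of the orbit radius.
r_light = bohr_radius(M_E * (1 - 1e-10))
print(f"relative change of r: {(r_light - a0) / a0:.3e}")   # about +1e-10
```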
Let us study the change of size of the electron orbit as a function of the altitude where the particle is located in a gravitational field.

1.6 - Change of Energy of Electronic States.

Since it has been observed and accepted that the laws of quantum physics are invariant in any frame of reference, let us calculate the energy states of atoms having an electron (and a proton) with a different mass. The consequences of the change of proton mass are easily calculated since the energy levels depend only on the reduced mass of the electron-proton system. In the Bohr equation, we take m_e as the reduced mass. This does not produce any relevant difference in the problem here.

The binding energy between the electron and the proton is a function of the electrostatic potential between the nucleus and the electron. Quantum physics teaches that the energy E_n of the nth state as a function of the electron mass is:

    E_n = −Z² k² e⁴ m_e / (2ℏ²n²)        (1.12)

From equation 1.12, we can find the relationship between the change of electron mass and the change of energy:

    ΔE_n/E_n = Δm_e/m_e        (1.13)

The Bohr radius a_0 is the average radius of the electron orbit for n = 1. According to quantum physics, the energy of state n is:

    E_n = −Z² k e² / (2 a_0 n²)        (1.14)

where a_0 is a function of the electron mass m_e, given by:

    a_0 = ℏ² / (k e² m_e)        (1.15)

We know that the energy of electronic states of atoms can be measured very accurately in spectroscopy from the light emitted during the transition between any two states E_n and E_n'.
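A quick numerical check of equation 1.12 for hydrogen (Z = 1), again assuming standard values for the constants, reproduces the familiar level energies and illustrates equation 1.13:

```python
HBAR = 1.054571817e-34   # J*s, reduced Planck constant
M_E = 9.1093837015e-31   # kg, electron mass
K_C = 8.9875517923e9     # N*m^2/C^2, Coulomb constant
E_CH = 1.602176634e-19   # C, elementary charge
EV = 1.602176634e-19     # J per electron-volt

def level_energy(n, m_e=M_E, Z=1):
    """Equation 1.12: E_n = -Z^2 k^2 e^4 m_e / (2 hbar^2 n^2)."""
    return -Z**2 * K_C**2 * E_CH**4 * m_e / (2 * HBAR**2 * n**2)

print(f"E_1 = {level_energy(1) / EV:.2f} eV")   # about -13.6 eV, the ground state

# Equation 1.13: the relative energy shift equals the relative mass change.
e1 = level_energy(1)
e1_light = level_energy(1, m_e=M_E * (1 - 1e-10))
print(f"dE/E = {(e1_light - e1) / e1:.3e}")     # about -1e-10, same as dm/m
```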
Extremely accurate results can also be obtained in some nuclear reactions with the help of Mössbauer spectroscopy.

The frequency ν_n of the radiation emitted as a function of the energy E_n of level n is given by:

    E_n = hν_n        (1.16)

By differentiation of equation 1.16, we find:

    ΔE_n/E_n = Δν_n/ν_n        (1.17)

Differentiation of equation 1.14 gives:

    ΔE_n/E_n = −Δa_0/a_0        (1.18)

Combining equations 1.11, 1.13, 1.17 and 1.18, we get:

    dν_n/ν_n = −da_0/a_0 = dm_e/m_e        (1.19)

Since these quantities are extremely small but finite, we can write:

    Δν_n/ν_n = −Δa_0/a_0 = Δm_e/m_e        (1.20)

From equation 1.7, we have:

    Δm_e/m_e = gΔh/c²        (1.21)

Equations 1.20 and 1.21 give:

    Δa_0/a_0 = −gΔh/c²        (1.22)

Equation 1.22 shows that the relative change of size of the Bohr radius Δa_0/a_0 is equal to −gΔh/c².

This shows that, following the laws of quantum physics, a change of electron mass due to a change of gravitational potential (which results necessarily from the principle of mass-energy conservation) produces a physical change of the Bohr radius.

We must notice here that using the relativistic correction given by Dirac's mathematics is irrelevant and does not solve this problem. Relativistic quantum mechanics introduces a relativistic correction due to the electron velocity with respect to the center of mass of the atom. The change in electron mass implied in this chapter is due to the gravitational potential originating from outside the proton-electron system. It is not due to any internal velocity within the atom. The relativistic Dirac equation is therefore not relevant to calculating how the Bohr radius changes between its value in the initial gravitational potential and its value in the final gravitational potential.

1.7 - Experimental Measurements of Length Dilation in a Gravitational Potential.

A measurement proving that there is a change of the Bohr radius due to the change of gravitational potential has already been made. The difference of energy for an atom corresponding to its change of size is observed as a red shift of its spectroscopic lines. The change of mass applies quite generally to any particle or subatomic particle placed in a gravitational potential. It can also be applied to astronomical bodies like planets and galaxies, since it relies on the principle of mass-energy conservation, which is always valid.

1.7.1 - Pound and Rebka's Experiment.

A spectroscopic measurement of the highest precision was reported by Pound and Rebka in 1960, with an improved result by Pound and Snider in 1965. Since we have seen that the change of a_0 corresponds to a change of energy of spectroscopic levels, let us examine Pound and Rebka's experiment. They used Mössbauer spectroscopy to measure the red shift of 14.4 keV gamma rays from ⁵⁷Fe. The emitter and the absorber were placed at rest at the bottom and top of a tower of 22.5 meters at Harvard University.

The consequence of the gravitational potential on the particles is such that their mass is lower at the bottom than at the top of the tower. Therefore an electron in an atom located at the base of the tower has a larger Bohr radius than an electron located 22.5 meters above, as given by equation 1.22. The same equation also shows that electrons orbiting with a larger radius have less energy and emit photons with longer wavelengths.

Pound and Rebka reported that the measured red shift agrees within one percent with the equation:

    Δν/ν = −gΔh/c²        (1.23)

Not only is the change of energy predicted by relativity and verified experimentally by Pound and Rebka (equation 1.23) numerically compatible with the change of energy predicted by the conservation of mass-energy, but the predicted relativistic equation is mathematically identical to the one predicting the increase of Bohr's radius (equation 1.22).
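The magnitude of the fractional shift for the 22.5 m Harvard tower follows from a one-line estimate (g = 9.81 m/s² assumed); it is tiny, but well within Mössbauer resolution:

```python
G_ACC = 9.81     # m/s^2, gravitational acceleration
C = 2.998e8      # m/s, speed of light
dh = 22.5        # m, height of the Harvard tower

shift = G_ACC * dh / C**2   # magnitude of the fractional frequency shift
print(f"|dv/v| = {shift:.3e}")   # about 2.5e-15
```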
Since the red shift measured corresponds exactly to the change of the Bohr radius existing between the source and the detector, we see that it cannot be attributed to an absolute increase of energy of the photon during its trip in the gravitational field.

This result is exactly the one that proves that matter at the base of the tower is dilated with respect to matter at the top. It is clear that the Bohr radius has actually changed as expected, which means that the physical length has really changed. Therefore, this phenomenon is not space dilation. The real physical dilation of matter is observed because electrons (as well as all particles) have a lower mass at the bottom of the tower, which gives them a longer de Broglie wavelength. Space dilation is not compatible with a rational interpretation of modern physics. A rational interpretation has already been presented.

The equilibrium distance between particles is now increased because the Bohr radius has increased. When atoms are brought to a different gravitational potential, the electron and proton must reach a new equilibrium distance, as required by quantum physics in equation 1.12. Quantum physics and the principle of mass-energy conservation lead to a real physical contraction or dilation. This solution explains the mysterious description of space contraction in relativity without involving any new hypothesis or new logic. Length contraction or dilation is real and is demonstrated here as the result of actual experiments. Let us also note that this length dilation occurs without producing any internal mechanical stress in solid material. Finally, if the source were above the detector, we would observe a blue shift, proving that the Bohr radius in matter above the detector has decreased with respect to the Bohr radius in matter at lower altitude. One can conclude that Pound and Rebka's experiment has shown that matter is contracted or dilated when it is moved to a different gravitational potential.

1.7.2 - The Solar Red Shift.

Other experiments also show the reality of length contraction or dilation. For example, the atoms at the surface of the Sun have been measured to show exactly the gravitational dilation due to the decrease of mass of the electrons in the solar gravitational potential. The gravitational potential at the Sun's surface is well known. As shown above, it is a change of electron mass in the hydrogen atom due to the gravitational potential that produces a change of the Bohr radius. It is that change of Bohr radius that produces a change of energy between different atomic states. Brault has reported such a change of energy between atomic states. It corresponds exactly to the change of the Bohr radius caused by the gravitational potential. The atoms on the Sun emit light at a different frequency because the electrons are lighter on the solar surface than on Earth, exactly as required by the principle of mass-energy conservation. The change of electron mass on the Sun produces spectral lines displaced toward longer wavelengths, as given by equation 1.22 (see other reference). Since quantum physics is valid on the solar surface, we can understand that the electrons have less mass due to the solar gravitational potential. This leads to an increase of the Bohr radius for the atoms located on the solar surface, which leads to atomic transitions having less energy, as observed experimentally.

The Mössbauer experiment as well as the solar red shift experiment prove that atoms are really dilated physically. This means that the physical length of objects actually changes.
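The expected magnitude of the solar shift follows from the surface potential, Δν/ν ≈ GM/(Rc²). The solar parameters below are standard values, not figures quoted in the text:

```python
G_NEWTON = 6.674e-11   # m^3 kg^-1 s^-2, gravitational constant
M_SUN = 1.989e30       # kg, solar mass
R_SUN = 6.957e8        # m, solar radius
C = 2.998e8            # m/s, speed of light

shift = G_NEWTON * M_SUN / (R_SUN * C**2)   # fractional gravitational red shift
print(f"solar dv/v = {shift:.3e}")   # about 2.1e-6
```

This is the order of magnitude Brault's solar-line measurements resolve.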
We also find that not only do protons and electrons lose mass in a gravitational potential, but so do nuclear particles in the nucleus of Fe57, as observed in the Mössbauer experiment of Pound and Rebka.\n\n1.8 - The Crucial Influence of the Electron Mass on the Fundamental Laws of Relativity.\nMacroscopic matter is formed by an arrangement of atoms. In molecular physics, we learn that quantum physics predicts that interatomic distances are proportional to the Bohr radius. Those distances are calculated as a function of the Bohr radius. According to quantum physics, a smaller Bohr radius will lead to a smaller interatomic distance between atoms in molecular hydrogen. The interatomic distance in molecules is known to be a function of the Bohr radius, just as the interatomic distance in a crystalline structure is proportional to the Bohr radius. This means that since the Bohr radius changes with the intensity of the gravitational potential, the size of molecules and crystals also changes in the same proportion. This is true even in the case of large organic molecules. Therefore, the size of all biological matter is proportional to the Bohr radius. This point is explained in more detail in Appendix I.\nBecause the size of macroscopic matter changes with the gravitational potential, the original length of the standard meter transferred to a location having a different gravitational potential will also change. To be more specific, mass-energy conservation requires that the standard meter made of platinum-iridium alloy becomes shorter if we move it to the top of a mountain. Furthermore, due to the increase of electron mass, an atomic clock will increase its frequency by the same ratio when it is moved to the top of the same mountain. However, since the velocity of light (or any other velocity) is the ratio between these two units, it will not change at the top of the mountain with respect to any frame of reference. This point will be discussed later. 
Because the relative changes of length and clock rate are equal, they will be undetectable when simply using proper values within a frame of reference. All matter, including human bodies, composed of atoms and molecules will change in the same proportion since the intermolecular distance depends on the Bohr radius and consequently on the electron mass, which is reduced when located in a gravitational potential.\nIt is important to notice that length dilation or contraction is predicted and explained here without using the relativistic Lorentz equations or the constancy of the velocity of light. Consequently, we must now consider that we have demonstrated experimentally (using Pound and Rebka's results) the physical change of length of an object in a gravitational potential. More demonstrations will be given in the following chapters.\nThe experiments reported here showing length dilation use atoms that are at rest. They are solely related to the potential energy. We will see that the problems of kinetic energy and velocities require new considerations in the next chapters.\n\n1.9 - References.\n\n C. W. Misner, K. S. Thorne and J. A. Wheeler, Gravitation, W. H. Freeman and Company, San Francisco, p. 1056. See also: Pound, R. V. and Rebka, G. A., Apparent Weight of Photons, Phys. Rev. Lett., 4, 337, 1960. See also: Pound, R. V. and Snider, J. L., Effect of Gravity on Nuclear Resonance, Phys. Rev., 140, B788-B803, 1965. This has been measured in a rocket experiment by Vessot and Levine (1976) with an accuracy of 2 × 10^-4.\n\n J. W. Brault, The Gravitational Redshift in the Solar Spectrum, Doctoral dissertation, Princeton University, 1962. Also: Gravitational Redshift in Solar Lines, Bull. Amer. Phys. Soc., 8, 28, 1963.\n\n P. Marmet, Absurdities in Modern Physics: A Solution, ISBN 0-921272-15-4, Les Éditions du Nordir, c/o R. Yergeau, 165 Waller, Ottawa, Ontario K1N 6N5, 144 p. 
1993.\n\n1.10 - Symbols and Variables.\n\nΔE energy produced by the atom and transmitted to the first floor\nΔh distance travelled by the atom\nΔmb amount of mass lost by the atom\nΔme amount of mass lost by the electron\nΔmf amount of mass generated on the first floor\nEn energy of the hydrogen atom in state n\nF weight of the atom\nmo mass of the atom on the table\nνn frequency of the radiation emitted corresponding to En\nrn radius of the orbit of the electron in hydrogen in state n\nZ number of charges in the nucleus\n\n<><><><><><><><><><><><>\nPreface  Contents  Chapter 2" ]
[ null, "http://newtonphysics.on.ca/einstein/orbit.gif", null, "http://newtonphysics.on.ca/einstein/image1534.gif", null, "http://newtonphysics.on.ca/einstein/image1535.gif", null, "http://newtonphysics.on.ca/einstein/image1536.gif", null, "http://newtonphysics.on.ca/einstein/image1537.gif", null, "http://newtonphysics.on.ca/einstein/image1538.gif", null, "http://newtonphysics.on.ca/einstein/image1539.gif", null, "http://newtonphysics.on.ca/einstein/image1540.gif", null, "http://newtonphysics.on.ca/einstein/image1541.gif", null, "http://newtonphysics.on.ca/einstein/image1542.gif", null, "http://newtonphysics.on.ca/einstein/image1543.gif", null, "http://newtonphysics.on.ca/einstein/image1544.gif", null, "http://newtonphysics.on.ca/einstein/image1545.gif", null, "http://newtonphysics.on.ca/einstein/image1546.gif", null, "http://newtonphysics.on.ca/einstein/image1547.gif", null, "http://newtonphysics.on.ca/einstein/image1548.gif", null, "http://newtonphysics.on.ca/einstein/image1549.gif", null, "http://newtonphysics.on.ca/einstein/image1550.gif", null, "http://newtonphysics.on.ca/einstein/image1551.gif", null, "http://newtonphysics.on.ca/einstein/image1552.gif", null, "http://newtonphysics.on.ca/einstein/image1553.gif", null, "http://newtonphysics.on.ca/einstein/image1554.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88905907,"math_prob":0.96861094,"size":27894,"snap":"2023-14-2023-23","text_gpt3_token_len":6667,"char_repetition_ratio":0.16392973,"word_repetition_ratio":0.051600672,"special_character_ratio":0.21302073,"punctuation_ratio":0.09321876,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99116105,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44],"im_url_duplicate_count":[null,null,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-22T10:15:16Z\",\"WARC-Record-ID\":\"<urn:uuid:4f75a3f6-b69e-4616-8504-4a5f62fa3916>\",\"Content-Length\":\"55978\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1cf7a4ad-cb18-49fb-9fd5-5fb7cee854a8>\",\"WARC-Concurrent-To\":\"<urn:uuid:0503fda3-07d0-4f7d-ad0f-5590e8e82e5b>\",\"WARC-IP-Address\":\"204.44.202.18\",\"WARC-Target-URI\":\"http://newtonphysics.on.ca/einstein/chapter1.html\",\"WARC-Payload-Digest\":\"sha1:NJXWJ4WEFARW4L6MFOTECYSH7OLWXKJC\",\"WARC-Block-Digest\":\"sha1:EK3WC3YB4VIILNIO7QA3NKNC7YS3EX67\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943809.22_warc_CC-MAIN-20230322082826-20230322112826-00626.warc.gz\"}"}
https://stacks.math.columbia.edu/tag/030K
[ "Any algebraic field extension is uniquely a separable field extension followed by a purely inseparable one.\n\nLemma 9.14.6. Let $E/F$ be an algebraic field extension. There exists a unique subextension $E/E_{sep}/F$ such that $E_{sep}/F$ is separable and $E/E_{sep}$ is purely inseparable.\n\nProof. If the characteristic is zero we set $E_{sep} = E$. Assume the characteristic is $p > 0$. Let $E_{sep}$ be the set of elements of $E$ which are separable over $F$. This is a subextension by Lemma 9.12.13 and of course $E_{sep}$ is separable over $F$. Given an $\\alpha$ in $E$ there exists a $p$-power $q$ such that $\\alpha ^ q$ is separable over $F$. Namely, $q$ is that power of $p$ such that the minimal polynomial of $\\alpha$ is of the form $P(x^ q)$ with $P$ separable algebraic, see Lemma 9.12.1. Hence $E/E_{sep}$ is purely inseparable. Uniqueness is clear. $\\square$\n\nComment #826 by on\n\nSuggested slogan: Algebraic field extensions break down uniquely into a separable and purely inseparable part.\n\nComment #5488 by Théo de Oliveira Santos on\n\nVery small typo: \"Assume the characteristic if $p>0$.\"" ]
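As a concrete illustration of the tower in the lemma (my own example, not taken from the Stacks page), one can write out the decomposition for a small extension in characteristic $p$:

```latex
% Example (not from the Stacks text): let $p$ be odd, $F = \mathbf{F}_p(t)$,
% and $E = F(\alpha)$ with $\alpha$ a root of $x^{2p} - t$, which is
% irreducible by Eisenstein at $(t)$. As in Lemma 9.12.1,
\[
x^{2p} - t = P(x^p), \qquad P(y) = y^2 - t,
\]
% where $P$ is separable since $P'(y) = 2y \neq 0$. Hence
% $E_{sep} = F(\alpha^p)$ is separable of degree $2$ over $F$, while
% $E/E_{sep}$ is purely inseparable of degree $p$, because
\[
x^p - \alpha^p = (x - \alpha)^p \quad \text{in } E_{sep}[x].
\]
```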
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8107429,"math_prob":0.99721265,"size":1940,"snap":"2023-14-2023-23","text_gpt3_token_len":554,"char_repetition_ratio":0.13997933,"word_repetition_ratio":0.6060606,"special_character_ratio":0.29536083,"punctuation_ratio":0.10552764,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997619,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-09T23:59:33Z\",\"WARC-Record-ID\":\"<urn:uuid:b64934db-96c1-4208-a82d-8eb76c3b2fdf>\",\"Content-Length\":\"16475\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cbffef70-2791-409b-811a-a18699f8756b>\",\"WARC-Concurrent-To\":\"<urn:uuid:6a3d08f4-308f-4da5-80b4-e64a4173482f>\",\"WARC-IP-Address\":\"128.59.222.85\",\"WARC-Target-URI\":\"https://stacks.math.columbia.edu/tag/030K\",\"WARC-Payload-Digest\":\"sha1:2DCJAPT36OZNRQF7FXCPKGHP4JARP56E\",\"WARC-Block-Digest\":\"sha1:5A44XAKKVL33P4JDBM5MWFDE3TR4BF5M\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224656869.87_warc_CC-MAIN-20230609233952-20230610023952-00197.warc.gz\"}"}
https://programs.team/leetcode-find-the-median-of-two-positively-ordered-arrays.html
[ "# LeetCode - find the median of two sorted arrays\n\n## Topic information\n\nSource address: Median of Two Sorted Arrays\n\nGiven two sorted (ascending) arrays nums1 and nums2 of sizes m and n, respectively, find and return the median of the two arrays.\n\nThe time complexity of the algorithm should be \\(O(log(m+n))\\).\n\n## Prompt information\n\n### Example 1\n\nInput: nums1 = [1,3], nums2 = [2]\nOutput: 2.00000\nExplanation: merged array = [1,2,3], median 2\n\n\n### Example 2\n\nInput: nums1 = [1,2], nums2 = [3,4]\nOutput: 2.50000\nExplanation: merged array = [1,2,3,4], median (2 + 3) / 2 = 2.5\n\n\n### Tips\n\n• nums1.length == m\n• nums2.length == n\n• 0 <= m <= 1000\n• 0 <= n <= 1000\n• 1 <= m + n <= 2000\n• -10^6 <= nums1[i], nums2[i] <= 10^6\n\n## Implementation logic\n\n### Merging method\n\nThe first way to solve the problem is to merge the two sorted arrays into one sorted array and then read off the median of the result. 
This method is the most direct approach.\n\nHowever, merging the two arrays is the most time-consuming operation of this method: it takes \\(O(m+n)\\) time, so it does not meet the requirement that the algorithm run in \\(O(log(m+n))\\) time.\n\nMoreover, the space complexity of this method is also \\(O(m+n)\\), so its space usage is not optimal either.\n\npackage cn.fatedeity.algorithm.leetcode;\n\npublic class MedianOfTwoSortedArrays {\n    public double answer(int[] nums1, int[] nums2) {\n        int i = 0, j = 0;\n        int[] numbers = new int[nums1.length + nums2.length];\n        int index = 0;\n        while (i < nums1.length || j < nums2.length) {\n            if (i == nums1.length) {\n                for (int k = j; k < nums2.length; k++) {\n                    numbers[index++] = nums2[k];\n                }\n                break;\n            } else if (j == nums2.length) {\n                for (int k = i; k < nums1.length; k++) {\n                    numbers[index++] = nums1[k];\n                }\n                break;\n            }\n\n            if (nums1[i] < nums2[j]) {\n                numbers[index++] = nums1[i++];\n            } else if (nums2[j] < nums1[i]) {\n                numbers[index++] = nums2[j++];\n            } else {\n                numbers[index++] = nums1[i++];\n                numbers[index++] = nums2[j++];\n            }\n        }\n        if ((numbers.length & 1) == 0) {\n            // Array length is even\n            int mid = numbers.length >> 1;\n            return (numbers[mid - 1] + numbers[mid]) / 2f;\n        } else {\n            return numbers[numbers.length >> 1];\n        }\n    }\n}\n\n\n### Double pointer\n\nFacing two sorted arrays, you can also use two pointers, as long as you find the position of the median. Since the lengths of the two arrays are known, the combined index corresponding to the median is also known.\n\nThe basic idea is to skip the smaller values in the two arrays one by one with two pointers until they reach the index position of the median, at which point the median of the two arrays is obtained.\n\nThis method improves on the merging method: the space complexity drops to \\(O(1)\\). 
In terms of running efficiency, it only loops \\(\\frac{m+n}{2}\\) times, but the time complexity is still \\(O(m+n)\\), which does not meet the \\(O(log(m+n))\\) requirement of the problem.\n\npackage cn.fatedeity.algorithm.leetcode;\n\npublic class MedianOfTwoSortedArrays {\n    public double answer(int[] nums1, int[] nums2) {\n        int m = nums1.length;\n        int n = nums2.length;\n        int len = m + n;\n        int mStart = 0, nStart = 0;\n        int left = 0, right = 0;\n        for (int i = 0; i <= len >> 1; i++) {\n            left = right;\n            if (mStart < m && (nStart >= n || nums1[mStart] < nums2[nStart])) {\n                right = nums1[mStart++];\n            } else {\n                right = nums2[nStart++];\n            }\n        }\n        if ((len & 1) == 0) {\n            return (left + right) / 2f;\n        } else {\n            return right;\n        }\n    }\n}\n\n\n### Binary search\n\nGenerally, most problems that require \\(O(log(n))\\) time can be solved using binary search.\n\nThis problem can also use binary search to bring the time complexity down to the required \\(O(log(m+n))\\), although the search here takes a slightly counter-intuitive form.\n\nThe main idea is to look for the k-th smallest element of the two arrays combined: compare the element at position \\(k/2\\) in each array, discard the smaller front part directly, and repeat with a halved k until the median position is reached; the remaining front elements of the two arrays are then used to compute the median.\n\npackage cn.fatedeity.algorithm.leetcode;\n\npublic class MedianOfTwoSortedArrays {\n    public double answer(int[] nums1, int[] nums2) {\n        int len1 = nums1.length;\n        int len2 = nums2.length;\n        int total = len1 + len2;\n        int left = (total + 1) >> 1;\n        int right = (total + 2) >> 1;\n        if (left == right) {\n            return this.findK(nums1, 0, nums2, 0, left);\n        } else {\n            return (this.findK(nums1, 0, nums2, 0, left) + this.findK(nums1, 0, nums2, 0, right)) / 2.0;\n        }\n    }\n\n    private int findK(int[] nums1, int i, int[] nums2, int j, int k) {\n        if (i >= nums1.length) {\n            // num1 has been filtered out\n            return 
nums2[j + k - 1];\n        } else if (j >= nums2.length) {\n            // num2 has been filtered out\n            return nums1[i + k - 1];\n        }\n        if (k == 1) {\n            // First remaining element of each array: take the smaller\n            return Math.min(nums1[i], nums2[j]);\n        }\n\n        // Determine the two values to compare in this round\n        int mid1 = (i + (k >> 1) - 1) < nums1.length ? nums1[i + (k >> 1) - 1] : Integer.MAX_VALUE;\n        int mid2 = (j + (k >> 1) - 1) < nums2.length ? nums2[j + (k >> 1) - 1] : Integer.MAX_VALUE;\n\n        if (mid1 < mid2) {\n            return findK(nums1, i + (k >> 1), nums2, j, k - (k >> 1));\n        } else {\n            return findK(nums1, i, nums2, j + (k >> 1), k - (k >> 1));\n        }\n    }\n}\n\n\nPosted by trg on Sat, 13 Aug 2022 22:14:53 +0530" ]
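The halve-k recursion in the Java version translates directly; below is a compact Python sketch of the same idea (my own rendering, not code from the article):

```python
def find_median_sorted_arrays(a, b):
    """O(log(m+n)) median via the 'discard k//2 elements' trick."""
    def kth(i, j, k):  # k-th smallest (1-based) among a[i:] and b[j:]
        if i >= len(a):
            return b[j + k - 1]       # a is exhausted
        if j >= len(b):
            return a[i + k - 1]       # b is exhausted
        if k == 1:
            return min(a[i], b[j])    # smallest remaining element overall
        half = k // 2
        # Candidate just before the halfway mark in each array (infinite if out of range)
        va = a[i + half - 1] if i + half - 1 < len(a) else float("inf")
        vb = b[j + half - 1] if j + half - 1 < len(b) else float("inf")
        # The array with the smaller candidate cannot contain the k-th element
        # within its first `half` remaining entries, so discard them.
        if va < vb:
            return kth(i + half, j, k - half)
        return kth(i, j + half, k - half)

    total = len(a) + len(b)
    lo, hi = (total + 1) // 2, (total + 2) // 2  # same index when total is odd
    return (kth(0, 0, lo) + kth(0, 0, hi)) / 2
```

Both of the article's examples check out: `[1,3]` and `[2]` give 2.0, and `[1,2]` and `[3,4]` give 2.5.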
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.62771636,"math_prob":0.99961275,"size":5232,"snap":"2022-40-2023-06","text_gpt3_token_len":1584,"char_repetition_ratio":0.14211936,"word_repetition_ratio":0.07061267,"special_character_ratio":0.35932723,"punctuation_ratio":0.16698474,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995173,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-04T22:46:42Z\",\"WARC-Record-ID\":\"<urn:uuid:82a24977-cc31-4954-a78f-52b408969f91>\",\"Content-Length\":\"11575\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:484d5d1b-c2de-4d4f-ab25-156015dd9b21>\",\"WARC-Concurrent-To\":\"<urn:uuid:07ec57b2-769e-4d2f-bfb4-87fccbc1bfe5>\",\"WARC-IP-Address\":\"178.238.237.47\",\"WARC-Target-URI\":\"https://programs.team/leetcode-find-the-median-of-two-positively-ordered-arrays.html\",\"WARC-Payload-Digest\":\"sha1:OGAG65HCF7BCW7NA6UNG7WZJOZLIDURC\",\"WARC-Block-Digest\":\"sha1:TWFNIBBO2R42WO6ATRVINARTSEFZU7O3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500154.33_warc_CC-MAIN-20230204205328-20230204235328-00311.warc.gz\"}"}
https://dmu.ghpc.au.dk/lmt/wiki/index.php?title=Supported_features
[ "# Supported features\n\n### Raw input data support\n\n• Only numeric input is supported. That is, data must be either integer or real numbers, but not characters. The only exceptions are genomic data and files containing boolean data.\n• 64 bit integers are used to store integer data. That is, integer data may range from -9.223372e+18 to +9.223372e+18.\n• All input data are renumbered automatically if required by the job, and respective cross-reference files will be provided if necessary.\n• Files containing human readable array data must be in comma-separated-value (csv) format.\n• Files containing human readable vector data must contain a single column vector.\n• Pedigree files must be complete. That is, all individuals occurring as parents (2nd and 3rd column) must occur as individuals (1st column). All individual ids which occur in the data file must occur in the pedigree.\n• Genomic marker data must be imputed to a common density across all genotypes and must not contain missing markers.\n\n### Supported operations\n\nCurrently lmt supports the following operations on linear mixed models:\n\n• Solving for BLUP and BLUE solutions conditional on supplied variances for random and fixed factors, respectively;\n• Gibbs sampling of variance components in single-pass and blocked mode;\n• AI-REML estimation of variance components using the coefficient matrix of the mixed model equation system (AI-REML-C)\n• MC-EM-REML estimation of variance components\n• Sampling (block)diagonal elements of the inverse of the mixed model coefficient matrix\n• Solving for (block)diagonal elements of the inverse of the mixed model coefficient matrix\n\n### Supported factors and variables\n\nlmt supports\n\n• fixed factors\n• random factors\n• classification variables\n• continuous co-variables, which can be nested. For continuous co-variables lmt supports user-defined polynomials (e.g. 
sin(x) or x^(0.5)) and hard-coded Legendre polynomials up to order 6.\n• genetic group co-variables\n\nAll classification variables and co-variables can be associated with a fixed or random factor.\n\n### Supported variance structures\n\nFor random factors lmt supports variance structures of\n\n• structure $$\\Gamma\\otimes\\Sigma$$, where $$\\Sigma$$ is a dense symmetric positive definite matrix, and\n• $$\\Theta_L(\\Gamma\\otimes I_{\\Sigma})\\Theta_L^{'}$$, where $$\\Theta$$ is a symmetric positive definite block-diagonal matrix of $$n$$ symmetric positive definite matrices $$\\Sigma_i, i=1,..,n$$, $$\\Theta_L$$ is the lower Cholesky factor of $$\\Theta$$ and $$I_{\\Sigma}$$ is an identity matrix of the dimension of $$\\Sigma_i$$.\n\nWhen solving linear mixed models $$\\Sigma$$ and $$\\Gamma$$ are user-determined constants, whereas when estimating variances $$\\Gamma$$ is a user-determined constant and $$\\Sigma$$ is a function of the data.\n\nSupported types for $$\\Gamma$$ are\n\n• an identity matrix\n• an arbitrary positive definite diagonal matrix\n• a pedigree-based numerator relationship matrix $$A$$ which may contain meta-founders\n• a pedigree- and genotype-based relationship matrix $$H$$ which may contain meta-founders\n• genetic groups absorbed into $$A$$ or $$H$$\n• a user-defined (u.d.) 
symmetric, positive definite matrix whose inverse is supplied\n• as a sparse upper-triangular matrix stored in csr format\n• as a dense matrix\n• a co-variance matrix of a selected auto-regressive process\n\nFurther, lmt supports special variance structures which are not covered by the above description:\n\n• a SNP-BLUP co-variance structure with the option to model marker co-variances as $$\\Theta_L(\\Gamma\\otimes I_{\\Sigma})\\Theta_L^{'}$$.\n\n### Supported linear mixed model solvers\n\nlmt supports\n\n• a direct solver which requires explicitly building the left-hand-side coefficient matrix ($$C$$) of the linear mixed model equations\n• a pre-conditioned gradient solver which does not require $$C$$\n\n### Supported features related to genomic data\n\n• direct use of genomic marker data\n• building of genomic relationship matrices ($$G$$) from supplied genomic data\n• uploading of a u.d. $$G$$\n• adjustment of $$G$$ to $$A_{gg}$$ in ssGBLUP and ssSNPBLUP\n• solving ssGBLUP models\n• variance component estimation for ssGBLUP models\n• solving ssGTBLUP models\n• solving ssSNPBLUP models\n• calculation of true H matrix diagonal elements for ssGBLUP models\n• all single-step models can be run \"bottom-up\", that is, the user supplies the genotypes and all necessary ingredients (e.g. $$G$$) are built on the fly.\n\n### Supported pedigree types\n\n• ordinary pedigrees\n• probabilistic pedigrees with an unlimited number of parent pairs per individual\n• genetic group pedigrees\n• meta-founder pedigrees\n• ignoring of inbreeding\n• iterative derivation of inbreeding coefficients\n\n### Supported features related to meta-founders and genetic groups\n\n• meta-founders can be modeled for all $$\\Gamma$$ which contain $$A$$ (e.g. $$A$$, $$H$$ for ssGBLUP, ssGTBLUP and ssSNPBLUP)\n• genetic groups can be modeled as an extra factor or can be absorbed into all $$\\Gamma$$ which contain $$A$$" ]
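The first structure, $$\Gamma\otimes\Sigma$$, is a Kronecker product of the relationship matrix with the trait co-variance matrix. A small numpy sketch with made-up values (a 2-trait $$\Sigma$$ and a 3-individual $$\Gamma$$; illustrative only, not lmt code):

```python
import numpy as np

# Illustrative values (not from lmt): 2x2 trait covariance Sigma,
# 3x3 relationship matrix Gamma.
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])
Gamma = np.array([[1.0, 0.5, 0.25],
                  [0.5, 1.0, 0.5],
                  [0.25, 0.5, 1.0]])

# Covariance of the stacked random effects (individual-major ordering)
V = np.kron(Gamma, Sigma)
print(V.shape)  # (6, 6)

# Gamma and Sigma positive definite => V positive definite
assert np.all(np.linalg.eigvalsh(V) > 0)
```

The Kronecker form is what lets the inverse be built cheaply, since $$(\Gamma\otimes\Sigma)^{-1} = \Gamma^{-1}\otimes\Sigma^{-1}$$.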
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80049056,"math_prob":0.9938445,"size":5670,"snap":"2023-40-2023-50","text_gpt3_token_len":1349,"char_repetition_ratio":0.10889517,"word_repetition_ratio":0.019277109,"special_character_ratio":0.23544973,"punctuation_ratio":0.09263158,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9993117,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-29T22:03:32Z\",\"WARC-Record-ID\":\"<urn:uuid:f946a73a-edd5-4e9e-8745-c9a8f8c205b7>\",\"Content-Length\":\"26568\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d3187779-a538-475b-963b-ff2caf6a1a26>\",\"WARC-Concurrent-To\":\"<urn:uuid:6c1745d4-7110-4aa6-998b-bf7044564c4d>\",\"WARC-IP-Address\":\"130.225.18.162\",\"WARC-Target-URI\":\"https://dmu.ghpc.au.dk/lmt/wiki/index.php?title=Supported_features\",\"WARC-Payload-Digest\":\"sha1:OAJ3WHA26AHOTHF3AUIDLUIWJND5AJTC\",\"WARC-Block-Digest\":\"sha1:3LNDQKPO67ZA7R2KVREKWM4UUBMXDREN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100146.5_warc_CC-MAIN-20231129204528-20231129234528-00511.warc.gz\"}"}
https://optuna.readthedocs.io/en/stable/_modules/optuna/visualization/_edf.html
[ "# Source code for optuna.visualization._edf\n\n```import itertools\nfrom typing import Callable\nfrom typing import cast\nfrom typing import List\nfrom typing import Optional\nfrom typing import Sequence\nfrom typing import Union\n\nimport numpy as np\n\nfrom optuna.logging import get_logger\nfrom optuna.study import Study\nfrom optuna.trial import FrozenTrial\nfrom optuna.trial import TrialState\nfrom optuna.visualization._plotly_imports import _imports\nfrom optuna.visualization._utils import _check_plot_args\n\nif _imports.is_successful():\nfrom optuna.visualization._plotly_imports import go\n\n_logger = get_logger(__name__)\n\n[docs]def plot_edf(\nstudy: Union[Study, Sequence[Study]],\n*,\ntarget: Optional[Callable[[FrozenTrial], float]] = None,\ntarget_name: str = \"Objective Value\",\n) -> \"go.Figure\":\n\"\"\"Plot the objective value EDF (empirical distribution function) of a study.\n\nNote that only the complete trials are considered when plotting the EDF.\n\n.. note::\n\nEDF is useful to analyze and improve search spaces.\nFor instance, you can see a practical use case of EDF in the paper\n`Designing Network Design Spaces <https://arxiv.org/abs/2003.13678>`_.\n\n.. note::\n\nThe plotted EDF assumes that the value of the objective function is in\naccordance with the uniform distribution over the objective space.\n\nExample:\n\nThe following code snippet shows how to plot EDF.\n\n.. 
plotly::\n\nimport math\n\nimport optuna\n\ndef ackley(x, y):\na = 20 * math.exp(-0.2 * math.sqrt(0.5 * (x ** 2 + y ** 2)))\nb = math.exp(0.5 * (math.cos(2 * math.pi * x) + math.cos(2 * math.pi * y)))\nreturn -a - b + math.e + 20\n\ndef objective(trial, low, high):\nx = trial.suggest_float(\"x\", low, high)\ny = trial.suggest_float(\"y\", low, high)\nreturn ackley(x, y)\n\nsampler = optuna.samplers.RandomSampler(seed=10)\n\n# Widest search space.\nstudy0 = optuna.create_study(study_name=\"x=[0,5), y=[0,5)\", sampler=sampler)\nstudy0.optimize(lambda t: objective(t, 0, 5), n_trials=500)\n\n# Narrower search space.\nstudy1 = optuna.create_study(study_name=\"x=[0,4), y=[0,4)\", sampler=sampler)\nstudy1.optimize(lambda t: objective(t, 0, 4), n_trials=500)\n\n# Narrowest search space but it doesn't include the global optimum point.\nstudy2 = optuna.create_study(study_name=\"x=[1,3), y=[1,3)\", sampler=sampler)\nstudy2.optimize(lambda t: objective(t, 1, 3), n_trials=500)\n\nfig = optuna.visualization.plot_edf([study0, study1, study2])\nfig.show()\n\nArgs:\nstudy:\nA target :class:`~optuna.study.Study` object.\nYou can pass multiple studies if you want to compare those EDFs.\ntarget:\nA function to specify the value to display. If it is :obj:`None` and ``study`` is being\nused for single-objective optimization, the objective values are plotted.\n\n.. 
note::\nSpecify this argument if ``study`` is being used for multi-objective optimization.\ntarget_name:\nTarget's name to display on the axis label.\n\nReturns:\nA :class:`plotly.graph_objs.Figure` object.\n\nRaises:\n:exc:`ValueError`:\nIf ``target`` is :obj:`None` and ``study`` is being used for multi-objective\noptimization.\n\"\"\"\n\n_imports.check()\n\nif isinstance(study, Study):\nstudies = [study]\nelse:\nstudies = list(study)\n\n_check_plot_args(studies, target, target_name)\n\nreturn _get_edf_plot(studies, target, target_name)\n\ndef _get_edf_plot(\nstudies: List[Study],\ntarget: Optional[Callable[[FrozenTrial], float]] = None,\ntarget_name: str = \"Objective Value\",\n) -> \"go.Figure\":\nlayout = go.Layout(\ntitle=\"Empirical Distribution Function Plot\",\nxaxis={\"title\": target_name},\nyaxis={\"title\": \"Cumulative Probability\"},\n)\n\nif len(studies) == 0:\n_logger.warning(\"There are no studies.\")\nreturn go.Figure(data=[], layout=layout)\n\nall_trials = list(\nitertools.chain.from_iterable(\n(\ntrial\nfor trial in study.get_trials(deepcopy=False)\nif trial.state == TrialState.COMPLETE\n)\nfor study in studies\n)\n)\n\nif len(all_trials) == 0:\n_logger.warning(\"There are no complete trials.\")\nreturn go.Figure(data=[], layout=layout)\n\nif target is None:\n\ndef _target(t: FrozenTrial) -> float:\nreturn cast(float, t.value)\n\ntarget = _target\n\nmin_x_value = min(target(trial) for trial in all_trials)\nmax_x_value = max(target(trial) for trial in all_trials)\nx_values = np.linspace(min_x_value, max_x_value, 100)\n\ntraces = []\nfor study in studies:\nvalues = np.asarray(\n[\ntarget(trial)\nfor trial in study.get_trials(deepcopy=False)\nif trial.state == TrialState.COMPLETE\n]\n)\n\ny_values = np.sum(values[:, np.newaxis] <= x_values, axis=0) / values.size\n\ntraces.append(go.Scatter(x=x_values, y=y_values, name=study.study_name, mode=\"lines\"))\n\nfigure = go.Figure(data=traces, layout=layout)\nfigure.update_yaxes(range=[0, 1])\n\nreturn 
figure\n```" ]
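The heart of `_get_edf_plot` is the single vectorized line that evaluates the empirical distribution function on a grid. A standalone numpy sketch of that computation (toy values, not part of the module):

```python
import numpy as np

values = np.array([3.0, 1.0, 2.0, 2.0, 5.0])           # e.g. objective values of complete trials
x_values = np.linspace(values.min(), values.max(), 5)  # grid over the objective range

# For each x, the fraction of observed values <= x: the EDF evaluated on the grid.
# Broadcasting values (n, 1) against x_values (m,) yields an (n, m) boolean matrix.
y_values = np.sum(values[:, np.newaxis] <= x_values, axis=0) / values.size
print(y_values)  # non-decreasing, ends at 1.0
```

Lower curves at a given objective value indicate studies where fewer trials reached that value, which is what makes the plot useful for comparing search spaces.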
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5283001,"math_prob":0.9193627,"size":4480,"snap":"2021-43-2021-49","text_gpt3_token_len":1200,"char_repetition_ratio":0.13516532,"word_repetition_ratio":0.075229354,"special_character_ratio":0.27924109,"punctuation_ratio":0.24390244,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99900216,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-19T18:48:03Z\",\"WARC-Record-ID\":\"<urn:uuid:159c6d08-31a2-4dec-93e6-861f76e8b61f>\",\"Content-Length\":\"29154\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0755a74f-5793-46c3-bb65-fcb3dc3e6f35>\",\"WARC-Concurrent-To\":\"<urn:uuid:50605bb1-f072-4ba7-97a6-ea68b52f4142>\",\"WARC-IP-Address\":\"104.17.32.82\",\"WARC-Target-URI\":\"https://optuna.readthedocs.io/en/stable/_modules/optuna/visualization/_edf.html\",\"WARC-Payload-Digest\":\"sha1:3DDVLQG6GH5ASNBXQFKNPL7QCUEUWAUB\",\"WARC-Block-Digest\":\"sha1:EHXANN7F4S6UIJ4YV6M62YG2R6DNOQKN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585280.84_warc_CC-MAIN-20211019171139-20211019201139-00701.warc.gz\"}"}
http://zuowen6.info/node/3215
[ "## Simple Happiness, a Happy Smile

I like this saying.

It was the message our primary-school head teacher gave the class on the eve of graduation, and it carries a deep meaning. I do not know how many people still remember it, but at least I do.

When we were all together in the old days, laughter always flew alongside tears, and happy things were woven together with sad ones. On weekends we would sometimes go out for a walk, sharing the happy things with everyone and bearing the sad ones together. That day, Xiaoyun said she wanted to take a walk in Binhu Park, and I said yes. The load on my shoulders had grown too heavy and I was a little weary; I wanted to look around Binhu Park and find that old innocence again. Besides, had I not left some memories behind there? I wanted to take them back.

We swept into Binhu Park in a grand procession and looked at the carousel that belongs only to little children, the go-karts that belong only to little children, the bungee ride that belongs only to little children... We felt a little lost. Do we really no longer belong here? Are we truly, truly no longer children? Can that simple happiness never be found again...

Then who was it that wiped out our happiness? In my memory, a harsh voice kept echoing in my ears.

"So grown up and still playing in the park, aren't you embarrassed..."

I only want a share of simple happiness; I only want to own a happy smile. Is that wrong?

So we cast aside every worry, the unexplained confusion and our parents' objections alike, and charged without hesitation onto our "happy tour". Memories once forgotten slowly floated back to the surface: bursts of delighted laughter, soaked clothes, black hair plastered to a forehead as smooth as satin, and something called "happiness" pouring out of her eyes, giving off a dazzling light...

Is that really the me of the past?

Sitting on the carousel, the curve at the corners of my mouth gave away what I was thinking: the me of now is happy, so very happy, happier even than the me of six years ago.

"Da-da-da..." Who is singing such a cheerful song? Following the sound, we found a cotton-candy machine rumbling away, trying to catch our attention.

With a whoosh, everyone crowded around it at once. "Uncle, I want plain!" "Uncle, I want strawberry..." "Uncle, I want green apple!" "Uncle..."

...

Once upon a time we, too, were as "crazy" as this. And we still pout in displeasure just as we did back then. "Why is hers bigger than mine?" "Uncle, I want even more than her!" I only smiled quietly.

Looking at the cotton candy in my hand, its sweet scent spreading all around, a happy smile appeared on my face. Just as I was about to eat it, Xiaoyun came running over, panting. Her cotton candy was strawberry-flavored, and its pink color deepened the smile in my eyes. "Swish!" I stretched out my "evil claw", and a quarter of the strawberry cotton candy went into my stomach. I gave a token burp and nodded in satisfaction. "Swoosh!" A shadow flashed past, and with tearful eyes I stared at the two-thirds of my cotton candy that remained, a puff of smoke rising from my head. "Xiao! Yu! Yun!" I shouted in mock fury. Amid the laughter, two slowly blurring figures chased each other, and the lovely sound of their laughter lingered over the park, unwilling to leave.

On my birthday I received many presents from my classmates, and at moments like that I would also show a smile of the purest simplicity.

Happiness like this may seem trivial to the "famous", but to me these memories are priceless treasures. Becoming happy, and smiling, over the smallest things seems to have become my daily bread. I hope that from now on I can still smile as simply and as happily as I do now, searching for a quiet sky amid the bustle, and in that quiet sky searching for a happiness of my own and the smile that bespeaks it.
幸福的微笑。\n\n无注音版:\n我喜欢这句话。\n它是我们的小学班主任在毕业前夕送给大家的话,寓意深刻。我不知道还有几个人记得这句话,可至少我记得。\n以前大家在一起的时候,欢笑声总是伴着泪水齐飞,幸福的事情与悲伤的事情交织在一起。周末,大家偶尔会出去玩一玩,走一走,把快乐的与大家分享,而悲伤的,一起分担。那天,小云说要去滨湖公园走一走,我说好,肩上的担子太重,我有些疲惫了,去滨湖公园看看,把曾经的那份纯真找回来,还有,我是不是把某些记忆遗落在那里了,我想找回来。\n大家浩浩荡荡地来到滨湖公园,看着只属于小孩子的旋转木马,看着只属于小孩子的赛车,看着只属于小孩子的蹦极……我们有些迷茫——真的不再属于这里了吗?真的真的不再是小孩子了吗?那份单纯的快乐,也再找不回来了吗……\n那又是谁,抹杀了我们的快乐。记忆中,残酷的声音在耳边久久回荡。\n“这么大了还来公园玩,害不害羞呀……”\n我只是想要一份单纯的快乐,我只是想拥有一个幸福的微笑。这样有错么?\n所以,大家抛开了一切烦恼,莫名的迷惘也好、父母的反对也罢,义无反顾地冲上了“happy之旅”。曾经被遗忘的记忆慢慢浮出水面。开心的大笑声,湿透的衣物,紧贴着额头的黑发如缎子一般光滑,一种叫“快乐”的东西从她的眼中倾泻而出,散发出耀眼的光芒……\n这就是曾经的我么?\n坐在旋转木马上,嘴角的弧度暴露了内心的想法,现在的我很快乐很快乐,比6年前的我还要快乐。\n“哒哒哒……”是谁在唱着欢乐的歌?循声望去,原来是一个制作棉花糖的机器在发出“轰轰”的声音,想吸引我们的注意力。\n“呼啦。”大家迅速地围了上去。“叔叔我要原味的!”“叔叔我要草莓味……”“叔叔我要青苹果味的!”“叔叔……”\n……\n曾经何时,我们也想现在这般“疯狂”过。现在的大家还是会像以前那样不满地撅起嘴巴。“为什么她的比我多?”“叔叔我要比她更多的!”我只是轻轻地笑。\n望着手中的棉花糖,甜甜的香气在向周围扩散,幸福的微笑出现在我脸上,刚准备吃掉它,小云就气喘吁吁地跑了过来,她的棉花糖是草莓味的,粉粉的颜色让我眼中的笑意更深了。“唰!”我伸出“魔爪”,草莓棉花糖的四分之一就进了我的肚子,我象征性地打了个嗝,满足地点点头。“嗖!”一道影子闪过,我用泪汪汪的眼睛注视着只剩下三分之二的棉花糖,头上冒出了一阵青烟。“肖!育!云!”我“气急败坏”地大叫着。欢笑声中,两个渐渐模糊的声影在互相追逐,动听的笑声在公园上空久久回荡着,不愿离去。\n在过生日时,收到了许多同学们送来的礼物,这时的我,也会露出一个单纯至极的微笑。\n这样的快乐对于“名人”来说或许是微不足道的,可对我来说,这些回忆都是无价之宝。因为一件小小的事情而快乐而微笑似乎已经成了我的“家常便饭”。希望以后,我还能像现在一样单纯地笑,幸福地笑,在繁忙中寻找宁静的天空,在宁静的天空中寻找属于自己的快乐和喻示着幸福的微笑。\n\n### 我有一个幸福的家\n\n六年级作文561字\n作者:叶晓雅\n•\n• yǒu\n• xìng\n• de\n• jiā\n•\n• de\n• jiā\n• chéng\n• yuán\n• yǒu\n•  我有一个幸福的家,我的家里成员有四\n•\n• jiù\n• shì\n• zhōng\n• de\n•\n• shì\n• zuì\n• xiǎo\n• de\n• 个,我就是其中的一个,也是最小的一个\n• chéng\n• yuán\n•\n• yòu\n• shì\n• zuì\n• xìng\n• de\n•\n• jiě\n• jiě\n• de\n• dōng\n• ràng\n• 成员。又是最幸福的一个,姐姐的东西让\n• 阅读全文\n\n### 单纯的快乐 幸福的微笑\n\n六年级作文1207字\n作者:康刘安怡\n• huān\n• zhè\n• huà\n•\n• 我喜欢这句话。\n•\n•\n• shì\n• men\n• de\n• xiǎo\n• xué\n• bān\n• zhǔ\n• rèn\n• zài\n• qián\n• sòng\n•   它是我们的小学班主任在毕业前夕送\n• gěi\n• jiā\n• de\n• huà\n•\n• shēn\n•\n• zhī\n• dào\n• hái\n• yǒu\n• 给大家的话,寓意深刻。我不知道还有几\n• 阅读全文\n\n### 幸福的滋味\n\n六年级作文479字\n作者:陈艺文\n•\n•\n• hóng\n• yàn\n• wài\n• liù\n• nián\n•\n• chén\n• wén\n•   鸿雁外语六年级 陈艺文\n•\n•\n• měi\n• rén\n• de\n• shēng\n• huó\n• huán\n• 
jìng\n• tóng\n•\n• me\n• měi\n• rén\n•   每个人的生活环境不同,那么每个人\n• duì\n• xìng\n• de\n• wèi\n• de\n• yàn\n• shì\n• xiàng\n• tóng\n• de\n•\n•\n• 对幸福的滋味的体验也是各不相同的。\n• 阅读全文\n\n### 幸福的滋味\n\n六年级作文569字\n作者:王建莹\n•\n•\n• hóng\n• yàn\n• wài\n• liù\n• nián\n•\n• wáng\n• ?\n• yíng\n•   鸿雁外语六年级 王建莹\n•\n•\n• xìng\n• de\n• wèi\n• shì\n• shí\n• me\n•\n• měi\n• rén\n• de\n• jiě\n•   幸福的滋味是什么?每个人的理解不\n• yàng\n•\n• shān\n• de\n• hái\n• wàng\n• fēi\n• chū\n• shān\n•\n• xìng\n• de\n• 一样。山区的孩子希望飞出大山,幸福的\n• 阅读全文\n\n### 最幸福的一日\n\n六年级作文:最幸福的一日\n作文字数:419\n作者:颜の之ぎ…\n•\n•\n• yǒu\n• xiē\n• rén\n• zǒng\n• shuō\n• de\n• shēn\n• biān\n• méi\n• yǒu\n• xìng\n•   有些人总说我的身边没有幸福可我不\n• jiào\n• nián\n• de\n• jiào\n• de\n• xìng\n• shí\n• dōu\n• zài\n• men\n• de\n• shēn\n• 觉年的我觉的幸福无时不刻都在我们的身\n• biān\n•\n•\n• kǎo\n• shì\n• kǎo\n• hǎo\n• le\n•\n• dài\n• chū\n• wán\n•\n• 边。如,考试考好了、妈妈带我出去玩…\n• 阅读全文\n\n### 幸福的意义\n\n六年级作文:幸福的意义\n作文字数:432\n作者:未知\n•\n•\n• xìng\n• shì\n• shí\n• me\n•\n• xìng\n• shì\n• nóng\n• nóng\n• de\n• qīn\n• qíng\n•\n• yǒu\n•   幸福是什么?幸福是浓浓的亲情、友\n• qíng\n•\n• xìng\n• shì\n• chū\n• zhī\n• hòu\n• de\n• huí\n• bào\n•\n• xìng\n• shì\n• shēng\n• huó\n• 情。幸福是付出之后的回报,幸福是生活\n• zhōng\n• diǎn\n• diǎn\n• de\n•\n• xìng\n• jiù\n• zài\n• men\n• shēn\n• biān\n•\n• 中点点的乐趣,幸福就在我们身边。\n• 阅读全文" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.8945486,"math_prob":0.60567766,"size":5807,"snap":"2020-24-2020-29","text_gpt3_token_len":6459,"char_repetition_ratio":0.1033948,"word_repetition_ratio":0.24016282,"special_character_ratio":0.2922335,"punctuation_ratio":0.0012919897,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99226665,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-14T16:18:59Z\",\"WARC-Record-ID\":\"<urn:uuid:f42f0a28-3935-451d-aff0-342a4b0d694f>\",\"Content-Length\":\"30495\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aa07b99d-e466-498c-a01e-99a394b42a75>\",\"WARC-Concurrent-To\":\"<urn:uuid:406b32df-8cc8-41c2-b38a-b3b56c7ec770>\",\"WARC-IP-Address\":\"23.94.63.173\",\"WARC-Target-URI\":\"http://zuowen6.info/node/3215\",\"WARC-Payload-Digest\":\"sha1:OCMCEQGZOEDY7IL7MJPHZSOCJJWJTGDT\",\"WARC-Block-Digest\":\"sha1:2CQAET4DKZ4RATVPBBHOI4YF4LZEYETH\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655897168.4_warc_CC-MAIN-20200714145953-20200714175953-00044.warc.gz\"}"}
https://www.tutoringhour.com/worksheets/volume/hemispheres/
[ "# Volume of a Hemisphere Worksheets\n\nMaster finding the volume of hemispheres with this bundle of free printable volume of hemisphere worksheets. Help children further their practice in determining the volume of hemispheres and get acquainted with the fact that a hemisphere is half a sphere, and hence its volume is also half the volume of a sphere. The pdf exercises here give kids a great way to learn and perfect their skills in finding the volume of hemispheres.\n\nThis bunch of printable worksheets is appropriate for 8th grade and high school students.\n\nCCSS: 8.G.9, G-GMD.3\n\nIntegers", null, "Decimals", null, "" ]
[ null, "https://www.tutoringhour.com/images/worksheet-th-all-before.svg", null, "https://www.tutoringhour.com/images/worksheet-th-all-before.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9083522,"math_prob":0.5117927,"size":571,"snap":"2023-40-2023-50","text_gpt3_token_len":118,"char_repetition_ratio":0.20458554,"word_repetition_ratio":0.0,"special_character_ratio":0.18739054,"punctuation_ratio":0.08411215,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95101225,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T09:37:11Z\",\"WARC-Record-ID\":\"<urn:uuid:4e8f2105-37e6-4b4e-90aa-1ae6e2bd6387>\",\"Content-Length\":\"43920\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ec3e1524-a045-475c-86b6-d75b93dd824f>\",\"WARC-Concurrent-To\":\"<urn:uuid:52c23f1a-d5c6-40ea-893a-8b47cc058cde>\",\"WARC-IP-Address\":\"67.227.190.101\",\"WARC-Target-URI\":\"https://www.tutoringhour.com/worksheets/volume/hemispheres/\",\"WARC-Payload-Digest\":\"sha1:ZS4IAMXVM5G7DLAMZKXCUWRBTR7DTX6N\",\"WARC-Block-Digest\":\"sha1:Q7EO7EZUN2EUOYFA3XQ5JJ6AXKC43R3X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510284.49_warc_CC-MAIN-20230927071345-20230927101345-00250.warc.gz\"}"}
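The worksheet page above rests on the fact that a hemisphere is half a sphere, so its volume is V = (2/3)πr³. A minimal Python sketch of that relationship (the function names are my own, not from the site):

```python
import math

def sphere_volume(r: float) -> float:
    # V_sphere = (4/3) * pi * r^3
    return (4.0 / 3.0) * math.pi * r ** 3

def hemisphere_volume(r: float) -> float:
    # A hemisphere is half a sphere, so V = (2/3) * pi * r^3
    return sphere_volume(r) / 2.0

# Example: radius 3 units -> (2/3) * pi * 27 = 18 * pi
print(round(hemisphere_volume(3), 2))
```

Working one exercise this way (plug the radius into (2/3)πr³) is exactly the skill the integer and decimal worksheets drill.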
https://www.nuclear-power.net/nuclear-power/reactor-physics/atomic-nuclear-physics/atomic-theory/bohr-model-of-atom/
[ "# Bohr Model of Atom\n\n## Bohr Model of the Atom", null, "In atomic physics, the Bohr model of the atom (also known as the Rutherford-Bohr model) is a model of the hydrogen atom introduced in 1913 by the Danish physicist Niels Bohr, working with Ernest Rutherford at the University of Manchester.\n\nIn particular, this model resolves the failure of classical physics in the field of atomic physics. Classical electromagnetic theory makes three entirely wrong predictions about atoms:\n\n• atoms should emit light continuously,\n• atoms should be unstable,\n• the light they emit should have a continuous spectrum.\n\nBohr adopted Planck’s quantum hypothesis and proposed a model in which the electrons of an atom were assumed to orbit the nucleus, but could only do so in a finite set of orbits. Planck’s quantum hypothesis (Planck’s law) is named after the German theoretical physicist Max Planck, who proposed it in 1900. Planck’s law is a pioneering result of modern physics and quantum theory. Planck’s hypothesis that energy is radiated and absorbed in discrete “quanta” (or energy packets) precisely matched the observed patterns of blackbody radiation and resolved the ultraviolet catastrophe.", null, "Based on this hypothesis, Bohr postulated that an atom emits or absorbs energy only in discrete quanta corresponding to the absorption or radiation of a photon. During the emission of a photon, the internal energy of the atom changes by an amount equal to the energy of the photon. Therefore, each atom can exist with only certain specific values of internal energy. Each of these stationary states is characterised by a specific amount of energy called its energy level. 
An atom can have an amount of internal energy equal to any one of these possible energy levels, but it cannot have an energy intermediate between two levels.\n\nThe success of the Bohr model lies in explaining the spectral lines of atomic hydrogen and the Rydberg formula for the spectral emission lines. Bohr’s theory was the first to successfully account for the discrete energy levels of this radiation as measured in the laboratory. The emission line spectrum of hydrogen tells us that atoms of that element emit photons with only certain specific frequencies ƒ and hence certain specific energies equal to E = hƒ, where h is Planck’s constant (6.63 × 10^-34 J·s) and ƒ is the frequency of the photon.\n\nAlthough Bohr’s atomic model was designed specifically to explain the hydrogen atom, his theories apply generally to the structure of all atoms. Subsequently, Bohr extended the model of hydrogen to give an approximate model for heavier atoms. Heavier atoms, like carbon or oxygen, have more protons in the nucleus, and more electrons to cancel the charge. Bohr’s idea was that each discrete orbit could only hold a certain number of electrons. In 1922, Niels Bohr updated his model of the atom by assuming that certain numbers of electrons corresponded to stable “closed shells”. After such an orbit is full, the next energy level would have to be used. This defines an electron shell, which is the set of allowed states and gives the atom an electron shell structure.\n\n## Bohr’s Postulates\n\nAll these features of the Bohr model of the atom can be summarized in Bohr’s postulates:\n\n1. Electrons in atoms orbit the nucleus.\n2. An atom can exist only in certain specific energy states, in which an electron can reside without the emission of radiant energy.\n3. Transitions between stationary states can occur only by jumping from one allowed orbit to another. This process produces or absorbs only discrete quanta of electromagnetic radiation.\n4. 
The angular momentum of a stationary electron is also quantised." ]
[ null, "https://www.nuclear-power.net/wp-content/uploads/Faulure-of-Classical-Physics-Atomic-Nucleus-283x300.png", null, "https://www.nuclear-power.net/wp-content/uploads/Bohr-model-atom-261x300.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9255753,"math_prob":0.8689342,"size":3658,"snap":"2021-04-2021-17","text_gpt3_token_len":740,"char_repetition_ratio":0.12944718,"word_repetition_ratio":0.0,"special_character_ratio":0.19300164,"punctuation_ratio":0.075987846,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96392524,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-19T09:10:15Z\",\"WARC-Record-ID\":\"<urn:uuid:f70e3f07-71b5-49c3-8b7f-122049dfac91>\",\"Content-Length\":\"56371\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c7f34c1e-db42-49da-84bf-98fe4de65da9>\",\"WARC-Concurrent-To\":\"<urn:uuid:e0ed806a-cd29-4dd1-88d2-44f1167770fa>\",\"WARC-IP-Address\":\"104.21.66.209\",\"WARC-Target-URI\":\"https://www.nuclear-power.net/nuclear-power/reactor-physics/atomic-nuclear-physics/atomic-theory/bohr-model-of-atom/\",\"WARC-Payload-Digest\":\"sha1:EO7TLDLZALURX37NMJD7EOYLV2XA6UCU\",\"WARC-Block-Digest\":\"sha1:WDMVVY7O6PL6UTG52OMOAJUGGWNWMJSL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703518201.29_warc_CC-MAIN-20210119072933-20210119102933-00566.warc.gz\"}"}
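The article relates photon energy to frequency via E = hƒ and credits Bohr with explaining hydrogen's discrete levels. A small sketch of that arithmetic, using the textbook Bohr-level formula Eₙ = −13.6 eV / n² (the level formula and the eV-to-joule factor are standard values, not taken from the page):

```python
PLANCK_H = 6.63e-34   # Planck's constant in J*s, as quoted in the article
EV_TO_J = 1.602e-19   # joules per electronvolt (standard value)

def hydrogen_level_ev(n: int) -> float:
    """Bohr-model energy of hydrogen level n, in eV."""
    return -13.6 / n ** 2

def emitted_photon_frequency(n_hi: int, n_lo: int) -> float:
    """Frequency f = (E_hi - E_lo) / h of the photon emitted in a jump n_hi -> n_lo."""
    delta_e_joule = (hydrogen_level_ev(n_hi) - hydrogen_level_ev(n_lo)) * EV_TO_J
    return delta_e_joule / PLANCK_H

# Balmer-alpha jump (3 -> 2): about 4.6e14 Hz, i.e. visible red light
print(f"{emitted_photon_frequency(3, 2):.2e} Hz")
```

Because only these level differences are allowed, the emitted frequencies form the discrete line spectrum the article describes rather than a continuum.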
https://mizar.uwb.edu.pl/version/current/html/ndiff_4.html
[ ":: The Differentiable Functions from $\\mathbb{R}$ into ${\\mathbb{R}}^n$\n:: by Keiko Narita, Artur Korniłowicz and Yasunari Shidama\n::\n:: Copyright (c) 2011-2021 Association of Mizar Users\n\nset ZR = [#] REAL;\n\nLm1: the carrier of () = REAL 1\nby REAL_NS1:def 4;\n\ntheorem Th1: :: NDIFF_4:1\nfor m being Element of NAT\nfor f1, f2 being PartFunc of REAL,(REAL m) holds f1 - f2 = f1 + (- f2)\nproof end;\n\ndefinition\nlet n be non zero Element of NAT ;\nlet f be PartFunc of REAL,(REAL n);\nlet x be Real;\npred f is_differentiable_in x means :: NDIFF_4:def 1\nex g being PartFunc of REAL,() st\n( f = g & g is_differentiable_in x );\nend;\n\n:: deftheorem defines is_differentiable_in NDIFF_4:def 1 :\nfor n being non zero Element of NAT\nfor f being PartFunc of REAL,(REAL n)\nfor x being Real holds\n( f is_differentiable_in x iff ex g being PartFunc of REAL,() st\n( f = g & g is_differentiable_in x ) );\n\ntheorem :: NDIFF_4:2\nfor n being non zero Element of NAT\nfor f being PartFunc of REAL,(REAL n)\nfor h being PartFunc of REAL,()\nfor x being Real st h = f holds\n( f is_differentiable_in x iff h is_differentiable_in x ) ;\n\ndefinition\nlet n be non zero Element of NAT ;\nlet f be PartFunc of REAL,(REAL n);\nlet x be Real;\nfunc diff (f,x) -> Element of REAL n means :Def2: :: NDIFF_4:def 2\nex g being PartFunc of REAL,() st\n( f = g & it = diff (g,x) );\nexistence\nex b1 being Element of REAL n ex g being PartFunc of REAL,() st\n( f = g & b1 = diff (g,x) )\nproof end;\nuniqueness\nfor b1, b2 being Element of REAL n st ex g being PartFunc of REAL,() st\n( f = g & b1 = diff (g,x) ) & ex g being PartFunc of REAL,() st\n( f = g & b2 = diff (g,x) ) holds\nb1 = b2\n;\nend;\n\n:: deftheorem Def2 defines diff NDIFF_4:def 2 :\nfor n being non zero Element of NAT\nfor f being PartFunc of REAL,(REAL n)\nfor x being Real\nfor b4 being Element of REAL n holds\n( b4 = diff (f,x) iff ex g being PartFunc of REAL,() st\n( f = g & b4 = diff (g,x) ) );\n\ntheorem Th3: :: 
NDIFF_4:3\nfor n being non zero Element of NAT\nfor f being PartFunc of REAL,(REAL n)\nfor h being PartFunc of REAL,()\nfor x being Real st h = f holds\ndiff (f,x) = diff (h,x)\nproof end;\n\ndefinition\nlet n be non zero Element of NAT ;\nlet f be PartFunc of REAL,(REAL n);\nlet X be set ;\npred f is_differentiable_on X means :: NDIFF_4:def 3\n( X c= dom f & ( for x being Real st x in X holds\nf | X is_differentiable_in x ) );\nend;\n\n:: deftheorem defines is_differentiable_on NDIFF_4:def 3 :\nfor n being non zero Element of NAT\nfor f being PartFunc of REAL,(REAL n)\nfor X being set holds\n( f is_differentiable_on X iff ( X c= dom f & ( for x being Real st x in X holds\nf | X is_differentiable_in x ) ) );\n\ntheorem Th4: :: NDIFF_4:4\nfor X being set\nfor n being non zero Element of NAT\nfor f being PartFunc of REAL,(REAL n) st f is_differentiable_on X holds\nX is Subset of REAL by XBOOLE_1:1;\n\ntheorem Th5: :: NDIFF_4:5\nfor n being non zero Element of NAT\nfor Z being open Subset of REAL\nfor f being PartFunc of REAL,(REAL n) holds\n( f is_differentiable_on Z iff ( Z c= dom f & ( for x being Real st x in Z holds\nf is_differentiable_in x ) ) )\nproof end;\n\ntheorem Th6: :: NDIFF_4:6\nfor n being non zero Element of NAT\nfor Y being Subset of REAL\nfor f being PartFunc of REAL,(REAL n) st f is_differentiable_on Y holds\nY is open\nproof end;\n\ndefinition\nlet n be non zero Element of NAT ;\nlet f be PartFunc of REAL,(REAL n);\nlet X be set ;\nassume A1: f is_differentiable_on X ;\nfunc f | X -> PartFunc of REAL,(REAL n) means :Def4: :: NDIFF_4:def 4\n( dom it = X & ( for x being Real st x in X holds\nit . x = diff (f,x) ) );\nexistence\nex b1 being PartFunc of REAL,(REAL n) st\n( dom b1 = X & ( for x being Real st x in X holds\nb1 . x = diff (f,x) ) )\nproof end;\nuniqueness\nfor b1, b2 being PartFunc of REAL,(REAL n) st dom b1 = X & ( for x being Real st x in X holds\nb1 . x = diff (f,x) ) & dom b2 = X & ( for x being Real st x in X holds\nb2 . 
x = diff (f,x) ) holds\nb1 = b2\nproof end;\nend;\n\n:: deftheorem Def4 defines | NDIFF_4:def 4 :\nfor n being non zero Element of NAT\nfor f being PartFunc of REAL,(REAL n)\nfor X being set st f is_differentiable_on X holds\nfor b4 being PartFunc of REAL,(REAL n) holds\n( b4 = f | X iff ( dom b4 = X & ( for x being Real st x in X holds\nb4 . x = diff (f,x) ) ) );\n\ntheorem :: NDIFF_4:7\nfor n being non zero Element of NAT\nfor Z being open Subset of REAL\nfor f being PartFunc of REAL,(REAL n) st Z c= dom f & ex r being Element of REAL n st rng f = {r} holds\n( f is_differentiable_on Z & ( for x being Real st x in Z holds\n(f | Z) /. x = 0* n ) )\nproof end;\n\ntheorem :: NDIFF_4:8\nfor n being non zero Element of NAT\nfor x0 being Real\nfor f being PartFunc of REAL,(REAL n)\nfor g being PartFunc of REAL,()\nfor N being Neighbourhood of x0 st f = g & f is_differentiable_in x0 & N c= dom f holds\nfor h being non-zero 0 -convergent Real_Sequence\nfor c being constant Real_Sequence st rng c = {x0} & rng (h + c) c= N holds\n( (h \") (#) ((g /* (h + c)) - (g /* c)) is convergent & diff (f,x0) = lim ((h \") (#) ((g /* (h + c)) - (g /* c))) )\nproof end;\n\ntheorem Th9: :: NDIFF_4:9\nfor x0, r being Real\nfor n being non zero Element of NAT\nfor f being PartFunc of REAL,(REAL n) st f is_differentiable_in x0 holds\n( r (#) f is_differentiable_in x0 & diff ((r (#) f),x0) = r * (diff (f,x0)) )\nproof end;\n\ntheorem Th10: :: NDIFF_4:10\nfor x0 being Real\nfor n being non zero Element of NAT\nfor f being PartFunc of REAL,(REAL n) st f is_differentiable_in x0 holds\n( - f is_differentiable_in x0 & diff ((- f),x0) = - (diff (f,x0)) )\nproof end;\n\ntheorem Th11: :: NDIFF_4:11\nfor x0 being Real\nfor n being non zero Element of NAT\nfor f1, f2 being PartFunc of REAL,(REAL n) st f1 is_differentiable_in x0 & f2 is_differentiable_in x0 holds\n( f1 + f2 is_differentiable_in x0 & diff ((f1 + f2),x0) = (diff (f1,x0)) + (diff (f2,x0)) )\nproof end;\n\ntheorem :: NDIFF_4:12\nfor x0 
being Real\nfor n being non zero Element of NAT\nfor f1, f2 being PartFunc of REAL,(REAL n) st f1 is_differentiable_in x0 & f2 is_differentiable_in x0 holds\n( f1 - f2 is_differentiable_in x0 & diff ((f1 - f2),x0) = (diff (f1,x0)) - (diff (f2,x0)) )\nproof end;\n\ntheorem Th13: :: NDIFF_4:13\nfor r being Real\nfor n being non zero Element of NAT\nfor Z being open Subset of REAL\nfor f being PartFunc of REAL,(REAL n) st Z c= dom f & f is_differentiable_on Z holds\n( r (#) f is_differentiable_on Z & ( for x being Real st x in Z holds\n((r (#) f) | Z) . x = r * (diff (f,x)) ) )\nproof end;\n\ntheorem Th14: :: NDIFF_4:14\nfor n being non zero Element of NAT\nfor Z being open Subset of REAL\nfor f being PartFunc of REAL,(REAL n) st Z c= dom f & f is_differentiable_on Z holds\n( - f is_differentiable_on Z & ( for x being Real st x in Z holds\n((- f) | Z) . x = - (diff (f,x)) ) )\nproof end;\n\ntheorem Th15: :: NDIFF_4:15\nfor n being non zero Element of NAT\nfor Z being open Subset of REAL\nfor f1, f2 being PartFunc of REAL,(REAL n) st Z c= dom (f1 + f2) & f1 is_differentiable_on Z & f2 is_differentiable_on Z holds\n( f1 + f2 is_differentiable_on Z & ( for x being Real st x in Z holds\n((f1 + f2) | Z) . x = (diff (f1,x)) + (diff (f2,x)) ) )\nproof end;\n\ntheorem :: NDIFF_4:16\nfor n being non zero Element of NAT\nfor Z being open Subset of REAL\nfor f1, f2 being PartFunc of REAL,(REAL n) st Z c= dom (f1 - f2) & f1 is_differentiable_on Z & f2 is_differentiable_on Z holds\n( f1 - f2 is_differentiable_on Z & ( for x being Real st x in Z holds\n((f1 - f2) | Z) . x = (diff (f1,x)) - (diff (f2,x)) ) )\nproof end;\n\ntheorem :: NDIFF_4:17\nfor n being non zero Element of NAT\nfor Z being open Subset of REAL\nfor f being PartFunc of REAL,(REAL n) st Z c= dom f & f | Z is constant holds\n( f is_differentiable_on Z & ( for x being Real st x in Z holds\n(f | Z) . 
x = 0* n ) )\nproof end;\n\ntheorem Th18: :: NDIFF_4:18\nfor n being non zero Element of NAT\nfor Z being open Subset of REAL\nfor f being PartFunc of REAL,(REAL n)\nfor r, p being Element of REAL n st Z c= dom f & ( for x being Real st x in Z holds\nf /. x = (x * r) + p ) holds\n( f is_differentiable_on Z & ( for x being Real st x in Z holds\n(f | Z) . x = r ) )\nproof end;\n\ntheorem :: NDIFF_4:19\nfor n being non zero Element of NAT\nfor f being PartFunc of REAL,(REAL n)\nfor x0 being Real st f is_differentiable_in x0 holds\nf is_continuous_in x0 by ;\n\ntheorem :: NDIFF_4:20\nfor X being set\nfor n being non zero Element of NAT\nfor f being PartFunc of REAL,(REAL n) st f is_differentiable_on X holds\nf | X is continuous\nproof end;\n\ntheorem Th21: :: NDIFF_4:21\nfor X being set\nfor n being non zero Element of NAT\nfor Z being open Subset of REAL\nfor f being PartFunc of REAL,(REAL n) st f is_differentiable_on X & Z c= X holds\nf is_differentiable_on Z\nproof end;\n\ndefinition\nlet n be non zero Element of NAT ;\nlet f be PartFunc of REAL,(REAL n);\nattr f is differentiable means :Def5: :: NDIFF_4:def 5\nf is_differentiable_on dom f;\nend;\n\n:: deftheorem Def5 defines differentiable NDIFF_4:def 5 :\nfor n being non zero Element of NAT\nfor f being PartFunc of REAL,(REAL n) holds\n( f is differentiable iff f is_differentiable_on dom f );\n\nregistration\nlet n be non zero Element of NAT ;\ncluster REAL --> (0* n) -> differentiable for Function of REAL,(REAL n);\ncoherence\nfor b1 being Function of REAL,(REAL n) st b1 = REAL --> (0* n) holds\nb1 is differentiable\nproof end;\nend;\n\nregistration\nlet n be non zero Element of NAT ;\nexistence\nex b1 being Function of REAL,(REAL n) st b1 is differentiable\nproof end;\nend;\n\ntheorem :: NDIFF_4:22\nfor n being non zero Element of NAT\nfor Z being open Subset of REAL\nfor f being differentiable PartFunc of REAL,(REAL n) st Z c= dom f holds\nf is_differentiable_on Z by ;\n\ntheorem Th23: :: NDIFF_4:23\nfor n 
being non zero Element of NAT\nfor R being PartFunc of REAL,() st R is total holds\n( R is RestFunc-like iff for r being Real st r > 0 holds\nex d being Real st\n( d > 0 & ( for z being Real st z <> 0 & |.z.| < d holds\n() * ||.(R /. z).|| < r ) ) )\nproof end;\n\nreconsider jj = 1 as Element of REAL by XREAL_0:def 1;\n\ntheorem Th24: :: NDIFF_4:24\nfor i being Element of NAT\nfor n being non zero Element of NAT\nfor g being PartFunc of REAL,()\nfor x0 being Real st 1 <= i & i <= n & g is_differentiable_in x0 holds\n( (Proj (i,n)) * g is_differentiable_in x0 & (Proj (i,n)) . (diff (g,x0)) = diff (((Proj (i,n)) * g),x0) )\nproof end;\n\ntheorem Th25: :: NDIFF_4:25\nfor n being non zero Element of NAT\nfor g being PartFunc of REAL,()\nfor x0 being Real holds\n( g is_differentiable_in x0 iff for i being Element of NAT st 1 <= i & i <= n holds\n(Proj (i,n)) * g is_differentiable_in x0 )\nproof end;\n\ntheorem :: NDIFF_4:26\nfor i being Element of NAT\nfor n being non zero Element of NAT\nfor f being PartFunc of REAL,(REAL n)\nfor x0 being Real st 1 <= i & i <= n & f is_differentiable_in x0 holds\n( (Proj (i,n)) * f is_differentiable_in x0 & (Proj (i,n)) . 
(diff (f,x0)) = diff (((Proj (i,n)) * f),x0) )\nproof end;\n\ntheorem :: NDIFF_4:27\nfor n being non zero Element of NAT\nfor f being PartFunc of REAL,(REAL n)\nfor x0 being Real holds\n( f is_differentiable_in x0 iff for i being Element of NAT st 1 <= i & i <= n holds\n(Proj (i,n)) * f is_differentiable_in x0 )\nproof end;\n\ntheorem Th28: :: NDIFF_4:28\nfor X being set\nfor i being Element of NAT\nfor n being non zero Element of NAT\nfor g being PartFunc of REAL,() st 1 <= i & i <= n & g is_differentiable_on X holds\n( (Proj (i,n)) * g is_differentiable_on X & (Proj (i,n)) * (g | X) = ((Proj (i,n)) * g) | X )\nproof end;\n\ntheorem Th29: :: NDIFF_4:29\nfor X being set\nfor i being Element of NAT\nfor n being non zero Element of NAT\nfor f being PartFunc of REAL,(REAL n) st 1 <= i & i <= n & f is_differentiable_on X holds\n( (Proj (i,n)) * f is_differentiable_on X & (Proj (i,n)) * (f | X) = ((Proj (i,n)) * f) | X )\nproof end;\n\ntheorem Th30: :: NDIFF_4:30\nfor X being set\nfor n being non zero Element of NAT\nfor g being PartFunc of REAL,() holds\n( g is_differentiable_on X iff for i being Element of NAT st 1 <= i & i <= n holds\n(Proj (i,n)) * g is_differentiable_on X )\nproof end;\n\ntheorem :: NDIFF_4:31\nfor X being set\nfor n being non zero Element of NAT\nfor f being PartFunc of REAL,(REAL n) holds\n( f is_differentiable_on X iff for i being Element of NAT st 1 <= i & i <= n holds\n(Proj (i,n)) * f is_differentiable_on X )\nproof end;\n\ntheorem Th32: :: NDIFF_4:32\nfor J being Function of (),REAL\nfor x0 being Point of () st J = proj (1,1) holds\nJ is_continuous_in x0\nproof end;\n\ntheorem Th33: :: NDIFF_4:33\nfor x0 being Real\nfor I being Function of REAL,() st I = (proj (1,1)) \" holds\nI is_continuous_in x0\nproof end;\n\ntheorem Th34: :: NDIFF_4:34\nfor S, T being RealNormSpace\nfor f1 being PartFunc of S,REAL\nfor f2 being PartFunc of REAL,T\nfor x0 being Point of S st x0 in dom (f2 * f1) & f1 is_continuous_in x0 & f2 is_continuous_in f1 /. 
x0 holds\nf2 * f1 is_continuous_in x0\nproof end;\n\nLm2: ( dom (proj (1,1)) = REAL 1 & rng (proj (1,1)) = REAL & ( for x being Element of REAL holds\n( (proj (1,1)) . <*x*> = x & ((proj (1,1)) \") . x = <*x*> ) ) )\n\nproof end;\n\ntheorem :: NDIFF_4:35\nfor n being non zero Element of NAT\nfor J being Function of (),REAL\nfor x0 being Point of ()\nfor y0 being Element of REAL\nfor g being PartFunc of REAL,()\nfor f being PartFunc of (),() st J = proj (1,1) & x0 in dom f & y0 in dom g & x0 = <*y0*> & f = g * J holds\n( f is_continuous_in x0 iff g is_continuous_in y0 )\nproof end;\n\ntheorem :: NDIFF_4:36\nfor n being non zero Element of NAT\nfor I being Function of REAL,()\nfor x0 being Point of ()\nfor y0 being Element of REAL\nfor g being PartFunc of REAL,()\nfor f being PartFunc of (),() st I = (proj (1,1)) \" & x0 in dom f & y0 in dom g & x0 = <*y0*> & f * I = g holds\n( f is_continuous_in x0 iff g is_continuous_in y0 )\nproof end;\n\ntheorem :: NDIFF_4:37\nfor x0 being Real\nfor I being Function of REAL,() st I = (proj (1,1)) \" holds\n( I is_differentiable_in x0 & diff (I,x0) = <*1*> )\nproof end;\n\ndefinition\nlet n be non zero Element of NAT ;\nlet f be PartFunc of (),REAL;\nlet x be Point of ();\npred f is_differentiable_in x means :: NDIFF_4:def 6\nex g being PartFunc of (REAL n),REAL ex y being Element of REAL n st\n( f = g & x = y & g is_differentiable_in y );\nend;\n\n:: deftheorem defines is_differentiable_in NDIFF_4:def 6 :\nfor n being non zero Element of NAT\nfor f being PartFunc of (),REAL\nfor x being Point of () holds\n( f is_differentiable_in x iff ex g being PartFunc of (REAL n),REAL ex y being Element of REAL n st\n( f = g & x = y & g is_differentiable_in y ) );\n\ndefinition\nlet n be non zero Element of NAT ;\nlet f be PartFunc of (),REAL;\nlet x be Point of ();\nfunc diff (f,x) -> Function of (),REAL means :Def7: :: NDIFF_4:def 7\nex g being PartFunc of (REAL n),REAL ex y being Element of REAL n st\n( f = g & x = y & it = diff (g,y) 
);
existence
ex b1 being Function of (),REAL ex g being PartFunc of (REAL n),REAL ex y being Element of REAL n st
( f = g & x = y & b1 = diff (g,y) )
proof end;
uniqueness
for b1, b2 being Function of (),REAL st ex g being PartFunc of (REAL n),REAL ex y being Element of REAL n st
( f = g & x = y & b1 = diff (g,y) ) & ex g being PartFunc of (REAL n),REAL ex y being Element of REAL n st
( f = g & x = y & b2 = diff (g,y) ) holds
b1 = b2
;
end;

:: deftheorem Def7 defines diff NDIFF_4:def 7 :
for n being non zero Element of NAT
for f being PartFunc of (),REAL
for x being Point of ()
for b4 being Function of (),REAL holds
( b4 = diff (f,x) iff ex g being PartFunc of (REAL n),REAL ex y being Element of REAL n st
( f = g & x = y & b4 = diff (g,y) ) );

theorem Th38: :: NDIFF_4:38
for J being Function of (REAL 1),REAL
for x0 being Element of REAL 1 st J = proj (1,1) holds
( J is_differentiable_in x0 & diff (J,x0) = J )
proof end;

theorem :: NDIFF_4:39
for J being Function of (),REAL
for x0 being Point of () st J = proj (1,1) holds
( J is_differentiable_in x0 & diff (J,x0) = J )
proof end;

theorem Th40: :: NDIFF_4:40
for n being non zero Element of NAT
for I being Function of REAL,() st I = (proj (1,1)) " holds
( ( for R being RestFunc of (),() holds R * I is RestFunc of () ) & ( for L being LinearOperator of (),() holds L * I is LinearFunc of () ) )
proof end;

theorem Th41: :: NDIFF_4:41
for n being non zero Element of NAT
for J being Function of (),REAL st J = proj (1,1) holds
( ( for R being RestFunc of () holds R * J is RestFunc of (),() ) & ( for L being LinearFunc of () holds L * J is Lipschitzian LinearOperator of (),() ) )
proof end;

theorem Th42: :: NDIFF_4:42
for n being non zero Element of NAT
for I being Function of REAL,()
for x0 being Point of ()
for y0 being Element of REAL
for g being PartFunc of REAL,()
for f being PartFunc of (),() st I = (proj (1,1)) " & x0 in dom f & y0 in dom g & x0 = <*y0*> & f * I = g & f is_differentiable_in x0 holds
( g is_differentiable_in y0 & diff (g,y0) = (diff (f,x0)) . <*1*> & ( for r being Element of REAL holds (diff (f,x0)) . <*r*> = r * (diff (g,y0)) ) )
proof end;

theorem Th43: :: NDIFF_4:43
for n being non zero Element of NAT
for I being Function of REAL,()
for x0 being Point of ()
for y0 being Real
for g being PartFunc of REAL,()
for f being PartFunc of (),() st I = (proj (1,1)) " & x0 in dom f & y0 in dom g & x0 = <*y0*> & f * I = g holds
( f is_differentiable_in x0 iff g is_differentiable_in y0 )
proof end;

theorem Th44: :: NDIFF_4:44
for n being non zero Element of NAT
for J being Function of (),REAL
for x0 being Point of ()
for y0 being Element of REAL
for g being PartFunc of REAL,()
for f being PartFunc of (),() st J = proj (1,1) & x0 in dom f & y0 in dom g & x0 = <*y0*> & f = g * J holds
( f is_differentiable_in x0 iff g is_differentiable_in y0 )
proof end;

theorem :: NDIFF_4:45
for n being non zero Element of NAT
for J being Function of (),REAL
for x0 being Point of ()
for y0 being Element of REAL
for g being PartFunc of REAL,()
for f being PartFunc of (),() st J = proj (1,1) & x0 in dom f & y0 in dom g & x0 = <*y0*> & f = g * J & g is_differentiable_in y0 holds
( f is_differentiable_in x0 & diff (g,y0) = (diff (f,x0)) . <*1*> & ( for r being Element of REAL holds (diff (f,x0)) . <*r*> = r * (diff (g,y0)) ) )
proof end;

theorem Th46: :: NDIFF_4:46
for n being non zero Element of NAT
for R being RestFunc of () st R /. 0 = 0. () holds
for e being Real st e > 0 holds
ex d being Real st
( d > 0 & ( for h being Real st |.h.| < d holds
||.(R /. h).|| <= e * |.h.| ) )
proof end;

theorem Th47: :: NDIFF_4:47
for n, m being non zero Element of NAT
for R being RestFunc of ()
for L being Lipschitzian LinearOperator of (),() holds L * R is RestFunc of ()
proof end;

theorem Th48: :: NDIFF_4:48
for n, m being non zero Element of NAT
for R1 being RestFunc of () st R1 /. 0 = 0. () holds
for R2 being RestFunc of (),() st R2 /. (0. ()) = 0. () holds
for L being LinearFunc of () holds R2 * (L + R1) is RestFunc of ()
proof end;

theorem Th49: :: NDIFF_4:49
for n, m being non zero Element of NAT
for R1 being RestFunc of () st R1 /. 0 = 0. () holds
for R2 being RestFunc of (),() st R2 /. (0. ()) = 0. () holds
for L1 being LinearFunc of ()
for L2 being Lipschitzian LinearOperator of (),() holds (L2 * R1) + (R2 * (L1 + R1)) is RestFunc of ()
proof end;

theorem :: NDIFF_4:50
for n, m being non zero Element of NAT
for x0 being Element of REAL
for g being PartFunc of REAL,() st g is_differentiable_in x0 holds
for f being PartFunc of (),() st f is_differentiable_in g /. x0 holds
( f * g is_differentiable_in x0 & diff ((f * g),x0) = (diff (f,(g /. x0))) . (diff (g,x0)) )
proof end;
https://www.programminglogic.com/quicksort-and-binary-search-algorithms-in-cc/
# Quicksort and Binary Search Algorithms in C/C++

Being able to sort and search efficiently is one of the most important and most studied aspects of computer science. In my opinion, any programmer worth his salt should be able, in a matter of 10 minutes or so, to write the algorithms of binary search as well as those of the most important sorts (e.g., insertion sort, selection sort, bubble sort, merge sort, heap sort, and quicksort).

Once you master those, though, you might wanna use a template to speed up development. Below are the templates I use for binary search and quicksort.

Notice that `start` and `end` in the `bSearch` function below refer to the first and last element, so `end` is the size of the array minus one. I use this approach because I think it makes it easier to figure out the limits you need to work with inside the function.

```c
int bSearch(int n, int array[], int start, int end){
    int middle = (start+end)/2;

    if (start>end)
        return 0;

    if (n==array[middle])
        return 1;

    if (n>array[middle])
        return bSearch(n,array,middle+1,end);
    else
        return bSearch(n,array,start,middle-1);
}
```

And now the quicksort algorithm, where `end` is actually the size of the array.

```c
void swap(int v[], int i, int j){
    int temp = v[i];
    v[i]=v[j];
    v[j]=temp;
    return;
}

int partition(int v[], int start, int end){
    int x,l,j;

    x = v[end-1];
    l = start-1;

    for (j=start;j<end-1;j++){
        if (v[j]<x){
            l++;
            swap(v,j,l);
        }
    }
    l++;
    swap(v,l,end-1);
    return l;
}

void quickSort(int v[],int start, int end){
    int q;

    if (end>start){
        q = partition(v,start,end);
        quickSort(v,start,q);
        quickSort(v,q+1,end);
    }
    return;
}
```

## One thought on “Quicksort and Binary Search Algorithms in C/C++”

1. sookwan

   thanks for the code~
https://answers.everydaycalculation.com/compare-fractions/15-49-and-35-70
Solutions by everydaycalculation.com

## Compare 15/49 and 35/70

15/49 is smaller than 35/70

#### Steps for comparing fractions

1. Find the least common denominator or LCM of the two denominators:
   LCM of 49 and 70 is 490

   Next, find the equivalent fraction of both fractional numbers with denominator 490
2. For the 1st fraction, since 49 × 10 = 490,
   15/49 = 15 × 10/49 × 10 = 150/490
3. Likewise, for the 2nd fraction, since 70 × 7 = 490,
   35/70 = 35 × 7/70 × 7 = 245/490
4. Since the denominators are now the same, the fraction with the bigger numerator is the greater fraction
5. 150/490 < 245/490 or 15/49 < 35/70

MathStep (Works offline)

Download our mobile app and learn to work with fractions in your own time
https://bsci-ch.org/how-can-you-tell-one-element-from-another/
The Atom and Electromagnetic Radiation

Fundamental Subatomic Particles · Electromagnetic Radiation · Light and Other Forms of Electromagnetic Radiation

| Particle | Symbol | Charge | Mass |
|---|---|---|---|
| electron | e- | -1 | 0.0005486 amu |
| proton | p+ | +1 | 1.007276 amu |
| neutron | n0 | 0 | 1.008665 amu |

The number of protons, neutrons, and electrons in an atom can be determined from a set of simple rules.

The number of protons in the nucleus of the atom is equal to the atomic number (Z). The number of electrons in a neutral atom is equal to the number of protons. The mass number of the atom (M) is equal to the sum of the number of protons and neutrons in the nucleus. The number of neutrons is equal to the difference between the mass number of the atom (M) and the atomic number (Z).

Examples: Let's determine the number of protons, neutrons, and electrons in the following isotopes: 12C, 13C, 14C, 14N.

The different isotopes of an element are identified by writing the mass number of the atom in the upper left corner of the symbol for the element. 12C, 13C, and 14C are isotopes of carbon (Z = 6) and therefore contain 6 protons. If the atoms are neutral, they also must contain 6 electrons. The only difference between these isotopes is the number of neutrons in the nucleus.

12C: 6 electrons, 6 protons, and 6 neutrons

13C: 6 electrons, 6 protons, and 7 neutrons

14C: 6 electrons, 6 protons, and 8 neutrons

Practice problem 1: Calculate the number of electrons in the Cl- and Fe3+ ions.

Much of what is known about the structure of the electrons in an atom has been obtained by studying the interaction between matter and different forms of electromagnetic radiation. Electromagnetic radiation has some of the properties of both a particle and a wave.

Particles have a definite mass and they occupy space. Waves have no mass and yet they carry energy as they travel through space. In addition to their ability to carry energy, waves have four other characteristic properties: speed, frequency, wavelength, and amplitude. The frequency (v) is the number of waves (or cycles) per unit of time. The frequency of a wave is reported in units of cycles per second (s-1) or hertz (Hz).

The idealized drawing of a wave in the figure below illustrates the definitions of amplitude and wavelength. The wavelength (l) is the smallest distance between repeating points on the wave. The amplitude of the wave is the distance between the highest (or lowest) point on the wave and the center of gravity of the wave.

If we measure the frequency (v) of a wave in cycles per second and the wavelength (l) in meters, the product of these two numbers has the units of meters per second. The product of the frequency (v) times the wavelength (l) of a wave is therefore the speed (s) at which the wave travels through space.

vl = s

Practice problem 2: What is the speed of a wave that has a wavelength of 1 meter and a frequency of 60 cycles per second?

Practice problem 3: Orchestras in the United States tune their instruments to an "A" that has a frequency of 440 cycles per second, or 440 Hz. If the speed of sound is 1116 feet per second, what is the wavelength of this note?

Light and Other Forms of Electromagnetic Radiation

Light is a wave with both electric and magnetic components. It is therefore a form of electromagnetic radiation.

Visible light contains the narrow band of frequencies and wavelengths in the portion of the electromagnetic spectrum that our eyes can detect. It includes radiation with wavelengths between about 400 nm (violet) and 700 nm (red). Because it is a wave, light is bent when it enters a glass prism. When white light is focused on a prism, the light rays of different wavelengths are bent by differing amounts and the light is transformed into a spectrum of colors. Starting from the side of the spectrum where the light is bent by the smallest angle, the colors are red, orange, yellow, green, blue, and violet.

As we can see from the following diagram, the energy carried by light increases as we go from red to blue across the visible spectrum.

Because the wavelength of electromagnetic radiation can be as long as 40 m or as short as 10-5 nm, the visible spectrum is only a small portion of the total range of electromagnetic radiation.

The electromagnetic spectrum includes radio and TV waves, microwaves, infrared, visible light, ultraviolet, x-rays, g-rays, and cosmic rays, as shown in the figure above. These different forms of radiation all travel at the speed of light (c). They differ, however, in their frequencies and wavelengths. The product of the frequency times the wavelength of electromagnetic radiation is always equal to the speed of light.

vl = c

As a result, electromagnetic radiation that has a long wavelength has a low frequency, and radiation with a high frequency has a short wavelength.

Practice problem 4: Calculate the frequency of red light that has a wavelength of 700.0 nm if the speed of light is 2.998 x 108 m/s.
http://www.celsiusfahrenheit.co/6800
# 🌡6800 C to F

🔆🌡6800 C to F. How many degrees Fahrenheit in a degree Celsius. °C to F Calculator.

## How to convert from Celsius to Fahrenheit

It is easy to convert a temperature value from Celsius to Fahrenheit by using the formula below:

[°F] = [°C] × 9⁄5 + 32

or

Value in Fahrenheit = Value in Celsius × 9⁄5 + 32

To change 6800° Celsius to Fahrenheit, just replace the value [°C] in the formula below and then do the math.

### Step-by-step Solution:

1. Write down the formula: [°F] = [°C] × 9⁄5 + 32
2. Plug the value into the formula: 6800 × 9⁄5 + 32
3. Multiply by 9: 61200⁄5 + 32
4. Divide by 5: 12240 + 32
5. Add 32: 12272

So 6800 degrees Celsius equals 12272 degrees Fahrenheit.

### Values around 6800

| Fahrenheit | Celsius | Fahrenheit | Celsius |
|---|---|---|---|
| 6799.1 | 3759.5 | 6799.2 | 3759.6 |
| 6799.3 | 3759.6 | 6799.4 | 3759.7 |
| 6799.5 | 3759.7 | 6799.6 | 3759.8 |
| 6799.7 | 3759.8 | 6799.8 | 3759.9 |
| 6799.9 | 3759.9 | 6800 | 3760.0 |
| 6800.1 | 3760.1 | 6800.2 | 3760.1 |
| 6800.3 | 3760.2 | 6800.4 | 3760.2 |
| 6800.5 | 3760.3 | 6800.6 | 3760.3 |
| 6800.7 | 3760.4 | 6800.8 | 3760.4 |
| 6800.9 | 3760.5 | 6801 | 3760.6 |
https://socratic.org/questions/how-do-you-solve-8-7-3-5m-2-5-5-4-6m
# How do you solve 8.7 = 3.5m − 2.5(5.4 − 6m)?

Feb 25, 2017

See the entire solution process below:

#### Explanation:

Step 1) Expand the terms in parentheses on the right side of the equation by multiplying each term within the parentheses by the term outside the parentheses:

$8.7 = 3.5m - (2.5 \times 5.4) + (2.5 \times 6m)$

$8.7 = 3.5m - 13.5 + 15m$

Step 2) Group and combine like terms on the right side of the equation:

$8.7 = 3.5m + 15m - 13.5$

$8.7 = (3.5 + 15)m - 13.5$

$8.7 = 18.5m - 13.5$

Step 3) Add $13.5$ to each side of the equation to isolate the $m$ term while keeping the equation balanced:

$8.7 + 13.5 = 18.5m - 13.5 + 13.5$

$22.2 = 18.5m - 0$

$22.2 = 18.5m$

Step 4) Divide each side of the equation by $18.5$ to solve for $m$ while keeping the equation balanced:

$\frac{22.2}{18.5} = \frac{18.5m}{18.5}$

$1.2 = m$

$m = 1.2$
https://www.reference.com/web?q=surface+area+examples&qo=relatedSearchBing&o=600605&l=dir
https://www.shmoop.com/basic-geometry/surface-area-examples.html
Pre-Algebra giving you a hard time? Shmoop's free Basic Geometry Guide has all the explanations, examples, and exercises you've been craving.

https://www.onlinemathlearning.com/surface-area-formula.html
Surface area formula for solid cylinder, hollow cylinder, prism, cone, pyramid, sphere, hemisphere, cube, cuboid, rectangular prism and triangular prism, ...

https://study.com/academy/lesson/what-is-surface-area-definition-formulas-quiz.html
... of the 3-D shape. Here is a list of basic shape formulas to help with finding the surface area of the 3-D shapes: ... Examples of Surface Areas: Triangular Prism ...

http://tutorial.math.lamar.edu/Classes/CalcII/SurfaceArea.aspx
May 30, 2018 ... In this section we'll determine the surface area of a solid of revolution, i.e. a solid obtained by rotating a ... Now let's work a couple of examples.

http://www.math.com/tables/geometry/surfareas.htm
In general, the surface area is the sum of all the areas of all the shapes that cover ... Be careful!! Units count. Use the same units for all measurements. Examples ...

https://www.aaamath.com/geo79_x9.htm
An interactive math lesson to teach the surface area of a rectangular prism. ... Add the three areas together to find the surface area; Example: The surface area of ...

https://www.khanacademy.org/science/ap-biology/cell-structure-and-function/cell-size/v/surface-area-of-a-box
Surface area is the sum of the areas of all faces (or surfaces) on a 3D shape. A cuboid has 6 rectangular faces. To find the surface area of a cuboid, add the ...

https://www.youtube.com/watch?v=sskf3tF2heU
Nov 6, 2007 ... For a complete lesson on surface area, go to http://www.MathHelp.com - 1000+ online math lessons featuring a personal math teacher inside ...

http://www.mathguide.com/lessons/SurfaceArea.html
Surface Area: Learn how to calculate the surface area of common solids. ... Example 1: Given l = 4 yds, w = 2 yds, and h = 5 yds, the surface area would be.

https://www.khanacademy.org/math/basic-geo/basic-geo-volume-sa/basic-geometry-surface-area/v/surface-area-word-problem-example
Sal solves a word problem that involves surface area of a square pyramid.
https://alonso-delarte.medium.com/how-to-run-junit-from-the-command-line-for-tdd-81ce378f69f
# How to run JUnit from the command line for TDD

JUnit is so well-integrated with the major Java IDEs (Eclipse, NetBeans, IntelliJ) that you might be wondering: why would anyone even want to learn how to run JUnit from the command line?

I can only think of two reasons:

1. So that you can honestly say you know how to do it, or
2. So that you can do test-driven development (TDD) in Scala even if you don't know how to set up your IDE for Scala (Eclipse or NetBeans — in my experience, using Scala in IntelliJ is very easy and intuitive).

If you can think of other reasons, please leave me a comment. The second reason might also apply to other programming languages for the Java Virtual Machine.

I am aware of the reason of not wanting to fire up an IDE just to run a couple of tests, but if you don't already know how to run JUnit from the command line, learning how (and learning where the relevant JARs are at, and possibly setting up folders) will probably take you a lot longer than starting up your IDE, even on a sluggish computer.

This tutorial uses the second reason given above to motivate the example. Specifically, an implementation of the `Fraction` class in Scala, with the JUnit `FractionTest` also written in Scala.

Some of you might be thinking that JUnit is not the best testing framework for Scala, and you're probably right. But for someone coming at Scala from a Java background, JUnit is perfect for me.

As I become more proficient with the functional side of Scala, I will probably become aware of JUnit's shortcomings for testing Scala. But for now it'll do just fine.

If you don't care too much, or at all, about Scala, you can skim or skip the "What I will use to demonstrate" section, which I estimate will cut the reading time in half.

# A few assumptions

If you've read this far, it would seem to be a fair assumption that you know Java and JUnit fairly well. That might not actually be the case, but I will assume that it is.

It would be wrong to assume that you know Scala. You might, but even if you don't, I think that as long as I take care to explain the major departures from Java as they come up, you, as someone who knows Java, will have an easy time making sense of the superficial differences between Java and Scala.

I do assume that you know the basics of the command line in your computer's operating system (e.g., the "DOS" command line in Windows), that you know how to navigate the directory structure, and that you know how to create subdirectories.

I also assume that you're comfortable using some plaintext editor to type up programs, though I don't assume what that editor might be. Vim? Sublime? Notepad? Something else?

And lastly, I assume you mostly use Eclipse or NetBeans for your Java development. If you use IntelliJ, you can just have it download the Scala for IntelliJ plugin, and then writing and running tests for Scala source is almost as easy as writing and running tests for Java source.

# What I will use to demonstrate

For this tutorial, I will use the example of the `Fraction` class, an immutable data type for numbers like 1/2, −144/89, 29/12, −1729/53, etc., and its corresponding `FractionTest`.

It might feel like a toy example, but it's a simple application of familiar mathematical concepts, and I've found it very useful for my own projects.

My Java version of `Fraction` provides the four basic operations of arithmetic (addition, subtraction, multiplication and division), and also reciprocal, and it implements `java.lang.Comparable`.
It also overrides `toString()`, `equals()` and `hashCode()`.

Our Scala version of `Fraction` will have those same capabilities, but we will name our basic arithmetic functions `+()`, `-()`, `*()` and `/()` instead of `plus()`, `minus()`, `times()` and `divides()` (but the tests will still be `testPlus()`, `testMinus()`, `testTimes()` and `testDivides()`, so as to avoid unnecessary confusion).

Also, we will need to implement `scala.math.Ordered` if we want to be able to use `<` and `>` to compare instances of `Fraction` (we can always use `==` in Scala since Scala will just call `equals()`, which, if nothing else, is available from `java.lang.Object`).

I'm also thinking about adding `to()` and `by()`, inspired by similar functions in Scala's `RichInt` and `Range` classes. A thorough discussion of those will have to wait for another time, though.

So here's the Scala source for `Fraction` that should fail all our initial tests, to be followed by a quick explanation of the major differences from Java that come up:

```scala
package fractions

object Fraction {

  import scala.language.implicitConversions

  implicit def IntToFraction(n: Int) = new Fraction(0)

  val HASH_SEP = 65536

}

class Fraction(numerator: Long, denominator: Long = 1L) extends Comparable[Fraction] {

  val fractNumer = numerator
  val fractDenom = denominator

  override def toString: String = "Not implemented yet"

  override def equals(obj: Any): Boolean = false

  override def hashCode: Int = 0

  def +(addend: Fraction): Fraction = this

  def unary_- = new Fraction(-this.fractNumer, -this.fractDenom)

  def -(subtrahend: Fraction): Fraction = this

  def *(multiplicand: Fraction): Fraction = this

  def /(divisor: Fraction): Fraction = this

  def reciprocal: Fraction = this

  import java.util.ArrayList

  def to(end: Fraction): ArrayList[Fraction] = {
    var range = new ArrayList[Fraction]
    range.add(this)
    range
  }

  def numericApproximation: Double = 0.0

  override def compareTo(other: Fraction): Int = 0

}
```

Yeah, that should fail all our tests. Before I present the tests, I will quickly discuss the major differences from Java that came up in the source listing above.

Notice that there is both a `Fraction` object and a `Fraction` class. The `Fraction` object is a "companion object" for the `Fraction` class.

If we put any "static fields" and/or "static methods" into the Java version of the `Fraction` class, in Scala we would put them in the `Fraction` companion object. In this example, that would be `HASH_SEP`, which you may or may not want to use to make `hashCode()` work correctly.

It is now my understanding that a class and its companion object must be in the same source file, which in this case would be `Fraction.scala`.

The `Fraction` companion object is also the place where we define implicit conversions from other types to `Fraction`. For more detail on implicit conversions in Scala, please see my article from February.

In Scala, you can place import statements in the scope where they make the most sense, rather than in the global scope of a source file, close to the top, like in Java.

In our example, the `Fraction` object doesn't need `java.util.ArrayList` and the `Fraction` class doesn't need `scala.language.implicitConversions`, so we can put one inside the class and the other inside the object. Sometimes we can drill down to even narrower scopes.

In a Java class, we can have multiple constructors scattered throughout the source, and we can claim that one of them is the primary constructor.

Though I suppose this makes sense in the case of chained constructors: the constructor with the most explicit parameters is the primary constructor. Or maybe the one with the fewest?

But in a Scala class, there really is a primary constructor, and it is "fused" with the class declaration.
There is no need to explicitly call the superclass constructor; the Scala compiler takes care of that for us when necessary.

Default parameters help us reduce the need for alternate constructors. In the example, we have `denominator: Long = 1L`, which means that if the denominator is omitted, it will be assumed to be 1, and the fraction will be arithmetically equal to an integer.

Next, `val` is kind of like `final` in Java. Since `numerator` and `denominator` are both `Long` (which is kinda like the `long` primitive in Java, but with all the object-oriented amenities of the `Long` wrapper) and we assign them to the fields `fractNumer` and `fractDenom`, the Scala compiler automatically infers those are also `Long`. This helps cut down on “boilerplate.”

Since no one can modify `fractNumer` and `fractDenom` once `Fraction` is constructed, we’re not too worried about who can see what their values are, hence no need to mark them `private`.

Scala is said to automatically generate getters and setters, or getters only in the case of `val` fields like `fractNumer` and `fractDenom`.

You know what? I think we should rename the constructor parameters to `numer` and `denom`, and the `val` fields to `numerator` and `denominator`. That way, the automatically generated getters have obvious names. In any case, there’s no need for us to write `getNumerator()` and `getDenominator()`.

Here’s a taste of how that would work in the local Scala REPL:

```scala
scala> val oneHalf = new fractions.Fraction(1, 2)
oneHalf: fractions.Fraction = 1/2

scala> oneHalf.numerator
res57: Long = 1

scala> oneHalf.denominator
res58: Long = 2
```

In Scala, instead of the `@Override` annotation in Java, we use the `override` keyword. Why they couldn’t add the `override` keyword to the Java language is outside the scope of this article.

Here’s a seemingly superficial but important difference: to override `equals()`, the type of the object to be compared is `Any`, not `Object`.
This one could cause you a confusing compilation error.

And lastly, we can use the characters `+`, `-`, `*`, `/` and many others as function names. This is one of those features that should only be used when it makes sense to do so, and in the case of a numeric data type like `Fraction`, it certainly does make sense.

The source listing above shows other departures from Java, but they can be dismissed with very quick remarks: semicolons are optional, as are curly braces for single-line blocks as well as parentheses for empty argument lists; and we write `Generic[T]` instead of `Generic<T>`.

As for the 2-space indentation, you can use four spaces if you prefer. As with most compilers, the Scala compiler generally does not care about extra whitespace, though it might be good not to split `String` literals across lines.

Now we can move on to our tests.

```scala
package fractions

import org.junit.Test
import org.junit.Assert._

class FractionTest {

  @Test def testImplicitConversion: Unit = {
    val expected = new Fraction(49)
    val sevenSixths = new Fraction(7, 6)
    val actual = 42 * sevenSixths
    assertEquals(expected, actual)
  }

  @Test def testFractionsToLowestTerms: Unit = {
    val oneHalf = new Fraction(1, 2)
    val twoQuarters = new Fraction(2, 4)
    assertEquals(oneHalf, twoQuarters)
  }

  @Test def testDefaultDenomOne: Unit = {
    val seven = new Fraction(7)
    val sevenOneths = new Fraction(7, 1)
    assertEquals(seven, sevenOneths)
  }

  @Test def testToString: Unit = {
    val negOneHalf = new Fraction(-1, 2)
    val expected = "-1/2"
    val actual = negOneHalf.toString.replace(" ", "")
    assertEquals(expected, actual)
  }

  @Test def testToStringDenomOneOmitted: Unit = {
    val seven = new Fraction(7, 1)
    val expected = "7"
    val actual = seven.toString.replace(" ", "")
    assertEquals(expected, actual)
  }

  @Test def testEquals: Unit = {
    val oneHalf = new Fraction(1, 2)
    val dupOneHalf = new Fraction(1, 2)
    assertEquals(oneHalf, dupOneHalf)
    val threeQuarters = new Fraction(3, 4)
    assertNotEquals(oneHalf, threeQuarters)
  }

  @Test def testHashCode: Unit = {
    val oneHalf = new Fraction(1, 2)
    val twoQuarters = new Fraction(2, 4)
    val oneHalfHashCode = oneHalf.hashCode
    val twoQuartersHashCode = twoQuarters.hashCode
    assertEquals(oneHalfHashCode, twoQuartersHashCode)
    val threeQuarters = new Fraction(3, 4)
    val threeQuartersHashCode = threeQuarters.hashCode
    assertNotEquals(oneHalfHashCode, threeQuartersHashCode)
  }

  @Test def testPlus: Unit = {
    val threeHalves = new Fraction(3, 2)
    val fourSevenths = new Fraction(4, 7)
    val expected = new Fraction(29, 14)
    var actual = threeHalves + fourSevenths
    assertEquals(expected, actual)
    actual = fourSevenths + threeHalves // Commutative test
    assertEquals(expected, actual)
  }

  @Test def testNegate: Unit = {
    val threeHalves = new Fraction(3, 2)
    val expected = new Fraction(-3, 2)
    val actual = -threeHalves
    assertEquals(expected, actual)
  }

  @Test def testMinus: Unit = {
    val threeHalves = new Fraction(3, 2)
    val fourSevenths = new Fraction(4, 7)
    val expected = new Fraction(13, 14)
    val actual = threeHalves - fourSevenths
    assertEquals(expected, actual)
  }

  @Test def testTimes: Unit = {
    val threeHalves = new Fraction(3, 2)
    val fourSevenths = new Fraction(4, 7)
    val expected = new Fraction(6, 7)
    var actual = threeHalves * fourSevenths
    assertEquals(expected, actual)
    actual = fourSevenths * threeHalves // Commutative test
    assertEquals(expected, actual)
  }

  @Test def testDivides: Unit = {
    val threeHalves = new Fraction(3, 2)
    val fourSevenths = new Fraction(4, 7)
    val expected = new Fraction(21, 8)
    val actual = threeHalves / fourSevenths
    assertEquals(expected, actual)
  }

  @Test def testDivisionByZero: Unit = {
    val threeHalves = new Fraction(3, 2)
    val zero = new Fraction(0)
    try {
      val result = threeHalves / zero
      val failMsg = "Trying to divide by zero should have caused an exception, not given result " + result.toString
      fail(failMsg)
    } catch {
      case iae: IllegalArgumentException => println("Trying to divide by zero correctly triggered IllegalArgumentException: " + iae.getMessage)
      case ae: ArithmeticException => println("Trying to divide by zero correctly triggered ArithmeticException: " + ae.getMessage)
      case e: Exception => fail(e.getMessage)
    }
  }

  @Test def testReciprocal: Unit = {
    val threeHalves = new Fraction(3, 2)
    val expected = new Fraction(2, 3)
    val actual = threeHalves.reciprocal
    assertEquals(expected, actual)
  }

  @Test def testTo: Unit = {
    import java.util.ArrayList
    val fourSevenths = new Fraction(4, 7)
    val threeHalves = new Fraction(3, 2)
    val expected = new ArrayList[Fraction]
    for (n <- 8 to 21) expected.add(new Fraction(n, 14))
    val actual = fourSevenths to threeHalves
    assertEquals(expected, actual)
  }

  @Test def testNumericApproximation: Unit = {
    val oneSeventh = new Fraction(1, 7)
    var expected = 0.14285714
    val testDelta = 0.00000001
    var actual = oneSeventh.numericApproximation
    assertEquals(expected, actual, testDelta)
    val thirteenFiftyeights = new Fraction(13, 58)
    expected = 0.22413793
    actual = thirteenFiftyeights.numericApproximation
    assertEquals(expected, actual, testDelta)
  }

  @Test def testCompareTo: Unit = {
    fail("The test case is a prototype")
  }

}
```

A couple of things to note: what’s `void` in Java is `Unit` in Scala (I don’t like it, but that’s how it is). And, although we can write things like `summandA.+(summandB)`, it makes far more sense to write `summandA + summandB`.

You probably already figured out that `var` is just like `val`, except that we can change the value of a `var` later on. Just as with `val`, with `var` we don’t need to specify the type on the left of the equals sign if it can be readily inferred from the expression to the right of the equals sign.

One superficial difference is that the wildcard character for imports is the underscore rather than the asterisk.

If you prefer to write “`@Test`” on a line by itself, you certainly can.
Aside from these differences, using JUnit for Scala sources should feel pretty much the same as using JUnit for Java sources.

# Now to actually run JUnit from the command line

Oh, wait, not just yet. First we have to compile the source and tests. And, at least the first time, it will be necessary to compile the source first and then the tests.

I don’t see any point trying to compile something (the test) you already know will fail to compile (if the source is not present yet).

I thought it would be a good idea to replicate the directory structure the IDE uses. In hindsight, however, I should have gone with a simpler directory structure.

Taking NetBeans for Windows as a model, and reckoning from the project root folder, we will put source files in the `src` folder, test files in the `test` folder, compiled source in the `build\classes` folder and compiled tests in the `build\test\classes` folder.

In each of those, there should be separate subfolders for the separate packages, which here means `src\fractions`, `test\fractions`, `build\classes\fractions` and `build\test\classes\fractions`.

So, in the command line, go to `src\fractions` and give the command

```
scalac -d ..\..\build\classes Fraction.scala
```

If you’d prefer to follow along with Java source and tests, it would be much the same thing except you’d use `javac` instead of `scalac`.

Note that you don’t have to specify `build\classes\fractions` since the compiler will automatically place `Fraction.class` and `Fraction$.class` in the `fractions` folder, or create it if it’s not already there (I think that `Fraction$.class` corresponds to the `Fraction` companion object).

But if you do specify it, you’ll wind up with an extra level in the hierarchy. Hopefully there are no compilation errors or warnings to deal with.

Next, we need to compile the tests. Now it gets quite a bit more complicated.
First, to make `FractionTest.class`, the compiler obviously needs to access `Fraction.class`.

And, just as importantly, the compiler needs to access the JUnit JAR. On my NetBeans installation, the path to the JUnit JAR is `C:\Program Files\NetBeans 8.2\platform\modules\ext\junit-4.12.jar`.

Go to `test\fractions`, then give the command

```
scalac -classpath "C:\Program Files\NetBeans 8.2\platform\modules\ext\junit-4.12.jar";..\..\build\classes -d ..\..\build\test\classes FractionTest.scala
```

Once we’ve got source and tests compiled, we’re finally ready to run JUnit from the command line. Go to `build\test\classes`.

Actually, there is just one more detail: we also need the Hamcrest matchers JAR. Conveniently for me, on my NetBeans installation, the Hamcrest Core 1.3 JAR is in the same directory as the JUnit 4.12 JAR.

Another wrinkle is that apparently we also need `scala-library.jar`. This is not a problem for those following along with Java only.

After a lot of frustration, I decided that trying to have the source class in one directory and the test class in another directory was way more trouble than it was worth. Especially since all the tutorials I could find put source and test classes in the same directory.

It seems that since `FractionTest` is declared to be in the `fractions` package, and `FractionTest.class` is in `build\test\classes\fractions`, JUnit expects `Fraction.class` and `Fraction$.class` to also be in `build\test\classes\fractions`. JUnit won’t look elsewhere in the class path.

Unless maybe there is some option I don’t know about that permits source classes and test classes to be in separate folders. Trying to replicate the IDE’s project directory structure creates way too much overhead for a human being typing things at the command line.

So I copied `Fraction.class` and `Fraction$.class` from `build\classes\fractions` to `build\test\classes\fractions`.
Then, from `build\test\classes` (not `build\test\classes\fractions`), I put in this long command:

```
java -cp .;"C:\Program Files\NetBeans 8.2\platform\modules\ext\junit-4.12.jar";"C:\Program Files\NetBeans 8.2\platform\modules\ext\hamcrest-core-1.3.jar";"C:\Program Files (x86)\scala\lib\scala-library.jar" org.junit.runner.JUnitCore fractions.FractionTest
```

But here I ran into yet another problem. With all the failures (since every test ought to be failing at this point), the output was very long and `cmd.exe` was not letting me see all of it. The window only let me scroll up to a certain point in the middle of the output.

I remembered from the old days of DOS that it’s possible to reroute the output to a text file. After dusting off an old DOS reference, I tried a slightly different version of the previous command:

```
java -cp .;"C:\Program Files\NetBeans 8.2\platform\modules\ext\junit-4.12.jar";"C:\Program Files\NetBeans 8.2\platform\modules\ext\hamcrest-core-1.3.jar";"C:\Program Files (x86)\scala\lib\scala-library.jar" org.junit.runner.JUnitCore fractions.FractionTest > testResults.txt
```

Finally, I was able to see that the tests were failing for the right reasons (I’ve bolded some things above and below to make it easier to skim — I’ve also omitted several lines so as to not bloat my word count too much).

```
JUnit version 4.12
.E.E.E.E.E.E.E.E.E.E.E.E.E.E.E.E
Time: 3.377
There were 16 failures:
1) testImplicitConversion(fractions.FractionTest)
java.lang.AssertionError: expected: fractions.Fraction<Not implemented yet> but was: fractions.Fraction<Not implemented yet>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:834)
	at org.junit.Assert.assertEquals(Assert.java:118)
...several lines omitted...
2) testDefaultDenomOne(fractions.FractionTest)
java.lang.AssertionError: expected: fractions.Fraction<Not implemented yet> but was: fractions.Fraction<Not implemented yet>
...several lines omitted...
3) testMinus(fractions.FractionTest)
java.lang.AssertionError: expected: fractions.Fraction<Not implemented yet> but was: fractions.Fraction<Not implemented yet>
...several lines omitted...
4) testTimes(fractions.FractionTest)
java.lang.AssertionError: expected: fractions.Fraction<Not implemented yet> but was: fractions.Fraction<Not implemented yet>
...several lines omitted...
16) testNegate(fractions.FractionTest)
java.lang.AssertionError: expected: fractions.Fraction<Not implemented yet> but was: fractions.Fraction<Not implemented yet>
...several lines omitted...
FAILURES!!!
Tests run: 16, Failures: 16
```

The output shown above was missing a test for `numericApproximation()`. It would have failed, just like all the others. This is the right outcome for the first run of the test suite.

Our first priority now should be to get `testEquals()` to pass, since almost all the tests depend on `equals()` working correctly.

The IDE-generated Java `equals()` in NetBeans would start out by checking `this == obj` and `obj == null`, then there’s a `getClass()` comparison. If those are inconclusive, the relevant fields are compared, but it has to do a cast first.

We could just translate that algorithm to Scala, or we could try to write something a bit more idiomatic:

```scala
  override def equals(obj: Any): Boolean = obj match {
    case obj: Fraction => {
      this.numerator == obj.numerator && this.denominator == obj.denominator
    }
    case _ => false
  }
```

In Scala, we can do a sort of “switch-case” on the type of an object. Once we ascertain that `obj` is an instance of `Fraction`, there is no need to create a `final` copy of `obj` cast to `Fraction` like in Java; we can go ahead and work on `obj` just the same as if we had declared it as `Fraction` in the current scope.

Then “`case _ =>`” is like “`default:`” in a Java switch statement (but there is no need for `break` in the previous cases).

Presumably if `obj` is not of type `Fraction`, it can’t be equal to `this`.
Hence `case _ => false`. I’m not sure if this would work correctly with a subclass of `Fraction`, but if that ever becomes necessary, we should write a test, run that test and go from there.

At this point we shouldn’t concern ourselves with the program recognizing as equal fractions that are arithmetically equal but not in lowest terms (e.g., 1/2 = 2/4). We do already have a test for that, but it’s separate from `testEquals`.

Maybe at this point we should also get `testToString()` to pass, since “Not implemented yet” is almost as annoying as the default `Object.toString()`, and neither is as informative as “numerator/denominator”.

Go ahead and change `equals()` to what I wrote above. I think you can figure out `toString()` on your own, but only work on making `testToString()` pass; leave `testToStringDenomOneOmitted()` for later.

Once you make these changes to the source, be sure to save in your text editor, since it probably doesn’t have auto-save like IntelliJ.

You also have to recompile the source. If you get the same failures as before, check that you did save the changes to the source (that’s from personal experience).

Thank goodness for the Up arrow keyboard shortcut in the command line, so that you don’t have to keep typing the same commands with the same lengthy class paths.

With the changes made and saved, and the source recompiled, we can run the tests again.
Only two tests should pass.

Hmm… I seem to have made some sort of mistake, since five of the tests pass (once again I’ve bolded the test names to facilitate skimming the results, and also I’ve omitted the stack traces without comment):

```
JUnit version 4.12
.E.E..E.E.E.E...E.E.E.E.E.E..
Time: 1.86
There were 12 failures:
1) testImplicitConversion(fractions.FractionTest)
java.lang.AssertionError: expected:<49/1> but was:<0/1>
2) testNumericApproximation(fractions.FractionTest)
java.lang.AssertionError: expected:<0.14285714> but was:<0.0>
3) testMinus(fractions.FractionTest)
java.lang.AssertionError: expected:<13/14> but was:<3/2>
4) testTimes(fractions.FractionTest)
java.lang.AssertionError: expected:<6/7> but was:<3/2>
5) testFractionsToLowestTerms(fractions.FractionTest)
java.lang.AssertionError: expected:<1/2> but was:<2/4>
6) testPlus(fractions.FractionTest)
java.lang.AssertionError: expected:<29/14> but was:<3/2>
7) testTo(fractions.FractionTest)
java.lang.AssertionError: expected:<[8/14, 9/14, 10/14, 11/14, 12/14, 13/14, 14/14, 15/14, 16/14, 17/14, 18/14, 19/14, 20/14, 21/14]> but was:<[4/7]>
8) testCompareTo(fractions.FractionTest)
java.lang.AssertionError: The test case is a prototype
9) testToStringDenomOneOmitted(fractions.FractionTest)
org.junit.ComparisonFailure: expected:<7[]> but was:<7[/1]>
10) testDivisionByZero(fractions.FractionTest)
java.lang.AssertionError: Trying to divide by zero should have caused an exception, not given result 3/2
11) testHashCode(fractions.FractionTest)
java.lang.AssertionError: Values should be different. Actual: 0
12) testDivides(fractions.FractionTest)
java.lang.AssertionError: expected:<3/56> but was:<3/2>
FAILURES!!!
Tests run: 17, Failures: 12
```

With `toString()` implemented, we can see that our program is not automatically putting fractions in lowest terms.

I just noticed that I neglected to put in a test that says denominator zero for the constructor should cause an exception.
Without that requirement on the constructor, it would be possible to create a fraction with an invalid denominator that causes some unexpected problems down the line.

Negative denominators should be allowed, however. But the constructor should change negative denominators to positive denominators, by multiplying both the supplied numerator and denominator by −1.

So, for example, `new Fraction(1, -2)` should be a fraction with numerator −1 and denominator 2, while `new Fraction(-1, -2)` should be a fraction with numerator 1 and denominator 2.

Now that some tests are passing, I’m really missing the green checkmarks in the IDE, as they make it easier to see which tests did pass.

Also, I like how the IDEs report only the pertinent lines of the stack trace. For example, the line numbers in `org.junit.runners.ParentRunner$3` are useless to me; I really only care about the lines that I can rewrite.

However, I have to admit that I do find it just a little interesting to know that `sun.reflect.NativeMethodAccessorImpl.invoke0` is a “native method” but `sun.reflect.NativeMethodAccessorImpl.invoke` is from an “unknown source.”

Given how laborious this cycle of saving, recompiling and rerunning is, it’s understandable if you would prefer to make several tests pass at once rather than one at a time.

IntelliJ makes it so easy to just run one test at a time.
And even in NetBeans it doesn’t take too long to run a whole test class, provided you haven’t made any test too long.

So if you want to continue this exercise on the command line, I suggest you fix the implicit conversion (it suffices to change just one character), figure out how to put fractions in lowest terms, and fix the arithmetic functions.

If you want to work on `hashCode()`, recall that I put in `HASH_SEP` as a `val` in the `Fraction` object, essentially making it what in Java would be `public static final`.

If, as you work through the exercise, you decide you prefer a value other than 65,536 for `HASH_SEP`, you can change it in just one place rather than in however many places you would have used a numeric literal.

However, to access `HASH_SEP` from the `Fraction` class, you will need to refer to it as “`Fraction.HASH_SEP`” rather than just “`HASH_SEP`”. A minor bit of verbosity in an otherwise concise language.

With these changes, you might bring the number of failing tests down to maybe just two or three:

```
JUnit version 4.12
.........E.E..
Trying to divide by zero correctly triggered IllegalArgumentException: Dividing 3/2 by 0 results in an indeterminate number
....
Time: 1.875
There were 2 failures:
1) testTo(fractions.FractionTest)
java.lang.AssertionError: expected:<[4/7, 9/14, 5/7, 11/14, 6/7, 13/14, 1, 15/14, 8/7, 17/14, 9/7, 19/14, 10/7, 3/2]> but was:<[4/7]>
2) testCompareTo(fractions.FractionTest)
java.lang.AssertionError: The test case is a prototype
FAILURES!!!
Tests run: 17, Failures: 2
```

If you need help getting `testCompareTo()` to pass, my article on comparable numeric data types from March might be of help, even though it details a Java implementation. As for `testTo()`, I plan to write an article about it next week.

I don’t entirely like that `to()` returns a mutable collection. When I figure out what kind of immutable collection `to()` should return, I will change `to()` and `testTo()` accordingly.
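If the lowest-terms and `hashCode()` items have you stuck, here is one possible approach, sketched in Java rather than Scala. This is not the article's actual implementation: the class name is hypothetical, and the way `HASH_SEP` is combined into the hash is only a guess at how such a constant might separate the numerator's and denominator's contributions.

```java
// A hypothetical sketch of constructor normalization: reject zero
// denominators, flip signs so the denominator is positive, and divide
// through by the GCD so equals() and hashCode() can compare fields directly.
public class NormalizedFraction {

    static final int HASH_SEP = 65536; // one guess at a field separator

    public final long numerator, denominator;

    public NormalizedFraction(long numer, long denom) {
        if (denom == 0) {
            throw new IllegalArgumentException("Denominator must be nonzero");
        }
        if (denom < 0) { // normalize sign: -1/-2 becomes 1/2, 1/-2 becomes -1/2
            numer = -numer;
            denom = -denom;
        }
        long adjust = gcd(Math.abs(numer), denom); // >= 1 since denom > 0
        numerator = numer / adjust;
        denominator = denom / adjust;
    }

    // Euclid's algorithm; gcd(0, d) = d, so 0/d reduces to 0/1.
    static long gcd(long a, long b) {
        while (b != 0) {
            long t = a % b;
            a = b;
            b = t;
        }
        return a;
    }

    @Override
    public int hashCode() {
        // Works only because the fields are already in lowest terms.
        return (int) (numerator * HASH_SEP + denominator);
    }
}
```

With this normalization in place, 2/4 and 1/2 carry identical fields, so field-by-field `equals()` and `hashCode()` both behave as the tests demand.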
And I’ll have to remember to remove the `java.util.ArrayList` import.

# In conclusion

I hope you have found this useful. Or at least I hope it gives you a renewed appreciation for all the things your IDE does to make TDD using JUnit a breeze.

Alonso Del Arte is a composer and photographer from Detroit, Michigan. He has been working on a Java program to display certain mathematical diagrams.
http://mathcurve.com/courbes2d.gb/gauss/gauss.shtml
GAUSSIAN CURVE

Curve studied by de Moivre in 1718 and Gauss in 1809. Karl Friedrich Gauss (1777-1855): German astronomer, mathematician, and physicist. Other names: bell curve (of Gauss).

The area between the curve and the asymptote is equal to $N$; the area of the portion between $m - s$ and $m + s$ is approximately equal to 2/3 of $N$; between $m - 2s$ and $m + 2s$ it is approximately 96% of $N$.

Cartesian equation: $y = \dfrac{N}{s\sqrt{2\pi}}\,e^{-\frac{(x-m)^2}{2s^2}}$, giving the number of individuals of height between $x$ and $x + dx$ in a "normal" population of $N$ people, with mean height $m$ and a standard deviation $s$. For example, the number $\binom{n}{k}$ of subsets with $k$ elements of a set with $n$ elements can be approximated for large values of $n$ by $f(k)$ with $N = 2^n$, $m = n/2$, $s = \sqrt{n}/2$.

The Gaussian curve is the curve of the density function of the normal distribution.

For $N = 1$, $m = 0$, $s = 1$, we get the Gaussian curve said to be "standard".

Do not mistake the bell curve of the Gaussian distribution with that of the Cauchy distribution, which is none other than a witch of Agnesi.

If we break away from the probabilistic aspect, the Gaussian curve has the following characteristics:

Cartesian equation: $y = \dfrac{N}{s\sqrt{2\pi}}\,e^{-\frac{(x-m)^2}{2s^2}}$; coordinates of the flex points: $\left(m \pm s,\ \dfrac{N}{s\sqrt{2\pi e}}\right)$. Area between the curve and the asymptote: $N$; centroid of this domain: $\left(m,\ \dfrac{N}{4s\sqrt{\pi}}\right)$.

© Robert FERRÉOL 2019
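The binomial approximation described above can be checked numerically. The sketch below (Java, with helper names of my choosing) compares $\binom{100}{50}$ against $f(50)$ taking $N = 2^{100}$, $m = 50$, $s = 5$; the ratio should come out very close to 1.

```java
// Numeric sanity check: for large n, C(n, k) is approximated by the Gaussian
// f(k) with N = 2^n, m = n/2, s = sqrt(n)/2.
public class GaussApprox {

    // Multiplicative formula for the binomial coefficient, in floating point.
    public static double binomial(int n, int k) {
        double result = 1.0;
        for (int i = 1; i <= k; i++) {
            result = result * (n - k + i) / i;
        }
        return result;
    }

    // The Gaussian density scaled by N, as in the Cartesian equation above.
    public static double f(double x, double bigN, double m, double s) {
        return bigN / (s * Math.sqrt(2 * Math.PI))
                * Math.exp(-(x - m) * (x - m) / (2 * s * s));
    }

    public static void main(String[] args) {
        int n = 100, k = 50;
        double exact = binomial(n, k);
        double approx = f(k, Math.pow(2, n), n / 2.0, Math.sqrt(n) / 2.0);
        System.out.printf("exact %.4e, approx %.4e, ratio %.4f%n",
                exact, approx, approx / exact);
    }
}
```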
https://aakashdigitalsrv1.meritnation.com/ask-answer/question/a-boy-has-3-library-tickets-and-8-books-are-of-his-interest/permutations-and-combinations/3295620
# A boy has 3 library tickets and 8 books are of his interest. Of these 8, he doesn't want to borrow Maths Part 2 unless Maths Part 1 is also borrowed. In how many ways can he choose the 3 books to be borrowed?

Various cases possible are:

(i) When Maths Part 1 is borrowed: here, the boy may borrow Maths Part 2. So, he has to select 2 books out of the remaining 7 books, which can be done in $\binom{7}{2} = 21$ ways.

(ii) When Maths Part 1 is not borrowed: here, the boy will not borrow Maths Part 2. So, he has to select 3 books from the remaining 6 books, which can be done in $\binom{6}{3} = 20$ ways.

∴ Total number of ways = $\binom{7}{2} + \binom{6}{3}$ = 21 + 20 = 41

Your answer should be 41:

if he gets both Part 1 and Part 2 of the maths book: $\binom{6}{1}$ ways;

if he gets the first part of the maths book but not the second one: $\binom{6}{2}$ ways;

if he gets neither Part 1 nor Part 2 of the maths book: $\binom{6}{3}$ ways.

We get 6 + 15 + 20 = 41.
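The case analysis can be verified in a few lines of Java; the `choose` helper below is the standard multiplicative formula for binomial coefficients.

```java
// Quick check of the case analysis above: C(7,2) + C(6,3) should equal 41.
public class LibraryTickets {

    // Multiplicative formula; the running product is divisible by i at each step.
    public static long choose(int n, int k) {
        long result = 1;
        for (int i = 1; i <= k; i++) {
            result = result * (n - k + i) / i;
        }
        return result;
    }

    public static void main(String[] args) {
        long total = choose(7, 2) + choose(6, 3);
        System.out.println(total); // → 41
    }
}
```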
http://www.exammission.com/2017/10/probability-important-formulas-aptitude.html
# Exam Mission

1. Experiment:
An operation which can produce some well-defined outcomes is called an experiment.
2. Random Experiment:
An experiment in which all possible outcomes are known and the exact output cannot be predicted in advance is called a random experiment.
Examples:
1. Rolling an unbiased die.
2. Tossing a fair coin.
3. Drawing a card from a pack of well-shuffled cards.
4. Picking up a ball of certain colour from a bag containing balls of different colours.
Details:
1. When we toss a coin, then either a Head (H) or a Tail (T) appears.
2. A die is a solid cube, having 6 faces, marked 1, 2, 3, 4, 5, 6 respectively. When we throw a die, the outcome is the number that appears on its upper face.
3. A pack of cards has 52 cards.
It has 13 cards of each suit, namely Spades, Clubs, Hearts and Diamonds.
Cards of spades and clubs are black cards.
Cards of hearts and diamonds are red cards.
There are 4 honours of each suit.
There are Kings, Queens and Jacks. These are all called face cards.
3. Sample Space:
When we perform an experiment, then the set S of all possible outcomes is called the sample space.
Examples:
1. In tossing a coin, S = {H, T}
2. If two coins are tossed, then S = {HH, HT, TH, TT}.
3. In rolling a die, we have S = {1, 2, 3, 4, 5, 6}.
4. Event:
Any subset of a sample space is called an event.
5. Probability of Occurrence of an Event:
Let S be the sample space and let E be an event.
Then, E ⊆ S, and P(E) = n(E) / n(S).
6. Results on Probability:
1. P(S) = 1
2. 0 ≤ P(E) ≤ 1
3. P(∅) = 0
4. For any events A and B, we have: P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
5. If A denotes (not-A), then P(A) = 1 - P(A).
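Result 4 above can be illustrated by direct counting on a 52-card deck. The Java sketch below (names are mine) takes A = "card is a heart" and B = "card is a face card" and checks that the counts satisfy the inclusion-exclusion identity, which after dividing every count by 52 is exactly P(A ∪ B) = P(A) + P(B) - P(A ∩ B).

```java
// Counting hearts, face cards, and their overlap in a standard 52-card deck,
// then checking n(A ∪ B) = n(A) + n(B) - n(A ∩ B).
public class UnionRule {

    public static boolean holds() {
        int hearts = 0, faces = 0, both = 0, union = 0;
        for (int suit = 0; suit < 4; suit++) {      // Spades, Clubs, Hearts, Diamonds
            for (int rank = 1; rank <= 13; rank++) {
                boolean isHeart = (suit == 2);      // index 2 = Hearts
                boolean isFace = rank >= 11;        // Jack, Queen, King
                if (isHeart) hearts++;
                if (isFace) faces++;
                if (isHeart && isFace) both++;
                if (isHeart || isFace) union++;
            }
        }
        // Here: 22 == 13 + 12 − 3, i.e. 22/52 = 13/52 + 12/52 − 3/52.
        return union == hearts + faces - both;
    }

    public static void main(String[] args) {
        System.out.println(holds()); // → true
    }
}
```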
[ null, "https://www.indiabix.com/_files/images/aptitude/1-sym-eps.gif", null, "https://www.indiabix.com/_files/images/aptitude/1-sym-tfr.gif", null, "https://www.indiabix.com/_files/images/aptitude/1-sym-leq.gif", null, "https://www.indiabix.com/_files/images/aptitude/1-sym-leq.gif", null, "https://www.indiabix.com/_files/images/aptitude/1-sym-phi.gif", null, "https://www.indiabix.com/_files/images/aptitude/1-sym-uni.gif", null, "https://www.indiabix.com/_files/images/aptitude/1-sym-vec.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8789397,"math_prob":0.97602266,"size":2133,"snap":"2021-31-2021-39","text_gpt3_token_len":592,"char_repetition_ratio":0.09816816,"word_repetition_ratio":0.019900497,"special_character_ratio":0.27754337,"punctuation_ratio":0.14790288,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9916526,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-26T18:43:53Z\",\"WARC-Record-ID\":\"<urn:uuid:efb7924c-581c-495e-891c-c7ed46b96787>\",\"Content-Length\":\"226745\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:44f2ddb1-2418-4a59-8e62-1251418b0aa5>\",\"WARC-Concurrent-To\":\"<urn:uuid:02a9b737-ed05-47aa-9f63-6022e1ed5c6c>\",\"WARC-IP-Address\":\"142.250.73.243\",\"WARC-Target-URI\":\"http://www.exammission.com/2017/10/probability-important-formulas-aptitude.html\",\"WARC-Payload-Digest\":\"sha1:TIKGFIMITEV3VP3R56LKOFL2AYUNHFCG\",\"WARC-Block-Digest\":\"sha1:C67VQLAHQV45PUUDKCDCH4OWRSDURA4S\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046152144.92_warc_CC-MAIN-20210726183622-20210726213622-00114.warc.gz\"}"}
https://math.stackexchange.com/questions/1522914/vaughts-essentially-undecidable-set-theory/4290878
[ "# Vaught's essentially undecidable set theory\n\nToday I was reading this paper, which includes discussion of essential undecidability of various weak theories. On page 24, I was surprised to find out that Vaught has showed that set theory with the following two axioms is essentially undecidable:\n\n$$\\forall x\\exists y\\neg(y\\in x)$$ $$\\forall x,y\\exists u\\forall z(z\\in u\\Leftrightarrow(z\\in x\\lor z\\in y))$$\n\nI was surprised as I have found in some recent paper (which I can't remember) that ST is the simplest known set theory which is essentially undecidable, while (arguably) Vaught's theory is simpler. I wasn't able to track down Vaught's work which would talk of this theory.\n\nOn the second thought, this theory seems unlikely to be essentially undecidable, because, first, it vacuously has an empty model, and second, less trivially, a finite model with $\\in$ being empty relation seems to satisfy the theory as well, thus giving a decidable extension of the theory.\n\nCan anyone provide a reference for where Vaught proves essential undecidability of this theory?\n\nEdit: Since the result seems to be false, the question arises whether the article I mention on the beginning has a typo. Finding a work of Vaught which the paper appears to be quoting might help clarify that, but again - I couldn't find anything.\n\n• $z\\in z \\:$ should presumably be replaced with $\\: z\\in x \\;$. $\\;\\;\\;\\;$\n– user57159\nNov 10, 2015 at 19:34\n• @RickyDemer Of course. Thank you. Nov 10, 2015 at 19:39\n• Empty models are not allowed in usual first-order logic, but any (finite or infinite) model with an empty $\\in$ relation would be enough to prove that extending the theory with $\\forall x,y(x\\notin y)$ produces a consistent theory, which will be trivially decidable. Nov 10, 2015 at 19:49\n• Just before the claim in question is a reference \"(see --)\", and reference is indeed a paper by Vaught. 
It would seem most meaningful if that is supposed to be the one that contains the result. Nov 10, 2015 at 20:03
• The reference links to a review that sounds like it's mostly about arithmetic, but ends with: "Der Verfasser gibt zum Schluß $R_0$ entsprechende Theorien in der Mengenlehre an und weist auf weitere Probleme hin." ["In closing, the author gives set-theoretic theories corresponding to $R_0$ and points out further problems."] This sounds vaguely like it could be it. Nov 10, 2015 at 20:12

These are just typos. (The axioms as given do not form an essentially undecidable theory, as they have lots of finite models.)

The story on top of p. 24 is clearly meant to refer to the adjunctive set theory (AS) with axioms $$\exists x\:\forall y\:\neg(y\in x),$$

$$\forall x,y\:\exists u\:\forall z\:\bigl(z\in u\leftrightarrow(z\in x\lor z=y)\bigr).$$ Indeed, this theory extended with the axiom of extensionality was introduced and proved essentially undecidable by Szmielew and Tarski, and essential undecidability of the version here without extensionality was shown in Vaught's paper On a theorem of Cobham concerning undecidable theories (Proc. Logic, Methodology and Philosophy of Science 1960, pp. 14–25). The theory AS is mutually interpretable with Robinson's arithmetic Q.

An even weaker essentially undecidable theory, nowadays often called Vaught's set theory (VS), was introduced in Vaught's paper Axiomatizability by a schema (JSL 32 (1967), pp. 473–479). Its axioms consist of the schema $$\forall x_0,\dots,x_{n-1}\:\exists u\:\forall z\:\Bigl(z\in u\leftrightarrow\bigvee_{i<n}z=x_i\Bigr)$$ for all $$n\ge0$$. The theory VS interprets Robinson's theory R; like R, and unlike AS or Q, it is not finitely axiomatizable. There is a mention of VS on p. 26 of Beklemishev's paper (again, with a typo: the axioms need to be stated for $$n\ge0$$, not just $$n\ge1$$, i.e., including the axiom of the empty set).

• Thanks a lot! 
I should have realized the theory from the cited papers has finite models, especially after I made the comment above about it being the case for empty set + the messed up adjunction axiom. Thank you for the references as well! Oct 29, 2021 at 19:43
• Is VS the weakest known essentially undecidable theory? Oct 29, 2021 at 19:45
• @user76284 Certainly not. E.g., Robinson's R is strictly weaker (wrt computable interpretations) than VS. Theories like REP_U or REP_{PRF} from my paper "Recursive functions and existentially closed structures" are strictly weaker than R. For even weaker essentially undecidable theories, fix any recursively inseparable pair of r.e. sets $A,B\subseteq\omega$, and take the theory with constants $\{c_n:n\in\omega\}$, a unary predicate $P$, and axioms $P(c_n)$ for $n\in A$ and $\neg P(c_n)$ for $n\in B$. Oct 29, 2021 at 20:06
• @EmilJeřábek Sorry, I mean over a language with only a binary relation (like $\in$). Oct 29, 2021 at 20:15
• Same answer. You can encode any r.e. theory by a theory whose language only has a binary relation. Oct 29, 2021 at 20:18
{"ft_lang_label":"__label__en","ft_lang_prob":0.97828376,"math_prob":0.95578146,"size":1285,"snap":"2023-40-2023-50","text_gpt3_token_len":300,"char_repetition_ratio":0.12958626,"word_repetition_ratio":0.0,"special_character_ratio":0.21167316,"punctuation_ratio":0.09795918,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9958745,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-02T13:25:47Z\",\"WARC-Record-ID\":\"<urn:uuid:4cf39779-f30a-4760-929a-4f0365aacd83>\",\"Content-Length\":\"155040\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a013f4ae-c5a2-4e7e-b54c-ab44358017d9>\",\"WARC-Concurrent-To\":\"<urn:uuid:052a8ab7-ed69-4787-b329-10d9f541906a>\",\"WARC-IP-Address\":\"104.18.43.226\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/1522914/vaughts-essentially-undecidable-set-theory/4290878\",\"WARC-Payload-Digest\":\"sha1:BNREZSI6PLV2WBJGGVDXUK2WQ3BPGGWE\",\"WARC-Block-Digest\":\"sha1:VUIULZNFKNWMPA5JTZ35BVKEVYTY6RKW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100399.81_warc_CC-MAIN-20231202105028-20231202135028-00101.warc.gz\"}"}
https://qims.amegroups.com/article/view/107663/html
[ "Low-dose spectral reconstruction with global, local, and nonlocal priors based on subspace decomposition", null, "Original Article\n\n# Low-dose spectral reconstruction with global, local, and nonlocal priors based on subspace decomposition\n\nXiaohuan Yu1, Ailong Cai1, Lei Li1, Zhiyong Jiao2, Bin Yan1\n\n1Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China; 2Beijing Science and Technology Information Research Center, Beijing, China\n\nContributions: (I) Conception and design: X Yu; (II) Administrative support: A Cai, L Li, B Yan; (III) Provision of study materials or patients: A Cai, L Li; (IV) Collection and assembly of data: X Yu, A Cai; (V) Data analysis and interpretation: X Yu, A Cai, Z Jiao; (VI) Manuscript writing: All authors; (VII) Final approval of manuscript: All authors.\n\nCorrespondence to: Bin Yan. Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, No. 62, Science Avenue, High-tech Zone, Zhengzhou 450001, China. Email: [email protected].\n\nBackground: Multienergy computed tomography (MECT) is a promising imaging modality for material decomposition, lesion detection, and other clinical applications. However, there is an urgent need to design efficient and accurate algorithms to solve the inverse problems related to spectral reconstruction and improve image quality, especially under low-dose and incomplete datasets. The key issue for MECT reconstruction is how to efficiently describe the interchannel and intrachannel priors in multichannel images.\n\nMethods: In this model, in order to correlate the similarities of interchannel images and regularize the multichannel images, the global, local, and nonlocal priors are jointly integrated into the low-dose MECT reconstruction model. 
First, the subspace decomposition employs the global low-rankness to map the original MECT images to low-dimensional eigenimages. Then, the nonlocal self-similarity of the eigenimages is cascaded into the optimization model. Additionally, the L0 quasi-norm on gradient images is incorporated into the proposed method to further enhance the local sparsity of intrachannel images. The alternating direction method is applied to solve the optimization model in an iterative scheme.

Results: Simulation, preclinical, and real datasets were applied to validate the effectiveness of the proposed method. On the simulation dataset, the new method reduced the root-mean-square error (RMSE) by 42.31% compared with the recent fourth-order nonlocal tensor decomposition MECT reconstruction (FONT-SIR) method under 160 projection views. The calculation time of one iteration of the proposed method was 23.07% of that of the FONT-SIR method. The results of material decomposition on real mouse data further confirmed the accuracy of the proposed method for different materials.

Conclusions: We developed a method in which the global, local, and nonlocal priors are jointly used to build the reconstruction model for low-dose MECT, where the global low-rankness and nonlocal prior are cascaded by subspace decomposition and block-matching, and the L0 sparsity is applied to express the local prior. The results of the experiments demonstrate that the proposed subspace-based method improves computational efficiency and has advantages in noise suppression and structure preservation over competing algorithms.

Keywords: Multienergy computed tomography (MECT); image reconstruction; global priors; local and nonlocal priors; subspace decomposition

Submitted Jun 21, 2022. Accepted for publication Dec 02, 2022. 
Published online Jan 05 2023.\n\ndoi: 10.21037/qims-22-647\n\n## Introduction\n\nRecently, multienergy computed tomography (MECT) has attracted increasing attention due to its great potential in medical imaging, especially in medicine and surgery, for its capabilities in quantitative material decomposition (1-3). Dual-energy CT (DECT) is one of the most common uses of MECT and can be implemented with dual-energy fast kilovoltage peak (kVp) switching and dual-layer sandwich detectors (4-7). However, it is limited by the use of energy-integrating detectors. The last decade has witnessed the promising development of a new type of MECT equipped with photon-counting detectors (PCDs), which have an energy separation capability in narrow energy bins (8,9). Nevertheless, image qualities are still unsatisfactory due to the low signal-to-noise ratios caused by photon pile-up, charge sharing, fluorescence effect, and photon scattering, which make the material maps suffer from severe noise (10). Therefore, developing efficient and accurate methods to improve image quality is a growing concern in the MECT field.\n\nHigh-quality reconstruction from noisy and incomplete MECT measurements is a challenging problem. Regularizers encoding the image priors are typically introduced to improve reconstruction stability. For reconstruction in MECT, the popular priors can be categorized as global, local, and nonlocal. Note that the theoretically strict definitions of local and nonlocal priors are unclear. We attempt to classify them according to the ranges of the pixels for computing the regularization function for a target pixel in an image, as detailed in the Appendix 1. With the development of compressed sensing theory (11), many methods based on sparse regularization, such as L0 quasi-norm, total variation (TV) (12), wavelet (13), dictionary learning (14), and other sparse transformations, have been proposed to explore the local property in conventional CT. 
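As a concrete instance of the L0-type sparsity penalties mentioned above, the elementwise problem min_g λ‖g‖₀ + ½‖g − v‖₂² has a closed-form hard-thresholding solution; a minimal NumPy sketch (illustrative only, not any of the cited authors' implementations):

```python
import numpy as np

def hard_threshold(v, lam):
    """Solve min_g  lam * ||g||_0 + 0.5 * ||g - v||_2^2  elementwise.
    Keeping entry v_i costs lam; zeroing it costs 0.5 * v_i^2,
    so v_i survives only when v_i^2 > 2 * lam."""
    g = v.copy()
    g[v**2 <= 2 * lam] = 0.0
    return g

v = np.array([0.05, -0.3, 1.2, -0.01])   # e.g., gradient coefficients
g = hard_threshold(v, lam=0.02)          # threshold magnitude = sqrt(2*lam) = 0.2
# small entries are zeroed, large ones pass through unchanged
```

This is the key difference from the soft-thresholding used for L1 penalties: surviving entries are not shrunk, which is why L0-gradient regularizers preserve edge magnitudes.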
The sparsity-based regularization can also be used as a local prior for each energy bin to reduce image noise in MECT directly. For instance, in 2012, Xu et al. reconstructed the region of interest (ROI) from MECT images by applying a TV penalty (15) to each energy channel. In 2013, Zhao et al. (16) proposed an iterative method based on tight frame (TFIR) to achieve higher image quality for sparse-view spectral breast CT. In 2016, Yu et al. (17) extended prior image constrained compressed sensing (PICCS) with TV regularization to spectral CT imaging (spectral PICCS) by introducing a high-quality spectral mean image to improve image quality. In 2018, Niu et al. (18) combined total generalized variation with a prior image to develop an iterative reconstruction for photon-counting CT, and more recently, in 2020, Wang et al. (19) further used the image-gradient L0 quasi-norm and the PICCS algorithm to design a weight-adaptive method (L0-ASPICCS) for low-dose MECT.

Global low-rankness naturally exists in the MECT images due to the strong correlations of the interchannel. To improve the image quality, the local prior and global low-rankness can be modeled in the MECT reconstruction. More precisely, in 2014, Li et al. (20) established a tensor form reconstruction model containing prior rank, intensity, and sparsity (tPRISM) for MECT, which is an extension of PRISM (21). Semerci et al. (22) used the tensor nuclear norm-based iterative reconstruction method to efficiently measure the similarity of MECT images in the spectral domain. In 2015, Rigie and La Rivière (23) further applied the total nuclear variation to reconstruct MECT images with a convex penalty on common edge positions and a shared gradient direction among different channels.

Over the past decade, the idea of nonlocal self-similarity among image patches has been developed as another powerful tool. 
It has been fully exploited in nonlocal means, block-matching and 3D filtering (BM3D) in image denoising, image recovery, and deblurring (24-26). The research on nonlocal self-similarity also shows promising potential in MECT imaging. In 2015, Kim et al. (27) constructed 3-dimensional patches using self-similarity in the multichannel and carried out sparse-view MECT reconstruction with a low-rank penalty. Moreover, in 2016, as an improved version of dictionary learning, tensor dictionary learning (TDL) was proposed for MECT reconstruction to guarantee effective tissue structure preservation and noise suppression (28). In 2018, Wu et al. (29) combined the L0 quasi-norm with TDL (L0TDL) for low-dose MECT reconstruction, particularly in the sparse view; Niu et al. (30) also use a nonlocal method with low-rank regularization to improve the image quality of MECT. Wu et al. (31) further developed a nonlocal low-rank method based on cube tensor factorization (NLCTF) to obtain better image quality in MECT. More recently, in 2019, Xia et al. (32) proposed a method, called aided by self-similarity in image-spectral tensors (ASSIST), in which each tensor was decomposed into a low-rank component and a sparse component for MECT reconstruction. Hu et al. (33) developed a spectral image similarity-based tensor with an enhanced sparsity reconstruction (SISTER) method by extracting similar blocks that adopted alternating least square-based Candecomp/Parafac (ALS-CP) decomposition (34) to improve the MECT image quality. In 2020, Wu et al. (35) used weight-adaptive TV with spectral tensor factorization in nonlocal prior (WATITF) to improve the image quality of small animal imaging. Zhang et al. (36) combined the CP decomposition with intrinsic tensor sparsity regularization to exploit the nonlocal similarity and expressed spatial sparsity through TV regularization for a PCD-based MECT reconstruction. Chen et al. 
(37) further proposed a fourth-order nonlocal tensor decomposition model (FONT-SIR) for MECT reconstruction in 2021.

Table 1 lists some representative methods, including the various priors mentioned above. The methods described tend to adopt different types of priors. The listed methods have achieved relatively better performance than the traditional analytical and typical iterative methods using purely local priors. However, their image qualities for various clinical applications are still not adequate, and their use of local and nonlocal priors typically requires high computational complexity.

### Table 1

Summary of a few representative MECT reconstruction methods with different priors

Methods Type of prior
Global Local Nonlocal
TFIR (16)
L0-ASPICCS (19)
tPRISM (20)
Semerci et al. (22)
L0TDL (29)
NLCTF (31)
ASSIST (32)
SISTER (33)
WATITF (35)
Zhang et al. (36)
FONT-SIR (37)
The proposed

MECT, multienergy computed tomography; TFIR, tight frame iterative reconstruction; L0-ASPICCS, L0 norm-based adaptive spectral prior image constrained compressed sensing; tPRISM, tensor prior rank, intensity, and sparsity model; L0TDL, L0 norm-based tensor dictionary learning; NLCTF, nonlocal low-rank cube-based tensor factorization method; ASSIST, aided by self-similarity in image-spectral tensors; SISTER, spectral image similarity-based tensor with enhanced sparsity reconstruction; WATITF, weight-adaptive total variation and image-spectral tensor factorization; FONT-SIR, fourth-order nonlocal tensor decomposition model for spectral CT image reconstruction.

We propose a reconstruction model that exploits the complementary merits of the global, local, and nonlocal priors to improve the image quality with lower computational complexity. High correlations exist in different channels of MECT images, which means that the full MECT images can be represented in a low-rank subspace (38,39). 
In this paper, the global and nonlocal regularizations are proposed based on the low-rank subspace decomposition. The most advantageous property of subspace decomposition is that the main features (e.g., detail structures in the image) can be effectively extracted and enhanced, while the global random noise can be suppressed. This method can also improve computational efficiency since the subspace decomposition transfers the global data into a low-rank subspace. In our work, the global low-rankness and nonlocal self-similarity of interchannel images are cascaded through subspace decomposition and nonlocal block-matching. Furthermore, the local spatial sparsity of intrachannel images is also integrated into the model to maintain structure while suppressing noise. The alternating direction method is used to solve the model iteratively.

## Methods

### MECT reconstruction with different priors in parallel

Lowercase letters and boldface lowercase letters (e.g., a and α) are used to denote scalars and vectors, respectively. Matrices and tensors are denoted by capital letters and calligraphic letters (e.g., A and $\mathcal{X}$). $\|\cdot\|_0$ and $\|\cdot\|_2$ are used to represent the L0 quasi-norm and L2 norm of a vector, respectively. $\|\cdot\|_F$ is defined as the Frobenius norm of a matrix. Let $X=[x_1,x_2,...,x_S]'\in\mathbb{R}^{S\times N_p}$ represent the MECT images of all S channels, where each row $x_s$, $s=1,2,...,S$, of X is a vectorized single-channel CT image with $N_p$ pixels. $\mathcal{X}\in\mathbb{R}^{S\times N_p\times N_p}$ denotes the 2-dimensional MECT images of all S channels.

Ignoring the detector response and other effects, the forward projection model is considered as the following discretized linear system at S narrow energy windows in MECT:

$Ax_s+e_s=b_s,\quad s=1,2,...,S$

where $A\in\mathbb{R}^{M\times N_p}$ is the system matrix, $b_s\in\mathbb{R}^{M}$ $(M=U_y\times N_{views})$ stands for the vectorized projections, $U_y$ and $N_{views}$ are the numbers of detector units and projection views, respectively, and $e_s\in\mathbb{R}^{M}$ is the noise in the measured projection. 
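The per-channel linear model $Ax_s + e_s = b_s$ can be illustrated on a toy problem; in the sketch below, the sizes and the random matrix A are purely illustrative (not a real CT system geometry), and plain per-channel least squares serves as the prior-free baseline that the regularized methods improve on:

```python
import numpy as np

rng = np.random.default_rng(0)
S, M, Np = 3, 60, 20             # channels, measurements, pixels (toy sizes only)
A = rng.random((M, Np))          # stand-in system matrix
X_true = rng.random((S, Np))     # one vectorized image x_s per row

# Forward model per channel: b_s = A x_s + e_s
B = X_true @ A.T + 0.01 * rng.standard_normal((S, M))

# Unregularized per-channel least squares (no priors)
X_ls = np.stack([np.linalg.lstsq(A, B[s], rcond=None)[0] for s in range(S)])
rel_err = np.linalg.norm(X_ls - X_true) / np.linalg.norm(X_true)
```

With few views or strong noise, the same least-squares step degrades badly, which is exactly the ill-posedness that motivates the regularization terms introduced next.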
For a given system geometry, the task of image reconstruction for MECT is to find $x_s$ from the observation $b_s$. However, the reconstruction is ill-posed due to the serious noise in low-dose MECT. It is thus necessary to introduce some priors to build regularization terms to improve the stability of solving the inverse problems.

In order to obtain high-quality reconstructed images, prior knowledge of the image itself should be introduced to build a regularization model. In this work, we first propose to combine the local and nonlocal regularizations for the low-dose MECT reconstruction problem. The local term is built by the L0 quasi-norm of image gradients, and the nonlocal term is established by the block-matching frame-based regularizations, expressed as follows:

$\min_{x_s}\frac{1}{2}\sum_{s=1}^{S}\|Ax_s-b_s\|_2^2+\lambda\sum_{s=1}^{S}\|\nabla x_s\|_0+\beta R(X)\quad \text{s.t. } x_s\ge 0$

where X is the group of all-channel CT images, $R(X)$ represents the regularization term on MECT images based on block-matched frames, and $\|\nabla x_s\|_0$ denotes the regularization term on the single-channel gradient image. λ and β are nonnegative parameters to balance the data fidelity and regularization terms.

In order to decouple the variable X, 2 auxiliary variables, $u_s$ and Y, are introduced to reformulate the above problem as follows:

$\min_{x_s}\frac{1}{2}\sum_{s=1}^{S}\|Ax_s-b_s\|_2^2+\lambda\sum_{s=1}^{S}\|\nabla u_s\|_0+\beta R(Y)\quad \text{s.t. } x_s=u_s,\ X=Y,\ x_s\ge 0$

Its corresponding augmented Lagrangian function is the following:

$\frac{1}{2}\sum_{s=1}^{S}\|Ax_s-b_s\|_2^2+\lambda\sum_{s=1}^{S}\|\nabla u_s\|_0+\beta R(Y)+\frac{\lambda_1}{2}\sum_{s=1}^{S}\left\|x_s-u_s+\frac{\Lambda_1}{\lambda_1}\right\|_2^2+\frac{\lambda_2}{2}\left\|X-Y+\frac{\Lambda_2}{\lambda_2}\right\|_2^2$

where $\Lambda_1$ and $\Lambda_2$ are the Lagrange multipliers, and $\lambda_1$ and $\lambda_2$ are nonnegative parameters.

In this paper, the alternating direction method of multipliers (ADMM) is used to solve this problem. The process is detailed in Algorithm 1, and it is used as a comparison algorithm in the experiments section. It should be noted that the Y subproblem actually carries out a nonlocal block-matching denoising process on the input data $Y_{input}=X^l+\frac{\Lambda_2^{l-1}}{\lambda_2}$. 
Due to its effectiveness in suppressing noise, the BM3D algorithm under 3D block-matching frames is selected as a plug-and-play (PnP) module in the reconstruction process (25,40). The algorithm is termed the multienergy BM3D (ME-BM3D) method in this work.

Algorithm 1. ME-BM3D method
Input: parameters $\lambda,\ \lambda_1,\ \lambda_2,\ \beta,\ l_{\max}$, projection data b.
1. Initialization: $x^0=0,\ y^0=0,\ u^0=0,\ \Lambda_1^0=\Lambda_2^0=0,\ l=0$.
2. While not converged or $l\le l_{\max}$:
3. $x^l\in\arg\min_{x_s}\left\{\frac{1}{2}\sum_{s=1}^{S}\|Ax_s-b_s\|_2^2+\frac{\lambda_1}{2}\sum_{s=1}^{S}\left\|x_s-u_s^{l-1}+\frac{\Lambda_1^{l-1}}{\lambda_1}\right\|_2^2+\frac{\lambda_2}{2}\left\|X-Y^{l-1}+\frac{\Lambda_2^{l-1}}{\lambda_2}\right\|_2^2\right\}$
4. $u^l\in\arg\min_{u_s}\left\{\lambda\sum_{s=1}^{S}\|\nabla u_s\|_0+\frac{\lambda_1}{2}\sum_{s=1}^{S}\left\|x_s^l-u_s+\frac{\Lambda_1^{l-1}}{\lambda_1}\right\|_2^2\right\}$
5. $Y^l\in\arg\min_{Y}\left\{\beta R(Y)+\frac{\lambda_2}{2}\left\|X^l-Y+\frac{\Lambda_2^{l-1}}{\lambda_2}\right\|_2^2\right\}$
6. $\Lambda_1^l=\Lambda_1^{l-1}+\lambda_1(x^l-u^l),\quad \Lambda_2^l=\Lambda_2^{l-1}+\lambda_2(X^l-Y^l)$
7. End while
Output: $\mathcal{X}$

Although the ME-BM3D method takes into account the prior knowledge of the spatial and spectral domains to improve the reconstruction quality, it is solved channelwise in practice, resulting in an increased computational time. 
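The splitting in Algorithm 1 follows the standard plug-and-play ADMM pattern: the regularizer R never appears explicitly, and its proximal step is replaced by a denoiser (BM3D in the paper). The following is a minimal single-block sketch of that pattern, with a simple shrinkage operator standing in for BM3D and all names and sizes illustrative:

```python
import numpy as np

def pnp_admm(A, b, denoise, lam2=1.0, iters=50):
    """Schematic PnP-ADMM for min 0.5*||Ax-b||^2 + beta*R(y) s.t. x = y.
    The y step is a plug-and-play denoiser instead of an explicit prox of R."""
    n = A.shape[1]
    x = np.zeros(n); y = np.zeros(n); u = np.zeros(n)   # u = Lambda / lam2
    AtA, Atb = A.T @ A, A.T @ b
    H = np.linalg.inv(AtA + lam2 * np.eye(n))           # fine for small toy problems
    for _ in range(iters):
        x = H @ (Atb + lam2 * (y - u))   # data-fidelity (x) step
        y = denoise(x + u)               # PnP denoising (y) step, e.g. BM3D
        u = u + x - y                    # scaled multiplier update
    return x

# toy usage: a mild shrinkage "denoiser" stands in for BM3D
rng = np.random.default_rng(1)
A = rng.random((30, 10)); x_true = rng.random(10)
b = A @ x_true + 0.01 * rng.standard_normal(30)
x_rec = pnp_admm(A, b, denoise=lambda v: 0.9 * v)
```

In the actual method, the x step is itself solved channelwise with an iterative CT solver rather than a direct matrix inverse, which is exactly the per-channel cost the subspace method in the next subsection avoids.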
Therefore, we further considered the similarity of each channel to design a method based on subspace representation to accelerate MECT reconstruction. Furthermore, it is worth mentioning that the solving methods of the $\mathcal{X}$ and $\mathcal{U}$ subproblems in the ME-BM3D method are consistent with those of the proposed subspace decomposition-based method introduced in the next subsection.

### Cascaded global and nonlocal priors based on subspace decomposition and block-matching

#### Subspace decomposition

Due to the high correlations between different channels of MECT images, we assume that the images X can be approximated by a k-dimensional subspace, where S≥k; that is,

$X=EZ$

where the columns of $E=[e_1,e_2,...,e_k]\in\mathbb{R}^{S\times k}$ are the basis of the subspace $S_k$, and $Z\in\mathbb{R}^{k\times N_p}$ are the coefficients of X under the basis E. Without loss of generality, we simply assume that E is orthogonal; that is, $E^TE=I_k$, where $I_k$ is an identity matrix of dimension k. Different methods can be followed to infer a subspace matrix E from the MECT images, such as the hyperspectral signal identification by minimum error (Hysime) method (41) or singular value decomposition.

Since E is orthogonal, $Z\in\mathbb{R}^{k\times N_p}$ can be obtained by projecting the MECT images X through the subspace E, meaning that $Z=E^TX$. Each row of Z denotes a vectorized image, hereafter referred to as an eigenimage. According to the literature (38,42), the eigenimages can be denoised with nonlocal patch-based methods due to the nonlocal self-similarity of each eigenimage and the correlation between the eigenimages.

#### Regularization combining global, local, and nonlocal priors

Images of MECT are highly self-similar. Furthermore, it can be assumed that every single-energy image exists in the k-dimensional subspace $S_k$, where S≥k. Thus, the mathematical model of the MECT reconstruction can be formulated as follows:

$\min_{x_s,Z,E}\frac{1}{2}\sum_{s=1}^{S}\|Ax_s-b_s\|_2^2+\lambda\sum_{s=1}^{S}\|\nabla x_s\|_0+\beta R(Z)+\frac{\rho}{2}\|X-EZ\|_F^2\quad \text{s.t. } E^TE=I_k,\ x_s\ge 0$

The Frobenius norm term represents the difference between the subspace decomposition EZ and the MECT images X. $R(Z)$ represents the regularization term on the eigenimages, and λ, β, ρ are nonnegative parameters to balance the data fidelity and regularization terms.

The flowchart of the algorithm according to the reconstruction model is presented in Figure 1. Specifically, the model contains the global, local, and nonlocal priors in parallel. First, the subspace decomposition is applied to map the high-dimensional MECT images into a low-dimensional space by using the spectral low-rankness, which corresponds to the fourth term of the objective function in the model. Then, the nonlocal similarities are further exploited on each eigenimage by similar-block matching, which is expressed as $\beta R(Z)$; this step cascades the global low-rankness and the nonlocal priors of interchannel images. Next, the denoised eigenimages are aggregated back into the high-dimensional image space. Finally, the local spatial sparsity is described by L0 quasi-norm regularization, which applies sparsity to each single-channel gradient image.

Figure 1 The flowchart of the proposed algorithm. The input MECT images are processed through the spectral low-rank, nonlocal block-matching denoising and intrachannel sparsity regularization to generate the output of the new iteration. MECT, multienergy computed tomography.

#### Details of the solving scheme

For convenience, a third-order tensor $\mathcal{X}$ is set to represent all channels of CT images. The $\mathcal{X}$ subproblem can be solved through the following optimization:

$\min_{x}\frac{1}{2}\sum_{s=1}^{S}\|Ax_s-b_s\|_2^2+\lambda\sum_{s=1}^{S}\|\nabla x_s\|_0+\frac{\rho}{2}\|X-E^{l-1}Z^{l-1}\|_F^2\quad \text{s.t. } x_s\ge 0$

Then, by introducing an auxiliary variable $u_s$ to replace $x_s$, this problem can be further written in the following constrained form:

$\min_{x,u}\frac{1}{2}\sum_{s=1}^{S}\|Ax_s-b_s\|_2^2+\lambda\sum_{s=1}^{S}\|\nabla u_s\|_0+\frac{\rho}{2}\|X-E^{l-1}Z^{l-1}\|_F^2\quad \text{s.t. } x_s=u_s,\ x_s\ge 0$

Based on the augmented Lagrangian function, this problem can be rewritten as follows:

$\min_{x,u,v}\frac{1}{2}\sum_{s=1}^{S}\|Ax_s-b_s\|_2^2+\lambda\sum_{s=1}^{S}\|\nabla u_s\|_0+\frac{\rho}{2}\|X-E^{l-1}Z^{l-1}\|_F^2+\frac{\eta}{2}\sum_{s=1}^{S}\|x_s-u_s-v_s\|_2^2\quad \text{s.t. } x_s\ge 0$

where $v_s$ is the s-th channel Lagrangian multiplier of $\mathcal{V}\in\mathbb{R}^{N_p\times N_p\times S}$, and η is the nonnegative penalty parameter. Therefore, this problem can be solved via ADMM in the inner loop:

$\arg\min_{x}\ \frac{1}{2}\sum_{s=1}^{S}\|Ax_s-b_s\|_2^2+\frac{\rho}{2}\|X-E^{l-1}Z^{l-1}\|_F^2+\frac{\eta}{2}\sum_{s=1}^{S}\|x_s-u_s^j-v_s^j\|_2^2\quad \text{s.t. } x_s\ge 0$

$\arg\min_{u}\ \lambda\sum_{s=1}^{S}\|\nabla u_s\|_0+\frac{\eta}{2}\sum_{s=1}^{S}\|x_s^{j+1}-u_s-v_s^j\|_2^2$

$v^{j+1}=v^j+u^{j+1}-x^{j+1}$

where the superscripts j and l represent the iterations of the inner and outer loops, respectively. We rearrange X and Z in the subspace term along the energy channel as $x_s$ and $z_s$, and then the $x_s$ problem can be solved using the separate quadratic surrogate method, which is expressed as follows:

$x_s^{j+1}=x_s^j-\frac{c_s^j}{A^TA\mathbf{1}+\rho+\eta}$

where $c_s^j=A^T(Ax_s^j-b_s)+\rho(x_s^j-E^{l-1}z_s^{l-1})+\eta(x_s^j-u_s^j-v_s^j)$, and the long division stands for a pixelwise operation. Meanwhile, the ordered-subset simultaneous algebraic reconstruction technique (OS-SART) (43) is used to accelerate the implementation for $x_s$. The $u_s$ problem is solved by an approximation algorithm described previously (29). Then, the final output $x_s^{j_{max}}\ (s=1,2,...,S)$ of the inner loop serves as the l-th iteration $X^l$ of the outer loop. The algorithm for solving the $\mathcal{X}$ subproblem is summarized in Algorithm 2.

Algorithm 2. ADMM for solving the $\mathcal{X}$ subproblem
Input: parameters $\rho,\ \eta,\ \lambda,\ j_{max}$, projection data b.
1. Initialization: $x^0=u^0=v^0=0,\ j=0$.
2. While not converged or $j\le j_{max}$:
3. Update $x^{j+1}$ via the quadratic surrogate step above.
4. Update $u^{j+1}$ via the L0-regularized step above.
5. Update $v^{j+1}$ via the multiplier update above.
6. End while
Output: $x^l:=x^{j+1}$

For the E subproblem, the corresponding problem is the following:

$E^l=\arg\min_{E,\ E^TE=I_k}\frac{\rho}{2}\|X^l-EZ^{l-1}\|_F^2=L(\xi)S(\xi)^T$

where $L(\xi)$ and $S(\xi)$ are the left and right singular vectors of the matrix $\xi=\rho X^l(Z^{l-1})^T$, respectively (44).

The Z subproblem is transformed into the following optimization scheme:

$Z^l=\arg\min_{Z}\ \beta R(Z)+\frac{\rho}{2}\|X^l-E^lZ\|_F^2=\arg\min_{Z}\ \beta R(Z)+\frac{\rho}{2}\|Z-(E^l)^TX^l\|_F^2$

In this paper, the regularization term on Z acts as a denoiser on the eigenimages rather than on the multienergy CT images. We selected BM3D as the denoiser under the flexible PnP framework. The overall algorithm is summarized in Algorithm 3.

Algorithm 3. The proposed method for MECT reconstruction
Input: parameters $\rho,\ \beta,\ l_{\max}$, projection data b.
1. Initialization: $x^0=0,\ E^0=0,\ Z^0=0,\ l=0$.
2. While not converged or $l\le l_{\max}$:
3. Update $X^l$ via Algorithm 2.
4. Update $E^{l+1}$ via the E subproblem above.
5. Update $Z^{l+1}$ via the Z subproblem above.
6. Compute $X^{l+1}=E^{l+1}Z^{l+1}$.
7. End while
8. Rearrange the vectorized images $X^{l_{\max}}$ into a third-order tensor $\mathcal{X}^{l_{\max}}$.
Output: $\mathcal{X}$

There are 3 parameters in the model to balance the data fidelity term, the L0 quasi-norm, and the subspace representation term. However, finding theoretically optimal values is a complex problem. Therefore, we selected them empirically according to experiments. The effect of these parameters on the experiments is presented in the Discussion section in detail. In addition, the $\mathcal{X}$ subproblem contains another parameter, $\eta$, to choose in practice. Its selection was also guided by empirical study. 
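The closed-form E update is an instance of the orthogonal Procrustes problem; a small NumPy sketch of the SVD-based solution (toy sizes and names are illustrative):

```python
import numpy as np

def update_basis(X, Z):
    """Orthogonal Procrustes step: argmin_E ||X - E Z||_F s.t. E^T E = I_k,
    solved as E = L S^T from the SVD of xi = X Z^T.
    (A positive scalar factor rho on xi does not change L or S.)"""
    xi = X @ Z.T
    L, _, St = np.linalg.svd(xi, full_matrices=False)
    return L @ St

rng = np.random.default_rng(2)
S_ch, k, Np = 8, 3, 100                               # channels, subspace dim, pixels
E_true = np.linalg.qr(rng.standard_normal((S_ch, k)))[0]  # orthonormal columns
Z = rng.standard_normal((k, Np))
X = E_true @ Z                                        # exactly low-rank multichannel data
E = update_basis(X, Z)
# recovered basis is orthogonal and reproduces X exactly in this noiseless case
assert np.allclose(E.T @ E, np.eye(k))
assert np.allclose(E @ Z, X)
```

Because the update depends only on the singular vectors of $\xi$, the $\rho$ factor in the text drops out numerically, which is why it is omitted in the sketch.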
We set its value according to $\tau \sum_{i_1 i_2}[A^T A]_{i_1 i_2} / \sum_{j=1}^{N_p}[u_s + v_s]_j$, where $[\cdot]_j$ denotes an element of a matrix or vector, and $\tau$ is a nonnegative tuning parameter that was set to 5.7 in the implementation after multiple trials.

## Results

To evaluate the performance of the proposed method, we conducted experiments on simulated Moby data, more realistic chest data, and real mouse data. The multienergy OS-SART (ME-OS-SART) (43), multienergy BM3D (ME-BM3D; described in Algorithm 1), SISTER (33), WATITF (35), and FONT-SIR (37) methods were considered for comparison. The root-mean-square error (RMSE), structural similarity index (SSIM) (45), and peak signal-to-noise ratio (PSNR) were employed for quantitative evaluation. To further verify the effectiveness of the proposed method, image-domain material decomposition was carried out for the real mouse data.

According to the reconstruction model, there are 3 parameters associated with the inter- and intrachannel regularization terms, which are used to balance the fidelity term. For the different datasets (i.e., simulated Moby data, preclinical chest data, and real mouse data), the parameters were not completely consistent because their selection was related not only to the scanning geometry but also to the data conditions. In addition, the implementation source code of the comparison methods [SISTER (33), WATITF (35), and FONT-SIR (37)] was provided by the relevant authors, and their parameters were empirically optimized according to the experimental comparisons and data conditions to ensure a fair comparison. All initial images were set to zero for all methods in the experiments. The number of ordered subsets was set to 10 for all methods that used this acceleration technique. In addition, the number of iterations was set to 100 for the proposed method.
The number of similar patches was 16, with a patch size of 8×8, and the search window and stride for patch extraction were 39 and 3, respectively.

### Numerical simulation tests

We first constructed a simulated Moby phantom with a size of 256×256, in which each pixel was 0.15 mm × 0.15 mm. To perform multienergy CT reconstruction, the X-ray energy was divided into 8 bins: [16–22), [22–25), [25–28), [28–31), [31–34), [34–37), [37–41), and [41–50) keV. The distances from the source to the object and to the detector were 132 and 180 mm, respectively. The number of detector units was 512, and the size of each was 0.1 mm. The projection views were set to 80 and 160 and were evenly distributed over a 360° range along a circular trajectory. To account for the noise caused by physical effects such as scattering, pulse pileup, or charge sharing, and in view of the law of large numbers and the central limit theorem, random noise was added to the projection data as follows:

$$\tilde{P}_s = P_s + n_\sigma \max(P_s) \cdot n_s, \quad n_s \sim N(0,1),\ s = 1, 2, ..., S$$

where $P_s$ and $\tilde{P}_s$ are the clean and noisy single-energy projections, respectively, $n_s$ is random noise with a standard normal distribution, and $n_\sigma$ is the strength parameter, which was set to 4/255 here1. To clearly show the structure of the Moby data, the reconstructed images of all methods were oriented horizontally and presented as 30×215 pixels.

Figures 2,3 show the reconstruction results of the fourth, sixth, and eighth channels for the different methods under 160 and 80 projection views, where columns (A) to (G) represent the images of the reference, ME-OS-SART, ME-BM3D, WATITF, FONT-SIR, SISTER, and the proposed method, respectively. The ROIs denoted by the yellow dashed squares in Figure 2 (A1) and Figure 3 (A1) are magnified below the corresponding reconstructed images. As shown in Figures 2,3, the ME-OS-SART results had the lowest image quality due to the method's lack of denoising ability, making it difficult to identify the detailed structures.
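The additive projection-noise model described above can be reproduced in a few lines; a sketch with illustrative sinogram dimensions (the array sizes and function name are ours):

```python
import numpy as np

def add_projection_noise(P, n_sigma=4 / 255, seed=0):
    """Noisy sinogram P~ = P + n_sigma * max(P) * n, with n ~ N(0, 1)
    drawn elementwise, as in the simulation study."""
    rng = np.random.default_rng(seed)
    return P + n_sigma * P.max() * rng.standard_normal(P.shape)

# Illustrative clean sinogram: 160 views x 512 detector bins.
P = np.abs(np.random.default_rng(1).standard_normal((160, 512)))
P_noisy = add_projection_noise(P)
assert P_noisy.shape == P.shape
# The empirical noise STD closely matches n_sigma * max(P).
assert abs((P_noisy - P).std() - (4 / 255) * P.max()) < 1e-3
```

The noise strength scales with the per-channel projection maximum, so brighter channels receive proportionally stronger perturbations.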
The ME-BM3D method greatly improved image quality but was limited in preserving image details and fine structures. The WATITF, FONT-SIR, and SISTER methods obtained better image quality than the above methods in terms of suppressing noise and preserving structure. However, as pointed out by the orange arrows in the ROIs of Figures 2,3 (G3), some missing details were still observed. The proposed method generated high-quality images with subtler detail preservation than SISTER and better noise suppression than the methods mentioned above. In addition, to verify the robustness of the proposed method against different noise levels, another set of high-noise experiments was conducted. Figure 4 shows the relevant results: in the areas indicated by the orange arrows, the proposed method maintained its advantages in noise suppression, edge preservation, and structure preservation even when the noise level was higher.

Figure 2 Simulated Moby data reconstruction results of different methods under 160 projection views when nσ=4/255. Columns (A) to (G) represent the images of reference, ME-OS-SART, ME-BM3D, WATITF, FONT-SIR, SISTER, and the proposed method. The first 2 rows represent the fourth energy bin and its corresponding ROIs, the middle 2 rows represent the sixth energy bin and its corresponding ROIs, and the bottom 2 rows represent the eighth energy bin and its corresponding ROIs. The display windows of the 3 energy bins are [0.01 0.1], [0.01 0.07], and [0.01 0.06] mm–1, respectively. The second column shows the SART results of the low-dose data, which gives a sense of the intensity of the noise.
ME-OS-SART, multienergy ordered subset-based simultaneous algebraic reconstruction technique; ME-BM3D, multienergy-based block-matching and 3D filtering; WATITF, weight-adaptive total variation and image-spectral tensor factorization; FONT-SIR, fourth-order nonlocal tensor decomposition model for spectral CT image reconstruction; SISTER, spectral image similarity-based tensor with enhanced sparsity reconstruction; ROI, region of interest; SART, simultaneous algebraic reconstruction technique.

Figure 3 Simulated Moby data reconstruction results of different methods under 80 projection views when nσ=4/255. Columns (A) to (G) represent the images of reference, ME-OS-SART, ME-BM3D, WATITF, FONT-SIR, SISTER, and the proposed method. The first 2 rows represent the fourth energy bin and its corresponding ROIs, the middle 2 rows represent the sixth energy bin and its corresponding ROIs, and the bottom 2 rows represent the eighth energy bin and its corresponding ROIs. The display windows of the 3 energy bins are [0.01 0.1], [0.01 0.07], and [0.01 0.06] mm–1, respectively. ME-OS-SART, multienergy ordered subset-based simultaneous algebraic reconstruction technique; ME-BM3D, multienergy-based block-matching and 3D filtering; WATITF, weight-adaptive total variation and image-spectral tensor factorization; FONT-SIR, fourth-order nonlocal tensor decomposition model for spectral CT image reconstruction; SISTER, spectral image similarity-based tensor with enhanced sparsity reconstruction; ROI, region of interest.

Figure 4 Simulated Moby data reconstruction results of different methods under 160 projection views when nσ=20/255. Columns (A) to (G) represent the images of reference, ME-OS-SART, ME-BM3D, WATITF, FONT-SIR, SISTER, and the proposed method.
The first 2 rows represent the fourth energy bin and its corresponding ROIs, the middle 2 rows represent the sixth energy bin and its corresponding ROIs, and the bottom 2 rows represent the eighth energy bin and its corresponding ROIs. The display windows of the 3 energy bins are [0.01 0.1], [0.01 0.07], and [0.01 0.06] mm–1, respectively. ME-OS-SART, multienergy ordered subset-based simultaneous algebraic reconstruction technique; ME-BM3D, multienergy-based block-matching and 3D filtering; WATITF, weight-adaptive total variation and image-spectral tensor factorization; FONT-SIR, fourth-order nonlocal tensor decomposition model for spectral CT image reconstruction; SISTER, spectral image similarity-based tensor with enhanced sparsity reconstruction; ROI, region of interest.

The quantitative assessments of RMSE, PSNR, SSIM, and computational time under 160 views are listed in Table 2. Due to lack of space, we list the quantitative results of only 4 channels for the different methods. These results show that the indices of the eighth channel are better than those of the other 3 channels. Compared with the other methods, the proposed method achieved the best RMSE, PSNR, and SSIM values in each channel. Specifically, taking the eighth channel as an example, the proposed method reduced the RMSE by 65.91%, 48.28%, 42.31%, 42.31%, and 16.67% compared with the ME-OS-SART, ME-BM3D, WATITF, FONT-SIR, and SISTER methods, respectively. The SSIM and PSNR of the proposed method reached 0.9974 and 56.2138 dB, respectively, which were also higher than those of the other methods. Compared with ME-BM3D, WATITF, FONT-SIR, and SISTER, the proposed method had a lower computational cost per iteration, its calculation time being 90.25%, 42.04%, 23.07%, and 16.42% of that of these 4 block-matching-based methods, respectively.
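The relative reductions quoted above follow the formula (RMSE_other − RMSE_proposed)/RMSE_other; a quick arithmetic check against the channel-8 RMSE values, hard-coded here from Table 2:

```python
# Channel-8 RMSE values taken from Table 2.
rmse_ch8 = {"ME-OS-SART": 0.0044, "ME-BM3D": 0.0029, "WATITF": 0.0026,
            "FONT-SIR": 0.0026, "SISTER": 0.0018}
proposed = 0.0015

for method, rmse in rmse_ch8.items():
    reduction = 100 * (rmse - proposed) / rmse
    print(f"{method}: {reduction:.2f}%")
# ME-OS-SART: 65.91%, ME-BM3D: 48.28%, WATITF: 42.31%,
# FONT-SIR: 42.31%, SISTER: 16.67%
```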
The reason the ME-BM3D, WATITF, FONT-SIR, and SISTER approaches are more time-consuming is that they extract similar image patches from all channels of the MECT images, while the proposed method only clusters these patches in the channels of the eigenimages. Both WATITF and the proposed method have 3 priors in the model, but the increase in priors does not imply a corresponding increase in the complexity of our algorithm: the former constructs low-rank cube-based tensors over all channels, whereas the latter extracts similar patches after dimensionality reduction. By cascading the global and local priors, the global noise can be suppressed and the main structures further strengthened, which reduces the computational time and avoids having to balance the global and nonlocal priors. The partial line profiles and their enlarged ROIs, drawn from the 120th pixel to the 140th pixel along the white dashed line in Figure 3 (A1), are plotted in Figure 5. The proposed method generated more accurate line profiles than did the other methods.
As marked by the green arrows, the complex structures of the image produce fluctuations of the line profiles in the vertical direction, which means that the proposed method is more sensitive to changes in the detailed structures.

### Table 2

Quantitative evaluation and computational time of different methods under 160 views

| Methods | Index | Channel 2 | Channel 4 | Channel 6 | Channel 8 |
|---|---|---|---|---|---|
| ME-OS-SART | RMSE | 0.0141 | 0.0081 | 0.0061 | 0.0044 |
| | PSNR | 37.0362 | 41.7938 | 44.3267 | 47.1280 |
| | SSIM | 0.8556 | 0.9365 | 0.9587 | 0.9750 |
| | Time | 5.34 seconds (for 1 step iteration) | | | |
| ME-BM3D | RMSE | 0.0068 | 0.0043 | 0.0035 | 0.0029 |
| | PSNR | 43.4139 | 47.4269 | 49.1147 | 50.8172 |
| | SSIM | 0.9798 | 0.9882 | 0.9897 | 0.9911 |
| | Time | 7.69 seconds (for 1 step iteration) | | | |
| WATITF | RMSE | 0.0082 | 0.0046 | 0.0034 | 0.0026 |
| | PSNR | 41.7495 | 46.7385 | 49.3562 | 51.7662 |
| | SSIM | 0.9817 | 0.9889 | 0.9911 | 0.9931 |
| | Time | 16.51 seconds (for 1 step iteration) | | | |
| FONT-SIR | RMSE | 0.0074 | 0.0043 | 0.0035 | 0.0026 |
| | PSNR | 42.6170 | 47.3813 | 49.2002 | 51.8192 |
| | SSIM | 0.9815 | 0.9892 | 0.9909 | 0.9929 |
| | Time | 30.08 seconds (for 1 step iteration) | | | |
| SISTER | RMSE | 0.0039 | 0.0022 | 0.0019 | 0.0018 |
| | PSNR | 48.1268 | 53.1683 | 54.3427 | 55.1248 |
| | SSIM | 0.9918 | 0.9961 | 0.9966 | 0.9967 |
| | Time | 42.27 seconds (for 1 step iteration) | | | |
| Proposed | RMSE | 0.0036 | 0.0020 | 0.0019 | 0.0015 |
| | PSNR | 48.8537 | 54.0383 | 54.4660 | 56.2138 |
| | SSIM | 0.9929 | 0.9968 | 0.9969 | 0.9974 |
| | Time | 6.94 seconds (for 1 step iteration) | | | |

ME-OS-SART, multienergy ordered subset-based simultaneous algebraic reconstruction technique; ME-BM3D, multienergy-based block-matching and 3D filtering; WATITF, weight-adaptive total variation and image-spectral tensor factorization; FONT-SIR, fourth-order nonlocal tensor decomposition model for spectral CT image reconstruction; SISTER, spectral image similarity-based tensor with enhanced sparsity reconstruction; RMSE, root-mean-square error; PSNR, peak signal-to-noise ratio; SSIM, structural similarity index.

Figure 5 Line profiles of 8 channels between different methods under 80 projection views.
Rows from top to bottom represent the results of the 8 channels. ME-OS-SART, multienergy ordered subset-based simultaneous algebraic reconstruction technique; ME-BM3D, multienergy-based block-matching and 3D filtering; WATITF, weight-adaptive total variation and image-spectral tensor factorization; FONT-SIR, fourth-order nonlocal tensor decomposition model for spectral CT image reconstruction; SISTER, spectral image similarity-based tensor with enhanced sparsity reconstruction.

### Preclinical dataset study

The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013). A public preclinical chest dataset was further chosen to evaluate the performance of the proposed subspace method in more realistic medical application scenarios. The data had been processed by physicians and contained less noise than the actual raw data. The tube voltage was 120 kV, and the energy bins were set to [72–80), [78–86), [84–92), [90–98), [96–104), [102–110), [108–116), and [114–120) keV. The size of the reconstructed images was 512×512, with each pixel 0.921 mm × 0.921 mm. The number of detector bins was 1024, each with a size of 0.69 mm. The distances from the source to the object and to the detector were 1,000 and 1,500 mm, respectively. Furthermore, the noise level nσ was set to 0.0382 for these data. To further verify the effectiveness of the proposed method in MECT reconstruction, we conducted experiments on the preclinical data with 160 and 80 projection views. The reconstruction results of the SART method based on 720 noise-free projection views were taken as the MECT reference for the subsequent evaluation. Similar to the simulated experiments, we show 3 energy channels of the reconstructed images at 15×475 pixels horizontally.
An ROI [denoted in Figure 6 (A1), labeled I] in the tissue of the chest was magnified to assess structure preservation across the different algorithms, while another ROI [also denoted in Figure 6 (A1), labeled II] was chosen for a quantitative evaluation of the mean value of the attenuation coefficients and the standard deviation (STD), which was calculated as follows:

$$STD = \sqrt{\frac{1}{N_{roi}}\sum_{r=1}^{N_{roi}}(x_r - \bar{x})^2}$$

where $x_r$ denotes the value of the $r$-th pixel, and $\bar{x}$ is the precomputed mean value over all $N_{roi}$ image pixels of the selected ROI.

Figure 6 The preclinical dataset reconstruction results of different methods under 160 views. Columns (A) to (F) represent the methods of ME-OS-SART, ME-BM3D, WATITF, FONT-SIR, SISTER, and the proposed method, respectively. Rows 1 to 3 represent the energy bins [90–98) keV, [102–110) keV, and [114–120) keV, and the display windows are [0.01 0.11], [0.01 0.07], and [0.01 0.05] mm−1, respectively. ME-OS-SART, multienergy ordered subset-based simultaneous algebraic reconstruction technique; ME-BM3D, multienergy-based block-matching and 3D filtering; WATITF, weight-adaptive total variation and image-spectral tensor factorization; FONT-SIR, fourth-order nonlocal tensor decomposition model for spectral CT image reconstruction; SISTER, spectral image similarity-based tensor with enhanced sparsity reconstruction.

The reconstructed images and their corresponding enlarged ROIs for 160 projection views obtained by ME-OS-SART, ME-BM3D, WATITF, FONT-SIR, SISTER, and the proposed method are shown in Figure 6, where columns (A) to (F) represent the different approaches, and rows 1 to 3 represent the fourth, sixth, and eighth channels, respectively. Furthermore, the difference images and the ROIs [denoted in Figure 6 (A1)] are shown in Figure 7. Figure 6 and Figure 7 (A1)–(A3) show obvious noise in the ME-OS-SART reconstructions, whereas the other methods in Figure 6 and Figure 7 (B1)–(E3) show noise suppression.
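The ROI statistics above (mean attenuation and the population-form STD) reduce to a few lines; a sketch in which the image and ROI mask are illustrative stand-ins, not the paper's data:

```python
import numpy as np

def roi_mean_std(image, mask):
    """Mean attenuation and STD over an ROI, following
    STD = sqrt((1/N_roi) * sum_r (x_r - mean)^2)."""
    vals = image[mask]
    mean = vals.mean()
    std = np.sqrt(((vals - mean) ** 2).mean())
    return mean, std

# Illustrative 64x64 "channel" image with a small structure in the ROI.
img = np.full((64, 64), 0.097)
img[20:30, 20:30] += 0.01
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
mean, std = roi_mean_std(img, mask)
assert np.isclose(mean, img[mask].mean())
assert np.isclose(std, img[mask].std())  # numpy's default ddof=0 matches
```

Note that the formula divides by $N_{roi}$ (not $N_{roi}-1$), which is NumPy's default `std` behavior (`ddof=0`).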
However, the zoomed-in ROIs show that the ME-BM3D, WATITF, FONT-SIR, and SISTER methods still failed to recover tissue details and edges. Compared with these methods, the proposed method was superior in fine-structure preservation and noise suppression. In addition, the results for 80 views, shown in Figure 8 and Figure 9, also demonstrate that the proposed method reconstructed higher-quality images than did the other methods.

Figure 7 The preclinical dataset difference images of different methods under 160 views. Columns (A) to (F) represent the methods of ME-OS-SART, ME-BM3D, WATITF, FONT-SIR, SISTER, and the proposed method, respectively. Rows 1 to 3 represent the energy bins [90–98) keV, [102–110) keV, and [114–120) keV, respectively. The display windows are [–0.15 0.15], [–0.1 0.1], and [–0.06 0.06] mm–1, respectively. ME-OS-SART, multienergy ordered subset-based simultaneous algebraic reconstruction technique; ME-BM3D, multienergy-based block-matching and 3D filtering; WATITF, weight-adaptive total variation and image-spectral tensor factorization; FONT-SIR, fourth-order nonlocal tensor decomposition model for spectral CT image reconstruction; SISTER, spectral image similarity-based tensor with enhanced sparsity reconstruction.

Figure 8 The preclinical dataset reconstruction results of different methods under 80 views. Columns (A) to (F) represent the methods of ME-OS-SART, ME-BM3D, WATITF, FONT-SIR, SISTER, and the proposed method, respectively. Rows 1 to 3 represent the energy bins [90–98) keV, [102–110) keV, and [114–120) keV, and the display windows are [0.01 0.11], [0.01 0.07], and [0.01 0.05] mm–1, respectively.
ME-OS-SART, multienergy ordered subset-based simultaneous algebraic reconstruction technique; ME-BM3D, multienergy-based block-matching and 3D filtering; WATITF, weight-adaptive total variation and image-spectral tensor factorization; FONT-SIR, fourth-order nonlocal tensor decomposition model for spectral CT image reconstruction; SISTER, spectral image similarity-based tensor with enhanced sparsity reconstruction.

Figure 9 The preclinical dataset difference images of different methods under 80 views. Columns (A) to (F) represent the methods of ME-OS-SART, ME-BM3D, WATITF, FONT-SIR, SISTER, and the proposed method, respectively. Rows 1 to 3 represent the energy bins [90–98) keV, [102–110) keV, and [114–120) keV, respectively. The display windows are [–0.15 0.15], [–0.1 0.1], and [–0.06 0.06] mm–1, respectively. ME-OS-SART, multienergy ordered subset-based simultaneous algebraic reconstruction technique; ME-BM3D, multienergy-based block-matching and 3D filtering; WATITF, weight-adaptive total variation and image-spectral tensor factorization; FONT-SIR, fourth-order nonlocal tensor decomposition model for spectral CT image reconstruction; SISTER, spectral image similarity-based tensor with enhanced sparsity reconstruction.

Table 3 lists the quantitative evaluation of the different methods under 80 projection views, where the mean value was measured to assess the accuracy of the results, and the STD value was used to evaluate the noise suppression ability of the different methods. The STD values of the proposed method, marked with a superscript asterisk, were closer to those of the reference images than were those of the other methods in each energy channel, indicating that the proposed method is superior in suppressing noise.
The proposed method had mean values similar to those of the other methods, indicating the accuracy of its reconstruction results.

### Table 3

Means and STDs of different methods for the preclinical dataset under 80 views

| Methods | Metric | Channel 1 | Channel 2 | Channel 3 | Channel 4 | Channel 5 | Channel 6 | Channel 7 | Channel 8 |
|---|---|---|---|---|---|---|---|---|---|
| Reference | Mean value | 0.4070 | 0.1952 | 0.1319 | 0.0970 | 0.0764 | 0.0610 | 0.0506 | 0.0394 |
| | STD | 0.3895 | 0.1781 | 0.1148 | 0.0766 | 0.0527 | 0.0390 | 0.0300 | 0.0205 |
| ME-OS-SART | Mean value | 0.4049 | 0.1955 | 0.1324 | 0.0972 | 0.0769 | 0.0611 | 0.0507 | 0.0395 |
| | STD | 0.2596 | 0.1193 | 0.0764 | 0.0510 | 0.0343 | 0.0258 | 0.0204 | 0.0145 |
| ME-BM3D | Mean value | 0.4032 | 0.1953 | 0.1322 | 0.0972 | 0.0767 | 0.0611 | 0.0506 | 0.0394 |
| | STD | 0.3613 | 0.1664 | 0.1049 | 0.0694 | 0.0466 | 0.0343 | 0.0264 | 0.0177 |
| WATITF | Mean value | 0.4053 | 0.1949 | 0.1318 | 0.0967 | 0.0763 | 0.0609 | 0.0504 | 0.0393 |
| | STD | 0.3161 | 0.1453 | 0.0942 | 0.0630 | 0.0441 | 0.0330 | 0.0258 | 0.0182 |
| FONT-SIR | Mean value | 0.3966 | 0.1938 | 0.1311 | 0.0970 | 0.0776 | 0.0614 | 0.0509 | 0.0406 |
| | STD | 0.2948 | 0.1401 | 0.0908 | 0.0610 | 0.0422 | 0.0310 | 0.0244 | 0.0187 |
| SISTER | Mean value | 0.4029 | 0.1952 | 0.1319 | 0.0971 | 0.0766 | 0.0610 | 0.0506 | 0.0394 |
| | STD | 0.3315 | 0.1532 | 0.0982 | 0.0652 | 0.0446 | 0.0331 | 0.0257 | 0.0175 |
| Proposed | Mean value | 0.4051 | 0.1950 | 0.1316 | 0.0967 | 0.0765 | 0.0611 | 0.0506 | 0.0396 |
| | STD | 0.3757* | 0.1730* | 0.1118* | 0.0746* | 0.0511* | 0.0384* | 0.0301* | 0.0219* |

*, the indices closest to the reference values. ME-OS-SART, multienergy ordered subset-based simultaneous algebraic reconstruction technique; ME-BM3D, multienergy-based block-matching and 3D filtering; WATITF, weight-adaptive total variation and image-spectral tensor factorization; FONT-SIR, fourth-order nonlocal tensor decomposition model for spectral CT image reconstruction; SISTER, spectral image similarity-based tensor with enhanced sparsity reconstruction; STD, standard deviation.

### Real mouse data study

A mouse was scanned with a MARS micro spectral CT system (28,31), which included a micro X-ray source and a flat-panel PCD.
The study was approved by the Ethics Committee of PLA Strategic Support Force Information Engineering University and was conducted in compliance with the laboratory animal guideline for the ethical review of animal welfare. The distances from the source to the object and to the PCD were 158 and 255 mm, respectively. The length of the PCD was 56.32 mm, comprising 512 pixels and resulting in a field of view with a diameter of 34.69 mm. Gold nanoparticles (GNPs) were injected into the mouse as the contrast agent. Because the PCD has only 2 energy bins, multiple scans were performed with increasing radiation dose to obtain 13 channels over 371 views. We extracted the projections of the central slice to reconstruct each channel image with a size of 512×512 in this experiment. We chose 360 and 180 projection views evenly distributed within the 371-view range to verify the proposed method for low-dose spectral CT reconstruction.

Figure 10 shows the reconstruction results of 3 representative energy bins (1st, 7th, 13th) obtained by the different methods under 360 views. The first column shows the images reconstructed by ME-OS-SART, which contain severe noise, making it difficult to distinguish some soft-tissue details. Columns (B) to (E) represent the reconstructions of ME-BM3D, WATITF, FONT-SIR, and SISTER, in which much of the noise has been suppressed. However, in the magnified regions denoted by the yellow dashed lines in Figure 10 (A1), the 2 ROIs reconstructed by ME-BM3D, WATITF, FONT-SIR, and SISTER show a lower ability to preserve bone structures compared with the proposed method. Figure 11 shows the reconstructions of the different methods under 180 views, with similar results.

Figure 10 The real mouse reconstruction results of different methods under 360 views. Columns (A) to (F) represent the images of ME-OS-SART, ME-BM3D, WATITF, FONT-SIR, SISTER, and the proposed method, respectively. Rows 1 to 3 represent the 1st, 7th, and 13th energy bins, respectively.
The display windows are [0 0.08], [0 0.07], and [0 0.07] mm–1, respectively. ME-OS-SART, multienergy ordered subset-based simultaneous algebraic reconstruction technique; ME-BM3D, multienergy-based block-matching and 3D filtering; WATITF, weight-adaptive total variation and image-spectral tensor factorization; FONT-SIR, fourth-order nonlocal tensor decomposition model for spectral CT image reconstruction; SISTER, spectral image similarity-based tensor with enhanced sparsity reconstruction.

Figure 11 The real mouse reconstruction results of different methods under 180 views. Columns (A) to (F) represent the images of ME-OS-SART, ME-BM3D, WATITF, FONT-SIR, SISTER, and the proposed method, respectively. Rows 1 to 3 represent the 1st, 7th, and 13th energy bins, respectively. The display windows are [0 0.08], [0 0.07], and [0 0.07] mm–1, respectively. ME-OS-SART, multienergy ordered subset-based simultaneous algebraic reconstruction technique; ME-BM3D, multienergy-based block-matching and 3D filtering; WATITF, weight-adaptive total variation and image-spectral tensor factorization; FONT-SIR, fourth-order nonlocal tensor decomposition model for spectral CT image reconstruction; SISTER, spectral image similarity-based tensor with enhanced sparsity reconstruction.

Furthermore, material decomposition depends on the reconstructed image quality: the better the image quality, the easier the task of postprocessing material decomposition. The first 3 rows in Figure 12 and Figure 13 show the 3 decomposed basis materials: bone, soft tissue, and GNP. The last row shows the color rendering, with red, green, and blue representing the 3 materials mentioned above. The decomposition results also demonstrate the effectiveness of the proposed method, which provided continuous triangular areas of GNP contrast agent (denoted by the red box) with clear image edges.
In addition, the bone material decomposed by the proposed method had a more complete and distinct shape, and the probability of its misclassification as GNP was lower than with the other comparison methods.

Figure 12 The real mouse material decomposition results of different methods under 360 views. Columns (A) to (F) represent the images of ME-OS-SART, ME-BM3D, WATITF, FONT-SIR, SISTER, and the proposed method, respectively. Rows 1 to 3 represent the bone, soft tissue, and GNP, respectively. The fourth-row images are the corresponding color renderings, where red, green, and blue represent the above basis materials. The display windows are [0 0.5], [0 1], and [0 0.5] cm–1, respectively. ME-OS-SART, multienergy ordered subset-based simultaneous algebraic reconstruction technique; ME-BM3D, multienergy-based block-matching and 3D filtering; WATITF, weight-adaptive total variation and image-spectral tensor factorization; FONT-SIR, fourth-order nonlocal tensor decomposition model for spectral CT image reconstruction; SISTER, spectral image similarity-based tensor with enhanced sparsity reconstruction; GNP, gold nanoparticles.

Figure 13 The real mouse material decomposition results of different methods under 180 views. Columns (A) to (F) represent the images of ME-OS-SART, ME-BM3D, WATITF, FONT-SIR, SISTER, and the proposed method, respectively. Rows 1 to 3 represent the bone, soft tissue, and GNP, respectively. The fourth-row images are the corresponding color renderings, where red, green, and blue represent the above basis materials. The display windows are [0 0.5], [0 1], and [0 0.5] cm–1, respectively.
ME-OS-SART, multienergy ordered subset-based simultaneous algebraic reconstruction technique; ME-BM3D, multienergy-based block-matching and 3D filtering; WATITF, weight-adaptive total variation and image-spectral tensor factorization; FONT-SIR, fourth-order nonlocal tensor decomposition model for spectral CT image reconstruction; SISTER, spectral image similarity-based tensor with enhanced sparsity reconstruction; GNP, gold nanoparticles.

## Discussion

This section describes in detail some important factors that influence the implementation of the proposed algorithm, including the dimension of the subspace, the selection of the parameters, and the effectiveness of eigenimage denoising.

### Effects of the dimensions of subspace

The influence of the subspace dimension k on MECT reconstruction was examined with the simulated Moby data under 160 projection views. Since the data have 8 energy channels, the subspace dimension was set to 2, 3, 4, 5, 6, and 7. As shown in Figure 14A, although the quantitative results for the different subspace dimensions were similar, the RMSE curve was lowest when k=3. Furthermore, we also explored the computational costs as the subspace dimension varied, which are shown in Figure 14B. There was no significant increase in computing time even when the subspace dimension was increased. Therefore, we set the subspace dimension to 3 in our simulation experiments according to the RMSE curves and computational costs. As in the simulated Moby experiments, the choice of subspace dimension for the preclinical and real data also varied with the corresponding energy windows. Considering the computational time and imaging quality, k could be set to 3, 4, or 5; in practice, we again chose 3.

Figure 14 Some descriptions of the proposed method. (A) RMSE curves of different dimensions of subspace.
(B) Computational costs (unit: seconds) for 1 iteration with different dimensions of subspace. (C) RMSE curves with different values of regularization parameter β. (D) Convergence behaviors of 3 methods (ME-OS-SART, ME-BM3D, and the proposed method). ME-OS-SART, multienergy ordered subset-based simultaneous algebraic reconstruction technique; ME-BM3D, multienergy-based block-matching and 3D filtering; RMSE, root-mean-square error.

### Effects of the selection of parameters

There are three parameters in our reconstruction model: λ balances the intrachannel gradient-image sparsity prior, β is the interchannel regularization coefficient, and ρ is the nonnegative penalty parameter. A theoretical analysis of the selection of these parameters would be interesting, but we usually make empirical choices based on the data conditions. In this paper, these parameters lie within the range of $10^{-4}$ to $10^{4}$. The RMSE curves for different values of β in the simulation experiments are shown in Figure 14C. Since the ground truth was not available for the preclinical and real datasets, we optimized these parameters according to the data conditions and image-quality assessments by several observers. The selected values of λ, β, and ρ for the different datasets are listed in Table 4.

### Table 4

The parameter values for 3 datasets

| Datasets | λ | β | ρ |
|---|---|---|---|
| Simulation | $1.8\times 10^{-4}$ | 2.9 | 1.1 |
| Preclinical | $1.8\times 10^{-4}$ | 2.9 | 1.1 |
| Real mouse | $1.8\times 10^{-6}$ | 10 | 1.1 |

### Effects of eigenimages Z denoising

β is the regularization parameter of the eigenimages Z, which are not directly associated with the original MECT images. To verify the effectiveness of the proposed method in denoising the eigenimages Z, we chose ME-OS-SART and ME-BM3D as comparison methods, where the ME-BM3D method was applied to the MECT images reconstructed by ME-OS-SART. The convergence curves of the 3 methods are shown in Figure 14D.
Figure 14D shows that denoising the eigenimages was more effective than applying denoising directly to the MECT images.

In addition, there is still potential to improve the reconstructed image quality; for example, some detailed structures seem to be lost in the enlarged areas of the proposed method shown in Figure 2 and Figure 3. The appearance of abnormal spikes in channels 6–8 in Figure 5 also indicates that subtle noise was present in some regions, and so how to balance noise suppression and structure preservation needs further consideration. Furthermore, the inter- and intrachannel regularization terms only consider the sparsity of the gradient image and the correlations of the multichannel images; richer priors could be explored by integrating a deep denoising network into the MECT reconstruction model, as deep learning has certain advantages in medical image analysis (46-50).

## Conclusions

In this paper, we propose a method that integrates global, local, and nonlocal priors for low-dose MECT reconstruction, in which the global low-rankness and nonlocal priors are cascaded through subspace decomposition and block-matching frames. Subspace representation is used to map the original MECT images to a low-dimensional space, and the eigenimages are denoised by BM3D, which greatly reduces the computational complexity. The L0 quasi-norm is further applied to exploit the local spatial sparsity of the intrachannel images. The model is then solved iteratively by the alternating minimization method. Compared with state-of-the-art methods, the simulation, preclinical, and real data experiments verified that the proposed method improves denoising performance and detail preservation.

## Acknowledgments

The authors are grateful to the anonymous reviewers for their valuable comments. The authors are also grateful to Dr. Weiwen Wu and Dr.
Shaoyu Wang for supplying the real mouse dataset.

Funding: This work was supported by the National Natural Science Foundation of China (No. 62101596) and the National Key Research and Development Program of China (No. 2020YFC1522002). This work was also supported by the China Postdoctoral Science Foundation (No. 2019M663996).

## Footnote

Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://qims.amegroups.com/article/view/10.21037/qims-22-647/coif). The authors have no conflicts of interest to declare.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013). The study was approved by the Ethics Committee of PLA Strategic Support Force Information Engineering University, in compliance with the laboratory animal guideline for ethical review of animal welfare.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.

¹Due to the complexity of the noise mechanism in the imaging process, we chose the random noise method to simulate the noise in low-dose projections. It was further found that the level of random noise approximated 1.4×10³ photons in Poisson noise.

## References

1.
Bouleti C, Baudry G, Iung B, Arangalage D, Abtan J, Ducrocq G, Steg PG, Vahanian A, Henry-Feugeas MC, Pasi N, Chillon S, Pitocco F, Laissy JP, Ou P. Usefulness of Late Iodine Enhancement on Spectral CT in Acute Myocarditis. JACC Cardiovasc Imaging 2017;10:826-7. [Crossref] [PubMed]
2. Lu X, Lu Z, Yin J, Gao Y, Chen X, Guo Q. Effects of radiation dose levels and spectral iterative reconstruction levels on the accuracy of iodine quantification and virtual monochromatic CT numbers in dual-layer spectral detector CT: an iodine phantom study. Quant Imaging Med Surg 2019;9:188-200. [Crossref] [PubMed]
3. Zhang T, Yu H, Xi Y, Wang S, Liu F. Spectral CT Image-domain Material Decomposition via Sparsity Residual Prior and Dictionary Learning. IEEE Trans Instrum Meas 2022. doi: 10.1109/TIM.2022.3221120.
4. Graser A, Johnson TR, Chandarana H, Macari M. Dual energy CT: preliminary observations and potential clinical applications in the abdomen. Eur Radiol 2009;19:13-23. [Crossref] [PubMed]
5. Zou Y, Silver MD. Analysis of fast kV-switching in dual energy CT using a pre-reconstruction decomposition technique. Medical Imaging 2008: Physics of Medical Imaging. International Society for Optics and Photonics 2008;6913:691313.
6. Huang X, Gao S, Ma Y, Lu X, Jia Z, Hou Y. The optimal monoenergetic spectral image level of coronary computed tomography (CT) angiography on a dual-layer spectral detector CT with half-dose contrast media. Quant Imaging Med Surg 2020;10:592-603. [Crossref] [PubMed]
7. Wen Q, Yue Y, Shang J, Lu X, Gao L, Hou Y. The application of dual-layer spectral detector computed tomography in solitary pulmonary nodule identification. Quant Imaging Med Surg 2021;11:521-32. [Crossref] [PubMed]
8. Taguchi K, Iwanczyk JS. Vision 20/20: Single photon counting x-ray detectors in medical imaging. Med Phys 2013;40:100901. [Crossref] [PubMed]
9. Shikhaliev PM, Fritz SG.
Photon counting spectral CT versus conventional CT: comparative evaluation for breast imaging application. Phys Med Biol 2011;56:1905-30. [Crossref] [PubMed]
10. Leng S, Yu L, Wang J, Fletcher JG, Mistretta CA, McCollough CH. Noise reduction in spectral CT: reducing dose and breaking the trade-off between image noise and energy bin selection. Med Phys 2011;38:4946-57. [Crossref] [PubMed]
11. Candès EJ, Romberg J, Tao T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans Inf Theory 2006;52:489-509. [Crossref]
12. Sidky EY, Pan X. Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization. Phys Med Biol 2008;53:4777-807. [Crossref] [PubMed]
13. Dong B, Li J, Shen Z. X-ray CT image reconstruction via wavelet frame based regularization and Radon domain inpainting. J Sci Comput 2013;54:333-49. [Crossref]
14. Xu Q, Yu H, Mou X, Zhang L, Hsieh J, Wang G. Low-dose X-ray CT reconstruction via dictionary learning. IEEE Trans Med Imaging 2012;31:1682-97. [Crossref] [PubMed]
15. Xu Q, Yu H, Bennett J, He P, Zainon R, Doesburg R, Opie A, Walsh M, Shen H, Butler A, Butler P, Mou X, Wang G. Image reconstruction for hybrid true-color micro-CT. IEEE Trans Biomed Eng 2012;59:1711-9. [Crossref] [PubMed]
16. Zhao B, Gao H, Ding H, Molloi S. Tight-frame based iterative image reconstruction for spectral breast CT. Med Phys 2013;40:031905. [Crossref] [PubMed]
17. Yu Z, Leng S, Li Z, McCollough CH. Spectral prior image constrained compressed sensing (spectral PICCS) for photon-counting computed tomography. Phys Med Biol 2016;61:6707-32. [Crossref] [PubMed]
18. Niu S, Zhang Y, Zhong Y, Liu G, Lu S, Zhang X, Hu S, Wang T, Yu G, Wang J. Iterative reconstruction for photon-counting CT using prior image constrained total generalized variation. Comput Biol Med 2018;103:167-82. [Crossref] [PubMed]
19. Wang S, Wu W, Feng J, Liu F, Yu H.
Low-dose spectral CT reconstruction based on image-gradient L0-norm and adaptive spectral PICCS. Phys Med Biol 2020;65:245005. [Crossref] [PubMed]
20. Li L, Chen Z, Wang G, Chu J, Gao H. A tensor PRISM algorithm for multi-energy CT reconstruction and comparative studies. J Xray Sci Technol 2014;22:147-63. [Crossref] [PubMed]
21. Gao H, Yu H, Osher S, Wang G. Multi-energy CT based on a prior rank, intensity and sparsity model (PRISM). Inverse Probl 2011. [Crossref] [PubMed]
22. Semerci O, Hao N, Kilmer ME, Miller EL. Tensor-based formulation and nuclear norm regularization for multienergy computed tomography. IEEE Trans Image Process 2014;23:1678-93. [Crossref] [PubMed]
23. Rigie DS, La Rivière PJ. Joint reconstruction of multi-channel, spectral CT data via constrained total nuclear variation minimization. Phys Med Biol 2015;60:1741-62. [Crossref] [PubMed]
24. Buades A, Coll B, Morel JM. A non-local algorithm for image denoising. IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2005;2:60-5.
25. Dabov K, Foi A, Katkovnik V, Egiazarian K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans Image Process 2007;16:2080-95. [Crossref] [PubMed]
26. Xu J, Zhang L, Zuo W, Zhang D, Feng X. Patch group based nonlocal self-similarity prior learning for image denoising. Proceedings of the IEEE International Conference on Computer Vision 2015:244-52.
27. Kim K, Ye JC, Worstell W, Ouyang J, Rakvongthai Y, El Fakhri G, Li Q. Sparse-view spectral CT reconstruction using spectral patch-based low-rank penalty. IEEE Trans Med Imaging 2015;34:748-60. [Crossref] [PubMed]
28. Zhang Y, Mou X, Wang G, Yu H. Tensor-Based Dictionary Learning for Spectral CT Reconstruction. IEEE Trans Med Imaging 2017;36:142-54. [Crossref] [PubMed]
29. Wu W, Zhang Y, Wang Q, Liu F, Chen P, Yu H. Low-dose spectral CT reconstruction using image gradient ℓ0-norm and tensor dictionary. Appl Math Model 2018;63:538-57.
[Crossref] [PubMed]
30. Niu S, Yu G, Ma J, Wang J. Nonlocal low-rank and sparse matrix decomposition for spectral CT reconstruction. Inverse Probl 2018;34:024003. [Crossref] [PubMed]
31. Wu W, Liu F, Zhang Y, Wang Q, Yu H. Non-Local Low-Rank Cube-Based Tensor Factorization for Spectral CT Reconstruction. IEEE Trans Med Imaging 2019;38:1079-93. [Crossref] [PubMed]
32. Xia W, Wu W, Niu S, Liu F, Zhou J, Yu H, Zhang Y. Spectral CT reconstruction—ASSIST: Aided by self-similarity in image-spectral tensors. IEEE Trans Comput Imaging 2019;5:420-36. [Crossref]
33. Hu D, Wu W, Xu M, Zhang Y, Liu J, Ge R, Coatrieux G. SISTER: Spectral-image similarity-based tensor with enhanced-sparsity reconstruction for sparse-view multi-energy CT. IEEE Trans Comput Imaging 2019;6:477-90.
34. Kolda TG, Bader BW. Tensor decompositions and applications. SIAM Rev 2009;51:455-500. [Crossref]
35. Wu W, Hu D, An K, Wang S, Luo F. A high-quality photon-counting CT technique based on weight adaptive total-variation and image-spectral tensor factorization for small animals imaging. IEEE Trans Instrum Meas 2020;70:1-14.
36. Zhang W, Liang N, Wang Z, Cai A, Wang L, Tang C, Zheng Z, Li L, Yan B, Hu G. Multi-energy CT reconstruction using tensor nonlocal similarity and spatial sparsity regularization. Quant Imaging Med Surg 2020;10:1940-60. [Crossref] [PubMed]
37. Chen X, Xia W, Liu Y, Chen H, Zhou J, Zhang Y. Fourth-Order Nonlocal Tensor Decomposition Model For Spectral Computed Tomography. 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI) 2021:1841-5.
38. Zhuang L, Bioucas-Dias JM. Fast hyperspectral image denoising and inpainting based on low-rank and sparse representations. IEEE J Sel Top Appl Earth Obs Remote Sens 2018;11:730-42. [Crossref]
39. Lin J, Huang TZ, Zhao XL, Jiang TX, Zhuang L. A tensor subspace representation-based method for hyperspectral image denoising. IEEE Trans Geosci Remote Sens 2020;59:7739-57. [Crossref]
40. Venkatakrishnan SV, Bouman CA, Wohlberg B.
Plug-and-play priors for model based reconstruction. 2013 IEEE Global Conference on Signal and Information Processing 2013:945-8.
41. Bioucas-Dias JM, Nascimento JMP. Hyperspectral subspace identification. IEEE Trans Geosci Remote Sens 2008;46:2435-45. [Crossref]
42. Zhuang L, Fu X, Ng MK, Bioucas-Dias JM. Hyperspectral image denoising based on global and nonlocal low-rank factorizations. IEEE Trans Geosci Remote Sens 2021;59:10438-54. [Crossref]
43. Wang G, Jiang M. Ordered-subset simultaneous algebraic reconstruction techniques (OS-SART). J Xray Sci Technol 2004;12:169-77.
44. Cao C, Yu J, Zhou C, Hu K, Xiao F, Gao X. Hyperspectral image denoising via subspace-based nonlocal low-rank and sparse factorization. IEEE J Sel Top Appl Earth Obs Remote Sens 2019;12:973-88. [Crossref]
45. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 2004;13:600-12. [Crossref] [PubMed]
46. Wu W, Hu D, Cong W, Shan H, Wang S, Niu C, Yan P, Yu H, Vardhanabhuti V, Wang G. Stabilizing deep tomographic reconstruction: Part A. Hybrid framework and experimental results. Patterns (N Y) 2022;3:100474.
47. Wu W, Hu D, Cong W, Shan H, Wang S, Niu C, Yan P, Yu H, Vardhanabhuti V, Wang G. Stabilizing deep tomographic reconstruction: Part B. Convergence analysis and adversarial attacks. Patterns (N Y) 2022;3:100475.
48. Wu W, Hu D, Niu C, Broeke LV, Butler APH, Cao P, Atlas J, Chernoglazov A, Vardhanabhuti V, Wang G. Deep learning based spectral CT imaging. Neural Netw 2021;144:342-58. [Crossref] [PubMed]
49. Zhang W, Zhou Z, Gao Z, Yang G, Xu L, Wu W, Zhang H. Multiple Adversarial Learning based Angiography Reconstruction for Ultra-low-dose Contrast Medium CT. IEEE J Biomed Health Inform 2022. Epub ahead of print. [Crossref] [PubMed]
50. Wu W, Hu D, Niu C, Yu H, Vardhanabhuti V, Wang G. DRONE: Dual-Domain Residual-based Optimization NEtwork for Sparse-View CT Reconstruction. IEEE Trans Med Imaging 2021;40:3002-14.
[Crossref] [PubMed]

Cite this article as: Yu X, Cai A, Li L, Jiao Z, Yan B. Low-dose spectral reconstruction with global, local, and nonlocal priors based on subspace decomposition. Quant Imaging Med Surg 2023;13(2):889-911. doi: 10.21037/qims-22-647
https://excelhub.org/how-to-use-excel-index-function/
## Introduction

The INDEX function in Excel is a lookup/reference function. It returns a value or a reference from a table or array. It uses a row and a column position to retrieve the value, and it can return a whole row, a whole column, or a single cell. It is often used in combination with the MATCH function, where MATCH acts as a feeder for INDEX. INDEX has two forms: Array and Reference.

## Array form

In the Array form, the first argument is an array, given as a range of cells or an array constant.

### SYNTAX

INDEX(array, row_num, col_num)

### Arguments

- Array: a range of cells or an array constant.
- Row_num: the row number in the array from which a value is to be returned.
- Column_num: the column number in the array from which a value is to be returned.

### KEYNOTES

- If both a row and a column number are entered, the function returns the value at the intersection of that row and column.
- If 0 is entered for row_num, the function returns the values of the entire column.
- If 0 is entered for column_num, the function returns the values of the entire row.

## Reference form

In the Reference form, the first argument is a reference to one range or to several ranges.

### SYNTAX

INDEX(reference, row_num, col_num, area_num)

### Arguments

- Reference: a reference to one or more ranges.
- Row_num: the row number in the range from which a value is to be returned.
- Column_num: the column number in the range from which a value is to be returned.
- Area_num: determines which range is used when multiple ranges are supplied.

### KEYNOTES

- The Reference form returns the reference of the cell at the intersection of row_num and column_num.
- For multiple ranges, area_num determines which range is applied.
- Area_num is given as a number.

## Examples

In the following example, a list of products with their numbers and prices is shown; they are
then indexed into a separate table.
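The semantics above can be mimicked in a few lines of Python. This is an analogy of the Array form only (1-based indices as in Excel, with 0 meaning "whole column/row"); the function name and the sample product table are ours, not from the article.

```python
def excel_index(array, row_num, col_num=0):
    """A Python analogue of Excel's INDEX, array form.
    row_num and col_num are 1-based, as in Excel.
    row_num == 0 returns the entire column; col_num == 0 returns the entire row."""
    if row_num == 0:
        return [row[col_num - 1] for row in array]   # entire column
    if col_num == 0:
        return array[row_num - 1]                    # entire row
    return array[row_num - 1][col_num - 1]           # single cell

products = [["Apple", 101, 1.50],
            ["Pear",  102, 2.00],
            ["Plum",  103, 2.75]]

excel_index(products, 2, 3)   # price of the product in row 2 -> 2.00
excel_index(products, 0, 1)   # whole first column -> ["Apple", "Pear", "Plum"]
```

This mirrors the formula `=INDEX(A1:C3, 2, 3)` on the same three-row table.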
https://www.convertunits.com/from/joule/metre/to/piconewton
## Convert joule/metre to piconewton

How many joule/metre in 1 piconewton? The answer is 1.0E-12.
We assume you are converting between joule/metre and piconewton.
The SI derived unit for force is the newton.
1 newton is equal to 1 joule/metre, or 1000000000000 piconewton.
Note that rounding errors may occur, so always check the results.
Use this page to learn how to convert between joules/meter and piconewtons.

## Quick conversion chart of joule/metre to piconewton

1 joule/metre to piconewton = 1000000000000 piconewton

2 joule/metre to piconewton = 2000000000000 piconewton

3 joule/metre to piconewton = 3000000000000 piconewton

4 joule/metre to piconewton = 4000000000000 piconewton

5 joule/metre to piconewton = 5000000000000 piconewton

6 joule/metre to piconewton = 6000000000000 piconewton

7 joule/metre to piconewton = 7000000000000 piconewton

8 joule/metre to piconewton = 8000000000000 piconewton

9 joule/metre to piconewton = 9000000000000 piconewton

10 joule/metre to piconewton = 10000000000000 piconewton

## Definition: Piconewton

The SI prefix "pico" represents a factor of 10⁻¹², or in exponential notation, 1E-12.

So 1 piconewton = 10⁻¹² newtons.

The definition of a newton is as follows:

In physics, the newton (symbol: N) is the SI unit of force, named after Sir Isaac Newton in recognition of his work on classical mechanics.
It was first used around 1904, but not until 1948 was it officially adopted by the General Conference on Weights and Measures (CGPM) as the name for the MKS unit of force.

## Metric conversions and more

ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data. Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres squared, grams, moles, feet per second, and many more!
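Because 1 J/m is exactly 1 N and "pico" denotes 10⁻¹², the conversion above is a single multiplication by 10¹². A small sketch (function names are ours):

```python
# 1 joule/metre is exactly 1 newton, and the SI prefix "pico" is 1e-12,
# so 1 J/m = 1e12 pN.
PICONEWTON_PER_JOULE_PER_METRE = 1e12

def joule_per_metre_to_piconewton(x):
    """Convert a force in J/m (i.e., newtons) to piconewtons."""
    return x * PICONEWTON_PER_JOULE_PER_METRE

def piconewton_to_joule_per_metre(x):
    """Convert a force in piconewtons back to J/m."""
    return x / PICONEWTON_PER_JOULE_PER_METRE

joule_per_metre_to_piconewton(3)   # -> 3000000000000.0, matching the chart above
```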
https://jp.maplesoft.com/support/help/Maple/view.aspx?path=StudyGuides%2FCalculus%2FChapter5%2FApplicationsOfIntegration%2FExamples%2FSection5-6%2FExample5-6-8
Example 5-6-8 - Maple Help

Chapter 5: Applications of Integration

Section 5.6: Differential Equations

Example 5.6.8

A species undergoes logistic growth, governed by the formula developed in Example 5.6.7. Observation yields the following three data points.

| Time in years | Population size |
| --- | --- |
| 1 | 1300 |
| 3 | 1870 |
| 4 | 2070 |

Determine the carrying capacity c, the initial population y₀, and the rate constant k, if it is known that k > 0.

For more information on Maplesoft products and services, visit www.maplesoft.com
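One way to solve Example 5.6.8 by hand (not necessarily Maple's route) uses the fact that the reciprocal of a logistic curve y(t) = c/(1 + A e^(-kt)) is linear in r^t with r = e^(-k): u(t) = 1/y(t) = 1/c + B r^t, B = A/c. For the observation times 1, 3, 4 this reduces to a quadratic in r. A pure-Python sketch (the function name is ours, and the data are rounded, so the recovered parameters come out as "almost nice" numbers):

```python
import math

def fit_logistic_134(y1, y3, y4):
    """Recover c, y0, k in y(t) = c / (1 + A*exp(-k*t)), A = (c - y0)/y0,
    from observations at t = 1, 3, 4.  Writing u(t) = 1/y(t) = 1/c + B*r**t
    with r = exp(-k) and B = A/c gives
        (u1 - u3) / (u3 - u4) = (1 + r) / r**2,
    a quadratic in r that can be solved in closed form."""
    u1, u3, u4 = 1.0 / y1, 1.0 / y3, 1.0 / y4
    R = (u1 - u3) / (u3 - u4)
    r = (1 + math.sqrt(1 + 4 * R)) / (2 * R)   # positive root of R*r^2 - r - 1 = 0
    k = -math.log(r)                           # k > 0 requires 0 < r < 1
    B = (u3 - u4) / (r**3 * (1 - r))
    c = 1.0 / (u1 - B * r)                     # from u(1) = 1/c + B*r
    y0 = c / (1 + B * c)                       # y(0) = c / (1 + A)
    return c, y0, k

c, y0, k = fit_logistic_134(1300, 1870, 2070)
```

By construction the fitted curve passes exactly through the three data points; with this data the parameters come out roughly c ≈ 2.45×10³, y₀ ≈ 9.8×10², and k ≈ 0.52.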
https://www.selfstudys.com/mcq/cbse/mock-test/class-10th/maths-chapter-8-introduction-to-trigonometry
## CBSE Class 10 Mathematics Chapter 8 Introduction to Trigonometry Online MCQ Test

Available tests: Introduction to Trigonometry Test - 71, 70, 69, 68, 67, 66, 65, 64, 63, 62, 61, 60, 59, 58, 57, 56, 55, 54, 53, 52, 51, and 50.

Each test follows the same format:

Duration: 10 Mins

Maximum Marks: 10

1. The test contains 10 total questions.

2. Each question has 4 options out of which only one is correct.

3. You have to finish the test in 10 minutes.

4. You will be awarded 1 mark for each correct answer.

5. You can view your Score & Rank after submitting the test.

6. Check detailed Solution with explanation after submitting the test.

7. Rank is calculated on the basis of Marks Scored & Time
Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 49\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 48\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 47\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 46\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. 
You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 45\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 44\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 43\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 42\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. 
The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 41\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 40\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 39\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. 
Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 38\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 37\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 36\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 35\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. 
You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 34\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 33\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 32\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 31\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. 
The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 30\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 29\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 28\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. 
Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 27\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 26\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 25\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 24\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. 
You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 23\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 22\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 21\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 20\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. 
The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 19\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks: 10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. You will be awarded 1 mark for each correct answer.\n\n5. You can view your Score & Rank after submitting the test.\n\n6. Check detailed Solution with explanation after submitting the test.\n\n7. Rank is calculated on the basis of Marks Scored & Time\n\n• Introduction to Trigonometry Test - 18\n\nIntroduction to Trigonometry\n\nDuration: 15 Mins\n\nMaximum Marks: 15\n\n1. The test contains 15 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 15 minutes.\n\n4. There is No Negative marking.\n\n5. You will be awarded 1 mark for each correct answer.\n\n6. You can view your Score & Rank after submitting the test.\n\n7. Check detailed Solution with explanation after submitting the test.\n\n8. Rank is calculated on the basis of Marks Scored & Time.\n\n• Introduction to Trigonometry Test - 17\n\nIntroduction to Trigonometry\n\nDuration: 20 Mins\n\nMaximum Marks: 15\n\n1. The test contains 15 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 20 minutes.\n\n4. There is No Negative marking.\n\n5. You will be awarded 1 mark for each correct answer.\n\n6. You can view your Score & Rank after submitting the test.\n\n7. 
Check detailed Solution with explanation after submitting the test.\n\n8. Rank is calculated on the basis of Marks Scored & Time.\n\n• Introduction to Trigonometry Test - 16\n\nIntroduction to Trigonometry\n\nDuration: 25 Mins\n\nMaximum Marks: 15\n\n1. The test contains 15 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 25 minutes.\n\n4. There is No Negative marking.\n\n5. You will be awarded 1 mark for each correct answer.\n\n6. You can view your Score & Rank after submitting the test.\n\n7. Check detailed Solution with explanation after submitting the test.\n\n8. Rank is calculated on the basis of Marks Scored & Time.\n\n• Introduction to Trigonometry Test - 15\n\nIntroduction to Trigonometry\n\nDuration: 15 Mins\n\nMaximum Marks:15\n\n1. The test contains 15 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 15 minutes.\n\n4. There is No Negative marking.\n\n5. You will be awarded 1 mark for each correct answer.\n\n6. You can view your Score & Rank after submitting the test.\n\n7. Check detailed Solution with explanation after submitting the test.\n\n8. Rank is calculated on the basis of Marks Scored & Time.\n\n• Introduction to Trigonometry Test - 14\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks:10\n\n1. The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. There is No Negative marking.\n\n5. You will be awarded 1 mark for each correct answer.\n\n6. You can view your Score & Rank after submitting the test.\n\n7. Check detailed Solution with explanation after submitting the test.\n\n8. Rank is calculated on the basis of Marks Scored & Time.\n\n• Introduction to Trigonometry Test - 13\n\nIntroduction to Trigonometry\n\nDuration: 10 Mins\n\nMaximum Marks:10\n\n1. 
The test contains 10 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 10 minutes.\n\n4. There is No Negative marking.\n\n5. You will be awarded 1 mark for each correct answer.\n\n6. You can view your Score & Rank after submitting the test.\n\n7. Check detailed Solution with explanation after submitting the test.\n\n8. Rank is calculated on the basis of Marks Scored & Time.\n\n• Introduction to Trigonometry Test - 12\n\nIntroduction to Trigonometry\n\nDuration: 20 Mins\n\nMaximum Marks: 15\n\n1. The test contains 15 total questions.\n\n2. Each question has 4 options out of which only one is correct.\n\n3. You have to finish the test in 20 minutes.\n\n4. There is No Negative marking.\n\n5. You will be awarded 1 mark for each correct answer.\n\n6. You can view your Score & Rank after submitting the test.\n\n7. Check detailed Solution with explanation after submitting the test.\n\n8. Rank is calculated on the basis of Marks Scored & Time.\n\n• Introduction to Trigonometry Test - 11\n\nIntroduction to Trigonometry\n\n1. Unlimited Test Time. Test Timer starts from 0\n\n2. Each question is of 1 marks.\n\n3. No Negative Marking\n\n4. You can view your Score & Rank after submitting the test.\n\n5. Rank is calculated on the basis of Marks Scored & Time\n• Introduction to Trigonometry Test - 10\n\nIntroduction to Trigonometry\n\n1. Unlimited Test Time. Test Timer starts from 0\n\n2. Each question is of 1 marks.\n\n3. No Negative Marking\n\n4. You can view your Score & Rank after submitting the test.\n\n5. Rank is calculated on the basis of Marks Scored & Time\n• Introduction to Trigonometry Test - 9\n\nIntroduction to Trigonometry\n\n1. Unlimited Test Time. Test Timer starts from 0\n\n2. Each question is of 1 marks.\n\n3. No Negative Marking\n\n4. You can view your Score & Rank after submitting the test.\n\n5. 
Rank is calculated on the basis of Marks Scored & Time\n• Introduction to Trigonometry Test - 8\n\nIntroduction to Trigonometry\n\n1. Unlimited Test Time. Test Timer starts from 0\n\n2. Each question is of 1 marks.\n\n3. No Negative Marking\n\n4. You can view your Score & Rank after submitting the test.\n\n5. Rank is calculated on the basis of Marks Scored & Time\n• Introduction to Trigonometry Test - 7\n\nIntroduction to Trigonometry\n\n1. Unlimited Test Time. Test Timer starts from 0\n\n2. Each question is of 1 marks.\n\n3. No Negative Marking\n\n4. You can view your Score & Rank after submitting the test.\n\n5. Rank is calculated on the basis of Marks Scored & Time\n• Introduction to Trigonometry Test - 6\n\nIntroduction to Trigonometry\n\n1. Unlimited Test Time. Test Timer starts from 0\n\n2. Each question is of 1 marks.\n\n3. No Negative Marking\n\n4. You can view your Score & Rank after submitting the test.\n\n5. Rank is calculated on the basis of Marks Scored & Time\n• Introduction to Trigonometry Test - 5\n\nIntroduction to Trigonometry\n\n1. Unlimited Test Time. Test Timer starts from 0\n\n2. Each question is of 1 marks.\n\n3. No Negative Marking\n\n4. You can view your Score & Rank after submitting the test.\n\n5. Rank is calculated on the basis of Marks Scored & Time\n• Introduction to Trigonometry Test - 4\n\nIntroduction to Trigonometry\n\n1. Unlimited Test Time. Test Timer starts from 0\n\n2. Each question is of 1 marks.\n\n3. No Negative Marking\n\n4. You can view your Score & Rank after submitting the test.\n\n5. Rank is calculated on the basis of Marks Scored & Time\n• Introduction to Trigonometry Test - 3\n\nIntroduction to Trigonometry\n\n1. Unlimited Test Time. Test Timer starts from 0\n\n2. Each question is of 1 marks.\n\n3. No Negative Marking\n\n4. You can view your Score & Rank after submitting the test.\n\n5. Rank is calculated on the basis of Marks Scored & Time\n• Introduction to Trigonometry Test - 2\n\nIntroduction to Trigonometry\n\n1. 
Unlimited Test Time. Test Timer starts from 0\n\n2. Each question is of 1 marks.\n\n3. No Negative Marking\n\n4. You can view your Score & Rank after submitting the test.\n\n5. Rank is calculated on the basis of Marks Scored & Time\n• Introduction to Trigonometry Test - 1\n\nIntroduction to Trigonometry\n\n1. Unlimited Test Time. Test Timer starts from 0\n\n2. Each question is of 1 marks.\n\n3. No Negative Marking\n\n4. You can view your Score & Rank after submitting the test.\n\n5. Rank is calculated on the basis of Marks Scored & Time\n\n# CBSE MCQ Test Class 10 Mathematics Chapter 8 Introduction to Trigonometry Online For Free\n\nCBSE MCQ Test Class 10 Mathematics Chapter 8 Introduction to Trigonometry is an extremely helpful resource for the board candidates. It is the quickest and smartest way to check the preparation level for the board exams. This is important to understand that the board will ask the questions as per the splitted syllabus of Mathematics Chapter 8 Introduction to Trigonometry. So, we have also developed the mock test accordingly.\n\nMock Test Class 10 Mathematics Chapter 8 Introduction to Trigonometry 2021 is available here on this platform. Students can attempt these papers to start their exam preparations and get the taste of board exam MCQs questions that will be asked in Term 1.\n\nThese papers are developed by the subject matter experts who have years of experience. However, without relying on the expertise level, they have developed these materials by considering the exam pattern, types of questions, its difficulty levels, and marks distributions. Practicing these mock test papers will not only help in practicing but in familiarizing with the actual board papers. Therefore, the class 10th mock test papers are advised to the candidates to refer and practice them as much as possible.\n\n## Practice Mock Test Class 10 Mathematics Chapter 8 Introduction to Trigonometry (Online)\n\nPractice makes perfect. 
Every single student has heard of it. Therefore, to help students practice Mathematics Chapter 8 Introduction to Trigonometry questions, we have developed this Class 10 Practice Test, which can be attempted here online.

By practicing it, students become aware of their weakest and strongest portions of Mathematics Chapter 8 Introduction to Trigonometry. There is no need to be afraid of this subject: the given solutions work like a tool that helps students correct and improve all of those mistakes on their own.

Practicing these mock tests will help candidates clear all their doubts and become confident for the final exams.

## Class 10 Mathematics Chapter 8 Introduction to Trigonometry Mock Test Chapter Wise

Chapter-wise CBSE Class 10 Mathematics Chapter 8 Introduction to Trigonometry Mock Tests can boost exam preparation, because they consist of questions from each chapter according to the NCERT textbooks and the ongoing academic curriculum. Class 10 Mathematics Chapter 8 Introduction to Trigonometry has many complex and lengthy topics, so there is no substitute for practicing chapter-wise mock test papers to master this subject.

Mock test papers generally consist of MCQ questions, so they should be solved regularly so that students can perform well in the upcoming board exams. To know the CBSE chapter-wise weightage, students can refer to the new CBSE Syllabus 2021, although an idea of the CBSE board's 50 per cent syllabus can also be gained from the mock test series of Mathematics Chapter 8 Introduction to Trigonometry.

## Importance of Class 10 Mathematics Chapter 8 Introduction to Trigonometry Mock Test

Class 10 Mathematics Chapter 8 Introduction to Trigonometry Mock Test is extremely important for candidates who are going to appear in the upcoming tenth board exams.
These practice papers are developed after rigorous research and analysis of the current year's sample papers and previous years' question papers of the CBSE Board Exams 2021.

• Mock tests let you practice a variety of questions from each chapter along with the answers.
• They teach time management.
• One of their biggest benefits is familiarity with the types of questions and their marks distribution.

Mock Test For CBSE Class 10th:

• MCQ Test For CBSE Class 10th Mathematics Chapter 1 Real Numbers
• MCQ Test For CBSE Class 10th Mathematics Chapter 2 Polynomials
• MCQ Test For CBSE Class 10th Mathematics Chapter 3 Pair of Linear Equations in Two Variables
• MCQ Test For CBSE Class 10th Mathematics Chapter 4 Quadratic Equations
• MCQ Test For CBSE Class 10th Mathematics Chapter 5 Arithmetic Progressions
• MCQ Test For CBSE Class 10th Mathematics Chapter 6 Triangles
• MCQ Test For CBSE Class 10th Mathematics Chapter 7 Coordinate Geometry
• MCQ Test For CBSE Class 10th Mathematics Chapter 8 Introduction to Trigonometry
• MCQ Test For CBSE Class 10th Mathematics Chapter 9 Some Applications of Trigonometry
• MCQ Test For CBSE Class 10th Mathematics Chapter 10 Circles
• MCQ Test For CBSE Class 10th Mathematics Chapter 11 Constructions
• MCQ Test For CBSE Class 10th Mathematics Chapter 12 Area Related to Circles
• MCQ Test For CBSE Class 10th Mathematics Chapter 13 Surface Areas and Volumes
• MCQ Test For CBSE Class 10th Mathematics Chapter 14 Statistics
• MCQ Test For CBSE Class 10th Mathematics Chapter 15 Probability
• MCQ Test For CBSE Class 10th Mathematics Chapter 16 Mix Test
• MCQ Test For CBSE Class 10th Mathematics Chapter 17 Heights and Distances

We are providing something unique, useful and, most importantly, fun.
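As a side note, the ranking rule stated in the test instructions ("Rank is calculated on the basis of Marks Scored & Time") can be sketched as a simple two-key sort: higher marks rank first, and ties are broken by less time taken. This is only an illustrative sketch with hypothetical names and data, not the site's actual implementation:

```python
# Hypothetical sketch of the stated ranking rule: sort by marks
# scored (descending), breaking ties by time taken (ascending).

def rank_candidates(results):
    """results: list of (name, marks, seconds_taken) tuples."""
    # Negating marks makes Python's ascending sort put higher
    # scores first; seconds_taken then breaks ties ascending.
    ordered = sorted(results, key=lambda r: (-r[1], r[2]))
    return [name for name, _, _ in ordered]

candidates = [("Asha", 9, 540), ("Ravi", 9, 480), ("Meena", 10, 600)]
print(rank_candidates(candidates))  # → ['Meena', 'Ravi', 'Asha']
```

Here Meena ranks first on marks alone, while Ravi edges out Asha because they scored equally but Ravi finished faster.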
By giving students a tool to find instant solutions to their doubts, we're trying to make every student self-sufficient in practicing and completing their homework." ]
https://www.pdfstall.online/2019/06/useful-pipe-engineering-basics-formulas.html
[ "", null, "Wednesday, June 12, 2019\n\nUseful Pipe Engineering Basics & Formulas (Download PDF)\n\nVelocity of Fluid through Piping:\n\nVelocity (feet per second) = 0.3208 X GPM / Internal Area (square inches)\n\nWhat is the velocity of 10 gpm going through a 1/2″ diameter schedule 40 pipe?\n\nGPM = 10\nInternal Area = 0.304 (see note below)\n0.3208 X GPM / Internal Area = 0.3208 X 10 / 0.304 = 10.55 feet per second\n\nNote: The outside diameter of pipe remains the same regardless of the thickness of the pipe. A heavy duty pipe has a thicker wall than a standard duty pipe, so the internal diameter of the heavy duty pipe is smaller than the internal diameter of a standard duty pipe. The wall thickness and internal diameter of pipes can be found on readily available charts.\n\nHydraulic steel tubing also maintains the same outside diameter regardless of wall thickness.\n\nHose sizes indicate the inside diameter of the plumbing. A 1/2″ diameter hose has an internal diameter of 0.50 inches, regardless of the hose pressure rating.\n\nSuggested Piping Sizes:\nPump suction lines should be sized so the fluid velocity is between 2 and 4 feet per second.\nOil return lines should be sized so the fluid velocity is between 10 and 15 feet per second.\nMedium pressure supply lines should be sized so the fluid velocity is between 15 and 20 feet per second.\nHigh pressure supply lines should be sized so the fluid velocity is below 30 feet per second.\n\nPneumatic Valve Sizing:\n\nNotes:\nAll these pneumatic formulas assume 68 degrees F at sea level.\nAll strokes and diameters are in inches.\nAll times are in seconds.\nAll pressures are PSI.\n\nValve Sizing for Cylinder Actuation\n\nSCFM = 0.0273 x Cylinder Diameter x Cylinder Diameter x Cylinder Stroke / Stroke Time x ((Pressure - Pressure Drop) + 14.7) / 14.7\n\nCv Required = 1.024 x SCFM / (Square Root of (Pressure Drop x (Pressure - Pressure Drop + 14.7)))\n\nPressure 2 (PSIG) = Pressure - Pressure Drop\n\nFlow Coefficient for Smooth Wall Tubing\n\nCv of Tubing = 42.3 x Tube I.D. x Tube I.D. x 0.7854 x (Square Root of (Tube I.D. / 0.02 x Length of Tube x 12))\n\nAir Flow Q (in SCFM) to Atmosphere\nSCFM to Atmosphere = Valve Cv x (Square Root of (((Primary Pressure x 0.46) + 14.7) x (Primary Pressure x 0.54))) / 1.024\n\nPressure Drop Max (PSIG) = Primary Pressure x 0.54\n\nAir Flow Q (in SCFM) if Cv is Known\n\nAir Flow = Valve Cv x (Square Root of (Pressure Drop x ((PSIG - Pressure Drop) + 14.7))) / 1.024\n\nCv if Air Flow Q (in SCFM) is Known\nCv = 1.024 x Air Flow / (Square Root of (Pressure Drop x ((PSIG - Pressure Drop) + 14.7)))" ]
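The velocity rule above translates directly into a few lines of code. A minimal sketch in Python (the function and argument names are my own, not from the page), assuming flow in GPM and internal area in square inches:

```python
def pipe_velocity_fps(gpm: float, internal_area_sq_in: float) -> float:
    """Fluid velocity in feet per second: 0.3208 x GPM / internal area (sq in)."""
    return 0.3208 * gpm / internal_area_sq_in

# Worked example from the text: 10 GPM through 1/2" schedule 40 pipe (area 0.304 sq in)
v = pipe_velocity_fps(10, 0.304)
print(round(v, 2))  # → 10.55
```

This reproduces the corrected worked example: 0.3208 × 10 / 0.304 ≈ 10.55 ft/s.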
https://www.uninsubria.eu/ugov/degreecourse/132684
[ "# MECHANICS OF POINT, SYSTEMS AND FLUIDS MOD. I\n\nDegree course:\nFirst cycle degree in Physics\nAcademic year when starting the degree:\n2019/2020\nYear:\n1\nAcademic year in which the course will be held:\n2019/2020\nCourse type:\nBasic compulsory subjects\nCredits:\n7\nPeriod:\nFirst Semester\nStandard lectures hours:\n56\nDetail of lecture’s hours:\nLesson (56 hours)\nRequirements:\n\nBasic elements of mathematics and geometry at the secondary school level.\n\nAssessment:\nFinal grade\n\nThe aim of this course module is to provide detailed elements of kinematics and dynamics of point-like particles, including gravitational systems, particle systems’ and rigid bodies’ dynamics, a description – mostly phenomenological – of elasticity phenomena, an introduction to the statics and dynamics of fluids, and an introduction to kinetic theories and (equilibrium) thermodynamics.\nAt the end of the course the successful student will be able:\n1) to master the main concepts introduced in the course and solve problems;\n2) to acquire critical sensibility and the scientific method;\n3) to develop simple models to describe physical processes.\n\nFirst term (7 credits; Prof. A. Parola)\n- Introduction. Measuring physical quantities. Units of measurement (MKS, cgs) (2 h)\n- Vectors: sum, scalar product, vector (cross) product. Coordinate systems: Cartesian and polar. Elementary introduction to differential calculus (6 h).\n- Kinematics. Trajectory and the description of motion. Velocity and acceleration. Uniform motion, uniformly accelerated motion, harmonic motion. Uniform circular motion, centripetal acceleration. Tangential and normal acceleration. Reference systems: principle of relativity. Relationship between different reference systems (10 h).\n- Dynamics. First and second laws of dynamics. Third law and momentum conservation. Weight. Rheonomic constraints: inclined plane. Elastic forces: Hooke's law. The pendulum. Tensions. 
Atwood machine (10 h).\n- Frictional laws and viscous forces. Some examples of motion in the presence of friction and viscosity. Fictitious forces (4 h).\n- Impulse-momentum theorem. Variable masses. Kinetic energy, work: work-energy theorem. Conservative forces, potential energy. Conservation of mechanical energy. Angular momentum. Central forces and conservation of angular momentum (10 h).\n- Gravitation. Equivalence principle. Newton's law of gravitation. Measuring G: the Cavendish experiment. Gravitational potential energy. Kepler's laws. Center of mass and reduced mass. Gauss's theorem. Motion of a point in a gravitational field (10 h).\n- Elastic and inelastic collisions. Dynamics of systems of points: Newton's equations and the definition of torque (4 h).\n\nSecond term (9 credits; Prof. G. Jug)\nThe rigid body. Translational motion and the first cardinal equation. Kinematics of a rigid body with a fixed point. Rotational motion around a fixed axis, kinetic energy and moment of inertia. Calculation of the moment of inertia for simple rigid bodies. Relationship between angular momentum and angular velocity: tensor of inertia. Principal axes of inertia. Solution of dynamical problems through equations of motion and conservation laws. Pure rolling motion. Gyroscopes: free Poinsot motion. Heavy gyroscopes in rapid rotation. Rigid body statics.\n\nDeformable bodies: linear regime, elastic and plastic regimes. Young's modulus and Poisson's ratio. Bulk (compressibility) modulus and shear modulus. Mechanical hysteresis.\n\nStatics of liquids (and fluids). Pressure: Stevin's law, Archimedes' principle, the Torricelli experiment. Stability of floating bodies: metacenter.\n\nSurface (and capillary) phenomena: surface tension. Bubbles. Contact angle, capillarity.\n\nFluid dynamics: mass conservation, material derivative, Cauchy and Euler equations. Bernoulli's theorem and applications. Viscous Newtonian fluids. Poiseuille's law. 
Motion of a viscous fluid, terminal velocity.\n\nThermodynamics. Concept of equilibrium. Work and heat. The principles of thermodynamics. Ideal gas. Carnot's engine cycle. Concept of entropy. Statistical meaning of entropy. Thermodynamic potentials. Kinetic theory of gases. Maxwell's postulates. Approach to equilibrium. Detailed balance. Master equation. Elements of statistical mechanics.\n\nTextbooks: G. Rosati, “Fisica Generale 1” (and lecturer’s notes). M. Fazio, “Termodinamica”\nSupplementary reference: The Feynman Lectures, Vol. 1\nExercise book: S. Rosati, R. Casali, “Problemi di Fisica Generale”" ]
https://community.wolfram.com/groups/-/m/t/399091
[ "0 | 5743 Views | 3 Replies | 0 Total Likes\n\n# Can't get this old Mathematica code to run...\n\nPosted 9 years ago\n Hello! I'm trying to get the following program to run. It seems like it was written with an older version of Mathematica. Any help is appreciated. Failed = 0; Hit = 0; SetSharedVariable[Failed, Hit] ParallelTable[x = Random[]; ProbabilityStillGoing = (1 - \[Pi]/200)^j; If[ProbabilityStillGoing > x, Failed++, Hit++]; If[Failed == Hit, Print[j]]; , {j, 0, NUMBER}]; Thanks in advance for your help.\n3 Replies\nSort By:\nPosted 9 years ago\n So you want something like this --not exactly, I suspect, since I am not understanding the internals of the code, but this at least creates a list and computes a Mean: Failed = 0; Hit = 0; NUMBER = 1000; SetSharedVariable[Failed, Hit] N@Mean[Flatten@ParallelTable[x = Random[]; ProbabilityStillGoing = (1 - \[Pi]/200)^j; If[ProbabilityStillGoing > x, Failed++, Hit++]; If[Failed == Hit, j, {}], {j, 0, NUMBER}] ] \nPosted 9 years ago\n Hi David, thank you for your response. This code is attempting to solve the 2D version of Olbers' paradox. The problem statement is as follows: Robin Hood is standing in the middle of a forest. On average there is one tree of radius one meter per 200 square meters. If Robin Hood is placed at a random location in the forest and shoots an arrow, how far will it travel?\n\nWe consider the trees as points in a 200 meter square area and the arrow as a line two meters long. We need to figure out how far the arrow has to travel before it sweeps out an area of 200 square meters. So, we get 100 meters.\n\nThe mean of all the numbers printed in the code divided by the number of times it was run should give 100 meters. Mean[Distances] // N = approximately 100.\n\nIt comes from this site: https://sites.google.com/site/themontecarlomethod/home/robin-hood\nPosted 9 years ago\n What are you expecting this code to do? 
Note (assuming you've given NUMBER a value) that the body of your ParallelTable command ends with a semicolon and there is a semicolon after the full ParallelTable expression. So (1) the table will contain all Nulls and the output of the calculation will be suppressed. The code runs, it just does not return anything. And the Print statements may or may not execute depending on the details of what's happening inside of the code." ]
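The sampling mechanics of the thread's model can also be sketched outside Mathematica. Below is one hedged reading of the survival probability (1 - Pi/200)^j, written in Python with names of my own choosing: each meter flown, the arrow survives with probability 1 - π/200, and we average the distance to the first hit.

```python
import math
import random

P_HIT = math.pi / 200  # per-meter hit probability implied by (1 - Pi/200)^j

def arrow_distance(rng: random.Random) -> int:
    """Whole meters flown before the arrow first hits a tree (geometric trial)."""
    d = 0
    while rng.random() > P_HIT:  # survive this meter with probability 1 - P_HIT
        d += 1
    return d

rng = random.Random(42)
trials = 50_000
mean = sum(arrow_distance(rng) for _ in range(trials)) / trials
print(round(mean, 1))
```

Note that the expected value of this particular geometric reading is (1 - p)/p = 200/π - 1 ≈ 62.7 meters, so it illustrates the sampling loop rather than reproducing the 100-meter answer quoted in the thread; the two differ in what "hit probability per meter" is assumed.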
https://classroom.synonym.com/calculate-vector-cross-product-2638.html
[ "# How to Calculate Vector Cross Product", null, "A vector is a mathematical construct that has magnitude and direction. The cross product of two vectors is a binary operation in three-dimensional space that results in a third vector that is perpendicular to the plane that contains the two input vectors. The direction of the resulting vector is determined by the order of the input vectors, so the vector cross product does not have an associative or commutative property. The vector cross product has extensive uses in mathematics and physics, as well as practical applications in computer graphics.\n\nUse the right-hand rule to determine the direction of the resulting vector in a cross product. Hold your right hand in front of you so that the thumb is pointed up, the index finger is pointed away from you and the middle finger is pointed to your left. The index finger shows the direction of vector A, the middle finger shows the direction of vector B and the thumb shows the direction of the vector from the cross product A x B.\n\nDefine the cross product as A x B = ab sin θ n, where A and B are the input vectors, a is the magnitude of vector A, b is the magnitude of vector B, θ is the smaller angle's measure between A and B, and n is a unit vector perpendicular to the plane that contains vectors A and B. The direction of n is given by the right-hand rule in Step 1.\n\nDefine some notation. Let i, j and k be the unit vectors in a given orthogonal coordinate system. We can now say A = a1i + a2j + a3k = (a1, a2, a3), where a1 is the magnitude of i, a2 is the magnitude of j and a3 is the magnitude of k. Similarly, B = b1i + b2j + b3k = (b1, b2, b3). We must also establish the following identities for unit vectors: i x j = k, j x k = i, k x i = j, j x i = -k, k x j = -i, i x k = -j, i x i = 0, j x j = 0, k x k = 0.\n\nUse distributive cross multiplication to calculate the cross product. 
A x B = (a1i + a2j + a3k) x (b1i + b2j + b3k) = a1i x (b1i + b2j + b3k) + a2j x (b1i + b2j + b3k) + a3k x (b1i + b2j + b3k) = (a1i x b1i) + (a1i x b2j) + (a1i x b3k) + (a2j x b1i) + (a2j x b2j) + (a2j x b3k) + (a3k x b1i) + (a3k x b2j) + (a3k x b3k).\n\nFactor out the unit vectors from the result obtained in Step 4 to obtain A x B = a1b1(i x i) + a1b2(i x j) + a1b3(i x k) + a2b1(j x i) + a2b2(j x j) + a2b3(j x k) + a3b1(k x i) + a3b2(k x j) + a3b3(k x k). Now apply the identities for unit vectors given in Step 3 to obtain the following: A x B = a1b1(0) + a1b2(k) + a1b3(-j) + a2b1(-k) + a2b2(0) + a2b3(i) + a3b1(j) + a3b2(-i) + a3b3(0) = a1b2(k) - a1b3(j) - a2b1(k) + a2b3(i) + a3b1(j) - a3b2(i).\n\nCollect the coefficients of each vector in the result of Step 5 to obtain A x B = (a2b3 - a3b2) i + (a3b1 - a1b3) j + (a1b2 - a2b1) k. Using the notation given in Step 3, we may now say A x B = (a2b3 - a3b2, a3b1 - a1b3, a1b2 - a2b1).\n\nAllan Robinson has written numerous articles for various health and fitness sites. Robinson also has 15 years of experience as a software engineer and has extensive accreditation in software engineering. He holds a bachelor's degree with majors in biology and mathematics." ]
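The component formula in Step 6 can be checked mechanically. A minimal sketch in Python (the `cross` helper is mine, not from the article):

```python
def cross(a, b):
    """Cross product of two 3-vectors, using the component formula from Step 6."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    return (a2 * b3 - a3 * b2,
            a3 * b1 - a1 * b3,
            a1 * b2 - a2 * b1)

# Unit-vector identities from Step 3: i x j = k, j x k = i
print(cross((1, 0, 0), (0, 1, 0)))  # → (0, 0, 1)
print(cross((0, 1, 0), (0, 0, 1)))  # → (1, 0, 0)
```

Swapping the arguments negates every component, which matches the article's point that the cross product is not commutative.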
https://casoluqylifixyhus.urbanagricultureinitiative.com/write-a-c-program-for-merge-sort-37945tr.html
[ "# Write a c program for merge sort\n\nA debugger will help you figure out where the code is core dumping, and if you stop execution right before the problem line, you can see what the variables' values are. You are having a few issues with unnecessary scoping.", null, "Algorithm to merge sorted arrays: in this article we present an algorithm for merging two sorted arrays. The algorithm also has certain applications in practice, for instance in merge sort.\n\nMerge algorithm: assume that both arrays are sorted in ascending order and we want the resulting array to maintain the same order. Algorithm to merge two arrays A[0..m-1] and B[0..n-1] into an array C[0..m+n-1]: Introduce read-indices i, j to traverse arrays A and B, accordingly. Introduce a write-index k to store the position of the first free cell in the resulting array. If both read-indices are in range, choose the minimal of A[i] and B[j] and write it to C[k]; otherwise go to step 4. Increase k, and the index of the array in which the algorithm located the minimal value, by one. Copy the rest of the values from the array whose index is still in range to the resulting array.\n\nEnhancements: the algorithm could be enhanced in many ways. In any of those cases, there is no need to do more comparisons: the algorithm could just copy the source arrays into the resulting one in the right order. More complicated enhancements may include searching for interleaving parts and running the merge algorithm only for them. This can save much time when the sizes of the merged arrays differ by scores of times.\n\nMerge sort program in C: source code of a simple merge sort implementation using an array in ascending order in the C programming language.\n\n#include Write a c program for heap sort.", null, "6. Write a c program for merge sort. 7. Write a c program for shell sort. 8. Big list of c program examples. 
C++ Program to Merge two Arrays: in this program we enter elements into any two arrays, and then these two arrays (the elements of both) are stored in a third array.\n\nThe function merge_sort() will sort virtually any kind of file, using read, write and comparison functions supplied by the calling program. It sorts records of variable size. It requires O(n log n) comparisons, where n is the number of records, regardless of the initial order. Merge Sort using Java with program code: in computer science, merge sort or mergesort is a sorting algorithm for rearranging lists (or any such linear sequential data storage structure) into a specified order. C++ program on merge sort: complete program using function merge sort in C++ with output screen.\n\nSuppose X, Y, Z are arrays of integers of size M, N, and M + N respectively. The numbers in arrays X and Y appear in descending order. Write a user-defined function in C++ to produce a third array Z by merging arrays X and Y in descending order.\n\nThe Mergesort algorithm can be used to sort a collection of objects. Mergesort is a so-called divide and conquer algorithm. Divide and conquer algorithms divide the original data into smaller sets of data.\n\nAn example of merge sort in C is given below. First divide the list into the smallest unit (1 element), then compare each element with the adjacent list to sort and merge the two adjacent lists.\n\nwrite a small merge sort program in Java | Java" ]
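The two-array merge and the recursive split-compare-merge procedure described above can be sketched compactly. This is an illustrative Python version (function names are mine, not from the quoted pages):

```python
def merge(a, b):
    """Merge two ascending lists, using read-indices i, j and a write list out."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])  # copy the rest from whichever array still has values
    out.extend(b[j:])
    return out

def merge_sort(xs):
    """Split into smallest units, then merge adjacent sorted runs back together."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    return merge(merge_sort(xs[:mid]), merge_sort(xs[mid:]))

print(merge_sort([5, 2, 4, 6, 1, 3]))  # → [1, 2, 3, 4, 5, 6]
```

Like the merge sorts discussed in the text, this performs O(n log n) comparisons regardless of the initial order.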
http://www.mathematics21.org/view/upgrading-multifuncoid/index-2.html
[ "1.2\n\nDefinition 1. A filtrator is a pair (A; Z) of a poset A and its subset Z.\n\nSee  for a detailed study of filtrators. Having fixed a filtrator, we define:\n\nDefinition 2. up x = { Y ∈ Z | Y ⊒ x } for every x ∈ A.\n\nDefinition 3. E K = { L ∈ A | up L ⊆ K } (the upgrading of the set K) for every K ∈ P Z.\n\n1.3 Multifuncoids\n\nDefinition 4. A free star on a join-semilattice A with least element 0 is a set S such that 0 ∉ S and ∀A, B ∈ A: (A ∨ B ∈ S ⇔ A ∈ S ∨ B ∈ S).\n\nI will denote the set of free stars on A as A Star.\n\nLet n be a set. As an example, n may be an ordinal, or n may be a natural number, considered as a set by the formula n = {0, . . . , n − 1}. Let A = (A_i | i ∈ n) be a family of posets indexed by the set n.\n\nDefinition 5. Let f ∈ P ∏A, i ∈ dom A, L ∈ ∏A|_(dom A)∖{i}. Then (val f)_i L = { X ∈ A_i | L ∪ {(i; X)} ∈ f }. (“val” is an abbreviation of the word “value”.)\n\nProposition 1. f can be restored knowing (val f)_i for some i ∈ n.\n\nProof. f = { K ∈ ∏A | K ∈ f } = { L ∪ {(i; X)} | L ∈ ∏A|_(dom A)∖{i}, X ∈ A_i, L ∪ {(i; X)} ∈ f } = { L ∪ {(i; X)} | L ∈ ∏A|_(dom A)∖{i}, X ∈ (val f)_i L }.\n\nDefinition 6. Let A be a family of join-semilattices. A pre-multidimensional funcoid (or pre-multifuncoid for short) of the form A is an f ∈ P ∏A such that (val f)_i L is a free star for every i ∈ dom A, L ∈ ∏A|_(dom A)∖{i}.\n\nDefinition 7. A multidimensional funcoid (or multifuncoid for short) is a pre-multifuncoid which is an upper set.\n\nProposition 2. If L ∈ ∏A and L_i = 0^(A_i) for some i, then L ∉ f if f is a pre-multifuncoid.\n\nProof. Let K = L|_(dom A)∖{i}. We have 0 ∉ (val f)_i K; hence K ∪ {(i; 0)} ∉ f; hence L ∉ f." ]
https://forum.freecodecamp.org/t/is-this-the-only-way-to-check-whether-two-arrays-are-identical/224789
[ "# Is this the only way to check whether two arrays are identical?\n\nI am doing this freecodecamp quest where I need to confirm the ending of a string.\nhttps://learn.freecodecamp.org/javascript-algorithms-and-data-structures/basic-algorithm-scripting/confirm-the-ending/\n\nAnd here is my code.\n\n``````function a(str,target){\n/*get rid of the whitespaces*/\nlet replaced = str.replace(/ /g,'');\nconsole.log(replaced);\n/*convert string into array*/\nlet array = replaced.split(\"\");\nconsole.log(array);\n/*reverse it*/\nlet reversed = array.reverse();\nconsole.log(reversed);\n/* new array */\nlet new_array = reversed.slice(0,target.length);\nconsole.log(new_array);\n/*new target*/\nlet new_target = target.split(\"\").reverse();\nconsole.log(new_target);\n\nif(new_array===new_target){\nreturn true;\n}\nelse{\nreturn false;\n}\n}\nconsole.log(a(\"hello world\",\"world\"));\n``````\n\nThen I realised that === does not work on arrays.\nI checked stackoverflow how to see whether two arrays are identical, here is the answer:\n\n``````// Warn if overriding existing method\nif(Array.prototype.equals)\nconsole.warn(\"Overriding existing Array.prototype.equals. 
Possible causes: New API defines the method, there's a framework conflict or you've got double inclusions in your code.\");\n// attach the .equals method to Array's prototype to call it on any array\nArray.prototype.equals = function (array) {\n// if the other array is a falsy value, return\nif (!array)\nreturn false;\n\n// compare lengths - can save a lot of time\nif (this.length != array.length)\nreturn false;\n\nfor (var i = 0, l=this.length; i < l; i++) {\n// Check if we have nested arrays\nif (this[i] instanceof Array && array[i] instanceof Array) {\n// recurse into the nested arrays\nif (!this[i].equals(array[i]))\nreturn false;\n}\nelse if (this[i] != array[i]) {\n// Warning - two different object instances will never be equal: {x:20} != {x:20}\nreturn false;\n}\n}\nreturn true;\n}\n// Hide method from for-in loops\nObject.defineProperty(Array.prototype, \"equals\", {enumerable: false});\n``````\n\nIs this the only way to do this? It seems really complicated.\nOr I should just compare string to string.\n\nThe only way to check equality of the contents of two arrays is to iterate through them and check the values that they contain. 
Since your arrays represent strings, the simplest way to check them is to convert them back into strings and compare those strings.\n\n1 Like\n\nyeah, did a simple change.\nworks like a charm now.\nis there any simpler solution to this???\n\n``````function confirmEnding(str, target) {\n/*get rid of the whitespaces*/\nlet replaced = str.replace(/ /g,'');\n\n/*convert string into array*/\nlet array = replaced.split(\"\");\n\n/*reverse it*/\nlet reversed = array.reverse();\n\n/* new string */\nlet new_string = reversed.slice(0,target.length).join(\"\");\n\n/*new target*/\nlet new_target = target.split(\"\").reverse().join(\"\");\n\nif(new_string===new_target){\nreturn true;\n}\nelse{\nreturn false;\n}\n}\n\nconfirmEnding(\"Bastian\", \"n\");\n``````\n\nClick the Get a Hint button located on the challenge to compare your solution to others." ]
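For future readers: there is a much shorter approach that skips arrays (and reversing) entirely. A minimal sketch using only standard `String` methods — `confirmEnding` is the function name the challenge asks for:

```javascript
// Two equivalent solutions for the "Confirm the Ending" challenge.
// No arrays, no reversing: compare the tail of the string directly.

function confirmEnding(str, target) {
  // String.prototype.endsWith does exactly this check (ES2015+).
  return str.endsWith(target);
}

function confirmEndingManual(str, target) {
  // Pre-ES2015 equivalent: slice off the last target.length characters
  // and compare the resulting string with ===.
  // (=== works here because both operands are primitive strings.)
  return str.slice(str.length - target.length) === target;
}

console.log(confirmEnding("Bastian", "n"));             // true
console.log(confirmEnding("hello world", "world"));     // true
console.log(confirmEndingManual("Open sesame", "pen")); // false
```

Both avoid the array-equality problem altogether, because `===` compares string *values* while it compares array *references*.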
https://goprep.co/ex-29.6-q13-find-the-equation-of-the-plane-passing-through-a-i-1nkyog
[ "Q. 13\n\n# Find the equation of the plane passing through (a, b, c) and parallel to the plane [the given plane's equation appeared as an image in the source and was not recoverable]\n\nThe required plane is parallel to the given plane, so it has the same normal vector; any plane parallel to the given one keeps the same left-hand side and differs only in the constant term. Since the required plane passes through (a, b, c), substituting that point fixes the constant, which gives the equation of the required plane. [The intermediate equations in the original solution were images and are omitted.]" ]
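Since the specific plane was lost with the images, here is the general form of the argument the solution follows, with a symbolic normal vector $\vec{n}$ and constant $d$ standing in for the data given in the original problem:

```latex
\text{Given plane: } \vec{r}\cdot\vec{n} = d
\qquad\Longrightarrow\qquad
\text{any parallel plane: } \vec{r}\cdot\vec{n} = d'.

\text{Requiring that it pass through } (a,b,c):\quad
\left(a\hat{i} + b\hat{j} + c\hat{k}\right)\cdot\vec{n} = d'.

\text{Hence the required plane is }\;
\vec{r}\cdot\vec{n} = \left(a\hat{i} + b\hat{j} + c\hat{k}\right)\cdot\vec{n}.
```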
https://patents.justia.com/patent/11374742
[ "# Conversion key generation device, ciphertext conversion device, privacy-preserving information processing system, conversion key generation method, ciphertext conversion method, and computer\n\nA key acquisition unit (411) acquires a decryption key ski in a pair of a conversion source and a public key pkj in a pair of a conversion target, out of a plurality of pairs of a decryption key and a public key. A conversion key generation unit (412) encrypts the decryption key ski acquired by the key acquisition unit (411) with the public key pkj, so as to generate a conversion key rki→j for converting a ciphertext encrypted with a public key pki in the pair of the conversion source into a converted ciphertext that can be decrypted with a decryption key skj in the pair of the conversion target. An output unit (413) outputs the conversion key rki→j generated by the conversion key generation unit (412).\n\n## Latest Mitsubishi Electric Corporation Patents:\n\nDescription\nTECHNICAL FIELD\n\nThe present invention relates to a proxy re-encryption technique in homomorphic encryption.\n\nBACKGROUND ART\n\nHomomorphic encryption is an encryption technique that allows data to be operated on while the data remains encrypted. A process to operate on data while the data remains encrypted is called a homomorphic operation, and the types and the number of operations for which homomorphic operations are possible vary with each specific scheme. The use of homomorphic encryption allows data to be stored in a database on a cloud while the data remains encrypted, and further allows analysis, such as statistical processing, to be performed on the stored encrypted data without decrypting the data. As a result, the cloud can be used while securing privacy.\n\nHomomorphic encryption has a property that ciphertexts have to be encrypted with the same public key in order to perform a homomorphic operation. 
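As a concrete illustration of a homomorphic operation, consider a deliberately insecure toy scheme (a noise-free, symmetric analogue of LWE-style encryption, invented for this description — it is not the patent's scheme or any cited literature's): a ciphertext under secret key s is a pair (a, a·s + m mod q), and because decryption is linear, component-wise addition of two ciphertexts under the *same* key decrypts to the sum of the plaintexts.

```javascript
// Toy additively homomorphic scheme (insecure, illustration only).
// A ciphertext under secret key s is a pair (a, b) with b = a*s + m (mod q).
// Decryption computes m = b - a*s (mod q). Because decryption is linear,
// component-wise addition of two ciphertexts under the SAME key yields a
// valid ciphertext of the sum -- a "homomorphic operation".

const q = 65537;                                   // prime modulus; all products stay < 2^53
const mod = (x) => ((x % q) + q) % q;              // non-negative remainder
const randEl = () => Math.floor(Math.random() * q);

const keyGen = () => randEl();                     // secret key s
const enc = (s, m) => { const a = randEl(); return [a, mod(a * s + m)]; };
const dec = (s, [a, b]) => mod(b - a * s);
const evalAdd = ([a1, b1], [a2, b2]) => [mod(a1 + a2), mod(b1 + b2)];

const s = keyGen();
const c1 = enc(s, 20);
const c2 = enc(s, 22);
console.log(dec(s, evalAdd(c1, c2)));              // 42: added while encrypted
```

Note that `evalAdd` only makes sense when both ciphertexts were produced under the same key — which is exactly the restriction the surrounding text discusses.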
Therefore, when a plurality of users store data on a cloud and further perform homomorphic operations on the stored data, all the users need to use a common key. Since all the users use the common key, a problem is that any user can decrypt the data.\n\nPatent Literature 1 describes converting ciphertexts encrypted with different keys into ciphertexts encrypted with a specific single key by employing a technique, called proxy re-encryption, for converting a key with which data is encrypted. As a result, Patent Literature 1 allows ciphertexts to be converted into ciphertexts encrypted with the same specific key, and then allows a homomorphic operation to be performed on the ciphertexts. In addition, in Patent Literature 1, only a user who has the key after conversion by proxy re-encryption can decrypt a ciphertext resulting from the homomorphic operation.\n\nThat is, the technique described in Patent Literature 1 allows a homomorphic operation to be performed on ciphertexts encrypted with different keys. This solves the problem which is that all the users need to use the common key.\n\nCITATION LIST Patent Literature\n\nPatent Literature 1: WO 2014/010202 A1\n\nSUMMARY OF INVENTION Technical Problem\n\nHowever, in the technique described in Patent Literature 1, it is requisite that ciphertexts be converted by proxy re-encryption before performing a homomorphic operation. In other words, with the technique described in Patent Literature 1, a homomorphic operation cannot be performed on ciphertexts before proxy re-encryption is performed on the ciphertexts. For this reason, it is necessary to determine who is to be allowed to decrypt converted ciphertexts before a homomorphic operation is performed. Therefore, it is not possible to perform analysis by a homomorphic operation before an analyst is determined. 
In addition, a result of analysis by a homomorphic operation performed for a certain analyst cannot be analyzed by another analyst.\n\nIt is an object of the present invention to make it possible to realize a homomorphic encryption scheme in which after a homomorphic operation is performed on ciphertexts encrypted with different keys, a decrypting user can be controlled by proxy re-encryption.\n\nSolution to Problem\n\nA conversion key generation device according to the present invention includes\n\n• a key acquisition unit to acquire a decryption key ski in a pair of a conversion source and a public key pkj in a pair of a conversion target, out of a plurality of pairs of a decryption key and a public key; and\n• a conversion key generation unit to encrypt the decryption key ski acquired by the key acquisition unit with the public key pkj, so as to generate a conversion key rki→j for converting a ciphertext encrypted with a public key pki in the pair of the conversion source into a converted ciphertext that can be decrypted with a decryption key skj in the pair of the conversion target.\n\nIn the present invention, a decryption key ski in a pair of a conversion source is encrypted with a public key pkj in a pair of a conversion target, so as to generate a conversion key rki→j for converting a ciphertext encrypted with a public key pki into a converted ciphertext that can be decrypted with a decryption key skj in the pair of the conversion target. By using this conversion key rki→j, it is possible to allow control of a decryption key that can decrypt a ciphertext resulting from performing a homomorphic operation on ciphertexts encrypted with different keys.\n\nBRIEF DESCRIPTION OF DRAWINGS\n\nFIG. 1 is a configuration diagram of a privacy-preserving information processing system 10 according to a first embodiment;\n\nFIG. 2 is a configuration diagram of a common parameter generation device 20 according to the first embodiment;\n\nFIG. 
3 is a configuration diagram of a key generation device 30 according to the first embodiment;\n\nFIG. 4 is a configuration diagram of a conversion key generation device 40 according to the first embodiment;\n\nFIG. 5 is a configuration diagram of an encryption device 50 according to the first embodiment;\n\nFIG. 6 is a configuration diagram of a homomorphic operation device 60 according to the first embodiment;\n\nFIG. 7 is a configuration diagram of a ciphertext conversion device 70 according to the first embodiment;\n\nFIG. 8 is a configuration diagram of a decryption device 80 according to the first embodiment;\n\nFIG. 9 is a flowchart illustrating operation of the common parameter generation device 20 according to the first embodiment;\n\nFIG. 10 is a flowchart illustrating operation of the key generation device 30 according to the first embodiment;\n\nFIG. 11 is a flowchart illustrating operation of the conversion key generation device 40 according to the first embodiment;\n\nFIG. 12 is a flowchart illustrating operation of the encryption device 50 according to the first embodiment;\n\nFIG. 13 is a flowchart illustrating operation of the homomorphic operation device 60 according to the first embodiment;\n\nFIG. 14 is a flowchart illustrating operation of the ciphertext conversion device 70 according to the first embodiment;\n\nFIG. 15 is a flowchart illustrating operation of the decryption device 80 according to the first embodiment;\n\nFIG. 16 is a configuration diagram of the common parameter generation device 20 according to a first variation;\n\nFIG. 17 is a configuration diagram of the key generation device 30 according to the first variation;\n\nFIG. 18 is a configuration diagram of the conversion key generation device 40 according to the first variation;\n\nFIG. 19 is a configuration diagram of the encryption device 50 according to the first variation;\n\nFIG. 
20 is a configuration diagram of the homomorphic operation device 60 according to the first variation;\n\nFIG. 21 is a configuration diagram of the ciphertext conversion device 70 according to the first variation; and\n\nFIG. 22 is a configuration diagram of the decryption device 80 according to the first variation.\n\nDESCRIPTION OF EMBODIMENTS First Embodiment Description of Configuration\n\nA configuration of a privacy-preserving information processing system 10 according to a first embodiment will be described with reference to FIG. 1.\n\nThe privacy-preserving information processing system 10 includes a common parameter generation device 20, a plurality of key generation devices 30, a conversion key generation device 40, an encryption device 50, a homomorphic operation device 60, a ciphertext conversion device 70, and a plurality of decryption devices 80.\n\nThe common parameter generation device 20, the key generation devices 30, the conversion key generation device 40, the encryption device 50, the homomorphic operation device 60, the ciphertext conversion device 70, and the decryption devices 80 are connected via transmission channels 90. A specific example of the transmission channels 90 is the Internet or a local area network (LAN).\n\nA configuration of the common parameter generation device 20 according to the first embodiment will be described with reference to FIG. 2.\n\nThe common parameter generation device 20 includes hardware of a processor 21, a memory 22, a storage 23, and a communication interface 24. The processor 21 is connected with the other hardware components via signal lines and controls the other hardware components.\n\nThe common parameter generation device 20 includes, as functional components, an acquisition unit 211, a common parameter generation unit 212, and an output unit 213. 
The functions of the functional components of the common parameter generation device 20 are realized by software.\n\nThe storage 23 stores programs for realizing the functions of the functional components of the common parameter generation device 20. These programs are loaded into the memory 22 by the processor 21 and executed by the processor 21. This realizes the functions of the functional components of the common parameter generation device 20.\n\nThe storage 23 realizes the function of a parameter storage unit 231.\n\nA configuration of the key generation device 30 according to the first embodiment will be described with reference to FIG. 3.\n\nThe key generation device 30 includes hardware of a processor 31, a memory 32, a storage 33, and a communication interface 34. The processor 31 is connected with the other hardware components via signal lines and controls the other hardware components.\n\nThe key generation device 30 includes, as functional components, an acquisition unit 311, a key generation unit 312, and an output unit 313. The functions of the functional components of the key generation device 30 are realized by software.\n\nThe storage 33 stores programs for realizing the functions of the functional components of the key generation device 30. These programs are loaded into the memory 32 by the processor 31 and executed by the processor 31. This realizes the functions of the functional components of the key generation device 30.\n\nThe storage 33 realizes the function of a key storage unit 331.\n\nA configuration of the conversion key generation device 40 according to the first embodiment will be described with reference to FIG. 4.\n\nThe conversion key generation device 40 includes hardware of a processor 41, a memory 42, a storage 43, and a communication interface 44. 
The processor 41 is connected with the other hardware components via signal lines and controls the other hardware components.\n\nThe conversion key generation device 40 includes, as functional components, a key acquisition unit 411, a conversion key generation unit 412, and an output unit 413. The functions of the functional components of the conversion key generation device 40 are realized by software.\n\nThe storage 43 stores programs for realizing the functions of the functional components of the conversion key generation device 40. These programs are loaded into the memory 42 by the processor 41 and executed by the processor 41. This realizes the functions of the functional components of the conversion key generation device 40.\n\nThe storage 43 realizes the function of a key storage unit 431.\n\nA configuration of the encryption device 50 according to the first embodiment will be described with reference to FIG. 5.\n\nThe encryption device 50 includes hardware of a processor 51, a memory 52, a storage 53, and a communication interface 54. The processor 51 is connected with the other hardware components via signal lines and controls the other hardware components.\n\nThe encryption device 50 includes, as functional components, an acquisition unit 511, an encryption unit 512, and an output unit 513. The functions of the functional components of the encryption device 50 are realized by software.\n\nThe storage 53 stores programs for realizing the functions of the functional components of the encryption device 50. These programs are loaded into the memory 52 by the processor 51 and executed by the processor 51. This realizes the functions of the functional components of the encryption device 50.\n\nThe storage 53 realizes the function of a key storage unit 531.\n\nA configuration of the homomorphic operation device 60 according to the first embodiment will be described with reference to FIG. 
6.\n\nThe homomorphic operation device 60 includes hardware of a processor 61, a memory 62, a storage 63, and a communication interface 64. The processor 61 is connected with the other hardware components via signal lines and controls the other hardware components.\n\nThe homomorphic operation device 60 includes, as functional components, an acquisition unit 611, a homomorphic operation unit 612, and an output unit 613. The functions of the functional components of the homomorphic operation device 60 are realized by software.\n\nThe storage 63 stores programs for realizing the functions of the functional components of the homomorphic operation device 60. These programs are loaded into the memory 62 by the processor 61 and executed by the processor 61. This realizes the functions of the functional components of the homomorphic operation device 60.\n\nThe storage 63 realizes the functions of a key storage unit 631 and a ciphertext storage unit 632.\n\nA configuration of the ciphertext conversion device 70 according to the first embodiment will be described with reference to FIG. 7.\n\nThe ciphertext conversion device 70 includes hardware of a processor 71, a memory 72, a storage 73, and a communication interface 74. The processor 71 is connected with the other hardware components via signal lines and controls the other hardware components.\n\nThe ciphertext conversion device 70 includes, as functional components, an acquisition unit 711, a ciphertext conversion unit 712, and an output unit 713. The acquisition unit 711 includes a ciphertext acquisition unit 714 and a key acquisition unit 715. The functions of the functional components of the ciphertext conversion device 70 are realized by software.\n\nThe storage 73 stores programs for realizing the functions of the functional components of the ciphertext conversion device 70. These programs are loaded into the memory 72 by the processor 71 and executed by the processor 71. 
This realizes the functions of the functional components of the ciphertext conversion device 70.\n\nThe storage 73 realizes the function of a key storage unit 731.\n\nA configuration of the decryption device 80 according to the first embodiment will be described with reference to FIG. 8.\n\nThe decryption device 80 includes hardware of a processor 81, a memory 82, a storage 83, and a communication interface 84. The processor 81 is connected with the other hardware components via signal lines and controls the other hardware components.\n\nThe decryption device 80 includes, as functional components, an acquisition unit 811, a decryption unit 812, and an output unit 813. The functions of the functional components of the decryption device 80 are realized by software.\n\nThe storage 83 stores programs for realizing the functions of the functional components of the decryption device 80. These programs are loaded into the memory 82 by the processor 81 and executed by the processor 81. This realizes the functions of the functional components of the decryption device 80.\n\nThe storage 83 realizes the function of a key storage unit 831.\n\nEach of the processors 21, 31, 41, 51, 61, 71, and 81 is an integrated circuit (IC) that performs arithmetic processing. As a specific example, each of the processors 21, 31, 41, 51, 61, 71, and 81 is a central processing unit (CPU), a digital signal processor (DSP), or a graphics processing unit (GPU).\n\nEach of the memories 22, 32, 42, 52, 62, 72, and 82 is a storage device to temporarily store data. As a specific example, each of the memories 22, 32, 42, 52, 62, 72, and 82 is a static random access memory (SRAM) or a dynamic random access memory (DRAM).\n\nEach of the storages 23, 33, 43, 53, 63, 73, and 83 is a storage device to store data. As a specific example, each of the storages 23, 33, 43, 53, 63, 73, and 83 is a hard disk drive (HDD). 
Alternatively, each of the storages 23, 33, 43, 53, 63, 73, and 83 may be a portable storage medium, such as a Secure Digital (SD, registered trademark) memory card, CompactFlash (CF, registered trademark), a NAND flash, a flexible disk, an optical disc, a compact disc, a Blu-ray (registered trademark) disc, or a digital versatile disc (DVD).\n\nEach of the communication interfaces 24, 34, 44, 54, 64, 74, and 84 is an interface for communicating with external devices. As a specific example, each of the communication interfaces 24, 34, 44, 54, 64, 74, and 84 is an Ethernet (registered trademark) port, a Universal Serial Bus (USB) port, or a High-Definition Multimedia Interface (HDMI, registered trademark) port.\n\nFIG. 2 illustrates only one processor 21. However, the common parameter generation device 20 may include a plurality of processors as an alternative to the processor 21. Similarly, the key generation device 30 may include a plurality of processors as an alternative to the processor 31. The conversion key generation device 40 may include a plurality of processors as an alternative to the processor 41. The encryption device 50 may include a plurality of processors as an alternative to the processor 51. The homomorphic operation device 60 may include a plurality of processors as an alternative to the processor 61. The ciphertext conversion device 70 may include a plurality of processors as an alternative to the processor 71. The decryption device 80 may include a plurality of processors as an alternative to the processor 81.\n\nThe plurality of processors share the execution of the programs for realizing the functions of the functional components. Each of the plurality of processors is, like the processors 21, 31, 41, 51, 61, 71, and 81, an IC that performs arithmetic processing.\n\nDescription of Operation\n\nOperation of the privacy-preserving information processing system 10 according to the first embodiment will be described with reference to FIGS. 
9 to 15.\n\nThe operation of the privacy-preserving information processing system 10 according to the first embodiment corresponds to a privacy-preserving information processing method according to the first embodiment. The operation of the privacy-preserving information processing system 10 according to the first embodiment also corresponds to processes of a privacy-preserving information processing program according to the first embodiment.\n\nIn the first embodiment, the privacy-preserving information processing system 10 employs existing multi-key homomorphic encryption. As the existing multi-key homomorphic encryption, it is possible to employ schemes described in documents such as [Non-Patent Literature 1: C. Peikert and S. Shiehian. “Multi-Key FHE from LWE, Revisited”. In TCC, 2016.] and [Non-Patent Literature 2: Z. Brakerski and R. Perlman. “Lattice-based fully dynamic multi-key FHE with short ciphertexts”. In CRYPTO, 2016].\n\nThe multi-key homomorphic encryption includes a Setup algorithm, a KG algorithm, an Enc algorithm, a Dec algorithm, and an Eval algorithm. The Setup algorithm is an algorithm that generates a common parameter. The KG algorithm is an algorithm that generates a pair of a decryption key and a public key. The Enc algorithm is an algorithm that encrypts data to generate a ciphertext. The Dec algorithm is an algorithm that decrypts a ciphertext. The Eval algorithm is an algorithm that performs a homomorphic operation.\n\nOperation of the common parameter generation device 20 according to the first embodiment will be described with reference to FIG. 9.\n\nThe operation of the common parameter generation device 20 according to the first embodiment corresponds to a common parameter generation method according to the first embodiment. 
The operation of the common parameter generation device 20 according to the first embodiment also corresponds to processes of a common parameter generation program according to the first embodiment.\n\nStep S11: Acquisition Process\n\nThe acquisition unit 211 accepts an input of a parameter necessary for generating a common parameter. Specific examples of the parameter are a security parameter λ, the number k of keys, and a Boolean circuit depth d in Non-Patent Literature 1. The acquisition unit 211 writes the acquired parameter in the memory 22.\n\nStep S12: Common Parameter Generation Process\n\nThe common parameter generation unit 212 retrieves the parameter acquired in step S11 from the memory 22. The common parameter generation unit 212 executes the Setup algorithm in the multi-key homomorphic encryption, taking as input the retrieved parameter, so as to generate a common parameter pp. The common parameter generation unit 212 writes the generated common parameter pp in the memory 22.\n\nStep S13: Output Process\n\nThe output unit 213 retrieves the common parameter pp generated in step S12 from the memory 22. The output unit 213 writes the retrieved common parameter pp in the storage 23.\n\nThe output unit 213 transmits the common parameter pp to each of the key generation devices 30 via the communication interface 24. In each of the key generation devices 30, the acquisition unit 311 receives the common parameter pp via the communication interface 34, and writes the common parameter pp in the key storage unit 331.\n\nOperation of the key generation device 30 according to the first embodiment will be described with reference to FIG. 10.\n\nThe operation of the key generation device 30 according to the first embodiment corresponds to a key generation method according to the first embodiment. 
The operation of the key generation device 30 according to the first embodiment also corresponds to processes of a key generation program according to the first embodiment.\n\nStep S21: Key Generation Process\n\nThe key generation unit 312 retrieves the common parameter pp from the key storage unit 331. The key generation unit 312 executes the KG algorithm in the multi-key homomorphic encryption, taking as input the retrieved common parameter pp, so as to generate a pair of a decryption key sk and a public key pk. The key generation unit 312 writes the generated pair of the decryption key sk and the public key pk in the memory 32.\n\nStep S22: Output Process\n\nThe output unit 313 retrieves the pair of the decryption key sk and the public key pk generated in step S21 from the memory 32. The output unit 313 writes the retrieved pair of the decryption key sk and the public key pk in the key storage unit 331.\n\nThe output unit 313 transmits the public key pk to the conversion key generation device 40, the encryption device 50, and the homomorphic operation device 60 via the communication interface 34. Then, in the conversion key generation device 40, the key acquisition unit 411 receives the public key pk via the communication interface 44, and writes the public key pk in the key storage unit 431. Similarly, in the encryption device 50, the acquisition unit 511 receives the public key pk via the communication interface 54, and writes the public key pk in the key storage unit 531. Similarly, in the homomorphic operation device 60, the acquisition unit 611 receives the public key pk via the communication interface 64, and writes the public key pk in the key storage unit 631.\n\nThe output unit 313 transmits the decryption key sk to the conversion key generation device 40 and a corresponding one of the decryption devices 80 via the communication interface 34. 
The corresponding one of the decryption devices 80 is the decryption device 80 that is assigned to the user of the decryption key sk. The key generation devices 30 and the decryption devices 80 are associated on a one-to-one basis herein, and the decryption key sk is transmitted to the decryption device 80 associated with the key generation device 30 that has generated the decryption key sk. Then, in the conversion key generation device 40, the key acquisition unit 411 receives the decryption key sk via the communication interface 44, and writes the decryption key sk in the key storage unit 431. Similarly, in the decryption device 80, the acquisition unit 811 receives the decryption key sk via the communication interface 84, and writes the decryption key sk in the key storage unit 831.\n\nIn the following description, the decryption key sk generated by the ι-th key generation device 30 of the plurality of key generation devices 30 will be referred to as a decryption key skι, and the public key pk generated by the ι-th key generation device 30 will be referred to as a public key pkι.\n\nOperation of the conversion key generation device 40 according to the first embodiment will be described with reference to FIG. 11.\n\nThe operation of the conversion key generation device 40 according to the first embodiment corresponds to a conversion key generation method according to the first embodiment. The operation of the conversion key generation device 40 according to the first embodiment also corresponds to processes of a conversion key generation program according to the first embodiment.\n\nA case in which a conversion key rki→j is generated will be described here. 
The conversion key rki→j is a key for converting a ciphertext encrypted with a public key pki generated by the i-th key generation device 30 into a ciphertext that can be decrypted with a decryption key skj generated by the j-th key generation device 30.\n\nStep S31: Key Acquisition Process\n\nThe key acquisition unit 411 retrieves a decryption key ski in a pair of a conversion source and a public key pkj in a pair of a conversion target, out of a plurality of pairs of a decryption key and a public key stored in the key storage unit 431. The key acquisition unit 411 writes the retrieved decryption key ski and public key pkj in the memory 42.\n\nStep S32: Conversion Key Generation Process\n\nThe conversion key generation unit 412 retrieves the decryption key ski and the public key pkj from the memory 42. The conversion key generation unit 412 executes an RKGen algorithm in the multi-key homomorphic encryption, taking as input the retrieved decryption key ski and public key pkj, so as to encrypt the decryption key ski with the public key pkj to generate a conversion key rki→j. The conversion key rki→j is a key for converting a ciphertext encrypted with a public key pki in the pair of the conversion source into a converted ciphertext that can be decrypted with a decryption key skj in the pair of the conversion target. The conversion key generation unit 412 writes the generated conversion key rki→j in the memory 42.\n\nStep S33: Output Generation Process\n\nThe output unit 413 retrieves the conversion key rki→j generated in step S32 from the memory 42. The output unit 413 transmits the retrieved conversion key rki→j to the ciphertext conversion device 70 via the communication interface 44. 
Then, in the ciphertext conversion device 70, the acquisition unit 711 receives the conversion key rki→j via the communication interface 74, and writes the conversion key rki→j in the key storage unit 731.\n\nOperation of the encryption device 50 according to the first embodiment will be described with reference to FIG. 12.\n\nThe operation of the encryption device 50 according to the first embodiment corresponds to an encryption method according to the first embodiment. The operation of the encryption device 50 according to the first embodiment also corresponds to processes of an encryption program according to the first embodiment.\n\nStep S41: Acquisition Process\n\nThe acquisition unit 511 acquires a plaintext M to be encrypted via the communication interface 54. The acquisition unit 511 writes the acquired plaintext M in the memory 52.\n\nStep S42: Encryption Process\n\nThe encryption unit 512 retrieves the plaintext M acquired in step S41 from the memory 52. The encryption unit 512 retrieves the public key pk from the key storage unit 531. The encryption unit 512 executes the Enc algorithm in the multi-key homomorphic encryption, taking as input the retrieved plaintext M and public key pk, so as to encrypt the plaintext M with the public key pk to generate a ciphertext C. The encryption unit 512 writes the generated ciphertext C in the memory 52.\n\nStep S43: Output Process\n\nThe output unit 513 retrieves the ciphertext C generated in step S42 from the memory 52. The output unit 513 transmits the retrieved ciphertext C to the homomorphic operation device 60 via the communication interface 54. Then, in the homomorphic operation device 60, the acquisition unit 611 receives the ciphertext C via the communication interface 64, and writes the ciphertext C in the ciphertext storage unit 632.\n\nOperation of the homomorphic operation device 60 according to the first embodiment will be described with reference to FIG. 
13.\n\nThe operation of the homomorphic operation device 60 according to the first embodiment corresponds to a homomorphic operation method according to the first embodiment. The operation of the homomorphic operation device 60 according to the first embodiment also corresponds to processes of a homomorphic operation program according to the first embodiment.\n\nStep S51: Acquisition Process\n\nThe acquisition unit 611 retrieves a ciphertext TC to be processed from the ciphertext storage unit 632. The acquisition unit 611 acquires an operation f that indicates details of an operation via the communication interface 64. The operation f is input, for example, by the user of the homomorphic operation device 60 via an input device. The acquisition unit 611 writes the retrieved ciphertext TC and the acquired operation f in the memory 62.\n\nThere may be one ciphertext TC to be processed or a plurality of ciphertexts TC to be processed. The ciphertext TC to be processed is at least one of a ciphertext C generated by the encryption device 50 and a ciphertext EC resulting from performing a homomorphic operation by the homomorphic operation device 60.\n\nStep S52: Homomorphic Operation Process\n\nThe homomorphic operation unit 612 retrieves, from the memory 62, the ciphertext TC retrieved in step S51 and the operation f acquired in step S51. The homomorphic operation unit 612 retrieves the public key pk that has been used to encrypt the ciphertext TC from the key storage unit 631. The homomorphic operation unit 612 executes the Eval algorithm in the multi-key homomorphic encryption, taking as input the retrieved ciphertext TC, operation f, and public key pk, so as to generate a ciphertext EC resulting from performing the operation f on the ciphertext TC. The homomorphic operation unit 612 writes the generated ciphertext EC in the memory 62.\n\nStep S53: Output Process\n\nThe output unit 613 retrieves the ciphertext EC generated in step S52 from the memory 62.
The output unit 613 writes the retrieved ciphertext EC in the ciphertext storage unit 632.\n\nOperation of the ciphertext conversion device 70 according to the first embodiment will be described with reference to FIG. 14.\n\nThe operation of the ciphertext conversion device 70 according to the first embodiment corresponds to a ciphertext conversion method according to the first embodiment, and also corresponds to processes of a ciphertext conversion program according to the first embodiment.\n\nStep S61: Acquisition Process\n\nThe acquisition unit 711 acquires a ciphertext TC to be processed from the homomorphic operation device 60 via the communication interface 74. Specifically, the acquisition unit 711 transmits an identifier of the ciphertext TC to be processed to the homomorphic operation device 60, and acquires the ciphertext TC transmitted as a response. The acquisition unit 711 writes the acquired ciphertext TC in the memory 72.\n\nThe ciphertext TC to be processed is at least one of a ciphertext C generated by the encryption device 50 and a ciphertext EC resulting from performing a homomorphic operation by the homomorphic operation device 60.\n\nIt is assumed here that the ciphertext TC to be processed is a ciphertext EC generated by performing a homomorphic operation on a ciphertext encrypted with the public key pki for each integer i of i=1, . . . , s. It is also assumed that the ciphertext TC to be processed is to be converted into a ciphertext that can be decrypted with the decryption key skj generated by the j-th key generation device 30.\n\nStep S62: Ciphertext Conversion Process\n\nThe ciphertext conversion unit 712 decrypts the ciphertext TC to be processed by a homomorphic operation, using the decryption key in the pair of the conversion source, that is, the decryption key ski for each integer i of i=1, . . .
, s, so as to generate a converted ciphertext RC.\n\nSpecifically, the ciphertext conversion unit 712 executes the Enc algorithm in the multi-key homomorphic encryption, taking as input the public key pkj in the pair of the conversion target and the ciphertext TC, so as to encrypt the ciphertext TC with the public key pkj to generate a ciphertext C*. The ciphertext conversion unit 712 executes the Eval algorithm in the multi-key homomorphic encryption, taking as input the ciphertext C*, an operation fDec, the public key pkj, and the conversion key rki→j for each integer i of i=1, . . . , s, so as to generate the converted ciphertext RC resulting from performing the operation fDec on the ciphertext C*.\n\nNote that the operation fDec is an operation representing the Dec algorithm in the multi-key homomorphic encryption. That is, the execution of the Eval algorithm in the multi-key homomorphic encryption, using as input the ciphertext C*, the operation fDec, the public key pkj, and the conversion key rki→j for each integer i of i=1, . . . , s, causes the ciphertext C* to be decrypted with the decryption key ski embedded in the conversion key rki→j. That is, the ciphertext C* is decrypted with the decryption key ski by the homomorphic operation.\n\nStep S63: Output Process\n\nThe output unit 713 retrieves the converted ciphertext RC generated in step S62 from the memory 72. The output unit 713 transmits the retrieved converted ciphertext RC to the homomorphic operation device 60 via the communication interface 74. Then, in the homomorphic operation device 60, the acquisition unit 611 receives the converted ciphertext RC via the communication interface 64, and writes the converted ciphertext RC in the ciphertext storage unit 632.\n\nOperation of the decryption device 80 according to the first embodiment will be described with reference to FIG. 
15.\n\nThe operation of the decryption device 80 according to the first embodiment corresponds to a decryption method according to the first embodiment. The operation of the decryption device 80 according to the first embodiment also corresponds to processes of a decryption program according to the first embodiment.\n\nStep S71: Acquisition Process\n\nThe acquisition unit 811 acquires a ciphertext TC to be processed from the homomorphic operation device 60 via the communication interface 84. Specifically, the acquisition unit 811 transmits an identifier of the ciphertext TC to be processed to the homomorphic operation device 60, and acquires the ciphertext TC transmitted as a response. The acquisition unit 811 writes the acquired ciphertext TC in the memory 82.\n\nStep S72: Decryption Process\n\nThe decryption unit 812 retrieves the ciphertext TC acquired in step S71 from the memory 82. The decryption unit 812 executes the Dec algorithm in the multi-key homomorphic encryption, taking as input the retrieved ciphertext TC, so as to decrypt the ciphertext TC to generate a plaintext M′. The decryption unit 812 writes the generated plaintext M′ in the memory 82.\n\nStep S73: Output Process\n\nThe output unit 813 retrieves the plaintext M′ generated in step S72 from the memory 82. The output unit 813 outputs the retrieved plaintext M′ via the communication interface 84.\n\nEffects of First Embodiment\n\nAs described above, in the privacy-preserving information processing system 10 according to the first embodiment, the conversion key generation device 40 generates the conversion key rki→j by encrypting the decryption key ski of the conversion source with the public key pkj of the conversion target. 
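This conversion-key mechanism (RKGen in step S32, and the homomorphic decryption of step S62) can be illustrated with a transparent, non-cryptographic sketch. All names are hypothetical and nothing is actually encrypted; the mock only shows how the source decryption key travels "inside" a ciphertext under the target public key.

```python
from dataclasses import dataclass

# Transparent (non-cryptographic) sketch of conversion key generation
# (step S32) and ciphertext conversion (step S62); names are illustrative.

@dataclass(frozen=True)
class Ct:
    payload: object   # stands in for the encrypted content
    key_id: int       # key pair whose decryption key opens this ciphertext

def enc(pk_id, data):
    return Ct(data, pk_id)

def dec(sk_id, ct):
    assert ct.key_id == sk_id, "wrong decryption key"
    return ct.payload

def rkgen(sk_i, pk_j):
    # Conversion key rk_{i->j}: the source decryption key sk_i
    # encrypted under the target public key pk_j.
    return enc(pk_j, sk_i)

def re_enc(rk, pk_j, tc):
    # Step S62: first encrypt TC under pk_j, then homomorphically apply
    # the decryption circuit f_Dec using the sk_i embedded in rk.  In
    # this mock the "homomorphic" decryption runs in the clear.
    c_star = enc(pk_j, tc)
    inner_plain = dec(rk.payload, c_star.payload)  # f_Dec under encryption
    return Ct(inner_plain, pk_j)

sk1 = pk1 = 1                       # key pair 1 (conversion source)
sk2 = pk2 = 2                       # key pair 2 (conversion target)
tc = enc(pk1, "operation result")   # stands in for a post-Eval ciphertext
rk = rkgen(sk1, pk2)
rc = re_enc(rk, pk2, tc)
print(dec(sk2, rc))                 # prints: operation result
```

The essential property shown is that the plaintext never appears outside a ciphertext during conversion: decryption with sk1 happens only inside the encryption under pk2.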
As a result, by using this conversion key rki→j, a ciphertext which is encrypted with the public key pki of the conversion source and then on which a homomorphic operation is performed can be converted into a ciphertext that can be decrypted with the decryption key skj.\n\nIn the technique described in Patent Literature 1, a homomorphic operation cannot be performed until a key to be a conversion target of proxy re-encryption is determined, so that data cannot be processed in advance. If a plurality of users wish to use data resulting from a homomorphic operation, the homomorphic operation has to be performed after keys used to encrypt data prior to the homomorphic operation are converted into keys of the respective data users by proxy re-encryption. Therefore, the homomorphic operation must be performed individually for each ciphertext encrypted with the key of each data user, and a result of the homomorphic operation cannot be reused.\n\nIn contrast to this, in the privacy-preserving information processing system 10 according to the first embodiment, by converting a ciphertext resulting from a homomorphic operation by the ciphertext conversion device 70, the key of the ciphertext resulting from the homomorphic operation can be converted while preserving the privacy of an encrypted plaintext and without changing the plaintext. As a result, even when a ciphertext resulting from a homomorphic operation needs to be converted for a plurality of keys, it is not necessary to re-execute the homomorphic operation. 
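The difference can be made concrete with a minimal counting sketch (hypothetical names; ciphertexts are plain numbers here): the homomorphic operation runs once, and only the cheap conversion step is repeated per recipient.

```python
# Hypothetical counting sketch: one homomorphic operation reused for
# several recipients via ciphertext conversion, instead of re-running
# the operation per recipient as in the Patent Literature 1 approach.
eval_calls = 0

def homomorphic_sum(cts):
    global eval_calls
    eval_calls += 1
    return sum(cts)            # stands in for the Eval algorithm

def convert(ec, target_key):
    return (target_key, ec)    # stands in for ReEnc with rk_{i->target}

ec = homomorphic_sum([3, 4, 5])                       # evaluated once
results = [convert(ec, k) for k in ("A", "B", "C")]   # converted per user
print(eval_calls)                                     # prints 1
```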
In addition, by storing a ciphertext resulting from a homomorphic operation, intermediate data in the operation can be securely reused.\n\nIt is conceivable that a computer for performing homomorphic operations and a computer for performing proxy re-encryption are provided separately, such that the storage of ciphertexts and operations on ciphertexts are processed by a computer with a large storage capacity and high computational power, such as a cloud, and the conversion of keys is processed by a computer with high security, for example.\n\nIn this case, in the technique described in Patent Literature 1, it is necessary to transmit all ciphertexts to be used for an operation to the computer for proxy re-encryption so as to have their respective keys converted, and then transmit all the ciphertexts after conversion to the cloud again. Therefore, a large number of ciphertexts are to be communicated.\n\nIn contrast to this, in the privacy-preserving information processing system 10 according to the first embodiment, it is possible to transmit only a ciphertext resulting from a homomorphic operation that needs to be decrypted to the ciphertext conversion device 70, so as to generate a converted ciphertext. Therefore, even when the homomorphic operation device 60 and the ciphertext conversion device 70 are provided in different computers, only a small number of ciphertexts are to be communicated.\n\nOther Configurations\n\nFirst Variation\n\nIn the first embodiment, the functional components are realized by software. As a first variation, however, the functional components may be realized by hardware. With regard to the first variation, differences from the first embodiment will be described.\n\nA configuration of the common parameter generation device 20 according to the first variation will be described with reference to FIG.
16.\n\nWhen the functions are realized by hardware, the common parameter generation device 20 includes an electronic circuit 25, in place of the processor 21, the memory 22, and the storage 23. The electronic circuit 25 is a dedicated circuit that realizes the functional components of the common parameter generation device 20 and the functions of the memory 22 and the storage 23.\n\nA configuration of the key generation device 30 according to the first variation will be described with reference to FIG. 17.\n\nWhen the functions are realized by hardware, the key generation device 30 includes an electronic circuit 35, in place of the processor 31, the memory 32, and the storage 33. The electronic circuit 35 is a dedicated circuit that realizes the functional components of the key generation device 30 and the functions of the memory 32 and the storage 33.\n\nA configuration of the conversion key generation device 40 according to the first variation will be described with reference to FIG. 18.\n\nWhen the functions are realized by hardware, the conversion key generation device 40 includes an electronic circuit 45, in place of the processor 41, the memory 42, and the storage 43. The electronic circuit 45 is a dedicated circuit that realizes the functional components of the conversion key generation device 40 and the functions of the memory 42 and the storage 43.\n\nA configuration of the encryption device 50 according to the first variation will be described with reference to FIG. 19.\n\nWhen the functions are realized by hardware, the encryption device 50 includes an electronic circuit 55, in place of the processor 51, the memory 52, and the storage 53. The electronic circuit 55 is a dedicated circuit that realizes the functional components of the encryption device 50 and the functions of the memory 52 and the storage 53.\n\nA configuration of the homomorphic operation device 60 according to the first variation will be described with reference to FIG. 
20.\n\nWhen the functions are realized by hardware, the homomorphic operation device 60 includes an electronic circuit 65, in place of the processor 61, the memory 62, and the storage 63. The electronic circuit 65 is a dedicated circuit that realizes the functional components of the homomorphic operation device 60 and the functions of the memory 62 and the storage 63.\n\nA configuration of the ciphertext conversion device 70 according to the first variation will be described with reference to FIG. 21.\n\nWhen the functions are realized by hardware, the ciphertext conversion device 70 includes an electronic circuit 75, in place of the processor 71, the memory 72, and the storage 73. The electronic circuit 75 is a dedicated circuit that realizes the functional components of the ciphertext conversion device 70 and the functions of the memory 72 and the storage 73.\n\nA configuration of the decryption device 80 according to the first variation will be described with reference to FIG. 22.\n\nWhen the functions are realized by hardware, the decryption device 80 includes an electronic circuit 85, in place of the processor 81, the memory 82, and the storage 83. The electronic circuit 85 is a dedicated circuit that realizes the functional components of the decryption device 80 and the functions of the memory 82 and the storage 83.\n\nEach of the electronic circuits 25, 35, 45, 55, 65, 75, and 85 is assumed to be a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, a logic IC, a gate array (GA), an application specific integrated circuit (ASIC), or a field-programmable gate array (FPGA).\n\nThe functions of the functional components of the common parameter generation device 20 may be realized by one electronic circuit 25, or the functions of the functional components may be distributed among and realized by a plurality of electronic circuits 25. 
Similarly, with regard to the key generation device 30, the conversion key generation device 40, the encryption device 50, the homomorphic operation device 60, the ciphertext conversion device 70, or the decryption device 80, the functions of the functional components may be realized by one electronic circuit 35, 45, 55, 65, 75, or 85, or the functions of the functional components may be distributed among and realized by a plurality of electronic circuits 35, 45, 55, 65, 75, or 85, respectively.\n\nSecond Variation\n\nAs a second variation, some of the functions may be realized by hardware, and the rest of the functions may be realized by software. That is, some of the functions of the functional components may be realized by hardware, and the rest of the functions may be realized by software.\n\nEach of the processors 21, 31, 41, 51, 61, 71, and 81, the memories 22, 32, 42, 52, 62, 72, and 82, the storages 23, 33, 43, 53, 63, 73, and 83, and the electronic circuits 25, 35, 45, 55, 65, 75, and 85 is referred to as processing circuitry. That is, the functions of the functional components are realized by the processing circuitry.\n\nSECOND EMBODIMENT\n\nIn a second embodiment, a specific scheme based on a multi-key homomorphic encryption scheme described in Non-Patent Literature 1 will be described. In the second embodiment, the scheme based on the large-ciphertext construction described in Non-Patent Literature 1 will be described. In the second embodiment, differences from the first embodiment will be described and description of the same portions will be omitted.\n\nNotation and Definitions\n\nWhen A is a distribution, y←A denotes that y is randomly selected from A according to the distribution of A. When A is a set, y←A denotes that y is uniformly selected from A. 
When A is an algorithm, y←A(x) denotes that an output y is generated for an input x.\n\nNote that n, q, and χ are certain Learning With Errors (LWE) parameters, m=O(n log q), L is the minimum integer equal to or more than log q, and g := (1, 2, . . . , 2^{L−1}). For any x ∈ Z_q, y := g^{−1}[x] ∈ {0,1}^L is a vector that satisfies <y, g> = x ∈ Z_q. For any natural numbers n and m, I_n is an n×n identity matrix, 0_{n×m} is an n×m matrix in which all elements are 0, and 1_{n×m} is an n×m matrix in which all elements are 1. For any i ∈ [n], e_i ∈ {0,1}^n is a canonical basis vector in which the i-th element is 1 and the rest of the elements are 0. Note that [a∥b] denotes a concatenation of vectors or matrices a and b.\n\nDescription of Operation\n\nOperation of the common parameter generation device 20 according to the second embodiment will be described with reference to FIG. 9.\n\nThe processes of step S11 and step S13 are the same as in the first embodiment.\n\nStep S12: Common Parameter Generation Process\n\nThe common parameter generation unit 212 executes the Setup algorithm in the multi-key homomorphic encryption, so as to generate a common parameter pp, as indicated in Formula 11.\nSetup(1^λ, 1^k, 1^d):\npp := A ← Z_q^{n×m}.  [Formula 11]\n\nOperation of the key generation device 30 according to the second embodiment will be described with reference to FIG. 10.\n\nThe process of step S22 is the same as in the first embodiment.\n\nStep S21: Key Generation Process\n\nThe key generation unit 312 executes the KG algorithm in the multi-key homomorphic encryption, so as to generate a pair of a decryption key sk and a public key pk, as indicated in Formula 12.\nKG(pp):\nt ← χ^{n−1}, t := (−t, 1) ∈ Z_q^n, e ← χ^m,\nb := tA + e,\nsk := t, pk := (b, A).  [Formula 12]\n\nOperation of the conversion key generation device 40 according to the second embodiment will be described with reference to FIG.
11.\n\nThe processes of step S31 and step S33 are the same as in the first embodiment.\n\nStep S32: Conversion Key Generation Process\n\nThe conversion key generation unit 412 executes the RKGen algorithm in the multi-key homomorphic encryption, taking as input the decryption key ski and the public key pkj, so as to encrypt the decryption key ski with the public key pkj to generate a conversion key rki→j, as indicated in Formula 13.\nRKGen(sk_i, pk_j):\nB_j := A − e_n^t ⊗ b_j,\nX_i ← {0,1}^{m×nL},\nrk_{i→j} := B_j X_i + [0_{(n−1)×nL} ; t_i·(I_n ⊗ g)].  [Formula 13]\n\nIn Formula 13, [0_{(n−1)×nL} ; t_i·(I_n ⊗ g)] denotes the n×nL matrix whose first n−1 rows are 0 and whose last row is t_i·(I_n ⊗ g).\n\nOperation of the encryption device 50 according to the second embodiment will be described with reference to FIG. 12.\n\nThe processes of step S41 and step S43 are the same as in the first embodiment.\n\nStep S42: Encryption Process\n\nIt is assumed here that the plaintext M is to be encrypted with the public key pki generated by the i-th key generation device 30.\n\nThe encryption unit 512 executes the Enc algorithm in the multi-key homomorphic encryption, taking as input the plaintext M and the public key pki, so as to encrypt the plaintext M with the public key pki to generate a ciphertext C, as indicated in Formula 14.\nEnc(pk_i, M ∈ {0,1}):\nindex := i,\nB := A − e_n^t ⊗ b,\n1. X_C ← {0,1}^{m×nL}, C := BX_C, CT := C + M(I_n ⊗ g),\n2. R ← {0,1}^{m×nL}, F := AR + M(I_n ⊗ g),\n3. X_D ← {0,1}^{nmL×nL}, D := (1_{mL×nL} ⊗ B)X_D,\nD := D + (R ⊗ g^t ⊗ e_n^t),\nC := (CT, F, D, index).  [Formula 14]\n\nOperation of the homomorphic operation device 60 according to the second embodiment will be described with reference to FIG.
13.\n\nThe processes of step S51 and step S53 are the same as in the first embodiment.\n\nStep S52: Homomorphic Operation Process\n\nWith regard to each ciphertext TC input in step S51, the homomorphic operation unit 612 executes an Extend algorithm in the multi-key homomorphic encryption, taking as input the ciphertext TC concerned and the public key pki, so as to compute a ciphertext C′, as indicated in Formula 15.\nExtend(pk_i, C):\nn′ = ns, CT ∈ Z_q^{n′×n′L}, F ∈ Z_q^{n×nL}, D ∈ Z_q^{n′mL×nL},\ncompute (i) or (ii).\n(i) index′ = [index∥i]:\n1. F′ := F,\n2. D′ := (I_{mL} ⊗ [I_{n′} ; 0_{n×n′}])·D ∈ Z_q^{(n′+n)mL×nL},\n3. CT′ := [CT X ; 0_{n×n′L} F] ∈ Z_q^{(n′+n)×(n′+n)L},\nwhere s := [−b_i](I_m ⊗ g^{−t}) ∈ {0,1}^{mL} and X := (s ⊗ I_{n′})·D ∈ Z_q^{n′×nL}.\n(ii) index′ = [i∥index]:\n1. F′ := F,\n2. D′ := (I_{mL} ⊗ [0_{n×n′} ; I_{n′}])·D ∈ Z_q^{(n′+n)mL×nL},\n3. CT′ := [F X ; 0 CT] ∈ Z_q^{(n′+n)×(n′+n)L},\nwith s and X as in case (i).\nC′ = (CT′, F′, D′, index′).  [Formula 15]\n\nIn Formula 15, s is the number of elements in index.\n\nThe homomorphic operation unit 612 executes the Eval algorithm in the multi-key homomorphic encryption, so as to generate a ciphertext EC resulting from performing the operation f on the ciphertext TC.\n\nFor example, the homomorphic operation unit 612 adds C1 and C2, which are two ciphertexts TC, as indicated in Formula 16.\nEval(C1=(CT1, F1, D1, index), C2=(CT2, F2, D2, index)):\nEC := (CT1+CT2, F1+F2, D1+D2, index).  [Formula 16]\n\nAlternatively, for example, the homomorphic operation unit 612 multiplies C1 and C2, which are two ciphertexts TC, as indicated in Formula 17.\nEval(C1=(CT1, F1, D1, index), C2=(CT2, F2, D2, index)):\nS_ct := (I_{n′} ⊗ g^{−1})[CT2] ∈ {0,1}^{n′L×n′L},\nS_f := (I_n ⊗ g^{−1})[F2] ∈ {0,1}^{nL×nL},\nS_d := (I_{n′mL} ⊗ g^{−1})[D2] ∈ {0,1}^{n′mL²×nL},\nCT_mul := CT1·S_ct,\nF_mul := F1·S_f,\nD_mul := D1·S_f + (I_{mL} ⊗ CT1)S_d,\nindex_mul := index,\nEC = (CT_mul, F_mul, D_mul, index_mul).  [Formula 17]\n\nOperation of the ciphertext conversion device 70 according to the second embodiment will be described with reference to FIG. 14.\n\nThe processes of step S61 and step S63 are the same as in the first embodiment.\n\nStep S62: Ciphertext Conversion Process\n\nIt is assumed here that a ciphertext EC resulting from performing a homomorphic operation using as input a ciphertext encrypted with the public key pki for each integer i of i=1, . . . , s is to be converted into a ciphertext that can be decrypted with the decryption key skj generated by the j-th key generation device 30. That is, index = [1∥ . . . ∥s].\n\nThe ciphertext conversion unit 712 executes a ReEnc algorithm, taking as input the conversion key rki→j for each integer i of i=1, . . . , s and the ciphertext TC, which is the ciphertext EC resulting from performing a homomorphic operation, so as to generate a converted ciphertext RC, as indicated in Formula 18.\nReEnc(rk_{1→j}, . . . , rk_{s→j}, TC := (CT, F, D, index)):\nCT* := [rk_{1→j}∥ . . . ∥rk_{s→j}]·(I_{ns} ⊗ g^{−1})[CT],\nF* := F,\nD* := (I_{mL} ⊗ [rk_{1→j}∥ . . .
∥rk_{s→j}])·(I_{ns} ⊗ g^{−1})[D],\nRC := (CT*, F*, D*, j).  [Formula 18]\n\nOperation of the decryption device 80 according to the second embodiment will be described with reference to FIG. 15.\n\nThe processes of step S71 and step S73 are the same as in the first embodiment.\n\nStep S72: Decryption Process\n\nThe decryption unit 812 executes the Dec algorithm in the multi-key homomorphic encryption, taking as input the ciphertext TC, so as to decrypt the ciphertext TC to generate a plaintext M′, as indicated in Formula 19.\nDec(sk, TC := (CT, F, D, index)):\nM′ := ⌈t·ct/2^{L−2}⌋.  [Formula 19]\n\nIn Formula 19, ct is the column vector in the second column from the right in the element CT, and ⌈t·ct/2^{L−2}⌋ signifies the integer closest to t·ct/2^{L−2}. That is, the integer closest to t·ct/2^{L−2} is the plaintext M′.\n\nEffects of Second Embodiment\n\nThe privacy-preserving information processing system 10 according to the second embodiment can realize a scheme by which a ciphertext on which a homomorphic operation has been performed can be converted into a ciphertext that can be decrypted with the decryption key skj, by employing a specific multi-key homomorphic encryption scheme.\n\nIn the first embodiment, the ciphertext conversion device 70 converts a ciphertext by the homomorphic operation algorithm. In contrast to this, in the second embodiment, the ciphertext conversion device 70 converts a ciphertext without using the homomorphic operation algorithm, so that the amount of computation can be reduced.\n\nTHIRD EMBODIMENT\n\nIn a third embodiment, a specific scheme based on a multi-key homomorphic encryption scheme described in Non-Patent Literature 1 will be described, as in the second embodiment. In the third embodiment, the scheme based on the small-ciphertext construction described in Non-Patent Literature 1 will be described.
In the third embodiment, differences from the second embodiment will be described, and description of the same portions will be omitted.

Description of Operation

Operation of the key generation device 30 according to the third embodiment will be described with reference to FIG. 10.

The process of step S22 is the same as in the second embodiment.

Step S21: Key Generation Process

The key generation unit 312 executes the KG algorithm in the multi-key homomorphic encryption, so as to generate a pair of a decryption key sk and a public key pk, as indicated in Formula 20.
KG(pp):
1. t←χn−1, t:=(−t,1)∈Zn, e←χm,
b:=tA+e,
2. R←{0,1}m×n2L, P:=AR+(In⊗t⊗g)∈Zqn×n2L,
3. choose an LWE matrix D′∈ZqnmL×n2L such that (ImL⊗t)D′≈0,
D:=D′+(R⊗gt⊗ent),
sk:=t, pk:=(b,P,D,A).  [Formula 20]

Operation of the conversion key generation device 40 according to the third embodiment will be described with reference to FIG. 11.

The processes of step S31 and step S33 are the same as in the second embodiment.

Step S32: Conversion Key Generation Process

The conversion key generation unit 412 executes the RKGen algorithm in the multi-key homomorphic encryption, taking as input the decryption key ski and the public key pkj, so as to encrypt the decryption key ski with the public key pkj to generate a conversion key rki→j, as indicated in Formula 21.
RKGen(ski,pkj):
Bj:=A−ent⊗bj,
Xi←{0,1}m×nL,
rki→j:=BjXi+(0(n−1)×nL; ti·(In⊗g)).  [Formula 21]

Operation of the encryption device 50 according to the third embodiment will be described with reference to FIG.
12.

The processes of step S41 and step S43 are the same as in the second embodiment.

Step S42: Encryption Process

It is assumed here that the plaintext M is to be encrypted with the public key pki generated by the i-th key generation device 30.

The encryption unit 512 executes the Enc algorithm in the multi-key homomorphic encryption, taking as input the plaintext M and the public key pki, so as to encrypt the plaintext M with the public key pki to generate a ciphertext C, as indicated in Formula 22.
Enc(pki,M∈{0,1}):
index:=i,
B:=A−ent⊗b,
XC←{0,1}m×nL, C:=BXC∈Zqn×nL,
CT:=C+M(In⊗g)∈Zqn×nL,
C:=(CT,index).  [Formula 22]

Operation of the homomorphic operation device 60 according to the third embodiment will be described with reference to FIG. 13.

The processes of step S51 and step S53 are the same as in the second embodiment.

Step S52: Homomorphic Operation Process

With regard to each ciphertext TC input in step S51, the homomorphic operation unit 612 executes the Extend algorithm in the multi-key homomorphic encryption, taking as input the ciphertext TC concerned and the public key pki, so as to compute a ciphertext C′, as indicated in Formula 23.
Extend(pki,C):
n′=ns, CT∈Zn′×n′L, compute (i) or (ii),
(i) index′=[index∥i]
1. Y′:=(Y; Y*), Y:=Ik⊗P*,
b:=[b1∥ . . . ∥bs],
s:=−[b](Ik⊗Im⊗g−t)∈{0,1}kmL,
Y*:=(s⊗In)·(Ik⊗D*),
2. C̄:=C·(ent⊗IL),
S:=(Ink⊗In⊗g−1)·(C̄⊗In)∈{0,1}n2kL×nL,
Let Π be the permutation matrix for which (g⊗t*)Π=(t*⊗g) for any t*,
X′:=Y′·S·Π,
3. CT′:=(CT; X′).
(ii) index′=[i∥index]
1. Y′:=(Y*; Y), Y:=Ik⊗P*,
b:=[b1∥ . . . ∥bs],
s:=−[b](Ik⊗Im⊗g−t)∈{0,1}kmL,
Y*:=(s⊗In)·(Ik⊗D*),
2. C̄:=C·(ent⊗IL),
S:=(Ink⊗In⊗g−1)·(C̄⊗In)∈{0,1}n2kL×nL,
Let Π be the permutation matrix for which (g⊗t*)Π=(t*⊗g) for any t*,
X′:=Y′·S·Π,
3. CT′:=(X′; CT).
C′=(CT′,index′).  [Formula 23]

In Formula 23, s is the number of elements in index.

The homomorphic operation unit 612 executes the Eval algorithm in the multi-key homomorphic encryption, so as to generate a ciphertext EC resulting from performing the operation f on the ciphertext TC.

For example, the homomorphic operation unit 612 adds C1 and C2, which are two ciphertexts TC, as indicated in Formula 24.
Eval(C1=(CT1,index),C2=(CT2,index)):
EC:=(CT1+CT2,index).  [Formula 24]

Alternatively, for example, the homomorphic operation unit 612 multiplies C1 and C2, which are two ciphertexts TC, as indicated in Formula 25.
Eval(C1=(CT1,index),C2=(CT2,index)):
Sct:=(In′⊗g−1)[CT2]∈{0,1}n′L×n′L,
CTmul:=CT1·Sct,
indexmul:=index,
EC=(CTmul,indexmul).  [Formula 25]

Operation of the ciphertext conversion device 70 according to the third embodiment will be described with reference to FIG. 14.

The processes of step S61 and step S63 are the same as in the second embodiment.

Step S62: Ciphertext Conversion Process

It is assumed here that a ciphertext EC resulting from performing a homomorphic operation using as input a ciphertext encrypted with the public key pki for each integer i of i=1, . . . , s is to be converted into a ciphertext that can be decrypted with the decryption key skj generated by the j-th key generation device 30.

The ciphertext conversion unit 712 executes the ReEnc algorithm, taking as input the conversion key rki→j for each integer i of i=1, . . .
, s and the ciphertext TC, which is the ciphertext EC resulting from performing a homomorphic operation, so as to generate a converted ciphertext RC, as indicated in Formula 26.
ReEnc(rk1→j, . . . , rks→j,TC:=(CT,index)):
CT*:=[rk1→j∥ . . . ∥rks→j]·(Ins⊗g−1)[CT],
RC:=(CT*,j).  [Formula 26]

Operation of the decryption device 80 according to the third embodiment will be described with reference to FIG. 15.

The processes of step S71 and step S73 are the same as in the second embodiment.

Step S72: Decryption Process

The decryption unit 812 executes the Dec algorithm in the multi-key homomorphic encryption, taking as input the ciphertext TC, so as to decrypt the ciphertext TC to generate a plaintext M′, as indicated in Formula 27.
Dec(sk,TC:=(CT,index)):
M′:=“t·ct/2L−2”.  [Formula 27]

In Formula 27, ct is a column vector in the second column from the right in the element CT, and “t·ct/2L−2” signifies an integer closest to t·ct/2L−2. That is, the integer closest to t·ct/2L−2 is the plaintext M′.

Effects of Third Embodiment

As described above, the privacy-preserving information processing system 10 according to the third embodiment can realize a scheme by which a ciphertext on which a homomorphic operation has been performed can be converted into a ciphertext that can be decrypted with the decryption key skj, by employing a specific multi-key homomorphic encryption scheme.

In the scheme realized by the privacy-preserving information processing system 10 according to the third embodiment, the number of elements in the public key pk is greater but the number of elements in the ciphertext C is smaller than those in the scheme realized by the privacy-preserving information processing system 10 according to the second embodiment.

REFERENCE SIGNS LIST

10: privacy-preserving information processing system, 20: common parameter generation device, 21: processor, 22: memory, 23: storage, 24: communication interface, 25: electronic
circuit, 211: acquisition unit, 212: common parameter generation unit, 213: output unit, 231: parameter storage unit, 30: key generation device, 31: processor, 32: memory, 33: storage, 34: communication interface, 35: electronic circuit, 311: acquisition unit, 312: key generation unit, 313: output unit, 331: key storage unit, 40: conversion key generation device, 41: processor, 42: memory, 43: storage, 44: communication interface, 45: electronic circuit, 411: key acquisition unit, 412: conversion key generation unit, 413: output unit, 431: key storage unit, 50: encryption device, 51: processor, 52: memory, 53: storage, 54: communication interface, 55: electronic circuit, 511: acquisition unit, 512: encryption unit, 513: output unit, 531: key storage unit, 60: homomorphic operation device, 61: processor, 62: memory, 63: storage, 64: communication interface, 65: electronic circuit, 611: acquisition unit, 612: homomorphic operation unit, 613: output unit, 631: key storage unit, 632: ciphertext storage unit, 70: ciphertext conversion device, 71: processor, 72: memory, 73: storage, 74: communication interface, 75: electronic circuit, 711: acquisition unit, 712: ciphertext conversion unit, 713: output unit, 731: key storage unit, 80: decryption device, 81: processor, 82: memory, 83: storage, 84: communication interface, 85: electronic circuit, 811: acquisition unit, 812: decryption unit, 813: output unit, 831: key storage unit, 90: transmission channels

Claims

1.
A conversion key generation device for use in a multi-key homomorphic encryption system comprising:

processing circuitry to:
receive and store a plurality of key pairs, each pair comprising a respective decryption key ski and a public key pki from each of a plurality of key generation devices, each of the plurality of key generation devices being associated on a one-to-one basis with a decryption device;
acquire a source decryption key ski in a pair of a conversion source from the plurality of stored pairs and a target public key pkj in a pair of a conversion target from the plurality of stored pairs, the conversion source pair and the conversion target pair being associated with different decryption devices; and
encrypt the acquired source decryption key ski with the target public key pkj, using an encryption algorithm in multi-key homomorphic encryption, so as to generate a conversion key rki→j for converting a ciphertext encrypted with a source public key pki in the pair of the conversion source into a converted ciphertext RC decryptable with a target decryption key skj in the pair of the conversion target.

2. The conversion key generation device according to claim 1, wherein the processing circuitry
acquires a decryption key ski including an element ti as indicated in Formula 101, and acquires a public key pkj including an element bj and an element A as indicated in Formula 102, and
encrypts the decryption key ski with the public key pkj, as indicated in Formula 103, so as to generate the conversion key rki→j
ti←χn−1, ti:=(−ti,1)∈Zn  [Formula 101]
tj←χn−1, tj:=(−tj,1)∈Zn, ej←χm, A←Zqn×m, bj:=tjA+ej  [Formula 102]
Bj:=A−ent⊗bj, Xi←{0,1}m×nL,
rki→j:=BjXi+(0(n−1)×nL; ti·(In⊗g))  [Formula 103]
where
n and χ are LWE parameters,
q is an LWE parameter,
m is a natural number,
In is an n×n identity matrix, and g:=(1,2,...,2L−1).

3.
A ciphertext conversion device comprising:

processing circuitry to:
acquire a ciphertext C encrypted with a public key pki in a pair of a conversion source, out of a plurality of pairs of a decryption key and a public key, the ciphertext C including an element CT and an element index as indicated in Formula 104;
acquire a conversion key rki→j, as indicated in Formula 105, resulting from encrypting a decryption key ski in the pair of the conversion source with a public key pkj in a pair of a conversion target out of the plurality of pairs; and
convert, using the acquired conversion key rki→j, the acquired ciphertext C into a converted ciphertext RC decryptable with a decryption key skj in the pair of the conversion target, the converted ciphertext RC including an element CT* as indicated in Formula 106 and an element j
index:=i, A←Zqn×m, ti←χn−1, ti:=(−ti,1)∈Zn, bi:=tiA+ei, Bi:=A−ent⊗bi, XC←{0,1}m×nL, C:=BXC∈Zqn×nL, CT:=C+M(In⊗g)∈Zqn×nL  [Formula 104]
ti←χn−1, ti:=(−ti,1)∈Zn, tj←χn−1, tj:=(−tj,1)∈Zn, ej←χm, A←Zqn×m, bj:=tjA+ej, Bj:=A−ent⊗bj, Xi←{0,1}m×nL,
rki→j:=BjXi+(0(n−1)×nL; ti·(In⊗g))  [Formula 105]
CT*:=[rki→j]·(In⊗g−1)[CT]  [Formula 106]
where
n, q, and χ are LWE parameters,
m is a natural number,
L is a minimum integer equal to or more than log q,
In is an n×n identity matrix, and g:=(1,2,...,2L−1).

4. The ciphertext conversion device according to claim 3, wherein the processing circuitry decrypts the ciphertext C with the decryption key ski included in the conversion key by a homomorphic operation, so as to convert the ciphertext C into the converted ciphertext RC.

5.
A ciphertext conversion device comprising:

processing circuitry to:
acquire a ciphertext EC generated by performing a homomorphic operation on a ciphertext Ci encrypted with a public key pki in a pair of a conversion source, out of a plurality of pairs of a decryption key and a public key, the ciphertext EC including an element CT, an element F, and an element D, the ciphertext Ci including an element CTi and an element index, as indicated in Formula 107, for each integer i of i=1,..., s, where s is an integer of 1 or more;
acquire a conversion key rki→j for each integer i of i=1,..., s, as indicated in Formula 108, resulting from encrypting a decryption key ski in the pair of the conversion source with a public key pkj in a pair of a conversion target out of the plurality of pairs; and
convert, using the conversion key rki→j for each integer i of i=1,..., s, the acquired ciphertext EC into a converted ciphertext RC decryptable with a decryption key skj in the pair of the conversion target, the converted ciphertext RC including an element CT* as indicated in Formula 109 and an element j
index:=i, A←Zqn×m, ti←χn−1, ti:=(−ti,1)∈Zn, bi:=tiA+ei, Bi:=A−ent⊗bi, XCi←{0,1}m×nL, Ci:=BXCi∈Zqn×nL, CTi:=Ci+Mi(In⊗g)∈Zqn×nL  [Formula 107]
ti←χn−1, ti:=(−ti,1)∈Zn, tj←χn−1, tj:=(−tj,1)∈Zn, ej←χm, A←Zqn×m, bj:=tjA+ej, Bj:=A−ent⊗bj, Xi←{0,1}m×nL,
rki→j:=BjXi+(0(n−1)×nL; ti·(In⊗g))  [Formula 108]
CT*:=[rk1→j∥ . . . ∥rks→j]·(Ins⊗g−1)[CT]  [Formula 109]
where
n, q, and χ are LWE parameters,
m is a natural number,
L is a minimum integer equal to or more than log q,
In is an n×n identity matrix, and g:=(1,2,...,2L−1).

6.
The ciphertext conversion device according to claim 5, wherein the processing circuitry decrypts the ciphertext EC with the decryption key ski included in the conversion key rki→j by a homomorphic operation, so as to convert the ciphertext EC into the converted ciphertext RC.

7. A privacy-preserving information processing system comprising:

a conversion key generation device configured to receive and store a plurality of key pairs, each pair comprising a respective decryption key ski and a public key pki from each of a plurality of key generation devices, each of the plurality of key generation devices being associated on a one-to-one basis with a decryption device; acquire a source decryption key ski in a pair of a conversion source from the plurality of stored pairs and a target public key pkj in a pair of a conversion target from the plurality of stored pairs, the conversion source pair and the conversion target pair being associated with different decryption devices; and encrypt the source decryption key ski with the target public key pkj using an encryption algorithm in multi-key homomorphic encryption, so as to generate a conversion key rki→j; and
a ciphertext conversion device configured to convert, using the conversion key rki→j generated by the conversion key generation device, a ciphertext encrypted with a source public key pki in the pair of the conversion source out of the plurality of pairs into a converted ciphertext RC decryptable with a target decryption key skj in the pair of the conversion target.

8.
A conversion key generation method comprising:

receiving and storing a plurality of key pairs, each pair comprising a respective decryption key ski and a public key pki from each of a plurality of key generation devices, each of the plurality of key generation devices being associated on a one-to-one basis with a decryption device;
acquiring a source decryption key ski in a pair of a conversion source from the plurality of stored pairs and a target public key pkj in a pair of a conversion target from the plurality of stored pairs, the conversion source pair and the conversion target pair being associated with different decryption devices; and
encrypting the source decryption key ski with the target public key pkj, using an encryption algorithm in multi-key homomorphic encryption, so as to generate a conversion key rki→j for converting a ciphertext encrypted with a source public key pki in the pair of the conversion source into a converted ciphertext RC decryptable with a target decryption key skj in the pair of the conversion target.

9.
A non-transitory computer readable medium storing a conversion key generation program for causing a computer to execute:

a pair receiving and storing process to receive and store a plurality of key pairs, each pair comprising a respective decryption key ski and a public key pki from each of a plurality of key generation devices, each of the plurality of key generation devices being associated on a one-to-one basis with a decryption device;
a key acquisition process to acquire a source decryption key ski in a pair of a conversion source from the plurality of stored pairs and a target public key pkj in a pair of a conversion target from the plurality of stored pairs, the conversion source pair and the conversion target pair being associated with different decryption devices; and
a conversion key generation process to encrypt the source decryption key ski acquired by the key acquisition process with the target public key pkj, using an encryption algorithm in multi-key homomorphic encryption, so as to generate a conversion key rki→j for converting a ciphertext encrypted with a source public key pki in the pair of the conversion source into a converted ciphertext RC decryptable with a target decryption key skj in the pair of the conversion target.

10. A ciphertext conversion method comprising:
ti←χn−1, ti:=(−ti,1)∈Zn, tj←χn−1, tj:=(−tj,1)∈Zn, ej←χm, A←Zqn×m, bj:=tjA+ej, Bj:=A−ent⊗bj, Xi←{0,1}m×nL,
rki→j:=BjXi+(0(n−1)×nL; ti·(In⊗g))  [Formula 111]
CT*:=[rki→j]·(In⊗g−1)[CT].
[Formula 112]

acquiring a ciphertext C encrypted with a public key pki in a pair of a conversion source, out of a plurality of pairs of a decryption key and a public key, the ciphertext C including an element CT and an element index as indicated in Formula 110;
acquiring a conversion key rki→j, as indicated in Formula 111, resulting from encrypting a decryption key ski in the pair of the conversion source with a public key pkj in a pair of a conversion target out of the plurality of pairs; and
converting, using the conversion key rki→j, the ciphertext C into a converted ciphertext RC decryptable with a decryption key skj in the pair of the conversion target, the converted ciphertext RC including an element CT* as indicated in Formula 112 and an element j
index:=i, A←Zqn×m, ti←χn−1, ti:=(−ti,1)∈Zn, bi:=tiA+ei, Bi:=A−ent⊗bi, XC←{0,1}m×nL, C:=BXC∈Zqn×nL, CT:=C+M(In⊗g)∈Zqn×nL  [Formula 110]
where
n, q, and χ are LWE parameters,
m is a natural number,
L is a minimum integer equal to or more than log q,
In is an n×n identity matrix, and g:=(1,2,...,2L−1)

11. A ciphertext conversion method comprising:
ti←χn−1, ti:=(−ti,1)∈Zn, tj←χn−1, tj:=(−tj,1)∈Zn, ej←χm, A←Zqn×m, bj:=tjA+ej, Bj:=A−ent⊗bj, Xi←{0,1}m×nL,
rki→j:=BjXi+(0(n−1)×nL; ti·(In⊗g))  [Formula 114]
CT*:=[rk1→j∥ . . . ∥rks→j]·(Ins⊗g−1)[CT].
[Formula 115]

acquiring a ciphertext EC generated by performing a homomorphic operation on a ciphertext Ci encrypted with a public key pki in a pair of a conversion source, out of a plurality of pairs of a decryption key and a public key, the ciphertext EC including an element CT, an element F, and an element D, the ciphertext Ci including an element CTi and an element index, as indicated in Formula 113, for each integer i of i=1,..., s, where s is an integer of 1 or more;
acquiring a conversion key rki→j for each integer i of i=1,..., s, as indicated in Formula 114, resulting from encrypting a decryption key ski in the pair of the conversion source with a public key pkj in a pair of a conversion target out of the plurality of pairs; and
converting, using the conversion key rki→j for each integer i of i=1,..., s, the ciphertext EC into a converted ciphertext RC decryptable with a decryption key skj in the pair of the conversion target, the converted ciphertext RC including an element CT* as indicated in Formula 115 and an element j
index:=i, A←Zqn×m, ti←χn−1, ti:=(−ti,1)∈Zn, bi:=tiA+ei, Bi:=A−ent⊗bi, XCi←{0,1}m×nL, Ci:=BXCi∈Zqn×nL, CTi:=Ci+Mi(In⊗g)∈Zqn×nL  [Formula 113]
where
n, q, and χ are LWE parameters,
m is a natural number,
L is a minimum integer equal to or more than log q,
In is an n×n identity matrix, and g:=(1,2,...,2L−1)

12. A non-transitory computer readable medium storing a ciphertext conversion program for causing a computer to execute:
ti←χn−1, ti:=(−ti,1)∈Zn, tj←χn−1, tj:=(−tj,1)∈Zn, ej←χm, A←Zqn×m, bj:=tjA+ej, Bj:=A−ent⊗bj, Xi←{0,1}m×nL,
rki→j:=BjXi+(0(n−1)×nL; ti·(In⊗g))  [Formula 117]
CT*:=[rki→j]·(In⊗g−1)[CT].
[Formula 118]

a ciphertext acquisition process to acquire a ciphertext C encrypted with a public key pki in a pair of a conversion source, out of a plurality of pairs of a decryption key and a public key, the ciphertext C including an element CT and an element index as indicated in Formula 116;
a key acquisition process to acquire a conversion key rki→j, as indicated in Formula 117, resulting from encrypting a decryption key ski in the pair of the conversion source with a public key pkj in a pair of a conversion target out of the plurality of pairs; and
a ciphertext conversion process to convert, using the conversion key rki→j acquired by the key acquisition process, the ciphertext C acquired by the ciphertext acquisition process into a converted ciphertext RC decryptable with a decryption key skj in the pair of the conversion target, the converted ciphertext RC including an element CT* as indicated in Formula 118 and an element j
index:=i, A←Zqn×m, ti←χn−1, ti:=(−ti,1)∈Zn, bi:=tiA+ei, Bi:=A−ent⊗bi, XC←{0,1}m×nL, C:=BXC∈Zqn×nL, CT:=C+M(In⊗g)∈Zqn×nL  [Formula 116]
where
n, q, and χ are LWE parameters,
m is a natural number,
L is a minimum integer equal to or more than log q,
In is an n×n identity matrix, and g:=(1,2,...,2L−1)

13. A non-transitory computer readable medium storing a ciphertext conversion program for causing a computer to execute:
ti←χn−1, ti:=(−ti,1)∈Zn, tj←χn−1, tj:=(−tj,1)∈Zn, ej←χm, A←Zqn×m, bj:=tjA+ej, Bj:=A−ent⊗bj, Xi←{0,1}m×nL,
rki→j:=BjXi+(0(n−1)×nL; ti·(In⊗g))  [Formula 120]
CT*:=[rk1→j∥ . . . ∥rks→j]·(Ins⊗g−1)[CT].
[Formula 121]

a ciphertext acquisition process to acquire a ciphertext EC generated by performing a homomorphic operation on a ciphertext Ci encrypted with a public key pki in a pair of a conversion source, out of a plurality of pairs of a decryption key and a public key, the ciphertext EC including an element CT, an element F, and an element D, the ciphertext Ci including an element CTi and an element index, as indicated in Formula 119, for each integer i of i=1,..., s, where s is an integer of 1 or more;
a key acquisition process to acquire a conversion key rki→j for each integer i of i=1,..., s, as indicated in Formula 120, resulting from encrypting a decryption key ski in the pair of the conversion source with a public key pkj in a pair of a conversion target out of the plurality of pairs; and
a ciphertext conversion process to convert, using the conversion key rki→j for each integer i of i=1,..., s acquired by the key acquisition process, the ciphertext EC acquired by the ciphertext acquisition process into a converted ciphertext RC decryptable with a decryption key skj in the pair of the conversion target, the converted ciphertext RC including an element CT* as indicated in Formula 121 and an element j
index:=i, A←Zqn×m, ti←χn−1, ti:=(−ti,1)∈Zn, bi:=tiA+ei, Bi:=A−ent⊗bi, XCi←{0,1}m×nL, Ci:=BXCi∈Zqn×nL, CTi:=Ci+Mi(In⊗g)∈Zqn×nL  [Formula 119]
where
n, q, and χ are LWE parameters,
m is a natural number,
L is a minimum integer equal to or more than log q,
In is an n×n identity matrix, and g:=(1,2,...,2L−1)

Patent History
Patent number: 11374742
Type: Grant
Filed: Dec 28, 2017
Date of Patent: Jun 28, 2022
Patent Publication Number: 20200344049
Assignee: Mitsubishi Electric Corporation (Tokyo)
Inventors: Satoshi Yasuda (Tokyo), Yoshihiro Koseki (Tokyo), Yutaka Kawai (Tokyo), Ryo Hiromasa (Tokyo)
Primary Examiner: Darshan I Dhruv
Application Number: 16/761,731
Classifications
Current U.S.
Class: Nonlinear (e.g., Pseudorandom) (380/46)
International Classification: H04L 9/00 (20220101); H04L 9/08 (20060101); H04L 9/14 (20060101); H04L 9/32 (20060101)
https://uk.mathworks.com/matlabcentral/cody/problems/143-cannon-ball/solutions/1914214
Cody

# Problem 143. Cannon Ball

Solution 1914214

Submitted on 28 Aug 2019 by Oliver Warlow
This solution is locked. To view this solution, you need to provide a solution of the same size or smaller.

### Test Suite

Test Status Code Input and Output
1   Pass
g=32; h=10000; v_correct = 800; assert(isequal(canon(g,h),v_correct))

ans = 800

2   Pass
g=32; h=100; v_correct = 80; assert(isequal(canon(g,h),v_correct))

ans = 80

3   Pass
g=32; h=4; v_correct = 16; assert(isequal(canon(g,h),v_correct))

ans = 16
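The submitted solution itself is locked, but all three test cases are consistent with the energy-conservation relation v = √(2gh) for the launch speed needed to reach height h under gravity g. A hypothetical sketch (in Python rather than MATLAB, and not the locked solution):

```python
import math

def canon(g, h):
    # Minimum launch speed to just reach height h under gravity g:
    # (1/2)*v^2 = g*h  =>  v = sqrt(2*g*h).
    return math.sqrt(2 * g * h)

# The three test cases above:
print(canon(32, 10000))  # 800.0
print(canon(32, 100))    # 80.0
print(canon(32, 4))      # 16.0
```

Each test input happens to make 2gh a perfect square, so the results are exact despite floating-point square roots.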
https://stackoverflow.com/questions/4983885/is-there-a-normal-equalq-function-in-mathematica
# Is there a "normal" EqualQ function in Mathematica?

On the documentation page for `Equal` we read that

Approximate numbers with machine precision or higher are considered equal if they differ in at most their last seven binary digits (roughly their last two decimal digits).

Here are examples (32 bit system; for 64 bit system add some more zeros in the middle):

```
In:= 1.0000000000000021 == 1.0000000000000022
1.0000000000000021 === 1.0000000000000022

Out= True

Out= True
```

I'm wondering: is there a "normal" analog of the `Equal` function in Mathematica that does not drop the last 7 binary digits?

• Would `SameQ` be ok? Maybe after truncating to the number of digits that you want to keep. Feb 13, 2011 at 11:49
• @Simon Try `1.00000000000000000022 === 1.00000000000000000021`. You will see that it is not OK. :( Feb 13, 2011 at 12:15
• A guess...perhaps Mathematica doesn't consider the last digit to be a significant digit at default precision. You could use backtick notation to indicate that precision is high enough to make all digits significant -- 1.00000000000000000022`100===1.00000000000000000021`100 Feb 13, 2011 at 20:14
• @Alexey - that's why I said you'd have to truncate to the number of digits that you want to compare. Feb 13, 2011 at 21:27
• @Alexey btw, hardware floating point can give non-deterministic results, perhaps that's the reason `===` drops digits -- thenumericalalgorithmsgroup.blogspot.com/2011/02/… Mar 3, 2011 at 22:19

Thanks to a recent post on the official newsgroup by Oleksandr Rasputinov, I have now learned two undocumented functions which control the tolerance of `Equal` and `SameQ`: `$EqualTolerance` and `$SameQTolerance`. In Mathematica version 5 and earlier these functions live in the `Experimental`` context and are well documented: $EqualTolerance, $SameQTolerance.
Starting from version 6, they are moved to the `Internal`` context and become undocumented but still work and even have built-in diagnostic messages which appear when one tries to assign them illegal values:

```
In:= Internal`$SameQTolerance = a

During evaluation of In:= Internal`$SameQTolerance::tolset:
Cannot set Internal`$SameQTolerance to a; value must be a real
number or +/- Infinity.

Out= a
```

Citing Oleksandr Rasputinov:

Internal`$EqualTolerance ... takes a machine real value indicating the number of decimal digits' tolerance that should be applied, i.e. Log[2]/Log[10] times the number of least significant bits one wishes to ignore.

In this way, setting `Internal`$EqualTolerance` to zero will force `Equal` to consider numbers equal only when they are identical in all binary digits (not considering out-of-`Precision` digits):

```
In:= Block[{Internal`$EqualTolerance = 0},
 1.0000000000000021 == 1.0000000000000022]
Out= False

In:= Block[{Internal`$EqualTolerance = 0},
 1.00000000000000002 == 1.000000000000000029]
Block[{Internal`$EqualTolerance = 0},
 1.000000000000000020 == 1.000000000000000029]
Out= True
Out= False
```

Note the following case:

```
In:= Block[{Internal`$EqualTolerance = 0},
 1.0000000000000020 == 1.0000000000000021]
RealDigits[1.0000000000000020, 2] === RealDigits[1.0000000000000021, 2]
Out= True
Out= True
```

In this case both numbers have `MachinePrecision` which effectively is

```
In:= $MachinePrecision
Out= 15.9546
```

(`53*Log[10, 2]`).
With such precision these numbers are identical in all binary digits:\n\n``````In:= RealDigits[1.0000000000000020` \$MachinePrecision, 2] ===\nRealDigits[1.0000000000000021` \$MachinePrecision, 2]\nOut= True\n``````\n\nIncreasing precision to 16 makes them different arbitrary-precision numbers:\n\n``````In:= RealDigits[1.0000000000000020`16, 2] ===\nRealDigits[1.0000000000000021`16, 2]\nOut= False\n\nIn:= Row@First@RealDigits[1.0000000000000020`16,2]\nRow@First@RealDigits[1.0000000000000021`16,2]\nOut= 100000000000000000000000000000000000000000000000010010\nOut= 100000000000000000000000000000000000000000000000010011\n``````\n\nBut unfortunately `Equal` still fails to distinguish them:\n\n``````In:= Block[{Internal`\$EqualTolerance = 0},\n{1.00000000000000002`16 == 1.000000000000000021`16,\n1.00000000000000002`17 == 1.000000000000000021`17,\n1.00000000000000002`18 == 1.000000000000000021`18}]\nOut= {True, True, False}\n``````\n\nThere is an infinite number of such cases:\n\n``````In:= Block[{Internal`\$EqualTolerance = 0},\nCases[Table[a = SetPrecision[1., n];\nb = a + 10^-n; {n, a == b, RealDigits[a, 2] === RealDigits[b, 2],\nOrder[a, b] == 0}, {n, 15, 300}], {_, True, False, _}]] // Length\n\nOut= 192\n``````\n\nInterestingly, sometimes `RealDigits` returns identical digits while `Order` shows that the internal representations of the expressions are not identical:\n\n``````In:= Block[{Internal`\$EqualTolerance = 0},\nCases[Table[a = SetPrecision[1., n];\nb = a + 10^-n; {n, a == b, RealDigits[a, 2] === RealDigits[b, 2],\nOrder[a, b] == 0}, {n, 15, 300}], {_, _, True, False}]] // Length\n\nOut= 64\n``````\n\nBut it seems that the opposite situation never happens:\n\n``````In:=\nBlock[{Internal`\$EqualTolerance = 0},\nCases[Table[a = SetPrecision[1., n];\nb = a + 10^-n; {n, a == b, RealDigits[a, 2] === RealDigits[b, 2],\nOrder[a, b] == 0}, {n, 15, 3000}], {_, _, False, True}]] // Length\n\nOut= 0\n``````\n• Thank you for finding and posting this.
+1 (Why did this not have any votes?) Aug 18, 2011 at 19:09\n• @Mr.Wizard Added further observations. It seems that `Internal`\$EqualTolerance` is not as reliable as one may expect... Aug 19, 2011 at 10:16\n• Strongly relevant MathGroups post by Itai Seggev (Wolfram Research): groups.google.com/d/msg/comp.soft-sys.math.mathematica/… Nov 2, 2013 at 7:19\n• Thanks, I'll take a look. Nov 2, 2013 at 9:29\n\nTry this:\n\n``````realEqual[a_, b_] := SameQ @@ RealDigits[{a, b}, 2, Automatic]\n``````\n\nThe choice of base 2 is crucial to ensure that you are comparing the internal representations.\n\n``````In:= realEqual[1.0000000000000021, 1.0000000000000021]\nOut= True\n\nIn:= realEqual[1.0000000000000021, 1.0000000000000022]\nOut= False\n\nIn:= realEqual[\n1.000000000000000000000000000000000000000000000000000000000000000022\n, 1.000000000000000000000000000000000000000000000000000000000000000023\n]\nOut= False\n``````\n``````In:= MyEqual[x_, y_] := Order[x, y] == 0\n\nIn:= MyEqual[1.0000000000000021, 1.0000000000000022]\n\nOut= False\n\nIn:= MyEqual[1.0000000000000021, 1.0000000000000021]\n\nOut= True\n``````\n\nThis tests whether two objects are identical; since 1.0000000000000021 and 1.000000000000002100 differ in precision, they won't be considered identical.\n\n• Precision in Mathematica is separate from digits shown. E.g., 1.01`16 and 1.01000`16 have the same precision.\n– Timo\nFeb 13, 2011 at 20:47\n• @Timo: Precision[1.0000000000000021] is MachinePrecision (1.0000000000000021`) but Precision[1.000000000000002100] is 18 (1.000000000000002100`18). The representation does affect the internal representation. Try FullForm[] them. Feb 13, 2011 at 20:52\n• @Kenny: And yet both 1.1 and 1.10000 are MachinePrecision ;-).
My interpretation of the OP is that he wants to compare numerical values, not just what the numbers look like (SameQ@@ToString/@{#1,#2}& would suffice for that, or indeed your Order[]).\n– Timo\nFeb 13, 2011 at 21:02\n• @Timo: That's because 1.1 and 1.100000 have less than ~16 digits which can be represented by MachinePrecision (IEEE double). Feb 13, 2011 at 21:07\n• `Order` -- good idea! A nice simple, built-in function. Granted, it does not ignore trailing zeroes but, in general, comparing close numbers with different precisions is a tricky business. It frequently requires detailed numerical analysis, informed by the specific application. If you want that level of control, then a `RealDigits` solution like @Timo's will likely be required. But I like the simplicity of `Order`, letting Mathematica's ordering policy handle the gnarly cases. +1 Feb 14, 2011 at 0:47\n\nI'm not aware of an already defined operator. But you may define one, for example:\n\n``````longEqual[x_, y_] := Block[{\$MaxPrecision = 20, \$MinPrecision = 20},\nEqual[x - y, 0.]]\n``````\n\nSuch as:\n\n``````longEqual[1.00000000000000223, 1.00000000000000223]\nTrue\nlongEqual[1.00000000000000223, 1.00000000000000222]\nFalse\n``````\n\nEdit\n\nIf you want to generalize for an arbitrary number of digits, you can do for example:\n\n``````longEqual[x_, y_] :=\nBlock[{\n\\$MaxPrecision = Max @@ StringLength /@ ToString /@ {x, y},\n\\$MinPrecision = Max @@ StringLength /@ ToString /@ {x, y}},\nEqual[x - y, 0.]]\n``````\n\nHTH!\n\n• Thank you.
But adding more zeros always breaks this approach: `longEqual[1.\\ 0000000000000000000000000000000000000000000000000000000000000000000000\\ 0000000000000000000000000023, \\ 1.00000000000000000000000000000000000000000000000000000000000000000000\\ 000000000000000000000000000022]` Feb 13, 2011 at 17:01\n• It works better but fails when at least one of the numbers ends with `NumberMark`: longEqual[1.0000000000000223`, 1.0000000000000222] Feb 13, 2011 at 18:20\n• @Alexey If you want to preserve precision you should use 1.`55 and not 1.` alone Feb 13, 2011 at 18:38\n\nI propose a strategy that uses `RealDigits` to compare the actual digits of the numbers. The only tricky bit is stripping out trailing zeroes.\n\n``````trunc = {Drop[First@#, Plus @@ First /@ {-Dimensions@First@#,\nLast@Position[First@#, n_?(# != 0 &)]}], Last@#} &@ RealDigits@# &;\nexactEqual = SameQ @@ trunc /@ {#1, #2} &;\n\nIn := exactEqual[1.000000000000000000000000000000000000000000000000000111,\n1.000000000000000000000000000000000000000000000000000111000]\nOut := True\nIn := exactEqual[1.000000000000000000000000000000000000000000000000000111,\n1.000000000000000000000000000000000000000000000000000112000]\nOut := False\n``````\n\nI think that you really have to specify what you want... there's no way to compare approximate real numbers that will satisfy everyone in every situation.\n\nAnyway, here are a couple more options:\n\n``````In:= realEqual[lhs_,rhs_,tol_:\\$MachineEpsilon] := 0==Chop[lhs-rhs,tol]\n\nIn:= Equal[1.0000000000000021,1.0000000000000021]\nrealEqual[1.0000000000000021,1.0000000000000021]\nOut= True\nOut= True\n\nIn:= Equal[1.0000000000000022,1.0000000000000021]\nrealEqual[1.0000000000000022,1.0000000000000021]\nOut= True\nOut= False\n``````\n\nAs the precision of both numbers gets higher, they can always be distinguished if you set `tol` small enough.\n\nNote that the subtraction is done at the precision of the lower-precision number of the two.
You could make it happen at the precision of the higher number (which seems a bit pointless) by doing something like\n\n``````maxEqual[lhs_, rhs_] := With[{prec = Max[Precision /@ {lhs, rhs}]},\n0 === Chop[SetPrecision[lhs, prec] - SetPrecision[rhs, prec], 10^-prec]]\n``````\n\nmaybe using the minimum precision makes more sense\n\n``````minEqual[lhs_, rhs_] := With[{prec = Min[Precision /@ {lhs, rhs}]},\n0 === Chop[SetPrecision[lhs, prec] - SetPrecision[rhs, prec], 10^-prec]]\n``````\n\nOne other way to define such a function is by using SetPrecision:\n\n``````MyEqual[a_, b_] := SetPrecision[a, Precision[a] + 3] == SetPrecision[b, Precision[b] + 3]\n``````\n\nThis seems to work in all cases, but I'm still wondering whether there is a built-in function. It is ugly to use high-level functions for such a primitive task...\n\n• It only works if Precision is the same as the length of your number, which very often is not the case. MyEqual[1.111`3, 1.11100001`3] -> True.\n– Timo\nFeb 13, 2011 at 20:18\n• @Alexey Popkov: Instead of setting precision, I like to set a tolerable percentage of deviation from the TRUE value. For example, let us suppose that I have a true value for `xT=245` and a false value `xF=250`, but I want to set `xT=xF` because the percentage deviation from the true value is only 2% and I want to tolerate this deviation and accept the equality, like in a significance test. I have a very large number of equations to tolerate but I do not know how to set this tolerance level for my system of equations. Can you help me to solve this problem? thanks. Sep 15, 2019 at 13:19\n• @TugrulTemel I suggest you create a specific question on the dedicated site, with a detailed description and examples of what you wish to achieve. Sep 15, 2019 at 14:12\n• @AlexeyPopkov: Yes, I will do that right now. thanks for your prompt reply. Regards, Tugrul Sep 15, 2019 at 15:23" ]
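As an aside for readers outside Mathematica: the "differ in at most their last seven binary digits" rule quoted in the question can be imitated for IEEE-754 doubles in Python. This is an illustrative sketch of the idea only, not Wolfram's implementation: compare the integer bit patterns of two doubles and treat them as equal when they differ by fewer than 2^7 units in the last place.

```python
import struct

def ulp_index(x: float) -> int:
    """Map a double to an integer such that adjacent floats map to adjacent ints."""
    (i,) = struct.unpack("<q", struct.pack("<d", x))
    # Flip negative floats so that the mapping is monotonic over the whole line.
    return i if i >= 0 else -(i & 0x7FFFFFFFFFFFFFFF)

def roughly_equal(a: float, b: float, ignore_bits: int = 7) -> bool:
    """True when a and b agree up to their last `ignore_bits` binary digits."""
    return abs(ulp_index(a) - ulp_index(b)) < 2 ** ignore_bits

a, b = 1.0000000000000021, 1.0000000000000022
# Exact comparison distinguishes them (they round to adjacent doubles) ...
print(a == b)                              # False
# ... while a 7-bit tolerance treats them as equal, like Equal does.
print(roughly_equal(a, b))                 # True
# Zero tolerance mimics the Internal`$EqualTolerance = 0 behaviour above.
print(roughly_equal(a, b, ignore_bits=0))  # False
```

The bit-pattern trick only models machine-precision numbers; Mathematica's arbitrary-precision comparisons discussed above have no direct Python counterpart.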
https://books.google.no/books?id=K-I2AAAAMAAJ&q=divided&dq=editions:UOM39015067252117&lr=&hl=no&output=html_text&source=gbs_word_cloud_r&cad=5
[ "# The Elements of Euclid: The Errors, by which Theon, Or Others, Have Long Ago Vitiated These Books, are Corrected, and Some of Euclid's Demonstrations are Restored. Also, The Book of Euclid's Data, in Like Manner Corrected. viz. the first six books, together with the eleventh and twelfth\n\nMathew Carey, and sold by J. Conrad & Company, S. F. Bradford, Birch & Small, and Samuel Etheridge. Printed by T. & G. Palmer, 116, High-Street., 1806 - 518 sider\n\n### Hva folk mener -Skriv en omtale\n\nVi har ikke funnet noen omtaler pċ noen av de vanlige stedene.\n\n### Populĉre avsnitt\n\nSide 30 - Any two sides of a triangle are together greater than the third side.\nSide 64 - To divide a given straight line into two parts, so that the rectangle contained by the whole, and one of the parts, may be equal to the square of the other part.\nSide 30 - IF, from the ends of the side of a triangle, there be drawn two straight lines to a point within the triangle, these shall be less than the other two sides of the triangle, but shall contain a greater angle. Let...\nSide 59 - PROP. VIII. THEOR. 
IF a straight line be divided into any two parts, four times the rectangle contained by the whole line, and one of the parts, together with the square of the other part, is equal to the square of the straight line which is made up of the whole and that part.\nPage 28 - If one side of a triangle be produced, the exterior angle is greater than either of the interior opposite angles.\nPage 165 - If two triangles have one angle of the one equal to one angle of the other and the sides about these equal angles proportional, the triangles are similar.\nPage 19 - THE angles at the base of an isosceles triangle are equal to one another : and, if the equal sides be produced, the angles upon the other side of the base shall be equal.\nPage 191 - In right angled triangles, the rectilineal figure described upon the side opposite to the right angle, is equal to the similar, and similarly described figures upon the sides containing the right angle.\nPage 39 - All the interior angles of any rectilineal figure, together with four right angles, are equal to twice as many right angles as the figure has sides. For any rectilineal figure ABCDE can be divided into as many triangles as the figure has sides, by drawing straight lines from a point F within the figure to each of its angles.\nPage 180 - Therefore, universally, similar rectilineal figures are to one another in the duplicate ratio of their homologous sides." ]
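The passage quoted from page 64 ("to divide a given straight line into two parts, so that the rectangle contained by the whole, and one of the parts, may be equal to the square of the other part") is the golden-section division of a segment. A quick numerical check of that relation (our illustration, not part of the book):

```python
# Divide a segment of length a at x so that a*(a - x) == x**2.
# Solving x**2 + a*x - a**2 = 0 for the positive root gives
# x = a * (sqrt(5) - 1) / 2, the "golden section" of the segment.
a = 1.0
x = a * (5 ** 0.5 - 1) / 2

rectangle = a * (a - x)  # rectangle contained by the whole and the remaining part
square = x ** 2          # square on the other part
print(abs(rectangle - square) < 1e-12)
```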
https://groupprops.subwiki.org/wiki/General_linear_group_over_algebraically_closed_field_is_divisible
[ "# General linear group over algebraically closed field is divisible\n\nSuppose", null, "$K$ is a field that is algebraically closed and", null, "$G$ is a general linear group of finite degree", null, "$n$ over", null, "$K$, i.e.,", null, "$G = GL(n,K)$. Then,", null, "$G$ is a divisible group, i.e., for any", null, "$g \\in G$ and any positive integer", null, "$n$, there exists", null, "$x \\in G$ (not necessarily unique) such that", null, "$x^n = g$.\nThe idea is to first conjugate to a Jordan canonical form matrix, then take the unique", null, "$n^{th}$ root of that." ]
[ null, "https://groupprops.subwiki.org/w/images/math/a/5/f/a5f3c6a11b03839d46af9fb43c97c188.png ", null, "https://groupprops.subwiki.org/w/images/math/d/f/c/dfcf28d0734569a6a693bc8194de62bf.png ", null, "https://groupprops.subwiki.org/w/images/math/7/b/8/7b8b965ad4bca0e41ab51de7b31363a1.png ", null, "https://groupprops.subwiki.org/w/images/math/a/5/f/a5f3c6a11b03839d46af9fb43c97c188.png ", null, "https://groupprops.subwiki.org/w/images/math/7/b/d/7bd171af25ce6bd46c75e4350f274c66.png ", null, "https://groupprops.subwiki.org/w/images/math/d/f/c/dfcf28d0734569a6a693bc8194de62bf.png ", null, "https://groupprops.subwiki.org/w/images/math/7/2/c/72c2de6dada513c3d289a176500f29c8.png ", null, "https://groupprops.subwiki.org/w/images/math/7/b/8/7b8b965ad4bca0e41ab51de7b31363a1.png ", null, "https://groupprops.subwiki.org/w/images/math/1/d/6/1d64984a7a683797abea41747cb33ac0.png ", null, "https://groupprops.subwiki.org/w/images/math/7/2/b/72b37595e111386ff695398bf659672f.png ", null, "https://groupprops.subwiki.org/w/images/math/c/c/7/cc778408df16bd74114dcbb47797a740.png ", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92977226,"math_prob":0.99986494,"size":441,"snap":"2020-10-2020-16","text_gpt3_token_len":98,"char_repetition_ratio":0.1006865,"word_repetition_ratio":0.0,"special_character_ratio":0.20634921,"punctuation_ratio":0.1590909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999821,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-29T08:31:00Z\",\"WARC-Record-ID\":\"<urn:uuid:854df326-92c3-4635-b38b-ed87c4b08fc3>\",\"Content-Length\":\"23200\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bc4473b6-d18c-42d4-8554-65fb9bd248db>\",\"WARC-Concurrent-To\":\"<urn:uuid:e35345ec-1fd9-47b9-a09b-908ba2e2edf9>\",\"WARC-IP-Address\":\"96.126.114.7\",\"WARC-Target-URI\":\"https://groupprops.subwiki.org/wiki/General_linear_group_over_algebraically_closed_field_is_divisible\",\"WARC-Payload-Digest\":\"sha1:AM2DBQ3TQJ2DSGP4IJGWTUNMGGM5TF4M\",\"WARC-Block-Digest\":\"sha1:7OH7TRRNV6UXQGARTGULXPNNXC4B4IEE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875148671.99_warc_CC-MAIN-20200229053151-20200229083151-00138.warc.gz\"}"}
https://dev.thep.lu.se/yat/browser/trunk/doc/Statistics.doxygen?rev=1152&desc=1
[ "source:trunk/doc/Statistics.doxygen@1152\n\nLast change on this file since 1152 was 1125, checked in by Peter, 14 years ago\n\nfixing Doxygen parsing\n\n• Property svn:eol-style set to native\n• Property svn:keywords set to Author Date Id Revision\nFile size: 14.1 KB\nLine\n1// $Id: Statistics.doxygen 1125 2008-02-22 21:31:22Z peter$\n2//\n3// Copyright (C) 2005 Peter Johansson\n4// Copyright (C) 2006 Jari Häkkinen, Markus Ringnér, Peter Johansson\n5// Copyright (C) 2007, 2008 Peter Johansson\n6//\n7// This file is part of the yat library, http://trac.thep.lu.se/yat\n8//\n9// The yat library is free software; you can redistribute it and/or\n10// modify it under the terms of the GNU General Public License as\n11// published by the Free Software Foundation; either version 2 of the\n13//\n14// The yat library is distributed in the hope that it will be useful,\n15// but WITHOUT ANY WARRANTY; without even the implied warranty of\n16// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU\n17// General Public License for more details.\n18//\n19// You should have received a copy of the GNU General Public License\n20// along with this program; if not, write to the Free Software\n21// Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA\n22// 02111-1307, USA.\n23\n24\n25/**\n26\\page weighted_statistics Weighted Statistics\n27\n28\\section Introduction\n29There are several different reasons why a statistical analysis needs\n30to adjust for weighting. In literature reasons are mainly diveded in\n31to groups.\n32\n33The first group is when some of the measurements are known to be more\n34precise than others. The more precise a measurement is, the larger\n35weight it is given. The simplest case is when the weight are given\n36before the measurements and they can be treated as deterministic. 
It\nbecomes more complicated when the weight cannot be determined until\nafterwards, and even more complicated if the weight depends on the\nvalue of the observable.\n\nThe second group of situations is when calculating averages over one\ndistribution and sampling from another distribution. To compensate for\nthis discrepancy, weights are introduced into the analysis. A simple\nexample may be that we are interviewing people, but for economical\nreasons we choose to interview more people from the city than from the\ncountryside. When summarizing the statistics, the answers from the city\nare given a smaller weight. In this example we are choosing the\nproportions of people from the countryside and people from the city\nbeing interviewed. Hence, we can determine the weights beforehand and\nconsider them to be deterministic. In other situations the proportions\nare not deterministic, but rather a result of the sampling, and the\nweights must be treated as stochastic; only in rare situations can the\nweights be treated as independent of the observable.\n\nSince there are various origins for a weight occurring in a statistical\nanalysis, there are various ways to treat the weights, and in general\nthe analysis should be tailored to treat the weights correctly. We\nhave not chosen one situation for our implementations, so see the\nspecific function documentation for what assumptions are made. Common\nto the implementations, though, are the following:\n\n - Setting all weights to unity yields the same result as the\nnon-weighted version.\n - Rescaling the weights does not change any function.\n - Setting a weight to zero is equivalent to removing the data point.\n\nAn important case is when weights are binary (either 1 or 0). Then we\nget the same result using the weighted version as using the data with\nweight not equal to zero and the non-weighted version.
Hence, using\nbinary weights and the weighted version, missing values can be treated\nin a proper way.\n\n\\section AveragerWeighted\n\n\\subsection Mean\n\nFor any situation the weight is always designed so the weighted mean\nis calculated as \\f$m=\\frac{\\sum w_ix_i}{\\sum w_i}\\f$, which obviously\nfulfills the conditions above.\n\nIn the case of varying measurement error, it can be motivated that\nthe weight shall be \\f$w_i = 1/\\sigma_i^2\\f$. We assume the measurement\nerror to be Gaussian, and the likelihood to get our measurements is\n\\f$L(m)=\\prod (2\\pi\\sigma_i^2)^{-1/2}e^{-\\frac{(x_i-m)^2}{2\\sigma_i^2}}\\f$. We\nmaximize the likelihood by taking the derivative with respect to \\f$m\\f$ of\nthe logarithm of the likelihood, \\f$\\frac{d\\ln L(m)}{dm}=\\sum \\frac{x_i-m}{\\sigma_i^2}\\f$. Hence, the Maximum Likelihood method yields\nthe estimator \\f$m=\\frac{\\sum x_i/\\sigma_i^2}{\\sum 1/\\sigma_i^2}\\f$.\n\n\\subsection Variance\nIn the case of varying variance, there is no point estimating a variance\nsince it is different for each data point.\n\nInstead we look at the case when we want to estimate the variance over\n\\f$f\\f$ but are sampling from \\f$f'\\f$. For the mean of an observable \\f$O\\f$ we\nhave \\f$\\widehat O=\\sum\\frac{f}{f'}O_i=\\frac{\\sum w_iO_i}{\\sum w_i}\\f$. Hence, an estimator of the variance of \\f$X\\f$ is\n\n\\f$ s^2 = <X^2>-<X>^2 = \\frac{\\sum w_ix_i^2}{\\sum w_i}-\\frac{(\\sum w_ix_i)^2}{(\\sum w_i)^2} = \\frac{\\sum w_i(x_i^2-m^2)}{\\sum w_i} = \\frac{\\sum w_i(x_i^2-2mx_i+m^2)}{\\sum w_i} = \\frac{\\sum w_i(x_i-m)^2}{\\sum w_i} \\f$\n\nThis estimator fulfills that it is invariant under a rescaling, and\nhaving a weight equal to zero is equivalent to removing the data\npoint.
Having all weights equal to unity we get \\f$\\sigma^2=\\frac{\\sum (x_i-m)^2}{N}\\f$, which is the same as returned from Averager. Hence,\nthis estimator is slightly biased, but still very efficient.\n\n\\subsection standard_error Standard Error\nThe standard error squared is equal to the expected squared error of\nthe estimation of \\f$m\\f$. The squared error consists of two parts, the\nvariance of the estimator and the squared bias:\n\n\\f$ <m-\\mu>^2=<m-<m>+<m>-\\mu>^2=<m-<m>>^2+(<m>-\\mu)^2 \\f$.\n\nIn the case when weights are included in the analysis due to varying\nmeasurement errors and the weights can be treated as deterministic, we\nhave\n\n\\f$ Var(m)=\\frac{\\sum w_i^2\\sigma_i^2}{\\left(\\sum w_i\\right)^2}= \\frac{\\sum w_i^2\\frac{\\sigma_0^2}{w_i}}{\\left(\\sum w_i\\right)^2}= \\frac{\\sigma_0^2}{\\sum w_i}, \\f$\n\nwhere we need to estimate \\f$\\sigma_0^2\\f$. Again we have the likelihood\n\n\\f$ L(\\sigma_0^2)=\\prod\\frac{1}{\\sqrt{2\\pi\\sigma_0^2/w_i}}\\exp{(-\\frac{w_i(x-m)^2}{2\\sigma_0^2})} \\f$\n\nand taking the derivative with respect to \\f$\\sigma_0^2\\f$,\n\n\\f$ \\frac{d\\ln L}{d\\sigma_0^2}= \\sum -\\frac{1}{2\\sigma_0^2}+\\frac{w_i(x-m)^2}{2\\sigma_0^4} \\f$\n\nwhich yields an estimator \\f$\\sigma_0^2=\\frac{1}{N}\\sum w_i(x-m)^2\\f$. This\nestimator does not ignore weights equal to zero, because the deviation is\nmost often smaller than the expected infinity. Therefore, we modify\nthe expression as follows, \\f$\\sigma_0^2=\\frac{\\sum w_i^2}{\\left(\\sum w_i\\right)^2}\\sum w_i(x-m)^2\\f$, and we get the following estimator of\nthe variance of the mean, \\f$Var(m)=\\frac{\\sum w_i^2}{\\left(\\sum w_i\\right)^3}\\sum w_i(x-m)^2\\f$.
This estimator fulfills the conditions\nabove: adding a weight zero does not change it; rescaling the weights\ndoes not change it; and setting all weights to unity yields the same\nexpression as in the non-weighted case.\n\nIn a case when it is not a good approximation to treat the weights as\ndeterministic, there are two ways to get a better estimation. The\nfirst one is to linearize the expression \\f$\\left<\\frac{\\sum w_ix_i}{\\sum w_i}\\right>\\f$. The second method, when the situation is\nmore complicated, is to estimate the standard error using a\nbootstrapping method.\n\n\\section AveragerPairWeighted\nHere data points come in pairs (x,y). We are sampling from \\f$f'_{XY}\\f$\nbut want to measure from \\f$f_{XY}\\f$. To compensate for this discrepancy,\naverages of \\f$g(x,y)\\f$ are taken as \\f$\\sum \\frac{f}{f'}g(x,y)\\f$. Even\nthough \\f$X\\f$ and \\f$Y\\f$ are not independent \\f$(f_{XY}\\neq f_Xf_Y)\\f$, we\nassume that we can factorize the ratio and get \\f$\\frac{\\sum w_xw_yg(x,y)}{\\sum w_xw_y}\\f$.\n\n\\subsection Covariance\nFollowing the variance calculations for AveragerWeighted, we have\n\\f$Cov=\\frac{\\sum w_xw_y(x-m_x)(y-m_y)}{\\sum w_xw_y}\\f$ where\n\\f$m_x=\\frac{\\sum w_xw_yx}{\\sum w_xw_y}\\f$.\n\n\\subsection Correlation\n\nAs the mean is estimated as\n\\f$ m_x=\\frac{\\sum w_xw_yx}{\\sum w_xw_y} \\f$,\nthe variance is estimated as\n\\f$ \\sigma_x^2=\\frac{\\sum w_xw_y(x-m_x)^2}{\\sum w_xw_y} \\f$.\nAs in the non-weighted case, we define the correlation to be the ratio\nbetween the covariance and the geometrical average of the variances\n\n\\f$ \\frac{\\sum w_xw_y(x-m_x)(y-m_y)}{\\sqrt{\\sum w_xw_y(x-m_x)^2\\sum w_xw_y(y-m_y)^2}} \\f$.\n\nThis expression fulfills the following\n - Having N equal weights, the expression reduces to the non-weighted expression.\n - Adding a pair of data in which one weight is zero
is equivalent\nto ignoring the data pair.\n - Correlation is equal to unity if and only if \\f$x\\f$ is equal to\n\\f$y\\f$. Otherwise the correlation is between -1 and 1.\n\n\\section Score\n\n\\subsection Pearson\n\n\\f$\\frac{\\sum w(x-m_x)(y-m_y)}{\\sqrt{\\sum w(x-m_x)^2\\sum w(y-m_y)^2}}\\f$.\n\nSee AveragerPairWeighted correlation.\n\n\\subsection ROC\n\nAn interpretation of the ROC curve area is the following: if we\ntake one sample from class \\f$+\\f$ and one sample from class \\f$-\\f$, what is\nthe probability that the sample from class \\f$+\\f$ has the greater value? The\nROC curve area calculates the ratio of pairs fulfilling this,\n\n\\f$ \\frac{\\sum_{\\{i,j\\}:x^-_i<x^+_j}1}{\\sum_{i,j}1}. \\f$\n\nA geometrical interpretation is to have a number of squares where\neach square corresponds to a pair of samples. The ROC curve follows the\nborder between pairs in which the sample from class \\f$+\\f$ has a greater\nvalue and pairs in which this is not fulfilled. The ROC curve area is\nthe area of those latter squares, and a natural extension is to weight\neach pair with its two weights; consequently the weighted ROC curve\narea becomes\n\n\\f$ \\frac{\\sum_{\\{i,j\\}:x^-_i<x^+_j}w^-_iw^+_j}{\\sum_{i,j}w^-_iw^+_j} \\f$\n\nThis expression is invariant under a rescaling of the weights. Adding a\ndata value with weight zero adds nothing to the expression, and having\nall weights equal to unity yields the non-weighted ROC curve area.\n\n\\subsection tScore\n\nAssume that \\f$x\\f$ and \\f$y\\f$ originate from the same distribution\n\\f$N(\\mu,\\sigma_i^2)\\f$ where \\f$\\sigma_i^2=\\frac{\\sigma_0^2}{w_i}\\f$.
We then\nestimate \\f$\\sigma_0^2\\f$ as\n\n\\f$ \\frac{\\sum w(x-m_x)^2+\\sum w(y-m_y)^2}{\\frac{\\left(\\sum w_x\\right)^2}{\\sum w_x^2}+\\frac{\\left(\\sum w_y\\right)^2}{\\sum w_y^2}-2} \\f$\n\nThe variance of the difference of the means becomes\n\n\\f$ Var(m_x)+Var(m_y)=\\frac{\\sum w_x^2Var(x_i)}{\\left(\\sum w_x\\right)^2}+\\frac{\\sum w_y^2Var(y_i)}{\\left(\\sum w_y\\right)^2}= \\frac{\\sigma_0^2}{\\sum w_x}+\\frac{\\sigma_0^2}{\\sum w_y}, \\f$\n\nand consequently the t-score becomes\n\n\\f$ \\frac{\\sum w(x-m_x)^2+\\sum w(y-m_y)^2}{\\frac{\\left(\\sum w_x\\right)^2}{\\sum w_x^2}+\\frac{\\left(\\sum w_y\\right)^2}{\\sum w_y^2}-2} \\left(\\frac{1}{\\sum w_x}+\\frac{1}{\\sum w_y}\\right), \\f$\n\nFor \\f$w_i=w\\f$ this expression condenses down to\n\n\\f$ \\frac{w\\sum (x-m_x)^2+w\\sum (y-m_y)^2}{n_x+n_y-2} \\left(\\frac{1}{wn_x}+\\frac{1}{wn_y}\\right), \\f$\n\nin other words, the good old expression as for the non-weighted case.\n\n\\subsection FoldChange\nFold-Change is simply the difference between the weighted means of the\ntwo groups: \\f$\\frac{\\sum w_xx}{\\sum w_x}-\\frac{\\sum w_yy}{\\sum w_y}\\f$\n\n\\subsection WilcoxonFoldChange\nTaking all pairs of samples (one from class \\f$+\\f$ and one from class \\f$-\\f$)\nand calculating the weighted median of the distances.\n\n\\section Kernel\n\\subsection polynomial_kernel Polynomial Kernel\nThe polynomial kernel of degree \\f$N\\f$ is defined as \\f$(1+<x,y>)^N\\f$, where\n\\f$<x,y>\\f$ is the linear kernel (usual scalar product). For the weighted\ncase we define the linear kernel to be \\f$<x,y>=\\sum {w_xw_yxy}\\f$, and the\npolynomial kernel can be calculated as before,\n\\f$(1+<x,y>)^N\\f$. Is this kernel a proper kernel (always being semi-positive\ndefinite)? Yes, because \\f$<x,y>\\f$ is obviously a proper kernel,\nas it is a scalar product.
Adding a positive constant to a kernel\nyields another kernel, so \\f$1+<x,y>\\f$ is still a proper kernel. Then also\n\\f$(1+<x,y>)^N\\f$ is a proper kernel, because taking a proper kernel to the\n\\f$N\\f$th power yields a new proper kernel (see any good book on SVM).\n\\subsection gaussian_kernel Gaussian Kernel\nWe define the weighted Gaussian kernel as \\f$\\exp\\left(-\\frac{\\sum w_xw_y(x-y)^2}{\\sum w_xw_y}\\right)\\f$, which fulfills the conditions\nlisted in the introduction.\n\nIs this kernel a proper kernel? Yes; following the proof for the\nnon-weighted kernel, we see that \\f$K=\\exp\\left(-\\frac{\\sum w_xw_yx^2}{\\sum w_xw_y}\\right)\\exp\\left(-\\frac{\\sum w_xw_yy^2}{\\sum w_xw_y}\\right)\\exp\\left(\\frac{\\sum w_xw_yxy}{\\sum w_xw_y}\\right)\\f$,\nwhich is a product of two proper kernels. \\f$\\exp\\left(-\\frac{\\sum w_xw_yx^2}{\\sum w_xw_y}\\right)\\exp\\left(-\\frac{\\sum w_xw_yy^2}{\\sum w_xw_y}\\right)\\f$ is a proper kernel, because it is a scalar product, and\n\\f$\\exp\\left(\\frac{\\sum w_xw_yxy}{\\sum w_xw_y}\\right)\\f$ is a proper\nkernel, because it is a polynomial of the linear kernel with positive\ncoefficients. As the product of two kernels also is a kernel, the Gaussian\nkernel is a proper kernel.\n\n\\section Distance\n\n\\section Regression\n\\subsection Naive\n\\subsection Linear\nWe have the model\n\n\\f$ y_i=\\alpha+\\beta (x_i-m_x)+\\epsilon_i, \\f$\n\nwhere \\f$\\epsilon_i\\f$ is the noise. The variance of the noise is\ninversely proportional to the weight,\n\\f$Var(\\epsilon_i)=\\frac{\\sigma^2}{w_i}\\f$.
In order to determine the
model parameters, we minimize the sum of weighted quadratic errors

\f$
Q_0 = \sum w_i\epsilon_i^2
\f$

Taking the derivative with respect to \f$\alpha\f$ and \f$\beta\f$ yields two conditions

\f$
\frac{\partial Q_0}{\partial \alpha} = -2 \sum w_i(y_i - \alpha -
\beta (x_i-m_x))=0
\f$

and

\f$\frac{\partial Q_0}{\partial \beta} = -2 \sum
w_i(x_i-m_x)(y_i-\alpha-\beta(x_i-m_x))=0
\f$

or equivalently

\f$
\alpha = \frac{\sum w_iy_i}{\sum w_i}=m_y
\f$

and

\f$\beta=\frac{\sum w_i(x_i-m_x)(y_i-m_y)}{\sum
w_i(x_i-m_x)^2}=\frac{Cov(x,y)}{Var(x)}
\f$

Note that by setting all weights equal we get back the unweighted
case. Furthermore, we calculate the variance of the estimators of
\f$\alpha\f$ and \f$\beta\f$:

\f$
\textrm{Var}(\alpha )=\frac{\sum w_i^2\frac{\sigma^2}{w_i}}{(\sum w_i)^2}=
\frac{\sigma^2}{\sum w_i}
\f$

and

\f$
\textrm{Var}(\beta )= \frac{\sum w_i^2(x_i-m_x)^2\frac{\sigma^2}{w_i}}
{(\sum w_i(x_i-m_x)^2)^2}=
\frac{\sigma^2}{\sum w_i(x_i-m_x)^2}
\f$

Finally, we estimate the level of noise, \f$\sigma^2\f$. Inspired by the
unweighted estimator

\f$
s^2=\frac{\sum (y_i-\alpha-\beta (x_i-m_x))^2}{n-2}
\f$

we suggest the following estimator

\f$s^2=\frac{\sum w_i(y_i-\alpha-\beta (x_i-m_x))^2}{\sum
w_i-2\frac{\sum w_i^2}{\sum w_i}} \f$

*/
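The weighted least-squares estimators derived above are easy to translate into executable form. Below is a minimal sketch in Python (the surrounding library is C++, so this standalone translation is purely illustrative): it computes \f$\alpha\f$ and \f$\beta\f$ exactly as in the formulas, and with constant weights it reproduces the ordinary unweighted fit.

```python
def weighted_mean(x, w):
    """m = sum(w_i * x_i) / sum(w_i)"""
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

def weighted_linear_fit(x, y, w):
    """Weighted least squares for y_i = alpha + beta*(x_i - m_x) + eps_i,
    with Var(eps_i) = sigma^2 / w_i, following the derivation above."""
    mx = weighted_mean(x, w)
    my = weighted_mean(y, w)
    # alpha = sum(w*y)/sum(w) = m_y
    alpha = my
    # beta = sum(w*(x-mx)*(y-my)) / sum(w*(x-mx)^2) = Cov(x,y)/Var(x)
    num = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    beta = num / den
    return alpha, beta, mx

x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]   # exactly y = 1 + 2x
alpha, beta, mx = weighted_linear_fit(x, y, [1.0, 1.0, 1.0, 1.0])
print(alpha, beta)          # alpha = m_y = 4.0, beta = 2.0
```

Note that rescaling all weights by a common positive factor leaves \f$\alpha\f$ and \f$\beta\f$ unchanged, consistent with the remark that equal weights recover the unweighted case.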
# Number 100686

Number 100,686 spell 🔊, write in words: one hundred thousand, six hundred and eighty-six. Ordinal number 100686th is said 🔊 and written: one hundred thousand, six hundred and eighty-sixth. Color #100686. The meaning of the number 100686 in Maths: Is it prime? Factorization and prime factors tree. The square root and cube root of 100686. What is 100686 in computer science, numerology, codes and images, writing and naming in other languages. Other interesting facts related to 100686.

## What is 100,686 in other units

The decimal (Arabic) number 100686 converted to a Roman number is (C)DCLXXXVI. Roman and decimal number conversions.

#### Weight conversion

100686 kilograms (kg) = 221972.4 pounds (lbs)
100686 pounds (lbs) = 45670.9 kilograms (kg)

#### Length conversion

100686 kilometers (km) equals to 62564 miles (mi).
100686 miles (mi) equals to 162039 kilometers (km).
100686 meters (m) equals to 330331 feet (ft).
100686 feet (ft) equals 30690 meters (m).
100686 centimeters (cm) equals to 39640.2 inches (in).
100686 inches (in) equals to 255742.4 centimeters (cm).

#### Temperature conversion

100686° Fahrenheit (°F) equals to 55918.9° Celsius (°C)
100686° Celsius (°C) equals to 181266.8° Fahrenheit (°F)

#### Time conversion

(hours, minutes, seconds, days, weeks)
100686 seconds equals to 1 day, 3 hours, 58 minutes, 6 seconds
100686 minutes equals to 2 months, 1 week, 6 days, 22 hours, 6 minutes

### Codes and images of the number 100686

Number 100686 morse code: .---- ----- ----- -.... ---.. -....

## Mathematics of no.
100686

### Multiplications

#### Multiplication table of 100686

100686 multiplied by two equals 201372 (100686 x 2 = 201372).
100686 multiplied by three equals 302058 (100686 x 3 = 302058).
100686 multiplied by four equals 402744 (100686 x 4 = 402744).
100686 multiplied by five equals 503430 (100686 x 5 = 503430).
100686 multiplied by six equals 604116 (100686 x 6 = 604116).
100686 multiplied by seven equals 704802 (100686 x 7 = 704802).
100686 multiplied by eight equals 805488 (100686 x 8 = 805488).
100686 multiplied by nine equals 906174 (100686 x 9 = 906174).

### Fractions: decimal fraction and common fraction

#### Fraction table of 100686

Half of 100686 is 50343 (100686 / 2 = 50343).
One third of 100686 is 33562 (100686 / 3 = 33562).
One quarter of 100686 is 25171.5 (100686 / 4 = 25171.5 = 25171 1/2).
One fifth of 100686 is 20137.2 (100686 / 5 = 20137.2 = 20137 1/5).
One sixth of 100686 is 16781 (100686 / 6 = 16781).
One seventh of 100686 is 14383.7143 (100686 / 7 = 14383.7143 = 14383 5/7).
One eighth of 100686 is 12585.75 (100686 / 8 = 12585.75 = 12585 3/4).
One ninth of 100686 is 11187.3333 (100686 / 9 = 11187.3333 = 11187 1/3).

### Advanced math operations

#### Is Prime?

The number 100686 is not a prime number. The closest prime numbers are 100673 and 100693.

#### Factorization and factors (dividers)

The prime factorization of 100686 is 2 × 3 × 97 × 173.
The factors of 100686 are 1, 2, 3, 6, 97, 173, 194, 291, 346, 519, 582, 1038, 16781, 33562, 50343, 100686.
Total factors: 16.
Sum of factors: 204624 (103938 excluding the number itself).

#### Powers

The second power 100686^2 is 10,137,670,596.
The third power 100686^3 is 1,020,721,501,628,856.

#### Roots

The square root √100686 is 317.310573.
The cube root ∛100686 is 46.521784.

#### Logarithms

The natural logarithm of No.
ln 100686 = loge 100686 = 11.519762.
The logarithm to base 10 of No. log10 100686 = 5.002969.
The Napierian logarithm of No. log1/e 100686 = -11.519762.

### Trigonometric functions

The cosine of 100686 is -0.456228.
The sine of 100686 is -0.889863.
The tangent of 100686 is 1.950481.

### Properties of the number 100686

Is a Friedman number: No
Is a Fibonacci number: No
Is a Bell number: No
Is a palindromic number: No
Is a pentagonal number: No
Is a perfect number: No

## Number 100686 in Computer Science

Code type | Code value
--- | ---
PIN 100686 | It's recommendable to use 100686 as a password or PIN.
Number of bytes | 98.3KB
CSS Color | #100686 hexadecimal to red, green and blue (RGB) (16, 6, 134)
Unix time | Unix time 100686 is equal to Friday Jan. 2, 1970, 3:58:06 a.m. GMT
IPv4, IPv6 | Number 100686 internet address in dotted format v4 0.1.137.78, v6 ::1:894e
Binary | 11000100101001110
Ternary | 12010010010
Octal | 304516
Hexadecimal | 1894E (0x1894e hex)
BASE64 | MTAwNjg2
MD5 | b1c294ec857c2668eeb02de2148c73de
SHA1 | 2056eb7412d78c70a3f462448a59754682b3a202
SHA224 | b27c90f2b4b39a6e6cd45eb1efc5fed6eaf13af53402d5bc7934717e

More SHA codes related to the number 100686 ...

If you know something interesting about the 100686 number that you did not find on this page, do not hesitate to write us here.

## Numerology 100686

### Character frequency in number 100686

Character (importance) frequency for numerology.
Character 1: frequency 1. Character 0: frequency 2. Character 6: frequency 2. Character 8: frequency 1.

### Classical numerology

According to classical numerology, to know what each number means you have to reduce it to a single figure: for the number 100686 the digits are added as 1+0+0+6+8+6 = 21 and 2+1 = 3, and the meaning of the number 3 is sought.

## Interesting facts about the number 100686

### Asteroids

• (100686) 1997 YA3 is asteroid number 100686. It was discovered by T.
Kobayashi from Ōizumi on 12/24/1997.

## № 100,686 in other languages

How to say or write the number one hundred thousand, six hundred and eighty-six in Spanish, German, French and other languages. The character used as the thousands separator varies by language.

Spanish: 🔊 (número 100.686) cien mil seiscientos ochenta y seis
German: 🔊 (Anzahl 100.686) einhunderttausendsechshundertsechsundachtzig
French: 🔊 (nombre 100 686) cent mille six cent quatre-vingt-six
Portuguese: 🔊 (número 100 686) cem mil, seiscentos e oitenta e seis
Chinese: 🔊 (数 100 686) 十万零六百八十六
Arabian: 🔊 (عدد 100,686) مائة ألفاً و ستمائةستة و ثمانون
Czech: 🔊 (číslo 100 686) sto tisíc šestset osmdesát šest
Korean: 🔊 (번호 100,686) 십만 육백팔십육
Danish: 🔊 (nummer 100 686) ethundredetusinde og sekshundrede og seksogfirs
Dutch: 🔊 (nummer 100 686) honderdduizendzeshonderdzesentachtig
Japanese: 🔊 (数 100,686) 十万六百八十六
Indonesian: 🔊 (jumlah 100.686) seratus ribu enam ratus delapan puluh enam
Italian: 🔊 (numero 100 686) centomilaseicentottantasei
Norwegian: 🔊 (nummer 100 686) en hundre tusen, seks hundre og åtti-seks
Polish: 🔊 (liczba 100 686) sto tysięcy sześćset osiemdziesiąt sześć
Russian: 🔊 (номер 100 686) сто тысяч шестьсот восемьдесят шесть
Turkish: 🔊 (numara 100,686) yüzbinaltıyüzseksenaltı
Thai: 🔊 (จำนวน 100 686) หนึ่งแสนหกร้อยแปดสิบหก
Ukrainian: 🔊 (номер 100 686) сто тисяч шiстсот вiсiмдесят шiсть
Vietnamese: 🔊 (con số 100.686) một trăm nghìn sáu trăm tám mươi sáu
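The arithmetic facts above — the prime factorization, the divisor count, and the divisor sums — are easy to double-check programmatically. The short Python sketch below is illustrative and independent of this page:

```python
def prime_factors(n):
    """Return the prime factorization of n as a sorted list of primes (with multiplicity)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def divisors(n):
    """Return all positive divisors of n, sorted (naive O(n) scan)."""
    return sorted(d for d in range(1, n + 1) if n % d == 0)

n = 100686
print(prime_factors(n))   # [2, 3, 97, 173]
ds = divisors(n)
print(len(ds), sum(ds))   # 16 divisors summing to 204624
```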
# Chapter 7 Multiple Regression

In Chapter 6 we introduced ideas related to modeling for explanation, in particular that the goal of modeling is to make explicit the relationship between some outcome variable $$y$$ and some explanatory variable $$x$$. While there are many approaches to modeling, we focused on one particular technique: linear regression, one of the most commonly used and easy-to-understand approaches to modeling. Furthermore, to keep things simple, we only considered models with one explanatory variable $$x$$ that was either numerical in Section 6.1 or categorical in Section 6.2.

In this chapter on multiple regression we'll start considering models that include more than one explanatory variable $$x$$. You can imagine when trying to model a particular outcome variable, like teaching evaluation scores as in Section 6.1 or life expectancy as in Section 6.2, that it would be very useful to include more than just one explanatory variable's worth of information.

Since our regression models will now consider more than one explanatory variable, the interpretation of the associated effect of any one explanatory variable must be made in conjunction with the other explanatory variables included in your model. Let's begin!

### Needed packages

Let's load all the packages needed for this chapter (this assumes you've already installed them). Recall from our discussion in Section 5.4.1 that loading the tidyverse package by running library(tidyverse) loads the following commonly used data science packages all at once:

• ggplot2 for data visualization
• dplyr for data wrangling
• tidyr for converting data to "tidy" format
• readr for importing spreadsheet data into R
• As well as the more advanced purrr, tibble, stringr, and forcats packages

IMPORTANT NOTE FOR SI 544: library(tidyverse) will not work in RStudio via Canvas.
You will need to load the packages separately.

If needed, read Section 2.3 for information on how to install and load R packages.

library(tidyverse)
library(moderndive)
library(skimr)
library(ISLR)

## 7.1 One numerical & one categorical explanatory variable

Let's revisit the instructor evaluation data we introduced in Section 6.1, where we studied the relationship between instructor evaluation scores (as given by students) and their "beauty" scores for instructors teaching courses at UT Austin; the variable teaching score was a numerical outcome variable $$y$$ and the variable beauty score bty_avg was a numerical explanatory $$x$$ variable.

In this section we are going to consider a different model. Our outcome variable will still be teaching score, but we'll now include two different explanatory variables: age and gender. Could it be that instructors who are older receive better teaching evaluations from students? Or could it instead be that younger instructors receive better evaluations? Are there differences in evaluations given by students for instructors of different genders? We'll answer these questions by modeling the relationship between these variables using multiple regression, where we have:

1. A numerical outcome variable $$y$$, as before: the instructor's teaching score, and
2. Two explanatory variables:
1. A numerical explanatory variable $$x_1$$, the instructor's age
2. A categorical explanatory variable $$x_2$$, the instructor's binary gender (male or female).

It is important to note that at the time of this study, due to then commonly held beliefs about gender, this variable was often recorded as a binary. While the results of a model that oversimplifies gender this way may be imperfect, we still found the results to be very pertinent and relevant today. An eminent statistician by the name George E.P.
Box summarizes our thinking very nicely: "All models are wrong, but some are useful."

### 7.1.1 Exploratory data analysis

The data on the 463 courses at UT Austin can be found in the evals data frame included in the moderndive package. However, to keep things simple, let's select() only the subset of the variables we'll consider in this chapter, and save this data in a new data frame called evals_ch7. Note that these are different from the variables chosen in Chapter 6.

evals_ch7 <- evals %>%
  select(ID, score, age, gender)

Recall the three common steps in an exploratory data analysis that we saw in Section 6.1.1:

1. Looking at the raw data values.
2. Computing summary statistics, like means, medians, and interquartile ranges.
3. Creating data visualizations.

Let's first look at the raw data values, either by viewing evals_ch7 in RStudio's spreadsheet viewer or by using the glimpse() function:

glimpse(evals_ch7)
Observations: 463
Variables: 4
$ ID     <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,…
$ score  <dbl> 4.7, 4.1, 3.9, 4.8, 4.6, 4.3, 2.8, 4.1, 3.4, 4.5, 3.8, 4.5, 4.…
$ age    <int> 36, 36, 36, 36, 59, 59, 59, 51, 51, 40, 40, 40, 40, 40, 40, 40…
$ gender <fct> female, female, female, female, male, male, male, male, male, …

Let's also display a random sample of 5 rows of the 463 rows corresponding to different courses in Table 7.1. Remember, due to the random nature of the sampling, you will likely end up with a different subset of 5 rows.

evals_ch7 %>%
  sample_n(size = 5)
TABLE 7.1: A random sample of 5 out of the 463 courses at UT Austin
ID score age gender
129 3.7 62 male
109 4.7 46 female
28 4.8 62 male
434 2.8 62 male
330 4.0 64 male

Now that we've looked at the raw values in our evals_ch7 data frame and obtained a sense of the data, let's move on to the next common step in an exploratory data analysis: computing summary statistics.
As we did in our exploratory data analyses in Sections 6.1.1 and 6.2.1 from the previous chapter, let's use the skim() function from the skimr package, being sure to only select() the variables of interest for our model:

evals_ch7 %>%
  select(score, age, gender) %>%
  skim()
Skim summary statistics
n obs: 463
n variables: 3

── Variable type:factor ────────────────────────────────────────────────────────
variable missing complete n n_unique top_counts ordered
gender 0 463 463 2 mal: 268, fem: 195, NA: 0 FALSE

── Variable type:integer ───────────────────────────────────────────────────────
variable missing complete n mean sd p0 p25 p50 p75 p100
age 0 463 463 48.37 9.8 29 42 48 57 73

── Variable type:numeric ───────────────────────────────────────────────────────
variable missing complete n mean sd p0 p25 p50 p75 p100
score 0 463 463 4.17 0.54 2.3 3.8 4.3 4.6 5

Observe, for example, that we have no missing data, courses taught by 268 male vs 195 female instructors, and an average age of 48.37. Recall however that each row in our data represents a particular course and that instructors can teach more than one course. Therefore the average age of the unique instructors may differ.

Furthermore, let's compute the correlation between our two numerical variables: score and age. Recall from Section 6.1.1 that correlation coefficients only exist between numerical variables. We observe that they are weakly negatively correlated.

evals_ch7 %>%
  get_correlation(formula = score ~ age)
# A tibble: 1 x 1
  correlation
        <dbl>
1      -0.107

Let's now perform the last of the three common steps in an exploratory data analysis: creating data visualizations. Given that the outcome variable score and explanatory variable age are both numerical, we'll use a scatterplot to display their relationship. How can we incorporate the categorical variable gender, however? By mapping the variable gender to the color aesthetic and creating a colored scatterplot!
The following code is very similar to the code that created the scatterplot of teaching score and beauty score in Figure 6.2, but with color = gender added to the aes().

ggplot(evals_ch7, aes(x = age, y = score, color = gender)) +
  geom_point() +
  labs(x = "Age", y = "Teaching Score", color = "Gender") +
  geom_smooth(method = "lm", se = FALSE)

FIGURE 7.1: Colored scatterplot of relationship of teaching score and age.

In the resulting Figure 7.1, observe that ggplot assigns a default red/blue color scheme to the points and lines associated with each of the two levels of gender: female and male. Furthermore the geom_smooth(method = "lm", se = FALSE) layer automatically fits a different regression line for each group since we have provided color = gender in the aesthetic mapping. This allows for all subsequent geometries to have the same aesthetic mappings.

We notice some interesting trends:

1. There are almost no women faculty over the age of 60, as evidenced by the lack of red dots to the right of $$x$$ = 60.
2. While both regression lines are negatively sloped with age (i.e. older instructors tend to have lower scores), the slope for age for the female instructors is more negative. In other words, female instructors are paying a harsher penalty for their age than the male instructors.

### 7.1.2 Interaction model

Let's now quantify the relationship of our outcome variable $$y$$ and two explanatory variables using one type of multiple regression model known as an "interaction model." Unfortunately, we don't have enough context at this point to explain where the term "interaction" comes from; we'll explain why statisticians use this term at the end of this section.

In particular, we'll write out the equation of the two regression lines in Figure 7.1 using the values from a regression table.
Before we do this however, let's go over a brief refresher of regression when you have a categorical explanatory variable $$x$$.

Recall in Section 6.2.2 we fit a regression model for countries' life expectancy as a function of which continent the country was in. In other words, we had a numerical outcome variable $$y$$ = lifeExp and a categorical explanatory variable $$x$$ = continent which had 5 levels: Africa, Americas, Asia, Europe, and Oceania. Let's redisplay the regression table you saw in Table 6.8:

TABLE 7.2: Regression table for life expectancy as a function of continent.
term estimate std_error statistic p_value lower_ci upper_ci
intercept 54.8 1.02 53.45 0 52.8 56.8
continentAmericas 18.8 1.80 10.45 0 15.2 22.4
continentAsia 15.9 1.65 9.68 0 12.7 19.2
continentEurope 22.8 1.70 13.47 0 19.5 26.2
continentOceania 25.9 5.33 4.86 0 15.4 36.5

Recall our interpretations of the estimate column. Since Africa comes first alphabetically, it was the "baseline for comparison" group, and thus the intercept term corresponds to the mean life expectancy for all countries in Africa of 54.8 years. The other 4 values of estimate correspond to "offsets" relative to the baseline group. So, for example, the "offset" corresponding to the Americas is +18.8 versus the baseline for comparison group Africa, i.e. the average life expectancy for countries in the Americas is 18.8 years higher. Thus the mean life expectancy for all countries in the Americas is 54.8 + 18.8 = 73.6. The same interpretation holds for Asia, Europe, and Oceania.

Going back to our multiple regression model for teaching score using age and gender in Figure 7.1, we generate the regression table using the same two-step approach from Chapter 6: we first "fit" the model using the lm() "linear model" function and then we apply the get_regression_table() function. This time however our model formula won't be of form y ~ x, but rather of form y ~ x1 * x2.
In other words, our two explanatory variables x1 and x2 are separated by a * sign:

# Fit regression model:
score_model_interaction <- lm(score ~ age * gender, data = evals_ch7)
# Get regression table:
get_regression_table(score_model_interaction)
TABLE 7.3: Regression table for interaction model.
term estimate std_error statistic p_value lower_ci upper_ci
intercept 4.883 0.205 23.80 0.000 4.480 5.286
age -0.018 0.004 -3.92 0.000 -0.026 -0.009
gendermale -0.446 0.265 -1.68 0.094 -0.968 0.076
age:gendermale 0.014 0.006 2.45 0.015 0.003 0.024

Looking at the regression table output in Table 7.3, we see there are four rows of values in the estimate column. While it is not immediately apparent, using these four values we can write out the equations of both the red and blue lines in Figure 7.1. Let's build these up.

First, since the word female is alphabetically before male, female instructors are the "baseline for comparison" group. Therefore intercept is the intercept and age is the slope for age for only the female instructors. In other words, the red regression line in Figure 7.1 has intercept 4.883 and slope for age of -0.018. Remember that for this particular data, while the intercept has a mathematical interpretation, it has no practical interpretation since there can't be any instructors with age = 0.

What about the intercept and slope for age of the male instructors, in other words the blue line in Figure 7.1? This is where our notion of "offsets" comes into play once again. The value for gendermale of -0.446 is not the intercept for the male instructors, but rather the offset (or difference) in intercept for male instructors relative to female instructors. Therefore, the intercept for the male instructors is intercept + gendermale = 4.883 + (-0.446) = 4.883 - 0.446 = 4.437.

Similarly, age:gendermale = 0.014 is not the slope for age for the male instructors, but rather the offset (or difference) in slope for the male instructors.
Therefore, the slope for age for the male instructors is age + age:gendermale = -0.018 + 0.014 = -0.004. Thus the blue regression line in Figure 7.1 has intercept 4.437 and slope for age of -0.004.

Let's summarize these values in Table 7.4 and focus on the two slopes for age:

TABLE 7.4: Comparison of female and male intercepts and age slopes
Gender Intercept Slope for age
Female instructors 4.88 -0.018
Male instructors 4.44 -0.004

Since the slope for age for the female instructors was -0.018, it means that on average, a female instructor who is a year older would have a teaching score that is 0.018 units lower. For the male instructors however, the corresponding associated decrease was on average only 0.004 units. While both slopes for age were negative, the slope for age for the female instructors is more negative. This is consistent with our observation from Figure 7.1 that this model suggests age impacts teaching scores more for female instructors.

Let's now write the equation for our regression lines, which we can use to compute our fitted values $$\widehat{y} = \widehat{\text{score}}$$.

\begin{aligned} \widehat{y} = \widehat{\text{score}} &= b_0 + b_{\mbox{age}} \cdot \mbox{age} + b_{\mbox{male}} \cdot \mathbb{1}_{\mbox{is male}}(x) + b_{\mbox{age,male}} \cdot \mbox{age} \cdot \mathbb{1}_{\mbox{is male}}\\ &= 4.883 - 0.018 \cdot \mbox{age} - 0.446 \cdot \mathbb{1}_{\mbox{is male}}(x) + 0.014 \cdot \mbox{age} \cdot \mathbb{1}_{\mbox{is male}} \end{aligned}

Whoa! That's even more daunting than the equation you saw for the life expectancy as a function of continent in Section 6.2.2! However, if you recall what an "indicator function" (AKA "dummy variable") does, the equation simplifies greatly.
First, in the above equation we have one indicator function of interest:

$\mathbb{1}_{\mbox{is male}}(x) = \left\{ \begin{array}{ll} 1 & \text{if } \text{instructor } x \text{ is male} \\ 0 & \text{otherwise}\end{array} \right.$

Second, let's match coefficients in the above equation with values in the estimate column in our regression table in Table 7.3:

1. $$b_0$$ is the intercept = 4.883 for the female instructors
2. $$b_{\mbox{age}}$$ is the slope for age = -0.018 for the female instructors
3. $$b_{\mbox{male}}$$ is the offset in intercept = -0.446 for the male instructors
4. $$b_{\mbox{age,male}}$$ is the offset in slope for age = 0.014 for the male instructors

Let's put this all together and compute the fitted value $$\widehat{y} = \widehat{\text{score}}$$ for female instructors. Since for female instructors $$\mathbb{1}_{\mbox{is male}}(x)$$ = 0, the above equation becomes

\begin{aligned} \widehat{y} = \widehat{\text{score}} &= b_0 + b_{\mbox{age}} \cdot \mbox{age} + b_{\mbox{male}} \cdot \mathbb{1}_{\mbox{is male}}(x) + b_{\mbox{age,male}} \cdot \mbox{age} \cdot \mathbb{1}_{\mbox{is male}}\\ &= 4.883 - 0.018 \cdot \mbox{age} - 0.446 \cdot \mathbb{1}_{\mbox{is male}}(x) + 0.014 \cdot \mbox{age} \cdot \mathbb{1}_{\mbox{is male}}\\ &= 4.883 - 0.018 \cdot \mbox{age} - 0.446 \cdot 0 + 0.014 \cdot \mbox{age} \cdot 0\\ &= 4.883 - 0.018 \cdot \mbox{age} - 0 + 0\\ &= 4.883 - 0.018 \cdot \mbox{age}\\ \end{aligned}
Correspondingly, since for male instructors $$\\mathbb{1}_{\\mbox{is male}}(x)$$ = 1, the above equation becomes\n\n\\begin{aligned} \\widehat{y} = \\widehat{\\text{score}} &= 4.883 - 0.018 \\cdot \\mbox{age} - 0.446 \\cdot \\mathbb{1}_{\\mbox{is male}}(x) + 0.014 \\cdot \\mbox{age} \\cdot \\mathbb{1}_{\\mbox{is male}}\\\\ &= 4.883 - 0.018 \\cdot \\mbox{age} - 0.446 \\cdot 1 + 0.014 \\cdot \\mbox{age} \\cdot 1\\\\ &= 4.883 - 0.018 \\cdot \\mbox{age} - 0.446 + 0.014 \\cdot \\mbox{age}\\\\ &= (4.883 - 0.446) + (- 0.018 + 0.014) * \\mbox{age}\\\\ &= 4.437 - 0.004 \\cdot \\mbox{age}\\\\ \\end{aligned}\n\nwhich is the equation of the blue regression line in Figure 7.1 corresponding to the male instructors.\n\nPhew! That was a lot of arithmetic! Don’t fret however, this is as hard as modeling will get in this book. If you’re still a little unsure about using indicator functions and using categorical explanatory variables, we highly suggest you re-read Section 6.2.2 which involves only a single categorical explanatory variable and thus is much simpler.\n\nBefore we end this section, we explain why we refer to this type of model as an “interaction model.” The $$b_{\\mbox{age,male}}$$ term in the equation for the fitted value $$\\widehat{y}$$ = $$\\widehat{\\text{score}}$$ is what’s known in statistical modeling as an “interaction effect.” The interaction term corresponds to the age:gendermale = 0.014 in the final row of the regression table in Table 7.3.\n\nWe say there is an interaction effect if the associated effect of one variable depends on the value of another variable, in other words the two variables are “interacting.” In our case, the associated effect of the variable age depends on the value of another variable, gender. This was evidenced by the difference in slopes for age of +0.014 of male instructors relative to female instructors.\n\nAnother way of thinking of interaction effects is as follows. 
For a given instructor at UT Austin, there might be an associated effect of their age on their teaching scores, there might be an associated effect of their gender on their teaching scores, but when put together, there might be an additional effect due to the intersection of their age and their gender.

### 7.1.3 Parallel slopes model

When creating regression models with one numerical and one categorical explanatory variable, we are not limited to interaction models as we just saw. Another type of model we can use is known as the "parallel slopes" model. Unlike interaction models, where the regression lines can have both different intercepts and different slopes, parallel slopes models still allow for different intercepts but force all lines to have the same slope. The resulting regression lines are thus parallel. Let's visualize the best-fitting parallel slopes model for our evals_ch7 data.

Unfortunately, the ggplot2 package does not have a convenient way to plot a parallel slopes model. We therefore created our own function gg_parallel_slopes() and included it in the moderndive package:

gg_parallel_slopes(y = "score", num_x = "age", cat_x = "gender",
                   data = evals_ch7)

FIGURE 7.2: Parallel slopes model of relationship of score with age and gender.

Note the arguments, i.e. inputs, to this function: the outcome variable y = "score", the numerical explanatory variable num_x = "age", the categorical explanatory variable cat_x = "gender", and the data frame that includes this data = evals_ch7. Be careful to include the quotation marks when specifying all variables, something you don't have to do when creating a visualization with ggplot().

Observe in Figure 7.2 that we now have parallel red and blue lines corresponding to the female and male instructors respectively; in other words, they have the same negative slope.
That is, as instructors age, they tend to receive lower teaching evaluation scores from students. However, these two lines have different intercepts, as evidenced by the fact that the blue line corresponding to the male instructors is higher than the red line corresponding to the female instructors.

In order to obtain the precise numerical values of the intercepts and the common slope, we once again first “fit” the model using the lm() “linear model” function and then apply the get_regression_table() function. However, unlike the interaction model, which had a model formula of the form y ~ x1 * x2, our model formula is now of the form y ~ x1 + x2. In other words, our two explanatory variables x1 and x2 are separated by a + sign:

```r
# Fit regression model:
score_model_parallel_slopes <- lm(score ~ age + gender, data = evals_ch7)
# Get regression table:
get_regression_table(score_model_parallel_slopes)
```

TABLE 7.5: Regression table for parallel slopes model.

| term       | estimate | std_error | statistic | p_value | lower_ci | upper_ci |
|------------|----------|-----------|-----------|---------|----------|----------|
| intercept  | 4.484    | 0.125     | 35.79     | 0.000   | 4.238    | 4.730    |
| age        | -0.009   | 0.003     | -3.28     | 0.001   | -0.014   | -0.003   |
| gendermale | 0.191    | 0.052     | 3.63      | 0.000   | 0.087    | 0.294    |

Similarly to the regression table for the interaction model from our earlier Table 7.3, we have an intercept term corresponding to the intercept for the “baseline for comparison” female instructor group and a gendermale term corresponding to the offset (or difference) in intercept for the male instructors relative to female instructors. In other words, in Figure 7.2 the red regression line corresponding to the female instructors has an intercept of 4.484 while the blue regression line corresponding to the male instructors has an intercept of 4.484 + 0.191 = 4.67.
Once again, since there aren’t any instructors of age 0, the intercepts only have a mathematical interpretation and no practical one.

Unlike in Table 7.3, we now only have a single term relating to the slope for age, as we’ve forced both the female and male instructors to have a common slope for age of -0.009. In other words, for every increase of 1 year in instructor age, we observe an associated decrease of, on average, 0.009 units in teaching score for both female and male instructors.

Let’s now write the equation for our regression lines, which we can use to compute our fitted values $$\widehat{y} = \widehat{\text{score}}$$.

\begin{aligned}
\widehat{y} = \widehat{\text{score}} &= b_0 + b_{\mbox{age}} \cdot \mbox{age} + b_{\mbox{male}} \cdot \mathbb{1}_{\mbox{is male}}(x)\\
&= 4.484 - 0.009 \cdot \mbox{age} + 0.191 \cdot \mathbb{1}_{\mbox{is male}}(x)
\end{aligned}

Let’s put this all together and compute the fitted value $$\widehat{y} = \widehat{\text{score}}$$ for female instructors. Since for female instructors $$\mathbb{1}_{\mbox{is male}}(x)$$ = 0, the above equation becomes

\begin{aligned}
\widehat{y} = \widehat{\text{score}} &= b_0 + b_{\mbox{age}} \cdot \mbox{age} + b_{\mbox{male}} \cdot \mathbb{1}_{\mbox{is male}}(x)\\
&= 4.484 - 0.009 \cdot \mbox{age} + 0.191 \cdot \mathbb{1}_{\mbox{is male}}(x)\\
&= 4.484 - 0.009 \cdot \mbox{age} + 0.191 \cdot 0\\
&= 4.484 - 0.009 \cdot \mbox{age}
\end{aligned}

which is the equation of the red regression line in Figure 7.2 corresponding to the female instructors.
Correspondingly, since for male instructors $$\mathbb{1}_{\mbox{is male}}(x)$$ = 1, the above equation becomes

\begin{aligned}
\widehat{y} = \widehat{\text{score}} &= b_0 + b_{\mbox{age}} \cdot \mbox{age} + b_{\mbox{male}} \cdot \mathbb{1}_{\mbox{is male}}(x)\\
&= 4.484 - 0.009 \cdot \mbox{age} + 0.191 \cdot \mathbb{1}_{\mbox{is male}}(x)\\
&= 4.484 - 0.009 \cdot \mbox{age} + 0.191 \cdot 1\\
&= (4.484 + 0.191) - 0.009 \cdot \mbox{age}\\
&= 4.675 - 0.009 \cdot \mbox{age}
\end{aligned}

which is the equation of the blue regression line in Figure 7.2 corresponding to the male instructors.

Great! We’ve considered both an interaction model and a parallel slopes model for our data. Let’s compare the visualizations for both models side-by-side in Figure 7.3.

FIGURE 7.3: Comparison of interaction and parallel slopes models.

At this point, you might be asking yourself: “Why would we ever use a parallel slopes model? Looking at the left-hand plot in Figure 7.3, the two lines definitely do not appear to be parallel, so why would we force them to be parallel as in the right-hand plot?” For this data, we agree! It can easily be argued that the interaction model is more appropriate. However, in Section 7.3.1 below on model selection, we’ll present an example where it can be argued that the case for a parallel slopes model is stronger.

### 7.1.4 Observed/fitted values and residuals

For brevity’s sake, in this section we’ll only compute the observed values, fitted values, and residuals for the interaction model, which we saved in score_model_interaction. You’ll have an opportunity to study these values for our parallel slopes model in the upcoming Learning Check.

Say you have a professor who is female and 36 years old. What fitted value $$\widehat{y}$$ = $$\widehat{\text{score}}$$ would our model yield? Say you have another professor who is male and 59 years old.
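As a quick numerical cross-check on the parallel slopes model, the sketch below (our own Python helper, not moderndive code) evaluates the model from the three coefficients in Table 7.5 and confirms that the two groups share the slope -0.009 and differ only in intercept:

```python
# Coefficients from the parallel-slopes regression table (Table 7.5).
b0, b_age, b_male = 4.484, -0.009, 0.191

def score_hat(age, is_male):
    """Fitted score under the parallel slopes model."""
    return b0 + b_age * age + b_male * is_male

# Both groups share the slope b_age; only the intercepts differ.
female_intercept = score_hat(0, 0)                    # 4.484
male_intercept = score_hat(0, 1)                      # 4.484 + 0.191 = 4.675
female_slope = score_hat(1, 0) - score_hat(0, 0)
male_slope = score_hat(1, 1) - score_hat(0, 1)
print(female_slope, male_slope)  # identical: the lines are parallel
```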
What would their fitted value $$\widehat{y}$$ be? We answer this question visually by finding the intersection of the red regression line and a vertical line at $$x$$ = age = 36; we mark this value with a large red dot in Figure 7.4. Similarly, we can identify the fitted value $$\widehat{y}$$ = $$\widehat{\text{score}}$$ for the male instructor by finding the intersection of the blue regression line and a vertical line at $$x$$ = age = 59; we mark this value with a large blue dot in Figure 7.4.

FIGURE 7.4: Fitted values for two new professors.

However, what are these values precisely? We can use the equations of the two regression lines we computed in Section 7.1.2, which in turn were based on values from the regression table in Table 7.3:

- For all female instructors: $$\widehat{y} = \widehat{\text{score}} = 4.883 - 0.018 \cdot \mbox{age}$$
- For all male instructors: $$\widehat{y} = \widehat{\text{score}} = 4.437 - 0.004 \cdot \mbox{age}$$

So our fitted values would be 4.883 - 0.018 $$\cdot$$ 36 = 4.25 and 4.437 - 0.004 $$\cdot$$ 59 = 4.20 respectively. What if, however, we wanted the fitted values not just for these two instructors, but for the instructors of all 463 courses? Doing this by hand would be long and tedious! This is where the get_regression_points() function from the moderndive package can help: it will quickly automate this for all 463 courses.
We present the results in Table 7.6.

```r
regression_points <- get_regression_points(score_model_interaction)
regression_points
```

TABLE 7.6: Regression points (first 10 out of 463 courses).

| ID | score | age | gender | score_hat | residual |
|----|-------|-----|--------|-----------|----------|
| 1  | 4.7   | 36  | female | 4.25      | 0.448    |
| 2  | 4.1   | 36  | female | 4.25      | -0.152   |
| 3  | 3.9   | 36  | female | 4.25      | -0.352   |
| 4  | 4.8   | 36  | female | 4.25      | 0.548    |
| 5  | 4.6   | 59  | male   | 4.20      | 0.399    |
| 6  | 4.3   | 59  | male   | 4.20      | 0.099    |
| 7  | 2.8   | 59  | male   | 4.20      | -1.401   |
| 8  | 4.1   | 51  | male   | 4.23      | -0.133   |
| 9  | 3.4   | 51  | male   | 4.23      | -0.833   |
| 10 | 4.5   | 40  | female | 4.18      | 0.318    |

In fact, it turns out that the female instructor of age 36 taught the first four courses, while the male instructor of age 59 taught the next three. The resulting $$\widehat{y}$$ = $$\widehat{\text{score}}$$ fitted values are in the score_hat column. Furthermore, the get_regression_points() function also returns the residuals $$y-\widehat{y}$$. Notice, for example, that the first and fourth courses the female instructor of age 36 taught had positive residuals, indicating that the actual teaching scores they received from students were greater than their fitted score of 4.25. On the other hand, the second and third courses this instructor taught had negative residuals, indicating that the actual teaching scores were less than their fitted score of 4.25.

Learning check

(LC7.1) Compute the observed values, fitted values, and residuals not for the interaction model as we just did, but rather for the parallel slopes model we saved in score_model_parallel_slopes.

## 7.2 Two numerical explanatory variables

Let’s now switch gears and consider multiple regression models where instead of one numerical and one categorical explanatory variable, we have two numerical explanatory variables! The dataset we’ll use is from An Introduction to Statistical Learning with Applications in R (ISLR), an intermediate-level textbook on statistical and machine learning.
Its accompanying ISLR R package contains datasets to which the authors apply various machine learning methods.

One frequently used dataset in this book is the Credit dataset, where the outcome variable of interest is the credit card debt of 400 individuals. Other variables like income, credit limit, credit rating, and age are included as well. Note that the Credit data is not based on real individuals’ financial information, but rather is a simulated dataset used for educational purposes.

In this section, we’ll fit a regression model where we have:

1. A numerical outcome variable $$y$$, the cardholder’s credit card debt
2. Two explanatory variables:
    1. One numerical explanatory variable $$x_1$$, the cardholder’s credit limit
    2. Another numerical explanatory variable $$x_2$$, the cardholder’s income (in thousands of dollars).

In the forthcoming Learning Checks, we’ll consider a different regression model with:

1. The same numerical outcome variable $$y$$, the cardholder’s credit card debt
2. Two different explanatory variables:
    1. One numerical explanatory variable $$x_1$$, the cardholder’s credit rating
    2. Another numerical explanatory variable $$x_2$$, the cardholder’s age.

### 7.2.1 Exploratory data analysis

Let’s load the Credit data and, to keep things simple, select() only the subset of the variables we’ll consider in this chapter, saving this data in a new data frame called credit_ch7.
Notice our slightly different use of the select() verb here: we’ll select the Balance variable from Credit, for example, but we’ll save it under the new variable name debt, since this name is a little easier to understand.

```r
library(ISLR)
credit_ch7 <- Credit %>%
  as_tibble() %>%
  select(ID, debt = Balance, credit_limit = Limit,
         income = Income, credit_rating = Rating, age = Age)
```

You can observe the effect of our different use of the select() verb in the first common step of an EDA: looking at the raw values, either in RStudio’s spreadsheet viewer or by using the glimpse() function:

```r
glimpse(credit_ch7)
```

```
Observations: 400
Variables: 6
$ ID            <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, …
$ debt          <int> 333, 903, 580, 964, 331, 1151, 203, 872, 279, 1350, 140…
$ credit_limit  <int> 3606, 6645, 7075, 9504, 4897, 8047, 3388, 7114, 3300, 6…
$ income        <dbl> 14.9, 106.0, 104.6, 148.9, 55.9, 80.2, 21.0, 71.4, 15.1…
$ credit_rating <int> 283, 483, 514, 681, 357, 569, 259, 512, 266, 491, 589, …
$ age           <int> 34, 82, 71, 36, 68, 77, 37, 87, 66, 41, 30, 64, 57, 49, …
```

Furthermore, let’s look at a random sample of five of the 400 credit card holders in Table 7.7. Note that due to the random nature of the sampling, you will likely end up with a different subset of five rows.

```r
credit_ch7 %>%
  sample_n(size = 5)
```

TABLE 7.7: Random sample of 5 credit card holders.

| ID  | debt | credit_limit | income | credit_rating | age |
|-----|------|--------------|--------|---------------|-----|
| 272 | 436  | 4866         | 45.0   | 347           | 30  |
| 239 | 52   | 2910         | 26.5   | 236           | 58  |
| 87  | 815  | 6340         | 55.4   | 448           | 33  |
| 108 | 0    | 3189         | 39.1   | 263           | 72  |
| 149 | 0    | 2420         | 15.2   | 192           | 69  |

Now that we’ve looked at the raw values in our credit_ch7 data frame and obtained a sense of the data, let’s move on to the next common step in an exploratory data analysis: computing summary statistics.
As you’re probably used to by now, let’s use the skim() function from the skimr package, being sure to only select() the columns of interest for our model:

```r
credit_ch7 %>%
  select(debt, credit_limit, income) %>%
  skim()
```

```
Skim summary statistics
 n obs: 400
 n variables: 3

── Variable type:integer ───────────────────────────────────────────────────────
     variable missing complete   n    mean      sd  p0   p25    p50     p75  p100
 credit_limit       0      400 400 4735.6  2308.2  855 3088  4622.5 5872.75 13913
         debt       0      400 400  520.01  459.76   0   68.75 459.5  863    1999

── Variable type:numeric ───────────────────────────────────────────────────────
 variable missing complete   n  mean    sd    p0   p25   p50   p75   p100
   income       0      400 400 45.22 35.24 10.35 21.01 33.12 57.47 186.63
```

Observe, for example:

1. The mean and median credit card debt are $520.01 and $459.50 respectively.
2. 25% of card holders had debts of $68.75 or less.
3. The mean and median credit card limit are $4735.60 and $4622.50 respectively.
4. 75% of these card holders had incomes of $57,470 or less.

Since our outcome variable debt and the explanatory variables credit_limit and income are numerical, we can compute the correlation coefficient between pairs of these variables. First, we could run the get_correlation() command as seen in Subsection 6.1.1 twice, once for each explanatory variable:

```r
credit_ch7 %>%
  get_correlation(debt ~ credit_limit)
credit_ch7 %>%
  get_correlation(debt ~ income)
```

Or we can compute them simultaneously by returning a correlation matrix, which we display in Table 7.8.
We can read off the correlation coefficient for any pair of variables by looking them up in the appropriate row/column combination.

```r
credit_ch7 %>%
  select(debt, credit_limit, income) %>%
  cor()
```

TABLE 7.8: Correlation coefficients between credit card debt, credit limit, and income.

|              | debt  | credit_limit | income |
|--------------|-------|--------------|--------|
| debt         | 1.000 | 0.862        | 0.464  |
| credit_limit | 0.862 | 1.000        | 0.792  |
| income       | 0.464 | 0.792        | 1.000  |

For example, the correlation coefficient of:

1. debt with itself is 1, as we would expect based on the definition of the correlation coefficient.
2. debt with credit_limit is 0.862. This indicates a strong positive linear relationship, which makes sense as only individuals with large credit limits can accrue large credit card debts.
3. debt with income is 0.464. This is suggestive of another positive linear relationship, although not as strong as the relationship between debt and credit_limit.
4. As an added bonus, we can read off the correlation coefficient between the two explanatory variables, credit_limit and income, of 0.792.

Let’s visualize the relationship of the outcome variable with each of the two explanatory variables in two separate plots:

```r
ggplot(credit_ch7, aes(x = credit_limit, y = debt)) +
  geom_point() +
  labs(x = "Credit limit (in $)", y = "Credit card debt (in $)",
       title = "Debt and credit limit") +
  geom_smooth(method = "lm", se = FALSE)

ggplot(credit_ch7, aes(x = income, y = debt)) +
  geom_point() +
  labs(x = "Income (in $1000)", y = "Credit card debt (in $)",
       title = "Debt and income") +
  geom_smooth(method = "lm", se = FALSE)
```

FIGURE 7.5: Relationship between credit card debt and credit limit/income.

Observe there is a positive relationship between credit limit and credit card debt: as credit limit increases, so also does credit card debt. This is consistent with the strongly positive correlation coefficient of 0.862 we computed earlier.
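A correlation matrix like Table 7.8 is just the pairwise correlation coefficients arranged symmetrically, with 1s on the diagonal. A minimal sketch of the same computation outside R, using made-up toy data rather than the actual Credit dataset:

```python
import numpy as np

# Toy data: three columns standing in for debt, credit_limit, and income.
rng = np.random.default_rng(0)
limit = rng.uniform(1000, 10000, size=200)
income = limit / 80 + rng.normal(0, 15, size=200)   # correlated with limit
debt = 0.25 * limit + rng.normal(0, 300, size=200)

# Each entry is the correlation of one column with another.
corr = np.corrcoef(np.column_stack([debt, limit, income]), rowvar=False)
print(np.round(corr, 3))
```

As in Table 7.8, the matrix is symmetric and every variable correlates perfectly with itself.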
In the case of income, the positive relationship doesn’t appear as strong, given the weakly positive correlation coefficient of 0.464.

However, the two plots in Figure 7.5 only focus on the relationship of the outcome variable with each of the two explanatory variables separately. To get a sense of the joint relationship of all three variables simultaneously through a visualization, we need a 3-dimensional (3D) scatterplot, where for all 400 points we have:

1. The numerical outcome variable $$y$$ debt on the z-axis (the vertical axis)
2. The two numerical explanatory variables forming the axes on the bottom:
    1. The first numerical explanatory variable $$x_1$$, income
    2. The second numerical explanatory variable $$x_2$$, credit_limit

Furthermore, we also include a regression plane. In the case of regression models with a single numerical explanatory variable, we saw in Section 6.3.2 that the regression line is “best fitting” in that, of all possible lines we can draw through a cloud of points, it minimizes the sum of squared residuals. This concept now extends to when we have two numerical explanatory variables, only now we have a “best fitting” plane that cuts through the cloud of points and similarly minimizes the sum of squared residuals. If you’re in the webpage version of the book, click here to open an interactive version of this plot in your browser.

FIGURE 7.6: 3D scatterplot and regression plane.

Learning check

(LC7.2) Conduct a new exploratory data analysis with the same outcome variable $$y$$ being debt but with credit_rating and age as the new explanatory variables $$x_1$$ and $$x_2$$. Remember, this involves three things:

1. Most crucially: looking at the raw data values.
2. Computing summary statistics, like means, medians, and interquartile ranges.
3. Creating data visualizations.

What can you say about the relationship between a credit card holder’s debt and their credit rating and age?

### 7.2.2 Regression plane

Let’s now fit a regression model and get the regression table corresponding to the regression plane above. For simplicity’s sake, we won’t consider the two-numerical-explanatory-variable analogue of the interaction model from Section 7.1.2, which we fit with a model formula of the form y ~ x1 * x2, but rather only regression models with a model formula of the form y ~ x1 + x2. Somewhat confusingly, however, since we now have a regression plane instead of multiple lines, the label “parallel slopes model” doesn’t apply when you have two numerical explanatory variables.

Just as we have done multiple times throughout Chapter 6 and this chapter, let’s obtain the regression table for this model using our two-step process and display the results in Table 7.9.

1. We first “fit” the linear regression model using the lm(y ~ x1 + x2, data) function and save it in debt_model.
2. We get the regression table by applying the get_regression_table() function from the moderndive package to debt_model.

```r
# Fit regression model:
debt_model <- lm(debt ~ credit_limit + income, data = credit_ch7)
# Get regression table:
get_regression_table(debt_model)
```

TABLE 7.9: Multiple regression table.

| term         | estimate | std_error | statistic | p_value | lower_ci | upper_ci |
|--------------|----------|-----------|-----------|---------|----------|----------|
| intercept    | -385.179 | 19.465    | -19.8     | 0       | -423.446 | -346.912 |
| credit_limit | 0.264    | 0.006     | 45.0      | 0       | 0.253    | 0.276    |
| income       | -7.663   | 0.385     | -19.9     | 0       | -8.420   | -6.906   |

How do we interpret the three values in the estimate column?

- intercept = -$385.18 (rounded to two decimal points). The intercept in our case represents the credit card debt for an individual who has a credit_limit of $0 and income of $0. In our data, however, the intercept has limited practical interpretation since no individuals had credit_limit or income values of $0.
Rather, the intercept is used to situate the regression plane in 3D space.

- credit_limit = $0.26. Taking into account all the other explanatory variables in our model, for every increase of one dollar in credit_limit, there is an associated increase of, on average, $0.26 in credit card debt. Note:
    - Just as we did in Subsection 6.1.2, we are cautious not to make a causal statement and merely state that there was an associated increase.
    - We preface our interpretation with the statement “taking into account all the other explanatory variables in our model”, here income, to emphasize that we are now jointly interpreting the associated effect of multiple explanatory variables in the same model at once.
- income = -$7.66. Taking into account all the other explanatory variables in our model, for every increase of one unit in the variable income, in other words $1000 in actual income, there is an associated decrease of, on average, $7.66 in credit card debt.

Putting these results together, the equation of the regression plane that gives us fitted values $$\widehat{y}$$ = $$\widehat{\text{debt}}$$ is:

\begin{aligned}
\widehat{y} &= b_0 + b_1 \cdot x_1 + b_2 \cdot x_2\\
\widehat{\text{debt}} &= b_0 + b_{\text{limit}} \cdot \text{limit} + b_{\text{income}} \cdot \text{income}\\
&= -385.179 + 0.264 \cdot \text{limit} - 7.663 \cdot \text{income}
\end{aligned}

Recall in the right-hand plot of Figure 7.5 that when plotting the relationship between debt and income in isolation, there appeared to be a positive relationship. In the above multiple regression, however, when jointly modeling the relationship between debt, credit_limit, and income, there appears to be a negative relationship between debt and income, as evidenced by the negative slope for income of -7.66. What explains these contradictory results? A phenomenon known as Simpson’s Paradox, whereby overall trends that exist in aggregate either disappear or reverse when the data are broken down into groups.
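The sign flip just described can be reproduced on synthetic data: when the two explanatory variables are strongly correlated, the marginal slope of debt on income can be positive even though the partial slope, holding credit limit fixed, is negative. This is a toy sketch with made-up generating coefficients, not the actual Credit data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 400

# Income in $1000s; credit limit strongly tied to income.
income = rng.uniform(10, 190, size=n)
limit = 40 * income + rng.normal(0, 800, size=n)

# Debt generated with a NEGATIVE partial effect of income, as in Table 7.9.
debt = -385 + 0.264 * limit - 7.663 * income + rng.normal(0, 80, size=n)

# Simple regression of debt on income alone: the slope comes out POSITIVE,
# because income acts as a proxy for the omitted credit limit.
marginal_slope = np.polyfit(income, debt, 1)[0]

# Multiple regression recovers the negative partial slope for income.
X = np.column_stack([np.ones(n), limit, income])
coefs, *_ = np.linalg.lstsq(X, debt, rcond=None)
partial_income_slope = coefs[2]

print(marginal_slope > 0, partial_income_slope < 0)  # True True
```

The same data thus yields opposite-signed slopes depending on whether credit limit is in the model, which is exactly the aggregate-versus-grouped reversal named above.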
In Subsection 7.3.3, we elaborate on this by looking at the relationship between credit_limit and credit card debt, but split by different income brackets.

Learning check

(LC7.3) Fit a new multiple regression using lm(debt ~ credit_rating + age, data = credit_ch7), where credit_rating and age are the new numerical explanatory variables $$x_1$$ and $$x_2$$. Get information about the “best-fitting” regression plane from the regression table by applying the get_regression_table() function. How do the regression results match up with the results from your exploratory data analysis above?

### 7.2.3 Observed/fitted values and residuals

Let’s also compute all fitted values and residuals for our regression model using the get_regression_points() function and present only the first 10 rows of output in Table 7.10. Remember that the (x, y, z) coordinates of each of the blue points in our 3D scatterplot can be found in the income, credit_limit, and debt columns. The fitted values on the regression plane are found in the debt_hat column and are computed using our equation for the regression plane from the previous section:

\begin{aligned}
\widehat{y} = \widehat{\text{debt}} &= -385.179 + 0.264 \cdot \text{limit} - 7.663 \cdot \text{income}
\end{aligned}

```r
regression_points <- get_regression_points(debt_model)
regression_points
```

TABLE 7.10: Regression points (first 10 card holders of 400).

| ID | debt | credit_limit | income | debt_hat | residual |
|----|------|--------------|--------|----------|----------|
| 1  | 333  | 3606         | 14.9   | 454      | -120.8   |
| 2  | 903  | 6645         | 106.0  | 559      | 344.3    |
| 3  | 580  | 7075         | 104.6  | 683      | -103.4   |
| 4  | 964  | 9504         | 148.9  | 986      | -21.7    |
| 5  | 331  | 4897         | 55.9   | 481      | -150.0   |
| 6  | 1151 | 8047         | 80.2   | 1127     | 23.6     |
| 7  | 203  | 3388         | 21.0   | 349      | -146.4   |
| 8  | 872  | 7114         | 71.4   | 948      | -76.0    |
| 9  | 279  | 3300         | 15.1   | 371      | -92.2    |
| 10 | 1350 | 6819         | 71.1   | 873      | 477.3    |

## 7.4 Conclusion

An R script file of all R code used in this chapter is available here.

### 7.4.2 What’s to come?

Congratulations! We’ve completed the “Data Modeling via moderndive” portion of this book!
We’re ready to proceed to the third and final portion of this book: “Statistical Inference via infer.” Statistical inference is the science of inferring about some unknown quantity using sampling. Among the most well-known examples of sampling are polls. Because asking an entire population about their opinions would be a long and arduous task, pollsters often take a smaller sample that is hopefully representative of the population. Based on the results of the sample, pollsters hope to make claims about the greater population.

Once we’ve covered Chapter 8 on sampling, Chapter 9 on confidence intervals, and Chapter 10 on hypothesis testing, in Chapter 11 on inference for regression we’ll revisit the regression models we studied in Chapters 6 and 7. So far we’ve only studied the estimate column of all our regression tables. The next four chapters focus on what the remaining columns mean: std_error (standard error), statistic (test statistic), p_value (p-value), lower_ci (lower 95% confidence interval bound), and upper_ci (upper 95% confidence interval bound).

Furthermore, we’ll talk about the important role the residuals $$y - \widehat{y}$$ play in interpreting the results of a regression. We’ll perform what are known as residual analyses of the residual variable in the output of get_regression_points() to verify what are known as the “conditions for inference for regression.” On to the next one!

FIGURE 7.12: ModernDive flowchart - On to Part III!
https://motls.blogspot.com/2020/09/discrimination-of-dead-alive.html?m=1
[ "## Monday, September 21, 2020\n\n### Discrimination of dead-alive superpositions allows resurrection\n\n...but the confusion about these simple insights shows some people's trouble with the state-dependence and indeed, with the universal rules of quantum mechanics, too...", null, "In a new, 12-page-long quant-ph preprint,\nOn the Hardness of Detecting Macroscopic Superpositions,\nAaronson and Atia (Austin) and Susskind (Stanford & the Google Evil Corporation) make an interesting observation, and I think that it is basically a correct one:\nIf you had the measurement of Schrödinger-cat-like superpositions $$\\ket{{\\rm dead}} + \\ket{{\\rm alive}}$$ under full control, you could slightly extend your gadget and the extended gadget would also allow you to resurrect the cat.\nIn this form, I think that the statement is correct. The detection of complicated superpositions is mostly equivalent to the reversal of decoherence; and it is also, perhaps less trivially, equivalent to the ability to transform the distinct states onto each other (which means to turning dead cats into alive ones).\n\nThese dudes dedicate 12 rather complicated pages to the demonstration of this statement – and its generalization which says that if one of the things may be done partially, the others may also be done partially. But I am just annoyed by the degree of obfuscation and the rather striking sloppiness in their language.\n\nSo how does it work? Imagine that you have two very specific microstates of a cat, $$\\ket{{\\rm dead}}$$ and $$\\ket{{\\rm alive}}$$, and you play with the 2D Hilbert space of the superpositions. This is already an utterly unrealistic assumption because a cat carries a huge entropy, whether it is alive or dead (so there are exponentially many microstates), and even if you managed to pick a unique microstate for an alive cat, a cat may be killed in exponentially many ways, so there would be lots of the dead microstates. 
But let's assume that only two states are relevant. The real point is that the superpositions are $\\ket \\psi = a\\ket{{\\rm alive}}+b \\ket{{\\rm dead}}$ where $$a,b\\in\\CC$$ are complex numbers. It seems that they want to write their preprint as a partially popular one and the language of the abstract is just terribly anti-quantum-mechanical. They wrote:\nif one had a quantum circuit to determine if a system was in an equal superposition of two orthogonal states...\nSorry but if the coefficients above obey $$|a|^2 = |b|^2$$, then it is right to call the state $$\\ket{\\psi}$$ \"an equal superposition\" of the (orthogonal) dead and alive basis vectors. There is absolutely nothing special about the situation when the \"relative phase is positive real\". In fact, the phase of all these vectors is a pure convention and there doesn't exist any preferred choice of this convention. Not only that, all the phases brutally evolve with time (typically at different rates for \"dead\" and for \"alive\"). And because the \"dead\" and \"alive\" states are almost certainly not energy eigenstates, they evolve much more than by changing their prefactor (the phase) which is why any \"fixed\" two-dimensional Hilbert space will be insufficient for the full monitoring of (and manipulation with) the cat.\n\nSo their suggestion that the sum \"dead plus alive\" is an equal superposition while \"dead minus alive\" is not an equal superposition is a brutal misinterpretation of the role of the relative phases in quantum mechanics (the phases, including signs, are \"unphysical\" which is how they differ from the probabilities themselves whose sign change or phase change would be rather dramatic, and impossible). Whether two states' superposition is an \"equal superposition\" surely mustn't depend on the relative phase. But more generally, they talk about the measurement apparatus (\"quantum circuit\") that does a certain measurement. 
And they imagine that the measurement decides

...if a system was in an equal superposition of two...

This is a very weird way of organizing quantum mechanics because it implicitly suggests that we may “measure the state” directly. In quantum mechanics, we don’t measure states (wave functions aren’t observable in the colloquial sense, which is almost the same statement as “wave functions aren’t observables” in the technical mathematical sense); we measure observables (Hermitian linear operators). Great, in this case, we may construct the operator that their “quantum circuit” is supposed to measure by the assumption:

$L = \frac{ (\ket{{\rm dead}} + \ket{{\rm alive}}) (\bra{{\rm dead}} + \bra{{\rm alive}}) }{2}$

which is a Hermitian linear operator, a projection operator onto the state that is the “sum” superposition. But indeed, as soon as you write the operator $$L$$ in this way, you see that your “quantum circuit” has some amazing abilities. It not only measures the relative phase between the “dead” and “alive” states, because it acts as $$1$$ on the “sum superposition” but as $$0$$ on the “difference superposition”. This operator $$L$$ also kills all the other microstates of the cat, whether they behave as dead or alive cats (or dogs).

Much more natural measurements obviously never look like $$L$$ above. This $$L$$ is only “supported” by the highly cherry-picked two-dimensional subspace of the Hilbert space, and the cherry-picking of this two-dimensional space is at least as “impossible in practice” as all the other operations that they and we discuss; well, in the “sum-difference” basis, the operator is only supported by a one-dimensional Hilbert space because it is a projection operator onto a single particular state (or the one-dimensional space spanned by it).
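In the two-dimensional (dead, alive) basis, the projector $$L$$ is just a 2×2 matrix, and its advertised behavior (eigenvalue 1 on the “sum” state, 0 on the “difference” state) can be checked directly. A minimal numpy sketch, with the basis labeling being our own convention:

```python
import numpy as np

# Basis vectors in the cherry-picked 2D subspace.
dead = np.array([1.0, 0.0])
alive = np.array([0.0, 1.0])

plus = (dead + alive) / np.sqrt(2)    # the "sum" superposition
minus = (dead - alive) / np.sqrt(2)   # the "difference" superposition

# L = (|dead> + |alive>)(<dead| + <alive|) / 2 = |plus><plus|
L = np.outer(plus, plus)

# L acts as 1 on the sum state and as 0 on the difference state.
assert np.allclose(L @ plus, plus)
assert np.allclose(L @ minus, 0)
# It is Hermitian and idempotent, as any projection operator must be.
assert np.allclose(L, L.conj().T) and np.allclose(L @ L, L)
```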
But regular gadgets also "do measure" something about many other microstates you may enter, and they will generically produce nonzero results for most of them.

My point is that the operator $$L$$ doesn't really "measure if the object is in a particular superposition". Instead, it is an operator that measures the relative phase between "dead" and "alive" at a given moment. And indeed, it is totally trivial to see that once you have a gadget that can measure the "relative phase", basically the same gadget may also switch one state to the other – it may transform "dead" into "alive" and vice versa.

Why is it so? This is just a damn simple two-dimensional Hilbert space. Every undergrad should have learned everything about two-dimensional Hilbert spaces, e.g. from the Feynman Lectures on Physics. Such a space is equivalent e.g. to the spin-1/2 particle's spin. And all the observables are real combinations of $$1,\sigma_x,\sigma_y,\sigma_z$$, the (identity and) Pauli matrices! If you have a gadget that can measure the relative phase, it is a gadget that measures $$\sigma_x$$ or $$\sigma_y$$ or their combinations, the "totally off-diagonal" $$2\times 2$$ matrices.

Without a loss of generality, you may say that such a gadget or circuit measures $$\sigma_x$$, or it transforms the system so that the measurement of $$\sigma_x$$ is transformed to the much easier measurement of $$\sigma_z$$. But if you can manipulate the 2-component state in this fine way, you may obviously turn it into a gadget that rotates the state by 90 degrees in the dead-alive plane, i.e. that maps $$a\to b$$ and $$b\to a$$, among other things. Why? Because the matrix $$\sigma_x$$ is simply exchanging the two amplitudes! So yes, this gadget turns alive cats into dead cats and, more impressively, vice versa.

Well, we may adjust the argument in the previous paragraph to their "quantum circuit" as they imagined it.
They actually imagine that they have a quantum circuit that may turn the "difficult" measurement of $$\sigma_x$$ into the "easy" measurement of the diagonal $$\sigma_z$$. What does this circuit do in the 2-dimensional space? Well, it must rotate the coordinates $$a,b$$ into $$(a+b)/\sqrt 2, (a-b)/\sqrt 2$$. But that's nothing else than some form of a rotation of the 2 coordinates by 45 degrees. That doesn't cripple your ability to do the step from the previous paragraph. If you repeat a rotation by 45 degrees twice, you get a rotation by 90 degrees, e.g. $$a\to b$$, $$b\to -a$$.

Mathematically, all these would-be impressive operations are exactly analogous to the mundane measurement of the electron's spin with respect to another axis such as $$x$$. If your gadget can only measure the $$z$$-component of the spin, you may measure the $$x$$-component by simply rotating the electron by 90 degrees at the beginning (around the $$y$$-axis, for example). You can achieve it by adding the magnetic field $$\vec B$$ in the direction of the $$y$$-axis. Needless to say, this magnetic field is pretty much sufficient for measuring $$\sigma_y$$ itself: it sends the electron along one of the two trajectories depending on the $$y$$-component of the spin. I suppose that you will learn or recall how the components of the electron's spin are measured and manipulated. They don't do anything else; they just give fancy names to these trivial problems – while they omit all the extra difficulties that would arise if you dealt with a cat and not an electron.

So yes, indeed, the "circuits" that can rotate the Hilbert space (e.g. by 45 degrees) along some difficult axis "for the purpose of measurements" may also rotate the Hilbert space "for the purpose of resurrection". This is a totally trivial consequence of the basic quantum mechanical axioms or thinking. And we don't really need the 45-degree rotations to make the general point.
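Both facts – that $$\sigma_x$$ simply exchanges the two amplitudes, and that a 45-degree rotation applied twice is a 90-degree rotation – can be checked in a few lines of NumPy. The amplitudes below are made up for illustration, and the sign convention of the rotation is a choice; nothing here comes from their circuit:

```python
import numpy as np

# A hypothetical dead/alive qubit: (a, b) are the amplitudes of |dead>, |alive>
a, b = 0.8, 0.6j
state = np.array([a, b])

# Pauli sigma_x exchanges the two amplitudes -- the "resurrection" map
sigma_x = np.array([[0, 1], [1, 0]])
print(sigma_x @ state)        # amplitudes swapped: dead <-> alive

# One choice of 45-degree rotation of the (a, b) plane; applying it twice
# gives a 90-degree rotation, i.e. a -> b, b -> -a
R45 = np.array([[1, 1], [-1, 1]]) / np.sqrt(2)
print(R45 @ R45 @ state)      # (b, -a)
```

The matrix `R45` is only "some form of" the map $$(a,b)\to((a+b)/\sqrt 2,(a-b)/\sqrt 2)$$ discussed above; the two differ by a sign convention, which doesn't affect the argument.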
The exchange of the two coordinates is simpler and has the same consequence. If you can measure $$\sigma_x$$, your gadget to measure $$\sigma_x$$ also has the effect of swapping "up" and "down" or, as they are called here, "alive" and "dead". In particular, it resurrects a dead cat.

I just find it silly to spend many pages on this trivial point. They spend so much time with this trivial point exactly because they add lots of the obfuscation that makes their text "always a little bit wrong", as Feynman would say about the "energy makes it go" physics textbooks. They want to present the quantum mechanical measurements as if they were classical measurements. And when it is done so, the demonstration of their trivial statement becomes more convoluted; and the very trivial statement looks more mysterious, too.

They claim that their insights have consequences for the state dependence in quantum gravity. I am sorry, but everyone who discusses difficult things such as state dependence in quantum gravity should first understand the basic mathematics and physics of two-dimensional Hilbert spaces in quantum mechanics. Aaronson, Atia, and Susskind are clearly still confused about that basic undergraduate material of quantum mechanics. Their paper doesn't talk about any field-like operators, including the metric tensor, which is why it has nothing whatsoever to say about "quantum gravity", at least not about the parts of "quantum gravity" that go beyond the "elementary quantum mechanics".

It is surely true that the construction of gadgets to measure or transform states just like $$\sigma_x$$ in large Hilbert spaces describing complex objects becomes hard. But I don't think that there is any totally universal definition of the "hardness" (like some physics-universal "complexity" quantity). A way to refine the notion of "hardness" is to count "the number of components that a gadget may have".
But the resulting number of components obviously depends on the "list of component types that you allow" and the rules for how they can be added together. In the context of a realistic theory of quantum gravity – which contains both gravity and other forces, as the swampland reasoning basically implies – this is a totally "vacuum-dependent" or, more generally, "environment-dependent" question. There can't be any definition of the hardness that produces the right result for "all superselection sectors of quantum gravity" simultaneously.

That is the general reason why I find the bulk of Susskind's and similar papers dealing with "complexity as a measure of something in quantum gravity" incorrect. He just assumes that the complexity-like quantities are completely universal across the quantum gravity Hilbert spaces. But they are actually totally dependent on the vacuum. This dependence could actually be said to be at least "morally the same thing" as the state dependence of quantum gravity. But to show that Susskind's reasoning is misguided, it is enough to realize much more innocent facts, such as the fact that (see The Elegant Universe, LOL, which is totally enough here): dualities typically transform very difficult problems into very easy ones.

The funny ability of the string dualities is that some objects that look like insanely complex bound states or composites of the simplest objects become "elementary and simple" in a dual description, but they may be used just like the original "elementary" objects.
So you may construct your "intermediate" complex states using totally different "elementary" objects, and which composites are "simple" and which are "complex" gets totally rearranged; in many situations, the complexity is largely rotated upside down.

One must obviously be careful about "which list of elementary particles or fields or observables" is allowed when we want to discuss a proposed quantity such as "the complexity of a state". Everyone who understands the very meaning of dualities must agree. Because the basic allowed observables are normally understood as some excitations of the empty space (the vacuum), the complexity function defined for whole Hilbert spaces is always state-dependent. A black hole microstate does look like the empty space "in most of the volume", and in combination with the previous statement, it means that any notion of complexity must depend on the chosen pure state, too. At some level, this is really another complete proof of the state dependence.

I found it increasingly clear that some people's opposition to the "state dependence" was nothing else than another manifestation of their highly imperfect (to put it mildly) understanding of the universal rules of quantum mechanics. Some people want the field operators etc. to be "state-independent" for the very same reason why they want observations to be "observer-independent". But all observations in quantum mechanics are unavoidably "observer-dependent": they simply want to keep on thinking about the Universe classically. A priori, all observables (Hermitian linear operators on a Hilbert space) are "equally allowed" as all others.
Whether one operator looks simpler because it is "more diagonal" than others totally depends on the chosen basis (or Fock-like construction) of the Hilbert space, and none of the bases may be said to be "universally preferred" across the landscape of string theory, or similarly across the Hilbert space of some black hole microstates.

So the possible quantitatively well-defined schemes to measure the "complexity of states" are as numerous as the possible environments that allow observers to do things and mentally divide them into elementary particles or elementary processes. Those are equivalent to "systems of field operators within quantum gravity" and there are many of them, basically as many as the number of "string vacua". The general qualitative philosophical insights, e.g. that some measurement apparatuses look simpler than others, are valid. But no quantitative definitions of the complexity and similar things may ever be state-independent or observer-independent. The entropy (the log of the dimension of a relevant Hilbert space) is the only characteristic of a "physical system" that doesn't depend on the choice of the observer and his interpretation of "what the empty space is". But even the entropy becomes debatable once you realize that every observer may always imagine that there's a "totally disconnected component of the space" which carries some extra entropy. The normal desire to "subtract this entropy and the inaccessible degrees of freedom" depends on the statement that they're inaccessible.
But the inaccessibility still depends on the Hamiltonian, which is (at least when it comes to details) dependent on the chosen vacuum and gauges, too.

Susskind wants to turn the "most important ideas in physics" into something that is independent of the algebra of the field-like operators (which includes the information about the mutual relationships of these operators and their evolution – which is given by their relationship with some Hamiltonian operator, anyway). But everything in quantum physics depends on the field-like operators; otherwise it is not quantum physics, although it may be the basis of some computer science papers or at least mathematical masturbation papers inspired by physics and by computer science.

And that is the memo.
https://www.limited-entropy.com/discrete-log/comment-page-1/
# Limited Entropy Dot Com – Not so random thoughts on security, featured by Eloi Sanfèlix

4 Feb 2010

## Crypto Series: Discrete Logarithm

From the last post, it becomes clear that at this stage we won't be able to make it without some maths. That's because we are dealing now with public key crypto, which is based on difficult mathematical problems (as in difficult to solve, not as in difficult to understand).

With symmetric crypto, we could understand the concepts of diffusion and confusion without needing to dive into maths. On the other hand, here we will need to understand the problems on which the algorithms rely in order to understand how they work.

In this post, we'll see what the Discrete Logarithm problem is, why it is difficult to solve based on a simple intuition, and finally a method to solve this kind of problem. Of course it's not the only (nor the best) existing method, but in my opinion it is the simplest one to understand.

The Discrete Logarithm problem

As I said before, the Discrete Logarithm problem is formulated as follows. Given a cyclic group G with a generator g, x is called a discrete logarithm of h over this group if it satisfies the following condition:

$h = g^x$ in G

So this is the equivalent of a logarithm, but instead of computing it over the real numbers it is computed over a finite cyclic group. And now, if you don't have any background in discrete maths, coding theory and the like, you are probably asking something along these lines: what the hell does that mean?

To keep it simple, a finite cyclic group G with a generator g means that the successive powers of g (i.e. $g^0, g^1, g^2, g^3, \ldots, g^{n-1}$) will generate the different elements of the group. At some point, after a finite number of elements ($g^n$), the result will cycle over to the first element (i.e. $g^n = g^0$), and this is what gives the name to these groups ;-).
Now, this value, n, is called the order of the group and is obviously also the number of elements of the group, or cardinality.

I won't go any further with the explanation of the properties of cyclic groups and all the group theory behind this. I'll just say that a simple example of finite cyclic groups is that of the integers modulo some prime number p, excluding the zero element. These groups are usually noted as $\mathbb{Z}^*_p$, where p is our prime number and the group order is p-1.

For instance, say we look at $\mathbb{Z}^*_7$. Then, for this group we get that the group elements are these:

1, 2, 3, 4, 5, 6

Since those are all the nonzero integers modulo 7. Now, a generator of this group would be for instance g = 3. You can see that in this case the successive powers of 3 modulo 7 are:

1, 3, 2, 6, 4, 5, 1

And there you have that $g^{6 \cdot k+i} \equiv g^i \pmod 7$. Therefore, this is a cyclic group of order p-1 = 6.

Difficulty of the DL problem

Now, where is the difficulty of the DL problem? We'll just take an intuitive approach to it. When you think of a classical logarithm over the real numbers, it turns out that this is a continuous and monotonous function where $\log x > \log y$ if $x > y$. This means that if you know the logarithm of x, and y is pretty close to it, most likely the logarithm of y will be pretty close to it as well.

But when you look at the discrete logarithm, you can see that the behavior of this problem is not as predictable as that of the logarithm function. For instance, in the example above we have that $g^3 \equiv 6 \pmod 7$, but $g^4 \equiv 4 \pmod 7$.
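If you want to check the toy example above without installing anything, Python's built-in `pow` already does modular exponentiation:

```python
# The successive powers of g = 3 modulo p = 7 run through all six
# nonzero residues and then cycle back to 1 at g^(p-1).
p, g = 7, 3
powers = [pow(g, i, p) for i in range(p)]  # exponents 0..6
print(powers)  # [1, 3, 2, 6, 4, 5, 1]
```

The jump from $g^3 \equiv 6$ down to $g^4 \equiv 4$ is visible directly in this list, which is exactly the unpredictability discussed next.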
Extrapolating this to big numbers, you can see that it is probably not very easy to go back from a certain power of a prime number to the exponent itself (i.e., computing the DL).

Solving Discrete Logarithms: Baby Step Giant Step

All right, now we get to look at an actual method to compute discrete logarithms. The method is called Baby Step Giant Step due to the way we approach the problem: we create a table of a few powers of our generator: these are our baby steps. Then we take our target problem, and take big steps (of the size of the generated table) downwards until we hit the table. At that point, we know how many steps we took and we can compute the actual logarithm.

But of course, all this may sound like bullshit until you see an actual example. Let's take the following problem, which uses intentionally small numbers:

Given $y = 8938$, compute its discrete logarithm in $\mathbb{Z}^*_{17627}$.

Ok, let's start then. We compute a table of a given size, let's say 100 elements. I used to do it with Mathematica, but I do not have it right now so I'm using for the first time ever the Sage open source program. I advise you to install it, because it will also come in handy to verify other examples (such as RSA examples) in the future.

So we start by instantiating our cyclic group, and getting a generator and our value y:

sage: G = IntegerModRing(17627)
sage: g = G(6)
sage: g.multiplicative_order()
17626
sage: y = G(8938)

Now, we build our list of 100 powers and plot it:

sage: powers = [ g^i for i in range(100) ]
sage: list_plot(powers)

Note that Sage directly applies modular exponentiation since g was created as an element of the finite field we are using for this problem. Also, note that the behavior is not really predictable, and after a big number there often comes a small number, but of course not always. You can observe this behavior in the plot obtained.

Ok, let's continue with our search.
First, we know that our number, y, is part of the finite field, and therefore we can write it as follows:

$y \equiv g^x \equiv g^{100\cdot j + i} \pmod{17627}$

Where of course i is a number below 100. We can further develop this equation and write it the following way:

$g^x \cdot g^{-100 \cdot j} \equiv g^i \pmod{17627}$

And here comes the magic! If you look at this equation, $g^i$ is actually a member of our table of powers. Further, we can compute $a = g^{-100}$, which is easy. Then, we can take y and multiply it by a and check if the result is in the table. In that case, it means that $x - 100 \cdot j = i$ and we can easily compute x!

If it was not the case (which is likely), then we will have to multiply again by a as many times as we need until we hit the table. Let's call that number k. At that point, we've found that $g^x \cdot g^{-100\cdot k} \equiv g^i \pmod{17627}$. Since we know k (it's the number of times we applied our multiplication!) and i (we take it from the table), we can compute x.

All this can be translated into the following fragment of Sage commands:

sage: j = 1
sage: while not y*a^j in powers: j += 1
....:
sage: j
79
sage: i = powers.index(y*a^j)
sage: i
70
sage: x = 100*j + i
sage: x
7970
sage: g^x == y
True

So what you see above means that after 79 steps we have found the value at position 70 in the list. Therefore, the discrete logarithm of y in G is x = 7970. After that, I compare the x-th power of g with y to be sure that the result is correct, and Sage returns True. If you happen to know a bit of Python, you can notice that Sage has a pretty similar syntax (but not identical).

Of course, Sage also provides easier ways to do it. You can just type y.log(g) to solve the problem here:

sage: y.log(g)
7970

Closing thoughts

The method explained above is not the only one nor the most efficient.
Also, as usual, the explanations here do not attempt to be a 100% accurate description of the problem from a mathematical point of view (I'm not a mathematician after all) but rather to explain crypto topics in a simple way so that most people can understand them.

If you want to go further with DL problems, get accurate descriptions of them and understand other methods of solving the problem, you can resort (once again) to the Handbook of Applied Cryptography. Especially chapters 2 and 3 treat this and related subjects, covering maths background and reference problems.

I hope you are enjoying these posts and see you soon!

#### Posted by Eloi Sanfèlix

1. Awesome article and greatly explained 😀

2. Hi, very cool article – and welcome to Sage 😉

   1. you can create the powers list more pythonically:

      sage: powers = [ g^i for i in range(100) ]

   2. you can visualize its "randomness" via list_plot(powers)

   3. the while loop is a bit odd, this looks better:

      sage: a = g^-100; y = G(8938); j = 1
      sage: while not y*a^j in powers: j += 1
      ....:
      sage: j
      79

3. @schilly Cool, thanks for the tips!

   I'm not much of a Python guy, I'm more of a C coder. I'll update the post with your suggestions 🙂

4. can you send me the output of the 100 data points. I am not savvy enough to figure out Sage. thanks

5. Can you explain how you selected the number 100 for the number of powers? As far as I know it should be sufficient to take the square root of the highest divisor of p-1, but that would be 36 in this case so it doesn't work. Thanks!

6. It's just an arbitrary number. The bigger the precomputed table, the fewer steps you need to perform (but the more memory and pre-computation work!).

   You can easily see it in the example above. Say you use 1000 powers instead of 100; then you'd need 7 steps instead of 79, and you'd find your result at index 970.

   Thus, your final result will still be 7970.
   Only now you'll do 7 steps (instead of 79) and at each step you need to perform a search within a list of 1000 points (rather than 100).
https://www.brighthubeducation.com/middle-school-math-lessons/128530-factors-and-multiples/
# Math Lesson for Factors and Multiples

## Lesson Objective

The lesson is aligned to the Common Core State Standards for Mathematics – 4.OA.4 Operations and Algebraic Thinking: Find all factor pairs for a whole number in the range 1–100. Recognize that a whole number is a multiple of each of its factors. Determine whether a given whole number in the range 1–100 is a multiple of a given one-digit number. Determine whether a given whole number in the range 1–100 is prime or composite.

Materials: Calculator

## Finding all Factor Pairs

Part A

Two numbers are called a factor pair of a number when the two numbers multiplied together give the number.

Example: Find the factor pairs of 36.

Make a table to show the factor pairs for each number.

1. 12
2. 48
3. 51
4. 88
5. 96

Part B

A multiple is any number that is the product of a given number and a whole number.

You can find the multiples of a number by multiplying the number by 1, 2, 3, etc.

Example: Find the first four multiples of 6. Multiply 6 by 1, by 2, by 3, by 4. The first four multiples of 6 are 6, 12, 18, 24.

1. Find the first five multiples of 2.
2. Find the first four multiples of 5.
3. Find the first four multiples of 7.
4. Find the first three multiples of 4.
5. Find the first three multiples of 3.

Part C

A number is a composite number when the number has factor pairs other than 1 and itself.

A number is a prime number when the number has only one factor pair, 1 and itself.

Determine whether each number is prime or composite.

1. 49
2. 53
3. 11
4. 33
5. 17

Answer Key

Part B

1. 2, 4, 6, 8, 10
2. 5, 10, 15, 20
3. 7, 14, 21, 28
4. 4, 8, 12
5. 3, 6, 9

Part C

1. 49 is composite because there are two factor pairs: 1, 49 and 7, 7
2. 53 is prime because the only factor pair is 1, 53
3. 11 is prime because the only factor pair is 1, 11
4. 33 is composite because there are two factor pairs: 1, 33 and 3, 11
5.
17 is prime because the only factor pair is 1, 17

Individual or Group Work:

Make a table to show the factor pairs for each number. Then determine whether the number is prime or composite.

1. 7
2. 56
3. 22
4. 13
5. 81

Find the first four multiples of each number.

6. 1
7. 8
8. 9
9. 10
10. 12

Answer Key

Find the first four multiples of each number.

6. 1, 2, 3, 4
7. 8, 16, 24, 32
8. 9, 18, 27, 36
9. 10, 20, 30, 40
10. 12, 24, 36, 48

## This post is part of the series: Mathematics Lesson Plan

This lesson plan covers Common Core math lessons for multiplicative comparisons, multistep word problems, factors, multiples and patterns.
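For teachers who want to generate additional answer keys, the worksheet's two tasks – listing factor pairs and using them to decide prime vs. composite – can be automated with a short Python sketch (the helper names below are my own):

```python
def factor_pairs(n):
    """Return the factor pairs (a, b) with a <= b and a * b == n."""
    return [(a, n // a) for a in range(1, int(n ** 0.5) + 1) if n % a == 0]

def is_prime(n):
    """A whole number greater than 1 is prime when its only factor pair is (1, n)."""
    return n > 1 and factor_pairs(n) == [(1, n)]

print(factor_pairs(36))  # [(1, 36), (2, 18), (3, 12), (4, 9), (6, 6)]
print(is_prime(17))      # True
print(is_prime(49))      # False -- 49 also has the factor pair (7, 7)
```

Note that looping only up to the square root of n is enough, because every factor pair has its smaller member at or below the square root.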
https://www.matteosordello.com/publication/a-bernstein-type-inequality-for-sums-of-selections-from-three-dimensional-arrays/
# A Bernstein type inequality for sums of selections from three dimensional arrays

### Abstract

We consider the three dimensional array $\mathcal{A} = \{a_{i,j,k}\}_{1\le i,j,k \le n}$, with $a_{i,j,k} \in [0,1]$, and the two random statistics $T_{1}:= \sum_{i=1}^n \sum_{j=1}^n a_{i,j,\sigma(i)}$ and $T_{2}:= \sum_{i=1}^{n} a_{i,\sigma(i),\pi(i)}$, where $\sigma$ and $\pi$ are chosen independently from the set of permutations of $\{1,2,\ldots,n\}$. These can be viewed as natural three dimensional generalizations of the statistic $T_{3} = \sum_{i=1}^{n} a_{i,\sigma(i)}$, considered by Hoeffding (1951). Here we give Bernstein type concentration inequalities for $T_{1}$ and $T_{2}$ by extending the argument for concentration of $T_{3}$ by Chatterjee (2005).

Publication: In Statistics and Probability Letters
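As a quick illustration (not part of the paper), the statistic $T_2$ can be simulated for a random array, and its sample mean checked against the exact expectation $\sum_{i,j,k} a_{i,j,k}/n^2$, which follows from $P(\sigma(i)=j,\,\pi(i)=k)=1/n^2$ for independent uniform permutations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.random((n, n, n))  # entries a_{i,j,k} in [0, 1]

def T2(A, rng):
    """One draw of T_2 = sum_i a_{i, sigma(i), pi(i)} for independent
    uniformly random permutations sigma and pi."""
    n = A.shape[0]
    sigma = rng.permutation(n)
    pi = rng.permutation(n)
    return A[np.arange(n), sigma, pi].sum()

samples = np.array([T2(A, rng) for _ in range(2000)])
# Exact mean: each index i picks a uniformly random (j, k) pair
print(samples.mean(), A.sum() / n**2)  # the two values should be close
```

The empirical spread of `samples` around this mean is what the paper's Bernstein type inequality bounds.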
http://soft-matter.seas.harvard.edu/index.php?title=Elasticity_of_compressed_emulsions&diff=prev&oldid=13289
# Elasticity of compressed emulsions

Original entry: Sujit S. Datta, APPHY 225, Fall 2009.

## Reference

T. G. Mason, J. Bibette, and D. A. Weitz, PRL 75, 2051 (1995).

T. G. Mason, M. D. Lacasse, G. S. Grest, D. Levine, J. Bibette, and D. A. Weitz, PRE 56, 3150 (1997).

## Keywords

emulsions, osmotic pressure, rigidity percolation, shear modulus, rheology

## Key Points

An emulsion is a metastable suspension of droplets of one fluid within another fluid, with the two fluids being immiscible. Emulsion droplets are stabilized against coalescence upon contact by a range of surfactants; typically, surfactants are ionic, imparting stability due to electrostatic repulsions at the droplet interfaces.

For low volume fractions, a Brownian emulsion is liquid-like; as the volume fraction of droplets increases, the viscosity of the emulsion may diverge (at the colloidal glass transition, volume fraction ~ 58%), similar to a hard-sphere suspension. However, while a disordered hard-sphere suspension can only be packed up to a maximum volume fraction of 64% (random close packing), a disordered emulsion can be packed past random close packing, due to the deformability of the emulsion droplets. When the droplets first begin to touch (at random close packing), the system 'jams': it becomes solid-like, and develops an elastic modulus.

A good deal of work in the past decade has focused on understanding this jamming transition, in a variety of ways. This set of papers was among the first to provide quantitative data motivating current ideas on jamming. In them, Mason et al.
describe very systematic experiments on disordered, Brownian oil-in-water emulsions stabilized by an ionic surfactant, providing two measures of the elasticity of the emulsion (the bulk modulus, a measure of the material's resistance to uniform compression, and the shear modulus, a measure of the material's resistance to uniform shear) as the emulsions are compressed, and the volume fraction is increased from below random close packing up to nearly 100% (the limit of a biliquid foam). The bulk modulus K is obtained by measuring the osmotic pressure $\\Pi$ of the emulsion as a function of the volume fraction (K is defined as $\\phi\\cdot d\\Pi/d\\phi$); experimentally, the osmotic pressure of a sample is set using dialysis, and the corresponding volume fraction is measured by evaporating off the water. The zero-frequency shear modulus is measured using linear rheology; Mason et al. find that the linear elastic modulus G' plateaus at low frequencies, consistent with other soft glasses, and use this value (G'p) as the zero-frequency shear modulus.

These experiments yielded a number of key results:

• Both bulk modulus and shear modulus measurements for droplets of different sizes collapsed onto a single dataset when rescaled by the Laplace pressure, $\\sigma/r$, where $\\sigma$ is the interfacial tension between the two phases and r is the droplet radius. This shows that the emulsion elasticity is set by the Laplace pressure, and is purely due to energy storage in the interfaces of the deformed droplets.
• As a function of volume fraction $\\phi>\\phi_{RCP}$, $G'_{p}\\sim \\phi\\cdot(\\phi-\\phi_{RCP})$ and $K\\sim \\phi^2+\\phi\\cdot(\\phi-\\phi_{RCP})$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8731504,"math_prob":0.97069645,"size":6410,"snap":"2021-04-2021-17","text_gpt3_token_len":1538,"char_repetition_ratio":0.137059,"word_repetition_ratio":0.8459152,"special_character_ratio":0.22340094,"punctuation_ratio":0.11679454,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98512316,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-24T13:07:38Z\",\"WARC-Record-ID\":\"<urn:uuid:06924666-77cc-4706-8fb8-8d2457a42d61>\",\"Content-Length\":\"25154\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:54ab9a36-7803-4cc6-997a-a1b88fd5b823>\",\"WARC-Concurrent-To\":\"<urn:uuid:c1ab2d49-421c-4610-9907-5176e429b641>\",\"WARC-IP-Address\":\"54.165.123.1\",\"WARC-Target-URI\":\"http://soft-matter.seas.harvard.edu/index.php?title=Elasticity_of_compressed_emulsions&diff=prev&oldid=13289\",\"WARC-Payload-Digest\":\"sha1:FUEE4LSRSOUP3VY3CZYG43AUKF2RTX7Z\",\"WARC-Block-Digest\":\"sha1:JSAPRF7QPY4FPWBGCI3XZNL554FQJBAB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703548716.53_warc_CC-MAIN-20210124111006-20210124141006-00128.warc.gz\"}"}
http://semantic-portal.net/java-basic-threads-fork-join
[ "# Fork/Join in Java\n\nDomains:\n\n## Fork/Join\n\nThe fork/join framework is an implementation of the ExecutorService interface that helps you take advantage of multiple processors. It is designed for work that can be broken into smaller pieces recursively. The goal is to use all the available processing power to enhance the performance of your application.\n\nAs with any ExecutorService implementation, the fork/join framework distributes tasks to worker threads in a thread pool. The fork/join framework is distinct because it uses a work-stealingalgorithm. Worker threads that run out of things to do can steal tasks from other threads that are still busy.\n\nThe center of the fork/join framework is the ForkJoinPool class, an extension of the AbstractExecutorService classForkJoinPool implements the core work-stealing algorithm and can execute ForkJoinTask processes.\n\n## Basic Use\n\nThe first step for using the fork/join framework is to write code that performs a segment of the work. Your code should look similar to the following pseudocode:\n\nif (my portion of the work is small enough)\ndo the work directly\nelse\nsplit my work into two pieces\ninvoke the two pieces and wait for the results\n\n\nWrap this code in a ForkJoinTask subclass, typically using one of its more specialized types, either RecursiveTask (which can return a result) or RecursiveAction.\n\nAfter your ForkJoinTask subclass is ready, create the object that represents all the work to be done and pass it to the invoke() method of a ForkJoinPool instance.\n\n## Blurring for Clarity\n\nTo help you understand how the fork/join framework works, consider the following example. Suppose that you want to blur an image. The original source image is represented by an array of integers, where each integer contains the color values for a single pixel. 
The blurred destination image is also represented by an integer array with the same size as the source.

Performing the blur is accomplished by working through the source array one pixel at a time. Each pixel is averaged with its surrounding pixels (the red, green, and blue components are averaged), and the result is placed in the destination array. Since an image is a large array, this process can take a long time. You can take advantage of concurrent processing on multiprocessor systems by implementing the algorithm using the fork/join framework. Here is one possible implementation:

public class ForkBlur extends RecursiveAction {
    private int[] mSource;
    private int mStart;
    private int mLength;
    private int[] mDestination;

    // Processing window size; should be odd.
    private int mBlurWidth = 15;

    public ForkBlur(int[] src, int start, int length, int[] dst) {
        mSource = src;
        mStart = start;
        mLength = length;
        mDestination = dst;
    }

    protected void computeDirectly() {
        int sidePixels = (mBlurWidth - 1) / 2;
        for (int index = mStart; index < mStart + mLength; index++) {
            // Calculate average.
            float rt = 0, gt = 0, bt = 0;
            for (int mi = -sidePixels; mi <= sidePixels; mi++) {
                int mindex = Math.min(Math.max(mi + index, 0),
                                      mSource.length - 1);
                int pixel = mSource[mindex];
                rt += (float)((pixel & 0x00ff0000) >> 16) / mBlurWidth;
                gt += (float)((pixel & 0x0000ff00) >> 8) / mBlurWidth;
                bt += (float)((pixel & 0x000000ff) >> 0) / mBlurWidth;
            }

            // Reassemble destination pixel.
            int dpixel = (0xff000000) |
                    (((int)rt) << 16) |
                    (((int)gt) << 8) |
                    (((int)bt) << 0);
            mDestination[index] = dpixel;
        }
    }

    ...


Now you implement the abstract compute() method, which either performs the blur directly or splits it into two smaller tasks.
A simple array length threshold helps determine whether the work is performed or split.

protected static int sThreshold = 100000;

protected void compute() {
    if (mLength < sThreshold) {
        computeDirectly();
        return;
    }

    int split = mLength / 2;

    invokeAll(new ForkBlur(mSource, mStart, split, mDestination),
              new ForkBlur(mSource, mStart + split, mLength - split,
                           mDestination));
}


If the previous methods are in a subclass of the RecursiveAction class, then setting up the task to run in a ForkJoinPool is straightforward, and involves the following steps:

1. Create a task that represents all of the work to be done.

		// source image pixels are in src
		// destination image pixels are in dst
		ForkBlur fb = new ForkBlur(src, 0, src.length, dst);

2. Create the ForkJoinPool that will run the task.

		ForkJoinPool pool = new ForkJoinPool();

3. Run the task.

		pool.invoke(fb);


For the full source code, including some extra code that creates the destination image file, see the ForkBlur example.

## Standard Implementations

Besides using the fork/join framework to implement custom algorithms for tasks to be performed concurrently on a multiprocessor system (such as the ForkBlur.java example in the previous section), there are some generally useful features in Java SE which are already implemented using the fork/join framework. One such implementation, introduced in Java SE 8, is used by the java.util.Arrays class for its parallelSort() methods. These methods are similar to sort(), but leverage concurrency via the fork/join framework. Parallel sorting of large arrays is faster than sequential sorting when run on multiprocessor systems. However, how exactly the fork/join framework is leveraged by these methods is outside the scope of the Java Tutorials.
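The split-or-compute pattern described above also works when each subtask must return a value, by extending RecursiveTask instead of RecursiveAction. The following is a hypothetical, self-contained sketch, not part of the original tutorial (the class name ForkSum and its threshold are invented here), that sums a large array in parallel:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Hypothetical example: the same split-or-compute pattern as ForkBlur,
// but returning a result through RecursiveTask instead of writing into
// a destination array.
class ForkSum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;

    private final long[] array;
    private final int start;
    private final int length;

    ForkSum(long[] array, int start, int length) {
        this.array = array;
        this.start = start;
        this.length = length;
    }

    @Override
    protected Long compute() {
        if (length < THRESHOLD) {
            // Small enough: do the work directly.
            long sum = 0;
            for (int i = start; i < start + length; i++) {
                sum += array[i];
            }
            return sum;
        }
        // Otherwise split into two halves; fork one, compute the other.
        int split = length / 2;
        ForkSum left = new ForkSum(array, start, split);
        ForkSum right = new ForkSum(array, start + split, length - split);
        left.fork();                      // schedule the left half asynchronously
        long rightSum = right.compute();  // work on the right half ourselves
        return rightSum + left.join();    // wait for the left half and combine
    }

    public static void main(String[] args) {
        long[] data = new long[10_000];
        for (int i = 0; i < data.length; i++) {
            data[i] = i + 1;              // 1 + 2 + ... + 10000
        }
        long sum = new ForkJoinPool().invoke(new ForkSum(data, 0, data.length));
        System.out.println(sum);          // prints 50005000
    }
}
```

As with ForkBlur, invoke() blocks until the whole task tree has completed, so the result is available as soon as it returns.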
For this information, see the Java API documentation.

Another implementation of the fork/join framework is used by methods in the java.util.stream package, which is part of Project Lambda scheduled for the Java SE 8 release. For more information, see the Lambda Expressions section." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82132053,"math_prob":0.9390496,"size":5531,"snap":"2022-27-2022-33","text_gpt3_token_len":1277,"char_repetition_ratio":0.121765874,"word_repetition_ratio":0.0022935779,"special_character_ratio":0.22726451,"punctuation_ratio":0.1277034,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9599961,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-03T04:47:57Z\",\"WARC-Record-ID\":\"<urn:uuid:4c11b92e-befc-4e63-85cd-b9c90355a5d8>\",\"Content-Length\":\"35629\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:509d4ee6-c37c-43da-8b3b-281c19c0cf9a>\",\"WARC-Concurrent-To\":\"<urn:uuid:3fe7c25a-79fe-4c97-8364-a6dde46f7e1a>\",\"WARC-IP-Address\":\"185.68.16.163\",\"WARC-Target-URI\":\"http://semantic-portal.net/java-basic-threads-fork-join\",\"WARC-Payload-Digest\":\"sha1:QMCGX3FT7NOTUMWKU2O2ALSQ7IBGELWF\",\"WARC-Block-Digest\":\"sha1:N5GC4H43W35QP7BETZI24C7JZDQXVPQY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104215790.65_warc_CC-MAIN-20220703043548-20220703073548-00613.warc.gz\"}"}
https://mathoverflow.net/questions/50610/what-are-classical-groups
[ "# What are “classical groups”?\n\nUnlike many other terms in mathematics which have a universally understood meaning (for instance, \"group\"), the term classical group seems to have a fuzzier definition. Apparently it originates with Weyl's book The Classical Groups but doesn't make it into the index there. It was propagated by Dieudonne and others. But I'm never sure exactly what groups are included/excluded by this label. Weyl himself seems to have been interested in general (and perhaps special) linear groups, together with orthogonal (or special orthogonal) and symplectic groups attached to bilinear/quadratic forms. Initially questions were raised mainly in characteristic 0, usually over $\\mathbb{C}$ but sometimes other fields as well.\n\nObviously it helps mathematical communication to have words and symbols which need no further explanation. But ambiguity tends to creep in. For example, what does one mean by \"natural numbers\" or the symbol $\\mathbb{N}$? (Is 0 a natural number or not?) What is a \"ring\"? (Does it have an identity element or not?) By now a number of book titles and thousands of research papers refer to classical groups. But which groups are included? Spin or half-spin groups? Projective versions of the linear groups mentioned above?\n\nIs there any precise definition of classical groups?\n\nADDED: The answers and comments have been enlightening, though like some other people I lean more toward a \"no\" answer to my basic question. The underlying concern on my part is whether the notion of \"classical group\" has become too vague to be useful, which I sometimes suspect is the case with newer umbrella terms like \"quantum group\". It seems that the only safe usage nowadays is \"classical groups, by which I mean one of those in the following list ....\", at which point the original label has lost most of its purpose.\n\nHowever ... 
the careful treatment by Porteous (which I wasn't familiar with) strikes me as well focused even if it omits some groups of interest. Weyl himself wanted a direct and concrete approach to representations and invariants of certain specific matrix groups, mainly over $\\mathbb{C}$. That's clearly much too narrow for later purposes, where the geometry of various kinds of forms over various kinds of rings gets more attention, along with internal group structure. But some of the geometric viewpoints might suggest paying more attention to PGL than to GL, contrary to the matrix group emphasis in most other work.\n\nIn any case, while the Killing-Cartan classification for Lie algebras still makes it natural to view A-D types as \"classical\" and the rest as \"exceptional\", I'm reluctant to go too far in fitting classical groups into the framework of semisimple Lie or algebraic groups based heavily on differential or algebraic geometry. That framework already has to be stretched to admit general linear groups or rings coming from number theory. And spin or half-spin or adjoint groups, however natural in Lie theory, probably don't fit so well into the familiar world of matrix groups.\n\nOne viewpoint I resist is the attempted definition given by Popov in the Springer encyclopedia. This doesn't really cover the ground consistently or comprehensively, besides which the short reference list is totally unbalanced.\n\nP.S. The views expressed in the various answers and comments are mostly quite reasonable, but leave me with the sense that everyday usage won't tend to converge. 
Maybe I should sum up my lingering uncertainty about the value of the term \"classical group\" by quoting one of Emil Artin's 1955 papers on the orders of finite simple groups: The notion of classical groups is taken in such a wide sense as to embrace all finite simple groups which are known up to now.\n\n• Great question, but I'm not too hopeful about getting a definitive answer, since presumably you've asked others this question in the past? – Deane Yang Dec 28 '10 at 23:38\n• What would you consider a precise definition? I feel like V.L. Popov makes a couple of good points here eom.springer.de/c/c022410.htm – Gjergji Zaimi Dec 28 '10 at 23:48\n• This seems like the same kind of unsolvable problem that arises in algebraic geometry with the word \"variety\": is it irreducible? Reduced? Geometrically irreducible?... – BCnrd Dec 28 '10 at 23:55\n• I think the answer is probably no; more importantly, I think that this is not a bad thing. As mathematicians we are trained to believe that \"the more precise a term is, the better,\" and I used to believe this, but I now believe that a better motto is, \"Sufficient to the day is the precision thereof.\" Imprecise terms such as \"variety\" or \"quantum group,\" or even worse offenders such as \"large cardinal\" or \"sieve,\" function perfectly well in mathematical discourse. I would argue that they are useful terms not despite their slight vagueness but because of their slight vagueness. – Timothy Chow Dec 29 '10 at 15:16\n• This question is related to what it means to be \"a group of Lie type\" which is discussed in a more recent question here mathoverflow.net/questions/136880/… – Jesper Grodal Sep 26 '13 at 8:35\n\nA classical group means one whose Dynkin diagram is one of the 4 infinite series A, B, C, D whose elements can be extended indefinitely, as opposed to the exceptional groups G2, F4, E6, E7, E8 whose Dynkin diagrams cannot be extended indefinitely (assuming everything is finite dimensional...). 
Alternatively the classical groups are the ones that (up to abelian pieces) can be defined by messing around with \"degree 2\" forms (sesquilinear, symmetric, alternating, trivial, etc); the exceptional groups can be defined using forms of degree at least 3.\n\n• So what you seem to be saying is that one should talk about classical Lie algebras instead of classical Lie groups. – José Figueroa-O'Farrill Dec 29 '10 at 5:52\n• I don't understand this comment, as my answer did not mention Lie algebras. – Richard Borcherds Dec 29 '10 at 18:26\n• All I meant was that all Lie groups having the same Dynkin diagram, have the same Lie algebra. So your answer seemed to me saying that you define a notion of classical Lie algebra to be a simple Lie algebra of type A,B,C or D; and then a classical Lie group is one whose Lie algebra is classical. – José Figueroa-O'Farrill Dec 29 '10 at 19:18\n• I have definitely heard it claimed that $SO(n)$ is not a classical group -- only $O(n)$ is. This may have been by someone partial to the Porteous result (I actually can't remember who it was). – Allen Knutson Dec 30 '10 at 3:05\n• The infinite-dimensional case seems apparently ambiguous, so it's probably best not to use it in deciding between definitions. Or one could just add (\"in infinite dimensions, it means ....\") to the end of the definition. – Will Sawin Dec 6 '11 at 5:19\n\nSuch a definition (but not the definition, I suppose) can be found in Clifford Algebras and the Classical Groups by Ian Porteous (see Chapt. 13).\n\nIt is based on the classification of real algebra anti-involutions of $A(n)$ where $A$ is equal to $K$ or $^2K$, and $K = \\mathbb R$, $\\mathbb C$ or $\\mathbb H$. By the theorem below there are ten cases. In each case there is a corresponding family of groups of correlated automorphisms analogous to the orthogonal groups.\n\nTheorem. 
Let $\\xi$ be an irreducible correlation on a right $A$-linear space of finite dimension $> 1$, and therefore equivalent to a symmetric or skew correlation. Then $\\xi$ is equivalent to one of the following ten types, these being mutually exclusive.\n\n1. A symmetric $\\mathbb R$-correlation;\n2. A symmetric, or equivalently a skew, $^2\\mathbb R^\\sigma$-correlation;\n3. A skew $\\mathbb R$-correlation;\n4. A skew $\\mathbb C$-correlation;\n5. A skew $\\tilde{\\mathbb H}$- or equivalently a symmetric $\\overline{\\mathbb H}$-correlation;\n6. A skew, or equivalently a symmetric,$^2\\overline{\\mathbb H}^\\sigma$ -correlation;\n7. A symmetric $\\tilde{\\mathbb H}$-, or equivalently a skew, $\\overline{\\mathbb H}$-correlation;\n8. A symmetric $\\mathbb C$-correlation;\n9. A symmetric, or equivalently a skew, $\\overline{\\mathbb C}$-correlation;\n10. A symmetric, or equivalently a skew, $^2\\overline{\\mathbb C}^\\sigma$-correlation.\n\nThe ten families of classical groups are as follows, where $p+q=n$:\n\n1. $O(p, q; \\mathbb R)$ or $O(p, q)$, with $O(n) = 0(0, n)$;\n2. $GL(n;\\mathbb R)$;\n3. $Sp(2n;\\mathbb R)$;\n4. $Sp(2n;\\mathbb C)$;\n5. $Sp(p,q;\\mathbb H)$ or $Sp(p,q)$, with $Sp(n)= Sp(0,n)$;\n6. $GL(n;\\mathbb H)$;\n7. $O(n;\\mathbb H)$;\n8. $O(n;\\mathbb C)$;\n9. $U(p,q)$, with $U(n)=U(0,n)$;\n10. $GL(n;\\mathbb C)$.\n• Oh, i looked at the book, so ${}^2K$ is just a weird notation for $K\\oplus K$, and $A(n)$ is just a weird notation for $M_n(A)=Mat_n(A)$. I still don't know what $\\overbar{C}$ etc mean. And what's a correlation? – Qfwfq Dec 29 '10 at 16:41\n• A correlation is a linear map of a finite-dimensional linear space to its dual. – Andrey Rekalo Dec 29 '10 at 16:55\n• $\\bar{\\mathbb C}$ stands for the field $\\mathbb C$, regarded as a real algebra with complex conjugation as an anti-involution. – Andrey Rekalo Dec 29 '10 at 17:00\n\nIMHO this definition need not limit itself to groups over $\\mathbb{C}$ and its relatives; e.g. 
in the theory of finite simple groups people usually talk about a classical group as being a member of one of the 4 series: linear, symplectic, orthogonal, and unitary, defined over a finite field. See e.g. http://brauer.maths.qmul.ac.uk/Atlas/v3/lin/ and http://brauer.maths.qmul.ac.uk/Atlas/v3/clas/ (here for some reason linear groups are split from the rest).

More generally, one can even work with classical groups over rings: see e.g. the book by Hahn and O'Meara http://www.springer.com/mathematics/algebra/book/978-3-540-17758-6

I think the question probably won't ever have a precise answer. In the context of linear algebraic groups over arbitrary fields, I happen to like the point of view provided by \"algebras with involution\" -- see chapter VI of [The Book of Involutions], which the authors describe as giving \"the classification of semisimple algebraic groups of classical type without any field characteristic assumptions...\"

This point of view sees \"classical groups\" using the following sort of data. Let $A$ be a (finite dimensional) $k$-algebra which is semisimple and separable (separable means that the center of $A$ is an étale commutative $k$-algebra), and let $\\sigma$ be a $k$-involution of $A$. Using this data, one constructs families of algebraic groups.

For an example, consider the algebraic group $G=\\operatorname{Iso}(A,\\sigma)$ whose functor of points is given by the rule $G(R) = \\{ a \\in A \\otimes_k R \\mid a\\cdot\\sigma(a) = 1 \\}$. When $A$ is simple then -- depending on the nature of $\\sigma$ -- $G=\\operatorname{Iso}(A,\\sigma)$ can be a (twisted form of a) unitary group, an orthogonal group, or a symplectic group. (Further care is needed to get special unitary or orthogonal groups...)

There are related constructions -- $\\operatorname{Sim}(A,\\sigma)$, $\\operatorname{Aut}(A,\\sigma)$,...
-- to account for isogenies etc.\n\nThese constructions give groups which \"geometrically\" (over an algebraic closure of $k$) have the Dynkin diagrams mentioned in other answers.\n\nWell, I doubt this point of view will given a universally accepted definition of the notion of \"classical groups\", but it does give a fairly uniform account.\n\n• I agree with this definition, but with one addition: groups of trialitarian $D_4$ type (meaning groups of type $D_4$ associated with a cubic extension of the base field -- they exist over global fields but not over $\\mathbb{R}$) are typically viewed as exceptional, even though \"geometrically\" they are of classical type. – Skip Dec 29 '10 at 18:31\n\nA classical group is the isotropy subgroup of an open orbit in a representation of GL(n).\n\n• I think this definition makes $G_2$ is a classical group: Look at $GL_7$ acting on the third exterior power of its standard representation; the connected component of a generic point stabilizer is $G_2$. – moonface Dec 30 '10 at 2:01\n• Well, $G_2$ is often said to be almost a classical group... ;^) – George McNinch Dec 30 '10 at 13:37\n• Correct: I was hoping someone would notice that :) I claim it is entirely appropriate to regard G_2 as a classical group for this reason, and it can be manipulated classically in its 7d fundamental representation. It just happens that Hermann Weyl did not consider it so. – David MJC Dec 30 '10 at 13:42\n\nHmm... I must admit that I never asked myself this question. 
I have always taken for granted that classical groups are the special linear groups over $\\mathbb{R}$, $\\mathbb{C}$ and $\\mathbb{H}$, with the usual caveat about the definition of quaternionic special linear group, as there is no quaternionic determinant, and their intersection with the automorphism groups of inner products on $\\mathbb{R}^n$, $\\mathbb{C}^n$ and $\\mathbb{H}^n$: namely, symmetric and skewsymmetric inner products on $\\mathbb{R}^n$ and $\\mathbb{C}^n$; hermitian inner products on $\\mathbb{C}^n$ and $\\mathbb{H}^n$; and skewhermitian inner products on $\\mathbb{H}^n$.\n\nSo in summary, the following are (for me) the classical groups: $\\mathrm{SL}(n,\\mathbb{R})$, $\\mathrm{SL}(n,\\mathbb{C})$, $\\mathrm{SL}(n,\\mathbb{H})$, $\\mathrm{SO}(p,q;\\mathbb{R})$, $\\mathrm{SO}(p,q;\\mathbb{C})$, $\\mathrm{Sp}(2n;\\mathbb{R})$, $\\mathrm{Sp}(2n;\\mathbb{C})$, $\\mathrm{SU}(p,q)$, $Sp(p,q)$, and $SO^*(n)$.\n\nI'm afraid that since I didn't come up with this, I have no wise words as to why the general linear groups are excluded. One could argue that including the spin groups would then lead to including other covering groups and we probably we would not want to count the universal cover of $\\mathrm{SL}(2,\\mathbb{R})$ or the metaplectic group as classical groups.\n\nI think that this definition of classical group agrees with, say, Wolf Rossmann's book Lie groups: an introduction through linear groups, which is the current recommended book for our UG students here in Edinburgh.\n\n• I guess my answer does not answer the actual question, except possibly in the negative, given how my answer does not agree with Andrey Rekalo's, even though we seem to have ample intersection. – José Figueroa-O'Farrill Dec 29 '10 at 2:55\n• Actually, if you take the list in Andrey Rekalo's answer and intersect with the relevant special linear groups you get the list in my answer. So perhaps (up to general v. special) there is some consensus. 
– José Figueroa-O'Farrill Dec 29 '10 at 2:56\n\nJust a piece of information from the french side. Dieudonn\\'e, in La g\\'om\\'etrie des groupes classiques'' (Springer, 1970), takes the definition of a classical group for granted. But browsing through the table of contents, it's clear he means $GL_n(K),SL_n(K),O_n(K,f),U_n(K,f),Sp_{2n}(K)$ plus variants (e.g. the projectivized versions).\n\nIn the book Groupes de Lie classiques''(Hermann, 1986), R. Mneimn\\'e and F. Testard define classical Lie groups in their introduction: same list as in Dieudonn\\'e, but assuming of course $K=\\mathbb{R}$ or $\\mathbb{C}$.\n\nIt is a hint, when making statements hold is left as an exercise to the reader.\n\nPerhaps most terms in mathematics admit various definitions.\n\n• There is the set-theoretic construction aspect (like for the definition of a pair).\n• Then the more important question of what properties are assumed in a definition (irreducible for an algebraic variety?).\n• We could even consider the underlying logical language and axiom system part of a definition (for instance for most discussions \"most mathematicians\" assume first-order logic, and that their objects are sets, which e.g. have a well-defined cardinality -I guess this could be argued, for various reasons).\n\n\"Classical group\" is perhaps an outlier in the variety of interpretations, as proposed in the answers here.\n\nThe case of \"variety\", mentioned by B. Conrad, is similar but also different, a variety can be \"reduced\", \"irreducible\", etc., but it is easy to add those adjectives to clarify what we mean. The term \"variety\" is also used in universal algebra, with a very different meaning, but the 2 uses are easy to differentiate. This issue also arises with \"field\" either in algebra, or in physics (e.g. scalar field). 
Also with the term \"algebra\", in various fields of mathematics.\n\nReturning to \"classical groups\", part of the unease may be due to tight connections between areas where it may be used with different meanings, e.g. researchers in group theory may pass in their own research from considering finite-dimensional to infinite-dimensional groups, while algebraic geometers are less likely to talk to universal algebraists -though important connections to model theory may prove me wrong here. So if we nonobviously switch definitions the exercise of finding which one, if any, makes a statement hold, may frustrate.\n\nA further idea is that \"classical groups\" emerged to name various things arrived at from different routes, which were found similar in several respects, after nontrivial thinking, perhaps with more desire to unify than when defining \"algebraic variety\", which captures a more obvious property (say, being \"algebraically locally\" like affine space). The definition of \"classical group\" is \"stretched\" from the beginning. This happens in history as a scholarly discipline, where trends are identified a posteriori, and fitting can be difficult, but still useful. If we start from a blanker sheet, with less prejudice we will have cleaner definitions. In a sense \"classical groups\" is ambitious, we have an intuition which happens to be more simplistic than for \"variety\". The term is attractive, it evokes art, beauty, we want to use it, and those groups, we wish we could describe them simply. 
In the case of varieties there happens to be more rigidity, we arrive at the concept from fewer places in a sense (perhaps always with \"algebraically locally affine\" in mind).

It happens that group theory is quite complicated (some subjects have to be, I suppose this could be justified using ideas from computational complexity theory/information theory), in the sense of having \"many (irreducible) lists\", \"many facts\", things that must be learned, memorized, its classifications have many cases, so if we do not want to use more terms than in other areas we are bound to oversimplify, to unify too much. Several definitions proposed here of \"classical group\" rely on classifications with technical hypotheses which can be argued, which have been found after efforts to unify.

The origin of the question seems to be this feeling of bias: why do we want to use \"classical groups\"? Is it fair?

To return to the idea I started with: definitions are dynamical, they rely on history, on context, on the reader's ability, etc., and they are bound to be because they are crucial in attempts to optimize our thinking, by all means. Sometimes this optimization will be stressful.

PS: Borcherds' definition is my favorite.

It is worth mentioning that when talking about Lie algebras, classical in positive characteristic is somewhat different from classical in zero characteristic: \"By an algebra of classical type is meant an analogue over a field of characteristic $p$ of one of the simple Lie algebras (including the five exceptional Lie algebras) of characteristic 0\", see http://www.jstor.org/stable/2034779.

• The term classical is used in many contexts, but I'm just concerned with the tradition of linear groups arising from work of Weyl and Dieudonne especially.
In the case of Lie algebras, my thesis adviser George Seligman and others improvised terminology to distinguish simple Lie algebras associated to simple linear algebraic groups in prime characteristic from the \"Cartan type\" simple Lie algebras which are finite dimensional analogues of Cartan's Lie pseudo-groups. Here the use of \"classical\" is convenient but sometimes misleading. – Jim Humphreys Dec 8 '11 at 18:59" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.96497273,"math_prob":0.9663245,"size":3703,"snap":"2020-45-2020-50","text_gpt3_token_len":754,"char_repetition_ratio":0.10651527,"word_repetition_ratio":0.0,"special_character_ratio":0.19713746,"punctuation_ratio":0.091044776,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9938025,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-23T11:58:02Z\",\"WARC-Record-ID\":\"<urn:uuid:27db876c-cb0e-48a7-89eb-96ccc1bbbecd>\",\"Content-Length\":\"219436\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fb55ea27-f4dc-4cea-a946-83bf2d42fc1f>\",\"WARC-Concurrent-To\":\"<urn:uuid:4f3f49c1-873f-41c7-a926-014bc3f930d1>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/50610/what-are-classical-groups\",\"WARC-Payload-Digest\":\"sha1:WV5YGZKA34V3NLIOU7I77BXEBV3D6TB5\",\"WARC-Block-Digest\":\"sha1:GBGJCXRRHRZIXBYJXUL7UBXR6INGRTAE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107881369.4_warc_CC-MAIN-20201023102435-20201023132435-00185.warc.gz\"}"}
https://arxiv.org/abs/1507.01308
[ "# Title:Identifiability and Stability in Blind Deconvolution under Minimal Assumptions\n\nAbstract: Blind deconvolution (BD) arises in many applications. Without assumptions on the signal and the filter, BD does not admit a unique solution. In practice, subspace or sparsity assumptions have shown the ability to reduce the search space and yield the unique solution. However, existing theoretical analysis on uniqueness in BD is rather limited. In an earlier paper, we provided the first algebraic sample complexities for BD that hold for almost all bases or frames. We showed that for BD of a pair of vectors in $\\mathbb{C}^n$, with subspace constraints of dimensions $m_1$ and $m_2$, respectively, a sample complexity of $n\\geq m_1m_2$ is sufficient. This result is suboptimal, since the number of degrees of freedom is merely $m_1+m_2-1$. We provided analogus results, with similar suboptimality, for BD with sparsity or mixed subspace and sparsity constraints. In this paper, taking advantage of the recent progress on the information-theoretic limits of unique low-rank matrix recovery, we finally bridge this gap, and derive an optimal sample complexity result for BD with generic bases or frames. We show that for BD of an arbitrary pair (resp. all pairs) of vectors in $\\mathbb{C}^n$, with sparsity constraints of sparsity levels $s_1$ and $s_2$, a sample complexity of $n > s_1+s_2$ (resp. $n > 2(s_1+s_2)$) is sufficient. We also present analogous results for BD with subspace constraints or mixed constraints, with the subspace dimension replacing the sparsity level. 
Last but not least, in all the above scenarios, if the bases or frames follow a probabilistic distribution specified in the paper, the recovery is not only unique, but also stable against small perturbations in the measurements, under the same sample complexities.\n Comments: 32 pages Subjects: Information Theory (cs.IT) Cite as: arXiv:1507.01308 [cs.IT] (or arXiv:1507.01308v2 [cs.IT] for this version)\n\n## Submission history\n\nFrom: Yanjun Li [view email]\n[v1] Mon, 6 Jul 2015 00:18:51 UTC (24 KB)\n[v2] Wed, 23 Dec 2015 09:40:18 UTC (36 KB)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83982635,"math_prob":0.9483676,"size":2250,"snap":"2020-10-2020-16","text_gpt3_token_len":574,"char_repetition_ratio":0.1015138,"word_repetition_ratio":0.08022922,"special_character_ratio":0.2568889,"punctuation_ratio":0.13023256,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9688786,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-27T08:14:11Z\",\"WARC-Record-ID\":\"<urn:uuid:d55bd814-750a-41f3-9354-c319c7df9662>\",\"Content-Length\":\"23094\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b4258643-a2e2-4760-afa8-d5a36b4ff1c2>\",\"WARC-Concurrent-To\":\"<urn:uuid:f82f46ba-572e-4432-91ca-90b74e08cd6e>\",\"WARC-IP-Address\":\"128.84.21.199\",\"WARC-Target-URI\":\"https://arxiv.org/abs/1507.01308\",\"WARC-Payload-Digest\":\"sha1:4QCXPOC4LF7VPHUGDPWFJXIH67OZ4WJR\",\"WARC-Block-Digest\":\"sha1:ZTZUGEMAJUWETPTLFCJR7Z3MIS4CFAZU\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146665.7_warc_CC-MAIN-20200227063824-20200227093824-00009.warc.gz\"}"}
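The abstract above notes that, without constraints, blind deconvolution does not admit a unique solution. A minimal illustration (not from the paper) is the scaling ambiguity of convolution: for any nonzero a, the pair (a·x, h/a) produces exactly the same measurements y = x * h as (x, h). The signal and filter values below are arbitrary toy choices.

```python
# Sketch (not from the paper): blind deconvolution y = x * h is ill-posed
# without extra constraints, because the scaling ambiguity (a*x, h/a)
# yields exactly the same measurements for any nonzero scale a.

def convolve(x, h):
    """Full discrete linear convolution of two sequences."""
    n = len(x) + len(h) - 1
    y = [0.0] * n
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

x = [1.0, -2.0, 3.0]   # hypothetical signal
h = [0.5, 0.25]        # hypothetical filter
a = 4.0                # arbitrary nonzero scale

y1 = convolve(x, h)
y2 = convolve([a * xi for xi in x], [hi / a for hi in h])

# Identical measurements from two different (signal, filter) pairs:
assert all(abs(u - v) < 1e-12 for u, v in zip(y1, y2))
```

Subspace or sparsity constraints, as discussed in the abstract, are what rule out such ambiguities (up to the unavoidable scale factor).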
https://socratic.org/questions/5830f6f2b72cff2e666a168a
[ "# How do you prove that sqrt(4+2sqrt(3)) = sqrt(3)+1 ?\n\nNov 20, 2016\n\nOne way to show that the left hand side is equal to the right hand side is to show that their quotient is equal to $1$. Beginning with the quotient, we have\n\n$\\frac{\\sqrt{4 + 2 \\sqrt{3}}}{\\sqrt{3} + 1}$\n\nTo help us evaluate this, let's first rationalize the denominator\n\n$\\frac{\\sqrt{4 + 2 \\sqrt{3}}}{\\sqrt{3} + 1} = \\frac{\\sqrt{4 + 2 \\sqrt{3}} \\times \\left(\\sqrt{3} - 1\\right)}{\\left(\\sqrt{3} + 1\\right) \\times \\left(\\sqrt{3} - 1\\right)}$\n\n$= \\frac{\\sqrt{4 + 2 \\sqrt{3}} \\times \\left(\\sqrt{3} - 1\\right)}{{\\left(\\sqrt{3}\\right)}^{2} - {1}^{2}}$\n\n$= \\frac{\\sqrt{4 + 2 \\sqrt{3}} \\times \\left(\\sqrt{3} - 1\\right)}{3 - 1}$\n\n$= \\frac{\\sqrt{4 + 2 \\sqrt{3}} \\times \\left(\\sqrt{3} - 1\\right)}{2}$\n\nAs the quotient must be equal to $1$ if the given expressions are equal, we now need to show that the numerator is equal to $2$.\n\n$\\sqrt{4 + 2 \\sqrt{3}} \\times \\left(\\sqrt{3} - 1\\right) = \\sqrt{4 + 2 \\sqrt{3}} \\times \\sqrt{{\\left(\\sqrt{3} - 1\\right)}^{2}}$\n\n(Note that the above step is justified because $\\sqrt{3} - 1 > 0$. If $x \\ge 0$, then $x = \\sqrt{{x}^{2}}$. 
If $x < 0$, then $x = - \\sqrt{{x}^{2}}$)\n\n$= \\sqrt{\\left(4 + 2 \\sqrt{3}\\right) {\\left(\\sqrt{3} - 1\\right)}^{2}}$\n\n$= \\sqrt{\\left(4 + 2 \\sqrt{3}\\right) \\left(3 - 2 \\sqrt{3} + 1\\right)}$\n\n$= \\sqrt{\\left(4 + 2 \\sqrt{3}\\right) \\left(4 - 2 \\sqrt{3}\\right)}$\n\n$= \\sqrt{{4}^{2} - {\\left(2 \\sqrt{3}\\right)}^{2}}$\n\n$= \\sqrt{16 - 12}$\n\n(As when rationalizing the denominator, we make use of the identity $\\left(a + b\\right) \\left(a - b\\right) = {a}^{2} - {b}^{2}$)\n\n$= \\sqrt{4}$\n\n$= 2$\n\nNow that we have shown that the numerator has the desired property, we can solve the rest of the problem quite simply.\n\n$\\frac{\\sqrt{4 + 2 \\sqrt{3}}}{\\sqrt{3} + 1} = \\frac{\\sqrt{4 + 2 \\sqrt{3}} \\times \\left(\\sqrt{3} - 1\\right)}{\\left(\\sqrt{3} + 1\\right) \\times \\left(\\sqrt{3} - 1\\right)} = \\frac{2}{2} = 1$\n\n$\\implies \\frac{\\sqrt{4 + 2 \\sqrt{3}}}{\\sqrt{3} + 1} \\times \\left(\\sqrt{3} + 1\\right) = 1 \\times \\left(\\sqrt{3} + 1\\right)$\n\n$\\therefore \\sqrt{4 + 2 \\sqrt{3}} = \\sqrt{3} + 1$\n\nNov 20, 2016\n\nSee below.\n\n#### Explanation:\n\nThis expression has the structure\n\n$\\sqrt{a + b \\sqrt{3}} = c \\sqrt{3} + d$ so squaring both sides\n\n$a + b \\sqrt{3} = 3 {c}^{2} + 2 \\sqrt{3} c d + {d}^{2}$ pairing terms\n\n$\\left\\{\\begin{matrix}a - 3 {c}^{2} - {d}^{2} = 0 \\\\ b - 2 c d = 0\\end{matrix}\\right.$\n\nSolving for $c , d$ we have\n\n$c = \\pm \\frac{\\sqrt{a \\pm \\sqrt{{a}^{2} - 3 {b}^{2}}}}{\\sqrt{6}}$\n$d = \\pm \\frac{\\sqrt{\\frac{3}{2}} b}{\\sqrt{a - \\sqrt{{a}^{2} \\pm 3 {b}^{2}}}}$\n\nIf $a = 4 , b = 2$ we have the possibilities\n\n$\\left(\\begin{matrix}c = - \\frac{1}{\\sqrt{3}} & d = - \\sqrt{3} \\\\ c = \\frac{1}{\\sqrt{3}} & d = \\sqrt{3} \\\\ c = - 1 & d = - 1 \\\\ c = 1 & d = 1\\end{matrix}\\right)$\n\nNov 20, 2016\n\nSee description...\n\n#### Explanation:\n\nNote that:\n\n${\\left(\\sqrt{3} + 1\\right)}^{2} = {\\left(\\sqrt{3}\\right)}^{2} + 2 \\left(\\sqrt{3}\\right) + 1$\n\n$\\textcolor{w h i t e}{{\\left(\\sqrt{3} + 
1\\right)}^{2}} = 3 + 2 \\sqrt{3} + 1$\n\n$\\textcolor{w h i t e}{{\\left(\\sqrt{3} + 1\\right)}^{2}} = 4 + 2 \\sqrt{3}$\n\nSince $\\sqrt{3} + 1 > 0$, we can take the positive square root of both ends to get:\n\n$\\sqrt{3} + 1 = \\sqrt{4 + 2 \\sqrt{3}}$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.65759534,"math_prob":1.00001,"size":1154,"snap":"2019-43-2019-47","text_gpt3_token_len":411,"char_repetition_ratio":0.21826087,"word_repetition_ratio":0.0,"special_character_ratio":0.389948,"punctuation_ratio":0.05785124,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.000009,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-22T02:20:48Z\",\"WARC-Record-ID\":\"<urn:uuid:94e11cae-6cf4-4b04-bf14-3d916cc6e153>\",\"Content-Length\":\"40598\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6ec0cf32-4d04-4899-9f89-7f774912e815>\",\"WARC-Concurrent-To\":\"<urn:uuid:3d04b7d7-3a0e-4195-9c34-2856b6bd5fe7>\",\"WARC-IP-Address\":\"54.221.217.175\",\"WARC-Target-URI\":\"https://socratic.org/questions/5830f6f2b72cff2e666a168a\",\"WARC-Payload-Digest\":\"sha1:D3642EHZ2YPQWGTKA6NNDGBXP4CCWC2P\",\"WARC-Block-Digest\":\"sha1:FF6XXAJKAGXFRBIUX3D7MSIWFXUE4USY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496671106.83_warc_CC-MAIN-20191122014756-20191122042756-00433.warc.gz\"}"}
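The algebraic arguments in the answers above can be sanity-checked numerically. This is only a floating-point check, not a proof; it verifies both the identity itself and the squaring step $(\sqrt{3}+1)^2 = 4 + 2\sqrt{3}$ that the third answer relies on.

```python
import math

# Numerical sanity check (not a proof) of sqrt(4 + 2*sqrt(3)) == sqrt(3) + 1.
lhs = math.sqrt(4 + 2 * math.sqrt(3))
rhs = math.sqrt(3) + 1

assert math.isclose(lhs, rhs, rel_tol=1e-12)

# The squaring step used in the third answer: (sqrt(3) + 1)^2 = 4 + 2*sqrt(3).
assert math.isclose(rhs ** 2, 4 + 2 * math.sqrt(3), rel_tol=1e-12)
```

Since both sides are positive, agreement of the squares together with positivity is exactly the argument the third answer makes.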
http://manpages.ubuntu.com/manpages/precise/man3/special.3tcl.html
[ "Provided by: tcllib_1.14-dfsg-1_all", null, "#### NAME\n\n``` math::special - Special mathematical functions\n\n```\n\n#### SYNOPSIS\n\n``` package require Tcl ?8.3?\n\npackage require math::special ?0.2?\n\n::math::special::Beta x y\n\n::math::special::Gamma x y\n\n::math::special::erf x\n\n::math::special::erfc x\n\n::math::special::J0 x\n\n::math::special::J1 x\n\n::math::special::Jn n x\n\n::math::special::J1/2 x\n\n::math::special::J-1/2 x\n\n::math::special::I_n x\n\n::math::special::cn u k\n\n::math::special::dn u k\n\n::math::special::sn u k\n\n::math::special::elliptic_K k\n\n::math::special::elliptic_E k\n\n::math::special::exponential_Ei x\n\n::math::special::exponential_En n x\n\n::math::special::exponential_li x\n\n::math::special::exponential_Ci x\n\n::math::special::exponential_Si x\n\n::math::special::exponential_Chi x\n\n::math::special::exponential_Shi x\n\n::math::special::fresnel_C x\n\n::math::special::fresnel_S x\n\n::math::special::sinc x\n\n::math::special::legendre n\n\n::math::special::chebyshev n\n\n::math::special::laguerre alpha n\n\n::math::special::hermite n\n\n_________________________________________________________________\n\n```\n\n#### DESCRIPTION\n\n``` This package implements several so-called special functions, like the Gamma function, the\nBessel functions and such.\n\nEach function is implemented by a procedure that bears its name (well, in close\napproximation):\n\n· J0 for the zeroth-order Bessel function of the first kind\n\n· J1 for the first-order Bessel function of the first kind\n\n· Jn for the nth-order Bessel function of the first kind\n\n· J1/2 for the half-order Bessel function of the first kind\n\n· J-1/2 for the minus-half-order Bessel function of the first kind\n\n· I_n for the modified Bessel function of the first kind of order n\n\n· Gamma for the Gamma function, erf and erfc for the error function and the\ncomplementary error function\n\n· fresnel_C and fresnel_S for the Fresnel integrals\n\n· elliptic_K 
and elliptic_E (complete elliptic integrals)\n\n· exponent_Ei and other functions related to the so-called exponential integrals\n\n· legendre, hermite: some of the classical orthogonal polynomials.\n\n```\n\n#### OVERVIEW\n\n``` In the following table several characteristics of the functions in this package are\nsummarized: the domain for the argument, the values for the parameters and error bounds.\n\nFamily | Function | Domain x | Parameter | Error bound\n-------------+-------------+-------------+-------------+--------------\nBessel | J0, J1, | all of R | n = integer | < 1.0e-8\n| Jn | | | (|x|<20, n<20)\nBessel | J1/2, J-1/2,| x > 0 | n = integer | exact\nBessel | I_n | all of R | n = integer | < 1.0e-6\n| | | |\nElliptic | cn | 0 <= x <= 1 | -- | < 1.0e-10\nfunctions | dn | 0 <= x <= 1 | -- | < 1.0e-10\n| sn | 0 <= x <= 1 | -- | < 1.0e-10\nElliptic | K | 0 <= x < 1 | -- | < 1.0e-6\nintegrals | E | 0 <= x < 1 | -- | < 1.0e-6\n| | | |\nError | erf | | -- |\nfunctions | erfc | | |\n| ierfc_n | | |\n| | | |\nExponential | Ei | x != 0 | -- | < 1.0e-10 (relative)\nintegrals | En | x > 0 | -- | as Ei\n| li | x > 0 | -- | as Ei\n| Chi | x > 0 | -- | < 1.0e-8\n| Shi | x > 0 | -- | < 1.0e-8\n| Ci | x > 0 | -- | < 2.0e-4\n| Si | x > 0 | -- | < 2.0e-4\n| | | |\nFresnel | C | all of R | -- | < 2.0e-3\nintegrals | S | all of R | -- | < 2.0e-3\n| | | |\ngeneral | Beta | (see Gamma) | -- | < 1.0e-9\n| Gamma | x != 0,-1, | -- | < 1.0e-9\n| | -2, ... | |\n| sinc | all of R | -- | exact\n| | | |\northogonal | Legendre | all of R | n = 0,1,... | exact\npolynomials | Chebyshev | all of R | n = 0,1,... | exact\n| Laguerre | all of R | n = 0,1,... | exact\n| | | alpha el. R |\n| Hermite | all of R | n = 0,1,... 
| exact\n\nNote: Some of the error bounds are estimated, as no \"formal\" bounds were available with\nthe implemented approximation method, others hold for the auxiliary functions used for\nestimating the primary functions.\n\nThe following well-known functions are currently missing from the package:\n\n· Bessel functions of the second kind (Y_n, K_n)\n\n· Bessel functions of arbitrary order (and hence the Airy functions)\n\n· Chebyshev polynomials of the second kind (U_n)\n\n· The digamma function (psi)\n\n· The incomplete gamma and beta functions\n\n```\n\n#### PROCEDURES\n\n``` The package defines the following public procedures:\n\n::math::special::Beta x y\nCompute the Beta function for arguments \"x\" and \"y\"\n\nfloat x\nFirst argument for the Beta function\n\nfloat y\nSecond argument for the Beta function\n\n::math::special::Gamma x y\nCompute the Gamma function for argument \"x\"\n\nfloat x\nArgument for the Gamma function\n\n::math::special::erf x\nCompute the error function for argument \"x\"\n\nfloat x\nArgument for the error function\n\n::math::special::erfc x\nCompute the complementary error function for argument \"x\"\n\nfloat x\nArgument for the complementary error function\n\n::math::special::J0 x\nCompute the zeroth-order Bessel function of the first kind for the argument \"x\"\n\nfloat x\nArgument for the Bessel function\n\n::math::special::J1 x\nCompute the first-order Bessel function of the first kind for the argument \"x\"\n\nfloat x\nArgument for the Bessel function\n\n::math::special::Jn n x\nCompute the nth-order Bessel function of the first kind for the argument \"x\"\n\ninteger n\nOrder of the Bessel function\n\nfloat x\nArgument for the Bessel function\n\n::math::special::J1/2 x\nCompute the half-order Bessel function of the first kind for the argument \"x\"\n\nfloat x\nArgument for the Bessel function\n\n::math::special::J-1/2 x\nCompute the minus-half-order Bessel function of the first kind for the argument \"x\"\n\nfloat x\nArgument 
for the Bessel function\n\n::math::special::I_n x\nCompute the modified Bessel function of the first kind of order n for the argument\n\"x\"\n\nint x Positive integer order of the function\n\nfloat x\nArgument for the function\n\n::math::special::cn u k\nCompute the elliptic function cn for the argument \"u\" and parameter \"k\".\n\nfloat u\nArgument for the function\n\nfloat k\nParameter\n\n::math::special::dn u k\nCompute the elliptic function dn for the argument \"u\" and parameter \"k\".\n\nfloat u\nArgument for the function\n\nfloat k\nParameter\n\n::math::special::sn u k\nCompute the elliptic function sn for the argument \"u\" and parameter \"k\".\n\nfloat u\nArgument for the function\n\nfloat k\nParameter\n\n::math::special::elliptic_K k\nCompute the complete elliptic integral of the first kind for the argument \"k\"\n\nfloat k\nArgument for the function\n\n::math::special::elliptic_E k\nCompute the complete elliptic integral of the second kind for the argument \"k\"\n\nfloat k\nArgument for the function\n\n::math::special::exponential_Ei x\nCompute the exponential integral of the second kind for the argument \"x\"\n\nfloat x\nArgument for the function (x != 0)\n\n::math::special::exponential_En n x\nCompute the exponential integral of the first kind for the argument \"x\" and order n\n\nint n Order of the integral (n >= 0)\n\nfloat x\nArgument for the function (x >= 0)\n\n::math::special::exponential_li x\nCompute the logarithmic integral for the argument \"x\"\n\nfloat x\nArgument for the function (x > 0)\n\n::math::special::exponential_Ci x\nCompute the cosine integral for the argument \"x\"\n\nfloat x\nArgument for the function (x > 0)\n\n::math::special::exponential_Si x\nCompute the sine integral for the argument \"x\"\n\nfloat x\nArgument for the function (x > 0)\n\n::math::special::exponential_Chi x\nCompute the hyperbolic cosine integral for the argument \"x\"\n\nfloat x\nArgument for the function (x > 0)\n\n::math::special::exponential_Shi 
x\nCompute the hyperbolic sine integral for the argument \"x\"\n\nfloat x\nArgument for the function (x > 0)\n\n::math::special::fresnel_C x\nCompute the Fresnel cosine integral for real argument x\n\nfloat x\nArgument for the function\n\n::math::special::fresnel_S x\nCompute the Fresnel sine integral for real argument x\n\nfloat x\nArgument for the function\n\n::math::special::sinc x\nCompute the sinc function for real argument x\n\nfloat x\nArgument for the function\n\n::math::special::legendre n\nReturn the Legendre polynomial of degree n (see THE ORTHOGONAL POLYNOMIALS)\n\nint n Degree of the polynomial\n\n::math::special::chebyshev n\nReturn the Chebyshev polynomial of degree n (of the first kind)\n\nint n Degree of the polynomial\n\n::math::special::laguerre alpha n\nReturn the Laguerre polynomial of degree n with parameter alpha\n\nfloat alpha\nParameter of the Laguerre polynomial\n\nint n Degree of the polynomial\n\n::math::special::hermite n\nReturn the Hermite polynomial of degree n\n\nint n Degree of the polynomial\n\n```\n\n#### THE ORTHOGONAL POLYNOMIALS\n\n``` For dealing with the classical families of orthogonal polynomials, the package relies on\nthe math::polynomials package. To evaluate the polynomial at some coordinate, use the\nevalPolyn command:\n\nset leg2 [::math::special::legendre 2]\nputs \"Value at x=\\$x: [::math::polynomials::evalPolyn \\$leg2 \\$x]\"\n\nThe return value from the legendre and other commands is actually the definition of the\ncorresponding polynomial as used in that package.\n\n```\n\n#### REMARKS ON THE IMPLEMENTATION\n\n``` It should be noted that the actual implementation of J0 and J1 depends on straightforward\nGaussian quadrature formulas. The (absolute) accuracy of the results is of the order\n1.0e-4 or better. 
The main reason to implement them like that was that it was fast to do\n(the formulas are simple) and the computations are fast too.\n\nThe implementation of J1/2 does not suffer from this: this function can be expressed\nexactly in terms of elementary functions.\n\nThe functions J0 and J1 are the ones you will encounter most frequently in practice.\n\nThe computation of I_n is based on Miller's algorithm for computing the minimal function\nfrom recurrence relations.\n\nThe computation of the Gamma and Beta functions relies on the combinatorics package,\nwhereas that of the error functions relies on the statistics package.\n\nThe computation of the complete elliptic integrals uses the AGM algorithm.\n\nMuch information about these functions can be found in:\n\nAbramowitz and Stegun: Handbook of Mathematical Functions (Dover, ISBN 486-61272-4)\n\n```\n\n#### BUGS, IDEAS, FEEDBACK\n\n``` This document, and the package it describes, will undoubtedly contain bugs and other\nproblems. Please report such in the category math :: special of the Tcllib SF Trackers\n[http://sourceforge.net/tracker/?group_id=12883]. Please also report any ideas for\nenhancements you may have for either package and/or documentation.\n\n```\n\n#### KEYWORDS\n\n``` Bessel functions, error function, math, special functions\n\n```\n\n#### CATEGORY\n\n``` Mathematics\n\n```\n\n``` Copyright (c) 2004 Arjen Markus <[email protected]>" ]
[ null, "http://manpages.ubuntu.com/img/bug.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5158473,"math_prob":0.95130366,"size":10158,"snap":"2020-34-2020-40","text_gpt3_token_len":2664,"char_repetition_ratio":0.27427614,"word_repetition_ratio":0.27554744,"special_character_ratio":0.28539082,"punctuation_ratio":0.23018509,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9984953,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-21T06:32:21Z\",\"WARC-Record-ID\":\"<urn:uuid:0aab219e-0389-4606-b691-8555205978e1>\",\"Content-Length\":\"20997\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cad7c960-8260-4182-a39e-b0d45ea3877a>\",\"WARC-Concurrent-To\":\"<urn:uuid:e7c71779-bdbf-4fc8-918b-b0c44e5ce201>\",\"WARC-IP-Address\":\"91.189.95.15\",\"WARC-Target-URI\":\"http://manpages.ubuntu.com/manpages/precise/man3/special.3tcl.html\",\"WARC-Payload-Digest\":\"sha1:VAD77ILZFDLZUP43TPLWXDWRS5IJSYLA\",\"WARC-Block-Digest\":\"sha1:EIYALBTITZZIF4WJUIJWDQK4JGR3FATB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400198942.13_warc_CC-MAIN-20200921050331-20200921080331-00635.warc.gz\"}"}
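For readers without Tcl at hand, the relation between the package's Beta and Gamma functions can be mirrored in a few lines of Python using the standard identity B(x, y) = Γ(x)Γ(y)/Γ(x+y). This is a sketch of the mathematical relation only, not the Tcllib implementation (the manpage says that implementation relies on the combinatorics package).

```python
import math

# A small Python analogue (not the Tcl code) of ::math::special::Beta,
# built from the standard identity B(x, y) = Gamma(x) Gamma(y) / Gamma(x + y).

def beta(x, y):
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

# Spot checks against known values:
assert math.isclose(beta(1, 1), 1.0)
assert math.isclose(beta(2, 3), 1 / 12)        # B(2,3) = 1! * 2! / 4! = 1/12
assert math.isclose(beta(0.5, 0.5), math.pi)   # B(1/2,1/2) = pi
```

The same stdlib module also provides `math.erf`, matching the package's error function within its stated error bound.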
https://chem.libretexts.org/Courses/Saint_Francis_University/CHEM_113%3A_Human_Chemistry_I_(Muino)/03%3A_Ionic_Compounds/3.05%3A_Ions_and_the_Octet_Rule
[ "# 3.5: Ions and the Octet Rule\n\nLearning Objectives\n\n• State the octet rule.\n• Use electron configurations and the octet rule to determine the charge of ions.\n\nPreviously, we saw how ions are formed by losing electrons to make cations or by gaining electrons to form anions. The astute reader may have noticed something: Many of the ions that form have eight electrons in their valence shell. Either atoms gain enough electrons to have eight electrons in the valence shell and become the appropriately charged anion, or they lose the electrons in their original valence shell; the lower shell, now the valence shell, has eight electrons in it, so the atom becomes positively charged. For whatever reason, having eight electrons in a valence shell is a particularly energetically stable arrangement of electrons. The trend that atoms like to have eight electrons in their valence shell is called the octet rule. 
When atoms form compounds, the octet rule is not always satisfied for all atoms at all times, but it is a very good rule of thumb for understanding the kinds of bonding arrangements that atoms can make.\n\nIt is not impossible to violate the octet rule. Consider sodium: in its elemental form, it has one valence electron and is stable. It is rather reactive, however, and does not require a lot of energy to remove that electron to make the Na+ ion. We could remove another electron by adding even more energy to the ion, to make the Na2+ ion. However, that requires much more energy than is normally available in chemical reactions, so sodium stops at a 1+ charge after losing a single electron. It turns out that the Na+ ion has a complete octet in its new valence shell, the n = 2 shell, which satisfies the octet rule. The octet rule is a result of trends in energies and is useful in explaining why atoms form the ions that they do.\n\nNow consider an Na atom in the presence of a Cl atom. The two atoms have these Lewis electron dot diagrams and electron configurations:\n\n$\\mathbf{Na\\, \\cdot }\\; \\; \\; \\; \\; \\; \\; \\; \\; \\; \\mathbf{\\cdot }\\mathbf{\\ddot{\\underset{.\\: .}Cl}}\\mathbf{\\: :}$\n\n$\\left [ Ne \\right ]3s^{1}\\; \\; \\; \\; \\left [ Ne \\right ]3s^{2}3p^{5}$\n\nFor the Na atom to obtain an octet, it must lose an electron; for the Cl atom to gain an octet, it must gain an electron. An electron transfers from the Na atom to the Cl atom:\n\n$\\mathbf{Na\\, \\cdot }\\curvearrowright \\mathbf{\\cdot }\\mathbf{\\ddot{\\underset{.\\: .}Cl}}\\mathbf{\\: :}$\n\nresulting in two ions-the Na+ ion and the Cl ion:\n\n$\\mathbf{Na\\, \\cdot }^{+}\\; \\; \\; \\; \\; \\; \\; \\; \\mathbf{:}\\mathbf{\\ddot{\\underset{.\\: .}Cl}}\\mathbf{\\: :}^{-}$\n\n$\\left [ Ne \\right ]\\; \\; \\; \\; \\; \\left [ Ne \\right ]3s^{2}3p^{6}$\n\nBoth species now have complete octets, and the electron shells are energetically stable. 
From basic physics, we know that opposite charges attract. This is what happens to the Na+ and Cl ions:\n\n$\\mathbf{Na\\, \\cdot }^{+}\\; + \\; \\mathbf{:}\\mathbf{\\ddot{\\underset{.\\: .}Cl}}\\mathbf{\\: :}^{-}\\rightarrow Na^{+}Cl^{-}\\; \\; or\\; \\; NaCl$\n\nwhere we have written the final formula (the formula for sodium chloride) as per the convention for ionic compounds, without listing the charges explicitly. The attraction between oppositely charged ions is called an ionic bond, and it is one of the main types of chemical bonds in chemistry. Ionic bonds are caused by electrons transferring from one atom to another.\n\nIn electron transfer, the number of electrons lost must equal the number of electrons gained. We saw this in the formation of NaCl. A similar process occurs between Mg atoms and O atoms, except in this case two electrons are transferred:", null, "The two ions each have octets as their valence shell, and the two oppositely charged particles attract, making an ionic bond:\n\n$\\mathbf{Mg\\,}^{2+}\\; + \\; \\mathbf{:}\\mathbf{\\ddot{\\underset{.\\: .}O}}\\mathbf{\\: :}^{2-}\\; \\; \\; \\; \\; Mg^{2+}O^{2-}\\; or\\; MgO$\n\nRemember, in the final formula for the ionic compound, we do not write the charges on the ions.\n\nWhat about when an Na atom interacts with an O atom? The O atom needs two electrons to complete its valence octet, but the Na atom supplies only one electron:\n\n$\\mathbf{Na\\, \\cdot }\\curvearrowright \\mathbf{\\cdot }\\mathbf{\\ddot{\\underset{.}O}}\\mathbf{\\: :}$\n\nThe O atom still does not have an octet of electrons. What we need is a second Na atom to donate a second electron to the O atom:", null, "These three ions attract each other to give an overall neutral-charged ionic compound, which we write as Na2O. The need for the number of electrons lost being equal to the number of electrons gained explains why ionic compounds have the ratio of cations to anions that they do. 
This is required by the law of conservation of matter as well.\n\nExample $$\\PageIndex{1}$$\n\nWith arrows, illustrate the transfer of electrons to form calcium chloride from Ca atoms and Cl atoms.\n\nSolution\n\nA Ca atom has two valence electrons, while a Cl atom has seven electrons. A Cl atom needs only one more to complete its octet, while Ca atoms have two electrons to lose. Thus we need two Cl atoms to accept the two electrons from one Ca atom. The transfer process looks as follows:", null, "The oppositely charged ions attract each other to make CaCl2.\n\nExercise $$\\PageIndex{1}$$\n\nWith arrows, illustrate the transfer of electrons to form potassium sulfide from K atoms and S atoms.", null, "" ]
[ null, "https://chem.libretexts.org/@api/deki/files/91381/e6d8794799ce3792113a9f4de4a682f6.jpg", null, "https://chem.libretexts.org/@api/deki/files/91382/adfbf03706a63dce97dca01856fd6e60.jpg", null, "https://chem.libretexts.org/@api/deki/files/340957/imageedit_2_9882044137.png", null, "https://chem.libretexts.org/@api/deki/files/340958/imageedit_4_9402336583.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8870471,"math_prob":0.99757904,"size":5554,"snap":"2022-27-2022-33","text_gpt3_token_len":1439,"char_repetition_ratio":0.17135136,"word_repetition_ratio":0.052004334,"special_character_ratio":0.25909254,"punctuation_ratio":0.1527139,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9926991,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,5,null,5,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-26T23:01:11Z\",\"WARC-Record-ID\":\"<urn:uuid:bd206dd7-a184-4a54-9df1-73d6a77651f3>\",\"Content-Length\":\"104887\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:87838675-863d-494f-a5a3-c4308aaf8e89>\",\"WARC-Concurrent-To\":\"<urn:uuid:65ca1a43-70d0-467b-a384-b6c4e7e3964a>\",\"WARC-IP-Address\":\"99.86.224.63\",\"WARC-Target-URI\":\"https://chem.libretexts.org/Courses/Saint_Francis_University/CHEM_113%3A_Human_Chemistry_I_(Muino)/03%3A_Ionic_Compounds/3.05%3A_Ions_and_the_Octet_Rule\",\"WARC-Payload-Digest\":\"sha1:4J3WRG7FVV5HPKELLQQ3JRJRYTR6TNBT\",\"WARC-Block-Digest\":\"sha1:VIL4CUYNG4KNWLBWLZPV25QLOYC3LCRB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103322581.16_warc_CC-MAIN-20220626222503-20220627012503-00192.warc.gz\"}"}
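The balancing rule described above, that electrons lost must equal electrons gained, fixes the cation:anion ratio of an ionic compound. A small sketch of that arithmetic (the helper name `formula_ratio` is mine, not from the text): reduce the magnitudes of the two charges by their greatest common divisor.

```python
from math import gcd

# Illustration (not from the text's figures): the cation:anion ratio in an
# ionic compound follows from balancing charge, i.e. reducing
# |anion charge| : |cation charge| by their greatest common divisor.

def formula_ratio(cation_charge, anion_charge):
    c, a = abs(cation_charge), abs(anion_charge)
    g = gcd(c, a)
    return a // g, c // g   # (number of cations, number of anions)

assert formula_ratio(+1, -1) == (1, 1)   # Na+  and Cl-  -> NaCl
assert formula_ratio(+2, -2) == (1, 1)   # Mg2+ and O2-  -> MgO
assert formula_ratio(+1, -2) == (2, 1)   # Na+  and O2-  -> Na2O
assert formula_ratio(+2, -1) == (1, 2)   # Ca2+ and Cl-  -> CaCl2
```

The four assertions reproduce exactly the NaCl, MgO, Na2O, and CaCl2 examples worked through in the section.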
http://www.exteria.biz/risk-assessment/
[ "Risk assessment is a process that follows risk estimation, i.e. risk analysis; risk evaluation is the last step of the risk assessment process. In risk assessment, each risk is assigned a value, the so-called level of risk. The level of risk (R) is generally calculated from the probability of the negative effect of the risk (P) and the severity of the consequences caused by exposure to the risk (C): in general, R = P * C. The level of risk R can also incorporate other, so-called aggravating or mitigating factors (endangerment of a large number of people, a significant threat to the environment, the presence of fire safety equipment, the distance to the closest fire station, and many others)." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95767677,"math_prob":0.97109616,"size":717,"snap":"2021-31-2021-39","text_gpt3_token_len":146,"char_repetition_ratio":0.1486676,"word_repetition_ratio":0.016806724,"special_character_ratio":0.20223153,"punctuation_ratio":0.09090909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9614922,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-31T18:09:09Z\",\"WARC-Record-ID\":\"<urn:uuid:7eb6ff5c-ae54-4566-acbf-8bd911cbd831>\",\"Content-Length\":\"21485\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ac0032d2-14f7-4f76-b7ab-b6c4a8c16728>\",\"WARC-Concurrent-To\":\"<urn:uuid:326f7fd3-4e30-4d98-9472-701d0ec27e9d>\",\"WARC-IP-Address\":\"217.198.114.186\",\"WARC-Target-URI\":\"http://www.exteria.biz/risk-assessment/\",\"WARC-Payload-Digest\":\"sha1:G6YIVOWGKDHF7GL54ZPA3BPZHCFS443E\",\"WARC-Block-Digest\":\"sha1:XFIWFHHIKG3FB7BHE2NA24YIYFRVDZMY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154099.21_warc_CC-MAIN-20210731172305-20210731202305-00363.warc.gz\"}"}
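The evaluation formula described above, R = P * C with optional aggravating or mitigating factors folded in multiplicatively, can be sketched directly. The factor values in the examples are illustrative assumptions, not taken from the source.

```python
# Minimal sketch of the risk-evaluation formula described above: R = P * C,
# optionally multiplied by aggravating/mitigating factors. The example
# factor values (1.5, 0.5) are illustrative assumptions only.

def risk_level(probability, consequence, factors=()):
    r = probability * consequence
    for f in factors:
        r *= f
    return r

assert risk_level(3, 4) == 12
assert risk_level(3, 4, factors=(1.5,)) == 18.0   # e.g. many people endangered
assert risk_level(2, 5, factors=(0.5,)) == 5.0    # e.g. fire station nearby
```

A multiplicative factor above 1 models an aggravating circumstance, below 1 a mitigating one, matching the examples listed in the text.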
https://mathematica.stackexchange.com/questions/25647/combine-absolute-and-relative-scaled-coordinates
[ "# Combine absolute and relative (scaled) coordinates\n\nI am using Inset to add an Epilog to a plot. The position of the images (in my case: framed numbers) can be specified as an option of Inset.\n\nI would like the y-coordinate to be the same for all Inset elements (they are created via Table), relative to the plot size. Say, for example, it should be the y-coordinate of Scaled[*,0.9]. The x-coordinate should, for each element, be an absolute value, depending on its position in the table.\n\nWhile I know how to specify relative and absolute coordinates, also as functions of the table position, I can't get Scaled to work for only one coordinate: specifying my Inset coordinates via something like\n\n{*abs. value*, Scaled[.9]}\n\n\nyields the following error message:\n\nCoordinate {*abs. value*, Scaled[0.9]} should be a pair of numbers, or a Scaled or Offset form.\n\n\nAny help on this?\n\nUpdate: I also tried snippets like Scaled[*some value*,.9][] to extract the y-coordinate, but to no avail.\n\n• You might want to use Rescale[] to transform absolute coordinates to relative coordinates that you can then use with Scaled[]. – J. M.'s technical difficulties May 22 '13 at 9:13\n• @J.M. Yes, that might be a workaround, but I want to insert the Inset into an Epilog, the Epilog into a Plot, the Plot possibly into a Show, etc - so that I don't yet know the minimum and maximum values of y (required for Rescale). I could of course use a function for this [checking for min. and max. values when the plot is created], but it seems it wound end up in a rather cumbersome construct just to specify a value for y... – Bernd May 22 '13 at 9:21\n\nGreat question, to which I would like to know the answer myself, other than manual scaling as mentioned by J. M.\n\nA partial solution is to use the second parameter of Scaled. Here I place a point at y scaled 1/2, and x plot coordinate 9. 
Note that y origin 5 must be known:\n\nGraphics[{\nAbsolutePointSize,\nPoint @ Scaled[{0, 1/2}, {9, 5}]\n},\nPlotRange -> {{5, 10}, {5, 10}},\nFrame -> True,\nGridLines -> Automatic\n]", null, "Another limited method I am aware of uses Offset, but that specifies position in printer's points rather than plot coordinates (resize the graphic to see the result of that):\n\nGraphics[{\nAbsolutePointSize,\nPoint @ Offset[{75, 0}, Scaled[{0, 1/2}]]\n},\nPlotRange -> {{5, 10}, {5, 10}},\nFrame -> True\n]", null, "", null, "• Thanks for this. As with J.M.'s approach, it seems to require (at least in the first method) the knowledge of at least some minimum/maximum values, which I hope to avoid. – Bernd May 22 '13 at 9:31\n• @Bernd Indeed, which is why I said I'd like to know the answer too. In the past I've always used some other workaround, but it's disappointing that the mixed form {abolute, Scaled[pos]} doesn't work directly. :-/ – Mr.Wizard May 22 '13 at 9:34\n\nFollowing method does not require any knowledge about PlotRange because MMA knows it. :)\n\nThis function is not pretty, I suspect it will crush sometimes due to it's naive form. Report me then :)\n\nHowever, it works and You can DumpSave it if You don't want to look at it. :)\n\n MixedCoordinates[plot_] := Composition[\nReplaceAll[#[], {\n{x_?NumericQ, Scaled@y_?NumericQ} :> {x, (#1 + Abs[#1 - #2] y) & @@ #[[2, 1, 2]]},\n{Scaled@x_?NumericQ, y_?NumericQ} :> {(#1 + Abs[#1 - #2] x) & @@ #[[2, 1, 1]], y}\n}] &,\n{#[], Cases[#[], x : Rule[PlotRange, _] :> x[]]} &,\n{#, AbsoluteOptions@#} &\n][plot]\n\n\nLets test it. 
These two Shows have the same content, except for the second argument, a Plot with a different domain.\n\nMixedCoordinates@Show[\nListPlot[{{-.5, .5}, {.5, 2}}, PlotStyle -> Directive@AbsolutePointSize@12],\nPlot[x, {x, ##}],\nGraphics[{AbsolutePointSize@12, Blue, Point[{{.5, .5}}],\nRed, Point[{{Scaled@1, .2}}]}]\n,\nPlotLabel -> Style[\"Red points have mixed coordinates\", Bold, 15],\nPlotRange -> All, AspectRatio -> Automatic, AxesOrigin -> {0, 0},\nFrame -> True, Epilog -> {AbsolutePointSize@12, Red, Point[{{.4, [email protected]}}]},\nImageSize -> 400, GridLines -> Automatic, BaseStyle -> Thick\n] & @@@ {{-1, 1}, {-2, 2}}", null, "Possible issues:\n\n• PlotRange->All seems to be necessary at the end of Show\n• If You want to put Epilog somewhere, it has to be either in Show or in its first argument. Why? Because Show takes options from its first argument and the other arguments' options are not exposed.\n\nShort description:\n\nMixedCoordinates@[] is taking information from PlotRange given by AbsoluteOptions. And then, with this information it is rescaling elements Scaled[_].\n\nI do not consider it finished, but it could be a good start. Looking forward to Your remarks.\n\n• Perhaps I'm missing something - why not use PlotRange to get the coordinate ranges in each direction? – Simon Woods Jun 21 '13 at 21:36\n• @SimonWoods No You are not. :) I have started with this idea, then I have switched to that with FullForm, before I realized that PlotRange->All is necessary. After that I forgot about the beginning which works also good, I'll replace that, it is a little bit shorter. – Kuba Jun 21 '13 at 22:54\n• Kuba, thanks for this. Since I'm not quite able to understand completely how MixedCoordinates@[] works just from reading the code: Does the ReplaceAll part manipulate any occurrence of Scaled in the Graphics it is applied to? So I should not use any other instances of Scaled? 
– Bernd Jun 24 '13 at 7:43\n• @Bernd It should affect only Scaled[_?NumericQ] form, so You can use Scaled[{_,_},{_,_}] form somewhere else without errors but I have not tested this. – Kuba Jun 24 '13 at 7:49\n• While it's not yet a perfect solution (according to Kuba's own remarks), it seems to come closest to what I am looking for (though I hoped there was a simpler way...) - so I'd award him/her the bounty? – Bernd Jun 27 '13 at 7:21\n\nLate answer... I thought some might find this useful:\n\nI had a need to also use the plot aspect ratio, so I developed this variation on @Kuba's answer.\n\n ctransform = Module[{plotrange, plotratio, aspect},\nplotrange = Last@Cases[ AbsoluteOptions[#] ,\nx : Rule[PlotRange, _] :> x[]];\naspect = Last@Cases[ AbsoluteOptions[#] ,\nx : Rule[AspectRatio, _] :> x[]];\nplotratio = Divide @@ (Subtract @@ # & /@ plotrange);\nReplaceAll[#, {\nscaleratio -> 1/plotratio/aspect,\nscalev[i_, x_] :> plotrange[[i, 1]] (1 - x) + plotrange[[i, 2]] (x)}]] &;\n\nctransform@\nShow[{\nPlot[ x^2 + 5, {x, 1, 5}, PlotStyle -> Thick, PlotRange -> {{0, 6}, {0, 30}}],\nGraphics[ Table[Circle[{i, scalev[2, .1] + .1 i scaleratio},\n.1 i {1, scaleratio}], {i, 1, 5}]]}, AspectRatio -> 1/GoldenRatio,\nPlotRange -> {{0, 6}, {0, 30}}]\n\n\nNote I'm not using the built-in Scaled at all, so there is no conflict issue if you needed to use that for something as well.\n\nYou could also do this with Composition, but it gets a bit unwieldy.\n\nThis is not exactly what you asked for, but it will work in many situations, although not in v6 :/\n\ninsets = Table[Framed[i], {i, 0, 9}];\n\nplot = Plot[Sin[x], {x, -1, 10},\nEpilog -> MapIndexed[\nTranslate[Inset[#1, {0, Top}, Scaled[{0.5, 1.5}]], {First@#2 - 1, 0}] &,\ninsets]\n]", null, "Combining with other plots, the insets move up automatically:\n\nShow[plot, Plot[2 Cos[x], {x, -2, 8}], PlotRange -> All]", null, "Now the above works well if the insets all have the same vertical size. 
If not, one can use Pane:\n\nSeedRandom;\ninsets2 = Table[Framed[Style[i, RandomInteger[20, 40]]], {i, 0, 9}];\nvsize = Max[Rasterize[#, \"RasterSize\"] & /@ insets2];\n\nPlot[Sin[x], {x, -1, 10},\nEpilog ->\nMapIndexed[\nTranslate[Inset[#1, {0, Top}, Scaled[{0.5, 1.5}]], {First@#2 - 1, 0}] &,\nPane[#, {Automatic, vsize}, Alignment -> Center] & /@ insets2]]", null, "The main way it fails to do exactly what the OP asks is that the vertical offset is relative to the (max) size of the insets, not relative to the size of the graphics.\n\n• Michael, I don't know what I'm doing wrong, but your above code doesn't yield the image you attached for me. If I run your first code snippet, I get this graph, and the second one similarly looks different. – Bernd Jun 24 '13 at 7:24\n• And if I try to run the third code, I get the following error: The specified setting for the option PaneBoxOptions, ImageSize cannot be used. – Bernd Jun 24 '13 at 7:25\n• @Bernd That's curious. When I cut and paste the code above, it all works just as shown, on both V8.0.4 and V9.0.1 (on a Mac). Which version of Mathematica are you using? Did you try the code with a fresh kernel? (I can get your image if I replace {0, Top} with Scaled[{0, 1}], but I don't know why your system would behave differently than mine.) – Michael E2 Jun 24 '13 at 12:27\n• That could indeed be the issue - I'm still on version 6. Still all the commands are already introduced in this version, so it's odd to get such a different result. A pity because I really like this proposal for its brevity. – Bernd Jun 25 '13 at 8:54\n• @Bernd The reference pages say Inset and Scaled were modified in V7. Try Inset[#1, {Axis, Top}, Scaled[{0.5, 1.5}]] -- maybe it will work. There will be a problem if the axis is not at x = 0, though. But if you know where the axis is, then it might be usable. – Michael E2 Jun 25 '13 at 13:14" ]
https://www.geeksforgeeks.org/java-integer-compare-method/
[ "# Java Integer compare() method\n\nThe compare() method of the Integer class of the java.lang package compares two integer values (x, y) given as parameters and returns the value zero if (x == y); if (x < y) it returns a value less than zero, and if (x > y) it returns a value greater than zero.\n\nSyntax :\n\n```public static int compare(int x, int y)\nParameter :\nx : the first int to compare\ny : the second int to compare\nReturn :\nThis method returns the value zero if (x==y),\nif (x < y) then it returns a value less than zero\nand if (x > y) then it returns a value greater than zero.\n```\n\nExample : To show working of the java.lang.Integer.compare() method.\n\n```\n// Java program to demonstrate working\n// of java.lang.Integer.compare() method\nimport java.lang.Integer;\n\nclass Gfg {\n    // driver code\n    public static void main(String args[])\n    {\n        int a = 10;\n        int b = 20;\n\n        // as 10 is less than 20, output will be a value less than zero\n        System.out.println(Integer.compare(a, b));\n\n        int x = 30;\n        int y = 30;\n\n        // as 30 equals 30, output will be zero\n        System.out.println(Integer.compare(x, y));\n\n        int w = 15;\n        int z = 8;\n\n        // as 15 is greater than 8, output will be a value greater than zero\n        System.out.println(Integer.compare(w, z));\n    }\n}\n```\n\nOutput:\n\n```-1\n0\n1\n```", null, "
" ]
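The three-way contract of `compare()` (only the sign of the result is significant) can be sketched outside Java as well. Below is a minimal Python analogue; `int_compare` is a hypothetical helper name used here for illustration, not part of any library:

```python
def int_compare(x: int, y: int) -> int:
    # Mirrors java.lang.Integer.compare: negative if x < y,
    # zero if x == y, positive if x > y.
    # (x > y) and (x < y) are booleans (0 or 1), so their
    # difference is exactly -1, 0 or 1.
    return (x > y) - (x < y)

print(int_compare(10, 20))  # -1
print(int_compare(30, 30))  # 0
print(int_compare(15, 8))   # 1
```

As with the Java original, callers should test the sign of the result rather than rely on specific magnitudes, even though this particular sketch happens to return exactly -1, 0 or 1.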
https://convertoctopus.com/11-feet-per-second-to-meters-per-second
[ "## Conversion formula\n\nThe conversion factor from feet per second to meters per second is 0.3048, which means that 1 foot per second is equal to 0.3048 meters per second:\n\n1 ft/s = 0.3048 m/s\n\nTo convert 11 feet per second into meters per second we have to multiply 11 by the conversion factor in order to get the velocity amount from feet per second to meters per second. We can also form a simple proportion to calculate the result:\n\n1 ft/s → 0.3048 m/s\n\n11 ft/s → V(m/s)\n\nSolve the above proportion to obtain the velocity V in meters per second:\n\nV(m/s) = 11 ft/s × 0.3048 m/s\n\nV(m/s) = 3.3528 m/s\n\nThe final result is:\n\n11 ft/s → 3.3528 m/s\n\nWe conclude that 11 feet per second is equivalent to 3.3528 meters per second:\n\n11 feet per second = 3.3528 meters per second", null, "## Alternative conversion\n\nWe can also convert by utilizing the inverse value of the conversion factor. In this case 1 meter per second is equal to 0.29825817227392 × 11 feet per second.\n\nAnother way is saying that 11 feet per second is equal to 1 ÷ 0.29825817227392 meters per second.\n\n## Approximate result\n\nFor practical purposes we can round our final result to an approximate numerical value. 
We can say that eleven feet per second is approximately three point three five three meters per second:\n\n11 ft/s ≅ 3.353 m/s\n\nAn alternative is also that one meter per second is approximately zero point two nine eight times eleven feet per second.\n\n## Conversion table\n\n### feet per second to meters per second chart\n\nFor quick reference purposes, below is the conversion table you can use to convert from feet per second to meters per second\n\nfeet per second (ft/s) meters per second (m/s)\n12 feet per second 3.658 meters per second\n13 feet per second 3.962 meters per second\n14 feet per second 4.267 meters per second\n15 feet per second 4.572 meters per second\n16 feet per second 4.877 meters per second\n17 feet per second 5.182 meters per second\n18 feet per second 5.486 meters per second\n19 feet per second 5.791 meters per second\n20 feet per second 6.096 meters per second\n21 feet per second 6.401 meters per second" ]
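The conversion above is easy to script. A minimal Python sketch using the exact definition 1 ft = 0.3048 m (function names are illustrative):

```python
FT_TO_M = 0.3048  # exact, by definition of the international foot

def fps_to_mps(v: float) -> float:
    """Convert a speed from feet per second to meters per second."""
    return v * FT_TO_M

def mps_to_fps(v: float) -> float:
    """Inverse conversion: meters per second to feet per second."""
    return v / FT_TO_M

print(fps_to_mps(11.0))       # ≈ 3.3528
print(1.0 / fps_to_mps(11.0)) # ≈ 0.29825817227392, the inverse factor quoted above
```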
http://backendless.com/docs/ios/data_spatial_data_types.html
[ "# Spatial Data Types¶\n\nBackendless database supports the following spatial data types:\n\n• `POINT` - represents a single point/location in coordinate space. For the points representing locations on a map, the coordinates are longitude and latitude.\n• `LINESTRING` - represents a geometry consisting of a collection of points with linear interpolation between them. For example, a linestring can represent a delivery route.\n• `POLYGON` - a closed geometrical shape consisting of a single exterior boundary and zero or more interior boundaries, also referred to as holes.\n\nAdditionally, Backendless supports a \"parent\" data type called `GEOMETRY`. This is the base type which can accommodate any of the data types listed above.", null, "Spatial data in Backendless can be represented either in the WKT (Well-Known Text) or the GeoJSON formats. Backendless console supports both of these formats for entering new data or updating existing spatial values. Additionally, Backendless SDKs provide built-in classes which make it easier to work with the spatial data for all database operations, including creating, retrieving, updating and deleting spatial data.\n\n## POINT¶\n\nThe `POINT` type is used to represent a single point identified by two coordinates in a coordinates space. The coordinates are named X and Y, however, Backendless also handles these coordinates as longitude (X) and latitude (Y) to represent locations on a map. The WKT representation of this type is:\n\n``````POINT (longitude latitude)\n``````\n\nor\n\n``````POINT (x y)\n``````\n\nfor example, the `POINT` below points to Dallas, TX\n\n``````POINT (-96.7553535 32.8656106)\n``````\n\nThe GeoJSON format for the `POINT` values is:\n\n``````{\n\"type\": \"Point\",\n\"coordinates\": [\nlongitude or X,\nlatitude or Y\n]\n}\n``````\n\nA `POINT` value stored in the database is represented by the `BLPoint` class in the client application. 
The class provides access to the point coordinates (`x` and `y`) which can also represent longitude and latitude in cases when the point identifies a location on a map. The class has the constructor and methods listed below. Notice that all `set` methods return the current `Point` object, which allows for convenient \"chaining\" for property assignments: `pointInstance.setX( value ).setY( value )`:\n\n``````// creates a new instance of Point\n- (nonnull instancetype)initWithX:(double)x y:(double)y;\n- (nonnull instancetype)initWithLongitude:(double)longitude latitude:(double)latitude;\n\n// creates a Point object from a WKT definition\n+ (BLPoint * _Nullable)fromWkt:(NSString * _Nonnull)wkt;\n\n// creates a Point object from a GeoJSON definition\n+ (BLPoint * _Nullable)fromGeoJson:(NSString * _Nonnull)geoJson;\n\n// Objective-C approach is to use instance properties directly without get/set methods:\n\n// retrieves the x coordinate of the point (same as longitude)\npoint.x;\n\n// retrieves the y coordinate of the point (same as latitude)\npoint.y;\n\n// retrieves the longitude coordinate of the point (same as x)\npoint.longitude;\n\n// retrieves the latitude coordinate of the point (same as y)\npoint.latitude;\n\n// sets the x coordinate of the point (same as longitude)\npoint.x = ...;\n\n// sets the y coordinate of the point (same as latitude)\npoint.y = ...;\n\n// sets the longitude coordinate of the point (same as x)\npoint.longitude = ...;\n\n// sets the latitude coordinate of the point (same as y)\npoint.latitude = ...;\n\n// converts this Point object into its WKT representation\n- (NSString * _Nullable)asWkt;\n\n// converts this Point object into its GeoJSON representation\n- (NSDictionary<NSString *, id> * _Nullable)asGeoJson;\n``````\n``````// creates a new instance of Point\npublic init(x: Double, y: Double)\npublic init(longitude: Double, latitude: Double)\n// e.g: let point = BLPoint()\n\n// creates a Point object from a WKT definition\npublic static 
func fromWkt(_ wkt: String) -> BLPoint?\n\n// creates a Point object from a GeoJSON definition\npublic static func fromGeoJson(_ geoJson: String) -> BLPoint?\n\n// Swift approach is to use instance properties directly without get/set methods:\n\n// retrieves the x coordinate of the point (same as longitude)\npoint.x\n\n// retrieves the y coordinate of the point (same as latitude)\npoint.y\n\n// retrieves the longitude coordinate of the point (same as x)\npoint.longitude\n\n// retrieves the latitude coordinate of the point (same as y)\npoint.latitude\n\n// sets the x coordinate of the point (same as longitude)\npoint.x = ...\n\n// sets the y coordinate of the point (same as latitude)\npoint.y = ...\n\n// sets the longitude coordinate of the point (same as x)\npoint.longitude = ...\n\n// sets the latitude coordinate of the point (same as y)\npoint.latitude = ...\n\n// converts this Point object into its WKT representation\npublic func asWkt() -> String?\n\n// converts this Point object into its GeoJSON representation\npublic func asGeoJson() -> [String : Any]?\n``````\n\nConsider the following example. The objects shown below contain a geometry column/property called `location`. The type of the column is `POINT`:", null, "The following code retrieves the first object from the table. Notice how the geometry property is accessed. Backendless automatically converts `POINT` data type to an instance of the `Point` class:\n\n``````[[Backendless.shared.data of:[Person class]] findFirstWithResponseHandler:^(Person *person) {\nBLPoint *location = person.location;\ndouble locationLatitude = location.latitude;\ndouble locationLongitude = location.longitude;\n} errorHandler:^(Fault *fault) {\nNSLog(@\"Error: %@\", fault.message);\n}];\n``````\n``````Backendless.shared.data.of(Person.self).findFirst(responseHandler: { person in\nguard let person = person as? 
Person else { return }\nlet location = person.location\nlet locationLatitude = location?.latitude\nlet locationLongitude = location?.longitude\n}, errorHandler: { fault in\nprint(\"Error: \\(fault.message ?? \"\")\")\n})\n``````\n\n## LINESTRING¶\n\nThe `LINESTRING` type is used to represent geometries composed of multiple `POINT` values with linear interpolation between each two consecutive points. The WKT representation of this type is:\n\n``````LINESTRING (lon1 lat1, lon2 lat2, lon3 lat3, lon4 lat4)\n``````\n\nor\n\n``````LINESTRING (x1 y1, x2 y2, x3 y3, x4 y4)\n``````\n\nfor example, the `LINESTRING` below identifies the main stops of the historic Route 66:\n\n``````LINESTRING (-87.52683788 41.85716752, -90.13875858 38.68967135, -95.93953983 36.2131248, -97.49959842 35.53656483, -101.8282117 35.26791494, -105.87118045 35.72083154, -106.61825076 35.14794417, -111.63900272 35.20182535, -118.24178592 34.07195769)\n``````\n\nThe GeoJSON format for the `LINESTRING` values is:\n\n``````{\n\"type\": \"LineString\",\n\"coordinates\": [\n[\nlon1 or x1,\nlat1 or y1\n],\n[\nlon2 or x2,\nlat2 or x2\n],\n[\nlon3 or x3,\nlat3 or y3\n],\n[\nlon4 or x4,\nlat4 or y4\n]\n]\n}\n``````\n\nDatabase values of this type are represented by the `BLLineString` class in the client application. The class provides access to the `Point` objects making up the linestring. The class has the constructors and methods as listed below. 
Notice that all `set` method return the current `LineString` object, which allows for convenient \"chaining\": `lineStringInstance.setPoints( value ).asWKT()`:\n\n``````// creates a new instance of LineString\n- (nonnull instancetype)initWithPoints:(NSArray<BLPoint *> * _Nonnull)points\n\n// creates a LineString object from a WKT definition\n+ (BLLineString * _Nullable)fromWkt:(NSString * _Nonnull)wkt;\n\n// creates a LineString object from a GeoJSON definition\n+ (BLLineString * _Nullable)fromGeoJson:(NSString * _Nonnull)geoJson;\n\n// Objective-C approach is to use instance properties directly without get/set methods:\n\n// returns a collection of Point objects making up this LineString\nlineString.points;\n\n// sets a collection of Point objects to define the LineString\nlineString.points = ...;\n\n// converts this LineString object into its WKT representation\n- (NSString * _Nullable)asWkt\n\n// converts this LineString object into its GeoJSON representation\n- (NSDictionary<NSString *, id> * _Nullable)asGeoJson;\n``````\n``````// creates a new instance of LineString\npublic init(points: [BLPoint])\n//e.g: let lineString = BLLineString(points: [BLPoint(x: 10, y: 10), BLPoint(x: 20, y: 20)])\n\n// creates a LineString object from a WKT definition\npublic static func fromWkt(_ wkt: String) -> BLLineString?\n\n// creates a LineString object from a GeoJSON definition\npublic static func fromGeoJson(_ geoJson: String) -> BLLineString?\n\n// Swift approach is to use instance properties directly without get/set methods:\n\n// returns a collection of Point objects making up this LineString\nlineString.points\n\n// sets a collection of Point objects to define the LineString\nlineString.points = ...\n\n// converts this LineString object into its WKT representation\npublic func asWkt() -> String?\n\n// converts this LineString object into its GeoJSON representation\npublic func asGeoJson() -> [String : Any]?\n``````\n\nConsider the following example. 
The `Travel` table has an object identifying a travel route. The `route` column is of the `LINESTRING` type, its value is visualized in the map in the screenshot below:", null, "The following code retrieves the `Travel` object from the table, gets its `route` property (which is a `LineString`) and accesses the points making up the linestring:\n\n``````DataQueryBuilder *queryBuilder = [DataQueryBuilder new];\n[queryBuilder setWhereClauseWithWhereClause:@\"name = 'Route 66'\"];\n\n[[Backendless.shared.data ofTable:@\"Travel\"] findWithQueryBuilder:queryBuilder responseHandler:^(NSArray *response) {\nNSDictionary *travelRoute = response.firstObject;\nNSString *routeName = travelRoute[@\"name\"];\nBLLineString *routeDefinition = travelRoute[@\"route\"];\nNSArray *points = routeDefinition.points;\n} errorHandler:^(Fault *fault) {\nNSLog(@\"Error: %@\", fault.message);\n}];\n``````\n``````let queryBuilder = DataQueryBuilder()\nqueryBuilder.setWhereClause(whereClause: \"name = 'Route 66'\")\n\nBackendless.shared.data.ofTable(\"Travel\").find(queryBuilder: queryBuilder, responseHandler: { response in\nlet travelRoute = response.first\nlet routeName = travelRoute?[\"name\"] as? String\nlet routeDefinition = travelRoute?[\"route\"] as? BLLineString\nlet points = routeDefinition?.points\n}, errorHandler: { fault in\nprint(\"Error: \\(fault.message ?? \"\")\")\n})\n``````\n\n## POLYGON¶\n\nValue of the `POLYGON` type is a figure that is described by a number of `LINESTRING` values connected to form a single continuous exterior boundary. Additionally, a Polygon may  contain zero or more interior boundaries, where each interior boundary defines a hole in the Polygon. 
The WKT representation of this type is:\n\n``````POLYGON ((lon1 lat1, lon2 lat2, lon3 lat3),\n(hole-lon1 hole-lat1, hole-lon2 hole-lat2, hole-lon3 hole-lat3),\n(...),(...))\n``````\n\nor\n\n``````POLYGON ((x1 y1, x2 y2, x3 y3),\n(hole-x1 hole-y1, hole-x2 hole-y2, hole-x3 hole-y3),\n(...),(...))\n``````\n\nwhere the first group of coordinates defines the exterior boundary and all subsequent groups defines the holes. The first group is mandatory.\n\nfor example, the `POLYGON` below identifies the outline of The Pentagon - the US Department of Defense headquarters. It also includes a hole - which is the inner plaza.\n\n``````POLYGON ((-77.05781934 38.87248788,\n-77.05474017 38.87287211,\n-77.0533025 38.8706001,\n-77.05556629 38.86883758,\n-77.05848453 38.87002374,\n-77.05781934 38.87248788),\n(-77.05669282 38.87156906,\n-77.05551265 38.87170271,\n-77.05494402 38.8708507,\n-77.05577014 38.87030775,\n-77.05688594 38.87074211,\n-77.05669282 38.87156906))\n``````\n\nThe GeoJSON format for the `POLYGON` values is:\n\n``````{\n\"type\": \"Polygon\",\n\"coordinates\": [\n[\n[\nlon1,\nlat1\n],\n[\nlon2,\nlat2\n],\n[\nlon3,\nlat3\n]\n],\n[\n[\nhole-lon1,\nhole-lat1\n],\n[\nhole-lon2,\nhole-lat2\n],\n[\nhole-lon3,\nhole-lat3\n]\n]\n]\n}\n``````\n\nDatabase values of this type are represented by the `BLPolygon` class in the client application. The class provides access to the `LineString` objects making up the polygon. The class has the constructors and methods as listed below. 
Notice that all `set` method return the current Polygon object, which allows for convenient \"chaining\" for property assignments: `polygonInstance.setBoundary( value ).asWKT()`:\n\n``````// creates a new instance of Polygon (holes are optional)\n- (nonnull instancetype)initWithBoundary:(BLLineString * _Nonnull)boundary holes:(BLLineString * _Nullable)holes;\n\n// creates a Polygon object from a WKT definition\n+ (BLPolygon * _Nullable)fromWkt:(NSString * _Nonnull)wkt;\n\n// creates a Polygon object from a GeoJSON definition\n+ (BLPolygon * _Nullable)fromGeoJson:(NSString * _Nonnull)geoJson;\n\n// returns a LineString which defines the external boundary of the polygon\npolygon.boundary;\n\n// returns a collection of LineString objects each identifying a hole\npolygon.holes;\n\n// converts this Polygon object into its WKT representation\n- (NSString * _Nullable)asWkt;\n\n// converts this Polygon object into its GeoJSON representation\n- (NSDictionary<NSString *, id> * _Nullable)asGeoJson;\n``````\n``````// creates a new instance of Polygon (holes are optional)\npublic init(boundary: BLLineString, holes: BLLineString?)\n// e.g.: let polygon = BLPolygon(boundary: BLLineString(points: // array of BLPoints), holes: nil)\n\n// creates a Polygon object from a WKT definition\npublic static func fromWkt(_ wkt: String) -> BLLineString?\n\n// creates a Polygon object from a GeoJSON definition\npublic static func fromGeoJson(_ geoJson: String) -> BLLineString?\n\n// returns a LineString which defines the external boundary of the polygon\npolygon.boundary\n\n// returns a collection of LineString objects each identifying a hole\npolygon.holes\n\n// converts this Polygon object into its WKT representation\npublic func asWkt() -> String?\n\n// converts this Polygon object into its GeoJSON representation\npublic func asGeoJson() -> [String : Any]?\n``````\n\nConsider the following example. The `Building` table has an object identifying a shape of a building. 
The `shape` column is of the `POLYGON` type; its value is visualized in the map in the screenshot below:", null, "The following code retrieves the `Building` object from the table, gets its `shape` property (which is a `Polygon`) and accesses its external boundary and the hole.\n\n``````DataQueryBuilder *queryBuilder = [DataQueryBuilder new];\n[queryBuilder setWhereClauseWithWhereClause:@\"name = 'Pentagon'\"];\n\n[[Backendless.shared.data ofTable:@\"Building\"] findWithQueryBuilder:queryBuilder responseHandler:^(NSArray *response) {\nNSDictionary *building = response.firstObject;\nNSString *buildingName = building[@\"name\"];\nBLPolygon *buildingShape = building[@\"shape\"];\nBLLineString *externalBoundary = buildingShape.boundary;\nBLLineString *holes = buildingShape.holes;\n} errorHandler:^(Fault *fault) {\nNSLog(@\"Error: %@\", fault.message);\n}];\n``````\n``````let queryBuilder = DataQueryBuilder()\nqueryBuilder.setWhereClause(whereClause: \"name = 'Pentagon'\")\n\nBackendless.shared.data.ofTable(\"Building\").find(queryBuilder: queryBuilder, responseHandler: { response in\nlet building = response.first\nlet buildingName = building?[\"name\"] as? String\nlet buildingShape = building?[\"shape\"] as? BLPolygon\nlet externalBoundary = buildingShape?.boundary\nlet holes = buildingShape?.holes\n}, errorHandler: { fault in\nprint(\"Error: \\(fault.message ?? \"\")\")\n})\n``````" ]
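Independent of the SDK, the two serialization formats used above (WKT and GeoJSON) are simple to generate by hand. A language-neutral sketch in Python; the helper names are illustrative and are not part of the Backendless API:

```python
import json

def point_wkt(lon: float, lat: float) -> str:
    # WKT: POINT (longitude latitude), space-separated, longitude first
    return f"POINT ({lon} {lat})"

def point_geojson(lon: float, lat: float) -> str:
    # GeoJSON: coordinates are an array, again longitude first
    return json.dumps({"type": "Point", "coordinates": [lon, lat]})

def polygon_wkt(boundary, holes=()) -> str:
    # Each ring is a parenthesized, comma-separated list of "x y" pairs;
    # the first ring is the exterior boundary, the rest are holes.
    ring = lambda pts: "(" + ", ".join(f"{x} {y}" for x, y in pts) + ")"
    return "POLYGON (" + ", ".join([ring(boundary)] + [ring(h) for h in holes]) + ")"

print(point_wkt(-96.7553535, 32.8656106))
# POINT (-96.7553535 32.8656106)
print(polygon_wkt([(0, 0), (4, 0), (4, 4), (0, 0)],
                  [[(1, 1), (2, 1), (2, 2), (1, 1)]]))
# POLYGON ((0 0, 4 0, 4 4, 0 0), (1 1, 2 1, 2 2, 1 1))
```

Note that a valid polygon ring must be closed, i.e. it repeats its first point at the end, as in the Pentagon example above.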
https://www.arxiv-vanity.com/papers/gr-qc/9411070/
DFTT 64/94

November 26, 1994

HAMILTONIAN FORMALISM FOR BLACK HOLES AND QUANTIZATION

Marco Cavaglià, Vittorio de Alfaro and Alexandre T. Filippov

SISSA – International School for Advanced Studies, Via Beirut 2-4, I-34013 Trieste, Italy.

Dipartimento di Fisica Teorica dell'Università di Torino, Via Giuria 1, I-10125 Torino, Italy.

Joint Institute for Nuclear Research, R-141980 Dubna, Moscow Region, Russia.

INFN, Sezione di Torino, Italy.

ABSTRACT

Starting from the Lagrangian formulation of the Einstein equations for the vacuum static spherically symmetric metric, we develop a canonical formalism in the radial variable $r$ that is time–like inside the Schwarzschild horizon. The Schwarzschild mass turns out to be represented by a canonical function that commutes with the $r$–Hamiltonian. We investigate the Wheeler–DeWitt quantization and give the general representation for the solution as superposition of eigenfunctions of the mass operator.

PACS: 04.20.Fy, 04.60.Ds, 04.70.-s.

E-Mail: [email protected]

E-Mail: [email protected]

E-Mail: [email protected]

1. Introduction.

Recently the dynamics of primordial Schwarzschild black holes has been cast in canonical formalism and the quantization procedure has been discussed [1]. A complete bibliography that covers the history of the subject is also contained there. Indeed, the Hamiltonian formalism is a fundamental key to obtain a quantum description of a gravitational system, and a great deal of work has been devoted to the construction of a canonical formalism for the classical black hole solutions (see also [2,3]).

In the present paper we derive the canonical formalism for the vacuum static spherically symmetric metric in a simple direct way by foliation in the coordinate $r$. Classically the general vacuum spherically symmetric solution of the Einstein equations is locally isometric to the Schwarzschild metric.
In order to obtain a Hamiltonian description of the Schwarzschild metric we start from the general static spherically symmetric line element

$$ds^2=-a(r)\,dt^2+N(r)\,dr^2+2B(r)\,dt\,dr+b(r)^2\,d\Omega^2, \qquad (1.1)$$

where $a$, $N$, $B$ and $b$ are real functions of $r$ and $d\Omega^2$ is the metric of the two–sphere.

Usually, redefining the coordinate time and fixing $b(r)=r$, (1.1) is cast in the form

$$ds^2=-A(r)\,dt^2+C(r)\,dr^2+r^2\,d\Omega^2, \qquad (1.2)$$

where $r$ is now the "area coordinate" since the area of the two-sphere of radius $r$ is $4\pi r^2$. One is then left with two functions $A$ and $C$ that can be determined by the Einstein equations. The line element (1.2) is the "standard form" of the general static isotropic metric (1.1).

The line element (1.1) will be our starting point for a canonical treatment, formulated in the coordinate $r$. We are of course aware that the line element (1.1) does not cover the complete spacetime, since it describes only a half of the Kruskal–Szekeres plane and pure $r$–coordinate transformations do not lead to a complete covering of the Kruskal–Szekeres manifold starting from the metric (1.1). In spite of this, the analysis of reparametrizations from this point of view may lead to interesting consequences. Indeed, later we will consider a formal $r$–quantization scheme and investigate the ensuing Wheeler–DeWitt (WDW) equation [5,6].

Since the metric tensor in (1.1,2) does not depend on $t$, no $t$–differentiation appears in the expression of the minisuperspace action derived from (1.1); starting from the Lagrangian we may develop a formal Hamiltonian scheme in the variable $r$ and obtain the corresponding $r$–super Hamiltonian after having introduced the $r$–conjugate momenta $p_a$ and $p_b$.

Note that there is a range where $r$ is a timelike variable. The signs of $a$ and $N$ are the key. In fact, inside the Schwarzschild horizon of the black hole the area coordinate $r$ in (1.2) is a time variable while $t$ is spacelike, and our formalism is a true canonical motion in time.
In the range where $r$ is timelike, $H$ generates the dynamics and plays the role of the usual ADM Hamiltonian; in general the $r$–super Hamiltonian is related to the reparametrizations of the variable $r$. In the metric (1.1) $N$ plays essentially the role of the ADM lapse function with respect to the $r$–slicing. Since we must allow for negative values of $N$, we need a slight modification of the ADM formalism, similar to what has been done, for instance, in continuing from a Lorentzian to an Euclidean signature (see e.g. ).

A single Lagrange multiplier $l$ imposes the constraint of vanishing of the $r$–super Hamiltonian

$$H(a,p_a,b,p_b)=0,$$

so of course the Hamiltonian "$r$–dynamics" is generated by a constraint that is quadratic in the momenta, as predicted by the ADM canonical formalism. It is easy to check that this formalism is equivalent to the Einstein equations for the static solution.

The canonical formalism allows for an interesting algebraic structure of constants of the motion: in particular we will see that the Schwarzschild mass is expressed by a constant canonical quantity, of course gauge invariant.

The constraint equation is independent of $l$; indeed there has been no gauge fixing and $l$ is not determined. The identification of $r$ should be obtained by connecting it to the canonical coordinates of the problem (gauge fixing). This procedure can be carried on by the method proposed in [8] for quantum cosmological models. We defer to further study the analysis of gauge fixing and quantization in the reduced space.

We will investigate the quantization of the system by the method of enforcing the condition $H=0$ as an operator condition over wave functions (WDW equation). We find the form of the general solution of the equation diagonalizing the Schwarzschild mass operator and a commuting operator.
The solutions have an oscillatory behaviour in the classically allowed regions and an exponential behaviour in the classically forbidden ones.

Thus in this approach the mass plays the role of the quantum number determining the wave function; in this respect our result is in agreement with the conclusions obtained in [1].

The outline of the paper is as follows. In the next section we discuss the classical $r$–Lagrangian and $r$–Hamiltonian formalisms for the metric (1.1). In section 3 we integrate the infinitesimal gauge transformations and obtain the entire group. We identify the gauge invariant quantities and discuss their algebra. Section 4 is devoted to the study of the WDW equation.

2. Lagrangian formulation.

Our starting point is the line element (1.1) where the Lagrangian coordinates $a$, $b$, $N$, $B$ are functions of $r$. As mentioned in the introduction, changes of sign in the metric coefficients $a$ and $N$ are allowed (note that the signature is Minkowskian over the whole manifold: for instance, if $a<0$, $N<0$). $r$ can be a timelike coordinate and $t$ spacelike over part of the manifold, so it is a matter of preference to define a priori $t$ or $r$ as the timelike variable. Hence, we develop a formal canonical structure in $r$ in which the $r$–super Hamiltonian is a generator of gauge canonical transformations that correspond to reparametrizations of the coordinate $r$ in the Lagrangian formulation (and thus in the region where $r$ is timelike it generates the dynamics). Hence it seems worthwhile to study in detail this $r$–canonical structure.

Let us consider the line element (1.1), which corresponds essentially to using a Gaussian normal system of coordinates with respect to the three–surface $r=\mathrm{const}$, i.e. to performing the 3+1 slicing with respect to the $r$ coordinate. As remarked in the introduction, looking at (1.1) one realizes that the variable $N$ plays the role of the $r$–lapse function in our foliation.
The Einstein–Hilbert action

$$S=\frac{1}{16\pi G}\int_{V_4} d^4x\,\sqrt{-g}\,R-\frac{1}{8\pi G}\int_{\partial V_4} d^3x\,\sqrt{h}\,K \qquad (2.1)$$

can be cast in the form

$$S=\int_{t_1}^{t_2}dt\int_{r_1}^{r_2}dr\,L(a,b,\Delta), \qquad (2.2)$$

where

$$L=2\sqrt{\Delta}\left(\frac{a'bb'}{\Delta}+\frac{ab'^2}{\Delta}+1\right) \qquad (2.3)$$

(primes denote differentiation with respect to $r$). In (2.3) an overall constant factor has been absorbed by a choice of units, and $\Delta$ is given by

$$\Delta(r)=aN+B^2. \qquad (2.4)$$

Eq. (2.3) requires that $\Delta>0$, and from (2.4) the signature of (1.1) is then Minkowskian for any value of $r$. From (2.3) the Einstein equations of motion can be recovered considering formally $a$, $b$, and $\Delta$ as Lagrangian coordinates evolving in $r$. Of course, $\Delta$ acts as a Lagrange multiplier (and we still have the freedom of choosing $N$ or $B$). From the vacuum Einstein equations derived from (2.1), or directly from (2.3), one obtains

$$\Delta=k^2b'^2,\qquad a=k^2\left(1-\frac{2M}{b}\right), \qquad (2.5a,b)$$

where $k$ and $M$ are two integration constants. Since the metric is $t$–independent, we can arbitrarily rescale $t$ in (1.1). This corresponds essentially to fixing $k$ in (2.5), so we can set $k=1$; then the metric coincides with the standard Schwarzschild form. $M$ is the Schwarzschild mass. Eqs. (2.5) will be useful for comparison with the Hamiltonian formalism that will be developed below. Note that the Lagrange multiplier $\Delta$ can be arbitrarily fixed; furthermore, since $\Delta$ is related to $N$ and $B$ by eq. (2.4), also $N$, or $B$, can be arbitrarily chosen; these two choices correspond to the freedom in the definition of $t$ and $r$ in the line element (1.1). For instance, the choice $b=r$ corresponds to the area gauge since from (2.5) we obtain:

$$ds^2=-\left(1-\frac{2M}{r}\right)dt^2+N(r)\,dr^2\pm2\left[1-\left(1-\frac{2M}{r}\right)N(r)\right]^{1/2}dt\,dr+r^2\,d\Omega^2. \qquad (2.6)$$

The line element (2.6) corresponds to the standard form of the Schwarzschild solution for $N=(1-2M/r)^{-1}$, to the Eddington–Finkelstein metric for $N=1+2M/r$, and to the line element of ref. [2] choosing $N=1$.

Let us now set up the Hamiltonian formalism in $r$.
We introduce the $r$–conjugate momenta as

$$p_a=\frac{2bb'}{\sqrt{\Delta}},\qquad p_b=\frac{2}{\sqrt{\Delta}}\left(a'b+2ab'\right), \qquad (2.7a,b)$$

and by the usual Legendre transformation we obtain the density of the action (with respect to the coordinate $r$)

$$S=\int_{r_1}^{r_2}dr\left\{\frac{1}{2}\left(a'p_a+b'p_b-ap'_a-bp'_b\right)-lH\right\}. \qquad (2.8)$$

$H$ is the Schwarzschild $r$–super Hamiltonian

$$H=p_a(bp_b-ap_a)-4b^2, \qquad (2.9)$$

and

$$l=\frac{\sqrt{\Delta}}{2b^2} \qquad (2.10)$$

has been chosen as Lagrange multiplier. Note that the Legendre transformation used to write (2.8) is singular for $b=0$, but not for $a=0$. As a consequence of (2.8) we have the constraint

$$H=0. \qquad (2.11)$$

This constraint expresses the invariance under $r$–reparametrization, and inside the region where $r$ is timelike it generates the dynamics.

Eqs. (2.1)–(2.11) can be easily extended to the Reissner–Nordström (RN) case, i.e. to a static electrically charged black hole. Let us consider a radial electric field whose potential 1-form is

$$A=A(r)\,dt \qquad (2.12)$$

(this Ansatz was used in [9,10] for the discussion of Euclidean electromagnetic black holes). Adding to (2.3) the electromagnetic Lagrangian and using (2.12), the Hamiltonian becomes

$$H_{RN}=p_a(bp_b-ap_a)-4b^2+P_A^2=H+P_A^2, \qquad (2.13)$$

where $P_A$ is the conjugate momentum to $A$. Since (2.13) is separable we can solve the equation of motion for the electromagnetic field and have $P_A=Q$, where $Q$ is the charge of the black hole. Eq. (2.11) becomes

$$H_{RN}=H+Q^2=0. \qquad (2.14)$$

The RN case is equivalent to the Schwarzschild case with the constraint (2.14) in place of (2.11).

3. Algebra and Gauge Transformations.

The gauge transformations of the system are generated by $H$ ($q_i=a,b$; $p_i=p_a,p_b$):

$$\delta q_i=\alpha(r)\frac{\partial H}{\partial p_i}=\alpha(r)[q_i,H]_P,\qquad \delta p_i=-\alpha(r)\frac{\partial H}{\partial q_i}=\alpha(r)[p_i,H]_P,\qquad \delta l=\frac{d\alpha}{dr}. \qquad (3.1a,b,c)$$

The action (2.8) is invariant under (3.1) apart from a boundary term that does not change the classical equations of motion:

$$\delta S=\int_{r_1}^{r_2}dr\,\frac{d}{dr}\left[\alpha\left(p_i\frac{\partial H}{\partial p_i}+q_i\frac{\partial H}{\partial q_i}-2H\right)\right]. \qquad (3.2)$$

With $\alpha=l$, eqs. (3.1a,b) are the equations of motion.

The system described by the action (2.8) has remarkable algebraic properties.
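One may verify directly that the momenta (2.7), the super Hamiltonian $H=p_a(bp_b-ap_a)-4b^2$ and the multiplier $l=\sqrt{\Delta}/2b^2$ reproduce the Lagrangian (2.3). Indeed, from (2.7) one finds $b\,p_ap_b-ap_a^2=\frac{4b^2}{\Delta}\left(a'bb'+ab'^2\right)$, so that

```latex
a'p_a + b'p_b - lH
  = \frac{4\left(a'bb' + ab'^{2}\right)}{\sqrt{\Delta}}
    - 2\sqrt{\Delta}\left[\frac{a'bb' + ab'^{2}}{\Delta} - 1\right]
  = 2\sqrt{\Delta}\left(\frac{a'bb' + ab'^{2}}{\Delta} + 1\right)
  = L .
```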
Consider the following canonical quantities:

$$J=8b-p_ap_b,\qquad I=b/p_a. \qquad (3.3a,b)$$

$J$ and $I$ are canonically conjugate gauge invariant quantities (and also obviously integrals of the motion):

$$[J,I]_P=1,\qquad [J,H]_P=0,\qquad [I,H]_P=0. \qquad (3.4)$$

It is also interesting to consider the canonical quantity

$$N=bp_b-2ap_a. \qquad (3.5)$$

We have

$$N=IJ+2H/p_a, \qquad (3.6)$$

and the relations

$$[N,H]_P=-2H,\qquad [N,I]_P=-I,\qquad [N,J]_P=J. \qquad (3.7)$$

$N$ is not gauge invariant; however, in the case $Q=0$, i.e. for a Schwarzschild metric, it is constant on the constraint $H=0$. We shall see that $N$ plays an interesting role in the frame of the WDW equation.

The gauge transformations (3.1) from a gauge $l_1$ to a gauge $l_2$ can be integrated explicitly. We have

$$a=-H\alpha^2+IJ\alpha+4I^2,\qquad b=-\frac{I}{\alpha},\qquad p_a=-\frac{1}{\alpha},\qquad p_b=J\alpha+8I,\qquad \alpha=\int dr\,(l_2-l_1). \qquad (3.8a\text{--}e)$$

From eqs. (3.8) the gauge independent relation follows

$$a=\frac{I^2}{b^2}\left(4b^2-Jb-H\right). \qquad (3.9)$$

On the constraint $H=0$ (Schwarzschild metric)

$$a=4I^2\left(1-\frac{J}{4b}\right). \qquad (3.10)$$

Therefore from (2.5)

$$J=8M, \qquad (3.11)$$

where $M$ is the Schwarzschild mass. On the constraint, the two roots of $a(b)=0$ in (3.9) correspond to the two horizons of the RN metric.

It follows that in the case of the Schwarzschild metric $8I$ is the momentum conjugate to the Schwarzschild mass. This suggests performing a canonical transformation to a new pair of canonical variables $(M,p_M)$, where $p_M=8I$. This motivates our choice of the eigenfunctions in the discussion of the WDW equation.

4. Quantization.

The quantization of this apparently simple system exhibits ambiguities that are characteristic of the canonical quantization of systems described by general relativity.

A main problem in general is that, in order to set up canonical quantization rules, we must know a priori the causal structure of the model representing a physical system. To be more specific, we must know which coordinate plays the role of time and consequently write down equal time canonical commutation relations.

This is usually an ambiguous procedure.
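The brackets (3.4) can be checked explicitly; for instance, for $[J,I]_P$ only the $(b,p_b)$ pair contributes:

```latex
[J,I]_P
 = \frac{\partial J}{\partial a}\frac{\partial I}{\partial p_a}
 - \frac{\partial J}{\partial p_a}\frac{\partial I}{\partial a}
 + \frac{\partial J}{\partial b}\frac{\partial I}{\partial p_b}
 - \frac{\partial J}{\partial p_b}\frac{\partial I}{\partial b}
 = 0 - 0 + 0 - \left(-p_a\right)\frac{1}{p_a} = 1 ,
```

and an analogous computation, using $\partial H/\partial a=-p_a^2$, $\partial H/\partial b=p_ap_b-8b$, $\partial H/\partial p_a=bp_b-2ap_a$ and $\partial H/\partial p_b=bp_a$, gives $[J,H]_P=[I,H]_P=0$.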
In the classical treatment, the identification – if any – of the time variable results from the solution of the classical equations of motion and is not determined a priori. Of course, in some cases, as for instance the Friedmann–Robertson–Walker (FRW) model, one assumes the signature of the metric (see e.g. ). This is because the outcome of the equations of motion is anticipated, and a limitation in the signature of the metric is consequently assumed. However, strictly speaking, these limitations are not always known at the start. This becomes evident whenever the classical equations allow for a change in the signature of the metric (see e.g. ) or when, as in the present case, the presence of a horizon induces a double change of signature in the metric.

In our present case we know from the classical solutions that the signature of the metric (and the gauge fixing of the $r$ coordinate) implies a range where $r$ is timelike. It is then tempting to explore the implications of a canonical quantization of this system imposing equal–$r$ commutation relations. This will be carried out in the present section.

We shall impose the constraint (2.11) as an operator condition on the wave function. This is the WDW equation. It expresses a necessary condition for the wave function, although it does not in general contain all the information relevant to the quantum form of the theory. Indeed, as is well known, the time is not identified, the solution contains both positive and negative frequencies, and the WDW operator is hyperbolic and thus does not lead to a well defined boundary value problem. It is also plagued by ambiguities since the metric in the Hilbert space is not defined.

We believe that the correct procedure requires identification of the parameter $r$ (our internal time) through a gauge fixing condition that defines $r$ in terms of the canonical variables and leads to a unitary Hamiltonian in the reduced canonical space.
When this is possible the quantization of the system is unambiguous and the solutions contain also the information from the constraint. The problem of the gauge fixing in the present case will be treated elsewhere; here we shall limit ourselves to exploring the properties of the solutions of the WDW equation.

The fundamental commutation relations are:

$$[a,p_a]=i,\qquad [b,p_b]=i. \qquad (4.1)$$

The operators $a$, $b$, $p_a$, $p_b$ have the Schrödinger representation, there being the usual ambiguities about the measure to be used.

We introduce also the mass operator $J$ and its conjugate $I$ according to eqs. (3.4). We have

$$[I,J]=i. \qquad (4.2)$$

We remark in particular that $J$ commutes with $H_{WDW}$.

The expression of the WDW Hamiltonian operator is (we consider for simplicity the case of the Schwarzschild metric, $Q=0$)

$$H_{WDW}=-ap_a^2-bJ+4b^2+i\lambda p_a. \qquad (4.3)$$

The term $i\lambda p_a$ depends on the ordering and on the representation of $a$ and $p_a$. The choice of the covariant Laplace–Beltrami operator leads to one value of $\lambda$, while the symmetric ordering of the operators $a$ and $p_a$ leads to another. In what follows we shall keep $\lambda$ undetermined.

First of all we determine the eigenfunctions of the commuting operators $J$ and $p_a$. We choose the simplest representation,

$$p_q\to-i\frac{\partial}{\partial q}, \qquad (4.4)$$

$q=a,b$. Then the eigenvalue equation for $J$ is

$$\left(8b+\partial_a\partial_b\right)\psi_M=8M\,\psi_M, \qquad (4.5)$$

and the eigenfunctions of $J$ and $p_a$ are given by

$$\psi_{pM}(a,b)=\frac{\sqrt{8p}}{2\pi}\,\exp i\left(pa+p^{-1}\beta_M\right), \qquad (4.6)$$

where $p$ is the eigenvalue of $p_a$ and

$$\beta_M=4b(b-2M). \qquad (4.7)$$

The set (4.6) is orthonormal in the full range of $a$ and $b$, with unit measure. Let us remark that this approach can be easily adapted to a different interval in $b$, and to different representations for $p_a$, $p_b$; the wave function will change correspondingly; however, the important properties to exploit remain the role of $J$ and the commutation relation (4.2).

The form of $\beta_M$ is related to the existence of the horizon at $b=2M$ for positive $M$. Expressing the solution of the WDW equation

$$H_{WDW}\Psi=0 \qquad (4.8)$$

as a superposition of $\psi_{pm}$, the general representation of the WDW wave function for the Schwarzschild black hole is given by

$$\Psi(a,b)=\int dp\;p^{\lambda-3/2}\int dm\;C(m)\,\psi_{pm}(a,b). \qquad (4.9)$$

$C(m)$ is arbitrary.
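That the plane-wave form $\psi_{pM}\propto\exp i\left(pa+p^{-1}\beta_M\right)$, with $\beta_M=4b(b-2M)$, is an eigenfunction of the mass operator $8b+\partial_a\partial_b$ can be seen directly:

```latex
\partial_b\,\psi_{pM} = \frac{i}{p}\,\beta_M'\,\psi_{pM} = \frac{i}{p}\,(8b-8M)\,\psi_{pM},
\qquad
\partial_a\partial_b\,\psi_{pM} = \frac{i}{p}\,(8b-8M)\,(ip)\,\psi_{pM} = -(8b-8M)\,\psi_{pM},
```

so that $\left(8b+\partial_a\partial_b\right)\psi_{pM}=8M\,\psi_{pM}$, with $p$ the eigenvalue of $p_a=-i\partial_a$.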
It is interesting to remark that there is a priori no limitation on the sign of the mass $m$.

Using a well known representation for the solutions of the Bessel equation, the representation (4.9) can be cast in the form

$$\Psi(a,b)=\int dm\;C(m)\left(-\frac{\beta_m}{a}\right)^{(\lambda-1)/2}K_{1-\lambda}\left(2\sqrt{-a\beta_m}\right) \qquad (4.10)$$

and the solution with fixed mass $M$ is

$$\Psi_M=C_M\left(-\frac{\beta_M}{a}\right)^{(\lambda-1)/2}K_{1-\lambda}\left(2\sqrt{-a\beta_M}\right). \qquad (4.11)$$

It is natural to assume the form (4.11) of the solution in the regions where $a\beta_M<0$, namely in the classically forbidden regions ($a>0$, $b<2M$) and ($a<0$, $b>2M$), where (4.11) is damped exponentially for large $\sqrt{-a\beta_M}$. In the two classically allowed regions for the black hole, namely $a>0$, $b>2M$, and $a<0$, $b<2M$, the behaviour is oscillatory and one should write the appropriate oscillating solutions with outgoing or incoming asymptotic conditions. Note that for large $b$ in these regions the phase approaches the value of the action evaluated on the classical solution for the asymptotically flat spacetime. We are not discussing the joining of the wave functions between the different regions, as this depends on the choice of the ordering and also on the representation assumed for the momenta.

Suitable superpositions of the kind (4.10) may give wave functions that are regular also for $a\to0$ [10,15]. We note also that the general solution for the Kantowski–Sachs Euclidean wormhole found in the literature corresponds to the solutions of the present WDW equation obtained by diagonalizing the operator $N$ (see (3.5)) in place of $J$. Indeed, using the same choice for ordering and measure in superspace as in that work, the differential representation of $N$ is

$$N=-i\left(b\,\partial_b-2a\,\partial_a\right), \qquad (4.12)$$

and the solutions of the WDW equation that are eigenfunctions of $N$ with eigenvalue $\nu$ are:

$$\Psi_\nu(a,b)=\frac{8}{\pi}\left(\frac{2\sinh\pi\nu}{\nu}\right)^{1/2}(-a)^{i\nu/2}\,K_{i\nu}\left(4b\sqrt{-a}\right). \qquad (4.13)$$

These solutions are real in the region $a<0$ and orthonormal there, with measure $b$:

$$(\Psi_\nu,\Psi_{\nu'})\equiv\int_{-\infty}^{0}da\int_{0}^{\infty}db\;b\;\Psi^*_\nu\,\Psi_{\nu'}=\delta(\nu-\nu'). \qquad (4.14)$$

Again the phase factor coincides asymptotically with the classical phase factor, as for (4.11).
It is interesting to note that when $\nu=0$ the solution (4.13), namely

$$\Psi_{\nu=0}=\frac{8\sqrt{2}}{\sqrt{\pi}}\,K_0\left(4b\sqrt{-a}\right), \qquad (4.15)$$

coincides with (4.11) for $M=0$ (and $\lambda=1$), as expected since on the constraint shell $N=IJ$. This wave function describes a vacuum wormhole in the classically forbidden region. This equivalence supports the conjecture that the ultimate remnant in the evaporation process of a black hole is a vacuum wormhole.

5. Conclusions.

The classical Einstein equations for a static spherically symmetric metric can be cast in Hamiltonian form. The starting point is the ADM foliation performed along the coordinate $r$. This is of course a constrained canonical formalism, the constraint being that the Hamiltonian vanishes. The Hamiltonian generates gauge transformations of the canonical variables that correspond to the reparametrization of the coordinate $r$ in the customary formalism of General Relativity.

By a suitable, self–suggesting choice of the Lagrangian multiplier (analogously to what was done in [17,8] for the FRW universe) the Hamiltonian assumes a beautiful polynomial form. The infinitesimal gauge transformations can be integrated, thanks essentially to Einstein and Schwarzschild. This is an interesting integrable non linear system. Integrability is due to its simple algebraic structure. Indeed, one identifies a pair of conjugate gauge invariant quantities: one of them is the Schwarzschild mass.

Then, the temptation to explore the quantization of this system is big and we have carried on the investigation of the WDW equation. In doing this, one is comforted by the fact that inside the horizon of a black hole $r$ is a timelike variable.

Note that if we do not fix the coordinate gauge by expressing $l$ in terms of the canonical coordinates, this statement is vague: for instance the trivially different fixings $b=r$ (area gauge) and others lead to obviously different values for the horizon in terms of $r$.
However, this does not matter much: there is a region where $r$ is timelike.

Thus we have studied the WDW equation and given the general representation of the solution in terms of superpositions of eigenfunctions of the mass operator. It is interesting to observe that there is no reason why the sum should be limited to positive eigenvalues of the mass only.

There is nothing in the form of the WDW equation that reminds us of the region in the $(a,b)$ plane where it is valid, as the WDW equation does not contain $r$. So we may determine the solution in the four regions $a\gtrless0$, $b\gtrless2M$ (for positive mass). We have not discussed the joining of the solutions between these regions, as the result may be affected by the ambiguities in the ordering of the operators and in the choice of the measure.

The solution in the classically forbidden regions can also be cast in a form identical to the solution representing a Euclidean wormhole in the Kantowski–Sachs spacetime. These solutions are eigenfunctions of a different operator $N$ that commutes weakly with the Hamiltonian. In particular the state with eigenvalue 0 of $N$ is also an eigenstate of the mass with eigenvalue 0. This equivalence may support the conjecture that the ultimate remnant in the process of evaporation of a black hole is a vacuum wormhole.

The WDW equation is plagued by the well known problems. A more natural way to investigate the quantum properties of the system seems to be the introduction of a gauge fixing of the parameter $r$ in the canonical treatment that connects $r$ to the canonical variables and leads to a unitary Hamiltonian in the reduced canonical space. We defer to a next paper the investigation of this method as well as of the connection between the WDW equation and the gauge fixed quantization for integrable systems.

Acknowledgments

It is a pleasure to thank Orfeu Bertolami, Fernando de Felice and Luis J. Garay for interesting discussions on the subject of this paper and related topics. One of the authors (A.T.F.)
acknowledges a partial support from the Russian Science Foundation (grant 93-02-3827) and from the International Science Foundation (grant RF 000).

References.

[1] H.A. Kastrup and T. Thiemann, Nucl. Phys. B425, 665 (1994); K.V. Kuchař, Phys. Rev. D 50, 3961 (1994).
[2] P. Kraus and F. Wilczek, Some Applications of a Simple Stationary Line Element for the Schwarzschild Geometry, Report No: PUPT 1474, IASSNS 94/46, gr-qc/9406042.
[3] S.P. Braham, Hypertime Formalism for Spherically Symmetric Black Holes and Wormholes, Report No: QMW-Maths-1994-SPB-1, gr-qc/9406045.
[4] See for instance: S. Weinberg, Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity, John Wiley & Sons, New York, 1972.
[5] J.A. Wheeler, in Battelle Rencontres: 1967 Lectures in Mathematics and Physics, eds. C. DeWitt and J.A. Wheeler, Benjamin, New York, 1968.
[6] B.S. DeWitt, Phys. Rev. 160, 1113 (1967).
[7] J. Martin, Phys. Rev. D 49, 5086 (1994).
[8] M. Cavaglià, V. de Alfaro and A.T. Filippov, A Schrödinger Equation for Mini Universes, Report No: DFTT 6/94, SISSA 25/94/A, gr-qc/9402031, Int. J. Mod. Phys. A, in press.
[9] M. Cavaglià, V. de Alfaro and F. de Felice, Phys. Rev. D 49, 6493 (1994).
[10] M. Cavaglià, Mod. Phys. Lett. A 9, 1897 (1994).
[11] For a review of quantum gravity problems, see for instance: K.V. Kuchař, Time and Interpretations of Quantum Gravity, in Proc. 4th Canadian Conference on General Relativity and Relativistic Astrophysics, World Scientific, Singapore, 1993.
[12] G. Ellis, A. Sumeruk, D. Coule and C. Hellaby, Class. Quantum Grav. 9, 1535 (1992).
[13] S.W. Hawking and D.N. Page, Phys. Rev. D 42, 2655 (1990).
[14] Bateman Manuscript Project, Higher Transcendental Functions, Vol. II, p. 82, McGraw–Hill Book Company, New York, 1953.
[15] See for instance: L.J. Garay, Phys. Rev. D 48, 1710 (1993); G.A. Mena Marugán, Class. Quantum Grav. 11, 2205 (1994) and Phys. Rev. D 50, 3923 (1994).
[16] S.W. Hawking, Phys. Rev. Lett. 69, 406 (1992).
[17] M. Cavaglià and V. de Alfaro, Mod. Phys. Lett. A 9, 569 (1994).
https://datascience.stackexchange.com/questions/76451/can-1d-cnn-method-apply-to-real-time-time-series-classification
# Can 1D-CNN method apply to real-time time series classification?

I have an EEG dataset with shape (data points, 19); each row of shape (1, 19) represents 1 second of EEG.

I have read a lot of research on EEG classification that used deep learning methods, and the 1D-CNN is one of them.

My question: the input of a 1D-CNN must contain multiple rows, e.g. (50, 19) for my dataset, so that the convolution filter has a time axis to slide over. But I want to predict on new data row by row (shape (1, 19)). Can a 1D-CNN make predictions from this kind of input?
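One common way to reconcile windowed training with row-by-row arrival (a sketch, not from the question; the window length 50 and the `model.predict` call are assumptions) is to keep a rolling buffer of the most recent rows and run the trained network on the whole buffer every time a new one-second row comes in:

```python
import numpy as np

WINDOW = 50     # number of rows the CNN was trained on (assumption)
CHANNELS = 19   # EEG channels per row

buffer = np.zeros((WINDOW, CHANNELS))
filled = 0

def push_row(row):
    """Append one (1, 19) EEG row; return a (1, WINDOW, CHANNELS) batch
    once enough history has accumulated, else None."""
    global buffer, filled
    buffer = np.roll(buffer, -1, axis=0)  # drop the oldest row
    buffer[-1] = row                      # append the newest row
    filled = min(filled + 1, WINDOW)
    if filled < WINDOW:
        return None                       # not enough history yet
    return buffer[np.newaxis, ...]        # add batch dimension for predict()

# Simulate a stream: one (1, 19) row arrives per second.
for t in range(60):
    window = push_row(np.random.randn(CHANNELS))
    if window is not None:
        # window.shape == (1, 50, 19): feed it to the trained network,
        # e.g. probs = model.predict(window)   (hypothetical model object)
        pass
```

Each new row then yields a prediction based on the last 50 seconds of signal; the first 49 rows produce no output because no full window exists yet.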
https://search.r-project.org/CRAN/refmans/Evomorph/html/SimEvo.html
SimEvo {Evomorph} R Documentation

## Simulation of Shape Variation

### Description

`SimEvo` performs a simulation of shape variation using a modification of Lande's evolutionary model (Polly 2004).

### Usage

```r
SimEvo(vari, consensusvec, resids, ngen, fsamp)
```

### Arguments

- `vari`: Variation coefficient
- `consensusvec`: Consensus shape (vectorized form)
- `resids`: GPA residuals matrix
- `ngen`: Number of generations of simulation (default: 1000000 steps)
- `fsamp`: Frequency of samples (default: 1000000 steps)

### Details

Lande's evolutionary model defines the mean morphological variation over generations ΔZ as:

    Delta Z = beta * G

where G is the additive genetic covariance matrix, and β are selection coefficients applied to the morphological structure. Polly (2004) proposes a modification of this equation in order to use it with morphological data instead of genetic data:

    Delta Z = beta * H * P

where P is the phenotypic covariance matrix, and H is a heritability matrix (see Polly 2004 for more information). `resids` will be used as the phenotypic covariance matrix, and `vari` will be used to simulate the βH term. After `ngen` simulation steps the new shape will be reconstructed from the starting shape `consensusvec`. The number of plots representing the new shapes can be modified using `fsamp`.

### Value

It returns a list of ngen/fsamp shapes (landmark coordinates).

### Author(s)

Cabrera Juan Manuel

### References

Polly, P. D. (2004). On the simulation of the evolution of morphological shape: multivariate shape under selection and drift.
Palaeontologia Electronica, 7(2), 1-28.

### Examples

```r
data("aegla_landmarks")

# Use the GpaResiduals function to obtain the GPA residual matrix and consensus
# coordinates from the landmark configuration
a_data <- GpaResiduals(aegla_landmarks)

# Simulate morphological evolution with a variation rate "vari"
# through "ngen" generations and retrieve the last generation's shape coordinates
simshape <- SimEvo(vari = 2, consensusvec = a_data$cvectorized,
                   resids = a_data$resid, ngen = 10000, fsamp = 10000)

# Plot the consensus shape and the simulated shape
# (simshape is a list; with ngen == fsamp it holds a single shape)
par(mfrow = c(1, 2))
plot(a_data$consens, type = "p", main = "Reference", xlab = "", ylab = "")
plot(simshape[[1]], type = "p", col = "red", main = "Target", xlab = "", ylab = "")

# Or use PlotVariations to see the difference more clearly
PlotVariations(simshape, a_data$consens)
```

[Package Evomorph version 0.9 Index]
https://www.javaprogrammingforums.com/whats-wrong-my-code/33290-dectobin-conversion-without-using-string-methods.html
# Thread: DECtoBIN Conversion WITHOUT using String Methods

1. ## DECtoBIN Conversion WITHOUT using String Methods

Hey, I am trying to create a program that determines the binary sequence of an input (unknown) decimal number. I DO NOT want to use a String method.

I am running into the difficulty that my remainder is stuck in an endless loop. I want to find the divisible value of the decimal number (i.e. the quotient) and its remainder, then print the remainder. Then continue calculating the quotient and its subsequent remainder until the quotient is 0.

For example, the decimal number 10 should produce the binary sequence 1010.

The code I have written is shown below.

```java
//Prompt user to input decimal value
System.out.print("please enter the decimal value: ");
decimal = keyboardIn.nextInt();

do
{
    //Calculations
    quotient = decimal/2;
    remainder = decimal%2;
    System.out.print(" " + remainder);
    quotient = decimal;
    counter++;
} while (quotient != 0);
```

3. ## Re: DECtoBIN Conversion WITHOUT using String Methods

> decimal number 10 should produce a binary sequence 1010.

What type of variable is that "binary sequence" held in?
Do you need to drop the leading 0s?\n\nAn int variable holds 32 bits that can be shifted bit by bit. any bit of the int can be tested by ANDing the bit with a 1 in and testing if the results is 0. For example: 1010 AND 1000 = 1000 or 1101 AND 1 = 1\n\n[code=java]\n[/code]\nto get highlighting and preserve formatting.\n\nstuck on an endless loop\nHow does quotient ever get to be == 0 at the end of the loop?", null, "", null, "Reply With Quote\n\n4. ##", null, "Re: DECtoBIN Conversion WITHOUT using String Methods\n\nHey Norm,\n\nThe variable that the binary sequence is held in is an int variable. I have included the full code below.\n\nThe quotient would ==0 as shown in the following example:\n\ndecimal = 10\n\nquotient = decimal/2;\nremainder = decimal%2;\n\ntherefore,\nquotient = 10/2 = 5\nremainder = 10%2 = 0\n\nquotient = 5/2 = 2\nremainder = 5%2 = 1\n\nquotient = 2/2 = 1\nremainder = 2%2 = 0\n\nquotient = 1/2 = 0 // quotient ==0\nremainder = 1%2 = 1\n\n<code>\npublic class DecToBin\n{\npublic static void main(String[]args)\n{\nScanner keyboardIn = new Scanner (System.in);\n\n<var>\n// declare variables\nint decimal, quotient, remainder, counter =1;\n\n//Prompt user to input decimal value\nSystem.out.print(\"please enter the decimal value: \");\ndecimal = keyboardIn.nextInt();\n\ndo\n{\nquotient = decimal/2;\nremainder = decimal%2;\nSystem.out.print(\" \" +remainder);\nquotient = decimal;\ncounter++;\n\n}while (quotient!=0);\n\n}//end main method\n}//end class", null, "", null, "Reply With Quote\n\n5. ##", null, "Re: DECtoBIN Conversion WITHOUT using String Methods\n\nThe variable that the binary sequence is held in is an int variable.\nIf the input is in an int variable and the binary sequence is held in an int variable, I don't understand what the code is supposed to do. 
There wouldn't be any logic needed, just an assignment statement:\nint inV = 10;\nint binarySeq = inV; // copy the value here\n\nDid you really mean that or did you mean that \"binary sequence\" will be printed as digits on the console?", null, "", null, "Reply With Quote" ]
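A hedged sketch of the fix the replies are hinting at: the posted loop never terminates because `quotient = decimal;` copies the unchanged input back into `quotient`, so it can never reach 0; assigning `decimal = quotient;` (or dividing a working copy) lets the quotient shrink. The sketch below also avoids String methods by packing the remainders into an int whose decimal digits read as the binary sequence, most significant bit first. The class and method names are mine, not from the thread, and the digit-packing trick assumes a nonnegative input of at most 1023 (ten binary digits fit in an int written this way).

```java
// Hypothetical corrected version of the loop under discussion (not the
// original poster's final code). Works for 0 <= decimal <= 1023.
class DecToBin {
    static int toBinary(int decimal) {
        int result = 0;       // binary digits packed as a base-10 int, e.g. 10 -> 1010
        int place = 1;        // which decimal digit the next remainder lands in
        int quotient = decimal;
        do {
            int remainder = quotient % 2;
            result += remainder * place;
            place *= 10;
            quotient = quotient / 2;  // the fix: advance the quotient, don't reset it
        } while (quotient != 0);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(toBinary(10)); // prints 1010
    }
}
```

Because the remainders are accumulated into increasing decimal places rather than printed immediately, they come out in the correct order (the posted loop would print them least significant bit first).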
https://in.mathworks.com/help/control/ug/about-model-order-reduction.html
## Model Reduction Basics

Working with lower-order models can simplify analysis and control design, relative to higher-order models. Simpler models are also easier to understand and manipulate. High-order models obtained by linearizing complex Simulink® models or from other sources can contain states that do not contribute much to the dynamics of particular interest to your application. Therefore, it can be useful to reduce model order while preserving model characteristics that are important for your application.

### When to Reduce Model Order

Cases where you might want to reduce model order include these situations:

- You are working with a relatively high-order model obtained from linearizing a Simulink model, performing a finite-element calculation, interconnecting model elements, or another source.

- You want to improve the simulation speed of a Simulink model at a certain operating point. In that case, you can linearize a portion of the model at that operating point and compute a reduced-order simplification or approximation of the linearized model. You can then replace the portion of the model with an LTI Block containing the reduced-order model.

- You design a high-order controller that you want to implement as a lower-order controller, such as a PID controller. For example, controller design using Linear-Quadratic-Gaussian methods or H∞ synthesis techniques can yield a high-order result. In this case, you can try reducing the plant order before synthesis, reducing the controller order after synthesis, or both.

- You want to simplify a model obtained by identification with System Identification Toolbox™ software.

(Diagram: the relationship between model reduction and control design.)

In general, when designing a controller for a system represented by a high-order model, G, it is useful to start by simplifying the plant model. Then, design a relatively low-order controller, CR, for the lower-order plant model GR. After you design a controller for either the original or the reduced plant model, you can try to reduce the controller further.

Reducing the plant or controller can include:

- Discarding states that do not contribute to the system dynamics, such as structurally disconnected states or canceling pole-zero pairs.

- Discarding low-energy states that contribute relatively little to system dynamics.

- Focusing on a particular frequency region and discarding dynamics outside that region. For example, if your control bandwidth is limited by actuator dynamics, discard higher-frequency dynamics.

In any case, when you reduce model order, you want to preserve model characteristics that are important for your application. Whenever you compute a reduced-order model, verify that the reduced model preserves time-domain or frequency-domain behavior that you care about. For example, for control design, it is useful to verify that the reduced closed-loop system is stable. It is also useful to check that the reduced open-loop transfer function CRGR adequately matches the original models where the open-loop gain GC is close to 1 (in the gain crossover region).

### Model Reduction Tools

Control System Toolbox™ offers tools for model reduction in several environments. These include:

- Functions for performing model reduction at the MATLAB® command prompt, in scripts, or in your own functions.

- The Reduce Model Order task for generating code in the Live Editor. When you are working in a live script, use this task to interactively experiment with model-reduction methods and parameters and generate code for your live script.

- The Model Reducer app, a standalone app that lets you import models from the MATLAB workspace and interactively generate reduced-order models using different methods and parameters. The app can also generate code for use in a MATLAB script or function.

### Choosing a Model Reduction Method

To reduce the order of a model, you can either simplify your model or compute a lower-order approximation. The following table summarizes the differences among several model-reduction approaches.

| Approach | Command Line | Model Reducer App and Reduce Model Order Live Editor Task |
| --- | --- | --- |
| Simplification — reduce model order exactly by canceling pole-zero pairs or eliminating states that have no effect on the overall model response | `sminreal` — eliminate states that are structurally disconnected from the inputs or outputs. `minreal` — eliminate canceling or near-canceling pole-zero pairs from transfer functions; eliminate unobservable or uncontrollable states from state-space models. `xelim` — eliminate states explicitly. | Pole-Zero Simplification method — eliminate structurally disconnected states, unobservable or uncontrollable states from state-space models, and canceling or near-canceling pole-zero pairs from transfer functions. |
| Approximation — compute a lower-order approximation of your model | `reducespec` — create a model order reduction task for ordinary LTI and sparse LTI models. | Balanced Truncation method — discard states that have relatively low effect on the overall model response. Modal Decomposition — eliminate poles and zeros that fall outside a specific area of interest. Mode Selection method — select a frequency range of interest and discard dynamics outside that range. |

Sometimes, approximation can yield better results, even if the model looks like a good candidate for simplification. For example, models with near pole-zero cancellations are sometimes better reduced by approximation than by simplification. Similarly, using the model order reduction workflow with `reducespec` to reduce state-space models can yield more accurate results than `minreal`.

When you use a reduced-order model, always verify that the simplification or approximation preserves model characteristics that are important for your application. For example, compare the frequency responses of the original and reduced models using `bodeplot` or `sigmaplot`. Or, compare the open-loop responses for the original and reduced plant and controller models.
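The mode-selection idea described above can be illustrated outside MATLAB with a toy sketch. Everything here is invented for illustration and is not MathWorks code: for a model already decomposed into real first-order modes, keep only the poles whose natural frequency lies in the band of interest and discard the rest.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of "mode selection": keep only modal poles whose natural
// frequency |p| (rad/s) falls inside the frequency band we care about.
class ModeSelect {
    static List<Double> keepModes(double[] poles, double wMin, double wMax) {
        List<Double> kept = new ArrayList<>();
        for (double p : poles) {
            double w = Math.abs(p);   // natural frequency of a real pole
            if (w >= wMin && w <= wMax) {
                kept.add(p);          // this mode stays in the reduced model
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        // A slow drift mode at 0.5 rad/s and a fast actuator mode at 900 rad/s
        // fall outside a 1-100 rad/s control band, so only the middle two survive.
        double[] poles = {-0.5, -3.0, -40.0, -900.0};
        System.out.println(keepModes(poles, 1.0, 100.0));
    }
}
```

A real tool also has to partition the state-space matrices consistently with the kept modes; this sketch only shows the selection criterion itself.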
https://quant.stackexchange.com/questions/30178/how-to-calculate-return-on-investment-for-an-adjustment-to-a-complex-options-pos/38179
# How to calculate return on investment for an adjustment to a complex options position?

Say I currently hold a set of options positions with the same symbol/expiry that collectively have a net present value, based on the estimated value at expiration, of +10. I could also liquidate the positions now at a value of +6.

I am considering a set of multiple transactions with the same symbol/expiry. If I executed these transactions, I would net +8 including commissions. The estimated present value at expiration for the resulting positions would decline by -3 to 7. I could also liquidate the entire position after the transactions at a value of +5.

How do I determine the ROI of executing these transactions?

My gut says it's 8 / 3 = 266%: I 'invested' the -3 that I lost in value to gain an immediate value of $8. But that does not seem to work for all the different scenarios, such as when a set of transactions results in both an increase in estimated NPV and a net benefit to the execution.

Is there a general-purpose equation that will allow me to measure ROI consistently across these scenarios? How do I account for the reduction in the liquidation value, since only one of the two scenarios will occur (either the liquidation value will matter or the value at expiration)?

- You did not tell us how much you invested to build up your position, so we can not calculate ROI. Since you say it has an NPV of 10 we can assume you invested 10, but you say that you would liquidate it at 6. That seems to suggest that the market does not think it's worth 10, or it's illiquid. I assume you think it's worth 10 on average, but there is considerable risk and the actual return is uncertain. Not knowing the return makes ROI calculation difficult too. Maybe you can calculate an average/expected return. – Ami44 Sep 17 '16 at 16:13
- I didn't consider the initial investment to establish the position because I'm trying to estimate the ROI of the adjustment transaction itself instead of the change in ROI for the entire history of the position. – user548084 Sep 17 '16 at 17:08
- Also I didn't mean that I would liquidate it for 6, I meant I could liquidate it for 6 (but I wouldn't, because I think it's worth 10 to keep it). But these numbers are illustrative because I'm looking for a general-purpose solution to the problem. – user548084 Sep 17 '16 at 17:10
- I'm going to ask this question in a different way; if that gets answered I'll mark this one as answered as well. – user548084 Sep 20 '16 at 3:32

## 4 Answers

My instinct is to compute the expected value of the options right now. In a Black-Scholes framework, you simply recompute your put/call values with the current time to expiry and underlying price(s), add up the values, and then compare to the cost in the market of closing these positions. Note that if you use the market IV to calculate this, you will get back that both are exactly the same expected value. Like most things with options, determining the best course of action depends on your view of volatility vs the rest of the market.

- Perhaps you are looking instead for relative risk/reward? I.e., why risk $2.00 to make the last $0.10 on an option while leaving myself open to gamma risk? – pyrex Sep 17 '16 at 22:58
- Yes, relative risk/reward is where I'm trying to get to. This math is very simple if the option is a net debit and the expected value is positive. What I can't figure out is how to choose between two options that both have a net credit and a positive increase in expected value. – user548084 Sep 20 '16 at 3:01

I certainly see a problem in the fact that you say your position has an NPV of 7, but you cannot sell your position at 7 right now. I assume your position has considerable risk. That means you do not know your actual return. On average it might be 7, but it might be more or less in the end. That means you can only calculate an average ROI, not the actual one.

But let's assume your NPVs are real market values at which you could sell your position at any time. Since the ROI is simply your profit divided by the capital you used to generate that profit, it would be in your case 5/3 = 166%, since you invested a value of 3 to make a profit of 5.

If your transactions result in an immediate increase in cash and in the value of your position, then you have an arbitrage opportunity and you should try to do as many of these transactions as possible. The ROI would be infinite.

Addendum: I think ROI is not useful in your situation. You can use it at the end, when all positions are eliminated. Before that I doubt its usefulness. I assume that your transactions are changing the risk of your position. Otherwise it seems impossible to me to create a cash and NPV increase simultaneously. That means you need a measure of success that also takes your risk into consideration, like the Sharpe ratio et al.

- I looked at the Sharpe ratio, but investopedia.com/terms/s/sharperatio.asp says it's not applicable to options. – user548084 Sep 20 '16 at 2:41
- The immediate increase in cash and the increase in expected/NPV value are what trouble me the most. But even in that situation, there must still be a way of comparing two alternate transactions that both increase cash and expected value. Is the answer to that the standard deviation of returns, i.e. "risk"? – user548084 Sep 20 '16 at 2:46
- Think about what stops you from making a billion of those transactions. It's the possibility that the expected value will not be realized, right? I think you need to quantify that risk and then use something like your return divided by that risk as your measure of success. – Ami44 Sep 20 '16 at 17:12
- Sure, that's my thought as well. With a net credit and a positive ROI there must be another factor at play, which is obviously risk. My challenge is how to control for that risk, or, exactly what metric to use in combination with EV/NPV and the net credit. – user548084 Sep 23 '16 at 2:58

It's fairly straightforward to calculate ROI for a long-only options portfolio, since what you pay can be considered principal. For a net short position, however, ROI may be undefined, since there is no principal and the net loss is theoretically unbounded. In cases where principal is not defined, it is standard to use value-at-risk for the denominator, since this represents a best estimate of principal at stake. The standard cut-off values for VaR are 1%, 2%, or even just 2 sigmas. In instances where you cannot compute VaR, it may be acceptable to use the required maintenance margin as designated by your broker or clearing house. For example, the CME Group used to use a margin system called SPAN, which was basically a complex VaR model. Brokerages which cleared under the CME broadly adopted this model for risk reporting.

To your example, assume you have a net expected return of 8. If your broker/clearing house requires you to have 10 in your account to maintain the position, then the net expected ROI would be $\frac{8}{10}$ or 80%. As a caveat, margin requirements are in flux and tend to increase when volatility increases or correlations break down (thus, margin calls). So if your broker suddenly decides you need 16 in your account to maintain said position, then the expected ROI would decrease to $\frac{8}{16}$ or 50%.

P&L calculations can get complex when dealing with derivatives. From a trader's/portfolio manager's perspective, ROI/NPV/EV are irrelevant for calculating returns/risk on a book of positions. The options might be ITM now, but looking at the current expiration value makes the assumption that the asset will either stay ITM or become further ITM. Ultimately, a position is worth whatever the current spread shows, not the expiration value. If you are concerned about the expiration value, the current market for the position should encompass the likelihood that it will expire ITM, hence the price will be higher, but I digress.

If you are using this for risk calcs, you should do the calcs with mark-to-market/bid/mid/ask prices. You are correct to calculate the returns using the bid price less commission. There are two simple ways to calculate the returns in this case: one in the context of the portfolio and one in the context of the position only.

Calcs for long positions:

$$\text{Weighted Position Return} = \frac{(Price_T - Entry\ Price - Commission) \times Contracts \times Multiplier}{Portfolio\ Money\ Value}$$

$$\text{Position Return} = \frac{Price_T - Commission}{Entry\ Price + Commission} - 1$$

Calcs for short positions (these require a little logic; I will use calls for this example):

$$\text{If } Strike > Price_T \text{, then } \frac{Net\ Premium}{Portfolio\ Money\ Value}$$

$$\text{else } \frac{Net\ Premium - (Price_T - Strike) \times Contracts \times Multiplier - Commission}{Portfolio\ Money\ Value}$$

Let me know if the explanation is useful to you.

Also one note: is your goal risk management or portfolio analytics?
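The long/short return formulas from the last answer can be transcribed directly; the sketch below does exactly that. The class, method, and variable names are mine, and this is an illustration of the arithmetic, not trading code.

```java
// Transcription of the weighted position return (long) and short-call return
// formulas from the answer above. All amounts are in portfolio currency.
class OptionReturn {
    // Long position: (price_T - entry - commission) * contracts * multiplier,
    // weighted by the total portfolio money value.
    static double weightedLongReturn(double priceT, double entryPrice, double commission,
                                     int contracts, int multiplier, double portfolioValue) {
        return ((priceT - entryPrice - commission) * contracts * multiplier) / portfolioValue;
    }

    // Short call: keep the net premium if the call expires out of the money
    // (strike > price at expiry); otherwise subtract the intrinsic loss.
    static double shortCallReturn(double strike, double priceT, double netPremium,
                                  int contracts, int multiplier, double commission,
                                  double portfolioValue) {
        if (strike > priceT) {
            return netPremium / portfolioValue;
        }
        return (netPremium - (priceT - strike) * contracts * multiplier - commission)
                / portfolioValue;
    }

    public static void main(String[] args) {
        // One contract (multiplier 100) bought at 2.00, marked at 3.00, with a
        // 0.05 commission, in a 10,000 portfolio: (0.95 * 100) / 10,000.
        System.out.println(weightedLongReturn(3.0, 2.0, 0.05, 1, 100, 10_000.0));
    }
}
```

Note that both returns are relative to the whole portfolio value, matching the answer's "in context of the portfolio" variant; the position-only variant divides by the entry cost instead.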
https://www.komal.hu/feladat?a=feladat&f=B3832&l=en
Mathematical and Physical Journal for High Schools
Issued by the MATFUND Foundation

# Problem B. 3832. (September 2005)

B. 3832. P is an arbitrary point of the hypotenuse AB of a right-angled triangle ABC. The foot of the altitude drawn from vertex C is C1. The projection of P onto the leg AC is A1, and its projection onto the leg BC is B1.

a) Prove that the points P, A1, C, B1, C1 lie on a circle.

b) Prove that the triangles A1B1C1 and ABC are similar.

(3 points)

Deadline expired on October 17, 2005.

Solution. (a) The angles ∠PA1C, ∠PB1C, ∠PC1C are all right angles, so the points A1, B1, C1 lie on the circle of diameter PC. Due to the right angle at C, another diameter of this circle is A1B1.

(b) From the triangles ABC and CBC1, ∠BAC = 90° − ∠ABC = ∠BCC1. Since the quadrilateral CA1C1B1 is cyclic, ∠B1CC1 = ∠B1A1C1. Therefore, the red angles in the figure are equal. Similarly, the blue angles are also equal.

Triangles ABC and A1B1C1 are similar because their corresponding angles are equal.

### Statistics:

384 students sent a solution.
3 points: 205 students.
2 points: 64 students.
1 point: 98 students.
0 points: 14 students.
Unfair, not evaluated: 3 solutions.

Problems in Mathematics of KöMaL, September 2005
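The angle chase in part (b) can be summarized in one line, using that B1 lies on segment BC (so the rays CB1 and CB coincide) and that CA1C1B1 is cyclic:

```latex
\angle BAC \;=\; 90^{\circ} - \angle ABC \;=\; \angle BCC_{1}
\;=\; \angle B_{1}CC_{1} \;=\; \angle B_{1}A_{1}C_{1},
\qquad \text{and similarly} \qquad
\angle ABC \;=\; \angle A_{1}B_{1}C_{1},
```

so triangles A1B1C1 and ABC share two pairs of equal angles and are therefore similar.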
https://nz.education.com/lesson-plan/mm-math/
# M&M maths

- Students will be able to add two-digit numbers with and without regrouping using a model.

(5 minutes)

- Stimulate classroom engagement by asking your students if any of them like M&Ms. Let them know that this lesson involves adding numbers using M&Ms.
- Direct students' attention to the key at the bottom of the M&M place value mat, which states that 10 blue M&Ms are equivalent to one red M&M.
- Show them the rest of the place value mat. Explain that each blue M&M equals one and belongs in the ones place, or the number position one space left of a decimal point. Each red M&M equals ten and belongs in the tens place, or the number position two spaces to the left of a decimal point.
- Inform the class of this important rule: there cannot be more than nine M&Ms in any column.

(15 minutes)

- Display the place value mat using a whiteboard or projector, making it visible to the entire class.
- Show your students how to use the place value mat. For example, write a small number (like 24) on the board. Denote 24 on the place value mat by placing two red M&Ms in the tens place and four blue M&Ms in the ones place.
- Underneath 24, write "+2."

```
24
+2
```

- Have a student come up to add two on the mat. She should place two more blue M&Ms into the ones place.
- Ask students to tell you what they think the new value is, and then explain the correct answer. For the current example, inform them that the correct answer is 26 because there are two M&Ms in the tens place and six M&Ms in the ones place.
- Next, give them a trickier challenge. Write "15" on the board and ask for a student to denote 15. She should place one red M&M in the tens place and five blue M&Ms in the ones place.
- Underneath 15, write "+6."

```
15
+6
```

- Have another student come up to add six to the mat.
- If she places six more blue M&Ms into the ones place, remind the class that there can't be more than nine M&Ms in any column. Pose the question: since the ones place has more than nine M&Ms, we need to get rid of some. How can we remove M&Ms from the ones place without changing the number?
- Accept responses from the class. If needed, remind students of the key at the bottom of the mat.
- Explain that the solution involves replacing 10 blue M&Ms with one red M&M. Demonstrate this by removing 10 blue M&Ms from the ones place and adding one red M&M to the tens place. Explain that this process is called regrouping.
- Ask students questions to guide them toward the current number. For example, ask: How many M&Ms are in the tens place now? How many M&Ms are in the ones place now? What number do they make altogether?
- Repeat this demonstration with three more addition problems, two of which should require regrouping.

(20 minutes)

- Have students wash or sanitize their hands.
- Distribute the bags of M&Ms and place value mats.
- Partner up the students. Announce the first problem, such as 83 + 5. Have each pair work together to model the problem and determine an answer.
- Walk around and assist those who seem to be struggling.
- Once most pairs have finished, review the correct answer as a class. Have a volunteer explain how she and her partner arrived at this answer.
- Repeat this activity with three more addition problems, two of which should require regrouping.

(10 minutes)

- Give students two problems to complete independently on their place value mats, such as 62 + 8 and 84 + 9.
- Have them write down their answers on a notecard or piece of scrap paper and turn them in when done.
- Review the two problems as a class.

- Enrichment: Students who complete the practise problems quickly can be challenged with problems that require the addition of two 2-digit numbers without a multiple of 10 (e.g. 18 + 32). You could also ask these students to complete subtraction problems that require regrouping (e.g. 44 − 6).
- Support: During partner work, it could be helpful to pair students who need extra support with those who have a strong understanding of place value concepts. You can also give them extra practise problems that don't require regrouping. This would help them better understand the ones and tens places.

(5 minutes)

- Examine the answers that each student submitted in order to assess her understanding of regrouping. Two common errors to look for are no regrouping and partial regrouping. For example, consider 84 + 9. If a student's answer to this problem is 813, then she likely kept 13 in the ones place and didn't regroup. If a student's answer is 83, then she likely removed 10 from the ones place but didn't add 10 to the tens place.

(5 minutes)

- Reward your students' hard work by allowing them to eat some M&Ms. Depending on the condition of the M&Ms they were working with, you may want to give them new packs instead.
- Have students discuss their definition of the word "regroup" as they enjoy their M&Ms.
- Ask some students to share their definitions. As a class, come up with a definition to use for the remainder of the place value unit.
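The regrouping rule the lesson teaches (trade ten blue ones for one red ten, so no column ever holds more than nine) can be checked mechanically. A sketch with invented names, not part of the lesson itself:

```java
// Model the M&M place value mat: add blue (ones) M&Ms, then regroup every
// full group of ten ones into one red (tens) M&M so no column exceeds nine.
class MatAddition {
    // Returns {tens, ones} after adding `addend` ones and regrouping.
    static int[] addOnMat(int tens, int ones, int addend) {
        ones += addend;
        tens += ones / 10;   // each full group of 10 blue M&Ms becomes 1 red M&M
        ones = ones % 10;    // leftover blue M&Ms stay in the ones place
        return new int[]{tens, ones};
    }

    public static void main(String[] args) {
        int[] mat = addOnMat(1, 5, 6);            // the lesson's 15 + 6 example
        System.out.println(mat[0] * 10 + mat[1]); // prints 21
    }
}
```

The two error patterns in the assessment map onto this code directly: "no regrouping" skips both regrouping lines (leaving 13 in the ones place), while "partial regrouping" applies the `% 10` step without the matching `/ 10` carry into the tens place.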
http://www.eiiff.com/corporate-finance/capital-budgeting/techniques.html
[ "# Techniques of Capital Budgeting\n\nThere are a number of techniques of capital budgeting. Some of the methods are based on the concept of incremental cash flows from the projects or potential investments. There are some other techniques of capital budgeting that are based on accounting rules and accounting earnings.\nHowever, the techniques based on accounting rules are considered improper by economists. Hybrid and simplified techniques of capital budgeting are also used in practice. Capital budgeting is the process of managing the long-term capital of a firm in the most profitable way. The prime task of capital budgeting is to estimate the capital investment requirements of a business. Allocating capital to various projects according to their needs, and selecting the proper projects for the business, also fall under the canopy of capital budgeting.\n\n## Profitability Index\n\nThe profitability index is a technique of capital budgeting. It expresses the relationship between the investment and a proposed project's payoff. Mathematically, the profitability index is given by the following formula:\nProfitability Index = (Present Value of future cash flows) / (Present Value of initial investment)\n\nThe profitability index is also sometimes called the value investment ratio or profit investment ratio. The profitability index is used to rank various projects.\n\n## Net Present Value\n\nNet present value (NPV) is a widely used tool for capital budgeting. NPV indicates whether the discounted cash flows are in surplus or deficit, and it gives the amount of the excess or shortfall in present-value terms. 
The NPV can also be defined as the present value of the net cash flow.\n\nMathematically,\n\nNPV = Σ (Ct / (1+r)^t) − C0, where the summation takes the value of t ranging from 1 to n\n\nHere,\nn stands for the total project time\nt stands for the cash flow time\nr stands for the rate of discount\nCt stands for the net cash flow at time t\nC0 stands for the capital outlay when t = 0\n\n## Modified Internal Rate of Return\n\nThe Modified Internal Rate of Return (MIRR) gives a measure of an investment's attractiveness. The prime use of the modified internal rate of return in the capital budgeting process is to rank various choices of projects.\n\n## Equivalent Annuity\n\nEquivalent Annual Cost is widely used in capital budgeting as a decision-making tool. It is mainly used to compare projects with unequal lifetimes.\n\n## Internal Rate of Return\n\nThe internal rate of return (IRR) is a metric used in capital budgeting to determine whether the firm should make an investment. The IRR indicates the efficiency of a particular investment.", null, "" ]
[ null, "https://www.mapsofworld.com/images/cross-button-black.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91801655,"math_prob":0.8432445,"size":2534,"snap":"2019-43-2019-47","text_gpt3_token_len":515,"char_repetition_ratio":0.14980237,"word_repetition_ratio":0.0119904075,"special_character_ratio":0.18863457,"punctuation_ratio":0.056818184,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9772194,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-21T18:32:10Z\",\"WARC-Record-ID\":\"<urn:uuid:c5dd8c3d-3765-4685-9f8f-49fc247d03ae>\",\"Content-Length\":\"53301\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cb43f7d1-d6cc-46ce-9551-325a100c9dc7>\",\"WARC-Concurrent-To\":\"<urn:uuid:fbbc1d72-2a2f-47ad-9e5c-f9db0dfc18bc>\",\"WARC-IP-Address\":\"69.64.35.130\",\"WARC-Target-URI\":\"http://www.eiiff.com/corporate-finance/capital-budgeting/techniques.html\",\"WARC-Payload-Digest\":\"sha1:VZPXCV2M73AXT2DYQEIZNX277DO4JL43\",\"WARC-Block-Digest\":\"sha1:P5R6BIXF5PK6OYNZ2NWK2MBEYK42WUPY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670948.64_warc_CC-MAIN-20191121180800-20191121204800-00307.warc.gz\"}"}
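The NPV and profitability-index formulas above translate directly into a few lines of Python; the function names here are illustrative, not from the article:

```python
def npv(rate, cash_flows, initial_outlay):
    """NPV = sum(C_t / (1+r)^t for t = 1..n) - C_0."""
    pv = sum(c / (1 + rate) ** t for t, c in enumerate(cash_flows, start=1))
    return pv - initial_outlay

def profitability_index(rate, cash_flows, initial_outlay):
    """Present value of future cash flows divided by the initial investment."""
    pv = sum(c / (1 + rate) ** t for t, c in enumerate(cash_flows, start=1))
    return pv / initial_outlay

# A project costing 100 that returns 110 one year later, discounted at 10%,
# exactly breaks even: 110 / 1.10 - 100 = 0
print(npv(0.10, [110], 100))
print(profitability_index(0.10, [110], 100))
```

A profitability index above 1 (equivalently, NPV above 0) indicates the project creates value at the chosen discount rate.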
https://www.colorhexa.com/00cac1
[ "# #00cac1 Color Information\n\nIn a RGB color space, hex #00cac1 is composed of 0% red, 79.2% green and 75.7% blue. Whereas in a CMYK color space, it is composed of 100% cyan, 0% magenta, 4.5% yellow and 20.8% black. It has a hue angle of 177.3 degrees, a saturation of 100% and a lightness of 39.6%. #00cac1 color hex could be obtained by blending #00ffff with #009583. Closest websafe color is: #00cccc.\n\n• R 0\n• G 79\n• B 76\nRGB color chart\n• C 100\n• M 0\n• Y 4\n• K 21\nCMYK color chart\n\n#00cac1 color description : Strong cyan.\n\n# #00cac1 Color Conversion\n\nThe hexadecimal color #00cac1 has RGB values of R:0, G:202, B:193 and CMYK values of C:1, M:0, Y:0.04, K:0.21. Its decimal value is 51905.\n\nHex triplet RGB Decimal 00cac1 `#00cac1` 0, 202, 193 `rgb(0,202,193)` 0, 79.2, 75.7 `rgb(0%,79.2%,75.7%)` 100, 0, 4, 21 177.3°, 100, 39.6 `hsl(177.3,100%,39.6%)` 177.3°, 100, 79.2 00cccc `#00cccc`\nCIE-LAB 73.603, -42.998, -7.382 30.743, 46.088, 57.725 0.228, 0.343, 46.088 73.603, 43.627, 189.741 73.603, -57.867, -4.788 67.888, -37.971, -2.892 00000000, 11001010, 11000001\n\n# Color Schemes with #00cac1\n\n• #00cac1\n``#00cac1` `rgb(0,202,193)``\n• #ca0009\n``#ca0009` `rgb(202,0,9)``\nComplementary Color\n• #00ca5c\n``#00ca5c` `rgb(0,202,92)``\n• #00cac1\n``#00cac1` `rgb(0,202,193)``\n• #006eca\n``#006eca` `rgb(0,110,202)``\nAnalogous Color\n• #ca5c00\n``#ca5c00` `rgb(202,92,0)``\n• #00cac1\n``#00cac1` `rgb(0,202,193)``\n• #ca006e\n``#ca006e` `rgb(202,0,110)``\nSplit Complementary Color\n• #cac100\n``#cac100` `rgb(202,193,0)``\n• #00cac1\n``#00cac1` `rgb(0,202,193)``\n• #c100ca\n``#c100ca` `rgb(193,0,202)``\nTriadic Color\n• #09ca00\n``#09ca00` `rgb(9,202,0)``\n• #00cac1\n``#00cac1` `rgb(0,202,193)``\n• #c100ca\n``#c100ca` `rgb(193,0,202)``\n• #ca0009\n``#ca0009` `rgb(202,0,9)``\nTetradic Color\n• #007e78\n``#007e78` `rgb(0,126,120)``\n• #009790\n``#009790` `rgb(0,151,144)``\n• #00b1a9\n``#00b1a9` `rgb(0,177,169)``\n• #00cac1\n``#00cac1` `rgb(0,202,193)``\n• 
#00e4d9\n``#00e4d9` `rgb(0,228,217)``\n• #00fdf2\n``#00fdf2` `rgb(0,253,242)``\n• #18fff5\n``#18fff5` `rgb(24,255,245)``\nMonochromatic Color\n\n# Alternatives to #00cac1\n\nBelow, you can see some colors close to #00cac1. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #00ca8f\n``#00ca8f` `rgb(0,202,143)``\n• #00ca9f\n``#00ca9f` `rgb(0,202,159)``\n• #00cab0\n``#00cab0` `rgb(0,202,176)``\n• #00cac1\n``#00cac1` `rgb(0,202,193)``\n• #00c2ca\n``#00c2ca` `rgb(0,194,202)``\n• #00b1ca\n``#00b1ca` `rgb(0,177,202)``\n• #00a1ca\n``#00a1ca` `rgb(0,161,202)``\nSimilar Colors\n\n# #00cac1 Preview\n\nText with hexadecimal color #00cac1\n\nThis text has a font color of #00cac1.\n\n``<span style=\"color:#00cac1;\">Text here</span>``\n#00cac1 background color\n\nThis paragraph has a background color of #00cac1.\n\n``<p style=\"background-color:#00cac1;\">Content here</p>``\n#00cac1 border color\n\nThis element has a border color of #00cac1.\n\n``<div style=\"border:1px solid #00cac1;\">Content here</div>``\nCSS codes\n``.text {color:#00cac1;}``\n``.background {background-color:#00cac1;}``\n``.border {border:1px solid #00cac1;}``\n\n# Shades and Tints of #00cac1\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000606 is the darkest color, while #f1fffe is the lightest one.\n\n• #000606\n``#000606` `rgb(0,6,6)``\n• #001918\n``#001918` `rgb(0,25,24)``\n• #002d2b\n``#002d2b` `rgb(0,45,43)``\n• #00413e\n``#00413e` `rgb(0,65,62)``\n• #005451\n``#005451` `rgb(0,84,81)``\n• #006863\n``#006863` `rgb(0,104,99)``\n• #007c76\n``#007c76` `rgb(0,124,118)``\n• #008f89\n``#008f89` `rgb(0,143,137)``\n• #00a39c\n``#00a39c` `rgb(0,163,156)``\n• #00b6ae\n``#00b6ae` `rgb(0,182,174)``\n• #00cac1\n``#00cac1` `rgb(0,202,193)``\n• #00ded4\n``#00ded4` `rgb(0,222,212)``\n• #00f1e6\n``#00f1e6` `rgb(0,241,230)``\nShade Color Variation\n• #06fff4\n``#06fff4` `rgb(6,255,244)``\n• #19fff5\n``#19fff5` `rgb(25,255,245)``\n• #2dfff6\n``#2dfff6` `rgb(45,255,246)``\n• #41fff7\n``#41fff7` `rgb(65,255,247)``\n• #54fff7\n``#54fff7` `rgb(84,255,247)``\n• #68fff8\n``#68fff8` `rgb(104,255,248)``\n• #7cfff9\n``#7cfff9` `rgb(124,255,249)``\n• #8ffffa\n``#8ffffa` `rgb(143,255,250)``\n• #a3fffb\n``#a3fffb` `rgb(163,255,251)``\n• #b6fffc\n``#b6fffc` `rgb(182,255,252)``\n• #cafffd\n``#cafffd` `rgb(202,255,253)``\n• #defffe\n``#defffe` `rgb(222,255,254)``\n• #f1fffe\n``#f1fffe` `rgb(241,255,254)``\nTint Color Variation\n\n# Tones of #00cac1\n\nA tone is produced by adding gray to any pure hue. 
In this case, #5d6d6c is the least saturated color, while #00cac1 is the most saturated one.\n\n• #5d6d6c\n``#5d6d6c` `rgb(93,109,108)``\n• #557573\n``#557573` `rgb(85,117,115)``\n• #4e7c7a\n``#4e7c7a` `rgb(78,124,122)``\n• #468481\n``#468481` `rgb(70,132,129)``\n• #3e8c88\n``#3e8c88` `rgb(62,140,136)``\n• #36948f\n``#36948f` `rgb(54,148,143)``\n• #2f9b97\n``#2f9b97` `rgb(47,155,151)``\n• #27a39e\n``#27a39e` `rgb(39,163,158)``\n• #1faba5\n``#1faba5` `rgb(31,171,165)``\n• #17b3ac\n``#17b3ac` `rgb(23,179,172)``\n• #10bab3\n``#10bab3` `rgb(16,186,179)``\n• #08c2ba\n``#08c2ba` `rgb(8,194,186)``\n• #00cac1\n``#00cac1` `rgb(0,202,193)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #00cac1 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.50969565,"math_prob":0.78787106,"size":3678,"snap":"2021-21-2021-25","text_gpt3_token_len":1607,"char_repetition_ratio":0.13745236,"word_repetition_ratio":0.011111111,"special_character_ratio":0.53697664,"punctuation_ratio":0.23276836,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9828996,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-24T10:10:53Z\",\"WARC-Record-ID\":\"<urn:uuid:8144f9d7-cf6d-4b9d-b7ea-0e3b4c37c22e>\",\"Content-Length\":\"36247\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:466c627e-e2b1-4636-a670-0fd0b65e7248>\",\"WARC-Concurrent-To\":\"<urn:uuid:07cb4b57-ab16-45d3-b166-88e95294fbff>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/00cac1\",\"WARC-Payload-Digest\":\"sha1:VUNVQHI6YGOI2TRTBKOJCUHEDXDN3EMW\",\"WARC-Block-Digest\":\"sha1:AMNB6RKLKHJN7JLYDHOSGKWO4UL6IXFF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488552937.93_warc_CC-MAIN-20210624075940-20210624105940-00294.warc.gz\"}"}
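The hex-to-RGB, HSL, and CMYK figures quoted for #00cac1 can be reproduced with Python's standard colorsys module (note that colorsys works in normalized 0–1 values and returns HLS order, not HSL):

```python
import colorsys

hex_color = "00cac1"
r, g, b = (int(hex_color[i:i + 2], 16) for i in (0, 2, 4))
print(r, g, b)  # 0 202 193

# colorsys returns (hue, lightness, saturation), each in [0, 1]
h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
print(round(h * 360, 1), round(s * 100, 1), round(l * 100, 1))  # 177.3 100.0 39.6

# CMYK from RGB (simple conversion formula)
k = 1 - max(r, g, b) / 255
c, m, y = ((1 - v / 255 - k) / (1 - k) for v in (r, g, b))
print(round(c * 100), round(m * 100), round(y * 100, 1), round(k * 100, 1))  # 100 0 4.5 20.8
```

These match the page's stated values: hue 177.3°, saturation 100%, lightness 39.6%, and 100% cyan / 0% magenta / 4.5% yellow / 20.8% black.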
https://electronics.stackexchange.com/questions/436214/moore-machine-state-diagram-and-state-table?answertab=oldest
[ "# Moore machine state diagram and state table\n\nIs it me not understanding this table and diagram, or something? If, in a Moore machine, the output only depends on the current state, then why does the table for states F and H say the output is independent of the input? At F, if the input is a 0, the state changes to I and the output would be a 1. At F, if the input is a 1, the state stays the same at F and the output would be a 0.\n\nThe state table does not reflect this?\n\nThe state table should show multiple outputs...", null, "• Is it because I'm stupid or something? – user220808 Apr 30 at 12:41\n• You're confusing \"next state\" with \"output\". The combination of current state and inputs determine the next state, but the output doesn't change until it moves to that next state. The output depends ONLY on the current state. – Finbarr Apr 30 at 12:49\n• So what you're saying is: if I have just landed on F, the current output is 0. I set an input of 1, the output is still 0; I set an input of 0 at t1, the output is still 0 at t1+deltat until t2, where the state is I and the output is 1? – user220808 Apr 30 at 13:09\n• Very ambiguous to me – user220808 Apr 30 at 13:15\n• @Oldfart, they aren't totally independent. The first one can be inferred from the second one. – The Photon Apr 30 at 16:10" ]
[ null, "https://i.stack.imgur.com/C8NrS.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8577861,"math_prob":0.9061261,"size":462,"snap":"2019-35-2019-39","text_gpt3_token_len":109,"char_repetition_ratio":0.18558952,"word_repetition_ratio":0.06451613,"special_character_ratio":0.23593074,"punctuation_ratio":0.08490566,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95521325,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-18T17:52:19Z\",\"WARC-Record-ID\":\"<urn:uuid:8d479ebd-e84e-4e5b-be8a-f400fb842aec>\",\"Content-Length\":\"143515\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7b0a2476-f400-4912-a0a4-f13465410e22>\",\"WARC-Concurrent-To\":\"<urn:uuid:4bdd1f25-7710-43e5-8d44-d82d80ff79e7>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://electronics.stackexchange.com/questions/436214/moore-machine-state-diagram-and-state-table?answertab=oldest\",\"WARC-Payload-Digest\":\"sha1:25CNMYNPLAJCHMFZKDWSNVXKKAXQA775\",\"WARC-Block-Digest\":\"sha1:XO6DXXXCPHDPRFBCQE7RQWEKENAPDSGT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573323.60_warc_CC-MAIN-20190918172932-20190918194932-00504.warc.gz\"}"}
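The point made in the comments (inputs only select the next state; the output is a function of the current state alone, and changes only after the transition) can be sketched as a tiny Moore machine in Python. The transitions out of state I below are assumptions for illustration — only the F row appears in the question:

```python
class MooreMachine:
    def __init__(self, transitions, outputs, start):
        self.transitions = transitions  # (state, input) -> next state
        self.outputs = outputs          # state -> output
        self.state = start

    def output(self):
        # Output is a function of the current state ONLY
        return self.outputs[self.state]

    def step(self, inp):
        # The input determines the NEXT state; the output changes
        # only once the machine has moved to that state
        self.state = self.transitions[(self.state, inp)]
        return self.output()

m = MooreMachine(
    transitions={("F", 0): "I", ("F", 1): "F",
                 ("I", 0): "I", ("I", 1): "F"},  # the I row is assumed
    outputs={"F": 0, "I": 1},
    start="F",
)

print(m.output())  # 0 -- at F the output is 0, regardless of any pending input
print(m.step(1))   # 0 -- stayed at F, output unchanged
print(m.step(0))   # 1 -- moved to I, so the output is now 1
```

This is exactly the timing described in the comment thread: with input 0 applied at t1, the output stays 0 until the machine reaches state I at t2.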
https://www.mankier.com/5/orpierc
[ "# orpierc man page\n\norpierc is the configuration textfile for the orpie(1) console calculator.\n\n## Introduction\n\nCAUTION: while this manpage should be suitable as a quick reference, it may  be subject to miscellaneous shortcomings in typesetting. The definitive  documentation is the user manual provided with Orpie in PDF format.\n\nOrpie reads a run-configuration textfile (generally /etc/orpierc or  /usr/local/etc/orpierc) to determine key and command bindings. You can  create a personalized configuration file in $HOME/.orpierc, and select  bindings that match your usage patterns. The recommended procedure is to “include” the orpierc file provided with Orpie  (see Including Other Rcfiles),  and add or remove settings as desired.\n\n## Orpierc Syntax\n\nYou may notice that the orpierc syntax is similar to the syntax used in  the configuration file for the Mutt email client (muttrc).\n\nWithin the orpierc file, strings should be enclosed in double quotes (\").  A double quote character inside a string may be represented by \\\" (a backslash followed by the quote).  The backslash character must be represented by doubling it  (\\\\).\n\n### Including Other Rcfiles\n\nSyntax: include filename_string\n\nThis syntax can be used to include one run-configuration file within another.  This command could be used to load the default orpierc file (probably  found in /etc/orpierc) within your personalized rcfile, $HOME/.orpierc. The filename string should be enclosed in quotes.\n\n### Setting Configuration Variables\n\nSyntax: set variable=value_string\n\nSeveral configuration variables can be set using this syntax; check  the Configuration Variables description  to see a list. The variables are unquoted, but the values should be quoted strings.\n\n### Creating Key Bindings\n\nSyntax: bind key_identifier operation\n\nThis command will bind a keypress to execute a calculator operation.  The various operations, which should not be enclosed in quotes,  may be found in  the section on Calculator Operations.  
Key identifiers may be specified by strings that represent a single keypress,  for example \"m\" (quotes included). The key may be prefixed with  \"\\\\C\" or \"\\\\M\"  to represent Control or Meta (Alt) modifiers, respectively; note that the  backslash must be doubled. A number of special keys lack single-character  representations, so the following strings may be used to represent them:\n\n• \"<esc>\"\n• \"<tab>\"\n• \"<enter>\"\n• \"<return>\"\n• \"<insert>\"\n• \"<home>\"\n• \"<end>\"\n• \"<pageup>\"\n• \"<pagedown>\"\n• \"<space>\"\n• \"<left>\"\n• \"<right>\"\n• \"<up>\"\n• \"<down>\"\n• \"<f1>\" to \"<f12>\"\n\nDue to differences between various terminal emulators, this key identifier syntax may  not be adequate to describe every keypress. As a workaround, Orpie will also accept key  identifiers in octal notation. As an example, you could use  \\024  (do not enclose it in quotes) to represent Ctrl-T.\n\nOrpie includes a secondary executable, orpie-curses-keys, that prints out  the key identifiers associated with keypresses. You may find it useful when customizing  orpierc.\n\nMultiple keys may be bound to the same operation, if desired.\n\n### Removing Key Bindings\n\nSyntax:\nunbind_function key_identifier\nunbind_command key_identifier\nunbind_edit key_identifier\nunbind_browse key_identifier\nunbind_abbrev key_identifier\nunbind_variable key_identifier\nunbind_integer key_identifier\n\nThese commands will remove key bindings associated with the various entry  modes (functions, commands, editing operations, etc.). The key identifiers  should be defined using the syntax described in the previous section.\n\n### Creating Key Auto-Bindings\n\nSyntax: autobind key_identifier\n\nIn order to make repetitive calculations more pleasant, Orpie offers an automatic key  binding feature. 
When a function or command is executed using its abbreviation,  one of the keys selected by the autobind syntax will be  automatically bound to that operation (unless the operation has already been bound  to a key). The current set of autobindings can be viewed in the help panel by executing  command_cycle_help (bound to 'h' by default).\n\nThe syntax for the key identifiers is provided in the previous section.\n\n### Creating Operation Abbreviations\n\nSyntax: abbrev operation_abbreviation operation\n\nYou can use this syntax to set the abbreviations used within Orpie to represent the  various functions and commands. A list of available operations may be found in  the Calculator Operations section.  The operation abbreviations should be quoted strings, for example \"sin\"  or \"log\".\n\nOrpie performs autocompletion on these abbreviations, allowing you to type  usually just a few letters in order to select the desired command. The order of the  autocompletion matches will be the same as the order in which the abbreviations are  registered by the rcfile--so you may wish to place the more commonly used operation  abbreviations earlier in the list.\n\nMultiple abbreviations may be bound to the same operation, if desired.\n\n### Removing Operation Abbreviations\n\nSyntax: unabbrev operation_abbreviation\n\nThis syntax can be used to remove an operation abbreviation. The operation abbreviations  should be quoted strings, as described in the previous section.\n\n### Creating Macros\n\nSyntax: macro key_identifier macro_string\n\nYou can use this syntax to cause a single keypress (the key_identifier) to be interpreted as the series of keypresses listed in macro_string. The syntax for defining a keypress is the same as that defined in  the section on Creating Key Bindings.  The macro string should be a list of whitespace-separated keypresses, e.g.  
\"2 <return> 2 +\" (including quotes).\n\nThis macro syntax provides a way to create small programs; by way of example,  the default orpierc file includes macros for the base 2 logarithm and the  binary entropy function (bound to L and H, respectively),  as well as “register” variable shortcuts (<f1> to <f12>).\n\nMacros may call other macros recursively. However, take care that a macro does  not call itself recursively; Orpie will not trap the infinite loop.\n\nNote that operation abbreviations may be accessed within macros. For example,  macro \"A\" \"' a b o u t <return>\" would bind A to display  the “about Orpie” screen.\n\n### Creating Units\n\nSyntax:\nbase_unit unit_symbol preferred_prefix\nunit unit_symbol unit_definition\n\nUnits are defined in a two-step process:\n\n1.\n\nDefine a set of orthogonal “base units.” All other units must be expressible  in terms of these base units. The base units can be given a preferred SI prefix,  which will be used whenever the units are standardized (e.g. via ustand).  The unit symbols and preferred prefixes should all be quoted strings; to prefer  no prefix, use the empty string (\"\").\n\nIt is expected that most users will use the fundamental SI units for base units.\n\n2.\n\nDefine all other units in terms of either base units or previously-defined units.  Again, the unit symbol and unit definition should be quoted strings. The definition  should take the form of a numeric value followed by a units string, e.g.  \"2.5_kN*m/s\". See  the UNITS FORMATTING section  for more details on the unit string format.\n\n### Creating Constants\n\nSyntax: constant constant_symbol constant_definition\n\nThis syntax can be used to define a physical constant. Both the constant symbol  and definition must be quoted strings. The constant definition should be a  numeric constant followed by a units string e.g. \"1.60217733e-19_C\".  
All units used in the constant definition must already have been defined.\n\n## Configuration Variables\n\nThe following configuration variables may be set as described in the Setting Configuration Variables section.\n\n• datadir\nThis variable should be set to the full path of the Orpie data directory,  which will contain the calculator state save file, temporary buffers, etc.  The default directory is  \"~/.orpie/\".\n• editor\nThis variable may be set to the fullscreen editor of your choice. The default  value is \"vi\". It is recommended that you choose an editor that offers  horizontal scrolling in place of word wrapping, so that the columns of large  matrices can be properly aligned. (The Vim editor could be used in this fashion  by setting editor to \"vim -c 'set nowrap'\".)\n• hide_help\nSet this variable to \"true\" to hide the left help/status panel, or leave  it on the default of \"false\" to display the help panel.\n• conserve_memory\nSet this variable to \"true\" to minimize memory usage, or leave it on  the default of \"false\" to improve rendering performance. (By default,  Orpie caches multiple string representations of all stack elements. Very large  integers in particular require significant computation for string representation,  so caching these strings can make display updates much faster.)\n\n## Calculator Operations\n\nEvery calculator operation can be made available to the interface using the syntax  described in  the sections on Creating Key Bindings and Creating Operation Abbreviations.  The following is a list of every available operation.\n\n### Functions\n\nThe following operations are functions--that is, they will consume at least one  argument from the stack. 
Orpie will generally abort the computation and  provide an informative error message if a function cannot be successfully applied (for  example, if you try to compute the transpose of something that is not a matrix).\n\nFor the exact integer data type, basic arithmetic operations will yield an exact  integer result. Division of two exact integers will yield the quotient of  the division. The more complicated functions will generally promote the integer  to a real number, and as such the arithmetic will no longer be exact.\n\n• function_10_x\nRaise 10 to the power of the last stack element (inverse of function_log10).\n• function_abs\nCompute the absolute value of the last stack element.\n• function_acos\nCompute the inverse cosine of the last stack element. For real numbers,  the result will be provided either in degrees or radians, depending on  the angle mode of the calculator.\n• function_acosh\nCompute the inverse hyperbolic cosine of the last stack element.\n• function_arg\nCompute the argument (phase angle of complex number) of the last stack  element. The value will be provided in either degrees or radians,  depending on the current angle mode of the calculator.\n• function_asin\nCompute the inverse sine of the last stack element. For real numbers,  the result will be provided either in degrees or radians, depending on  the angle mode of the calculator.\n• function_asinh\nCompute the inverse hyperbolic sine of the last stack element.\n• function_atan\nCompute the inverse tangent of the last stack element. For real numbers,  the result will be provided either in degrees or radians, depending on  the angle mode of the calculator.\n• function_atanh\nCompute the inverse hyperbolic tangent of the last stack element.\n• function_binomial_coeff\nCompute the binomial coefficient (“n choose k”) formed by the last two  stack elements. 
If these arguments are real, the coefficient is computed  using a fast approximation to the log of the gamma function, and therefore  the result is subject to rounding errors. For exact integer arguments,  the coefficient is computed using exact arithmetic; this has the potential  to be a slow operation.\n• function_ceiling\nCompute the ceiling of the last stack element.\n• function_convert_units\nConvert stack element 2 to an equivalent expression in the units of  element 1. Element 1 should be real-valued, and its magnitude will  be ignored when computing the conversion.\n• function_cos\nCompute the cosine of the last stack element. If the argument is real,  it will be assumed to be either degrees or radians, depending on the  angle mode of the calculator.\n• function_cosh\nCompute the hyperbolic cosine of the last stack element.\n• function_conj\nCompute the complex conjugate of the last stack element.\n• function_div\nDivide element 2 by element 1.\n• function_erf\nCompute the error function of the last stack element.\n• function_erfc\nCompute the complementary error function of the last stack element.\n• function_eval\nObtain the contents of the variable in the last stack position.\n• function_exp\nEvaluate the exponential function of the last stack element.\n• function_factorial\nCompute the factorial of the last stack element. For a real argument,  this is computed using a fast approximation to the gamma function,  and therefore the result may be subject to rounding errors (or overflow). For an  exact integer argument, the factorial is computed using exact arithmetic;  this has the potential to be a slow operation.\n• function_floor\nCompute the floor of the last stack element.\n• function_gamma\nCompute the Euler gamma function of the last stack element.\n• function_gcd\nCompute the greatest common divisor of the last two stack elements. 
This operation  may be applied only to integer type data.\n• function_im\nCompute the imaginary part of the last stack element.\n• function_inv\nCompute the multiplicative inverse of the last stack element.\n• function_lcm\nCompute the least common multiple of the last two stack elements. This  operation may be applied only to integer type data.\n• function_ln\nCompute the natural logarithm of the last stack element.\n• function_lngamma\nCompute the natural logarithm of the Euler gamma function of the last  stack element.\n• function_log10\nCompute the base-10 logarithm of the last stack element.\n• function_maximum\nFind the maximum values of each of the columns of a real NxM matrix,  returning a 1xM matrix as a result.\n• function_minimum\nFind the minimum values of each of the columns of a real NxM matrix,  returning a 1xM matrix as a result.\n• function_mean\nCompute the sample means of each of the columns of a real NxM matrix,  returning a 1xM matrix as a result.\n• function_mod\nCompute element 2 mod element 1. This operation can be applied only  to integer type data.\n• function_mult\nMultiply last two stack elements.\n• function_neg\nNegate last stack element.\n• function_permutation\nCompute the permutation coefficient determined by the last two stack  elements 'n' and 'k': the number of ways of obtaining an ordered subset  of k elements from a set of n elements.  If these arguments are real, the coefficient is computed  using a fast approximation to the log of the gamma function, and therefore  the result is subject to rounding errors. For exact integer arguments,  the coefficient is computed using exact arithmetic; this has the potential  to be a slow operation.\n• function_pow\nRaise element 2 to the power of element 1.\n• function_purge\nDelete the variable in the last stack position.\n• function_re\nCompute the real part of the last stack element.\n• function_sin\nCompute the sine of the last stack element. 
If the argument is real, it  will be assumed to be either degrees or radians, depending on the angle  mode of the calculator.\n• function_sinh\nCompute the hyperbolic sine of the last stack element.\n• function_solve_linear\nSolve a linear system of the form Ax = b, where A and b are the last  two elements on the stack. A must be a square matrix and b must  be a matrix with one column. This function does not compute inv(A),  but obtains the solution by a more efficient LU decomposition method.  This function is recommended over explicitly computing the inverse,  especially when solving linear systems with relatively large dimension or  with poorly conditioned matrices.\n• function_sq\nSquare the last stack element.\n• function_sqrt\nCompute the square root of the last stack element.\n• function_standardize_units\nConvert the last stack element to an equivalent expression using the SI standard  base units (kg, m, s, etc.).\n• function_stdev_unbiased\nCompute the unbiased sample standard deviation of each of the columns of a real NxM  matrix, returning a 1xM matrix as a result. (Compare to HP48's sdev  function.)\n• function_stdev_biased\nCompute the biased (population) sample standard deviation of each of the columns  of a real NxM matrix, returning a 1xM matrix as a result. (Compare to  HP48's psdev function.)\n• function_store\nStore element 2 in (variable) element 1.\n• function_sub\nSubtract element 1 from element 2.\n• function_sumsq\nSum the squares of each of the columns of a real NxM matrix, returning a  1xM matrix as a result.\n• function_tan\nCompute the tangent of the last stack element. 
If the argument is real,  it will be assumed to be either degrees or radians, depending on the  angle mode of the calculator.\n• function_tanh\nCompute the hyperbolic tangent of the last stack element.\n• function_to_int\nConvert a real number to an integer type.\n• function_to_real\nConvert an integer type to a real number.\n• function_total\nSum each of the columns of a real NxM matrix, returning a  1xM matrix as a result.\n• function_trace\nCompute the trace of a square matrix.\n• function_transpose\nCompute the matrix transpose of the last stack element.\n• function_unit_value\nDrop the units of the last stack element.\n• function_utpn\nCompute the upper tail probability of a normal distribution.\nUTPN(m, v, x) = Integrate[ 1/Sqrt[2 Pi v] Exp[-(m-y)^2/(2 v)], {y, x, Infinity}]\n• function_var_unbiased\nCompute the unbiased sample variance of each of the columns of a real NxM  matrix, returning a 1xM matrix as a result. (Compare to HP48's var  function.)\n• function_var_biased\nCompute the biased (population) sample variance of each of the columns of a  real NxM matrix, returning a 1xM matrix as a result. (Compare to HP48's  pvar function.)\n\n### Commands\n\nThe following operations are referred to as commands; they differ from functions because  they do not take an argument. Many calculator interface settings are implemented as commands.\n\n• command_about\nDisplay a nifty “about Orpie” credits screen.\n• command_begin_abbrev\nBegin entry of an operation abbreviation.\n• command_begin_browsing\nEnter stack browsing mode.\n• command_begin_constant\nBegin entry of a physical constant.\n• command_begin_variable\nBegin entry of a variable name.\n• command_bin\nSet the base of exact integer representation to 2 (binary).\n• command_clear\nClear all elements from the stack.\n• command_cycle_base\nCycle the base of exact integer representation between 2, 8,  10, and 16 (bin, oct, dec, and hex).\n• command_cycle_help\nCycle through multiple help pages. 
The first page displays commonly used  bindings, and the second page displays the current autobindings.\n• command_dec\nSet the base of exact integer representation to 10 (decimal).\n• command_deg\nSet the angle mode to degrees.\n• command_drop\nDrop the last element off the stack.\n• command_dup\nDuplicate the last stack element.\n• command_enter_pi\nEnter 3.1415... on the stack.\n• command_hex\nSet the base of exact integer representation to 16 (hexadecimal).\n• command_oct\nSet the base of exact integer representation to 8 (octal).\n• command_polar\nSet the complex display mode to polar.\nSet the angle mode to radians.\n• command_rand\nGenerate a random real-valued number between 0 (inclusive) and 1 (exclusive). The deviates  are uniformly distributed.\n• command_rect\nSet the complex display mode to rectangular (cartesian).\n• command_refresh\nRefresh the display.\n• command_swap\nSwap stack elements 1 and 2.\n• command_quit\nQuit Orpie.\n• command_toggle_angle_mode\nToggle the angle mode between degrees and radians.\n• command_toggle_complex_mode\nToggle the complex display mode between rectangular and polar.\n• command_undo\nUndo the last calculator operation.\n• command_view\nView the last stack element in an external fullscreen editor.\n• command_edit_input\nCreate a new stack element using an external editor.\n\n### Edit Operations\n\nThe following operations are related to editing during data entry. These  commands cannot be made available as operation abbreviations, since  abbreviations are not accessible while entering data. These operations should  be made available as single keypresses using the bind keyword.\n\n• edit_angle\nBegin entering the phase angle of a complex number. 
(Orpie will  assume the angle is in either degrees or radians, depending on  the current angle mode.)\n• edit_backspace\nDelete the last character entered.\n• edit_begin_integer\nBegin entering an exact integer.\n• edit_begin_units\nBegin appending units to a numeric expression.\n• edit_complex\nBegin entering a complex number.\n• edit_enter\nEnter the data that is currently being edited.\n• edit_matrix\nBegin entering a matrix, or begin entering the next  row of a matrix.\n• edit_minus\n• edit_scientific_notation_base\nBegin entering the scientific notation exponent of a real number,  or the base of an exact integer.\n• edit_separator\nBegin editing the next element of a complex number or  matrix. (This will insert a comma between elements.)\n\n### Browsing Operations\n\nThe following list of operations is available only in stack browsing mode.  As abbreviations are unavailable while browsing the stack, these operations  should be bound to single keypresses using the bind keyword.\n\n• browse_echo\nEcho the currently selected element to stack level 1.\n• browse_end\nExit stack browsing mode.\n• browse_drop\nDrop the currently selected stack element.\n• browse_dropn\nDrop all stack elements below the current selection (inclusive).\n• browse_keep\nDrop all stack elements except the current selection. (This is  complementary to browse_drop.\n• browse_keepn\nDrop all stack elements above the current selection (non-inclusive). 
(This  is complementary to browse_dropn.\n• browse_next_line\nMove the selection cursor down one line.\n• browse_prev_line\nMove the selection cursor up one line.\n• browse_rolldown\nCyclically “roll” stack elements downward, below the  selected element (inclusive).\n• browse_rollup\nCyclically “roll” stack elements upward, below the selected  element (inclusive) .\n• browse_scroll_left\nScroll the selected element to the left (for viewing very large  entries such as matrices).\n• browse_scroll_right\nScroll the selected element to the right.\n• browse_view\nView the currently selected stack element in a fullscreen editor.\n• browse_edit\nEdit the currently selected stack element using an external editor.\n\n### Abbreviation Entry Operations\n\nThe following list of operations is available only while entering a function or  command abbreviation, or while entering a physical constant. These operations must  be bound to single keypresses using  the bind keyword.\n\n• abbrev_backspace\nDelete a character from the abbreviation string.\n• abbrev_enter\nExecute the operation associated with the selected abbreviation.\n• abbrev_exit\nCancel abbreviation entry.\n\n### Variable Entry Operations\n\nThe following list of operations is available only while entering a variable  name. As abbreviations are unavailable while entering variables, these operations  should be bound to single keypresses using the bind keyword.\n\n• variable_backspace\nDelete a character from the variable name.\n• variable_cancel\nCancel entry of the variable name.\n• variable_complete\nAutocomplete the variable name.\n• variable_enter\nEnter the variable name on the stack.\n\n### Integer Entry Operations\n\nThe following operation is available only while entering an integer; it can be  made accessible by binding it to a single keypress using the bind keyword.\n\n• integer_cancel\nCancel entry of an integer." ]
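The function_solve_linear entry recommends an LU decomposition over explicitly computing inv(A). A small Python sketch of the same idea, outside Orpie and with a made-up 2×2 system: `numpy.linalg.solve` routes through a LAPACK LU factorization, while the explicit-inverse route does strictly more work and is less accurate for poorly conditioned matrices.

```python
import numpy as np

# Hypothetical stand-ins for the two stack elements A and b.
A = np.array([[4.0, 3.0],
              [6.0, 3.0]])   # square matrix
b = np.array([[10.0],
              [12.0]])       # matrix with one column

# LU route (np.linalg.solve calls LAPACK's LU-based gesv): preferred.
x_lu = np.linalg.solve(A, b)

# Explicit-inverse route: same answer on this tiny well-conditioned
# system, but costlier and numerically worse in general.
x_inv = np.linalg.inv(A) @ b

print(x_lu.ravel())   # [1. 2.], since 4*1 + 3*2 = 10 and 6*1 + 3*2 = 12
```
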
https://rdrr.io/cran/FRAPO/man/mrc.html
# mrc: Marginal Contribution to Risk

In FRAPO: Financial Risk Modelling and Portfolio Optimisation with R

## Description

This function returns the marginal contributions to portfolio risk, whereby the latter is defined in terms of the portfolio standard deviation.

## Usage

```
mrc(weights, Sigma, percentage = TRUE)
```

## Arguments

`weights`: Vector of portfolio weights.
`Sigma`: Matrix; variance-covariance matrix of the portfolio assets.
`percentage`: Logical; whether the marginal risk contributions shall be returned as percentages that sum to 100 (default) or as decimal numbers.

## Details

The marginal contributions to risk are computed for a given dispersion matrix and weight vector.

## Value

`numeric`, the marginal risk contributions of the portfolio's assets.

## Author(s)

Bernhard Pfaff

FRAPO documentation built on May 2, 2019, 6:33 a.m.
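The help page does not print the formula. The usual definition of the percentage marginal (component) risk contribution of asset i is w_i(Σw)_i / (wᵀΣw), which sums to 100 by construction. A Python sketch mirroring the R signature — the exact formula and the example numbers are assumptions for illustration, not FRAPO's source:

```python
def mrc(weights, sigma, percentage=True):
    """Marginal contributions to portfolio risk (textbook definition,
    assumed here: w_i * (Sigma w)_i / (w' Sigma w))."""
    k = len(weights)
    sw = [sum(sigma[i][j] * weights[j] for j in range(k)) for i in range(k)]
    port_var = sum(weights[i] * sw[i] for i in range(k))   # w' Sigma w
    contrib = [weights[i] * sw[i] / port_var for i in range(k)]
    return [100.0 * c for c in contrib] if percentage else contrib

# Hypothetical two-asset covariance matrix and weights.
Sigma = [[0.04, 0.006],
         [0.006, 0.09]]
w = [0.7, 0.3]

print(mrc(w, Sigma))   # two percentages summing to 100
```

By construction the percentage contributions always sum to 100, matching the documented behaviour of `percentage = TRUE`.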
https://metanumbers.com/9539
## 9539

9,539 (nine thousand five hundred thirty-nine) is an odd four-digit prime number following 9538 and preceding 9540. In scientific notation, it is written as 9.539 × 10^3. The sum of its digits is 26. It has a total of 1 prime factor and 2 positive divisors. There are 9,538 positive integers (up to 9539) that are relatively prime to 9539.

## Basic properties

• Is Prime? Yes
• Number parity: Odd
• Number length: 4
• Sum of digits: 26
• Digital root: 8

## Name

Short name: 9 thousand 539
Full name: nine thousand five hundred thirty-nine

## Notation

Scientific notation: 9.539 × 10^3
Engineering notation: 9.539 × 10^3

## Prime Factorization of 9539

Prime factorization: 9539 (prime number)

ω(n) = 1 — total number of distinct prime factors
Ω(n) = 1 — total number of prime factors
rad(n) = 9539 — product of the distinct prime numbers
λ(n) = −1 — the parity of Ω(n), such that λ(n) = (−1)^Ω(n)
μ(n) = −1 — returns 1 if n has an even number of prime factors (and is square free), −1 if n has an odd number of prime factors (and is square free), 0 if n has a squared prime factor
Λ(n) = 9.16314 — returns log(p) if n is a power p^k of any prime p (for any k ≥ 1), else returns 0

The prime factorization of 9,539 is 9539. Since it has a total of 1 prime factor, 9,539 is a prime number.

## Divisors of 9539

2 divisors (0 even, 2 odd)

τ(n) = 2 — total number of the positive divisors of n
σ(n) = 9540 — sum of all the positive divisors of n
s(n) = 1 — sum of the proper positive divisors of n
A(n) = 4770 — sum of divisors σ(n) divided by the total number of divisors τ(n)
G(n) = 97.6678 — the nth root of the product of n divisors
H(n) = 1.99979 — the total number of divisors τ(n) divided by the sum of the reciprocals of the divisors

The number 9,539 can be divided by 2 positive divisors (out of which 0 are even, and 2 are odd). The sum of these divisors (counting 9,539) is 9,540, the average is 4,770.

## Other Arithmetic Functions (n = 9539)

φ(n) = 9538 — total number of positive integers not greater than n that are coprime to n
λ(n) = 9538 — smallest positive number such that a^λ(n) ≡ 1 (mod n) for all a coprime to n
π(n) ≈ 1180 — total number of primes less than or equal to n
r2(n) = 0 — the number of ways n can be represented as the sum of 2 squares

There are 9,538 positive integers (less than 9,539) that are coprime with 9,539. And there are approximately 1,180 prime numbers less than or equal to 9,539.

## Divisibility of 9539

n mod 2 = 1, n mod 3 = 2, n mod 4 = 3, n mod 5 = 4, n mod 6 = 5, n mod 7 = 5, n mod 8 = 3, n mod 9 = 8

9,539 is not divisible by any number less than or equal to 9.

## Classification of 9539

• Arithmetic
• Prime
• Deficient

Expressible via specific sums: Polite; Non-hypotenuse
Other: Prime Power; Square Free

## Base conversion (9539)

Base 2 (binary): 10010101000011
Base 3 (ternary): 111002022
Base 4 (quaternary): 2111003
Base 5 (quinary): 301124
Base 6 (senary): 112055
Base 8 (octal): 22503
Base 10 (decimal): 9539
Base 12 (duodecimal): 562b
Base 20 (vigesimal): 13gj
Base 36 (base36): 7cz

## Basic calculations (n = 9539)

Multiplication: n×2 = 19078, n×3 = 28617, n×4 = 38156, n×5 = 47695
Division: n/2 = 4769.5, n/3 ≈ 3179.67, n/4 = 2384.75, n/5 = 1907.8
Exponentiation: n^2 = 90992521, n^3 = 867977657819, n^4 = 8279638877935441, n^5 = 78979475256626171699
Nth root: √n ≈ 97.6678, ∛n ≈ 21.2081, ⁴√n ≈ 9.8827, ⁵√n ≈ 6.2503

## 9539 as geometric shapes

Circle (radius n): diameter 19078, circumference ≈ 59935.3, area ≈ 2.85861 × 10^8
Sphere (radius n): volume ≈ 3.63578 × 10^12, surface area ≈ 1.14345 × 10^9, circumference ≈ 59935.3
Square (side n): perimeter 38156, area ≈ 9.09925 × 10^7, diagonal ≈ 13490.2
Cube (edge n): surface area ≈ 5.45955 × 10^8, volume ≈ 8.67978 × 10^11, space diagonal ≈ 16522
Equilateral triangle (side n): perimeter 28617, area ≈ 3.94009 × 10^7, height ≈ 8261.02
Triangular pyramid (edge n): surface area ≈ 1.57604 × 10^8, volume ≈ 1.02292 × 10^11, height ≈ 7788.56

## Cryptographic Hash Functions

md5: 459ad054a6417248a1166b30f6393301
sha1: 67e70e96ff9cb5342c761504d1deca94c5b5d4b2
sha256: 063f929aaa939e1339e0f683971ff466528db54c041ffb7fb3033d94108b2630
sha512: 500e7f139ab15b840e02684070118fe1dd8433b114752eae00231602b10d93651e974cee7fbf107296d90d1edac3cfa7a11e870b7dbf907314d21d297629f2c5
ripemd160: d25acf0236cccc6c447cc4e6241f8876c5a18a69
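The divisor and divisibility facts above are cheap to verify with a few lines of Python; trial division is plenty for a four-digit n:

```python
n = 9539

def is_prime(m):
    """Trial division up to sqrt(m)."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

divisors = [d for d in range(1, n + 1) if n % d == 0]

print(is_prime(n))                    # True
print(divisors)                       # [1, 9539]  -> tau(n) = 2
print(sum(divisors))                  # 9540 = sigma(n); aliquot sum s(n) = 1
print([n % m for m in range(2, 10)])  # [1, 2, 3, 4, 5, 5, 3, 8]
```

The last line reproduces the divisibility row (n mod 2 through n mod 9) quoted on the page.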
https://ahgroup.github.io/DSAIDE/reference/simulate_idpatterns_ode.html
Simulation of a compartmental model with several different compartments: Susceptibles (S), Infected and Pre-symptomatic (P), Infected and Asymptomatic (A), Infected and Symptomatic (I), Recovered and Immune (R) and Dead (D).

This model includes natural births and deaths and waning immunity. It also allows for seasonal variation in transmission. The model is assumed to run in units of months. This assumption is hard-coded into the sinusoidally varying transmission coefficient, which is assumed to have a period of a year.

simulate_idpatterns_ode(
  S = 1000,
  P = 1,
  bP = 0,
  bA = 0,
  bI = 0.001,
  s = 0,
  gP = 0.5,
  gA = 0.5,
  gI = 0.5,
  f = 0,
  d = 0,
  w = 0,
  m = 0,
  n = 0,
  timeunit = 1,
  tmax = 300
)

## Arguments

S: initial number of susceptible hosts (numeric)
P: initial number of infected, pre-symptomatic hosts (numeric)
bP: level/rate of infectiousness for hosts in the P compartment (numeric)
bA: level/rate of infectiousness for hosts in the A compartment (numeric)
bI: level/rate of infectiousness for hosts in the I compartment (numeric)
s: strength of seasonal/annual sigmoidal variation of transmission rate (numeric)
gP: rate at which a person leaves the P compartment (numeric)
gA: rate at which a person leaves the A compartment (numeric)
gI: rate at which a person leaves the I compartment (numeric)
f: fraction of pre-symptomatic individuals that have an asymptomatic infection (numeric)
d: fraction of symptomatic infected hosts that die due to disease (numeric)
w: rate at which recovered persons lose immunity and return to the susceptible state (numeric)
m: the rate at which new individuals enter the model (are born) (numeric)
n: the rate of natural death (the inverse is the average lifespan) (numeric)
timeunit: units of time in which the model should run (1=day, 2=week, 3=month, 4=year) (numeric)
tmax: maximum simulation time, in units of months (numeric)

## Value

This function returns the simulation result as obtained from a call to the deSolve ode solver.

## Details

A compartmental ID model with several states/compartments is simulated as a set of ordinary differential equations. The function returns the output from the ODE solver as a matrix, with one column per compartment/variable. The first column is time.

## Warning

This function does not perform any error checking. So if you try to do something nonsensical (e.g. have I0 > PopSize or any negative values or fractions > 1), the code will likely abort with an error message.

See e.g. Keeling and Rohani 2008 for SIR models and the documentation for the deSolve package for details on ODE solvers.

# To run the simulation with default parameters just call the function:
result <- simulate_idpatterns_ode()
plot(result$ts[ , "time"], result$ts[ , "S"], xlab='Time', ylab='Number Susceptible', type='l')
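The help page lists the compartments and rates but not the equations themselves. Below is a rough Python reconstruction of the S, P, A, I, R, D system using forward Euler; the terms are assumptions inferred from the argument descriptions (not DSAIDE's source), with a 12-month seasonal forcing as described in the introduction. With the default parameters (bI = 0.001, S = 1000, gI = 0.5, so a basic reproduction number of about 2) it produces a single epidemic that burns out well before tmax.

```python
import math

def simulate_idpatterns(S=1000.0, P=1.0, bP=0.0, bA=0.0, bI=0.001,
                        s=0.0, gP=0.5, gA=0.5, gI=0.5, f=0.0, d=0.0,
                        w=0.0, m=0.0, n=0.0, tmax=300.0, dt=0.01):
    """Hypothetical reconstruction of the ODE system, forward Euler."""
    A = I = R = D = 0.0
    t = 0.0
    while t < tmax:
        # Seasonal forcing with a 12-month period (model runs in months).
        season = 1.0 + s * math.sin(2.0 * math.pi * t / 12.0)
        force = season * (bP * P + bA * A + bI * I) * S
        dS = m + w * R - force - n * S
        dP = force - gP * P - n * P
        dA = f * gP * P - gA * A - n * A
        dI = (1.0 - f) * gP * P - gI * I - n * I
        dR = gA * A + (1.0 - d) * gI * I - w * R - n * R
        dD = d * gI * I
        S += dt * dS; P += dt * dP; A += dt * dA
        I += dt * dI; R += dt * dR; D += dt * dD
        t += dt
    return {"S": S, "P": P, "A": A, "I": I, "R": R, "D": D}

out = simulate_idpatterns()
print(out["S"], out["R"])  # susceptibles depleted, most hosts recovered
```

With m = n = d = 0 the total population is conserved, which makes a convenient sanity check on the implementation.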
https://stats.stackexchange.com/questions/458940/compute-positive-predictive-value-ppv-in-a-example
# Compute positive predictive value (PPV) in an example

I am trying to figure out the stats in a paper, Assessment of Deep Generative Models for High-Resolution Synthetic Retinal Image Generation of Age-Related Macular Degeneration.

Researchers gave a specialist a set of images, told them the set consists of real images and synthetic images (simply, some kind of fake images), and asked them to distinguish the real ones from the fake.

Assume there are 100 real images provided in the test and the specialist tagged 65 of them as "real" and 35 of them as "fake".

So, TP = 65, FP = 35, and

$$\mathrm{PPV} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}} = \frac{65}{100}$$

---

No, that is not correct.

"Positive" and "negative" refer to the ground truth, and "false" and "true" to whether the classification accords with the ground truth. In the present case, real images are "positive", and fake ones are "negative". Thus, $\text{FP}$ is the number of images the specialist tags as real, but which are actually fake.

The number 35 you have is the number that the specialist tagged as fake ("negative") but which aren't, so this is the number $\text{FN}$ of false negatives, not false positives.

You can't calculate the $\text{PPV}$ from the data you present here, since you can't calculate $\text{FP}$.

---

Let's start off with writing down the mathematical definitions for the statistical measures of interest to us. From there we can work our way up to an example.

Sensitivity measures the percentage of true positives that are correctly identified as being positive.

Specificity measures the percentage of true negatives that are correctly identified as being negative.

The positive predictive value (PPV) or P(D|+) is the probability that the subject has the disease given that the test is positive. To calculate PPV, we will need the probability of a positive test result given disease, P(+|D), and its counterpart given no disease, P(+|D^c).

Diagnostic likelihood ratios measure the post-test odds (after the test) compared to the pre-test odds (before the test) of either having the disease or not.

OK, so we've got the definitions down; let's work through an example.

```
At the moment there are approximately 8,000 people infected with COVID-19
living in the city of San Francisco. With a population of ~ 1 million people,
calculate both the DLR+ and PPV. How can we interpret these results? Assume a
pharmaceutical company has developed an antibody (Ab) test with a sensitivity
of 93 percent and a specificity of 99 percent.
```

We first need to calculate the prevalence of disease, P(D) = 8,000/1,000,000 = .008, and using P(D) we can calculate the probability of no disease, P(D^c) = 1 − P(D) = .992.

Now, from the PPV formula, we need to know the probability of a person testing positive for COVID-19 even though they actually don't have it: P(+|D^c) = 1 − P(−|D^c) = 1 − specificity = .01.

PPV = (.93 × .008) / (.93 × .008 + .01 × .992) ≈ .43

DLR+ = Sensitivity / (1 − Specificity) = .93 / (1 − .99) = 93

So what do these numbers actually mean? How can we interpret DLR+ and PPV?

1. The PPV calculation suggests that after testing positive for the disease, we actually only have a ~43% chance of actually having the disease. Interesting, right?

2. The DLR+ tells us that a positive test result increases the post-test odds of disease by 93X compared to the pre-test odds. The hypothesis of disease is 93X more supported by the data than a hypothesis of no disease.

Hope this helps. A final note: in reality, there are a lot of asymptomatic people with COVID-19 who don't get tested but are nonetheless positive (more true positives). What changes in the problem above?
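The two quantities in the worked example reduce to a few lines of arithmetic. A small Python sketch (the function names are mine) reproducing the numbers:

```python
def ppv(sensitivity, specificity, prevalence):
    """P(D|+) via Bayes' rule: true-positive mass over all positives."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

def dlr_pos(sensitivity, specificity):
    """Positive diagnostic likelihood ratio."""
    return sensitivity / (1.0 - specificity)

# The San Francisco COVID-19 example from the answer above.
print(round(ppv(0.93, 0.99, 0.008), 2))  # 0.43
print(round(dlr_pos(0.93, 0.99)))        # 93
```

Note how strongly the low prevalence (0.8%) drags the PPV down even with an accurate test; rerunning `ppv` with a higher prevalence shows the dependence directly.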
https://fraser.stlouisfed.org/title/g16-retail-furniture-report-1067/june-7-1955-41336/fulltext
# Full text of G.16 Retail Furniture Report : June 7, 1955

The full text on this page is automatically extracted from the file linked above and may contain errors and inconsistencies.

```
BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM
G.16                                                      June 7, 1955

RETAIL FURNITURE REPORT FOR APRIL 1955

Instalment accounts outstanding at furniture stores remained practically
unchanged during April. Balances at the end of the month were 1 per cent
below the March level, but 1 per cent above a year ago. Collections during
the month amounted to an estimated 12 per cent of first-of-month balances,
1 point below the March collection ratio and the same as April of last
year.

Sales of reporting furniture stores increased 2 per cent from March to
April, reflecting an increase of 5 per cent in instalment sales.
Charge-account sales were unchanged while cash sales were down 2 per cent.
Sales of each type continued above a year ago. The first four months of
this year compared with the corresponding period of last year show
increases of 5 per cent for cash sales, 8 per cent for instalment sales,
and 10 per cent for charge-account sales.

The retail value of furniture store inventories increased 3 per cent
during April, but at the month end was 1 per cent below a year earlier.
At the April rate of sales, stocks on hand amounted to about a 5 months'
supply.

Furniture Store Statistics for April 1955
                                          Percentage change from:
Item                                    Month   Year   Jan.-Apr. 1954 to
                                        ago     ago    Jan.-Apr. 1955
Net sales during month
  Total                                 + 2     + 7        + 7
  Cash                                  - 2     + 1        + 5
  Instalment                            + 5     +10        + 8
  Charge account                          0     +10        +10
Accounts receivable, at end of month
  Total                                   0     + 4        xxx
  Instalment                            - 1     + 1        xxx
  Charge account                        + 1     +14        xxx
Inventories, end of month,
  at retail value                       + 3     - 1        xxx

                                        Apr.    Mar.       Apr.
                                        1955    1955       1954
Collection ratios on instalment
  accounts 1/                            12      13         12

1/ Collections during month as percentage of accounts outstanding at
   beginning of month.

RETAIL FURNITURE STORES - APRIL 1955

Sales by Type of Transaction
(Percentage changes)

Federal Reserve districts: Boston, New York, Cleveland, Richmond, Atlanta,
Chicago, St. Louis, Minneapolis, Kansas City, Dallas, San Francisco, and
U.S. Total. Columns: Total net sales, Cash sales, Instalment sales, and
Charge-account sales, each compared with a month ago and a year ago.
[Individual district figures are not reliably legible in this extraction.]

Cumulative Sales by Type of Transaction, Instalment
Accounts Receivable, and Inventories
(Percentage changes)

Same districts as above. Columns: Cumulative sales Jan.-Apr., change from
1954 to 1955 (Total, Cash, Instalment, Charge-account); Instalment
receivables at end of month (month ago, year ago); and Inventories at end
of month, at retail value (month ago, year ago).
[Individual district figures are not reliably legible in this extraction.]

n.a. -- Not available.
```
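Footnote 1/ defines the collection ratio as a single division. A tiny sketch of that arithmetic (the dollar figures below are made up, since the report publishes only the ratio itself):

```python
def collection_ratio(collections, beginning_balance):
    """Collections during the month as a percentage of instalment
    accounts outstanding at the beginning of the month (footnote 1/)."""
    return 100.0 * collections / beginning_balance

# Hypothetical store: $120,000 collected against $1,000,000 outstanding
# at the start of the month reproduces the 12 per cent April 1955 ratio.
print(collection_ratio(120_000, 1_000_000))  # 12.0
```
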
http://www.numbersaplenty.com/30230100121
Search a number

## 30230100121

Base representation:
Base 2 (bin): 11100001001110110101011100010011001
Base 3: 2220000210021012101211
Base 4: 130021312223202121
Base 5: 443402401200441
Base 6: 21515411543121
Base 7: 2120062416454
Base 8 (oct): 341166534231
Base 9: 86023235354
Base 10: 30230100121
Base 11: 11903115721
Base 12: 5a37bbaaa1
Base 13: 2b09c5ab10
Base 14: 166ac2449b
Base 15: bbde2ba81
Base 16 (hex): 709dab899

30230100121 has 8 divisors (see below), whose sum is σ = 33435371144. Its totient is φ = 27150526080.

The previous prime is 30230100107. The next prime is 30230100143. The reversal of 30230100121 is 12100103203.

Adding to 30230100121 its reverse (12100103203), we get a palindrome (42330203324).

30230100121 = T40405 + T40406 + ... + T40441.

It can be written as a sum of positive squares in 4 ways, for example, as 10803315721 + 19426784400 = 103939^2 + 139380^2.

It is a sphenic number, since it is the product of 3 distinct primes.

It is a cyclic number.

It is not a de Polignac number, because 30230100121 - 2^31 = 28082616473 is a prime.

It is a Harshad number since it is a multiple of its sum of digits (13).

It is a Duffinian number.

It is not an unprimeable number, because it can be changed into a prime (30230106121) by changing a digit.

It is a polite number, since it can be written in 7 ways as a sum of consecutive naturals, for example, 31423740 + ... + 31424701.

It is an arithmetic number, because the mean of its divisors is an integer number (4179421393).

Almost surely, 2^30230100121 is an apocalyptic number.

It is an amenable number.

30230100121 is a deficient number, since it is larger than the sum of its proper divisors (3205271023).

30230100121 is a wasteful number, since it uses fewer digits than its factorization.

30230100121 is an evil number, because the sum of its binary digits is even.

The sum of its prime factors is 62848491.

The product of its (nonzero) digits is 36, while the sum is 13.

The spelling of 30230100121 in words is "thirty billion, two hundred thirty million, one hundred thousand, one hundred twenty-one".
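Trial division is fast enough here to check the sphenic claim, the quoted sum of prime factors, and the totient directly (the factorization 13 × 37 × 62848441 is consistent with the page's σ, φ, and Harshad-by-13 statements):

```python
def prime_factors(n):
    """Factor n by trial division; fine for n around 3e10."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

n = 30230100121
fs = prime_factors(n)
print(fs)                  # [13, 37, 62848441]
print(len(set(fs)) == 3)   # True -> sphenic: product of 3 distinct primes
print(sum(fs))             # 62848491, matching the page

# Euler totient for a square-free n: product of (p - 1) over its primes.
phi = 1
for p in set(fs):
    phi *= p - 1
print(phi)                 # 27150526080, matching the quoted totient
```
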