Dataset columns:
URL: string, lengths 15 to 1.68k
text_list: sequence, lengths 1 to 199
image_list: sequence, lengths 1 to 199
metadata: string, lengths 1.19k to 3.08k
https://au.mathworks.com/help/dsp/ug/minimax-fir-filter-design.html
[ "# Minimax FIR Filter Design\n\nThis example shows how to use some of the key features of the generalized Remez FIR filter design function. This function provides all the functionality included in `firpm` plus many additional features showcased here.\n\n### Weighted-Chebyshev Design\n\nThe following is an illustration of the weighted-Chebyshev design. This example shows the compatibility of `firgr` with `firpm`.\n\n```N = 22; % Filter order F = [0 0.4 0.5 1]; % Frequency vector A = [1 1 0 0]; % Magnitude vector W = [1 5]; % Weight vector b = firgr(N,F,A,W); fvtool(b,1)```", null, "The following is a weighted-Chebyshev design where a type 4 filter (odd-order, asymmetric) has been explicitly specified.\n\n```N = 21; % Filter order F = [0 0.4 0.5 1]; % Frequency vector A = [0 0 1 1]; % Magnitude vector W = [2 1]; % Weight vector b = firgr(N,F,A,W,'4'); fvtool(b,1)```", null, "### \"Least-Squares-Like\" Design\n\nThe following illustrates a \"least-squares-like\" design. A user-supplied frequency-response function (`taperedresp.m`) is used to perform the error weighting.\n\n```N = 53; % Filter order F = [0 0.3 0.33 0.77 0.8 1]; % Frequency vector fresp = {@taperedresp, [0 0 1 1 0 0]}; % Frequency response function W = [2 2 1]; % Weight vector b = firgr(N,F,fresp,W); fvtool(b,1)```", null, "### Filter Designed for Specific Single-Point Bands\n\nThis is an illustration of a filter designed for specified single-point bands. The frequency points f = 0.25 and f = 0.55 are single-band points. These points have a gain that approaches zero.\n\nThe other band edges are normal.\n\n```N = 42; % Filter order F = [0 0.2 0.25 0.3 0.5 0.55 0.6 1]; % Frequency vector A = [1 1 0 1 1 0 1 1]; % Magnitude vector S = {'n' 'n' 's' 'n' 'n' 's' 'n' 'n'}; b = firgr(N,F,A,S); fvtool(b,1)```", null, "### Filter Designed for Specific In-Band Value\n\nHere is an illustration of a filter designed for an exactly specified in-band value. The value is forced to be exactly the specified value of 0.0 at f = 0.06.\n\nThis could be used for 60 Hz rejection (with Fs = 2 kHz). The band edge at 0.055 is indeterminate since it should abut the next band.\n\n```N = 82; % Filter order F = [0 0.055 0.06 0.1 0.15 1]; % Frequency vector A = [0 0 0 0 1 1]; % Magnitude vector S = {'n' 'i' 'f' 'n' 'n' 'n'}; b = firgr(N,F,A,S); zerophase(b,1)```", null, "### Filter Design with Specific Multiple Independent Approximation Errors\n\nHere is an example of designing a filter using multiple independent approximation errors. This technique is used to directly design extra-ripple and maximal ripple filters. One of the interesting properties that these filters have is a transition region width that is locally minimal. Further, these designs converge very quickly in general.\n\n```N = 12; % Filter order F = [0 0.4 0.5 1]; % Frequency vector A = [1 1 0 0]; % Magnitude vector W = [1 1]; % Weight vector E = {'e1' 'e2'}; % Approximation errors b = firgr(N,F,A,W,E); fvtool(b,1)```", null, "### Extra-Ripple Bandpass Filter\n\nHere is an illustration of an extra-ripple bandpass filter having two independent approximation errors: one shared by the two passbands and the other for the stopband (in blue). 
For comparison, a standard weighted-Chebyshev design is also plotted (in green).\n\n```N = 28; % Filter order F = [0 0.4 0.5 0.7 0.8 1]; % Frequency vector A = [1 1 0 0 1 1]; % Magnitude vector W = [1 1 2]; % Weight vector E = {'e1','e2','e1'}; % Approximation errors b1 = firgr(N,F,A,W,E); b2 = firgr(N,F,A,W); fvtool(b1,1,b2,1)```", null, "### Designing an In-Band-Zero Filter Using Three Independent Errors\n\nWe'll now re-do our in-band-zero example using three independent errors.\n\nNote: It is sometimes necessary to use independent approximation errors to get designs with forced in-band values to converge. This is because the approximating polynomial could otherwise be come very underdetermined. The former design is displayed in green.\n\n```N = 82; % Filter order F = [0 0.055 0.06 0.1 0.15 1]; % Frequency vector A = [0 0 0 0 1 1]; % Magnitude vector S = {'n' 'i' 'f' 'n' 'n' 'n'}; W = [10 1 1]; % Weight vector E = {'e1' 'e2' 'e3'}; % Approximation errors b1 = firgr(N,F,A,S,W,E); b2 = firgr(N,F,A,S); fvtool(b1,1,b2,1)```", null, "### Checking for Transition-Region Anomalies\n\nWith the `'check'` option, one is made aware of possible transition region anomalies in the filter that is being designed. Here is an example of a filter with an anomaly. The `'check'` option warns one of this anomaly: One also get a results vector `res.edgeCheck`. Any zero-valued elements in this vector indicate the locations of probable anomalies. The \"-1\" entries are for edges that were not checked (there can't be an anomaly at f = 0 or f = 1).\n\n```N = 44; % Filter order F = [0 0.3 0.4 0.6 0.8 1]; % Frequency vector A = [1 1 0 0 1 1]; % Magnitude vector b = firgr(N,F,A,'check');```\n```Warning: Probable transition-region anomalies. Verify with freqz. ```\n`fvtool(b,1)`", null, "### Determination of the Minimum Filter Order\n\nThe `firpm` algorithm repeatedly designs filters until the first iteration wherein the specifications are met. The specifications are met when all of the required constraints are met. By specifying `'minorder'`, `firpmord` is used to get an initial estimate. There is also `'mineven'` and `'minodd'` to get the minimum-order even-order or odd-order filter designs.\n\n```F = [0 0.4 0.5 1]; % Frequency vector A = [1 1 0 0]; % Magnitude vector R = [0.1 0.02]; % Deviation (ripple) vector b = firgr('minorder',F,A,R); zerophase(b,1)```", null, "### Differentiators and Hilbert Transformers\n\nWhile using the minimum-order feature, an initial estimate of the filter order can be made. If this is the case, then `firpmord` will not be used. This is necessary for filters that `firpmord` does not support, such as differentiators and Hilbert transformers as well as user-supplied frequency-response functions.\n\n```N = {'mineven',18}; % Minimum even-order, start order estimate at 18 F = [0.1 0.9]; % Frequency vector A = [1 1]; % Magnitude vector R = 0.1; % Deviation (ripple) b = firgr(N,F,A,R,'hilbert'); freqz(b,1,'whole')```", null, "### Design of an Interpolation Filter\n\nThis section illustrates the use of an interpolation filter for upsampling band-limited signals by an integer factor. Typically one would use `intfilt(r,l,alpha)` from the Signal Processing Toolbox™ to do this. 
However, `intfilt` does not give one as much flexibility in the design as does `firgr`.\n\n```N = 30; % Filter order F = [0 0.1 0.4 0.6 0.9 1]; % Frequency vector A = [4 4 0 0 0 0]; % Magnitude vector W = [1 100 100]; % Weight vector b = firgr(N,F,A,W); fvtool(b,1)```", null, "### A Comparison Between `firpm` and `intfilt`\n\nHere is a comparison made between a filter designed using `firpm` (blue) and a 30-th order filter designed using `intfilt` (green).\n\nNotice that by using the weighting function in `firpm`, one can improve the minimum stopband attenuation by almost 20 dB.\n\n```b2 = intfilt(4, 4, 0.4); fvtool(b,1,b2,1)```", null, "Notice that the equiripple attenuation throughout the second stopband is larger than the minimum stopband attenuation of the filter designed with `intfilt` by about 6 dB. Notice also that the passband ripple, although larger than that of the filter designed with `intfilt`, is still very small.\n\n### Design of a Minimum-Phase Lowpass Filter\n\nHere is an illustration of a minimum-phase lowpass filter.\n\n```N = 42; % Filter order F = [0 0.4 0.5 1]; % Frequency vector A = [1 1 0 0]; % Magnitude vector W = [1 10]; % Weight-constraint vector b = firgr(N,F,A,W, {64},'minphase');```\n\nThe pole/zero plot shows that there are no roots outside of the unit circle.\n\n`zplane(b,1)`", null, "" ]
[ null, "https://au.mathworks.com/help/examples/dsp/win64/MinimaxFIRFilterDesignExample_01.png", null, "https://au.mathworks.com/help/examples/dsp/win64/MinimaxFIRFilterDesignExample_02.png", null, "https://au.mathworks.com/help/examples/dsp/win64/MinimaxFIRFilterDesignExample_03.png", null, "https://au.mathworks.com/help/examples/dsp/win64/MinimaxFIRFilterDesignExample_04.png", null, "https://au.mathworks.com/help/examples/dsp/win64/MinimaxFIRFilterDesignExample_05.png", null, "https://au.mathworks.com/help/examples/dsp/win64/MinimaxFIRFilterDesignExample_06.png", null, "https://au.mathworks.com/help/examples/dsp/win64/MinimaxFIRFilterDesignExample_07.png", null, "https://au.mathworks.com/help/examples/dsp/win64/MinimaxFIRFilterDesignExample_08.png", null, "https://au.mathworks.com/help/examples/dsp/win64/MinimaxFIRFilterDesignExample_09.png", null, "https://au.mathworks.com/help/examples/dsp/win64/MinimaxFIRFilterDesignExample_10.png", null, "https://au.mathworks.com/help/examples/dsp/win64/MinimaxFIRFilterDesignExample_11.png", null, "https://au.mathworks.com/help/examples/dsp/win64/MinimaxFIRFilterDesignExample_12.png", null, "https://au.mathworks.com/help/examples/dsp/win64/MinimaxFIRFilterDesignExample_13.png", null, "https://au.mathworks.com/help/examples/dsp/win64/MinimaxFIRFilterDesignExample_14.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81551766,"math_prob":0.9928677,"size":7311,"snap":"2023-14-2023-23","text_gpt3_token_len":2174,"char_repetition_ratio":0.15368824,"word_repetition_ratio":0.20906201,"special_character_ratio":0.31103817,"punctuation_ratio":0.16801493,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99572784,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-08T22:48:27Z\",\"WARC-Record-ID\":\"<urn:uuid:d261ac65-27ec-4799-a6b1-238e635216ef>\",\"Content-Length\":\"92266\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4c74f576-2996-4726-987b-a552a9d85286>\",\"WARC-Concurrent-To\":\"<urn:uuid:a110b005-7a21-4d9a-a770-dd4fbf987491>\",\"WARC-IP-Address\":\"104.68.243.15\",\"WARC-Target-URI\":\"https://au.mathworks.com/help/dsp/ug/minimax-fir-filter-design.html\",\"WARC-Payload-Digest\":\"sha1:NENR2LYYK5JK7FP6FA6P2JQIAGWPCRA5\",\"WARC-Block-Digest\":\"sha1:D44VROXNBP6W2DJUVSIQ4IPTYKMLFI5H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224655143.72_warc_CC-MAIN-20230608204017-20230608234017-00401.warc.gz\"}"}
https://encyclopedia2.thefreedictionary.com/absolute+retract
[ "absolute retract\n\nabsolute retract\n\n[¦ab·sə‚lüt ri′trakt]\n(mathematics)\nA topological space, A, such that, if B is a closed subset of another topological space, C, and if A is homeomorphic to B, then B is a retract of C.\nMcGraw-Hill Dictionary of Scientific & Technical Terms, 6E, Copyright © 2003 by The McGraw-Hill Companies, Inc.\nReferences in periodicals archive ?\nNote that any absolute retract (AR-space) is an AMR-space and any absolute neighbourhood retract (ANR-space) is an ANMR-space.\nA space X is called an [R.sub.[delta]]-set if there exists a decreasing sequence {[X.sub.n]} of compact absolute retracts such that X = [[intersection].sub.n] [X.sub.n].\nsets which are homologically equivalent to one point spaces) contains, besides standard convex sets, more general absolute retracts, R-sets and contractible sets (i.e.\nSite: Follow: Share:\nOpen / Close" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7955953,"math_prob":0.92920244,"size":330,"snap":"2022-05-2022-21","text_gpt3_token_len":96,"char_repetition_ratio":0.11349693,"word_repetition_ratio":0.0,"special_character_ratio":0.24545455,"punctuation_ratio":0.16176471,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96768254,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-28T20:43:49Z\",\"WARC-Record-ID\":\"<urn:uuid:4e358b3d-048d-4cb6-b878-b766f7429380>\",\"Content-Length\":\"41469\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:025ca5ff-968d-437f-b5e3-e6a5e49025e8>\",\"WARC-Concurrent-To\":\"<urn:uuid:09ee550c-1910-4b99-bef7-d41c2e7fb6b4>\",\"WARC-IP-Address\":\"91.204.210.225\",\"WARC-Target-URI\":\"https://encyclopedia2.thefreedictionary.com/absolute+retract\",\"WARC-Payload-Digest\":\"sha1:DV7SVIFXQ7Q7T7PKEJFUXOQERT2HH2YI\",\"WARC-Block-Digest\":\"sha1:6LADLZWYBTJCNQKYBM2LCQT4GP33ZJYU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320306335.77_warc_CC-MAIN-20220128182552-20220128212552-00498.warc.gz\"}"}
https://www.onlinemath4all.com/fractions-in-mathematics.html
[ "# FRACTIONS IN MATHEMATICS\n\nIn simple words, a fraction is the part of a whole.\n\nIn fractions (for example 17/23), the number above the line (17) is called numerator and the number below the line (23) is called denominator.\n\nThe numerator represents a number of equal parts, and the denominator, which cannot be zero, indicates how many of those parts make up a unit or a whole.\n\nFor example, in the fraction 3/4, the numerator, 3, tells us that the fraction represents 3 equal parts, and the denominator, 4, tells us that 4 parts make up a whole.\n\nThe picture given below illustrates the fraction 3/4.", null, "## Different Types of Fractions\n\nWithin the world of fractions, we do have several types and ways of writing them. Let's discuss these now.\n\nProper fraction :\n\nA fraction is called a proper fraction if its\n\nDenominator > Numerator.\n\nExample : 3/4, 1/2, 9/10, 5/6\n\nImproper fraction :\n\nA fraction is called an improper fraction if its\n\nNumerator > Denominator.\n\nExample : 5/4, 6/5, 41/30, 51/25\n\nMixed fraction :\n\nA fraction consisting of a natural number and a proper fraction is called a mixed fractions.\n\nExample : 2 3/4, 1 4/5, 5 1/7\n\nLike Fractions:\n\nIn two or more fractions, the denominators (bottom numbers) are same, they are called as like fractions.\n\nExamples : 3/5 , 6/5, 2/5, 7/5\n\nIn the above fractions, all the denominators are same. That is 5.\n\nUnlike Fractions :\n\nIn two or more fractions, the denominators (bottom numbers) are different, they are called as unlike fractions.\n\nExamples : 3/5 , 6/7, 2/9, 7/2\n\nIn the above fractions, all the denominators are different. They are 5, 7, 9 and 2.\n\n## Converting Improper Fraction to Mixed Fraction\n\nThe picture shown below illustrates how to convert an improper fraction to mixed fraction.", null, "## Converting Mixed Fraction to Improper Fraction\n\nThe picture shown below illustrates how to convert a mixed fraction to improper fraction.", null, "## Addition and Subtraction of Fractions with Same Denominator\n\nExample 1 :\n\nSimplify : 2/5 + 3/5\n\nSolution :\n\nHere, for both the fractions, we have the same denominator, we have to take only one denominator and add the numerators.\n\nThen, we get\n\n2/5 + 3/5  =  (2 + 3) / 5\n\n2/5 + 3/5  =  5/5\n\n2/5 + 3/5  =  1\n\nExample 2 :\n\nSimplify : 7/5 - 3/5\n\nSolution :\n\nHere, for both the fractions, we have the same denominator, we have to take only one denominator and subtract the numerators.\n\nThen, we get\n\n7/5 - 3/5  =  (7-3) / 5\n\n7/5 - 3/5  =  4/5\n\n## Addition and Subtraction of Fractions with Different Denominators\n\nHere, we explain two methods to add two fractions with different denominators.\n\n1)  Cross-multiplication method\n\n2)  L.C.M method\n\nCross-Multiplication Method :\n\nIf the denominators of the fractions are co-prime or relatively prime, we have to apply this method.\n\nFro example, let us consider the two fractions 1/8,  1/3.\n\nIn the above two fractions, denominators are 8 and 3.\n\nFor 8 and 3, there is no common divisor other than 1. 
So 8 and 3 are co-prime.\n\nHere we have to apply the cross-multiplication method to add the two fractions 1/8 and 1/3 as given below.", null, "L.C.M Method :\n\nIf the denominators of the fractions are not co-prime (there is a common divisor other than 1), we have to apply this method.\n\nFor example, let us consider the two fractions 5/12, 1/20.\n\nIn the above two fractions, the denominators are 12 and 20.\n\nFor 12 and 20, if there is at least one common divisor other than 1, then 12 and 20 are not co-prime.\n\nFor 12 and 20, we have the following common divisors other than 1:\n\n2 and 4\n\nSo 12 and 20 are not co-prime.\n\nIn the next step, we have to find the L.C.M (least common multiple) of 12 and 20.\n\n12 = 2² ⋅ 3\n\n20 = 2² ⋅ 5\n\nWhen we decompose 12 and 20 into prime numbers, we find 2, 3 and 5 as prime factors of 12 and 20.\n\nTo get the L.C.M of 12 and 20, we have to take 2, 3 and 5 with the maximum powers found above.\n\nSo, the L.C.M of 12 and 20 is\n\n= 2² ⋅ 3 ⋅ 5\n\n= 4 ⋅ 3 ⋅ 5\n\n= 60\n\nNow we have to make the denominators of both fractions equal to 60 and add the two fractions 5/12 and 1/20 as given below.", null, "Note :\n\nWe have to do the same process for subtraction of two fractions with different denominators.\n\n## Multiply a Fraction by a Whole Number\n\nTo multiply a proper or improper fraction by a whole number, first multiply the whole number by the numerator of the fraction, keeping the denominator the same.\n\nFor example,\n\n2 ⋅ 3/5 = 6/5\n\n3 ⋅ 7/11 = 21/11\n\nTo multiply a mixed fraction by a whole number, first convert the mixed fraction to an improper fraction and proceed as explained above.\n\nFor example,\n\n4 ⋅ 3 4/7 = 4 ⋅ 25/7\n\n4 ⋅ 3 4/7 = (4 ⋅ 25)/7\n\n4 ⋅ 3 4/7 = 100/7\n\n4 ⋅ 3 4/7 = 14 2/7\n\n## Multiply a Fraction by a Fraction\n\nTo multiply a proper or improper fraction by another proper or improper fraction, we have to multiply the numerators and the denominators.\n\nFor example,\n\n2/3 ⋅ 4/5 = 8/15\n\n1/3 ⋅ 7/11 = 7/33\n\n## Divide a Whole Number by a Fraction\n\nTo divide a whole number by any fraction, multiply the whole number by the reciprocal of the fraction.\n\nFor example,\n\n6 ÷ 2/5 = 6 ⋅ 5/2\n\n6 ÷ 2/5 = (6 ⋅ 5)/2\n\n6 ÷ 2/5 = (3 ⋅ 5)/1\n\n6 ÷ 2/5 = 15\n\n## Divide a Fraction by a Whole Number\n\nTo divide a fraction by a whole number, we have to multiply the denominator of the fraction by the whole number and simplify, if possible.\n\nFor example,\n\n2/5 ÷ 6 = 2/(5 ⋅ 6)\n\n2/5 ÷ 6 = 1/(5 ⋅ 3)\n\n2/5 ÷ 6 = 1/15\n\n## Divide a Whole Number by a Mixed Fraction\n\nWhile dividing a whole number by a mixed fraction, first convert the mixed fraction into an improper fraction and proceed.\n\nFor example,\n\n6 ÷ 3 4/5 = 6 ÷ 19/5\n\n6 ÷ 3 4/5 = 6 ⋅ 5/19\n\n6 ÷ 3 4/5 = (6 ⋅ 5)/19\n\n6 ÷ 3 4/5 = 30/19\n\n## Divide a Fraction by a Fraction\n\nTo divide a fraction by another fraction, multiply the first fraction by the reciprocal of the second fraction.\n\nFor example,\n\n1/5 ÷ 3/7 = 1/5 ⋅ 7/3\n\n1/5 ÷ 3/7 = (1 ⋅ 7) / (5 ⋅ 3)\n\n1/5 ÷ 3/7 = 7/15" ]
[ null, "data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 279 279'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 518 228'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 466 263.713636363636'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 660 282'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 662 489'%3E%3C/svg%3E", null, "data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 510 99'%3E%3C/svg%3E", null, "https://www.onlinemath4all.com/objects/xrss.png.pagespeed.ic.nVmUyyfqP7.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8909383,"math_prob":0.986602,"size":5473,"snap":"2023-40-2023-50","text_gpt3_token_len":1800,"char_repetition_ratio":0.19637959,"word_repetition_ratio":0.13302326,"special_character_ratio":0.34058103,"punctuation_ratio":0.13195549,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99968624,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-11T06:23:07Z\",\"WARC-Record-ID\":\"<urn:uuid:377ce44c-9c90-4b75-a4e6-c7ab9849b27c>\",\"Content-Length\":\"51824\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:05ad2c8e-0409-4076-ac02-7ff63508be39>\",\"WARC-Concurrent-To\":\"<urn:uuid:7498691c-b1a5-4927-8cc2-b4b258a41147>\",\"WARC-IP-Address\":\"173.247.218.242\",\"WARC-Target-URI\":\"https://www.onlinemath4all.com/fractions-in-mathematics.html\",\"WARC-Payload-Digest\":\"sha1:IATVMJ53ZANUQIWPMDP45K4HIOW6I7YU\",\"WARC-Block-Digest\":\"sha1:A7Z2UZG7TG44ZIB23QNHV6HRVHBC7VYL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679103558.93_warc_CC-MAIN-20231211045204-20231211075204-00422.warc.gz\"}"}
https://search.r-project.org/CRAN/refmans/EFA.dimensions/html/CONGRUENCE.html
[ "CONGRUENCE {EFA.dimensions} R Documentation\n\n## Factor solution congruence\n\n### Description\n\nAligns two factor loading matrices and computes the factor solution congruence and the root mean square residual.\n\n### Usage\n\nCONGRUENCE(target, loadings, verbose)\n\n### Arguments\n\n target The target loading matrix. loadings The loading matrix that will be aligned with the target. verbose Should detailed results be displayed in console? TRUE (default) or FALSE\n\n### Details\n\nThe function first searches for the alignment of the factors from the two loading matrices that has the highest factor solution congruence. It then aligns the factors in \"loadings\" with the factors in \"target\" without changing the loadings. The alignment is based solely on the positions and directions of the factors. The function then produces the Tucker-Wrigley-Neuhaus factor solution congruence coefficient as an index of the degree of similarity between between the aligned loading matrices (see Guadagnoli & Velicer, 1991; and ten Berge, 1986, for reviews).\n\n### Value\n\nA list with the following elements:\n\n rcBefore The factor solution congruence before factor alignment rcAfter The factor solution congruence after factor alignment rcFactors The congruence for each factor rmsr The root mean square residual residmat The residual matrix loadingsNew The aligned loading matrix\n\n### Author(s)\n\nBrian P. O'Connor\n\n### References\n\nGuadagnoli, E., & Velicer, W. (1991). A comparison of pattern matching indices. Multivariate Behavior Research, 26, 323-343.\n\nten Berge, J. M. F. (1986). Some relationships between descriptive comparisons of components from different studies. Multivariate Behavioral Research, 21, 29-40.\n\n### Examples\n\n\n# Rosenberg Self-Esteem scale items\nrotate='VARIMAX', verbose=FALSE)\n\ntarget <- PCA(data_RSE[151:300,], corkind='pearson', Nfactors = 3,\nrotate='VARIMAX', verbose=FALSE)\nCONGRUENCE(target = target$loadingsV, loadings = loadings$loadingsV, verbose=TRUE)\n\n# NEO-PI-R scales\nCONGRUENCE(target$loadingsV, loadings$loadingsV, verbose=TRUE)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6653441,"math_prob":0.7821465,"size":2197,"snap":"2022-05-2022-21","text_gpt3_token_len":584,"char_repetition_ratio":0.15686275,"word_repetition_ratio":0.048275862,"special_character_ratio":0.24214838,"punctuation_ratio":0.17438692,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9700685,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-29T05:29:37Z\",\"WARC-Record-ID\":\"<urn:uuid:fa2b8b67-95fa-4d40-a0de-a22cf50910cb>\",\"Content-Length\":\"4823\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:168a409e-e804-4cff-a09a-830087dc6189>\",\"WARC-Concurrent-To\":\"<urn:uuid:511d8446-242a-4f64-a79f-0add921bbffa>\",\"WARC-IP-Address\":\"137.208.57.46\",\"WARC-Target-URI\":\"https://search.r-project.org/CRAN/refmans/EFA.dimensions/html/CONGRUENCE.html\",\"WARC-Payload-Digest\":\"sha1:A6MWFPBYMWYIQVPEHM3GE4UR4DF3SFE4\",\"WARC-Block-Digest\":\"sha1:BPJADJTKNQXKSTKW7A6O3UARX5O3NONB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652663039492.94_warc_CC-MAIN-20220529041832-20220529071832-00369.warc.gz\"}"}
https://www.brightstorm.com/math/calculus/the-definite-integral/the-definite-integral-problem-2/
[ "", null, "###### Norm Prokup\n\nCornell University\nPhD. in Mathematics\n\nNorm was 4th at the 2004 USA Weightlifting Nationals! He still trains and competes occasionally, despite his busy schedule.\n\n##### Thank you for watching the video.\n\nTo unlock all 5,300 videos, start your free trial.\n\n# The Definite Integral - Problem 2\n\nNorm Prokup", null, "###### Norm Prokup\n\nCornell University\nPhD. in Mathematics\n\nNorm was 4th at the 2004 USA Weightlifting Nationals! He still trains and competes occasionally, despite his busy schedule.\n\nShare\n\nWhen a graph is a curve, find the definite integral of the function to find the area under the curve. So, antidifferentiate the function. The definite integral from points a to b is the antiderivative at b minus the antiderivative at a. For example, if looking at the function is f(x)=x2 from x=1 to x=4, the antiderivative of f(x) is x3/3. The area under the curve is 43/3 - 13/3 = 64/3 - 1/3 = 63/3 = 21.\n\nLet's do another definite integral problem. This one is going to require a fact from Geometry. What's the area under a parabolic arch?\n\nSo imagine I construct a parabolic arch which has a shape of a parabola, and a flat base. It's kind of like a triangle except that the sides are curved. If this were a triangle, the area would be 1/2 base times height. Archimedes found that for a parabolic arch, the area is 2/3 of the base times height. So we're going to use this in the next problem.\n\nBefore I take you to the next problem, we need to review one fact about definite integrals. Now I've made the point of saying that, when the function that we're integrating is greater than or equal to 0, the definite integral gives me the exact area under the curve. However, when the function is less than or equal to 0, when the graph is below the x axis, the definite integral doesn't give me the area. It gives me the opposite of the area. Now you know that area is always positive. So this means that definite integrals are going to have negative values when you're integrating a curve that's below the x axis. So it's something to remember.\n\nSo here is the problem. The graph of y equals the quantity x minus 3 squared is shown below, this graph here. Evaluate first, the integral from 0 to 6 of that function. Well, let's do that.\n\nFirst let's observe what area that represents. From 0 to 6, the interval's from here to here. Since its function is non-negative on this interval, the integral is just going to equal the area under the curve. So all we have to do is figure out what this area is. Of course that's not straightforward, because this is not a parabolic arch. However, this is a parabolic arch in here. So what we can do is find the area of this rectangle, and subtract the area of the parabolic arch. So let me write that down; equals area of rectangle minus area of I'll write para; parabolic arch.\n\nNow the area of the rectangle is going to be the base times height. The base is 6, the height (this is not to the same scale), but the height is going to be 9. So 6 times 9 minus the area of the parabolic arch. Remember that formula; 2/3 base times height. So 2/3 (and this is the base, the flat part) the base has a length of 6. So 2/3, 6 times, (and the height is actually the same as the height of the rectangle was) it's going to be 9, the y value here. So 6 times 9. So this is 54 minus 2/3 of 54, is going to be 1/3 of 54, which is 18. This will be 54 minus 36, so that's 18. Very easy if you use subtractions. 
You can use the fact that the area under a parabolic arch is 2/3 base times height.\n\nLet's take a look at this slightly harder problem. The integral from 0 to 6 of this function. It's our original function minus 9. So imagine taking this red curve, and subtracting 9. If we subtract 9, these two points are going to be down here. This point will be 9 units lower, so let me translate it downwards. These two points go here. This point goes nine units down, and the graph will look something like this. So this would be the graph of this function. Let's call it g(x). So this is y equals g(x).\n\nNow the integral from 0 to 6 of g(x), where would that be? Well, because this function lies entirely below the x axis, the value of the definite integral is going to be the opposite of the area. This is the fact that we had before. So if I calculate the area, and put a minus in front of it, that's the answer.\n\nWell, what would this area be? It's the same parabolic arch we had before. It's going to be 2/3 the base of 6 times the height of 9. I put a minus in front of that. So minus 2/3 times the base of 6 times the height of 9. This is -36.\n\nSo the lesson to learn from this is, when you're integrating a function that's below the x axis, expect to get an answer that's negative. When you integrate a function that's entirely above the x axis, expect to get an answer that's positive.\n\nIn the first case, where the curve is above the x axis, the integral exactly equals the area between the curve and the x axis. In the second case, where the curve is below the x axis, the integral equals the opposite: put a minus in front of the area between the curve and the x axis." ]
[ null, "https://d3a0jx1tkrpybf.cloudfront.net/img/teachers/teacher-3.png", null, "https://d3a0jx1tkrpybf.cloudfront.net/img/teachers/teacher-3.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9458386,"math_prob":0.98181,"size":4943,"snap":"2022-27-2022-33","text_gpt3_token_len":1263,"char_repetition_ratio":0.15853411,"word_repetition_ratio":0.09225875,"special_character_ratio":0.25288287,"punctuation_ratio":0.12079927,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99947566,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-01T23:42:08Z\",\"WARC-Record-ID\":\"<urn:uuid:82198626-6387-4bc4-8f25-30a8602585fc>\",\"Content-Length\":\"91934\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:81cd21e6-eedd-47db-a382-6ad5256e7d5a>\",\"WARC-Concurrent-To\":\"<urn:uuid:340f5e9c-0594-420f-b7eb-015a8acc3a6f>\",\"WARC-IP-Address\":\"54.83.15.47\",\"WARC-Target-URI\":\"https://www.brightstorm.com/math/calculus/the-definite-integral/the-definite-integral-problem-2/\",\"WARC-Payload-Digest\":\"sha1:NHAFNN7UZIQ35SPLRIDZZ6DHG7KPMSHL\",\"WARC-Block-Digest\":\"sha1:TULHNJXCAL4UFEHKDIAWURA7UB2OQCNE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103947269.55_warc_CC-MAIN-20220701220150-20220702010150-00095.warc.gz\"}"}
https://www.cut-the-knot.org/Generalization/MatrixGroups.shtml
[ "# Matrix Groups\n\nThe problem below was proposed in the Mathematics Magazine (September, 1959) and discussed in The American Mathematics Monthly (vol. 70, n. 4, Apr 1963, p. 427) by R. A. Rosenbaum:\n\n (I) The set of nonsingular n×n matrices such that the sum of the elements in each row of each matrix is 1 forms a group under multiplication.", null, "R. A. Rosenbaum points out that a generalization of that statement is of the kind that gets closer \"to the heart of the matter\" than the statement itself, by stripping away nonessentials and exposing the significant relationships. As he notes, the significant hypothesis is that the row-sums be constant - not necessarily 1, and suggests to consider another problem instead.\n\n (II) Let A = [aij ] be an m×n matrix, B = [bij ] be an n×p matrix, and C = A×B. If the row-sums of A are all equal to a and the row-sums of B are all equal to b, then the row-sums of C are also constant and are equal to ab. If the row-sums of B are all equal to b ≠ 0, and the row-sums of C are all equal to c, then the row-sums of A are also constant and are equal to c/b.\n\nThe reformulation has a\n\n### Corollary\n\nThe set of all nonsingular n×n matrices (over any field) such that, for any one matrix, the sum of the elements of each row is constant (but perhaps not the same constant for different matrices) forms a group under multiplication.\n\nIf in the generalization a, b, c are all taken to be 1, then, with m = n = p, the original problem comes out as another corollary.", null, "The proof of the generalization is indeed straightforward and, were a, b, c to be replaced by 1, would not differ of that for the original statement.\n\n ∑j cij = ∑j ∑k aik bkj = ∑k ∑j aik bkj = ∑k aik ∑j bkj = ∑k aik b = b ∑k aik = b × a = a × b.", null, "Professor W. McWorter has remarked that, for square n×n matrices, having constant row-sums means exactly having an eigenvector 1 = (1 1 ... 1)T:", null, "a11 ... a1n a21 ... a2n ... an1 ... ann", null, "", null, "1 1 ... 1", null, "=", null, "∑j a1j ∑j a2j ... ∑j anj", null, "=a", null, "1 1 ... 1", null, "Thus we can write A1 = a1, B1 = b1, and C1 = c1, implying\n\n c1 = C1 = AB1 = A(b1) = b A1 = b×a1 = a×b 1.\n\nSo, again, c = ab, as expected.\n\nAs Professor McWorter has observed, the latter derivation works just fine when vector 1 is replaced with any other vector v. More accurately, the following statement holds:\n\n (III) Let A = [aij ] and B = [bij ] be an n×n square matrices, and C = A×B. If A and B have a common eigenvector v with the eigenvalues a and b respectively, then v is also an eigenvector of C with the eigenvalue c = ab. If B and C have a common eigenvector v with the eigenvalues c and b (so that b ≠ 0 by definition), then v is also the eigenvector of A with the eigenvalue a = c/b.\n\nFurthermore, similar claims for the addition of matrices and the multiplication by a scalar also hold true.", null, "• Matrices Help Relationships\n• Eigenvalues of an incidence matrix\n• Addition of Vectors and Matrices\n• Multiplication of a Vector by a Matrix\n• Multiplication of Matrices\n• Eigenvectors by Inspection\n• Vandermonde matrix and determinant\n• When the Counting Gets Tough, the Tough Count on Mathematics\n• Merlin's Magic Squares\n•", null, "" ]
[ null, "https://www.cut-the-knot.org/gifs/tbow_sh.gif", null, "https://www.cut-the-knot.org/gifs/tbow_sh.gif", null, "https://www.cut-the-knot.org/gifs/tbow_sh.gif", null, "https://www.cut-the-knot.org/gifs/SYMB/LP.GIF", null, "https://www.cut-the-knot.org/gifs/SYMB/RP.GIF", null, "https://www.cut-the-knot.org/gifs/SYMB/LP.GIF", null, "https://www.cut-the-knot.org/gifs/SYMB/RP.GIF", null, "https://www.cut-the-knot.org/gifs/SYMB/LP.GIF", null, "https://www.cut-the-knot.org/gifs/SYMB/RP.GIF", null, "https://www.cut-the-knot.org/gifs/SYMB/LP.GIF", null, "https://www.cut-the-knot.org/gifs/SYMB/RP.GIF", null, "https://www.cut-the-knot.org/gifs/tbow_sh.gif", null, "https://www.cut-the-knot.org/gifs/tbow_sh.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90013325,"math_prob":0.9978195,"size":2138,"snap":"2020-34-2020-40","text_gpt3_token_len":624,"char_repetition_ratio":0.1049672,"word_repetition_ratio":0.02020202,"special_character_ratio":0.28531337,"punctuation_ratio":0.12790698,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99966836,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-13T03:09:48Z\",\"WARC-Record-ID\":\"<urn:uuid:5add374a-7270-4548-aa5b-204c8b7c3294>\",\"Content-Length\":\"18522\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1968d2cf-781f-40e3-8fe8-929992c8e693>\",\"WARC-Concurrent-To\":\"<urn:uuid:81471442-109c-46f9-8c9e-bf3fe5d835d5>\",\"WARC-IP-Address\":\"107.180.50.227\",\"WARC-Target-URI\":\"https://www.cut-the-knot.org/Generalization/MatrixGroups.shtml\",\"WARC-Payload-Digest\":\"sha1:EJNRZVVRWP3PSS4MBR2ZIWNMTUX7HROX\",\"WARC-Block-Digest\":\"sha1:ZJB2ZB4GPFNDDNHUN3NQXVBYDFDJRDOX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738950.61_warc_CC-MAIN-20200813014639-20200813044639-00269.warc.gz\"}"}
https://www.colorhexa.com/00e3fe
[ "# #00e3fe Color Information\n\nIn a RGB color space, hex #00e3fe is composed of 0% red, 89% green and 99.6% blue. Whereas in a CMYK color space, it is composed of 100% cyan, 10.6% magenta, 0% yellow and 0.4% black. It has a hue angle of 186.4 degrees, a saturation of 100% and a lightness of 49.8%. #00e3fe color hex could be obtained by blending #00ffff with #00c7fd. Closest websafe color is: #00ccff.\n\n• R 0\n• G 89\n• B 100\nRGB color chart\n• C 100\n• M 11\n• Y 0\n• K 0\nCMYK color chart\n\n#00e3fe color description : Pure (or mostly pure) cyan.\n\n# #00e3fe Color Conversion\n\nThe hexadecimal color #00e3fe has RGB values of R:0, G:227, B:254 and CMYK values of C:1, M:0.11, Y:0, K:0. Its decimal value is 58366.\n\nHex triplet RGB Decimal 00e3fe `#00e3fe` 0, 227, 254 `rgb(0,227,254)` 0, 89, 99.6 `rgb(0%,89%,99.6%)` 100, 11, 0, 0 186.4°, 100, 49.8 `hsl(186.4,100%,49.8%)` 186.4°, 100, 99.6 00ccff `#00ccff`\nCIE-LAB 82.961, -35.84, -25.934 45.353, 62.089, 103.355 0.215, 0.295, 62.089 82.961, 44.239, 215.889 82.961, -61.318, -36.737 78.797, -35.154, -22.611 00000000, 11100011, 11111110\n\n# Color Schemes with #00e3fe\n\n• #00e3fe\n``#00e3fe` `rgb(0,227,254)``\n• #fe1b00\n``#fe1b00` `rgb(254,27,0)``\nComplementary Color\n• #00fe9a\n``#00fe9a` `rgb(0,254,154)``\n• #00e3fe\n``#00e3fe` `rgb(0,227,254)``\n• #0064fe\n``#0064fe` `rgb(0,100,254)``\nAnalogous Color\n• #fe9a00\n``#fe9a00` `rgb(254,154,0)``\n• #00e3fe\n``#00e3fe` `rgb(0,227,254)``\n• #fe0064\n``#fe0064` `rgb(254,0,100)``\nSplit Complementary Color\n• #e3fe00\n``#e3fe00` `rgb(227,254,0)``\n• #00e3fe\n``#00e3fe` `rgb(0,227,254)``\n• #fe00e3\n``#fe00e3` `rgb(254,0,227)``\n• #00fe1b\n``#00fe1b` `rgb(0,254,27)``\n• #00e3fe\n``#00e3fe` `rgb(0,227,254)``\n• #fe00e3\n``#fe00e3` `rgb(254,0,227)``\n• #fe1b00\n``#fe1b00` `rgb(254,27,0)``\n• #009fb2\n``#009fb2` `rgb(0,159,178)``\n• #00b5cb\n``#00b5cb` `rgb(0,181,203)``\n• #00cce5\n``#00cce5` `rgb(0,204,229)``\n• #00e3fe\n``#00e3fe` `rgb(0,227,254)``\n• #19e6ff\n``#19e6ff` `rgb(25,230,255)``\n• #32e9ff\n``#32e9ff` `rgb(50,233,255)``\n• #4cecff\n``#4cecff` `rgb(76,236,255)``\nMonochromatic Color\n\n# Alternatives to #00e3fe\n\nBelow, you can see some colors close to #00e3fe. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #00feda\n``#00feda` `rgb(0,254,218)``\n• #00feef\n``#00feef` `rgb(0,254,239)``\n• #00f8fe\n``#00f8fe` `rgb(0,248,254)``\n• #00e3fe\n``#00e3fe` `rgb(0,227,254)``\n• #00cefe\n``#00cefe` `rgb(0,206,254)``\n• #00b9fe\n``#00b9fe` `rgb(0,185,254)``\n• #00a4fe\n``#00a4fe` `rgb(0,164,254)``\nSimilar Colors\n\n# #00e3fe Preview\n\nThis text has a font color of #00e3fe.\n\n``<span style=\"color:#00e3fe;\">Text here</span>``\n#00e3fe background color\n\nThis paragraph has a background color of #00e3fe.\n\n``<p style=\"background-color:#00e3fe;\">Content here</p>``\n#00e3fe border color\n\nThis element has a border color of #00e3fe.\n\n``<div style=\"border:1px solid #00e3fe;\">Content here</div>``\nCSS codes\n``.text {color:#00e3fe;}``\n``.background {background-color:#00e3fe;}``\n``.border {border:1px solid #00e3fe;}``\n\n# Shades and Tints of #00e3fe\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #001113 is the darkest color, while #feffff is the lightest one.\n\n• #001113\n``#001113` `rgb(0,17,19)``\n• #002226\n``#002226` `rgb(0,34,38)``\n• #00343a\n``#00343a` `rgb(0,52,58)``\n• #00454d\n``#00454d` `rgb(0,69,77)``\n• #005761\n``#005761` `rgb(0,87,97)``\n• #006875\n``#006875` `rgb(0,104,117)``\n• #007a88\n``#007a88` `rgb(0,122,136)``\n• #008b9c\n``#008b9c` `rgb(0,139,156)``\n• #009db0\n``#009db0` `rgb(0,157,176)``\n• #00aec3\n``#00aec3` `rgb(0,174,195)``\n• #00c0d7\n``#00c0d7` `rgb(0,192,215)``\n• #00d1ea\n``#00d1ea` `rgb(0,209,234)``\n• #00e3fe\n``#00e3fe` `rgb(0,227,254)``\n• #13e6ff\n``#13e6ff` `rgb(19,230,255)``\n• #26e8ff\n``#26e8ff` `rgb(38,232,255)``\n• #3aeaff\n``#3aeaff` `rgb(58,234,255)``\n• #4decff\n``#4decff` `rgb(77,236,255)``\n• #61eeff\n``#61eeff` `rgb(97,238,255)``\n• #75f0ff\n``#75f0ff` `rgb(117,240,255)``\n• #88f2ff\n``#88f2ff` `rgb(136,242,255)``\n• #9cf4ff\n``#9cf4ff` `rgb(156,244,255)``\n• #b0f7ff\n``#b0f7ff` `rgb(176,247,255)``\n• #c3f9ff\n``#c3f9ff` `rgb(195,249,255)``\n• #d7fbff\n``#d7fbff` `rgb(215,251,255)``\n• #eafdff\n``#eafdff` `rgb(234,253,255)``\n• #feffff\n``#feffff` `rgb(254,255,255)``\nTint Color Variation\n\n# Tones of #00e3fe\n\nA tone is produced by adding gray to any pure hue. In this case, #758789 is the less saturated color, while #00e3fe is the most saturated one.\n\n• #758789\n``#758789` `rgb(117,135,137)``\n• #6b8e93\n``#6b8e93` `rgb(107,142,147)``\n• #62969c\n``#62969c` `rgb(98,150,156)``\n• #589ea6\n``#589ea6` `rgb(88,158,166)``\n• #4ea5b0\n``#4ea5b0` `rgb(78,165,176)``\n``#44adba` `rgb(68,173,186)``\n• #3bb5c3\n``#3bb5c3` `rgb(59,181,195)``\n• #31bdcd\n``#31bdcd` `rgb(49,189,205)``\n• #27c4d7\n``#27c4d7` `rgb(39,196,215)``\n• #1dcce1\n``#1dcce1` `rgb(29,204,225)``\n• #14d4ea\n``#14d4ea` `rgb(20,212,234)``\n``#0adbf4` `rgb(10,219,244)``\n• #00e3fe\n``#00e3fe` `rgb(0,227,254)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #00e3fe is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5113109,"math_prob":0.8109928,"size":3695,"snap":"2023-40-2023-50","text_gpt3_token_len":1663,"char_repetition_ratio":0.14738554,"word_repetition_ratio":0.0073664826,"special_character_ratio":0.5328823,"punctuation_ratio":0.22954546,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98474735,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-03T14:26:00Z\",\"WARC-Record-ID\":\"<urn:uuid:3998a498-17f5-4f25-bc7b-61f73efc9b0a>\",\"Content-Length\":\"36215\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0d6bda87-93f8-428e-823f-9579c20d19b3>\",\"WARC-Concurrent-To\":\"<urn:uuid:7a9a7edb-ea83-4624-bb36-5927059a281c>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/00e3fe\",\"WARC-Payload-Digest\":\"sha1:JVICQVSEOOIWYEOTSF4JJQPZ4O5SI64C\",\"WARC-Block-Digest\":\"sha1:HKAOIEUTKDWDFODZYHCRCWFSBEYJIZNX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100508.23_warc_CC-MAIN-20231203125921-20231203155921-00055.warc.gz\"}"}
https://www.hackmath.net/en/word-math-problems/reason?page_num=17
[ "Reason - math word problems - page 17\n\n1. Engine pulley", null, "The engine has a 1460 rev / min (RPM). Disc diameter is 350 mm. What will be the disc peripheral speed in RPM? Pulleys on the engine has diameter 80mm, on a disc has diameter 160mm.\n2. Big factorial", null, "How many zeros end number 116! ?\n3. The farmer", null, "The farmer harvested 840 tons of grain in 2006, which was 44% less than in 2005, and one-fifth more than in 2004. How many tons of grain harvested in years 2005 and 2004?\n4. Grandmother's clocks", null, "Grandmother's clock is late every hour by half a minute. Grandmother it by set exactly at 8.00 am. How many hours will show after 24 hours.?\n5. Book", null, "To number pages of thick book was used 4201 digits. How many pages has this book?\n6. Cars speeds", null, "Distance from city A to city B is 108 km. Two cars were simultaneously started from both places. The speed of a car coming from city A was 2 km/h higher than the second car. What was the speed of each car if they met in 54 minutes?\n7. Three workers", null, "Three workers were rewarded CZK 9200 and the money divided by the work they have done. First worker to get twice than the second, the second three times more than the third. How much money each worker received?\n8. Fewer than 500 sheep,", null, "There are fewer than 500 sheep, but if they stand in a double, triple, quadruple, five and sixth order, one sheep will remain. But they can stand in the seventh order. How many are sheep?\n9. Eva", null, "When Eva buys 8 packages of cookies 4 CZK left her. If she wanted to buy 10 packages, she would have to borrow 20 CZK. How much money has Eva in her wallet?\n10. Rings - intersect", null, "There are 15 pupils on the sporting ring. 10 pupils go to football, 8 pupils go to floorball. How many pupils go to both rings at the same time?\n11. Ratio of two unknown numbers", null, "Two numbers are given. Their sum is 30. We calculate one-sixth of a larger number and add to both numbers. So we get new numbers whose ratio is 5:7. Which two numbers were given?\n12. Extreme temperatures", null, "USA hit extreme heat wave. Calculate percentage of the air temperature change from normal summer temperature 23 °C to 40 °C.\n13. Unknown number", null, "Unknown number is divisible by exactly three different primes. When we compare these primes in ascending order, the following applies: • Difference first and second prime number is half the difference between the third and second prime numbers. • The produ\n14. Have solution", null, "The sum of four consecutive even numbers is 96. Determine these numbers.\n15. Pipe", null, "Steel pipe has a length 2.5 meters. About how many decimetres is 1/3 less than 4/8 of this steel pipe?\n16. Travel by bus", null, "Five girls traveling by bus. Each holds in each hand two baskets. Each basket contains 5 cats. Each cat has 5 kittens. 3 girls getting off the bus and another girl without baskets go on board. The bus control driver. How many legs are in the bus?\n17. Hexagon rotation", null, "A regular hexagon of side 6 cm is rotated through 60° along a line passing through its longest diagonal. What is the volume of the figure thus generated?\n18. Four sides of trapezoid", null, "In the trapezoid ABCD is |AB| = 73.6 mm; |BC| = 57 mm; |CD| = 60 mm; |AD| = 58.6 mm. Calculate the size of its interior angles.\n19. Obtuse angle", null, "The line OH is the height of the triangle DOM, line MN is the bisector of angle DMO. 
The obtuse angle between the lines MN and OH is four times larger than the angle DMN. What size is the angle DMO? (see attached image)\n20. Velocipede", null, "The front wheel of a velocipede from the year 1880 had a diameter of 1.8 m. For each full turn of the front wheel, the rear wheel turned 6 times. What was the diameter of the rear wheel?" ]
[ null, "https://www.hackmath.net/thumb/60/t_3260.jpg", null, "https://www.hackmath.net/thumb/6/t_106.jpg", null, "https://www.hackmath.net/thumb/72/t_1572.jpg", null, "https://www.hackmath.net/thumb/23/t_2223.jpg", null, "https://www.hackmath.net/thumb/99/t_399.jpg", null, "https://www.hackmath.net/thumb/9/t_5209.jpg", null, "https://www.hackmath.net/thumb/28/t_1428.jpg", null, "https://www.hackmath.net/thumb/98/t_4998.jpg", null, "https://www.hackmath.net/thumb/96/t_1496.jpg", null, "https://www.hackmath.net/thumb/33/t_4933.jpg", null, "https://www.hackmath.net/thumb/13/t_5213.jpg", null, "https://www.hackmath.net/thumb/71/t_171.jpg", null, "https://www.hackmath.net/thumb/31/t_1331.jpg", null, "https://www.hackmath.net/thumb/64/t_4764.jpg", null, "https://www.hackmath.net/thumb/13/t_213.jpg", null, "https://www.hackmath.net/thumb/15/t_1915.jpg", null, "https://www.hackmath.net/thumb/39/t_3739.jpg", null, "https://www.hackmath.net/thumb/2/t_4902.jpg", null, "https://www.hackmath.net/thumb/22/t_1422.jpg", null, "https://www.hackmath.net/thumb/19/t_1519.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9543072,"math_prob":0.93414074,"size":3684,"snap":"2019-43-2019-47","text_gpt3_token_len":940,"char_repetition_ratio":0.099728264,"word_repetition_ratio":0.0,"special_character_ratio":0.26058632,"punctuation_ratio":0.11298701,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98471695,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-19T22:43:24Z\",\"WARC-Record-ID\":\"<urn:uuid:778bbf5f-972f-491d-8249-aaece0adbe7e>\",\"Content-Length\":\"25977\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aa12bf88-ad6e-432b-bd2c-443127e97125>\",\"WARC-Concurrent-To\":\"<urn:uuid:25bceda6-951b-4a7f-9b67-da87713f7c8d>\",\"WARC-IP-Address\":\"104.24.105.91\",\"WARC-Target-URI\":\"https://www.hackmath.net/en/word-math-problems/reason?page_num=17\",\"WARC-Payload-Digest\":\"sha1:RERT74YVD3LDDFKVXZYU5KY4DB43TFG5\",\"WARC-Block-Digest\":\"sha1:QN7AGMKZV3XHJOLTAYQ27PCKATNHVW2B\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986700435.69_warc_CC-MAIN-20191019214624-20191020002124-00341.warc.gz\"}"}
https://tools.carboncollective.co/compound-interest/43721-at-9-percent-in-10-years/
# What is the compound interest on $43721 at 9% over 10 years? If you want to invest $43,721 over 10 years, and you expect it will earn 9.00% in annual interest, your investment will have grown to become $103,503.51. If you're on this page, you probably already know what compound interest is and how a sum of money can grow at a faster rate each year, as the interest is added to the original principal amount and recalculated for each period. The actual rate that $43,721 compounds at is dependent on the frequency of the compounding periods. In this article, to keep things simple, we are using an annual compounding period of 10 years, but it could be monthly, weekly, daily, or even continuously compounding.

The formula for calculating compound interest is:

$$A = P(1 + \dfrac{r}{n})^{nt}$$

• A is the amount of money after the compounding periods
• P is the principal amount
• r is the annual interest rate
• n is the number of compounding periods per year
• t is the number of years

We can now input the variables for the formula to confirm that it does work as expected and calculates the correct amount of compound interest.

For this formula, we need to convert the rate, 9.00% into a decimal, which would be 0.09.

$$A = 43721(1 + \dfrac{ 0.09 }{1})^{ 10}$$

As you can see, we are ignoring the n when calculating this to the power of 10 because our example is for annual compounding, or one period per year, so 10 × 1 = 10.

## How the compound interest on $43,721 grows over time The interest from previous periods is added to the principal amount, and this grows the sum at a rate that is always accelerating. The table below shows how the amount increases over the 10 years it is compounding: Start Balance Interest End Balance 1 $43,721.00 $3,934.89 $47,655.89
2 $47,655.89 $4,289.03 $51,944.92 3 $51,944.92 $4,675.04 $56,619.96
4 $56,619.96 $5,095.80 $61,715.76 5 $61,715.76 $5,554.42 $67,270.18
6 $67,270.18 $6,054.32 $73,324.49 7 $73,324.49 $6,599.20 $79,923.70
8 $79,923.70 $7,193.13 $87,116.83 9 $87,116.83 $7,840.51 $94,957.35
10 $94,957.35 $8,546.16 $103,503.51 We can also display this data on a chart to show you how the compounding increases with each compounding period. In this example we have 10 years of compounding, but to truly see the power of compound interest, it might be better to look at a larger number of compounding periods to see how much $43,721 can grow.

If you want an example with more compounding years, click here to view the compounding interest of $43,721 at 9.00% over 30 years. As you can see if you view the compounding chart for $43,721 at 9.00% over a long enough period of time, the rate at which it grows increases over time as the interest is added to the balance and new interest calculated from that figure.

## How long would it take to double $43,721 at 9% interest? Another commonly asked question about compounding interest would be to calculate how long it would take to double your investment of $43,721 assuming an interest rate of 9.00%.

We can calculate this very approximately using the Rule of 72.

The formula for this is very simple:

$$Years = \dfrac{72}{Interest\: Rate}$$

By dividing 72 by the interest rate given, we can calculate the rough number of years it would take to double the money. Let's add our rate to the formula and calculate this:

$$Years = \dfrac{72}{ 9 } = 8$$

Using this, we know that any amount we invest at 9.00% would double itself in approximately 8 years. 
So $43,721 would be worth$87,442 in ~8 years.\n\nWe can also calculate the exact length of time it will take to double an amount at 9.00% using a slightly more complex formula:\n\n$$Years = \\dfrac{log(2)}{log(1 + 0.09)} = 8.04\\; years$$\n\nHere, we use the decimal format of the interest rate, and use the logarithm math function to calculate the exact value.\n\nAs you can see, the exact calculation is very close to the Rule of 72 calculation, which is much easier to remember.\n\nHopefully, this article has helped you to understand the compound interest you might achieve from investing \\$43,721 at 9.00% over a 10 year investment period." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93269277,"math_prob":0.997305,"size":4046,"snap":"2022-40-2023-06","text_gpt3_token_len":1148,"char_repetition_ratio":0.16006927,"word_repetition_ratio":0.014450867,"special_character_ratio":0.33811173,"punctuation_ratio":0.15409836,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998548,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-05T01:50:28Z\",\"WARC-Record-ID\":\"<urn:uuid:fef03273-7d44-4e46-a3fd-c827d01d64fd>\",\"Content-Length\":\"26278\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:deba2c86-f529-4fb2-a8b7-5d85c5a30773>\",\"WARC-Concurrent-To\":\"<urn:uuid:32b84f0d-0883-4624-b1bb-62737c0b4f25>\",\"WARC-IP-Address\":\"138.197.3.89\",\"WARC-Target-URI\":\"https://tools.carboncollective.co/compound-interest/43721-at-9-percent-in-10-years/\",\"WARC-Payload-Digest\":\"sha1:SXAQDUKN6XBU6YENL7G5NMBEZFUEMH44\",\"WARC-Block-Digest\":\"sha1:IPQKDWKFVILNZAMDWY7MNWQNLTFAQJUM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500158.5_warc_CC-MAIN-20230205000727-20230205030727-00457.warc.gz\"}"}
https://ww2.mathworks.cn/help/econ/jcitest.html
[ "# jcitest\n\nJohansen cointegration test\n\n## Syntax\n\n``h = jcitest(Y)``\n``h = jcitest(Tbl)``\n``h = jcitest(___,Name=Value)``\n``````[h,pValue,stat,cValue] = jcitest(___)``````\n``````[h,pValue,stat,cValue,mles] = jcitest(___)``````\n\n## Description\n\nexample\n\n````h = jcitest(Y)` returns the rejection decisions `h` from conducting the Johansen test, which assesses each null hypothesis H(r) of cointegration rank less than or equal to r among the `numDims`-dimensional multivariate time series `Y` against the alternatives H(`numDims`) (`trace` test) or H(r + 1) (`maxeig` test). The tests produce maximum likelihood estimates of the parameters in a vector error-correction (VEC) model of the cointegrated series.```\n\nexample\n\n````h = jcitest(Tbl)` returns rejection decisions from conducting the Johansen test on the variables of the table or timetable `Tbl`.To select a subset of variables in `Tbl` to test, use the `DataVariables` name-value argument.```\n\nexample\n\n````h = jcitest(___,Name=Value)` uses additional options specified by one or more name-value arguments, using any input-argument combination in the previous syntaxes.Some options control the number of tests to conduct. The following conditions apply when `jcitest` conducts multiple tests: `jcitest` treats each test as separate from all other tests.Each row of all outputs contains the results of the corresponding test. For example, `jcitest(Tbl,Model=\"H2\",DataVariables=1:5)` tests the first 5 variables in the input table `Tbl` using the Johansen model that excludes all deterministic terms.```\n\nexample\n\n``````[h,pValue,stat,cValue] = jcitest(___)``` displays, at the command window, the results of the Johansen test and returns the p-values `pValue`, test statistics `stat`, and critical values `cValue` of the test. The results display includes the ranks r, corresponding rejection decisions, p-values, decision statistics, and specified options.```\n\nexample\n\n``````[h,pValue,stat,cValue,mles] = jcitest(___)``` also returns a structure of maximum likelihood estimates associated with the VEC(q) model of the multivariate time series yt.```\n\n## Examples\n\ncollapse all\n\nTest a multivariate time series for cointegration using the default values of the Johansen cointegration test. Input the time series data as a numeric matrix.\n\nLoad data of Canadian inflation and interest rates `Data_Canada.mat`, which contains the series in the matrix `Data`.\n\n```load Data_Canada series'```\n```ans = 5x1 cell {'(INF_C) Inflation rate (CPI-based)' } {'(INF_G) Inflation rate (GDP deflator-based)'} {'(INT_S) Interest rate (short-term)' } {'(INT_M) Interest rate (medium-term)' } {'(INT_L) Interest rate (long-term)' } ```\n\nTest the interest rate series for cointegration by using the Johansen cointegration test. Use default options and return the rejection decision.\n\n`h = jcitest(Data(:,3:end))`\n```h=1×7 table r0 r1 r2 Model Lags Test Alpha _____ _____ _____ ______ ____ _________ _____ t1 true true false {'H1'} 0 {'trace'} 0.05 ```\n\nBy default, `jcitest` conducts the `trace` test and uses the `H1` Johansen form by default. The test fails to reject the null hypothesis of rank 2 cointegration in the series.\n\nConduct the Johansen cointegration test on a multivariate time series using default options, which tests all table variables.\n\nLoad data of Canadian inflation and interest rates `Data_Canada.mat`. 
Convert the table `DataTable` to a timetable.\n\n```load Data_Canada dates = datetime(dates,12,31); TT = table2timetable(DataTable,RowTimes=dates); TT.Observations = [];```\n\nConduct the Johansen cointegration test by passing the timetable to `jcitest` and using default options. `jcitest` tests for cointegration among all table variables by default.\n\n`h = jcitest(TT)`\n```h=1×9 table r0 r1 r2 r3 r4 Model Lags Test Alpha _____ _____ _____ _____ _____ ______ ____ _________ _____ t1 true true false false true {'H1'} 0 {'trace'} 0.05 ```\n\nThe test fails to reject the null hypotheses of rank 2 and 3 cointegration among the series.\n\nBy default, `jcitest` includes all input table variables in the cointegration test. To select a subset of variables to test, set the `DataVariables` option.\n\n`jcitest` supports two types Johansen tests. Conduct a test for each type.\n\nLoad data of Canadian inflation and interest rates `Data_Canada.mat`. Convert the table `DataTable` to a timetable. Identify the interest rate series.\n\n```load Data_Canada dates = datetime(dates,12,31); TT = table2timetable(DataTable,RowTimes=dates); TT.Observations = []; idxINT = contains(TT.Properties.VariableNames,\"INT\");```\n\nConduct the Johansen cointegration test to assess cointegration among the interest rate series. Specify both test types `trace` and `maxeig`, and set the level of significance to 2.5%.\n\n`h = jcitest(TT,DataVariables=idxINT,Test=[\"trace\" \"maxeig\"],Alpha=0.025)`\n```h=2×7 table r0 r1 r2 Model Lags Test Alpha _____ _____ _____ ______ ____ __________ _____ t1 true false false {'H1'} 0 {'trace' } 0.025 t2 false false false {'H1'} 0 {'maxeig'} 0.025 ```\n\nh is a 2-row table; rows contain results of separate tests. At 2.5% level of significance:\n\n• The `trace` test fails to reject the null hypotheses of ranks 1 and 2 cointegration among the series.\n\n• The `maxeig` test fails to reject the null hypotheses for each cointegration rank.\n\nLoad data of Canadian inflation and interest rates `Data_Canada.mat`. Convert the table `DataTable` to a timetable. Identify the interest rate series.\n\n```load Data_Canada dates = datetime(dates,12,31); TT = table2timetable(DataTable,RowTimes=dates); TT.Observations = []; idxINT = contains(TT.Properties.VariableNames,\"INT\");```\n\nConduct the Johansen cointegration test to assess cointegration among the interest rate series. 
Specify both test types `trace` and `maxeig`.\n\n`[h,pValue,stat,cValue] = jcitest(TT,DataVariables=idxINT,Test=[\"trace\" \"maxeig\"])`\n```************************ Results Summary (Test 1) Data: TT Effective sample size: 40 Model: H1 Lags: 0 Statistic: trace Significance level: 0.05 r h stat cValue pValue eigVal ---------------------------------------- 0 1 37.6886 29.7976 0.0050 0.4101 1 1 16.5770 15.4948 0.0343 0.2842 2 0 3.2003 3.8415 0.0737 0.0769 ************************ Results Summary (Test 2) Data: TT Effective sample size: 40 Model: H1 Lags: 0 Statistic: maxeig Significance level: 0.05 r h stat cValue pValue eigVal ---------------------------------------- 0 0 21.1116 21.1323 0.0503 0.4101 1 0 13.3767 14.2644 0.0687 0.2842 2 0 3.2003 3.8415 0.0737 0.0769 ```\n```h=2×7 table r0 r1 r2 Model Lags Test Alpha _____ _____ _____ ______ ____ __________ _____ t1 true true false {'H1'} 0 {'trace' } 0.05 t2 false false false {'H1'} 0 {'maxeig'} 0.05 ```\n```pValue=2×7 table r0 r1 r2 Model Lags Test Alpha _________ ________ ________ ______ ____ __________ _____ t1 0.0050497 0.034294 0.073661 {'H1'} 0 {'trace' } 0.05 t2 0.050346 0.06874 0.073661 {'H1'} 0 {'maxeig'} 0.05 ```\n```stat=2×7 table r0 r1 r2 Model Lags Test Alpha ______ ______ ______ ______ ____ __________ _____ t1 37.689 16.577 3.2003 {'H1'} 0 {'trace' } 0.05 t2 21.112 13.377 3.2003 {'H1'} 0 {'maxeig'} 0.05 ```\n```cValue=2×7 table r0 r1 r2 Model Lags Test Alpha ______ ______ ______ ______ ____ __________ _____ t1 29.798 15.495 3.8415 {'H1'} 0 {'trace' } 0.05 t2 21.132 14.264 3.8415 {'H1'} 0 {'maxeig'} 0.05 ```\n\n`jcitest` prints a results display for each test to the command window. All outputs are tables containing the corresponding statistics and test options.\n\nLoad data of Canadian inflation and interest rates `Data_Canada.mat`. Convert the table `DataTable` to a timetable.\n\n```load Data_Canada dates = datetime(dates,12,31); TT = table2timetable(DataTable,RowTimes=dates); TT.Observations = []; idxINT = contains(TT.Properties.VariableNames,\"INT\");```\n\nPlot the interest series.\n\n```plot(TT.Time,TT{:,idxINT}) legend(series(idxINT),Location=\"northwest\") grid on```", null, "Test the interest rate series for cointegration; use the default Johansen form `H1`. Return all outputs.\n\n`[h,pValue,stat,cValue,mles] = jcitest(TT,DataVariables=idxINT);`\n```************************ Results Summary (Test 1) Data: TT Effective sample size: 40 Model: H1 Lags: 0 Statistic: trace Significance level: 0.05 r h stat cValue pValue eigVal ---------------------------------------- 0 1 37.6886 29.7976 0.0050 0.4101 1 1 16.5770 15.4948 0.0343 0.2842 2 0 3.2003 3.8415 0.0737 0.0769 ```\n`h`\n```h=1×7 table r0 r1 r2 Model Lags Test Alpha _____ _____ _____ ______ ____ _________ _____ t1 true true false {'H1'} 0 {'trace'} 0.05 ```\n`pValue`\n```pValue=1×7 table r0 r1 r2 Model Lags Test Alpha _________ ________ ________ ______ ____ _________ _____ t1 0.0050497 0.034294 0.073661 {'H1'} 0 {'trace'} 0.05 ```\n\nThe test fails to reject the null hypothesis of rank 2 cointegration in the series.\n\nPlot the estimated cointegrating relations ${B}^{\\prime }{y}_{t-1}+{c}_{0}$:\n\n```TTLag = lagmatrix(TT,1); T = height(TTLag); B = mles.r2.paramVals.B; c0 = mles.r2.paramVals.c0; plot(TTLag.Time,TTLag{:,idxINT}*B+repmat(c0',T,1)) grid on```", null, "## Input Arguments\n\ncollapse all\n\nData representing observations of a multivariate time series yt, specified as a `numObs`-by-`numDims` numeric matrix. 
Each column of `Y` corresponds to a variable, and each row corresponds to an observation.\n\nData Types: `double`\n\nData representing observations of a multivariate time series yt, specified as a table or timetable with `numObs` rows. Each row of `Tbl` is an observation.\n\nTo select a subset of variables in `Tbl` to test, use the `DataVariables` name-value argument.\n\nNote\n\n`jcitest` removes the following observations from the specified data:\n\n• All rows containing at least one missing observation, represented by a `NaN` value\n\n• From the beginning of the data, initial values required to initialize lagged variables\n\n### Name-Value Arguments\n\nSpecify optional pairs of arguments as `Name1=Value1,...,NameN=ValueN`, where `Name` is the argument name and `Value` is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.\n\nBefore R2021a, use commas to separate each name and value, and enclose `Name` in quotes.\n\nExample: `jcitest(Tbl,Model=\"H2\",DataVariables=1:5)` tests the first 5 variables in the input table `Tbl` using the Johansen model that excludes all deterministic terms.\n\nJohansen form of the VEC(q) model deterministic terms , specified as a Johansen form name in the table, or a string vector or cell vector of character vectors of such values (for model parameter definitions, see Vector Error-Correction (VEC) Model).\n\nValueError-Correction TermDescription\n`\"H2\"`\n\nAB´yt − 1\n\nNo intercepts or trends are present in the cointegrating relations, and no deterministic trends are present in the levels of the data.\n\nSpecify this model only when all response series have a mean of zero.\n\n`\"H1*\"`\n\nA(B´yt−1+c0)\n\nIntercepts are present in the cointegrating relations, and no deterministic trends are present in the levels of the data.\n\n`\"H1\"`\n\nA(B´yt−1+c0)+c1\n\nIntercepts are present in the cointegrating relations, and deterministic linear trends are present in the levels of the data.\n\n`\"H*\"`A(B´yt−1+c0+d0t)+c1\n\nIntercepts and linear trends are present in the cointegrating relations, and deterministic linear trends are present in the levels of the data.\n\n`\"H\"`A(B´yt−1+c0+d0t)+c1+d1t\n\nIntercepts and linear trends are present in the cointegrating relations, and deterministic quadratic trends are present in the levels of the data.\n\nIf quadratic trends are not present in the data, this model can produce good in-sample fits but poor out-of-sample forecasts.\n\n`jcitest` conducts a separate test for each value in `Model`.\n\nExample: `Model=\"H1*\"` uses the Johansen form `H1*` for all tests.\n\nExample: `Model=[\"H1*\" \"H1\"]` uses Johansen form `H1*` for the first test, and then uses Johansen form `H1` for the second test.\n\nData Types: `string` | `char` | `cell`\n\nNumber of lagged differences q in the VEC(q) model, specified as a nonnegative integer or vector of nonnegative integers.\n\n`jcitest` conducts a separate test for each value in `Lags`.\n\nExample: `Lags=1` includes Δyt – 1 in the model for all tests.\n\nExample: `Lags=[0 1]` includes no lags in the model for the first test, and then includes Δyt – 1 in the model for the second test.\n\nData Types: `double`\n\nTest to perform, specified as a value in this table, or a string vector or cell vector of character vectors of such values.\n\nValueDescription\n`\"trace\"`\n\nThe alternative hypothesis is H(`numDims`), and the test statistics are\n\n`$-T\\left[\\mathrm{log}\\left(1-{\\lambda }_{r+1}\\right)+\\dots 
+\\mathrm{log}\\left(1-{\\lambda }_{m}\\right)\\right].$`\n\n`\"maxieig\"`\n\nThe alternative hypothesis is H(r + 1), and the test statistics are\n\n`$-T\\mathrm{log}\\left(1-{\\lambda }_{r+1}\\right).$`\n\nBoth tests assess the null hypothesis H(r) of cointegration rank less than or equal to r. `jcitest` computes statistics using the effective sample size T ≤ n`numObs` and ordered estimates of the eigenvalues of C = AB′, λ1 > ... > λm, where m = `numDims`.\n\n`jcitest` conducts a separate test for each value in `Test`.\n\nExample: `Test=\"maxeig\"` conducts the `maxeig` test for all tests.\n\nExample: `Test=[\"maxeig\" \"trace\"]` conducts the `maxeig` test for the first test, and then conducts the `trace` test for the second test.\n\nData Types: `char` | `string` | `cell`\n\nNominal significance level for the hypothesis test, specified as a numeric scalar between `0.001` and `0.999` or a numeric vector of such values.\n\n`jcitest` conducts a separate test for each value in `Alpha`.\n\nExample: `Alpha=[0.01 0.05]` uses a level of significance of `0.01` for the first test, and then uses a level of significance of `0.05` for the second test.\n\nData Types: `double`\n\nCommand window display control, specified as a value in this table.\n\nValueDescription\n`\"off\"``jcitest` does not display the results to the command window. If `jcitest` returns `h` or no outputs, this display is the default.\n`\"summary\"`\n\n`jcitest` displays a tabular summary of test results. The tabular display includes null ranks r = `0:(numDims − 1)` in the first column of each summary. `jcitest` displays multiple test results in separate summaries.\n\nWhen `jcitest` returns any other output than `h` (for example, `pValue`), this display is the default. You cannot set this display when `jcitest` returns `h` or no outputs.\n\n`\"params\"``jcitest` displays maximum likelihood estimates of the parameter values associated with the reduced-rank VEC(q) model of yt. You can set this display only when `jcitest` returns `mles`. `jcitest` returns the displayed parameter estimates in the field `mles.rn(j).paramVals` for null rank r = n and test `j`.\n`\"full\"``jcitest` displays both `\"summary\"` and `\"params\"`.\n\nExample: `Display=\"off\"`\n\nData Types: `char` | `string`\n\nVariables in `Tbl` for which `jcitest` conducts the test, specified as a string vector or cell vector of character vectors containing variable names in `Tbl.Properties.VariableNames`, or an integer or logical vector representing the indices of names. The selected variables must be numeric.\n\nExample: `DataVariables=[\"GDP\" \"CPI\"]`\n\nExample: `DataVariables=[true true false false]` or `DataVariables=[1 2]` selects the first and second table variables.\n\nData Types: `double` | `logical` | `char` | `cell` | `string`\n\nNote\n\n• When `jcitest` conducts multiple tests, the function applies all single settings (scalars or character vectors) to each test.\n\n• All vector-valued specifications that control the number of tests must have equal length.\n\n• A lagged and differenced time series has a reduced sample size. Absent presample values, if the test series yt is defined for t = 1,…,T, the lagged series yt– k is defined for t = k+1,…,T. The first difference applied to the lagged series yt– k further reduces the time base to k+2,…,T. 
With p lagged differences, the common time base is p+2,…,T and the effective sample size is T–(p+1).\n\n## Output Arguments\n\ncollapse all\n\nTest rejection decisions, returned as a `numTests`-by-(`numDims` + 3) table, where `numTests` is the number of tests, which is determined by specified options.\n\nRow `j` of `h` corresponds to test `j` with options.\n\nRows of `h` correspond to tests specified by the values of the last three variables `Model`, `Test`, and `Alpha`. Row labels are `t1`, `t2`, …, `tu`, where `u` = `numTests`.\n\nVariables of `h` correspond to different, maintained cointegration ranks r = 0, 1, …, `numDims` – 1 and specified name-value arguments that control the number of tests. Variable labels are `r0`, `r1`, …, `rR`, where `R` = `numDims` – 1, and `Model`, `Test`, and `Alpha`.\n\nTo access results, for example, the result for test `j` of null rank `k`, use `h.rk(j)`.\n\nVariable k, labeled `rk`, is logical vector whose entries have the following interpretations:\n\n• `1` (`true`) indicates rejection of the null hypothesis of cointegration rank k in favor of the alternative hypothesis.\n\n• `0` (`false`) indicates failure to reject the null hypothesis of cointegration rank k.\n\nTest statistic p-values, returned as a table with the same dimensions and labels as `h`. Variable k, labeled `rk`, is a numeric vector of p-values for the corresponding tests. The p-values are right-tailed probabilities.\n\nWhen test statistics are outside tabulated critical values, `jcitest` returns maximum (`0.999`) or minimum (`0.001`) p-values.\n\nTest statistics, returned as a table with the same dimensions and labels as `h`.\n\nThe `Test` setting of a particular test determines the test statistic.\n\nCritical values, returned as a table with the same dimensions and labels as `h`. Variable k, labeled `rk`, is a numeric vector of critical values for the corresponding tests. The critical values are for right-tailed probabilities determined by `Alpha`.\n\n`jcitest` loads tables of critical values from the file `Data_JCITest.mat`, and then linearly interpolates test critical values from the tables. Critical values in the tables derive from methods described in .\n\nMaximum likelihood estimates associated with the VEC(q) model of yt, returned as a table with the same dimensions and labels as `h`. Variable k, labeled `rk`, is a structure array of MLEs with elements for the corresponding tests.\n\nEach element of `mles.rk` has the fields in this table. You can access a field using dot notation, for example, `mles.r2(3).paramVals` contains the parameter estimates of the third test corresponding to the null hypothesis of rank 2 cointegration.\n\nFieldDescription\n`paramNames`\n\nCell vector of parameter names, of the form:\n\n{`A`, `B`, `B1`, … `Bq`, `c0`, `d0`, `c1`, `d1`}\n\nElements depend on the values of the `Lags` and `Model` name-value arguments.\n\n`paramVals`Structure of parameter estimates with field names corresponding to the parameter names in `paramNames`.\n`res` T-by-`numDims` matrix of residuals, where T is the effective sample size, obtained by fitting the VEC(q) model of yt to the input data.\n`EstCov`Estimated covariance Q of the innovations process εt.\n`eigVal`Eigenvalue associated with H(r).\n`eigVec`Eigenvector associated with the eigenvalue in `eigVal`. 
Eigenvectors v are normalized so that vS11v = 1, where S11 is defined as in .\n`rLL` Restricted loglikelihood of yt under the null.\n`uLL`Unrestricted loglikelihood of yt under the alternative.\n\ncollapse all\n\n### Vector Error-Correction (VEC) Model\n\nA vector error-correction (VEC) model is a multivariate, stochastic time series model consisting of a system of m = `numDims` equations of m distinct, differenced response variables. Equations in the system can include an error-correction term, which is a linear function of the responses in levels used to stabilize the system. The cointegrating rank r is the number of cointegrating relations that exist in the system.\n\nEach response equation can include a degree q autoregressive polynomial composed of first differences of the response series, a constant, a time trend, and a constant and time trend in the error-correction term.\n\nExpressed in lag operator notation, a VEC(q) model for a multivariate time series yt is\n\n`$\\begin{array}{c}\\Phi \\left(L\\right)\\left(1-L\\right){y}_{t}=A\\left(B\\prime {y}_{t-1}+{c}_{0}+{d}_{0}t\\right)+{c}_{1}+{d}_{1}t+{\\epsilon }_{t}\\\\ =c+dt+C{y}_{t-1}+{\\epsilon }_{t},\\end{array}$`\n\nwhere\n\n• yt is an m = `numDims` dimensional time series corresponding to m response variables at time t, t = 1,...,T.\n\n• $\\Phi \\left(L\\right)=I-{\\Phi }_{1}-{\\Phi }_{2}-...-{\\Phi }_{q}$, I is the m-by-m identity matrix, and Lyt = yt – 1.\n\n• The cointegrating relations are B'yt – 1 + c0 + d0t and the error-correction term is A(B'yt – 1 + c0 + d0t).\n\n• r is the number of cointegrating relations and, in general, 0 ≤ rm.\n\n• A is an m-by-r matrix of adjustment speeds.\n\n• B is an m-by-r cointegration matrix.\n\n• C = AB′ is an m-by-m impact matrix with a rank of r.\n\n• c0 is an r-by-1 vector of constants (intercepts) in the cointegrating relations.\n\n• d0 is an r-by-1 vector of linear time trends in the cointegrating relations.\n\n• c1 is an m-by-1 vector of constants (deterministic linear trends in yt).\n\n• d1 is an m-by-1 vector of linear time-trend values (deterministic quadratic trends in yt).\n\n• c = Ac0 + c1 and is the overall constant.\n\n• d = Ad0 + d1 and is the overall time-trend coefficient.\n\n• Φj is an m-by-m matrix of short-run coefficients, where j = 1,...,q and Φq is not a matrix containing only zeros.\n\n• εt is an m-by-1 vector of random Gaussian innovations, each with a mean of 0 and collectively an m-by-m covariance matrix Σ. For ts, εt and εs are independent.\n\nIf m = r, then the VEC model is a stable VAR(q + 1) model in the levels of the responses. If r = 0, then the error-correction term is a matrix of zeros, and the VEC(q) model is a stable VAR(q) model in the first differences of the responses.\n\n## Algorithms\n\n• `jcitest` identifies deterministic terms that are outside of the cointegrating relations, c1 and d1, by projecting constant and linear regression coefficients, respectively, onto the orthogonal complement of A.\n\n• If `jcitest` fails to reject the null hypothesis of cointegration rank r = 0, the inference is that the error-correction coefficient C is zero, and the VEC(q) model reduces to a standard VAR(q) model in first differences. If `jcitest` rejects all cointegration ranks r less than `numDims`, the inference is that C has full rank, and yt is stationary in levels.\n\n• The parameters A and B in the reduced-rank VEC(q) model are not identifiable, but their product C = AB is identifiable. 
`jcitest` constructs `B` = `V(:,1:r)` using the orthonormal eigenvectors `V` returned by `eig`, and then renormalizes so that `V'*S11*V = I` .\n\n• The time series in the specified input data can be stationary in levels or first differences (that is, I(0) or I(1)). Rather than pretesting series for unit roots (using, e.g., `adftest`, `pptest`, `kpsstest`, or `lmctest`), the Johansen procedure formulates the question within the model. An I(0) series is associated with a standard unit vector in the space of cointegrating relations, and the `jcontest` can test for its presence.\n\n• Deterministic cointegration, where cointegrating relations, perhaps with an intercept, produce stationary series, is the traditional sense of cointegration introduced by Engle and Granger (see `egcitest`). Stochastic cointegration, where cointegrating relations produce trend-stationary series (that is, `d0` is nonzero), extends the definition of cointegration to accommodate a greater variety of economic series.\n\n• Unless higher-order trends are present in the data, models with fewer restrictions can produce good in-sample fits, but poor out-of-sample forecasts.\n\n## Alternative Functionality\n\n### App\n\nThe Econometric Modeler app enables you to conduct the Johansen cointegration test.\n\n Engle, R. F. and C. W. J. Granger. \"Co-Integration and Error-Correction: Representation, Estimation, and Testing.\" Econometrica. Vol. 55, 1987, pp. 251–276.\n\n Hamilton, James D. Time Series Analysis. Princeton, NJ: Princeton University Press, 1994.\n\n Johansen, S. Likelihood-Based Inference in Cointegrated Vector Autoregressive Models. Oxford: Oxford University Press, 1995.\n\n MacKinnon, J. G. \"Numerical Distribution Functions for Unit Root and Cointegration Tests.\" Journal of Applied Econometrics. Vol. 11, 1996, pp. 601–618.\n\n Turner, P. M. \"Testing for Cointegration Using the Johansen Approach: Are We Using the Correct Critical Values?\" Journal of Applied Econometrics. v. 24, 2009, pp. 825–831." ]
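The trace and maxeig statistics in the results summaries above follow directly from the eigenvalue formulas given in the `Test` table. As a quick sanity check, the MATLAB sketch below recomputes both statistics from the values reported for the interest-rate example (eigenvalues 0.4101, 0.2842, 0.0769 and effective sample size T = 40); the numbers are taken from the example output above, and only the variable names and loop structure are mine.

```
% Recompute Johansen test statistics from the eigenvalues reported in the
% interest-rate example above (effective sample size T = 40).
lambda = [0.4101 0.2842 0.0769];   % ordered eigenvalues of C = A*B'
T      = 40;                       % effective sample size
m      = numel(lambda);

traceStat  = zeros(1,m);
maxeigStat = zeros(1,m);
for r = 0:m-1
    % trace:  -T * sum_{i = r+1}^{m} log(1 - lambda_i)
    traceStat(r+1)  = -T * sum(log(1 - lambda(r+1:m)));
    % maxeig: -T * log(1 - lambda_{r+1})
    maxeigStat(r+1) = -T * log(1 - lambda(r+1));
end

disp(traceStat)    % approx. 37.69  16.58  3.20  (matches the trace summary)
disp(maxeigStat)   % approx. 21.11  13.38  3.20  (matches the maxeig summary)
```

The small differences from the printed summaries come only from the four-decimal rounding of the eigenvalues.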
[ null, "https://ww2.mathworks.cn/help/examples/econ/win64/TestMultipleSeriesForCointegrationUsingJcitestExample_01.png", null, "https://ww2.mathworks.cn/help/examples/econ/win64/TestMultipleSeriesForCointegrationUsingJcitestExample_02.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6390669,"math_prob":0.9683932,"size":16719,"snap":"2023-40-2023-50","text_gpt3_token_len":4541,"char_repetition_ratio":0.15925816,"word_repetition_ratio":0.22509076,"special_character_ratio":0.28189486,"punctuation_ratio":0.14526382,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9961979,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-01T19:57:37Z\",\"WARC-Record-ID\":\"<urn:uuid:be02f53c-e3ce-40f0-bc83-32c13633d74d>\",\"Content-Length\":\"178569\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5f90237f-8f14-498e-a649-70b77ea1a081>\",\"WARC-Concurrent-To\":\"<urn:uuid:c49a82f0-02b5-4a44-be1c-eaee05f67d29>\",\"WARC-IP-Address\":\"104.99.53.76\",\"WARC-Target-URI\":\"https://ww2.mathworks.cn/help/econ/jcitest.html\",\"WARC-Payload-Digest\":\"sha1:6RR23LZTU4VNIAVLKCUD5OTE3SLMXNMM\",\"WARC-Block-Digest\":\"sha1:OZ7PBSN645XSKP4A3I4HUA5BDLJE5FSX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510924.74_warc_CC-MAIN-20231001173415-20231001203415-00539.warc.gz\"}"}
https://lessonplanet.com/search?keywords=quartic+function
[ "### We found17 reviewed resources for quartic function\n\nLesson Planet\n\nFor Teachers 11th - 12th\nStudents explore cubic and quartic functions.  In this Pre-calculus/Calculus lesson, students examine the properties of third and fourth degree functions.  Students examine the intercepts and inflection points and examine the conditions...\nLesson Planet\n\n#### Polynomial Functions\n\nFor Students 11th\nIn this polynomial functions worksheet, 11th graders solve and complete 8 different problems that include various polynomial functions. First, they define a polynomial function. Then, students write the polynomial part in standard form...\nLesson Planet\n\n#### Saxon Math: Algebra 2 (Section 11)\n\nFor Teachers 9th - 12th Standards\nAs we approach the end of the Saxon math sections, the 11th of 12 units begins the process of summarizing and combining material from the previous lessons. The relationships between functions and their transformations are investigated,...\nLesson Planet\n\n#### Quartic Regions\n\nFor Teachers 9th - 12th Standards\nYoung scholars explore quartic functions in this calculus lesson. They investigate an application of derivatives and definite integrals, then explore the question of which of the \"three bumps\" of a quartic is the largest.\nLesson Planet\n\n#### Polynomials\n\nFor Teachers 11th\nEleventh graders explore the graphs of polynomial functions.  In this Algebra II instructional activity, 11th graders investigate the graphs of first through fourth degree equations as they identify x and y intercepts, points of...\nLesson Planet\n\n#### Polynomial Functions\n\nFor Teachers 11th - 12th\nLearners explore polynomial functions.  In this Algebra II lesson, students explore graphs of polynomial functions as classify the functions as linear, quadratic, cubic, or quartic.  Learners determine the regression equation for each...\nLesson Planet\n\n#### Quartic Regions\n\nFor Teachers 11th - 12th\nStudents explore the concept of quartic regions.  In this quartic regions instructional activity, students find the area under a curve using their Ti-89 calculator.  Students find definite integrals of two curves.  Students determine the...\nLesson Planet\n\n#### Cubic Splines\n\nFor Teachers 10th - Higher Ed\nStudents generate graphs on the calculator. In this math lesson, the students graph parabolas and other functions on the calculator with the intention of analyzing the graph. Students discover how to alter and modify each \"spine\" on the...\nEngageNY\n\n#### Modeling Riverbeds with Polynomials (part 2)\n\nFor Students 10th - 12th Standards\nExamine the power of technology while modeling with polynomial functions. Using the website wolfram alpha, learners develop a polynomial function to model the shape of a riverbed. Ultimately, they determine the flow rate through the river.\nLesson Planet\n\n#### Quartic Inflection\n\nFor Teachers 12th\nTwelfth graders investigate an application of derivatives.  In this calculus instructional activity students explore quartic polynomials and the midpoint of the line segment between the two inflection points.\nLesson Planet\n\n#### Eco-Tourism: Whale Watching ID: 8307\n\nFor Teachers 11th\nEleventh graders explore a quartic function.  In this Algebra II lesson, 11th graders examine data regarding the number of whale sightings for a given period.  
Students use a quartic regression to model the data with a function and use...\nLesson Planet\n\n#### Polynomial Functions\n\nFor Students 11th\nIn this Algebra II instructional activity, 11th graders use a graphing calculator to determine the regression equation of the given data, graph polynomial functions, factor polynomials, and determine the quotient and remainder in...\nLesson Planet\n\n#### Curve Fitting\n\nFor Students 6th - Higher Ed Standards\nBeing a statistician means never having to say you are certain. An interactive simulation allows scholars to plot numerous data points to see the line of best-fit on a graph. They may also choose linear, quadratic, cubic, or quartic and...\nLesson Planet\n\n#### Building Curves\n\nFor Teachers 9th - 12th\nThis activity looks at polynomial operations from a grahing perspective. Using the basic operations of addition, subtraction, multiplication, and division on polynomials, learners investigate graphs. In addition, regression modeling is...\nLesson Planet\n\n#### Building Curves\n\nFor Teachers 10th - 12th\nExplore operations on polynomials through graphing on the TI-Nspire. Learners investigate addition, subtraction, multiplication and division of polynomials through graphing. They graph polynomials and identify key concepts.\nLesson Planet\n\n#### Stacking Bricks\n\nFor Teachers 9th - 12th\nStudents stack bricks and find a pattern that will tell the number of bricks they would need to make a large stack of 50 rows high. Students first build a stack of bricks and look for a pattern, they use the detailed directions to...\nLesson Planet\n\n#### The Slope of a Curve\n\nFor Teachers 8th - 10th\nStudents identify the slope of a curve. In this algebra activity, students identify the maximum and minimum of a parabola. They use technology and tables to create graphs." ]
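Several of the activities listed above (for example, the whale-watching lesson) ask students to fit a quartic regression to data. The MATLAB sketch below illustrates the underlying idea; the sighting counts are purely made-up numbers, and `polyfit`/`polyval` stand in for the quartic-regression feature of the graphing calculators used in the lessons.

```
% Hypothetical data: whale sightings recorded over 9 observation periods.
t = 1:9;                                   % observation period
sightings = [3 8 15 18 17 14 12 16 25];    % made-up counts

% Fit a degree-4 (quartic) polynomial, analogous to a calculator's
% quartic regression, and evaluate it on a fine grid for plotting.
p  = polyfit(t,sightings,4);               % quartic coefficients
tf = linspace(min(t),max(t),200);
yf = polyval(p,tf);

plot(t,sightings,'o',tf,yf,'-')
xlabel('Observation period')
ylabel('Whale sightings')
legend('Data','Quartic fit','Location','northwest')
```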
{"ft_lang_label":"__label__en","ft_lang_prob":0.86521333,"math_prob":0.94280064,"size":4574,"snap":"2020-24-2020-29","text_gpt3_token_len":904,"char_repetition_ratio":0.18183808,"word_repetition_ratio":0.027149322,"special_character_ratio":0.17818102,"punctuation_ratio":0.13810742,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9958102,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-12T07:34:56Z\",\"WARC-Record-ID\":\"<urn:uuid:fc1df3e8-84eb-4d38-a0ca-3a3dbff831af>\",\"Content-Length\":\"103039\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:09c7bceb-491b-4657-a37d-adea0fb9a6d8>\",\"WARC-Concurrent-To\":\"<urn:uuid:a24ddf43-075e-4d03-9cb0-dd58157c73a7>\",\"WARC-IP-Address\":\"18.216.209.228\",\"WARC-Target-URI\":\"https://lessonplanet.com/search?keywords=quartic+function\",\"WARC-Payload-Digest\":\"sha1:PTXKOYVHVANKZQ5XMNDP5QB47ZW4PRTL\",\"WARC-Block-Digest\":\"sha1:IXLMS5FLYEKULDKXHNDHDYPT6DH7SPLG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657131734.89_warc_CC-MAIN-20200712051058-20200712081058-00194.warc.gz\"}"}
https://personalpages.manchester.ac.uk/staff/theodore.voronov/volumes.html
[ "Volumes of classical supermanifolds\n\nThis work started from a question by E. Witten, who asked me (in fall 2012), whether the volume of every compact (even) symplectic supermanifold with respect to the super analog of the Liouville measure should vanish (except for the case of ordinary symplectic manifolds). He had some arguments in favor of such a conjecture. I presented a counterexample, which is the volume of the complex projective superspace, the simplest nontrivial case being CP1|1 where the volume is 2π. The general formula for the volume of CPn|m of radius R (with respect to the super analog of the Fubini-Study metric) is R2(n-m)πn 2m / Γ(n-m+1).\n\nTo put Witten's question into context, it was long known (Berezin's theorem, 1970s) that the volume of the super analog of the unitary group, the unitary supergroup U(n|m), vanishes unless m=0 or n =0 (which is the case of an ordinary group, either U(n) or U(m)).\n\nThe above formula led me to the investigation of other formulas for \"super\" volumes. As it turns out, - an \"experimental fact\" - the volumes of various classical supermanifolds (upon some universal normalization) can be obtained by formulas that are analytic continuation of the formulas for the volumes of the corresponding ordinary manifolds. This looks as a mystery if one thinks about the formal algebraic nature of the Berezin integral in comparison with the ordinary measure theory. Also, this fact prompts for the investigation of a meaning of these \"normalized volumes\" in non-integer dimensions and in infinite dimension. There is another challenging connection with the \"universal Lie algebra\" program of Vogel-Deligne (see the works by R. Mkrtchyan, Mkrtchyan-Veselov and Mkrtchyan-Khudaverdian).\n\nReference:\n\nThere has been some new development in this area very recently, see a preprint by Stanford and Witten: arXiv:1907.03363. Among other things, they have deduced a formula for the Liouville volume of a symplectic supermanifold in topological terms (via the Chern class of the associated vector bundle, i.e. the normal bundle to the reduced submanifold). This prompts new very attractive paths for research." ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8986152,"math_prob":0.7939273,"size":1790,"snap":"2022-05-2022-21","text_gpt3_token_len":433,"char_repetition_ratio":0.12653975,"word_repetition_ratio":0.024911031,"special_character_ratio":0.21787709,"punctuation_ratio":0.08256881,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9862035,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-26T21:08:03Z\",\"WARC-Record-ID\":\"<urn:uuid:208a5199-8248-40a8-8f49-0b2ef1ae23ee>\",\"Content-Length\":\"4358\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e123d6a4-6b23-42f8-92f9-855057855998>\",\"WARC-Concurrent-To\":\"<urn:uuid:f48b4f09-db02-449b-a0eb-09f67d2ea39d>\",\"WARC-IP-Address\":\"130.88.101.51\",\"WARC-Target-URI\":\"https://personalpages.manchester.ac.uk/staff/theodore.voronov/volumes.html\",\"WARC-Payload-Digest\":\"sha1:64MFJWMHI2DOVZ26JZMQQFVEBQDWXYN4\",\"WARC-Block-Digest\":\"sha1:IKXN2REU5PSDEZZXKWDRSOW6IBWZSM6R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304961.89_warc_CC-MAIN-20220126192506-20220126222506-00438.warc.gz\"}"}
https://socratic.org/questions/how-do-you-calculate-the-following-to-the-correct-number-of-significant-figures
[ "# How do you calculate the following to the correct number of significant figures?\n\n## a) $\\frac{1.27 g}{5.296 m L}$ b) $\\frac{12.235 g}{1.01 L}$ c) $\\frac{17.3 g + 2.785 g}{30.20 m L}$\n\nOct 2, 2016\n\na)$0.240 \\frac{g}{m L}$\n\nb)$12.1 \\frac{g}{L}$\n\nc)$0.666 \\frac{g}{m L}$\n\n#### Explanation:\n\nWhen dividing, the quotient should have the same number of significant figures as the smaller number of sig figs in either the dividend or divisor.\n\nWhen adding, the sum should have the same number of signficant figures as the \"least accurate\" place in the addends.\n\na) $\\frac{1.27 g}{5.296 m L} = 0.2398 \\frac{g}{m L}$\n\nwhich I will round to $0.240 \\frac{g}{m L}$ because $1.27$ has only 3 significant figures.\n\nb)$\\frac{12.235 g}{1.01 L} = \\frac{12.114 g}{L}$\n\nwhich I will round to $12.1 \\frac{g}{L}$ because $1.01$ has only 3 sig figs.\n\nc)$\\frac{17.3 g + 2.785 g}{30.20 m L}$\n\nFirst add the numbers in the numerator.\n\n$17.3 + 2.785 = 20.085$ which I will round to $20.1$ because the \"least accurate\" place in either of the addends is the tenths place in $17.3$\n\nNext, complete the division.\n\n$\\frac{20.1 g}{30.20 m L} = 0.6656 \\frac{g}{m L}$\n\nwhich I will round to $0.666 \\frac{g}{m L}$ because $20.1$ has only 3 sig figs." ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6635625,"math_prob":0.9998404,"size":738,"snap":"2022-05-2022-21","text_gpt3_token_len":223,"char_repetition_ratio":0.13079019,"word_repetition_ratio":0.05982906,"special_character_ratio":0.2899729,"punctuation_ratio":0.13071896,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99998033,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-22T23:40:49Z\",\"WARC-Record-ID\":\"<urn:uuid:58fb8209-c4aa-450d-b02d-aee187563ad6>\",\"Content-Length\":\"35857\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f60e4802-4f54-401e-ac2a-a418d01b7b5f>\",\"WARC-Concurrent-To\":\"<urn:uuid:be1df3b9-cecc-4e66-a2ab-2cb5850e0dc6>\",\"WARC-IP-Address\":\"216.239.36.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-do-you-calculate-the-following-to-the-correct-number-of-significant-figures\",\"WARC-Payload-Digest\":\"sha1:5UQ5FBDSJVCLI7ZYE6GCAA2GYG3AY66I\",\"WARC-Block-Digest\":\"sha1:AIFCCUQ3B5T42VEBO4UFLCVDU6VZLOOO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662550298.31_warc_CC-MAIN-20220522220714-20220523010714-00145.warc.gz\"}"}
https://fr.scribd.com/document/318542976/Algebra-Reteachings-Lessons-1-10
[ "Vous êtes sur la page 1sur 20\n\nName\n\nDate\n\nClass\n\nReteaching\nClassifying Real Numbers\n\nA set is a collection of objects. You use several different number sets as\n\nyou study algebra.\nNatural Numbers\n\n{1, 2, 3, 4, }\n\nWhole Numbers\n\n{0, 1, 2, 3, 4,}\n\nIntegers\n\n{, 4, 3, 2, 1, 0, 1, 2, 3, 4,}\na where a and b are integers\nNumbers that can be written as __\nb\nand b0. (As decimals they terminate or repeat.)\na . (As decimals, they neither\nNumbers that cannot be written as __\nb\nterminate nor repeat.)\n\nRational Numbers\nIrrational Numbers\nReal Numbers\n\nIdentify the subsets of real numbers to\n\nwhich 17 belongs.\n\nIdentify the subsets of real numbers to\n\n\u0002\nwhich 5 belongs.\na ,so it is\nThis number cannot be written as __\nb\nan irrational number. All irrational numbers are\n\u0002\nalso real numbers, so 5 belongs to these two\nsets.\n\nThis number can be used to count things,\n\nso it is a natural number. All natural numbers\nare also whole numbers, integers, rational\nnumbers, and real numbers, so 17 belongs\nto these five sets.\n\nPractice\nIdentify the subsets of real numbers to which each number belongs.\n4\n1. __\n9\n\n2. 21\n\na , so it\nThis number can be written as __\nb\nis a rational number and a\n\nThis number is the opposite of a natural\n\nreal\n\nnumber, so it is an integer,\n\nnumber.\n\nnumber, and a\n\nreal\n\nrational\n\nnumber.\n\n3. 1.5\n\n4. 63,298\n\nrational, real\n\nrational, real\n6. 5\n\n5. 0\n\nwhole, integers, rational, real\n\nirrational, real\n\u0002\n\n3\n7. ___\n11\n\n8. 33\n\nirrational, real\n\nrational, real\nSaxon. All rights reser ved.\n\nSaxon Algebra 1\n\nReteaching\n\ncontinued\nClosure is a property of a set. A set is closed under a given operation\nif the outcome of the operation on any two members of the set is also\na member of the set. A counterexample is an example that proves\na statement false. Intersection \u0002 \u0002 \u0003 and union \u0002 \u0003 \u0003 are two ways\nto combine sets.\nFind A \u0002 B and A \u0003 B.\nA \u0005 {1, 2, 3, 6, 7, 8}; B \u0005 {2, 4, 6}\n\nDetermine whether the statement is true\n\nor false. Give a counterexample if the\nstatement is false.\nThe set of integers is closed under\nmultiplication.\n\nA \u0002 B \u0002 {2, 6}\n\nStep 2: Find the elements that are in A or B.\n\nA \u0003 B \u0002 {1, 2, 3, 4, 6, 7, 8}\n\n\u00033 \u0004 5 \u0002 \u000315\n\u00037 \u0004 \u00032 \u0002 14\nThe product is always an integer, so the\nstatement is true.\n\nPractice\nDetermine whether the statement is true or false.\nGive a counterexample if the statement is false.\n9. The set of natural numbers is closed under multiplication.\n\n8\n1\u00049\u0002 9\n7 \u0004 5 \u0002 35\nnatural number , so the statement is true\n\nThe product is always a\n\n2\u00044\u0002\n\nFind A \u0002 B and A \u0003 B.\n10. A \u0002 {1, 2, 3, 5, 7, 11}; B \u0002 {1, 3, 5, 7, 9}\n\n11. A = {4, 8, 12}; B = {0, 2, 4, 6, 8, 10}\n\n5, 7}\nA \u0003 B \u0002 {1, 2, 3, 5 , 7 , 9 , 11}\n\nA \u0002 B \u0002 {4,\n\nA \u0002 B \u0002 {1, 3,\n\n8}\n\nA \u0003 B \u0002 {0, 2, 4,\n\n8 , 10 , 12 }\n\nDetermine whether each statement is true or false.\n\nGive a counterexample if the statement is false.\n12. The set of rational numbers is closed under subtraction.\n\ntrue\n\n\u000310 \u0004 (\u00032) \u0002 \u0005 5 \u0002 5\n\n13. The set of irrational numbers is closed under division. false;\n\n2 \u0006 2 \u0005 1, not irrational\n\nFind A \u0002 B and A \u0003 B.\n14. 
A \u0002 {2, 6, 10, 14, 18}; B \u0002 {0, 2, 4, 6, 8}\n\nSaxon. All rights reser ved.\n\nA \u0002 B \u0005 {2, 6};\nA \u0003 B \u0005 {0, 2, 4, 6, 8, 10, 14, 18}\n2\n\nSaxon Algebra 1\n\nName\n\nDate\n\nClass\n\nReteaching\n\nUnderstanding Variables and Expressions\n\nConstants and variables are the building blocks of algebraic expressions.\n\nA constant is a quantity whose value does not change. A constant is\n\nusually a number. A variable can change, or vary, and is used to represent\nan unknown number. A letter is usually used to represent a variable.\n\nConstants\n7ab\n\n4c\n\nVariables\nIdentify the constants and variables in the\nexpression 9x 7.\n\nIdentify the constants and variables in the\n\nequation 8y xz 2.\n\nThe numbers 9 and 7 never change.\n\nThey are constants.\n\nThe numbers 8 and 2 never change.\n\nThey are constants.\n\nThe letter x represents an unknown\n\nnumber. It is a variable.\n\nThe letters x, y, and z represent\n\nunknown numbers. They are\nvariables.\n\nPractice\nIdentify the constants and variables.\n1. 11jk m\n\n2. 9z 2y 13\n\nThe number 11 is a constant because\n\nit never changes.\n\nThe numbers 9, 2, and 13\n\nare constants because they\nnever change.\n\nThe letters j, k, and m are\n\nvariables because they represent\nunknown numbers.\n\nThe letters z and y are variables\n\nbecause they represent unknown numbers.\n\n3. 3x 7y 20\n\n4. 9h 5fg 1\n\nconstants\n\nThe numbers 3, 7, and 20\n\nare constants because they never change.\n\nThe numbers 9, 5, and 1 are\n\nbecause they never change.\n\nThe letters x and y are variables\n\nbecause they represent unknow numbers.\n\nThe letters h, f, and g are variables\n\nbecause they represent unknown numbers.\n\n5. 24uv 3\nConstants:\n\n6. 100x 2wz\n\n24, 3\n\nSaxon. All rights reser ved.\n\nVariables:\n\nu, v\n\nConstants:\n3\n\n100, 2\n\nVariables:\n\nx, w, z\n\nSaxon Algebra 1\n\nReteaching\n\ncontinued\nConstants and variables can be multiplied to form a product. Each constant\nand variable in a product is a factor of the product. The product of a\nconstant and one or more variables is usually written without multiplication\nsymbols. For example, 6 \u0002 s \u0002 t is written as 6st.\nThe numeric factor in a product is called the coefficient. In the expression\n6st, the coefficient is 6 and the other factors are s and t.\nThe parts of an expression separated by or signs are called terms.\nIdentify the factors and coefficients\nin the expression 9ab.\n\n3xy \u0003 7uv \u0003 8.\n\nRecall: The terms are the parts of the\n\nexpression separated by or\nsigns.\n\nThe factors, or quantities, that are\n\nmultiplied are 9, a, and b.\nStep 2: Find the coefficients.\n\nThe coefficient is the numeric factor 9.\n\nPractice\nIdentify the factors and coefficients in each expression.\n7. 4gh\n\n8. yz\n\nThe factors, or quantities, that are\n\nmultiplied are \u00044, g, and h.\n\nThe factors, or quantities, that are\n\nmultiplied are y, and z .\n\n\u00044.\n\nThe coefficient is the numeric factor 1,\n\nwhich is implied in this case.\n\nIdentify the terms in each expression.\n\ny\n10. (9x 2) __\nxz 5xy\n\nb 8c\n9. 3a __\n4\nThe terms are the parts of the expression\nseparated by or signs. The terms are\n\nb , and 8c\n3a, __\n4\n\nThe terms are the parts of the expression\n\nseparated by or signs. The terms are .\n\ny\n(9x \u0003 2), __\nxz, and 5xy\n\nIdentify the factors and coefficients in each expression.\n\n3n\n11. ___\n5\nFactors:\n\n12. 
7xyz\n\n3, n\n__\n5\n\n; Coefficient:\n\n3\n__\n5\n\nFactors:\n\n\u00047, x, y, z; Coefficient:\n\n\u00047\n\nIdentify the terms in each expression.\n\n(3r 1)\n13. 12p _______ prs\n7s\n\n(3r \u0004 1)\nTerms: 12p, _______\n7s\n\n7jk\n14. ____ 9mk 6kn\n8mn\n\n7jk 9mk, 6kn\n\n8mn\n\nTerms: _____,\n\n, prs\n4\n\nSaxon Algebra 1\n\nName\n\nDate\n\nClass\n\nReteaching\nSimplifying Expressions Using the Product Rule of Exponents\n\nYou have used multiplication to simplify expressions. Now you will use\nmultiplication to simplify expressions with exponents. An exponent can\nbe used to show repeated multiplication.\nSimplify the expression 3 4.\n\nThe base is 3 and the exponent is 4.\n\nThe exponent 4 indicates that the\nbase 3 is used as a factor\nfour times.\n\nThe base is 0.2 and the exponent is\n\n3. The exponent 3 indicates that the\nbase 0.2 is used as a factor\nthree times.\n\nStep 2: Write the power as repeated\n\nmultiplication.\n34\u0002 3 \u0002 3 \u0002 3 \u0002 3\n\nStep 2: Write the power as repeated\n\nmultiplication.\n3\n(0.2) \u0002 (0.2) (0.2) (0.2)\n\nStep 3: Simplify.\n3 4 \u0002 3 \u0002 3 \u0002 3 \u0002 3 \u0002 81\n\nStep 3: Simplify.\n(0.2) 3 \u0002 (0.2) (0.2) (0.2) \u0002 0.008\n\nPractice\nComplete the steps to simplify each expression.\n1. 4\n\n2. (0.5) 4\n\nThe base is\nis 3 .\n43\u0002 4 \u0002\n\n4\u00024\n\n0.5\n\n(0.5) 4 \u0002 (0.5)\n\n4\u00024\u00024\n\n43 \u0002\n\nThe base is\nis 4 .\n\n64\n\n(0.5) 4 \u0002\n\n\u0003 0.0625\n\n512\n\n3. 8 3 \u0002\n\n\u0002 \u0003\u0002\n\n2\n5. __\n5\n\n7. 7 4 \u0002\n\n4. (1.2) 2 \u0002\n\n16\n____\n\n6. 10 5 \u0002\n\n625\n\n\u0002 \u0003\n\n3\n8. __\n4\n\n2401\n\n9. (3.1) 2 \u0002\n\n9.61\n\n10. 10 9 \u0002\n\n\u0002 \u0003\n\n1\n___\n\n12. (1.5) 2 \u0002\n\n1\n11. __\n2\n\nSaxon. All rights reser ved.\n\n64\n5\n\n1.44\n100,000\n27\n___\n64\n1,000,000,000\n2.25\nSaxon Algebra 1\n\nReteaching\ncontinued\n\nUse the product rule of exponents to find the product of powers whose\nbases are the same.\nProduct Rule of Exponents\nIf m and n are real numbers and x \u0002 0, then\nxm\u0002 xn\u0003 xm \u0004 n\nSimplify the expression x 2 \u0002 x 4 \u0002 x 3.\nStep 1: Identify the bases and the\nexponents.\nEach factor has x as the base.\nThe exponents are 2, 4, and 3.\nStep 2: Add the exponents to find the power\nof the product.\nx2 \u0002 x4 \u0002 x3 \u0003 x2\u0004 4 \u0004 3 \u0003 x9\n\nSimplify the expression x 2 \u0002 x 5\u0002 y 3 \u0002 y 4.\n\nStep 1: Identify the bases.\nThe first two factors have x as the base.\nThe last two factors have y as the base.\nStep 2: Add the exponents for the factors that\nhave x as the base. Then add the\nexponents for the factors that have\ny as the base.\n2\u00045\n\u0002 y3\u0004 4 \u0003 x7 y7\nx\n\nPractice\nSimplify each expression.\n3\n5\n13. y \u0002 y\n\n14. s 2 \u0002 s 7 \u0002 s 3 \u0002 t 4 \u0002 t 6\n\nare 3 and 5.\n\ny3\u0002 y5\u0003 y\n\n3\u00035\n\n\u0003y\n\nThe first three factors have S as the\n\nbase. The last two factors have\nt as the base.\n\n2\u00037 \u00033\n\n\u0002t\n\n4\u00036\n\n\u0003s\n\n12\n\n\u0002t\n\n10\n\nSimplify each expression.\n\n15. z 2 \u0002 z 3 \u0003\n17. u 15 \u0002 u 10 \u0003\n\nz5\n\n16. m 4 \u0002 m 2 \u0002 m 5 \u0003\n\nu 25\n\n18. z 20 \u0002 z 80 \u0003\n\n19. a 4 \u0002 a 5 \u0002 b 2 \u0002 b 3 \u0002 b 6 \u0003\n21. x 10 \u0002 x 4 \u0002 y 17 \u0002y 2 \u0003\n\na 9 \u0002b 11\n\nm 11\nz 100\n\n20. g 6 \u0002 g 2 \u0002 g 3 \u0002 h 4 \u0003\n22. 
b 4 \u0002 b 3 \u0002 f 2 \u0002 f 7 \u0002 g 5 \u0003\n\nx 14 \u0002y 19\n\ng 11 \u0002h 4\nb 7 \u0002f 9 \u0002g 5\n\n23. You are taking a true-false test that has two parts. There are 2 5\nways to answer the 5 questions in Part 1. There are 2 10 ways to\nanswer the 10 questions in Part 2. How many ways are there to\n\n32,768\nSaxon. All rights reser ved.\n\nSaxon Algebra 1\n\nName\n\nDate\n\nClass\n\nReteaching\nUsing Order of Operations\n\nYou have simplified expressions that contain one operation. Now you will\nuse the order of operations to simplify expressions that contain more\nthan one operation.\nOrder of Operations\n1. Work inside grouping symbols.\n2. Simplify powers and roots.\n3. Multiply and divide from left to right.\n4. Add and subtract from left to right.\n2\nWrite the expression 22 \u0002 (5 \u0003 3) \u0002 7 \u0003 2 . Then use the order of\noperations to simplify.\n\n22 \u0002 (5 \u0003 3) \u0002 \u00032\n\n\u0004 22 \u0002 2 \u0002 7 \u0003 2 2\n\nSimplify inside the parentheses.\n\n\u0004 22 \u0002 2 \u0002 7 \u00034\n\nSimplify exponents.\n\n\u0004 22 \u0002 14 \u0003 4\n\n\u0004 32\n\nAdd and subtract from left to right.\n\nPractice\nWrite the expression. Then use the order of operations to simplify.\n1. 3 \u0002 (5 \u0002 3 ) \u0003 8 \u0005 2\n\n2. 27 \u0005 3 2 \u0002 2 \u00035\n\n3 \u0002 (5 \u0002 3 ) \u0003 8 \u0005 2\n\u00043\u0002\n\n\u0004 24 \u0003\n\n27 \u0005\n\n\u00038\u00052\n\n\u0002 2 \u00035\n\n\u00022\u00035\n\n\u0004 20\nWrite the expression. Then use the order of operations to simplify.\n3. (6 \u0002 10) \u0005 2 \u0002 5 \u0003 1 \u0004\n5. 7 2 \u0002 4 2 \u0002 3 \u0004\n\n39\n\n97\n\n6. 3 \u0002 3 \u0005 3 \u0002 3 \u0004\n\n18\n\n7. 25 \u0003 8 \u0002 2 \u0002 3 2 \u0004\n9. 3 \u0002 6 \u0003 4 2 \u0005 2 \u0004\n\n8. 6 \u0003 10 \u0005 5 \u0004\n\n10\n\n10. 4 \u0002 7 \u0002 4 \u0005 2 2 \u0004\n\n29\n\n11. 40 \u0003 2 \u0002 3 2 \u0004\n\n22\n\n12. 8 \u0002 12 \u0005 6 \u0003 3 \u0004\n\n13. 5 \u0002 3 2 \u0003 13 \u0004\n\n32\n\n14. 21 \u0002 49 \u0005 7 \u0002 1 \u0004\n\nSaxon. All rights reser ved.\n\n22\n\n4. (6 \u0002 12) \u0005 3 \u0002 4 2 \u0004\n\n7\n29\nSaxon Algebra 1\n\nReteaching\n\ncontinued\nCompare the expressions. Use \u0002, \u0003, or \u0004.\n2 \u0005 4 2 \u0006 10\n\n\u0002 6 \u0006 2 4\u0006 2\n\nStep 1: Use the order of operations to simplify each expression.\n\n2 \u0005 4 2 \u000610\n6 \u0007 2 4 \u00062\n2 \u0005 16 \u000610\n3 4\u0006 2\n18 \u000610\n12 \u0006 2\n8\n10\nStep 2: Compare using \u0002, \u0003, or \u0004.\n8 \u0002 10\nPractice\nCompare the expressions. Use \u0002, \u0003, or \u0004.\n\n\u0002 20 \u00064 \u0005 3\n\n15. 6 5 \u0006 16 \u0006 8 \u0005 4\n6 5 \u000616 \u0007 8 \u0005 4\n30 \u0006\n\n16. 5 2 2 \u0005 28 \u0007 7\n\n20 \u00064 \u0005 3\n\n5 2 \u0005 28 \u0007 7\n\n20 \u0006 16 \u0005 3\n\n2 \u00054\n28 \u0005 4\n32\n\n\u0002 14 \u0007 7 \u0005 4 2\n\n14 \u0007 7 \u0005 4\n\n\u00066\n\n2 \u00054 8 \u00066\n2 \u0005 32 \u0006 6\n34 \u0006 6\n\n24\n\n\u00027\n\n\u00066\n\n14 \u0007 7 \u0005 4 2 \u0006 6\n\n4 \u0005 28 \u0007 7\n20 \u0005 28 \u0007 7\n20 \u0005 4\n\n\u00053\n\n28\n\n\u0002 28\n\n24 \u0002\n\n32 \u0003\n\nCompare the expressions. Use \u0002, \u0003, or \u0004.\n\n18. 3 5 \u0006 (4 \u0005 2) \u0006 3 \u0002\n\u0002 24\u00056\u00072\n\u0002 10 \u0007 2 \u0005 2 (7 \u0006 4)\n\u0002\n19. 6 (2 \u0005 4) \u0007 3 \u0006 5 \u0002\n\u0003 12 \u0007 (8 \u0007 2) (3 \u0007 3) 20. 3 \u0005 4 3 \u0005 4 \u0002\n\u0003 3 \u000543\u00064\n21. 
(20 \u0005 12) \u0007 (4 \u0005 4) \u0002\n\u0004 3 2 \u0007 (6 \u0006 3) 22. 42 \u0007 7 \u0005 2 \u0002\n\u0003 3 \u0007 (2 \u0005 1)\n23. 5 \u0006 3 5 \u0005 6 \u0002\n24. (5 \u0005 3) \u0007 2 \u0005 2 \u0002\n\u0002 6 \u0006 (2 \u0005 4) 3\n\u0003 (3 \u0005 6) \u0007 3\n25. 54 \u0005 36 \u0007 3 \u0006 30 \u0002\n\u0003 12 \u0005 3 3 \u0006 28 26. 20 \u0007 4 5 2 \u0007 10 \u0002\n\u0003 (2 \u0005 4) \u0007 12 \u0005 1\n27. (3 \u0005 6) \u0007 3 \u0002\n28. 6 \u0006 (2 \u0005 4) 3 \u0002\n\u0002 5 \u000635\u00056\n\u0003 (5 \u0005 3) \u0007 2 \u0005 2\n17. 2 (7 \u0005 3 ) \u0007 4\n\n29. A rectangular swimming pool is 5 feet deep, 17 feet long, and 12 feet\nwide. Its volume is (5 17 12) cubic feet. A circular swimming pool\nhas a radius of 8 feet and is 6 feet deep. Its volume is about\n2\n(3.14 8 6) cubic feet. What is the difference in volume\nbetween the two pools?\n\nSaxon. All rights reser ved.\n\nSaxon Algebra 1\n\nName\n\nDate\n\nClass\n\nReteaching\nFinding Absolute Value and Adding Real Numbers\n\nYou have classified real numbers and used them in expressions. Now\nyou will find the absolute values of real numbers.\nAbsolute Value\nThe absolute value of a number is the distance from the number to 0 on a number line.\nThe absolute value of 2 is written as \u0002 2 \u0002.\n\n\u0002 \u0002\n\n3 .\nSimplify __\n4\n\nSimplify \u0002 83 \u0002.\n\n3\n4\n\n-1 -3 -1 -1\n4\n2\n4\n\n1\n4\n\n1\n2\n\n3\n4\n\n8\u00023 \u0003 5\n\n5\n\n3 unit from 0 on the number line.\n\n3 is __\n\u0002__\n4 4\n3.\n3 is __\nThe absolute value of \u0002__\n4 4\n\n-1\n\n5 is 5 units from zero on the number line.\n\nThe absolute value of 8\u00023 is 5.\nPractice\nComplete the steps to simplify each expression.\n1. \u0002 \u00022.5 \u0002\n\n2. \u0002\u0002 10\u00027 \u0002\n10\u00027 \u0003\n\n2.5\n\n3\n3\n\n-3\n\n-2\n\n-1\n\n1\n-3 -2 1\n\n\u00022.5 .is\nline.\n\n2.5\n\n\u0002 \u00022.5 \u0002 \u0003\n\nnumber line.\n\u0002 10\u00027 \u0002 \u0003 3\n\n2.5\n\n\u0002\u0002 10\u00027 \u0002 \u0003\n\nSimplify.\n3. \u0002 8.2 \u0002 \u0003\n5.\n\n8.2\n\n1 \u0003\n2\u0002__\n3\n\n7. \u0002\u0002 12\u00024 \u0002 \u0003\n\n2\n1__\n3\n8\n\n4. \u0002 \u000221 \u0002 \u0003\n\n21\n\n6. \u0002\u0002 5.4 \u0002 \u0003\n\n5.4\n\n8.\n\n9 \u0003\n\u0002___\n11\n\n9\n___\n11\nSaxon Algebra 1\n\nReteaching\ncontinued\n\nAdd the numbers absolute values and use\n\nthe same sign as the numbers.\n\nFind the difference of the numbers absolute\n\nvalues and use the sign of the number with the\ngreater absolute value.\n\nStep 1: Find the difference of the absolute\n\n17 \u0002 5 \u0003 22\nStep 2: Use the sign of the numbers. The\nsign is negative.\n\nvalues 23 \u0004 9 \u0003 14.\nStep 2: Find the sign of the number with the\nlargest absolute value. \u0002 23 \u0002 \u0005 \u0002 \u00049 \u0002, so\nthe sign is positive.\n\n(\u000417) \u0002 (\u00045) \u0003 \u000422\n\n23 \u0002 (\u00049) \u0003 14\n\nPractice\nComplete the steps to find each sum.\n\n\u0003 \u0004 \u0003 \u0004\n\n9. (\u000415) \u0002 11\n15 \u0004 11 \u0003\n\n1 \u0002\n1\n10. \u0004__\n\u0004__\n4\n2\n\n1 \u0002 __\n1\u0003\n__\n\n\u0002 \u000415 \u0002\u0005\u0002 11 \u0002 so the sign of the answer\n\nis\n\nnegative.\n\n3\n__\n4\n\nnegative,\nso the sign of the answer is negative.\n\nThe sign of the numbers is\n\n(\u000415) \u0002 11 \u0003\n\n\u0003\u0004__14 \u0004 \u0002 \u0004__12 \u0003\n\n3\n__\n4\n\nFind each sum.\n\n11. \u0003 \u000416 \u0004 \u0002 3 \u0003\n\n13\n\n3 \u0002 __\n4\u0003\n13. \u0004___\n10\n5\n\n1\n__\n\n14. 
[Problems 15-16 and the fragment before them are too garbled in the scan to recover.]

17. Jenna saved \$28 for a new CD. She bought it on sale for \$19. Use addition to find how much money she has left.  \$9

18. In one hour, a hot-air balloon rose 600 feet. In the next hour, the balloon dropped 950 feet. Use addition to find the change in the elevation of the balloon in the two-hour period.  -350 ft

Reteaching
Subtracting Real Numbers

You have added real numbers. Now you will subtract real numbers.
Two numbers with the same absolute value but different signs are called opposites. The opposite of a number is also called the additive inverse. The sum of a number and its additive inverse is 0.
To subtract a real number, add its opposite (its additive inverse).

Find the difference (-6) - 12.
Step 1: Write the subtraction as addition of the opposite: (-6) + (-12)
Step 2: Find the sum of the absolute values: 6 + 12 = 18
Step 3: Both addends are negative, so the sign is negative.
(-6) - 12 = -18

Find the difference (-8) - (-15).
Step 1: Write the subtraction as addition of the opposite: (-8) + 15
Step 2: Find the difference of the absolute values: 15 - 8 = 7
Step 3: Find the sign of the number with the largest absolute value: |15| > |-8|, so the sign is positive.
(-8) - (-15) = 7

Practice
Complete the steps to find each difference.
1. (-6) - (-11) = (-6) + 11; 11 - 6 = 5; |-11| > |-6|, so the sign of the answer is positive: (-6) - (-11) = 5
2. (-19) - 3 = (-19) + (-3); 19 + 3 = 22; the sign of both numbers is negative, so the sign of the answer is negative: (-19) - 3 = -22

Find each difference.
3. -17 - (-12) = -5
4. 25 - (-32) = 57
5. -5/8 - 1/4 = -7/8
6. 5/6 - (-1/12) = 11/12
7. (-9.1) - 2.6 = -11.7
8. 7 - 26 = -19
9. For safety, scuba divers usually do not dive deeper than 40 meters below sea level. A diver in a helmet suit can safely dive about 21 meters deeper than a scuba diver. What is the maximum safe depth for a helmet suit diver in relation to sea level?  -61 m

Reteaching continued
Closure

A set is closed under a given operation if the outcome of the operation on any two members of the set is also a member of the set.
Determine whether the statement is true or false. Give a counterexample if the statement is false.
The set of natural numbers is closed under subtraction.
Subtract two natural numbers: 3 - 2 = 1, 4 - 2 = 2, 2 - 3 = -1.
The last statement is a counterexample, so the statement is false.

Practice
Complete the steps to determine whether the statement is true or false. Write a counterexample if the statement is false.
10. The set A = {-1, 0, 1} is closed under subtraction.
Subtract two numbers in the set: 1 - 0 = 1, 0 - 1 = -1, -1 - 1 = -2. Since -2 is not in A, the statement is false.

Determine whether each statement is true or false. Give a counterexample for false statements.
11. The set of even integers, {..., -4, -2, 0, 2, 4, ...}, is closed under subtraction.  true
12. The set of irrational numbers is closed under subtraction.  false; sample counterexample: √2 - √2 = 0
13. The set of odd integers plus zero, {..., -5, -3, -1, 0, 1, 3, 5, ...}, is closed under subtraction.  false; sample counterexample: 3 - 1 = 2
14. The set of integers that are a multiple of 4, {..., -12, -8, -4, 0, 4, 8, 12, ...}, is closed under subtraction.  true

Reteaching
Simplifying and Comparing Expressions with Symbols of Inclusion

You have used the order of operations to simplify expressions. Now you will apply this concept to simplifying expressions within symbols of inclusion.
Symbols of inclusion indicate which numbers, variables, and operations are parts of the same term. Some symbols of inclusion are fraction bars, absolute value symbols, parentheses, braces, and brackets.

Simplify 3[11 + (12 - 9)²] + 7. Justify each step.
3[11 + (12 - 9)²] + 7
= 3[11 + 3²] + 7    Subtract inside the parentheses.
= 3[11 + 9] + 7     Evaluate the exponent.
= 3 × 20 + 7        Add inside the brackets.
= 60 + 7            Multiply.
= 67                Add.

Practice
Complete the steps to simplify the expression.
1. (6z × 4)/3 + [2z - |5 - 7|]
= (6z × 4)/3 + [2z - |-2|]    Subtract inside the absolute value symbols.
= (6z × 4)/3 + [2z - 2]       Simplify the absolute value.
= 24z/3 + 2z - 2              Simplify the numerator.
= 8z + 2z - 2                 Simplify the fraction.
= 10z - 2                     Simplify.

Simplify.
2. |11 - 19| + 15 = 23
4. [(9 - 2)² - 2(13 - 4)] + 3 = 34
5. 5[24 - (11 - 9)³] ÷ 4 = 20
[Problems 3, 6 and 7 use operators and fraction bars that did not survive the scan.]

Compare the expressions. Use <, >, or =.
[21 - 3(9 - 5)²] + 50  ___  (12 - 7)² - [19 - 3(21 - 15)]
Simplify each expression. Then compare.
[21 - 3(9 - 5)²] + 50 = [21 - 3(4)²] + 50 = [21 - 3 × 16] + 50 = [21 - 48] + 50 = [-27] + 50 = 23
(12 - 7)² - [19 - 3(21 - 15)] = (5)² - [19 - 3(6)] = 25 - [19 - 18] = 25 - 1 = 24
Since 23 < 24, [21 - 3(9 - 5)²] + 50 < (12 - 7)² - [19 - 3(21 - 15)].

Practice
Complete the steps to simplify each expression. Compare the expressions. Use <, >, or =.
8. 2(17 - 8) - [3² - (8 - 6)]  ___  [3(-2)³] + 4(5 + 2)
2(17 - 8) - [3² - (8 - 6)] = 2(17 - 8) - [9 - 2] = 2(17 - 8) - 7 = 2(9) - 7 = 18 - 7 = 11
[3(-2)³] + 4(5 + 2) = [3(-8)] + 4(5 + 2) = (-24) + 4(7) = -24 + 28 = 4
Since 11 > 4, the first expression is greater.

Compare the expressions. Use <, >, or =.
[Problems 9-11 are three more comparison exercises whose operators did not survive the scan.]

Reteaching
Using Unit Analysis to Convert Measures

You have used ratios to compare two quantities. Now you will use unit ratios to convert measures into different units.
Unit analysis is the process of converting measures into different units.

The peregrine falcon can reach speeds up to 200 miles per hour. How fast is this in yards per hour?
Step 1: Identify the known and missing information.  200 mi / 1 hour = ? yd / 1 hour
Step 2: Equate units.  1 mi = 1,760 yd, so the unit ratio is 1 mi / 1,760 yd, or 1,760 yd / 1 mi.
Step 3: Write the multiplication sentence. Then multiply.
(200 mi / 1 hr) × (1,760 yd / 1 mi)    Cancel out common factors.
= (200 × 1,760 yd) / 1 hr              Multiply.
= 352,000 yd / 1 hr                    Write the ratio of yards per hour.
The peregrine falcon can reach speeds up to 352,000 yards per hour.

Practice
1. Alberto Contador won the 2007 Tour de France with an average speed of about 39 kilometers per hour. What was Alberto's average speed in meters per hour?
(39 km / 1 hr) × (1,000 m / 1 km) = (39 × 1,000 m) / 1 hr = 39,000 m per hour
2. Some elephants can eat up to 660 pounds of food per day. How much food can an elephant eat in tons per day? One ton is equal to 2000 pounds.  0.33 t/d
3. A sprinkler with a flow rate of 2 gallons per minute is watering a lawn. What is the flow rate of the sprinkler in gallons per hour?  120 gal/h

Reteaching continued

Remember that area is measured in square units and volume is measured in cubic units.
A covered patio measures 6.25 yards by 5 yards. What is the area of the patio in square feet?
Step 1: Find the area of the patio.  6.25 yd × 5 yd = 31.25 yd², so the conversion is 31.25 yd² = ? ft².
Step 2: Equate units.  1 yd = 3 ft, so the unit ratio is 1 yd / 3 ft, or 3 ft / 1 yd.
Step 3: Write the multiplication sentence. Then multiply.
31.25 yd × yd × (3 ft / 1 yd) × (3 ft / 1 yd) = 31.25 × 3 ft × 3 ft = 281.25 ft²
The area of the outdoor patio is 281.25 square feet.

Practice
4. Mr. Greene's yard is 50 feet by 20 feet. He wants to buy sod to cover his yard. Each piece of sod is 1-yard square. What is the area of Mr. Greene's yard in square yards?
50 ft × 20 ft = 1,000 ft²; 1,000 ft × ft × (1 yd / 3 ft) × (1 yd / 3 ft) = 1,000/9 yd² = 111 1/9 square yards
5. An interior room is 12 feet by 17 feet. Carpet pieces are 1-yard square. How many square yards of carpet must be purchased to cover the floor of the room?  22 2/3 yd²
6. A hose with a flow rate of 15 cubic feet per hour is filling a large aquarium. What is the flow rate of the hose in cubic inches per hour?  25,920 in³/h

Reteaching
Evaluating and Comparing Algebraic Expressions

You have simplified expressions containing only numbers and operations. Now you will evaluate expressions that contain numbers and/or variables. These expressions are called algebraic expressions.

Evaluate 6p + 3n - np for n = 2 and p = 1.
Step 1: Substitute 2 for n and 1 for p in the expression: 6(1) + 3(2) - (2)(1)
Step 2: Simplify using the order of operations: 6 + 6 - 2 = 10

Evaluate 2(b - a)³ + 5b² for a = 1 and b = 3.
Step 1: Substitute 1 for a and 3 for b in the expression: 2(3 - 1)³ + 5(3)²
Step 2: Simplify using the order of operations: 2(2)³ + 5(3)² = 2 × 8 + 5 × 9 = 16 + 45 = 61

Practice
Complete the steps to evaluate each expression for the given values.
1. -8c + 4a + ac; a = 3, c = -2:  -8(-2) + 4(3) + (3)(-2) = 16 + 12 - 6 = 22
2. 2y² + 3x² - 4y; x = -3, y = 5:  2(25) + 3(9) - 4(5) = 50 + 27 - 20 = 57

Evaluate each expression for the given values.
3. 3b + ab + 2; a = 5, b = 1:  10
4. 5(c + d) + 6(c + 2d); c = 4, d = 1:  61
5. 7x - 2y + 3xy; x = 5, y = 2:  61
6. 2st + t² - 4s; s = -3, t = -2:  28
7. m + p³ - 7p; m = 10, p = 2:  4
8. q - r² + 2(4 + q); q = 6, r = -1:  25
9. A cable company charges a \$36 monthly fee and then \$2.99 for each movie ordered. They use the expression 36 + 2.99m, where m is the number of movies ordered, to find the total amount to charge for each month. How much would the cable company charge for the month of June if three movies were ordered?  \$44.97

Reteaching continued

Compare b² + 4ab - 3b³ and -2a³b when a = 2 and b = -1. Use <, >, or =.
Step 1: Substitute 2 for a and -1 for b in the expressions.
(-1)² + 4(2)(-1) - 3(-1)³        -2(2)³(-1)
Step 2: Simplify using the order of operations.
(-1)² + 4(2)(-1) - 3(-1)³ = 1 + (8)(-1) + 3 = 1 + (-8) + 3 = -4
-2(2)³(-1) = -2(8)(-1) = -16(-1) = 16
Step 3: Compare using <, >, or =.
Since -4 < 16, b² + 4ab - 3b³ < -2a³b when a = 2 and b = -1.

Practice
Complete the steps to compare the expressions when x = 7 and y = 2. Use <, >, or =.
10. 2(x + y) + 3x - 0.5y  ___  12x + xy
2(x + y) + 3x - 0.5y = 2(9) + 3(7) - 0.5(2) = 18 + 21 - 1 = 38
12x + xy = 12(7) + (7)(2) = 84 + 14 = 98
2(x + y) + 3x - 0.5y < 12x + xy when x = 7 and y = 2.

Compare the expressions for the given values. Use <, >, or =.
[Problems 11-14 are four more comparison exercises whose operators did not survive the scan.]
15. Cell phone company A charges a \$30 monthly fee and 15 cents per minute. They use the expression 30 + 0.15m to find the total amount to charge for each month. Cell phone company B charges a \$25 monthly fee and 17 cents per minute. They use the expression 25 + 0.17m to find the total amount to charge for each month. Which cell phone company charges less for 300 minutes during a month?  A

Reteaching 10

You have solved problems by adding and subtracting pairs of numbers. Now you will solve problems by adding and subtracting three or more rational numbers.

Simplify -6/7 - 2/7 - 1/7 + 3/7.
Step 1: Write the problem as addition: -6/7 + (-2/7) + (-1/7) + 3/7
Step 2: Group the terms with like signs: [-6/7 + (-2/7) + (-1/7)] + 3/7 = -9/7 + 3/7 = -6/7

Simplify 2.14 - 0.22 + 5.25 + (-3.81).
Step 1: Write the problem as addition: 2.14 + (-0.22) + 5.25 + (-3.81)
Step 2: Group the terms with like signs: 2.14 + 5.25 + (-0.22) + (-3.81) = 7.39 - 4.03 = 3.36

Practice
Complete the steps to simplify each expression.
1. [A guided sum of ninths; the printed result is 5/9.]
2. 7.24 - 2.78 - 3.4 + 5.12 = 7.24 + 5.12 + (-2.78) + (-3.4) = 12.36 - 6.18 = 6.18

Simplify.
3. 4/5 - 1/5 + (-3/5) = 0
4. -(-3/11) + 4/11 - 5/11 = 2/11
5. -2/7 + 5/7 - (-6/7) + 4/7 = 13/7, or 1 6/7
[Problems 6-9 are four more sums; the printed answers include -5/13, 5.33, 10.49 and 6.02.]
10. 34.19 - (-21.7) + 3.79 - 15.2 = 44.48
11. Mrs. Lewis has \$156 in her checking account. She writes two checks for \$31.19 and \$15.76 and makes one deposit for \$119. What is her new balance?  \$228.05

Reteaching 10 continued

Order the numbers from least to greatest: 7/10, -0.6, 2/5, 1.
Step 1: Place each number on a number line.
Step 2: To order the numbers, read the numbers on the number line from left to right.
-0.6, 2/5, 7/10, 1

Practice
Complete the steps to order the numbers from least to greatest.
12. -1.75, 0.3, 1/4, -4/5:  -1.75, -4/5, 1/4, 0.3
13. 3/5, -1.25, -5/10, 0.4:  -1.25, -5/10, 0.4, 3/5

Order from least to greatest.
14. -1/3, -4/3, -1.4, -2:  -2, -1.4, -4/3, -1/3
15. -0.9, 9/10, -1, 0.05:  -1, -0.9, 0.05, 9/10
16. -5.8, -7, -3 5/8, -3.025:  -7, -5.8, -3 5/8, -3.025
17. -3/2, 5/6, -1.2, -0.8:  -3/2, -1.2, -0.8, 5/6
18. -2 3/4, 7/5, 1.45, -2.7:  -2 3/4, -2.7, 7/5, 1.45
19. -1.75, 8/10, -1.5, -0.3:  -1.75, -1.5, -0.3, 8/10
20. Terra is deep-sea diving. She descends 20 feet below sea level, ascends 7 feet, descends 15 feet and ascends 4 feet. What is her current position in relation to sea level?  -24 feet (24 feet below sea level)" ]
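The unit-analysis conversions in the lesson above are just multiplications by unit ratios, so they are easy to check with a few lines of code. The Python sketch below is an editorial illustration (it is not part of the Saxon worksheet); the function name and the listed conversion factors are my own.

```
# Check the worked unit-analysis examples: multiply by a unit ratio.
def convert(value, factor):
    """Multiply a measurement by a unit ratio (new units per old unit)."""
    return value * factor

print(convert(200, 1760))        # 200 mi/h   -> 352,000 yd/h   (1 mi = 1,760 yd)
print(convert(39, 1000))         # 39 km/h    -> 39,000 m/h     (1 km = 1,000 m)
print(convert(6.25 * 5, 3 * 3))  # 31.25 yd^2 -> 281.25 ft^2    (1 yd^2 = 9 ft^2)
print(convert(15, 12 ** 3))      # 15 ft^3/h  -> 25,920 in^3/h  (1 ft^3 = 1,728 in^3)
print(convert(50 * 20, 1 / 9))   # 1,000 ft^2 -> ~111.1 yd^2    (1 ft^2 = 1/9 yd^2)
```

Squaring or cubing the length ratio is the step the lesson emphasizes: area conversions use the square of the unit ratio and volume conversions use its cube.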
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.69750786,"math_prob":0.9864321,"size":15689,"snap":"2019-26-2019-30","text_gpt3_token_len":6592,"char_repetition_ratio":0.12712783,"word_repetition_ratio":0.094000556,"special_character_ratio":0.4697559,"punctuation_ratio":0.15094776,"nsfw_num_words":4,"has_unicode_error":false,"math_prob_llama3":0.9623776,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-17T03:46:39Z\",\"WARC-Record-ID\":\"<urn:uuid:04b7b83d-f9b3-48a7-bc1f-020b88dae947>\",\"Content-Length\":\"335385\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5f32c4c8-6e4f-43dc-83b1-bc508fac1143>\",\"WARC-Concurrent-To\":\"<urn:uuid:362a0fd8-3b5b-4553-a93e-df1c427a8ab2>\",\"WARC-IP-Address\":\"151.101.250.152\",\"WARC-Target-URI\":\"https://fr.scribd.com/document/318542976/Algebra-Reteachings-Lessons-1-10\",\"WARC-Payload-Digest\":\"sha1:YPKNKDVFRFFA36XJUEQ5FQEKOEFYTIVD\",\"WARC-Block-Digest\":\"sha1:TALXDPOIJUAGDJARHSULJZCBMCAE4NWV\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998369.29_warc_CC-MAIN-20190617022938-20190617044938-00271.warc.gz\"}"}
https://math.stackexchange.com/questions/2737286/given-an-overdetermined-system-of-linear-equation-find-a-subset-that-can-be-sol
[
"# Given an overdetermined system of linear equations, find a subset that can be solved with an exact solution

Given a set A consisting of the equations of an overdetermined linear system.

Find $$B \subset A$$ such that B has x equations in x unknowns and has an exact solution.

For example:

In a system with 4 unknowns and 7 equations, you can solve this by trying every choice of 4 distinct equations out of the 7 and then checking whether it is solvable.

But the number of combinations becomes really big as your overdetermined system grows.

Is there a correct way to do this? Is linear programming an option, and if so, how do you turn this into a linear programming problem?

• By RREF we can select the independent equations. – user Apr 14 '18 at 20:37" ]
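Both the brute-force search described in the question and the comment's RREF idea can be sketched quickly in NumPy. This is an illustration added here, not code from the original post; the random 7×4 test system and the function name are my own.

```
import numpy as np
from itertools import combinations

def square_solvable_subsets(A, b, tol=1e-10):
    """Yield (row indices, solution) for every n-equation subset of the
    m x n system A x = b whose square submatrix has full rank."""
    m, n = A.shape
    for rows in combinations(range(m), n):
        sub = A[list(rows)]
        if np.linalg.matrix_rank(sub, tol=tol) == n:   # independent equations
            yield rows, np.linalg.solve(sub, b[list(rows)])

# 7 equations in 4 unknowns, constructed to be consistent
rng = np.random.default_rng(0)
A = rng.normal(size=(7, 4))
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true

rows, x = next(square_solvable_subsets(A, b))
print(rows, np.allclose(x, x_true))   # first full-rank subset, and True
```

The brute force inspects C(m, n) subsets, which is exactly the blow-up the question worries about; row-reducing the augmented matrix [A | b] once instead exposes a set of pivot rows (independent equations) directly and also reveals whether the full system is consistent at all.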
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9546279,"math_prob":0.99414116,"size":572,"snap":"2020-45-2020-50","text_gpt3_token_len":129,"char_repetition_ratio":0.13028169,"word_repetition_ratio":0.0,"special_character_ratio":0.22377622,"punctuation_ratio":0.09565217,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99974996,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-21T07:59:57Z\",\"WARC-Record-ID\":\"<urn:uuid:ce9d626c-a647-49e6-b61c-af4bd8b4b159>\",\"Content-Length\":\"143553\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ea48f950-7ca8-4fc0-ab26-950c71dacd06>\",\"WARC-Concurrent-To\":\"<urn:uuid:9b04595a-0f1b-4fbc-a1ae-a8dcbb7f611b>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2737286/given-an-overdetermined-system-of-linear-equation-find-a-subset-that-can-be-sol\",\"WARC-Payload-Digest\":\"sha1:PK7OAMZHX24YXN32H7K7XTNSADOEUXWR\",\"WARC-Block-Digest\":\"sha1:GD22QFVU7VPU2HI42VXEYQ7WPICARHOJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107876136.24_warc_CC-MAIN-20201021064154-20201021094154-00692.warc.gz\"}"}
https://en.wikipedia.org/wiki/Matter_wave
[ "# Matter wave\n\nMatter waves are a central part of the theory of quantum mechanics, being an example of wave–particle duality. All matter exhibits wave-like behavior. For example, a beam of electrons can be diffracted just like a beam of light or a water wave. In most cases, however, the wavelength is too small to have a practical impact on day-to-day activities. Hence in our day-to-day lives with objects the size of tennis balls and people, matter waves are not relevant.\n\nThe concept that matter behaves like a wave was proposed by Louis de Broglie (/dəˈbrɔɪ/) in 1924. It is also referred to as the de Broglie hypothesis. Matter waves are referred to as de Broglie waves.\n\nThe de Broglie wavelength is the wavelength, λ, associated with a massive particle (i.e., a particle with mass, as opposed to a massless particle) and is related to its momentum, p, through the Planck constant, h:\n\n$\\lambda ={\\frac {h}{p}}={\\frac {h}{mv}}.$", null, "Wave-like behavior of matter was first experimentally demonstrated by George Paget Thomson's thin metal diffraction experiment, and independently in the Davisson–Germer experiment both using electrons, and it has also been confirmed for other elementary particles, neutral atoms and even molecules.\n\n## Historical context\n\nAt the end of the 19th century, light was thought to consist of waves of electromagnetic fields which propagated according to Maxwell's equations, while matter was thought to consist of localized particles (see history of wave and particle duality). In 1900, this division was exposed to doubt, when, investigating the theory of black-body radiation, Max Planck proposed that light is emitted in discrete quanta of energy. It was thoroughly challenged in 1905. Extending Planck's investigation in several ways, including its connection with the photoelectric effect, Albert Einstein proposed that light is also propagated and absorbed in quanta; now called photons. These quanta would have an energy given by the Planck–Einstein relation:\n\n$E=h\\nu$", null, "and a momentum\n\n$p={\\frac {E}{c}}={\\frac {h}{\\lambda }}$", null, "where ν (lowercase Greek letter nu) and λ (lowercase Greek letter lambda) denote the frequency and wavelength of the light, c the speed of light, and h the Planck constant. In the modern convention, frequency is symbolized by f as is done in the rest of this article. Einstein's postulate was confirmed experimentally by Robert Millikan and Arthur Compton over the next two decades.\n\n## de Broglie hypothesis", null, "Propagation of de Broglie waves in 1d – real part of the complex amplitude is blue, imaginary part is green. The probability (shown as the color opacity) of finding the particle at a given point x is spread out like a waveform; there is no definite position of the particle. As the amplitude increases above zero the curvature decreases, so the amplitude decreases again, and vice versa. The result is an alternating amplitude: a wave. Top: plane wave. Bottom: wave packet.\n\nDe Broglie, in his 1924 PhD thesis, proposed that just as light has both wave-like and particle-like properties, electrons also have wave-like properties. 
By rearranging the momentum equation stated in the above section, we find a relationship between the wavelength, λ associated with an electron and its momentum, p, through the Planck constant, h:\n\n$\\lambda ={\\frac {h}{p}}.$", null, "The relationship is now known to hold for all types of matter: all matter exhibits properties of both particles and waves.\n\nWhen I conceived the first basic ideas of wave mechanics in 1923–1924, I was guided by the aim to perform a real physical synthesis, valid for all particles, of the coexistence of the wave and of the corpuscular aspects that Einstein had introduced for photons in his theory of light quanta in 1905.\n\n— de Broglie\n\nIn 1926, Erwin Schrödinger published an equation describing how a matter wave should evolve – the matter wave analogue of Maxwell's equations — and used it to derive the energy spectrum of hydrogen.\n\n## Experimental confirmation\n\nMatter waves were first experimentally confirmed to occur in George Paget Thomson's cathode ray diffraction experiment and the Davisson-Germer experiment for electrons, and the de Broglie hypothesis has been confirmed for other elementary particles. Furthermore, neutral atoms and even molecules have been shown to be wave-like.\n\n### Electrons\n\nIn 1927 at Bell Labs, Clinton Davisson and Lester Germer fired slow-moving electrons at a crystalline nickel target. The angular dependence of the diffracted electron intensity was measured, and was determined to have the same diffraction pattern as those predicted by Bragg for x-rays. At the same time George Paget Thomson at the University of Aberdeen was independently firing electrons at very thin metal foils to demonstrate the same effect. Before the acceptance of the de Broglie hypothesis, diffraction was a property that was thought to be exhibited only by waves. Therefore, the presence of any diffraction effects by matter demonstrated the wave-like nature of matter. When the de Broglie wavelength was inserted into the Bragg condition, the observed diffraction pattern was predicted, thereby experimentally confirming the de Broglie hypothesis for electrons.\n\nThis was a pivotal result in the development of quantum mechanics. Just as the photoelectric effect demonstrated the particle nature of light, the Davisson–Germer experiment showed the wave-nature of matter, and completed the theory of wave–particle duality. For physicists this idea was important because it meant that not only could any particle exhibit wave characteristics, but that one could use wave equations to describe phenomena in matter if one used the de Broglie wavelength.\n\n### Neutral atoms\n\nExperiments with Fresnel diffraction and an atomic mirror for specular reflection of neutral atoms confirm the application of the de Broglie hypothesis to atoms, i.e. the existence of atomic waves which undergo diffraction, interference and allow quantum reflection by the tails of the attractive potential. Advances in laser cooling have allowed cooling of neutral atoms down to nanokelvin temperatures. At these temperatures, the thermal de Broglie wavelengths come into the micrometre range. Using Bragg diffraction of atoms and a Ramsey interferometry technique, the de Broglie wavelength of cold sodium atoms was explicitly measured and found to be consistent with the temperature measured by a different method.\n\nThis effect has been used to demonstrate atomic holography, and it may allow the construction of an atom probe imaging system with nanometer resolution. 
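A quick order-of-magnitude check of why electron diffraction works, added as an editorial illustration rather than taken from the article; the 54 eV beam energy is the value usually quoted for the Davisson–Germer experiment.

```
# Non-relativistic de Broglie wavelength: lambda = h / p, with p = sqrt(2 m E)
h   = 6.62607015e-34       # Planck constant, J*s
m_e = 9.1093837015e-31     # electron mass, kg
eV  = 1.602176634e-19      # joules per electron-volt

def de_broglie_wavelength(mass_kg, kinetic_energy_joules):
    momentum = (2 * mass_kg * kinetic_energy_joules) ** 0.5   # valid for v << c
    return h / momentum

print(de_broglie_wavelength(m_e, 54 * eV))   # ~1.7e-10 m, comparable to atomic spacings in a crystal
```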
The description of these phenomena is based on the wave properties of neutral atoms, confirming the de Broglie hypothesis.\n\nThe effect has also been used to explain the spatial version of the quantum Zeno effect, in which an otherwise unstable object may be stabilised by rapidly repeated observations.\n\n### Molecules\n\nRecent experiments even confirm the relations for molecules and even macromolecules that otherwise might be supposed too large to undergo quantum mechanical effects. In 1999, a research team in Vienna demonstrated diffraction for molecules as large as fullerenes. The researchers calculated a De Broglie wavelength of the most probable C60 velocity as 2.5 pm. More recent experiments prove the quantum nature of molecules made of 810 atoms and with a mass of 10,123 amu. As of 2019, this has been pushed to molecules of 25,000 amu.\n\nStill one step further than Louis de Broglie go theories which in quantum mechanics eliminate the concept of a pointlike classical particle and explain the observed facts by means of wavepackets of matter waves alone.\n\n## de Broglie relations\n\nThe de Broglie equations relate the wavelength λ to the momentum p, and frequency f to the total energy E of a free particle:\n\n{\\begin{aligned}&\\lambda =h/p\\\\&f=E/h\\end{aligned}}", null, "where h is the Planck constant. The equations can also be written as\n\n{\\begin{aligned}&\\mathbf {p} =\\hbar \\mathbf {k} \\\\&E=\\hbar \\omega \\\\\\end{aligned}}", null, "or \n\n{\\begin{aligned}&\\mathbf {p} =\\hbar \\mathbf {\\beta } \\\\&E=\\hbar \\omega \\\\\\end{aligned}}", null, "where ħ = h/2π is the reduced Planck constant, k is the wave vector, β is the phase constant, and ω is the angular frequency.\n\nIn each pair, the second equation is also referred to as the Planck–Einstein relation, since it was also proposed by Planck and Einstein.\n\n### Special relativity\n\nUsing two formulas from special relativity, one for the relativistic momentum and one for the relativistic mass energy\n\n$E=mc^{2}=\\gamma m_{0}c^{2}$", null, "${\\vec {p}}=m{\\vec {v}}=\\gamma m_{0}{\\vec {v}}$", null, "allows the equations to be written as\n\n{\\begin{aligned}&\\lambda =\\,\\,{\\frac {h}{\\gamma m_{0}v}}\\,=\\,{\\frac {h}{m_{0}v}}\\,\\,\\,\\,{\\sqrt {1-{\\frac {v^{2}}{c^{2}}}}}\\\\&f={\\frac {c}{\\lambda }}={\\frac {\\gamma \\,m_{0}vc}{h}}={\\frac {m_{0}c^{2}}{h}}{\\bigg /}{\\sqrt {{\\frac {c^{2}}{v^{2}}}-1}}\\end{aligned}}", null, "where $m_{0}$", null, "denotes the particle's rest mass, $v$", null, "its velocity, $\\gamma$", null, "the Lorentz factor, and $c$", null, "the speed of light in a vacuum. See below for details of the derivation of the de Broglie relations. Group velocity (equal to the particle's speed) should not be confused with phase velocity (equal to the product of the particle's frequency and its wavelength). In the case of a non-dispersive medium, they happen to be equal, but otherwise they are not.\n\n#### Group velocity\n\nAlbert Einstein first explained the wave–particle duality of light in 1905. Louis de Broglie hypothesized that any particle should also exhibit such a duality. The velocity of a particle, he concluded, should always equal the group velocity of the corresponding wave. The magnitude of the group velocity is equal to the particle's speed.\n\nBoth in relativistic and non-relativistic quantum physics, we can identify the group velocity of a particle's wave function with the particle velocity. 
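The relativistic form λ = h/(γ m0 v) quoted above is just as easy to evaluate numerically. In the sketch below, the C60 beam speed of about 220 m/s is an assumption supplied for illustration (the text only quotes the resulting 2.5 pm wavelength), and the 0.9c electron is likewise just an example.

```
import math

h   = 6.62607015e-34         # J*s
c   = 299792458.0            # m/s
m_e = 9.1093837015e-31       # kg
amu = 1.66053906660e-27      # kg

def lam_relativistic(m0, v):
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return h / (gamma * m0 * v)            # lambda = h / (gamma * m0 * v)

print(lam_relativistic(720 * amu, 220.0))  # C60 (~720 u): ~2.5e-12 m, i.e. ~2.5 pm
print(lam_relativistic(m_e, 0.9 * c))      # electron at 0.9 c: ~1.2e-12 m
print(h / (m_e * 0.9 * c))                 # the naive h/(m0 v) would give ~2.7e-12 m instead
```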
Quantum mechanics has very accurately demonstrated this hypothesis, and the relation has been shown explicitly for particles as large as molecules.

De Broglie deduced that if the duality equations already known for light were the same for any particle, then his hypothesis would hold. This means that

$v_{g}={\frac {\partial \omega }{\partial k}}={\frac {\partial (E/\hbar )}{\partial (p/\hbar )}}={\frac {\partial E}{\partial p}}$", null, "where E is the total energy of the particle, p is its momentum, ħ is the reduced Planck constant. For a free non-relativistic particle it follows that

{\begin{aligned}v_{g}&={\frac {\partial E}{\partial p}}={\frac {\partial }{\partial p}}\left({\frac {1}{2}}{\frac {p^{2}}{m}}\right)\\&={\frac {p}{m}}\\&=v\end{aligned}}", null, "where m is the mass of the particle and v its velocity.

Also in special relativity we find that

{\begin{aligned}v_{g}&={\frac {\partial E}{\partial p}}={\frac {\partial }{\partial p}}\left({\sqrt {p^{2}c^{2}+m_{0}^{2}c^{4}}}\right)\\&={\frac {pc^{2}}{\sqrt {p^{2}c^{2}+m_{0}^{2}c^{4}}}}\\&={\frac {pc^{2}}{E}}\end{aligned}}", null, "where m0 is the rest mass of the particle and c is the speed of light in a vacuum. But (see below), using that the phase velocity is vp = E/p = c²/v, therefore

{\begin{aligned}v_{g}&={\frac {pc^{2}}{E}}\\&={\frac {c^{2}}{v_{p}}}\\&=v\end{aligned}}", null, "where v is the velocity of the particle regardless of wave behavior.

#### Phase velocity

In quantum mechanics, particles also behave as waves with complex phases. The phase velocity is equal to the product of the frequency multiplied by the wavelength.

By the de Broglie hypothesis, we see that

$v_{\mathrm {p} }={\frac {\omega }{k}}={\frac {E/\hbar }{p/\hbar }}={\frac {E}{p}}.$", null, "Using relativistic relations for energy and momentum, we have

$v_{\mathrm {p} }={\frac {E}{p}}={\frac {mc^{2}}{mv}}={\frac {\gamma m_{0}c^{2}}{\gamma m_{0}v}}={\frac {c^{2}}{v}}={\frac {c}{\beta }}$", null, "where E is the total energy of the particle (i.e. rest energy plus kinetic energy in the kinematic sense), p the momentum, $\gamma$", null, "the Lorentz factor, c the speed of light, and β the speed as a fraction of c. The variable v can either be taken to be the speed of the particle or the group velocity of the corresponding matter wave. Since the particle speed $v<c$", null, "for any particle that has mass (according to special relativity), the phase velocity of matter waves always exceeds c, i.e.

$v_{\mathrm {p} }>c,\,$", null, "and as we can see, it approaches c when the particle speed is in the relativistic range. The superluminal phase velocity does not violate special relativity, because phase propagation carries no energy. 
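A small numerical check of the two identities derived above (again an illustration, not part of the article): with E = sqrt(p²c² + m0²c⁴), the group velocity pc²/E stays below c, the phase velocity E/p exceeds c, and their product is exactly c².

```
import math

c, m0 = 299792458.0, 9.1093837015e-31     # speed of light; electron rest mass (kg)
p = 2.0e-22                               # an arbitrary relativistic momentum, kg*m/s
E = math.sqrt((p * c) ** 2 + (m0 * c ** 2) ** 2)

v_group = p * c ** 2 / E                  # equals the particle speed, always < c
v_phase = E / p                           # always > c for a massive particle
print(v_group < c < v_phase)                     # True
print(math.isclose(v_group * v_phase, c ** 2))   # True
```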
See the article on Dispersion (optics) for details.\n\n### Four-vectors\n\nUsing four-vectors, the De Broglie relations form a single equation:\n\n$\\mathbf {P} =\\hbar \\mathbf {K}$", null, "which is frame-independent.\n\nLikewise, the relation between group/particle velocity and phase velocity is given in frame-independent form by:\n\n$\\mathbf {K} =\\left({\\frac {\\omega _{o}}{c^{2}}}\\right)\\mathbf {U}$", null, "where\n\nFour-momentum $\\mathbf {P} =\\left({\\frac {E}{c}},{\\vec {\\mathbf {p} }}\\right)$", null, "Four-wavevector $\\mathbf {K} =\\left({\\frac {\\omega }{c}},{\\vec {\\mathbf {k} }}\\right)=\\left({\\frac {\\omega }{c}},{\\frac {\\omega }{v_{p}}}\\mathbf {\\hat {n}} \\right)$", null, "Four-velocity $\\mathbf {U} =\\gamma (c,{\\vec {\\mathbf {u} }})=\\gamma (c,v_{g}{\\hat {\\mathbf {n} }})$", null, "## Interpretations\n\nThe physical reality underlying de Broglie waves is a subject of ongoing debate. Some theories treat either the particle or the wave aspect as its fundamental nature, seeking to explain the other as an emergent property. Some, such as the hidden variable theory, treat the wave and the particle as distinct entities. Yet others propose some intermediate entity that is neither quite wave nor quite particle but only appears as such when we measure one or the other property. The Copenhagen interpretation states that the nature of the underlying reality is unknowable and beyond the bounds of scientific inquiry.\n\nSchrödinger's quantum mechanical waves are conceptually different from ordinary physical waves such as water or sound. Ordinary physical waves are characterized by undulating real-number 'displacements' of dimensioned physical variables at each point of ordinary physical space at each instant of time. Schrödinger's \"waves\" are characterized by the undulating value of a dimensionless complex number at each point of an abstract multi-dimensional space, for example of configuration space.\n\nAt the Fifth Solvay Conference in 1927, Max Born and Werner Heisenberg reported as follows:\n\nIf one wishes to calculate the probabilities of excitation and ionization of atoms [M. Born, Zur Quantenmechanik der Stossvorgange, Z. f. Phys., 37 (1926), 863; [Quantenmechanik der Stossvorgange], ibid., 38 (1926), 803] then one must introduce the coordinates of the atomic electrons as variables on an equal footing with those of the colliding electron. The waves then propagate no longer in three-dimensional space but in multi-dimensional configuration space. From this one sees that the quantum mechanical waves are indeed something quite different from the light waves of the classical theory.\n\nAt the same conference, Erwin Schrödinger reported likewise.\n\nUnder [the name 'wave mechanics',] at present two theories are being carried on, which are indeed closely related but not identical. The first, which follows on directly from the famous doctoral thesis by L. de Broglie, concerns waves in three-dimensional space. Because of the strictly relativistic treatment that is adopted in this version from the outset, we shall refer to it as the four-dimensional wave mechanics. The other theory is more remote from Mr de Broglie's original ideas, insofar as it is based on a wave-like process in the space of position coordinates (q-space) of an arbitrary mechanical system.[Long footnote about manuscript not copied here.] We shall therefore call it the multi-dimensional wave mechanics. 
Of course this use of the q-space is to be seen only as a mathematical tool, as it is often applied also in the old mechanics; ultimately, in this version also, the process to be described is one in space and time. In truth, however, a complete unification of the two conceptions has not yet been achieved. Anything over and above the motion of a single electron could be treated so far only in the multi-dimensional version; also, this is the one that provides the mathematical solution to the problems posed by the Heisenberg-Born matrix mechanics.\n\nIn 1955, Heisenberg reiterated this:\n\nAn important step forward was made by the work of Born [Z. Phys., 37: 863, 1926 and 38: 803, 1926] in the summer of 1926. In this work, the wave in configuration space was interpreted as a probability wave, in order to explain collision processes on Schrödinger's theory. This hypothesis contained two important new features in comparison with that of Bohr, Kramers and Slater. The first of these was the assertion that, in considering \"probability waves\", we are concerned with processes not in ordinary three-dimensional space, but in an abstract configuration space (a fact which is, unfortunately, sometimes overlooked even today); the second was the recognition that the probability wave is related to an individual process.\n\nIt is mentioned above that the \"displaced quantity\" of the Schrödinger wave has values that are dimensionless complex numbers. One may ask what is the physical meaning of those numbers. According to Heisenberg, rather than being of some ordinary physical quantity such as, for example, Maxwell's electric field intensity, or mass density, the Schrödinger-wave packet's \"displaced quantity\" is probability amplitude. He wrote that instead of using the term 'wave packet', it is preferable to speak of a probability packet. The probability amplitude supports calculation of probability of location or momentum of discrete particles. Heisenberg recites Duane's account of particle diffraction by probabilistic quantal translation momentum transfer, which allows, for example in Young's two-slit experiment, each diffracted particle probabilistically to pass discretely through a particular slit. Thus one does not need necessarily think of the matter wave, as it were, as 'composed of smeared matter'.\n\nThese ideas may be expressed in ordinary language as follows. In the account of ordinary physical waves, a 'point' refers to a position in ordinary physical space at an instant of time, at which there is specified a 'displacement' of some physical quantity. But in the account of quantum mechanics, a 'point' refers to a configuration of the system at an instant of time, every particle of the system being in a sense present in every 'point' of configuration space, each particle at such a 'point' being located possibly at a different position in ordinary physical space. There is no explicit definite indication that, at an instant, this particle is 'here' and that particle is 'there' in some separate 'location' in configuration space. This conceptual difference entails that, in contrast to de Broglie's pre-quantum mechanical wave description, the quantum mechanical probability packet description does not directly and explicitly express the Aristotelian idea, referred to by Newton, that causal efficacy propagates through ordinary space by contact, nor the Einsteinian idea that such propagation is no faster than light. 
In contrast, these ideas are so expressed in the classical wave account, through the Green's function, though it is inadequate for the observed quantal phenomena. The physical reasoning for this was first recognized by Einstein.

## De Broglie's phase wave and periodic phenomenon

De Broglie's thesis started from the hypothesis, "that to each portion of energy with a proper mass m0 one may associate a periodic phenomenon of the frequency ν0, such that one finds: hν0 = m0c2. The frequency ν0 is to be measured, of course, in the rest frame of the energy packet. This hypothesis is the basis of our theory."

De Broglie followed his initial hypothesis of a periodic phenomenon, with frequency ν0, associated with the energy packet. He used the special theory of relativity to find, in the frame of the observer of the electron energy packet that is moving with velocity $v$", null, ", that its frequency was apparently reduced to

$f=\nu _{0}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}\,.$", null, "Then

$\lambda f=E/p=v_{\mathrm {p} }\,.$", null, "using the same notation as above. The quantity $v_{\mathrm {p} }$", null, "is the velocity of what de Broglie called the "phase wave". Its wavelength is $\lambda$", null, "and frequency $f$", null, ". De Broglie reasoned that his hypothetical intrinsic particle periodic phenomenon is in phase with that phase wave. This was his basic matter wave conception. He noted, as above, that $v_{\mathrm {p} }>c$", null, ", and the phase wave does not transfer energy.

While the concept of waves being associated with matter is correct, de Broglie did not leap directly to the final understanding of quantum mechanics with no missteps. There are conceptual problems with the approach that de Broglie took in his thesis that he was not able to resolve, despite trying a number of different fundamental hypotheses in different papers published while working on, and shortly after publishing, his thesis. These difficulties were resolved by Erwin Schrödinger, who developed the wave mechanics approach, starting from a somewhat different basic hypothesis." ]
[ null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/2fe51c74da5131f00b9dc318ab67fb2e872d868f", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c6c0386dc6d9530519404f95570fcc8548ed2326", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/8e47dfdf8f6b7a2e8b741aa57d1cc726d23a0e5d", null, "https://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Propagation_of_a_de_broglie_wave.svg/290px-Propagation_of_a_de_broglie_wave.svg.png", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/25ae59ad6fb050fbb5a803b838468a29e69c4615", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/9c48c0c53eeae7a8471182b3c75e2a744ed8e462", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/0db2065aaf54683b4a56b8901f973b6c4b05ac5d", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/194ec6a7bca21f7daee24e167b9cdcb3fd758859", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/66dc94c0d4cfe9c9d0ed302f4e3584e801344727", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/5a1b9ed82bfc2d227e500f280b4140110636321d", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/5f6760e43c9a8847ea4df0e5cc9d8cd553379e2d", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/3a6ff51ee949104fe6fae553cfbdfba29d5fac1e", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/e07b00e7fc0847fbd16391c778d65bc25c452597", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a223c880b0ce3da8f64ee33c4f0010beee400b1a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/86a67b81c2de995bd608d5b2df50cd8cd7d92455", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/1537f22c6c90399901106f3f84c2f93afeb8267c", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/2739408c2dafcae7f438f22899e32bd14c77806e", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/6c6c33dbc50ed506da1534c0dcb568688c1d337b", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a4223c461dff617596cc40bd9c9550ee3157679a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7c50983b78fb644ded375f03f001aa75c1e0b1c9", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c5a2a9c61cb1c520db52e2aab76361af670540be", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a223c880b0ce3da8f64ee33c4f0010beee400b1a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/bbca46ca344c5dc2287135df082fd083d535967f", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/2330902a9dbc3755463c740badbbbf00dbe6b6c4", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/4688eb69effac4468cd9f3a2c2f253ea8969a0f5", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/1fbd329f20b16a9bf02c0fa4a96ba311501f0ce9", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/03339f93e4c7a61b8a92df9658bc8f148059b800", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/d4213cdba2252aaf70a66e2ae10582b86697b655", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/874798397ccd9bcb79f01803848df1626b7551b1", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/e07b00e7fc0847fbd16391c778d65bc25c452597", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/228e9adac9c80171c70399f0eeae31d3748dab77", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/8267368164fe67c33de686292974adf749010a11", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/71f9142e9b33d34704174d717bb2dc2f8acbfc41", null, 
"https://wikimedia.org/api/rest_v1/media/math/render/svg/b43d0ea3c9c025af1be9128e62a18fa74bedda2a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/132e57acb643253e7810ee9702d9581f159a1c61", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/552ba056ecb07311f0e1aca500a2a9da83ca4e8a", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8456137,"math_prob":0.9725823,"size":28602,"snap":"2020-24-2020-29","text_gpt3_token_len":7293,"char_repetition_ratio":0.13196728,"word_repetition_ratio":0.042880446,"special_character_ratio":0.26333824,"punctuation_ratio":0.18371347,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9965932,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72],"im_url_duplicate_count":[null,9,null,null,null,8,null,null,null,null,null,6,null,6,null,6,null,6,null,6,null,3,null,null,null,null,null,null,null,null,null,9,null,5,null,5,null,5,null,5,null,5,null,null,null,null,null,5,null,8,null,8,null,8,null,8,null,8,null,null,null,4,null,4,null,null,null,null,null,null,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-01T09:50:34Z\",\"WARC-Record-ID\":\"<urn:uuid:c226dfce-9620-4a8d-8ad6-3a2c21df09cd>\",\"Content-Length\":\"220578\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8845282e-3327-40e6-95b6-b569c3e94e29>\",\"WARC-Concurrent-To\":\"<urn:uuid:ab8c78ea-a427-4a40-822d-1caedb988f0a>\",\"WARC-IP-Address\":\"208.80.154.224\",\"WARC-Target-URI\":\"https://en.wikipedia.org/wiki/Matter_wave\",\"WARC-Payload-Digest\":\"sha1:F5AOGIOWEESOEWETCTD4U7SU4RBLZJXS\",\"WARC-Block-Digest\":\"sha1:X3YKZ3OIYEVJO7NG5P62RMRT5EJYR4NO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347415315.43_warc_CC-MAIN-20200601071242-20200601101242-00391.warc.gz\"}"}
http://bda.ath.cx/blog/2007/12/06/jordan-form/
[
"## Jordan Form

This is happening to thousands of Math graduate students all over the country:

You are taking a Linear Algebra class, and you are expected to be able to find the Jordan form of a matrix on the exams (and probably the qualifier). Your textbook (perhaps “Matrix Analysis” by Horn and Johnson?) describes the form, proves that every complex matrix is similar to a matrix in Jordan form, describes what can be extrapolated about a matrix given its Jordan form, all with detailed proofs. Then it goes on to hand wave about how to find the Jordan form of a matrix, and extrapolating the algorithm from the description takes a degree in re-constructive surgery. Your algebra textbook (Dummit & Foote?) has an even more confusing description.

The usual suspects like Wikipedia and Mathworld are completely useless. Google has never failed you like this before. Your textbook also explains why the Jordan form is completely useless in practical applications, which explains why nobody bothers to explain the algorithm and then requires you to know it.

You ask a senior student or your professor, and they explain about generalized eigenvectors. You grind through some examples, and finally it starts to make sense.

Or you stumble across this everything2 article, which is the only published article on or off the Internet describing the algorithm in plain language. Actually it's still pretty confusing, but as a grad student, you have a high complexity (pain) tolerance.

It helps to realize that the process is similar to diagonalization (and in fact is diagonalization if the matrix is diagonalizable) but instead of placing the eigenvectors as columns in the transformation matrix, you place the generalized eigenvectors as columns (and order matters to some extent here).

If you don't need the transformation matrix, you may be able to skip some of these steps: the geometric multiplicity of an eigenvalue gives the number of Jordan blocks associated with that eigenvalue, the algebraic multiplicity gives the sum of the sizes of all Jordan blocks associated with an eigenvalue, and the size of the blocks can be found by looking at… er… how many generalized eigenvectors are associated with each eigenvalue. I don't have my notes here so I'm getting a bit hazy on that part. Perhaps I will make this clear in a later edit…" ]
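If you just want to check a hand computation, a computer algebra system will hand you both the Jordan form and the generalized-eigenvector matrix. The snippet below is an illustration using SymPy and a small defective matrix of my own choosing; it is not part of the original post.

```
from sympy import Matrix, eye

# Eigenvalue 2 with algebraic multiplicity 3 but geometric multiplicity 2,
# so there are two Jordan blocks for lambda = 2 (sizes 2 and 1).
A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 2]])

P, J = A.jordan_form()        # A = P * J * P**(-1); columns of P are (generalized) eigenvectors
print(J)
print(P * J * P.inv() == A)   # True

# Block sizes without the transformation matrix: the number of blocks of size >= k
# for an eigenvalue lam equals rank((A - lam*I)**(k-1)) - rank((A - lam*I)**k).
N = A - 2 * eye(3)
print((N**0).rank() - (N**1).rank())   # 2 -> two blocks in total
print((N**1).rank() - (N**2).rank())   # 1 -> exactly one block of size >= 2
```

The last two lines are the rank bookkeeping the post is hazily reaching for: they pin down the block sizes for each eigenvalue without ever constructing the generalized eigenvectors.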
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9278928,"math_prob":0.8587636,"size":2317,"snap":"2022-27-2022-33","text_gpt3_token_len":463,"char_repetition_ratio":0.106355384,"word_repetition_ratio":0.015789473,"special_character_ratio":0.18990074,"punctuation_ratio":0.0783848,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96161896,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-03T00:19:38Z\",\"WARC-Record-ID\":\"<urn:uuid:f3f78a65-89ae-48f4-a389-ec67a4f45b92>\",\"Content-Length\":\"25634\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ca5e1a8a-2745-4175-b826-7cbc3c171591>\",\"WARC-Concurrent-To\":\"<urn:uuid:38fcd1bd-f746-4a7a-9ff3-9f541ab19185>\",\"WARC-IP-Address\":\"207.192.72.39\",\"WARC-Target-URI\":\"http://bda.ath.cx/blog/2007/12/06/jordan-form/\",\"WARC-Payload-Digest\":\"sha1:EQXSXUTP22PC5SCRSJQBA3L3LA4AKWVS\",\"WARC-Block-Digest\":\"sha1:GFHQOMYMBKOX6JJJ5IPU67O2SZEP6VVK\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104205534.63_warc_CC-MAIN-20220702222819-20220703012819-00443.warc.gz\"}"}
https://tex.stackexchange.com/questions/208043/ordinal-table-of-contents
[ "# Ordinal table of contents\n\nI want create table of contents As follows:\n\nChapter first (chapter name)\n\n1.1 (section name)\n\n1.2 (section name)\n\n1.2.1 (subsection name)\n\nChapter second (chapter name)\n\n2.1 (section name)\n\n2.1.1 (subsection name)\n\n2.2 (section name)\n\nPlease guide me.\n\nThanks in advance\n\n• Hi and welcome. Where are you stuck? Having the chapter unnumbered or having the sections numbered the way they are. Or producing a toc in the first place? – Johannes_B Oct 20 '14 at 7:49\n• @Johannes_B: I believe, its the ordinal numbers first, second etc, as the title of the question proposes. – user31729 Oct 20 '14 at 7:50\n\n## 1 Answer\n\nThis is a solution that depends on the book class, by removing patching the \\@chapter macro, changing the \\addcontentsline entry and setting it to \\Ordinalstring{chapter} from fmtcount package.\n\n\\documentclass{book}\n\n\\usepackage{fmtcount}\n\\usepackage{etoolbox}\n\n\\makeatletter\n\\patchcmd{\\@chapter}{\\addcontentsline{toc}{chapter}{\\protect\\numberline{\\thechapter}#1}}{\\addcontentsline{toc}{chapter}{\\protect{\\numberline{\\chaptername~\\Ordinalstring{chapter} #1}}}}{}{}\n\\makeatother\n\n\\begin{document}\n\\tableofcontents\n\n\\chapter{Some chapter name}\n\n\\section{Number one}\n\n\\subsection{Subsec Number one}\n\n\\chapter{Another chapter name}\n\n\\section{Number one from 2nd chapter}\n\n\\subsection{Subsec Number one}\n\n\\end{document}", null, "" ]
[ null, "https://i.stack.imgur.com/bu38n.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85258055,"math_prob":0.4725663,"size":405,"snap":"2019-35-2019-39","text_gpt3_token_len":124,"char_repetition_ratio":0.22693267,"word_repetition_ratio":0.0,"special_character_ratio":0.33827162,"punctuation_ratio":0.17391305,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9781892,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-25T12:03:14Z\",\"WARC-Record-ID\":\"<urn:uuid:6536d3dd-920f-4c7a-9f61-b8f51eb91ea2>\",\"Content-Length\":\"133248\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d99d9074-7877-40c2-a056-a387e55710e3>\",\"WARC-Concurrent-To\":\"<urn:uuid:aedec196-811f-4fea-b1cc-a7a5286430cc>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://tex.stackexchange.com/questions/208043/ordinal-table-of-contents\",\"WARC-Payload-Digest\":\"sha1:QOKJYWWH32NGHODCIVUSNOHRDL57FNRR\",\"WARC-Block-Digest\":\"sha1:NIH3RRVJX37WSYUUQV5TDP2RS6UD6XAZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027323328.16_warc_CC-MAIN-20190825105643-20190825131643-00005.warc.gz\"}"}
https://www.numere-romane.ro/cum_se_scrie_numarul_arab_cu_numerale_romane.php?nr_arab=1971722&nr_roman=(M)(C)(M)(L)(X)(X)MDCCXXII&lang=en
[ "# Convert the Hindu-Arabic number 1,971,722 to a Roman number written with Roman numerals. Turn and write it using the Latin numeral system letters I, V, X, L, C, D, M. Learn by using the detailed explanations converter\n\n## The latest Hindu-Arabic numbers converted to Roman numerals\n\n How to convert: write the Hindu-Arabic number 1,971,722 using Roman numerals: (M)(C)(M)(L)(X)(X)MDCCXXII Jun 01 16:43 UTC (GMT) How to convert: write the Hindu-Arabic number 731,984 using Roman numerals: (D)(C)(C)(X)(X)(X)MCMLXXXIV Jun 01 16:43 UTC (GMT) How to convert: write the Hindu-Arabic number 1,705 using Roman numerals: MDCCV Jun 01 16:42 UTC (GMT) How to convert: write the Hindu-Arabic number 80 using Roman numerals: LXXX Jun 01 16:42 UTC (GMT) How to convert: write the Hindu-Arabic number 160 using Roman numerals: CLX Jun 01 16:42 UTC (GMT) How to convert: write the Hindu-Arabic number 12,799 using Roman numerals: (X)MMDCCXCIX Jun 01 16:42 UTC (GMT) How to convert: write the Hindu-Arabic number 220,918 using Roman numerals: (C)(C)(X)(X)CMXVIII Jun 01 16:41 UTC (GMT) How to convert: write the Hindu-Arabic number 95,384 using Roman numerals: (X)(C)(V)CCCLXXXIV Jun 01 16:40 UTC (GMT) How to convert: write the Hindu-Arabic number 138 using Roman numerals: CXXXVIII Jun 01 16:40 UTC (GMT) How to convert: write the Hindu-Arabic number 14 using Roman numerals: XIV Jun 01 16:40 UTC (GMT) How to convert: write the Hindu-Arabic number 1,794,974 using Roman numerals: (M)(D)(C)(C)(X)(C)M(V)CMLXXIV Jun 01 16:40 UTC (GMT) How to convert: write the Hindu-Arabic number 444,444 using Roman numerals: (C)(D)(X)(L)M(V)CDXLIV Jun 01 16:40 UTC (GMT) How to convert: write the Hindu-Arabic number 2,560,018 using Roman numerals: (M)(M)(D)(L)(X)XVIII Jun 01 16:40 UTC (GMT) All the Hindu-Arabic numbers converted to Roman numerals, online operations\n\n## The set of basic symbols of the Roman system of writing numerals\n\n• ### (*) M = 1,000,000 or |M| = 1,000,000 (one million); see below why we prefer this notation: (M) = 1,000,000.\n\n(*) These numbers were written with an overline (a bar above) or between two vertical lines. Instead, we prefer to write these larger numerals between brackets, ie: \"(\" and \")\", because:\n\n• 1) when compared to the overline - it is easier for the computer users to add brackets around a letter than to add the overline to it and\n• 2) when compared to the vertical lines - it avoids any possible confusion between the vertical line \"|\" and the Roman numeral \"I\" (1).\n\n(*) An overline (a bar over the symbol), two vertical lines or two brackets around the symbol indicate \"1,000 times\". See below...\n\nLogic of the numerals written between brackets, ie: (L) = 50,000; the rule is that the initial numeral, in our case, L, was multiplied by 1,000: L = 50 => (L) = 50 × 1,000 = 50,000. Simple.\n\n(*) At the beginning Romans did not use numbers larger than 3,999; as a result they had no symbols in their system for these larger numbers, they were added on later and for them various different notations were used, not necessarily the ones we've just seen above.\n\nThus, initially, the largest number that could be written using Roman numerals was:\n\n• MMMCMXCIX = 3,999." ]
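The bracket convention used by this site, where each letter of the thousands part is wrapped in parentheses and worth 1,000 times its usual value, with a bare M standing in for (I), is easy to implement. The Python sketch below is my own illustration; under that convention it reproduces the site's examples, including the number in the title.

```
PAIRS = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n):
    """Standard Roman numerals for 0 <= n <= 3,999."""
    out = []
    for value, symbol in PAIRS:
        count, n = divmod(n, value)
        out.append(symbol * count)
    return "".join(out)

def to_roman_bracketed(n):
    """Hindu-Arabic to Roman numerals up to 3,999,999 using (X) = 1,000 * X."""
    thousands, rest = divmod(n, 1000)
    head = "".join("M" if ch == "I" else f"({ch})" for ch in to_roman(thousands))
    return head + to_roman(rest)

print(to_roman_bracketed(1_971_722))   # (M)(C)(M)(L)(X)(X)MDCCXXII
print(to_roman_bracketed(3_999))       # MMMCMXCIX
print(to_roman_bracketed(95_384))      # (X)(C)(V)CCCLXXXIV
```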
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8142505,"math_prob":0.9863552,"size":2440,"snap":"2023-14-2023-23","text_gpt3_token_len":794,"char_repetition_ratio":0.20566502,"word_repetition_ratio":0.25661376,"special_character_ratio":0.33237705,"punctuation_ratio":0.1574074,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.955848,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-01T16:43:28Z\",\"WARC-Record-ID\":\"<urn:uuid:efeac498-7c9f-4745-b502-2a5c9662cc2c>\",\"Content-Length\":\"46779\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:efd72836-6521-4688-ad5b-fc75194fcb14>\",\"WARC-Concurrent-To\":\"<urn:uuid:4e8031c1-8cd9-4d2c-b3ab-9d22d177f7ef>\",\"WARC-IP-Address\":\"93.115.53.187\",\"WARC-Target-URI\":\"https://www.numere-romane.ro/cum_se_scrie_numarul_arab_cu_numerale_romane.php?nr_arab=1971722&nr_roman=(M)(C)(M)(L)(X)(X)MDCCXXII&lang=en\",\"WARC-Payload-Digest\":\"sha1:OZ2BWBXFO3B344DNQWMZYDHSCFZPO2HO\",\"WARC-Block-Digest\":\"sha1:VSPUM63YBYIF4LS6SQTOVYVHGIZ4FXII\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224647895.20_warc_CC-MAIN-20230601143134-20230601173134-00510.warc.gz\"}"}
https://shelah.logic.at/papers/211/
[ "# Sh:211\n\n• Shelah, S. (1992). The Hanf numbers of stationary logic. II. Comparison with other logics. Notre Dame J. Formal Logic, 33(1), 1–12.\n• Abstract:\nWe show that the ordering of the Hanf number of L_{\\omega,\\omega}(wo) (well ordering), L^c_{\\omega,\\omega} (quantification on countable sets), L_{\\omega, \\omega}(aa) (stationary logic) and second order logic, have no more restraints provable in ZFC than previously known (those independence proofs assume CON(ZFC) only). We also get results on corresponding logics for L_{\\lambda,\\mu}.\n• Current version: 1996-03-11_10 (20p) published version (12p)\nBib entry\n@article{Sh:211,\nauthor = {Shelah, Saharon},\ntitle = {{The Hanf numbers of stationary logic. II. Comparison with other logics}},\njournal = {Notre Dame J. Formal Logic},\nfjournal = {Notre Dame Journal of Formal Logic},\nvolume = {33},\nnumber = {1},\nyear = {1992},\npages = {1--12},\nissn = {0029-4527},\nmrnumber = {1149955},\nmrclass = {03C75 (03C55 03E35)},\ndoi = {10.1305/ndjfl/1093636007},\nnote = {\\href{https://arxiv.org/abs/math/9201243}{arXiv: math/9201243}},\narxiv_number = {math/9201243}\n}" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.73235613,"math_prob":0.95742744,"size":1166,"snap":"2020-45-2020-50","text_gpt3_token_len":382,"char_repetition_ratio":0.10499139,"word_repetition_ratio":0.06451613,"special_character_ratio":0.39536878,"punctuation_ratio":0.21681416,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9705289,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-30T12:34:45Z\",\"WARC-Record-ID\":\"<urn:uuid:8e5fe998-09c0-4976-8e4c-2a11ece5e4d9>\",\"Content-Length\":\"9089\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:47e5ad0f-03b2-43c3-8a0c-79d7fa6fc183>\",\"WARC-Concurrent-To\":\"<urn:uuid:1a644c89-b613-4ee7-8c65-25fc6dd59bb1>\",\"WARC-IP-Address\":\"78.47.24.141\",\"WARC-Target-URI\":\"https://shelah.logic.at/papers/211/\",\"WARC-Payload-Digest\":\"sha1:FDUXQQTAIGJIGMPTMCMAPIXWHAAPFOL5\",\"WARC-Block-Digest\":\"sha1:BK26UQDJSY2ZTXIVQJI4RK4QDN5B4BMZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141213431.41_warc_CC-MAIN-20201130100208-20201130130208-00692.warc.gz\"}"}
https://www.generationgenius.com/intro-to-finding-area/
[ "", null, "Read an Intro to Finding Area | for kids 3rd, 4th, & 5th Grade\n1%\nIt was processed successfully!", null, "WHAT IS INTRO TO FINDING AREA?\n\nYou will learn what area is, how area is measured, how to find area by counting square units, and how to find the area of rectangles using addition or multiplication. You will also learn that area is additive and shapes can be composed and decomposed to find the area of shapes formed by rectangles.\n\nTo better understand finding area…\n\nWHAT IS INTRO TO FINDING AREA?. You will learn what area is, how area is measured, how to find area by counting square units, and how to find the area of rectangles using addition or multiplication. You will also learn that area is additive and shapes can be composed and decomposed to find the area of shapes formed by rectangles.  To better understand finding area…\n\n## LET’S BREAK IT DOWN!\n\n### Somersaults", null, "Area is measured in square units. Let’s say you want to make a floor mat for a tumbling class. You put 5 rows that each have 2 square pads. You can count the number of squares to find the area you can tumble on and find that the area is 10 square units. Try this one yourself: You lay 24 square pads in a rectangular mat for a wrestling class. Then, you pick up 4 of the pads. What is the area of the new mat?\n\nSomersaults Area is measured in square units. Let’s say you want to make a floor mat for a tumbling class. You put 5 rows that each have 2 square pads. You can count the number of squares to find the area you can tumble on and find that the area is 10 square units. Try this one yourself: You lay 24 square pads in a rectangular mat for a wrestling class. Then, you pick up 4 of the pads. What is the area of the new mat?\n\n### Defenders of the Area", null, "Different shapes can have the same area, as long as the same number of square units are needed to fill the space. You can also add or remove square units for a larger or smaller area. Let’s say you have a cape that is made of 27 squares of fabric of the same size. It is 9 squares long and 3 squares wide. You remove 6 squares from the bottom of the cape and attach them to a side to use as a hood. The area of your cape is still 27 square units. Try this one yourself: You use 12 square units of fabric to make a cape that is 3 squares long and 4 squares wide. Your cape is not long enough, so you add 9 more squares at the bottom. What is the area of your cape now?\n\nDefenders of the Area Different shapes can have the same area, as long as the same number of square units are needed to fill the space. You can also add or remove square units for a larger or smaller area. Let’s say you have a cape that is made of 27 squares of fabric of the same size. It is 9 squares long and 3 squares wide. You remove 6 squares from the bottom of the cape and attach them to a side to use as a hood. The area of your cape is still 27 square units. Try this one yourself: You use 12 square units of fabric to make a cape that is 3 squares long and 4 squares wide. Your cape is not long enough, so you add 9 more squares at the bottom. What is the area of your cape now?\n\n### Poster Array", null, "The area of a rectangle or square can be thought of as an array, with rows and columns of square units. To find the number of square units in the shape, you can multiply the number of columns by the number of rows. This is the same as multiplying the length by the width. Let’s say you have square posters that you are using to cover a wall. 
The wall fits 5 posters high and 5 posters wide. You can multiply 5 times 5 to find that the total area of the wall is 25 square posters. Try this one yourself: One-foot square posters are used to cover a wall. The wall is 6 posters high and 4 posters wide. What is the area covered by the posters?\n\n### Pixel Art", null, "Shapes are not always perfect squares or rectangles. Sometimes you can break shapes apart into squares and rectangles to find their area. Let’s say you have a T-shaped figure that can be broken into a 2 × 2 square and a 6 × 2 rectangle. You can multiply to find that the area of the square is 4 square units and the area of the rectangle is 12 square units. Then, you can add the two areas to find that the area of the full shape is 4 + 12 = 16 square units. Try this one yourself: You have a U-shaped figure that can be broken into 2 rectangles that are 6 units long and 3 units wide and one square with a side length of 3 units. What is the area of the figure?\n\n## INTRO TO FINDING AREA VOCABULARY\n\nArea\nArea is the amount of space inside a closed, two-dimensional shape.\nSquare\nA square is a polygon with four sides that are the same length and four right angles.\nRectangle\nA rectangle is a polygon with four sides and four right angles.\nUnit of measurement\nA way to communicate the size of a measurement.\nSquare unit\nA square unit is a square with side lengths of 1 unit that is used to measure the area of a shape.\nFormula\nA rule or fact written with mathematical symbols. The formula for the area of a rectangle is length times width.\nArray\nMade by arranging items or shapes into equal rows and columns." ]
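To make the decomposition idea concrete, here is a small Python sketch (not part of the original lesson) that adds up the areas of the rectangles a figure has been broken into; the figures used are the T-shape and the "try this one yourself" U-shape from the Pixel Art section, and the function name is made up for the example.

```python
# Area of a figure decomposed into rectangles: add up length * width for each piece.
def area_of_pieces(pieces):
    """pieces: list of (length, width) tuples, one per rectangle."""
    return sum(length * width for length, width in pieces)

# T-shaped figure from the lesson: a 2 x 2 square plus a 6 x 2 rectangle.
print(area_of_pieces([(2, 2), (6, 2)]))          # 16 square units

# U-shaped practice figure: two 6 x 3 rectangles and one 3 x 3 square.
print(area_of_pieces([(6, 3), (6, 3), (3, 3)]))  # 45 square units
```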
[ null, "https://www.facebook.com/tr", null, "https://www.generationgenius.com/wp-content/uploads/2021/07/Intro-to-Finding-Area-THUMBNAIL-4-1024x576.jpg", null, "https://www.generationgenius.com/wp-content/uploads/2021/06/412reading1-600x338.jpeg", null, "https://www.generationgenius.com/wp-content/uploads/2021/06/412reading2-600x338.jpeg", null, "https://www.generationgenius.com/wp-content/uploads/2021/06/412reading3-600x338.jpeg", null, "https://www.generationgenius.com/wp-content/uploads/2021/06/412reading4-600x338.jpeg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9411131,"math_prob":0.9864858,"size":2794,"snap":"2022-40-2023-06","text_gpt3_token_len":664,"char_repetition_ratio":0.18996416,"word_repetition_ratio":0.060822897,"special_character_ratio":0.23836793,"punctuation_ratio":0.07804878,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99228036,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,1,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-09T05:08:35Z\",\"WARC-Record-ID\":\"<urn:uuid:581c4766-db40-4234-b966-570f626588db>\",\"Content-Length\":\"1049687\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:38a0cd56-85e6-4911-900a-83031b649faa>\",\"WARC-Concurrent-To\":\"<urn:uuid:b8b1505c-4ebf-410b-9a2c-b0e73aebf29e>\",\"WARC-IP-Address\":\"104.26.7.79\",\"WARC-Target-URI\":\"https://www.generationgenius.com/intro-to-finding-area/\",\"WARC-Payload-Digest\":\"sha1:YZL4KBD5VTNATBZUF5EOVFUM7POVZFDY\",\"WARC-Block-Digest\":\"sha1:XUUHQPKSH7VF75CKHTHBCPOMBWOMVFI6\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764501407.6_warc_CC-MAIN-20230209045525-20230209075525-00787.warc.gz\"}"}
https://www.wulnut.top/2020/01/15/python_ex34/
[ "python入门学习\n\n# 基本操作\n\n## type() and isinstance() 函数\n\ntype(object): 接收一个对象object来作为参数, 返回这个参数的数据类型\nisinstance(object, class): 判断接收的对象object是否是给定的类型class的对象:如果是就返回True,如果不是返回False.\n\ntype(object):\n\nm = 120\nprint(\"m Type: \", type(m))\n\nm = \"大数据\"\nprint(\"m Type: \", type(m))", null, "isinstance(object, class):\n\na = 20\nprint(\"a是整型么?\", isinstance(a, int))", null, "1. type()不会认为子类对象时一种父类类型,不考虑继承关系,也就是说type()只检测当前该数据的数据类型\n2. isinstance()会认为子类队形时一种父类类型,会考虑继承关系,也就是说如果该数据时类的话isinstance()会\n检测父类的数据类型\n\n## eval()函数\n\neval()函数用来执行一个字符串表达式,并返回表达式的值,其一般格式为:\n\neval(expression[,globals[,locals]])\n\n\na = eval('2 + 3')\nprint(\"a: \", a)\n\na, b = eval(input(\"请输入两个数(用','隔开): \"))\nprint(\"a: \", a)\nprint(\"b: \", b)\n\n\n## 简单了解位运算符\n\nkey = input(\"请输入加密密匙:\")\nenc = input(\"请输入要加密的字符: \")\n\ndec = ord(key) ^ ord(enc)\nprint(\"加密结果:\",chr(dec))\n\nenc = ord(key) ^ dec\nprint(\"解密结果:\",chr(enc))\n\n\n1\n\n• ord()函数是对输入的字符转换成ASCII码\n• chr()函数是对输入的ASCII码(可以是十进制、十六进制)转换成对应的字符" ]
[ null, "https://www.wulnut.top/img/type.png", null, "https://www.wulnut.top/img/isinstance.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.58375424,"math_prob":0.9990244,"size":1100,"snap":"2022-27-2022-33","text_gpt3_token_len":602,"char_repetition_ratio":0.14051095,"word_repetition_ratio":0.0,"special_character_ratio":0.28454545,"punctuation_ratio":0.18079096,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97168756,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-25T07:10:48Z\",\"WARC-Record-ID\":\"<urn:uuid:805b0636-a644-49df-aa4b-905f399164ff>\",\"Content-Length\":\"27655\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:96457d97-4f14-4a83-9a1c-8fa1018d5c06>\",\"WARC-Concurrent-To\":\"<urn:uuid:927332de-c6a9-43b1-98c3-646653a8e3e3>\",\"WARC-IP-Address\":\"8.48.85.209\",\"WARC-Target-URI\":\"https://www.wulnut.top/2020/01/15/python_ex34/\",\"WARC-Payload-Digest\":\"sha1:76HOLE4HNXPIX5C6WSNBKI2TZNXGARZQ\",\"WARC-Block-Digest\":\"sha1:BZHSXDCDMJZE4YAEGX5QTUPOEGFLHO6D\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103034877.9_warc_CC-MAIN-20220625065404-20220625095404-00614.warc.gz\"}"}
http://www.cut-the-knot.org/pythagoras/Munching/DiameterChord.shtml
[ "Why Diameter Is the Longest Chord?\n\nIndeed why? Why a diameter is the longest chord in a circle? I sometimes heard and several times read at popular math sites that the reason why a diameter is the longest chord is that a diameter passes through the center of the circle. While that is true that passing through the center has something to do with the length of a chord, the answer, as given, is vacuous. Since the definition of a diameter is a chord passing through the center of the circle, such an explanation actually reads: \"The diameter is the longest chord in a circle because it is a diameter of the circle.\" How much does that explain?\n\nLet's recollect the definitions:\n\nA segment of a straight line joining two points on a circle is called a chord; a chord that passes through the center of the circle is called a diameter. (Ambiguously, the word \"diameter\" also denotes the length of a diameter.)\n\nAs the statement is definitely not an axiom of geometry, it must be proved, i.e., logically derived from simpler statements and the definitions.\n\nIncidentally, Euclid proved that statement in the third book of his Elements as a more informative Proposition XV:\n\nOf straight lines in a circle the diameter is greatest, and of the rest the nearer to the center is always greater than the more remote.\n\nOn a casual inspection, it seems obvious:", null, "Nonetheless, it has to be proved and Euclid proves the first part with the reference to the Triangle Inequality (I.20) and the second part to the Pythagorean theorem.", null, "Let $AB$ be a diameter of the circle with center $C$ and $DE$ a chord not through $C.$ Then, by the definition of the circle as the locus of points equidistant from the center, $CA = CB = CD = CE = R,$ the radius of the circle. Which makes $AB = 2R.$ (The diameter is twice as long as the radius.)\n\nOn the other hand, in $\\Delta CDE,$ by the triangle inequality,\n\n$DE\\lt CD + CE = R + R= 2R = AB.$\n\nThe same route is taken in my favorite Kiselev's Geometry. In another geometric classics, Lessons in Geometry by J. Hadamard the triangle inequality is applied differently, after establishing the fact that, for a point $P$ on the diameter $AB,$ one of the ends of the diameter is farthest from P among all points of the circle and the other is nearest.", null, "Pick point $P$ not on a circle, pass a line through $P$ and the center of the circle. The line will meet the circle in two points $A$ and $B.$ Let $A$ be the nearest of the two to $P.$ Then $AP$ is the (absolute) difference between $OP$ and $OA = R,$ $BP$ is the sum of $OP$ and $R.$ Let $M$ be any other point on the circle. In $\\Delta OMP,$\n\n$PM \\gt |OP - OM| = |OP - R| = |OP - OA| = AP.$\n\nAlso,\n\n$PM \\lt OP + OM = OP + R = BP.$\n\nThus, of all points on the circle, $A$ is the nearest while $B$ is the farthest from $P.$ Observe now that the two inequalities that express this fact are valid also for $P$ on the circle. Let $P$ coincide with $A.$ Then, for any $M \\ne B,$ $MA \\lt AB$, meaning that the diameter $AB$ is longer than the chord $MA.$\n\nA third proof makes use of the Pythagorean and Thales' theorems. 
If $AB$ is a diameter and $M$ a point on a circle different from $A$ and $B,$ then $\\Delta AMB$ is right at $M.$ The Pythagorean theorem tells us that $AM^{2} + BM^{2} = AB^{2},$ implying in particular that, say, $AM \\lt AB.$\n\nNote: Vincent Pantaloni observed that conversely,\n\nThe longest chord in a circle passes through the center of the circle and is, therefore, a diameter.\n\nIndeed, let $AB$ be the longest chord in circle $(O)$ centered at $O.$ Let $AO$ meet the circle the second time in $B'.$ Then\n\n$AB' = 2R = OA + OB \\ge AB,$\n\nimplying (since $AB$ was said to be the longest chord) $AB' = AB,$ so that $B = B'$ and $O$ lies on $AB.$\n\nThere is another proof exploiting the (all-around) symmetrical nature of the circle.\n\nLet $AB$ be a chord of circle $(O)$ that is not a diameter. Reflect the circle in $AB.$ Note that any axis of symmetry of a circle ought to pass through its center (because a perpendicular bisector of a chord always goes through the center of the circle.) Since $AB$ is not a diameter, it could not be an axis of symmetry either, thus the reflection gives us another circle $(O')$ in which $AB$ is also a chord. Moreover, $AB\\perp OO'.$", null, "Let $CC'$ and $DD'$ be the common tangents of the two circles. In particular, $OC\\perp CC'\\perp O'C'$ and similarly for the second tangent. $AB$ does not cross $CC'$ for, if it did, $OA$ would be longer than $OC,$ which could not be. So, there is a rectangle $CDD'C'$ and, within, line segment $AB$ parallel to, say, $CD,$ implying $AB\\lt CD.$\n\nReferences\n\n1. A. Givental, Kiselev's Geometry. Book I. PLANIMETRY, Sumizdat, 2006, p. 88.\n2. J. Hadamard, Lessons in Geometry , Education Development Center, Dec 2006, Corollary to Theorem 64.\n3. R. Simson, The Elements of Euclid, Eliborn Classics, 2005, p. 65", null, "" ]
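As a quick numerical illustration of the third (Thales/Pythagorean) argument, here is a small Python sketch, not part of the original page, that samples points $M$ on the upper half of a circle of radius 1 and checks both that $AM^{2} + BM^{2} = AB^{2}$ and that every chord from $A$ or $B$ is shorter than the diameter $AB.$

```python
import math

R = 1.0
A, B = (-R, 0.0), (R, 0.0)   # ends of a horizontal diameter
AB = 2 * R

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

for k in range(1, 180):      # upper semicircle only, so M is never A or B
    t = math.radians(k)
    M = (R * math.cos(t), R * math.sin(t))
    AM, BM = dist(A, M), dist(B, M)
    assert abs(AM**2 + BM**2 - AB**2) < 1e-9   # angle AMB is right (Thales)
    assert AM < AB and BM < AB                 # chords through A or B are shorter than AB
print("checked 179 sample points")
```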
[ null, "http://www.cut-the-knot.org/pythagoras/Munching/DiameterChord1.gif", null, "http://www.cut-the-knot.org/pythagoras/Munching/DiameterChord2.gif", null, "http://www.cut-the-knot.org/pythagoras/Munching/Hadamard.gif", null, "http://www.cut-the-knot.org/pythagoras/Munching/DiameterChord3.jpg", null, "http://www.cut-the-knot.org/gifs/tbow_sh.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9183774,"math_prob":0.9994717,"size":4927,"snap":"2019-26-2019-30","text_gpt3_token_len":1320,"char_repetition_ratio":0.16473696,"word_repetition_ratio":0.025442477,"special_character_ratio":0.27217373,"punctuation_ratio":0.12941177,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99991655,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-18T19:24:40Z\",\"WARC-Record-ID\":\"<urn:uuid:e3dce6cd-fa93-4eb1-93e4-3cc3022af97e>\",\"Content-Length\":\"16739\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4f684689-6d0b-47eb-a401-7bdc12c67cb7>\",\"WARC-Concurrent-To\":\"<urn:uuid:1825c9f7-dd82-4fde-9a80-ea3926c7a781>\",\"WARC-IP-Address\":\"107.180.50.227\",\"WARC-Target-URI\":\"http://www.cut-the-knot.org/pythagoras/Munching/DiameterChord.shtml\",\"WARC-Payload-Digest\":\"sha1:2BSRO4IQEHCXJ2EQC5GUBHQFBGPGYWQ3\",\"WARC-Block-Digest\":\"sha1:Y5XBSG4WJAGZDAQ3KFLS2GDAXJ2JRWCP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998813.71_warc_CC-MAIN-20190618183446-20190618205446-00252.warc.gz\"}"}
http://archive.ymsc.tsinghua.edu.cn/pacm_paperurl/20180416111102608709066
[ "# MathSciDoc: An Archive for Mathematician ∫\n\n#### Numerical Analysis and Scientific Computingmathscidoc:1804.25031\n\nJournal of Computational and Applied Mathematics, 2018\nTo avoid the order reduction when third order implicit-explicit Runge-Kutta time discretization is used together with the local discontinuous Galerkin (LDG) spatial discretization, for solving convection-diffusion problems with time-dependent Dirichlet boundary conditions, we propose a strategy of boundary treatment at each intermediate stage in this paper. The proposed strategy can achieve optimal order of accuracy by numerical verification. Also by suitably setting numerical flux on the boundary in the LDG methods, and by establishing an important relationship between the gradient and interface jump of the numerical solution with the independent numerical solution of the gradient and the given boundary conditions, we build up the unconditional stability of the corresponding scheme, in the sense that the time step is only required to be upper bounded by a suitable positive constant, which is independent of the mesh size.\nlocal discontinuous Galerkin method, implicit-explicit time discretization, convection-diffusion equation, Dirichlet boundary condition, order reduction\n```@inproceedings{haijin2018third," ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.63627875,"math_prob":0.85050595,"size":792,"snap":"2023-40-2023-50","text_gpt3_token_len":198,"char_repetition_ratio":0.087563455,"word_repetition_ratio":0.41791046,"special_character_ratio":0.23863636,"punctuation_ratio":0.20325203,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9641451,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T20:59:21Z\",\"WARC-Record-ID\":\"<urn:uuid:e1cf82fa-0699-4c0b-9cb4-fb6b73945e31>\",\"Content-Length\":\"26614\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6193049a-bf32-436c-8918-f7512a76d6e6>\",\"WARC-Concurrent-To\":\"<urn:uuid:4487fc45-25ec-43f0-952e-b86a9c686a02>\",\"WARC-IP-Address\":\"101.6.6.219\",\"WARC-Target-URI\":\"http://archive.ymsc.tsinghua.edu.cn/pacm_paperurl/20180416111102608709066\",\"WARC-Payload-Digest\":\"sha1:VJDWZEWSCZI6HH2SKGMFYOF6CAG2WEH3\",\"WARC-Block-Digest\":\"sha1:G7QWGLJU7C2GJO6VPFJQHRVWKKNIICP7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510326.82_warc_CC-MAIN-20230927203115-20230927233115-00258.warc.gz\"}"}
https://www.nature.com/articles/s41598-020-79507-4?error=cookies_not_supported&code=ca69ccae-cfb8-4e43-8d7f-7c0a8199f621
[ "## Introduction\n\nNumber theory has a reputation of “unreasonable effectiveness.” Perhaps the most famous example is the Fibonacci numbers, which are closely related to golden ratio, plant growth (Phyllotaxis), and DNA patterns. Over the past years, a few developments have begun to unfold the potentials of number theory in understanding complex networks1,2. This work employs the p-adic numbers for modeling complex networks based on the Erdős-Rényi (ER) random graph. The p-adic number system gives an extension of the ordinary arithmetic of rational numbers ($${\\mathbb {Q}}$$) in a way distinct from the common extension of $${\\mathbb {Q}}$$ to real numbers ($${\\mathbb {R}}$$) and complex numbers ($${\\mathbb {C}}$$). The p-adic number system can be applied in various scientific fields. One groundbreaking application in physics is the p-adic AdS/CFT3. Khrennikov et al.4 developed p-adic wavelet for modeling reaction-diffusion dynamics. Applications in biology include the models for hierarchical structures of protein5 and genetic code6.\n\n### Networks, behaviors, and health\n\nThere is growing interest on a unified theory of complex networks from various fields7,8,9. The formalism commenced with graph theory in mathematics. The emergence of giant component is essential for the evaluation of random graphs10,11,12. Network analysis revealed hidden structures in social and economic systems13,14. Exponential random graph15 and stochastic blockmodels16,17 were developed. Many statistical mechanics in physics, e.g., percolation18 and time series19, are modeled with complex networks. Network models are closely associated with epidemiology20,21 and public health22. Chains of affection23 is a classical work revealing the network structure as a critical factor in public health. The spatiotemporal dynamics20 is essential for understanding the epidemic processes.\n\nThe complex networks also play an essential role in the microscopic scale. Numerous researches in biology rely on the curation and archival storage of protein, genetic, and chemical interactions for all major model organism species and humans (e.g. BioGRID and STRING database). The disease network24,25, protein-protein interaction network, and gene network26 contributed to the network approaches to life science27. In the near future, the network advances in biology, such as drug targeting28,29 and network medicine30, might be critical for improving our health.\n\n### Erdős–Rényi model and extensions\n\nThe classical ER model has inspired the continuous development of a rich spectrum of sophisticated models. There are three major approaches:\n\n• Creating more complex structures such as multilayer networks31,32 and multiplex networks33,34. Complex networks often exhibit community or module structures7.\n\n• Manipulating the rules for constructing edges. The Achlioptas process picks two candidate edges each time18 for competitive graph-evolution. ER process, Bohman and Frieze process, and product rule process were compared with one another for analyzing explosive percolation35.\n\n• Projecting graphs onto social, geometric, or geographic structures. Major developments in that aspect include the hyperbolic networks36, spatial preferential attachment37, inhomogeneous random graph38,39, and spatial networks30.\n\n### Findings\n\nCommon attempts to modeling the hierarchical structures7 are reflected by various notions including subpopulation, subgraph, mixing pattern, community, and module. 
We postulate that hierarchical structures are naturally encrypted in a standard graph. Consequently, imposing additional structures onto a graph to enrich its behavior is not always necessary. The key is the p-adic absolute value40. According to Ostrowski’s theorem, every non-trivial absolute value on $${\\mathbb {Q}}$$ is either the usual real absolute value or the p-adic absolute value. Various hierarchical structures can be represented by p-adic integers as nodal indices. The network topology and relative strengths between connections are unified as p-adic distances between numbers.\n\nThe p-adic random graph (PARG), probably the simplest model of inhomogeneous networks, offers a flexible method to simulate various observations in complex networks, especially the phenomenon of multiple big components. Degree distribution is a key property for distinguishing random, free-scale, and small world networks. However, PARG indicates that two random graphs with identical degree distribution may produce significantly different component sizes.\n\nWe fit PARG to the component size distribution of the genetic interaction networks, and also to the joint distributions of big components in COVID-19 outbreaks. The experiments imply that the community structures are responsible for the multimodal distributions of the sizes of big components. The largest or the second largest component could be more stable at (multiple) specific sizes. Therefore, maintaining a local peak could be valuable for intervening the spreading processes.\n\nIn contrast to the celebrated ER network, another early prototype of random graph, the Rado graph, has been rarely revisited. The Rado graph employs the binary number system to encode the graph edges, using Ackermann coding of hereditarily finite sets. PARG explores the fundamental nature of integers to encode the probability of connecting a pair of nodes. The p-adic number system extends the ordinary arithmetic of rational numbers3,40. Our PARG model will focus on the p-adic metric on nonnegative integers. An integer’s r-adic (picking a prime number r) absolute value is the reciprocal of the largest power of r that divides it. For example, $$|40|_2=1/8$$ (let r=2, then the 2-adic absolute value of 40 equals 1/8), $$|40|_3=1$$, $$|40|_5=1/5$$. Such absolute value is the most significant example of ultrametrics40. Each node is naturally associated with an integer, i.e., its index in the ordered set of nodes, or an arbitrary (unique) integer can be assigned to each node. The probability of connecting any pair of nodes ij is proportional to the p-adic closeness between the two nodal indices $$v_i, v_j$$ :\n\n\\begin{aligned} p_{ij}= \\frac{p^*}{ |v_j -v_i|_r} \\end{aligned}\n(1)\n\nfor $$i,j\\in [1,n],\\; i<j$$ (assuming $$i<j \\Leftrightarrow v_i<v_j$$). $$p^*$$ is a constant as the probability in the ER sense. When comparing ER and PARG, we normalize the PARG probability so that $$\\sum _{ij} p_{ij} =n(n-1) p^*/2$$. As a result, the number of edges in ER equals that in PARG. The p-adic distances encode a hierarchical structure, as shown in the circular tree-map in Fig. 1a. The distances between any pair of numbers from the same small circle and the same big circle are 1/9 and 1/3, respectively. If the two numbers are from different big circles, their distance is 1.\n\nThe digit format of a p-adic number is intuitive, for example, $$201_3=2\\cdot 3^0 +0\\cdot 3^1+1\\cdot 3^2=11$$. 
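As an illustrative sketch (not from the paper), the r-adic absolute value and the connection probability of Eq. (1) can be written down directly; Python is used here and the function names are made up for the example.

```python
from fractions import Fraction

def r_adic_abs(n, r):
    """r-adic absolute value |n|_r: reciprocal of the largest power of r dividing n."""
    if n == 0:
        return Fraction(0)        # |0|_r = 0 by convention
    power = 0
    while n % r == 0:
        n //= r
        power += 1
    return Fraction(1, r ** power)

# Examples from the text: |40|_2 = 1/8, |40|_3 = 1, |40|_5 = 1/5.
print(r_adic_abs(40, 2), r_adic_abs(40, 3), r_adic_abs(40, 5))

def connect_prob(v_i, v_j, p_star, r):
    """Eq. (1): connection probability for nodes with indices v_i < v_j."""
    return p_star / r_adic_abs(v_j - v_i, r)

# Indices differing by a higher power of r are proportionally more likely to connect:
print(connect_prob(0, 9, 0.01, 3))   # |9|_3 = 1/9, so the probability is 0.09
```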
One can construct a set of integers (as the nodal indices) in their digit formats by\n\n\\begin{aligned} u_0 . u_1 . \\cdots .u_m \\;_r\\, {\\mathop {=}\\limits ^{ \\text {def}}}\\, \\{ a_0 a_1 \\cdots a_m \\;_r \\; | \\; 0\\le a_i < u_i \\text { for } i=0,1,\\cdots ,m \\} \\end{aligned}\n(2)\n\nwhere r is the chosen prime number. We call the model a full PARG if $$u_0=u_1=\\cdots = u_m=r$$ (i.e., $$n= r^{m+1}$$), otherwise, we call it a regular PARG (i.e., $$n= \\prod _i u_i$$). The nodal indices can also be arbitrary digits under a given prime r, which leads to a general PARG. The expression (2) facilities the enumeration of hierarchical structures. For example, $$3.2.3_3$$ fully describes a hierarchical structure as shown in Fig. 1c. More examples are in illustrated in Fig. 1. A notation such as $$G(3.2.3_3, p^*)$$ fully specifies a PARG.\n\nPARG implements a Bernoulli process on all pairs from a set of p-adic numbers. Let $$p_k$$ represent the probability of a randomly chosen node with degree k. In the ER model, $$p_k$$ follows a Binomial distribution. It becomes a Poisson distribution in the limit of large n. By contrast, n in PARG equals the number of individuals in observations. Because of the symmetric connectivity in regular PARG, $$p_k$$ can be obtained from the degree distribution of one node:\n\n\\begin{aligned} p_k= \\sum _{\\alpha _0,\\alpha _1,\\cdots ,\\alpha _m} \\prod _{i=0}^m (r^i p^*)^{\\alpha _i} (1-r^i p^*)^{d_i-\\alpha _i} \\left( {\\begin{array}{c}d_i\\\\ \\alpha _i\\end{array}}\\right) \\end{aligned}\n(3)\n\nwhere $$\\sum _{i=0}^m d_i =n-1$$ holds. $$d_i$$ denotes the number of links (from the chosen node) with probability $$r^ip^*$$ according to (1-2). The numbers $$\\alpha _0,\\alpha _1,\\cdots ,\\alpha _m$$ denote all combinations satisfying $$k=\\sum _{i=0}^m \\alpha _i$$. Numeric computing of (3) indicates that $$p_k$$ in PARG and that in ER are almost identical. The two models can have the same degree distribution and the same number of edges. However, there could be significant differences in the size distributions of big components in PARG and ER (Fig. 2, Supplementary Note 1, Table S1 and S2). The largest components in PARG may exhibit multimodal distribution due to the hierarchical structure. This implies that certain sizes of (multiple) big components are more statistically stable than other sizes.\n\n## Sizes of big components\n\nThe relative sizes of multiple big components in complex networks were much less studied compared to the studies carried out on the giant component12,41. Analytical methods11,42 and generating functions10 have been widely employed for analyzing the component sizes. Rather than let $$n\\rightarrow \\infty$$, n in PARG is equal to the number of relevant individuals in observations. As a result, the sizes of simulated components are similar to that in ground truth. When n is finite, numeric random realizations are suited to evaluate the various probabilities about component sizes (see “Methods” section).\n\nWhen n is fixed, fitting ER model to empirical data only involves the single parameter p, while PARGs involve two kinds of parameters: the probability p and the hierarchical structures represented by (2). The configuration space of (2) is very vast, even when $$n<1000$$, so we opted for ad hoc heuristics to choose the hierarchical structures that fit relatively well to observations. The heuristics includes: (1) scaling the distances. 
For example, $$7.5.6_r$$ with $$r=7, 11, 13,\\cdots$$ represent the same hierarchical structure, though the distances between the levels are scaled. (2) Flat vs. deep hierarchy. For instance, both $$16.16_{17}$$ and $$2.2.2.2.2.2.2.2_2$$ refer to 256 nodes. The former is made of 16 groups (each contains 16 nodes), while the latter’s structure looks like a high tree.\n\nBased on the hierarchical structures in PARG, the following experiments analyze multiple big components, especially $$5|C_2|>|C_1|$$, as static structures observed in networks. The topics range from microscopic networks, such as biological networks26,28, to macroscopic networks, such as epidemics21,43.\n\n### Genetic interaction networks\n\nThe essential role of genetic interaction networks plays in biology has been lately revealed25. The essential genetic interaction network of yeast genes (theCellmap.org) contains 1,261 mutant strains. Their interactions have been characterized by Pearson correlation coefficient (PCC). Genes with highly correlated genetic interaction profiles (PCC>0.4) form clusters of specific pathways or protein complexes26. We set three PCC thresholds above 0.4 to obtain three graphs with distinct big components, as shown in Fig. 3g–i. We count the components sizes falling into the predefined intervals of Fibonacci numbers ($$b_{i+1}=b_i+b_{i-1}, b_0=3$$). Let $$\\theta _i$$ be the number of simulated components whose sizes fall between $$b_i$$ and $$b_{i+1}$$, and $$\\theta _i^*$$ be that in ground truth. The error of a random realization is given by\n\n\\begin{aligned} \\sum _i \\left[ (b_{i+1}-b_i) (\\theta _i - \\theta _i^*) \\right] ^2 \\end{aligned}\n(4)\n\nThe averaged error from many random realizations yields a relatively accurate evaluation. ER and PARG with distinct values of np lead to different errors (Fig. 3a–c). Equipped with (configurable) hierarchical structures, PARGs fit better to observations than ER. The component size distributions of the best fits are shown in Fig. 3d−f.\n\n### Protein-protein interaction\n\nExploring the protein interaction networks of proteins poses a major challenge in biomedicine. Protein-protein interaction (PPI) is crucial to understanding cellular pathways and human diseases44. The following experiment creates a graph from a set of 408 S. cerevisiae protein complexes as45. The graph nodes represent individual proteins from these complexes. An edge is constructed between two nodes (proteins) if they belong to the same complex. The graph includes 1,628 nodes and 11,249 edges.\n\nWe define the similar metric $$S_{duo}$$ and $$S_{tri}$$ (see “Methods” section) to compare the simulated component sizes with the ground truth. PARG fits better ($$S_{tri}$$=0.307) than the ER model ($$S_{tri}$$=0.112) to the PPI network, as shown in Fig. 4. The simulation data can be found in supplementary Table S3 and S4. It means that the chosen PARG has a higher probability that the sizes of its big components resemble those in the PPI network.\n\n### Instrumental resource street network\n\nThere has been growing attention to the impact of social networks on health46. For example, homeless youth is an active research field, including analysis and interventions. Social networks of homeless youth47,48 are vital for understanding and intervening the observed phenomena.\n\nThis experiment involves a social network about employment services utilization among homeless youth. The original research49 queried 136 homeless youth in Los Angeles in 2008. 
Four distinct networks were constructed from the same population, according to instrumental, emotional, employment services use, and sociometric relationship respectively. Only the instrumental network ($$|C_1^*|$$=30, $$|C_2^*|$$=13) satisfies $$5|C_2^*|>|C_1^*|$$, i.e., the second largest component is large enough. Regarding the similarity metric $$S_{duo}$$, the ER model attains the maximum similarity ($$S_{duo}$$=0.155) at $$p=1.08/n$$; while $$G(10.14_{353}, 1.75/n )$$ has a much higher similarity ($$S_{duo}$$ =0.316). The community structure in the PARG corresponds to the social or geographical networks of the homeless youth; although the map between the two is still elusive.\n\nWe also fit the hyperbolic networks36 and the Achlioptas process35 to the observed component sizes ($$|C_1^*|$$=30, $$|C_2^*|$$=13). Details can be found in “Methods” section. The random hyperbolic graph50,51 reaches the maximum similarity ($$S_{duo}$$=0.266) when $$C=-9.3$$, $$\\alpha =10$$, $$D=0.0684R$$. The Achlioptas process with product rule (PR) has the maximum similarity ($$S_{duo}$$=0.221) when the number of edges is equal to 100. So PARG outperforms the other two models in this case.\n\nCoronavirus52 has spread among many Chinese cities since the end of January 2020. The incubation period53 and possible mild symptoms54 made the prevention more complicated. Social networking sites (or local officials) reported traces of infected people. Relationships between the infected (and those who had close contacts with them) were also investigated. From a point of view of networks, the cities exhibited three distinct patterns: 1) No big components. Shenzen reported 416 confirmed cases by February 20, 2020. The largest cluster has only 9 people. 2) A giant component. Xinyu reported 110 confirmed cases by February 10, 2020. The giant component consists of 52 cases, nearly half of the infected population. The second largest component has only six cases. 3) Multiple big components. Tianjin reported 136 confirmed cases by February 3, 2020. The first, second, and third largest clusters contain 44, 17 and 11 cases, respectively, which are related to a huge department store, the railway, and a residential area, respectively.\n\nWe focus on Tianjin’s infection network (Supplementary Note 1, Table 5), which consists of multiple big components. A graph is created to visualize the relationship between the infected when the outbreak was around its peak, as shown in Fig. 5. The simulation data can be found in Supplementary Table S6 and S7. The similarity metric $$S_{tri}$$ could be biased when $$|C_3^*|$$ is quite smaller than $$|C_1^*|$$, so the similarity metric $$T_{tri}$$ (see “Methods” section) is employed in this case to fit the models to the observed clusters, as shown in Fig. 6. The PARG $$G(8.17_{59}, 1.49/n)$$ has the highest similarity $$T_{tri}$$=0.00916.\n\nWe also compared the results of the random hyperbolic graph and the Achlioptas process with that of PARG. The random hyperbolic graph reaches the maximum similarity ($$T_{tri}$$=0.0150) when $$C=-9.65$$, $$\\alpha =1$$, $$D=0.04294R$$. The Achlioptas process with Bohman Frieze (BF) rule has the maximum similarity ($$T_{tri}$$=0.00825) when the number of edges is equal to 93. So the random hyperbolic graph fits best to this case.\n\nA later investigation indicates that the 11 cases (in yellow, Fig. 5) are probably related to the department store as well. In this new perspective, the two big clusters form the largest component of 55 nodes. 
We employ the metric $$T_{duo}$$ to fit modes to this new observation ($$|C_1^*|$$=55, $$|C_2^*|$$=17), as shown in Fig 7. The simulation data can be found in Supplementary Table S8 and S9. The great variety of hierarchical structures enable PARG to fit relatively well to observations from distinct perspectives.\n\n## Discussion\n\nThe size distributions of big components in complex networks are attributed to the structure of physical world; the behaviors of agents (nodes) and the information transmitting between them; and the observer (how to look at the events). The ER model offered prominent findings of component sizes in networks, however, it rarely fits the joint distribution P($$|C_1|\\approx |C_1^*|$$, $$|C_2|\\approx |C_2^*|$$, $$\\cdots$$) in real observations. A successful strategy is introducing inhomogeneous structures or selective rules (for constructing edges) to the random graph to increase its versatility. PARG probably provides the simplest way to fully describe a hierarchical structure in an ER-like model.\n\nPARG interprets the n in ER model as the cardinality of a set of natural numbers (nodal indices). Consequently, the probability p can be weighted by distances between the nodal indices. In number theory (Ostrowski’s theorem), any non-trivial definition of absolute value on $${\\mathbb {Q}}$$ is either the conventional one or the p-adic absolute value. So, the p-adic ultrametric reveals the natural hierarchical structures hidden in any graph with indexed nodes. PARG blurs the boundaries between the topology approaches (e.g., multiplex networks) and the geometric approaches (e.g., hyperbolic networks).\n\nIn our PARG approach, n denotes the number of observed individuals, whereas in previous ER-like models $$n\\rightarrow \\infty$$. The limit of n facilities analytical approaches10,11, while a relatively small finite n is convenient for numerical random realizations. Random graph theories such as explosive percolation35 and synchronization55, deeply revealed the dynamics of the emerged components. By contrast, this work explores the sizes of resultant (static) components from observations or simulations. The results imply that the proportions between the big component sizes are closely associated with the hierarchical structures of complex networks. This implication is in contrast to previous emphasis10,27,56 on degree distribution as the fingerprint of network structures.\n\nWe fit PARGs and other random graphs to observations from various types of networks. The PARG outperforms the ER model and the Achlioptas process. The random hyperbolic model fits better than PARG to certain cases but worse than PARG to other cases. The simulations of PARG show that the size of big components, e.g., $$P(|C_1|=x)$$ and $$P(|C_2|=x)$$, can exhibit multimodal distribution (e.g., Figs. 2b and 7(b)) due to their modular structures. In the case of multimodal distribution, the first peak of $$P(|C_2|=x)$$ could be very close to the last peak of $$P(|C_1|=x)$$. Thus, one may distinguish the major mode from the minor modes when investigating the giant component. One present challenge in network epidemic modeling57 is designing network-based interventions. Current strategies include targeting high-degree nodes or central nodes. The multimodal distribution of $$P(|C_i|=x)$$ has implications in controlling the spreading processes in networks. 
Since $$P(|C_i|=x)$$ has more than one local peak, it might be possible to predict and maintain the big components’ growth around a local peak.\n\n## Methods\n\n### Random graph implementation\n\nAll random graphs are created through random experiments implemented in the Java programming language (Java 1.8 with Eclipse IDE). Given a connecting probability p (in the ER context), an edge is included in the graph if\n\n\\begin{aligned} p > rand \\end{aligned}\n\nwhere rand stands for a random number between 0 and 1, generated by the method nextDouble() from the java.util.Random class. The method generates a stream of pseudorandom numbers via linear congruential generator (LCG) with modulus $$2^{48}$$.\n\nA disjoint-set data structure (Union-Find algorithm) is employed to find all components in graph. N random realizations of ER or PARG yield an ensemble of binary numbers\n\n\\begin{aligned} \\delta _{ix}^j= {\\left\\{ \\begin{array}{ll} 1, \\text { if } |C_i^j|=x \\\\ 0, \\text { otherwise} \\\\ \\end{array}\\right. } \\end{aligned}\n\nfor $$x=1,2,\\cdots , n$$. $$|C_i^j|$$ denotes the size of the ith largest component in the jth realization. Then one can evaluate the size distribution of the big components by\n\n\\begin{aligned} P(|C_i|=x) =\\frac{1}{N} \\sum _{j=1}^N \\delta _{ix}^j \\end{aligned}\n\nOne can choose an a prior function to measure the similarity between the simulated components sizes and the observed sizes. For example, Poisson distribution can be used to define the similarity between $$|C_i^j|$$ and the observation :\n\n\\begin{aligned} \\Psi _i^j= \\frac{|C_i^*|!\\; |C_i^*|^{\\left( |C_i^j|- |C_i^*|\\right) } }{|C_i^j|!} \\end{aligned}\n\nor one can use the normal distribution:\n\n\\begin{aligned} \\Psi _i^j= \\exp \\left( \\frac{ \\left( |C_i^j|-|C_i^*|\\right) ^2}{s^2 |C_i^*|^2} \\log \\frac{1}{2} \\right) \\end{aligned}\n\nwhere s is a constant (typically $$s=0.1$$, so that 10% deviation from the truth results in 1/2 similarity). Simulations in this work employed the later definition. The probability distribution of $$|C_2|$$ under the condition $$|C_1|\\approx |C_1^*|$$ is given by\n\n\\begin{aligned} P(|C_2|=x \\; \\big |\\; |C_1| \\approx |C_1^*| ) =\\frac{1}{N} \\sum _{j=1}^N \\Psi _1^j \\delta _{2x}^j \\end{aligned}\n\nLikewise, $$P(|C_3|=x \\; \\big |\\; |C_1| \\approx |C_1^*| ) =\\frac{1}{N} \\sum _{j=1}^N \\Psi _1^j \\delta _{3x}^j$$.\n\nWe define the objective function\n\n\\begin{aligned} S_{duo}= \\frac{1}{2N} \\sum _{j=1}^N \\sum _{i=1}^2 \\Psi _i^j,\\;\\; S_{tri}= \\frac{1}{3N} \\sum _{j=1}^N \\sum _{i=1}^3 \\Psi _i^j \\end{aligned}\n\nfor a random graph to measure whether its first two/three largest component sizes are close to that in observation. When $$|C_2^*|$$ or $$|C_3^*|$$ is much smaller than $$|C_1^*|$$, the following metric would be more appropriate:\n\n\\begin{aligned} T_{duo}= \\frac{1}{N} \\sum _{j=1}^N \\Psi _1^j \\Psi _2^j, \\;\\; T_{tri}= \\frac{1}{N} \\sum _{j=1}^N \\Psi _1^j \\Psi _2^j \\Psi _3^j \\end{aligned}\n\n### Genetic interaction network of yeast genes\n\nThe data of yeast genes is from https://thecellmap.org/ costanzo2016/. The networks in Fig. 3g–i are drawn from the data of the Essential $$\\times$$ Essential network, which involves 1,261 mutant strains. So, each network in Fig. 3g–i consists of 1261 nodes. The PPC values between the mutant strains are obtained from the genetic interaction profile similarity matrices. An edge is included in the graph if the corresponding PPC value is above the predefined threshold. 
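A condensed Python sketch (the paper's implementation is in Java) of the realization procedure described in this section: draw each edge with probability prob(i, j), find component sizes with a union-find structure, and score a realization against an observed size with the normal-form similarity Ψ; all names are made up for illustration, and the example reuses the ER fit quoted for the instrumental network (n = 136, p = 1.08/n, |C1*| = 30).

```python
import math
import random

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def realization_component_sizes(n, prob, seed=None):
    """One random realization; prob(i, j) gives the connection probability of pair (i, j)."""
    rng = random.Random(seed)
    parent = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if prob(i, j) > rng.random():
                ri, rj = find(parent, i), find(parent, j)
                parent[ri] = rj
    sizes = {}
    for x in range(n):
        root = find(parent, x)
        sizes[root] = sizes.get(root, 0) + 1
    return sorted(sizes.values(), reverse=True)

def psi(observed, simulated, s=0.1):
    """Normal-form similarity: a 10% deviation from the observed size gives 1/2."""
    return math.exp((simulated - observed) ** 2 / (s * observed) ** 2 * math.log(0.5))

n = 136
sizes = realization_component_sizes(n, lambda i, j: 1.08 / n, seed=1)
print(sizes[:2], psi(30, sizes[0]))
```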
The graphs are projected onto squares using our Java program, as shown in Fig. 3g–i. In a dynamical process, the agents (nodes) push away from each other, while each edge drags the two end nodes into a fixed range. The color (hue) of edges indicates the size of the relevant component.\n\n### Protein network of yeast genes\n\nThe data of S. cerevisiae protein complexes is obtained from the additional File 1 of45. We programmed a Java application to read the table, construct the 1,628 nodes (proteins) and 11,249 edges (a pair of nodes belong to a same complex), and find the components via Union-Find algorithm.\n\n### Random hyperbolic graph\n\nOur implementation follows the formulation in50,51. $$R=2\\ln n + C$$ denotes the radius of the disc. The probability density for the radial coordinate r of a point $$(r, \\phi )$$ is given by\n\n\\begin{aligned} \\alpha \\frac{ \\sinh (\\alpha r)}{ \\cosh (\\alpha R) -1} \\end{aligned}\n\nWe use inverse transform sampling to generate the radii of the points, i.e., $$r= \\frac{1}{a} arcosh (1+ \\cosh (\\alpha R) x -x)$$ where $$x\\in (0,1)$$ denotes a random number from the uniform distribution. Our experiments generate x by nextDouble() in Java. For each pair of nodes uv, a link is added to the graph if $$d(u,v) < D$$ where $$D \\in (0,R]$$ is a constant and d(uv) denotes the distance between the two points in the hyperbolic space.\n\n\\begin{aligned} \\cosh ( d(u,v)) = \\cosh r_u \\cosh r_v - cos(\\theta _u - \\theta _v) \\sinh r_u \\sinh r_v \\end{aligned}\n\n### Achlioptas process\n\nAt each time step of the Achlioptas process18,35, two edges $$e_1$$ and $$e_2$$ compete for addition. Suppose $$e_1$$ involves two components of size $$|C_a|,|C_b|$$; $$e_2$$ involves two components of size $$|C_c|,|C_d|$$, we consider three types of competing rules:\n\n• add $$e_1$$ if $$|C_a|+|C_b| < |C_c|+|C_d|$$, add $$e_2$$ otherwise.\n\n• product rule (PR): add $$e_1$$ if $$|C_a||C_b| < |C_c||C_d|$$, add $$e_2$$ otherwise.\n\n• Bohman Frieze (BF) rule: add $$e_1$$ if $$|C_a|=|C_b| =1$$, add $$e_2$$ otherwise.\n\nOur experiment treats the number of edges (added to the graph) as the parameter.\n\n## Conclusions\n\nA number theory approach to random graph is proposed. A set of n random numbers generates an n by n adjacency matrix whose binary elements follow the probability (1). Thus, a hierarchical structure is implemented through ultrametrics40,58. The simplicity of the digit form (2) for hierarchical structures of random graph facilitates the enumeration of different setting of clusters and hierarchies. In contrast to mapping complex structures or rules from real world to random graph, our PARG approach explores the complex structures in numbers2,59 which might be rich enough for modeling complicated observations. An alternative point of view suggests that a plain graph can unfold a hidden hierarchical structure, based on an indispensable definition of absolute value.\n\nThe proposed PARG model is more abstract than multilayer networks, multiplex networks, and social-geographical models, but more concrete than ER-like models without community structures. Therefore, a future framework of research would consist of two interconnected levels: (1) Searching for hyper parameters, such as hierarchical structures, in PARG, given empirical data and (2) Constructing the ad hoc realistic models, for example, the biomolecular environments in cell or the social-geographical structures in a city, which the PARG model is projected onto." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8468567,"math_prob":0.99581814,"size":36583,"snap":"2023-14-2023-23","text_gpt3_token_len":9258,"char_repetition_ratio":0.14027174,"word_repetition_ratio":0.014330332,"special_character_ratio":0.26351038,"punctuation_ratio":0.18718523,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.99887156,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-26T01:02:02Z\",\"WARC-Record-ID\":\"<urn:uuid:27dc6248-0ed1-4cd7-8120-535eb5b6397b>\",\"Content-Length\":\"368248\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:20cbaee3-d3a6-43c8-b91e-d4903f37e8c9>\",\"WARC-Concurrent-To\":\"<urn:uuid:1d4763cb-164f-4dbe-b548-bbec71e3305a>\",\"WARC-IP-Address\":\"146.75.32.95\",\"WARC-Target-URI\":\"https://www.nature.com/articles/s41598-020-79507-4?error=cookies_not_supported&code=ca69ccae-cfb8-4e43-8d7f-7c0a8199f621\",\"WARC-Payload-Digest\":\"sha1:VMQXC4ATFPQA7DO6VI4SZ34LOZDJE4MO\",\"WARC-Block-Digest\":\"sha1:V5QACJ7WWV2UGKJ6NSNAQNYKV6GYQSNM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945376.29_warc_CC-MAIN-20230325222822-20230326012822-00235.warc.gz\"}"}
https://docs.monai.io/en/stable/_modules/monai/apps/pathology/transforms/spatial/dictionary.html
[ "# Source code for monai.apps.pathology.transforms.spatial.dictionary\n\n```# Copyright (c) MONAI Consortium\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# Unless required by applicable law or agreed to in writing, software\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n\nimport copy\nfrom typing import Any, Dict, Hashable, List, Mapping, Optional, Tuple, Union\n\nfrom monai.config import KeysCollection\nfrom monai.config.type_definitions import NdarrayOrTensor\nfrom monai.transforms.transform import MapTransform, Randomizable\nfrom monai.utils import deprecated\n\nfrom .array import SplitOnGrid, TileOnGrid\n\n__all__ = [\"SplitOnGridd\", \"SplitOnGridD\", \"SplitOnGridDict\", \"TileOnGridd\", \"TileOnGridD\", \"TileOnGridDict\"]\n\nclass SplitOnGridd(MapTransform):\n\"\"\"\nSplit the image into patches based on the provided grid shape.\nThis transform works only with torch.Tensor inputs.\n\nArgs:\ngrid_size: a tuple or an integer define the shape of the grid upon which to extract patches.\nIf it's an integer, the value will be repeated for each dimension. Default is 2x2\npatch_size: a tuple or an integer that defines the output patch sizes.\nIf it's an integer, the value will be repeated for each dimension.\nThe default is (0, 0), where the patch size will be inferred from the grid shape.\n\nNote: the shape of the input image is inferred based on the first image used.\n\"\"\"\n\nbackend = SplitOnGrid.backend\n\ndef __init__(\nself,\nkeys: KeysCollection,\ngrid_size: Union[int, Tuple[int, int]] = (2, 2),\npatch_size: Optional[Union[int, Tuple[int, int]]] = None,\nallow_missing_keys: bool = False,\n):\nsuper().__init__(keys, allow_missing_keys)\nself.splitter = SplitOnGrid(grid_size=grid_size, patch_size=patch_size)\n\ndef __call__(self, data: Mapping[Hashable, NdarrayOrTensor]) -> Dict[Hashable, NdarrayOrTensor]:\nd = dict(data)\nfor key in self.key_iterator(d):\nd[key] = self.splitter(d[key])\nreturn d\n\n[docs]@deprecated(since=\"0.8\", msg_suffix=\"use `monai.transforms.GridPatchd` or `monai.transforms.RandGridPatchd` instead.\")\nclass TileOnGridd(Randomizable, MapTransform):\n\"\"\"\nTile the 2D image into patches on a grid and maintain a subset of it.\nThis transform works only with np.ndarray inputs for 2D images.\n\nArgs:\ntile_count: number of tiles to extract, if None extracts all non-background tiles\nDefaults to ``None``.\ntile_size: size of the square tile\nDefaults to ``256``.\nstep: step size\nDefaults to ``None`` (same as tile_size)\nrandom_offset: Randomize position of the grid, instead of starting from the top-left corner\nDefaults to ``False``.\nDefaults to ``False``.\nbackground_val: the background constant (e.g. 255 for white background)\nDefaults to ``255``.\nfilter_mode: mode must be in [\"min\", \"max\", \"random\"]. 
If total number of tiles is more than tile_size,\n            then sort by intensity sum, and take the smallest (for min), largest (for max) or random (for random) subset\n            Defaults to ``min`` (which assumes background is high value)\n\n    \"\"\"\n\n    backend = SplitOnGrid.backend\n\n    def __init__(\n        self,\n        keys: KeysCollection,\n        tile_count: Optional[int] = None,\n        tile_size: int = 256,\n        step: Optional[int] = None,\n        random_offset: bool = False,\n        background_val: int = 255,\n        filter_mode: str = \"min\",\n        allow_missing_keys: bool = False,\n        return_list_of_dicts: bool = False,\n    ):\n        super().__init__(keys, allow_missing_keys)\n\n        self.return_list_of_dicts = return_list_of_dicts\n        self.seed = None\n\n        self.splitter = TileOnGrid(\n            tile_count=tile_count,\n            tile_size=tile_size,\n            step=step,\n            random_offset=random_offset,\n            background_val=background_val,\n            filter_mode=filter_mode,\n        )\n\n    def randomize(self, data: Any = None) -> None:\n        self.seed = self.R.randint(10000)  # type: ignore\n\n    def __call__(\n        self, data: Mapping[Hashable, NdarrayOrTensor]\n    ) -> Union[Dict[Hashable, NdarrayOrTensor], List[Dict[Hashable, NdarrayOrTensor]]]:\n\n        self.randomize()\n\n        d = dict(data)\n        for key in self.key_iterator(d):\n            self.splitter.set_random_state(seed=self.seed)  # same random seed for all keys\n            d[key] = self.splitter(d[key])\n\n        if self.return_list_of_dicts:\n            d_list = []\n            for i in range(len(d[self.keys[0]])):\n                d_list.append({k: d[k][i] if k in self.keys else copy.deepcopy(d[k]) for k in d.keys()})\n            d = d_list  # type: ignore\n\n        return d\n\nSplitOnGridDict = SplitOnGridD = SplitOnGridd\nTileOnGridDict = TileOnGridD = TileOnGridd\n```" ]
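A minimal usage sketch (not part of the MONAI source above) of the dictionary-based transform, assuming a 3-channel torch tensor stored under the key "image"; the key name and sizes are arbitrary, and the exact output shape follows the SplitOnGrid array transform and should be checked against the installed MONAI version.

```python
import torch
from monai.apps.pathology.transforms.spatial.dictionary import SplitOnGridd

# A dictionary-style sample with one 3 x 64 x 64 "image"; the key name is arbitrary.
sample = {"image": torch.rand(3, 64, 64)}

# Split every listed key into a 2 x 2 grid of patches.
splitter = SplitOnGridd(keys="image", grid_size=(2, 2))
out = splitter(sample)

# Expected here: 4 patches stacked along a new first dimension, each 3 x 32 x 32.
print(out["image"].shape)
```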
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5926617,"math_prob":0.8150213,"size":4703,"snap":"2022-27-2022-33","text_gpt3_token_len":1205,"char_repetition_ratio":0.10683124,"word_repetition_ratio":0.06644518,"special_character_ratio":0.25770783,"punctuation_ratio":0.21933962,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95832705,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-01T05:17:26Z\",\"WARC-Record-ID\":\"<urn:uuid:e84674d1-b00a-4aa4-a86b-7555576e20c9>\",\"Content-Length\":\"29905\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0d5495e8-601b-4264-8e11-cf7290be1873>\",\"WARC-Concurrent-To\":\"<urn:uuid:a4ea828c-cc8a-4ba9-8d34-82e0282d957b>\",\"WARC-IP-Address\":\"104.17.33.82\",\"WARC-Target-URI\":\"https://docs.monai.io/en/stable/_modules/monai/apps/pathology/transforms/spatial/dictionary.html\",\"WARC-Payload-Digest\":\"sha1:2LJ53EQ4DXQ7UTTIGZ76YLXC7IJT22B5\",\"WARC-Block-Digest\":\"sha1:QNNXGGXMXLRVL34IWMVU3ODOXDN34XZI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103920118.49_warc_CC-MAIN-20220701034437-20220701064437-00405.warc.gz\"}"}
https://www.hackmath.net/en/math-problem/2096
[ "Tank 9\n\nThe tank with volume V is filled with one pump for three hours and by second pump for 5 hours. When both pumps will run simultaneously calculate:\n\na) how much of the total volume of the tank is filled in one hour\nb) for how long is the tank full\n\nResult\n\na =  0.53\nb =  1.875 h\n\nSolution:\n\n1*(1/3+1/5) = a\nb(1/3+1/5)=1\n\n15a = 8\n8b = 15\n\na = 815 ≈ 0.533333\nb = 158 = 1.875\n\nCalculated by our linear equations calculator.\n\nLeave us a comment of this math problem and its solution (i.e. if it is still somewhat unclear...):", null, "Be the first to comment!", null, "To solve this verbal math problem are needed these knowledge from mathematics:\n\nDo you have a linear equation or system of equations and looking for its solution? Or do you have quadratic equation?\n\nNext similar math problems:\n\n1. Viju", null, "viju has 40 chickens and rabbits. If in all there are 90 legs. How many rabbits are there with viju??\n2. Jane plants", null, "Jane plants flowers in the garden. If she planted 12 every hour instead of 9 flowers, she would finish with the job an hour earlier. How many flowers does she plant?\n3. Hectoliters of water", null, "The pool has a total of 126 hectoliters of water. The first pump draws 2.1 liters of water per second. A second pump pumps 3.5 liters of water per second. How long will it take both pumps to drain four-fifths of the water at the same time?\n4. Three figures - numbers", null, "The sum of three numbers, if each is 10% larger than the previous one, is 662. Determine the figures.\n5. Family parcels", null, "In father will he divided the land so that the older son had three bigger part than younger son. Later elder son gave 2.5 ha field to younger and they had both the same. Determine the area of family parcel.\n6. Three brothers", null, "The three brothers have a total of 42 years. Jan is five years younger than Peter and Peter is 2 years younger than Michael. How many years has each of them?\n7. Two numbers", null, "We have two numbers. Their sum is 140. One-fifth of the first number is equal to half the second number. Determine those unknown numbers.\n8. The larger", null, "The larger of two numbers is nine more than four times the smaller number. The sum of the two numbers is fifty-nine. Find the two numbers.\n9. Trees", null, "Along the road were planted 250 trees of two types. Cherry for 60 CZK apiece and apple 50 CZK apiece. The entire plantation cost 12,800 CZK. How many was cherries and apples?\n10. Six years", null, "In six years Jan will be twice as old as he was six years ago. How old is he?\n11. ATC camp", null, "The owner of the campsite offers 79 places in 22 cabins. How many of them are triple and quadruple?\n12. Theatro", null, "Theatrical performance was attended by 480 spectators. Women were in the audience 40 more than men and children 60 less than half of adult spectators. How many men, women and children attended a theater performance?\n13. Equations - simple", null, "Solve system of linear equations: x-2y=6 3x+2y=4\n14. Substitution", null, "solve equations by substitution: x+y= 11 y=5x-25\n15. Equations", null, "Solve following system of equations: 6(x+7)+4(y-5)=12 2(x+y)-3(-2x+4y)=-44\n16. The dormitory", null, "The dormitory accommodates 150 pupils in 42 rooms, some of which are triple and some are quadruple. Determine how many rooms are triple and how many quadruples.\n17. Blackberries", null, "Daniel, Jolana and Stano collected together 34 blackberries. 
Daniel collected 8 blackberries more than Jolana, and Jolana 4 more than Stano. Determine the number of blackberries each collected." ]
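As a quick check of the Tank 9 solution at the top of this page (not part of the original page), the combined filling rate and both answers can be verified with Python's fractions module.

```python
from fractions import Fraction

# Pump 1 fills the tank in 3 hours, pump 2 in 5 hours.
rate = Fraction(1, 3) + Fraction(1, 5)   # fraction of the tank filled per hour by both pumps
a = rate                                  # part a): volume filled in one hour
b = 1 / rate                              # part b): hours needed to fill the whole tank

print(a, float(a))   # 8/15 0.5333...
print(b, float(b))   # 15/8 1.875
```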
[ null, "https://www.hackmath.net/hashover/images/first-comment.png", null, "https://www.hackmath.net/hashover/images/avatar.png", null, "https://www.hackmath.net/thumb/34/t_4134.jpg", null, "https://www.hackmath.net/thumb/59/t_7959.jpg", null, "https://www.hackmath.net/thumb/50/t_7850.jpg", null, "https://www.hackmath.net/thumb/68/t_2968.jpg", null, "https://www.hackmath.net/thumb/69/t_1869.jpg", null, "https://www.hackmath.net/thumb/90/t_6990.jpg", null, "https://www.hackmath.net/thumb/65/t_2665.jpg", null, "https://www.hackmath.net/thumb/79/t_6779.jpg", null, "https://www.hackmath.net/thumb/28/t_4728.jpg", null, "https://www.hackmath.net/thumb/95/t_4995.jpg", null, "https://www.hackmath.net/thumb/44/t_3644.jpg", null, "https://www.hackmath.net/thumb/67/t_1667.jpg", null, "https://www.hackmath.net/thumb/81/t_6881.jpg", null, "https://www.hackmath.net/thumb/18/t_5518.jpg", null, "https://www.hackmath.net/thumb/47/t_1447.jpg", null, "https://www.hackmath.net/thumb/78/t_7978.jpg", null, "https://www.hackmath.net/thumb/10/t_3710.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9478936,"math_prob":0.98651314,"size":3210,"snap":"2019-43-2019-47","text_gpt3_token_len":813,"char_repetition_ratio":0.11072988,"word_repetition_ratio":0.072164945,"special_character_ratio":0.24672897,"punctuation_ratio":0.102874435,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9909783,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-16T09:37:45Z\",\"WARC-Record-ID\":\"<urn:uuid:66f49ec5-1b6c-4919-9dcb-3c652d9b3a80>\",\"Content-Length\":\"22028\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d72fbeb4-7e96-4cc2-bf91-6348d61ed426>\",\"WARC-Concurrent-To\":\"<urn:uuid:922c8314-e7a8-48eb-bb33-c92eec2e4526>\",\"WARC-IP-Address\":\"104.24.104.91\",\"WARC-Target-URI\":\"https://www.hackmath.net/en/math-problem/2096\",\"WARC-Payload-Digest\":\"sha1:EOBIHM2IFVU7VADPXH5BJDIZMQVCZOVA\",\"WARC-Block-Digest\":\"sha1:WGUV4W4NRR2JCB2B543OOABV6DUSYKP5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986666959.47_warc_CC-MAIN-20191016090425-20191016113925-00538.warc.gz\"}"}
https://geodorable.com/tag/trigonometric/
[ "## Trigonometric Functions Worksheet\n\nTrigonometric Functions Worksheet. Plus each one comes with an answer key. A really great activity for allowing students to understand the concepts of the solving trigonometric.", null, "14 Best Images of Basic Trigonometry Worksheet Trig Equations Worksheet from byveera.blogspot.com\n\nTrigonometric functions interactive worksheet on trigonometric functions for 11th standard id: Trigonometric functions the cosine and sine functions can be de ned geometrically by the co. Ambiguous case of the law of sines.\n\n## Trigonometric Ratios Worksheet\n\nTrigonometric Ratios Worksheet. In this guide, we will cover: For instance, you will be asked to provide the overall area of the triangle for yourself.\n\nFinding height using trigonometric ratios. Only three trigonometric ratios are sine cosine tangent. Whether you want a homework, some cover work, or a lovely bit of extra practise, this is the place for you.\n\n## Trigonometric Ratios Worksheet 2 Answers\n\nTrigonometric Ratios Worksheet 2 Answers. Equations using reciprocal trig functions doc maths higher decimal places math worksheets. Pin on geometry resources what is sin 1.\n\nSo, if you wish to receive all. A = =(4)(2) sin 45 a= (2) (3) sin. 1) 2.7 m 2) 34.7° 3) 5.3 m 4) 8.7 m 5) 32.2°.\n\n## Trigonometric Functions Worksheet With Answers\n\nTrigonometric Functions Worksheet With Answers. 19 rows printable trigonometry worksheets with answers. Plus each one comes with an answer key.\n\nRight triangle trigonometry she loves math trigonometry right triangle love. At cazoom maths we provide a. After it finishes the last column of the present row, checking continues with the first column of the following row.\n\n## Trigonometric Functions On The Unit Circle Worksheet Answers\n\nTrigonometric Functions On The Unit Circle Worksheet Answers. The unit circle the two historical perspectives of trigonometry incorporate different methods for introducing the trigonometric functions. July corresponds to t = 7." ]
[ null, "https://i2.wp.com/www.worksheeto.com/postpic/2013/02/right-triangle-trig-ratios-worksheet_210476.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8249503,"math_prob":0.9739279,"size":2144,"snap":"2022-27-2022-33","text_gpt3_token_len":500,"char_repetition_ratio":0.19953272,"word_repetition_ratio":0.06128134,"special_character_ratio":0.21921642,"punctuation_ratio":0.13399504,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9955313,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-25T07:50:44Z\",\"WARC-Record-ID\":\"<urn:uuid:0345cbd0-3bf6-4487-aef3-d08e76d806e3>\",\"Content-Length\":\"61267\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ba92894e-19f6-4cce-8313-999800db2f29>\",\"WARC-Concurrent-To\":\"<urn:uuid:8170ed60-695a-4802-9200-6380f765a9e3>\",\"WARC-IP-Address\":\"172.67.147.40\",\"WARC-Target-URI\":\"https://geodorable.com/tag/trigonometric/\",\"WARC-Payload-Digest\":\"sha1:ZTUDD4JXD5EYTMP4PQJMNCEOKQ2UOAED\",\"WARC-Block-Digest\":\"sha1:E2Y2G3MGGUZUDASKM2YTPFSMXP6QHUUP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103034877.9_warc_CC-MAIN-20220625065404-20220625095404-00572.warc.gz\"}"}
https://www.codurance.com/publications/2019/01/14/active-pattern
[ "# Active Pattern\n\n14 Jan 2019", null, "## Jorge Gueorguiev Garcia\n\nSee author's bio and posts\n\nLast week I was pointed by someone to Active Patterns in F#. And it has been quite an interesting discovery.\n\nActive Patterns are used on F# to partition data. Those partitions then can be used with pattern matching. Microsoft's webpage compares them to discriminated unions.\n\nHow they can be used? Well, you could look at the above link, or just follow this post (or, in fact, do both).\n\nBelow is an initial version of a trivial function using pattern matching.\n\n``````let thisNumberTrait(number) =\nmatch number with\n| x when x = 0 -> \"Is Zero!!\"\n| x when x > 0 -> \"Is Positive!!\"\n| x when x < 0 -> \"Is Negative!!\"\n| _ -> \"I shouldn't be here\"\n``````\n\nA first improvement of the above code would be to put the logic on the guards behind functions, so we can arrive to this:\n\n``````let isZero(number) = number = 0\nlet isPositive(number) = number > 0\nlet isNegative(number) = number < 0\n\nlet thisNumberTrait(number) =\nmatch number with\n| x when isZero(x) -> \"Is Zero!!\"\n| x when isPositive(x) -> \"Is Positive!!\"\n| x when isNegative(x) -> \"Is Negative!!\"\n| _ -> \"I shouldn't be here\"\n``````\n\nBut what if we could remove the guard clauses completely? That is where Active Patterns come into play.\n\n``````let (|Zero|_|) (number) = if number = 0 then Some(number) else None\nlet (|Positive|_|) (number) = if number > 0 then Some(number) else None\nlet (|Negative|_|) (number) = if number < 0 then Some(number) else None\n\nlet thisNumberTrait(number) =\nmatch number with\n| Zero(x) -> \"Is Zero!!\"\n| Positive(x) -> \"Is Positive!!\"\n| Negative(x) -> \"Is Negative!!\"\n| _ -> \"I shouldn't be here\"\n``````\n\nAs you can see there are a few differences on the code. If we concentrate on the logic functions, we can see that now the name of the function has been changed for the construct `(| | |)`, and that now is it returning an Option type. Then, the pattern matching has replaced the `x when ...` code with the new Pattern `Zero(x)`. We complicate a bit the functions, we simplify the pattern matching.\n\nIf we look at what is the answer that we are providing, we realize that we are not using the number at all, so in fact we could change the line\n\n``````| Zero(x) -> \"Is Zero!!\"\n``````\n\nfor\n\n``````| Zero(_) -> \"Is Zero!!\"\n``````\n\nThe `x` is not the data that we pass to the Pattern, is the data that is returned! We pass the data implicilty. But wait, if I don't use the return data, do I need to return it at all? And the answer is no. So now we replace\n\n``````let (|Zero|_|) (number) = if number = 0 then Some(number) else None\n``````\n\nwith\n\n``````let (|Zero|_|) (number) = if number = 0 then Some(Zero) else None\n``````\n\nWe return the name of the pattern, instead of the data.\n\nWhich now means that on the match we can do the change from\n\n``````| Zero(_) -> \"Is Zero!!\"\n``````\n\nto\n\n``````| Zero -> \"Is Zero!!\"\n``````\n\nSo far we have returned the same data that we have passed, and not returned anything. But what if we want to return data of a different type? 
That is possible: look below, where `(|Positive|_|)` now returns a string.\n\n``````let (|Zero|_|) (number) = if number = 0 then Some(Zero) else None\nlet (|Positive|_|) (number) = if number > 0 then Some(\"Positive\") else None\nlet (|Negative|_|) (number) = if number < 0 then Some(number) else None\n\nlet thisNumberTrait(number) =\nmatch number with\n| Zero -> \"Is Zero!!\"\n| Positive(x) -> sprintf \"Is %s!!\" x\n| Negative(_) -> \"Is Negative!!\"\n| _ -> \"I shouldn't be here\"\n``````\n\nSo far I have been using Partial Patterns. That is, the data might not fall into any of the partitions. That is why there is an underscore (`(|Zero|_|)`) and why we are returning an Option (Some|None). But we could have a full Active Pattern, where the data must fall inside one of the partitions. The changes are easy, just as below:\n\n``````let (|Zero|_|) (number) = if number = 0 then Some(Zero) else None\n``````\n\nto\n\n``````let (|Zero|NonZero|) (number) = if number = 0 then Zero else NonZero\n``````\n\nPatterns can be combined using `&` for the and combination, and `|` for the or combination. So instead of\n\n``````| Positive(x) -> sprintf \"Is %s!!\" x\n``````\n\nwe could write\n\n``````| NonZero & Positive(x) -> sprintf \"Is %s!!\" x\n``````\n\nNot that it makes any difference in this case, but it is a possibility.\n\nFinally, we could make a single big pattern, replacing\n\n``````let (|Zero|_|) (number) = if number = 0 then Some(Zero) else None\nlet (|Positive|_|) (number) = if number > 0 then Some(Positive) else None\nlet (|Negative|_|) (number) = if number < 0 then Some(Negative) else None\n``````\n\nwith\n\n``````let (|Zero|Positive|Negative|) (number) = if number = 0 then Zero elif number > 0 then Positive else Negative\n``````\n\nWhich in this case looks a bit pointless to me, as I'm nearly back to the original code. It will have its uses, though.\n\nActive Patterns are an interesting construct, especially when the same Active Pattern is used multiple times." ]
[ null, "https://www.codurance.com/hubfs/Codurance_September2020/images/jorge_gueorguiev.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.849496,"math_prob":0.9925396,"size":4586,"snap":"2023-40-2023-50","text_gpt3_token_len":1247,"char_repetition_ratio":0.18245308,"word_repetition_ratio":0.2175981,"special_character_ratio":0.29241168,"punctuation_ratio":0.10789766,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9989717,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-04T10:29:48Z\",\"WARC-Record-ID\":\"<urn:uuid:7cc15d7d-2540-4539-bc3c-2b200e876783>\",\"Content-Length\":\"86313\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4a0ffc51-d90b-4010-9790-ec5e8ce5d0ad>\",\"WARC-Concurrent-To\":\"<urn:uuid:edd5fc15-a8b0-40df-9e55-4694b0b67435>\",\"WARC-IP-Address\":\"199.60.103.31\",\"WARC-Target-URI\":\"https://www.codurance.com/publications/2019/01/14/active-pattern\",\"WARC-Payload-Digest\":\"sha1:PTERLVWWORH4YYRGFT6XCDIM6YSNF7D3\",\"WARC-Block-Digest\":\"sha1:FAWW675JQA4RCBSCUVO7ZBLVW5ESN47C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511364.23_warc_CC-MAIN-20231004084230-20231004114230-00803.warc.gz\"}"}
https://madhavamathcompetition.com/2020/04/24/iii-tutorial-problems-symmetric-and-alternating-functions-rmo-and-iitjee-math/?shared=email&msg=fail
[ "III. Tutorial problems. Symmetric and alternating functions. RMO and IITJEE math\n\n1. Simplify:", null, "$(b^{-1}+c^{1})(b+c-a)+(c^{-1}+a^{-1})(c+a-b)+(a^{-1}+b^{-1})(a+b=c)$\n2. Simplify:", null, "$\\frac{(x-b)(x-c)}{(a-b)(a-c)} + \\frac{(x-c)(x-a)}{(b-c)(b-a)} + \\frac{(x-a)(x-b)}{(c-a)(c-b)}$\n3. Simplify:", null, "$\\frac{b^{2}+c^{2}-a^{2}}{(a-b)(a-c)} + \\frac{c^{2}+a^{2}-b^{2}}{(b-c)(b-a)} + \\frac{a^{2}+b^{2}-c^{2}}{(c-a)(c-b)}$\n4. Simplify:", null, "$\\frac{b-c}{1+bc} + \\frac{c-a}{1+ca} + \\frac{a-b}{1+ab}$\n5. Simplify:", null, "$\\frac{a(b-c)}{1+bc} + \\frac{b(c-a)}{1+ca} + \\frac{c(a-b)}{1+ab}$\n6. Factorize:", null, "$(b-c)^{2}(b+c-2a)+(c-a)^{2}(c+a-2b)+(a-b)^{2}(a+b-2c)$. Put", null, "$b-c=x, c-a=y, a-b=z$and", null, "$b+c-2a=y-z$\n7. Factorize:", null, "$8(a+b+c)^{2}-(b+c)^{2}-(c+a)^{2}-(a+b)^{2}$. Put", null, "$b+c=x, c+a=y, a+b=z$.\n8. Factorize:", null, "$(a+b+c)^{2}-(b+c-a)^{2}-(c+a-b)^{2}+(a+b-c)^{2}$\n9. Factorize:", null, "$(1-a^{2})(1-b^{2})(1-c^{2})+(a-bc)(b-ac)(c-ab)$\n10. Express the following substitutions as the product of transpositions: (i)", null, "$\\left(\\begin{array}{cccccc}123456\\\\654321\\end{array}\\right)$ (ii)", null, "$\\left(\\begin{array}{cccccc}123456\\\\246135\\end{array}\\right)$ (iii)", null, "$\\left(\\begin{array}{cccccc}123456\\\\641235\\end{array}\\right)$\n\nRegards,\n\nNalin Pithwa.\n\nThis site uses Akismet to reduce spam. Learn how your comment data is processed." ]
[ null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6995468,"math_prob":1.0000085,"size":493,"snap":"2022-05-2022-21","text_gpt3_token_len":121,"char_repetition_ratio":0.16768916,"word_repetition_ratio":0.0,"special_character_ratio":0.18661258,"punctuation_ratio":0.2804878,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.000005,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-29T01:42:21Z\",\"WARC-Record-ID\":\"<urn:uuid:64696431-bebd-4896-8e90-9c371c366a31>\",\"Content-Length\":\"86512\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:80f2d10f-932b-449a-b7cd-40ff9ee2e836>\",\"WARC-Concurrent-To\":\"<urn:uuid:aa7b6c4a-3f92-49bb-9bca-62f03f09c816>\",\"WARC-IP-Address\":\"192.0.78.24\",\"WARC-Target-URI\":\"https://madhavamathcompetition.com/2020/04/24/iii-tutorial-problems-symmetric-and-alternating-functions-rmo-and-iitjee-math/?shared=email&msg=fail\",\"WARC-Payload-Digest\":\"sha1:WZ2OZIV2IK4XWHB3WPCFIOTH4OKIOJUH\",\"WARC-Block-Digest\":\"sha1:AEDHAZZ2XOW5HRZOQY2AD6HLN75JM5BT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320299894.32_warc_CC-MAIN-20220129002459-20220129032459-00538.warc.gz\"}"}
https://votetinadavis.com/valentine-math-worksheets.html
[ "# Valentine Math Worksheets", null, "### Free Valentine S Day Math Worksheet Made By Teachers Valentine Math Worksheet Math Valentines Valentine Worksheets", null, "### Pin By Super Teacher Worksheets On Holidays Super Teacher Worksheets Math Valentines Valentines School Math Coloring", null, "### Valentine S Day Math Activities Kindergarten Distance Learning Kindergarten Valentines Kindergarten Math Math Valentines", null, "### Math Printables For Valentine S Day Freebie Second Grade Common Core Math Valentines Math Printables Second Grade Math", null, "### Happy Valentine S Day This Is A Free Valentine S Day Addition Worksheet Just Download And Pr Valentine Worksheets Valentines School Valentines Day Activities", null, "### Valentine S Day Math And Literacy Centers With Printable Worksheets And A Math Worksheet Freebie Kindergarten Smarts Kindergarten Valentines Math Valentines Valentine Math Worksheet", null, "### These Valentines Day worksheets for addition in printable PDF format will sweeten up your math practice for February.\n\nValentine math worksheets. Includes a word list puzzles and activities. Here is a fun way to practice counting to twenty with your kindergarten students. Learn about Washington and Lincoln with these printable Presidents Day worksheets and activities.\n\nValentines Day Themed Math Pages Always remember that grade levels are not absolutes -- especially since were all living in different countries and therefore have different curriculums. Valentines Day Worksheets For 2nd Grade energiapihiinfo 211579. There are also a collection of simple math exercises with fun Valentines Day themes.\n\nValentines day math worksheets for kindergarten. 1292018 Then again the fact that Valentines Day stuff appears in stores on December 26th makes it feel like its coming up faster than ever. Print leprechaun crafts puzzles and math worksheets.\n\nMeasurements can be a difficult concept for kids to grasp. From simple addition and dot-to-dot activities to multi-step mixed operation word problems these worksheets are a fun way to help your students math skills bloom this Valentines Day. Valentines Day Fractions Worksheets.\n\nThese worksheets cover some basic topics that you might want to review the week or day of Valentines. -Cut and Paste matching the tens frames to the correct numb. Youll find a total of four free printable worksheets in this set.\n\nValentines Day counting worksheets. Valentine039s Day worksheets get your kid to practice math and more with heart-filled worksheets. 
Free Valentine Coordinate Grid Picture Valentines Day Math.", null, "### Fun Valentine S Day Kindergarten Math Worksheets Free Printable Counting Worksheets For Kindergarten Valentine Worksheets Numbers Kindergarten Math Valentines", null, "### 300 Free Valentine Math Worksheets For Kids Igamemom Math Valentines Valentine Math Worksheet Valentine Worksheets", null, "", null, "### Valentine S Math Kindergarten Worksheets Mess For Less Valentine Math Worksheet Valentine Worksheets Math Valentines", null, "### Valentine S Day Missing Numbers To 20 Made By Teachers Math Valentines Valentine Math Worksheet Kindergarten Valentines", null, "### Valentine S Day Worksheets Math And Literacy First Grade Preschool Valentines Worksheets Valentines Day Words Valentine Math Worksheet", null, "### Valentine S Day Number Patterns Free Worksheet Valentine Math Worksheet Pattern Worksheet Kindergarten Worksheets", null, "### Valentines Worksheets Best Coloring Pages For Kids Valentine Worksheets Valentine Math Worksheet Math Valentines", null, "### Valentine S Day Math And Literacy Centers With Printable Worksheets And A Math Worksheet Freebie Kindergarten Smarts Math Valentines Kindergarten Valentines Valentine Math Worksheet", null, "### Valentine S Day Math Activities Kindergarten Distance Learning Valentine Math Worksheet Math Valentines Kindergarten Math Worksheets", null, "### Valentine Math Kindergarten Math Worksheets Kindergarten Math Kindergarten Valentines", null, "### Valentine S Day Math Worksheets Kids Math Worksheets Math Math Worksheets", null, "", null, "### Valentine S Day Simple Addition Worksheet Education Com Valentine Math Worksheet Kindergarten Addition Worksheets Math Valentines", null, "### Valentine S Day Worksheets Math And Literacy First Grade Preschool Valentines Worksheets Valentine Math Worksheet Valentine Worksheets", null, "### Valentine S Day Kindergarten Math Worksheets Math Valentines Math Freebie Kindergarten Math Worksheets", null, "### Math Printables For Valentine S Day Freebie Second Grade C Math Valentines Second Grade Math Math Printables", null, "### Valentine S Day Kindergarten Math Worksheets Valentine Math Worksheet Math Valentines Kindergarten Valentines", null, "### Valentine S Day Kindergarten Math Worksheets Kindergarten Valentines Kindergarten Math Worksheets Preschool Valentines", null, "### Valentine S Day Free Valentine S Math Puzzles Valentines Day Activities Math Valentines Valentines School", null, "### Valentine S Day Math Literacy Worksheets Activities No Prep Kindergarten Valentines Christmas Math Math Literacy", null, "### Valentine Math Cryptogram Worksheet Education Com Math Valentines Valentine Math Activities Valentine Math Worksheet", null, "### Valentine S Day Math Numbers Freebie Valentine Math Activities Valentine Math Worksheet Kindergarten Valentines", null, "### Valentine S Day Math Worksheets Valentine S Day Activities Printable Digital Math Valentines Valentine Math Activities Valentine Math Worksheet", null, "### Valentine S Day Kindergarten Math Worksheets Math Valentines Valentine Math Worksheet Kindergarten Valentines", null, "### Free Printable Valentine S Day Kindergarten Worksheets Bundle Valentine Worksheets Kindergarten Valentines Valentine Math Worksheet", null, "### Heart Math Activity Heart Themed Number Bonds Worksheets Number Bonds Early Learning Math Free Math Printables", null, "### The Art Of Teaching A Kindergarten Blog Valentine Math Worksheets Free Math Valentines Valentine Math Worksheet Valentine Worksheets", 
null, "### Valentine S Day Math Worksheet Education Com Valentine Math Worksheet Math Valentines Math Facts", null, "### Free Valentine S Day Worksheets 123 Kids Fun Apps Valentine Worksheets Valentine Math Worksheet Printables Free Kids", null, "### Valentine S Day Math Simple Addition Worksheet Addition Worksheets Kids Math Worksheets Valentine Worksheets", null, "### Valentine S Day Color By Number Multiplication Worksheets Valentine Worksheets Multiplication Worksheets Valentines Multiplication", null, "### Valentine S Day Printable Math Grouping Worksheet Woo Jr Kids Activities Math Valentines Valentine Math Worksheet Valentine Worksheets", null, "### Valentine Math Center Dice Games For K 2nd Special Education Home Schooled Children Resource Includes Va Math Valentines Kids Math Worksheets Math Centers", null, "### Valentine S Math Kindergarten Worksheets Math Valentines Valentine Math Kindergarten Valentine Worksheets", null, "### Reinforce Counting Practice One To Ten Math Concepts Such As Same Different Larger Smaller Tal Valentine Math Preschool Math Valentines Valentine Worksheets", null, "### Valentine S Day Kindergarten Math Worksheets Math Valentines Kindergarten Valentines Valentine Math Worksheet", null, "### Valentine S Worksheets In 2020 Valentine Worksheets Math Valentines Kindergarten Valentines", null, "### Valentine Multiplication Worksheets Math Kids And Chaos Valentines Multiplication Math Valentines Valentine Math Activities", null, "### Addition And Subtraction Valentine S Day Math Sheets For First Grade And Kindergarten Includes 5 Diff Math Valentines Math Kindergarten Subtraction Worksheets", null, "### Valentine S Day Math Worksheets Valentine S Day Activities Printable Digital Math Practice Worksheets Math Valentines Money Math Worksheets\n\nSource : pinterest.com" ]
[ null, "https://i.pinimg.com/originals/a3/86/d1/a386d1fcfb80809e66515462f5eace02.png", null, "https://i.pinimg.com/originals/4a/5c/8d/4a5c8dc21b7d1636d093f477e8452d17.jpg", null, "https://i.pinimg.com/originals/3e/df/7b/3edf7bad5d2c730b8a2d4dafc22795f8.jpg", null, "https://i.pinimg.com/originals/28/89/59/2889598125a4b3236259aa49c33889a7.jpg", null, "https://i.pinimg.com/originals/1a/cf/4e/1acf4e02167c7b10764c3f8bd0ae1cf8.jpg", null, "https://i.pinimg.com/originals/30/d8/95/30d89536b9d45313361356b0ec1b11d9.jpg", null, "https://i.pinimg.com/736x/06/95/57/069557d103ea1eb3205ac82406dd717a.jpg", null, "https://i.pinimg.com/originals/10/09/64/1009649a2ec2b6a31694d2b4edd418d1.png", null, "https://i.pinimg.com/170x/1c/eb/8f/1ceb8fbd3c64f2d54493558674730db0--math-worksheets-for-kids-printable-worksheets.jpg", null, "https://i.pinimg.com/originals/f0/10/c5/f010c5136b56232faada845fe881b2f0.png", null, "https://i.pinimg.com/originals/5a/a8/a8/5aa8a88c1c188a81fc8331f4d453186b.png", null, "https://i.pinimg.com/originals/04/86/4c/04864cbae6569bdb69cb1f3daa2cacf7.png", null, "https://i.pinimg.com/originals/aa/2b/1c/aa2b1c3ca344d4e60078d5c660e2cb7c.jpg", null, "https://i.pinimg.com/originals/32/8a/d7/328ad78abd3c6be2b53b0da2496dbdc6.jpg", null, "https://i.pinimg.com/originals/9b/12/26/9b12263a4dcdbc379fa8e789816c1c53.jpg", null, "https://i.pinimg.com/originals/b8/49/87/b849872dd2cc21835264f75b624c86c0.jpg", null, "https://i.pinimg.com/originals/fd/a1/44/fda144f7e4601406b56b9c3670e884bc.jpg", null, "https://i.pinimg.com/originals/a6/e6/e3/a6e6e3fc7ba1ebf5f1b89e297be5ee4b.jpg", null, "https://i.pinimg.com/originals/6c/b3/39/6cb339f83855b5be6c7d26866092f3bc.jpg", null, "https://i.pinimg.com/originals/8a/c0/52/8ac052aef8a9350bd45a85015493b9e0.png", null, "https://i.pinimg.com/originals/6c/32/0d/6c320dc63de5cef0b58216c8caa14498.gif", null, "https://i.pinimg.com/originals/9d/06/0b/9d060b8f6381c6f1cda5cc5cd6878bce.jpg", null, "https://i.pinimg.com/originals/08/c9/30/08c9304939d75dd75a204de6e0aebeea.png", null, "https://i.pinimg.com/originals/f7/69/e1/f769e1d42492375f7fc774aab88d9e0f.jpg", null, "https://i.pinimg.com/originals/39/04/ab/3904ab9ac560f7b83f34489ca003a365.jpg", null, "https://i.pinimg.com/originals/30/8e/3b/308e3b795f9dd4709756f89a4d2c8d1f.jpg", null, "https://i.pinimg.com/474x/70/4a/45/704a4525d010e176bdcbed2b2fc2aa9d.jpg", null, "https://i.pinimg.com/originals/d6/99/3c/d6993c77b9362bf99755ca0b37467535.jpg", null, "https://i.pinimg.com/originals/c9/b5/06/c9b506219819d1ab094e26717d9a5f1f.gif", null, "https://i.pinimg.com/736x/54/ae/f5/54aef53407bb4b94dc2bcf9418729e17.jpg", null, "https://i.pinimg.com/originals/ed/54/14/ed5414ac6cabfb02904f82b119507fc6.jpg", null, "https://i.pinimg.com/originals/48/72/79/48727994b77e794d42e9cc564536e032.jpg", null, "https://i.pinimg.com/originals/0b/74/ee/0b74ee8b76170113353380c97da5603a.jpg", null, "https://i.pinimg.com/originals/54/98/65/54986508b430ef0e9ca6311cd2d23cbc.gif", null, "https://i.pinimg.com/originals/2c/24/d8/2c24d83fd7538c8556b76542f4505cd5.jpg", null, "https://i.pinimg.com/originals/ea/46/6f/ea466f61f775f1754d7e0720f2f9404b.gif", null, "https://i.pinimg.com/736x/fd/ad/37/fdad3774dc7262f7d6eebb1a5d6b2684.jpg", null, "https://i.pinimg.com/originals/2b/3d/a8/2b3da8387cf718e45223c1deff136a85.jpg", null, "https://i.pinimg.com/originals/9c/5a/5a/9c5a5a5c1f1059016b665d63bcf3b151.png", null, "https://i.pinimg.com/originals/20/1a/a0/201aa05e94703527935f965a4a75c619.gif", null, "https://i.pinimg.com/736x/06/95/57/069557d103ea1eb3205ac82406dd717a.jpg", null, 
"https://i.pinimg.com/originals/0c/42/8d/0c428d51783b3d3c0d78d37b4bc4d327.jpg", null, "https://i.pinimg.com/originals/01/c2/cc/01c2cc0eb05f56ac58a8301a74ff0664.jpg", null, "https://i.pinimg.com/originals/a4/b8/96/a4b896bd7fb94239bd7bb05bd09668a0.jpg", null, "https://i.pinimg.com/originals/f5/ce/fb/f5cefb07e699b12fc1229667c0fbc2be.jpg", null, "https://i.pinimg.com/736x/74/46/bd/7446bde42375825fea2f960e7f664377.jpg", null, "https://i.pinimg.com/originals/1b/97/ae/1b97ae8085bb21caed23d4be94518a5b.jpg", null, "https://i.pinimg.com/736x/fb/7e/12/fb7e1209cf5c87924dacca6ce9129e30.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78530383,"math_prob":0.7561106,"size":2486,"snap":"2021-21-2021-25","text_gpt3_token_len":464,"char_repetition_ratio":0.24335213,"word_repetition_ratio":0.0,"special_character_ratio":0.16532582,"punctuation_ratio":0.051413883,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9535812,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96],"im_url_duplicate_count":[null,1,null,null,null,4,null,2,null,1,null,1,null,2,null,5,null,1,null,3,null,1,null,3,null,1,null,null,null,1,null,1,null,4,null,1,null,1,null,4,null,1,null,1,null,1,null,2,null,3,null,1,null,1,null,1,null,2,null,1,null,1,null,1,null,1,null,1,null,2,null,1,null,1,null,1,null,2,null,1,null,2,null,1,null,3,null,2,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-09T13:31:51Z\",\"WARC-Record-ID\":\"<urn:uuid:f3837163-5581-454f-8227-f90afa72d10d>\",\"Content-Length\":\"75673\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0cb68bef-f997-4ad9-84d6-52511ca132ee>\",\"WARC-Concurrent-To\":\"<urn:uuid:9b14d7b3-4199-4864-9186-d36ab28bba4d>\",\"WARC-IP-Address\":\"104.21.65.239\",\"WARC-Target-URI\":\"https://votetinadavis.com/valentine-math-worksheets.html\",\"WARC-Payload-Digest\":\"sha1:O6D2377W2PZANCCPVW7IRJDSY2PVWUOO\",\"WARC-Block-Digest\":\"sha1:RU2BMOCIP4YZYP5X6TZDQEMGGBIVGNDV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988986.98_warc_CC-MAIN-20210509122756-20210509152756-00417.warc.gz\"}"}
https://codereview.stackexchange.com/questions/196034/slow-flask-sqlalchemy-query-using-association-tables
[ "# Slow Flask-SQLAlchemy query using association tables\n\nI have two models in Flask-SQLAlchemy (Post and Comment) that have many-to-many relationship that is manifested in the third model (post_mentions):\n\npost_mentions = db.Table(\n'post_mentions',\ndb.Column('post_id', db.Integer, db.ForeignKey('posts.id'), primary_key=True),\n)\n\nclass Post(db.Model):\n__tablename__ = 'posts'\n\nid = db.Column(db.Integer, primary_key=True)\nname = db.Column(db.String, unique=True, nullable=False)\nmentions = db.relationship('Comment', secondary=post_mentions, lazy='dynamic')\n\ndef __eq__(self, other):\nreturn self.name.lower() == other.name.lower()\n\ndef __hash__(self):\nreturn hash(self.name.lower())\n\nclass Comment(db.Model):\n\nid = db.Column(db.Integer, primary_key=True)\ntext = db.Column(db.Text, nullable=False)\ncreated_at = db.Column(db.Integer, nullable=False)\n\n\nThere is also a /posts endpoint that triggers the following query:\n\n# flask and other imports\n\[email protected]('/posts')\ndef posts():\npage_num = request.args.get('page', 1)\nposts = models.Post.query.join(models.post_mentions)\\\n.group_by(models.post_mentions.columns.post_id)\\\n.order_by(func.count(models.post_mentions.columns.post_id).desc())\\\n.paginate(page=int(page_num), per_page=25)\nreturn render_template('posts.html', posts=posts)\n\n\nThere are more than 14k+ posts and 32k+ comments stored in SQLite database. As you can see from the snippet above, when someone hits /posts endpoint, SQLAlchemy loads all data at once to the memory and then subsequent queries (e.g. retrieving posts, comments to that posts, etc..) take sub-millisecond time, since data is being served from the memory without hitting the database. Initial load takes 10s+ on my laptop, which is, to put it mildly, suboptimal.\n\nSo the question is: Considering that users won't view 97+% of posts, how can I both order posts by number of mentions in comments and load them on demand instead of doing it in one swoop?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.72887576,"math_prob":0.76346767,"size":1973,"snap":"2023-40-2023-50","text_gpt3_token_len":448,"char_repetition_ratio":0.13103098,"word_repetition_ratio":0.0,"special_character_ratio":0.25392804,"punctuation_ratio":0.2372449,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98399526,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-25T10:20:00Z\",\"WARC-Record-ID\":\"<urn:uuid:739ff544-ad6b-4530-8458-12c6dae51399>\",\"Content-Length\":\"153114\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b59850a2-10ab-4c11-bec5-0decc440a214>\",\"WARC-Concurrent-To\":\"<urn:uuid:734c96c9-083b-4435-b8a3-925c44824510>\",\"WARC-IP-Address\":\"104.18.10.86\",\"WARC-Target-URI\":\"https://codereview.stackexchange.com/questions/196034/slow-flask-sqlalchemy-query-using-association-tables\",\"WARC-Payload-Digest\":\"sha1:Y4NLUSX4DLHKR4YBLHCY3YIRH2LN5BQA\",\"WARC-Block-Digest\":\"sha1:SN754L2RWXDUE4TSPCPAIVYPHBOZPIMM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233508959.20_warc_CC-MAIN-20230925083430-20230925113430-00155.warc.gz\"}"}
https://edu-answer.com/mathematics/question515178130
[ "", null, ", 15.01.2022 14:20 amandasantiago2001\n\nWhat are the features of the function f(x) = -2 (1/2)^x+ 3 graphed below?", null, "", null, "", null, "Another question on Mathematics", null, "Mathematics, 21.06.2019 14:50\nSimplify 5 square root of 7 end root plus 12 square root of 6 end root minus 10 square root of 7 end root minus 5 square root of 6 . (1 point) 5 square root of 14 end root minus 7 square root of 12 5 square root of 7 end root minus 7 square root of 6 7 square root of 12 end root minus 5 square root of 14 7 square root of 6 end root minus 5 square root of 7", null, "Mathematics, 21.06.2019 15:30\nMary works for a company that ships packages and must measure the size of each box that needs to be shipped. mary measures a box and finds the length is 7 inches, the width is 14 inches, and the height is 15 inches. what is the volume of the box? [type your answer as a number.]", null, "Mathematics, 21.06.2019 19:10\nThe triangles in the diagram are congruent. if mzf = 40°, mza = 80°, and mzg = 60°, what is mzb?", null, "Mathematics, 21.06.2019 19:30\nOkay so i didn't get this problem petro bought 8 tickets to a basketball game he paid a total of \\$200 write an equation to determine whether each ticket cost \\$26 or \\$28 so i didn't get this question so yeahyou have a good day.\nWhat are the features of the function f(x) = -2 (1/2)^x+ 3 graphed below?...\nQuestions", null, "", null, "", null, "", null, "", null, "", null, "Spanish, 13.11.2020 23:20", null, "", null, "", null, "", null, "", null, "Mathematics, 13.11.2020 23:20", null, "", null, "", null, "Mathematics, 13.11.2020 23:20", null, "", null, "", null, "", null, "", null, "Chemistry, 13.11.2020 23:20", null, "Questions on the website: 14531560" ]
[ null, "https://edu-answer.com/tpl/images/cats/mat.png", null, "https://edu-answer.com/tpl/images/cats/User.png", null, "https://edu-answer.com/tpl/images/ask_question.png", null, "https://edu-answer.com/tpl/images/ask_question_mob.png", null, "https://edu-answer.com/tpl/images/cats/mat.png", null, "https://edu-answer.com/tpl/images/cats/mat.png", null, "https://edu-answer.com/tpl/images/cats/mat.png", null, "https://edu-answer.com/tpl/images/cats/mat.png", null, "https://edu-answer.com/tpl/images/cats/mat.png", null, "https://edu-answer.com/tpl/images/cats/himiya.png", null, "https://edu-answer.com/tpl/images/cats/mat.png", null, "https://edu-answer.com/tpl/images/cats/mat.png", null, "https://edu-answer.com/tpl/images/cats/mat.png", null, "https://edu-answer.com/tpl/images/cats/es.png", null, "https://edu-answer.com/tpl/images/cats/en.png", null, "https://edu-answer.com/tpl/images/cats/en.png", null, "https://edu-answer.com/tpl/images/cats/istoriya.png", null, "https://edu-answer.com/tpl/images/cats/istoriya.png", null, "https://edu-answer.com/tpl/images/cats/mat.png", null, "https://edu-answer.com/tpl/images/cats/mat.png", null, "https://edu-answer.com/tpl/images/cats/himiya.png", null, "https://edu-answer.com/tpl/images/cats/mat.png", null, "https://edu-answer.com/tpl/images/cats/istoriya.png", null, "https://edu-answer.com/tpl/images/cats/obshestvoznanie.png", null, "https://edu-answer.com/tpl/images/cats/mat.png", null, "https://edu-answer.com/tpl/images/cats/en.png", null, "https://edu-answer.com/tpl/images/cats/himiya.png", null, "https://edu-answer.com/tpl/images/cats/mat.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82203346,"math_prob":0.9942285,"size":1498,"snap":"2022-05-2022-21","text_gpt3_token_len":511,"char_repetition_ratio":0.18674698,"word_repetition_ratio":0.2189781,"special_character_ratio":0.38317758,"punctuation_ratio":0.1750663,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9947878,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-18T01:26:29Z\",\"WARC-Record-ID\":\"<urn:uuid:41571f07-0b53-40b1-af12-b5beaeb2990b>\",\"Content-Length\":\"71726\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9a8badde-a54b-4a98-addc-fa7dd97226b4>\",\"WARC-Concurrent-To\":\"<urn:uuid:b5e4379b-c96c-4cfa-9b02-d7a94ce3220b>\",\"WARC-IP-Address\":\"104.21.68.106\",\"WARC-Target-URI\":\"https://edu-answer.com/mathematics/question515178130\",\"WARC-Payload-Digest\":\"sha1:RZ2TNPKZP33V7J77IBQZFRLFBAE4EXPF\",\"WARC-Block-Digest\":\"sha1:NYZY7KIPTIFVIMNRBCB2O5GPH7KHTT6I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320300658.84_warc_CC-MAIN-20220118002226-20220118032226-00431.warc.gz\"}"}
https://psychology.wikia.org/wiki/Structural_equation_modeling
[ "34,718 Pages\n\nThis banner appears on articles that are weak and whose contents should be approached with academic caution\n\n.\n\nStructural equation modeling (SEM) is a statistical technique for testing and estimating causal relationships using a combination of statistical data and qualitative causal assumptions. This view of SEM was articulated by the geneticist Sewall Wright (1921), the economists Trygve Haavelmo (1943) and Herbert Simon (1953), and formally defined by Judea Pearl (2000) using a calculus of counterfactuals.\n\nStructural Equation Models (SEM) encourages confirmatory rather than exploratory modeling; thus, it is suited to theory testing rather than theory development. It usually starts with a hypothesis, represents it as a model, operationalises the constructs of interest with a measurement instrument, and tests the model. The causal assumptions embedded in the model often have falsifiable implications which can be tested against the data. With an accepted theory or otherwise confirmed model, SEM can also be used inductively by specifying the model and using data to estimate the values of free parameters. Often the initial hypothesis requires adjustment in light of model evidence, but SEM is rarely used purely for exploration.\n\nAmong its strengths is the ability to model constructs as latent variables (variables which are not measured directly, but are estimated in the model from measured variables which are assumed to 'tap into' the latent variables). This allows the modeler to explicitly capture the unreliability of measurement in the model, which in theory allows the structural relations between latent variables to be accurately estimated. Factor analysis, path analysis and regression all represent special cases of SEM.\n\nIn SEM, the qualitative causal assumptions are represented by the missing variables in each equation, as well as vanishing covariances among some error terms. These assumptions are testable in experimental studies and must be confirmed judgmentally in observational studies.\n\nAn alternative technique for specifying Structural Equation Models using partial least squares path modeling has been implemented in software such as LVPLS (Latent Variable Partial Least Square), PLSGraph, SmartPLS (Ringle et al. 2005) and XLSTAT (Addinsoft, 2008). Some feel this is better suited to data exploration. More ambitiously, The TETRAD project aims to develop a way to automate the search for possible causal models from data.\n\n## Steps in performing SEM analysis\n\n### Model specification\n\nSince SEM is a confirmatory technique, the model must be specified correctly based on the type of analysis that the modeller is attempting to confirm. When building the correct model, the modeller uses two different kind of variables, namely exogenous and endogenous variables. The distinction between these two types of variables is whether the variable regresses on another variable or not. Like in a linear regression the dependent variable (DV) regresses on the independent variable (IV), meaning that the DV is being predicted by the IV. Within SEM modelling this means that the exogenous variable is the variable that another variable regresses on. Exogenous variables can be recognized in a graphical version of the model, as the variables sending out arrowheads, denoting which variable it is predicting. A variable that regresses on a variable is always an endogenous variable even if this same variable is used as an variable to be regressed on. 
Endogenous variables are recognized as the receivers of a arrowhead in the model. The fact that a variable can play a double role in a SEM model (independent as well dependent), makes that SEM is more useful than the linear regression, since instead of performing two regressions one SEM model will do. There are usually two main parts to SEM: the structural model showing potential causal dependencies between endogenous and exogenous variables, and the measurement model showing the relations between the latent variables and their indicators. Confirmatory factor analysis models, for example, contain only the measurement part; while path diagrams (to be distinct from linear regression) can be viewed as an SEM that only has the structural part. Specifying the model delineates causal (in fact counterfactual) relationships between variables that are thought to be possible (and therefore want to be 'free' to vary) and those relationships between variables that already have an estimated relationship, which can be gathered from previous studies (these relationships are 'fixed' in the model).\n\nA modeller will often specify a set of theoretically plausible models in order to assess whether the model proposed is the best of the set. Not only must the modeller account for the theoretical reasons for building the model as it is, but the modeller must also take into account the number of data points and the number of parameters that the model must estimate to identify the model. An identified model is a model where a specific parameter value uniquely identifies the model, and no other equivalent formulation can be given by a different parameter value. A data point is a variable with observed scores, like a variable containing the scores on a question or the number of times respondents buy a car. The parameter is the value of interest, which might be a regression coefficient between the exogenous and the endogenous variable or the factor loading (regression coefficient between a indicator and its factor). If the number of data points is smaller than the number of estimated parameters an unidentified model is the result, since there are too few reference points to account for all the variance in the model. The solution is to constrain one of the paths to zero, which means that it is no longer part of the model.\n\n### Estimation of free parameters\n\nParameter estimation is done comparing the actual covariance matrices representing the relationships between variables and the estimated covariance matrices of the best fitting model. This is obtained through numerical maximization of a fit criterion as provided by maximum likelihood, weighted least squares or asymptotically distribution-free methods.\n\nThis is often accomplished by using a specialized SEM analysis program, such as SPSS' AMOS, EQS, LISREL, Mplus, Mx, the sem package in R, or SAS PROC CALIS (more information on SAS PROC CALIS: see UCLA or UCR).\n\n### Assessment of fit\n\nUsing a SEM analysis program, one can compare the estimated matrices representing the relationships between variables in the model to the actual matrices. Formal statistical tests and fit indices have been developed for these purposes. Individual parameters of the model can also be examined within the estimated model in order to see how well the proposed model fits the driving theory. Most, though not all, estimation methods make such tests of the model possible.\n\nHowever, the model tests are only correct provided that the model is correct. 
Although this problem exists in all statistical hypothesis tests, its existence in SEM has led to a large body of discussion among SEM experts, leading to a large variety of different recommendations on the precise application of the various fit indices and hypothesis tests.\n\nFor each measure of fit, rules of thumb have evolved regarding what represents good fit between model and data. These rules of thumb often need to be updated based on contextual factors such as the sample size, the ratio of indicators to factors, and the overall size of the model. Measures of fit differ in several ways. Some of them reward more parsimonious models (i.e., those with more constrained parameters). Because different measures of fit capture different elements of the fit of the model, it is appropriate to report a selection of different fit measures.\n\nSome of the more commonly used measures of fit include:\n\n### Model modification\n\nThe model may need to be modified in order to improve the fit, thereby estimating the most likely relationships between variables. Many programs provide modification indices which report the improvement in fit that results from adding an additional path to the model. Modifications that improve model fit are then flagged as potential changes that can be made to the model. In addition to improvements in model fit, it is important that the modifications also make theoretical sense.\n\n### Interpretation and communication\n\nThe model is then interpreted and claims about the constructs are made based on the best fitting model.\n\nCaution should always be taken when making claims of causality even when experimentation or time-ordered studies have been done. The term causal model must be understood to mean: \"a model that conveys causal assumptions,\" not necessarily a model that produces validated causal conclusions. Collecting data at multiple time points and using an experimental or quasi-experimental design can help rule out certain rival hypotheses but even a randomized experiment cannot rule out all such threats to causal inference. Good fit by a model consistent with one causal hypothesis does not rule out equally good fit by another model consistent with a different causal hypothesis. However careful research design can help distinguish such rival hypotheses.\n\n### Replication and revalidation\n\nAll model modifications should be replicated and revalidated before interpreting and communicating the results.\n\n## Comparison to other methods\n\nIn machine learning, SEM may be viewed as a generalization of Linear-Gaussian Bayesian networks which drops the acyclicality constraint, i.e. which allows causal cycles." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85778433,"math_prob":0.9563857,"size":14205,"snap":"2021-31-2021-39","text_gpt3_token_len":2950,"char_repetition_ratio":0.1382297,"word_repetition_ratio":0.0036849377,"special_character_ratio":0.20478705,"punctuation_ratio":0.112769485,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98509204,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-01T23:48:29Z\",\"WARC-Record-ID\":\"<urn:uuid:daae6fbb-54f3-4af2-8027-791964d2200c>\",\"Content-Length\":\"124519\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4c8e278e-9038-48b9-9b79-51f6134914c3>\",\"WARC-Concurrent-To\":\"<urn:uuid:c278fba7-16d6-4ebf-9306-6d07830e7c1d>\",\"WARC-IP-Address\":\"151.101.128.194\",\"WARC-Target-URI\":\"https://psychology.wikia.org/wiki/Structural_equation_modeling\",\"WARC-Payload-Digest\":\"sha1:6TTWPBIM3LOQ3QKCIY4OYVGB6ET5MSV7\",\"WARC-Block-Digest\":\"sha1:TATOK2G5ZNXPNSW4R5L3CTGTM3QKD6AL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154277.15_warc_CC-MAIN-20210801221329-20210802011329-00012.warc.gz\"}"}
https://www.studentsguide360.com/12th-chemistry-important-questions-unit-7-chemical-kinetics/
[ "", null, "# 12th Chemistry Important Questions Unit 7 Chemical Kinetics\n\nTN 12th Chemistry Important Questions Unit 7 Chemical Kinetics. 12th Chemistry Important Questions Chapter 7 Transition And Inner Transition Elements +2 Chemistry Important 2 Marks, 12th Chemistry Important 3 Marks, Chemistry Important 5 Marks Questions Based on reduced Syllabus 2021-2022. 12th Free Online Test (MCQs). HSC 12th Chemistry Revision Test Important Questions. 12th Tamil Full Guide.", null, "## Unit 7 Chemical Kinetics\n\n### 12th Chemistry All Units Important Questions\n\n• 1. Define average rate and instantaneous rate.\n• 2. Define rate law and rate constant.\n• 3. Derive integrated rate law for a zero-order reaction A –> product.\n• 4. Define the half-life of a reaction. Show that for a first-order reaction half-life is independent of initial concentration.\n• 5. What is an elementary reaction? Give the differences between the order and molecularity of a reaction.\n• 6. Write the rate law for the following reactions.\n(a) A reaction that is 3/2 order in x and zero-order in y.\n(b) A reaction that is second order in NO and first order in Br2.\n• 7. The rate law for a reaction of A, B, and C has been found to be\nrate = K[A]2[B][L]3/2. How would the rate of reaction change when\n(i) Concentration of [L] is quadrupled\n(ii) Concentration of both [A] and [B] are doubled   (iii) Concentration of [A] is halved,  (iv) Concentration of [A] is reduced to 1/3 and concentration of [L] is quadrupled.\n• 8. The rate of formation of a dimer in a second order reaction is 7.5×10-3molL-1s-1 at 0.05 mol L-1 monomer concentration. Calculate the rate constant.\n• 9. Write the Arrhenius equation and explain the terms involved.\n• 10. The decomposition of Cl2O7 at 500K in the gas phase to Cl2 and O2 is a first order reaction. After\n1 minute at 500K, the pressure of Cl2O2 falls from 0.08 to 0.04 atm. Calculate the rate constant in s-1.\n• 11. Hydrolysis of methyl acetate in aqueous solution has been studied by titrating the liberated acetic acid against sodium hydroxide. The concentration of an ester at different temperatures is given below.\n• 12. Explain pseudo first order reaction with an example.\n• 13. Identify the order for the following reactions\n(i) Rusting of Iron\n(iii) 2A + 3B –>AB products ; rate =k[A]1/2[B]2\n• 14. A gas phase reaction has energy of activation 200 kJ mol-1. If the frequency factor of the reaction is 1.6 x 1013s-1Calculate the rate constant at 600 K. (e-40.09 =3.8×10-18)\n• 15. The rate constant for a first order reaction is 1.54 x 10-3s-1. Calculate its half life time.\n• 16. The half life of the homogeneous gaseous reaction SO2Cl2–>SO2 + Cl2 which obeys first order kinetics is 8.0 minutes. How long will it take for the concentration of SO2Cl2 to be reduced to 1% of the initial value?\n• 17. The time for half change in a first order decomposition of a substance A is 60 seconds. Calculate the rate constant. How much of A will be left after 180seconds?\n• 18. A zero order reaction is 20% complete in 20 minutes. Calculate the value of the rate constant. In what time will the reaction be 80%complete?\n• 19. The activation energy of a reaction is 225 k Cal mol-1 and the value of rate constant at 40°C is 1.8×10-5s-1 Calculate the frequency factor,A.\n• 20. A first order reaction is 40% complete in 50 minutes. Calculate the value of the rate constant. In what time will the reaction be 80% complete?\n• 21. Define the rate of the reaction.\n• 22. 
Write the rate expression of the following reaction 2NH3 –> N2 + 3H2\n• 23. What are the differences between rate and rate constant of a reaction?\n• 24. What are the differences between order and molecularity?\n• 25. Give the two examples of first order reaction\n• 26. Define Zero order reaction?\n• 27. Derive the relationship between half life period and zero order rate constant\n• 28. Derive the rate constant for Zero order reaction\n• 29. Derive the rate constant for First order reaction\n• 30. Derive the relationship between half life period and First order rate constant\n• 31. Define half life period\n• 32. Define Activation energy\n• 33. Show that in case of first order reaction, the time required for 99.9% completion is nearly ten times the time required for half completion of the reaction.\n• 34. Why a negative sign is introduced in the rate expression?\n• 35. Give two examples for Zero order reaction.\n• 36. Derive Arhenius equation to calculate activation energy from the rate constant K1 and K2 at temperature T21 and T2 respectively." ]
[ null, "https://i0.wp.com/www.studentsguide360.com/wp-content/uploads/2021/12/12th-Chemistry-Important-Questions-Unit-1-METALLURGY.png", null, "https://i0.wp.com/www.studentsguide360.com/wp-content/uploads/2021/12/12th-Chemistry-Important-Questions-Unit-1-METALLURGY.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8477332,"math_prob":0.9806343,"size":4553,"snap":"2023-40-2023-50","text_gpt3_token_len":1219,"char_repetition_ratio":0.19344911,"word_repetition_ratio":0.05703422,"special_character_ratio":0.27564242,"punctuation_ratio":0.1168688,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9948554,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-09T16:07:39Z\",\"WARC-Record-ID\":\"<urn:uuid:9c2fa9f1-f90c-4ec1-88ec-bf64aa46a447>\",\"Content-Length\":\"190297\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1b4c90b2-faaf-473a-84da-0e4a58f8132a>\",\"WARC-Concurrent-To\":\"<urn:uuid:1080930e-8fae-4438-9dd7-50405d8f97c8>\",\"WARC-IP-Address\":\"136.243.92.92\",\"WARC-Target-URI\":\"https://www.studentsguide360.com/12th-chemistry-important-questions-unit-7-chemical-kinetics/\",\"WARC-Payload-Digest\":\"sha1:WV5EFGEMSN667F5JX6SE3VPY4O5RVZDW\",\"WARC-Block-Digest\":\"sha1:G4RVZOADQTLSCCPGV5QCPTJHM4LQNAH2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100912.91_warc_CC-MAIN-20231209134916-20231209164916-00517.warc.gz\"}"}
https://www.brainbell.com/tutors/Perl/Listing_16_1_A_very_simple_calculator.htm
[ "# Listing 16.1. A very simple calculator\n\n```#!/usr/local/bin/perl\nuse CGI::Form;\n\\$q = new CGI::Form;\nprint \\$q->start_html(-title=>`A Very Simple Calculator');\nprint \"<H1>A Very Simple Calculator</H1>\\n\";\nif (\\$q->cgi->var(`REQUEST_METHOD') eq `GET') {\n\\$val=0;\n&printForm(\\$q,\\$val);\n} else {\n\\$val=\\$q->param(`hiddenValue');\n\\$modifier=\\$q->param(`Modifier');\nif (\\$modifier=~/^[\\d]+\\$/) {\n\\$op=\\$q->param(`Action');\n\\$val+=\\$modifier;\n} elsif (\\$op eq \"Subtract\") {\n\\$val-=\\$modifier;\n} elsif (\\$op eq \"Multiply\") {\n\\$val*=\\$modifier;\n} elsif (\\$op eq \"Divide\") {\n\\$val/=\\$modifier;\n}\n} else {\nprint \"<P><STRONG>Please enter a numeric value!</STRONG><BR><BR>\\n\";\n}\n\\$q->param(`hiddenValue',\\$val);\n&printForm(\\$q,\\$val);\n}\nprint \\$q->end_html();\nsub printForm {\nmy(\\$q,\\$val)=@_;\nprint \"<P>The current value is: \\$val\\n\";\nprint \"<P>Please enter a value and select an operation.\\n<BR>\";\nprint \\$q->start_multipart_form();\nprint \\$q->hidden(-name=>`hiddenValue',-value=>\\$val);\nprint \"<TABLE><TR><TD COLSPAN=4>\\n\";\nprint \\$q->textfield(-name=>`Modifier',-size=>12,-maxlength=>5);\nprint \"</TD></TR>\\n<TR><TD>\\n\";\nprint \"\\n</TD><TD>\\n\";\nprint \\$q->submit(-name=>`Action',-value=>`Subtract');\nprint \"\\n</TD><TD>\\n\";\nprint \\$q->submit(-name=>`Action',-value=>`Multiply');\nprint \"\\n</TD><TD>\\n\";\nprint \\$q->submit(-name=>`Action',-value=>`Divide');\nprint \"\\n</TD><TD>\\n\";\nprint \"</TR></TABLE>\\n\";\nprint \\$q->end_form;\n}\n```\n\nYou will note that in this example, we use the hidden field `Value` to retain the current value of the calculator. We do some very basic field validation to make sure the user actually gave us a number. Obviously, this calculator will accept only integer values. Remember that when the user leaves our CGI program and returns, all state is now lost. Figure 16.1 shows the calculator as it appears in the Web browser.\n\nLet's look at another example now in Listing 16.2 that will make use of a file to retain state about a certain client. We will use the `REMOTE_ADDR` CGI environment variable to distinguish between clients. In this example, we will be allowing users to enter our Web site and write whatever they like in a big text field and then store the contents of that field in a file for them to later come back to and modify. Additionally, we will keep track of how many times users submitted changes to their text and display that value to them. Our example will simply use the /tmp/visitors directory to store these files and not worry about cleaning up these files.\n\n`Figure 16.1.` The calculator example in the browser." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.65465826,"math_prob":0.9540978,"size":2543,"snap":"2021-43-2021-49","text_gpt3_token_len":714,"char_repetition_ratio":0.14533281,"word_repetition_ratio":0.0,"special_character_ratio":0.31183642,"punctuation_ratio":0.1503006,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98389775,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-23T08:08:50Z\",\"WARC-Record-ID\":\"<urn:uuid:06d40ad2-e3a1-4dfe-b6c8-9736901f7e91>\",\"Content-Length\":\"33728\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4a6c1a02-25cc-442c-bb28-57c5fda4a43c>\",\"WARC-Concurrent-To\":\"<urn:uuid:a2c94468-bd59-4978-bab5-6a558f18f059>\",\"WARC-IP-Address\":\"35.175.60.16\",\"WARC-Target-URI\":\"https://www.brainbell.com/tutors/Perl/Listing_16_1_A_very_simple_calculator.htm\",\"WARC-Payload-Digest\":\"sha1:RLIYX42N5Z6H3QW3H6VG3S7QTSFHP7B2\",\"WARC-Block-Digest\":\"sha1:HPMZM2J3HT6XC3LNVOFGDTU7MQ2FF5XO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585653.49_warc_CC-MAIN-20211023064718-20211023094718-00620.warc.gz\"}"}
https://iraqidinarchat.net/?p=45439
[ "# Rate of Arab and foreign currencies in Iraqi dinars on Wednesday 11-2-2016\n\n### Rate of Arab and foreign currencies in Iraqi dinars on Wednesday 11-2-2016\n\nPublished on: 11-2-2016, 09:30", null, "Rate of Arab and foreign currencies in Iraqi dinars on Wednesday 11-2-2016\n\nU.S. dollar\nUS \\$ 1 = 1,164.1200 Iraqi dinars\n1 Iraqi dinars = US \\$ 0.0009\nSale price of a hundred dollars = 129.900 Iraqi Dinars\nThe purchase price of a hundred dollars = 128.900 Iraqi Dinars\neuro\n1 euro = Iraqi dinars 1,288.7413\nIQD 1 = 0.0008 euros\nSterling pound\n£ 1 = 1,426.2681 Iraqi dinars\nIQD 1 = 0.0007 pounds\nIQD 1 = 0.0012 Canadian dollars\nAustralian Dollar\n1 AUD = 887.8279 dinars\nIQD 1 = 0.0011 Australian dollars\nJapanese Yen\n1 Japanese Yen = 11.2065 dinars\nIQD 1 = 0.0892 Japanese yen\nSwiss Franc\n1 Swiss Franc = 1,195.6861 Iraqi dinars\nIQD 1 = 0.0008 Swiss Franc\nTurkish lira\n1 Turkish lira = 373.2709 dinars\nIQD 1 = 0.0027 Turkish lira\nChinese yuan\n1 Chinese yuan = 172.0951 dinars\nIQD 1 = 0.0058 Chinese yuan\nThai Baht\n1 Thai Baht = 33.2568 dinars\nIQD 1 = 0.0301 Thai Baht\nMalaysian Ringgit\n1 Malaysian Ringgit = 277.5018 dinars\nIQD 1 = 0.0036 Malaysian Ringgit\nIndian Rupee\n1 Indian Rupee = 17.4200 dinars\nIQD 1 = 0.0574 Indian Rupee\nIranian Rial\n1 IRR = 0.0387 Iraqi dinars\nIQD 1 = 25.8393 Iranian Rial\nEgyptian Pound\n1 Egyptian Pound = 131.1330 dinars\nIQD 1 = 0.0076 Egyptian pounds\nSaudi riyal\n1 SAR = 310.7552 dinars\nIQD 1 = 0.0032 SAR\nUAE dirham\n1 AED = 317.0175 dinars\nIQD 1 = 0.0032 AED\nSudanese Pound\n1 SDG = 183.0064 dinars\nIQD 1 = 0.0055 Sudanese pounds\nAlgerian dinar\n1 DA = 10.6162 dinars\nIQD 1 = 0.0942 DA\nBahraini Dinar\n1 BD = 3,109.2949 Iraqi dinars\nIQD 1 = 0.0003 BD\nJordanian Dinar\nJD 1 = 1,647.7282 Iraqi dinars\nIQD 1 = 0.0006 JD\nKuwaiti dinar\n1 KD = 3,852.1509 Iraqi dinars\nIQD 1 = 0.0003 Kuwaiti dinars\nLebanese Pound\n1 LP = 0.7923 Iraqi dinars\nIQD 1 = 1.2621 LP\nIsraeli Shekel\n1 Israeli Shekel = 305.4150 dinars\nIQD 1 = 0.0033 Israeli Shekel\nLibyan dinar\n1 LD = 846.7559 dinars\nIQD 1 = 0.0012 LD\nMoroccan dirham\nIQD 1 = 0.0084 Moroccan dirhams\nMauritanian ounce\n1 UM = 3.2663 Iraqi dinars\nIQD 1 = 0.3062 UM\nSyrian Lira\n1 SYP = 5.4429 Iraqi dinars\nIQD 1 = 0.1837 LS\nSomali shilling\n1 Somali Shilling = 2.0260 Iraqi dinars\nIQD 1 = 0.4936 Somali Shilling\nOmani Rial\nRO 1 = 3,024.2539 Iraqi dinars\nIQD 1 = 0.0003 RO\nQatari Riyal\nQR 1 = 319.8923 dinars\nIQD 1 = 0.0031 QR\nTunisian Dinar\n1 TND = 520.6867 dinars\nIQD 1 = 0.0019 Tunisian dinars\nYemeni riyal\n1 YR = 4.6579 Iraqi dinars\nIQD 1 = 0.2147 YR\nDjiboutian franc\n1 Djibouti francs = 6.5316 Iraqi dinars\nIQD 1 = 0.1531 Djibouti francs\n\nskypressiq.net" ]
[ null, "https://iraqidinarchat.net/wp-content/uploads/2013/03/dinar-dollar.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5985002,"math_prob":0.9903253,"size":2676,"snap":"2020-45-2020-50","text_gpt3_token_len":1148,"char_repetition_ratio":0.27881736,"word_repetition_ratio":0.0952381,"special_character_ratio":0.43609866,"punctuation_ratio":0.1369637,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95551556,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-01T18:05:48Z\",\"WARC-Record-ID\":\"<urn:uuid:ca01d8a1-4096-427e-8655-bca0ce1bad63>\",\"Content-Length\":\"42242\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:25b06566-5655-4891-9b23-79db0824960d>\",\"WARC-Concurrent-To\":\"<urn:uuid:f86dca0a-0575-4244-b236-1bbc12837a95>\",\"WARC-IP-Address\":\"104.192.220.131\",\"WARC-Target-URI\":\"https://iraqidinarchat.net/?p=45439\",\"WARC-Payload-Digest\":\"sha1:BOFNE56VQ2GRSBGNMAHEW3K24FS64Z3M\",\"WARC-Block-Digest\":\"sha1:EAWGV3K3VG7B3YB7HNK3PAZFPRLJMTH2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141681209.60_warc_CC-MAIN-20201201170219-20201201200219-00187.warc.gz\"}"}
https://c.happycodings.com/file-operations/c-file-io-for-binary-files.html
[ "", null, "# C Programming Code Examples\n\n## C > File Operations Code Examples\n\n### C FILE I/O for Binary files\n\nProgram Code Checks Alphabets by ASCII - C Language program Check Alphabets using ASCII code. Every printable & non-printable Symbol is treated as a Character and has an ASCII code. The ASCII code is unique integer\n\nNumber Decimal System to Binary System - C Language program code to using recursion finds a binary equivalent of a decimal number entered by the user. The user has to enter a decimal which has a base 10 and the program\n\nC Coding Print all the Paths from the Root - C language program Function to store all the paths from the root node to all leaf nodes in a array. Function which helps the print_path to recursively print all the nodes. Function to...\n\nC program code insert an element in array - Code insert an element in array at specified position. Insert element in an array at given position. The program should also print an error message if the insert position is invalid.\n\nC Code Implement a Queue using an Array - C program code ask the user for operation like insert, delete, display and exit. According to the option entered, access its respective function using switch statement and use the\n\nC++ Program Implements Fibonacci Heap - Link nodes in fibonnaci heap. Union nodes in fibonnaci heap. Extract min node in fibonnaci heap. Consolidate node in fibonnaci heap and Decrease key of nodes in fibonnaci heap. Find\n\nCode Sum of Prime numbers between 1- n - C program to find sum of all prime numbers between 1 to n using for loop. Prime numbers are positive integers greater than 1 that has only two divisors 1 and the number itself. For\n\nConvert Binary Number Octal Vice-Versa - You will learn to convert \"binary number\" to octal, and octal number to binary manually by creating a \"user-defined\" function. In this, first convert the binary number to \"decimal\"." ]
[ null, "https://happycodings.com/images/HappyCodings.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8109096,"math_prob":0.69934314,"size":1871,"snap":"2021-31-2021-39","text_gpt3_token_len":402,"char_repetition_ratio":0.1237279,"word_repetition_ratio":0.006153846,"special_character_ratio":0.20470336,"punctuation_ratio":0.07183908,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96844465,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-16T16:40:20Z\",\"WARC-Record-ID\":\"<urn:uuid:62bf9229-b354-4530-975f-4592e36eb383>\",\"Content-Length\":\"30363\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cc63ce17-4e98-4007-963d-4388efc17b75>\",\"WARC-Concurrent-To\":\"<urn:uuid:e10772e0-7134-4186-beb0-ed14dbc7f5ad>\",\"WARC-IP-Address\":\"172.67.145.119\",\"WARC-Target-URI\":\"https://c.happycodings.com/file-operations/c-file-io-for-binary-files.html\",\"WARC-Payload-Digest\":\"sha1:PIVN2VXP3YHC42KNFGEKCVNVZ7XXWA2M\",\"WARC-Block-Digest\":\"sha1:ZMZFRGO5I3PN6TFWCYMMM6UV65I6C2IZ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780053657.29_warc_CC-MAIN-20210916145123-20210916175123-00192.warc.gz\"}"}
https://es.mathworks.com/matlabcentral/cody/problems/2022-find-a-pythagorean-triple/solutions/1085653
[ "Cody\n\n# Problem 2022. Find a Pythagorean triple\n\nSolution 1085653\n\nSubmitted on 20 Dec 2016 by Pranav\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\na = 1; b = 2; c = 3; d = 4; flag_correct = false; assert(isequal(isTherePythagoreanTriple(a, b, c, d),flag_correct))\n\n2   Pass\na = 2; b = 3; c = 4; d = 5; flag_correct = true; assert(isequal(isTherePythagoreanTriple(a, b, c, d),flag_correct))\n\n3   Pass\na = 3; b = 4; c = 5; d = 6; flag_correct = true; assert(isequal(isTherePythagoreanTriple(a, b, c, d),flag_correct))\n\n4   Pass\na = 3; b = 4; c = 4.5; d = 5; flag_correct = true; assert(isequal(isTherePythagoreanTriple(a, b, c, d),flag_correct))\n\n5   Pass\na = 3; b = 3.5; c = 4; d = 5; flag_correct = true; assert(isequal(isTherePythagoreanTriple(a, b, c, d),flag_correct))\n\n### Community Treasure Hunt\n\nFind the treasures in MATLAB Central and discover how the community can help you!\n\nStart Hunting!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5189801,"math_prob":0.9985649,"size":924,"snap":"2020-45-2020-50","text_gpt3_token_len":329,"char_repetition_ratio":0.1576087,"word_repetition_ratio":0.28387097,"special_character_ratio":0.36471862,"punctuation_ratio":0.25,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9969409,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-28T03:32:15Z\",\"WARC-Record-ID\":\"<urn:uuid:8463989e-efa5-4c2a-a116-4308fa2ffdf7>\",\"Content-Length\":\"81278\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6193f3d9-b587-49cd-a5f7-83d4be15a343>\",\"WARC-Concurrent-To\":\"<urn:uuid:b373bcde-3024-46ce-944b-368ead179370>\",\"WARC-IP-Address\":\"184.24.72.83\",\"WARC-Target-URI\":\"https://es.mathworks.com/matlabcentral/cody/problems/2022-find-a-pythagorean-triple/solutions/1085653\",\"WARC-Payload-Digest\":\"sha1:CL5ZUJX2AACZ3E5S3D7VAIKAYG5WJKZM\",\"WARC-Block-Digest\":\"sha1:HVMRK7NEOJEL2CQBJX2CCLAFJNQKG3ME\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107896048.53_warc_CC-MAIN-20201028014458-20201028044458-00250.warc.gz\"}"}
https://planetcalc.com/7921/
[ "# Convert grams to liters and liters to grams\n\nThis online calculator converts grams to liters and liters to grams given a gas formula. It uses molar volume of a gas at STP (standard temperature and pressure)\n\n### Articles that describe this calculator", null, "#### Convert grams to liters and liters to grams\n\nDigits after the decimal point: 1\nVolume of the gas, liters\n\nMass of the gas, grams\n\nMolar mass of the gas\n\nMolar mass details\n\n### Calculators used by this calculator\n\nURL copied to clipboard\n\n#### Similar calculators\n\nPLANETCALC, Convert grams to liters and liters to grams" ]
[ null, "https://planetcalc.com/img/32x32i.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7381312,"math_prob":0.93435365,"size":513,"snap":"2022-40-2023-06","text_gpt3_token_len":114,"char_repetition_ratio":0.18664047,"word_repetition_ratio":0.13414635,"special_character_ratio":0.18128654,"punctuation_ratio":0.054945055,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9673632,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-04T16:00:50Z\",\"WARC-Record-ID\":\"<urn:uuid:a171db1c-2849-4a1d-8201-5b4f8d8af9ef>\",\"Content-Length\":\"41574\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:443ac70d-ce48-4f33-aecc-3ba97d4fd4a5>\",\"WARC-Concurrent-To\":\"<urn:uuid:334e3755-290a-4442-b1d5-a8670b5ae401>\",\"WARC-IP-Address\":\"104.217.251.114\",\"WARC-Target-URI\":\"https://planetcalc.com/7921/\",\"WARC-Payload-Digest\":\"sha1:LGWVQ7RTMO43IO634CVLW3QJ3E26QZJT\",\"WARC-Block-Digest\":\"sha1:AN3XXBIFQ2SCJDK4DISVD7JR7RY5YFRY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337516.13_warc_CC-MAIN-20221004152839-20221004182839-00655.warc.gz\"}"}
https://www.journeyingtheglobe.com/fahrenheit-to-celsius/223-f-to-c/
[ "223 f to c | 223 Fahrenheit to Celsius | [+ Examples]\n\n# 223F to C - Convert 223° Fahrenheit to Celsius\n\n### The answer is: 106.11 degrees Celsius or 106.11° C\n\nLet's look into the conversion between Fahrenheit and Celsius scales in detail.\n\n### Calculate 223° Fahrenheit to Celsius (223F to °C)\n\nFahrenheit\nCelsius\n223 Degrees Fahrenheit = 106.11 Degrees Celsius\n\nTemperature Conversion - Degrees Fahrenheit into Degrees Celsius\n\nFahrenheit to celsius conversion formula is all about converting the temperature denoting in Fahrenheit to Celsius. As mentioned earlier, the temperature of boiling (hot) water in Celsius is 0 degrees and in Fahrenheit is 21 degrees, the formula to convert F to C is\n\n### °C = (°F − 32) x 5/9\n\nThe math is here is fairly simple, and can be easily understood by an example. Let's say we need to 223 Fahrenheit to Celsius\n\n## How To Convert 223 F to C?\n\nTo convert 223 degrees Fahrenheit to Celsius, all one needs is to put in the values in the converter equation-\n\n### °C = (°F − 32) x 5/9\n\nC = 106.11 degrees\n\nThus, after applying the formula to convert 223 Fahrenheit to Celsius, the answer is -\n\n223°F = 106.11°C\n\nor\n\n223 degrees Fahrenheit equals 106.11 degrees Celsius!\n\n### How much is 223 degrees Fahrenheit in Celsius?\n\n223F to C = 106.11 °C\n\n### How to Convert From Fahrenheit to Celsius and Celsius to Fahrenheit - Quick and Easy Method\n\nHow to Convert From Fahrenheit to C...\nHow to Convert From Fahrenheit to Celsius and Celsius to Fahrenheit\n\n### What is the formula to calculate Fahrenheit to Celsius?\n\nThe F to C formula is\n\n(F − 32) × 5/9 = C\n\nWhen we enter 223 for F in the formula, we get\n\n(223 − 32) × 5/9  = 106.11 C\n\nTo be able to solve the (223 − 32) × 5/9 equation, we first subtract 32 from 223, then we multiply the difference by 5, and then finally we divide the product by 9 to get the answer in Celsius.\n\n### What is the simplest way of converting Fahrenheit into Celsius?\n\nThe boiling temperature of water in Fahrenheit is 21 and 0 in Celsius. So, the simplest formula to calculate the difference is\n\nC = (F − 32) × 5/9\n\nFor converting Fahrenheit into Celsius, you can use this formula – Fahrenheit Temperature – 32/ 2 = Celsius Temperature.\n\nBut this is not the only formula that is used for the conversion as some people believe it doesn’t give out the exact number.\n\nOne another formula that is believed to be equally easy and quick is\n\n(°F - 32) x .5556\n\nWhile there are other temperature units like Kelvin, Réaumur, and Rankine as well, Degree Celsius and Degree Fahrenheit are the most commonly used.\n\nWhile Fahrenheit is primarily used in the US and its territories, Celsius has gained more popularity in the rest of the world. For those using these two different scales, the numbers that denote that temperature are quite different.\n\nFor example, water freezes at Zero Degree Celsius and boils at 100 degrees, the readings are 32-degree Fahrenheit as the freezing point of water and 212 degrees for boiling.\n\n## For Celsius Conversions\n\nFor Celsius conversion, all you need to do is start with the temperature in Celsius. 
Subtract 30 from the resultant figure, and finally, divide your answer by 2!\n\n## Common F and C Temperature Table\n\n### Key Inferences about Fahrenheit and Celsius\n\n• Celsius and Fahrenheit are commonly misspelled as Celcius and Farenheit.\n• The formula to find a Celsius temperature from Fahrenheit is:  °F = (°C × 9/5) + 32\n• The formula to find a Fahrenheit temperature from Celsius is:  °°C = (°F - 32) × 5/9\n• The two temperature scales are equal at -40°.\n\n## Oven temperature chart\n\nThe Fahrenheit temperature scale is named after the German physicist Daniel Gabriel Fahrenheit in 1724 and was originally used for temperature measurement through mercury thermometers that he invented himself.\n\nMeanwhile, the Celsius scale was originally called centigrade but later came to be named after Swedish astronomer Anders Celsius in 1742. But when the scale was first introduced, it was quite the reverse of what it is today. Anders labeled 0 Degree Celsius as the boiling point of water, while 100 denoted the freezing point.\n\nHowever, after Celsius passed away, Swedish taxonomist Carl Linnaeus flipped it to the opposite, the same as it is used today.\n\n### Our Take\n\nWhile this is the formula that is used for the conversion from Fahrenheit to Celsius, there are few diversions and it is not always a perfect conversion either making it slightly more difficult than what appears to be.\n\nAll said and done, one must understand that since both the scales are offset, meaning that neither of them is defined as starting from zero, there comes a slightly complicated angle to the above-mentioned formula.\n\nBesides, the two scales do not start with a zero, and they both add a different additional value for every unit of heat. This is why it is not every time possible to get an exact value of the conversion by applying the formula.\n\nReverse Conversion: Celsius to Fahrenheit\n\n Fahrenheit Celsius 223.01°F 106.12°C 223.02°F 106.12°C 223.03°F 106.13°C 223.04°F 106.13°C 223.05°F 106.14°C 223.06°F 106.14°C 223.07°F 106.15°C 223.08°F 106.16°C 223.09°F 106.16°C 223.1°F 106.17°C 223.11°F 106.17°C 223.12°F 106.18°C 223.13°F 106.18°C 223.14°F 106.19°C 223.15°F 106.19°C 223.16°F 106.2°C 223.17°F 106.21°C 223.18°F 106.21°C 223.19°F 106.22°C 223.2°F 106.22°C 223.21°F 106.23°C 223.22°F 106.23°C 223.23°F 106.24°C 223.24°F 106.24°C\n Fahrenheit Celsius 223.25°F 106.25°C 223.26°F 106.26°C 223.27°F 106.26°C 223.28°F 106.27°C 223.29°F 106.27°C 223.3°F 106.28°C 223.31°F 106.28°C 223.32°F 106.29°C 223.33°F 106.29°C 223.34°F 106.3°C 223.35°F 106.31°C 223.36°F 106.31°C 223.37°F 106.32°C 223.38°F 106.32°C 223.39°F 106.33°C 223.4°F 106.33°C 223.41°F 106.34°C 223.42°F 106.34°C 223.43°F 106.35°C 223.44°F 106.36°C 223.45°F 106.36°C 223.46°F 106.37°C 223.47°F 106.37°C 223.48°F 106.38°C 223.49°F 106.38°C\n Fahrenheit Celsius 223.5°F 106.39°C 223.51°F 106.39°C 223.52°F 106.4°C 223.53°F 106.41°C 223.54°F 106.41°C 223.55°F 106.42°C 223.56°F 106.42°C 223.57°F 106.43°C 223.58°F 106.43°C 223.59°F 106.44°C 223.6°F 106.44°C 223.61°F 106.45°C 223.62°F 106.46°C 223.63°F 106.46°C 223.64°F 106.47°C 223.65°F 106.47°C 223.66°F 106.48°C 223.67°F 106.48°C 223.68°F 106.49°C 223.69°F 106.49°C 223.7°F 106.5°C 223.71°F 106.51°C 223.72°F 106.51°C 223.73°F 106.52°C 223.74°F 106.52°C\n Fahrenheit Celsius 223.75°F 106.53°C 223.76°F 106.53°C 223.77°F 106.54°C 223.78°F 106.54°C 223.79°F 106.55°C 223.8°F 106.56°C 223.81°F 106.56°C 223.82°F 106.57°C 223.83°F 106.57°C 223.84°F 106.58°C 223.85°F 106.58°C 223.86°F 106.59°C 223.87°F 106.59°C 223.88°F 
106.6°C 223.89°F 106.61°C 223.9°F 106.61°C 223.91°F 106.62°C 223.92°F 106.62°C 223.93°F 106.63°C 223.94°F 106.63°C 223.95°F 106.64°C 223.96°F 106.64°C 223.97°F 106.65°C 223.98°F 106.66°C 223.99°F 106.66°C" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.759873,"math_prob":0.9669838,"size":7955,"snap":"2022-27-2022-33","text_gpt3_token_len":2730,"char_repetition_ratio":0.24990568,"word_repetition_ratio":0.01946472,"special_character_ratio":0.42740414,"punctuation_ratio":0.15263984,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95666486,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-07T00:58:33Z\",\"WARC-Record-ID\":\"<urn:uuid:9c117020-c6e4-46f5-997c-0a0ccb4d7903>\",\"Content-Length\":\"325882\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:60b5e2eb-cdb3-48a8-b4dc-3c579598e378>\",\"WARC-Concurrent-To\":\"<urn:uuid:0182a4b8-f29b-44e0-8658-417881ee82b6>\",\"WARC-IP-Address\":\"104.26.7.72\",\"WARC-Target-URI\":\"https://www.journeyingtheglobe.com/fahrenheit-to-celsius/223-f-to-c/\",\"WARC-Payload-Digest\":\"sha1:O7FXDRBIT5BWXDQKYBUXK4LUUS4CHEMT\",\"WARC-Block-Digest\":\"sha1:VOU5KGHIRGU7XWVI3DHFOK4IBSEXFXF3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104683020.92_warc_CC-MAIN-20220707002618-20220707032618-00058.warc.gz\"}"}
https://homeguideis.com/how-to-measure-kitchen-cabinets-in-linear-feet/
[ "", null, "# How to Measure Kitchen Cabinets In Linear Feet\n\nFor those who are curious about how to measure kitchen cabinets in linear feet, a quick internet search will show that there are many different ways to do so. But which one is the most accurate?\n\nThe answer is found in the definition of linear feet. A linear foot refers to the measurement of distance on a straight line. The easiest way to measure a cabinet is by measuring its height, width, and depth, then multiplying these measurements together. And if you’re looking for more information on other aspects of kitchen design, check out our Kitchen Design Guide!\n\n## How to Measure Kitchen Cabinets In Linear Feet\n\nMeasuring cabinets can seem like a daunting task. But it doesn’t have to be! The truth is, there are many different ways to measure kitchen cabinets in linear feet. Some of the most popular methods include:\n\nYou Measure the height, width, and depth of your cabinet and multiply these measurements together Measure from one corner of your cabinet to the opposite corner Measure from one side of your cabinet to the opposite side\n\nOne way that you can measure kitchen cabinets in linear feet is by measuring from one corner of the cabinet to the other. If you follow this method, you would measure from one edge of your kitchen cabinet to the other edge. You would then take your measurement and input it into an online calculator. This calculator would calculate how many linear feet that length equals based on its dimensions (height and width).\n\nFor example, if you had a wide but short cabinet that was 10 feet long by 3 feet high, then you would input 10 ft x 3 ft = 30 sq. ft.  You can find out how many square inches this is by dividing 30 sq. ft. by 144 sq ft (1 square foot is 144 sq in.). So for this particular example with our 10ft x 3ft wide but short cabinet, we can figure out that it equals 18 square feet or 1/6th of a standard size sheet of plywood (144 sq ft).\n\n## How to Measure a Cabinet\n\nThere are many different ways to measure cabinets. The easiest way is by multiplying the height, width, and depth of the cabinet space. So let’s say you want to measure a cabinet that is 36″ high, 36″ wide, and 18″ deep. First, you would multiply 36x36x18, which equals 5184 inches or 95.4 feet. That means this cabinet measures about 95 linear feet in length.\n\n## What are the Benefits of Measuring Kitchen Cabinets in Linear Feet?\n\nMeasuring kitchen cabinets in linear feet is a straightforward process. But what are the benefits? Well, measuring in linear feet is easy and it’s accurate. You just need to measure the height, width, and depth of the cabinet and multiply these measurements to get the total linear feet. It’s also a more cost-effective way to measure because you’re not paying for labor.\n\nJust make sure you keep your measurements consistent. If you want to know how many linear feet are in a square foot, there are conversion calculators that can help out with this as well!\n\n## How to Measure Kitchen Cabinets in Square Feet\n\nOne way to measure kitchen cabinets in linear feet is to measure the height, width, and depth of each cabinet separately. Multiply these measurements together. Then divide by 12 to find out how many feet of space you have for your cabinets.\n\nAnother way to measure kitchen cabinets in linear feet is by multiplying the length and width of an empty cabinet that’s placed on top of a floor plan. 
Draw lines on your blueprint, then use a ruler or measuring tape to measure the distance between them. This will give you the amount of square footage that the cabinet would take up in your kitchen.\n\n## What are the Advantages of Measuring Kitchen Cabinets in Square Feet?\n\nAnother way to measure cabinets is by measuring them in square feet. You can also find this answer by looking up the definition of a linear foot. A square foot is equal to the measurement of distance on a plane surface. So it’s easier to measure kitchen cabinets in square feet by taking measurements from all four sides.\n\nThere are many advantages of measuring kitchen cabinets in square feet, one being that you don’t have to do any math! If you’re not very good with math, then this can be a huge benefit. It also eliminates mistakes from using more than one unit of measurement when converting from one unit to another. And if you’re designing for a client who uses imperial measurements (inches), it will also eliminate any conversions between different units of measurement.\n\nTake measurements from all four sides, and use the following equation:\n\nCabinet Width x Cabinet Depth x Cabinet Height = Square Feet\n\nFor example: 36″ x 36″ x 24″ = 25 Square Feet\n\n## What are Linear Feet?\n\nA linear foot is a measurement of distance on a straight line. For example, the height, width, and depth of a kitchen cabinet would all contribute to its linear feet measurement.\n\nLinear feet can be expressed in different ways: as fractional inches (1/2), decimal inches (6.5), or as whole feet (12). Most people will measure their cabinets using the fractional inch method–hopefully one that converts easily to metric!\n\nThe formula for calculating linear yards is: ((height x width) x depth). For example, if you have a cabinet that is 36 in tall, 24 in wide, and 18 in deep. The linear feet of this cabinet would be: ((36 x 24) x 18). The answer would be 5760 square inches or 576 cubic inches–or 9 ft 6 in.\n\nThe formula for calculating cubic footage is: ((height x width) x depth) divided by 2743.\n\nSo if your cabinet has dimensions of 36x24x18in, then it would have cubic footage of 2248 cu.in., or 269 L with the cubic inch conversion.\n\n## Calculate the Length of One Cabinet\n\nThe best way to calculate the linear length of a cabinet is by measuring its height, width, and depth, then multiplying these measurements together.\n\nFor example, if a kitchen cabinet has a height of 30 inches and a depth of 20 inches, you would multiply these two measurements together to get 600 inches. The same process can be used for width; if the cabinet has a width of 18 inches, you would multiply this measurement with that of the depth to get 3600 inches.\n\nOnce you have calculated all three dimensions in linear feet, you will need to convert them into feet and inches. To do so, divide each one by 12 which will give you their equivalent measurements in feet and inches. For our hypothetical cabinet example from above: 30/12 = 2 ft., 20/12 = 1 ft., 18/12 = 1 ft., 600/12 = 50 in., 3600/12 = 300 in.\n\nIn order to measure kitchen cabinets using linear feet, it’s important to keep these two things in mind. Measurements should be taken on an even wall Measurements should be taken at an average height\n\n## What is a Linear Foot?\n\nLinear feet are used for measuring items such as carpets, rugs, cabinets, and more. A linear foot measures the length of a straight line from one end to the other. 
This could be from one end of an item to the other, or it could be from one side of an item to the other. It can also refer to measuring the distance from an object’s top edge to its bottom edge.\n\nA quick internet search will show that there are many different ways to measure kitchen cabinets in linear feet. But which one is most accurate? The answer lies in what is considered a linear foot. A linear foot refers to a measurement taken on a straight line. So it makes sense that you would need to measure the height, width, and depth of the cabinet in order to accurately find out how many linear feet it takes up. If you don’t know how long your cabinet is. You can take measurements using a ruler or yardstick and divide by 12 inches (or 3 feet).\n\nIf you’re planning on upgrading your kitchen with new cabinetry. If but want some more insight into all aspects of design, be sure to check out our Kitchen Design Guide!\n\nSee More: How To Paint Kitchen Cabinets To Look Antique\n\n## Conclusion\n\n• To measure kitchen cabinets in linear feet, first, measure the longest wall of your kitchen that the cabinets will be placed on.\n• If the longest wall is 12 feet, then that is the number of linear feet that one row of cabinets will take up.\n• Keep in mind that this measurement only goes for one wall, so if you have two rows of cabinets, multiply the linear feet number by two.\n• When measuring kitchen cabinets in square feet. Use the total length of all walls and divide by the total number of cabinets to find the total square footage needed.\n• Generally, when measuring kitchen cabinets in square feet. You will end up with a larger number than when you measure them in linear feet. The square footage measurement is generally more accurate because it takes into consideration the space between your walls and cabinets.\n• In a square foot measurement, one linear foot is equal to 1 square foot.\n• To calculate the length of one cabinet, subtract the total length of your kitchen from 12 feet (the one row). So if your kitchen is 20 feet long, then each row would have 10 linear feet of space for cabinets.\n• A linear foot is calculated by multiplying the\nScroll to Top" ]
[ null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201000%20600'%3E%3C/svg%3E", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9392185,"math_prob":0.99105734,"size":8907,"snap":"2022-40-2023-06","text_gpt3_token_len":1939,"char_repetition_ratio":0.19049759,"word_repetition_ratio":0.08644278,"special_character_ratio":0.22734928,"punctuation_ratio":0.09827873,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9834504,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-25T10:27:20Z\",\"WARC-Record-ID\":\"<urn:uuid:c34951c1-1310-4431-b6fb-51c79460ab02>\",\"Content-Length\":\"169147\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:720f446c-ce72-4cc3-8aa8-a87b565c4e4e>\",\"WARC-Concurrent-To\":\"<urn:uuid:ceadffa7-b53a-4d8e-aff2-04b790c06301>\",\"WARC-IP-Address\":\"172.67.165.84\",\"WARC-Target-URI\":\"https://homeguideis.com/how-to-measure-kitchen-cabinets-in-linear-feet/\",\"WARC-Payload-Digest\":\"sha1:B3YXPSGYBSHZJRY5V2QXMJ4FJBK24SGF\",\"WARC-Block-Digest\":\"sha1:CT25ZES66YITSPJOMNTXOIMI6XSX3KQ5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334528.24_warc_CC-MAIN-20220925101046-20220925131046-00215.warc.gz\"}"}
http://ufoetblog.com/?p=7264
[ "#", null, "Note: Translated words like ‘Denver indicate unusual relationships between the character positions in the word(s) being translated and result of the translation.\n\nGod (UFO/ET) talks to us through the ET Corn Gods language translation of words.\n\nDenver:\nDe is 45, book “Romans”.\nA is one, 1000, “m”, m+66+66 = “ne”\nTherefore:\nDe = “Romne as”.\nAs is 119, Cs = c 19, 91-66 = “25”, “y”.\nTherefore;\nDe = “Romney”\nE is 5, 55. (five = I-f ve = cve, ee)\nTherefore:\nDenver = “Romney enver”\nE+66+66 .. = “won”.\nTherefore:\nDenver = “Romney Won ver”.\nV is 22, 2+2, “4”, “D”.\nTherefore:\nDenver = “Romney Won De r”.\nAdd 0+66+66 .. = “Sn”.\nS is 19, 2+17.\n7+66+66 … = “te”.\nTherefore:\nDenver = “Romney Won Debate nr”.\nN is 14, 2+12, “2012”.\n2 is B, “Boron”, “Bad”.\nTherefore:\nDenver = “Romney Won Debate 2012 AD R”\n18 is Job.\nJ is 10, “October”.\nOb is book 31.\nOne is on date. (e is 5, book Dt, t is ag, g+66+66 … = “te”)\nTherefore:\n\n## Visit:\n\nwww.ETCornGods.com" ]
[ null, "http://ufoetblog.com/wp-content/uploads/2012/10/Romney-Obama.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8796072,"math_prob":0.9958382,"size":1184,"snap":"2019-13-2019-22","text_gpt3_token_len":416,"char_repetition_ratio":0.17118645,"word_repetition_ratio":0.03271028,"special_character_ratio":0.3902027,"punctuation_ratio":0.24113475,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9838486,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-21T14:05:38Z\",\"WARC-Record-ID\":\"<urn:uuid:ec748f52-76bc-49a6-8920-b6c915d1f68c>\",\"Content-Length\":\"9747\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c8405fa1-264a-49e7-a028-f75e4b750192>\",\"WARC-Concurrent-To\":\"<urn:uuid:4934f9f3-b8da-449e-920f-2829e9788ce0>\",\"WARC-IP-Address\":\"199.167.40.169\",\"WARC-Target-URI\":\"http://ufoetblog.com/?p=7264\",\"WARC-Payload-Digest\":\"sha1:5MBNVNPSWWEZPY7TYZST7RMZA5NYDRGY\",\"WARC-Block-Digest\":\"sha1:OLR747L73I4ZQIJHIKGAHOXR22SSEOS5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202525.25_warc_CC-MAIN-20190321132523-20190321154523-00055.warc.gz\"}"}
https://www.physicsforums.com/threads/surface-charge-density-on-a-cylindrical-cavity.714730/
[ "# Surface charge density on a cylindrical cavity\n\nHi folks, I am having trouble generalizing a well-known problem. Let's say we have a cylindrical cavity inside a conductor, and in this cavity runs a line charge λ. I would now like to know the surface charge density on the inside wall of the cavity, but with the line charge not in the center of the cylindrical cavity.\n\nIt's clear that if the line charge is located in the center, the surface charge density is a constant because all points of the inner surface of the cavity are equally close to the line charge.\n\nSo when the line charge is off-center, the surface charge distribution has to be varying around the center with the angle. Yet, the inner surface of the cavity still has to be a equipotential surface.\n\nCan anyone help me with an idea of how to solve this problem? I will for sure need the cosine law to determine the distance of the surface of the cavity from the line charge, but from there...?\n\nThank you very much for your help!\n\nRegards, John\n\nMeir Achuz\nHomework Helper\nGold Member\nWrite the potential as a Fourier cosine series in a_n r^n cos\\theta\nplus ln[\\sqrt{r^2+d^2-2rd cos\\theta}].\nExpand the log in a Fourier cosine series . Then set the potential = 0 at the surface r=R, setting each term in the series to zero to find the coefficients a_n.\n\nThat's a great idea, thank you very much. I first tried to solve this problem with image charges, but the problem is that I don't know where to place it. In the solution of Laplace's equation, all coeficients for any term r^n for n<0 must be zero, but can there survive any others than the logarithmic term?\n\n(Two line charges a distance L/2 apart produce a potential λ/2∏ε0*ln((r^2+(L/2)^2-rLcosθ)/(r^2+(L/2)^2+rLcosθ)).)\n\nLast edited:\nMeir Achuz" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9510709,"math_prob":0.95174366,"size":1611,"snap":"2022-27-2022-33","text_gpt3_token_len":350,"char_repetition_ratio":0.20161793,"word_repetition_ratio":0.76510066,"special_character_ratio":0.21663563,"punctuation_ratio":0.10149254,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9938545,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-03T12:09:26Z\",\"WARC-Record-ID\":\"<urn:uuid:50632345-eab6-4079-a30f-278a281556b4>\",\"Content-Length\":\"67189\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1673b920-6ce7-47fd-a6f6-6ea8b340ab36>\",\"WARC-Concurrent-To\":\"<urn:uuid:d67a4a1f-8ef4-45f2-9521-620cf55c05ab>\",\"WARC-IP-Address\":\"104.26.15.132\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/surface-charge-density-on-a-cylindrical-cavity.714730/\",\"WARC-Payload-Digest\":\"sha1:5HMU6DKBLEFBVC52YK3J3VLRNOA2JDLQ\",\"WARC-Block-Digest\":\"sha1:CS2QCLHWCQKKD2UMJCLDLUJCA67DKWZG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104240553.67_warc_CC-MAIN-20220703104037-20220703134037-00488.warc.gz\"}"}
https://physics.stackexchange.com/questions/312443/what-unit-does-delta-x-have-in-the-uncertainty-principle
[ "# What unit does $\\Delta x$ have in the uncertainty principle?\n\nCan somebody tell me how the units work out in Heisenberg's principle equation? Mass in $kg$ and velocity in $m/s$ cancel partially with Planck's constant, so what kind of unit is given to $Δx$ to balance the units?\n\n$(m) \\cdot (kg \\cdot \\frac{m}{s}) \\ge J \\cdot s = (kg \\cdot \\frac{m^2}{s^2}) \\cdot s$\nSo, the unit is $kg \\cdot \\frac{m^2}{s}$ on both sides.\nThe position-momentum uncertainty relation is: $\\Delta x\\Delta p\\geq \\frac{\\hbar}{2}$. Here $\\Delta x$ is the 'uncertainty' aka standard deviation of observing a quantum particle at a given point. The standard deviation is defined as $\\Delta x = \\sqrt{\\left\\langle x^2\\right\\rangle-\\left\\langle x\\right\\rangle^2}$. Here x has the dimensions of length and we can then see that $\\Delta x$ also has the dimensions of a length. As the uncertainty in momentum is defined in much the same way we see that $\\Delta p$ has the dimensions of momentum: $\\frac{\\text{mass}\\cdot\\text{length}}{\\text{time}}=$. Now $\\hbar$ has to have the dimension $\\frac{\\text{mass}\\cdot\\text{length}^2}{\\text{time}}=\\text{energy}\\cdot\\text{time}$ or else we wouldn't have a physically or mathematically sensible inequality." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7674844,"math_prob":0.99989367,"size":1155,"snap":"2021-43-2021-49","text_gpt3_token_len":334,"char_repetition_ratio":0.12771504,"word_repetition_ratio":0.0,"special_character_ratio":0.27878788,"punctuation_ratio":0.058558557,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.000004,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-26T08:11:56Z\",\"WARC-Record-ID\":\"<urn:uuid:8216c0a1-bc78-4dd6-9cd6-42cb45a2854f>\",\"Content-Length\":\"170812\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5fd3025d-08a0-42de-a098-25f13fba4f76>\",\"WARC-Concurrent-To\":\"<urn:uuid:0174ca5e-b669-4706-82e8-2eace1f20728>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/312443/what-unit-does-delta-x-have-in-the-uncertainty-principle\",\"WARC-Payload-Digest\":\"sha1:35P7OIBICGBUDGUUFWJSKY25A6HLDHZI\",\"WARC-Block-Digest\":\"sha1:ZUAD3UFLDOKMG36SOZWBANPY5KYDXJJV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587854.13_warc_CC-MAIN-20211026072759-20211026102759-00246.warc.gz\"}"}
https://kullabs.com/class-11/physics-11/work-energy-and-power-11/collisions
[ "", null, "## Collisions\n\nSubject: Physics\n\n#### Overview\n\nCollision is the mutual interaction between two particles for a short interval of time so that their momentum and kinetic energy may change. Collision is an isolated event in which two or more colliding bodies exert relatively strong forces on each other for a relatively short time. Actual physical contact is not necessary for a collision. The collision in which momentum is conserved but K.E is not conserved called inelastic collision. In this collision, the nature of force is non conservative. In inelastic collision, kinetic energy is lost in the form of heat energy, sound energy, light energy etc.\n##### Collisions\n\nThe collision is the mutual interaction between two particles for a short interval of time so that their momentum and kinetic energy may change. A collision is an isolated event in which two or more colliding bodies exert relatively strong forces on each other for a relatively short time. Actual physical contact is not necessary for a collision.\n\nThere are two types of collisions: (i) Elastic collision and (ii) Inelastic collision\n\n#### Elastic Collision\n\nIf the kinetic energy and momentum are conserved in the collision, it is said to be the elastic collision. In the elastic collision, the nature of the force is conservative. Collisions between atomic or sub-atomic particles, between gas molecules etc. are the perfectly elastic collision.\n\nCharacteristics of Elastic Collision\n\n1. The momentum is conserved.\n2. Kinetic energy is conserved.\n3. Total energy is conserved.\n4. Forces involved in the interaction are conservative in nature.\n5. Mechanical energy is not converted into other forms of energy.\n\nElastic Collision in One Dimension", null, "If the colliding bodies move along the same straight path before and after the collision, it is said to be one-dimensional collision\n\nConsider two objects of masses m1 and m2 moving with velocities u1 and u2 such that u1>u2 in the same path, collide and let after collision their velocities be v1 and v2 on the same line as shown in the figure.", null, "From the principle of conservation of momentum\n\nm1u1+m2u2 =m1v1+mv2 ............. (i)\n\nm1(u1-v1) =m2 (v2- u2) ..................... (ii)\n\nsince the collision is elastic\n\nK.E. before collision = K.E after collision\n\nor $\\frac{1}{2}$m1u12 + $\\frac{1}{2}$ m2u22 = $\\frac{1}{2}$m1v12+ $\\frac{1}{2}$m2v22\n\nm1(u12-v12) =m2 (v22- u22)\n\nm1(u1-v1) (u1+v1) =m2 (v2- u2 )m2 (v2+u2) .............................(iii)\n\nFrom equation (iv), it follows that relative velocity of approach (u-u2) before collision is equal to the relative velocity of separation (v2-v1) after collision. From equation (iv)\n\n\\begin{align*} v_2 &= u_1 –u_2 + v_1 \\\\ \\text {Substituting the value of} v_1 in \\text {equation} (i) \\text {we get} \\\\ v_1 = v_2 –u_1 + u_2 \\\\ \\text {Substituting for value of } v_1 in \\text {equation}, (i), \\text {we get} \\\\ m_1u_1 + m_2u_2 &= m_1(v_2 –u_1+u_2) + m_2v_2 \\\\ (m_2-m_1)u_2 + 2m_1u_1 &= (m_1 + m_2) v_2 \\\\ \\text {or,} v_2 &= \\frac {(m_2 –m_1) u_2 + 2m_1u_1 }{m_1 + m_2} \\dots (iv)\\\\ \\end{align*}\n\nSpecial Cases\n\n1. When m1 = m2, then from the equation (v) and (vi), we have\n$$v_1 = u_2 \\text {and} v_2 = u_1$$\ne. if two bodies of equal masses suffer elastic collision, then after the collision they will interchange their velocities.\n2. When u2=0 i.e. 
when the 2nd body is at rest, then from equation (v) we have\n$$v_1 =\frac {m_1 - m_2}{m_1+ m_2} u_1 \dots (vii)$$\nand from equation (vi), we have\n$$v_2 =\frac {2m_1}{m_1+m_2} u_1 \dots (viii)$$\n• When m1>>m2 and u2 = 0, then equations (vii) and (viii) give\n$$v_1 = u_1 \text { and } v_2 = 2u_1$$\ni.e. the velocity of the massive body is unchanged, but the lighter body acquires a velocity which is double the initial velocity of the massive body.\n• When m2>>m1 and u2 = 0, then equations (vii) and (viii) give\n$$v_1 \approx -u_1 \text { and } v_2 \approx 0$$\ni.e. the velocity of the lighter body is reversed and the massive body remains at rest.\n\n#### Inelastic Collision\n\nThe collision in which momentum is conserved but kinetic energy is not conserved is called an inelastic collision. In this collision, the nature of the force is non-conservative. In an inelastic collision, kinetic energy is lost in the form of heat energy, sound energy, light energy etc.\n\nIn the case of an inelastic collision between large particles, the loss of kinetic energy occurs mostly in the form of heat energy due to increased vibrations of the constituent atoms of the particles. In the case of atomic collisions, if the atoms are in the normal state before the collision and become excited after the collision, some kinetic energy of the collision is converted into excitation energy $\xi$. Hence the K.E. before the collision is greater than the K.E. after the collision, i.e.\n\n$$\frac{1}{2}m_1u_1^2+\frac{1}{2}m_2u_2^2=\frac{1}{2}m_1v_1^2 +\frac{1}{2}m_2v_2^2+\xi$$\n\nIf the particles are in an excited state before the collision and return to the normal state after the collision, then the final K.E. will be greater than the initial K.E., i.e.\n\n$$\frac{1}{2}m_1u_1^2+\frac{1}{2}m_2u_2^2+\xi=\frac{1}{2}m_1v_1^2 +\frac{1}{2}m_2v_2^2$$\n\nCharacteristics of Inelastic Collision\n\n1. The momentum is conserved.\n2. Total energy is conserved.\n3. Kinetic energy is not conserved.\n4. Mechanical energy is converted into other forms of energy.\n5. Forces involved during the interaction are non-conservative in nature.\n\nInelastic Collision in One Dimension\n\nLet us consider two perfectly inelastic bodies A and B of mass m1 and m2. Body A is moving with velocity u1 and B is at rest. After some time they collide and move together with common velocity v.\n\n\begin{align*} \text {Initial momentum before collision} &= m_1u_1 \\ \text {Final momentum after collision} &= (m_1 + m_2) v \\ \therefore v &= \frac {m_1u_1}{m_1 + m_2} \dots (i) \\ \frac {\text {K.E. before collision}}{\text {K.E. after collision}} &= \frac {\frac 12 m_1u_1^2}{\frac 12 (m_1 + m_2)v^2 } = \frac {m_1u_1^2}{(m_1+m_2) \left (\frac {m_1u_1}{m_1+m_2}\right )^2} = \frac {m_1+m_2}{m_1} >1 \\ \therefore \text {K.E. before collision} &> \text {K.E. after collision} \end{align*}\n\n#### Coefficient of Restitution\n\nThe ratio of the relative velocity of separation to the relative velocity of approach is a constant, and this constant is called the coefficient of restitution. It is denoted by e.\n\n$$e = \frac {v_2 - v_1}{u_1-u_2}$$\n\n1. For a perfectly elastic collision, e = 1\n2. For a perfectly inelastic collision, e = 0\n3. For a super-elastic collision, e > 1\nIn general, we have 0 < e < 1.\n##### Things to remember\n• From the principle of conservation of momentum\n\nm1u1+m2u2 = m1v1+m2v2\n\nm1(u1-v1) = m2(v2-u2)\n\n• Characteristics of Elastic Collision\n\n1. The momentum is conserved.\n2. Kinetic energy is conserved.\n3. Total energy is conserved.\n4. Forces involved in the interaction are conservative in nature.\n5. Mechanical energy is not converted into other forms of energy.\n\n##### Videos for Collisions", "##### Elastic collision in one dimension", "" ]
[ null, "https://certify.alexametrics.com/atrk.gif", null, "https://kullabs.com/img/abroadstudy.png", null, "https://kullabs.com/uploads/058.jpg", null, "https://img.youtube.com/vi/4U_MRadVssw/0.jpg", null, "https://img.youtube.com/vi/rPdJ-dadPGg/0.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8252496,"math_prob":0.9946506,"size":4435,"snap":"2021-31-2021-39","text_gpt3_token_len":1400,"char_repetition_ratio":0.18415707,"word_repetition_ratio":0.026548672,"special_character_ratio":0.33235624,"punctuation_ratio":0.14516129,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99877715,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,3,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-28T19:01:31Z\",\"WARC-Record-ID\":\"<urn:uuid:555b34f5-7a73-4b85-ac97-d3439596df46>\",\"Content-Length\":\"277444\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:34b6a6f2-4c57-45b7-997b-98c1b263193d>\",\"WARC-Concurrent-To\":\"<urn:uuid:c450aa49-2ebc-4068-b65f-79b052431063>\",\"WARC-IP-Address\":\"202.51.75.81\",\"WARC-Target-URI\":\"https://kullabs.com/class-11/physics-11/work-energy-and-power-11/collisions\",\"WARC-Payload-Digest\":\"sha1:GIIQAQ5FLOFGV2RY2ZT2PTOEXBRFXBZE\",\"WARC-Block-Digest\":\"sha1:BK7AFIMAGGIBFABAZJPCH4STU6AGGOOH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780060882.17_warc_CC-MAIN-20210928184203-20210928214203-00038.warc.gz\"}"}
https://mathhelpboards.com/threads/matrix-division.8532/
[ "# matrix division\n\n#### find_the_fun\n\n##### Active member\nI was reading my computer graphics textbook and under frequently asked questions one was \"why is there no vector division?\" and it said \"it turns out there is no 'nice' way to divide vectors\". That's not a very good explanation. Why is it that matrices can't be divided?\n\n#### Ackbach\n\n##### Indicium Physicus\nStaff member\nYou can sort of divide square matrices. Suppose you have $Ax=b$, where $A$ is a matrix and $x,b$ are vectors. Then you left multiply both sides by $A^{-1}$ (assuming it exists, which it will so long as $\\det(A)\\not=0$). Then you get $A^{-1}Ax=A^{-1}b$, and since $A^{-1}A=I$, then you have $Ix=A^{-1}b$, or $x=A^{-1}b$. It is sometimes possible (though not the most efficient method) to solve a linear system this way.\n\nHowever, this sort of inverse only works with square matrices, because you need $A^{-1}A$ to be the right size. Since a vector (aside from 1 x 1 matrices, also known as numbers) is not a square matrix, you cannot do this kind of inversion. The key here is that you're trying to achieve some sort of multiplicative identity, $I$ in this case. You can't do that with non-square matrices like vectors.\n\n#### Deveno\n\n##### Well-known member\nMHB Math Scholar\nThe answer is: you CAN, but only in certain dimensions, under certain limited circumstances.\n\nFirst of all, for \"division\" to even make sense, you need some kind of multiplication, first. And this multiplication has to be of the form:\n\nvector times vector = same kind of vector.\n\nIt turns out that this is only possible in certain dimensions: 1,2,4 (and if you allow certain \"strangenesses\" 8 and 16). This is a very \"deep\" theorem, due to Frobenius, and requires a bit of high-powered algebra to prove.\n\nNow matrices only have such a multiplication when they are nxn (otherwise we get:\n\nmatrix times matrix = matrix of different size, which turns out to matter).\n\nHowever, it turns out we can have \"bad matrices\", like so:\n\n$AB = 0$ where neither $A$ nor $B$ are the 0-matrix. 
For example:\n\n$A = \\begin{bmatrix}1&0\\\\0&0 \\end{bmatrix}$\n\n$B = \\begin{bmatrix}0&0\\\\0&1 \\end{bmatrix}$\n\nNow suppose, just for the sake of argument, we had a matrix we could call:\n\n$\\dfrac{1}{A}$.\n\nSuch a matrix should satisfy:\n\n$\\dfrac{1}{A}A = I$, the identity matrix.\n\nThen:\n\n$B = IB = \\left(\\dfrac{1}{A}A\\right)B = \\dfrac{1}{A}(AB) = \\dfrac{1}{A}0 = 0$\n\nwhich is a contradiction, since $B \\neq 0$\n\nIn other words, \"dividing by such a matrix\" is rather like dividing by zero, it leads to nonsense.\n\nIt turns out the the condtition:\n\n$AB = 0, A,B \\neq 0$\n\nis equivalent to:\n\n$Av = 0$ for some vector $v \\neq 0$.\n\nLet's see why this is important by comparing matrix multiplication with scalar multiplication:\n\nIf $rA = rB$, we have:\n\n$\\dfrac{1}{r}(rA) = \\left(\\dfrac{1}{r}r\\right)A = 1A = A$\n\nand also:\n\n$\\dfrac{1}{r}(rA) = \\dfrac{1}{r}(rB) = \\left(\\dfrac{1}{r}r\\right)B = 1B = B$\n\nprovided $r \\neq 0$ (which is almost every scalar).\n\nThis allows us to conclude $A = B$, in other words, the assignment:\n\n$A \\to rA$ is one-to-one.\n\nHowever, if we take matrices:\n\n$RA = RB$ does NOT imply $A = B$, for example let\n\n$R = \\begin{bmatrix} 1&0\\\\0&0 \\end{bmatrix}$\n\n$A = \\begin{bmatrix} 0&0\\\\0&1 \\end{bmatrix}$\n\n$B = \\begin{bmatrix} 0&0\\\\0&2 \\end{bmatrix}$\n\nThen we see, $RA = RB = 0$, but clearly $A$ and $B$ are different matrices.\n\nSo \"left-multiplication by a matrix\" is no longer 1-1, we means we can't uniquely \"undo\" it (which is what, at its heart, \"division\" is: the \"un-doing\" of multiplication).\n\nI hope this made sense to you.\n\n#### Evgeny.Makarov\n\n##### Well-known member\nMHB Math Scholar\nThe answer is: you CAN, but only in certain dimensions, under certain limited circumstances.\n\n...\n\nvector times vector = same kind of vector.\n\nIt turns out that this is only possible in certain dimensions: 1,2,4 (and if you allow certain \"strangenesses\" 8 and 16).\n\n...\n\nIn other words, \"dividing by such a matrix\" is rather like dividing by zero, it leads to nonsense.\n\n...\n\nSo \"left-multiplication by a matrix\" is no longer 1-1, we means we can't uniquely \"undo\" it (which is what, at its heart, \"division\" is: the \"un-doing\" of multiplication).\nAnd despite all this, one can divide by almost all square matrices of any dimension, and not just having 1, 2, 4, 8 or 16 components.\n\n#### Klaas van Aarsen\n\n##### MHB Seeker\nStaff member\nThere's a programming language called APL, which was designed for applying math.\n\nRegular division is denoted a÷b.\nThe same operator is used for the reciprocal: ÷b means 1/b.\n\nTypically operations for matrices are denoted with a square block around the operator.\nIn particular the matrix inverse is ⌹B.\nAnd matrix division is: A⌹B. This means A multiplied by the inverse of B." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9710037,"math_prob":0.99855137,"size":345,"snap":"2021-43-2021-49","text_gpt3_token_len":88,"char_repetition_ratio":0.09090909,"word_repetition_ratio":0.0,"special_character_ratio":0.27826086,"punctuation_ratio":0.07936508,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998666,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-16T15:19:40Z\",\"WARC-Record-ID\":\"<urn:uuid:50d8754e-70b9-43a6-8ec9-9be801d8ad0a>\",\"Content-Length\":\"73276\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f1f87471-c87b-4058-95c8-e8f76aa50492>\",\"WARC-Concurrent-To\":\"<urn:uuid:b354735a-2019-444d-94c8-a0b57dca53b7>\",\"WARC-IP-Address\":\"50.31.99.218\",\"WARC-Target-URI\":\"https://mathhelpboards.com/threads/matrix-division.8532/\",\"WARC-Payload-Digest\":\"sha1:OEXDCYPUQXCRTZASHSG4VJBUA3BK7RRL\",\"WARC-Block-Digest\":\"sha1:5ZDUD677O7RH3WQT7KLVKCJ4WC2KPPLJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323584886.5_warc_CC-MAIN-20211016135542-20211016165542-00242.warc.gz\"}"}
https://zbmath.org/?q=an:0787.17003
[ "# zbMATH — the first resource for mathematics\n\nOn matrix algebras with two generators and on embedding of PI-algebras. (English. Russian original) Zbl 0787.17003\nRuss. Math. Surv. 47, No. 4, 216-217 (1992); translation from Usp. Mat. Nauk 47, No. 4(286), 199-200 (1992).\nLet $$R$$ be any algebra with $$k$$ generators, over a commutative- associative ring $$\\Phi$$ with 1; and let $$R^ \\#$$ be the algebra obtained from $$R$$ by adjoining an identity. Also, for $$x$$ a real number, let $$m(x)$$ denote the smallest integer $$m$$ such that $$m\\geq x$$. Then for $$n\\geq 2m(\\sqrt{k})+1$$, the matrix algebra $$M_ n(R^ \\#)$$ is generated by two elements.\nThis result has the following applications: (1) If $$R$$ is a finitely- generated associative PI-algebra, then $$R$$ can be embedded in an associative PI-algebra with two generators. (2) If $$R$$ is a finitely- generated special Jordan PI-algebra and $$1/2\\in\\Phi$$, then $$R$$ can be embedded in a special Jordan PI-algebra with two generators.\n\n##### MSC:\n 17A99 General nonassociative rings 16R20 Semiprime p.i. rings, rings embeddable in matrices over commutative rings 17C05 Identities and free Jordan structures\nFull Text:" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.76249546,"math_prob":0.99968004,"size":1468,"snap":"2021-43-2021-49","text_gpt3_token_len":463,"char_repetition_ratio":0.13797814,"word_repetition_ratio":0.09259259,"special_character_ratio":0.34128064,"punctuation_ratio":0.19417475,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998934,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-27T11:08:15Z\",\"WARC-Record-ID\":\"<urn:uuid:d7d9b6f8-138c-420c-a4d8-6ca8ce51e95f>\",\"Content-Length\":\"47795\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:981b0066-747c-4561-9f52-490c6df06864>\",\"WARC-Concurrent-To\":\"<urn:uuid:c47ad9e9-d9ab-468c-b4ed-0c4714dc1618>\",\"WARC-IP-Address\":\"141.66.194.2\",\"WARC-Target-URI\":\"https://zbmath.org/?q=an:0787.17003\",\"WARC-Payload-Digest\":\"sha1:L7FSNE2MYL2VPHYBHGI2TUXJVOIYTIB5\",\"WARC-Block-Digest\":\"sha1:VP5GTZL37CFGLISOLUP3GSXBEOTFSLB3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358180.42_warc_CC-MAIN-20211127103444-20211127133444-00256.warc.gz\"}"}
https://iksinc.online/2015/06/23/how-to-use-words-co-occurrence-statistics-to-map-words-to-vectors/?like_comment=392&_wpnonce=33bf9147eb
[ "# How to Use Words Co-Occurrence Statistics to Map Words to Vectors\n\nIn my earlier post, I had described a simplified view of how Word2Vec method uses neural learning to obtain vector representations for words given a large corpora. In this post I will describe another approach for generating high-dimensional representations of words via looking at co-occurrences of words. This method is claimed to be much faster as it avoids time consuming neural learning and yields comparable results.\n\nWords Co-occurrence Statistics\n\nWords co-occurrence statistics describes how words occur together that in turn captures the relationships between words. Words co-occurrence statistics is computed simply by counting how two or more words occur together in a given corpus. As an example of words co-occurrence, consider a corpus consisting of the following documents:\n\npenny wise and pound foolish\n\na penny saved is a penny earned\n\nLetting count(wnext|wcurrent) represent how many times word wnext follows the word wcurrent, we can summarize co-occurrence statistics for words “a” and “penny” as:\n\n a and earned foolish is penny pound saved wise a 0 0 0 0 0 2 0 0 0 penny 0 0 1 0 0 0 0 1 1\n\nThe above table shows that “a” is followed twice by “penny” while words “earned”, “saved”, and “wise” each follows “penny” once in our corpus. Thus, “earned” is one out of three times probable to appear after “penny.” The count shown above is called bigram frequency; it looks into only the next word from a current word. Given a corpus of N words, we need a table of size NxN to represent bigram frequencies of all possible word-pairs. Such a table is highly sparse as most frequencies are equal to zero. In practice, the co-occurrence counts are converted to probabilities. This results in row entries for each row adding up to one in the co-occurrence matrix.\n\nThe concept of looking into words co-occurrences can be extended in many ways. For example, we may count how many times a sequence of three words occurs together to generate trigram frequencies. We may even count how many times a pair of words occurs together in sentences irrespective of their positions in sentences. Such occurrences are called skip-bigram frequencies. Because of such variations in how co-occurrences are specified, these methods in general are known as n-gram methods. The term context window is often used to specify the co-occurrence relationship. For bigrams, the context window is asymmetrical one word long to the right of the current word in co-occurrence counting. For trigrams, it is asymmetrical and two words long. In words to vector conversion approach via co-occurrence, it turns out that a symmetrical context window looking at one preceding word and one following word for computing bigram frequencies gives better word vectors.\n\nHellinger Distance\n\nHellinger distance is a measure of similarity between two probability distributions. Given two discrete probability distributions P = (p1, . . . , pk) and Q = (q1, . . . , qk), the Hellinger distance H(P, Q) between the distributions is defined as:", null, "$H(P, Q) = \\frac{1}{\\sqrt{2}}\\sqrt{{\\displaystyle\\sum_{i=1}^k}(\\sqrt{p_i}-\\sqrt{q_i})^2}$\n\nHellinger distance is a metric satisfying triangle inequality. The reason for including", null, "$\\sqrt(2)$ in the definition of Hellinger distance is to ensure that the distance value is always between 0 and 1. 
When comparing a pair of discrete probability distributions the Hellinger distance is preferred because P and Q are vectors of unit length as per Hellinger scale.\n\nWord Vectors Generation\n\nThe approach in a nutshell is to create an initial vector representation for each word using the co-occurrences of a subset of words from the corpus. This subset of words is called context words. The initial vectors produced in this manner have dimensionality of few thousands with an underlying assumption that the meaning of each word can be captured through its co-occurrence pattern with context words. Next, a dimensionality reduction via principal component analysis (PCA) is carried out to generate final word vectors of low dimensionality, typically 50 or 100 dimensions.  The diagram below outlines the different stages of the word vector generation process.", null, "Starting with a given corpus, the first step is to build a dictionary. It is preferable to select only those lower case words for inclusion in the dictionary that occur beyond certain number of times. Also to deal with numbers, it is a good idea to replace all numbers with a special character string representing all numbers. This step generally leads to a manageable dictionary size,", null, "$N_D$, of few hundreds of thousands of words. The next step is to create a list of context words. These are the words only whose appearance in the context window is counted to generate co-occurrence matrix. In terms of pattern recognition analogy, one can view context words as raw features whose presence at different locations (dictionary words) is being detected. Looking at our small corpus example, let us consider “earned”, “foolish”, “penny”, and “saved” as context words. In that case, the bigram frequencies for words “a” and “penny” would be given by the following table:\n\n earned foolish penny saved a 0 0 1 0 penny 1 0 0 1\n\nA typical choice for context words is to choose top 5 to 10% most frequent words resulting in", null, "$N_C$ words. The next step is to calculate", null, "$N_DxN_C$ matrix of co-occurrence probabilities. Typically, a symmetrical window of size three centered at the current word is used to count co-occurrences of context words appearing within the window. The SQRT(2) operation consists of simply dividing every co-occurrence matrix entry by", null, "$\\sqrt(2)$ to ensure the range of Hellinger distance between 0-1. The final step is to apply principal component analysis (PCA) to minimize the reconstruction error measured in terms of Hellinger distance between the original and reconstructed vectors. Since the co-occurrence matrix is not a square matrix, PCA is applied via singular value decomposition (SVD) to extract top 50 or 100 eigenvalues and eigenvectors and transform the raw vectors of the co-occurrence matrix to generate final word vectors. See an example in the comments section.\n\nHow good are Word Embeddings?\n\nYou can check how good is the above approach for word embedding by finding the top 10 nearest neighbors of a word that you can input at the following site:", null, "For example, I got the following list of nearest neighbors for “baseball.” Pretty neat!", null, "The approach briefly described above is not the only approach for word embeddings via co-occurrence statistics. There are few other methods as well as a website where you can compare the performance of different word embedding methods as well as make a two-dimensional mapping of a set of words based on their similarity. 
One such plot is shown below.", null, "An Interesting Application of Word Embeddings\n\nThe work on word embeddings by several researchers has shown that word embeddings well capture word semantics giving rise to searches where concepts can be modified via adding or subtracting word vectors to obtain more meaningful results. One interesting application along these lines is being developed by Stitch Fix Technology for online shopping wherein the vector representations of merchandise is modified by word vectors of keywords to generate recommendations for customers as illustrated by the following example.", null, "We can expect many more applications of word embeddings in near future as researchers try to integrate word vectors with similar representations for images, videos, and other digital artifacts.\n\n## 17 thoughts on “How to Use Words Co-Occurrence Statistics to Map Words to Vectors”\n\n1.", null, "libin says:\n\nHi, Krishan!\n\nYour website design give me a comfortable experience, it looks like a paper!\nBut what I want to know is that what the “Dictionary” mean in word vector generation process, is it mean the stemmed text or something else?\nAnd it will be great pleasure if could provide me the original paper!\nThank you~\n\nLike\n\n1.", null, "Krishan says:\n\nDictionary is all the words in the corpus after stemming. There is no paper on this topic by me. All sources used in the write-up are linked. So you can check there for further details.\n\nLike\n\n2.", null, "Prasad says:\n\nHi Krishan: I hope you can give me some pointers here..Learning word2vec/doc2vec resources it looks like the corpus has to be english words with no numbers and special symbols. I am actually trying to see if doc2vec can be used to predict related technical cases in a technical support case corpus. The corpus has a lot of error codes(numbers) and stack trace like “com.java.sdk…” etc etc All the examples I am seeing are taking out the numbers, punctuation, any symbols and coming up with just pure words( after getting rid of stop words)..I am seeing different results with no cleaning of text, cleaning and ending up with just english words etc.. I have just started but probably I am little confused at this stage..Is it mandatory that the text does not contain numbers? Does it make a difference in vector representation if we encode “com.java.sdk…” as one word ( does that make sense at all in VSM) or split into com, java, sdk as separate words..Any inputs are appreciated..Thanks\n\nLike\n\n1.", null, "Krishan says:\n\nThanks for visiting my blog. The whole idea of word2vec is to capture meanings and contexts of words. Since there is no ambiguity in what a number means, there is no need for vector representation for numbers. That is why we filter them out. Insofar, as encoding com.java.sdk as one word or three words is concerned, it might depend upon what you are planning to do with your representation. Java as a stand alone word implies programming as well as coffee and a place in Indonesia. Thus context in this case will be important. On the other hand, com.java.sdk has no ambiguity and might work well if the corpus is technical.\n\nHope I have been able to provide some help/clarification. Good luck with your project!\n\nKrishan\n\nLike\n\n3.", null, "Vaibhav says:\n\nI am referring this statement in your article ie: “In practice, the co-occurrence counts are converted to probabilities. 
” Can you please elaborate it with an example.\n\nLike\n\n1.", null, "Krishan says:\n\nThe probabilities are calculated by first taking the total of all elements of the co-occurrence matrix. Next you divide each co-occurrence matrix element by the total to get probabilities.\n\nLike\n\n4.", null, "Esfandiar Bandari says:\n\nHi Krishan,\nPlease excuse my ignorance. But you have the dictionary words in the rows, the context words in the column. Lets say it is M by N matrix and we call it Q. After doing the SVD and keeping, lets say, P non-zero singular values and their corresponding left and right Eigen vectors, you end up with a new matrix; lets call it Q2.\nNow how do you use the Hellinger distance to find the closets neighbors of the word Baseball?\nIs Baseball part of the dictionary words, and you take the row corresponding to Baseball and Q2 and find the nearest Rows to it using Hellinger?\nOr is it in the context words and you do that above with the column corresponding to Baseball in Q2?\n\nLike\n\n1.", null, "Krishan says:\n\nYou compare the row representing Baseball with other rows after SVD to find top ten or twenty closet words. You do this by using the Hellinger distance except the square root by two part because that is already included in getting SVD.\n\nLike\n\n5.", null, "Ahmed said says:\n\nhi,\nfirst thank you a lot\nthen, could you give us an example to explain what really happen\nlike your previous post\nthanks.\n\nLike\n\n1.", null, "Krishan says:\n\nI will try to give a simple example. Let us say that we have determined the cooccurence matrix that looks like as below. In this matrix the counts have been converted to probabilities so that sum of probabilities along each row equal 1.\n\nbreeds computing cover food is meat named of\ncat 0.04 0.00 0.00 0.13 0.53 0.02 0.18 0.10\ndog 0.11 0.00 0.00 0.12 0.39 0.06 0.15 0.17\ncloud 0.00 0.29 0.19 0.00 0.12 0.00 0.00 0.40\n\nNext, we take the square root of every cooccurence matrix entry to use the Hellinger distance (See the post). Thus the above matrix reduces to:\n\nbreeds computing cover food is meat named of\ncat 0.20 0.00 0.00 0.36 0.73 0.14 0.42 0.32\ndog 0.33 0.00 0.00 0.35 0.62 0.24 0.39 0.41\ncloud 0.00 0.54 0.44 0.00 0.35 0.00 0.00 0.63\n\nWe now apply SVD(PCA) on this. The SVD operation yields a diagonal matrix of singular values, a matrix of left singular vectors (U matrix) and a matrix of right singular vectors. These are shown below:\n\nDiagonal matrix of singular values\n[,1] [,2] [,3]\n[1,] 1.5 0.00 0.00\n[2,] 0.0 0.82 0.00\n[3,] 0.0 0.00 0.16\n\nU matrix of left singular vectors\n[,1] [,2] [,3]\n[1,] -0.63 -0.34 -0.701\n[2,] -0.63 -0.30 0.712\n[3,] -0.45 0.89 -0.023\n\nNote that columns of U matrix are orthogonal to each other; you can verify it by taking dot product between any column pair.\n\nThe V matrix of right singular vectors\n\n[,1] [,2] [,3]\n[1,] -0.22 -0.20 0.615\n[2,] -0.16 0.59 -0.081\n[3,] -0.13 0.48 -0.065\n[4,] -0.29 -0.27 -0.039\n[5,] -0.66 -0.15 -0.472\n[6,] -0.16 -0.15 0.483\n[7,] -0.34 -0.32 -0.138\n[8,] -0.49 0.41 0.366\n\nThe columns of this matrix are also orthogonal. At this point, we have mapped all words in a 3-dimensional space. Each row of U matrix is a 3-dimensional vector representation of word “cat”, “dog”, and “cloud.” In the same manner each row of V matrix is a 3-dimensional vector representation for respective context words. 
Typically in a practical setting, we will have 10,000 or higher dimensional space and we would want that space to be reduced to 50 or 100 dimensions to obtain word vectors. In the present example, lets say we want to reduce the dimensionality to 2 by discarding the smallest singular value. To obtain the reduce representation, all we have to do is to set the third singular value to 0 in the diagonal matrix and calculate the matrix product of U and modified diagonal matrix. This will give the following result after discarding the third column of all zeros:\n\nDim1 Dim2\ncat -0.96 -0.27\ndog -0.96 -0.25\ncloud -0.68 0.73\n\nIn a similar fashion, we can perform multiplication of V matrix with modified diagonal matrix to get the following representation for context words:\n\nbreeds computing cover food is meat named of\nDim1 -0.34 -0.24 -0.20 -0.45 -1.01 -0.24 -0.51 -0.74\nDim2 -0.17 0.48 0.39 -0.22 -0.12 -0.12 -0.26 0.33\n\nThe above sequence of calculations illustrate the basic steps that are performed to generate vector representations based on cooccurences of words.\n\nLike\n\n1.", null, "Florin Lazar says:\n\nHi. Thank you for this article. I have a few questions:\nQ1: After you applied SVD on the cooccurence matrix the space in which the words were represented had three dimensions/ Why is that? On what factor does the number of resulting dimensions depend?\nQ2: What happens if I want the representation of a word that is nor in the initial dictionary?\nQ3: If the context is represented by the first 5-10% of words in the dictionary I think it is highly probable that many of them would be articles, prepositions and conjuctions which are not that relevant. Am I right and if so, how do you propose to address this problem?\n\nThank you again and have a beautiful day!\n\nLike\n\n2.", null, "Krishan says:\n\nQ1. We got three dimensions in the example because our co-occurrence matrix is of size 3 x 8. If it was p x q, where p is smaller than q, then it would be a p-dimensional representation\nQ2. If a word is not in the initial dictionary, it is possible to map it in the representation in a manner similar to what is done in latent semantic indexing. You can find information on this on page 563 in “Foundations of Statistical Natural Language Processing” by Manning and Schutze\nQ3. Context is represented by surrounding words. Words representing articles, prepositions and conjunctions can be (and usually are) filtered out using stop-word filtering.\n\nHope I have been of some help. Good luck!\n\nLike\n\n6.", null, "Jay Shin says:\n\nhi! Great post!\nCan you explain how to create symmetrical context windows for co-occurrence matrices?\n\nAlso, can you help to explain the relation between the normalized co-occurrence matrix and the covariance matrix?\n\nLike\n\n1.", null, "Krishan says:\n\nThanks. I am glad you liked it. A symmetrical context window will have equal number of even words on both sides of the focus word and you will count the joint occurrences of the words in the window with the word at the center of the window.\nA covariance matrix captures the relationships between pairs of variables. For example, it captures whether one variable value rises/falls/unchanged with respect to another variable. A co-occurrence matrix on the other hand simply captures whether two words/events occur together or not.\n\nLike\n\n7.", null, "karthik says:\n\nHey great read! Is there a code snippet for this so as to make the understanding solid?\n\nLike\n\n1.", null, "Krishan says:\n\nGlad you liked it. 
I don’t have a code snippet but if you scroll down the comments, I have a numerical example that might help.\n\nLike" ]
[ null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://iksinc.files.wordpress.com/2015/06/wordvec.png", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://s0.wp.com/latex.php", null, "https://iksinc.files.wordpress.com/2015/06/screen-shot-2015-06-22-at-6-32-58-pm.png", null, "https://iksinc.files.wordpress.com/2015/06/screen-shot-2015-06-22-at-6-33-11-pm.png", null, "https://iksinc.files.wordpress.com/2015/06/screen-shot-2015-06-22-at-6-53-28-pm.png", null, "https://iksinc.files.wordpress.com/2015/06/slide1.png", null, "https://2.gravatar.com/avatar/2a737103e29e7c0770e820ca995784f2", null, "https://2.gravatar.com/avatar/5e572d17a1d8b321ef35581186b9df31", null, "https://0.gravatar.com/avatar/9d7ce79d261acbc73c406b3502ceb1cc", null, "https://2.gravatar.com/avatar/5e572d17a1d8b321ef35581186b9df31", null, "https://1.gravatar.com/avatar/7df9ba2d1803f56944ca75a5ffbe50e8", null, "https://2.gravatar.com/avatar/5e572d17a1d8b321ef35581186b9df31", null, "https://1.gravatar.com/avatar/d65bf506ff95b187e10c7c0003b7ed89", null, "https://2.gravatar.com/avatar/5e572d17a1d8b321ef35581186b9df31", null, "https://i0.wp.com/lh6.googleusercontent.com/-1dao1_qrSac/AAAAAAAAAAI/AAAAAAAAAGk/sqfMtEBYIgM/photo.jpg", null, "https://2.gravatar.com/avatar/5e572d17a1d8b321ef35581186b9df31", null, "https://0.gravatar.com/avatar/302622df3aec4d8b9f46dfcefeacb686", null, "https://2.gravatar.com/avatar/5e572d17a1d8b321ef35581186b9df31", null, "https://0.gravatar.com/avatar/95621801d5b3f3c1a681f1ad6cc66c6a", null, "https://2.gravatar.com/avatar/5e572d17a1d8b321ef35581186b9df31", null, "https://2.gravatar.com/avatar/83c0d13f37bd4c72b46d893f8d67422b", null, "https://2.gravatar.com/avatar/5e572d17a1d8b321ef35581186b9df31", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92561215,"math_prob":0.9190694,"size":16574,"snap":"2022-40-2023-06","text_gpt3_token_len":3953,"char_repetition_ratio":0.13892578,"word_repetition_ratio":0.01433178,"special_character_ratio":0.23657537,"punctuation_ratio":0.11706881,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9663727,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,9,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-25T17:40:22Z\",\"WARC-Record-ID\":\"<urn:uuid:aa67452f-c6b0-4f2c-8274-d4e5d1eb0da8>\",\"Content-Length\":\"168009\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4687630b-0434-4b2e-8d44-6046ee82bbb8>\",\"WARC-Concurrent-To\":\"<urn:uuid:a71471f6-8b35-42b8-b2f7-b4e11d9745b0>\",\"WARC-IP-Address\":\"192.0.78.25\",\"WARC-Target-URI\":\"https://iksinc.online/2015/06/23/how-to-use-words-co-occurrence-statistics-to-map-words-to-vectors/?like_comment=392&_wpnonce=33bf9147eb\",\"WARC-Payload-Digest\":\"sha1:RYY4L2JGE5I2QN3EZRBJ62FWBWV6L5IB\",\"WARC-Block-Digest\":\"sha1:H6TFNTFDCB3HLT765ZHNLOBB2OEKNLSS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334591.19_warc_CC-MAIN-20220925162915-20220925192915-00640.warc.gz\"}"}
https://tz.home.focus.cn/gonglue/aaf2de83cdaa10c4.html/
[ "", null, "|\n\n# win7搜索在哪?系统特点及安装步骤介绍\n\nWin7搜索在哪?社么是搜索?我们的电脑系统的桌面上,一般是有一个搜索的框框的,我们可以在这个搜索栏里面搜索自己想要的应用程序和软件。大部分的电脑系统的搜索栏是直接点击开始菜单按键就会出现的,但是新的win7系统并不是这样进行搜索的,在开始菜单里没有显示。下面,我们来看看Win7搜索在哪,如果显示以及系统的特点是什么。", null, "一、方法/步骤\n\n在电脑桌面点击左下方的开始,在弹出的面板上在所有程序下方会有搜索。例如输入QQ,会出现所有程序里面跟QQ有关的文件或程序,打开我的电脑,在箭头处会有搜索框。输入QQ会显示有在整个磁盘里面跟QQ有关的文件或程序。打开图中“我的文档”英文,在右上角也会有搜索框。输入QQ也能显示跟QQ有关的文件或程序\n\n二、系统特点\n\n1.Win764位旗舰版大幅缩减了Windows的启动时间,据实测,在2008年的中低端配置下运行,系统加载时间一般不超过20秒,这比其他版本Win7的40余秒相比,是一个很大的进步\n\n2.Win764位旗舰版将会让搜索和使用信息更加简单,包括本地、网络和互联网搜索功能,直观的用户体验将更加高级,还会整合自动化应用程序提交和交叉程序数据透明性。\n\n3.Win764位旗舰版包括了改进了的安全和功能合法性,还会把数据保护和管理扩展到外围设备。Win764位旗舰版改进了基于角色的计算方案和用户账户管理.", null, "三、系统特色\n\n1、精简与优化:采用最佳的精简与优化,全面提升计算机运行速度,充分保留原版性能和兼容性。\n\n2、安全与稳定:完全断网的情况下采用最新封装技术,确保系统安全稳定。\n\n3、多模式安装:支持光盘启动安装、PE下安装、WINDOWS下安装等。\n\n4、安装速度快,支持虚拟机中安装和刻录光碟。\n\n四、安装步骤\n\nxp下完整安装win7步骤:\n\n第一步:.取得C盘根目录下的boot的管理员权限,复制c:bootbootsect.exe至C盘根目录第二步:复制D:W7中boot文件夹、efi文件夹、bootmgr及bootmgr.efi至C盘根目录,会提示bcd无法覆盖,跳过", null, "第三步:在C盘根目录下新建sources文件夹,将D:W7Sourcesboot.wim复制过来\n\n第四步:管理员身份运行命令提示符运行,输入命令c:bootsect/nt60c:\n\n第五步:‘重启电脑后按F8进入系统修复,运行命令提示符\n\n第六步:输入:xcopyD:W7bootC:boot\n\n第七步:在输入:xcopyD:W7bootbootsect.exeC:(此bootsect.exe为64位)\n\n第八步:重启电脑,此时不要按F8,系统自动进入安装界面,选修复,选命令提示符\n\n第九步:输入:D:W7sourcessetup.exe开始安装", null, "Win7搜索在哪大家知道了吗?Win7的搜索一般都是在下方的搜索栏里面的,直接可以看到,不需要去菜单里面找。我们可以设置一个快捷键,快捷键也是非常的简单的,可以自己去了解一下。一般我们可以直接在电脑的下方开始菜单找到搜索,可以直接点击也可以直接的输入,操作很简单。win7的系统的功能是比较的丰富的,很多,大家能灵活运用最好。\n\n`声明:本文由入驻焦点开放平台的作者撰写,除焦点官方账号外,观点仅代表作者本人,不代表焦点立场错误信息举报电话: 400-099-0099,邮箱:[email protected],或点此进行意见反馈,或点此进行举报投诉。`", null, "A B C D E F G H J K L M N P Q R S T W X Y Z\nA - B - C - D - E\n• A\n• 鞍山\n• 安庆\n• 安阳\n• 安顺\n• 安康\n• 澳门\n• B\n• 北京\n• 保定\n• 包头\n• 巴彦淖尔\n• 本溪\n• 蚌埠\n• 亳州\n• 滨州\n• 北海\n• 百色\n• 巴中\n• 毕节\n• 保山\n• 宝鸡\n• 白银\n• 巴州\n• C\n• 承德\n• 沧州\n• 长治\n• 赤峰\n• 朝阳\n• 长春\n• 常州\n• 滁州\n• 池州\n• 长沙\n• 常德\n• 郴州\n• 潮州\n• 崇左\n• 重庆\n• 成都\n• 楚雄\n• 昌都\n• 慈溪\n• 常熟\n• D\n• 大同\n• 大连\n• 丹东\n• 大庆\n• 东营\n• 德州\n• 东莞\n• 德阳\n• 达州\n• 大理\n• 德宏\n• 定西\n• 儋州\n• 东平\n• E\n• 鄂尔多斯\n• 鄂州\n• 恩施\nF - G - H - I - J\n• F\n• 抚顺\n• 阜新\n• 阜阳\n• 福州\n• 抚州\n• 佛山\n• 防城港\n• G\n• 赣州\n• 广州\n• 桂林\n• 贵港\n• 广元\n• 广安\n• 贵阳\n• 固原\n• H\n• 邯郸\n• 衡水\n• 呼和浩特\n• 呼伦贝尔\n• 葫芦岛\n• 哈尔滨\n• 黑河\n• 淮安\n• 杭州\n• 湖州\n• 合肥\n• 淮南\n• 淮北\n• 黄山\n• 菏泽\n• 鹤壁\n• 黄石\n• 黄冈\n• 衡阳\n• 怀化\n• 惠州\n• 河源\n• 贺州\n• 河池\n• 海口\n• 红河\n• 汉中\n• 海东\n• I\n• J\n• 晋中\n• 锦州\n• 吉林\n• 鸡西\n• 佳木斯\n• 嘉兴\n• 金华\n• 景德镇\n• 九江\n• 吉安\n• 济南\n• 济宁\n• 焦作\n• 荆门\n• 荆州\n• 江门\n• 揭阳\n• 金昌\n• 酒泉\n• 嘉峪关\nK - L - M - N - P\n• K\n• 开封\n• 昆明\n• 昆山\n• L\n• 廊坊\n• 临汾\n• 辽阳\n• 连云港\n• 丽水\n• 六安\n• 龙岩\n• 莱芜\n• 临沂\n• 聊城\n• 洛阳\n• 漯河\n• 娄底\n• 柳州\n• 来宾\n• 泸州\n• 乐山\n• 六盘水\n• 丽江\n• 临沧\n• 拉萨\n• 林芝\n• 兰州\n• 陇南\n• M\n• 牡丹江\n• 马鞍山\n• 茂名\n• 梅州\n• 绵阳\n• 眉山\n• N\n• 南京\n• 南通\n• 宁波\n• 南平\n• 宁德\n• 南昌\n• 南阳\n• 南宁\n• 内江\n• 南充\n• P\n• 盘锦\n• 莆田\n• 平顶山\n• 濮阳\n• 攀枝花\n• 普洱\n• 平凉\nQ - R - S - T - W\n• Q\n• 秦皇岛\n• 齐齐哈尔\n• 衢州\n• 泉州\n• 青岛\n• 清远\n• 钦州\n• 黔南\n• 曲靖\n• 庆阳\n• R\n• 日照\n• 日喀则\n• S\n• 石家庄\n• 沈阳\n• 双鸭山\n• 绥化\n• 上海\n• 苏州\n• 宿迁\n• 绍兴\n• 宿州\n• 三明\n• 上饶\n• 三门峡\n• 商丘\n• 十堰\n• 随州\n• 邵阳\n• 韶关\n• 深圳\n• 汕头\n• 汕尾\n• 三亚\n• 三沙\n• 遂宁\n• 山南\n• 商洛\n• 石嘴山\n• T\n• 天津\n• 唐山\n• 太原\n• 通辽\n• 铁岭\n• 泰州\n• 台州\n• 铜陵\n• 泰安\n• 铜仁\n• 铜川\n• 天水\n• 天门\n• W\n• 乌海\n• 乌兰察布\n• 无锡\n• 温州\n• 芜湖\n• 潍坊\n• 威海\n• 武汉\n• 梧州\n• 渭南\n• 武威\n• 吴忠\n• 乌鲁木齐\nX - Y - Z\n• X\n• 邢台\n• 徐州\n• 宣城\n• 厦门\n• 新乡\n• 许昌\n• 信阳\n• 襄阳\n• 孝感\n• 咸宁\n• 湘潭\n• 湘西\n• 西双版纳\n• 西安\n• 咸阳\n• 西宁\n• 仙桃\n• 西昌\n• Y\n• 运城\n• 营口\n• 盐城\n• 扬州\n• 鹰潭\n• 宜春\n• 烟台\n• 宜昌\n• 岳阳\n• 益阳\n• 永州\n• 阳江\n• 云浮\n• 玉林\n• 宜宾\n• 雅安\n• 玉溪\n• 延安\n• 榆林\n• 银川\n• Z\n• 张家口\n• 镇江\n• 舟山\n• 漳州\n• 淄博\n• 枣庄\n• 郑州\n• 周口\n• 驻马店\n• 株洲\n• 张家界\n• 
珠海\n• 湛江\n• 肇庆\n• 中山\n• 自贡\n• 资阳\n• 遵义\n• 昭通\n• 张掖\n• 中卫\n\n1室1厅1厨1卫1阳台\n\n1\n2\n3\n4\n5\n\n0\n1\n2\n\n1\n\n1\n\n0\n1\n2\n3", null, "", null, "", null, "报名成功,资料已提交审核", null, "A B C D E F G H J K L M N P Q R S T W X Y Z\nA - B - C - D - E\n• A\n• 鞍山\n• 安庆\n• 安阳\n• 安顺\n• 安康\n• 澳门\n• B\n• 北京\n• 保定\n• 包头\n• 巴彦淖尔\n• 本溪\n• 蚌埠\n• 亳州\n• 滨州\n• 北海\n• 百色\n• 巴中\n• 毕节\n• 保山\n• 宝鸡\n• 白银\n• 巴州\n• C\n• 承德\n• 沧州\n• 长治\n• 赤峰\n• 朝阳\n• 长春\n• 常州\n• 滁州\n• 池州\n• 长沙\n• 常德\n• 郴州\n• 潮州\n• 崇左\n• 重庆\n• 成都\n• 楚雄\n• 昌都\n• 慈溪\n• 常熟\n• D\n• 大同\n• 大连\n• 丹东\n• 大庆\n• 东营\n• 德州\n• 东莞\n• 德阳\n• 达州\n• 大理\n• 德宏\n• 定西\n• 儋州\n• 东平\n• E\n• 鄂尔多斯\n• 鄂州\n• 恩施\nF - G - H - I - J\n• F\n• 抚顺\n• 阜新\n• 阜阳\n• 福州\n• 抚州\n• 佛山\n• 防城港\n• G\n• 赣州\n• 广州\n• 桂林\n• 贵港\n• 广元\n• 广安\n• 贵阳\n• 固原\n• H\n• 邯郸\n• 衡水\n• 呼和浩特\n• 呼伦贝尔\n• 葫芦岛\n• 哈尔滨\n• 黑河\n• 淮安\n• 杭州\n• 湖州\n• 合肥\n• 淮南\n• 淮北\n• 黄山\n• 菏泽\n• 鹤壁\n• 黄石\n• 黄冈\n• 衡阳\n• 怀化\n• 惠州\n• 河源\n• 贺州\n• 河池\n• 海口\n• 红河\n• 汉中\n• 海东\n• I\n• J\n• 晋中\n• 锦州\n• 吉林\n• 鸡西\n• 佳木斯\n• 嘉兴\n• 金华\n• 景德镇\n• 九江\n• 吉安\n• 济南\n• 济宁\n• 焦作\n• 荆门\n• 荆州\n• 江门\n• 揭阳\n• 金昌\n• 酒泉\n• 嘉峪关\nK - L - M - N - P\n• K\n• 开封\n• 昆明\n• 昆山\n• L\n• 廊坊\n• 临汾\n• 辽阳\n• 连云港\n• 丽水\n• 六安\n• 龙岩\n• 莱芜\n• 临沂\n• 聊城\n• 洛阳\n• 漯河\n• 娄底\n• 柳州\n• 来宾\n• 泸州\n• 乐山\n• 六盘水\n• 丽江\n• 临沧\n• 拉萨\n• 林芝\n• 兰州\n• 陇南\n• M\n• 牡丹江\n• 马鞍山\n• 茂名\n• 梅州\n• 绵阳\n• 眉山\n• N\n• 南京\n• 南通\n• 宁波\n• 南平\n• 宁德\n• 南昌\n• 南阳\n• 南宁\n• 内江\n• 南充\n• P\n• 盘锦\n• 莆田\n• 平顶山\n• 濮阳\n• 攀枝花\n• 普洱\n• 平凉\nQ - R - S - T - W\n• Q\n• 秦皇岛\n• 齐齐哈尔\n• 衢州\n• 泉州\n• 青岛\n• 清远\n• 钦州\n• 黔南\n• 曲靖\n• 庆阳\n• R\n• 日照\n• 日喀则\n• S\n• 石家庄\n• 沈阳\n• 双鸭山\n• 绥化\n• 上海\n• 苏州\n• 宿迁\n• 绍兴\n• 宿州\n• 三明\n• 上饶\n• 三门峡\n• 商丘\n• 十堰\n• 随州\n• 邵阳\n• 韶关\n• 深圳\n• 汕头\n• 汕尾\n• 三亚\n• 三沙\n• 遂宁\n• 山南\n• 商洛\n• 石嘴山\n• T\n• 天津\n• 唐山\n• 太原\n• 通辽\n• 铁岭\n• 泰州\n• 台州\n• 铜陵\n• 泰安\n• 铜仁\n• 铜川\n• 天水\n• 天门\n• W\n• 乌海\n• 乌兰察布\n• 无锡\n• 温州\n• 芜湖\n• 潍坊\n• 威海\n• 武汉\n• 梧州\n• 渭南\n• 武威\n• 吴忠\n• 乌鲁木齐\nX - Y - Z\n• X\n• 邢台\n• 徐州\n• 宣城\n• 厦门\n• 新乡\n• 许昌\n• 信阳\n• 襄阳\n• 孝感\n• 咸宁\n• 湘潭\n• 湘西\n• 西双版纳\n• 西安\n• 咸阳\n• 西宁\n• 仙桃\n• 西昌\n• Y\n• 运城\n• 营口\n• 盐城\n• 扬州\n• 鹰潭\n• 宜春\n• 烟台\n• 宜昌\n• 岳阳\n• 益阳\n• 永州\n• 阳江\n• 云浮\n• 玉林\n• 宜宾\n• 雅安\n• 玉溪\n• 延安\n• 榆林\n• 银川\n• Z\n• 张家口\n• 镇江\n• 舟山\n• 漳州\n• 淄博\n• 枣庄\n• 郑州\n• 周口\n• 驻马店\n• 株洲\n• 张家界\n• 珠海\n• 湛江\n• 肇庆\n• 中山\n• 自贡\n• 资阳\n• 遵义\n• 昭通\n• 张掖\n• 中卫", null, "", null, "• 手机", null, "• 分享\n• 设计\n免费设计\n• 计算器\n装修计算器\n• 入驻\n合作入驻\n• 联系\n联系我们\n• 置顶\n返回顶部" ]
[ null, "https://a.gdt.qq.com/pixel", null, "https://t2.focus-img.cn/sh740wsh/zx/duplication/b1d52a7f-1286-4894-9bbf-ec262c4e5a74.JPEG", null, "https://t2.focus-img.cn/sh740wsh/zx/duplication/84cb0776-3948-411a-a371-906a57ff460a.JPEG", null, "https://t2.focus-img.cn/sh740wsh/zx/duplication/4032291c-9d3d-4523-af66-371cef0f9e76.JPEG", null, "https://t2.focus-img.cn/sh740wsh/zx/duplication/c301da27-a9ed-459b-8ed2-4abe3d5d8ee3.JPEG", null, "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAABLEAAAAKCAYAAABL/czxAAACeklEQVR4Xu3cwUoCYRTF8RmSmUUE6QOICwkqkCFECJ/Dp/Q53MYgVBAuxAfQIFrMYBCjTIylQkTq4reVD5TLuef8vzvODXu93v18Pn+YTCZZEARBp9M5j+P4ptVqPQyHw4/isyRJLqMounZOXehAf/ADPikX5CU+wE04ERe7L7gfuTe6J5sfmJccY44UttvtuF6vdxaLxbj8Af1+/2K5XF41m820BFXngkBdgoAO6KAIKzqgAzpYD7LkAj+ggzXAywV+QAdywb3Rfdr8wFzlEHOkUOAIHIEjcASOwDlE4Hhg4gGRQYdBB+7EnbgTd+JO3Ik7/ZHoLw+CV0MsQAEoAAWgABSAAlAAir8AhQGVARWexJN4Ek/iSTyJJ/Hkf/NkWLzLPB6P3/eBR5Zl13mePzq3GUzqsr1B1UVdCuOiAzqgg92vWOkP/aE/9Ef5Kio/4Af8gB/wA/OI6monubA/F8Jut9uL4/h5NBq97RpkFYOuKIpunducrKvL9sBRF3UplzzyjZ8GrD/0h/7AG9UlqHyST8oFuSAX5IJcMI+o7iiXC/tzIRwMBmez2Syp1Wov+wZZzm0vpLqoSwEedEAHdLAbQPWH/tAf+qO8oPEDfsAP+AE/cO+uDmzkglz4bS6sdmIRDuHQAaAAFIACUHig8335Pj7AB/gAH+ADfIAP8AE+8MefbbtPj8WJX4vdj/UDfC9ABsgAGSADZIAMkAEyQD4lQMan+BSf4lN8ik/x6WnyaZgkyWWapq+lUU+n07ssy56qS9wbjcZdnufPzqkLHegPfjA445PtmA7ooBg40AEd0MH6jQa5wA/oYD34lAv8gA7kQrlr/b/84BMd0gjHDtit4gAAAABJRU5ErkJggg==", null, "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACQAAAAkCAYAAADhAJiYAAACwElEQVRYR+3XT0gUURwH8N97s5qiuEqEilb7lPWwO0g0S3WQsoMlWdHB9hB0kEiJwIjyFkIXCyQqD4V2CBI6bEGBCJZEf/BQsAMhdnAG3XVVVDr4B8XZ0ZlfTLXkbK2z7rogNHvcN+/3PvP9/VjeEkTkYAd9iA2y6IadkNW42gnZCVklYLX+/85Q2Odzu4JBeUckNOHxnNVUNUAovc0k6c5mqIy3bMLjOamp6itAzDYghNIOJsvtiVAZBUW83lpNUfoQIDcGIAA6l5d3aN/w8Nd/oTIGivD8kXVFGQDE/A0YJBx3ySVJz9JKaKalZVdpd3fUaiBj65M8f3BtdXUQAJymPZRerZDl7rRmKOz1nsZo9BHk5JxiIyMjVqgQz/OgKO8QcffGZ4nDcZONjj6w2r9py8IeT6Ouqr2AmEUA5mhh4fH9oiglKjohCFX6wsJ7BCg2YTiunUlShxXm59AnuqBFqqsPaCsrXxDgz42SkOlsp7O2XBRD8cWnBIGpi4sfALFs4xql9K5Llm8lg9kUZCyOV1XdA027ZnpbgHBOUdGx0mBwOvb9jM9XpszPf0QAl+lgjntYIUk3ksVYgowHQm73Y9T1y6aihMhZJSW1e4eG5iZraorXZmeNZNwmOKVPmCxf2QomKdBv1FPU9YtxqG+OggL/+tJSABC9cZheJstNW8UkDYKeHhrq7HyOut4Y1z7NNGO/folfsra2C9DcrGcOZFRubXWE+vtfIMCZRAcRgD7W0HAeurrWU8Ekn1Csut+fHRLF1whwIv5AAvCWCcI5CATUVDFbBwHAd78/d1kU+xHgaOxgAvApXxAa9gQCq+lgUgIZm+br6vIXxsffIMBhQsjnQsbqiwYHl9PFpAwyNk7V1zvXxsbuZ1VWXi8fGFjcDkxaoO0C/DWL9n97i2gzdkFLtaU2yCo5OyGrhH4AtD5LNJ/vw8QAAAAASUVORK5CYII=", null, "https://tz.home.focus.cn/gonglue/aaf2de83cdaa10c4.html/", null, "https://t1.focus-res.cn/front-pc/module/loupan-baoming/images/yes.png", null, "https://t1.focus-res.cn/front-pc/module/loupan-baoming/images/qrcode.png", null, 
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACQAAAAkCAYAAADhAJiYAAACwElEQVRYR+3XT0gUURwH8N97s5qiuEqEilb7lPWwO0g0S3WQsoMlWdHB9hB0kEiJwIjyFkIXCyQqD4V2CBI6bEGBCJZEf/BQsAMhdnAG3XVVVDr4B8XZ0ZlfTLXkbK2z7rogNHvcN+/3PvP9/VjeEkTkYAd9iA2y6IadkNW42gnZCVklYLX+/85Q2Odzu4JBeUckNOHxnNVUNUAovc0k6c5mqIy3bMLjOamp6itAzDYghNIOJsvtiVAZBUW83lpNUfoQIDcGIAA6l5d3aN/w8Nd/oTIGivD8kXVFGQDE/A0YJBx3ySVJz9JKaKalZVdpd3fUaiBj65M8f3BtdXUQAJymPZRerZDl7rRmKOz1nsZo9BHk5JxiIyMjVqgQz/OgKO8QcffGZ4nDcZONjj6w2r9py8IeT6Ouqr2AmEUA5mhh4fH9oiglKjohCFX6wsJ7BCg2YTiunUlShxXm59AnuqBFqqsPaCsrXxDgz42SkOlsp7O2XBRD8cWnBIGpi4sfALFs4xql9K5Llm8lg9kUZCyOV1XdA027ZnpbgHBOUdGx0mBwOvb9jM9XpszPf0QAl+lgjntYIUk3ksVYgowHQm73Y9T1y6aihMhZJSW1e4eG5iZraorXZmeNZNwmOKVPmCxf2QomKdBv1FPU9YtxqG+OggL/+tJSABC9cZheJstNW8UkDYKeHhrq7HyOut4Y1z7NNGO/folfsra2C9DcrGcOZFRubXWE+vtfIMCZRAcRgD7W0HAeurrWU8Ekn1Csut+fHRLF1whwIv5AAvCWCcI5CATUVDFbBwHAd78/d1kU+xHgaOxgAvApXxAa9gQCq+lgUgIZm+br6vIXxsffIMBhQsjnQsbqiwYHl9PFpAwyNk7V1zvXxsbuZ1VWXi8fGFjcDkxaoO0C/DWL9n97i2gzdkFLtaU2yCo5OyGrhH4AtD5LNJ/vw8QAAAAASUVORK5CYII=", null, "https://t.focus-res.cn/home-front/pc/img/qrcode.d7cfc15.png", null, "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAG8AAABvAQAAAADKvqPNAAABaklEQVR42rXVsY3DIBQG4GdR0DkLILEGnVeCBex4AbwSHWsgsUDcuUB+99vJRWkCLu4iN58lzOPnQYg/fzv9IQuRIiPupCzR2OLEeicxb3nl0qSV2hvNLNagrvAxULcVd5GUl+1j7HeiKm/U9FFkhVivi8fzXv53Ir9yi3rl/BtshaWXeglEAz8nqpLnkBwjnIzkWyy9UZiIUZVMLbI/lvze0DpRleo24WWaziSr5CXyGouLwp+fqrJ0Qc+sepRkjnDqtJQ5iAeJXYoW2UsMzHMUczzXW2PpYkLmeGmN3ltEq8wBsyQ75CZdzLvBNmXfJu/oWJOsROtyi8Wacgtlinnhs9urHI3qDZJXRMfYOjEKB8fTq3Pq7CL6JJFJ1OZxFrqNRrRKpBZxyjT6kIbk4vOCqhGxP4Y0DmUKbVqpbowLikheoiV9lyhPXKDGVnIs05bGFifGHSLWmNyryBqPOyccYWK6Fv/tb+IH5yCAy1PWAUYAAAAASUVORK5CYII=", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.9737525,"math_prob":0.48573047,"size":1294,"snap":"2020-10-2020-16","text_gpt3_token_len":1133,"char_repetition_ratio":0.08527132,"word_repetition_ratio":0.0,"special_character_ratio":0.15455951,"punctuation_ratio":0.17948718,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99810725,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,null,null,1,null,1,null,1,null,1,null,null,null,null,null,1,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-09T18:28:14Z\",\"WARC-Record-ID\":\"<urn:uuid:273823a3-59c3-46fe-ae7d-e6a013e14500>\",\"Content-Length\":\"145793\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7fac3a72-0b53-4f79-af43-be5a5d9b78b3>\",\"WARC-Concurrent-To\":\"<urn:uuid:2de53150-00c2-4c4c-b743-ea69d1f8f4c8>\",\"WARC-IP-Address\":\"43.242.166.60\",\"WARC-Target-URI\":\"https://tz.home.focus.cn/gonglue/aaf2de83cdaa10c4.html/\",\"WARC-Payload-Digest\":\"sha1:UKPWFGADGBAXYHI5ESAKVGS46WBDOXMF\",\"WARC-Block-Digest\":\"sha1:B2KM3QUC27RGZC2S5MAXG4K6S24TDBTS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371861991.79_warc_CC-MAIN-20200409154025-20200409184525-00394.warc.gz\"}"}
https://stackoverflow.com/questions/7208961/which-widget-do-you-use-for-a-excel-like-table-in-tkinter/31034127
[ "# Which widget do you use for a Excel like table in tkinter?\n\nI want a Excel like table widget in tkinter for a gui I am writing. Do you have any suggestions?\n\nYou can use Tkinter to make a simple spreadsheet-like GUI:\n\n``````from tkinter import *\n\nroot = Tk()\n\nheight = 5\nwidth = 5\nfor i in range(height): #Rows\nfor j in range(width): #Columns\nb = Entry(root, text=\"\")\nb.grid(row=i, column=j)\n\nmainloop()\n``````\n\nIf you want to get the values from the grid, you can use the grid's children.\n\n``````def find_in_grid(frame, row, column):\nfor child in frame.children.values():\ninfo = child.grid_info()\nif info['row'] == row and info['column'] == column:\nreturn child\nreturn None\n``````\n\nThe function will return the child. To get the value of the entry, you can use:\n\n``````find_in_grid(root, i+1, j).get()\n``````\n\nNote: In old versions of Tkinter, row and column are stored as strings, so there you'd need to cast the integers:\n\n``````if info['row'] == str(row) and info['column'] == str(column):\n``````\n• I'm glad for your starting point, but it doesn't work unless you also use `StringVar` for each `Entry` Mar 8, 2018 at 2:12\n\nTktable is at least arguably the best option, if you need full table support. Briefly, the following example shows how to use it assuming you have it installed. The example is for python3, but for python2 you only need to change the import statement.\n\n``````import tkinter as tk\nimport tktable\n\nroot = tk.Tk()\ntable = tktable.Table(root, rows=10, cols=4)\ntable.pack(side=\"top\", fill=\"both\", expand=True)\nroot.mainloop()\n``````\n\nTktable can be difficult to install since there is no pip-installable package.\n\nIf all you really need is a grid of widgets for displaying and editing data, you can easily build a grid of entry or label widgets. For an example, see this answer to the question Python. GUI(input and output matrices)?\n\n• Hey,I am a little confused.Which file should i download and put it in which directory ? Aug 26, 2011 at 19:57\n• TkTabl;e seems not to be in a maintained state. Sep 18, 2017 at 2:16\n• Hmm... TkTable = dead link . . . May 4, 2022 at 19:32\n\nI had a similar problem trying to show a complete panda's DataFrame using tkinter. My first option was to create a grid like suggested someone in the first answer, but it bacame too heavy for so many info.\n\nI guess is beter to create a finite numbers of cells and show only a part of the DF, changing that part with the movement of the user" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8680946,"math_prob":0.6577343,"size":2065,"snap":"2023-40-2023-50","text_gpt3_token_len":509,"char_repetition_ratio":0.095099464,"word_repetition_ratio":0.0,"special_character_ratio":0.25472155,"punctuation_ratio":0.13023256,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9645307,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-07T21:33:07Z\",\"WARC-Record-ID\":\"<urn:uuid:d54360e3-dd0e-469c-bdb5-5b1f28165b2c>\",\"Content-Length\":\"194425\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:266c2c49-0a1b-470c-bc3a-6204d855eae5>\",\"WARC-Concurrent-To\":\"<urn:uuid:8a768326-8e4e-417e-9289-1e355e5df66f>\",\"WARC-IP-Address\":\"104.18.32.7\",\"WARC-Target-URI\":\"https://stackoverflow.com/questions/7208961/which-widget-do-you-use-for-a-excel-like-table-in-tkinter/31034127\",\"WARC-Payload-Digest\":\"sha1:QRNQJKOZBFFNHFWUBOIU2FPB4XTAENTZ\",\"WARC-Block-Digest\":\"sha1:YO7N7G6FFCSEA6RAYMZNGMWNSKK7YXMI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100686.78_warc_CC-MAIN-20231207185656-20231207215656-00857.warc.gz\"}"}
https://www.gatevidyalay.com/tag/topological-sort-example-step-by-step/
[ "## Topological Sort-\n\n Topological Sort is a linear ordering of the vertices in such a way thatif there is an edge in the DAG going from vertex ‘u’ to vertex ‘v’,then ‘u’ comes before ‘v’ in the ordering.\n\nIt is important to note that-\n\n• Topological Sorting is possible if and only if the graph is a Directed Acyclic Graph.\n• There may exist multiple different topological orderings for a given directed acyclic graph.\n\n## Topological Sort Example-\n\nConsider the following directed acyclic graph-", null, "For this graph, following 4 different topological orderings are possible-\n\n• 1 2 3 4 5 6\n• 1 2 3 4 6 5\n• 1 3 2 4 5 6\n• 1 3 2 4 6 5\n\n## Applications of Topological Sort-\n\nFew important applications of topological sort are-\n\n• Scheduling jobs from the given dependencies among jobs\n• Instruction Scheduling\n• Determining the order of compilation tasks to perform in makefiles\n• Data Serialization\n\n## Problem-01:\n\nFind the number of different topological orderings possible for the given graph-", null, "## Solution-\n\nThe topological orderings of the above graph are found in the following steps-\n\n### Step-01:\n\nWrite in-degree of each vertex-", null, "### Step-02:\n\n• Vertex-A has the least in-degree.\n• So, remove vertex-A and its associated edges.\n• Now, update the in-degree of other vertices.", null, "### Step-03:\n\n• Vertex-B has the least in-degree.\n• So, remove vertex-B and its associated edges.\n• Now, update the in-degree of other vertices.", null, "### Step-04:\n\nThere are two vertices with the least in-degree. So, following 2 cases are possible-\n\nIn case-01,\n\n• Remove vertex-C and its associated edges.\n• Then, update the in-degree of other vertices.\n\nIn case-02,\n\n• Remove vertex-D and its associated edges.\n• Then, update the in-degree of other vertices.", null, "### Step-05:\n\nNow, the above two cases are continued separately in the similar manner.\n\nIn case-01,\n\n• Remove vertex-D since it has the least in-degree.\n• Then, remove the remaining vertex-E.\n\nIn case-02,\n\n• Remove vertex-C since it has the least in-degree.\n• Then, remove the remaining vertex-E.", null, "### Conclusion-\n\nFor the given graph, following 2 different topological orderings are possible-\n\n• A B C D E\n• A B D C E\n\n## Problem-02:\n\nFind the number of different topological orderings possible for the given graph-", null, "## Solution-\n\nThe topological orderings of the above graph are found in the following steps-\n\n### Step-01:\n\nWrite in-degree of each vertex-", null, "### Step-02:\n\n• Vertex-1 has the least in-degree.\n• So, remove vertex-1 and its associated edges.\n• Now, update the in-degree of other vertices.", null, "### Step-03:\n\nThere are two vertices with the least in-degree. 
So, following 2 cases are possible-\n\nIn case-01,\n\n• Remove vertex-2 and its associated edges.\n• Then, update the in-degree of other vertices.\n\nIn case-02,\n\n• Remove vertex-3 and its associated edges.\n• Then, update the in-degree of other vertices.", null, "### Step-04:\n\nNow, the above two cases are continued separately in the similar manner.\n\nIn case-01,\n\n• Remove vertex-3 since it has the least in-degree.\n• Then, update the in-degree of other vertices.\n\nIn case-02,\n\n• Remove vertex-2 since it has the least in-degree.\n• Then, update the in-degree of other vertices.", null, "### Step-05:\n\nIn case-01,\n\n• Remove vertex-4 since it has the least in-degree.\n• Then, update the in-degree of other vertices.\n\nIn case-02,\n\n• Remove vertex-4 since it has the least in-degree.\n• Then, update the in-degree of other vertices.", null, "### Step-06:\n\nIn case-01,\n\n• There are 2 vertices with the least in-degree.\n• So, 2 cases are possible.\n• Any of the two vertices may be taken first.\n\nSame is with case-02.", null, "### Conclusion-\n\nFor the given graph, following 4 different topological orderings are possible-\n\n• 1 2 3 4 5 6\n• 1 2 3 4 6 5\n• 1 3 2 4 5 6\n• 1 3 2 4 6 5\n\n## Problem-03:\n\nConsider the directed graph given below. Which of the following statements is true?", null, "1. The graph does not have any topological ordering.\n2. Both PQRS and SRPQ are topological orderings.\n3. Both PSRQ and SPRQ are topological orderings.\n4. PSRQ is the only topological ordering.\n\n## Solution-\n\n• The given graph is a directed acyclic graph.\n• So, topological orderings exist.\n• P and S must appear before R and Q in topological orderings as per the definition of topological sort.\n\nThus, Correct option is (C).\n\n## Problem-04:\n\nConsider the following directed graph-", null, "The number of different topological orderings of the vertices of the graph is ________ ?\n\n## Solution-\n\nNumber of different topological orderings possible = 6.\n\n(The solution is explained in detail in the linked video lecture.)\n\nTo gain better understanding about Topological Sort,\n\nWatch this Video Lecture\n\nTo practice previous years GATE problems on Topological Sort,\n\nWatch this Video Lecture\n\nNext Article- Shortest Path Problems\n\n### Other Popular Sorting Algorithms-\n\nGet more notes and other study material of Design and Analysis of Algorithms.\n\nWatch video lectures by visiting our YouTube channel LearnVidFun." ]
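The remove-a-zero-in-degree-vertex procedure used in the worked solutions translates directly into a small backtracking routine that enumerates every ordering. The sketch below is illustrative only: since the figures are not reproduced here, the edge list is my reconstruction of the first example DAG from its four listed orderings.

```python
def all_topological_orderings(vertices, edges):
    """Enumerate every topological ordering of a DAG by repeatedly removing
    a vertex of in-degree zero, as in the step-by-step solutions above."""
    indeg = {v: 0 for v in vertices}
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1

    order, results = [], []

    def backtrack():
        placed = False
        for v in vertices:
            if indeg[v] == 0 and v not in order:
                placed = True
                order.append(v)
                for w in adj[v]:
                    indeg[w] -= 1
                backtrack()
                for w in adj[v]:
                    indeg[w] += 1
                order.pop()
        if not placed and len(order) == len(vertices):
            results.append(list(order))

    backtrack()
    return results


# Edges assumed for the six-vertex example DAG: 1->2, 1->3, 2->4, 3->4, 4->5, 4->6.
orderings = all_topological_orderings(
    [1, 2, 3, 4, 5, 6],
    [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (4, 6)],
)
for o in orderings:
    print(o)            # the four orderings listed in the notes
print(len(orderings))   # 4
```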
[ null, "https://www.gatevidyalay.com/wp-content/uploads/2018/07/Topological-Sort-Example.png", null, "https://www.gatevidyalay.com/wp-content/uploads/2018/07/Topological-Sort-Problem-01.png", null, "https://www.gatevidyalay.com/wp-content/uploads/2018/07/Topological-Sort-Problem-01-Solution-Step-01.png", null, "https://www.gatevidyalay.com/wp-content/uploads/2018/07/Topological-Sort-Problem-01-Solution-Step-02.png", null, "https://www.gatevidyalay.com/wp-content/uploads/2018/07/Topological-Sort-Problem-01-Solution-Step-03.png", null, "https://www.gatevidyalay.com/wp-content/uploads/2018/07/Topological-Sort-Problem-01-Solution-Step-04.png", null, "https://www.gatevidyalay.com/wp-content/uploads/2018/07/Topological-Sort-Problem-01-Solution-Step-05.png", null, "https://www.gatevidyalay.com/wp-content/uploads/2018/07/Topological-Sorting-Problem-02-1.png", null, "https://www.gatevidyalay.com/wp-content/uploads/2018/07/Topological-Sorting-Problem-02-Solution-Step-01.png", null, "https://www.gatevidyalay.com/wp-content/uploads/2018/07/Topological-Sorting-Problem-02-Solution-Step-02.png", null, "https://www.gatevidyalay.com/wp-content/uploads/2018/07/Topological-Sorting-Problem-02-Solution-Step-03.png", null, "https://www.gatevidyalay.com/wp-content/uploads/2018/07/Topological-Sorting-Problem-02-Solution-Step-04.png", null, "https://www.gatevidyalay.com/wp-content/uploads/2018/07/Topological-Sorting-Problem-02-Solution-Step-05.png", null, "https://www.gatevidyalay.com/wp-content/uploads/2018/07/Topological-Sorting-Problem-02-Solution-Step-06.png", null, "https://www.gatevidyalay.com/wp-content/uploads/2018/07/Topological-Sort-Problem-03.png", null, "https://www.gatevidyalay.com/wp-content/uploads/2018/07/Topological-Sort-Problem-04.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8605433,"math_prob":0.7130732,"size":4916,"snap":"2020-34-2020-40","text_gpt3_token_len":1227,"char_repetition_ratio":0.17976384,"word_repetition_ratio":0.40312877,"special_character_ratio":0.25406834,"punctuation_ratio":0.11801897,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9907764,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,null,null,10,null,10,null,10,null,10,null,10,null,10,null,10,null,10,null,10,null,10,null,10,null,10,null,10,null,10,null,10,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-01T03:33:09Z\",\"WARC-Record-ID\":\"<urn:uuid:0c83ea53-d4f4-43b4-95dc-1f5f85222183>\",\"Content-Length\":\"180231\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1158a86c-4d32-4dda-904b-52350badac9a>\",\"WARC-Concurrent-To\":\"<urn:uuid:5e4e21cd-f0bb-48ea-b8d9-92c9a5373348>\",\"WARC-IP-Address\":\"104.24.112.86\",\"WARC-Target-URI\":\"https://www.gatevidyalay.com/tag/topological-sort-example-step-by-step/\",\"WARC-Payload-Digest\":\"sha1:4422QI7MVZM4GQFF5PL72RLHLD47O6YK\",\"WARC-Block-Digest\":\"sha1:4BSMDRJGS34P4WGJEMVE5R3ATTECJ3NC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600402130615.94_warc_CC-MAIN-20201001030529-20201001060529-00622.warc.gz\"}"}
https://matstat.org/content_en/R-course/_site/ttest.html
[ "## What the test does\n\n• are two populations having the same mean?\n• null hypothesis: the groups do have the same mean\n• a low p-value means that the null is rejected, i.e. that the means are significantly different\n\n## Assumptions\n\n• data points must be independent\n• both populations must be (approximately) normally distributed\n# Testing normality of samples:\nset1 = sets[,1] # 1st column\nset2 = sets[,2] # 2nd column\n# Histogram:\nhist(set1, col = \"red\", breaks = 20) # does it approximately resemble a normal distribution?", null, "# Quantile-quantile plot:\nqqnorm(set1, col = \"blue\")\nqqline(set1, col = \"red\") # data points should be close to the line", null, "# Tests:\n# Kolmogorow-Smirnow-Test. Null: data is normally distributed. Big p-values are desired!\nks.test(set1, \"pnorm\", mean(set1), sd(set1)) \n##\n## One-sample Kolmogorov-Smirnov test\n##\n## data: set1\n## D = 0.035656, p-value = 0.8402\n## alternative hypothesis: two-sided\n# Shapiro-Wilk normality test. Null: data is normally distributed. Big p-values are desired! No standardisation necessary.\nshapiro.test(set1)\n##\n## Shapiro-Wilk normality test\n##\n## data: set1\n## W = 0.99367, p-value = 0.2415\n# Anderson-Darling test.\nlibrary(nortest)\nad.test(set1)\n##\n## Anderson-Darling normality test\n##\n## data: set1\n## A = 0.41021, p-value = 0.3412\n\n## Running the test\n\nA boxplot (or violin plot) can help to get a first impression. Do the gropus overlap?\n\nboxplot(sets, col=\"green\", names=c(\"set1\", \"set2\"), main=\"Comparison of two datasets\")", null, "The simplest form of the command ‘t.test’:\n\n?t.test\nt.test(set1, set2)\n##\n## Welch Two Sample t-test\n##\n## data: set1 and set2\n## t = -28.436, df = 527.39, p-value < 2.2e-16\n## alternative hypothesis: true difference in means is not equal to 0\n## 95 percent confidence interval:\n## -3.275339 -2.852041\n## sample estimates:\n## mean of x mean of y\n## 0.8987499 3.9624399\n\nIf the variances are similar, it might be better to set the parameter ‘var.equal’ to TRUE:\n\nt.test(set1, set2, var.equal = TRUE)\n##\n## Two Sample t-test\n##\n## data: set1 and set2\n## t = -28.436, df = 598, p-value < 2.2e-16\n## alternative hypothesis: true difference in means is not equal to 0\n## 95 percent confidence interval:\n## -3.275282 -2.852098\n## sample estimates:\n## mean of x mean of y\n## 0.8987499 3.9624399\n\nt.test(set1, set2, alternative = \"less\")\n##\n## Welch Two Sample t-test\n##\n## data: set1 and set2\n## t = -28.436, df = 527.39, p-value < 2.2e-16\n## alternative hypothesis: true difference in means is less than 0\n## 95 percent confidence interval:\n## -Inf -2.886164\n## sample estimates:\n## mean of x mean of y\n## 0.8987499 3.9624399\nt.test(set1, set2, alternative = \"greater\")\n##\n## Welch Two Sample t-test\n##\n## data: set1 and set2\n## t = -28.436, df = 527.39, p-value = 1\n## alternative hypothesis: true difference in means is greater than 0\n## 95 percent confidence interval:\n## -3.241216 Inf\n## sample estimates:\n## mean of x mean of y\n## 0.8987499 3.9624399\n\nThe confidence level ($$1 - \\alpha$$) can be changed:\n\nt.test(set1, set2, conf.level = 0.99)\n##\n## Welch Two Sample t-test\n##\n## data: set1 and set2\n## t = -28.436, df = 527.39, p-value < 2.2e-16\n## alternative hypothesis: true difference in means is not equal to 0\n## 99 percent confidence interval:\n## -3.342214 -2.785166\n## sample estimates:\n## mean of x mean of y\n## 0.8987499 3.9624399\n\nA test for paired samples can be conducted:\n\nx = seq(1, 
10)\ny = x + 2 + rnorm(length(x), mean = 0, sd = 0.3)\nplot(x, ylim = c(min(x,y), max(x,y)), col = \"red\", main = \"Paired data\", font.main = 1)\npoints(y, col = \"blue\")", null, "t.test(x, y, paired = TRUE)\n##\n## Paired t-test\n##\n## data: x and y\n## t = -25.705, df = 9, p-value = 9.831e-10\n## alternative hypothesis: true difference in means is not equal to 0\n## 95 percent confidence interval:\n## -2.127913 -1.783680\n## sample estimates:\n## mean of the differences\n## -1.955796\n\n## When the assumptions are not met, part I\n\nTry data tranformation? (log, square root, …)\n\nfile.exists(\"setb.txt\")\n## TRUE\nsetb <- read.table(\"setb.txt\", header = FALSE)\nsetb <- setb[,1] # convert from data.fram to numeric\nis.numeric(setb)\n## TRUE\nlength(setb)\n## 300\n# Normally distributed?:\nhist(setb, col = \"red\", breaks = 20)", null, "qqnorm(setb, col = \"blue\")\nqqline(setb, col = \"red\")", null, "shapiro.test(setb) \n##\n## Shapiro-Wilk normality test\n##\n## data: setb\n## W = 0.65755, p-value < 2.2e-16\n# Transform and repeat check for normality:\nsetb_trans = log(setb)\nhist(setb_trans, col = \"red\", breaks = 20)", null, "qqnorm(setb_trans, col = \"blue\")\nqqline(setb_trans, col = \"red\")", null, "shapiro.test(setb_trans)\n##\n## Shapiro-Wilk normality test\n##\n## data: setb_trans\n## W = 0.99266, p-value = 0.1471\n# Ready for the T-test:\nt.test(set1, setb_trans)\n##\n## Welch Two Sample t-test\n##\n## data: set1 and setb_trans\n## t = -0.77151, df = 597.99, p-value = 0.4407\n## alternative hypothesis: true difference in means is not equal to 0\n## 95 percent confidence interval:\n## -0.2341234 0.1020581\n## sample estimates:\n## mean of x mean of y\n## 0.8987499 0.9647825\n\n## When the assumptions are not met, part II\n\nConduct a non-parametric test (rank sum test)\n\nMany different names for the same (ranksum) test:\n\n• Mann–Whitney U test\n• Mann–Whitney–Wilcoxon test\n• Wilcoxon rank-sum test\n• Wilcoxon–Mann–Whitney test\n• Wikipedia\nx = runif(200, 0, 1)\ny = runif(200, 2, 3)\nboxplot(x, y, col = \"green\")", null, "wilcox.test(x, y)\n##\n## Wilcoxon rank sum test with continuity correction\n##\n## data: x and y\n## W = 0, p-value < 2.2e-16\n## alternative hypothesis: true location shift is not equal to 0" ]
[ null, "https://matstat.org/content_en/R-course/_site/ttest_files/figure-html/unnamed-chunk-1-1.png", null, "https://matstat.org/content_en/R-course/_site/ttest_files/figure-html/unnamed-chunk-1-2.png", null, "https://matstat.org/content_en/R-course/_site/ttest_files/figure-html/boxpl-1.png", null, "https://matstat.org/content_en/R-course/_site/ttest_files/figure-html/paired-1.png", null, "https://matstat.org/content_en/R-course/_site/ttest_files/figure-html/data_trans-1.png", null, "https://matstat.org/content_en/R-course/_site/ttest_files/figure-html/data_trans-2.png", null, "https://matstat.org/content_en/R-course/_site/ttest_files/figure-html/data_trans-3.png", null, "https://matstat.org/content_en/R-course/_site/ttest_files/figure-html/data_trans-4.png", null, "https://matstat.org/content_en/R-course/_site/ttest_files/figure-html/nonpar-1.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6473842,"math_prob":0.95375437,"size":5073,"snap":"2023-14-2023-23","text_gpt3_token_len":1686,"char_repetition_ratio":0.13335964,"word_repetition_ratio":0.33490565,"special_character_ratio":0.4100138,"punctuation_ratio":0.21306819,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99719274,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-27T11:18:49Z\",\"WARC-Record-ID\":\"<urn:uuid:0ac0ceca-bb06-4781-a5da-9b6f4e55a524>\",\"Content-Length\":\"17763\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a2457a59-d968-4e2c-89ed-66740028dd77>\",\"WARC-Concurrent-To\":\"<urn:uuid:26790b47-b9d7-4c6a-87ac-d019a75fc502>\",\"WARC-IP-Address\":\"217.160.0.67\",\"WARC-Target-URI\":\"https://matstat.org/content_en/R-course/_site/ttest.html\",\"WARC-Payload-Digest\":\"sha1:6CCHCNQZMXSRX2L7QRTQDJJA63VA5SJQ\",\"WARC-Block-Digest\":\"sha1:DPXVDXYPXUBOUQIHZHKQZHHA3NWOVKB2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296948620.60_warc_CC-MAIN-20230327092225-20230327122225-00266.warc.gz\"}"}
https://digital-library.theiet.org/content/conferences/10.1049/cp.2015.0615/cite/refworks
[ "RT Journal Article\nA1 Dequan Yue\nAD Coll. of Sci., Yanshan Univ., Qinhuangdao\nA1 Hui Li\nAD Coll. of Sci., Yanshan Univ., Qinhuangdao\nA1 Guoxi Zhao\nAD [Sch. of Econ. & Manage., Yanshan Univ., Qinhuangdao, Sch. of Econ. [amp ] Manage., Yanshan Univ., Qinhuangdao]\nA1 Wuyi Yue\nAD [Dept. of Intell. & Inf., Konan Univ., Kobe, Dept. of Intell. [amp ] Inf., Konan Univ., Kobe]\n\nPB iet\nT1 Analysis of a Markovian queue with two heterogenous servers and a threshold assignment policy\nJN IET Conference Proceedings\nSP 5 .\nOP 5 .\nAB This paper considers a parallel queuing system with two heterogeneous servers where the task is dispatched to the two servers. The threshold assignment policy dispatches the tasks according to the number of customers in server 1. Firstly, we obtain the stationary condition of the system. Secondly, we give the stationary performance indices by using a matrix-geometric solution theory. Finally, we develop the average cost function and analyze the effect of the parameters on the average cost function by using numerical examples.\nK1 matrix-geometric solution theory\nK1 heterogenous server\nK1 stationary condition\nK1 Markovian queue analysis\nK1 stationary performance indices\nK1 threshold assignment policy\nK1 average cost function\nK1 parallel queuing system\nDO https://doi.org/10.1049/cp.2015.0615\nUL https://digital-library.theiet.org/;jsessionid=24dued5575amo.x-iet-live-01content/conferences/10.1049/cp.2015.0615\nLA English\nSN\nYR 2015\nOL EN" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.73468596,"math_prob":0.42369747,"size":1467,"snap":"2019-43-2019-47","text_gpt3_token_len":398,"char_repetition_ratio":0.099111415,"word_repetition_ratio":0.037914693,"special_character_ratio":0.24267212,"punctuation_ratio":0.20136519,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9590338,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-12T04:08:09Z\",\"WARC-Record-ID\":\"<urn:uuid:6003b62e-33d5-4e25-aca4-57ebd084bdae>\",\"Content-Length\":\"2309\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c1f5acf0-2ac2-40c1-a37c-6af611e7c502>\",\"WARC-Concurrent-To\":\"<urn:uuid:6943b2a0-ff7a-4f38-a38a-2144a9ac65bb>\",\"WARC-IP-Address\":\"104.20.156.79\",\"WARC-Target-URI\":\"https://digital-library.theiet.org/content/conferences/10.1049/cp.2015.0615/cite/refworks\",\"WARC-Payload-Digest\":\"sha1:52LXXKHSANHOITXNVEEBGNBJD3HBYZU6\",\"WARC-Block-Digest\":\"sha1:QHIYZ4PTE6ATGVAJSXNFJ3375AUCSNBX\",\"WARC-Identified-Payload-Type\":\"text/plain\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496664567.4_warc_CC-MAIN-20191112024224-20191112052224-00161.warc.gz\"}"}
https://forum.uipath.com/t/datatable-group-by-sum-and-count-using-linq/91210
[ "", null, "# DataTable Group By Sum and Count using Linq\n\nHello,\nI am having a problem writing a Linq that could achieve something like this\n\nSELECT EmployeeName, SUM(Transaction), COUNT(*)\nFROM Transactions\nGROUP BY EmployeeName\n\nI found a temporary (ugly) solution for that but I would like to make it the right way.\n\nThings to mention\n\n1. Transactions is a DataTable.\n2. To use Sum i have to fist replace “,” (coma) with “.” (dot) ->\nSum(Function(x) Convert.ToDouble(x(“Transaction”).ToString.Trim.Replace(\",\", “.”))\n\nRegards,\nBazyl\n\nHi.\n\nNormally, if you are processing the data by employee name (in your case), you would want to filter your data set to the items for each employee. Then, you can simply use .Sum() on each employee as you process it. This simple method would look something like this:\n\n``````For each empl In Transactions.AsEnumerable.Select(Function(r) r(\"EmployeeName\").ToString ).ToArray.Distinct\nsumTrx = Transactions.AsEnumerable.Where(Function(r) r(\"EmployeeName\").ToString = empl ).ToArray.Sum(Function(r) If(IsNumeric(r(\"Transaction\").ToString.Trim), CDbl(r(\"Transaction\").ToString.Trim, 0) )\ncountTrx = Transactions.AsEnumerable.Where(Function(r) r(\"EmployeeName\").ToString = empl ).ToArray.Count\n``````\n\nHowever, if you would like to return this information using Group By, then I would say check out the information here:\n\nOr there might be other posts out there. - If you need further help getting GroupBy to work, then let us know; I’m “not” really an expert using GroupBy.\n\nRegards.\n\n1 Like\n\nHey @Bazyleusz\n\nHave a look in an existing thread, That might help you at some extent i guess", null, "Regards…!!\nAksh" ]
[ null, "https://aws1.discourse-cdn.com/uipath/original/3X/d/6/d6c862cebbdb9f139d9b3e0b4346f6d471039722.png", null, "https://forum.uipath.com/images/emoji/twitter/slight_smile.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7635409,"math_prob":0.85727537,"size":949,"snap":"2020-45-2020-50","text_gpt3_token_len":216,"char_repetition_ratio":0.13544974,"word_repetition_ratio":0.016806724,"special_character_ratio":0.2223393,"punctuation_ratio":0.1891892,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97872895,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-25T06:14:03Z\",\"WARC-Record-ID\":\"<urn:uuid:1c09de18-a045-4519-b865-ec2a95c271c5>\",\"Content-Length\":\"25734\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:121cf0e1-b202-4117-8575-7be8b1ecec4f>\",\"WARC-Concurrent-To\":\"<urn:uuid:93158dd8-4c15-417f-b2ca-e9a0dc20b846>\",\"WARC-IP-Address\":\"64.62.250.111\",\"WARC-Target-URI\":\"https://forum.uipath.com/t/datatable-group-by-sum-and-count-using-linq/91210\",\"WARC-Payload-Digest\":\"sha1:K7D2XKD7XNMXFHNOFQ7IMP7ZVZLJNWLE\",\"WARC-Block-Digest\":\"sha1:CEWMQCKZPG2QRU6MAR5NX736AWZJSKCS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107887810.47_warc_CC-MAIN-20201025041701-20201025071701-00599.warc.gz\"}"}
https://mathematica.stackexchange.com/questions/240669/why-the-difference-in-the-results-of-integrate-and-nintegrate
[ "# Why the difference in the results of Integrate and NIntegrate?\n\nBug introduced in version 12.0.0, and persisting through 12.2.0.\n\nCalculating the integral $$\\int\\limits_0^1 \\frac{x^2\\log(1-x^4)} {1+x^4}\\,dx$$ symbolically\n\nIntegrate[x^2*Log[1 - x^4]/(1 + x^4), {x, 0, 1}]\n\n\n-4 + 2 Catalan - 1/4 \\[Pi] (-2 + Log) + Log\n\nN[%]\n\n\n-0.151021\n\nand numerically\n\nNIntegrate[x^2*Log[1 - x^4]/(1 + x^4), {x, 0, 1}]\n\n\n-0.162858\n\n, I obtain different numbers and the difference is too much for round-off errors. How to preclude the end of math?\n\n• This is a bug worth reporting to Support. In 11.3, Integrate[] returns the (complicated but) correct answer, so something happened in between that version and 12.2. (Someone else who can check 12 and 12.1 should add the requisite bug header to this question.) Feb 26, 2021 at 8:29\n• Work correctly in MA 11.1.1 with the integration result $\\frac{1}{144} \\left(8 \\left(6 \\text{Hypergeometric2F1}^{(0,0,1,0)}\\left(\\frac{3}{4},1,\\frac{7}{4},-1\\right)+\\sqrt{2} \\log (8)-6 \\sqrt{2} \\log \\left(2-\\sqrt{2}\\right)\\right)+3 \\sqrt{2} \\left(-3 \\pi ^2-8 \\pi -2 \\log ^2(8)+7 \\pi \\log (8)-6 (\\pi -2 \\log (8)) \\log \\left(2-\\sqrt{2}\\right)\\right)\\right)$ Feb 26, 2021 at 8:41\n• Maple 2020.2 int gives the same result the the numerical one. This looks like bug in integrate. screen shot !Mathematica graphics -0.1628582917 + 0.*I Feb 26, 2021 at 8:47\n• @Nasser: In fact, Maple reduces the integral under consideration to the sum of other integrals. Feb 26, 2021 at 8:50\n• Version 12.1 gives also: -0.162858 Feb 26, 2021 at 9:13\n\nThis bug was introduced in version 12.0. Please submit bug report to [email protected] or https://www.wolfram.com/support/contact/", null, "", null, "• Submitted. No response. Feb 26, 2021 at 9:26\n• @user64494. General, you have to wait a few days. Feb 26, 2021 at 15:22\n\nOne way:\n\nIntegrate[x^2*Log[1 - x^4]/(1 + x^4), {x, 0, I, 1}]\n\nN@%\n\n(* -0.162858 - 4.44089*10^-16 I *)\n\n\nAnother way:\n\nIntegrate[x^2*Log[1 - x^4]/(1 + x^4), {x, 0, 1/2, 1}]\n\nN[%]\n\n(* -0.162858 - 2.28333*10^-16 I *)\n\n• Thank you. It's interesting. The question arises: why Integrate[x^2*Log[1 - x^4]/(1 + x^4), {x, 0, I, 1}]==Integrate[x^2*Log[1 - x^4]/(1 + x^4), {x, 0,1}]? The imaginary unit is a singularity (a branch point) of the integrand so the application of the Cauchy's theorem is not clear to me. Also we deal with improper integrals. The command Integrate[x^2*Log[1 - x^4]/(1 + x^4), {x, 0, I, 1}] returns the input on my comp in several minutes. As far as I understand it, you simply do NIntegrate[x^2*Log[1 - x^4]/(1 + x^4), {x, 0, I, 1}]. Feb 27, 2021 at 6:13\n• The same issue with Integrate[x^2*Log[1 - x^4]/(1 + x^4), {x, 0, 1/2, 1}]. Corollary: this is not it. Feb 27, 2021 at 6:53\n• -1. I repeat again: you calculate its numerical value in a weird way only. The symbolic result is not obtained. Feb 27, 2021 at 10:07\n• @user64494 You made a mistake. Feb 27, 2021 at 15:12\n• @user64494 your downvoting is unjustified and unfair. If you omit the evaluation with N[], the Integrate[] does return a (complicated!) symbolic result, which Michael has mercifully omitted. 
Feb 27, 2021 at 17:06\n\nThe integration with help of Rubi gives after inserting the limits:\n\n(1/(4*Sqrt))*(Pi*Log - Log[(Sqrt + 1)^2]*Log +\nPolyLog[2, -(Sqrt + 1)^2] - PolyLog[2, (Sqrt - 1)^4]/2 +\nRe[4*PolyLog[2, I*(Sqrt - 1)] - 4*PolyLog[2, I*(Sqrt + 1)] +\nPolyLog[2, (Sqrt + 1)^2 + I/1000000000000]] +\nIm[-8*(PolyLog[2, I*(Sqrt - 1)] + PolyLog[2, I*(Sqrt + 1)]) +\n2*PolyLog[2, (Sqrt + 1)^2 + I/1000000000000]]/2)\n\n\nwhich I don't know how to further simplify. The small imaginary offset in the argument of the PolyLog 'pushes' the sign of its imaginary part to the positive side. Otherwise the numeric result would be wrong.\n\nAddendum: the imaginary part of the mentioned PolyLog is\n\n2*Pi*Log[Sqrt + 1]\n\n\nso the result can be written\n\n(1/(4*Sqrt))*( Pi^2/3 + Pi*(2*Log[Sqrt + 1] + Log) -\n2*Log[Sqrt + 1]*(Log[Sqrt + 1] + Log)-PolyLog[2, (Sqrt - 1)^2] +\nPolyLog[2, -(Sqrt + 1)^2] - (1/2)*PolyLog[2, (Sqrt - 1)^4] +\nRe[4*PolyLog[2, I*(Sqrt - 1)] - 4*PolyLog[2, I*(Sqrt + 1)]]\n-4*Im[PolyLog[2, I*(Sqrt - 1)] + PolyLog[2, I*(Sqrt + 1)]])\n\n\n2.Addendum: The last may be further simplified to\n\n(1/(4*Sqrt))*(Pi^2/3 + Pi*Log+2*Pi*Log[Sqrt+1] -\n2*Log[1+Sqrt]*Log[2*(1+Sqrt)]-2*PolyLog[2,(Sqrt-1)^2] - (Sqrt+1)*\nLerchPhi[-(Sqrt+1)^2,2,1/2]-(Sqrt-1)*LerchPhi[-(Sqrt-1)^2,2, 1/2])\n\n• Thank you. In fact, Rubi reduces the improper integral under consideration to other integrals through PolyLog. Feb 27, 2021 at 6:05\n\nStrangely, it works when done correctly via limit:\n\nf[a_] = Assuming[0 < a < 1,\nIntegrate[(x^2 Log[1 - x^4])/(1 + x^4), {x, 0, a}]];\nA = Limit[f[a], a -> 1, Direction -> \"FromBelow\"];\nN[A]\n(* -0.162858 - 5.88785*10^-17 I *)\n\n$Version (* \"12.2.0 for Mac OS X x86 (64-bit) (December 12, 2020)\" *) The Limit command throws a Limit::ztest warning though, which may indicate some trouble. • Roman(@ does not work.):+1. Thank you. f[a_] returns a big expression with many PolyLogs. I think Rubi finds that integral in such a way. Also that works with 0<a<=1 too, but Mathematica has problems with its evaluating for a->1: it returns (0.08838834764831844055010555+0.08838834764831844055010555 I) ((-0.9908537560314543211012304-0.1349401132293848328673948 I) \\[Infinity]+(0.1349401132293848328673948+0.9908537560314543211012304 I) \\[Infinity]+(0.5388161543132108251339676-0.8424233804038929803675840 I) \\[Infinity]+... Feb 27, 2021 at 13:24 • @user64494 To quote the commenting instructions: \"Note that the author of the post will always be notified of any new comment. You may still use it for clarity, if needed; however, if there are no comments, or only you or the author have commented on the post so far, the @name will be automatically removed from the beginning of the comment, as it adds no value. (To avoid breaking sentences, mentions not at the beginning will not be removed.)\" Feb 27, 2021 at 14:40 • @user64494 Concerning evaluation at$a=1$: yes, that's precisely why I used a Limit instead of simply asking for f. Keep in mind that Mathematica returns general results that may be inapplicable on some special points, like$a=1$in this case even if you specify the assumption$0<a\\le1$. Feb 27, 2021 at 14:52 • As I understand it, f[a_] produces a very long expression with many compound functions. Its evaluation exeeds some internal Mathematica limitations (something like $MaxIterations=100) and this causes problems. Maybe, it is possible to increase these limitations. Feb 27, 2021 at 16:16" ]
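The disagreement between the symbolic and numerical results is easy to double-check with an independent quadrature. The sketch below is a Python/SciPy check, added only for verification and not part of the original question or answers; it assumes SciPy is installed and simply confirms a value near -0.1629, agreeing with NIntegrate and the Maple result.

```python
# Independent numerical check of the integral from the question:
#   integral from 0 to 1 of x^2 * log(1 - x^4) / (1 + x^4) dx
import math
from scipy.integrate import quad

def integrand(x):
    return x**2 * math.log(1.0 - x**4) / (1.0 + x**4)

# The Gauss-Kronrod nodes used by quad stay strictly inside (0, 1), so the
# logarithmic endpoint singularity at x = 1 is never evaluated directly.
value, abs_err = quad(integrand, 0.0, 1.0)
print(value, abs_err)  # expected: roughly -0.162858
```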
[ null, "https://i.stack.imgur.com/mgeIH.png", null, "https://i.stack.imgur.com/7cuBg.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.69159037,"math_prob":0.985791,"size":1710,"snap":"2022-40-2023-06","text_gpt3_token_len":741,"char_repetition_ratio":0.25732708,"word_repetition_ratio":0.08733624,"special_character_ratio":0.45964912,"punctuation_ratio":0.098143235,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99735004,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-04T03:32:38Z\",\"WARC-Record-ID\":\"<urn:uuid:a27e98cf-aac9-4882-9506-ef13aa500874>\",\"Content-Length\":\"279832\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5da6eb9c-4423-4f74-8bcd-3980a58fd6c8>\",\"WARC-Concurrent-To\":\"<urn:uuid:a7350bfb-fb4b-45ce-8125-5e5bd78429a0>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://mathematica.stackexchange.com/questions/240669/why-the-difference-in-the-results-of-integrate-and-nintegrate\",\"WARC-Payload-Digest\":\"sha1:BCLR2OJJFC74RU7W5KPTLUFAT74E46IC\",\"WARC-Block-Digest\":\"sha1:DW5XQN4QVJE2NWWRBLZF5EA5AP7KTLLD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337473.26_warc_CC-MAIN-20221004023206-20221004053206-00392.warc.gz\"}"}
https://discourse.mozilla.org/t/help-needed-with-test-your-skills-arrays-3/54367
[ "", null, "# Help needed with Test Your Skills: Arrays 3;\n\nHello everyone. I’m working on Arrays 3, and I’m stuck on the third exercise:\n\nGo over each item in the array and add its index number after the name inside parentheses, for example `Ryu (0)` . Note that we don’t teach how to do this in the Arrays article, so you’ll have to do some research.\n\n(let me know if the link is broken).\n\nMy approach has been to loop over the array, and add the parenthesis besides each item in the array.\n\nHow can I find the index of each item in the array, and then loop over my array so as to add the index number besides each name.\n\nAlso, how can I put the index number for each item inside a parenthesis, and then add that to the Array.\n\nI tried various things, including indexOf but that will only return the index of a specific item you are searching for, what if the array contains hundreds of items? I essentially want to find the final index, then starting at 0, loop over the array and add (i), incrementing by 1 each time until I reach the final item in the array.\n\nLet me know if anything is unclear or if you have any questions! Thank you in advance.\n\nHi there @Hassan_Garshasb! You are definitely on the right track with this.\n\nI’ll give you a clue — `forEach()` is your friend!\n\n2 Likes\n\nHi Chris! Thanks for your response. I’ve gotten mostly there, i wrote this forEach loop\n\narray.forEach(function(element, i) {\narray[i] = array[i] + ’ ’ + (i);\n});\n\nWhich gives me the name + the index number besides it, but how can I get the index number inside the parenthesis? I googled around but with no luck.\n\nYou are definitely getting close. Tell you what, have a look at our answer to see what we did: https://github.com/mdn/learning-area/blob/master/javascript/introduction-to-js-1/tasks/arrays/marking.md#task-3\n\n2 Likes\n\nlet me know what you guys think of my answer ?\nwhether it is the right or wrong?\n\nlet myArray = [ “Ryu”, “Ken”, “Chun-Li”, “Cammy”, “Guile”, “Sakura”, “Sagat”, “Juri” ];\n\nmyArray.pop();\nmyArray.push(“divya”,“dipen”);\n\nfor(x = 0 ; x < myArray.length; x++){\nmyArray[x] =`\\${myArray[x]} (\\${x})`;\n}\n\nlet myString = myArray.join(\"-\");\n\n2 Likes\n\nHi @divyabk54, and welcome to our community!\n\nOne thing — I did have to add backticks round the template literal inside the for loop, as the code wasn’t working for me, but then I realised that Discourse had eaten them", null, "In future, please consider putting your code into an online code editor like codepen, and then sharing the URL with us — this makes it much easier to test and debug, and it also avoids such problems.\n\nAll the best,\n\nChris\n\n1 Like\n\nThis is how I did mine using forEach()\n\nlet myArray3 = [ “Ryu”, “Ken”, “Chun-Li”, “Cammy”, “Guile”, “Sakura”, “Sagat”, “Juri” ];\n\nmyArray3.pop();\n\nmyArray3.push(‘Alexander’, ‘Rengkat’);\n\nlet finalArry = [];\n\nmyArray3.forEach((myArr3, index) => {\n\nlet newArr = `\\${myArr3} (\\${index})`;\n\nfinalArry.push(newArr)\n\n});\n\nlet jointhem = finalArry.join(’-’);\n\nconsole.log(jointhem);\n\n1 Like\n\nHi Chris, thanks for posting your solution. I have a question about the syntax in the .forEach solution.\n\nI understand everything up until the myArray[index] =newElement portion of the code.\n\nWhy isn’t the variable declaration first (ie. newElement = myArray[index])\n\nWhat is the myArray[index] portion of the code doing?\n\nWould appreciate any light you can shed. 
Thank you!\n\nHello @wilford.amy\n\nmyArray already declared earlier on that line\n\n``````let myArray = [ \"Ryu\", \"Ken\", \"Chun-Li\", \"Cammy\", \"Guile\", \"Sakura\", \"Sagat\", \"Juri\" ];\n``````\n\n``````let newElement = `\\${ element } (\\${index})`;\n``````\n\nand for this one\n\n``````myArray[index] = newElement;\n``````\n\nit set the array element at index to be newElement\n\nhope that help and have a nice day", null, "1 Like\n\nThat does help! Thank you.\n\nyou very welcome", null, "let myArray = [ “Ryu”, “Ken”, “Chun-Li”, “Cammy”, “Guile”, “Sakura”, “Sagat”, “Juri” ];\n\nmyArray.pop();\nmyArray.push(‘shahhussain’,“bobo”);\nfor(let i=0;i<myArray.length;i++){\n\nmyArray[i] +=`(\\${i})`\n\n}\nlet myString=myArray.join(\"-\");\n// Don’t edit the code below here!\n\n1 Like\n\nHello @Shah_Hussain\n\nyou doing great well done\n\nand have a nice day", null, "Hey guys, here is my solution!\n\nmyArray.pop();\nmyArray.push(“Blanka”);\nmyArray.push(“Dallsing”);\nfor (let i = 0; i < myArray.length; i++) {\nmyArray[i] += `(\\${[i]})`;\n}\n\nconst myString = myArray.join(’ - ');\n\n1 Like\n\nHello @Felipe_Vieira\n\nyou doing great well done you just need to add extra space between the index number and the element\nso it will be\n` myArray[i] += ` (\\${[i]})` ;`\n\nhope that help and have a nice day", null, "1 Like\n\n@chrisdavidmills Hello, i tried to do mine using the for…of loop. It doesn’t seem to give the proper result, could you help me?\n\nhere is what i tried:\n\nlet myArray = [ “Ryu”, “Ken”, “Chun-Li”, “Cammy”, “Guile”, “Sakura”, “Sagat”, “Juri” ];\nmyArray.pop();\n\nmyArray.push(‘Eddy’, ‘Marc’);\n\nfor(let name of myArray){\nlet dex = myArray.indexOf(name);\nname = `\\${name} (\\${dex})`;\n}\nlet myString = myArray;\n\nHi @Eddy_Marc_Henri_Ndonga and welcome to the community", null, "The `name` variable in the loop is temporary. you can’t change the array values with it. You need to assign the value to `myArray[dex]`.\n\nMichael\n\nHello.\nHere is my solution:\nlet finalArray = [];\n\n``````for (let [i, array] of myArray3.entries()) {\n\nlet newArray = ` \\${array}(\\${i + 1}) `;\n\nfinalArray.push(newArray);\n\n}\n\nlet myString3 = finalArray.join(\"-\");\n\nconsole.log(myString3);``````\n\nHi @milan.z and welcome to the community", null, "You’re code is correct. Well done!\nThe only thing that I would change is the variable `newArray`. It’s a bit misleading as it contains a string.\n\nHave a nice day,\nMichael\n\n1 Like\n\nThanks, @mikoMK for the suggestion I gonna apply it.\n\n1 Like" ]
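For comparison only, the pattern the thread converges on (pair each element with its index, append the index in parentheses, then join with hyphens) can be sketched in Python as well; the list of names is made up for the illustration.

```python
# Same idea as the forEach solutions above: pair each element with its index,
# append "(index)" to the element, then join the results with "-".
fighters = ["Ryu", "Ken", "Chun-Li", "Cammy"]

labelled = [f"{name} ({i})" for i, name in enumerate(fighters)]
result = " - ".join(labelled)
print(result)  # Ryu (0) - Ken (1) - Chun-Li (2) - Cammy (3)
```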
[ null, "https://discourse-prod-uploads-81679984178418.s3.dualstack.us-west-2.amazonaws.com/original/2X/a/a8c5bb703f44979bd8d93af91e6a3b62ba9f2a45.png", null, "https://cdn.discourse-prod.itsre-apps.mozit.cloud/images/emoji/twitter/wink.png", null, "https://cdn.discourse-prod.itsre-apps.mozit.cloud/images/emoji/twitter/slight_smile.png", null, "https://cdn.discourse-prod.itsre-apps.mozit.cloud/images/emoji/twitter/slight_smile.png", null, "https://cdn.discourse-prod.itsre-apps.mozit.cloud/images/emoji/twitter/slight_smile.png", null, "https://cdn.discourse-prod.itsre-apps.mozit.cloud/images/emoji/twitter/slight_smile.png", null, "https://cdn.discourse-prod.itsre-apps.mozit.cloud/images/emoji/twitter/wave.png", null, "https://cdn.discourse-prod.itsre-apps.mozit.cloud/images/emoji/twitter/wave.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8613745,"math_prob":0.65871054,"size":1137,"snap":"2022-40-2023-06","text_gpt3_token_len":270,"char_repetition_ratio":0.15357459,"word_repetition_ratio":0.0,"special_character_ratio":0.2348285,"punctuation_ratio":0.10843374,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95372057,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,null,null,1,null,null,null,null,null,null,null,null,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-27T05:30:53Z\",\"WARC-Record-ID\":\"<urn:uuid:dc5895ef-b774-4d6a-b92e-b9b48e32ee70>\",\"Content-Length\":\"61537\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e122b912-d1f5-4ad5-9382-109ff2ce720a>\",\"WARC-Concurrent-To\":\"<urn:uuid:55fa1bdb-171e-4358-aff9-54edc62f779a>\",\"WARC-IP-Address\":\"52.32.251.249\",\"WARC-Target-URI\":\"https://discourse.mozilla.org/t/help-needed-with-test-your-skills-arrays-3/54367\",\"WARC-Payload-Digest\":\"sha1:QMYKBMCKMYKPWKVG6VJSTIQZEHHBPDUW\",\"WARC-Block-Digest\":\"sha1:J6DOK2IRTDD3Q3CZM2KLOOJEES5JTRMQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764494936.89_warc_CC-MAIN-20230127033656-20230127063656-00192.warc.gz\"}"}
https://eed3si9n.com/herding-cats/checking-laws-with-discipline.html
[ "### Checking laws with Discipline\n\nThe compiler can’t check for the laws, but Cats ships with a `FunctorLaws` trait that describes this in code:\n\n``````/**\n* Laws that must be obeyed by any [[Functor]].\n*/\ntrait FunctorLaws[F[_]] extends InvariantLaws[F] {\nimplicit override def F: Functor[F]\n\ndef covariantIdentity[A](fa: F[A]): IsEq[F[A]] =\nfa.map(identity) <-> fa\n\ndef covariantComposition[A, B, C](fa: F[A], f: A => B, g: B => C): IsEq[F[C]] =\nfa.map(f).map(g) <-> fa.map(f andThen g)\n}\n``````\n\n#### Checking laws from the REPL\n\nThis is based on a library called Discipline, which is a wrapper around ScalaCheck. We can run these tests from the REPL with ScalaCheck.\n\n``````scala> import cats._, cats.data._, cats.implicits._\nimport cats._\nimport cats.data._\nimport cats.implicits._\n\nscala> import cats.laws.discipline.FunctorTests\nimport cats.laws.discipline.FunctorTests\n\nscala> val rs = FunctorTests[Either[Int, ?]].functor[Int, Int, Int]\nrs: cats.laws.discipline.FunctorTests[[X_kp1]scala.util.Either[Int,X_kp1]]#RuleSet = cats.laws.discipline.FunctorTests\\$\\$anon\\$2@7993373d\n\nscala> rs.all.check\n+ functor.covariant composition: OK, passed 100 tests.\n+ functor.covariant identity: OK, passed 100 tests.\n+ functor.invariant composition: OK, passed 100 tests.\n+ functor.invariant identity: OK, passed 100 tests.\n``````\n\n`rs.all` returns `org.scalacheck.Properties`, which implements `check` method.\n\n#### Checking laws with Discipline + Specs2\n\nYou can also bake your own cake pattern into a test framework of choice. Here’s for specs2:\n\n``````package example\n\nimport org.specs2.Specification\nimport org.typelevel.discipline.specs2.Discipline\nimport cats.instances.AllInstances\nimport cats.syntax.AllSyntax\n\ntrait CatsSpec extends Specification with Discipline with AllInstances with AllSyntax\n``````\n\nCats’ source include one for ScalaTest.\n\nThe spec to check the functor law for `Either[Int, Int]` looks like this:\n\n``````package example\n\nimport cats._\nimport cats.laws.discipline.FunctorTests\n\nclass EitherSpec extends CatsSpec { def is = s2\"\"\"\nEither[Int, ?] forms a functor \\$e1\n\"\"\"\n\ndef e1 = checkAll(\"Either[Int, Int]\", FunctorTests[Either[Int, ?]].functor[Int, Int, Int])\n}\n``````\n\nThe `Either[Int, ?]` is using non/kind-projector. 
Running the test from sbt displays the following output:\n\n``````> test\n[info] EitherSpec\n[info]\n[info]\n[info] functor laws must hold for Either[Int, Int]\n[info]\n[info] + functor.covariant composition\n[info] + functor.covariant identity\n[info] + functor.invariant composition\n[info] + functor.invariant identity\n[info]\n[info]\n[info] Total for specification EitherSpec\n[info] Finished in 14 ms\n[info] 4 examples, 400 expectations, 0 failure, 0 error\n[info] Passed: Total 4, Failed 0, Errors 0, Passed 4\n``````\n\n#### Breaking the law\n\nLYAHFGG:\n\nLet’s take a look at a pathological example of a type constructor being an instance of the Functor typeclass but not really being a functor, because it doesn’t satisfy the laws.\n\nLet’s try breaking the law.\n\n``````package example\n\nimport cats._\n\nsealed trait COption[+A]\ncase class CSome[A](counter: Int, a: A) extends COption[A]\ncase object CNone extends COption[Nothing]\n\nobject COption {\nimplicit def coptionEq[A]: Eq[COption[A]] = new Eq[COption[A]] {\ndef eqv(a1: COption[A], a2: COption[A]): Boolean = a1 == a2\n}\nimplicit val coptionFunctor = new Functor[COption] {\ndef map[A, B](fa: COption[A])(f: A => B): COption[B] =\nfa match {\ncase CNone => CNone\ncase CSome(c, a) => CSome(c + 1, f(a))\n}\n}\n}\n``````\n\nHere’s how we can use this:\n\n``````scala> import cats._, cats.data._, cats.implicits._\nimport cats._\nimport cats.data._\nimport cats.implicits._\nscala> import example._\nimport example._\nscala> (CSome(0, \"ho\"): COption[String]) map {identity}\nres0: example.COption[String] = CSome(1,ho)``````\n\nThis breaks the first law because the result of the `identity` function is not equal to the input. To catch this we need to supply an “arbitrary” `COption[A]` implicitly:\n\n``````package example\n\nimport cats._\nimport cats.laws.discipline.{ FunctorTests }\nimport org.scalacheck.{ Arbitrary, Gen }\n\nclass COptionSpec extends CatsSpec {\nimplicit def coptionArbiterary[A](implicit arbA: Arbitrary[A]): Arbitrary[COption[A]] =\nArbitrary {\nval arbSome = for {\ni <- implicitly[Arbitrary[Int]].arbitrary\na <- arbA.arbitrary\n} yield (CSome(i, a): COption[A])\nval arbNone = Gen.const(CNone: COption[Nothing])\nGen.oneOf(arbSome, arbNone)\n}\n\ndef is = s2\"\"\"\nCOption[Int] forms a functor \\$e1\n\"\"\"\n\ndef e1 = checkAll(\"COption[Int]\", FunctorTests[COption].functor[Int, Int, Int])\n}\n``````\n\nHere’s the output:\n\n``````[info] COptionSpec\n[info]\n[info]\n[info] functor laws must hold for COption[Int]\n[info]\n[info] x functor.covariant composition\n[error] A counter-example is [CSome(-1,-1), <function1>, <function1>] (after 0 try)\n[error] (CSome(1,1358703086) ?== CSome(0,1358703086)) failed\n[info]\n[info] x functor.covariant identity\n[error] A counter-example is 'CSome(1781926821,82888113)' (after 0 try)\n[error] (CSome(1781926822,82888113) ?== CSome(1781926821,82888113)) failed\n[info]\n[info] x functor.invariant composition\n[error] A counter-example is [CSome(-17878015,0), <function1>, <function1>, <function1>, <function1>] (after 1 try)\n[error] (CSome(-17878013,-1351608161) ?== CSome(-17878014,-1351608161)) failed\n[info]\n[info] x functor.invariant identity\n[error] A counter-example is 'CSome(-1699259031,1)' (after 0 try)\n[error] (CSome(-1699259030,1) ?== CSome(-1699259031,1)) failed\n[info]\n[info]\n[info]\n[info] Total for specification COptionSpec\n[info] Finished in 13 ms\n[info] 4 examples, 4 failures, 0 error\n``````\n\nThe tests failed as expected." ]
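The law-checking idea itself, generating random inputs and asserting the law equations, does not depend on ScalaCheck or Discipline. Below is a small Python stand-in, an illustration only and not part of Cats, that randomly tests the identity law for a COption-like type whose map bumps a counter and therefore fails, just as the Scala example does.

```python
import random

class CSome:
    """A deliberately law-breaking option type: map bumps a counter."""
    def __init__(self, counter, value):
        self.counter, self.value = counter, value
    def map(self, f):
        return CSome(self.counter + 1, f(self.value))  # counter + 1 breaks the laws
    def __eq__(self, other):
        return isinstance(other, CSome) and (self.counter, self.value) == (other.counter, other.value)
    def __repr__(self):
        return f"CSome({self.counter}, {self.value})"

def check_identity_law(trials=100):
    """Random testing of the covariant identity law: fa.map(identity) == fa."""
    failures = []
    for _ in range(trials):
        fa = CSome(random.randint(-100, 100), random.randint(-100, 100))
        if fa.map(lambda a: a) != fa:
            failures.append(fa)
    return failures

bad = check_identity_law()
example = bad[0]
mapped = example.map(lambda a: a)
print(f"{len(bad)} of 100 identity-law checks failed; e.g. {example} maps to {mapped}")
```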
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.54312855,"math_prob":0.9082778,"size":5318,"snap":"2020-45-2020-50","text_gpt3_token_len":1551,"char_repetition_ratio":0.17030485,"word_repetition_ratio":0.080882356,"special_character_ratio":0.29879653,"punctuation_ratio":0.20599613,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97683626,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-29T16:35:42Z\",\"WARC-Record-ID\":\"<urn:uuid:cb96fc60-3953-46e8-9330-061b684c726b>\",\"Content-Length\":\"19129\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cbb9d3ca-0607-48a3-b3ef-c80fdd3547fa>\",\"WARC-Concurrent-To\":\"<urn:uuid:5bd233d5-8514-4121-b7e2-7329bf5c4dd0>\",\"WARC-IP-Address\":\"34.207.146.135\",\"WARC-Target-URI\":\"https://eed3si9n.com/herding-cats/checking-laws-with-discipline.html\",\"WARC-Payload-Digest\":\"sha1:75YLZ7R4IGROKJKJMJUR3V352WTVSZKL\",\"WARC-Block-Digest\":\"sha1:LSQ5MHJEQCPVIMVG4CMFSH4MKNHPP4RH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107904834.82_warc_CC-MAIN-20201029154446-20201029184446-00033.warc.gz\"}"}
https://im.kendallhunt.com/HS/teachers/1/6/5/index.html
[ "# Lesson 5\n\nBuilding Quadratic Functions to Describe Situations (Part 1)\n\n## 5.1: Notice and Wonder: An Interesting Numerical Pattern (5 minutes)\n\n### Warm-up\n\nThe purpose of this warm-up is to elicit the idea that the values in the table have a predictable pattern, which will be useful when students consider the context of a falling object in a later activity. While students may notice and wonder many things about this table, the patterns are the important discussion points, rather than trying to find a rule for the function. Because the rule is not easy to uncover, studying the numbers ahead of time should prove helpful as students analyze the function later.\n\nThis prompt gives students opportunities to see and make use of structure (MP7). The specific structure they might notice is all the $$y$$ values are multiples of 16 and perfect squares. Some may notice the pattern is not linear and wonder whether it is quadratic.\n\n### Student Facing\n\nStudy the table. What do you notice? What do you wonder?\n\n $$x$$ $$y$$ 0 1 2 3 4 5 0 16 64 144 256 400\n\n### Activity Synthesis\n\nInvite students to share their observations and questions. Record and display them for all to see.\n\nAfter all responses are recorded, tell students that they will investigate these values more closely in upcoming activities.\n\n## 5.2: Falling from the Sky (15 minutes)\n\n### Activity\n\nThe motion of a falling object is commonly modeled with a quadratic function. This activity prompts students to build a very simple quadratic model using given time-distance data of a free-falling rock. By reasoning repeatedly about the values in the data, students notice regularity in the relationship between time and the vertical distance the object travels, which they then generalize as an expression with a squared variable (MP8). The work here prepares students to make sense of more complex quadratic functions later (that is, to model the motion of an object that is launched up and then returns to the ground).\n\n### Launch\n\nDisplay the image of the falling object for all to see. Students will recognize the numbers from the warm-up. Invite students to make some other observations about the information. Ask questions such as:\n\n• “What do you think the numbers tell us?”\n• “Does the object fall the same distance every successive second? How do you know?”\n\nArrange students in groups of 2. Tell students to think quietly about the first question and share their thinking with a partner. Afterward, consider pausing for a brief discussion before proceeding to the second question.\n\nSpeaking, Reading: MLR5 Co-Craft Questions. Begin the launch by displaying only the context and the diagram of the building. Give students 1–2 minutes to write their own mathematical questions about the situation before inviting 3–4 students to share their questions with the whole-class. Listen for and amplify any questions involving the relationship between elapsed time and the distance that a falling object travels.\nDesign Principle(s): Maximize meta-awareness; Cultivate conversation\n\n### Student Facing\n\nA rock is dropped from the top floor of a 500-foot tall building. A camera captures the distance the rock traveled, in feet, after each second.\n\n1. How far will the rock have fallen after 6 seconds? Show your reasoning.\n2. 
Jada noticed that the distances fallen are all multiples of 16.\n\nShe wrote down:\n\n\\displaystyle \\begin {align}16 &= 16 \\boldcdot 1\\\\64 &= 16 \\boldcdot 4\\\\144 &= 16 \\boldcdot 9\\\\256 &= 16 \\boldcdot 16\\\\400 &=16 \\boldcdot 25 \\end {align}\nThen, she noticed that 1, 4, 9, 16, and 25 are $$1^2, 2^2, 3^2, 4^2$$ and $$5^2$$.\n\n1. Use Jada’s observations to predict the distance fallen after 7 seconds. (Assume the building is tall enough that an object dropped from the top of it will continue falling for at least 7 seconds.) Show your reasoning.\n2. Write an equation for the function, with $$d$$ representing the distance dropped after $$t$$ seconds.\n\n### Anticipated Misconceptions\n\nSome students may question why the distances are positive when the rock is falling. In earlier grades, negative numbers represented on a vertical number line may have been associated with an arrow pointing down. Emphasize that the values shown in the picture measure how far the rock fell and not the direction it is falling.\n\n### Activity Synthesis\n\nDiscuss the equation students wrote for the last question. If not already mentioned by students, point out that the $$t^2$$ suggests a quadratic relationship between elapsed time and the distance that a falling object travels. Ask students:\n\n• “How do you know that the equation $$d=16t^2$$ represents a function?” (For every input of time, there is a particular output.)\n• “Suppose we want to know if the rock will travel 600 feet before 6 seconds have elapsed. How can we find out?” (Find the value of $$d$$ when $$t$$ is 6, which is $$16 \\boldcdot 6^2$$ or 576 feet.)\n\nExplain to students that we only have a few data points to go by in this case, and the quadratic expression $$16t^2$$ is a simplified model, but quadratic functions are generally used to model the movement of falling objects. We will see this expression appearing in some other contexts where gravity affects the quantities being studied.\n\n## 5.3: Galileo and Gravity (15 minutes)\n\n### Activity\n\nIn this activity, students continue to explore how quadratic functions can model the movement of a falling object. They evaluate the function seen earlier ($$d=16t^2$$) at a non-integer input, and then build a new function to represent the distance from the ground of a falling object $$t$$ seconds after it is dropped. To find a new expression that describes the height of the object, students reason repeatedly about the height of the object at different times and look for regularity in their reasoning (MP8).\n\nThe number 576 is chosen as the height (in feet) from which the object is dropped to make it more apparent for students that the values in the two tables (distance fallen and distance from ground) record distances measured from opposite ends. (Any value of $$16t^2$$ at a whole-number $$t$$ could work. In this case $$t=6$$ is selected.)\n\n### Launch\n\nArrange students in groups of 2, and suggest that they check in with each other after trying each question. To facilitate peer discussion, consider displaying sentence stems or questions that students could use, such as:\n\n• “Why do you think the object will have fallen that amount in 0.5 seconds?”\n• “How do you think the values in the first table are changing? What about in the second table?”\n• “How are the two tables alike? How are they different?”\nConversing: MLR2 Collect and Display. 
As students discuss their expressions with a partner, listen for and collect the language students use to identify and describe what is the same and what is different between Elena and Diego’s tables. Write the students’ words and phrases on a visual display and update it with connections to the graphs introduced during the synthesis. Remind students to borrow language from the display as needed. This will help students read and use mathematical language during their paired and whole-group discussions.\nDesign Principle(s): Optimize output (for explanation); Maximize meta-awareness\n\n### Student Facing\n\nGalileo Galilei, an Italian scientist, and other medieval scholars studied the motion of free-falling objects. The law they discovered can be expressed by the equation  $$d = 16 \\boldcdot t^2$$, which gives the distance fallen in feet, $$d$$, as a function of time, $$t$$, in seconds.\n\nAn object is dropped from a height of 576 feet.\n\n1. How far does it fall in 0.5 seconds?\n2. To find out where the object is after the first few seconds after it was dropped, Elena and Diego created different tables.\n\nElena’s table:\n\ntime (seconds) distance fallen\n(feet)\n0 0\n1 16\n2 64\n3\n4\n$$t$$\n\nDiego’s table:\n\ntime (seconds) distance from the ground (feet)\n0 576\n1 560\n2 512\n3\n4\n$$t$$\n1. How are the two tables are alike? How are they different?\n2. Complete Elena’s and Diego’s tables. Be prepared to explain your reasoning.\n\n### Student Facing\n\n#### Are you ready for more?\n\nGalileo correctly observed that gravity causes objects to fall in a way where the distance fallen is a quadratic function of the time elapsed. He got a little carried away, however, and assumed that a hanging rope or chain could also be modeled by a quadratic function.\n\nHere is a graph of such a shape (called a catenary) along with a table of approximate values.\n\n $$x$$ $$y$$ -4 -3 -2 -1 0 1 2 3 4 7.52 4.7 3.09 2.26 2 2.26 3.09 4.7 7.52\n\nShow that an equation of the form $$y=ax^2+b$$ cannot model this data well.\n\n### Activity Synthesis\n\nTo help students make sense of the two functions, compare and contrast their representations (tables, equations, and graphs) and discuss the connections between them. Ask questions such as:\n\n• “How did you complete the missing values in the first table?” (Substituting 3 and 4 for $$t$$ in $$16t^2$$ gives the distances fallen after 3 and 4 seconds.)\n• “What about those in the second table?” (The distance from the ground is 576 minus the distance fallen, so we can use the values for $$t=3$$ and $$t=4$$ from the first table to calculate the values in the second table.)\n• “Why do the values in the first table increase and those in the other table decrease?” (The distance from the top of the building increases as the object falls farther and farther away. The distance from the ground decreases as the object falls closer and closer to it.)\n• “The expression representing the distance fallen shows $$16t^2$$ and the other shows $$576 - 16t^2$$. Why is that?” (In the first function, the distance fallen, measured from where the object is dropped, will always be positive. In the second function, what’s measured is the height from the ground, so the distance fallen needs to be subtracted from the height of the building.)\n• “If we graph the two equations that represent distance fallen and distance from the ground over time, what would the graphs look like? 
Try sketching the graphs.”\n\nDisplay graphs that represent the two functions and make sure students can interpret them. For example, they should see that the $$y$$-intercept of each graph corresponds to the starting value of each function before the object is dropped.\n\nThey should also notice that the difference in distance between successive seconds gets larger in both cases, hence the curving graphs. (If the differences were constant, the graphs would have been straight lines.)\n\nDisplay the embedded applet for all to see. Ask students how the graph of the height of the object is related to the path that the object takes as it falls.\n\nRepresentation: Internalize Comprehension. Use color-coding and annotations to highlight connections between representations in a problem. Use color-coding to illustrate how the values in each table correspond to the values in each graph. Some students may benefit from access to physical copies of the graphs that they can annotate for themselves.\nSupports accessibility for: Visual-spatial processing; Conceptual processing\n\n## Lesson Synthesis\n\n### Lesson Synthesis\n\nTo highlight the key ideas from this lesson and the connections to earlier lessons, discuss questions such as:\n\n• “We used two different functions to describe the movement of a falling object. One function measured the distance the object traveled from its starting point, and the other measured its distance from the ground. How are the representations of these functions alike and different?” (The equations both have $$16t^2$$, but one is positive and the other negative. Their graphs are both curves, but one graph curves upward and the other downward. The values in one table shows increasing values and the other shows decreasing values, but they change by the same amounts from row to row.)\n• “How are these functions like or unlike those representing visual patterns in earlier lessons?” (They can all be represented by quadratic expressions. The relationships between the step number and the number of squares or dots were easier to see. The relationships between time and distance are not as obvious.)\n• “How are the graphs representing falling objects like or unlike those representing visual patterns?” (The graphs representing the patterns curved upward. They showed plotted points at whole-number inputs because non-whole-number steps would not make sense. In this lesson, we saw graphs that curved upward and downward. The graphs can be continuous, because we can measure the distances even when the number of seconds is fractional.)\n\n## Student Lesson Summary\n\n### Student Facing\n\nThe distance traveled by a falling object in a given amount of time is an example of a quadratic function. Galileo is said to have dropped balls of different mass from the Leaning Tower of Pisa, which is about 190 feet tall, to show that they travel the same distance in the same time. In fact the equation $$d = 16t^2$$ models the distance $$d$$, in feet, that the cannonball falls after $$t$$ seconds, no matter what its mass.\n\nBecause $$16 \\boldcdot 4^2 = 256$$, and the tower is only 190 feet tall, the cannonball hits the ground before 4 seconds.\n\nHere is a table showing how far the cannonball has fallen over the first few seconds.\n\ntime (seconds) distance fallen (feet)\n0 0\n1 16\n2 64\n3 144\n\nHere are the time and distance pairs plotted on a coordinate plane:\n\nNotice that the distance fallen is increasing each second. 
The average rate of change is increasing each second, which means that the cannonball is speeding up over time. This comes from the influence of gravity, which is represented by the quadratic expression $$16t^2$$. It is the exponent 2 in that expression that makes it increase by larger and larger amounts.\n\nAnother way to study the change in the position of the cannonball is to look at its distance from the ground as a function of time.\n\nHere is a table showing the distance from the ground in feet at 0, 1, 2, and 3 seconds.\n\ntime (seconds) distance from the ground (feet)\n0 190\n1 174\n2 126\n3 46\n\nHere are the time and distance pairs plotted on a coordinate plane:\n\nThe expression that defines the distance from the ground as a function of time is $$190 - 16t^2$$. It tells us that the cannonball's distance from the ground is 190 feet before it is dropped and has decreased by $$16t^2$$ when $$t$$ seconds have passed." ]
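The two functions in the lesson, distance fallen ($$16t^2$$) and distance from the ground ($$190 - 16t^2$$), are easy to tabulate directly. The short Python sketch below uses the 190-foot tower height from the summary and regenerates both tables along with the growing first differences that explain why the graphs curve.

```python
# Reproduce the lesson's tables for d = 16 t^2 and height = 190 - 16 t^2.
tower_height = 190  # feet, as in the Leaning Tower of Pisa example

def distance_fallen(t):
    return 16 * t**2

for t in range(4):
    fallen = distance_fallen(t)
    print(t, fallen, tower_height - fallen)   # 0 0 190, 1 16 174, 2 64 126, 3 144 46

# Successive differences (16, 48, 80) grow by 32 feet each second,
# which is why the graphs curve instead of forming straight lines.
diffs = [distance_fallen(t + 1) - distance_fallen(t) for t in range(3)]
print(diffs)  # [16, 48, 80]
```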
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92079836,"math_prob":0.9866207,"size":13675,"snap":"2023-40-2023-50","text_gpt3_token_len":3127,"char_repetition_ratio":0.15214688,"word_repetition_ratio":0.06236466,"special_character_ratio":0.24007313,"punctuation_ratio":0.091084525,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.996266,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-05T02:50:50Z\",\"WARC-Record-ID\":\"<urn:uuid:5fd3b5ed-32d3-4c32-b838-1fbc190e6c8b>\",\"Content-Length\":\"138697\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7f78f664-312c-408a-9726-d1773c897db8>\",\"WARC-Concurrent-To\":\"<urn:uuid:5fa1919d-13b7-4ce7-ad58-0b21d7ea9d22>\",\"WARC-IP-Address\":\"52.20.78.240\",\"WARC-Target-URI\":\"https://im.kendallhunt.com/HS/teachers/1/6/5/index.html\",\"WARC-Payload-Digest\":\"sha1:KC6DGIMRWS73WAM7ZEBOXZU2GWOPOLTA\",\"WARC-Block-Digest\":\"sha1:ZY4J62N5YMWG4CBYGEPRSKAI2F57IXHA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511717.69_warc_CC-MAIN-20231005012006-20231005042006-00320.warc.gz\"}"}
https://www.ethans.co.in/en/data-science-training-in-noida-delhi
[ "## Data Science Training in Noida", null, "# Data Science Course in Noida Delhi\n\nOur Data Science course is divided into two Streams:\n\nData Science using R - Includes R Programming + Statistics  + Machine Learning with R + Tableau for Data Visualization (Duration - 70 Hours)\n\nData Science using Python Python Programming + Machine Learning/Artificial Intelligence (Duration - 100 Hours)\n\nPrerequisites: Basic computer knowledge, any data related experience will be advantageous.\n\n### After the Data Science training classes in Noida:\n\nEthans Data science modules are precisely designed ensuring all industry requirements are met & making students eligible for plethora of job openings in the field of data analytics. Students will easily crack interviews on Business analytics, data visualization (Tableau), Python related positions. Any interview for entry level data analyst position would be a cake walk for the candidates.\n\n### Who get Data Science Training in Noida?\n\n• Any graduate/post graduate or students in final stages of graduation + People willing to align careers in analytics\n• Team leaders working with data and often need basic data analysis\n• Engineers looking for career opportunities in IT/ITES industry\n• Management students looking for strategic positions\n• People already working with huge datasets\n• CA, CS, CFA\n\n## Syllabus\n\nLearn Data Science from the Experts – the best\n\n## Data Science Classes in Noida\n\nData Science Course  Syllabus:\n\nData Science with R (Detailed Syllabus)\n\nIntroduction to Data Science\n\n• Introduction to Data Analytics\n• Data types and data Models\n• Evolution of Analytics\n• Data Science Components\n• Data Scientist Skillset\n• Univariate Data Analysis\n• Introduction to Sampling\n\nBasic Operations in R Programming\n\n• Introduction to R programming\n• Types of Objects in R\n• Naming standards in R\n• Creating Objects in R\n• Data Structure in R\n• Matrix, Data Frame, String, Vectors\n• Understanding Vectors & Data input in R\n• Lists, Data Elements\n• Creating Data Files using R\n\nData Handling in R Programming\n\n• Basic Operations in R – Expressions, Constant Values, Arithmetic, Function Calls, Symbols\n• Sub-setting Data\n• Selecting (Keeping) Variables\n• Excluding (Dropping) Variables\n• Selecting Observations and Selection using Subset Function\n• Merging Data\n• Sorting Data\n• Visualization using R\n• Data Type Conversion\n• Built-In Numeric Functions\n• Built-In Character Functions\n• User Built Functions\n• Control Structures\n• Loop Functions\n\nIntroduction to Statistics\n\n• Basic Statistics\n• Measure of central tendency\n• Types of Distributions\n• Anova\n• F-Test\n• Central Limit Theorem & applications\n• Types of variables\n• Relationships between variables\n• Central Tendency\n• Measures of Central Tendency\n• Kurtosis\n• Skewness\n• Arithmetic Mean / Average\n• Merits & Demerits of Arithmetic Mean\n• Mode, Merits & Demerits of Mode\n• Median, Merits & Demerits of Median\n• Range\n• Concept of Quantiles, Quartiles, percentile\n• Standard Deviation\n• Variance\n• Calculate Variance\n• Covariance\n• Correlation\n\nIntroduction to Statistics - 2\n\n• Hypothesis Testing\n• Multiple Linear Regression\n• Logistic Regression\n• Clustering (Hierarchical Clustering & K-means Clustering)\n• Classification (Decision Trees)\n• Time Series Analysis (Simple Moving Average, Exponential smoothing, ARIMA+)\n\nIntroduction to Probability\n\n• Standard Normal Distribution\n• Normal Distribution\n• Geometric Distribution\n• 
Poisson Distribution\n• Binomial Distribution\n• Parameters vs. Statistics\n• Probability Mass Function\n• Random Variable\n• Conditional Probability and Independence\n• Unions and Intersections\n• Finding Probability of dataset\n• Probability Terminology\n• Probability Distributions\n\nData Visualization Techniques\n\n• Bubble Chart\n• Sparklines\n• Waterfall chart\n• Box Plot\n• Line Charts\n• Frequency Chart\n• Bimodal & Multimodal Histograms\n• Histograms\n• Scatter Plot\n• Pie Chart\n• Bar Graph\n• Line Graph\n\nIntroduction to Machine Learning\n\n• Overview & Terminologies\n• What is Machine Learning?\n• Why Learn?\n• When is Learning required?\n• Data Mining\n• Application Areas and Roles\n• Types of Machine Learning\n• Supervised Learning\n• Unsupervised Learning\n• Reinforcement learning\n\nMachine Learning Concepts & Terminologies\n\nSteps in developing a Machine Learning application\n\n• Key tasks of Machine Learning\n• Modelling Terminologies\n• Learning a Class from Examples\n• Probability and Inference\n• PAC (Probably Approximately Correct) Learning\n• Noise\n• Noise and Model Complexity\n• Association Rules\n• Association Measures\n\nRegression Techniques\n\n• Concept of Regression\n• Best Fitting line\n• Simple Linear Regression\n• Building regression models using excel\n• Coefficient of determination (R- Squared)\n• Multiple Linear Regression\n• Assumptions of Linear Regression\n• Variable transformation\n• Multicollinearity\n• VIF\n• Methods of building Linear regression model in R\n• Model validation techniques\n• Cooks Distance\n• Q-Q Plot\n• Durbin- Watson Test\n• Kolmogorov-Smirnof Test\n• Homoskedasticity of error terms\n• Logistic Regression\n• Applications of logistic regression\n• Concept of odds\n• Concept of Odds Ratio\n• Derivation of logistic regression equation\n• Interpretation of logistic regression output\n• Model building for logistic regression\n• Model validations\n• Confusion Matrix\n• Concept of ROC/AOC Curve\n• KS Test\n\n• Applications of Market Basket Analysis\n• What is association Rules\n• Overview of Apriori algorithm\n• Key terminologies in MBA\n• Support\n• Confidence\n• Lift\n• Model building for MBA\n• Transforming sales data to suit MBA\n• MBA Rule selection\n• Ensemble modelling applications using MBA\n\nTime Series Analysis (Forecasting)\n\n• Model building using ARIMA, ARIMAX, SARIMAX\n• Data De-trending & data differencing\n• KPSS Test\n• Dickey Fuller Test\n• Concept of stationarity\n• Model building using exponential smoothing\n• Model building using simple moving average\n• Time series analysis techniques\n• Components of time series\n• Prerequisites for time series analysis\n• Concept of Time series data\n• Applications of Forecasting\n\nDecision Trees using R\n\n• Understanding the Concept\n• Internal decision nodes\n• Terminal leaves.\n• Tree induction: Construction of the tree\n• Classification Trees\n• Entropy\n• Selecting Attribute\n• Information Gain\n• Partially learned tree\n• Overfitting\n• Causes for over fitting\n• Overfitting Prevention (Pruning) Methods\n• Reduced Error Pruning\n• Decision trees - Advantages & Drawbacks\n• Ensemble Models\n\nK Means Clustering\n\n• Parametric Methods Recap\n• Clustering\n• Direct Clustering Method\n• Mixture densities\n• Classes v/s Clusters\n• Hierarchical Clustering\n• Dendogram interpretation\n• Non-Hierarchical Clustering\n• K-Means\n• Distance Metrics\n• K-Means Algorithm\n• K-Means Objective\n• Color Quantization\n• Vector Quantization\n\nTableau Analytics\n\n• Tableau 
Introduction\n• Data connection to Tableau\n• Calculated fields, hierarchy, parameters, sets, groups in Tableau\n• Various visualizations Techniques in Tableau\n• Map based visualization using Tableau\n• Reference Lines\n• Adding Totals, sub totals, Captions\n• Using Combined Field\n• Show Filter & Use various filter options\n• Data Sorting\n• Create Combined Field\n• Table Calculations\n• Creating Tableau Dashboard\n• Action Filters\n• Creating Story using Tableau\n\nAnalytics using Tableau\n\n• Clustering using Tableau\n• Time series analysis using Tableau\n• Simple Linear Regression using Tableau\n\nR integration in Tableau\n\n• Integrating R code with Tableau\n• Creating statistical model with dynamic inputs\n• Visualizing R output in Tableau\n• Case Study 1- Real time project with Twitter Data Analytics\n• Case Study 2- Real time project with Google Finance\n• Case Study 3- Real time project with IMDB Website\n\n### Testimonial for Data Science Training:\n\nName: Shreyash Jagetiya\n\nReview: It was great to get data science training with Ethan's Tech. I have done data science and I feel they are best training provider in Pune in this field. They cover all basic to advance level data science course. Our faculty is really professional and has in-depth knowledge. They provide practical demonstrations such that it helps us to learn the concepts precisely.\n\nRating:\n\nInquire Now For Data Science Training:" ]
[ null, "https://www.ethans.co.in/images/banners/ethans_data_sci.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.65933174,"math_prob":0.40845323,"size":8452,"snap":"2019-51-2020-05","text_gpt3_token_len":1895,"char_repetition_ratio":0.1299716,"word_repetition_ratio":0.001529052,"special_character_ratio":0.20149077,"punctuation_ratio":0.04761905,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99439335,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-28T14:28:51Z\",\"WARC-Record-ID\":\"<urn:uuid:659d6246-12d6-4cfa-bade-63846c3e8c49>\",\"Content-Length\":\"98504\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:28d95b5b-2b15-4dd7-a886-d2ccf41199a3>\",\"WARC-Concurrent-To\":\"<urn:uuid:75354603-cbd5-4a2f-9826-53a235136910>\",\"WARC-IP-Address\":\"166.62.10.143\",\"WARC-Target-URI\":\"https://www.ethans.co.in/en/data-science-training-in-noida-delhi\",\"WARC-Payload-Digest\":\"sha1:5XGQBWB7FS24WEKCRZV3B6FENALD5YBT\",\"WARC-Block-Digest\":\"sha1:JCUPSIPTZAPFKQG57X2SJV2JNPBXFLX5\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251778272.69_warc_CC-MAIN-20200128122813-20200128152813-00039.warc.gz\"}"}
https://artofproblemsolving.com/wiki/index.php/2002_AIME_II_Problems/Problem_7
[ "# 2002 AIME II Problems/Problem 7\n\n## Problem\n\nIt is known that, for all positive integers", null, "$k$,", null, "$1^2+2^2+3^2+\\ldots+k^{2}=\\frac{k(k+1)(2k+1)}6$.\n\nFind the smallest positive integer", null, "$k$ such that", null, "$1^2+2^2+3^2+\\ldots+k^2$ is a multiple of", null, "$200$.\n\n## Solution", null, "$\\frac{k(k+1)(2k+1)}{6}$ is a multiple of", null, "$200$ if", null, "$k(k+1)(2k+1)$ is a multiple of", null, "$1200 = 2^4 \\cdot 3 \\cdot 5^2$. So", null, "$16,3,25|k(k+1)(2k+1)$.\n\nSince", null, "$2k+1$ is always odd, and only one of", null, "$k$ and", null, "$k+1$ is even, either", null, "$k, k+1 \\equiv 0 \\pmod{16}$.\n\nThus,", null, "$k \\equiv 0, 15 \\pmod{16}$.\n\nIf", null, "$k \\equiv 0 \\pmod{3}$, then", null, "$3|k$. If", null, "$k \\equiv 1 \\pmod{3}$, then", null, "$3|2k+1$. If", null, "$k \\equiv 2 \\pmod{3}$, then", null, "$3|k+1$.\n\nThus, there are no restrictions on", null, "$k$ in", null, "$\\pmod{3}$.\n\nIt is easy to see that only one of", null, "$k$,", null, "$k+1$, and", null, "$2k+1$ is divisible by", null, "$5$. So either", null, "$k, k+1, 2k+1 \\equiv 0 \\pmod{25}$.\n\nThus,", null, "$k \\equiv 0, 24, 12 \\pmod{25}$.\n\nFrom the Chinese Remainder Theorem,", null, "$k \\equiv 0, 112, 224, 175, 287, 399 \\pmod{400}$. Thus, the smallest positive integer", null, "$k$ is", null, "$\\boxed{112}$.\n\n## Solution 2\n\nTo elaborate, we write out all 6 possibilities of pairings. For example, we have", null, "$k \\equiv 24 \\pmod{25}$", null, "$k \\equiv 15 \\pmod{16}$\n\nis one pairing, and", null, "$k \\equiv 24 \\pmod{25}$", null, "$k \\equiv 0 \\pmod{16}$\n\nis another. We then solve this by writing the first as", null, "$16k+15 \\equiv 24 \\pmod{25}$ and then move the 15 to get", null, "$16k \\equiv 9 \\pmod{25}$.\n\nWe then list out all the mods of the multiples of", null, "$16$, and realize that each of these", null, "$6$ pairings can be generalized to become one of these multiples of", null, "$16$.\n\nThe chain is as follows:", null, "$16 \\pmod{25}$", null, "$7, 23, 14, 5, 21, 12, 3, 19, 10, 1, 17, 8, 24, 15, 6, 22, 13, 4, 20, 11, 27, 18, 9, 0,$ and then it loops.\n\nWe see that for the first equation we have", null, "$9 \\pmod {25}$ at the 24th position, so we then do", null, "$24(16)+15$ to get the first answer of 399.\n\nAgain, this is possible to repeat for all", null, "$6$ cases. CRT guarantees that we will have a solution before", null, "$25 \\times 16$ or", null, "$400$ and indeed we did :P\n\nThe problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions.", null, "" ]
[ null, "https://latex.artofproblemsolving.com/8/c/3/8c325612684d41304b9751c175df7bcc0f61f64f.png ", null, "https://latex.artofproblemsolving.com/4/f/c/4fc69b2d25fe9c2e53533b462d9b0cce6f4ceed0.png ", null, "https://latex.artofproblemsolving.com/8/c/3/8c325612684d41304b9751c175df7bcc0f61f64f.png ", null, "https://latex.artofproblemsolving.com/2/a/c/2acb5213c6a7e5e3b16487ed491ff126b436c822.png ", null, "https://latex.artofproblemsolving.com/8/f/3/8f37626d49b4082c9102bf930bca51277ff2eb43.png ", null, "https://latex.artofproblemsolving.com/7/8/9/7896d1794ff8b99fc7dc8e047616b47138d76f58.png ", null, "https://latex.artofproblemsolving.com/8/f/3/8f37626d49b4082c9102bf930bca51277ff2eb43.png ", null, "https://latex.artofproblemsolving.com/f/0/c/f0cb209f0931495fbe76c5656cf4f94c851d8af0.png ", null, "https://latex.artofproblemsolving.com/f/b/7/fb7e0b0f2bd3305e31954b1f72fe1bd99c9265dd.png ", null, "https://latex.artofproblemsolving.com/0/a/4/0a401927ad142b2f3e852b448c0763fb711d8d46.png ", null, "https://latex.artofproblemsolving.com/f/2/7/f2789508227865dfc89fd46e18ae9b25bff34257.png ", null, "https://latex.artofproblemsolving.com/8/c/3/8c325612684d41304b9751c175df7bcc0f61f64f.png ", null, "https://latex.artofproblemsolving.com/f/0/5/f05f085f774fc8ab50676b778b86a1f1d1114fc8.png ", null, "https://latex.artofproblemsolving.com/0/6/4/0649f399784a70bda3a6a6d9ddf9b8337e0be350.png ", null, "https://latex.artofproblemsolving.com/0/7/4/074272d70b5fea46331caa03a80c2a5e9fed3705.png ", null, "https://latex.artofproblemsolving.com/2/9/3/29391aacae1ae00bc1f5c7883b092dadc08bdfc3.png ", null, "https://latex.artofproblemsolving.com/1/0/1/101af4fef8d89bac8d2073322702fbc87c0733e5.png ", null, "https://latex.artofproblemsolving.com/d/e/6/de672af5841ae46198b69efc1502117b4057efc8.png ", null, "https://latex.artofproblemsolving.com/0/2/c/02c5935622ba6d1551790fa1951b78affb8c2da2.png ", null, "https://latex.artofproblemsolving.com/5/b/2/5b286d91c7aa5cea5df487f98e2c76a3edb86388.png ", null, "https://latex.artofproblemsolving.com/0/7/e/07e4fe15003cb2d9a5208346a60aad315ba401dd.png ", null, "https://latex.artofproblemsolving.com/8/c/3/8c325612684d41304b9751c175df7bcc0f61f64f.png ", null, "https://latex.artofproblemsolving.com/d/5/2/d524c51b59647866bc337983e45d54471e743a0c.png ", null, "https://latex.artofproblemsolving.com/8/c/3/8c325612684d41304b9751c175df7bcc0f61f64f.png ", null, "https://latex.artofproblemsolving.com/f/0/5/f05f085f774fc8ab50676b778b86a1f1d1114fc8.png ", null, "https://latex.artofproblemsolving.com/f/2/7/f2789508227865dfc89fd46e18ae9b25bff34257.png ", null, "https://latex.artofproblemsolving.com/7/9/0/79069377f91364c2f87a64e5f9f562a091c8a6c1.png ", null, "https://latex.artofproblemsolving.com/3/4/1/341ca0dc14ce4e4208b084ffb307650b20284ca2.png ", null, "https://latex.artofproblemsolving.com/c/d/8/cd85d4ce5466a27320ae0ea654ca419cd66dd398.png ", null, "https://latex.artofproblemsolving.com/6/c/5/6c584ae20576a628dd825718199f6c4e98816720.png ", null, "https://latex.artofproblemsolving.com/8/c/3/8c325612684d41304b9751c175df7bcc0f61f64f.png ", null, "https://latex.artofproblemsolving.com/1/1/0/110495c19cff553380cfdaf8a094c31134e27052.png ", null, "https://latex.artofproblemsolving.com/3/c/0/3c0b89f5fdad867a59b363a4931ac0d5bf908561.png ", null, "https://latex.artofproblemsolving.com/a/8/e/a8ee53f17221f52b2252d40fb392d59154285992.png ", null, "https://latex.artofproblemsolving.com/3/c/0/3c0b89f5fdad867a59b363a4931ac0d5bf908561.png ", null, 
"https://latex.artofproblemsolving.com/3/f/6/3f6cb541d797d86aa6c5183e66490b749343b378.png ", null, "https://latex.artofproblemsolving.com/7/3/e/73ecdcd81e2026bd7322289c0d3be13b7976ffb0.png ", null, "https://latex.artofproblemsolving.com/0/c/8/0c8117b2c78a9717c2f5fa2f46fdec228a942057.png ", null, "https://latex.artofproblemsolving.com/9/a/5/9a5b4928c8fe50ce3c2428da3bee3505e891b788.png ", null, "https://latex.artofproblemsolving.com/6/0/1/601a7806cbfad68196c43a4665871f8c3186802e.png ", null, "https://latex.artofproblemsolving.com/9/a/5/9a5b4928c8fe50ce3c2428da3bee3505e891b788.png ", null, "https://latex.artofproblemsolving.com/b/4/3/b431e52352af32c524a659cf730a082571bd27b2.png ", null, "https://latex.artofproblemsolving.com/5/0/4/504206dedf65a89b41aab2761419234f722e6715.png ", null, "https://latex.artofproblemsolving.com/8/2/0/8200278f043eca8c60c98ea8de7fe695adfd8837.png ", null, "https://latex.artofproblemsolving.com/f/6/c/f6cd644f794a6e947afd7d4c2885d634eaf39258.png ", null, "https://latex.artofproblemsolving.com/6/0/1/601a7806cbfad68196c43a4665871f8c3186802e.png ", null, "https://latex.artofproblemsolving.com/3/d/c/3dca6d98d708cf4ab8593987639e15cca7de7c18.png ", null, "https://latex.artofproblemsolving.com/e/8/9/e8924412a12becf7905729885a46e3e351ec5a86.png ", null, "https://wiki-images.artofproblemsolving.com//8/8b/AMC_logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86061126,"math_prob":1.000006,"size":1464,"snap":"2023-40-2023-50","text_gpt3_token_len":388,"char_repetition_ratio":0.115068495,"word_repetition_ratio":0.036544852,"special_character_ratio":0.28278688,"punctuation_ratio":0.1414791,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999343,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98],"im_url_duplicate_count":[null,null,null,4,null,null,null,4,null,null,null,4,null,null,null,4,null,4,null,4,null,null,null,null,null,null,null,4,null,4,null,4,null,4,null,null,null,4,null,4,null,4,null,null,null,null,null,null,null,null,null,null,null,null,null,4,null,4,null,4,null,null,null,null,null,8,null,4,null,8,null,4,null,4,null,4,null,null,null,null,null,null,null,4,null,4,null,4,null,4,null,null,null,4,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-22T12:31:12Z\",\"WARC-Record-ID\":\"<urn:uuid:13ef6a03-8701-4786-afe0-bc1fd85008a4>\",\"Content-Length\":\"46041\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fe3c8a50-d350-4cd6-9d80-aae173009dcb>\",\"WARC-Concurrent-To\":\"<urn:uuid:f7a3e8da-ba69-40a9-a123-dad1ecc0b294>\",\"WARC-IP-Address\":\"104.26.10.229\",\"WARC-Target-URI\":\"https://artofproblemsolving.com/wiki/index.php/2002_AIME_II_Problems/Problem_7\",\"WARC-Payload-Digest\":\"sha1:NSJVUBDEU6REA4HTCTMSVDZ3PJO44ABO\",\"WARC-Block-Digest\":\"sha1:VNYYNCQVXNQDCEBVY4735D4NN7RPKCOB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506399.24_warc_CC-MAIN-20230922102329-20230922132329-00578.warc.gz\"}"}
http://nsscn.top/subtractive_rng.html
[ "", null, "# subtractive_rng", null, "", null, "Category: functors Component type: type\n\n### Description\n\nSubtractive_rng is a Random Number Generator based on the subtractive method . It is a Unary Function: it takes a single argument N, an unsigned int, and returns an unsigned int that is less than N. Successive calls to the same subtractive_rng object yield a pseudo-random sequence.\n\n### Example\n\n```int main()\n{\nsubtractive_rng R;\nfor (int i = 0; i < 20; ++i)\ncout << R(5) << ' ';\ncout << endl;\n}\n// The output is 3 2 3 2 4 3 1 1 2 2 0 3 4 4 4 4 2 1 0 0\n```\n\n### Definition\n\nDefined in the standard header functional, and in the nonstandard backward-compatibility header function.h. This function object is an SGI extension; it is not part of the C++ standard.\n\nNone.\n\n### Model of\n\nRandom Number Generator, Adaptable Unary Function\n\nNone.\n\n### Public base classes\n\nunary_function<unsigned int, unsigned int>\n\n### Members\n\nParameter Description Default\nargument_type Adaptable Unary Function The type of a subtractive_rng's argument: unsigned int.\nresult_type Adaptable Unary Function The type of the result: unsigned int.\nsubtractive_rng(unsigned int seed) subtractive_rng See below.\nsubtractive_rng() subtractive_rng See below.\nunsignedint operator()(unsigned int N) Adaptable Unary Function Function call. Returns a pseudo-random number in the range [0, N).\nvoid initialize(unsigned int seed) subtractive_rng See below.\n\n### New members\n\nThese members are not defined in the Adaptable Unary Function requirements, but are specific to subtractive_rng.\nMember Description\nsubtractive_rng(unsigned int seed) The constructor. Creates a subtractive_rng whose internal state is initialized using seed.\nsubtractive_rng() The default constructor. Creates a subtractive_rng initialized using a default value.\nvoid initialize(unsigned int seed) Re-initializes the internal state of the subtractive_rng, using the value seed.\n\n### Notes\n\n See section 3.6 of Knuth for an implementation of the subtractive method in FORTRAN. Section 3.2.2 of Knuth analyzes this class of algorithms. (D. E. Knuth, The Art of Computer Programming. Volume 2: Seminumerical Algorithms, second edition. Addison-Wesley, 1981.)\n\n Note that the sequence produced by a subtractive_rng is completely deterministic, and that the sequences produced by two different subtractive_rng objects are independent of each other. That is: if R1 is a subtractive_rng, then the values returned when R1 is called depend only on R1's seed and on the number of times that R1 has been called. Calls to other subtractive_rng objects are irrelevant. In implementation terms, this is because the class subtractive_rng contains no static members.\n\n### See also\n\nRandom Number Generator", null, "", null, "Copyright © 1999 Silicon Graphics, Inc. All Rights Reserved. TrademarkInformation" ]
[ null, "http://nsscn.top/CorpID.gif", null, "http://nsscn.top/functors.gif", null, "http://nsscn.top/type.gif", null, "http://nsscn.top/surf.gif", null, "http://nsscn.top/stl_home.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7741866,"math_prob":0.78826827,"size":1855,"snap":"2019-35-2019-39","text_gpt3_token_len":472,"char_repetition_ratio":0.18908697,"word_repetition_ratio":0.022058824,"special_character_ratio":0.24905661,"punctuation_ratio":0.13846155,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9938491,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-22T23:01:47Z\",\"WARC-Record-ID\":\"<urn:uuid:47f04ea5-dc3f-4d37-a512-6c80f3772d69>\",\"Content-Length\":\"7229\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ab279166-1211-40ff-b14c-5e1e9bb642d6>\",\"WARC-Concurrent-To\":\"<urn:uuid:4a7c8c1f-259b-4aa7-b1e0-2f7570ae7cf3>\",\"WARC-IP-Address\":\"115.29.32.110\",\"WARC-Target-URI\":\"http://nsscn.top/subtractive_rng.html\",\"WARC-Payload-Digest\":\"sha1:FEJ5QT66PXAAVXCSASWKPX6W3DLGVGPK\",\"WARC-Block-Digest\":\"sha1:3HATFZPPII3OWSUJTTY6ZK54ICFZTUQD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027317516.88_warc_CC-MAIN-20190822215308-20190823001308-00217.warc.gz\"}"}
https://asu.pure.elsevier.com/en/publications/excursion-probability-of-certain-non-centered-smooth-gaussian-ran
[ "# Excursion probability of certain non-centered smooth Gaussian random fields\n\nResearch output: Contribution to journalArticle\n\n### Abstract\n\nLet X={X(t),t∈T} be a non-centered, unit-variance, smooth Gaussian random field indexed on some parameter space T, and let Au(X,T)={t∈T:X(t)≥u} be the excursion set. It is shown that, as u→∞, the excursion probability ℙ{supt∈TX(t)≥u} can be approximated by the expected Euler characteristic of Au(X,T), denoted by E{χ(Au(X,T))}, such that the error is super-exponentially small. The explicit formulae for E{χ(Au(X,T))} are also derived for two cases: (i) T is a rectangle and X-EX is stationary; (ii) T is an N-dimensional sphere and X-EX is isotropic.\n\nOriginal language English (US) 883-905 23 Stochastic Processes and their Applications 126 3 https://doi.org/10.1016/j.spa.2015.10.003 Published - Mar 2016\n\n### Keywords\n\n• Euler characteristic\n• Excursion probability\n• Gaussian random fields\n• Rectangle\n• Sphere\n• Super-exponentially small\n\n### ASJC Scopus subject areas\n\n• Statistics and Probability\n• Modeling and Simulation\n• Applied Mathematics" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8034876,"math_prob":0.5816283,"size":993,"snap":"2020-45-2020-50","text_gpt3_token_len":279,"char_repetition_ratio":0.08695652,"word_repetition_ratio":0.0,"special_character_ratio":0.25881168,"punctuation_ratio":0.09625668,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.996483,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-28T15:14:15Z\",\"WARC-Record-ID\":\"<urn:uuid:02253b31-666a-4990-88b7-52aa8b207859>\",\"Content-Length\":\"39504\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8e4b14ad-0f40-42c6-8504-cdc28ed8bf46>\",\"WARC-Concurrent-To\":\"<urn:uuid:8465ef97-89c9-4a81-9305-a82f7848be6a>\",\"WARC-IP-Address\":\"18.210.30.88\",\"WARC-Target-URI\":\"https://asu.pure.elsevier.com/en/publications/excursion-probability-of-certain-non-centered-smooth-gaussian-ran\",\"WARC-Payload-Digest\":\"sha1:N7KLHT3GNIGVC36ZV2BYG4FQ7HHUFYNX\",\"WARC-Block-Digest\":\"sha1:J5SZ6CPNODYEMHTCJM5H4L3F4RRNEQN5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107898577.79_warc_CC-MAIN-20201028132718-20201028162718-00247.warc.gz\"}"}
https://crafthaul.com/vintage-scrapbook-printables-printable-dragon-ball-z-birthday-card.html
[ "", null, "*If you post this on the internet, please give credit to Vintage Glam Studio & link back to my blog (www.vintageglamstudio.com), so others can obtain the resources.  Pinterest, Twitter and Facebook are great social media sharing platforms, it’s the  sharing without a link back that can be a problem. If you use this printable on Instagram, tag me with #vintageglamstudio or @vintageglamstudio. Thanks!\nShortly after I added a printable set of vintage French documents to my Etsy shop last summer, I had a request for a printable set of vintage documents in the English language. It has taken me about six months to source and purchase just the right vintage documents to include (and a few extras) but I am pleased to now have this set available in my shop.\nThis free printable vintage botanical set can be used for a variety of things (besides wall art). You can adjust the size in your printer settings (where it says 100%, you can change the percentage to be smaller) and use them for gift wrapping, tags, cards, ornaments, or even punch holes in the corners of a couple sets and string them together to make a banner.\nA little bit about this print. The Christmas Carol is my husband’s favorite Christmas movie/book of all time. Every year around Christmas we are forced asked to watch every version of the movie. Some of my kids like this tradition, others do not (I fall into this category). But it’s becoming a tradition and when I saw this book in the digital library I had to use it.\nAbout SometimeStudio:  Hi!! I’m Somer. A few things about me: I’m a total free spirit yet slowly turning into a homebody, I love calligraphy and pretty florals, I’m a Christian and a Mama of 3, and I love to be creative and find frugal ways to change the look of a room often. In my shop you will find products to make that a little easier; instant gratification, affordable wall art and designs to make your everyday life a little happier.\nShortly after I added a printable set of vintage French documents to my Etsy shop last summer, I had a request for a printable set of vintage documents in the English language. It has taken me about six months to source and purchase just the right vintage documents to include (and a few extras) but I am pleased to now have this set available in my shop.\nAnd so, in June 2015, I released my Life Binder product to the world - meaning I listed it on Etsy and told my email list about it. Then...my phone started \"cha-chinging\" more than it ever had before! By the end of the month, I had reached \\$1000 in sales! That blew my mind. I had made more in one month with one product than I had in the last 18 months!\n\nAbout NotedTravelers:  A few years ago, while obsessively researching packing lists for trips to Ireland, I heard about Traveler’s Notebooks. There were pictures of lovely watercolor paintings and descriptions of overseas adventures. Well, I thought, I’ll be traveling soon and I occasionally write notes. 
This must be the perfect thing for me, nevermind that all of my previous attempts at journaling had been non-starters." ]
[ null, "https://crafthaul.com/vintagePrintables.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7230786,"math_prob":0.9485589,"size":5069,"snap":"2019-35-2019-39","text_gpt3_token_len":1331,"char_repetition_ratio":0.10977295,"word_repetition_ratio":0.21176471,"special_character_ratio":0.2682975,"punctuation_ratio":0.20163934,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9687389,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-24T12:11:07Z\",\"WARC-Record-ID\":\"<urn:uuid:41cacc39-7fec-4b19-951c-6a34a9fe20bf>\",\"Content-Length\":\"16367\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8d539295-4e68-4144-9758-f85440e7eab2>\",\"WARC-Concurrent-To\":\"<urn:uuid:fafec6c9-727c-44ca-b6d9-296f84737241>\",\"WARC-IP-Address\":\"192.111.148.250\",\"WARC-Target-URI\":\"https://crafthaul.com/vintage-scrapbook-printables-printable-dragon-ball-z-birthday-card.html\",\"WARC-Payload-Digest\":\"sha1:A76QQWD4IYWC6XEHKH3GISETL5GEWHQI\",\"WARC-Block-Digest\":\"sha1:36AVAM4DDNBRDUD3YQN5OO3CFRZZBCZU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027320734.85_warc_CC-MAIN-20190824105853-20190824131853-00545.warc.gz\"}"}
https://en.m.wikipedia.org/wiki/Spreading_factor
[ "# Chip (CDMA)\n\n(Redirected from Spreading factor)\n\nIn digital communications, a chip is a pulse of a direct-sequence spread spectrum (DSSS) code, such as a Pseudo-random Noise (PN) code sequence used in direct-sequence code division multiple access (CDMA) channel access techniques.\n\nIn a binary direct-sequence system, each chip is typically a rectangular pulse of +1 or –1 amplitude, which is multiplied by a data sequence (similarly +1 or –1 representing the message bits) and by a carrier waveform to make the transmitted signal. The chips are therefore just the bit sequence out of the code generator; they are called chips to avoid confusing them with message bits.\n\nThe chip rate of a code is the number of pulses per second (chips per second) at which the code is transmitted (or received). The chip rate is larger than the symbol rate, meaning that one symbol is represented by multiple chips. The ratio is known as the spreading factor (SF) or processing gain:\n\n$\\ {\\mbox{SF}}={\\frac {\\mbox{chip rate}}{\\mbox{symbol rate}}}$", null, "## Orthogonal variable spreading factor\n\nOrthogonal variable spreading factor (OVSF) is an implementation of code division multiple access (CDMA) where before each signal is transmitted, the signal is spread over a wide spectrum range through the use of a user's code. Users' codes are carefully chosen to be mutually orthogonal to each other.\n\nThese codes are derived from an OVSF code tree, and each user is given a different code. An OVSF code tree is a complete binary tree that reflects the construction of Hadamard matrices." ]
[ null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/56aeff06e67c1b8b90a0b787e7b42661b4369e23", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8883654,"math_prob":0.9589171,"size":1723,"snap":"2019-35-2019-39","text_gpt3_token_len":381,"char_repetition_ratio":0.10122164,"word_repetition_ratio":0.0073260074,"special_character_ratio":0.2124202,"punctuation_ratio":0.08681672,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.986766,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-23T11:04:46Z\",\"WARC-Record-ID\":\"<urn:uuid:8961d288-1df8-416d-9b64-fdd16c6490c2>\",\"Content-Length\":\"29599\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a9719201-cf71-4a21-baf7-0b569ba62a8b>\",\"WARC-Concurrent-To\":\"<urn:uuid:d65325bb-4792-4214-9f65-6372ca4a5f0a>\",\"WARC-IP-Address\":\"208.80.154.224\",\"WARC-Target-URI\":\"https://en.m.wikipedia.org/wiki/Spreading_factor\",\"WARC-Payload-Digest\":\"sha1:CCAAR27CK6RUNSY62BX3LH2T7MDDAEYQ\",\"WARC-Block-Digest\":\"sha1:AKGUI2FHQXGZKOMURI66XH42FHQXLOZT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514576355.92_warc_CC-MAIN-20190923105314-20190923131314-00486.warc.gz\"}"}
http://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/LHCb-PAPER-2011-001.html
[ "First observation of $\\overline{B}^{0}_{s} \\to D^{*+}_{s2}X\\mu^{-}\\overline{\\nu}$ decays\n\n[to restricted-access page]\n\nUsing data collected with the LHCb detector in proton-proton collisions at a centre-of-mass energy of 7 TeV, the semileptonic decays Bs -> Ds+ X mu nu and Bs -> D0 K+ X mu nu are detected. Two structures are observed in the D0 K+ mass spectrum at masses consistent with the known D^+_{s1}(2536) and $D^{*+}_{s2}(2573) mesons. The measured branching fractions relative to the total Bs semileptonic rate are B(Bs -> D_{s2}^{*+} X mu nu)/B(Bs -> X mu nu)= (3.3\\pm 1.0\\pm 0.4)%, and B(Bs -> D_{s1}^+ X munu)/B(Bs -> X mu nu)= (5.4\\pm 1.2\\pm 0.5)%, where the first uncertainty is statistical and the second is systematic. This is the first observation of the D_{s2}^{*+} state in Bs decays; we also measure its mass and width. Figures and captions The invariant$K^+K^-\\pi^+$mass spectra for events associated with a muon for the 3 pb$^{-1}$sample in the pseudorapidity interval$2<\\eta<6$for RS combinations (a) and WS combinations (c). Also shown is the natural logarithm of the IP distributions of the$D_s^+$candidates for (b) RS and (d) WS$D^+_s$muon candidate combinations. The labelling of the curves is the same on all four sub-figures. In descending order in (a): green-solid curve shows the total, the blue-dashed curve the Dfb signal, the black-dotted curve the sideband background, the purple-dot-dashed the misinterpreted$\\Lambda_c^+\\to pK^-\\pi^+$contribution, the black dash-dash-dot curve the$D^{*+}\\to \\pi^+D^0\\to K^+K^-\\pi^+$contribution, and the barely visible red-solid curves the Prompt yield. The Dfb signal, the$\\Lambda_c^+$reflection and$D^{*+}$signal are too small to be seen in the WS distributions. The insert in (b) shows an expanded view of the region populated by Prompt charm production. Ds-ove[..].eps [195 KiB] HiDef png [646 KiB] Thumbnail [308 KiB]", null, "The mass difference$m(K^-\\pi^+K^+)-m(K^-\\pi^+)$added to the known$D^0$mass for events with$K^-\\pi^+$invariant masses within$\\pm$20 MeV of the$D^0$mass (black points) in semileptonic decays. The histogram shows wrong-sign events with an additional$K^-$instead of a$K^+$. The curves are described in the text. (a) For the 3 pb$^{-1}$data sample and (b) for the 20 pb$^{-1}\\$ sample. ds2_3p[..].eps [117 KiB] HiDef png [904 KiB] Thumbnail [489 KiB]", null, "Animated gif made out of all figures. PAPER-2011-001.gif Thumbnail", null, "Created on 19 October 2019." ]
[ null, "http://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/Directory_LHCb-PAPER-2011-001/thumbnail_Ds-overall-Lc-Dp-in.png", null, "http://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/Directory_LHCb-PAPER-2011-001/thumbnail_ds2_3pb_20pb.png", null, "http://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/Directory_LHCb-PAPER-2011-001/thumbnail_PAPER-2011-001.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7939294,"math_prob":0.9938234,"size":2333,"snap":"2019-43-2019-47","text_gpt3_token_len":723,"char_repetition_ratio":0.108630314,"word_repetition_ratio":0.0,"special_character_ratio":0.32061723,"punctuation_ratio":0.08492569,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99873275,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-23T08:05:26Z\",\"WARC-Record-ID\":\"<urn:uuid:d585cd78-2ac0-4fd4-95de-94ae0c81efab>\",\"Content-Length\":\"9815\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3b87e161-e343-447c-8915-08e0d54cbdf1>\",\"WARC-Concurrent-To\":\"<urn:uuid:ca43ce36-7104-4297-85bd-eb2c61a817b1>\",\"WARC-IP-Address\":\"137.138.150.3\",\"WARC-Target-URI\":\"http://lhcbproject.web.cern.ch/lhcbproject/Publications/LHCbProjectPublic/LHCb-PAPER-2011-001.html\",\"WARC-Payload-Digest\":\"sha1:GLHOA2HQG2RPOSN7E3WKUVO5UAZ2Z4H2\",\"WARC-Block-Digest\":\"sha1:YXXKBAZZ7EMINTHDYFO3A6ZH62DKTM6Z\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987829507.97_warc_CC-MAIN-20191023071040-20191023094540-00030.warc.gz\"}"}
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3032980
[ "# A Dimension and Variance Reduction Monte-Carlo Method for Option Pricing under Jump-Diffusion Models\n\n35 Pages Posted: 9 Sep 2017\n\nSee all articles by Duy-Minh Dang\n\n## Duy-Minh Dang\n\nUniversity of Queensland - School of Mathematics and Physics\n\n## Kenneth R. Jackson\n\nUniversity of Toronto - Department of Computer Science\n\n## Scott Sues\n\nUniversity of Queensland - School of Mathematics and Physics\n\nDate Written: April 8, 2016\n\n### Abstract\n\nWe develop a highly efficient MC method for computing plain vanilla European option prices and hedging parameters under a very general jump-diffusion option pricing model which includes stochastic variance and multi-factor Gaussian interest short rate(s). The focus of our MC approach is variance reduction via dimension reduction. More specifically, the option price is expressed as an expectation of a unique solution to a conditional Partial Integro-Differential Equation (PIDE), which is then solved using a Fourier transform technique. Important features of our approach are (i) the analytical tractability of the conditional PIDE is fully determined by that of the Black-Scholes-Merton model augmented with the same jump component as in our model, and (ii) the variances associated with all the interest rate factors are completely removed when evaluating the expectation via iterated conditioning applied to only the Brownian motion associated with the variance factor. For certain cases when numerical methods are either needed or preferred, we propose a discrete fast Fourier transform method to numerically solve the conditional PIDE efficiently. Our method can also effectively compute hedging parameters. Numerical results show that the proposed method is highly efficient.\n\nKeywords: conditional Monte Carlo, variance reduction, dimension reduction, partial-integro~differential~equations, jump diffusions, fast Fourier transform, normal, double-exponential\n\nSuggested Citation\n\nDang, Duy-Minh and Jackson, Kenneth R. and Sues, Scott, A Dimension and Variance Reduction Monte-Carlo Method for Option Pricing under Jump-Diffusion Models (April 8, 2016). Available at SSRN: https://ssrn.com/abstract=3032980 or http://dx.doi.org/10.2139/ssrn.3032980" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8656545,"math_prob":0.5976844,"size":1674,"snap":"2023-40-2023-50","text_gpt3_token_len":315,"char_repetition_ratio":0.10778443,"word_repetition_ratio":0.0,"special_character_ratio":0.16965352,"punctuation_ratio":0.08646616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95129687,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-05T12:28:02Z\",\"WARC-Record-ID\":\"<urn:uuid:362955ce-5bcc-4ef3-a57f-d41fbb4c5db3>\",\"Content-Length\":\"62476\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:602e6a19-0d95-4992-bec3-213d69488c6b>\",\"WARC-Concurrent-To\":\"<urn:uuid:758fcbd2-8c71-4fe5-933a-52bf9f053ad9>\",\"WARC-IP-Address\":\"104.16.40.248\",\"WARC-Target-URI\":\"https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3032980\",\"WARC-Payload-Digest\":\"sha1:3X3GVZ4MRT5TBA4LGV6SWJCSU4CEX773\",\"WARC-Block-Digest\":\"sha1:2ERZLW63L2WQ45N7M2OO2HPD65SYZDEJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100551.17_warc_CC-MAIN-20231205105136-20231205135136-00899.warc.gz\"}"}
https://jawher.wordpress.com/2010/08/28/my-clojure-explained-solutions-to-the-s99-problems-14-to-20/
[ "# My Clojure explained solutions to the s99 problems 14 to 20\n\n## P14\n\nDuplicate the elements of a list.\nExample:\nuser> (duplicate ‘(\\a, \\b, \\c, \\c, \\d))\n(\\a, \\a, \\b, \\b, \\c, \\c, \\c, \\c, \\d, \\d)\n\n```(defn duplicate-s99 \"P14\" [xs]\n(reverse (reduce #(cons %2 (cons %2 %1)) '() xs)))\n```\n\nThe reduction function would start with an empty list, prepending every item of the xs list two times.\n\n## P15\n\nDuplicate the elements of a list a given number of times.\nExample:\n\nuser> (duplicate-n 3, ‘(\\a, \\b, \\c, \\c, \\d))\n(\\a, \\a, \\a, \\b, \\b, \\b, \\c, \\c, \\c, \\c, \\c, \\c, \\d, \\d, \\d)\n\nI’ll cheat a little bit here by using the repeat function from clojure’s api which returns a list (a sequence actually) by repeating an item n times:\n\n```(defn duplicate-n \"P15\" [n xs]\n(reduce concat (map (partial repeat n) xs)))\n```\n\nI started by repeating evey item of the xs list n times using a combination of the map and repeat functions. Normally, I would have written this transformation this way:\n\n```(defn duplicate-n \"P15\" [n xs]\n(map #(repeat n %) xs))\n```\n\nBut in this case, I used the partial function which given a function f and a number of args returns a new function that takes the remaining args and calls the original function f with all the arguments.\n\nAfter this step, we end up with a list of lists, as evey item of the xs list was mapped to a list. I flattened this using concat as a reduction function.\n\n## P16\n\nDrop every Nth element from a list.\nExample:\nuser> (drop-s99 3 ‘(\\a, \\b, \\c, \\d, \\e, \\f, \\g, \\h, \\i, \\j, \\k))\n(\\a, \\b, \\d, \\e, \\g, \\h, \\j, \\k)\n\n```(defn mul? [m n]\n(and (not (zero? m)) (zero? (mod m n))))\n\n(defn drop-s99 \"P16\" [n xs]\n(keep-indexed #(if (mul? (inc %1) n) nil %2) xs))\n```\n\nThis solution relies on the keep-indexed function added in the recently released clojure 1.2. I’m not going to duplicate its documentation here as the official docs already do a great job of explaining it.\n\nI’ve also defined a function mul? which for two arguments m and n returns true if m > 0 and n divides m.\n\nFor evey item of the xs list plus its index (0 based), my solution will check if the latter plus one (1 based) is a multiplier of n, returning nil (which will be eliminated by keep-indexed) if this is the case, or the item itself (which will be kept) otherwise.\n\n## P17\n\nSplit a list into two parts.\nThe length of the first part is given. Use a Tuple for your result.\nExample:\n\nuser> (split-s99 3 ‘(\\a, \\b, \\c, \\d, \\e, \\f, \\g, \\h, \\i, \\j, \\k))\n((\\a, \\b, \\c), (\\d, \\e, \\f, \\g, \\h, \\i, \\j, \\k))\n\n```(defn split-s99 \"P17\" [n xs]\n(reduce #(if (< (count (first %1)) n)\n(list (concat (first %1) [%2]) '())\n(list (first %1) (concat (last %1) [%2])))\n'(() ()) xs)\n)\n```\n\nA verbose solution, but it is very simple actually: I applied a reduction function on the xs list that starts with an empty solution (a list of two empty lists). For every item of the xs list, if the length of the first list of the solution is less than n, add that item to the first list, otherwise add it to the second list.\n\nAs a side note, we could have used clojure’s split-at function, but that would take the fun off solving these problems wouldn’t it ?\n\n## P18\n\nExtract a slice from a list.\nGiven two indices, I and K, the slice is the list containing the elements from and including the Ith element up to but not including the Kth element of the original list. 
Start counting the elements with 0.\nExample:\n\nuser> (slice 3 7 (\\a, \\b, \\c, \\d, \\e, \\f, \\g, \\h, \\i, \\j, \\k))\n(\\d, \\e, \\f, \\g)\n\n```(defn slice \"P18\" [i k xs]\n(first (split-s99 (- k i) (last (split-s99 i xs)))))\n```\n\nThe hard part was already done in the previous problem: the solution uses split-s99 to split the xs list with the i arg, then splits the second part with k-i and returns the first part:\n\n• split (\\a, \\b, \\c, \\d, \\e, \\f, \\g, \\h, \\i, \\j, \\k) at 3 and take the second part => (\\d, \\e, \\f, \\g, \\h, \\i, \\j, \\k)\n• split the result of the first step (\\d, \\e, \\f, \\g, \\h, \\i, \\j, \\k) at 7-3=4 and keep the first part => (\\d, \\e, \\f, \\g)\n\n## P19\n\nRotate a list N places to the left.\nExamples:\nuser> (rotate 3 ‘(\\a, \\b, \\c, \\d, \\e, \\f, \\g, \\h, \\i, \\j, \\k))\n(\\d, \\e, \\f, \\g, \\h, \\i, \\j, \\k, \\a, \\b, \\c)\n\nuser> (rotate -2 ‘(\\a, \\b, \\c, \\d, \\e, \\f, \\g, \\h, \\i, \\j, \\k))\n(\\j, \\k, \\a, \\b, \\c, \\d, \\e, \\f, \\g, \\h, \\i)\n\n```(defn rotate \"P19\" [n xs]\n(let [m (if (neg? n) (+ (count xs) n) n)\ns (split-s99 m xs)]\n(concat (last s) (first s))))\n```\n\nAgain, the hard part was already done in the P17 problem where a split function was defined. What I did here was to split the xs list at a position depending on n: if n is positive, the split occurs at n, otherwise it occurs at xs’s length minus n. I then return the list obtained by concatting the second segment with the first, in this order.\n\n## P20\n\nRemove the Kth element from a list.\nReturn the list and the removed element in a Tuple. Elements are numbered from 0.\nExample:\n\nuser> (remove-at 1 ‘(\\a \\b \\c \\d))\n((\\a \\c \\d) \\b)\n\n```(defn remove-at \"P20\" [k xs]\n(let [s (split-s99 (inc k) xs)]\n(list\n(concat (butlast (first s)) (last s))\n(last (first s))))\n)\n```\n• I start by splitting xs (\\a \\b \\c \\d) at k+1 (2) => ((\\a \\b) (\\c \\d))\n• return a list with 2 items:\n• concatenate the first segment without its last element (\\a) with the second segment (\\c \\d) => (\\a \\c \\d)\n• the last element of the first segment (\\a \\b) => \\b\n\n=> ((\\a \\c \\d) \\b)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86502296,"math_prob":0.9880293,"size":5318,"snap":"2020-24-2020-29","text_gpt3_token_len":1784,"char_repetition_ratio":0.149793,"word_repetition_ratio":0.1124031,"special_character_ratio":0.36122602,"punctuation_ratio":0.17295597,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99858105,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-02T16:38:51Z\",\"WARC-Record-ID\":\"<urn:uuid:93f7c3be-200b-409b-b44a-052024e7d23e>\",\"Content-Length\":\"78916\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:35f54e17-3485-469c-bda0-4da80d39dafa>\",\"WARC-Concurrent-To\":\"<urn:uuid:e0f74621-ba50-46c2-83fd-8955b26f0f19>\",\"WARC-IP-Address\":\"192.0.78.13\",\"WARC-Target-URI\":\"https://jawher.wordpress.com/2010/08/28/my-clojure-explained-solutions-to-the-s99-problems-14-to-20/\",\"WARC-Payload-Digest\":\"sha1:SGTEG6OQOMVTT3FVHGIMDESKTVZBJEZG\",\"WARC-Block-Digest\":\"sha1:PHMCAZESNKEVCFPS4B6AZ2YYM6MNAHQR\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347425481.58_warc_CC-MAIN-20200602162157-20200602192157-00284.warc.gz\"}"}
https://www.colorhexa.com/0134ea
[ "# #0134ea Color Information\n\nIn a RGB color space, hex #0134ea is composed of 0.4% red, 20.4% green and 91.8% blue. Whereas in a CMYK color space, it is composed of 99.6% cyan, 77.8% magenta, 0% yellow and 8.2% black. It has a hue angle of 226.9 degrees, a saturation of 99.1% and a lightness of 46.1%. #0134ea color hex could be obtained by blending #0268ff with #0000d5. Closest websafe color is: #0033ff.\n\n• R 0\n• G 20\n• B 92\nRGB color chart\n• C 100\n• M 78\n• Y 0\n• K 8\nCMYK color chart\n\n#0134ea color description : Vivid blue.\n\n# #0134ea Color Conversion\n\nThe hexadecimal color #0134ea has RGB values of R:1, G:52, B:234 and CMYK values of C:1, M:0.78, Y:0, K:0.08. Its decimal value is 79082.\n\nHex triplet RGB Decimal 0134ea `#0134ea` 1, 52, 234 `rgb(1,52,234)` 0.4, 20.4, 91.8 `rgb(0.4%,20.4%,91.8%)` 100, 78, 0, 8 226.9°, 99.1, 46.1 `hsl(226.9,99.1%,46.1%)` 226.9°, 99.6, 91.8 0033ff `#0033ff`\nCIE-LAB 34.806, 57.597, -91.824 16.089, 8.402, 78.611 0.156, 0.081, 8.402 34.806, 108.393, 302.098 34.806, -12.472, -121.385 28.986, 48.354, -140.509 00000001, 00110100, 11101010\n\n# Color Schemes with #0134ea\n\n• #0134ea\n``#0134ea` `rgb(1,52,234)``\n• #eab701\n``#eab701` `rgb(234,183,1)``\nComplementary Color\n• #01a9ea\n``#01a9ea` `rgb(1,169,234)``\n• #0134ea\n``#0134ea` `rgb(1,52,234)``\n• #4201ea\n``#4201ea` `rgb(66,1,234)``\nAnalogous Color\n• #a9ea01\n``#a9ea01` `rgb(169,234,1)``\n• #0134ea\n``#0134ea` `rgb(1,52,234)``\n• #ea4301\n``#ea4301` `rgb(234,67,1)``\nSplit Complementary Color\n• #34ea01\n``#34ea01` `rgb(52,234,1)``\n• #0134ea\n``#0134ea` `rgb(1,52,234)``\n• #ea0134\n``#ea0134` `rgb(234,1,52)``\n• #01eab7\n``#01eab7` `rgb(1,234,183)``\n• #0134ea\n``#0134ea` `rgb(1,52,234)``\n• #ea0134\n``#ea0134` `rgb(234,1,52)``\n• #eab701\n``#eab701` `rgb(234,183,1)``\n• #01239e\n``#01239e` `rgb(1,35,158)``\n• #0129b7\n``#0129b7` `rgb(1,41,183)``\n• #012ed1\n``#012ed1` `rgb(1,46,209)``\n• #0134ea\n``#0134ea` `rgb(1,52,234)``\n• #073dfe\n``#073dfe` `rgb(7,61,254)``\n• #2051fe\n``#2051fe` `rgb(32,81,254)``\n• #3964fe\n``#3964fe` `rgb(57,100,254)``\nMonochromatic Color\n\n# Alternatives to #0134ea\n\nBelow, you can see some colors close to #0134ea. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #016eea\n``#016eea` `rgb(1,110,234)``\n• #015bea\n``#015bea` `rgb(1,91,234)``\n• #0147ea\n``#0147ea` `rgb(1,71,234)``\n• #0134ea\n``#0134ea` `rgb(1,52,234)``\n• #0121ea\n``#0121ea` `rgb(1,33,234)``\n• #010dea\n``#010dea` `rgb(1,13,234)``\n• #0801ea\n``#0801ea` `rgb(8,1,234)``\nSimilar Colors\n\n# #0134ea Preview\n\nThis text has a font color of #0134ea.\n\n``<span style=\"color:#0134ea;\">Text here</span>``\n#0134ea background color\n\nThis paragraph has a background color of #0134ea.\n\n``<p style=\"background-color:#0134ea;\">Content here</p>``\n#0134ea border color\n\nThis element has a border color of #0134ea.\n\n``<div style=\"border:1px solid #0134ea;\">Content here</div>``\nCSS codes\n``.text {color:#0134ea;}``\n``.background {background-color:#0134ea;}``\n``.border {border:1px solid #0134ea;}``\n\n# Shades and Tints of #0134ea\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000413 is the darkest color, while #ffffff is the lightest one.\n\n• #000413\n``#000413` `rgb(0,4,19)``\n• #000927\n``#000927` `rgb(0,9,39)``\n• #000d3a\n``#000d3a` `rgb(0,13,58)``\n• #00114e\n``#00114e` `rgb(0,17,78)``\n• #001661\n``#001661` `rgb(0,22,97)``\n• #001a75\n``#001a75` `rgb(0,26,117)``\n• #011e88\n``#011e88` `rgb(1,30,136)``\n• #01239c\n``#01239c` `rgb(1,35,156)``\n• #0127af\n``#0127af` `rgb(1,39,175)``\n• #012bc3\n``#012bc3` `rgb(1,43,195)``\n• #0130d6\n``#0130d6` `rgb(1,48,214)``\n• #0134ea\n``#0134ea` `rgb(1,52,234)``\n• #0138fe\n``#0138fe` `rgb(1,56,254)``\n• #1447fe\n``#1447fe` `rgb(20,71,254)``\n• #2857fe\n``#2857fe` `rgb(40,87,254)``\n• #3b66fe\n``#3b66fe` `rgb(59,102,254)``\n• #4f75fe\n``#4f75fe` `rgb(79,117,254)``\n• #6285fe\n``#6285fe` `rgb(98,133,254)``\n• #7694fe\n``#7694fe` `rgb(118,148,254)``\n• #89a3fe\n``#89a3fe` `rgb(137,163,254)``\n• #9db2ff\n``#9db2ff` `rgb(157,178,255)``\n• #b0c2ff\n``#b0c2ff` `rgb(176,194,255)``\n• #c4d1ff\n``#c4d1ff` `rgb(196,209,255)``\n• #d8e0ff\n``#d8e0ff` `rgb(216,224,255)``\n• #ebefff\n``#ebefff` `rgb(235,239,255)``\n• #ffffff\n``#ffffff` `rgb(255,255,255)``\nTint Color Variation\n\n# Tones of #0134ea\n\nA tone is produced by adding gray to any pure hue. In this case, #6d717e is the less saturated color, while #0134ea is the most saturated one.\n\n• #6d717e\n``#6d717e` `rgb(109,113,126)``\n• #646c87\n``#646c87` `rgb(100,108,135)``\n• #5b6790\n``#5b6790` `rgb(91,103,144)``\n• #526299\n``#526299` `rgb(82,98,153)``\n• #495da2\n``#495da2` `rgb(73,93,162)``\n• #4058ab\n``#4058ab` `rgb(64,88,171)``\n• #3752b4\n``#3752b4` `rgb(55,82,180)``\n• #2e4dbd\n``#2e4dbd` `rgb(46,77,189)``\n• #2548c6\n``#2548c6` `rgb(37,72,198)``\n• #1c43cf\n``#1c43cf` `rgb(28,67,207)``\n• #133ed8\n``#133ed8` `rgb(19,62,216)``\n• #0a39e1\n``#0a39e1` `rgb(10,57,225)``\n• #0134ea\n``#0134ea` `rgb(1,52,234)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #0134ea is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
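The conversions quoted on this page are straightforward to reproduce. The short Python sketch below (standard library only; the `mix` helper is just an illustration of the shade/tint idea, not ColorHexa's exact formula) recovers the HSL and CMYK figures for #0134ea and builds a tint and a shade by mixing with white or black:

```
import colorsys

def hex_to_rgb(h):
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

r, g, b = hex_to_rgb("#0134ea")                   # (1, 52, 234)
rf, gf, bf = r / 255, g / 255, b / 255

h, l, s = colorsys.rgb_to_hls(rf, gf, bf)         # note the hue-lightness-saturation return order
print(round(h * 360, 1), round(s * 100, 1), round(l * 100, 1))   # ~226.9, 99.1, 46.1

k = 1 - max(rf, gf, bf)                           # naive RGB -> CMYK conversion
c, m, y = ((1 - x - k) / (1 - k) for x in (rf, gf, bf))
print(round(c * 100), round(m * 100), round(y * 100), round(k * 100))   # 100 78 0 8

def mix(rgb, other, amount):
    """Linearly mix two colours; mixing with white gives tints, with black gives shades."""
    return tuple(round(a + (o - a) * amount) for a, o in zip(rgb, other))

print(mix((r, g, b), (255, 255, 255), 0.5))       # a mid tint
print(mix((r, g, b), (0, 0, 0), 0.5))             # a mid shade
```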
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5213923,"math_prob":0.7060754,"size":3672,"snap":"2021-31-2021-39","text_gpt3_token_len":1621,"char_repetition_ratio":0.14312977,"word_repetition_ratio":0.011111111,"special_character_ratio":0.5555556,"punctuation_ratio":0.23751387,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98911273,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-04T21:44:17Z\",\"WARC-Record-ID\":\"<urn:uuid:14021a4e-c977-4a71-81c8-1a7187ebdd81>\",\"Content-Length\":\"36112\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:89c38aae-4c6d-4e88-8401-af3beb1c952c>\",\"WARC-Concurrent-To\":\"<urn:uuid:3bf1510f-c66a-4b64-b92c-6a33794617e6>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/0134ea\",\"WARC-Payload-Digest\":\"sha1:ZEVXP7V6OAXAICY2WITKOAVZB6UUEXE7\",\"WARC-Block-Digest\":\"sha1:JX7DGVQOMWF6J64QDVCAQ27GDJBC2LOC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046155188.79_warc_CC-MAIN-20210804205700-20210804235700-00152.warc.gz\"}"}
https://en.b-ok.org/book/3403716/bea417
[ "Main Geometric Algebra for Physicists\n\n# Geometric Algebra for Physicists\n\n,\nThis book is a complete guide to the current state of geometric algebra with early chapters providing a self-contained introduction. Topics range from new techniques for handling rotations in arbitrary dimensions, the links between rotations, bivectors, the structure of the Lie groups, non-Euclidean geometry, quantum entanglement, and gauge theories. Applications such as black holes and cosmic strings are also explored.\nYear:\n2003\nEdition:\n1\nPublisher:\nCambridge University Press\nLanguage:\nenglish\nPages:\n592 / 591\nISBN 10:\n0521480221\nISBN 13:\n9780521480222\nSeries:\nN/A\nFile:\nPDF, 6.20 MB\n\n## Most frequently terms\n\nYou can write a book review and share your experiences. Other readers will always be interested in your opinion of the books you've read. Whether you've loved the book or not, if you give your honest and detailed thoughts then people will find new books that are right for them.\n1\n\nYear:\n1982\nLanguage:\nukrainian\nFile:\nDJVU, 43.97 MB\n2\n\n### Wooden Domes : History and Modern Times\n\nYear:\n2018\nLanguage:\nenglish\nFile:\nPDF, 23.95 MB\n```\fGEOMETRIC ALGEBRA\nFOR PHYSICISTS\nCHRIS DORAN\nand\nANTHONY LASENBY\nUniversity of Cambridge\n\nThe Pitt Building, Trumpington Street, Cambridge, United Kingdom\ncambridge university press\nThe Edinburgh Building, Cambridge CB2 2RU, UK\n40 West 20th Street, New York, NY 10011-4211, USA\n477 Williamstown Road, Port Melbourne, VIC 3207, Australia\nRuiz de Alarcón 13, 28014 Madrid, Spain\nDock House, The Waterfront, Cape Town 8001, South Africa\nhttp://www.cambridge.org\n\u0001\nC Cambridge University Press, 2003\n\nThis book is in copyright. Subject to statutory exception\nand to the provisions of relevant collective licensing agreements,\nno reproduction of any part may take place without\nthe written permission of Cambridge University Press.\nFirst published 2003\nPrinted in the United Kingdom at the University Press, Cambridge\nTypeface CMR 10/13 pt\n\nSystem LATEX 2ε [TB]\n\nA catalogue record for this book is available from the British Library\nLibrary of Congress Cataloguing in Publication data\nISBN 0 521 48022 1 hardback\n\nContents\n\nPreface\nNotation\n\nix\nxiii\n\n1\n1.1\n1.2\n1.3\n1.4\n1.5\n1.6\n1.7\n1.8\n\nIntroduction\nVector (linear) spaces\nThe scalar product\nComplex numbers\nQuaternions\nThe cross product\nThe outer product\nNotes\nExercises\n\n1\n2\n4\n6\n7\n10\n11\n17\n18\n\n2\n2.1\n2.2\n2.3\n2.4\n2.5\n2.6\n2.7\n2.8\n2.9\n\nGeometric algebra in two and three dimensions\nA new product for vectors\nAn outline of geometric algebra\nGeometric algebra of the plane\nThe geometric algebra of space\nConventions\nReflections\nRotations\nNotes\nExercises\n\n20\n21\n23\n24\n29\n38\n40\n43\n51\n52\n\n3\n3.1\n3.2\n3.3\n\nClassical mechanics\nElementary principles\nTwo-body central force interactions\nCelestial mechanics and perturbations\n\n54\n55\n59\n64\n\nv\n\nCONTENTS\n\n3.4\n3.5\n3.6\n\nRotating systems and rigid-body motion\nNotes\nExercises\n\n4\n4.1\n4.2\n4.3\n4.4\n4.5\n4.6\n4.7\n\nFoundations of geometric algebra\nAxiomatic development\nRotations and reflections\nBases, frames and components\nLinear algebra\nTensors and components\nNotes\nExercises\n\n84\n85\n97\n100\n103\n115\n1; 22\n124\n\n5\n5.1\n5.2\n5.3\n5.4\n5.5\n5.6\n5.7\n\nRelativity and spacetime\nAn algebra for spacetime\nObservers, trajectories and frames\nLorentz transformations\nThe Lorentz group\nSpacetime 
dynamics\nNotes\nExercises\n\n126\n127\n131\n138\n143\n150\n163\n164\n\n6\n6.1\n6.2\n6.3\n6.4\n6.5\n6.6\n6.7\n6.8\n\nGeometric calculus\nThe vector derivative\nCurvilinear coordinates\nAnalytic functions\nDirected integration theory\nEmbedded surfaces and vector manifolds\nElasticity\nNotes\nExercises\n\n167\n168\n173\n178\n183\n202\n220\n224\n225\n\n7\n7.1\n7.2\n7.3\n7.4\n7.5\n7.6\n7.7\n7.8\n\nClassical electrodynamics\nMaxwell’s equations\nIntegral and conservation theorems\nThe electromagnetic field of a point charge\nElectromagnetic waves\nScattering and diffraction\nScattering\nNotes\nExercises\n\n228\n229\n235\n241\n251\n258\n261\n264\n265\n\nvi\n\n69\n81\n82\n\nCONTENTS\n\n8\n8.1\n8.2\n8.3\n8.4\n8.5\n8.6\n8.7\n\nQuantum theory and spinors\nNon-relativistic quantum spin\nRelativistic quantum states\nThe Dirac equation\nCentral potentials\nScattering theory\nNotes\nExercises\n\n267\n267\n278\n281\n288\n297\n305\n307\n\n9\n9.1\n9.2\n9.3\n9.4\n9.5\n9.6\n9.7\n\nMultiparticle states and quantum entanglement\nMany-body quantum theory\nMultiparticle spacetime algebra\nSystems of two particles\nRelativistic states and operators\nTwo-spinor calculus\nNotes\nExercises\n\n309\n310\n315\n319\n325\n332\n337\n337\n\n10\n10.1\n10.2\n10.3\n10.4\n10.5\n10.6\n10.7\n10.8\n10.9\n\nGeometry\nProjective geometry\nConformal geometry\nConformal transformations\nGeometric primitives in conformal space\nIntersection and reflection in conformal space\nNon-Euclidean geometry\nSpacetime conformal geometry\nNotes\nExercises\n\n340\n341\n351\n355\n360\n365\n370\n383\n390\n391\n\n11\n11.1\n11.2\n11.3\n11.4\n11.5\n11.6\n11.7\n\nFurther topics in calculus and group theory\nMultivector calculus\nGrassmann calculus\nLie groups\nComplex structures and unitary groups\nThe general linear group\nNotes\nExercises\n\n394\n394\n399\n401\n408\n412\n416\n417\n\n12\n12.1\n12.2\n12.3\n\nLagrangian and Hamiltonian techniques\nThe Euler–Lagrange equations\nClassical models for spin-1/2 particles\nHamiltonian techniques\n\n420\n421\n427\n432\n\nvii\n\nCONTENTS\n\n12.4\n12.5\n12.6\n\nLagrangian field theory\nNotes\nExercises\n\n439\n444\n445\n\n13\n13.1\n13.2\n13.3\n13.4\n13.5\n13.6\n13.7\n13.8\n\nSymmetry and gauge theory\nConservation laws in field theory\nElectromagnetism\nDirac theory\nGauge principles for gravitation\nThe gravitational field equations\nThe structure of the Riemann tensor\nNotes\nExercises\n\n448\n449\n453\n457\n466\n474\n490\n495\n495\n\n14\nGravitation\n14.1 Solving the field equations\n14.2 Spherically-symmetric systems\n14.3 Schwarzschild black holes\n14.4 Quantum mechanics in a black hole background\n14.5 Cosmology\n14.6 Cylindrical systems\n14.7 Axially-symmetric systems\n14.8 Notes\n14.9 Exercises\nBibliography\nIndex\n\nviii\n\n497\n498\n500\n510\n524\n535\n543\n551\n564\n565\n568\n575\n\nPreface\n\nThe ideas and concepts of physics are best expressed in the language of mathematics. But this language is far from unique. Many different algebraic systems\nexist and are in use today, all with their own advantages and disadvantages. In\nthis book we describe what we believe to be the most powerful available mathematical system developed to date. This is geometric algebra, which is presented\nas a new mathematical tool to add to your existing set as either a theoretician or\nexperimentalist. Our aim is to introduce the new techniques via their applications, rather than as purely formal mathematics. 
These applications are diverse,\nand throughout we emphasise the unity of the mathematics underpinning each\nof these topics.\nThe history of geometric algebra is one of the more unusual tales in the development of mathematical physics. William Kingdon Clifford introduced his\ngeometric algebra in the 1870s, building on the earlier work of Hamilton and\nGrassmann. It is clear from his writing that Clifford intended his algebra to\ndescribe the geometric properties of vectors, planes and higher-dimensional objects. But most physicists first encounter the algebra in the guise of the Pauli\nand Dirac matrix algebras of quantum theory. Few then contemplate using these\nunwieldy matrices for practical geometric computing. Indeed, some physicists\ncome away from a study of Dirac theory with the view that Clifford’s algebra\nis inherently quantum-mechanical. In this book we aim to dispel this belief by\ngiving a straightforward introduction to this new and fundamentally different\napproach to vectors and vector multiplication. In this language much of the\nstandard subject matter taught to physicists can be formulated in an elegant\nand highly condensed fashion. And the portability of the techniques we discuss\nenables us to reach a range of advanced topics with little extra work.\nThis book is intended to be of interest to both students and researchers in\nphysics. The early chapters grew out of an undergraduate lecture course that we\nhave run for a number of years in the Physics Department at Cambridge Uniix\n\nPREFACE\n\nversity. We are indebted to the students who attended the early versions of this\ncourse, and helped to shape the material into a form suitable for undergraduate\ntuition. These early chapters require little more than a basic knowledge of linear\nalgebra and vector geometry, and some exposure to classical mechanics. More\nadvanced physical concepts are introduced as the book progresses.\nA number of themes run throughout this book. The first is that geometric\nalgebra enables us to express fundamental physics in a language that is free from\ncoordinates or indices. Coordinates are only introduced later, when the geometry of a given problem is clear. This approach gives many equations a degree\nof clarity which is lost in tensor algebra. A second theme is the way in which\nrotations are handled in geometric algebra through the use of rotors. This approach extends to arbitrary spaces the idea of using a complex phase to rotate in\na plane. Rotor techniques can be applied in spaces of arbitrary signature and are\nparticularly well suited to formulating Lorentz and conformal transformations.\nThe latter are central to our treatment of non-Euclidean geometry. Rotors also\nprovide a framework for studying Lie groups and Lie algebras, and are essential\nto our discussion of gauge theories.\nThe third theme is the invertibility of the geometric product of vectors, which\nmakes it possible to divide by a vector. This idea extends to the vector derivative,\nwhich has an inverse in the form a first-order Green’s function. The vector\nderivative and its inverse enable us to extend complex analytic function theory\nto arbitrary dimensions. This theory is perfectly suited to electromagnetism,\nas all four Maxwell equations can be combined into a single spacetime equation\ninvolving the invertible vector derivative. 
The same vector derivative appears\nin the Dirac theory, and is central to the gauge treatment of gravitation which\ndominates the final two chapters of this book.\nThis book would not have been possible without the help and encouragement\nof a large number of people. We thank Stephen Gull for helping initiate much\nof the research described here, for his constant advice and criticism, and for use\nof a number of his figures. We also thank David Hestenes for all his work in\nshaping the modern subject of geometric algebra and for his constant encouragement. Special mention must be made of our many collaborators, in particular\nJoan Lasenby, Anthony Challinor, Leo Dorst, Tim Havel, Antony Lewis, Mark\nAshdown, Frank Sommen, Shyamal Somaroo, Jeff Tomasi, Bill Fitzgerald, Youri\nDabrowski and Mike Hobson. Special thanks also goes to Mike for his help with\nLatex and explaining the intricacies of the CUP style files. We thank the Physics\nDepartment of Cambridge University for the use of their facilities, and for the\nrange of technical advice and expertise we regularly called on. Finally we thank\neveryone at Cambridge University Press who helped in the production of this\nbook.\nCD would also like to thank the EPSRC and Sidney Sussex College for their\nsupport, his friends and colleagues, all at Nomads HC, and above all Helen for\nx\n\nPREFACE\n\nnot complaining about the lost evenings as I worked on this book. I promise to\nfinish the decorating now it is complete.\nAL thanks Joan and his children Robert and Alison for their constant enthusiasm and support, and their patience in the face of many explanations of topics\nfrom this book.\nCambridge\nJuly 2002\n\nC.J.L. Doran\nA.N. Lasenby\n\nxi\n\nNotation\n\nThe subject of vector geometry in general, and geometric algebra in particular,\nsuffers from a profusion of notations and conventions. In short, there is no\nsingle convention that is perfectly suited to the entire range of applications of\ngeometric algebra. For example, many of the formulae and results given in\nthis book involve arbitrary numbers of vectors and are valid in vector spaces\nof arbitrary dimensions. These formulae invariably look neater if one does not\nembolden all of the vectors in the expression. For this reason we typically choose\nto write vectors in a lower case italic script, a, and more general multivectors in\nupper case italic script, M . But in some applications, particularly mechanics and\ndynamics, one often needs to reserve lower case italic symbols for coordinates\nand scalars, and in these situations writing vectors in bold face is helpful. This\nconvention in adopted in chapter 3.\nFor many applications it is useful to have a notation which distinguishes frame\nvectors from general vectors. In these cases we write the former in an upright\nfont as {ei }. But this notation looks clumsy in certain settings, and is not\nfollowed rigorously in some of the later chapters. In this book our policy is to\nensure that we adopt a consistent notation within each chapter, and any new or\ndistinct features are explained either at the start of the chapter or at their point\nof introduction.\nSome conventions are universally adopted throughout this book, and for convenience we have gathered together a number of these here.\n(i) The geometric (or Clifford) algebra generated by the vector space of signature (p, q) is denoted G(p, q). In the first three chapters we employ the\nabbreviations G2 and G3 for the Euclidean algebras G(2, 0) and G(3, 0). 
In\nchapter 4 we use Gn to denote all algebras G(p, q) of total dimension n.\n(ii) The geometric product of A and B is denoted by juxtaposition, AB.\n(iii) The inner product is written with a centred dot, A · B. The inner product\nis only employed between homogeneous multivectors.\nxiii\n\nNOTATION\n\n(iv) The outer (exterior) product is written with a wedge, A ∧ B. The outer\nproduct is also only employed between homogeneous multivectors.\n(v) Inner and outer products are always performed before geometric products. This enables us to remove unnecessary brackets. For example, the\nexpression a·b c is to be read as (a·b)c.\n(vi) Angled brackets \u0002M \u0003p are used to denote the result of projecting onto the\nterms in M of grade p. The subscript zero is dropped for the projection\nonto the scalar part.\n(vii) The reverse of the multivector M is denoted either with a dagger, M † , or\nwith a tilde, M̃ . The latter is employed for applications in spacetime.\n(viii) Linear functions are written in an upright font as F(a) or h(a). This\nhelps to distinguish linear functions from multivectors. Some exceptions\nare encountered in chapters 13 and 14, where caligraphic symbols are\nused for certain tensors in gravitation. The adjoint of a linear function is\ndenoted with a bar, h̄(a).\n(ix) Lie groups are written in capital, Roman font as in SU(n). The corresponding Lie algebra is written in lower case, su(n).\nFurther details concerning the conventions adopted in this book can be found\nin sections 2.5 and 4.1.\n\nxiv\n\n1\n\nIntroduction\n\nThe goal of expressing geometrical relationships through algebraic equations has\ndominated much of the development of mathematics. This line of thinking goes\nback to the ancient Greeks, who constructed a set of geometric laws to describe\nthe world as they saw it. Their view of geometry was largely unchallenged\nuntil the eighteenth century, when mathematicians discovered new geometries\nwith different properties from the Greeks’ Euclidean geometry. Each of these\nnew geometries had distinct algebraic properties, and a major preoccupation\nof nineteenth century mathematicians was to place these geometries within a\nunified algebraic framework. One of the key insights in this process was made by\nW.K. Clifford, and this book is concerned with the implications of his discovery.\nBefore we describe Clifford’s discovery (in chapter 2) we have gathered together some introductory material of use throughout this book. This chapter\nrevises basic notions of vector spaces, emphasising pictorial representations of\nthe underlying algebraic rules — a theme which dominates this book. The material is presented in a way which sets the scene for the introduction of Clifford’s\nproduct, in part by reflecting the state of play when Clifford conducted his research. To this end, much of this chapter is devoted to studying the various\nproducts that can be defined between vectors. These include the scalar and\nvector products familiar from three-dimensional geometry, and the complex and\nquaternion products. We also introduce the outer or exterior product, though\nthis is covered in greater depth in later chapters. The material in this chapter is\nintended to be fairly basic, and those impatient to uncover Clifford’s insight may\nwant to jump straight to chapter 2. 
Readers unfamiliar with the outer product\nare encouraged to read this chapter, however, as it is crucial to understanding\nClifford’s discovery.\n1\n\nINTRODUCTION\n\n1.1 Vector (linear) spaces\nAt the heart of much of geometric algebra lies the idea of vector, or linear spaces.\nSome properties of these are summarised here and assumed throughout this book.\nIn this section we talk in terms of vector spaces, as this is the more common\nterm. For all other occurrences, however, we prefer to use the term linear space.\nThis is because the term ‘vector ’ has a very specific meaning within geometric\nalgebra (as the grade-1 elements of the algebra).\n\n1.1.1 Properties\nVector spaces are defined in terms of two objects. These are the vectors, which\ncan often be visualised as directions in space, and the scalars, which are usually\ntaken to be the real numbers. The vectors have a simple addition operation rule\nwith the following obvious properties:\na + b = b + a.\n\n(1.1)\n\na + (b + c) = (a + b) + c.\n\n(1.2)\n\nThis property enables us to write expressions such as a + b + c without\nambiguity.\n(iii) There is an identity element, denoted 0:\na + 0 = a.\n\n(1.3)\n\n(iv) Every element a has an inverse −a:\na + (−a) = 0.\n\n(1.4)\n\nFor the case of directed line segments each of these properties has a clear geometric equivalent. These are illustrated in figure 1.1.\nVector spaces also contain a multiplication operation between the scalars and\nthe vectors. This has the property that for any scalar λ and vector a, the product\nλa is also a member of the vector space. Geometrically, this corresponds to the\ndilation operation. The following further properties also hold for any scalars λ, µ\nand vectors a and b:\n(i)\n(ii)\n(iii)\n(iv)\n\nλ(a + b) = λa + λb;\n(λ + µ)a = λa + µa;\n(λµ)a = λ(µa);\nif 1λ = λ for all scalars λ then 1a = a for all vectors a.\n2\n\n1.1 VECTOR (LINEAR) SPACES\n\nb\n\nb\nc\n\na+b\na\n\na\n\na\n\nb+c\na+b\na+b+c\n\nb\nFigure 1.1 A geometric picture of vector addition. The result of a + b is\nformed by adding the tail of b to the head of a. As is shown, the resultant\nvector a + b is the same as b + a. This finds an algebraic expression in the\nstatement that addition is commutative. In the right-hand diagram the\nvector a + b + c is constructed two different ways, as a + (b + c) and as\n(a + b) + c. The fact that the results are the same is a geometric expression\nof the associativity of vector addition.\n\nThe preceding set of rules serves to define a vector space completely. Note that\nthe + operation connecting scalars is different from the + operation connecting\nthe vectors. There is no ambiguity, however, in using the same symbol for both.\nThe following two definitions will be useful later in this book:\n(i) Two vector spaces are said to be isomorphic if their elements can be\nplaced in a one-to-one correspondence which preserves sums, and there\nis a one-to-one correspondence between the scalars which preserves sums\nand products.\n(ii) If U and V are two vector spaces (sharing the same scalars) and all the\nelements of U are contained in V, then U is said to form a subspace of V.\n\n1.1.2 Bases and dimension\nThe concept of dimension is intuitive for simple vector spaces — lines are onedimensional, planes are two-dimensional, and so on. Equipped with the axioms\nof a vector space we can proceed to a formal definition of the dimension of a\nvector space. First we need to define some terms.\n(i) A vector b is said to be a linear combination of the vectors a1 , . 
. . , an if\nscalars λ1 , . . . , λn can be found such that\nb = λ1 a1 + · · · + λn an =\n\nn\n\u0001\n\nλi ai .\n\n(1.5)\n\ni=1\n\n(ii) A set of vectors {a1 , . . . , an } is said to be linearly dependent if scalars\n3\n\nINTRODUCTION\n\nλ1 , . . . , λn (not all zero) can be found such that\nλ1 a1 + · · · + λn an = 0.\n\n(1.6)\n\nIf such a set of scalars cannot be found, the vectors are said to be linearly\nindependent.\n(iii) A set of vectors {a1 , . . . , an } is said to span a vector space V if every\nelement of V can be expressed as a linear combination of the set.\n(iv) A set of vectors which are both linearly independent and span the space\nV are said to form a basis for V.\nThese definitions all carry an obvious, intuitive picture if one thinks of vectors\nin a plane or in three-dimensional space. For example, it is clear that two\nindependent vectors in a plane provide a basis for all vectors in that plane,\nwhereas any three vectors in the plane are linearly dependent. These axioms and\ndefinitions are sufficient to prove the basis theorem, which states that all bases\nof a vector space have the same number of elements. This number is called the\ndimension of the space. Proofs of this statement can be found in any textbook\non linear algebra, and a sample proof is left to work through as an exercise. Note\nthat any two vector spaces of the same dimension and over the same field are\nisomorphic.\nThe axioms for a vector space define an abstract mathematical entity which\nis already well equipped for studying problems in geometry. In so doing we are\nnot compelled to interpret the elements of the vector space as displacements.\nOften different interpretations can be attached to isomorphic spaces, leading to\ndifferent types of geometry (affine, projective, finite, etc.). For most problems\nin physics, however, we need to be able to do more than just add the elements\nof a vector space; we need to multiply them in various ways as well. This is\nnecessary to formalise concepts such as angles and lengths and to construct\nhigher-dimensional surfaces from simple vectors.\nConstructing suitable products was a major concern of nineteenth century\nmathematicians, and the concepts they introduced are integral to modern mathematical physics. In the following sections we study some of the basic concepts\nthat were successfully formulated in this period. The culmination of this work,\nClifford’s geometric product, is introduced separately in chapter 2. At various\npoints in this book we will see how the products defined in this section can all\nbe viewed as special cases of Clifford’s geometric product.\n\n1.2 The scalar product\nEuclidean geometry deals with concepts such as lines, circles and perpendicularity. In order to arrive at Euclidean geometry we need to add two new concepts\n4\n\n1.2 THE SCALAR PRODUCT\n\nto our vector space. These are distances between points, which allow us to define a circle, and angles between vectors so that we can say that two lines are\nperpendicular. The introduction of a scalar product achieves both of these goals.\nGiven any two vectors a, b, the scalar product a · b is a rule for obtaining a\nnumber with the following properties:\n(i)\n(ii)\n(iii)\n(iv)\n\na·b = b·a;\na·(λb) = λ(a·b);\na·(b + c) = a·b + a·c;\na·a > 0, unless a = 0.\n\n(When we study relativity, this final property will be relaxed.) 
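Both the definitions of linear dependence above and the scalar product axioms lend themselves to quick numerical spot checks. The following short sketch is purely illustrative and not part of the original text; it assumes Python with NumPy, and tests linear dependence by computing the rank of the matrix whose columns are the given vectors.

# Illustrative sketch (assumes NumPy): linear dependence via matrix rank,
# and spot checks of the scalar product axioms (i)-(iv).
import numpy as np

def linearly_independent(vectors):
    # Vectors are independent exactly when the matrix of columns has full column rank.
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

a1 = np.array([1.0, 0.0, 2.0])
a2 = np.array([0.0, 1.0, 1.0])
a3 = a1 + 2.0*a2                               # deliberately dependent on a1 and a2

print(linearly_independent([a1, a2]))          # True
print(linearly_independent([a1, a2, a3]))      # False: a3 = a1 + 2 a2

a, b, c, lam = a1, a2, a3, 3.0
assert np.isclose(a @ b, b @ a)                # (i)   symmetry
assert np.isclose(a @ (lam*b), lam*(a @ b))    # (ii)  linearity in the scalar
assert np.isclose(a @ (b + c), a @ b + a @ c)  # (iii) distributivity over addition
assert a @ a > 0                               # (iv)  positivity for a != 0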
The introduction\nof a scalar product allows us to define the length of a vector, |a|, by\n√\n|a| = (a·a).\n(1.7)\nHere, and throughout this book, the positive square root is always implied by\n√\nthe\nsymbol. The fact that we now have a definition of lengths and distances\nmeans that we have specified a metric space. Many different types of metric\nspace can be constructed, of which the simplest are the Euclidean spaces we\nhave just defined.\nThe fact that for Euclidean space the inner product is positive-definite means\nthat we have a Schwarz inequality of the form\n|a·b| ≤ |a| |b|.\n\n(1.8)\n\nThe proof is straightforward:\n(a + λb)·(a + λb) ≥ 0\n⇒ a·a + 2λa·b + λ b·b ≥ 0\n2\n\n⇒ (a·b) ≤ a·a b·b,\n2\n\n∀λ\n∀λ\n(1.9)\n\nwhere the last step follows by taking the discriminant of the quadratic in λ.\nSince all of the numbers in this inequality are positive we recover (1.8). We can\nnow define the angle θ between a and b by\na·b = |a||b| cos(θ).\n\n(1.10)\n\nTwo vectors whose scalar product is zero are said to be orthogonal. It is usually\nconvenient to work with bases in which all of the vectors are mutually orthogonal.\nIf all of the basis vectors are further normalised to have unit length, they are\nsaid to form an orthonormal basis. If the set of vectors {e1 , . . . , en } denote such\na basis, the statement that the basis is orthonormal can be summarised as\nei ·ej = δij .\n5\n\n(1.11)\n\nINTRODUCTION\n\nHere the δij is the Kronecker delta function, defined by\n\u0002\n1 if i = j,\nδij =\n0 if i = j.\n\n(1.12)\n\nWe can expand any vector a in this basis as\na=\n\nn\n\u0001\n\nai ei = ai ei ,\n\n(1.13)\n\ni=1\n\nwhere we have started to employ the Einstein summation convention that pairs\nof indices in any expression are summed over. This convention will be assumed\nthroughout this book. The {ai } are the components of the vector a in the {ei }\nbasis. These are found simply by\nai = ei ·a.\n\n(1.14)\n\nThe scalar product of two vectors a = ai ei and b = bi ei can now written simply\nas\na·b = (ai ei )·(bj ej ) = ai bj ei ·ej = ai bj δij = ai bi .\n\n(1.15)\n\nIn spaces where the inner product is not positive-definite, such as Minkowski\nspacetime, there is no equivalent version of the Schwarz inequality. In such cases\nit is often only possible to define an ‘angle’ between vectors by replacing the\ncosine function with a cosh function. In these cases we can still introduce orthonormal frames and use these to compute scalar products. The main modification\nis that the Kronecker delta is replaced by ηij which again is zero if i = j, but\ncan take values ±1 if i = j.\n1.3 Complex numbers\nThe scalar product is the simplest product one can define between vectors, and\nonce such a product is defined one can formulate many of the key concepts of\nEuclidean geometry. But this is by no means the only product that can be defined\nbetween vectors. In two dimensions a new product can be defined via complex\narithmetic. A complex number can be viewed as an ordered pair of real numbers\nwhich represents a direction in the complex plane, as was realised by Wessel in\n1797. Their product enables complex numbers to perform geometric operations,\nsuch as rotations and dilations. But suppose that we take the complex number\nz = x + iy and square it, forming\nz 2 = (x + iy)2 = x2 − y 2 + 2xyi.\n\n(1.16)\n\nIn terms of vector arithmetic, neither the real nor imaginary parts of this expression have any geometric significance. 
A more geometrically useful product\n6\n\n1.4 QUATERNIONS\n\nzz ∗ = (x + iy)(x − iy) = x2 + y 2 ,\n\n(1.17)\n\nwhich returns the square of the length of the vector. A product of two vectors\nin a plane, z and w = u + vi, can therefore be constructed as\nzw∗ = (x + iy)(u − iv) = xu + vy + i(uy − vx).\n\n(1.18)\n\nThe real part of the right-hand side recovers the scalar product. To understand\nthe imaginary term consider the polar representation\nz = |z|eiθ ,\n\nw = |w|eiφ\n\n(1.19)\n\nso that\nzw∗ = |z||w|ei(θ − φ) .\n\n(1.20)\n\nThe imaginary term has magnitude |z||w| sin(θ − φ), where θ − φ is the angle\nbetween the two vectors. The magnitude of this term is therefore the area of\nthe parallelogram defined by z and w. The sign of the term conveys information\nabout the handedness of the area element swept out by the two vectors. This\nwill be defined more carefully in section 1.6.\nWe thus have a satisfactory interpretation for both the real and imaginary\nparts of the product zw∗ . The surprising feature is that these are still both parts\nof a complex number. We thus have a second interpretation for complex addition,\nas a sum between scalar objects and objects representing plane segments. The\ncomplex numbers as opposed to pairs of real numbers. This is a theme to which\nwe shall return regularly in following chapters.\n1.4 Quaternions\nThe fact that complex arithmetic can be viewed as representing a product for\nvectors in a plane carries with it a further advantage — it allows us to divide\nby a vector. Generalising this to three dimensions was a major preoccupation\nof the physicist W.R. Hamilton (see figure 1.2). Since a complex number x + iy\ncan be represented by two rectangular axes on a plane it seemed reasonable to\nrepresent directions in space by a triplet consisting of one real and two complex\nnumbers. These can be written as x + iy + jz, where the third term jz represents\na third axis perpendicular to the other two. The complex numbers i and j have\nthe properties that i2 = j 2 = −1. The norm for such a triplet would then be\n(x + iy + jz)(x − iy − jz) = (x2 + y 2 + z 2 ) − yz(ij + ji).\n\n(1.21)\n\nThe final term is problematic, as one would like to recover the scalar product\nhere. The obvious solution to this problem is to set ij = −ji so that the last\nterm vanishes.\n7\n\nINTRODUCTION\n\nFigure 1.2 William Rowan Hamilton 1805–1865. Inventor of quaternions,\nand one of the key scientific figures of the nineteenth century. He spent\nmany years frustrated at being unable to extend his theory of couples of\nnumbers (complex numbers) to three dimensions. In the autumn of 1843\nhe returned to this problem, quite possibly prompted by a visit he received\nfrom the young German mathematician Eisenberg. Among Eisenberg’s\npapers was the observation that matrices form the elements of an algebra that was much like ordinary arithmetic except that multiplication was\nnon-commutative. This was the vital step required to find the quaternion algebra. Hamilton arrived at this algebra on 16 October 1843 while\nout walking with his wife, and carved the equations in stone on Brougham\nBridge. His discovery of quaternions is perhaps the best-documented mathematical discovery ever.\n\nThe anticommutative law ij = −ji ensures that the norm of a triplet behaves\nsensibly, and also that multiplication of triplets in a plane behaves in a reasonable\nmanner. 
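Before continuing with Hamilton's triplets, here is a quick numerical illustration of the planar product zw∗ discussed above. It is a sketch only (standard Python, not part of the original text): the real part of zw∗ reproduces the scalar product of the corresponding vectors, and the imaginary part their signed area, exactly as in equation (1.18).

# Illustrative sketch: real and imaginary parts of z w* for two plane vectors.
z = complex(1.0, 2.0)                 # represents the vector (x, y) = (1, 2)
w = complex(3.0, -1.0)                # represents the vector (u, v) = (3, -1)

p = z * w.conjugate()

dot  = 1.0*3.0 + 2.0*(-1.0)           # scalar product xu + yv
area = 3.0*2.0 - (-1.0)*1.0           # signed area uy - vx

print(p.real, dot)                    # both 1.0
print(p.imag, area)                   # both 7.0

With this check of the planar product in hand, we can return to the search for a product of triplets.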
The same is not true for the general product of triplets, however.\nConsider\n(a + ib + jc)(x + iy + jz) = (ax − by − cz) + i(ay + bx)\n+ j(az + cx) + ij(bz − cy).\n\n(1.22)\n\nSetting ij = −ji is no longer sufficient to remove the ij term, so the algebra\ndoes not close. The only thing for Hamilton to do was to set ij = k, where k is\nsome unknown, and see if it could be removed somehow. While walking along\nthe Royal Canal he suddenly realised that if his triplets were instead made up\nof four terms he would be able to close the algebra in a simple, symmetric way.\n8\n\n1.4 QUATERNIONS\n\nTo understand his discovery, consider\n(a + ib + jc + kd)(a − ib − jc − kd)\n= a2 + b2 + c2 + d2 (−k 2 ) − bd(ik + ki) − cd(jk + kj),\n\n(1.23)\n\nwhere we have assumed that i2 = j 2 = −1 and ij = −ji. The expected norm of\nthe above product is a2 + b2 + c2 + d2 , which is obtained by setting k 2 = −1 and\nik = −ki and jk = −kj. So what values do we use for jk and ik? These follow\nfrom the fact that ij = k, which gives\nik = i(ij) = (ii)j = −j\n\n(1.24)\n\nkj = (ij)j = −i.\n\n(1.25)\n\nand\n\nThus the multiplication rules for quaternions are\ni2 = j 2 = k 2 = −1\n\n(1.26)\n\nand\nij = −ji = k,\n\njk = −kj = i,\n\nki = −ik = j.\n\n(1.27)\n\nThese can be summarised neatly as i2 = j 2 = k 2 = ijk = −1. It is a simple\nmatter to check that these multiplication laws define a closed algebra.\nHamilton was so excited by his discovery that the very same day he obtained\nleave to present a paper on the quaternions to the Royal Irish Academy. The\nsubsequent history of the quaternions is a fascinating story which has been described by many authors. Some suggested material for further reading is given\nat the end of this chapter. In brief, despite the many advantages of working with\nquaternions, their development was blighted by two major problems.\nThe first problem was the status of vectors in the algebra. Hamilton identified\nvectors with pure quaternions, which had a null scalar part. On the surface\nthis seems fine — pure quaternions define a three-dimensional vector space.\nIndeed, Hamilton invented the word ‘vector ’ precisely for these objects and this\nis the origin of the now traditional use of i, j and k for a set of orthonormal\nbasis vectors. Furthermore, the full product of two pure quaternions led to the\ndefinition of the extremely useful cross product (see section 1.5). The problem\nis that the product of two pure vectors does not return a new pure vector, so\nthe vector part of the algebra does not close. This means that a number of ideas\nin complex analysis do not extend easily to three dimensions. Some people felt\nthat this meant that the full quaternion product was of little use, and that the\nscalar and vector parts of the product should be kept separate. This criticism\nmisses the point that the quaternion product is invertible, which does bring many\nThe second major difficulty encountered with quaternions was their use in\n9\n\nINTRODUCTION\n\ndescribing rotations. The irony here is that quaternions offer the clearest way\nof handling rotations in three dimensions, once one realises that they provide\na ‘spin-1/2’ representation of the rotation group. That is, if a is a vector (a\npure quaternion) and R is a unit quaternion, a new vector is obtained by the\ndouble-sided transformation law\na\u0002 = RaR∗ ,\n\n(1.28)\n\nwhere the * operation reverses the sign of all three ‘imaginary’ components. 
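The multiplication rules (1.26) and (1.27), and the double-sided transformation law a′ = RaR∗ just quoted, are easy to verify numerically. The sketch below is illustrative only: the class name Quat is an invention of this example, and the half-angle form of the rotation quaternion is assumed here so that the result is a rotation through θ, in line with the 'spin-1/2' description given in the next paragraph.

# Illustrative sketch: Hamilton's rules i^2 = j^2 = k^2 = ijk = -1 and a' = R a R*.
import math

class Quat:
    def __init__(self, w, x, y, z):
        self.w, self.x, self.y, self.z = w, x, y, z
    def __mul__(self, q):
        w1, x1, y1, z1 = self.w, self.x, self.y, self.z
        w2, x2, y2, z2 = q.w, q.x, q.y, q.z
        return Quat(w1*w2 - x1*x2 - y1*y2 - z1*z2,
                    w1*x2 + x1*w2 + y1*z2 - z1*y2,
                    w1*y2 - x1*z2 + y1*w2 + z1*x2,
                    w1*z2 + x1*y2 - y1*x2 + z1*w2)
    def conj(self):
        return Quat(self.w, -self.x, -self.y, -self.z)
    def __repr__(self):
        return f"{self.w:+.3f} {self.x:+.3f}i {self.y:+.3f}j {self.z:+.3f}k"

i, j, k = Quat(0, 1, 0, 0), Quat(0, 0, 1, 0), Quat(0, 0, 0, 1)
print(i*i, j*j, k*k, i*j*k)       # each equals -1
print(i*j, j*k, k*i)              # k, i and j respectively

theta = math.pi/2
R = Quat(math.cos(theta/2), 0, 0, math.sin(theta/2))   # rotation about the k axis
a = Quat(0, 1, 0, 0)                                   # the pure quaternion (vector) i
print(R * a * R.conj())                                # approximately j: i rotated through 90 degrees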
A\nconsequence of this is that each of the basis quaternions i, j and k generates\nrotations through π. Hamilton, however, was led astray by the analogy with\ncomplex numbers and tried to impose a single-sided transformation of the form\na\u0002 = Ra. This works if the axis of rotation is perpendicular to a, but otherwise\ndoes not return a pure quaternion. More damagingly, it forces one to interpret\nthe basis quaternions as generators of rotations through π/2, which is simply\nwrong!\nDespite the problems with quaternions, it was clear to many that they were\na useful mathematical system worthy of study. Tait claimed that quaternions\n‘freed the physicist from the constraints of coordinates and allowed thoughts to\nrun in their most natural channels’ — a theme we shall frequently meet in this\nbook. Quaternions also found favour with the physicist James Clerk Maxwell,\nwho employed them in his development of the theory of electromagnetism. Despite these successes, however, quaternions were weighed down by the increasingly dogmatic arguments over their interpretation and were eventually displaced\nby the hybrid system of vector algebra promoted by Gibbs.\n\n1.5 The cross product\nTwo of the lasting legacies of the quaternion story are the introduction of the\nidea of a vector, and the cross product between two vectors. Suppose we form\nthe product of two pure quaternions a and b, where\na = a1 i + a2 j + a3 k,\n\nb = b1 i + b2 j + b3 k.\n\n(1.29)\n\nTheir product can be written\nab = −ai bi + c,\n\n(1.30)\n\nc = (a2 b3 − a3 b2 )i + (a3 b1 − a1 b3 )j + (a1 b2 − a2 b1 )k.\n\n(1.31)\n\nwhere c is the pure quaternion\n\nWriting c = c1 i + c2 j + c3 k the component relation can be written as\nci = \u0007ijk aj bk ,\n10\n\n(1.32)\n\n1.6 THE OUTER PRODUCT\n\nwhere the alternating tensor \u0007ijk is defined by\n\n\nif ijk is a cylic permutation of 123,\n\n1\n\u0007ijk = −1 if ijk is an anticylic permutation of 123,\n\n\n0\notherwise.\n\n(1.33)\n\nWe recognise the preceding as defining the cross product of two vectors, a×b.\nThis has the following properties:\n(i) a×b is perpendicular to the plane defined by a and b;\n(ii) a×b has magnitude |a||b| sin(θ);\n(iii) the vectors a, b and a×b form a right-handed set.\nThese properties can alternatively be viewed as defining the cross product, and\nfrom them the algebraic definition can be recovered. This is achieved by starting\nwith a right-handed orthonormal frame {ei }. For these we must have\ne1 ×e2 = e3\n\netc.\n\n(1.34)\n\nso that we can write\nei ×ej = \u0007ijk ek .\n\n(1.35)\n\nExpanding out a vector in terms of this basis recovers the formula\na×b = (ai ei )×(bj ej )\n= ai bj (ei ×ej )\n= (\u0007ijk ai bj )ek .\n\n(1.36)\n\nHence the geometric definition recovers the algebraic one.\nThe cross product quickly proved itself to be invaluable to physicists, dramatically simplifying equations in dynamics and electromagnetism. In the latter\npart of the nineteenth century many physicists, most notably Gibbs, advocated\nabandoning quaternions altogether and just working with the individual scalar\nand cross products. We shall see in later chapters that Gibbs was misguided in\nsome of his objections to the quaternion product, but his considerable reputation carried the day and by the 1900s quaternions had all but disappeared from\nmainstream physics.\n\n1.6 The outer product\nThe cross product has one major failing — it only exists in three dimensions. 
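The component definition ci = εijk aj bk can be checked directly against the geometric definition. The following sketch is illustrative only (it assumes NumPy, and uses einsum purely as a convenient way of carrying out the index contraction); it builds the alternating tensor explicitly and compares the result with the built-in cross product.

# Illustrative sketch: the cross product from the alternating tensor eps_ijk, eq. (1.32).
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0       # cyclic permutations of 123
    eps[i, k, j] = -1.0      # anticyclic permutations of 123

def cross(a, b):
    # c_i = eps_ijk a_j b_k, with the repeated indices j and k summed over
    return np.einsum('ijk,j,k->i', eps, a, b)

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 2.0])
print(cross(a, b))
print(np.cross(a, b))        # agrees with the component definition above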
In\ntwo dimensions there is nowhere else to go, whereas in four dimensions the concept of a vector orthogonal to a pair of vectors is not unique. To see this, consider\nfour orthonormal vectors e1 , . . . , e4 . If we take the pair e1 and e2 and attempt\n11\n\nINTRODUCTION\n\nFigure 1.3 Hermann Gunther Grassmann (1809–1877), born in Stettin,\nGermany (now Szczecin, Poland). A German mathematician and schoolteacher, Grassmann was the third of his parents’ twelve children and was\nborn into a family of scholars. His father studied theology and became a\nminister, before switching to teaching mathematics and physics at the Stettin Gymnasium. Hermann followed in his father’s footsteps, first studying\ntheology, classical languages and literature at Berlin. After returning to\nStettin in 1830 he turned his attention to mathematics and physics. Grassmann passed the qualifying examination to win a teaching certificate in\n1839. This exam included a written assignment on the tides, for which he\ngave a simplified treatment of Laplace’s work based upon a new geometric\ncalculus that he had developed. By 1840 he had decided to concentrate\non mathematics research. He published the first edition of his geometric\ncalculus, the 300 page Lineale Ausdehnungslehre in 1844, the same year\nthat Hamilton announced the discovery of the quaternions. His work did\nnot achieve the same impact as the quaternions, however, and it was many\nyears before his ideas were understood and appreciated by other mathematicians. Disappointed by this lack of interest, Grassmann turned his\nattention to linguistics and comparative philology, with greater immediate\nimpact. He was an expert in Sanskrit and translated the Rig-Veda (1876–\n1877). He also formulated the linguistic law (named after him) stating\nthat in Indo-European bases, successive syllables may not begin with aspirates. He died before he could see his ideas on geometry being adopted\ninto mainstream mathematics.\n\nto find a vector perpendicular to both of these, we see that any combination of\ne3 and e4 will do.\nA suitable generalisation of the idea of the cross product was constructed by\n12\n\n1.6 THE OUTER PRODUCT\n\na\n\nb∧a\n\na∧b\nb\n\nb\n\nθ\n\nθ\na\n\nFigure 1.4 The outer product. The outer or wedge product of a and b\nreturns a directed area element of area |a||b| sin(θ). The orientation of the\nparallelogram is defined by whether the circuit a, b, −a, −b is right-handed\n(anticlockwise) or left-handed (clockwise). Interchanging the order of the\nvectors reverses the orientation and introduces a minus sign in the product.\n\nthe remarkable German mathematician H.G. Grassmann (see figure 1.3). His\nwork had its origin in the Barycentrischer Calcul of Möbius. There the author\nintroduced expressions like AB for the line connecting the points A and B and\nABC for the triangle defined by A, B and C. Möbius also introduced the\ncrucial idea that the sign of the quantity should change if any two points are\ninterchanged. (These oriented segments are now referred to as simplices.) It was\nGrassmann’s leap of genius to realise that expressions like AB could actually be\nviewed as a product between vectors. He thus introduced the outer or exterior\nproduct which, in modern notation, we write as a ∧ b, or ‘a wedge b’.\nThe outer product can be defined on any vector space and, geometrically, we\nare not forced to picture these vectors as displacements. 
Indeed, Grassmann was motivated by a projective viewpoint, where the elements of the vector space are interpreted as points, and the outer product of two points defines the line through the points. For our purposes, however, it is simplest to adopt a picture in which vectors represent directed line segments. The outer product then provides a means of encoding a plane, without relying on the notion of a vector perpendicular to it. The result of the outer product is therefore neither a scalar nor a vector. It is a new mathematical entity encoding an oriented plane and is called a bivector. It can be visualised as the parallelogram obtained by sweeping one vector along the other (figure 1.4). Changing the order of the vectors reverses the orientation of the plane. The magnitude of a∧b is |a||b| sin(θ), the same as the area of the plane segment swept out by the vectors.

The outer product of two vectors has the following algebraic properties:

(i) The product is antisymmetric:

a∧b = −b∧a.    (1.37)

This has the geometric interpretation of reversing the orientation of the surface defined by a and b. It follows immediately that

a∧a = 0,  for all vectors a.    (1.38)

(ii) Bivectors form a linear space, the same way that vectors do. In two and three dimensions the addition of bivectors is easy to visualise. In higher dimensions this addition is not always so easy to visualise, because two planes need not share a common line.

(iii) The outer product is distributive over addition:

a∧(b + c) = a∧b + a∧c.    (1.39)

This helps to visualise the addition of bivectors which share a common line (see figure 1.5).

Figure 1.5 A geometric picture of bivector addition. In three dimensions any two non-parallel planes share a common line. If this line is denoted a, the two planes can be represented by a∧b and a∧c. Bivector addition proceeds much like vector addition. The planes are combined at a common boundary and the resulting plane is defined by the initial and final edges, as opposed to the initial and final points for vector addition. The mathematical statement of this addition rule is the distributivity of the outer product over addition.

While it is convenient to visualise the outer product as a parallelogram, the actual shape of the object is not conveyed by the result of the product. This can be seen easily by defining a′ = a + λb and forming

a′∧b = a∧b + λ b∧b = a∧b.    (1.40)

The same bivector can therefore be generated by many different pairs of vectors. In many ways it is better to replace the picture of a directed parallelogram with that of a directed circle. The circle defines both the plane and a handedness, and its area is equal to the magnitude of the bivector. This therefore conveys all of the information one has about the bivector, though it does make bivector addition harder to visualise.

1.6.1 Two dimensions

The outer product of any two vectors defines a plane, so one has to go to at least two dimensions to form an interesting product. Suppose then that {e1, e2} are an orthonormal basis for the plane, and introduce the vectors

a = a1 e1 + a2 e2,   b = b1 e1 + b2 e2.    (1.41)

The outer product a∧b contains

a∧b = a1 b1 e1∧e1 + a1 b2 e1∧e2 + a2 b1 e2∧e1 + a2 b2 e2∧e2
    = (a1 b2 − a2 b1) e1∧e2,    (1.42)

which recovers the imaginary part of the product of (1.18). The term therefore immediately has the expected magnitude |a||b| sin(θ).
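A short numerical aside (illustrative only, standard Python) makes the last two points concrete: the single component a1b2 − a2b1 of equation (1.42) is antisymmetric, and it is unchanged when a multiple of b is added to a, which is the statement of equation (1.40) that many different pairs of vectors generate the same bivector.

# Illustrative sketch: the e1^e2 component of a^b in the plane, eq. (1.42).
def wedge2d(a, b):
    return a[0]*b[1] - a[1]*b[0]

a = (2.0, 1.0)
b = (0.5, 3.0)
lam = 4.0
a_prime = (a[0] + lam*b[0], a[1] + lam*b[1])   # a' = a + lambda b

print(wedge2d(a, b))          # 5.5
print(wedge2d(a_prime, b))    # 5.5 again: a'^b = a^b, eq. (1.40)
print(wedge2d(b, a))          # -5.5: antisymmetry, eq. (1.37)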
The coefficient of e1 ∧ e2\nis positive if a and b have the same orientation as e1 and e2 . The orientation is\ndefined by traversing the boundary of the parallelogram defined by the vectors a,\nb, −a, −b (see figure 1.4). By convention, we usually work with a right-handed\nset of reference axes (viewed from above). In this case the coefficient a1 b2 − a2 b1\nwill be positive if a and b also form a right-handed pair.\n1.6.2 Three dimensions\nIn three dimensions the space of bivectors is also three-dimensional, because each\nbivector can be placed in a one-to-one correspondence with the vector perpendicular to it. Suppose that {e1 , e2 , e3 } form a right-handed basis (see comments\nbelow), and the two vectors a and b are expanded in this basis as a = ai ei and\nb = bi ei . The bivector a ∧ b can then be decomposed in terms of an orthonormal\nframe of bivectors by\na∧b = (ai ei )∧(bj ej )\n= (a2 b3 − b3 a2 )e2 ∧e3 + (a3 b1 − a1 b3 )e3 ∧e1\n+ (a1 b2 − a2 b1 )e1 ∧e2 .\n15\n\n(1.43)\n\nINTRODUCTION\n\nThe components in this frame are therefore the same as those of the cross product. But instead of being the components of a vector perpendicular to a and b,\nthey are the components of the bivector a ∧ b. It is this distinction which enables\nthe outer product to be defined in any dimension.\n\n1.6.3 Handedness\nWe have started to employ the idea of handedness without giving a satisfactory\ndefinition of it. The only space in which there is an unambiguous definition of\nhandedness is three dimensions, as this is the space we inhabit and most of us\ncan distinguish our left and right hands. This concept of ‘left’ and ‘right’ is\na man-made convention adopted to make our life easier, and it extends to the\nconcept of a frame in a straightforward way. Suppose that we are presented\nwith three orthogonal vectors {e1 , e2 , e3 }. We align the 3 axis with the thumb\nof our right hand and then close our fist. If the direction in which our fist closes\nis the same as that formed by rotating from the 1 to the 2 axis, the frame is\nright-handed. If not, it is left-handed.\nSwapping any pair of vectors swaps the handedness of a frame. Performing two\nsuch swaps returns us to the original handedness. In three dimensions this corresponds to a cyclic reordering, and ensures that the frames {e1 , e2 , e3 }, {e3 , e1 , e2 }\nand {e2 , e3 , e1 } all have the same orientation.\nThere is no agreed definition of a ‘right-handed’ orientation in spaces of dimensions other than three. All one can do is to make sure that any convention\nused is adopted consistently. In all dimensions the orientation of a set of vectors is changed if any two vectors are swapped. In two dimensions one does\nstill tend to talk about right-handed axes, though the definition is dependent\non the idea of looking down on the plane from above. The idea of above and\nbelow is not a feature of the plane itself, but depends on how we embed it in our\nthree-dimensional world. There is no definition of left or right-handed which is\nintrinsic to the plane.\n\n1.6.4 Extending the outer product\nThe preceding examples demonstrate that in arbitrary dimensions the components of a∧b are given by\n(a∧b)ij = a[i bj]\n\n(1.44)\n\nwhere the [ ] denotes antisymmetrisation. Grassmann was able to take this idea\nfurther by defining an outer product for any number of vectors. The idea is a\nsimple extension of the preceding formula. 
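Equation (1.44) is also easy to check numerically. The sketch below is illustrative only and assumes NumPy; note that whether the antisymmetrising bracket carries a factor of 1/2 is a matter of convention, so the unnormalised combination ai bj − aj bi is used here, which makes the independent components match equation (1.43), and hence the cross product, directly.

# Illustrative sketch: components of a^b as an antisymmetric array of components.
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, -1.0, 2.0])

B = np.outer(a, b) - np.outer(b, a)    # B[i, j] = a_i b_j - a_j b_i

print(B[1, 2], B[2, 0], B[0, 1])       # the components on e2^e3, e3^e1 and e1^e2
print(np.cross(a, b))                  # the same three numbers, as in eq. (1.43)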
Expressed in an orthonormal frame,\nthe components of the outer product on n vectors are the totally antisymmetrised\n16\n\n1.7 NOTES\n\nproducts of the components of each vector. This definition has the useful property that the outer product is associative,\na∧(b∧c) = (a∧b)∧c.\n\n(1.45)\n\nFor example, in three dimensions we have\na∧b∧c = (ai ei )∧(bj ej )∧(ck ek ) = \u0007ijk ai bj ck e1 ∧e2 ∧e3 ,\n\n(1.46)\n\nwhich represents a directed volume (see section 2.4).\nA further feature of the antisymmetry of the product is that the outer product\nof any set of linearly dependent vectors vanishes. This means that statements like\n‘this vector lies on a given plane’, or ‘these two hypersurfaces share a common\nline’ can be encoded algebraically in a simple manner. Equipped with these\nideas, Grassmann was able to construct a system capable of handling geometric\nconcepts in arbitrary dimensions.\nDespite Grassmann’s considerable achievement, the book describing his ideas,\nhis Lineale Ausdehnungslehre, did not have any immediate impact. This was\nno doubt due largely to his relative lack of reputation (he was still a German\nschoolteacher when he wrote this work). It was over twenty years before anyone\nof note referred to Grassmann’s work, and during this time Grassmann produced\na second, extended version of the Ausdehnungslehre. In the latter part of the\nnineteenth century Grassmann’s work started to influence leading figures like\nGibbs and Clifford. Gibbs wrote a number of papers praising Grassmann’s work\nand contrasting it favourably with the quaternion algebra. Clifford used Grassmann’s work as the starting point for the development of his geometric algebra,\nthe subject of this book.\nToday, Grassmann’s ideas are recognised as the first presentation of the abstract theory of vector spaces over the field of real numbers. Since his death, his\nwork has given rise to the influential and fashionable areas of differential forms\nand Grassmann variables. The latter are anticommuting variables and are fundamental to the foundations of much of modern supersymmetry and superstring\ntheory.\n1.7 Notes\nDescriptions of linear algebra and vector spaces can be found in most introductory textbooks of mathematics, as can discussions of the scalar and cross\nproducts and complex arithmetic. Quaternions, on the other hand, are much less\nlikely to be mentioned. There is a large specialised literature on the quaternions,\nand a good starting point are the works of Altmann (1986, 1989). Altmann’s\npaper on ‘Hamilton, Rodriques and the quaternion scandal’ (1989) is also a good\nintroduction to the history of the subject.\nThe outer product is covered in most modern textbooks on geometry and\n17\n\nINTRODUCTION\n\nphysics, such as those by Nakahara (1990), Schutz (1980), and Gockeler &\nSchucker (1987). In most of these works, however, the exterior product is only\ntreated in the context of differential forms. Applications to wider topics in geometry have been discussed by Hestenes (1991) and others. A useful summary in\nprovided in the proceedings of the conference Hermann Gunther Grassmann\n(1809–1877), edited by Schubring (1996). Grassmann’s Lineale Ausdehnungslehre is also finally available in English translation due to Kannenberg (1995).\nFor those with a deeper interest in the history of mathematics and the development of vector algebra a good starting point is the set of books by Kline (1972).\nThere are also biographies available of many of the key protagonists. 
Perhaps\neven more interesting is to return to their original papers and experience first\nhand the robust and often humorous language employed at the time. The collected works of J.W. Gibbs (1906) are particularly entertaining and enlightening,\nand contain a good deal of valuable historical information.\n1.8 Exercises\n1.1\n\n1.2\n\nSuppose that the two sets {a1 , . . . , am } and {b1 , . . . , bn } form bases for\nthe same vector space, and suppose initially that m > n. By establishing\na contradiction, prove the basis theorem that all bases of a vector space\nhave the same number of elements.\nDemonstrate that the following define vector spaces:\n(a) the set of all polynomials of degree less than or equal to n;\n(b) all solutions of a given linear homogeneous ordinary differential\nequation;\n(c) the set of all n × m matrices.\n\n1.3\n1.4\n1.5\n\n1.6\n\nProve that in Euclidean space |a + b| ≤ |a| + |b|. When does equality\nhold?\nShow that the unit quaternions {±1, ±i, ±j ± k} form a discrete group.\nThe unit quaternions i, j, k are generators of rotations about their respective axes. Are rotations through either π or π/2 consistent with the\nequation ijk = −1?\nProve the following:\n(a) a·(b×c) = b·(c×a) = c·(a×b);\n(b) a×(b×c) = a·c b − a·b c;\n(c) |a×b| = |a| |b| sin(θ), where a·b = |a| |b| cos(θ).\n\n1.7\n\nProve that the dimension of the space formed by the exterior product\nof m vectors drawn from a space of dimension n is\nn!\nn(n − 1) · · · (n − m + 1)\n=\n.\n1 · 2···m\n(n − m)!m!\n18\n\n1.8 EXERCISES\n\n1.8\n1.9\n\nProve that the n-fold exterior product of a set of n dependent vectors is\nzero.\nA convex polygon in a plane is specified by the ordered set of points\n{x0 , x1 , . . . , xn }. Prove that the directed area of the polygon is given by\nA = 12 (x0 ∧x1 + x1 ∧x2 + · · · + xn ∧x0 ).\nWhat is the significance of the sign? Can you extend the idea to a\ntriangulated surface in three dimensions?\n\n19\n\n2\n\nGeometric algebra in two and\nthree dimensions\n\nGeometric algebra was introduced in the nineteenth century by the English mathematician William Kingdon Clifford (figure 2.1). Clifford appears to have been\none of the small number of mathematicians at the time to be significantly influenced by Grassmann’s work. Clifford introduced his geometric algebra by\nuniting the inner and outer products into a single geometric product. This is\nassociative, like Grassmann’s product, but has the crucial extra feature of being\ninvertible, like Hamilton’s quaternion algebra. Indeed, Clifford’s original motivation was to unite Grassmann’s and Hamilton’s work into a single structure.\nIn the mathematical literature one often sees this subject referred to as Clifford\nalgebra. We have chosen to follow the example of David Hestenes, and many\nother modern researchers, by returning to Clifford’s original choice of name —\ngeometric algebra. One reason for this is that the first published definition of\nthe geometric product was due to Grassmann, who introduced it in the second\nAusdehnungslehre. It was Clifford, however, who realised the great potential of\nthis product and who was responsible for advancing the subject.\nIn this chapter we introduce the basics of geometric algebra in two and three\ndimensions in a way that is intended to appear natural and geometric, if somewhat informal. A more formal, axiomatic approach is delayed until chapter 4,\nwhere geometric algebra is defined in arbitrary dimensions. 
The meaning of the\nvarious terms in the algebra we define will be illustrated with familiar examples\nfrom geometry. In so doing we will also uncover how Hamilton’s quaternions\nfit into geometric algebra, and understand where it was that Hamilton and his\nfollowers went wrong in their treatment of three-dimensional geometry. One of\nthe most powerful applications of geometric algebra is to rotations, and these\nare considered in some detail in this chapter. It is well known that rotations in\na plane can be efficiently handled with complex numbers. We will see how to\nextend this idea to rotations in three-dimensional space. This representation has\nmany applications in classical and quantum physics.\n20\n\n2.1 A NEW PRODUCT FOR VECTORS\n\nFigure 2.1 William Kingdon Clifford 1845–1879. Born in Exeter on 4 May\n1845, his father was a justice of the peace and his mother died early in his\nlife. After school he went to King’s College, London and then obtained\na scholarship to Trinity College, Cambridge, where he followed the likes\nof Thomson and Maxwell in becoming Second Wrangler. There he also\nachieved a reputation as a daring athlete, despite his slight frame. He was\nrecommended for a fellowship at Trinity College by Maxwell, and in 1871\ntook the Professorship of Applied Mathematics at University College, London. He was made a Fellow of the Royal Society at the extremely young\nage of 29. He married Lucy in 1875, and their house became a fashionable meeting place for scientists and philosophers. As well as being one of\nthe foremost mathematicians of his day, he was an accomplished linguist,\nphilosopher and author of children’s stories. Sadly, his insatiable appetite\nfor physical and mental exercise was not matched by his physique, and in\n1878 he was instructed to stop work and leave England for the Mediterranean. He returned briefly, only for his health to deteriorate further in\nthe English climate. He left for Madeira, where he died on 3 March 1879\nat the age of just 33. Further details of his life can be found in the book\nc\nSuch Silver Currents (Chisholm, 2002). Portrait by John Collier (\u0002The\nRoyal Society).\n\n2.1 A new product for vectors\nIn chapter 1 we studied various products for vectors, including the symmetric\nscalar (or inner) product and the antisymmetric exterior (or outer) product. In\ntwo dimensions, we showed how to interpret the result of the complex product\nzw∗ (section 1.3). The scalar term is the inner product of the two vectors representing the points in the complex plane, and the imaginary term records their\n21\n\nGEOMETRIC ALGEBRA IN TWO AND THREE DIMENSIONS\n\ndirected area. Furthermore, the scalar term is symmetric, and the imaginary\nterm is antisymmetric in the two arguments. Clifford’s powerful idea was to\ngeneralise this product to arbitrary dimensions by replacing the imaginary term\nwith the outer product. The result is the geometric product and is written simply\nas ab. The result is the sum of a scalar and a bivector, so\nab = a·b + a∧b.\n\n(2.1)\n\nThis sum of two distinct objects — a scalar and a bivector — looks strange at\nfirst and goes against the rule that one should only add like objects. This is the\nfeature of geometric algebra that initially causes the greatest difficulty, in much\nthe same way that i2 = −1 initially unsettles most school children. So how is\nthe sum on the right-hand side of equation (2.1) to be viewed? 
The answer is that it should be viewed in precisely the same way as the addition of a real and an imaginary number. The result is neither purely real nor purely imaginary — it is a mixture of two different objects which are combined to form a single complex number. Similarly, the addition of a scalar to a bivector enables us to keep track of the separate components of the product ab. The advantages of this are precisely the same as the advantages of complex arithmetic over working with the separate real and imaginary parts. This analogy between multivectors in geometric algebra and complex numbers is more than a mere pedagogical device. As we shall discover, geometric algebra encompasses both complex numbers and quaternions. Indeed, Clifford's achievement was to generalise complex arithmetic to spaces of arbitrary dimensions.

From the symmetry and antisymmetry of the terms on the right-hand side of equation (2.1) we see that

ba = b·a + b∧a = a·b − a∧b.    (2.2)

It follows that

a·b = ½(ab + ba)    (2.3)

and

a∧b = ½(ab − ba).    (2.4)

We can thus define the inner and outer products in terms of the geometric product. This forms the starting point for an axiomatic development of geometric algebra, which is presented in chapter 4.

If we form the product of a and the parallel vector λa we obtain

a(λa) = λ a·a + λ a∧a = λ a·a,    (2.5)

which is therefore a pure scalar. It follows similarly that a² is a scalar, so we can write a² = |a|² for the square of the length of a vector. If instead a and b are perpendicular vectors, their product is

ab = a·b + a∧b = a∧b    (2.6)

and so is a pure bivector. We also see that

ba = b·a + b∧a = −a∧b = −ab,    (2.7)

which shows us that orthogonal vectors anticommute. The geometric product between general vectors encodes the relative contributions of both their parallel and perpendicular components, summarising these in the separate scalar and bivector terms.

2.2 An outline of geometric algebra

Clifford went further than just allowing scalars to be added to bivectors. He defined an algebra in which elements of any type could be added or multiplied together. This is what he called a geometric algebra. Elements of a geometric algebra are called multivectors and these form a linear space — scalars can be added to bivectors, and vectors, etc. Geometric algebra is a graded algebra, and elements of the algebra can be broken up into terms of different grade. Scalars have grade 0, vectors grade 1, bivectors grade 2, and so on. Essentially, the grade of the object is the dimension of the hyperplane it specifies. The term 'grade' is preferred to 'dimension', however, as the latter is regularly employed for the size of a linear space. We denote the operation of projecting onto the terms of a chosen grade by ⟨ ⟩r, so ⟨ab⟩2 denotes the grade-2 (bivector) part of the geometric product ab. That is,

⟨ab⟩2 = a∧b.    (2.8)

The subscript 0 on the scalar term is usually suppressed, so we also have

⟨ab⟩0 = ⟨ab⟩ = a·b.    (2.9)

Arbitrary multivectors can also be multiplied together with the geometric product. To do this we first extend the geometric product of two vectors to an arbitrary number of vectors. This is achieved with the additional rule that the geometric product is associative:

a(bc) = (ab)c = abc.    (2.10)

The associativity property enables us to remove the brackets and write the product as abc.
Arbitrary multivectors can now be written as sums of products of\nvectors. The geometric product of multivectors therefore inherits the two main\nproperties of the product for vectors, which is to say it is associative:\nA(BC) = (AB)C = ABC,\n23\n\n(2.11)\n\nGEOMETRIC ALGEBRA IN TWO AND THREE DIMENSIONS\n\nA(B + C) = AB + AC.\n\n(2.12)\n\nHere A, B, . . . , C denote multivectors containing terms of arbitrary grade.\nThe associativity property ensures that it is now possible to divide by vectors,\nthus realising Hamilton’s goal. Suppose that we know that ab = C, where C is\nsome combination of a scalar and bivector. We find that\nCb = (ab)b = a(bb) = ab2 ,\n\n(2.13)\n\nso we can define b−1 = b/b2 , and recover a from\na = Cb−1 .\n\n(2.14)\n\nThis ability to divide by vectors gives the algebra considerable power.\nAs an example of these axioms in action, consider forming the square of the\nbivector a∧b. The properties of the geometric product allow us to write\n(a∧b)(a∧b) = (ab − a·b)(a·b − ba)\n= −ab2 a − (a·b)2 + a·b(ab + ba)\n= (a·b)2 − a2 b2\n= −a2 b2 sin2(θ),\n\n(2.15)\n\nwhere we have assumed that a·b = |a| |b| cos(θ). The magnitude of the bivector\na∧b is therefore equal to the area of the parallelogram with sides defined by a\nand b. Manipulations such as these are commonplace in geometric algebra, and\ncan provide simplified proofs of a number of useful results.\n\n2.3 Geometric algebra of the plane\nThe easiest way to understand the geometric product is by example, so consider\na two-dimensional space (a plane) spanned by two orthonormal vectors e1 and\ne2 . These basis vectors satisfy\ne1 2 = e2 2 = 1,\n\ne1 ·e2 = 0.\n\n(2.16)\n\nThe final entity present in the algebra is the bivector e1 ∧ e2 . This is the highest\ngrade element in the algebra, since the outer product of a set of dependent vectors\nis always zero. The highest grade element in a given algebra is usually called\nthe pseudoscalar, and its grade coincides with the dimension of the underlying\nvector space.\nThe full algebra is spanned by the basis set\n1\n1 scalar\n\n{e1 , e2 }\n2 vectors\n24\n\ne1 ∧ e2\n.\n1 bivector\n\n(2.17)\n\n2.3 GEOMETRIC ALGEBRA OF THE PLANE\n\nWe denote this algebra G2 . Any multivector can be decomposed in this basis,\nand sums and products can be calculated in terms of this basis. For example,\nsuppose that the multivectors A and B are given by\nA = α0 + α1 e1 + α2 e2 + α3 e1 ∧e2 ,\nB = β0 + β1 e1 + β2 e2 + β3 e1 ∧e2 ,\nthen their sum S = A + B is given by\nS = (α0 + β0 ) + (α1 + β1 )e1 + (α2 + β2 )e2 + (α3 + β3 )e1 ∧e2 .\n\n(2.18)\n\nThis result for the addition of multivectors is straightforward and unsurprising.\nMatters become more interesting, however, when we start forming products.\n2.3.1 The bivector and its products\nTo study the properties of the bivector e1 ∧ e2 we first recall that for orthogonal\nvectors the geometric product is a pure bivector:\ne1 e2 = e1 ·e2 + e1 ∧e2 = e1 ∧e2 ,\n\n(2.19)\n\nand that orthogonal vectors anticommute:\ne2 e1 = e2 ∧e1 = −e1 ∧e2 = −e1 e2 .\n\n(2.20)\n\nWe can now form products in which e1 e2 multiplies vectors from the left and the\nright. First from the left we find that\n(e1 ∧e2 )e1 = (−e2 e1 )e1 = −e2 e1 e1 = −e2\n\n(2.21)\n\n(e1 ∧e2 )e2 = (e1 e2 )e2 = e1 e2 e2 = e1 .\n\n(2.22)\n\nand\n\nIf we assume that e1 and e2 form a right-handed pair, we see that left-multiplication by the bivector rotates vectors 90◦ clockwise (i.e. 
in a negative sense).\nSimilarly, acting from the right\ne2 (e1 e2 ) = −e1 .\n\ne1 (e1 e2 ) = e2 ,\n\n(2.23)\n\nSo right multiplication rotates 90◦ anticlockwise — a positive sense.\nThe final product in the algebra to consider is the square of the bivector e1 ∧e2 :\n(e1 ∧e2 )2 = e1 e2 e1 e2 = −e1 e1 e2 e2 = −1.\n\n(2.24)\n\nGeometric considerations have led naturally to a quantity which squares to −1.\nThis fits with the fact that two successive left (or right) multiplications of a vector\nby e1 e2 rotates the vector through 180◦ , which is equivalent to multiplying by −1.\nThe fact that we now have a firm geometric picture for objects whose algebraic\nsquare is −1 opens up the possibility of providing a geometric interpretation for\n25\n\nGEOMETRIC ALGEBRA IN TWO AND THREE DIMENSIONS\n\nthe unit imaginary employed throughout physics, a theme which will be explored\nfurther in this book.\n2.3.2 Multiplying multivectors\nNow that all of the individual products have been found, we can compute the\nproduct of the two general multivectors A and B of equation (2.18),\nAB = M = µ0 + µ1 e1 + µ2 e2 + µ3 e1 e2 ,\n\n(2.25)\n\nwhere\nµ0 = α0 β0 + α1 β1 + α2 β2 − α3 β3 ,\nµ1 = α0 β1 + α1 β0 + α3 β2 − α2 β3 ,\nµ2 = α0 β2 + α2 β0 + α1 β3 − α3 β1 ,\n\n(2.26)\n\nµ3 = α0 β3 + α3 β0 + α1 β2 − α2 β1 .\nThe full product shown here is actually rarely used, but writing it out explicitly\ndoes emphasise some of its key features. The product is always well defined,\nand the algebra is closed under it. Indeed, the product could easily be made an\nintrinsic part of a computer language, in the same way that complex arithmetic\nis already intrinsic to some languages. The basis vectors can also be represented\nwith matrices, for example\n\u0007\n\b\n\u0007\n\b\n0 1\n1 0\nE1 =\nE2 =\n.\n(2.27)\n1 0\n0 −1\n(Verifying that these satisfy the required algebraic relations is left as an exercise.)\nGeometric algebras in general are associative algebras, so it is always possible\nto construct a matrix representation for them. The problem with this is that\nthe matrices hide the geometric content of the elements they represent. Much of\nthe mathematical literature does focus on matrix representations, and for this\nwork the term Clifford algebra is appropriate. For the applications in this book,\nhowever, the underlying geometry is the important feature of the algebra and\nmatrix representations are usually redundant. Geometric algebra is a much more\nappropriate name for this subject.\n2.3.3 Connection with complex numbers\nIt is clear that there is a close relationship between geometric algebra in two\ndimensions and the algebra of complex numbers. The unit bivector squares to\n−1 and generates rotations through 90◦ . The combination of a scalar and a\nbivector, which is formed naturally via the geometric product, can therefore be\nviewed as a complex number. We write this as\nZ = u + ve1 e2 = u + Iv,\n26\n\n(2.28)\n\n2.3 GEOMETRIC ALGEBRA OF THE PLANE\n\nI\nv\nZ\n\nθ\n\nR\nu\n\nFigure 2.2 The Argand diagram. The complex number Z = u + iv represents a vector in the complex plane, with Cartesian components u and v.\nThe polar decomposition into |Z| exp(iθ) can alternatively be viewed as an\ninstruction to rotate 1 through θ and dilate by |Z|.\n\nwhere\nI = e1 ∧e2 ,\n\nI 2 = −1.\n\n(2.29)\n\nThroughout we employ the symbol I for the pseudoscalar of the algebra of interest. That is why we have used it here, rather than the tempting alternative\ni. 
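The component formulae (2.26) are simple enough to turn directly into a few lines of code, which gives a quick check of the results derived in this section. The sketch below is purely illustrative: the class name MV2 and the coefficient layout are inventions of this example, not notation used in the text. A multivector of G2 is stored as its four coefficients on the basis {1, e1, e2, e1e2}.

# Illustrative sketch: the algebra G2, with the product taken from eq. (2.26).
class MV2:
    def __init__(self, a0=0.0, a1=0.0, a2=0.0, a3=0.0):
        self.c = (a0, a1, a2, a3)
    def __mul__(self, other):
        a0, a1, a2, a3 = self.c
        b0, b1, b2, b3 = other.c
        return MV2(a0*b0 + a1*b1 + a2*b2 - a3*b3,
                   a0*b1 + a1*b0 + a3*b2 - a2*b3,
                   a0*b2 + a2*b0 + a1*b3 - a3*b1,
                   a0*b3 + a3*b0 + a1*b2 - a2*b1)
    def __repr__(self):
        a0, a1, a2, a3 = self.c
        return f"{a0:+.3f} {a1:+.3f} e1 {a2:+.3f} e2 {a3:+.3f} e1e2"

e1 = MV2(0, 1, 0, 0)
e2 = MV2(0, 0, 1, 0)
I  = e1 * e2                 # the pseudoscalar e1 e2

print(I * I)                 # -1, as in eq. (2.24)
print(I * e1, I * e2)        # -e2 and +e1: left multiplication rotates 90 degrees clockwise
print(e1 * e2, e2 * e1)      # +e1e2 and -e1e2: orthogonal vectors anticommute

x = MV2(0, 3.0, 4.0, 0)      # the vector 3 e1 + 4 e2
print(x * x)                 # 25, the scalar |x|^2
print(e1 * x)                # 3 + 4 e1e2, a 'complex number' u + Iv as in eq. (2.28)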
The symbol i is seen often in the literature, but it has the problem of suggesting an element which commutes with all others, which is not necessarily a property of the pseudoscalar.

Complex numbers serve a dual purpose in two dimensions. They generate rotations and dilations through their polar decomposition |Z| exp(iθ), and they also represent vectors as points on the Argand diagram (see figure 2.2). But in the geometric algebra G2 complex numbers are replaced by scalar + bivector combinations, whereas vectors are grade-1 objects,

x = u e1 + v e2.    (2.30)

Is there a natural map between x and the multivector Z? The answer is simple — pre-multiply by e1,

e1 x = u + v e1 e2 = u + Iv = Z.    (2.31)

That is all there is to it! The role of the preferred vector e1 is clear — it is the real axis. Using this product vectors in a plane can be interchanged with complex numbers in a natural manner.

If we now consider the complex conjugate of Z, Z† = u − Iv, we see that

Z† = u + v e2 e1 = x e1,    (2.32)

which has simply reversed the order of the geometric product of x and e1. This operation of reversing the order of products is one of the fundamental operations performed in geometric algebra, and is called reversion (see section 2.5). Suppose now that we introduce a second complex number W, with vector equivalent y:

W = e1 y.    (2.33)

The complex product ZW† = W†Z now becomes

W†Z = y e1 e1 x = yx,    (2.34)

which returns the geometric product yx. This is as expected, as the complex product was used to suggest the form of the geometric product.

2.3.4 Rotations

Since we know how to rotate complex numbers, we can use this to find a formula for rotating vectors in a plane. We know that a positive rotation through an angle φ for a complex number Z is achieved by

Z → Z′ = exp(iφ) Z,    (2.35)

where i is the standard unit imaginary (see figure 2.3). Again, we now view Z as a combination of a scalar and a pseudoscalar in G2 and so replace i with I. The exponential of Iφ is defined by power series in the normal way, so we still have

exp(Iφ) = Σ_{n=0}^{∞} (Iφ)^n / n! = cos φ + I sin φ.    (2.36)

Suppose that Z′ has the vector equivalent x′,

x′ = e1 Z′.    (2.37)

We now have a means of rotating the vector directly by writing

x′ = e1 exp(Iφ) Z = e1 exp(Iφ) e1 x.    (2.38)

But

e1 exp(Iφ) e1 = e1 (cos φ + I sin φ) e1 = cos φ − I sin φ = exp(−Iφ),    (2.39)

where we have employed the result that I anticommutes with vectors. We therefore arrive at the formulae

x′ = exp(−Iφ) x = x exp(Iφ),    (2.40)

which achieve a rotation of the vector x in the I plane, through an angle φ. In section 2.7 we show how to extend this idea to arbitrary dimensions.

Figure 2.3 A rotation in the complex plane. The complex number Z = r exp(iθ) is multiplied by the phase term exp(Iφ), the effect of which is to replace θ by θ′ = θ + φ, giving Z′ = r exp(iθ′).

The change of sign in the exponential acting from the left and right of the vector x is to be expected. We saw earlier that left-multiplication by I generated left-handed rotations, and right-multiplication generated right-handed rotations.
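The rotation formula (2.40) can be spot-checked with a few lines of code. The sketch below is illustrative only: following section 2.3.3, Python's built-in complex type is used as a stand-in for the scalar + bivector elements of G2, so that Z = e1x is rotated by multiplying with exp(Iφ), and the rotated vector coefficients are read off at the end.

# Illustrative sketch: rotating a plane vector with the rotor formula of eq. (2.40).
import cmath, math

def rotate(x1, x2, phi):
    Z  = complex(x1, x2)              # Z = e1 x = u + I v, eq. (2.31)
    Zp = cmath.exp(1j*phi) * Z        # Z' = exp(I phi) Z, eqs. (2.35)-(2.36)
    return Zp.real, Zp.imag           # the coefficients of x' = e1 Z', eq. (2.37)

print(rotate(1.0, 0.0, math.pi/2))    # approximately (0, 1): e1 is taken to e2
print(rotate(3.0, 4.0, math.pi/6))    # the same vector turned through 30 degrees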
As\nthe overall rotation is right-handed, the sign of I must be negative when acting\nfrom the left.\nThis should illustrate that geometric algebra fully encompasses complex arithmetic, and we will see later that complex analysis is fully incorporated as well.\nThe beauty of the geometric algebra formulation is that it shows immediately\nhow to extend the ideas of complex analysis to higher dimensions, a problem\nwhich had troubled mathematicians for many years. The key to this is the\nseparation of the two roles of complex numbers by treating vectors as grade-1\nobjects, and the quantities acting on them (the complex numbers) as combinations of grade-0 and grade-2 objects. These two roles generalise differently in\nhigher dimensions and, once one sees this, extending complex analysis becomes\nstraightforward.\n2.4 The geometric algebra of space\nThe geometric algebra of three-dimensional space is a remarkably powerful tool\nfor solving problems in geometry and classical mechanics. It describes vectors,\nplanes and volumes in a single algebra, which contains all of the familiar vector operations. These include the vector cross product, which is revealed as a\ndisguised form of bivector. The algebra also provides a very clear and compact method for encoding rotations, which is considerably more powerful than\nworking with matrices.\nWe have so far constructed the geometric algebra of a plane. We now add a\n29\n\nGEOMETRIC ALGEBRA IN TWO AND THREE DIMENSIONS\n\nthird vector e3 to our two-dimensional set {e1 , e2 }. All three vectors are assumed\nto be orthonormal, so they all anticommute. From these three basis vectors we\ngenerate the independent bivectors\n{e1 e2 , e2 e3 , e3 e1 }.\nThis is the expected number of independent planes in space. There is one further\nterm to consider, which is the product of all three vectors:\n(e1 e2 )e3 = e1 e2 e3 .\n\n(2.41)\n\nThis results in a grade-3 object, called a trivector. It corresponds to sweeping\nthe bivector e1 ∧e2 along the vector e3 , resulting in a three-dimensional volume\nelement (see section 2.4.3). The trivector represents the unique volume element\nin three dimensions. It is the highest grade element and is unique up to scale\n(or volume) and handedness (sign). This is again called the pseudoscalar for the\nalgebra.\nIn three dimensions there are no further directions to add, so the algebra is\nspanned by\n1\n1 scalar\n\n{ei }\n3 vectors\n\n{ei ∧ej }\n3 bivectors\n\ne1 e 2 e 3\n1 trivector\n\n(2.42)\n\nThis basis defines a graded linear space of total dimension 8 = 23 . We call\nthis algebra G3 . Notice that the dimensions of each subspace are given by the\nbinomial coefficients.\n\n2.4.1 Products of vectors and bivectors\nOur expanded algebra gives us a number of new products to consider. We start\nby considering the product of a vector and a bivector. We have already looked\nat this in two dimensions, and found that a normalised bivector rotates vectors\nin its plane by 90◦ . Each of the basis bivectors in equation (2.42) shares the\nproperties of the single bivector studied previously for two dimensions. So\n(e1 e2 )2 = (e2 e3 )2 = (e3 e1 )2 = −1\n\n(2.43)\n\nand each bivector generates 90◦ rotations in its own plane.\nThe geometric product for vectors extends to all objects in the algebra, so we\ncan form expressions such as aB, where a is a vector and B is a bivector. 
Now\nthat our algebra contains a trivector e1 (e2 ∧ e3 ), we see that the result of the\nproduct aB can contain both vector and trivector terms, the latter arising if a\ndoes not lie fully in the B plane. To understand the properties of the product\naB we first decompose a into terms in and out of the plane,\na = a\u0005 + a⊥ ,\n30\n\n(2.44)\n\n2.4 THE GEOMETRIC ALGEBRA OF SPACE\n\na\n\na⊥\n\nb\n\na\u0002\n\nB\n\nFigure 2.4 A vector and a bivector. The vector a can be written as the\nsum of a term in the plane B and a term perpendicular to the plane, so\nthat a = a\u0002 + a⊥ . The bivector B can be written as a\u0002 ∧ b, where b is\nperpendicular to a\u0002 .\n\nas shown in figure 2.4. We can now write aB = (a\u0005 + a⊥ )B. Suppose that we\nalso write\nB = a\u0005 ∧b = a\u0005 b,\n\n(2.45)\n\nwhere b is orthogonal to a\u0005 in the B plane. It is always possible to find such a\nvector b. We now see that\na\u0005 B = a\u0005 (a\u0005 b) = a\u0005 2 b\n\n(2.46)\n\nand so is a vector. This is clear in that the product of a plane with a vector in\nthe plane must remain in the plane. On the other hand\na⊥ B = a⊥ (a\u0005 ∧b) = a⊥ a\u0005 b,\n\n(2.47)\n\nwhich is the product of three orthogonal (anticommuting) vectors and so is a\ntrivector. As expected, the product of a vector and a bivector will in general\ncontain vector and trivector terms.\nTo explore this further let us form the product of the vector a with the bivector\nb ∧ c. From the associative and distributive properties of the geometric product\nwe have\na(b∧c) = a 12 (bc − cb) = 12 (abc − acb).\n\n(2.48)\n\nWe now use the rearrangement\nab = 2a·b − ba\n31\n\n(2.49)\n\nGEOMETRIC ALGEBRA IN TWO AND THREE DIMENSIONS\n\nto write\na(b∧c) = (a·b)c − (a·c)b − 12 (bac − cab)\n= 2(a·b)c − 2(a·c)b + 12 (bc − cb)a,\n\n(2.50)\n\nso that\na(b∧c) − (b∧c)a = 2(a·b)c − 2(a·c)b.\n\n(2.51)\n\nThe right-hand side of this equation is a vector, so the antisymmetrised product\nof a vector with a bivector is another vector. Since this operation is gradelowering, we give it the dot symbol again and write\na·B = 12 (aB − Ba),\n\n(2.52)\n\nwhere B is an arbitrary bivector. The preceding rearrangement means that we\nhave proved one of the most useful results in geometric algebra,\na·(b∧c) = a·b c − a·c b.\n\n(2.53)\n\nReturning to equation (2.46) we see that we must have\na·B = a\u0005 B = a\u0005 ·B.\n\n(2.54)\n\nSo the effect of taking the inner product of a vector with a bivector is to project\nonto the component of the vector in the plane, and then rotate this through 90◦\nand dilate by the magnitude of B. We can also confirm that\na·B = a\u0005 2 b = −(a\u0005 b)a\u0005 = −B ·a,\n\n(2.55)\n\nas expected.\nThe remaining part of the product of a vector and a bivector returns a grade-3\ntrivector. This product is denoted with a wedge since it is grade-raising, so\na∧(b∧c) =\n\n1\n2\n\na(b∧c) + (b∧c)a .\n\n(2.56)\n\nA few lines of algebra confirm that this outer product is associative,\na∧(b∧c) =\n=\n=\n=\n\n1\n2\n1\n4\n1\n4\n1\n2\n\na(b∧c) + (b∧c)a\nabc − acb + bca − cba\n2(a∧b)c + bac + bca + 2c(a∧b) − cab − acb\n(a∧b)c + c(a∧b) + b(c·a) − (c·a)b\n\n= (a∧b)∧c,\n\n(2.57)\n\nso we can unambiguously write the result as a ∧ b ∧ c. The product a ∧ b ∧ c\nis therefore associative and antisymmetric on all pairs of vectors, and so is precisely Grassmann’s exterior product (see section 1.6). This demonstrates that\n32\n\n2.4 THE GEOMETRIC ALGEBRA OF SPACE\n\nGrassmann’s exterior product sits naturally within geometric algebra. 
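A quick concrete check of equation (2.53), included here purely for illustration: take a = b = e1 and c = e2, so that both sides reduce to e2.

```latex
e_1\cdot(e_1\wedge e_2)
  = \tfrac{1}{2}\bigl(e_1(e_1 e_2) - (e_1 e_2)e_1\bigr)
  = \tfrac{1}{2}(e_2 + e_2) = e_2,
\qquad
(e_1\cdot e_1)\,e_2 - (e_1\cdot e_2)\,e_1 = e_2 - 0 = e_2 .
```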
From\nequation (2.47) we have\na∧B = a⊥ B = a⊥ ∧B,\n\n(2.58)\n\nso the effect of the exterior product with a bivector is to project onto the component of the vector perpendicular to the plane, and return a volume element (a\ntrivector). We can confirm simply that this product is symmetric in its vector\nand bivector arguments:\na∧B = a⊥ ∧a\u0005 ∧b = −a\u0005 ∧a⊥ ∧b = a\u0005 ∧b∧a⊥ = B ∧a.\n\n(2.59)\n\nThe full product of a vector and a bivector can now be written as\naB = a·B + a∧B,\n\n(2.60)\n\nwhere the dot is generalised to mean the lowest grade part of the product, while\nthe wedge means the highest grade part of the product. In a similar manner to\nthe geometric product of vectors, the separate dot and wedge products can be\nwritten in terms of the geometric product as\na·B = 12 (aB − Ba),\na∧B = 12 (aB + Ba).\n\n(2.61)\n\nBut pay close attention to the signs in these formulae, which are the opposite\nway round to the case of two vectors. The full product of a vector and a bivector\nwraps up the separate vector and trivector terms in the single product aB. The\nadvantage of this is again that the full product is invertible.\n2.4.2 The bivector algebra\nOur three independent bivectors also give us another new product to consider.\nWe already know that squaring a bivector results in a scalar. But if we multiply\ntogether two bivectors representing orthogonal planes we find that, for example,\n(e1 ∧e2 )(e2 ∧e3 ) = e1 e2 e2 e3 = e1 e3 ,\n\n(2.62)\n\nresulting in a third bivector. We also find that\n(e2 ∧e3 )(e1 ∧e2 ) = e3 e2 e2 e1 = e3 e1 = −e1 e3 ,\n\n(2.63)\n\nso the product of orthogonal bivectors is antisymmetric. The symmetric contribution vanishes because the two planes are perpendicular.\nIf we introduce the following labelling for the basis bivectors:\nB1 = e 2 e3 ,\n\nB2 = e3 e1 ,\n\nB3 = e1 e2 ,\n\n(2.64)\n\nwe find that their product satisfies\nBi Bj = −δij − \u0007ijk Bk .\n33\n\n(2.65)\n\nGEOMETRIC ALGEBRA IN TWO AND THREE DIMENSIONS\n\nThere is a clear analogy with the geometric product of vectors here, in that the\nsymmetric part is a scalar, whereas the antisymmetric part is a bivector. In\nhigher dimensions it turns out that the symmetrised product of two bivectors\ncan have grade-0 and grade-4 terms (which we will ultimately denote with the\ndot and wedge symbols). The antisymmetrised product is always a bivector, and\nbivectors form a closed algebra under this product.\nThe basis bivectors satisfy\nB1 2 = B2 2 = B3 2 = −1\n\n(2.66)\n\nB1 B2 = −B2 B1 ,\n\n(2.67)\n\nand\netc.\n\nThese are the properties of the generators of the quaternion algebra (see section 1.4). This observation helps to sort out some of the problems encountered\nwith the quaternions. Hamilton attempted to identify pure quaternions (null\nscalar part) with vectors, but we now see that they are actually bivectors. This\ncauses problems when looking at how objects transform under reflections. 
Hamilton also imposed the condition ijk = −1 on his unit quaternions, whereas we\nhave\nB1 B2 B3 = e2 e3 e3 e1 e1 e2 = +1.\n\n(2.68)\n\nTo set up an isomorphism we must flip a sign somewhere, for example in the y\ncomponent:\ni ↔ B1 ,\n\nj ↔ −B2 ,\n\nk ↔ B3 .\n\n(2.69)\n\nThis shows us that the quaternions are a left-handed set of bivectors, whereas\nHamilton and others attempted to view the i, j, k as a right-handed set of vectors.\nNot surprisingly, this was a potential source of great confusion and meant one\nhad to be extremely careful when applying quaternions in vector algebra.\n\n2.4.3 The trivector\nGiven three vectors, a, b and c, the trivector a ∧ b ∧ c is formed by sweeping a ∧ b\nalong the vector c (see figure 2.5). The result can be represented pictorially as\nan oriented parallelepiped. As with bivectors, however, the picture should not\nbe interpreted too literally. The trivector a ∧ b ∧ c does not contain any shape\ninformation. It just records a volume and an orientation.\nThe various algebraic properties of trivectors have straightforward geometric\ninterpretations. The same oriented volume is obtained by sweeping a ∧ b along c\nor b ∧ c along a. The mathematical expression of this is that the outer product\nis associative, a ∧ (b ∧ c) = (a ∧ b) ∧ c. The trivector a ∧ b ∧ c changes sign\nunder interchange of any pair of vectors, which follows immediately from the\n34\n\n2.4 THE GEOMETRIC ALGEBRA OF SPACE\n\nb\na\n\nc\n\na∧b\n\na\nb∧c\n\nc\n\nb\nFigure 2.5 The trivector. The trivector a ∧ b ∧ c can be viewed as the\noriented parallelepiped obtained from sweeping the bivector a ∧ b along the\nvector c. In the left-hand diagram the bivector a ∧ b is swept along c. In\nthe right-hand one b ∧ c is swept along a. The result is the same in both\ncases, demonstrating the equality a ∧ b ∧ c = b ∧ c ∧ a. The associativity of\nthe outer product is also clear from such diagrams.\n\nantisymmetry of the exterior product. The geometric picture of this is that\nswapping any two vectors reverses the orientation by which the volume is swept\nout. Under two successive interchanges of pairs of vectors the trivector returns\nto itself, so\na∧b∧c = c∧a∧b = b∧c∧a.\n\n(2.70)\n\nThis is also illustrated in figure 2.5.\nThe unit right-handed pseudoscalar for space is given the standard symbol I,\nso\nI = e1 e2 e3 ,\n\n(2.71)\n\nwhere the {e1 , e2 , e3 } are any right-handed frame of orthonormal vectors. If a\nleft-handed set of orthonormal vectors is multiplied together the result is −I.\nGiven an arbitrary set of three vectors we must have\na∧b∧c = αI,\n\n(2.72)\n\nwhere α is a scalar. It is not hard to show that |α| is the volume of the parallelepiped with sides defined by a, b and c. The sign of α encodes whether the\nset {a, b, c} forms a right-handed or left-handed frame. In three dimensions this\nfully accounts for the information in the trivector.\nNow consider the product of the vector e1 and the pseudoscalar,\ne1 I = e1 (e1 e2 e3 ) = e2 e3 .\n\n(2.73)\n\nThis returns a bivector — the plane perpendicular to the original vector (see\nfigure 2.6). The product of a grade-1 vector with the grade-3 pseudoscalar is\ntherefore a grade-2 bivector. Multiplying from the left we find that\nIe1 = e1 e2 e3 e1 = −e1 e2 e1 e3 = e2 e3 .\n35\n\n(2.74)\n\nGEOMETRIC ALGEBRA IN TWO AND THREE DIMENSIONS\n\nI\ne3\ne2 ∧e3\n\ne1\nFigure 2.6 A vector and a trivector. The result of multiplying the vector\ne1 by the trivector I is the plane e1 (e1 e2 e3 ) = e2 e3 . 
This is the plane\nperpendicular to the e1 vector.\n\nThe result is therefore independent of order, and this holds for any basis vector.\nIt follows that the pseudoscalar commutes with all vectors in three dimensions:\nIa = aI.\n\n(2.75)\n\nThis is always the case for the pseudoscalar in spaces of odd dimension. In even\ndimensions, the pseudoscalar anticommutes with all vectors, as we have already\nseen in two dimensions.\nWe can now express each of our basis bivectors as the product of the pseudoscalar\nand a dual vector:\ne1 e2 = Ie3 ,\n\ne2 e3 = Ie1 ,\n\ne3 e1 = Ie2 .\n\n(2.76)\n\nThis operation of multiplying by the pseudoscalar is called a duality transformation and was originally introduced by Grassmann. Again, we can write\naI = a·I\n\n(2.77)\n\nwith the dot used to denote the lowest grade term in the product. The result\nof this can be understood as a projection — projecting onto the component of I\nperpendicular to a.\nWe next form the square of the pseudoscalar:\nI 2 = e1 e2 e3 e1 e2 e3 = e1 e2 e1 e2 = −1.\n\n(2.78)\n\nSo the pseudoscalar commutes with all elements and squares to −1. It is therefore\na further candidate for a unit imaginary. In some physical applications this is the\ncorrect one to use, whereas for others it is one of the bivectors. The properties of\nI in three dimensions make it particularly tempting to replace it with the symbol\ni, and this is common practice in much of the literature. This convention can\nstill lead to confusion, however, and is not adopted in this book.\n36\n\n2.4 THE GEOMETRIC ALGEBRA OF SPACE\n\nFinally, we consider the product of a bivector and the pseudoscalar:\nI(e1 ∧e2 ) = Ie1 e2 e3 e3 = IIe3 = −e3 .\n\n(2.79)\n\nSo the result of the product of I with the bivector formed from e1 and e2 is\n−e3 , that is, minus the vector perpendicular to the e1 ∧e2 plane. This provides\na definition of the vector cross product as\na×b = −I(a∧b).\n\n(2.80)\n\nThe vector cross product is largely redundant now that we have the exterior\nproduct and duality at our disposal. For example, consider the result for the\ndouble cross product. We form\na×(b×c) = −Ia∧(−I(b∧c))\n= 12 I aI(b∧c) − (b∧c)Ia\n= −a·(b∧c).\n\n(2.81)\n\nWe have already calculated the expansion of the final line, which turns out to\nbe the first example of a much more general, and very useful, formula.\nEquation (2.80) shows how the cross product of two vectors is a disguised\nbivector, the bivector being mapped to a vector by a duality operation. It is\nnow clear why the product only exists in three dimensions — this is the only\nspace for which the dual of a bivector is a vector. We will have little further\nuse for the cross product and will rarely employ it from now on. This means we\ncan also do away with the awkward distinction between polar and axial vectors.\nInstead we just talk in terms of vectors and bivectors. Both may belong to\nthree-dimensional linear spaces, but they are quite different objects with distinct\nalgebraic properties.\n\n2.4.4 The Pauli algebra\nThe full geometric product for vectors can be written\nei ej = ei ·ej + ei ∧ej = δij + I\u0007ijk ek .\n\n(2.82)\n\nThis may be familiar to many — it is the Pauli algebra of quantum mechanics! The Pauli matrices therefore form a matrix representation of the geometric\nalgebra of space. 
The Pauli matrices are\n\u0007\n\b\n\u0007\n\b\n\u0007\n\b\n0 1\n0 −i\n1 0\n, σ2 =\n, σ3 =\n.\n(2.83)\nσ1 =\n1 0\ni 0\n0 −1\nThese matrices satisfy\nσi σj = δij I + i\u0007ijk σk ,\n37\n\n(2.84)\n\nGEOMETRIC ALGEBRA IN TWO AND THREE DIMENSIONS\n\nwhere I is the 2 × 2 identity matrix. Historically, these matrices were discovered\nby Pauli in his investigations of the quantum theory of spin. The link with\ngeometric algebra (‘Clifford algebra’ in the quantum theory textbooks) was only\nSurprisingly, though the link with the geometric algebra of space is now well\nestablished, one seldom sees the Pauli matrices referred to as a representation\nfor the algebra of a set of vectors. Instead they are almost universally referred\nto as the components of a single vector in ‘isospace’. A handful of authors (most\nnotably David Hestenes) have pointed out the curious nature of this interpretation. Such discussion remains controversial, however, and will only be touched\non in this book. As with all arguments over interpretations of quantum mechanics, how one views the Pauli matrices has little effect on the predictions of the\ntheory.\nThe fact that the Pauli matrices form a matrix representation of G3 provides an\nalternative way of performing multivector manipulations. This method is usually\nslower, but can sometimes be used to advantage, particularly in programming\nlanguages where complex arithmetic is built in. Working directly with matrices\ndoes obscure geometric meaning, and is usually best avoided.\n2.5 Conventions\nA number of conventions help to simplify expressions in geometric algebra. For\nexample, expressions such as (a · b)c and I(a ∧ b) demonstrate that it would be\nuseful to have a convention which allows us to remove the brackets. We thus\nintroduce the operator ordering convention that in the absence of brackets, inner\nand outer products are performed before geometric products. This can remove\nsignificant numbers of unnecessary brackets. For example, we can safely write\nI(a∧b) = I a∧b.\n\n(2.85)\n\n(a·b)c = a·b c.\n\n(2.86)\n\nand\n\nIn addition, unless brackets specify otherwise, inner products are performed\nbefore outer products,\na·b c∧d = (a·b)c∧d.\n\n(2.87)\n\nA simple notation for the result of projecting out the elements of a multivector\nthat have a given grade is also invaluable. We denote this with angled brackets\n\u0002 \u0003r , where r is the grade onto which we want to project. With this notation we\ncan write, for example,\na∧b = \u0002a∧b\u00032 = \u0002ab\u00032 .\n\n(2.88)\n\nThe final expression holds because a ∧ b is the sole grade-2 component of the\n38\n\n2.5 CONVENTIONS\n\ngeometric product ab. This notation can be extremely useful as it often enables\ninner and outer products to be replaced by geometric products, which are usually\nsimpler to manipulate. The operation of taking the scalar part of a product is\noften needed, and it is conventional for this to drop the subscript zero and simply\nwrite\n\u0002M \u0003 = \u0002M \u00030 .\n\n(2.89)\n\nThe scalar part of any pair of multivectors is symmetric:\n\u0002AB\u0003 = \u0002BA\u0003.\n\n(2.90)\n\nIt follows that the scalar part satisfies the cyclic reordering property\n\u0002AB · · · C\u0003 = \u0002B · · · CA\u0003,\n\n(2.91)\n\nwhich is frequently employed in manipulations.\nAn important operation in geometric algebra is that of reversion, which reverses the order of vectors in any product. There are two conventions for this in\ncommon usage. 
One is the dagger symbol, A† , used for Hermitian conjugation\nin matrix algebra. The other is to use a tilde, Ã. In three-dimensional applications the dagger symbol is often employed, as the reverse operation returns the\nsame result as Hermitian conjugation of the Pauli matrix representation of the\nalgebra. In spacetime physics, however, the tilde symbol is the better choice as\nthe dagger is reserved for a different (frame-dependent) operation in relativistic\nquantum mechanics. For the remainder of this chapter we will use the dagger\nsymbol, as we will concentrate on applications in three dimensions.\nScalars and vectors are invariant under reversion, but bivectors change sign:\n(e1 e2 )† = e2 e1 = −e1 e2 .\n\n(2.92)\n\nI † = e3 e2 e1 = e1 e3 e2 = −e1 e2 e3 = −I.\n\n(2.93)\n\nSimilarly, we see that\n\nA general multivector in G3 can be written\nM = α + a + B + βI,\n\n(2.94)\n\nwhere a is a vector, B is a bivector and α and β are scalars. From the above we\nsee that the reverse of M , M † , is\nM † = α + a − B − βI.\n\n(2.95)\n\nAs stated above, this operation has the same effect as Hermitian conjugation\napplied to the Pauli matrices.\nWe have now introduced a number of terms, some of which have overlapping\nmeaning. It is useful at this point to refer to multivectors which only contain\nterms of a single grade as homogeneous. The term inner product is reserved for\n39\n\nGEOMETRIC ALGEBRA IN TWO AND THREE DIMENSIONS\n\nthe lowest grade part of the geometric product of two homogeneous multivectors.\nFor two homogeneous multivectors of the same grade the inner product and scalar\nproduct reduce to the same thing. The terms exterior and outer products are\ninterchangeable, though we will tend to prefer the latter for its symmetry with\nthe inner product. The inner and outer products are also referred to colloquially\nas the dot and wedge products. We have followed convention in referring to\nthe highest grade element in a geometric algebra as the pseudoscalar. This is\na convenient name, though one must be wary that in tensor analysis the term\ncan mean something subtly different. Both directed volume element and volume\nform are good alternative names, but we will stick with pseudoscalar in this\nbook.\n\n2.6 Reflections\nThe full power of geometric algebra begins to emerge when we consider reflections\nand rotations. We start with an arbitrary vector a and a unit vector n (n2 = 1),\nand resolve a into parts parallel and perpendicular to n. This is achieved simply\nby forming\na = n2 a\n= n(n·a + n∧a)\n= a\u0005 + a⊥ ,\n\n(2.96)\n\nwhere\na\u0005 = a·n n,\n\na⊥ = n n∧a.\n\n(2.97)\n\nThe formula for a\u0005 is certainly the projection of a onto n, and the remaining\nterm must be the perpendicular component (sometimes called the rejection). We\ncan check that a⊥ is perpendicular to n quite simply:\nn·a⊥ = \u0002nn n∧a\u0003 = \u0002n∧a\u0003 = 0.\n\n(2.98)\n\nThis is a simple example of how using the projection onto grade operator to replace inner and outer products with geometric products can simplify derivations.\nThe result of reflecting a in the plane orthogonal to n is the vector a\u0002 = a⊥ −a\u0005\n(see figure 2.7). This can be written\na\u0002 = a⊥ − a\u0005 = n n∧a − a·n n\n= −n·a n − n∧a n\n= −nan.\n\n(2.99)\n\nThis formula is already more compact than can be written down without the\ngeometric product. The best one can do with just the inner product is the\n40\n\n2.6 REFLECTIONS\n\na\u0001\na⊥\n−a\u0002\na\n\na\u0002\n\nn\nFigure 2.7 A reflection. 
The vector a is reflected in the (hyper)plane perpendicular to n. This is the way to describe reflections in arbitrary dimensions. The result a\u0001 is formed by reversing the sign of a\u0002 , the component\nof a in the n direction.\n\nequivalent expression\na\u0002 = a − 2a·n n.\n\n(2.100)\n\nThe compression afforded by the geometric product becomes increasingly impressive as reflections are compounded together. The formula\na\u0002 = −nan\n\n(2.101)\n\nis valid is spaces of any dimension — it is a quite general formula for a reflection.\nWe should check that our formula for the reflection has the desired property\nof leaving lengths and angles unchanged. To do this we need only verify that\nthe scalar product between vectors is unchanged if both are reflected, which is\nachieved with a simple rearrangement:\n(−nan)·(−nbn) = \u0002(−nan)(−nbn)\u0003 = \u0002nabn\u0003 = \u0002abnn\u0003 = a·b.\n\n(2.102)\n\nIn this manipulation we have made use of the cyclic reordering property of the\nscalar part of a geometric product, as defined in equation (2.91).\n\n2.6.1 Complex conjugation\nIn two dimensions we saw that the vector x is mapped to a complex number Z\nby\nZ = e1 x,\n\nx = e1 Z.\n41\n\n(2.103)\n\nGEOMETRIC ALGEBRA IN TWO AND THREE DIMENSIONS\n\nThe complex conjugate Z † is the reverse of this, Z † = xe1 , so maps to the vector\nx\u0002 = e1 Z † = e1 xe1 .\n\n(2.104)\n\nThis can be converted into the formula for a reflection if we remember that\nthe two-dimensional pseudoscalar I = e1 e2 anticommutes with all vectors and\nsquares to −1. We therefore have\nx\u0002 = −e1 IIxe1 = −e1 Ixe1 I = −e2 xe2 .\n\n(2.105)\n\nThis is precisely the expected relation for a reflection in the line perpendicular\nto e2 , which is to say a reflection in the real axis.\n\n2.6.2 Reflecting bivectors\nNow suppose that we form the bivector B = a ∧ b and reflect both of these\nvectors in the plane perpendicular to n. The result is\nB \u0002 = (−nan)∧(−nbn).\n\n(2.106)\n\nThis simplifies as follows:\n(−nan)∧(−nbn) = 12 (nannbn − nbnnan)\n= 12 n(ab − ba)n\n= nBn.\n\n(2.107)\n\nThe effect of sandwiching a multivector between a vector, nM n, always preserves\nthe grade of the multivector M . We will see how to prove this in general when\nwe have derived a few more results for manipulating inner and outer products.\nThe resulting formula nBn shows that bivectors are subject to the same transformation law as vectors, except for a change in sign. This is the origin of the\nconventional distinction between polar and axial vectors. Axial vectors are usually generated by the cross product, and we saw in section 2.4.3 that the cross\nproduct generates a bivector, and then dualises it back to a vector. But when the\ntwo vectors in the cross product are reflected, the bivector they form is reflected\naccording to (2.107). The dual vector IB is subject to the same transformation\nlaw, since\nI(nBn) = n(IB)n,\n\n(2.108)\n\nand so does not transform as a (polar) vector. In many texts this can be a source\nof much confusion. But now we have a much healthier alternative: banish all\ntalk of axial vectors in favour of bivectors. We will see in later chapters that\nall of the main examples of ‘axial’ vectors in physics (angular velocity, angular\nmomentum, the magnetic field etc.) 
are better viewed as bivectors.\n42\n\n2.7 ROTATIONS\n\n2.6.3 Trivectors and handedness\nThe final object to try reflecting in three dimensions is the trivector a ∧ b ∧ c.\nWe first write\n(−nan)∧(−nbn)∧(−ncn) = \u0002(−nan)(−nbn)(−ncn)\u00033\n= −\u0002nabcn\u00033 ,\n\n(2.109)\n\nwhich follows because the only way to form a trivector from the geometric product of three vectors is through the exterior product of all three. Now the product\nabc can only contain a vector and trivector term. The former cannot give rise to\nan overall trivector, so we are left with\n(−nan)∧(−nbn)∧(−ncn) = −\u0002na∧b∧cn\u00033 .\n\n(2.110)\n\nBut any trivector in three dimensions is a multiple of the pseudoscalar I, which\ncommutes with all vectors, so we are left with\n(−nan)∧(−nbn)∧(−ncn) = −a∧b∧c.\n\n(2.111)\n\nThe overall effect is simply to flip the sign of the trivector, which is a way of\nstating that reflections have determinant −1. This means that if all three vectors\nin a right-handed triplet are reflected in some plane, the resulting triplet is left\nhanded (and vice versa).\n\n2.7 Rotations\nOur starting point for the treatment of rotations is the result that a rotation\nin the plane generated by two unit vectors m and n is achieved by successive\nreflections in the (hyper)planes perpendicular to m and n. This is illustrated in\nfigure 2.8. Any component of a perpendicular to the m∧n plane is unaffected,\nand simple trigonometry confirms that the angle between the initial vector a\nand the final vector c is twice the angle between m and n. (The proof of this is\nleft as an exercise.) The result of the successive reflections is therefore to rotate\nthrough 2θ in the m∧n plane, where m·n = cos(θ).\nSo how does this look using geometric algebra? We first form\nb = −mam\n\n(2.112)\n\nand then perform a second reflection to obtain\nc = −nbn = −n(−mam)n = nmamn.\n\n(2.113)\n\nThis is starting to look extremely simple! We define\nR = nm,\n43\n\n(2.114)\n\nGEOMETRIC ALGEBRA IN TWO AND THREE DIMENSIONS\n\nc\n\nb\n\na\n\nn\n\nm\nm∧n\n\nFigure 2.8 A rotation from two reflections. The vector b is the result of\nreflecting a in the plane perpendicular to m, and c is the result of reflecting\nb in the plane perpendicular to n.\n\nso that we can now write the result of the rotation as\nc = RaR† .\n\n(2.115)\n\nThis transformation a → RaR† is a totally general way of handling rotations.\nIn deriving this transformation the dimensionality of the space of vectors was\nnever specified, so the transformation law must work in all spaces, whatever their\ndimension. The rule also works for any grade of multivector!\n\n2.7.1 Rotors\nThe quantity R = nm is called a rotor and is one of the most important objects\nin applications of geometric algebra. Immediately, one can see the importance\nof the geometric product in both (2.114) and (2.115), which tells us that rotors\nprovide a way of handling rotations that is unique to geometric algebra. To\nstudy the properties of the rotor R we first write\nR = nm = n·m + n∧m = cos(θ) + n∧m.\n44\n\n(2.116)\n\n2.7 ROTATIONS\n\nWe already calculated the magnitude of the bivector m ∧ n in equation (2.15),\nwhere we obtained\n(n∧m)(n∧m) = − sin2 (θ).\nWe therefore define the unit bivector B in the m∧n plane by\nm∧n\n,\nB 2 = −1.\nB=\nsin(θ)\n\n(2.117)\n\n(2.118)\n\nThe reason for this choice of orientation (m ∧ n rather than n ∧ m) is to ensure\nthat the rotation has the orientation specified by the generating bivector, as can\nbe seen in figure 2.8. 
In terms of the bivector B we now have\nR = cos(θ) − B sin(θ),\n\n(2.119)\n\nwhich is simply the polar decomposition of a complex number, with the unit\nimaginary replaced by the unit bivector B. We can therefore write\nR = exp(−Bθ),\n\n(2.120)\n\nwith the exponential defined in terms of its power series in the normal way. (The\npower series for the exponential is absolutely convergent for any multivector\nargument.)\nNow recall that our formula was for a rotation through 2θ. If we want to\nrotate through θ, the appropriate rotor is\nR = exp(−Bθ/2),\n\n(2.121)\n\na → a\u0002 = e−Bθ/2 aeBθ/2\n\n(2.122)\n\nwhich gives the formula\n\nfor a rotation through θ in the B plane, with handedness determined by B (see\nfigure 2.9). This description encourages us to think of rotations taking place\nin a plane, and as such gives equations which are valid in any dimension. The\nmore traditional idea of rotations taking place around an axis is an entirely\nthree-dimensional concept which does not generalise.\nSince the rotor R is a geometric product of two unit vectors, we see immediately that\nRR† = nm(nm)† = nmmn = 1 = R† R.\n\n(2.123)\n\nThis provides a quick proof that our formula has the correct property of preserving lengths and angles. Suppose that a\u0002 = RaR† and b\u0002 = RbR† , then\na\u0002 ·b\u0002 = 12 (RaR† RbR† + RbR† RaR† )\n= 12 R(ab + ba)R†\n= a·b RR†\n= a·b.\n\n(2.124)\n45\n\nGEOMETRIC ALGEBRA IN TWO AND THREE DIMENSIONS\n\na\u0001\n\na\n\nB\n\nθ\n\nFigure 2.9 A rotation in three dimensions. The vector a is rotated to\na\u0001 = RaR† . The rotor R is defined by R = exp(−Bθ/2), which describes\nthe rotation directly in terms of the plane and angle. The rotation has the\norientation specified by the bivector B.\n\nWe can also see that the inverse transformation is given by\na = R† a\u0002 R.\n\n(2.125)\n\nR† a\u0002 R = R† RaR† R = a.\n\n(2.126)\n\nThe proof is straightforward:\n\nThe usefulness of rotors provides ample justification for adding up terms of\ndifferent grades. The rotor R on its own has no geometric significance, which is\nto say that no meaning should be attached to the separate scalar and bivector\nterms. When R is written in the form R = exp(−Bθ/2), however, the bivector\nB has clear geometric significance, as does the vector formed from RaR† . This\nillustrates a central feature of geometric algebra, which is that both geometrically\nmeaningful objects (vectors, planes etc.) and the elements that act on them (in\nthis case rotors) are represented in the same algebra.\n\n2.7.2 Constructing a rotor\nSuppose that we wish to rotate the unit vector a into another unit vector b,\nleaving all vectors perpendicular to a and b unchanged. This is accomplished by\na reflection perpendicular to the unit vector n half-way between a and b followed\nby a reflection in the plane perpendicular to b (see figure 2.10). The vector n is\n46\n\n2.7 ROTATIONS\n\nb\nn\n\na\n\n−nan\nFigure 2.10 A rotation from a to b. The vector a is rotated onto b by first\nreflecting in the plane perpendicular to n, and then in the plane perpendicular to b. The vectors a, b and n all have unit length.\n\ngiven by\nn=\n\n(a + b)\n,\n|a + b|\n\n(2.127)\n\nwhich reflects a into −b. Combining this with the reflection in the plane perpendicular to b we arrive at the rotor\n1 + ba\n1 + ba\n=\nR = bn =\n,\n(2.128)\n|a + b|\n2(1 + b·a)\nwhich represents a simple rotation in the a ∧ b plane. 
This formula shows us that

Ra = (a + b)/√(2(1 + b·a)) = a (1 + ab)/√(2(1 + b·a)) = aR†.    (2.129)

It follows that we can write

RaR† = R²a = aR†².    (2.130)

This is always possible for vectors in the plane of rotation. Returning to the polar form R = exp(−Bθ/2), where B is the a∧b plane, we see that

R² = exp(−Bθ),    (2.131)

so we can rotate a onto b with the formula

b = e^(−Bθ) a = a e^(Bθ).    (2.132)

This is precisely the form found in the plane using complex numbers, and was the source of much of the confusion over t…
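The rotor prescription a → RaR† is easy to check numerically. Below is a minimal sketch using the Pauli-matrix representation of G3 described in section 2.4.4; the NumPy-based approach and the helper names are illustrative choices, not part of the text.

```python
import numpy as np

# Pauli matrices: a matrix representation of the G3 basis vectors e1, e2, e3 (section 2.4.4).
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def vec(a):
    """Matrix representative of the vector a1*e1 + a2*e2 + a3*e3."""
    return a[0] * s[0] + a[1] * s[1] + a[2] * s[2]

# Rotor R = exp(-B*theta/2) = cos(theta/2) - B*sin(theta/2), with unit bivector B = e1 e2.
theta = np.pi / 2
B = s[0] @ s[1]                                   # e1 e2; squares to -1
R = np.cos(theta / 2) * np.eye(2) - B * np.sin(theta / 2)

# Rotate a = e1 through theta in the e1^e2 plane: a' = R a R†.
# Reversion of the rotor corresponds to Hermitian conjugation in this representation.
a_rot = R @ vec([1.0, 0.0, 0.0]) @ R.conj().T

# Recover the vector components via a_i = Re(trace(a' e_i)) / 2.
comps = [np.real(np.trace(a_rot @ si)) / 2 for si in s]
print(np.round(comps, 10))   # [0. 1. 0.] : e1 has been rotated onto e2
```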
https://paulsilhan.com/iec-60909-24/
This calculation is based on IEC 60909-0, “Short-circuit currents in three-phase a.c. systems – Part 0: Calculation of currents”. EasyPower offers a complete and accurate solution to short-circuit calculations in three-phase AC systems using the IEC standard.

EasyPower uses the following c factors as the default for maximum and minimum short-circuit conditions. As a result, reversible static converter-fed drives are treated, for the calculation of short-circuit currents, in a similar way to asynchronous motors; factor q is used for the calculation of the symmetrical short-circuit breaking current of asynchronous motors. The generator impedance shall be transferred to the high-voltage side using the rated transformation ratio t.

Transformer impedance correction factors: the transformer correction factor KT for two-winding units, with or without an on-load tap changer (LTC), is calculated per section 3 of the standard. This figure is useful for information but should not be used instead of a calculation. Synchronous generator impedance correction factor: the correction factor KG for generators without unit transformers is likewise calculated per section 3 of the standard. Short-circuit duty results are displayed in the single-line drawing. Calculation of a cable.

This first edition cancels and replaces the earlier IEC publication and constitutes a technical revision. The short-circuit impedances of electrical equipment are modified using impedance correction factors calculated per section 3.

In the case of a near-to-generator short circuit, the short-circuit current can be considered as the sum of two components: an a.c. component with decaying amplitude and an aperiodic d.c. component decaying to zero (Figure 2 – short-circuit current of a near-to-generator short circuit with decaying a.c. component). The initial symmetrical short-circuit currents I″kQmax and I″kQmin on the high-voltage side of the transformer shall be given by the supply company or by an adequate calculation according to this standard.
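The correction-factor formulas themselves did not survive the page extraction above. As a sketch, assuming the usual IEC 60909-0 expressions are the ones meant, they can be computed as follows; the numeric inputs are made-up illustrations, not data from this article.

```python
import math

def transformer_correction_factor(x_T, c_max=1.1):
    """K_T = 0.95 * c_max / (1 + 0.6 * x_T), with x_T the relative reactance of the transformer."""
    return 0.95 * c_max / (1 + 0.6 * x_T)

def generator_correction_factor(Un, UrG, xd_sub, sin_phi_rG, c_max=1.1):
    """K_G = (Un/UrG) * c_max / (1 + x''_d * sin(phi_rG)), for generators without unit transformers."""
    return (Un / UrG) * c_max / (1 + xd_sub * sin_phi_rG)

print(round(transformer_correction_factor(x_T=0.06), 3))               # ~1.009
print(round(generator_correction_factor(10.5, 10.5, 0.17, 0.626), 3))  # ~0.994
```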
Examples for the calculation of short-circuit currents. The committee has decided that the contents of this publication will remain unchanged until the date indicated on the IEC web site. If the highest partial short-circuit current of the power station unit at the high-voltage side of the unit transformer with off-load taps is searched for, choose 1 − pT.

### IEC | IEC Webstore

For the calculation of the initial short-circuit currents according to clause 4, the values of positive-sequence and negative-sequence impedances can differ from each other only in the case of rotating machines.

With these impedances, the corrected equivalent impedances ZAK, ZBK and ZCK shall be calculated using the procedure given in the corresponding equations of the standard (transformer secondary short circuit). This is admissible because the impedance correction factor KT for network transformers is introduced. For the short-circuit impedances of synchronous generators in the negative-sequence system, the same applies with KG from equation (18). The correction factor KSO shall also be applied to the zero-sequence system impedance of the power station unit, excepting, if present, an impedance component between the star point of the transformer and earth.

You can modify these values as needed in the short-circuit options.

Calculation of an LV motor. Calculations are simplest for balanced short circuits on radial systems, as the individual contributions to a balanced short circuit can be evaluated separately for each source. The zero-sequence short-circuit impedance at the short-circuit location F is obtained according to figure 5c. Neglecting the zero-sequence capacitances of lines in earthed neutral systems leads to results which are slightly higher than the real values of the short-circuit currents.

Calculated values: you can obtain the following values of short-circuit currents at the fault location for both maximum and minimum short-circuit conditions, e.g. the short-circuit current of a far-from-generator short circuit with constant a.c. component.

## IEC-60909 Short-Circuit in EasyPower

The capacitances of lines (overhead lines and cables) of low-voltage networks may be neglected in the positive-, negative- and zero-sequence systems.

If the Joule integral or the thermal equivalent short-circuit current is to be calculated for unbalanced short circuits, the corresponding unbalanced short-circuit current replaces I″k. Switches use the peak current to compare with making capacity. NOTE: Equivalent circuits of the positive-sequence and the zero-sequence system are given in the IEC standard (table 1, items 4 to 7) for different cases of star-point earthing.
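For orientation, here is a minimal sketch of the headline quantities discussed above — the initial symmetrical short-circuit current and the peak current — assuming the standard IEC 60909 expressions. The voltage, impedance and R/X values are invented examples, not data from this article.

```python
import math

def initial_symmetrical_current(c, Un_kV, Zk_ohm):
    """I''k = c * Un / (sqrt(3) * Zk), with Un the nominal system voltage and Zk the fault impedance."""
    return c * Un_kV * 1e3 / (math.sqrt(3) * Zk_ohm)   # amperes

def peak_factor(R_over_X):
    """kappa = 1.02 + 0.98 * exp(-3 R/X), the usual approximation for the peak factor."""
    return 1.02 + 0.98 * math.exp(-3 * R_over_X)

def peak_current(ik_initial, R_over_X):
    """ip = kappa * sqrt(2) * I''k."""
    return peak_factor(R_over_X) * math.sqrt(2) * ik_initial

Ik = initial_symmetrical_current(c=1.1, Un_kV=20, Zk_ohm=1.2)   # hypothetical 20 kV fault
print(round(Ik), "A")                                           # ~10585 A
print(round(peak_current(Ik, R_over_X=0.1)), "A")               # corresponding peak current
```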
https://housely.com/linear-feet/
[ "# Linear Feet: Understanding and Calculating Measurements for Your Project", null, "Linear feet is a term often used in construction, carpentry, and various other industries to measure length. It represents a distance in a straight line, with one linear foot equaling 12 inches. The concept is used to calculate materials and costs associated with specific projects, making it essential in fields like architecture, landscaping, and logistics that require accurate length estimations.\n\nUnderstanding the distinction between linear feet and other measurements such as square feet and cubic feet is crucial. While linear feet only consider a single dimension (length), square feet involve both length and width, calculating the two-dimensional area of an object. Cubic feet, on the other hand, consider three dimensions – length, width, and height – to measure the object’s volume.\n\nTo calculate linear feet, one needs to measure the length in inches and then divide the total number of inches by 12 to convert it to feet. This method of calculating linear feet is simple, efficient, and widely used, making it indispensable for various projects and industries.\n\n## What Is Linear Feet?\n\nA linear foot is a measurement of length, representing a straight line that spans 12 inches or one foot. This term is prevalent in the construction industry, where it is instrumental in determining the quantity of specific materials required for a project.\n\nCalculating linear feet is straightforward, as it essentially involves measuring the length of an item without considering its width or height. This unit of measurement differs from square feet and cubic feet, which take into account dimensions like area and volume. The linear foot is also known as linear footage or length in feet.\n\nSometimes, there might be confusion between linear feet and square feet. It is essential to understand the difference between these two measurements. While a linear foot measures a straight line’s length, a square foot measures the area of a space, taking into account both length and width. Both units are vital in construction and related fields for accurate material estimation and project management.\n\nIn summary, a linear foot is a simple, highly useful construction measurement that represents the length of a straight line covering 12 inches or one foot. This measurement helps determine the required materials for a project and is different from square feet, which indicates the area covered by a specific space.\n\n## Linear Feet vs Square Feet\n\nLinear feet and square feet are two distinct units of measurement used for different purposes. While linear feet refers to the length or distance of a material, square feet measures the area of a space.\n\nA linear foot is a continuous line representing a single dimension, like the length of a board or the distance between two points. It is typically used when measuring materials like lumber, trim, or flooring. By contrast, square footage represents a two-dimensional space, such as the area of a room or the surface of a wall. These measurements are relevant when calculating floor space, estimating paint needed, or comparing the sizes of different rooms.\n\nTo illustrate the difference, imagine a room that is 10 feet wide and 20 feet long. The total linear feet of the room would be the sum of each wall’s length (10 + 20 + 10 + 20 = 60 linear feet). 
However, the area the room covers would be calculated by multiplying the width by the length (10 x 20 = 200 square feet).

There are situations where converting between linear feet and square feet is necessary, especially when estimating materials needed for a project. For example, if you need to purchase baseboards for a particular room, it's crucial to know the total linear feet of the space. In some cases, materials are priced per linear foot, which may require converting the square footage into linear feet. To do this, divide the square footage by the width of the material in feet (LF = sq ft / width of one LF).

In conclusion, it's essential to understand the difference between linear feet and square feet to accurately calculate and estimate the requirements for various construction or renovation projects. These two measurements serve unique purposes, like determining material lengths or assessing the total area of a space.

## Calculation of Linear Feet

Calculating linear feet is a straightforward process that simply involves measuring the length of an object or material in feet. Linear feet are used in various applications such as flooring, woodworking, and construction projects. Let's explore the steps to calculate linear feet.

First, determine the length of each individual piece in your project. For instance, if you are installing a new kitchen countertop, you would identify the lengths of each separate piece required for the countertop. Use a ruler or measuring tape to obtain accurate measurements. Measurements should be in inches, as you will later convert them to feet.

Once you have measured the lengths of all the pieces, add these measurements together. The sum of these measurements represents the total linear inches of your project. To convert the total linear inches to linear feet, divide the sum by 12 (1 foot = 12 inches). The resulting value gives you the total linear feet required for your project.

For example, if you have measured three separate pieces with lengths of 24 inches, 36 inches, and 48 inches, the sum would be 108 inches. Dividing this by 12 gives you a result of 9 linear feet.

It is important to note that linear feet are different from square feet, which measure area. Linear feet solely focus on length and are often used when purchasing materials like lumber, piping, or baseboards, as these materials are commonly priced and sold by the linear foot.

In conclusion, calculating linear feet requires accurate measurements of each individual piece involved in a project and simple mathematical conversions. This straightforward process ensures that you have the correct amount of material needed for your project, helping reduce waste and overall costs.

## Linear Feet in Construction

Linear feet is a vital measurement in construction projects, as it represents the straight-line length of materials and is crucial for determining the amount needed for various tasks.

### Flooring

When it comes to flooring, calculating linear feet is essential for determining the amount of material required to cover a floor. This measurement helps in understanding how many planks, boards, or other flooring materials are needed to complete the project. Measure the length and width of the room to get its area in square feet, then divide that area by the width of the flooring material (in feet) to get the linear feet required. For instance, a 10 x 12 room has an area of 120 square feet; covered with planks 6 inches (0.5 feet) wide, it will need about 240 linear feet of flooring material, plus an allowance for waste.
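A minimal sketch of the flooring arithmetic in Python; the function names, the 6-inch plank width, and the 10% waste allowance are illustrative assumptions rather than fixed rules.

```python
def area_sqft(length_ft, width_ft):
    """Floor area in square feet."""
    return length_ft * width_ft

def flooring_linear_feet(room_area_sqft, plank_width_in, waste_pct=10):
    """Linear feet of planking needed to cover a room, plus a waste allowance."""
    plank_width_ft = plank_width_in / 12        # convert plank width to feet
    base = room_area_sqft / plank_width_ft      # LF = sq ft / width of one LF
    return base * (1 + waste_pct / 100)

room = area_sqft(10, 12)                        # 120 sq ft
print(room)                                     # 120
print(flooring_linear_feet(room, plank_width_in=6))   # 264.0 -> 240 LF plus 10% waste
```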
### Wallpapering

In wallpapering projects, linear feet are fundamental when determining the quantity of wallpaper needed to cover walls. Measure the height of the walls and the total length of all walls to be covered. Dividing the total wall length by the width of the wallpaper roll gives the number of strips required; multiplying the number of strips by the wall height then gives the linear feet of wallpaper to buy. For example, if the total wall length is 50 feet and the wallpaper is 20 inches wide, you will need 30 strips (50 / (20/12)).

### Fencing

For fencing, linear feet are used to estimate the amount of materials and the cost of a project. To determine the number of linear feet, measure the entire perimeter of the area to be fenced. This value will help ascertain the amount of fencing materials, such as panels, posts, and hardware, required to complete the installation. Keep in mind that additional materials may be needed for gates or other special features.

## Linear Feet in Shipping

### Parcel Shipping

In parcel shipping, linear feet often come into play when determining package dimensions. A linear foot is a straightforward measurement of 12 inches, the length of a standard ruler. Calculating linear feet is important for determining an accurate shipping cost, as carriers typically charge based on the dimensions of a package. When measuring a package's linear dimensions, consider the length, width, and height in inches, and then divide each measurement by 12 to convert it into linear feet.

### Freight Shipping

Freight shipping involves transporting goods in bulk and can encompass less-than-truckload (LTL) and full-truckload (FTL) carrier services. In this context, linear feet measurements are vital to optimizing available space in a truck or shipping container.

For LTL shipping, linear feet calculations help carriers estimate how much area within a trailer will be occupied by pallets or other units of freight. The industry standard for LTL shipping is 12 linear feet and 750 cubic feet, with freight exceeding these dimensions typically requiring FTL shipping.

When arranging FTL shipping, it's necessary to accommodate large freight within trailers that usually span 48 to 53 feet. Consequently, understanding the linear feet occupied by your freight can aid in selecting an appropriately sized carrier and ensuring efficient use of space.

In both parcel and freight shipping, calculating linear feet is a key factor in optimizing transportation efficiency, minimizing costs, and providing accurate shipping estimates.

## Using Linear Feet in Home Improvement

### Kitchen Remodeling

In kitchen remodeling projects, understanding how to measure in linear feet is helpful for installing elements like countertops and cabinets. When measuring countertops, determine the length of each piece required in a straight line: measure along the edge of the countertop using a tape measure, record the measurements in inches, and then divide by 12 to convert to linear feet.

For cabinets, the same process applies. Measure the length of the wall where the cabinets will be installed and convert the inches to linear feet.
This will allow you to determine the total linear footage of cabinets needed to fulfill your kitchen remodeling project.\n\n### Deck Building\n\nWhen building a deck, calculating linear feet is crucial in determining the amount of lumber and materials required. To calculate linear feet for deck boards, first measure the total length of the deck in inches. Next, convert this length to linear feet by dividing by 12.\n\nFor example, if your deck is 240 inches long, you would divide 240 by 12 to get 20 linear feet. Now, you can calculate the necessary number of boards based on their width and the linear footage. If using 2-by-6 lumber, you’ll need to multiply the linear feet by a factor to account for the width of the boards.\n\nKeep in mind that additional factors like spacing between boards, overhangs, and waste should also be considered when determining material requirements for deck building. By using linear feet in your calculations, you can more accurately estimate the quantities necessary for your home improvement projects.\n\n## Cost Estimation Using Linear Feet\n\nUsing linear feet as a unit of measurement can assist in estimating the cost of a project, particularly when you need to determine how much material is required for a particular task. In this section, we’ll discuss two important components of cost estimation using linear feet: the cost of materials, and labor cost.\n\n### Cost of Materials\n\nCalculating material costs using linear feet is essential for budgeting and purchasing the right amount of materials needed for a project. This can include items such as lumber, trim, or piping. To determine the cost of materials, follow these steps:\n\n1. Measure the length of the material needed in inches, then divide by 12 to convert it into linear feet.\n2. Identify the price per linear foot for the specific material, which can typically be found through supplier catalogues or online.\n3. Multiply the price per linear foot by the total linear feet needed for the project.\n\nFor example, if you need 20 linear feet of lumber and the price per linear foot is \\$5, the total cost for the lumber would be \\$100 (20 linear feet x \\$5).\n\n### Labor Cost\n\nWhen estimating labor cost using linear feet, it’s important to consider how long the project will take to complete and the labor rate for the workers involved. The following steps will help you estimate labor costs based on linear footage:\n\n1. Determine the amount of work to be done in linear feet, as calculated in the “Cost of Materials” section.\n2. Estimate the time it will take to complete the work. This can be based on your own knowledge and experience or gained from industry professionals.\n3. Ascertain the labor rate for the project, which could be an hourly or per-linear-foot rate. You may consult with labor contractors or use industry standards as a reference.\n4. Multiply the labor rate by the time required or project length to find the total labor cost.\n\nFor instance, if a project requires 20 linear feet of work, the hourly labor rate is \\$25 per hour, and it’s expected to take 8 hours to finish the project, the labor cost would be \\$200 (8 hours x \\$25).\n\nBy providing accurate cost estimates using linear feet, both material and labor expenses can be controlled more effectively, ensuring a smooth and successful completion of any project.\n\n## Common Mistakes in Linear Feet Calculation\n\nWhen working with linear feet, it’s essential to understand some of the common pitfalls that can lead to incorrect calculations. 
Recognizing these mistakes can help prevent project setbacks, material wastage, and incorrect cost estimations.\n\nConfusing linear feet, square feet, and cubic feet: One common mistake is confusing linear feet with square feet and cubic feet. While a linear foot refers to a length measured in a straight line (12 inches), square feet represent an area (length x width), and cubic feet signify volume (length x width x height). Mixing up these units can cause significant calculation errors.\n\nNot differentiating between feet and linear feet: Although feet and linear feet are usually interchangeable, it’s still essential to understand that “foot” refers to a unit of measurement (12 inches), while “linear foot” emphasizes the straight-line measurement aspect. Being consistent with these terms can help avoid confusion and mistakes.\n\nInaccurate measurements: Incorrectly measuring the lengths of objects or materials can lead to errors in linear feet calculations. It is crucial to use a reliable measuring tool and double-check measurements to ensure accuracy.\n\nNeglecting curves, angles, and cuttings: If a project involves curves or angles, simply measuring straight lines may not give an accurate representation of the materials needed. Accounting for the additional material used in these areas is essential to achieve proper estimations.\n\nRounding errors and approximations: When calculating, it’s crucial to avoid rounding errors and approximations. Carrying numbers to their precise decimal place ensures accurate project estimations and budgeting.\n\nBeing aware of these common mistakes and ensuring that proper techniques are used to calculate linear feet will assist in avoiding potential issues during your projects.\n\n## Benefits of Understanding Linear Feet\n\nKnowing how to accurately calculate linear feet is essential for many applications and industries. It enhances efficiency, ensures correct pricing, and promotes better organization. This section will discuss the benefits of understanding linear feet from various perspectives.\n\nEfficiency: By understanding and measuring in linear feet, individuals and businesses can save time and resources in multiple facets. For example, in construction and woodworking, knowing the required linear footage of materials, like lumber, can streamline project planning and reduce waste. Similarly, in shipping and logistics, mastering linear feet measurements allows companies to optimize packing and shipping, minimizing empty space and maximizing the utilization of transportation resources.\n\nAccurate Pricing: Having a clear understanding of linear feet can be crucial for fair and transparent pricing in various industries. For instance, vendors selling materials like pipes, cables, or fabric might price their merchandise by the linear foot. Consumers who are well-versed in this measurement will be able to accurately compare prices and calculate the costs of a project. In the freight industry, carriers often charge by linear feet, translating into precise shipping costs for clients. Being knowledgeable in this area ensures a transparent and informed decision-making process.\n\nOrganization: In warehousing, storage, and even home organization, comprehending linear feet can be the key to designing efficient and tidy storage systems. Archivists, librarians, and other professionals rely on linear feet to estimate shelf space, ensuring collections are well-maintained and easily accessible. 
Meanwhile, homeowners can leverage the same knowledge to plan effective closet, garage, or pantry organization.\n\nIn summary, understanding how to work with linear feet is valuable across many sectors. It promotes efficiency, precise pricing, and optimal organization in a wide range of applications." ]
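The arithmetic described in the cost-estimation section is simple to script. Below is a minimal Python sketch using the figures from the examples above (240 inches of lumber at \$5 per linear foot, 8 hours of labor at \$25 per hour); the prices and hours are illustrative, not quoted rates.

```python
def to_linear_feet(length_in_inches):
    """Convert a measurement in inches to linear feet."""
    return length_in_inches / 12

def material_cost(linear_feet, price_per_linear_foot):
    """Material cost = linear feet needed x price per linear foot."""
    return linear_feet * price_per_linear_foot

def labor_cost(hours, hourly_rate):
    """Labor cost = estimated hours x hourly rate."""
    return hours * hourly_rate

# Example figures from the text: a 240-inch run of lumber at $5 per linear foot,
# and 8 hours of labor at $25 per hour.
feet = to_linear_feet(240)            # 20.0 linear feet
print(material_cost(feet, 5.00))      # 100.0 -> $100 of lumber
print(labor_cost(8, 25.00))           # 200.0 -> $200 of labor
```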
[ null, "https://housely.com/wp-content/uploads/2023/08/Linear-Feet.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91334945,"math_prob":0.97281104,"size":17059,"snap":"2023-40-2023-50","text_gpt3_token_len":3220,"char_repetition_ratio":0.17742598,"word_repetition_ratio":0.02366864,"special_character_ratio":0.18969458,"punctuation_ratio":0.103652515,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9802622,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-07T04:23:16Z\",\"WARC-Record-ID\":\"<urn:uuid:becd0153-2810-4101-b7c8-3fd098010e67>\",\"Content-Length\":\"90179\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4ed1f35f-6a6a-47fa-a491-9a36da1ce86b>\",\"WARC-Concurrent-To\":\"<urn:uuid:bc92401e-4203-4393-85a1-0f5c43427734>\",\"WARC-IP-Address\":\"172.67.144.142\",\"WARC-Target-URI\":\"https://housely.com/linear-feet/\",\"WARC-Payload-Digest\":\"sha1:KEO6MCMVYT3NAQJP4IOHFN5426YN6WMM\",\"WARC-Block-Digest\":\"sha1:AVZU4VYNDMANG6XUCIFLDPRTUDXHAAO6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100632.0_warc_CC-MAIN-20231207022257-20231207052257-00248.warc.gz\"}"}
https://glossary.ametsoc.org/wiki/Ekman_spiral
[ "# Ekman spiral\n\n## Ekman spiral\n\n1. As used in meteorology, an idealized mathematical description of the wind distribution in the atmospheric boundary layer, within which the earth's surface has an appreciable effect on the air motion.\n\nThe model is simplified by assuming that within this layer the eddy viscosity and density are constant, the motion is horizontal and steady, the isobars are straight and parallel, and the geostrophic wind is constant with height. The x direction is taken along the pressure gradient; the resulting approximate equations for the component wind speeds U and V in the x and y directions, respectively, at any level z are", null, "where G is the geostrophic wind speed, β = z(f/2KM)½, f is the Coriolis parameter, and KM is the eddy viscosity. The lowest level H where U = 0, so that the true wind and the geostrophic wind have the same direction, is called the geostrophic wind level (or gradient wind level). It is given by", null, "where α0 is the angle between the surface wind and the surface isobars. At this height the magnitude of the true wind will exceed that of the geostrophic wind by a small amount, depending on the value of β. The Ekman spiral is an equiangular spiral having the geostrophic wind as its limit point. Below the geostrophic wind level the wind blows across the isobars toward low pressure, at an angle that is a maximum at the surface and does not exceed 45°. The deviation of the wind vector from the geostrophic wind vector diminishes upward at an exponential rate. The theory of this spiral was developed by Ekman in 1902 for motion in the upper layers of the ocean under the influence of a steady wind. It was applied to the atmosphere by Åkerblom in 1908.\n\n2. As originally applied by Ekman to ocean currents, a graphic representation of the way in which the theoretical wind-driven currents in the surface layers of the sea vary with depth.\n\nIn an ocean that is assumed to be homogeneous, infinitely deep, unbounded, and having a constant eddy viscosity, over which a uniform steady wind blows, Ekman has computed that the current induced in the surface layers by the wind will have the following characteristics: 1) At the very surface the water will move at an angle of 45° cum sole from the wind direction; 2) in successively deeper layers the movement will be deflected farther and farther cum sole from the wind direction, and the speed will decrease; and 3) a hodograph of the velocity vectors would form a spiral descending into the water and decreasing in amplitude exponentially with depth. The depth at which the vector first points 180° from the wind vector is called the depth of frictional influence (or depth of frictional resistance). At this depth the speed is e times that at the surface. The layer from the surface to the depth of frictional influence is called the layer of frictional influence. If the velocity vectors from the surface to the depth of frictional influence are integrated, the resultant vertically integrated motion is 90° cum sole from the wind direction." ]
[ null, "https://glossary.ametsoc.org/w/images/7/7d/Ams2001glos-Ee16.gif", null, "https://glossary.ametsoc.org/w/images/b/bd/Ams2001glos-Ee17.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9259127,"math_prob":0.9514804,"size":3060,"snap":"2021-43-2021-49","text_gpt3_token_len":658,"char_repetition_ratio":0.14888744,"word_repetition_ratio":0.026871402,"special_character_ratio":0.2003268,"punctuation_ratio":0.07557118,"nsfw_num_words":3,"has_unicode_error":false,"math_prob_llama3":0.9759461,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-03T13:26:11Z\",\"WARC-Record-ID\":\"<urn:uuid:ebc89758-a95f-459f-b436-8d5bd6ce5294>\",\"Content-Length\":\"20849\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c85fbbc9-e6bd-4e74-b5cb-7a41eafb33a6>\",\"WARC-Concurrent-To\":\"<urn:uuid:27f8987b-60fe-4e49-8f32-4a79073a6f5e>\",\"WARC-IP-Address\":\"208.113.218.130\",\"WARC-Target-URI\":\"https://glossary.ametsoc.org/wiki/Ekman_spiral\",\"WARC-Payload-Digest\":\"sha1:OGYOTCA77YR4PRK25ANJBE4SBX4YZMSV\",\"WARC-Block-Digest\":\"sha1:BB47XDDMB6O2WTWEHX7VLJJOF2VYUE5H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362879.45_warc_CC-MAIN-20211203121459-20211203151459-00302.warc.gz\"}"}
https://www.rdocumentation.org/packages/ggplot2/versions/2.1.0/topics/facet_wrap
[ "# facet_wrap\n\n0th\n\nPercentile\n\n##### Wrap a 1d ribbon of panels into 2d.\n\nMost displays are roughly rectangular, so if you have a categorical variable with many levels, it doesn't make sense to try and display them all in one row (or one column). To solve this dilemma, facet_wrap wraps a 1d sequence of panels into 2d, making best use of screen real estate.\n\n##### Usage\nfacet_wrap(facets, nrow = NULL, ncol = NULL, scales = \"fixed\", shrink = TRUE, labeller = \"label_value\", as.table = TRUE, switch = NULL, drop = TRUE, dir = \"h\")\n##### Arguments\nfacets\nEither a formula or character vector. Use either a one sided formula, ~a + b, or a character vector, c(\"a\", \"b\").\nnrow, ncol\nNumber of rows and columns.\nscales\nshould Scales be fixed (\"fixed\", the default), free (\"free\"), or free in one dimension (\"free_x\", \"free_y\").\nshrink\nIf TRUE, will shrink scales to fit output of statistics, not raw data. If FALSE, will be range of raw data before statistical summary.\nlabeller\nA function that takes one data frame of labels and returns a list or data frame of character vectors. Each input column corresponds to one factor. Thus there will be more than one with formulae of the type ~cyl + am. Each output column gets displayed as one separate line in the strip label. This function should inherit from the \"labeller\" S3 class for compatibility with labeller(). See label_value for more details and pointers to other options.\nas.table\nIf TRUE, the default, the facets are laid out like a table with highest values at the bottom-right. If FALSE, the facets are laid out like a plot with the highest value at the top-right.\nswitch\nBy default, the labels are displayed on the top of the plot. If switch is \"x\", they will be displayed to the bottom. If \"y\", they will be displayed to the left, near the y axis.\ndrop\nIf TRUE, the default, all factor levels not used in the data will automatically be dropped. If FALSE, all factor levels will be shown, regardless of whether or not they appear in the data.\ndir\nDirection: either \"h\" for horizontal, the default, or \"v\", for vertical.\n• facet_wrap\n##### Examples\nggplot(mpg, aes(displ, hwy)) +\ngeom_point() +\nfacet_wrap(~class)\n\n# Control the number of rows and columns with nrow and ncol\nggplot(mpg, aes(displ, hwy)) +\ngeom_point() +\nfacet_wrap(~class, nrow = 4)\n\n# You can facet by multiple variables\nggplot(mpg, aes(displ, hwy)) +\ngeom_point() +\nfacet_wrap(~ cyl + drv)\n# Or use a character vector:\nggplot(mpg, aes(displ, hwy)) +\ngeom_point() +\nfacet_wrap(c(\"cyl\", \"drv\"))\n\n# Use the labeller option to control how labels are printed:\nggplot(mpg, aes(displ, hwy)) +\ngeom_point() +\nfacet_wrap(c(\"cyl\", \"drv\"), labeller = \"label_both\")\n\n# To change the order in which the panels appear, change the levels\n# of the underlying factor.\nmpg$class2 <- reorder(mpg$class, mpg\\$displ)\nggplot(mpg, aes(displ, hwy)) +\ngeom_point() +\nfacet_wrap(~class2)\n\n# By default, the same scales are used for all panels. 
You can allow\n# scales to vary across the panels with the scales argument.\n# Free scales make it easier to see patterns within each panel, but\n# harder to compare across panels.\nggplot(mpg, aes(displ, hwy)) +\ngeom_point() +\nfacet_wrap(~class, scales = \"free\")\n\n# To repeat the same data in every panel, simply construct a data frame\n# that does not contain the facetting variable.\nggplot(mpg, aes(displ, hwy)) +\ngeom_point(data = transform(mpg, class = NULL), colour = \"grey85\") +\ngeom_point() +\nfacet_wrap(~class)\n\n# Use switch to display the facet labels near an axis, acting as\n# a subtitle for this axis. This is typically used with free scales\n# and a theme without boxes around strip labels.\nggplot(economics_long, aes(date, value)) +\ngeom_line() +\nfacet_wrap(~variable, scales = \"free_y\", nrow = 2, switch = \"x\") +\ntheme(strip.background = element_blank())\n\n\nDocumentation reproduced from package ggplot2, version 2.1.0, License: GPL-2\n\n### Community examples\n\nLooks like there are no examples yet." ]
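For readers working outside R, the plotnine package mirrors much of this API in Python. The sketch below is only a rough equivalent of the first example and assumes a recent plotnine installation with its bundled mpg dataset; the output filename is hypothetical.

```python
from plotnine import ggplot, aes, geom_point, facet_wrap
from plotnine.data import mpg  # same fuel-economy dataset ggplot2 ships

# Rough equivalent of: ggplot(mpg, aes(displ, hwy)) + geom_point() + facet_wrap(~class, nrow = 4)
p = (ggplot(mpg, aes('displ', 'hwy'))
     + geom_point()
     + facet_wrap('class', nrow=4))   # nrow/ncol control the panel grid

p.save("facet_demo.png")              # hypothetical output path
```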
{"ft_lang_label":"__label__en","ft_lang_prob":0.63925105,"math_prob":0.9691494,"size":3844,"snap":"2020-45-2020-50","text_gpt3_token_len":1024,"char_repetition_ratio":0.13020833,"word_repetition_ratio":0.055469953,"special_character_ratio":0.2723725,"punctuation_ratio":0.15341702,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97999424,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-30T14:10:45Z\",\"WARC-Record-ID\":\"<urn:uuid:560edf8f-2b88-4d39-bc31-c67110b07950>\",\"Content-Length\":\"17674\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:214f5d76-95ad-4805-abe1-82c4d247bb72>\",\"WARC-Concurrent-To\":\"<urn:uuid:992dec4e-5720-4c7c-b2a2-42423336ddad>\",\"WARC-IP-Address\":\"52.4.138.252\",\"WARC-Target-URI\":\"https://www.rdocumentation.org/packages/ggplot2/versions/2.1.0/topics/facet_wrap\",\"WARC-Payload-Digest\":\"sha1:SCT2Y3Q532Y37L6DZDI4LGTGLFEMMUTW\",\"WARC-Block-Digest\":\"sha1:ATSAE6NYNQAX6RYLY32TBN4AAT7POIEU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141216175.53_warc_CC-MAIN-20201130130840-20201130160840-00119.warc.gz\"}"}
https://community.esri.com/thread/58395-converting-to-map-coordinates-in-101
[ "# Converting to map coordinates in 10.1\n\nQuestion asked by oblique on Aug 29, 2012\nLatest reply on Aug 30, 2012 by oblique\nWe have a bounding polygon in WGS 84 coordinates that we need to draw on the map.  The users map data could be anything.  The following code works in ArcMAP 10.0 (and prior) but does not work in 10.1 if the factory code is zero.  Unfortunately a lot of shape files our customers use seem to have valid projection files but no factory codes.  They show up in the dataframe properties a \"custom\" coordinate systems.\nIn 10.1 is there anyway to create a coordinate system if you have a spatial reference but no factory code?\nOr be able to convert a WGS84 point to a \"custom\" coordinate?\nPerhaps I should ask the high level generic question as well.  How can I convert a WGS84 coordinate into the maps coordinate system which can be anything ArcMAP supports.\n\nThanks\n-Mark\n\n` Public Function ConvertToMap(ByVal lat As Double, ByVal lon As Double, ByVal elev As Double, ByVal layer As ILayer) As IPoint         'Convert x and y to map units. m_pApp is set in ICommand_OnCreate.         Dim pMxApp As IMxApplication         Dim mapPoint As IPoint         Dim pPoint As Point      ' = m_mapPoint         Dim sMessage As String = \"\"         pMxApp = m_pApp          mapPoint = pMxApp.Display.DisplayTransformation.ToMapPoint(0, 0)         Dim pSpatialRefFactory As ISpatialReferenceFactory         pSpatialRefFactory = New SpatialReferenceEnvironment         Dim pGeographicCoordinateSystem As IGeographicCoordinateSystem         pGeographicCoordinateSystem = pSpatialRefFactory.CreateGeographicCoordinateSystem(esriSRGeoCS_WGS1984)         pPoint = mapPoint          ' Set the Spacial Ref. to WGS 84         pPoint.Project(pGeographicCoordinateSystem)         pPoint.X = lon         pPoint.Y = lat         Dim code As Integer         Dim pProjectedCoordinateSystem As IProjectedCoordinateSystem         code = pMxApp.Display.DisplayTransformation.SpatialReference.FactoryCode()         ' Dont' know if map is in Geographic or Projected coordinate system so try both.        Try            pProjectedCoordinateSystem = pSpatialRefFactory.CreateProjectedCoordinateSystem(code)            pPoint.Project(pProjectedCoordinateSystem)        Catch ex As Exception            pGeographicCoordinateSystem = pSpatialRefFactory.CreateGeographicCoordinateSystem(code)            pPoint.Project(pGeographicCoordinateSystem)        End Try          Return pPoint     End Function`" ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7085542,"math_prob":0.79220814,"size":2456,"snap":"2020-45-2020-50","text_gpt3_token_len":560,"char_repetition_ratio":0.18311583,"word_repetition_ratio":0.3522388,"special_character_ratio":0.18729642,"punctuation_ratio":0.10178117,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95567364,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-19T21:41:17Z\",\"WARC-Record-ID\":\"<urn:uuid:4bd965d7-1aea-4df5-bb4e-d62e6e1048e0>\",\"Content-Length\":\"101506\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9aebf014-17e6-4398-8759-58f68dfa919a>\",\"WARC-Concurrent-To\":\"<urn:uuid:ae57744e-197d-4a10-8fe8-8a6c279116b9>\",\"WARC-IP-Address\":\"104.107.32.229\",\"WARC-Target-URI\":\"https://community.esri.com/thread/58395-converting-to-map-coordinates-in-101\",\"WARC-Payload-Digest\":\"sha1:LX7SGFKHMBWURW5DZ3KCS4PUQZLLXLJX\",\"WARC-Block-Digest\":\"sha1:TJNJX4VB6PHAB22ELWVAPQWBAQ5JE57H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107866404.1_warc_CC-MAIN-20201019203523-20201019233523-00389.warc.gz\"}"}
https://winkdoubleguns.com/2014/04/22/wii-nunchuck-update/
[ "# Wii Nunchuck update\n\nIn February of 2013 I was attempting to wire a Wii Nunchuck to my Arduino (which I did).  The problem was that I thought it wasn’t good enough.  I liked that I could run the throttle on a train with this Arduino and code (here’s the original post: http://holoprinter.blogspot.com/search/label/Wii%20Remote).  What I didn’t like was that it didn’t seem like it was smooth and pinout for the servo was ok, but I wanted to streamline it some.  Intro the Servo library.\n\nI do not know if I modified this code.  I started this blog to streamline the processes with the Servo library, however, I did copy this and am pretty sure I did not modify it much.  I did change some pins around.\n\nWhen you rotate the Wii Nunchuck up and down one servo will rotate.  When you roll the Wii Nunchuck the other servo will rotate.  If you press the ‘c’ button then use the top hat you will get one servo to move along the x-axis and one servo when you move the top hat along the y-axis.\n\nI’m working on another version of this that streamlines it into more of a library.\n\nhttp://forum.arduino.cc/index.php?topic=147238.0;wap2\n\n`` // // Not sure exactly where I got the original code from but if you know you worked // on it , please cite yourself to the code below in comments. // I added comments as needed and using the pre-existing switch case, I created a // manual mode so when the c_button is held the controls come from the nunchuck's // Joystick. Also when the z_button is pressed a LED (pin-9 / GND) is faded up to simulate // a laser powering up and then a simple keychain LASER POINTER assigned to pin-11 / GND goes // HIGH for a short duration to simulate a laser firing. // // In this version I changed the auto mode around so the the Y servo tracks the front // of the wii_NunChuck like the buttons are a face looking left or right. // // Raymond Willis Jr. 2/20/2013 email: [email protected] // Title: Servo Turret Controls (Auto / Manual) with Laser fire using Wii_Nunchuck: v5. 
//http://forum.arduino.cc/index.php/topic,147238.0.html #include #include #include uint8_t outbuf; int cnt = 0; int ledPin1 = 13;//9; // assign the LED pin int laserPin = 11; // assign the laser pointer pin int servoPin = 7;//10; int servoPin2 = 8; int pulseWidth = 0; int pulseWidth2 = 0; long lastPulse = 0; long lastPulse2 = 0; int z_button = 0; int c_button = 0; int refreshTime = 20; int minPulse = 1000; int minPulse2 = 500; int dtime=10; int oneFlash = 1; // test only #define pwbuffsize 10 long pwbuff[pwbuffsize]; long pwbuffpos = 0; long pwbuff2[pwbuffsize]; long pwbuffpos2 = 0; void setup() { Serial.begin (9600); Wire.begin (); nunchuck_init (); pinMode(servoPin, OUTPUT); pinMode(servoPin2, OUTPUT); pinMode(ledPin1, OUTPUT); pinMode(laserPin, OUTPUT); pulseWidth = minPulse; pulseWidth2 = minPulse2; Serial.print (\"Finished setupn\"); } void nunchuck_init() { Wire.beginTransmission (0x52); Wire.write (0x40); Wire.write (0x00); Wire.endTransmission (); } void send_zero() { Wire.beginTransmission (0x52); Wire.write (0x00); Wire.endTransmission (); } int t = 0; void loop() { t++; long last = millis(); if( t == 1) { t = 0; Wire.requestFrom (0x52, 6); while (Wire.available ()) { outbuf[cnt] = nunchuk_decode_byte (Wire.read ()); cnt++; } if (cnt >= 5) { // printNunchuckData(); // Uncomment to print data to display- RW int z_button = 0; int c_button = 0; if ((outbuf >> 0) & 1) z_button = 1; if ((outbuf >> 1) & 1) c_button = 1; switch (c_button) { case 1: switch (z_button) { case 0: if (oneFlash > 0) { for(int fadeValue = 0 ; fadeValue <= 100; fadeValue +=1) { //sets the value (range from 0 to 100): analogWrite(ledPin1, fadeValue); delay (15); // change this value to increase/decrease the LED ramp up time. oneFlash = 0; } } analogWrite(ledPin1, LOW); digitalWrite(laserPin, HIGH); delay (700); // Laser Pointer on time in millisecs after LED ramp up is done. analogWrite(laserPin, LOW); break; case 1: digitalWrite(ledPin1, LOW); muovi(); if (oneFlash < 1){ oneFlash = 1; } Serial.println(\"laser flash\"); break; } break; case 0: switch (z_button) { case 0: if (oneFlash > 0) { for(int fadeValue = 0 ; fadeValue <= 100; fadeValue +=1) { //sets the value (range from 0 to 100): analogWrite(ledPin1, fadeValue); delay (15); // change this value to increase/decrease the LED ramp up time. oneFlash = 0; } } analogWrite(ledPin1, LOW); digitalWrite(laserPin, HIGH); delay (700); // Laser Pointer on time in millisecs after LED ramp up is done. 
digitalWrite(laserPin, LOW); break; case 1: Serial.println(\"ray\"); ray1(); break; } break; } } cnt = 0; send_zero(); } // if(t==) updateServo(); delay(dtime); } void updateServo() { if (millis() - lastPulse >= refreshTime) { digitalWrite(servoPin, HIGH); delayMicroseconds(pulseWidth); digitalWrite(servoPin, LOW); digitalWrite(servoPin2, HIGH); delayMicroseconds(pulseWidth2); digitalWrite(servoPin2, LOW); lastPulse = millis(); } } int i=0; void printNunchuckData() { int joy_x_axis = outbuf; int joy_y_axis = outbuf; int accel_x_axis = outbuf; // * 2 * 2; int accel_y_axis = outbuf; // * 2 * 2; int accel_z_axis = outbuf; // * 2 * 2; int z_button = 0; int c_button = 0; if ((outbuf >> 0) & 1) z_button = 1; if ((outbuf >> 1) & 1) c_button = 1; if ((outbuf >> 2) & 1) accel_x_axis += 2; if ((outbuf >> 3) & 1) accel_x_axis += 1; if ((outbuf >> 4) & 1) accel_y_axis += 2; if ((outbuf >> 5) & 1) accel_y_axis += 1; if ((outbuf >> 6) & 1) accel_z_axis += 2; if ((outbuf >> 7) & 1) accel_z_axis += 1; Serial.print (i,DEC); Serial.print (\"t\"); Serial.print (\"X: \"); Serial.print (joy_x_axis, DEC); Serial.print (\"t\"); Serial.print (\"Y: \"); Serial.print (joy_y_axis, DEC); Serial.print (\"t\"); Serial.print (\"AccX: \"); Serial.print (accel_x_axis, DEC); Serial.print (\"t\"); Serial.print (\"AccY: \"); Serial.print (accel_y_axis, DEC); Serial.print (\"t\"); Serial.print (\"AccZ: \"); Serial.print (accel_z_axis, DEC); Serial.print (\"t\"); Serial.print (z_button, DEC); Serial.print (\" \"); Serial.print (c_button, DEC); Serial.print (\"rn\"); i++; } char nunchuk_decode_byte (char x) { x = (x ^ 0x17) + 0x17; return x; } void muovi (){ // This is the pre-existing auto mode that uses the x, y accelerometers to move servos float tilt = (700 - outbuf*2*2); float tilt2 = (700 - outbuf*2*2); tilt = (tilt); pulseWidth = (tilt * 5) + minPulse; tilt2 = (tilt2); pulseWidth2 = (tilt2 * 5) + minPulse2; pwbuff[pwbuffpos] = pulseWidth; pwbuff2[pwbuffpos2] = pulseWidth2; if( ++pwbuffpos == pwbuffsize ) pwbuffpos = 0; if( ++pwbuffpos2 == pwbuffsize ) pwbuffpos2 = 0; pulseWidth=0; pulseWidth2=0; for( int p=0; p<pwbuffsize; p++ ){ pulseWidth += pwbuff[p]; pulseWidth2 += pwbuff2[p]; } pulseWidth /= pwbuffsize; pulseWidth2 /= pwbuffsize; } void ray1 (){ // This is my set up for manual mode control using the wii nunchuck's joysticks float tilt = (650 - outbuf*2*2); // change 650 as needed to center the servo when c_button is pressed and Joystick is centered float tilt2 = outbuf*2*2; tilt = (tilt); pulseWidth = (tilt * 5) + minPulse; // was (tilt * 5) tilt2 = (tilt2-295); // change the 285 number as needed to center the servo when c_button is pressed and Joystick is centered pulseWidth2 = (tilt2 * 5) + minPulse2; // was (tilt * 5) pwbuff[pwbuffpos] = pulseWidth; pwbuff2[pwbuffpos2] = pulseWidth2; if( ++pwbuffpos == pwbuffsize ) pwbuffpos = 0; if( ++pwbuffpos2 == pwbuffsize ) pwbuffpos2 = 0; pulseWidth=0; pulseWidth2=0; for( int p=0; p<pwbuffsize; p++ ){ pulseWidth += pwbuff[p]; pulseWidth2 += pwbuff2[p]; } pulseWidth /= pwbuffsize; pulseWidth2 /= pwbuffsize; } ``\n\nThis site uses Akismet to reduce spam. Learn how your comment data is processed." ]
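For readers following along off-device, the decoding in the sketch is compact enough to restate. The Python fragment below simply mirrors nunchuk_decode_byte() and the bit tests applied to the sixth report byte (buttons in bits 0 and 1, accelerometer least-significant bits in bits 2 through 7) exactly as the Arduino code above does. It is illustrative only, follows the sketch rather than any canonical Nunchuk register map, and the sample input byte is arbitrary.

```python
def decode_byte(x):
    """Mirror of nunchuk_decode_byte(): (x ^ 0x17) + 0x17, kept to 8 bits."""
    return ((x ^ 0x17) + 0x17) & 0xFF

def unpack_status_byte(b5):
    """Unpack the sixth report byte the way printNunchuckData() does."""
    z_button = (b5 >> 0) & 1                                # bit 0
    c_button = (b5 >> 1) & 1                                # bit 1
    accel_x_lsb = ((b5 >> 2) & 1) * 2 + ((b5 >> 3) & 1)     # bits 2-3
    accel_y_lsb = ((b5 >> 4) & 1) * 2 + ((b5 >> 5) & 1)     # bits 4-5
    accel_z_lsb = ((b5 >> 6) & 1) * 2 + ((b5 >> 7) & 1)     # bits 6-7
    return z_button, c_button, accel_x_lsb, accel_y_lsb, accel_z_lsb

print(unpack_status_byte(decode_byte(0x5A)))                # arbitrary sample byte
```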
{"ft_lang_label":"__label__en","ft_lang_prob":0.6149102,"math_prob":0.9276483,"size":7481,"snap":"2021-43-2021-49","text_gpt3_token_len":2406,"char_repetition_ratio":0.15433997,"word_repetition_ratio":0.25619835,"special_character_ratio":0.3462104,"punctuation_ratio":0.2045134,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9881329,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-20T03:03:38Z\",\"WARC-Record-ID\":\"<urn:uuid:8bba30a7-252d-4a78-bad9-84ce495f5a20>\",\"Content-Length\":\"157975\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:19a76ba7-7777-4299-98f8-be1a6b25b8ec>\",\"WARC-Concurrent-To\":\"<urn:uuid:b311092c-53f0-4184-aed8-01b822fcff47>\",\"WARC-IP-Address\":\"192.0.78.24\",\"WARC-Target-URI\":\"https://winkdoubleguns.com/2014/04/22/wii-nunchuck-update/\",\"WARC-Payload-Digest\":\"sha1:AZRYAZCJBDH3MVRDIYKKT2ZFU77R57K6\",\"WARC-Block-Digest\":\"sha1:IVWSZIHZJO4G5LWQS4MFSRPJBK2A7DNX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585302.56_warc_CC-MAIN-20211020024111-20211020054111-00505.warc.gz\"}"}
http://www.celsiusfahrenheit.co/107.98
[ "🌡107.98 C to F\n\n🔆🌡107.98 C to F. How many degrees Fahrenheit in a degree Celsius. °C to F Calculator.\n\nCelsius to Fahrenheit Converter\n\n Celsius Fahrenheit You can edit any of the fields below: = Detailed result here\n\nHow to convert from Celsius to Fahrenheit\n\nIt is ease to convert a temperature value from Celsius to Fahrenheit by using the formula below:\n\n [°F] = [°C] × 9⁄5 + 32\nor\n Value in Fahrenheit = Value in Celsius × 9⁄5 + 32\n\nTo change 107.98° Celsius to Fahrenheit, just need to replace the value [°C] in the formula below and then do the math.\n\nStep-by-step Solution:\n\n1. Write down the formula: [°F] = [°C] × 9⁄5 + 32\n2. Plug the value in the formula: 107.98 × 9⁄5 + 32\n3. Multiply by 9: 971.82⁄5 + 32\n4. Divide by 5: 194.364 + 32\n\nValues around 107.98 Celsius(s)\n\nCelsiusFahrenheitCelsiusFahrenheit\n107.0841.7107.1841.8\n107.2841.8107.3841.9\n107.4841.9107.5842.0\n107.6842.0107.7842.1\n107.8842.2107.9842.2\n108.0842.3108.1842.3\n108.2842.4108.3842.4\n108.4842.5108.5842.5\n108.6842.6108.7842.7\n108.8842.7108.9842.8" ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.61467975,"math_prob":0.9476981,"size":1040,"snap":"2022-05-2022-21","text_gpt3_token_len":401,"char_repetition_ratio":0.2046332,"word_repetition_ratio":0.011363637,"special_character_ratio":0.52403843,"punctuation_ratio":0.21505377,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99585974,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-18T15:56:13Z\",\"WARC-Record-ID\":\"<urn:uuid:0d3a3bd0-15d5-4b34-a138-61bdf826f069>\",\"Content-Length\":\"24307\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ad402a63-7d6a-4993-9f1a-7648b4751a88>\",\"WARC-Concurrent-To\":\"<urn:uuid:67ca45b9-8589-482c-89de-8a5da71777ef>\",\"WARC-IP-Address\":\"67.205.30.187\",\"WARC-Target-URI\":\"http://www.celsiusfahrenheit.co/107.98\",\"WARC-Payload-Digest\":\"sha1:SGHMPGYYMFN4XPD64T2VWU6NCGTLSUOG\",\"WARC-Block-Digest\":\"sha1:6DVER4IDY4RFWQ77OLAA735V6REKLBQD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320300934.87_warc_CC-MAIN-20220118152809-20220118182809-00582.warc.gz\"}"}
https://docs.aws.amazon.com/en_us/quicksight/latest/user/windowSum-function.html
[ "Amazon QuickSight\nUser Guide\n\nThe AWS Documentation website is getting a new look!\nTry it now and let us know what you think. Switch to the new look >>\n\nYou can return to the original look by selecting English in the language selector above.\n\n# windowSum\n\n`windowSum` calculates the sum of the aggregated measure in a custom window that is partitioned and sorted by specified attributes. Usually, you use custom window functions on a time series, where your visual shows a metric and a date field.\n\n`windowSum` is supported for use with analyses based on SPICE and direct query data sets. Window functions aren't supported for MySQL versions earlier than 8 and MariaDB versions earlier than 10.2.\n\n## Syntax\n\nThe brackets are required. To see which arguments are optional, see the following descriptions.\n\n``````windowSum\n(\nmeasure\n, [sort_order_field ASC/DESC, ...]\n, start_index\n, end_index\n,[ partition_field, ... ]\n)``````\n\n## Arguments\n\nmeasure\n\nThe aggregated metric that you want to get the sum for, for example `sum({Revenue})`.\n\n`windowAvg` is supported for use with analyses based on SPICE and direct query data sets. For the engines MySQL, MariaDB, and Amazon Aurora with MySQL compatibility, the lookup index is limited to just 1. Window functions aren't supported for MySQL versions below 8 and MariaDB versions earlier than 10.2.\n\nsort attribute\n\nOne or more aggregated fields, either measures or dimensions or both, that you want to sort the data by, separated by commas. You can either specify ascending (`ASC`) or descending (`DESC`) sort order.\n\nEach field in the list is enclosed in {} (curly braces), if it's more than one word. The entire list is enclosed in [ ] (square brackets).\n\nstart index\n\nThe start index is a positive integer, indicating n rows above the current row. The start index counts the available data points above the current row, rather than counting actual time periods. If your data is sparse (missing months or years, for example), adjust the indexes accordingly.\n\nend index\n\nThe end index is a positive integer, indicating n rows below the current row. The end index counts the available data points below the current row, rather than counting actual time periods. If your data is sparse (missing months or years, for example), adjust the indexes accordingly.\n\npartition field\n\n(Optional) One or more dimensions that you want to partition by, separated by commas.\n\nEach field in the list is enclosed in {} (curly braces), if it's more than one word. The entire list is enclosed in [ ] (square brackets).\n\n## Example\n\nThe following example calculates the moving sum of `sum(Revenue)`, sorted by `SaleDate`. The calculation includes two rows above and one row ahead of the current row.\n\n``````windowSum\n(\nsum(Revenue),\n[SaleDate ASC],\n2,\n1\n) ``````\n\nThe following example show a trailing 12-month sum.\n\n``windowSum(sum(Revenue),[SaleDate ASC],12,0)``\n\nThe following screenshot shows the results of this trailing 12-month sum example. The `sum(Revenue)` field is added to the chart to show the difference between the revenue and the trailing 12-month sum of revenue.", null, "" ]
[ null, "https://docs.aws.amazon.com/en_us/quicksight/latest/user/images/windowSum.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8307061,"math_prob":0.8314946,"size":2959,"snap":"2019-35-2019-39","text_gpt3_token_len":656,"char_repetition_ratio":0.09678511,"word_repetition_ratio":0.2704918,"special_character_ratio":0.22304833,"punctuation_ratio":0.13090909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9636494,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-22T20:47:05Z\",\"WARC-Record-ID\":\"<urn:uuid:5fd8097f-fb83-4543-9b37-30a60d476000>\",\"Content-Length\":\"24973\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d57f8317-5686-49ee-906a-0a489081d23d>\",\"WARC-Concurrent-To\":\"<urn:uuid:4dbca5be-8c8b-4acf-b1e8-08aaa6570a90>\",\"WARC-IP-Address\":\"54.239.24.117\",\"WARC-Target-URI\":\"https://docs.aws.amazon.com/en_us/quicksight/latest/user/windowSum-function.html\",\"WARC-Payload-Digest\":\"sha1:OV5GEPI4YKURMMHVZYBQBAQKEXOQIERU\",\"WARC-Block-Digest\":\"sha1:F5F65ZX3AOHP35VOSGA6S32DBJM25XL4\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514575674.3_warc_CC-MAIN-20190922201055-20190922223055-00019.warc.gz\"}"}
https://socratic.org/questions/what-is-the-integral-of-xln-x
[ "# What is the integral of xln(x)?\n\nDec 16, 2014\n\nYou have to use the Integration by Parts formula: $\\int u \\mathrm{dv} = u v - \\int v \\mathrm{du}$\n\nlet u= lnx\ndu = $\\frac{1}{x}$\n\ndv = x\nv = ${x}^{2} / 2$\n\nPlug this in the IBP formula and you'll get.\n\n=$\\ln x \\cdot {x}^{2} / 2 - \\int {x}^{2} / 2 \\cdot \\frac{1}{x}$\nSolve the integral on the right side and you'll get $- {x}^{2} / 4$\n\nFinal answer would be: $\\ln x \\cdot {x}^{2} / 2 - {x}^{2} / 4$" ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8236882,"math_prob":0.99999595,"size":381,"snap":"2020-45-2020-50","text_gpt3_token_len":102,"char_repetition_ratio":0.11405835,"word_repetition_ratio":0.0,"special_character_ratio":0.25984251,"punctuation_ratio":0.065789476,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998524,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-01T21:30:10Z\",\"WARC-Record-ID\":\"<urn:uuid:cfb739f5-b984-49ba-b881-a0405f42f945>\",\"Content-Length\":\"32647\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:267a7d50-8d19-4d72-9d6b-cead2087658b>\",\"WARC-Concurrent-To\":\"<urn:uuid:4f466a76-2d6f-48fe-9efd-cbb12086cf75>\",\"WARC-IP-Address\":\"216.239.36.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/what-is-the-integral-of-xln-x\",\"WARC-Payload-Digest\":\"sha1:ZLK4IKZJ6OU56IZS6Q7VT6WK44SCWU55\",\"WARC-Block-Digest\":\"sha1:RRRZEK6B7LC7LBHGGCNT24S6NT7FIDVG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141681524.75_warc_CC-MAIN-20201201200611-20201201230611-00087.warc.gz\"}"}
https://mathalino.com/reviewer/college-algebra/arithmetic-geometric-and-harmonic-progressions
[ "Elements\na1 = value of the first term\nam = value of any term after the first term but before the last term\nan = value of the last term\nn = total number of terms\nm = mth term after the first but before nth\nd = common difference of arithmetic progression\nr = common ratio of geometric progression\nS = sum of the 1st n terms\n\n## Arithmetic Progression, AP\n\nArithmetic progression is a sequence of numbers in which the difference of any two adjacent terms is constant. The constant difference is commonly known as common difference and is denoted by d. Examples of arithmetic progression are as follows:\n\nExample 1: 3, 8, 13, 18, 23, 28 33, 38, 43, 48\nThe above sequence of numbers is composed of n = 10 terms (or elements). The first term a1 = 3, and the last term an = a10 = 48. The common difference of the above AP is d = 8 - 3 = 13 - 8 = ... = 5.\n\nExample 2: 5, 2, -1, ...\nThis AP has a common difference of -3 and is composed of infinite number of terms as indicated by the three ellipses at the end.\n\n### Formulas for Arithmetic Progression\n\nCommon difference, d\nThe common difference can be found by subtracting any two adjacent terms.\n\n$d = a_{m + 1} - a_m$   or\n\n$d = a_2 - a_1 = a_3 - a_2 = a_4 - a_3 = ...$\n\nValue of each term\nEach term after the first can be found by adding recursively the common difference d to the preceding term.\n\n$a_{m + 1} = a_m + d$\n\nnth term of AP\nThe nth term of arithmetic progression is given by\n\n$a_n = a_1 + (n - 1)d$\n\nor in more general term, it can be written as\n\n$a_n = a_m + (n - m)d$\n\nSum of n terms of AP\nThe sum of the first n terms of arithmetic progression is n times the average of the first term and the last term.\n\n$S = \\dfrac{n}{2}(a_1 + a_n)$\n\nIf the last term an is not given, the following may be useful\n\n$S = \\dfrac{n}{2}[ \\, 2a_1 + (n - 1)d \\, ]$\n\nIf required for the partial sum from mth to nth terms, the following formula can be used\n\n$S = \\dfrac{n - m + 1}{2}(a_m + a_n)$   or   $S = \\dfrac{n - m + 1}{2} [ \\, 2a_m + (n - m)d \\, ]$\n\n## Geometric Progression, GP\n\nGeometric progression is a sequence of numbers in which any two adjacent terms has a common ratio denoted by r. Example of geometric progression is\n\n1, 3, 9, 27, ...\n\nwhich is composed of infinite number of terms and with common ratio equal to 3.\n\n### Formulas for Geometric Progression\n\nCommon ratio\nThe common ratio can be found by taking the quotient of any two adjacent terms.\n\n$r = \\dfrac{a_{m + 1}}{a_m} = \\dfrac{a_2}{a_1} = \\dfrac{a_3}{a_2} = \\dfrac{a_4}{a_3} = ...$\n\nnth term of GP\nThe nth term of the geometric progression is given by\n\n$a_n = a_1 \\, r^{n - 1}$   or   $a_n = a_m \\, r^{n - m}$\n\nSum of n terms of GP\nThe sum of the first n terms of geometric progression is\n\n$S = \\dfrac{a_1(1 - r^n)}{1 - r}$\n\nSum of Infinite Geometric Progression\nA finite sum can be obtained from GP with infinite terms if and only if -1.0 ≤ r ≤ 1.0 and r ≠ 0.\n\n$S = \\dfrac{a_1}{1 - r}$\n\n## Harmonic Progression, HP\n\nHarmonic progression is a sequence of numbers in which the reciprocals of the elements are in arithmetic progression. Example of harmonic progression is\n\n1/3, 1/6, 1/9, ...\n\nIf you take the reciprocal of each term from the above HP, the sequence will become\n\n3, 6, 9, ...\n\nwhich is an AP with a common difference of 3.\n\nAnother example of HP is 6, 3, 2. 
The reciprocals of each term are 1/6, 1/3, 1/2 which is an AP with a common difference of 1/6.\n\nTo find the term of HP, convert the sequence into AP then do the calculations using the AP formulas. Then take the reciprocal of the answer in AP to get the correct term in HP.\n\n## Relationship between arithmetic, geometric, and harmonic means\n\n$AM \\times HM = GM^2$" ]
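These formulas translate directly into code. A small Python sketch, checked against the example sequences given above (the ten-term AP 3, 8, ..., 48 and the GP 1, 3, 9, 27, ...); the harmonic-progression helper just applies the "work on the reciprocals" rule.

```python
def ap_nth(a1, d, n):
    """nth term of an arithmetic progression: a_n = a_1 + (n - 1) d."""
    return a1 + (n - 1) * d

def ap_sum(a1, d, n):
    """Sum of the first n AP terms: S = n/2 * (2 a_1 + (n - 1) d)."""
    return n * (2 * a1 + (n - 1) * d) / 2

def gp_nth(a1, r, n):
    """nth term of a geometric progression: a_n = a_1 * r**(n - 1)."""
    return a1 * r ** (n - 1)

def gp_sum(a1, r, n):
    """Sum of the first n GP terms: S = a_1 (1 - r**n) / (1 - r), r != 1."""
    return a1 * (1 - r ** n) / (1 - r)

print(ap_nth(3, 5, 10))   # 48, the last term of Example 1
print(ap_sum(3, 5, 10))   # 255.0, ten terms averaging (3 + 48)/2
print(gp_nth(1, 3, 4))    # 27

# Harmonic progression: take the reciprocals of an AP.
hp = [1 / ap_nth(3, 3, k) for k in range(1, 4)]
print(hp)                 # [1/3, 1/6, 1/9] as decimals
```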
{"ft_lang_label":"__label__en","ft_lang_prob":0.93578154,"math_prob":0.9995036,"size":3256,"snap":"2020-24-2020-29","text_gpt3_token_len":779,"char_repetition_ratio":0.18081181,"word_repetition_ratio":0.0756579,"special_character_ratio":0.23218673,"punctuation_ratio":0.08320251,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99997354,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-27T11:14:07Z\",\"WARC-Record-ID\":\"<urn:uuid:7dc06d90-77de-4d4a-93e1-7a5306cfb661>\",\"Content-Length\":\"47425\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:714238f5-f8ec-4530-86f2-90b319510093>\",\"WARC-Concurrent-To\":\"<urn:uuid:4b1548b3-c1ac-45e2-9ecb-5de44931d684>\",\"WARC-IP-Address\":\"104.200.20.138\",\"WARC-Target-URI\":\"https://mathalino.com/reviewer/college-algebra/arithmetic-geometric-and-harmonic-progressions\",\"WARC-Payload-Digest\":\"sha1:LAUK6BSILDMJPHPGEFHTR3DLSEXGTS7I\",\"WARC-Block-Digest\":\"sha1:FEBJO7OJMYTLR32HUR57NJBOXNJVWM6D\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347394074.44_warc_CC-MAIN-20200527110649-20200527140649-00397.warc.gz\"}"}
https://homeguides.sfgate.com/estimate-volume-fill-dirt-84205.html
[ "# How to Estimate Volume of Fill Dirt\n\nWhen making improvements around the home landscape, you may need to backfill an area with fill dirt. Determining the right volume formula for your project depends on the shape of the area to be filled. For a rectangular-shaped area, you would multiply the three dimensions. Circular shapes require you to use the geometric value pi in your formula. The main thing to remember is that all your measurements must be in the same units. If you take your measurements in inches, it will be easy to convert to cubic yards, the volume unit dirt is sold by.\n\n## Rectangular Area\n\n1. Measure the length, width and depth of the area you want to fill with dirt. Record your dimensions in inches. For example, suppose that the length is 10 feet (120 inches), width is 5 feet (60 inches), and depth is 2 feet 6 inches (30 inches).\n\n2. Calculate the volume of dirt needed to fill the area you measured by multiplying length (120 inches) by width (60 inches) by depth (30 inches). Total volume equals 216,000 cubic inches.\n\n3. Convert cubic inches to cubic yards by dividing by 46,656 cubic inches. So, 216,000 cubic inches divided by 46,656 cubic inches equals 4.63 cubic yards. To fill the rectangular area at a depth of 2 feet 6 inches, you will 4.63 cubic yards of fill dirt.\n\n## Circular Area\n\n1. Measure the diameter and depth of the hole you want to fill with dirt. Record your measurements in inches. For this example, suppose that the hole you will fill is 5 feet (60 inches) in diameter and 2 feet 6 inches (30 inches) deep. The radius of 5 feet is 2.5 feet (30 inches).\n\n2. Calculate the volume of dirt needed to fill in the hole using pi (3.142) times the square of the radius times the depth. For this example, you would multiply 3.142 by the square of 30 by 30 to obtain the volume. This yields 3.142 times 900 inches times 30 inches, or 84,834 cubic inches.\n\n3. Convert 84,834 cubic inches to cubic yards by dividing by 46,656 cubic inches. It will require 1.82 cubic yards of dirt to fill in the hole.\n\n4. #### Things You Will Need\n\n• Tape measure\n\n• Calculator\n\n#### Tip\n\nIf your measurements are in feet, you can convert cubic feet to cubic yards by dividing cubic feet by 27 cubic yards." ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8887796,"math_prob":0.9767436,"size":2515,"snap":"2021-31-2021-39","text_gpt3_token_len":583,"char_repetition_ratio":0.1585026,"word_repetition_ratio":0.087248325,"special_character_ratio":0.25367793,"punctuation_ratio":0.11003861,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99516654,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-23T15:48:46Z\",\"WARC-Record-ID\":\"<urn:uuid:8e52eb3f-22e9-4619-b57d-5146a52cebc7>\",\"Content-Length\":\"119468\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f21cd5b2-4d8a-42b8-b577-cef37e1a9cb5>\",\"WARC-Concurrent-To\":\"<urn:uuid:c98cb04a-62a6-4a2b-baf7-55a7c3580152>\",\"WARC-IP-Address\":\"99.84.105.82\",\"WARC-Target-URI\":\"https://homeguides.sfgate.com/estimate-volume-fill-dirt-84205.html\",\"WARC-Payload-Digest\":\"sha1:HJWA2P4SFYVPZDQF5OUG3RGMPUZCIRGK\",\"WARC-Block-Digest\":\"sha1:DCUEOOR4PZGEGKRG5TDQBU32HQEYY2N3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057424.99_warc_CC-MAIN-20210923135058-20210923165058-00366.warc.gz\"}"}
https://code.grnet.gr/projects/synnefo/repository/revisions/26515bc162dfce9ced4fcaadbf6653fa8ce7e4f8/diff
[ "## Revision 26515bc1\n\n/dev/null\n1\n```# Copyright 2012, 2013 GRNET S.A. All rights reserved.\n```\n2\n```#\n```\n3\n```# Redistribution and use in source and binary forms, with or without\n```\n4\n```# modification, are permitted provided that the following conditions\n```\n5\n```# are met:\n```\n6\n```#\n```\n7\n```# 1. Redistributions of source code must retain the above copyright\n```\n8\n```# notice, this list of conditions and the following disclaimer.\n```\n9\n```#\n```\n10\n```# 2. Redistributions in binary form must reproduce the above copyright\n```\n11\n```# notice, this list of conditions and the following disclaimer in the\n```\n12\n```# documentation and/or other materials provided with the distribution.\n```\n13\n```#\n```\n14\n```# THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND\n```\n15\n```# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n```\n16\n```# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n```\n17\n```# ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE\n```\n18\n```# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n```\n19\n```# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\n```\n20\n```# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n```\n21\n```# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n```\n22\n```# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n```\n23\n```# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n```\n24\n```# SUCH DAMAGE.\n```\n25\n```#\n```\n26\n```# The views and conclusions contained in the software and documentation are\n```\n27\n```# those of the authors and should not be interpreted as representing official\n```\n28\n```# policies, either expressed or implied, of GRNET S.A.\n```\n29\n\n30\n```from django.db.models import Manager\n```\n31\n```from django.db.models.query import QuerySet\n```\n32\n\n33\n\n34\n```class ProtectedDeleteManager(Manager):\n```\n35\n``` \"\"\"Manager for protecting Backend deletion.\n```\n36\n\n37\n``` Call Backend delete() method in order to prevent deletion\n```\n38\n``` of Backends that host non-deleted VirtualMachines.\n```\n39\n\n40\n``` \"\"\"\n```\n41\n\n42\n``` def get_query_set(self):\n```\n43\n``` return BackendQuerySet(self.model, using=self._db)\n```\n44\n\n45\n\n46\n```class BackendQuerySet(QuerySet):\n```\n47\n``` def delete(self):\n```\n48\n``` for backend in self._clone():\n```\n49\n``` backend.delete()\n```\n32 32\n```from copy import deepcopy\n```\n33 33\n```from django.conf import settings\n```\n34 34\n```from django.db import models\n```\n35\n```from django.db import IntegrityError\n```\n36 35\n\n37 36\n```import utils\n```\n38 37\n```from contextlib import contextmanager\n```\n......\n41 40\n```from django.conf import settings as snf_settings\n```\n42 41\n```from aes_encrypt import encrypt_db_charfield, decrypt_db_charfield\n```\n43 42\n\n44\n```from synnefo.db.managers import ProtectedDeleteManager\n```\n45 43\n```from synnefo.db import pools, fields\n```\n46 44\n\n47 45\n```from synnefo.logic.rapi_pool import (get_rapi_client,\n```\n......\n102 100\n``` null=False)\n```\n103 101\n``` ctotal = models.PositiveIntegerField('Total number of logical processors',\n```\n104 102\n``` default=0, null=False)\n```\n105\n``` # Custom object manager to protect from cascade delete\n```\n106\n``` objects = ProtectedDeleteManager()\n```\n107 
103\n\n108 104\n``` HYPERVISORS = (\n```\n109 105\n``` (\"kvm\", \"Linux KVM hypervisor\"),\n```\n......\n160 156\n``` self.virtual_machines.filter(deleted=False)\\\n```\n161 157\n``` .update(backend_hash=self.hash)\n```\n162 158\n\n163\n``` def delete(self, *args, **kwargs):\n```\n164\n``` # Integrity Error if non-deleted VMs are associated with Backend\n```\n165\n``` if self.virtual_machines.filter(deleted=False).count():\n```\n166\n``` raise IntegrityError(\"Non-deleted virtual machines are associated \"\n```\n167\n``` \"with backend: %s\" % self)\n```\n168\n``` else:\n```\n169\n``` # ON_DELETE = SET NULL\n```\n170\n``` for vm in self.virtual_machines.all():\n```\n171\n``` vm.backend = None\n```\n172\n``` vm.save()\n```\n173\n``` self.virtual_machines.all().backend = None\n```\n174\n``` # Remove BackendNetworks of this Backend.\n```\n175\n``` # Do not use networks.all().delete(), since delete() method of\n```\n176\n``` # BackendNetwork will not be called!\n```\n177\n``` for net in self.networks.all():\n```\n178\n``` net.delete()\n```\n179\n``` super(Backend, self).delete(*args, **kwargs)\n```\n180\n\n181 159\n``` def __init__(self, *args, **kwargs):\n```\n182 160\n``` super(Backend, self).__init__(*args, **kwargs)\n```\n183 161\n``` if not self.pk:\n```\n......\n320 298\n``` userid = models.CharField('User ID of the owner', max_length=100,\n```\n321 299\n``` db_index=True, null=False)\n```\n322 300\n``` backend = models.ForeignKey(Backend, null=True,\n```\n323\n``` related_name=\"virtual_machines\",)\n```\n301\n``` related_name=\"virtual_machines\",\n```\n302\n``` on_delete=models.PROTECT)\n```\n324 303\n``` backend_hash = models.CharField(max_length=128, null=True, editable=False)\n```\n325 304\n``` created = models.DateTimeField(auto_now_add=True)\n```\n326 305\n``` updated = models.DateTimeField(auto_now=True)\n```\n......\n639 618\n``` }\n```\n640 619\n\n641 620\n``` network = models.ForeignKey(Network, related_name='backend_networks')\n```\n642\n``` backend = models.ForeignKey(Backend, related_name='networks')\n```\n621\n``` backend = models.ForeignKey(Backend, related_name='networks',\n```\n622\n``` on_delete=models.PROTECT)\n```\n643 623\n``` created = models.DateTimeField(auto_now_add=True)\n```\n644 624\n``` updated = models.DateTimeField(auto_now=True)\n```\n645 625\n``` deleted = models.BooleanField('Deleted', default=False)\n```\n96 96\n``` mfact.BackendFactory()\n```\n97 97\n``` self.assertRaises(Exception, mfact.BackendFactory, ())\n```\n98 98\n\n99\n``` def test_delete_backend(self):\n```\n100\n``` vm = mfact.VirtualMachineFactory(backend=self.backend, deleted=True)\n```\n101\n``` bnet = mfact.BackendNetworkFactory(backend=self.backend)\n```\n102\n``` self.backend.delete()\n```\n103\n``` self.assertRaises(Backend.DoesNotExist, Backend.objects.get,\n```\n104\n``` id=self.backend.id)\n```\n105\n``` # Test that VM is not deleted\n```\n106\n``` vm2 = VirtualMachine.objects.get(id=vm.id)\n```\n107\n``` self.assertEqual(vm2.backend, None)\n```\n108\n``` # Test tha backend networks are deleted, but not the network\n```\n109\n``` self.assertRaises(BackendNetwork.DoesNotExist,\n```\n110\n``` BackendNetwork.objects.get, id=bnet.id)\n```\n111\n``` Network.objects.get(id=bnet.network.id)\n```\n112\n\n113 99\n``` def test_delete_active_backend(self):\n```\n114 100\n``` \"\"\"Test that a backend with non-deleted VMS is not deleted\"\"\"\n```\n115\n``` mfact.VirtualMachineFactory(backend=self.backend)\n```\n116\n``` self.assertRaises(IntegrityError, self.backend.delete, 
())\n```\n101\n``` backend = mfact.BackendFactory()\n```\n102\n``` vm = mfact.VirtualMachineFactory(backend=backend)\n```\n103\n``` self.assertRaises(IntegrityError, backend.delete, ())\n```\n104\n``` vm.backend = None\n```\n105\n``` vm.save()\n```\n106\n``` backend.delete()\n```\n117 107\n\n118 108\n``` def test_password_encryption(self):\n```\n119 109\n``` password_hash = self.backend.password\n```\n30 30\n\n31 31\n```from django.core.management.base import BaseCommand, CommandError\n```\n32 32\n```from synnefo.management.common import get_backend\n```\n33\n```from synnefo.db.models import VirtualMachine, BackendNetwork\n```\n33\n```from synnefo.logic import backend as backend_mod\n```\n34\n```from synnefo.db.models import Backend\n```\n35\n```from django.db import transaction, models\n```\n34 36\n\n35 37\n\n36\n```class Command(BaseCommand):\n```\n37\n``` can_import_settings = True\n```\n38\n```HELP_MSG = \"\"\"\\\n```\n39\n```Remove a backend from the Database. Backend should be set to drained before\n```\n40\n```trying to remove it, in order to avoid the allocation of a new instances in\n```\n41\n```this Backend. Removal of a backend will fail if the backend hosts any\n```\n42\n```non-deleted instances.\"\"\"\n```\n38 43\n\n39\n``` help = \"Remove a backend from the Database. Backend should be set\\n\" \\\n```\n40\n``` \"to drained before trying to remove it, in order to avoid the\\n\" \\\n```\n41\n``` \"allocation of a new instances in this Backend.\\n\\n\" \\\n```\n42\n``` \"Removal of a backend will fail if the backend hosts any\\n\" \\\n```\n43\n``` \"non-deleted instances.\"\n```\n44 44\n\n45\n``` output_transaction = True # The management command runs inside\n```\n46\n``` # an SQL transaction\n```\n45\n```class Command(BaseCommand):\n```\n46\n``` help = HELP_MSG\n```\n47 47\n\n48 48\n``` def handle(self, *args, **options):\n```\n49 49\n``` write = self.stdout.write\n```\n......\n52 52\n\n53 53\n``` backend = get_backend(args)\n```\n54 54\n\n55\n``` write('Trying to remove backend: %s\\n' % backend.clustername)\n```\n56\n\n57\n``` vms_in_backend = VirtualMachine.objects.filter(backend=backend,\n```\n58\n``` deleted=False)\n```\n55\n``` write(\"Trying to remove backend: %s\\n\" % backend.clustername)\n```\n59 56\n\n60\n``` if vms_in_backend:\n```\n57\n``` if backend.virtual_machines.filter(deleted=False).exists():\n```\n61 58\n``` raise CommandError('Backend hosts non-deleted vms. 
Can not delete')\n```\n62 59\n\n63\n``` networks = BackendNetwork.objects.filter(backend=backend,\n```\n64\n``` deleted=False)\n```\n65\n``` networks = [net.network.backend_id for net in networks]\n```\n60\n``` # Get networks before deleting backend, because after deleting the\n```\n61\n``` # backend, all BackendNetwork objects are deleted!\n```\n62\n``` networks = [bn.network for bn in backend.networks.all()]\n```\n66 63\n\n67\n``` backend.delete()\n```\n64\n``` try:\n```\n65\n``` delete_backend(backend)\n```\n66\n``` except models.ProtectedError as e:\n```\n67\n``` msg = (\"Can not delete backend because it contains\"\n```\n68\n``` \"non-deleted VMs:\\n%s\" % e)\n```\n69\n``` raise CommandError(msg)\n```\n68 70\n\n69\n``` write('Successfully removed backend.\\n')\n```\n71\n``` write('Successfully removed backend from DB.\\n')\n```\n70 72\n\n71 73\n``` if networks:\n```\n72\n``` write('Left the following orphans networks in Ganeti:\\n')\n```\n73\n``` write(' ' + '\\n * '.join(networks) + '\\n')\n```\n74\n``` write('Manually remove them.\\n')\n```\n74\n``` write(\"Clearing networks from %s..\\n\" % backend.clustername)\n```\n75\n``` for network in networks:\n```\n76\n``` backend_mod.delete_network(network=network, backend=backend)\n```\n77\n``` write(\"Successfully issued jobs to remove all networks.\\n\")\n```\n78\n\n79\n\n80\n```@transaction.commit_on_success\n```\n81\n```def delete_backend(backend):\n```\n82\n``` # Get X-Lock\n```\n83\n``` backend = Backend.objects.select_for_update().get(id=backend.id)\n```\n84\n``` # Clear 'backend' field of 'deleted' VirtualMachines\n```\n85\n``` backend.virtual_machines.filter(deleted=True).update(backend=None)\n```\n86\n``` # Delete all BackendNetwork objects of this backend\n```\n87\n``` backend.networks.all().delete()\n```\n88\n``` backend.delete()\n```\n\nAlso available in: Unified diff" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.521408,"math_prob":0.6118599,"size":10074,"snap":"2022-27-2022-33","text_gpt3_token_len":2966,"char_repetition_ratio":0.14756703,"word_repetition_ratio":0.030683404,"special_character_ratio":0.35378203,"punctuation_ratio":0.18591224,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95610183,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-09T10:38:46Z\",\"WARC-Record-ID\":\"<urn:uuid:8ed9f5f4-4386-4a66-8be1-5fcde2e3f241>\",\"Content-Length\":\"45377\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c91bad4a-f0ad-44cd-98a8-eb0ce6420966>\",\"WARC-Concurrent-To\":\"<urn:uuid:999662c6-5d22-4a90-9620-f512e00754a6>\",\"WARC-IP-Address\":\"194.177.210.147\",\"WARC-Target-URI\":\"https://code.grnet.gr/projects/synnefo/repository/revisions/26515bc162dfce9ced4fcaadbf6653fa8ce7e4f8/diff\",\"WARC-Payload-Digest\":\"sha1:HQSVG5TEK66V7LOLWNQNPVJN4KSXKVDP\",\"WARC-Block-Digest\":\"sha1:6YJT5FCVJQYCKZA5YVHMS6B6IWZJJUJW\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570921.9_warc_CC-MAIN-20220809094531-20220809124531-00011.warc.gz\"}"}
https://blog.jverkamp.com/2015/12/08/advent-of-code-day-8/
[ "# Advent of Code: Day 8\n\nSource\n\nPart 1: Given an escaped string of the form \"\\xa8br\\x8bjr\\\"\", convert it to the escaped form: br js. Calculate the total difference of lengths between the former (16) and the latter (5).\n\nmemory_count = 0\nraw_count = 0\n\nfor line in sys.stdin:\nraw = line.strip()\nparsed = ast.literal_eval(raw) # This is probably cheating\n\nraw_count += len(raw)\nmemory_count += len(parsed)\n\nprint(raw_count - memory_count)\n\n\nFor a basic solution, we can cheat and use the ast module. It can interpret any Python literal, which includes escaped strings. Free!\n\nIf we actually want to do it ourselves, it’s straight forward enough to use regular expressions instead:\n\nmemory_count = 0\nraw_count = 0\n\npatterns = [\n(r'\\\\\"', '\"'),\n(r'\\\\\\\\', r'\\\\'),\n(r'\\\\x(\\d\\d)', chr),\n(r'^\"(.*)\"\\$', r'\\1'),\n]\n\nfor line in sys.stdin:\nparsed = raw = line.strip()\nfor src, dst in patterns:\nparsed = re.sub(src, dst, parsed)\n\nprint(raw, parsed)\n\nraw_count += len(raw)\nmemory_count += len(parsed)\n\nprint(raw_count - memory_count)\n\n\nOne interesting aspect is chr. That will convert a number such as \\x65 into the corresponding character A. It doesn’t really matter since we just want the count, but it’s kind of elegant.\n\nThere is a subtle bug in this, bonus points to anyone that can figure it out. But for the moment, it works great on the given test cases.\n\nPart 2: Do the opposite. Add another level of encoding such that \"\\xa8br\\x8bjr\\\"\" would become \\\"\\\\xa8br\\\\x8bjr\\\\\\\"\\\".\n\nraw_count = 0\nencoded_count = 0\n\nfor line in sys.stdin:\nraw = line.strip()\nencoded = re.sub(r'([\"\\\\])', r'\\\\\\1', raw)\n\nraw_count += len(raw)\nencoded_count += len(encoded) + 2 # Quotes are not included\n\nprint(encoded_count - raw_count)\n\n\nThis time since we don’t have different behavior for the different escaped characters, we can use a single regular expression.\n\nNot quite as interesting as Day 7, but still neat." ]
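A possible Part 2 counterpart to the post's `ast.literal_eval` shortcut (this is an editor-added sketch, not from the original post) is `json.dumps`, which adds the surrounding quotes and escapes `"` and `\` for us. It assumes the input lines are plain ASCII, so that no `\uXXXX` escaping kicks in:

```python
import json
import sys

raw_count = 0
encoded_count = 0

for line in sys.stdin:
    raw = line.strip()
    # json.dumps adds the surrounding quotes and escapes " and \,
    # so len(encoded) already includes the two extra quote characters.
    encoded = json.dumps(raw)

    raw_count += len(raw)
    encoded_count += len(encoded)

print(encoded_count - raw_count)
```

Because `json.dumps` already includes the surrounding quotes, no `+ 2` correction is needed, unlike the regex version above.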
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7797521,"math_prob":0.98235923,"size":1830,"snap":"2023-40-2023-50","text_gpt3_token_len":479,"char_repetition_ratio":0.13691129,"word_repetition_ratio":0.08304498,"special_character_ratio":0.29945356,"punctuation_ratio":0.15320334,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98339015,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-03T23:57:25Z\",\"WARC-Record-ID\":\"<urn:uuid:0f5e5351-b46c-409f-8c5e-68b01ee02272>\",\"Content-Length\":\"16261\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9531eb3d-6fbd-4878-a6f1-ba6f99727d5d>\",\"WARC-Concurrent-To\":\"<urn:uuid:6d726e99-7499-437b-82cc-34e70917f19f>\",\"WARC-IP-Address\":\"104.21.47.7\",\"WARC-Target-URI\":\"https://blog.jverkamp.com/2015/12/08/advent-of-code-day-8/\",\"WARC-Payload-Digest\":\"sha1:TQDJMT3HWXPQPTPXGXXN5GY7X74WBDDT\",\"WARC-Block-Digest\":\"sha1:RFWHNZ5GADJ646TREP3O6P5GGAJAU37K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511284.37_warc_CC-MAIN-20231003224357-20231004014357-00219.warc.gz\"}"}
https://artofproblemsolving.com/wiki/index.php/1994_AIME_Problems
[ "# 1994 AIME Problems\n\n 1994 AIME (Answer Key) Printable version | AoPS Contest Collections • PDF Instructions This is a 15-question, 3-hour examination. All answers are integers ranging from", null, "$000$ to", null, "$999$, inclusive. Your score will be the number of correct answers; i.e., there is neither partial credit nor a penalty for wrong answers. No aids other than scratch paper, graph paper, ruler, compass, and protractor are permitted. In particular, calculators and computers are not permitted. 1 • 2 • 3 • 4 • 5 • 6 • 7 • 8 • 9 • 10 • 11 • 12 • 13 • 14 • 15\n\n## Problem 1\n\nThe increasing sequence", null, "$3, 15, 24, 48, \\ldots\\,$ consists of those positive multiples of 3 that are one less than a perfect square. What is the remainder when the 1994th term of the sequence is divided by 1000?\n\n## Problem 2\n\nA circle with diameter", null, "$\\overline{PQ}\\,$ of length 10 is internally tangent at", null, "$P^{}_{}$ to a circle of radius 20. Square", null, "$ABCD\\,$ is constructed with", null, "$A\\,$ and", null, "$B\\,$ on the larger circle,", null, "$\\overline{CD}\\,$ tangent at", null, "$Q\\,$ to the smaller circle, and the smaller circle outside", null, "$ABCD\\,$. The length of", null, "$\\overline{AB}\\,$ can be written in the form", null, "$m + \\sqrt{n}\\,$, where", null, "$m\\,$ and", null, "$n\\,$ are integers. Find", null, "$m + n\\,$.\n\n## Problem 3\n\nThe function", null, "$f_{}^{}$ has the property that, for each real number", null, "$x,\\,$", null, "$f(x)+f(x-1) = x^2\\,$\n\n.\n\nIf", null, "$f(19)=94,\\,$ what is the remainder when", null, "$f(94)\\,$ is divided by 1000?\n\n## Problem 4\n\nFind the positive integer", null, "$n\\,$ for which", null, "$\\lfloor \\log_2{1}\\rfloor+\\lfloor\\log_2{2}\\rfloor+\\lfloor\\log_2{3}\\rfloor+\\cdots+\\lfloor\\log_2{n}\\rfloor=1994$\n\n.\n\n(For real", null, "$x\\,$,", null, "$\\lfloor x\\rfloor\\,$ is the greatest integer", null, "$\\le x.\\,$)\n\n## Problem 5\n\nGiven a positive integer", null, "$n\\,$, let", null, "$p(n)\\,$ be the product of the non-zero digits of", null, "$n\\,$. (If", null, "$n\\,$ has only one digit, then", null, "$p(n)\\,$ is equal to that digit.) Let", null, "$S=p(1)+p(2)+p(3)+\\cdots+p(999)$\n\n.\n\nWhat is the largest prime factor of", null, "$S\\,$?\n\n## Problem 6\n\nThe graphs of the equations", null, "$y=k, \\qquad y=\\sqrt{3}x+2k, \\qquad y=-\\sqrt{3}x+2k,$\n\nare drawn in the coordinate plane for", null, "$k=-10,-9,-8,\\ldots,9,10.\\,$ These 63 lines cut part of the plane into equilateral triangles of side", null, "$2/\\sqrt{3}$. How many such triangles are formed?\n\n## Problem 7\n\nFor certain ordered pairs", null, "$(a,b)\\,$ of real numbers, the system of equations", null, "$ax+by=1\\,$", null, "$x^2+y^2=50\\,$\n\nhas at least one solution, and each solution is an ordered pair", null, "$(x,y)\\,$ of integers. How many such ordered pairs", null, "$(a,b)\\,$ are there?\n\n## Problem 8\n\nThe points", null, "$(0,0)\\,$,", null, "$(a,11)\\,$, and", null, "$(b,37)\\,$ are the vertices of an equilateral triangle. Find the value of", null, "$ab\\,$.\n\n## Problem 9\n\nA solitaire game is played as follows. Six distinct pairs of matched tiles are placed in a bag. The player randomly draws tiles one at a time from the bag and retains them, except that matching tiles are put aside as soon as they appear in the player's hand. The game ends if the player ever holds three tiles, no two of which match; otherwise the drawing continues until the bag is empty. 
The probability that the bag will be emptied is", null, "$p/q,\\,$ where", null, "$p\\,$ and", null, "$q\\,$ are relatively prime positive integers. Find", null, "$p+q.\\,$\n\n## Problem 10\n\nIn triangle", null, "$ABC,\\,$ angle", null, "$C$ is a right angle and the altitude from", null, "$C\\,$ meets", null, "$\\overline{AB}\\,$ at", null, "$D.\\,$ The lengths of the sides of", null, "$\\triangle ABC\\,$ are integers,", null, "$BD=29^3,\\,$ and", null, "$\\cos B=m/n\\,$, where", null, "$m\\,$ and", null, "$n\\,$ are relatively prime positive integers. Find", null, "$m+n.\\,$\n\n## Problem 11\n\nNinety-four bricks, each measuring", null, "$4''\\times10''\\times19'',$ are to be stacked one on top of another to form a tower 94 bricks tall. Each brick can be oriented so it contributes", null, "$4''\\,$ or", null, "$10''\\,$ or", null, "$19''\\,$ to the total height of the tower. How many different tower heights can be achieved using all 94 of the bricks?\n\n## Problem 12\n\nA fenced, rectangular field measures 24 meters by 52 meters. An agricultural researcher has 1994 meters of fence that can be used for internal fencing to partition the field into congruent, square test plots. The entire field must be partitioned, and the sides of the squares must be parallel to the edges of the field. What is the largest number of square test plots into which the field can be partitioned using all or some of the 1994 meters of fence?\n\n## Problem 13\n\nThe equation", null, "$x^{10}+(13x-1)^{10}=0\\,$\n\nhas 10 complex roots", null, "$r_1, \\overline{r_1}, r_2, \\overline{r_2}, r_3, \\overline{r_3}, r_4, \\overline{r_4}, r_5, \\overline{r_5},\\,$ where the bar denotes complex conjugation. Find the value of", null, "$\\frac 1{r_1\\overline{r_1}}+\\frac 1{r_2\\overline{r_2}}+\\frac 1{r_3\\overline{r_3}}+\\frac 1{r_4\\overline{r_4}}+\\frac 1{r_5\\overline{r_5}}.$\n\n## Problem 14\n\nA beam of light strikes", null, "$\\overline{BC}\\,$ at point", null, "$C\\,$ with angle of incidence", null, "$\\alpha=19.94^\\circ\\,$ and reflects with an equal angle of reflection as shown. The light beam continues its path, reflecting off line segments", null, "$\\overline{AB}\\,$ and", null, "$\\overline{BC}\\,$ according to the rule: angle of incidence equals angle of reflection. Given that", null, "$\\beta=\\alpha/10=1.994^\\circ\\,$ and", null, "$AB=BC,\\,$ determine the number of times the light beam will bounce off the two line segments. Include the first reflection at", null, "$C\\,$ in your count.\n\n## Problem 15\n\nGiven a point", null, "$P^{}_{}$ on a triangular piece of paper", null, "$ABC,\\,$ consider the creases that are formed in the paper when", null, "$A, B,\\,$ and", null, "$C\\,$ are folded onto", null, "$P.\\,$ Let us call", null, "$P_{}^{}$ a fold point of", null, "$\\triangle ABC\\,$ if these creases, which number three unless", null, "$P^{}_{}$ is one of the vertices, do not intersect. Suppose that", null, "$AB=36, AC=72,\\,$ and", null, "$\\angle B=90^\\circ.\\,$ Then the area of the set of all fold points of", null, "$\\triangle ABC\\,$ can be written in the form", null, "$q\\pi-r\\sqrt{s},\\,$ where", null, "$q, r,\\,$ and", null, "$s\\,$ are positive integers and", null, "$s\\,$ is not divisible by the square of any prime. What is", null, "$q+r+s\\,$?\n\nThe problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions.", null, "" ]
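As a quick illustration of the sequence described in Problem 1 (positive multiples of 3 that are one less than a perfect square), the short script below — an illustrative addition, not part of the original problem page — reproduces the first terms given in the statement:

```python
from math import isqrt

# Positive multiples of 3 that are one less than a perfect square.
terms = [n for n in range(1, 5000)
         if n % 3 == 0 and isqrt(n + 1) ** 2 == n + 1]
print(terms[:4])  # first terms: [3, 15, 24, 48], matching the statement
```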
[ null, "https://latex.artofproblemsolving.com/d/c/7/dc7ddbca9440579298bf24e0f9bcf2c0457bfc1b.png ", null, "https://latex.artofproblemsolving.com/0/7/0/0705411d92671faf1fe5602cfef8f353c7f0f83d.png ", null, "https://latex.artofproblemsolving.com/2/a/b/2ab713bfe1a559759709a473d065f3e35200a969.png ", null, "https://latex.artofproblemsolving.com/2/c/e/2ce262328e769000a0067f6e49c679d8a5568b0f.png ", null, "https://latex.artofproblemsolving.com/c/d/d/cddf51f143406f1270d64001fa85b8f77d4f4c41.png ", null, "https://latex.artofproblemsolving.com/b/3/6/b3696b39a8059b0bc02991f9227ae363666dc619.png ", null, "https://latex.artofproblemsolving.com/7/2/e/72eb1fe97f5961e315048f86acfdad20707ea9e6.png ", null, "https://latex.artofproblemsolving.com/8/2/a/82a2c160f4d201115e2129a78e58d9733be6706d.png ", null, "https://latex.artofproblemsolving.com/5/7/f/57fd47825866523dcffac3678241de158fd5f763.png ", null, "https://latex.artofproblemsolving.com/b/f/9/bf95165a99b824d9749d1c045930f3812a9ba0fa.png ", null, "https://latex.artofproblemsolving.com/b/3/6/b3696b39a8059b0bc02991f9227ae363666dc619.png ", null, "https://latex.artofproblemsolving.com/6/c/d/6cde21d43fca31ff965c25786ef0a718644be777.png ", null, "https://latex.artofproblemsolving.com/7/e/c/7ecbf1f3061bf830e1b95f5e37ffdf7d812ea00b.png ", null, "https://latex.artofproblemsolving.com/7/0/b/70bf49cca6e1987b33f7a3b2da51a3c56527455a.png ", null, "https://latex.artofproblemsolving.com/3/4/9/349097a0033d438020f14d96381b189edc30c186.png ", null, "https://latex.artofproblemsolving.com/a/4/f/a4f55684c213ff9aad6407ae48b86c78e0402048.png ", null, "https://latex.artofproblemsolving.com/e/2/8/e28d55d74916c00a10b1559cfcc227d50746300c.png ", null, "https://latex.artofproblemsolving.com/7/0/0/700ec115e84c4b94392469cbd3c508d58f2f5dea.png ", null, "https://latex.artofproblemsolving.com/6/7/8/67827ba82e465559d945580a479bca7bcce2fdd9.png ", null, "https://latex.artofproblemsolving.com/5/f/0/5f03863aaeb17661fb771a07ac56914ca2cad717.png ", null, "https://latex.artofproblemsolving.com/4/6/e/46e5acb5b1235141474239dfcb625fcf13aa533c.png ", null, "https://latex.artofproblemsolving.com/3/4/9/349097a0033d438020f14d96381b189edc30c186.png ", null, "https://latex.artofproblemsolving.com/f/3/7/f37b096bac8263b45ed10c89504cd053281387e8.png ", null, "https://latex.artofproblemsolving.com/8/2/a/82abef56929c38f0bf8011a2e394189e76f871e3.png ", null, "https://latex.artofproblemsolving.com/b/8/a/b8a9d21a7fa717c0d0b69b58a8937ccf34b5cdd2.png ", null, "https://latex.artofproblemsolving.com/e/1/c/e1c4f748b5fcef16dccd65046b4471fe6393e6b6.png ", null, "https://latex.artofproblemsolving.com/3/4/9/349097a0033d438020f14d96381b189edc30c186.png ", null, "https://latex.artofproblemsolving.com/b/e/3/be3771e79b07990e2babca9fe6decb2949d4d22c.png ", null, "https://latex.artofproblemsolving.com/3/4/9/349097a0033d438020f14d96381b189edc30c186.png ", null, "https://latex.artofproblemsolving.com/3/4/9/349097a0033d438020f14d96381b189edc30c186.png ", null, "https://latex.artofproblemsolving.com/b/e/3/be3771e79b07990e2babca9fe6decb2949d4d22c.png ", null, "https://latex.artofproblemsolving.com/3/4/d/34d0f62dc5536d2351b2e6ecbf4226fcabf7ab4d.png ", null, "https://latex.artofproblemsolving.com/f/5/b/f5b139181121731731dd98b3986ee75d4d0ee231.png ", null, "https://latex.artofproblemsolving.com/1/4/2/1420d57ceea649c3bb54cffc8b0aae09465d2483.png ", null, "https://latex.artofproblemsolving.com/6/3/4/634cdadd448b6397fbcc1b1c64d5be2af2a8f6d9.png ", null, 
"https://latex.artofproblemsolving.com/7/4/7/747bb4f501f2fbef247b24b74f89ffdce7647679.png ", null, "https://latex.artofproblemsolving.com/6/8/1/681528fcee528d556225a6c1d13cd83c34b93b2a.png ", null, "https://latex.artofproblemsolving.com/9/4/b/94b5a3e0b1a726c775259827de5d37bb20d31b62.png ", null, "https://latex.artofproblemsolving.com/d/5/b/d5b1e8d558c3818bdd87c0b06261db4f6a0977fe.png ", null, "https://latex.artofproblemsolving.com/4/3/0/430ba8a0878316240b8d30e4ffbe8ed1a427d20b.png ", null, "https://latex.artofproblemsolving.com/6/8/1/681528fcee528d556225a6c1d13cd83c34b93b2a.png ", null, "https://latex.artofproblemsolving.com/6/0/9/609d6351f4e02623b093c20991d5ee5fa9e58537.png ", null, "https://latex.artofproblemsolving.com/8/a/6/8a69a16e199d79ffc83361a21d3d4c60af8622da.png ", null, "https://latex.artofproblemsolving.com/4/9/a/49a93c33ad5ebea88c180a14f6c270f8ef53056b.png ", null, "https://latex.artofproblemsolving.com/9/b/c/9bca49d189a9724f678b60101fa95b33c1d7d91a.png ", null, "https://latex.artofproblemsolving.com/5/6/6/56600199c1648ad7514a422c23030efa68954852.png ", null, "https://latex.artofproblemsolving.com/8/6/7/867d3cf1ccd39963c69682e451fc94c6c6a61832.png ", null, "https://latex.artofproblemsolving.com/6/8/c/68caa13da14e951c9950f358fa4d3d6e5a3ea0dc.png ", null, "https://latex.artofproblemsolving.com/1/2/5/125a04ca85ee4bd66ebb79013095ad847872a3fd.png ", null, "https://latex.artofproblemsolving.com/1/9/c/19ce43eacb173b610f4104e4efa33eca16e1c213.png ", null, "https://latex.artofproblemsolving.com/c/3/3/c3355896da590fc491a10150a50416687626d7cc.png ", null, "https://latex.artofproblemsolving.com/1/8/1/181fc7e4fc45bb8783db936cb7e663284b47b062.png ", null, "https://latex.artofproblemsolving.com/6/c/d/6cde21d43fca31ff965c25786ef0a718644be777.png ", null, "https://latex.artofproblemsolving.com/7/9/0/790afaf99abb61bd5b921bbf5e703fa95f49e0bf.png ", null, "https://latex.artofproblemsolving.com/b/d/7/bd750bbb25ede334c0796583ce94891b33c097cd.png ", null, "https://latex.artofproblemsolving.com/0/d/b/0db3dfa2516afa1a76def49230872fc1acb63d5b.png ", null, "https://latex.artofproblemsolving.com/2/d/c/2dcd43e4f9f746333ef0e4274222dc0478911257.png ", null, "https://latex.artofproblemsolving.com/7/0/b/70bf49cca6e1987b33f7a3b2da51a3c56527455a.png ", null, "https://latex.artofproblemsolving.com/3/4/9/349097a0033d438020f14d96381b189edc30c186.png ", null, "https://latex.artofproblemsolving.com/9/9/4/994bb8f7d1cabda58067818e3b26160f6f5c1bec.png ", null, "https://latex.artofproblemsolving.com/c/1/1/c1178764a281e6f219fa0b469fbf8fed1c5144e0.png ", null, "https://latex.artofproblemsolving.com/d/1/f/d1f47f7b63dd07e378d43256fe8a94715e37b2cb.png ", null, "https://latex.artofproblemsolving.com/7/4/4/7447a2c9f496b3cb9d370f0892c353d5943d83c0.png ", null, "https://latex.artofproblemsolving.com/2/c/4/2c49de52862624e5b2111208666759407a02426f.png ", null, "https://latex.artofproblemsolving.com/7/3/e/73ef5b72d76840f828b8eded9fe5a63bba1d2958.png ", null, "https://latex.artofproblemsolving.com/9/6/0/96085a8136e254533acb15da0fa3a66dd66f6675.png ", null, "https://latex.artofproblemsolving.com/3/d/f/3df94cc625c07e4055d82e8eef058b98b14fde99.png ", null, "https://latex.artofproblemsolving.com/4/f/d/4fd80f8ad4649e3b82084216a34f777b52b70828.png ", null, "https://latex.artofproblemsolving.com/1/8/1/181fc7e4fc45bb8783db936cb7e663284b47b062.png ", null, "https://latex.artofproblemsolving.com/e/e/a/eea4b479da62b8cd82eeca0aca7dae64ed16c79b.png ", null, "https://latex.artofproblemsolving.com/6/c/d/6cde21d43fca31ff965c25786ef0a718644be777.png 
", null, "https://latex.artofproblemsolving.com/4/f/d/4fd80f8ad4649e3b82084216a34f777b52b70828.png ", null, "https://latex.artofproblemsolving.com/d/9/5/d95f287912c07091be46d819ecd8c511a02e847e.png ", null, "https://latex.artofproblemsolving.com/f/a/e/faeee0ca1a758057707416e836dacc1bd753c723.png ", null, "https://latex.artofproblemsolving.com/1/8/1/181fc7e4fc45bb8783db936cb7e663284b47b062.png ", null, "https://latex.artofproblemsolving.com/c/d/d/cddf51f143406f1270d64001fa85b8f77d4f4c41.png ", null, "https://latex.artofproblemsolving.com/1/9/c/19ce43eacb173b610f4104e4efa33eca16e1c213.png ", null, "https://latex.artofproblemsolving.com/4/c/f/4cfa1387c468b3b6e150c8479d96da3bd9758359.png ", null, "https://latex.artofproblemsolving.com/1/8/1/181fc7e4fc45bb8783db936cb7e663284b47b062.png ", null, "https://latex.artofproblemsolving.com/7/d/8/7d811ff45503aead6eaa63c1d974cabad5d8b708.png ", null, "https://latex.artofproblemsolving.com/0/6/4/06428d1715d238d1dfbbab91b7eea03b31a03296.png ", null, "https://latex.artofproblemsolving.com/b/d/7/bd750bbb25ede334c0796583ce94891b33c097cd.png ", null, "https://latex.artofproblemsolving.com/c/d/d/cddf51f143406f1270d64001fa85b8f77d4f4c41.png ", null, "https://latex.artofproblemsolving.com/e/1/d/e1d665ce23075fb3aacf0478cc151ae3873ed674.png ", null, "https://latex.artofproblemsolving.com/9/c/b/9cb284ce40826f93c328f96c7768f2fb30da6bfa.png ", null, "https://latex.artofproblemsolving.com/b/d/7/bd750bbb25ede334c0796583ce94891b33c097cd.png ", null, "https://latex.artofproblemsolving.com/7/6/5/7651b9ab5b9fc9f47b8812b80c9a651cd848b57f.png ", null, "https://latex.artofproblemsolving.com/9/f/f/9ff841588893a95365248c8d51b7c557c9d2ed20.png ", null, "https://latex.artofproblemsolving.com/7/f/d/7fddfadb98c6dd6e144d34348d7e33ecbd2ef9f7.png ", null, "https://latex.artofproblemsolving.com/7/f/d/7fddfadb98c6dd6e144d34348d7e33ecbd2ef9f7.png ", null, "https://latex.artofproblemsolving.com/1/2/f/12f53fbf58712008af4f5e35ebd78eadb29ad51b.png ", null, "https://wiki-images.artofproblemsolving.com//8/8b/AMC_logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9036154,"math_prob":0.99991214,"size":4336,"snap":"2023-40-2023-50","text_gpt3_token_len":1018,"char_repetition_ratio":0.12165282,"word_repetition_ratio":0.085784316,"special_character_ratio":0.24423432,"punctuation_ratio":0.07793765,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99985254,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184],"im_url_duplicate_count":[null,null,null,null,null,null,null,6,null,null,null,null,null,null,null,null,null,6,null,6,null,null,null,null,null,6,null,null,null,null,null,null,null,null,null,null,null,10,null,null,null,null,null,null,null,6,null,null,null,10,null,10,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,6,null,null,null,10,null,10,null,10,null,null,null,null,null,null,null,null,null,null,null,10,null,null,null,10,null,10,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,10,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-26T12:41:37Z\",\"WARC-Record-ID\":\"<urn:uuid:ec653e4b-ec3f-4734-ae69-572abe22e5f7>\",\"Content-Length\":\"63281\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e04f3cd6-fc07-4d86-b3ba-b55e4e778b4e>\",\"WARC-Concurrent-To\":\"<urn:uuid:7d643535-0690-4e33-89b6-dcb2303d4ced>\",\"WARC-IP-Address\":\"172.67.69.208\",\"WARC-Target-URI\":\"https://artofproblemsolving.com/wiki/index.php/1994_AIME_Problems\",\"WARC-Payload-Digest\":\"sha1:IJ3YVDHBKD6MXTYXS7FF3XXCP6RNQ2RF\",\"WARC-Block-Digest\":\"sha1:UGK7RJ57IYCI2C4HP74IYLNJZVQVUCGX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510208.72_warc_CC-MAIN-20230926111439-20230926141439-00368.warc.gz\"}"}
http://www.math-play.com/addition-and-subtraction-soccer-game.html
[ "# Addition and Subtraction Soccer Game\n\nKids can practice addition and subtraction with small numbers by playing this interactive soccer game.\n\nIn this game students will discover that numbers can be written as the sums and differences of other numbers. The game can be played alone, in pairs, or in teams. This game can be played on computers, iPads, and other tablets. You do not need to install an app to play this game on the iPad.\n\nThe game is based on the following Common Core Math Standard:\nCCSS 1.OA.C.6\nAdd and subtract within 20, demonstrating fluency for addition and subtraction within 10. Use strategies such as counting on; making ten (e.g., 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14); decomposing a number leading to a ten (e.g., 13 - 4 = 13 - 3 - 1 = 10 - 1 = 9); using the relationship between addition and subtraction (e.g., knowing that 8 + 4 = 12, one knows 12 - 8 = 4); and creating equivalent but easier or known sums (e.g., adding 6 + 7 by creating the known equivalent 6 + 6 + 1 = 12 + 1 = 13)." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9447543,"math_prob":0.9907385,"size":1079,"snap":"2020-10-2020-16","text_gpt3_token_len":293,"char_repetition_ratio":0.12186047,"word_repetition_ratio":0.0,"special_character_ratio":0.3030584,"punctuation_ratio":0.15450644,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99724454,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-21T02:36:39Z\",\"WARC-Record-ID\":\"<urn:uuid:3f2567ee-a3dd-420b-9b3d-31327a98786c>\",\"Content-Length\":\"5301\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:71fb61c0-4e34-4b70-a53d-1834e9364fae>\",\"WARC-Concurrent-To\":\"<urn:uuid:7f370a93-951e-4cd0-be78-15dbe9a1768b>\",\"WARC-IP-Address\":\"104.24.119.157\",\"WARC-Target-URI\":\"http://www.math-play.com/addition-and-subtraction-soccer-game.html\",\"WARC-Payload-Digest\":\"sha1:WRCLSPVIK5RC3SWS4LT7NGSOEOWDKI4O\",\"WARC-Block-Digest\":\"sha1:MXJKCENUW4UUSS5VYWDVF6WGMTKJGNJE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145438.12_warc_CC-MAIN-20200221014826-20200221044826-00424.warc.gz\"}"}
https://www.mymathtables.com/arithmetic/properties-of-the-number-four.html
[ "# Characteristics of Number 4 | Properties of Number 4", null, "#", null, " Positive, Negative Facts of Number 4\n\nAn online number facts report for all numbers.\n\n# Significance of Number Four\n\n## Number Four\n\nIs the number 4 an Even Number or an Odd Number?\n\n4 is an even number\n\nIs the number 4 a Prime or a Composite Number?\n\n4 is a composite number\n\nIs 4 a Perfect Square or Not?\n\n4 is a perfect square number\n\nIs 4 a Palindrome Number or Not?\n\n4 is a palindrome number\n\nIs the number 4 a Deficient Number or Not?\n\n4 is a deficient number\n\nIs the number 4 an Ulam Number or Not?\n\n4 is an Ulam number\n\n## Number Four Negative Properties\n\nIs 4 a Perfect Cube or Not?\n\n4 is not a perfect cube number\n\nIs the number 4 a Factorial or Not?\n\n4 is not a factorial number\n\nIs the number 4 a Fibonacci Series Number or Not?\n\n4 is not a Fibonacci series number\n\nIs the number 4 a Perfect Number or Not?\n\n4 is not a perfect number\n\nIs the number 4 an Abundant Number or Not?\n\n4 is not an abundant number\n\nIs the number 4 a Catalan Number or Not?\n\n4 is not a Catalan number\n\nIs the number 4 a Triangular Number or Not?\n\n4 is not a triangular number\n\nIs the number 4 a Tetrahedral Number or Not?\n\n4 is not a tetrahedral number\n\nIs the number 4 an Amicable Pair Number or Not?\n\n4 is not an amicable pair number\n\nIs the number 4 a Twin Prime Number or Not?\n\n4 is not a twin prime number\n\nIs the number 4 a Lucky Number or Not?\n\n4 is not a lucky number\n\nIs the number 4 a Happy or a Sad Number?\n\n4 is a sad (unhappy) number\n\nIs the number 4 a Polite Number or an Impolite Number?\n\n4 is an impolite number\n\n## Number Four Value Properties\n\nWhat is the Square of 4?\n\n16\n\nWhat is the Cube of 4?\n\n64\n\nWhat are the Factors of 4?\n\n1, 2, 4\n\nWhat is the Square Root of 4?\n\n2\n\nWhat is the Cube Root of 4?\n\n1.5874010519681996\n\nWhat is the (natural) Log of 4?\n\n1.3862943611198906\n\nWhat is the Binary Representation of 4?\n\n100\n\nWhat is the Hexadecimal Representation of 4?\n\n4\n\nWhat is the Octal Representation of 4?\n\n4", null, "##", null, " Number Facts Generator", null, "#", null, " Other Number Facts", null, " More Pages\nTable of Cube Root\n1 to 12 Power Tables\nPower of 10\nPower of Positive 10\nPower of Negative 10\nTable of Square & Cube Root\n\n##", null, " Top Calculators ►\n\nOnline Algebra calculation, formulas, Digital calculation, Statistical calculation, Math Converters, Pet Age Calculator," ]
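The value properties listed above can be checked in a few lines; in particular, "Log of 4" on this page is the natural logarithm. The snippet below is an illustrative addition, not part of the original page:

```python
import math

n = 4
print(n ** 2, n ** 3)                               # 16 64
print([d for d in range(1, n + 1) if n % d == 0])   # [1, 2, 4]
print(math.sqrt(n))                                  # 2.0
print(n ** (1 / 3))                                  # ~1.5874010519682 (cube root)
print(math.log(n))                                   # 1.3862943611198906 (natural log)
print(bin(n)[2:], hex(n)[2:], oct(n)[2:])            # 100 4 4
```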
[ null, "https://www.mymathtables.com/img/adv.png", null, "https://www.mymathtables.com/img/headericon.png", null, "https://www.mymathtables.com/img/adv.png", null, "https://www.mymathtables.com/img/headericon.png", null, "https://www.mymathtables.com/img/adv.png", null, "https://www.mymathtables.com/img/headericon.png", null, "https://www.mymathtables.com/img/headericon.png", null, "https://www.mymathtables.com/img/headericon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.73105997,"math_prob":0.9936965,"size":2067,"snap":"2022-27-2022-33","text_gpt3_token_len":573,"char_repetition_ratio":0.30150267,"word_repetition_ratio":0.16037735,"special_character_ratio":0.2820513,"punctuation_ratio":0.07954545,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99518716,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-27T20:35:51Z\",\"WARC-Record-ID\":\"<urn:uuid:ea9bbaa4-eaaa-4277-83b7-254840e42df0>\",\"Content-Length\":\"20938\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7660cc95-538c-4529-a9b7-b486b21bb7de>\",\"WARC-Concurrent-To\":\"<urn:uuid:c8558862-3634-4db1-9341-d7ba3a322499>\",\"WARC-IP-Address\":\"160.153.61.72\",\"WARC-Target-URI\":\"https://www.mymathtables.com/arithmetic/properties-of-the-number-four.html\",\"WARC-Payload-Digest\":\"sha1:TS7XD3JJTF6WF66QJSJ5MLWNLW277JUT\",\"WARC-Block-Digest\":\"sha1:CLXKWZXYHWGMQVNCXO6F5SDA442VGQ5Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103341778.23_warc_CC-MAIN-20220627195131-20220627225131-00560.warc.gz\"}"}
https://www.geeksforgeeks.org/weighted-k-nn/
[ "# Weighted K-NN

• Difficulty Level : Easy
• Last Updated : 07 Apr, 2020

Weighted kNN is a modified version of k nearest neighbors. One of the many issues that affect the performance of the kNN algorithm is the choice of the hyperparameter k. If k is too small, the algorithm is more sensitive to outliers. If k is too large, the neighborhood may include too many points from other classes.
Another issue is the approach to combining the class labels. The simplest method is to take the majority vote, but this can be a problem if the nearest neighbors vary widely in their distance and the closest neighbors more reliably indicate the class of the object.

Intuition:
Consider the following training set", null, "The red labels indicate the class 0 points and the green labels indicate the class 1 points.
Consider the white point as the query point (the point whose class label has to be predicted)", null, "If we give the above dataset to a kNN-based classifier, the classifier will declare the query point to belong to class 0. But in the plot, it is clear that the point is closer to the class 1 points than to the class 0 points. To overcome this disadvantage, weighted kNN is used. In weighted kNN, the nearest k points are given a weight using a function called the kernel function. The intuition behind weighted kNN is to give more weight to the points which are nearby and less weight to the points which are farther away. Any function whose value decreases as the distance increases can be used as the kernel function for the weighted kNN classifier. The simplest such function is the inverse distance function.

Algorithm:

• Let L = { (xi, yi), i = 1, . . . , n } be a training set of observations xi with given class yi, and let x be a new observation (query point) whose class label y has to be predicted.
• Compute d(xi, x) for i = 1, . . . , n, the distance between the query point and every other point in the training set.
• Select D' ⊆ D, the set of k nearest training data points to the query point.
• Predict the class of the query point using distance-weighted voting, where v represents the class labels. Use the following formula", null, "Implementation:
Consider 0 as the label for class 0 and 1 as the label for class 1. Below is the implementation of the weighted-kNN algorithm.

## C/C++

```cpp
// C++ program to implement the
// weighted K nearest neighbour algorithm.
#include <bits/stdc++.h>
using namespace std;

struct Point
{
    int val;         // Class of point
    double x, y;     // Co-ordinate of point
    double distance; // Distance from test point
};

// Used to sort an array of points by increasing
// order of weighted distance
bool comparison(Point a, Point b)
{
    return (a.distance < b.distance);
}

// This function finds the classification of point p using
// the weighted k nearest neighbour algorithm. It assumes only
// two groups and returns 0 if p belongs to class 0, else
// 1 (belongs to class 1).
int weightedkNN(Point arr[], int n, int k, Point p)
{
    // Fill weighted distances of all points from p
    for (int i = 0; i < n; i++)
        arr[i].distance =
            (sqrt((arr[i].x - p.x) * (arr[i].x - p.x) +
                (arr[i].y - p.y) * (arr[i].y - p.y)));

    // Sort the Points by weighted distance from p
    sort(arr, arr + n, comparison);

    // Now consider the first k elements and only
    // two groups
    double freq1 = 0;     // weighted sum of group 0
    double freq2 = 0;     // weighted sum of group 1
    for (int i = 0; i < k; i++)
    {
        if (arr[i].val == 0)
            freq1 += double(1 / arr[i].distance);
        else if (arr[i].val == 1)
            freq2 += double(1 / arr[i].distance);
    }
    return (freq1 > freq2 ? 0 : 1);
}

// Driver code
int main()
{
    int n = 13; // Number of data points
    Point arr[n];

    arr[0].x = 0;    arr[0].y = 4;    arr[0].val = 0;
    arr[1].x = 1;    arr[1].y = 4.9;  arr[1].val = 0;
    arr[2].x = 1.6;  arr[2].y = 5.4;  arr[2].val = 0;
    arr[3].x = 2.2;  arr[3].y = 6;    arr[3].val = 0;
    arr[4].x = 2.8;  arr[4].y = 7;    arr[4].val = 0;
    arr[5].x = 3.2;  arr[5].y = 8;    arr[5].val = 0;
    arr[6].x = 3.4;  arr[6].y = 9;    arr[6].val = 0;
    arr[7].x = 1.8;  arr[7].y = 1;    arr[7].val = 1;
    arr[8].x = 2.2;  arr[8].y = 3;    arr[8].val = 1;
    arr[9].x = 3;    arr[9].y = 4;    arr[9].val = 1;
    arr[10].x = 4;   arr[10].y = 4.5; arr[10].val = 1;
    arr[11].x = 5;   arr[11].y = 5;   arr[11].val = 1;
    arr[12].x = 6;   arr[12].y = 5.5; arr[12].val = 1;

    /* Testing Point */
    Point p;
    p.x = 2;
    p.y = 4;

    // Parameter to decide the class of the query point
    int k = 5;
    printf("The value classified to query point"
           " is: %d.\n", weightedkNN(arr, n, k, p));
    return 0;
}
```

## Python3

```python
# Python3 program to implement the
# weighted K nearest neighbour algorithm.

import math

def weightedkNN(points, p, k=3):
    '''
    This function finds the classification of p using the
    weighted k nearest neighbour algorithm. It assumes only
    two classes and returns 0 if p belongs to class 0, else
    1 (belongs to class 1).

    Parameters -
        points : Dictionary of training points having two keys - 0 and 1.
            Each key has a list of training data points belonging to that class.

        p : A tuple, test data point of form (x, y)

        k : number of nearest neighbours to consider, default is 3
    '''

    distance = []
    for group in points:
        for feature in points[group]:

            # calculate the euclidean distance of p from training points
            euclidean_distance = math.sqrt((feature[0] - p[0]) ** 2 + (feature[1] - p[1]) ** 2)

            # Add a tuple of form (distance, group) in the distance list
            distance.append((euclidean_distance, group))

    # sort the distance list in ascending order
    # and select the first k distances
    distance = sorted(distance)[:k]

    freq1 = 0  # weighted sum of group 0
    freq2 = 0  # weighted sum of group 1

    for d in distance:
        if d[1] == 0:
            freq1 += (1 / d[0])

        elif d[1] == 1:
            freq2 += (1 / d[0])

    return 0 if freq1 > freq2 else 1

# Driver function
def main():

    # Dictionary of training points having two keys - 0 and 1
    # key 0 has the points that belong to class 0
    # key 1 has the points that belong to class 1

    points = {0: [(0, 4), (1, 4.9), (1.6, 5.4), (2.2, 6), (2.8, 7), (3.2, 8), (3.4, 9)],
              1: [(1.8, 1), (2.2, 3), (3, 4), (4, 4.5), (5, 5), (6, 5.5)]}

    # query point p(x, y)
    p = (2, 4)

    # Number of neighbours
    k = 5

    print("The value classified to query point is: {}".format(weightedkNN(points, p, k)))

if __name__ == '__main__':
    main()
```

Output:

```
The value classified to query point is: 1
```
" ]
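For comparison, the same inverse-distance weighting is available off the shelf in scikit-learn via `weights='distance'`. The sketch below is an editor-added illustration, not part of the original article; it reuses the article's 13 training points and the query point (2, 4) and is expected to agree with the article's output, although internal tie-handling may differ in edge cases:

```python
from sklearn.neighbors import KNeighborsClassifier

# Training points from the article: 7 points of class 0, then 6 points of class 1.
X = [(0, 4), (1, 4.9), (1.6, 5.4), (2.2, 6), (2.8, 7), (3.2, 8), (3.4, 9),
     (1.8, 1), (2.2, 3), (3, 4), (4, 4.5), (5, 5), (6, 5.5)]
y = [0] * 7 + [1] * 6

# weights='distance' weighs each of the k neighbours by 1/distance,
# matching the inverse-distance kernel used in the article.
clf = KNeighborsClassifier(n_neighbors=5, weights='distance')
clf.fit(X, y)
print(clf.predict([(2, 4)]))  # expected to agree with the article (class 1)
```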
[ null, "https://media.geeksforgeeks.org/wp-content/uploads/20190613212716/training_point.png", null, "https://media.geeksforgeeks.org/wp-content/uploads/20190613212412/download10.png", null, "https://media.geeksforgeeks.org/wp-content/uploads/20190613174426/Formula2.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.75617254,"math_prob":0.99713415,"size":6168,"snap":"2022-05-2022-21","text_gpt3_token_len":1968,"char_repetition_ratio":0.1646658,"word_repetition_ratio":0.08312552,"special_character_ratio":0.35116732,"punctuation_ratio":0.1897038,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99964964,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-16T18:01:03Z\",\"WARC-Record-ID\":\"<urn:uuid:d025362a-cd1a-42bc-b809-b2313c47e9ff>\",\"Content-Length\":\"151290\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1e4ffaf7-c7a2-4d7d-8e24-0e13606bd6ca>\",\"WARC-Concurrent-To\":\"<urn:uuid:a0589a4e-06be-4158-aaa9-23856480b678>\",\"WARC-IP-Address\":\"104.120.129.5\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/weighted-k-nn/\",\"WARC-Payload-Digest\":\"sha1:YOAQMAZZ24ML5HE5KDTSY5IIYQWPQVSS\",\"WARC-Block-Digest\":\"sha1:H6WXC4JPM3RJQNKR6EDD42LJKOZJ2UPS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662512229.26_warc_CC-MAIN-20220516172745-20220516202745-00386.warc.gz\"}"}
https://hess.copernicus.org/articles/23/2417/2019/
[ "https://doi.org/10.5194/hess-23-2417-2019\nhttps://doi.org/10.5194/hess-23-2417-2019", null, "# Regionalization with hierarchical hydrologic similarity and ex situ data in the context of groundwater recharge estimation at ungauged watersheds\n\nChing-Fu Chang and Yoram Rubin\nAbstract\n\nThere are various methods available for annual groundwater recharge estimation with in situ observations (i.e., observations obtained at the site/location of interest), but a great number of watersheds around the world still remain ungauged, i.e., without in situ observations of hydrologic responses. One approach for making estimates at ungauged watersheds is regionalization, namely, transferring information obtained at gauged watersheds to ungauged ones. The reliability of regionalization depends on (1) the underlying system of hydrologic similarity, i.e., the similarity in how watersheds respond to precipitation input, as well as (2) the approach by which information is transferred.\n\nIn this paper, we present a nested tree-based modeling approach for conditioning estimates of hydrologic responses at ungauged watersheds on ex situ data (i.e., data obtained at sites/locations other than the site/location of interest) while accounting for the uncertainties of the model parameters as well as the model structure. The approach is then integrated with a hypothesis of two-leveled hierarchical hydrologic similarity, where the higher level determines the relative importance of various watershed characteristics under different conditions and the lower level performs the regionalization and estimation of the hydrologic response of interest.\n\nWe apply the nested tree-based modeling approach to investigate the complicated relationship between mean annual groundwater recharge and watershed characteristics in a case study, and apply the hypothesis of hierarchical hydrologic similarity to explain the behavior of a dynamic hydrologic similarity system. Our findings reveal the decisive roles of soil available water content and aridity in hydrologic similarity at the regional and annual scales, as well as certain conditions under which it is risky to resort to climate variables for determining hydrologic similarity. These findings contribute to the understanding of the physical principles governing robust information transfer.\n\nShare\nDates\n1 Introduction\n\nGroundwater resources supply approximately 50 % of the drinking water and roughly 40 % of the irrigation water worldwide . Yet the groundwater has increasingly been depleted since the late 20th century . Therefore, groundwater recharge, here broadly defined as the replenishing of water to a groundwater reservoir, plays a critical role in sustainable water resource management . Several studies have reviewed and compared multiple methods for recharge estimation at a wide spectrum of temporal and spatial scales, including lysimeter tests, seepage tests, water table fluctuation, chemical and heat tracers, baseflow analysis, water budget, and numerical modeling . However, the aforementioned methods rely on in situ data, while many watersheds worldwide still remain effectively ungauged (i.e., ungauged, poorly gauged, or previously gauged) .\n\nThis fact leads us to a critical question: how can one estimate hydrologic responses without in situ data? Studying ungauged watersheds has been a popular research topic for more than a decade, especially since the Prediction in Ungauged Basins (PUB) initiative by the International Association of Hydrological Sciences (IAHS) . 
Facing the lack of in situ data, studies have attempted transferring ex situ information from gauged watersheds to ungauged ones; this data transfer is also termed “regionalization”. Regionalization has been applied to constrain the estimates of the parameters of hydrologic models (especially rainfall–runoff models), which could then be used to make predictions at ungauged watersheds . Such constraining is expected to lead to more accurate and precise estimates, and could be in the form of (1) relationships between model parameters and watershed characteristics, (2) subsets of the parameter space, or (3) plausible parameter values from models built for other hydrologically similar watersheds .\n\nHowever, the application of regionalization is not without challenges. One of the key factors of predictive uncertainty identified by the PUB initiative is the unsuitability of information transfer techniques, due to a lack of comparative studies across watersheds and a lack of understanding of the physical principles governing robust regionalization . Different regionalization techniques have been applied in different cases with different assumptions. For example, attempted a simple form of regionalization, where kernel density estimation was applied on recharge values obtained from various hydrologically similar sites, in order to build an ex situ prior distribution (i.e., a prior distribution conditioned on ex situ data). However, one limitation in was that hydrologic similarity was treated as a Boolean variable, and therefore, there was no way to systematically distinguish a highly similar site from a slightly similar site. To pursue this further in this study, we must ask the following question: how can we tell that two watersheds are hydrologically similar? applied Bayesian mixture clustering to watersheds across the eastern US. They found that spatial proximity was a valuable first indicator of hydrological similarity because it reflected strong climatic control in their study area. reported similar findings based on 913 French watersheds, despite acknowledging the lack of some key physical descriptors in their data set. However, attempted regionalization of hydrologic model parameters in eastern Australia, and suggested that spatial proximity was an unreliable metric of hydrological similarity. For their part, presented successful regionalization of hydrologic parameters based on geologic similarity at watersheds in the US Oregon Cascades, a mountain range that features geological heterogeneity. Although not directly shown, their findings also went against the use of applying spatial proximity, for they discussed the sharp contrasts in hydrology at proximal watersheds based primarily on geological differences. The indication from these findings is that, although spatial proximity is of practical importance due to its common use, its simplicity, and its demonstrated effectiveness in specific areas , it is not the true controlling factor, but rather a confounding factor.\n\nOne can resort to other physical characteristics of watersheds for the determination of hydrologic similarity. However, what those characteristics are may be a complicated question. 
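To make the ex situ prior mentioned above concrete, the sketch below builds a kernel-density prior from recharge values pooled at donor watersheds judged hydrologically similar to the ungauged site. This is an illustrative addition; the donor values are placeholders, not data from any cited study:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical mean annual recharge values [mm/yr] observed at donor watersheds.
donor_recharge = np.array([45.0, 60.0, 52.5, 70.0, 38.0, 55.0])

# Kernel density estimate of the donor sample = ex situ prior density.
ex_situ_prior = gaussian_kde(donor_recharge)

# Evaluate the prior on a grid (e.g. to plot it or to use it in Bayes' rule).
grid = np.linspace(0.0, 120.0, 241)
prior_density = ex_situ_prior(grid)
print(grid[np.argmax(prior_density)])  # mode of the ex situ prior
```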
tested the effect of combinations of neural-network-based classification techniques and regionalization techniques in Canada, and found that classifying watersheds before regionalization improves regionalization for streamflow, baseflow, and peak flow predictions, but also discovered that the best combination of techniques varied from one watershed to another. applied classification and regression tree to determine the relationship between catchment similarity and regionalization in the US, finding that the dominant controls of successful regionalization vary significantly with the spatial scale, with the region of interest, and with the objective function used. Similarly, found that different physiographic variables controlled various flow characteristics across Europe, showing how different descriptors could account for different dominant hydrologic processes and flow characteristics. These studies indicate an important challenge, that the factors determining hydrologic similarity may vary under different conditions, and a universal system of hydrologic similarity still remains unavailable. suggested an interesting perspective describing a dynamic hydrologic similarity system, where similarity and uniqueness are not mutually exclusive; rather, they suggested that hydrologic systems operate by gradually changing to different levels of organization in which their behaviors are partly unique and partly similar.\n\nIn this study, we would like to integrate the perspective in , that similarity and uniqueness are not mutually exclusive, into our regionalization framework for groundwater recharge estimation at ungauged watersheds. It is thus critical to identify a number of plausible controlling factors. Although few studies have directly identified the controlling factors, some insights can be learned from previous studies. For example, the effective recharge (i.e., the net source term in the groundwater flow equation) in a steady, depth-integrated, and unbounded groundwater flow was found to be correlated with the spatial distributions of transmissivity and hydraulic head . From a recharge-mechanism-based perspective, previous studies have also found a list of plausible controlling factors of recharge via recharge potential mapping . These variables include watershed topography, land cover, soil properties, and geology. At the regional scale, climate variables have been found to be among the primary controlling factors of groundwater table depth , mean annual groundwater recharge , and mean annual baseflow , the latter of which is often used as a surrogate of recharge under the steady-state assumption. Other examples include , who showed that evapotranspiration data provided more conditioning power and more uncertainty reduction than soil moisture data in long-term mean recharge estimation, and , who reported variations of the sensitivity of annual groundwater recharge to annual precipitation with aridity. Although these studies did not apply regionalization explicitly and did not target ungauged watersheds directly, their findings provide guidance for us to identify some watershed characteristics – especially climate variables – that might play an important role in the regionalization process for recharge estimation.\n\nGiven a set of watershed characteristics, the next important question is how the regionalization is carried out. 
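A stripped-down version of the "classify first, then regionalize" workflow discussed above is sketched here; the watershed attributes, recharge values, and cluster count are hypothetical placeholders rather than values from the cited studies:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical gauged watersheds: columns = [aridity index, soil available
# water content, mean slope]; one row per watershed.
attributes = np.array([[0.8, 0.12,  5.0],
                       [0.9, 0.10,  4.0],
                       [2.1, 0.05, 12.0],
                       [2.3, 0.06, 15.0],
                       [1.4, 0.09,  8.0],
                       [1.5, 0.08,  9.0]])
recharge = np.array([210.0, 190.0, 40.0, 35.0, 110.0, 120.0])  # mm/yr

# Step 1: classify watersheds into similarity groups.
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit(attributes)

# Step 2: regionalize - the ungauged watershed inherits information only
# from gauged watersheds in its own group (here, their mean recharge).
ungauged = np.array([[1.3, 0.09, 7.5]])
label = groups.predict(ungauged)[0]
donors = recharge[groups.labels_ == label]
print(label, donors.mean())
```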
provided a generic framework of regression regionalization, which involves a multi-objective optimization for calibration, a sensitivity analysis to determine the most important model parameters, and a final step relating watershed characteristics to model parameters. The framework is capable of assimilating information from exogenous variables and accounting for the interaction between parameters. However, the framework does not include a straightforward quantification of uncertainties in calibration and in regionalization. In comparison, Bayesian approaches offer a solution to the quantification of uncertainty by outputting conditional distributions. Despite the lack of in situ data, one can still apply Bayesian approaches to establish prior distributions that are informed by data from previous studies or well-established databases . More advanced pooling of information from multiple sampled sites has also been demonstrated with the application of Bayesian hierarchical models , which can account for both intra- and inter-site uncertainty of the parameters. However, the aforementioned Bayesian approaches have several disadvantages, including (1) requiring a system of hydrologic similarity that helps us decide which sampled sites or databases are suitable as “information donors”, (2) requiring known or assumed distributional forms of the parameters, and (3) difficulties in accounting for complicated and highly nonlinear dependence on exogenous variables. Adding onto the challenge is that uncertainty arises from a lack of knowledge about how to represent the watershed system in terms of both model structure and parameters (Beven2016). Uncertainty about the model structure has been identified and studied (e.g., Beven2006; Beven and Freer2001; Nowak et al.2010), but not under the context of ungauged watershed, regionalization, and hydrologic similarity. The lack of in situ data does not justify a presumed model structure: even without in situ data, the modeler can still consider simultaneously multiple potential model structures, instead of wrongly assuming a fixed structure .\n\nTo that end, the objectives of this study are 2-fold. First, to address the aforementioned challenges in regionalization technique, we propose a nested tree-based modeling approach, which features (1) nonlinear regression in order to model the predictor–response relationship, (2) full Bayesian quantification of parameter uncertainty, and (3) proposal–comparison-based consideration of model structure uncertainty. Second, we integrate the nested tree-based modeling approach with a hypothesis of hierarchical hydrologic similarity. We apply the approach to estimate a groundwater recharge signature at ungauged watersheds in a case study, and we invoke the hypothesis of hierarchical similarity to reveal the key controlling factors of a dynamic hydrologic similarity system, which could ultimately contribute to robust information transfer in future applications.\n\n2 Methodology\n\nThe data-driven, Bayesian, and nonlinear regression approach proposed in this study is powered by Bayesian Additive Regression Tree (BART) at its core. The details of BART, including the establishment of prior distribution (which we term prior), the calculation of likelihoods, and the posterior inference statistics, are well documented in and in . 
Here, we provide a brief conceptual introduction to the implementation and advantages of BART, as well as how BART is augmented in this study.\n\n## 2.1 BART\n\nConsider a fundamental problem of making inference about an unknown function that estimates a response variable of interest using a set of predictor variables. The general form of this problem can be expressed as follows:\n\n$R=\hat{R}+\epsilon=f(\boldsymbol{\theta},\mathbf{x})+\epsilon, \qquad (1)$\n\nwhere R is the response variable, f(⋅) is a model that outputs the estimate of the response variable, $\hat{R}$ is the estimate, θ is the vector of model parameters, x is the vector of predictors, and ϵ is a Gaussian white noise with finite variance, i.e., ϵ ∼ N(0, σ²). The observation of R is denoted by r. BART solves this problem by applying a Bayesian version of the additive ensemble tree model. To put it simply, BART can be understood as Bayesian inference done for many individual regression tree models. The main difference between typical regression tree models and BART is that the former is calibrated with data by searching for the best model parameters that lead to the least error, while the latter is conditioned on data by obtaining conditional distributions of model parameters via Bayesian inference.", null, "Figure 1: Schematic diagrams of (a) a regression tree model, (b) an ensemble tree model which consists of J additive regression tree models, and (c) the loop structure that BART uses to draw MCMC simulations (indexed by l), consisting of an inner loop for J additive regression tree models and an outer loop that continues until we have a total of L MCMC simulations after convergence toward a stationary distribution.\n\nTo understand BART, one first needs to understand the build-up of the additive ensemble tree model from individual classification and regression tree (CART) models (Breiman1984). A schematic diagram of a CART model is shown in Fig. 1a, which resembles an upside-down tree (root on top and leaves at the bottom). The root node of the tree represents the space spanned by the predictor(s). As one moves downward from root to leaves, the said space is recursively partitioned by a sequence of binary partitioning rules. This partitioning and the corresponding partitioning rules define the tree structure and can be represented by the tree structure variable, denoted by T. After partitioning, output response values are assigned to each and every leaf, where each leaf represents a partitioned subspace. These output values can be collectively denoted by M. A tree model can be fully defined by knowing its T and M.\n\nTo further improve the predictive performance of an individual CART, an additive ensemble tree model can be built as the sum of J individual trees (Fig. 1b), each of which has its tree structure (Tj, j=1, …, J) and its set of leaf values (Mj, j=1, …, J), shown as follows:\n\n$\hat{R}=f(\boldsymbol{\theta},\mathbf{x})=\sum_{j=1}^{J}g(T_{j},\mathbf{M}_{j},\mathbf{x}), \qquad (2)$\n\nwhere θ={T1, M1, …, TJ, MJ} and g(⋅) denotes an individual tree. The output of an additive ensemble tree model is the sum of the outputs from the J trees.\n\nAs mentioned above, instead of searching for the best Tj and Mj for every j that lead to the least error, BART takes on a different way of model fitting, the Bayesian way.
It starts by defining the following joint prior of all the tree structures, all the sets of leaf values, and the variance of the white noise defined in Eq. (1):\n\n$p(T_{1},\mathbf{M}_{1},\ldots,T_{J},\mathbf{M}_{J},\sigma^{2})=p(\sigma^{2})\prod_{j=1}^{J}p(T_{j})\,P(\mathbf{M}_{j}\mid T_{j}). \qquad (3)$\n\nBART then applies a tailored version of the backfitting Markov chain Monte Carlo (MCMC) simulation algorithm to condition the prior on the response data (r), where backfitting means the jth tree model is iteratively updated with its partial residual. The stationary distribution toward which the MCMC simulations converge is then used to approximate the true posterior distribution (which we term posterior):\n\n$p(T_{1},\mathbf{M}_{1},\ldots,T_{J},\mathbf{M}_{J},\sigma^{2}\mid r). \qquad (4)$\n\nA schematic diagram of the MCMC simulation iteration procedure is shown in Fig. 1c. Within each MCMC simulation, both Tj and Mj for the jth tree are iteratively simulated using a Metropolis-within-Gibbs sampler, illustrated by the loop in the blue circle in Fig. 1c. After simulating all the trees, the error variance (σ²) is simulated with a Gaussian-Gamma-conjugate Gibbs sampler. The sampling of σ² marks the end of one MCMC simulation. We can see by the loop in the red square in Fig. 1c that the MCMC simulation continues until the simulated values converge to a stationary distribution. These post-convergence simulated values approximate realizations from Eq. (4), and thus we approximate the true posterior in Eq. (4) by the stationary distribution obtained by MCMC simulation. At this point, we have reached a BART model that is conditioned on the response data, because all the BART parameters (tree structures, leaf node values, and the white noise variance) have been conditioned on the response data.\n\nGiven the aforementioned conditioned BART model, we now turn our attention to estimating a new response that was not included in the data on which the BART model was conditioned. This is done by inputting the vector of the new predictors, denoted by $\tilde{\mathbf{x}}$, into the predictor–response relationship we learned with the BART model. Firstly, Eq. (1) can be rewritten as\n\n$R\sim N(\hat{R},\sigma^{2}). \qquad (5)$\n\nBoth the mean and the variance in Eq. (5) are uncertain and have their respective posteriors. By combining Eqs.
(2) and (5), and after plugging in the post-convergence MCMC simulated values and $\tilde{\mathbf{x}}$, we obtain a plausible realization (indexed by the superscript l, l = 1, …, L) of the predictive distribution as follows:

$$N\left(\hat{R}^{(l)}, (\sigma^2)^{(l)}\right) = N\left(f\left(\boldsymbol{\theta}^{(l)}, \tilde{\mathbf{x}}\right), (\sigma^2)^{(l)}\right) = N\left(\sum_{j=1}^{J} g\left(T_j^{(l)}, \mathbf{M}_j^{(l)}, \tilde{\mathbf{x}}\right), (\sigma^2)^{(l)}\right). \qquad (6)$$

The collection of many plausible realizations yields an approximated posterior of predictive distributions. Thus, for the response of interest, we have now obtained a fully Bayesian Gaussian predictive model, where the mean and the variance have their respective posteriors.

## 2.2 Advantages and limitations of BART

The key advantage of BART is that it combines nonlinear regression for the predictor–response relationship with Bayesian inference, allowing for the determination of a full Bayesian posterior of the predictive distribution, rather than one or a few estimates/predictions.

The estimation and regionalization processes are data-driven. Prior knowledge of the underlying physics is only minimally accounted for, in terms of the composition of the predictor sets and the user-defined prior of the splitting rules (which are embedded in the tree structure variable, Tj). The underlying physics is inferred from the ex situ data by obtaining conditional simulations of the tree structures and the leaf nodes (similar to the calibration stage), and thus is implicitly embedded rather than explicitly defined. Therefore, the extent to which physics could be inferred is restricted by the training data, here the ex situ data, which is a common limitation of data-driven approaches.

However, in compensation, we avoid one disadvantage of the application of physically based models in the case of ungauged watersheds. The available data at an ungauged watershed are limited, and it is unrealistic to expect that certain watershed characteristics will be known. Data availability could hinder the implementation of powerful hydrologic models because some of the required model inputs may be unavailable at the ungauged watersheds. It is possible to treat missing inputs as parameters and run simulations to impute them, or to apply stochastic methods to estimate them. Nonetheless, the corresponding computational demand grows as a power law with the number and the plausible range of the missing inputs, which is of great practical importance when evaluating the pros and cons of an approach.

Note that in this study there is no intention to show the superiority of either the data-driven or the physically based approaches. As pointed out, the ultimate goal of predictions at ungauged watersheds is not to define parameters of a model, but rather to understand what behavior we should expect at the ungauged watersheds of interest. We have simply shown why our approach is suitable for ungauged watersheds.

## 2.3 Nested tree-based modeling approach

As shown above, BART offers an elegant way to account for model parameter uncertainty of an additive ensemble tree model.
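To make this concrete, the following minimal sketch shows how post-convergence draws of the kind entering Eq. (6) could be turned into a predictive ensemble and summarized; the array names and numbers are hypothetical stand-ins, not output of the study's models.

```python
import numpy as np

# Hypothetical post-convergence draws for one new predictor vector x~:
#   mean_draws[l]   ~ sum_j g(T_j^(l), M_j^(l), x~)   (the ensemble mean in Eq. 6)
#   sigma2_draws[l] ~ (sigma^2)^(l)                   (the noise-variance draw)
rng = np.random.default_rng(42)
L = 2000
mean_draws = 0.8 + 0.05 * rng.standard_normal(L)
sigma2_draws = 0.02 + 0.005 * rng.random(L)

# One predictive realization per draw: R^(l) ~ N(mean_draws[l], sigma2_draws[l])
predictive = rng.normal(mean_draws, np.sqrt(sigma2_draws))

# Any summary of interest can then be read off the ensemble, e.g. a 95 % interval
lower, upper = np.quantile(predictive, [0.025, 0.975])

# The spread of mean_draws and the typical size of sigma2_draws correspond to the
# estimate variance and predictive variance defined later in Sect. 3.5
estimate_variance = mean_draws.var()
predictive_variance = np.median(sigma2_draws)
```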
However, uncertainty exists not only for the model parameters, but also for the models themselves, i.e., the model structure uncertainty. A significant source of model structure uncertainty for BART could be the composition of the vector of predictors. Accounting for model structure uncertainty can be done by proposing a prior probability mass function over plausible BART models, which can then be evaluated and compared with each other. In the present study, we accomplish this by using a proposal–comparison procedure, which we term the nested tree-based modeling approach. The details are as follows.

We start by proposing K plausible BART models, denoted as Bk, k = 1, …, K, each of which is built using a unique set of predictors and is conditioned on available data. The model structure uncertainty is accounted for by obtaining a probability mass function of the K plausible BART models, denoted by p(Bk). The determination of p(Bk) can be informed by the data (namely, in an empirical Bayes way, where the prior is informed by the data). At each available data point, we evaluate the performance of the plausible BART models by a performance metric (a typical example is the mean squared error). Then, a label is given to each data point, indicating which BART model has the highest performance measured by that metric. Finally, we use a CART model to classify the data points based on their labels. The CART model outputs an empirical multinomial distribution of the K plausible BART models at each leaf. Thus, one can study the variation of p(Bk) with various predictors. A very simple example is illustrated in Fig. 2, where we compare the performances of two BART models (K = 2) using one predictor and a simple two-level classification tree. The predictor space is partitioned into the positive subspace and the negative subspace by the partitioning rule indicated in the diamond box. Thus, for any new data point with a positive predictor value, we would use p(B1) = 0.76 and p(B2) = 0.24 as the probability mass function of plausible models. In real applications, of course, one can use an arbitrary number of predictors to compare an arbitrary number of plausible BART models.

Figure 2. Schematic diagram of an example of nesting two BART models under a simple two-level CART model, using only one predictor. The partitioning rule is expressed in the diamond box, and the leaves are represented in blue boxes.

Up to this point, we have introduced the nested tree-based modeling approach, which is general and data-driven. For estimation purposes, one would be interested in accounting for model structure uncertainty by averaging the estimates over p(Bk), which can be done by invoking Bayesian model averaging. However, the capability of the nested tree-based modeling approach does not stop here, as the approach also outputs the variation of p(Bk) under various conditions. This could be an indication of the behavior of a dynamic hydrologic similarity system, and will be explained in detail in Sect. 2.4.

## 2.4 Hypothesis of hierarchical similarity

To facilitate the interpretation of the variation of p(Bk), we propose a hypothesis of hierarchical similarity that has two levels.

1. The lower level is termed the predictor similarity, meaning that if two vectors of predictors are similar in some parts, their corresponding responses will be similar. In a hydrology context, if two watersheds have some similar characteristics, then their hydrologic responses will be similar.
This lower level corresponds to the BART models in the nested tree-based modeling approach.

2. The higher level is the regionalization similarity, meaning that if two vectors of predictors are similar in some parts, their corresponding predictor–response relationships will be similarly controlled. In a hydrology context, if two watersheds have some similar characteristics, then their hydrologic responses will be governed by similar functions/mechanisms. This higher level corresponds to the classification tree in the nested tree-based modeling approach.

Put simply, regionalization similarity determines the predictor–predictor relationship and tells us which predictors to extract information from, while predictor similarity determines the predictor–response relationship that actually estimates the response using the said extracted information. Note that the two sets of predictors respectively determining the two levels of similarity are not mutually exclusive: they may or may not overlap. To elaborate on the difference between the two levels of similarity, we present the following two example statements within the context of recharge estimation.

1. Systematic trends in recharge rates are often associated with climatic trends (Healy, 2010). This is a statement of predictor similarity, indicating a predictor–response relationship. One would be informed of the association of recharge rates with climatic variables.

2. In arid regions, focused recharge from ephemeral streams is often the dominant form of recharge (Healy, 2010). This is a statement of regionalization similarity, indicating a predictor–predictor relationship. One would be advised to pay more attention to the dominant factors of ephemeral streams if the study area of interest is in arid regions.

Figure 3. The study area includes (a) MRB 1 and (b) MRB 2 in the eastern US, colored by the estimated annual groundwater recharge in the year 2002 (Wolock, 2003). For the details of the delineation of MRBs, please refer to .

Having explained the hypothesis of hierarchical similarity, now suppose that we have gone through the process described in Sect. 2.3 and have obtained K plausible BART models and one CART model. Each plausible BART model was built with a unique set of predictors, and we use the BART models to explore predictor similarity with different predictor sets. Moving up a level, we use the classification tree to explore regionalization similarity by investigating the variation of p(Bk) under various conditions. Note that as the condition changes, the best-performing BART model may change, and so does the set of dominant predictors in the predictor–response relationship. This may explain why, under different conditions, hydrologic similarity may be controlled by different watershed characteristics. We test our hypothesis of hierarchical similarity in a case study, which will be explained in Sect. 3.

3 Case study

In this case study, we apply the methodology described in Sects. 2.1 through 2.4 to investigate the predictor similarity and the regionalization similarity in the study area, and to test the hypothesis of hierarchical similarity. It is important to note that this case study is not aimed at a thorough investigation of the recharge mechanism, nor is the goal obtaining the most accurate recharge estimates.
Rather, the primary goals are the demonstration of the power of our approach and showing how the approach helps us understand the dynamic behavior of hydrologic similarity in the study area. This section provides the details of the case study setup, including the watersheds, the recharge data, the watershed characteristics data, the partitioning of data, and the evaluation metrics.

## 3.1 Watersheds and recharge estimates

The conterminous US can be divided into eight major river basins (MRBs), each of which consists of thousands of watersheds. At each and every watershed, a watershed-averaged annual recharge estimate and watershed characteristics data are retrieved from publicly available databases, as will be described in the following subsections. In our work, the recharge estimates are used as the target response, while the characteristics are used as predictors in the regionalization process.

In 2002, annual groundwater recharge at each watershed was estimated via baseflow analyses by the US Geological Survey (USGS) (Wieczorek and LaMotte, 2010h; Wolock, 2003; also shown in Fig. 3). Streamflow-based estimation of recharge, such as baseflow analysis, is commonly used in humid regions. As put forward by , there are three key questions that should be carefully checked before applying baseflow analysis: (1) Is all recharging water eventually discharged into the stream where the baseflow is measured? (2) Do low flows consist entirely of groundwater discharge? (3) Does the contributing area of the aquifer differ significantly from that of the watershed? Without a rigorous proof, we make a working assumption about the reliability of baseflow analysis. Fortunately, from a post hoc check, the recharge estimates fall within the typical scales at which baseflow analysis is more suitable: a recharge scale from hundreds to thousands of millimeters per year, a spatial scale of hundreds of m² to hundreds of km², and temporal scales from months to decades.

Figure 4. Histograms of (a) annual recharge in 2002, (b) annual precipitation in 2002, (c) long-term average annual precipitation, (d) long-term average annual potential evapotranspiration, (e) normalized recharge, and (f) logit normalized recharge (LNR) at all the watersheds in MRB 1 and 2. The black curves are estimates of the distributions based on kernel density estimation.

The more arid US Midwest may have more pronounced localized recharge, which cannot be effectively captured by baseflow analysis. This, then, does not fit well with our working assumption. Therefore, following the suggestion of , our study area includes only the relatively humid eastern parts of the US, namely MRB 1 and 2 (Fig. 3). After excluding watersheds with less desirable data coverage, we consider a total of 3609 watersheds in MRB 1 and 7413 watersheds in MRB 2. The distributions of the recharge data from all the watersheds in the study area are shown in Fig. 4a.

## 3.2 Climate

At each watershed included in the study, the following data are retrieved from publicly available databases: the long-term average annual precipitation ($\bar{P}$) averaged from 1970 to 2000, the annual precipitation in the year 2002 (P), and the long-term average annual potential evapotranspiration (Ep) averaged from 1960 to 1990. Note that, limited by data availability, the averaging periods of $\bar{P}$ and Ep are different.
Thus, we also make a working assumption that at the decadal scale the averaged climate variables remain steady, with which we ignore the potential effect of climate change on the difference between the average from 1960 to 1990 and that from 1970 to 2000. Given the precipitation and evapotranspiration, we obtained two additional climate variables: the long-term aridity index, estimated as $\bar{\varphi} = E_\mathrm{p}/\bar{P}$, and the 2002 aridity index, estimated as $\varphi = E_\mathrm{p}/P$. Given that the recharge data are based on baseflow analysis for the year 2002, P and ϕ represent the climate controls of that same year, while $\bar{P}$, Ep, and $\bar{\varphi}$ represent climate controls over the long term. The distributions of P, $\bar{P}$, and Ep are shown in Fig. 4b–d, respectively.

### Normalization and transformation of recharge using precipitation

The annual recharge data (in volume of water per unit watershed area) can be normalized by P (also in volume of water per unit watershed area), as in Fig. 4e. This stems from the concept of water budgets and has been commonly used in hydrological studies worldwide. Here, we apply the logit transformation, $\mathrm{LNR} = \ln\!\left[(R/P)\,/\,(1 - R/P)\right]$, which is common for proportions or probabilities, to that normalized recharge, relaxing the physical bounds (0 and 1) of the values of the target variable (Fig. 4f). This step is advantageous as it opens the opportunity to estimate recharge with parametric statistical models without special accommodations for the bounds. Therefore, in this case study the logit normalized recharge (LNR) is used as the target response variable.

## 3.3 Non-climate watershed characteristics

We also consider various non-climate watershed characteristics in this study, including topography, land cover, soil properties, and geology. The land cover is based on data published in 2001, which we feel is close enough to 2002 to provide the appropriate information. The other characteristics are based on raw data obtained in different years before 2002; it is assumed that they remain steady at sub-century timescales. We provide the details of these watershed characteristics in the following subsections.

### 3.3.1 Topography and land cover

The topographic predictors are taken from publicly available databases; they are summarized in Table 1. The land cover variables are the percentages of watershed area corresponding to each land cover class; these are summarized in Table 2. The land cover classes are based on the 2001 National Land Cover Database (NLCD2001), the categories of which include water, developed land, barren land, forest, shrubland, herbaceous land, cultivated land, and wetland, with each having its own sub-classes. The details of NLCD2001 can be found in .

### 3.3.2 Soil property

The soil property predictors include watershed-scale statistics (e.g., average, upper bound, and lower bound) of soil properties; these are summarized in Table 3.
The spatial statistics of the soil properties within each watershed were computed from gridded source data values from the State Soil Geographic database (STATSGO), which were depth-averaged over all soil layers (Wolock, 1997).

Table 3. Soil property predictors.

* Spatial statistics calculated across the watershed.

### 3.3.3 Geology

The geology predictors used in this study were retrieved from publicly available databases, and they can be classified into two subcategories: surficial geology (surface sediment) and bedrock geology. As the predictors, we used fractions of the watershed area corresponding to each of the 45 surficial geology types and each of the 162 bedrock geology types. Details regarding each geology type can be found in and . Note that in geological terminology, rock type or rock composition data are referred to as lithology data. Compared to lithology, structural geology data might be more informative for groundwater studies (e.g., orientation, fracture properties, discontinuity). However, structural geology information usually requires in situ investigation, which cannot be expected at ungauged watersheds. Therefore, we consider only lithology data in this study.

## 3.4 Data partitioning

This section explains the setup of the holdout method specific to the case study, as well as the partitioning of the predictors into various subsets in order to evaluate the effects of different predictors.

### 3.4.1 Watershed partitioning

Because we cannot evaluate the predictive accuracy at real ungauged watersheds (due to the lack of in situ data to compare against), we adopt the holdout method to partition the watersheds described in Sect. 3.1 into two mutually exclusive subsets: the training watersheds and the testing watersheds. The testing watersheds will be treated as if they were ungauged, and we only condition the BART models on data from the training watersheds (which are the ex situ data with respect to the testing watersheds).

In this study, we define the watersheds in MRB 1 as the testing watersheds and the watersheds in MRB 2 as the training watersheds. The ex situ data (i.e., data in MRB 2) are used to fit multiple BART models, which are then used to obtain predictive distributions of LNR at all the testing watersheds. There are two reasons for this MRB-based data partitioning.

• For reasons touched on in Sect. 1, we do not consider spatial proximity as a predictor in this study. Separating the two MRBs partly ensures the exclusion of the confounding effect of spatial proximity, and thus the regionalization is based solely on the watershed characteristics.

• Considering the distributions of LNR (Fig. 4f), the range of values in MRB 2 fully covers the range of values in MRB 1. However, the reverse is not true. It is thus advantageous to train the models with MRB 2 to avoid poor model fitting due to lack of data coverage.

After partitioning the watersheds, we now turn our attention to the partitioning of predictors.

### 3.4.2 Predictor partitioning

As mentioned in Sect. 1, climate variables are among the most important factors in hydrologic similarity at the regional scale, but there might be other controlling factors to consider as well, and the dominance of climate variables may not always be present.
To investigate the various effects of different predictors, we conceptually divide the predictors into four sets: (1) climate controls that determine the input amount of water into the system, (2) surface controls that determine the distribution of water at the surface, (3) soil controls that determine the infiltration of water, and (4) lithology controls that indicate the properties of the aquifer. We further break up the first set into three subsets to investigate the effect of dimensionless predictors. Therefore, we define a total of six different predictor sets to build six unique BART models, indexed by k = 1, 2, …, 6 (Table 4).

Note that the determination of the six predictor sets is guided by a conceptual division of predictors and the idea of testing the relative importance of different categories of predictors under different conditions, instead of aiming for high accuracy and precision. Therefore, by no means is Table 4 an exhaustive list of all possible sets, nor does it necessarily include the best set that leads to the best predictive performance. The design of the six predictor sets simply facilitates the investigation of the effects of various categories of predictors on predictive accuracy and uncertainty.

### 3.4.3 The benchmark model: without any predictor

In addition to the six BART models, we also build a simple model by using the estimated distribution of LNR at the training watersheds via kernel density estimation, without considering any predictor. In other words, this simply uses the distribution of LNR at all the training watersheds as the predictive distribution. This is a model that ignores hydrologic similarity altogether, and it can be considered an extreme case of the ex situ prior in , with many more watersheds and much less stringent criteria of similarity. From this point forward, we refer to this model as the benchmark model, for it is used as a benchmark against which the BART models are compared.

## 3.5 Evaluation of predictive distributions

As mentioned in Sect. 2.3, we label each testing watershed by the best-performing model, where the performance is measured based on a metric. Thus, the metric with which we evaluate predictive distributions matters.

In this study, two different accuracy metrics are adopted. The first is the root mean squared error (RMSE), defined as

$$E_{i,k} = \sqrt{\frac{1}{L}\sum_{l=1}^{L}\left(\hat{R}_{i,k}^{(l)} - \tilde{r}_i\right)^2}, \qquad (7)$$

where $\tilde{r}_i$ is the LNR data at the ith testing watershed, and $E_{i,k}$ is the RMSE of the kth model at the ith testing watershed. Note that $\hat{R}_{i,k}^{(l)}$ is obtained by following Eq. (6), but subscripts are now added to indicate that we plug the predictors from the ith testing watershed into the kth model.
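As a minimal illustration of how this RMSE-based evaluation feeds the nesting step of Sect. 2.3, the sketch below computes Eq. (7) from stored post-convergence draws, labels each testing watershed by its best model, and fits a CART whose leaf-wise class frequencies play the role of p(Bk). All array names, shapes, and data are hypothetical stand-ins rather than the study's actual code.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical setup: n testing watersheds, K candidate BART models, L draws each
rng = np.random.default_rng(1)
n, K, L = 200, 3, 500
r_obs = rng.normal(size=n)                     # observed LNR, r~_i
mean_draws = rng.normal(size=(n, K, L))        # stand-in for R^_{i,k}^{(l)}
X = rng.normal(size=(n, 5))                    # watershed characteristics (predictors)

# Eq. (7): RMSE of each model at each watershed, averaged over the MCMC draws
E = np.sqrt(np.mean((mean_draws - r_obs[:, None, None]) ** 2, axis=2))   # shape (n, K)

# Label each watershed by its best-performing (lowest-RMSE) model
labels = np.argmin(E, axis=1)

# Nest the models under a CART classifier (Sect. 2.3): the class-probability output
# at each leaf is the empirical multinomial p(B_k) for that region of predictor space
cart = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20).fit(X, labels)
p_Bk = cart.predict_proba(X)                   # per-watershed p(B_k)
```

In practice it is the splitting rules of the fitted classifier, rather than the probabilities alone, that are interpreted in Sect. 4.2.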
The RMSE metric evaluates the predictive performance in an estimation problem, where we wish to obtain a "best estimate" of LNR with minimal expected error.

The second metric is the median log predictive probability density (LPD) at the value of the LNR observation, defined as

$$L_{i,k} = \underset{l=1,\dots,L}{\mathrm{median}}\left\{\ln\left[p\left(R = \tilde{r}_i \mid \hat{R}_{i,k}^{(l)}, (\sigma^2)_k^{(l)}\right)\right]\right\}, \qquad (8)$$

where $L_{i,k}$ is the LPD of the kth model at the ith testing watershed. The subscript of $(\sigma^2)_k^{(l)}$ indicates the kth model. This metric evaluates the predictive performance in a simulation problem, where we wish realizations from the predictive distributions to be close to the observation.

In addition to accuracy, we also quantify the predictive uncertainty. This is done by first recognizing the two components of uncertainty for the kth model at the ith testing watershed:

1. $\sigma_k^2$, which we refer to as the predictive variance, and which is approximated as the sample median of $(\sigma^2)_k^{(l)}$ over l = 1, …, L, and

2. the posterior variance of $\hat{R}_{i,k}$, which we refer to as the estimate variance, and which is approximated as the sample variance of $\hat{R}_{i,k}^{(l)}$ over l = 1, …, L.

The predictive variance indicates how informative the inferred predictor–response relationship is, while the estimate variance indicates how uncertain the said relationship is. In this case study we weigh the two components equally, as we wish to obtain an informative relationship with certainty. To that end, we define the total predictive variance as the sum of the two components, and use it as the metric of predictive uncertainty in this study.

4 Results

As discussed above, we built six BART models (Table 4) with ex situ data. In situ predictors were then fed into the models to yield posterior realizations of predictive distributions (Eq. 6). With the metrics of accuracy and uncertainty defined, we are then able to quantify the predictive performance of the BART models, and classify them based on either the RMSE-based labels or the LPD-based labels with the nested tree-based modeling approach. This allows for the investigation of the effects of various predictors under different conditions, which will be presented in this section.

## 4.1 Evaluation of predictive distributions

The following subsections present the effects of different predictor sets on predictive accuracy and uncertainty.

### 4.1.1 Predictive uncertainty

The effect of regionalization with the different predictor sets on predictive uncertainty is shown in Fig. 5. The estimate variance (Fig. 5a) represents how well the BART models capture the predictor–response relationships. We see that the geology predictors lead to the lowest estimate variance, probably because of the significantly larger number of predictors used (see Table 4). Yet there is a surprise in Fig. 5a: at k=1 and k=2 the estimate variances are generally quite low, despite the low number of predictors.
However, at k=3, the estimate variances increase significantly. Intuitively, since aridity is the ratio of evapotranspiration to precipitation, one would expect the variances at k=3 to be similar to, if not lower than, those at k=1 and k=2. One plausible explanation is that although aridity indices and precipitation/evapotranspiration carry ample information to be extracted and conditioned upon, the respective predictor–response relationships we get might be significantly different. When used together, the BART models were not able to formulate a universal relationship. This will be revisited in Sect. 5.3.

Figure 5. The box plots of the estimate variances at the testing watersheds (a), the bar plot of the predictive variances with 95 % intervals shown by the error bars (b), and the box plots of the total predictive variances at the testing watersheds (c). The red line indicates the variance of the benchmark model for comparison.

The predictive variance (Fig. 5b) represents how informative the predictor–response relationships are, which is a different aspect of uncertainty compared to the estimate variance. One could obtain a predictor–response relationship fairly confidently (low estimate variance) even though the relationship itself is less informative (high predictive variance), as found at k=6. The opposite case is that one could not confidently obtain a predictor–response relationship, but once that relationship is obtained it is quite informative, as found at k=5.

The total predictive variance (Fig. 5c) provides an overall metric that considers the above two sources of uncertainty. While the medians are rather similar, the spread of the box plots does vary significantly with k. The condensed box plots (e.g., k=1 and k=6) indicate that the total predictive variances are essentially constant throughout all testing watersheds, while the spread-out box plots (e.g., k=5) indicate that the effect of the predictors may vary significantly from one testing watershed to another. This indicates that there might not be one single predictor set that always leads to the lowest uncertainty, and thus the effects of predictors on predictive uncertainty may vary from one condition to another. That said, regardless of the testing watersheds and predictor sets, the total predictive variance is always lower than the variance of the benchmark model, which clearly shows that regionalization using watershed characteristics improves predictive precision.

### 4.1.2 Predictive accuracy

The effect of regionalization with the different predictor sets on RMSE is shown in Fig. 6. The RMSE of the benchmark model (Fig. 6a) at each testing watershed is simply the difference between the sample mean of the ex situ LNR data and the in situ LNR observation. For the BART models (Fig. 6b), it is calculated as the root of the average squared error over the post-convergence MCMC simulations.

Regardless of k, we see that, compared with the benchmark model, RMSE is reduced at half or more of the testing watersheds. Surprisingly, the largest overall RMSE reduction is observed when only the aridity indices are used for regionalization, indicating that at most of the watersheds tested in this study, aridity similarity implies LNR similarity at regional and annual scales to a high degree.
On the other hand, we observe some outliers that have high RMSE reduction at k=4 through k=6, indicating that topography, land cover, soil properties, and geology may not have an overall effect that is as strong, but under certain circumstances, they could still be important factors.

Figure 6. The box plot of the RMSE of the benchmark model at the testing watersheds (a) and the box plots of the RMSE reduction introduced by applying the BART models at the testing watersheds (b). The red line indicates zero RMSE reduction for comparison.

The effect of regionalization with different predictor sets on LPD is shown in Fig. 7. It is immediately clear that the accuracy improvement is not as prominent as that in Fig. 6. Only when k=1 is LPD increased at most of the watersheds. We also find that all of the distributions of LPD are heavily negatively skewed with a lot of outliers.

Figure 7. The box plot of the LPD of the benchmark model at the testing watersheds (a) and the box plots of the LPD increase introduced by applying the BART models at the testing watersheds (b). The red line indicates zero LPD increase, used for comparison.

Looking at Figs. 5 through 7 together, one can observe the different effects of the predictor sets on predictive accuracy, stemming from the different natures of an estimation and a simulation problem. From the point of view of the overall effect, for k=2 through k=5 (i.e., the predictors other than aridity indices), RMSE is reduced at more than half of the testing watersheds, but LPD does not increase to the same extent. This suggests that the predictive distributions are centered closer to the in situ observations due to regionalization, but that the conditioning also significantly reduces the predictive variances, causing the predictive distribution to be too narrow. Therefore, compared to a relatively flat, spread-out, and uninformative or weakly informative distribution, the predictive density decays too quickly when deviating from the predictive mean, resulting in low LPD. This might be a sign of over-conditioning or the disproportional reduction of predictive uncertainty, as exemplified in Fig. 8. The cyan curve is an example of an over-conditioned distribution. Although its mean is somewhat close to the true value, the small variance causes rapid decay of probability density; therefore, at the true value (red vertical line) the predictive density is no better than that of the weakly informative or uninformative distributions. How could this ever happen? Take k=5 in Fig. 5 as an example: the predictive variance is small, meaning that the predictive distribution should be rather peaked (just like the cyan curve in Fig. 8). The only way one can get a high predictive density is then to make the predictive mean close to the true value. Nonetheless, this would be very difficult at some of the watersheds where the estimate variance is large. The only predictor set that improves both RMSE and LPD at most of the testing watersheds is k=1, the aridity indices, and one could expect the corresponding predictive distributions to be somewhat similar to the case of the ideal dark blue curve in Fig. 8.

Over-conditioning can occur when model fitting or model calibration leads to constrained parameters that are, in fact, subject to different forms of model uncertainty, which is an indication of why the determination of p(Bk) is important.
In this case study, we focused more on the variation of p(Bk) under various conditions (to be shown shortly) and less on improving the estimates. However, in another application where the estimates are to be improved, model structure uncertainty should be and can be considered in order to refine the estimates (e.g., via Bayesian model averaging).

Figure 8. An example of over-conditioning: the probability density at the true value (indicated by the red vertical line) of the over-conditioned distribution is not higher than that of the non-informative distribution or that of the weakly informative distribution, not because the conditioning does not work, but because of the disproportional reduction of the variance of the distribution.

## 4.2 Regionalization similarity

The box plots in Figs. 5 through 7 showed different distributions of the predictive performance metrics for the different predictor sets. An interesting follow-up question here is how model performance varies with watershed characteristics. It was shown that, consistent with previous studies, aridity is indeed the most important controlling factor at regional and annual scales on average, but there are a few cases where this aridity dominance is replaced. In other words, how might we identify the conditions under which a specific predictor set could be more informative than others?

To investigate this further, we give each testing watershed two labels: the model with the lowest RMSE and the model with the highest LPD; we refer to these labels as the RMSE labels and the LPD labels, respectively. The possible values of each label include k=1 through k=6 and benchmark, representing the six BART models and the benchmark model, respectively. Then, using all the available predictors, we built two CART models to classify watersheds based on the RMSE labels (Fig. 9) and the LPD labels (Fig. 10).

### 4.2.1 Nesting by RMSE

Figure 9. CART model classifying the RMSE labels of the testing watersheds. Splitting rules are shown in white nodes, while leaf nodes are colored based on the classification results. For each leaf node, the brightness of the coded color indicates the node impurity (the brighter, the more impure), where impurity is defined as the probability that two randomly chosen watersheds within the node have different labels. On top of every node, in brackets, is the node number, provided for convenient referencing. The predictors in the splitting rules are expressed in code names for convenience; a reference table is provided in the upper right. For each leaf node, the model with the highest multinomial probability of having the best performance is shown first, which also determines the classification result, followed by the model with the second highest probability, also to indicate the impurity. Underneath each leaf node box is the number of watersheds belonging to the leaf. Note that the legend does not include benchmark because the benchmark model is never the best-performing model at any testing watershed. k=5 is marked as "unused" in the legend because there is no leaf node where p(B5) is the highest.

Figure 9 shows the variation of the top two best-performing BART models and the corresponding p(Bk) values under various conditions, where the performance of each BART model is defined by the RMSE. This variation indicates the regionalization similarity in the study area. At first glance, the available water content (AWC) stands out as the first indicator of regionalization similarity (Fig.
9, node 1): at watersheds with high AWC, aridity stands out as the dominant factor, which is consistent with the previous studies cited in Sect. 1. However, there is a potential risk if one uses aridity as the primary indicator of hydrologic similarity regardless of AWC. In previous studies, AWC was found to be an important predictor correlated with surface runoff, baseflow, and groundwater recharge, and it was among the most important parameters to which water balance models are sensitive (Finch, 1998). In the current study, we are not claiming that AWC cannot be a predictor; rather, we are suggesting a hierarchical structure in which AWC is placed – together with other predictors – to help estimate LNR at ungauged watersheds. Since AWC is governed by field capacity and wilting point, it is an indicator of the storage capacity of the soil for usable/consumable water: the larger the storage capacity, the higher the degree to which the system is supply-limited, thus pointing to aridity. If the storage capacity is low, on the other hand, the more complicated interplay among various predictors needs to be considered, and one cannot simply assume that aridity is the primary indicator of hydrologic similarity. We also found the soil organic matter content to be a quite competitive surrogate for AWC, meaning that if organic matter content were used here instead of AWC, we would end up with a slightly less accurate but overall similar classification. We conjecture that this is because of the high positive correlation between organic matter content and AWC (Hudson, 1994).

Further down the classification tree, watersheds with lower AWC are classified roughly as arid or humid watersheds by the long-term aridity index. For the more humid watersheds (Fig. 9, nodes 4 through 14), regionalization similarity is controlled by different predictors, but the dominant predictors for LNR estimation are almost always the climate variables (nodes 6, 8, 11, and 12, which contain 1576 watersheds in total). Only at a handful of watersheds (nodes 13 and 14, which contain only 185 watersheds in total) are aridity indices not dominant. However, some interesting conjectures can be made by taking a closer look at these two nodes.

Node 14 is a small but unique cluster, featuring watersheds that have low AWC, are humid, and have relatively homogeneous paragneiss and/or schist bedrock. Both of these bedrock types belong to the category of crystalline rock and often feature layering in a particular orientation. The groundwater movement in such a rock formation often depends on foliation, i.e., the rock breaking along approximately parallel surfaces, which affects the direction of the regional groundwater flow. Hence we observe a condition where the ample water supply cannot be substantially held by the soil due to low AWC, and the regional groundwater movement might be controlled by bedrock layering and foliation. Low AWC is an indication of less clayey soils, and implies that infiltration/percolation through the soil layer might be facilitated by relatively higher permeability. Water could thus easily enter the bedrock layer, which is rather horizontally homogeneous. To that end, the predictor sets other than k=6 become less informative, while the predictor set k=6 becomes relatively more informative. In fact, these watersheds are mostly the positive outliers at k=6 in Fig.
6b, where the predictive power of the geology predictors is at its best.

Node 13 features watersheds that have low AWC, are humid, are not dominated by homogeneous paragneiss and/or schist, have a relatively steep average slope, and have a large amount of annual precipitation. The low aridity is primarily driven by precipitation rather than evapotranspiration. In fact, these watersheds are mostly outliers featuring an extremely low aridity index (below 0.65) due to ample precipitation. Under such conditions, evapotranspiration is expected to operate at its full potential; i.e., it is shifting from a water-limited state to an energy-limited and canopy-controlled state. In addition, as evapotranspiration is near its full potential, the drainage of the excess precipitation would be controlled by the topography of the watershed (e.g., the slope and the sinuosity of the stream). Fast drainage leaves less water available for infiltration and recharge, and vice versa. To that end, the land cover type and topography now start to play a dominant role in hydrologic similarity. Node 20 is also worth pointing out here: it features watersheds that are relatively humid among the arid watersheds ($\bar{\varphi}$ in the range from 0.9 to 0.99) and have ample precipitation. The similarity of node 20 to node 13 supports our conjecture that the dominance of land cover and topography predictors is due to the precipitation-driven humid environment, which is relatively more capable of catering to the evapotranspiration water demand and features excess precipitation.

On the other side of the tree (Fig. 9, nodes 15 through 21), the resulting classification is quite diverse, and the impurity of each node is relatively high. Aridity no longer plays the dominant role, and the hierarchical similarity structure becomes complicated, so that it is difficult to make straightforward physical interpretations. The most important message we get is the significant risk one would face if one considers aridity, or any climate variable in general, as the primary indicator of hydrologic similarity when AWC is low and the aridity index is high. In summary, although climate predictors are still the most important ones on average, within the context of the hierarchical similarity we have identified certain conditions under which either non-climate predictors become dominant or no dominant predictor set can be straightforwardly identified, all of which contributes to the understanding of the dynamic hydrologic similarity.

### 4.2.2 Nesting by LPD

The classification of the LPD labels is shown in Fig. 10. In general, the root part of the classification tree (nodes 1 through 3) is quite similar to that found in Fig. 9, where AWC and long-term aridity define two sequential overarching separations of watersheds. However, further down the tree the leaf part is significantly different. The classification essentially leads to only three big clusters (Fig. 10, nodes 2, 7, and 9), and the other leaf nodes contain only a few watersheds. Node 9 features arid watersheds with low AWC, where we end up with a highly impure leaf node, and even the highest multinomial probability is only 0.27. No further splitting rule could significantly reduce the classification error. This supports our previous argument that when the aridity index is high and AWC is low, it is risky to resort to climate variables for hydrologic similarity, as it is difficult here to even identify a dominant predictor set.
As mentioned in Sect. 4.1.2, underestimation of the predictive variance ($\sigma_k^2$) leads to low LPD, and thus it is difficult to make physical interpretations of the results in Fig. 10, except for nodes 1 through 3, which are quite similar to their counterparts in Fig. 9. Therefore, with the LPD labels we are only able to identify the overarching regionalization similarity controlled by AWC and long-term aridity.

Figure 10. Same as Fig. 9, except that here the classification is done using the LPD labels. The predictors in the splitting rules are expressed in code names for convenience; please refer to the same reference table in Fig. 9.

RMSE and LPD represent views of predictive accuracy in an estimation problem and a simulation problem, respectively. Intuitively, if one only considers unimodal predictive distributions with limited skewness, a high predictive density at a value directly implies closeness of the distribution's central tendency to that value. However, the reverse is not necessarily true: either overestimation or underestimation of the variance might lead to low predictive density, even if the mean is close to the target value (e.g., Fig. 8). Based on whether RMSE or LPD is used as the accuracy metric – which implies the scope of LNR estimation – we can observe some common features as well as some distinctions in the structure of the hypothesized hierarchical similarity.

Fortunately, regardless of the metric of predictive accuracy, in both Figs. 9 and 10 the first three nodes are remarkably consistent, and the effect of the metric of predictive accuracy is only manifested at watersheds with low AWC. This supports the suggestion that AWC plays a pivotal role in hydrologic similarity for mean annual LNR estimation.

5 Discussion

In this section, we revisit the two research objectives pointed out in Sect. 1 by discussing the key features of the approach, the key findings from the case study, as well as the limitations of the case study.

## 5.1 The nested tree-based modeling approach

The nested tree-based modeling approach proposed in this study is essentially a coupling of BART and CART. As demonstrated in Sect. 2, both BART and CART are independent of the physical background and are purely data-driven machine learning techniques. Therefore, in principle, as long as there are data, the nested tree-based modeling approach is applicable like any other data-driven approach. However, one may argue that (1) the in-principle applicability does not set the nested tree-based modeling approach apart from other data-driven machine-learning approaches, and that (2) it would be counter-intuitive to advocate a data-driven approach with a seemingly data-rich case study (here "data-rich" refers to the fact that each MRB consists of thousands of watersheds; see Sect. 3.1) when the study actually emphasizes ungauged watersheds.

Our explanation starts with two significant advantages of the nested tree-based modeling approach. First of all, the greatest advantage of BART (as mentioned in Sect. 2.2) is that it outputs the posteriors of the model parameters, which in turn lead to posteriors of the target response. The advantage of having the posteriors is that the users/modelers can then derive the desired information at will, such as percentiles, moments, information gain, or the posterior means and variances, as was demonstrated in the case study.
Conditional simulation is also made easy when the posteriors are available, opening the door for Monte Carlo analyses. Second, given that one can obtain the statistics or representative metric of interest, the nesting of BART models under CART can be done with the said metric, resulting in the corresponding probability mass function of the plausible BART models. For example, the classification shown in Fig. 9 is based on RMSE, which is in turn based on the posterior mean values. This is essentially a proposal–comparison-based consideration of model structure uncertainty.

How do the aforementioned two advantages of the nested tree-based modeling approach justify its use at ungauged watersheds? First, the performance of the model of course depends on the quality and the quantity of the training data. In this sense all modeling approaches are the same, and applying BART does not disproportionately enhance the predictive accuracy when the data are limited. However, what sets BART apart is the Bayesian feature that accounts for model parameter uncertainty properly in the form of a conditional distribution, which cannot be done as easily with only a few point estimates or a few posterior statistics. Second, uncertainty exists not only for the model parameters, but also for the models themselves. The nested tree-based modeling approach can help us obtain an informed empirical probability mass function, p(Bk), of the plausible BART models (which was also exemplified in the case study). The fact that at ungauged watersheds in situ data are absent and ex situ data can be limited in quantity and/or quality accentuates the importance of uncertainty quantification, and the nested tree-based modeling approach offers a Bayesian solution to that, making itself not only applicable, but also advantageous, at ungauged watersheds.

One may then ask how a modeler would make an informed proposal of plausible BART models in the first place. This is where physical knowledge comes into play, and the proposal is indeed case specific. This is why we proposed the hypothesis of hierarchical similarity, which can be integrated with the nested tree-based modeling approach to study the behavior of a dynamic hydrologic similarity system, as was demonstrated with the case study. Unlike the generality and the merits of the nested tree-based modeling approach, our findings regarding the variation of p(Bk) and the shifts in dominant controlling factors of recharge are indeed specific to the context of the case study, which will be discussed next.

## 5.2 The hierarchical similarity hypothesis and the shift in dominant physical processes

With BART's ability to simultaneously model nonlinear and/or interaction effects and present uncertainty in a fully Bayesian fashion, we are able to show how the controlling factors of hydrologic similarity vary among different watersheds, among different conditions, and among different accuracy metrics. These are all manifested in the case study under the context of the hierarchical similarity hypothesis.

Climate variables have been identified as the dominant factors in previous studies (see Sect. 1), and they are indeed on average the most dominant factors in our case study.
However, the hierarchical similarity shows the potential risks if one resorts to climate variables to define hydrologic similarity without considering other physical watershed characteristics, especially the soil available water content.

The details of the hierarchical similarity are inferred from the data in the fashion of supervised machine learning, using six BART models and one benchmark model nested under one classification tree. It is of great importance to have two levels in such a system, as it allows for the identification of shifts of dominant factors under different conditions. These shifts indicate shifts in dominant physical processes, as exemplified by nodes 13 and 20 in Fig. 9, where we observed the shift from water-limited evapotranspiration to energy-limited evapotranspiration. Therefore, we conjecture that it is the shift in dominant physical processes that drives, and is thus reflected in, the shift in the controlling factors of hydrologic similarity under different conditions.

## 5.3 Limitations of the case study

Here, we discuss the limitations of the case study from the aspects of the data set, the target response, and the partitioning of data.

### 5.3.1 The scale of the target response

A major limitation of the case study is that the target hydrologic response is the logit normalized watershed-averaged annual groundwater recharge. This is a large-scale, spatiotemporally homogenized response, and in this study the data were based on baseflow analyses. To that end, a working assumption about the reliability of the baseflow analysis was made without rigorous proof (see Sect. 3.1). The findings of the case study are all under the context of this working assumption, and thus they should not be applied to recharge/LNR at other spatiotemporal scales or to other hydrologic responses without careful consideration.

### 5.3.2 The MRB-based partitioning of watersheds

Although we tried to justify the MRB-based partitioning by the reasons listed in Sect. 3.4.1, we acknowledge that this may not be the best partitioning method for demonstrating the full potential of the estimating power of BART. An associated limitation is identified, which stems from the data not covering a desirable range of values. An example was already presented in Sect. 3.4.1 and Fig. 4. As discussed in Sect. 5.1, the limitations in the data accentuate the advantage of our approach regarding the consideration of uncertainty, but it is also recognized that it could be challenging to discover the same findings if MRB 1 provided the training data for MRB 2, which is part of the reason why we kept the MRB-based partitioning.

Another case of lack of data coverage can be found in our climate predictor data. Since the aridity index is the ratio of potential evapotranspiration to precipitation ($\varphi = E_\mathrm{p}/P$), one might be surprised by the differences among the cases of k=1, k=2, and k=3 in the results. The main reason is revealed in Fig. 11. The Ep values at the training and testing watersheds are so distinct that, essentially, all the testing watersheds are outliers from the point of view of a BART model trained at the training watersheds. On the other hand, the ϕ values at the training and testing watersheds share the range from about 0.6 to 1.2, and only differ at the two extreme ends. In other words, the predictor–response relationships inferred by using ϕ can be transferred due to the overlapping range (Fig.
11c), but the relationships inferred using Ep > 1000 mm cannot be effectively transferred to watersheds with Ep < 1000 mm (Fig. 11b). Although it is not shown, a similar case can be found by comparing $\bar{\varphi}$ with Ep.

Although this might have been avoidable by using a more sophisticated design of cross-validation, we kept the MRB-based holdout method on purpose. In addition to the reasons explained in Sect. 3.4.1, another motivation is that, in reality, the data at hand come as they are. This means there is no guarantee that the measurements will cover a particular range or that the watershed characteristics of the ungauged watersheds of interest are within a desirable range. The prevailing superiority of ϕ and $\bar{\varphi}$ over P, $\bar{P}$, and Ep found in our results shows an important advantage of dimensionless predictors: they tend to be more transferable from one site to another, and hence they may be more suitable for studies targeting ungauged watersheds.

Figure 11. Distributions of (a) P, (b) Ep, and (c) ϕ at watersheds in MRB 1 (the testing watersheds) and MRB 2 (the training watersheds).

### 5.3.3 Limited temporal data coverage

Another limitation is the lack of temporal coverage. Given the limited data coverage along the time axis, in the case study we only studied the LNR in the year 2002, and we considered two types of climate predictors: those from the same year and those from the long-term average. However, the recharge process being highly nonlinear, it is not impossible that some predictors representing the antecedent conditions, such as precipitation from years prior to 2002, could affect the LNR in 2002. Not having multiple years of climate data prevents us from testing the effects of antecedent conditions or the effects that take place at various multi-year scales, and thus it is clearly a limitation of the case study. Because of this limitation, we made a steady-state working assumption (mentioned in Sect. 3.1), with which we assume that the effects of climate predictors from the previous years are captured by the long-term average predictors, and we also assume a negligible effect of climate change. While acknowledging that the inclusion of multiple years of climate data could have made an impact, note that the highly consistent roots of the trees in Figs. 9 and 10 are based on soil AWC and the long-term average aridity index, both of which are expected to be relatively insensitive to the inter-annual variation of climate predictors. Therefore, we expect the findings corresponding to the roots of the trees in Figs. 9 and 10 to be relatively less affected by the limitation of not having multiple years of climate data.

### 5.3.4 Non-comprehensive list of plausible models

The proposal of plausible BART models was guided by a conceptual understanding and grouping of the available predictors. As mentioned in Sect. 3.4.2, our proposal does not cover a comprehensive list of plausible models, nor does it necessarily include the "best" or "true" model.
The effect of different proposals of plausible BART models, which represent different perspectives on the conceptual understanding of the underlying physics, was not investigated in the case study, and remains an interesting follow-up that could be pursued in future studies.

## 6 Conclusions

In this work, we proposed a nested tree-based modeling approach with three key features: (1) full Bayesian quantification of parameter uncertainty, (2) nonlinear regression in order to model the predictor–response relationship, and (3) proposal–comparison-based consideration of model structure uncertainty. We applied the nested tree-based modeling approach to obtain logit normalized recharge estimates conditioned on ex situ data at ungauged watersheds in a case study in the eastern US. We hypothesized a hierarchical similarity to explain the variation of the probability mass function of plausible models, and thus to investigate the behavior of a dynamic hydrologic similarity system.

The findings of this study contribute to the understanding of the physical principles governing robust regionalization among watersheds. Firstly, consistent with previous studies, we found that the climate variables are on average the most important controlling factors of hydrologic similarity at regional and annual scales, which means a climate-based regionalization technique is on average more likely to result in better estimates. However, with our hierarchical similarity hypothesis we revealed certain conditions under which non-climate variables become more dominant than climate variables. In particular, we demonstrated how soil available water content stood out as the pivotal indicator of the variable importance of aridity in hydrologic similarity. Moreover, we showed that with hierarchical similarity one could identify shifts in dominant physical processes that reflect shifts in the controlling factors of hydrologic similarity under different conditions, such as water-limited evapotranspiration versus energy-limited evapotranspiration, or homogeneous and foliated bedrock versus heterogeneous bedrock. As the controlling factors change from one condition to another, the suitable regionalization technique also changes. We demonstrated how the hierarchical similarity hypothesis could indicate mechanisms by which available water content, aridity, and other watershed characteristics dynamically affect hydrologic similarity. The nested tree-based modeling approach can be applied to identify plausible sets of watershed characteristics to be considered in the regionalization process.

The contributions of this study may be viewed differently depending on individual cases. In a situation where groundwater recharge is the ultimate target variable at ungauged watersheds, the nested tree-based modeling approach offers a systematic way to obtain informative predictive distributions that are conditioned on ex situ data. In a different case, where recharge estimation at ungauged watersheds is but one component of a greater project, the aforementioned informative predictive distributions can be treated as informative ex situ priors, which could be further updated and/or integrated into simulation-based stochastic analyses where recharge is an input/component of other models/functions.
At ungauged watersheds that will become gauged in the foreseeable future, the informative predictive distributions again serve as informative ex situ priors that could guide the design of the sampling campaign, as different recharge flux magnitudes require different quantifying techniques. The hierarchical similarity hypothesis offers one plausible explanation of the dynamic nature of hydrologic similarity, which affects the application of regionalization. Lastly, it should be pointed out that the nested tree-based modeling approach is independent of the target response and the predictors of interest, so it could be integrated into future studies within or beyond the field of hydrology to study hierarchical predictor–response relationships.

## Data availability

The data used in this study are from publicly available databases. The potential evapotranspiration data are from the ENVIREM database, which can be accessed at https://doi.org/10.7302/Z2BR8Q40 or at http://envirem.github.io/. The rest of the data are from the United States Geological Survey (USGS) Digital Data Series, under issue identification DS-491. They can be accessed by searching "DS-491" in the USGS data catalog (https://data.usgs.gov) or alternatively at https://water.usgs.gov/nawqa/modeling/rf1attributes.html (USGS, 2019b).

## Author contributions

CFC designed the study, performed the analyses, and prepared the manuscript under the supervision of YR.

## Competing interests

The authors declare that they have no conflict of interest.

## Acknowledgements

For this study, Ching-Fu Chang was financially supported by the Jane Lewis Fellowship from the University of California, Berkeley. The authors thank Sally Thompson and Chris Paciorek for the inspiration of this study. The authors also appreciate the helpful comments from two anonymous reviewers.

## Review statement

This paper was edited by Mauro Giudici and reviewed by two anonymous referees.

## References

Arnold, J. G., Muttiah, R. S., Srinivasan, R., and Allen, P. M.: Regional estimation of base flow and groundwater recharge in the Upper Mississippi river basin, J. Hydrol., 227, 21–40, https://doi.org/10.1016/S0022-1694(99)00139-0, 2000. a

Beven, K. J.: A manifesto for the equifinality thesis, J. Hydrol., 320, 18–36, https://doi.org/10.1016/j.jhydrol.2005.07.007, 2006. a

Beven, K. J.: Facets of uncertainty: epistemic uncertainty, non-stationarity, likelihood, hypothesis testing, and communication, Hydrolog. Sci. J., 61, 1652–1665, https://doi.org/10.1080/02626667.2015.1031761, 2016. a

Beven, K. J. and Freer, J.: Equifinality, data assimilation, and uncertainty estimation in mechanistic modelling of complex environmental systems using the GLUE methodology, J. Hydrol., 249, 11–29, https://doi.org/10.1016/s0022-1694(01)00421-8, 2001. a

Beven, K. J., Smith, P. J., and Freer, J. E.: So just why would a modeller choose to be incoherent?, J. Hydrol., 354, 15–32, https://doi.org/10.1016/j.jhydrol.2008.02.007, 2008. a

Blöschl, G., Sivapalan, M., Wagener, T., Viglione, A., and Savenije, H.: Runoff prediction in ungauged basins: synthesis across processes, places and scales, Cambridge University Press, Cambridge, 2013. a

Brakebill, J. W. and Terziotti, S. E.: A Digital Hydrologic Network Supporting NAWQA MRB SPARROW Modeling – MRB_E2RF1WS, Report, U.S. Geological Survey, Reston, VA, 2011. a

Breiman, L.: Classification and regression trees, https://doi.org/10.1201/9781315139470, 1984. a

Chipman, H. A., George, E. I., and McCulloch, R.
E.: BART: Bayesian additive regression trees, Ann. Appl. Stat., 4, 266–298, https://doi.org/10.1214/09-AOAS285, 2010. a\n\nClawges, R. M. and Price, C. V.: Digital data set describing surficial geology in the conterminous U.S., US Geological Survey Open-File Report 99-77, US Geological Survey, Reston, Virginia, USA, 1999. a\n\nCucchi, K., Heße, F., Kawa, N., Wang, C., and Rubin, Y.: Ex-situ priors: A Bayesian hierarchical framework for defining informative prior distributions in hydrogeology, Adv. Water Resour., 126, 65–78, https://doi.org/10.1016/j.advwatres.2019.02.003, 2019. a\n\nde Vries, J. J. and Simmers, I.: Groundwater recharge: an overview of processes and challenges, Hydrogeol. J., 10, 5–17, https://doi.org/10.1007/s10040-001-0171-7, 2002. a, b\n\nFan, Y., Li, H., and Miguez-Macho, G.: Global Patterns of Groundwater Table Depth, Science, 339, 940–943, https://doi.org/10.1126/science.1229881, 2013. a\n\nFinch, J. W.: Estimating direct groundwater recharge using a simple water balance model – sensitivity to land surface parameters, J. Hydrol., 211, 112–125, https://doi.org/10.1016/S0022-1694(98)00225-X, 1998. a\n\nGelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. B.: Bayesian data analysis, in: vol. 2, Chapman & Hall/CRC, Boca Raton, FL, USA, 2014. a\n\nGemitzi, A., Ajami, H., and Richnow, H.-H.: Developing empirical monthly groundwater recharge equations based on modeling and remote sensing data – Modeling future groundwater recharge to predict potential climate change impacts, J. Hydrol., 546, 1–13, https://doi.org/10.1016/j.jhydrol.2017.01.005, 2017. a\n\nGibbs, M. S., Maier, H. R., and Dandy, G. C.: A generic framework for regression regionalization in ungauged catchments, Environ. Model. Softw., 27–28, 1–14, https://doi.org/10.1016/j.envsoft.2011.10.006, 2012. a\n\nHartmann, A., Gleeson, T., Wada, Y., and Wagener, T.: Enhanced groundwater recharge rates and altered recharge sensitivity to climate variability through subsurface heterogeneity, P. Natl. Acad. Sci. USA, 114, 2842–2847, 2017. a\n\nHealy, R. W.: Estimating groundwater recharge, Cambridge University Press, Cambridge, 2010. a, b, c, d, e\n\nHeppner, C. S., Nimmo, J. R., Folmar, G. J., Gburek, W. J., and Risser, D. W.: Multiple-methods investigation of recharge at a humid-region fractured rock site, Pennsylvania, USA, Hydrogeol. J., 15, 915–927, https://doi.org/10.1007/s10040-006-0149-6, 2007. a, b\n\nHomer, C., Dewitz, J., Fry, J., Coan, M., Hossain, N., Larson, C., Herold, N., McKerrow, A., VanDriel, J. N., and Wickham, J.: Completion of the 2001 national land cover database for the counterminous United States, Photogram. Eng. Remote Sens., 73, 337–341, 2007. a\n\nHou, Z. and Rubin, Y.: On minimum relative entropy concepts and prior compatibility issues in vadose zone inverse and forward modeling, Water Resour. Res., 41, W12425, https://doi.org/10.1029/2005WR004082, 2005. a\n\nHrachowitz, M., Savenije, H. H. G., Blöschl, G., McDonnell, J. J., Sivapalan, M., Pomeroy, J. W., Arheimer, B., Blume, T., Clark, M. P., Ehret, U., Fenicia, F., Freer, J. E., Gelfan, A., Gupta, H. V., Hughes, D. A., Hut, R. W., Montanari, A., Pande, S., Tetzlaff, D., Troch, P. A., Uhlenbrook, S., Wagener, T., Winsemius, H. C., Woods, R. A., Zehe, E., and Cudennec, C.: A decade of Predictions in Ungauged Basins (PUB) – a review, Hydrolog. Sci. J., 58, 1198–1255, https://doi.org/10.1080/02626667.2013.803183, 2013. a\n\nHudson, B. D.: Soil organic matter and available water capacity, J. Soil Water Conserv., 49, 189–194, 1994. a\n\nHutton, C. 
J., Kapelan, Z., Vamvakeridou-Lyroudia, L., and Savic, D.: Application of Formal and Informal Bayesian Methods for Water Distribution Hydraulic Model Calibration, J. Water Resour. Pl. Manage., 140, 04014030, https://doi.org/10.1061/(ASCE)WR.1943-5452.0000412, 2014. a\n\nKapelner, A. and Bleich, J.: bartMachine: Machine Learning with Bayesian Additive Regression Trees, J. Stat. Softw., 70, 1–40, https://doi.org/10.18637/jss.v070.i04, 2016. a\n\nKuczera, G.: Combining site-specific and regional information: An empirical Bayes Approach, Water Resour. Res., 18, 306–314, https://doi.org/10.1029/WR018i002p00306, 1982. a\n\nKuentz, A., Arheimer, B., Hundecha, Y., and Wagener, T.: Understanding hydrologic variability across Europe through catchment classification, Hydrol. Earth Syst. Sci., 21, 2863–2879, https://doi.org/10.5194/hess-21-2863-2017, 2017. a\n\nLi, X., Li, Y., Chang, C.-F., Tan, B., Chen, Z., Sege, J., Wang, C., and Rubin, Y.: Stochastic, goal-oriented rapid impact modeling of uncertainty and environmental impacts in poorly-sampled sites using ex-situ priors, Adv. Water Resour., 111, 174–191, https://doi.org/10.1016/j.advwatres.2017.11.008, 2018. a, b, c\n\nLoritz, R., Gupta, H., Jackisch, C., Westhoff, M., Kleidon, A., Ehret, U., and Zehe, E.: On the dynamic nature of hydrological similarity, Hydrol. Earth Syst. Sci., 22, 3663–3684, https://doi.org/10.5194/hess-22-3663-2018, 2018. a, b\n\nLoukas, A. and Vasiliades, L.: Streamflow simulation methods for ungauged and poorly gauged watersheds, Nat. Hazards Earth Syst. Sci., 14, 1641–1661, https://doi.org/10.5194/nhess-14-1641-2014, 2014. a\n\nMagruder, I. A., Woessner, W. W., and Running, S. W.: Ecohydrologic process modeling of mountain block groundwater recharge, Groundwater, 47, 774–785, https://doi.org/10.1111/j.1745-6584.2009.00615.x, 2009. a\n\nNaghibi, S. A., Pourghasemi, H. R., and Dixon, B.: GIS-based groundwater potential mapping using boosted regression tree, classification and regression tree, and random forest machine learning models in Iran, Environ. Monit. Assess., 188, 44–71, https://doi.org/10.1007/s10661-015-5049-6, 2015. a\n\nNational Ground Water Association: Facts About Global Groundwater Usage, available at: https://www.ngwa.org/what-is-groundwater/About-groundwater/facts-about-global-groundwater-usage (last access: 09 May 2019), 2016. a\n\nNolan, B. T., Healy, R. W., Taber, P. E., Perkins, K., Hitt, K. J., and Wolock, D. M.: Factors influencing ground-water recharge in the eastern United States, J. Hydrol., 332, 187–205, https://doi.org/10.1016/j.jhydrol.2006.06.029, 2007. a, b\n\nNowak, W., de Barros, F. P. J., and Rubin, Y.: Bayesian geostatistical design: Task-driven optimal site investigation when the geostatistical model is uncertain, Water Resour. Res., 46, W03535, https://doi.org/10.1029/2009WR008312, 2010. a\n\nObuobie, E., Diekkrueger, B., Agyekum, W., and Agodzo, S.: Groundwater level monitoring and recharge estimation in the White Volta River basin of Ghana, J. Afr. Earth Sci., 71–72, 80–86, https://doi.org/10.1016/j.jafrearsci.2012.06.005, 2012. a\n\nOudin, L., Andréassian, V., Perrin, C., Michel, C., and Le Moine, N.: Spatial proximity, physical similarity, regression and ungaged catchments: A comparison of regionalization approaches based on 913 French catchments, Water Resour. Res., 44, W03413, https://doi.org/10.1029/2007WR006240, 2008. a\n\nRahmati, O., Pourghasemi, H. R., and Melesse, A. 
M.: Application of GIS-based data driven random forest and maximum entropy models for groundwater potential mapping: A case study at Mehran Region, Iran, Catena, 137, 360–372, https://doi.org/10.1016/j.catena.2015.10.010, 2016. a\n\nRangarajan, R. and Athavale, R. N.: Annual replenishable ground water potential of India – an estimate based on injected tritium studies, J. Hydrol., 234, 38–53, https://doi.org/10.1016/S0022-1694(00)00239-0, 2000. a\n\nRazavi, T. and Coulibaly, P.: An evaluation of regionalization and watershed classification schemes for continuous daily streamflow prediction in ungauged watersheds, Canadian Water Resources Journal/Revue canadienne des ressources hydriques, 42, 2–20, https://doi.org/10.1080/07011784.2016.1184590, 2017. a, b, c\n\nR Core Team: R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria, available at: https://www.R-project.org/ (last access: 09 May 2019), 2018. a\n\nRubin, Y. and Dagan, G.: Stochastic identification of transmissivity and effective recharge in steady groundwater flow: 1. Theory, Water Resour. Res., 23, 1185–1192, https://doi.org/10.1029/WR023i007p01185, 1987a. a\n\nRubin, Y. and Dagan, G.: Stochastic identification of transmissivity and effective recharge in steady groundwater flow: 2. Case study, Water Resour. Res., 23, 1193–1200, https://doi.org/10.1029/WR023i007p01193, 1987b. a\n\nRubin, Y., Chang, C.-F., Chen, J., Cucchi, K., Harken, B., Heße, F., and Savoy, H.: Stochastic hydrogeology's biggest hurdles analyzed and its big blind spot, Hydrol. Earth Syst. Sci., 22, 5675–5695, https://doi.org/10.5194/hess-22-5675-2018, 2018. a\n\nRumsey, C. A., Miller, M. P., Susong, D. D., Tillman, F. D., and Anning, D. W.: Regional scale estimates of baseflow and factors influencing baseflow in the Upper Colorado River Basin, J. Hydrol.: Reg. Stud., 4, 91–107, https://doi.org/10.1016/j.ejrh.2015.04.008, 2015. a\n\nSawicz, K., Wagener, T., Sivapalan, M., Troch, P. A., and Carrillo, G.: Catchment classification: empirical analysis of hydrologic similarity based on catchment function in the eastern USA, Hydrol. Earth Syst. Sci., 15, 2895–2911, https://doi.org/10.5194/hess-15-2895-2011, 2011. a\n\nScanlon, B. R., Healy, R. W., and Cook, P. G.: Choosing appropriate techniques for quantifying groundwater recharge, Hydrogeol. J., 10, 18–39, https://doi.org/10.1007/s10040-001-0176-2, 2002. a, b, c, d\n\nSchruben, P. G. A., Bawiec, R. E., King, W. J., Beikman, P. B., and Helen, M.: Geology of the Conterminous United States at $\\mathrm{1}:\\mathrm{2},\\mathrm{500},\\mathrm{000}$ Scale – A Digital Representation of the 1974 PB King and HM Beikman Map, available at: https://pubs.usgs.gov/dds/dds11/ (last access: 09 May 2019), 1994. a\n\nSchwarz, G. E. and Alexander, R.: State soil geographic (STATSGO) data base for the conterminous United States, Report 2331-1258, available at: https://water.usgs.gov/GIS/metadata/usgswrd/XML/ussoils.xml (last access: 09 May 2019), 1995. a\n\nSheather, S. J. and Jones, M. C.: A Reliable Data-Based Bandwidth Selection Method for Kernel Density Estimation, J. Roy. Stat. Soc. B, 53, 683–690, 1991. a\n\nSingh, R., Archfield, S. A., and Wagener, T.: Identifying dominant controls on hydrologic parameter transfer from gauged to ungauged catchments – A comparative hydrology approach, J. Hydrol., 517, 985–996, https://doi.org/10.1016/j.jhydrol.2014.06.030, 2014. a, b, c\n\nSinghal, B. B. S. and Gupta, R. 
P.: Applied hydrogeology of fractured rocks, Springer Science & Business Media, Dordrecht, Netherlands, 2010. a\n\nSivapalan, M., Takeuchi, K., Franks, S. W., Gupta, V. K., Karambiri, H., Lakshmi, V., Liang, X., McDonnell, J. J., Mendiondo, E. M., O'Connell, P. E., Oki, T., Pomeroy, J. W., Schertzer, D., Uhlenbrook, S., and Zehe, E.: IAHS Decade on Predictions in Ungauged Basins (PUB), 2003–2012: Shaping an exciting future for the hydrological sciences, Hydrolog. Sci. J., 48, 857–880, https://doi.org/10.1623/hysj.48.6.857.51421, 2003. a\n\nSmith, T., Marshall, L., and Sharma, A.: Predicting hydrologic response through a hierarchical catchment knowledgebase: A Bayes empirical Bayes approach, Water Resour. Res., 50, 1189–1204, https://doi.org/10.1002/2013WR015079, 2014. a, b, c\n\nTague, C. L., Choate, J. S., and Grant, G.: Parameterizing sub-surface drainage with geology to improve modeling streamflow responses to climate in data limited environments, Hydrol. Earth Syst. Sci., 17, 341–354, https://doi.org/10.5194/hess-17-341-2013, 2013. a\n\nTakagi, M.: Evapotranspiration and deep percolation of a small catchment with a mature Japanese cypress plantation, J. Forest Res., 18, 73–81, https://doi.org/10.1007/s10310-011-0321-2, 2013. a\n\nTitle, P. O. and Bemmels, J. B.: ENVIREM: an expanded set of bioclimatic and topographic variables increases flexibility and improves performance of ecological niche modeling, Ecography, 41, 291–307, https://doi.org/10.1111/ecog.02880, 2017. a\n\nTitle, P. O. and Bemmels, J. B.: Environmental rasters for ecological modeling, available at: http://envirem.github.io/, last access: 10 May 2019. a\n\nUnited States Geological Survey: Locations of Regional Assessments of Streams and Rivers, available at: https://archive.usgs.gov/archive/sites/water.usgs.gov/nawqa/sparrow/mrb/ (last access: 29 June 2017), 2005. a, b\n\nUniversity of Michigan: Deep Blue Data repository, ENVIREM: ENVIronmental Rasters for Ecological Modeling version 1.0, https://doi.org/10.7302/Z2BR8Q40, 2019. a\n\nUSGS: USGS Science Data Catalog, available at: https://data.usgs.gov, last access: 10 May 2019a. a\n\nUSGS: Attributes for MRB_E2RF1 Catchments by Major River Basins in the Conterminous United States (DS-491), available at: https://water.usgs.gov/nawqa/modeling/rf1attributes.html, last access: 10 May 2019b. a\n\nWada, Y., van Beek, L. P. H., van Kempen, C. M., Reckman, J. W. T. M., Vasak, S., and Bierkens, M. F. P.: Global depletion of groundwater resources, Geophys. Res. Lett., 37, L20402, https://doi.org/10.1029/2010GL044571, 2010. a\n\nWagener, T. and Montanari, A.: Convergence of approaches toward reducing uncertainty in predictions in ungauged basins, Water Resour. Res., 47, W06301, https://doi.org/10.1029/2010WR009469, 2011. a, b\n\nWieczorek, M. E. and LaMotte, A. E.: Attributes for MRB_E2RF1 Catchments by Major River Basins in the Conterminous United States: 30-Year Average Annual Precipitation, 1971–2000, Report, US Geological Survey, Reston, Virginia, USA, 2010a. a\n\nWieczorek, M. E. and LaMotte, A. E.: Attributes for MRB_E2RF1 Catchments by Major Rivers Basins in the Conterminous United States: Total Precipitation, 2002, Report, US Geological Survey, Reston, Virginia, USA, 2010b. a\n\nWieczorek, M. E. and LaMotte, A. E.: Attributes for MRB_E2RF1 Catchments by Major River Basins in the Conterminous United States: Bedrock Geology, Report, US Geological Survey, Reston, Virginia, USA, 2010c. a, b, c\n\nWieczorek, M. E. and LaMotte, A. 
E.: Attributes for MRB_E2RF1 Catchments by Major River Basins in the Conterminous United States: Surficial Geology, Report, US Geological Survey, Reston, Virginia, USA, 2010d. a, b, c\n\nWieczorek, M. E. and LaMotte, A. E.: Attributes for MRB_E2RF1 Catchments by Major River Basins in the Conterminous United States: STATSGO Soil Characteristics, Report, US Geological Survey, Reston, Virginia, USA, 2010e. a\n\nWieczorek, M. E. and LaMotte, A.: Attributes for MRB_E2RF1 Catchments by Major River Basins in the Conterminous United States: NLCD 2001 Land Use and Land Cover, Report, US Geological Survey, Reston, Virginia, USA, 2010f.  a\n\nWieczorek, M. E. and LaMotte, A. E.: Attributes for MRB_E2RF1 Catchments by Major River Basins in the Conterminous United States: Basin Characteristics, 2002, Report, US Geological Survey, Reston, Virginia, USA, 2010g. a\n\nWieczorek, M. E. and LaMotte, A. E.: Attributes for MRB_E2RF1 Catchments by Major River Basins in the Conterminous United States: Estimated Mean Annual Natural Groundwater Recharge, 2002, Report, US Geological Survey, Reston, Virginia, USA, 2010h. a\n\nWolock, D. M.: STATSGO soil characteristics for the conterminous United States, Report 2331-1258, US Geological Survey, Reston, Virginia, USA, 1997. a\n\nWolock, D. M.: Estimated mean annual natural ground-water recharge in the conterminous United States, Report, US Geological Survey, Reston, Virginia, USA, 2003. a, b\n\nWoodbury, A. D.: Minimum relative entropy, Bayes and Kapur, Geophys. J. Int., 185, 181–189, https://doi.org/10.1111/j.1365-246x.2011.04932.x, 2011. a\n\nWoodbury, A. D. and Rubin, Y.: A Full-Bayesian Approach to parameter inference from tracer travel time moments and investigation of scale effects at the Cape Cod Experimental Site, Water Resour. Res., 36, 159–171, https://doi.org/10.1029/1999WR900273, 2000. a\n\nXie, Y., Cook, P. G., Simmons, C. T., Partington, D., Crosbie, R., and Batelaan, O.: Uncertainty of groundwater recharge estimated from a water and energy balance model, J. Hydrol., 561, 1081–1093, https://doi.org/10.1016/j.jhydrol.2017.08.010, 2017. a, b\n\nYang, F.-R., Lee, C.-H., Kung, W.-J., and Yeh, H.-F.: The impact of tunneling construction on the hydrogeological environment of “Tseng-Wen Reservoir Transbasin Diversion Project” in Taiwan, Eng. Geo., 103, 39–58, https://doi.org/10.1016/j.enggeo.2008.07.012, 2009. a\n\nYeh, H.-F., Lee, C.-H., Hsu, K.-C., and Chang, P.-H.: GIS for the assessment of the groundwater recharge potential zone, Environ. Geol., 58, 185–195, https://doi.org/10.1007/s00254-008-1504-9, 2009. a\n\nYeh, H.-F., Cheng, Y.-S., Lin, H.-I., and Lee, C.-H.: Mapping groundwater recharge potential zone using a GIS approach in Hualian River, Taiwan, Sustain. Environ. Res., 26, 33–43, https://doi.org/10.1016/j.serj.2015.09.005, 2016. a" ]
[ null, "https://hess.copernicus.org/articles/23/2417/2019/hess-23-2417-2019-avatar-thumb150.png", null, "https://hess.copernicus.org/articles/23/2417/2019/hess-23-2417-2019-f01-thumb.png", null, "https://hess.copernicus.org/articles/23/2417/2019/hess-23-2417-2019-f02-thumb.png", null, "https://hess.copernicus.org/articles/23/2417/2019/hess-23-2417-2019-f03-thumb.png", null, "https://hess.copernicus.org/articles/23/2417/2019/hess-23-2417-2019-f04-thumb.png", null, "https://hess.copernicus.org/articles/23/2417/2019/hess-23-2417-2019-t03-thumb.png", null, "https://hess.copernicus.org/articles/23/2417/2019/hess-23-2417-2019-f05-thumb.png", null, "https://hess.copernicus.org/articles/23/2417/2019/hess-23-2417-2019-f06-thumb.png", null, "https://hess.copernicus.org/articles/23/2417/2019/hess-23-2417-2019-f07-thumb.png", null, "https://hess.copernicus.org/articles/23/2417/2019/hess-23-2417-2019-f08-thumb.png", null, "https://hess.copernicus.org/articles/23/2417/2019/hess-23-2417-2019-f09-thumb.png", null, "https://hess.copernicus.org/articles/23/2417/2019/hess-23-2417-2019-f10-thumb.png", null, "https://hess.copernicus.org/articles/23/2417/2019/hess-23-2417-2019-f11-thumb.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8864113,"math_prob":0.87852967,"size":94407,"snap":"2023-14-2023-23","text_gpt3_token_len":21755,"char_repetition_ratio":0.15961356,"word_repetition_ratio":0.04948322,"special_character_ratio":0.22645566,"punctuation_ratio":0.18510859,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.953378,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,null,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-08T02:29:13Z\",\"WARC-Record-ID\":\"<urn:uuid:39081605-e0f5-46ba-8b22-d8e7f6bdb3e5>\",\"Content-Length\":\"376486\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2c379262-9432-4e53-b9c6-6d8f0c9bc58e>\",\"WARC-Concurrent-To\":\"<urn:uuid:4dad6056-d88d-4fa9-a0a0-214f65e07971>\",\"WARC-IP-Address\":\"81.3.21.103\",\"WARC-Target-URI\":\"https://hess.copernicus.org/articles/23/2417/2019/\",\"WARC-Payload-Digest\":\"sha1:EAOYMQTHVJRNOGPBM7G4IBTM44KXTR33\",\"WARC-Block-Digest\":\"sha1:T5RGYNBHCOCFDC2EJRQLRX4B6QVITP6C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224654031.92_warc_CC-MAIN-20230608003500-20230608033500-00321.warc.gz\"}"}
https://www.catalyzex.com/author/Michael%20Rosen
[ "", null, "# Michael Rosen\n\n## A model selection approach for clustering a multinomial sequence with non-negative factorization\n\nAug 14, 2015", null, "", null, "", null, "", null, "We consider a problem of clustering a sequence of multinomial observations by way of a model selection criterion. We propose a form of a penalty term for the model selection procedure. Our approach subsumes both the conventional AIC and BIC criteria but also extends the conventional criteria in a way that it can be applicable also to a sequence of sparse multinomial observations, where even within a same cluster, the number of multinomial trials may be different for different observations. In addition, as a preliminary estimation step to maximum likelihood estimation, and more generally, to maximum \\$L_{q}\\$ estimation, we propose to use reduced rank projection in combination with non-negative factorization. We motivate our approach by showing that our model selection criterion and preliminary estimation step yield consistent estimates under simplifying assumptions. We also illustrate our approach through numerical experiments using real and simulated data.\n\nVia", null, "## Techniques for clustering interaction data as a collection of graphs\n\nJan 10, 2015", null, "", null, "", null, "", null, "A natural approach to analyze interaction data of form \"what-connects-to-what-when\" is to create a time-series (or rather a sequence) of graphs through temporal discretization (bandwidth selection) and spatial discretization (vertex contraction). Such discretization together with non-negative factorization techniques can be useful for obtaining clustering of graphs. Motivating application of performing clustering of graphs (as opposed to vertex clustering) can be found in neuroscience and in social network analysis, and it can also be used to enhance community detection (i.e., vertex clustering) by way of conditioning on the cluster labels. In this paper, we formulate a problem of clustering of graphs as a model selection problem. Our approach involves information criteria, non-negative matrix factorization and singular value thresholding, and we illustrate our techniques using real and simulated data.\n\nVia", null, "## Automatic Dimension Selection for a Non-negative Factorization Approach to Clustering Multiple Random Graphs\n\nSep 09, 2014", null, "", null, "We consider a problem of grouping multiple graphs into several clusters using singular value thesholding and non-negative factorization. We derive a model selection information criterion to estimate the number of clusters. We demonstrate our approach using \"Swimmer data set\" as well as simulated data set, and compare its performance with two standard clustering algorithms.\n\n* This paper has been withdrawn by the author due to a newer version with overlapping contents\nVia", null, "" ]
[ null, "https://www.catalyzex.com/_next/image", null, "https://www.catalyzex.com/_next/image", null, "https://www.catalyzex.com/_next/image", null, "https://www.catalyzex.com/_next/image", null, "https://www.catalyzex.com/_next/image", null, "https://www.catalyzex.com/_next/image", null, "https://www.catalyzex.com/_next/image", null, "https://www.catalyzex.com/_next/image", null, "https://www.catalyzex.com/_next/image", null, "https://www.catalyzex.com/_next/image", null, "https://www.catalyzex.com/_next/image", null, "https://www.catalyzex.com/_next/image", null, "https://www.catalyzex.com/_next/image", null, "https://www.catalyzex.com/_next/image", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9069772,"math_prob":0.89482033,"size":2261,"snap":"2023-40-2023-50","text_gpt3_token_len":402,"char_repetition_ratio":0.12450155,"word_repetition_ratio":0.012461059,"special_character_ratio":0.16806723,"punctuation_ratio":0.07692308,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95538443,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-24T20:40:51Z\",\"WARC-Record-ID\":\"<urn:uuid:3581613e-e481-4de1-a474-dec2f10fe4ea>\",\"Content-Length\":\"76041\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a7dcf034-2ce2-463e-9b24-7e0ddaac7502>\",\"WARC-Concurrent-To\":\"<urn:uuid:c3c8e618-6259-4539-ab92-fe812feed6f9>\",\"WARC-IP-Address\":\"172.66.42.232\",\"WARC-Target-URI\":\"https://www.catalyzex.com/author/Michael%20Rosen\",\"WARC-Payload-Digest\":\"sha1:7K2PMZGB6S7W4YQHL5OGOD7W5JIHYADT\",\"WARC-Block-Digest\":\"sha1:VR3A5PDFZ3EBL7MARYJUI2TZPGNCU4YI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506669.30_warc_CC-MAIN-20230924191454-20230924221454-00181.warc.gz\"}"}
https://answers.everydaycalculation.com/lcm/28-4
[ "Solutions by everydaycalculation.com\n\n## What is the LCM of 28 and 4?\n\nThe LCM of 28 and 4 is 28.\n\n#### Steps to find LCM\n\n1. Find the prime factorization of 28\n28 = 2 × 2 × 7\n2. Find the prime factorization of 4\n4 = 2 × 2\n3. Multiply each factor the greater number of times it occurs in steps i) or ii) above to find the LCM:\n\nLCM = 2 × 2 × 7\n4. LCM = 28\n\nMathStep (Works offline)", null, "Download our mobile app and learn how to find LCM of upto four numbers in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.77344906,"math_prob":0.994755,"size":477,"snap":"2022-27-2022-33","text_gpt3_token_len":159,"char_repetition_ratio":0.118393235,"word_repetition_ratio":0.0,"special_character_ratio":0.38574424,"punctuation_ratio":0.08163265,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9957532,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-19T02:19:08Z\",\"WARC-Record-ID\":\"<urn:uuid:15a0f44b-4ccf-4499-bf6c-e481984857a4>\",\"Content-Length\":\"5751\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3a023b0a-4f52-4fb5-bd2b-08df5590e349>\",\"WARC-Concurrent-To\":\"<urn:uuid:43e4deda-3d92-4760-b3c5-f7f6119b2f4e>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/lcm/28-4\",\"WARC-Payload-Digest\":\"sha1:W4RZ7DW4LXWIXT6ITGR3SY5MNGVTCL4A\",\"WARC-Block-Digest\":\"sha1:5SCN6KIBLAIK5PFZAN56V6UIZEQBIQH6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573540.20_warc_CC-MAIN-20220819005802-20220819035802-00579.warc.gz\"}"}
http://www.fact-index.com/d/di/division_by_zero.html
[ "Main Page | See live article | Alphabetical index\n\n# Division by zero\n\nIn mathematics, the result of division by zero, such as a ÷ 0, is undefined and not allowed in real numbers and integers. The reason is the following: division ought to be the inverse operation of multiplication, which means that a ÷ b should be the solution x of bx = a, but for b=0 this has no solution if a≠0, and any x as solution if also a=0. In both cases a ÷ b can not be defined meaningfully.\n\nIn particular, within real numbers, it is incorrect to say that a ÷ 0 is infinity because infinity is not a real number and does not follow the rules for real numbers.\n\nAnother way to see why division by zero does not work is to work backwards from multiplication, remembering that anything multiplied by zero is zero. So\n\n2 × 0 = 0,\nwhich, if we are allowed to divide by zero, means that\n0 ÷ 0 = 2.\nBut\n4 × 0 = 0,\nso\n0 ÷ 0 = 4,\nsuggesting that 2 = 4, which is nonsense.\n\nIt is possible to disguise a division by zero in a long algebraic argument, leading to such things as a spurious proof that 2 equals 1.\n\nIt is both possible and meaningful to find the limit as x approaches 0 of some divisions by x; see l'Hopital's rule for some examples; see also indeterminate form.\n\nAlthough division by zero is undefined with real numbers and integers, it is possible to consistently define division by zero in other number systems. Zero divisors are frequently found in group theory and in hyperreal numbers and surreal numbers." ]
https://www.colorhexa.com/005c1b
[ "# #005c1b Color Information\n\nIn a RGB color space, hex #005c1b is composed of 0% red, 36.1% green and 10.6% blue. Whereas in a CMYK color space, it is composed of 100% cyan, 0% magenta, 70.7% yellow and 63.9% black. It has a hue angle of 137.6 degrees, a saturation of 100% and a lightness of 18%. #005c1b color hex could be obtained by blending #00b836 with #000000. Closest websafe color is: #006633.\n\n• R 0\n• G 36\n• B 11\nRGB color chart\n• C 100\n• M 0\n• Y 71\n• K 64\nCMYK color chart\n\n#005c1b color description : Very dark cyan - lime green.\n\n# #005c1b Color Conversion\n\nThe hexadecimal color #005c1b has RGB values of R:0, G:92, B:27 and CMYK values of C:1, M:0, Y:0.71, K:0.64. Its decimal value is 23579.\n\nHex triplet RGB Decimal 005c1b `#005c1b` 0, 92, 27 `rgb(0,92,27)` 0, 36.1, 10.6 `rgb(0%,36.1%,10.6%)` 100, 0, 71, 64 137.6°, 100, 18 `hsl(137.6,100%,18%)` 137.6°, 100, 36.1 006633 `#006633`\nCIE-LAB 33.42, -38.744, 29.783 4.025, 7.733, 2.317 0.286, 0.549, 7.733 33.42, 48.868, 142.45 33.42, -30.868, 34.667 27.808, -22.83, 14.525 00000000, 01011100, 00011011\n\n# Color Schemes with #005c1b\n\n• #005c1b\n``#005c1b` `rgb(0,92,27)``\n• #5c0041\n``#5c0041` `rgb(92,0,65)``\nComplementary Color\n• #135c00\n``#135c00` `rgb(19,92,0)``\n• #005c1b\n``#005c1b` `rgb(0,92,27)``\n• #005c49\n``#005c49` `rgb(0,92,73)``\nAnalogous Color\n• #5c0013\n``#5c0013` `rgb(92,0,19)``\n• #005c1b\n``#005c1b` `rgb(0,92,27)``\n• #49005c\n``#49005c` `rgb(73,0,92)``\nSplit Complementary Color\n• #5c1b00\n``#5c1b00` `rgb(92,27,0)``\n• #005c1b\n``#005c1b` `rgb(0,92,27)``\n• #1b005c\n``#1b005c` `rgb(27,0,92)``\n• #415c00\n``#415c00` `rgb(65,92,0)``\n• #005c1b\n``#005c1b` `rgb(0,92,27)``\n• #1b005c\n``#1b005c` `rgb(27,0,92)``\n• #5c0041\n``#5c0041` `rgb(92,0,65)``\n• #001005\n``#001005` `rgb(0,16,5)``\n• #00290c\n``#00290c` `rgb(0,41,12)``\n• #004314\n``#004314` `rgb(0,67,20)``\n• #005c1b\n``#005c1b` `rgb(0,92,27)``\n• #007622\n``#007622` `rgb(0,118,34)``\n• #008f2a\n``#008f2a` `rgb(0,143,42)``\n• #00a931\n``#00a931` `rgb(0,169,49)``\nMonochromatic Color\n\n# Alternatives to #005c1b\n\nBelow, you can see some colors close to #005c1b. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #005c04\n``#005c04` `rgb(0,92,4)``\n• #005c0c\n``#005c0c` `rgb(0,92,12)``\n• #005c13\n``#005c13` `rgb(0,92,19)``\n• #005c1b\n``#005c1b` `rgb(0,92,27)``\n• #005c23\n``#005c23` `rgb(0,92,35)``\n• #005c2a\n``#005c2a` `rgb(0,92,42)``\n• #005c32\n``#005c32` `rgb(0,92,50)``\nSimilar Colors\n\n# #005c1b Preview\n\nText with hexadecimal color #005c1b\n\nThis text has a font color of #005c1b.\n\n``<span style=\"color:#005c1b;\">Text here</span>``\n#005c1b background color\n\nThis paragraph has a background color of #005c1b.\n\n``<p style=\"background-color:#005c1b;\">Content here</p>``\n#005c1b border color\n\nThis element has a border color of #005c1b.\n\n``<div style=\"border:1px solid #005c1b;\">Content here</div>``\nCSS codes\n``.text {color:#005c1b;}``\n``.background {background-color:#005c1b;}``\n``.border {border:1px solid #005c1b;}``\n\n# Shades and Tints of #005c1b\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000e04 is the darkest color, while #f9fffb is the lightest one.\n\n• #000e04\n``#000e04` `rgb(0,14,4)``\n• #00210a\n``#00210a` `rgb(0,33,10)``\n• #00350f\n``#00350f` `rgb(0,53,15)``\n• #004815\n``#004815` `rgb(0,72,21)``\n• #005c1b\n``#005c1b` `rgb(0,92,27)``\n• #007021\n``#007021` `rgb(0,112,33)``\n• #008327\n``#008327` `rgb(0,131,39)``\n• #00972c\n``#00972c` `rgb(0,151,44)``\n• #00aa32\n``#00aa32` `rgb(0,170,50)``\n• #00be38\n``#00be38` `rgb(0,190,56)``\n• #00d23e\n``#00d23e` `rgb(0,210,62)``\n• #00e543\n``#00e543` `rgb(0,229,67)``\n• #00f949\n``#00f949` `rgb(0,249,73)``\n• #0eff54\n``#0eff54` `rgb(14,255,84)``\n• #21ff62\n``#21ff62` `rgb(33,255,98)``\n• #35ff70\n``#35ff70` `rgb(53,255,112)``\n• #48ff7e\n``#48ff7e` `rgb(72,255,126)``\n• #5cff8c\n``#5cff8c` `rgb(92,255,140)``\n• #70ff9a\n``#70ff9a` `rgb(112,255,154)``\n• #83ffa8\n``#83ffa8` `rgb(131,255,168)``\n• #97ffb5\n``#97ffb5` `rgb(151,255,181)``\n• #aaffc3\n``#aaffc3` `rgb(170,255,195)``\n• #beffd1\n``#beffd1` `rgb(190,255,209)``\n• #d2ffdf\n``#d2ffdf` `rgb(210,255,223)``\n• #e5ffed\n``#e5ffed` `rgb(229,255,237)``\n• #f9fffb\n``#f9fffb` `rgb(249,255,251)``\nTint Color Variation\n\n# Tones of #005c1b\n\nA tone is produced by adding gray to any pure hue. In this case, #2a322d is the less saturated color, while #005c1b is the most saturated one.\n\n• #2a322d\n``#2a322d` `rgb(42,50,45)``\n• #27352b\n``#27352b` `rgb(39,53,43)``\n• #23392a\n``#23392a` `rgb(35,57,42)``\n• #203c28\n``#203c28` `rgb(32,60,40)``\n• #1c4027\n``#1c4027` `rgb(28,64,39)``\n• #194325\n``#194325` `rgb(25,67,37)``\n• #154724\n``#154724` `rgb(21,71,36)``\n• #124a22\n``#124a22` `rgb(18,74,34)``\n• #0e4e21\n``#0e4e21` `rgb(14,78,33)``\n• #0b511f\n``#0b511f` `rgb(11,81,31)``\n• #07551e\n``#07551e` `rgb(7,85,30)``\n• #04581c\n``#04581c` `rgb(4,88,28)``\n• #005c1b\n``#005c1b` `rgb(0,92,27)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #005c1b is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
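The conversions listed above can be reproduced with a few lines of Python. This is an illustrative sketch, not code from the page: HSL comes from the standard-library colorsys module (which returns hue, lightness, and saturation as fractions of 1), and CMYK is computed with the usual formula since colorsys does not provide it:

```python
# Re-deriving the conversions for #005c1b with colorsys (HSL) and the common CMYK formula.
import colorsys

hex_code = "005c1b"
r, g, b = (int(hex_code[i:i + 2], 16) / 255 for i in (0, 2, 4))

h, l, s = colorsys.rgb_to_hls(r, g, b)          # note the H, L, S ordering
k = 1 - max(r, g, b)                            # assumes the color is not pure black
c, m, y = ((1 - ch - k) / (1 - k) for ch in (r, g, b))

print(f"RGB : {round(r*255)}, {round(g*255)}, {round(b*255)}")        # 0, 92, 27
print(f"HSL : {h*360:.1f} deg, {s*100:.0f}%, {l*100:.0f}%")           # ~137.6 deg, 100%, 18%
print(f"CMYK: {c*100:.0f}, {m*100:.0f}, {y*100:.0f}, {k*100:.0f}")    # ~100, 0, 71, 64
```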
https://findfilo.com/math-question-answers/third-term-of-an-a-p-is-16-and-7-th-term-is-12-morpe8
[ "", null, "Third term of an A.P. Is 16 and 7^{th} term is 12 more than 5^{th} | Filo", null, "", null, "Class 10\n\nMath\n\nAll topics\n\nArithmetic Progressions", null, "525\n\nThird term of an Is and term is more than term, then find .\n\nSolution: Let first term of is and common difference is and we know that nth term,\nGiven\n\nAccording to question,\n\nSubstituting value of in equation\n\nHence, required is", null, "525", null, "Connecting you to a tutor in 60 seconds.\n\nGet answers to your doubts.\n\nSimilar Topics\nintroduction to trigonometry\nfunctions\nsome applications of trigonometry\nquadratic equations\nsurface areas and volumes" ]
[ null, "https://www.facebook.com/tr", null, "https://findfilo.com/images/logo.svg", null, "https://findfilo.com/images/icons/navbar.png", null, "https://findfilo.com/images/icons/view.svg", null, "https://findfilo.com/images/icons/view.svg", null, "https://findfilo.com/images/common/mobile-widget.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7931324,"math_prob":0.9443009,"size":534,"snap":"2021-21-2021-25","text_gpt3_token_len":264,"char_repetition_ratio":0.1,"word_repetition_ratio":0.0,"special_character_ratio":0.4812734,"punctuation_ratio":0.24022347,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9986176,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-23T12:36:52Z\",\"WARC-Record-ID\":\"<urn:uuid:4e06b1a6-bb48-4bf1-83b4-7e546a8f50cf>\",\"Content-Length\":\"82604\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:77d3e8fa-fa96-4676-89bd-a56fa1f821dc>\",\"WARC-Concurrent-To\":\"<urn:uuid:dcbe2c1c-f95e-4e37-a238-976a3da50728>\",\"WARC-IP-Address\":\"34.117.94.82\",\"WARC-Target-URI\":\"https://findfilo.com/math-question-answers/third-term-of-an-a-p-is-16-and-7-th-term-is-12-morpe8\",\"WARC-Payload-Digest\":\"sha1:DF55UYDT7TUGIDJVPPVXYYY2GIQJVOWS\",\"WARC-Block-Digest\":\"sha1:RHWZC5KPE6ILJQRI37NMLLAQLB5EMS3Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488538041.86_warc_CC-MAIN-20210623103524-20210623133524-00186.warc.gz\"}"}
https://everything.explained.today/Real_algebraic_geometry/
[ "# Real algebraic geometry explained\n\nIn mathematics, real algebraic geometry is the sub-branch of algebraic geometry studying real algebraic sets, i.e. real-number solutions to algebraic equations with real-number coefficients, and mappings between them (in particular real polynomial mappings).\n\nSemialgebraic geometry is the study of semialgebraic sets, i.e. real-number solutions to algebraic inequalities with-real number coefficients, and mappings between them. The most natural mappings between semialgebraic sets are semialgebraic mappings, i.e., mappings whose graphs are semialgebraic sets.\n\n## Terminology\n\nNowadays the words 'semialgebraic geometry' and 'real algebraic geometry' are used as synonyms, because real algebraic sets cannot be studied seriously without the use of semialgebraic sets. For example, a projection of a real algebraic set along a coordinate axis need not be a real algebraic set, but it is always a semialgebraic set: this is the Tarski–Seidenberg theorem. Related fields are o-minimal theory and real analytic geometry.\n\nExamples: Real plane curves are examples of real algebraic sets and polyhedra are examples of semialgebraic sets. Real algebraic functions and Nash functions are examples of semialgebraic mappings. Piecewise polynomial mappings (see the Pierce–Birkhoff conjecture) are also semialgebraic mappings.\n\nComputational real algebraic geometry is concerned with the algorithmic aspects of real algebraic (and semialgebraic) geometry. The main algorithm is cylindrical algebraic decomposition. It is used to cut semialgebraic sets into nice pieces and to compute their projections.\n\nReal algebra is the part of algebra which is relevant to real algebraic (and semialgebraic) geometry. It is mostly concerned with the study of ordered fields and ordered rings (in particular real closed fields) and their applications to the study of positive polynomials and sums-of-squares of polynomials. (See Hilbert's 17th problem and Krivine's Positivestellensatz.) The relation of real algebra to real algebraic geometry is similar to the relation of commutative algebra to complex algebraic geometry. Related fields are the theory of moment problems, convex optimization, the theory of quadratic forms, valuation theory and model theory.\n\n## Timeline of real algebra and real algebraic geometry\n\n\\Rn\n\nwith trivial normal bundle, can be isotoped to a component of a nonsingular real algebraic subset of\n\n\\Rn\n\nwhich is a complete intersection (from the conclusion of this theorem the word \"component\" can not be removed).\n\nSn\n\nis the link of a real algebraic set with isolated singularity in\n\n\\Rn+1\n\n\n• 1981 Akbulut and King proved that every compact PL manifold is PL homeomorphic to a real algebraic set. \n• 1983 Akbulut and King introduced \"Topological Resolution Towers\" as topological models of real algebraic sets, from this they obtained new topological invariants of real algebraic sets, and topologically characterized all 3-dimensional algebraic sets. 
These invariants later generalized by Michel Coste and Krzysztof Kurdyka as well as Clint McCrory and Adam Parusiński.\n• 1984 Ludwig Bröcker's theorem on minimal generation of basic open semialgebraic sets (improved and extended to basic closed semialgebraic sets by Scheiderer.)\n• 1984 Benedetti and Dedo proved that not every closed smooth manifold is diffeomorphic to a totally algebraic nonsingular real algebraic set (totally algebraic means all its Z/2Z-homology cycles are represented by real algebraic subsets).\n• 1991 Akbulut and King proved that every closed smooth manifold is homeomorphic to a totally algebraic real algebraic set.\n• 1991 Schmüdgen's solution of the multidimensional moment problem for compact semialgebraic sets and related strict positivstellensatz. Algebraic proof found by Wörmann. Implies Reznick's version of Artin's theorem with uniform denominators.\n• 1992 Akbulut and King proved ambient versions of the Nash-Tognoli theorem: Every closed smooth submanifold of Rn is isotopic to the nonsingular points (component) of a real algebraic subset of Rn, and they extended this result to immersed submanifolds of Rn. \n• 1992 Benedetti and Marin proved that every compact closed smooth 3-manifold M can be obtained from\n\nS3\n\nby a sequence of blow ups and downs along smooth centers, and that M is homeomorphic to a possibly singular affine real algebraic rational threefold\n• 1997 Bierstone and Milman proved a canonical resolution of singularities theorem\n• 1997 Mikhalkin proved that every closed smooth n-manifold can be obtained from\n\nSn\n\nby a sequence of topological blow ups and downs\n• 1998 János Kollár showed that not every closed 3-manifold is a projective real 3-fold which is birational to RP3\n• 2000 Scheiderer's local-global principle and related non-strict extension of Schmüdgen's positivstellensatz in dimensions ≤ 2. \n• 2000 János Kollár proved that every closed smooth 3–manifold is the real part of a compact complex manifold which can be obtained from\n\nCP3\n\nby a sequence of real blow ups and blow downs.\n• 2003 Welschinger introduces an invariant for counting real rational curves\n• 2005 Akbulut and King showed that not every nonsingular real algebraic subset of RPn is smoothly isotopic to the real part of a nonsingular complex algebraic subset of CPn \n\n## References\n\n• S. Akbulut and H.C. King, Topology of real algebraic sets, MSRI Pub, 25. Springer-Verlag, New York (1992)\n• Bochnak, Jacek; Coste, Michel; Roy, Marie-Françoise. Real Algebraic Geometry. Translated from the 1987 French original. Revised by the authors. Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)], 36. Springer-Verlag, Berlin, 1998. x+430 pp.\n• Basu, Saugata; Pollack, Richard; Roy, Marie-Françoise Algorithms in real algebraic geometry. Second edition. Algorithms and Computation in Mathematics, 10. Springer-Verlag, Berlin, 2006. x+662 pp. ; 3-540-33098-4\n• Marshall, Murray Positive polynomials and sums of squares. Mathematical Surveys and Monographs, 146. American Mathematical Society, Providence, RI, 2008. xii+187 pp. ; 0-8218-4402-4\n\n## Notes and References\n\n1. Book: van den Dries, L. . Tame topology and o-minimal structures . London Mathematical Society Lecture Note Series . 248 . . 1998 . 0953.03045 . 31 .\n2. Book: Khovanskii, A. G. . Askold Khovanskii\n\n. Askold Khovanskii . Fewnomials . Translated from the Russian by Smilka Zdravkovska . 0728.12002 . Translations of Mathematical Monographs . 88 . Providence, RI . . 
" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.65256304,"math_prob":0.89283484,"size":13603,"snap":"2023-14-2023-23","text_gpt3_token_len":4181,"char_repetition_ratio":0.15221708,"word_repetition_ratio":0.043881033,"special_character_ratio":0.3136073,"punctuation_ratio":0.22729,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97367007,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-08T21:40:32Z\",\"WARC-Record-ID\":\"<urn:uuid:d71c75a4-92e6-460c-8019-b6c250a84e24>\",\"Content-Length\":\"39208\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d895e0c6-2ec2-42c4-9168-8e9225d2d1e8>\",\"WARC-Concurrent-To\":\"<urn:uuid:d08d3cad-d6bd-4ef6-a4b3-6c442c151bf2>\",\"WARC-IP-Address\":\"85.25.210.18\",\"WARC-Target-URI\":\"https://everything.explained.today/Real_algebraic_geometry/\",\"WARC-Payload-Digest\":\"sha1:K6BMGQDI2OX2MAQ65G4UBOMUBT2ETTMA\",\"WARC-Block-Digest\":\"sha1:GUNPZU2JLL36TV5KJCUXYEHYKIWF6VKY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224655143.72_warc_CC-MAIN-20230608204017-20230608234017-00432.warc.gz\"}"}
http://applet-magic.com/relatimespent.htm
[ " The Time-Spent Probability Distribution for a Particle Under Relativity\nSan José State University\n\napplet-magic.com\nThayer Watkins\nSilicon Valley,\n& the Gateway\nto the Rockies\nUSA\n\nThe Time-Spent Probability\nDistribution for a Particle\nUnder Relativity\n,\n\n## Time-Spent Probability Distribution.s\n\nLet dt be the time a particle spends in an interval ds of its trajectory. Then the probability of finding it in that interval is dt/T where T is the total time the particle takes to execute its periodic trajectory. But dt=ds/|v| where v is the velocity of the particle.\n\n## The Classical Case\n\nFor a particle of mass m in a potential field V(x)\n\n#### E = ½mv² + V(x) so v(x) = [(2/m)(E−V(x)]½\n\nThis can be rewritten as\n\n#### v(x) = [(2/m)K½\n\nwhere K is kinetic energy.\n\nTherefore the wavefunction ψ(x) associated with the time-spent probability density function PTS(x) is given by\n\n## The Relativistic Case\n\nThe total energy of a particle is mc² where m is relativistic mass m0/(1−β²)½. Therefore kinetic energy K is mc²−m0c².\n\nIn the derivation below the dependence of K, v and β on particle position is ignored to simplify the algebraic expressions.\n\n#### K/(m0c²) = 1/(1−β²)½ − 1 (K + m0c²)/(m0c²) = 1/(1−β²)½ (1−β²)½ = (m0c²/(K + m0c²) 1−β² =[(m0c²/(K + m0c²]² β = [1 − ((m0c²/(K + m0c²))²]½ v/c = [(m0c² + K) − m0c²]/(m0c² + K) v/c = [K/((m0c² + K)]½ v = cK½/(m0c² + K)½\n\n`\n\nNote that as K→∞ v→c as it must under Relativity.\n\nThere a further derivation of v based upon factoring m0c² out of the denominator of the above fraction; i.e.,\n\n-\n\n## The Time-Spent Probability Density Function\n\nSince the probability density function P(z)=1/(Tv(z)).\n\n#### P(z) = (m0½/T)(1 + K/(m0c² ))½/K½\n\nThus when kinetic energy K is small compared with (m0c² ) density is inversely proportional to K½ just as in the classical case.\n\n## The Wave Functions\n\nIf the wave function ψ(z) is such that ψ(z)²=P(z) then" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84846383,"math_prob":0.9973286,"size":1352,"snap":"2020-10-2020-16","text_gpt3_token_len":337,"char_repetition_ratio":0.11127596,"word_repetition_ratio":0.0,"special_character_ratio":0.22633137,"punctuation_ratio":0.06716418,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99986005,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-26T21:55:02Z\",\"WARC-Record-ID\":\"<urn:uuid:64fdc374-e134-44fa-ad3f-2c0d4a3184cf>\",\"Content-Length\":\"7498\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c2cc6dc8-50d5-46a5-befb-13c277330231>\",\"WARC-Concurrent-To\":\"<urn:uuid:d16c7e9c-1f4c-4572-a6a2-61960a73b347>\",\"WARC-IP-Address\":\"67.195.197.76\",\"WARC-Target-URI\":\"http://applet-magic.com/relatimespent.htm\",\"WARC-Payload-Digest\":\"sha1:RFCTC4X4W5X2PXPXJBF26PM7KPYXUVPY\",\"WARC-Block-Digest\":\"sha1:2JD7C6DIHPIOPOIVFX5SB4E6AC57XVI2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146562.94_warc_CC-MAIN-20200226211749-20200227001749-00539.warc.gz\"}"}
https://www.tessshebaylo.com/solving-trig-equations-practice-problems-pdf/
[ "# Solving Trig Equations Practice Problems Pdf\n\nBy | March 7, 2023\n\nN12treqs simple trig equations pdf abbynet math exercises problems trigonometric and inequalities solving worksheet trigonometry worksheets for practice type 1 writing christina school district assignment board student s first last name id lunch exam questions identities examsolutions derivatives of functions product rule ient chain calculus tutorial you word", null, "N12treqs Simple Trig Equations Pdf Abbynet", null, "Math Exercises Problems Trigonometric Equations And Inequalities", null, "Solving Trigonometric Equations Worksheet", null, "Trigonometry Worksheets For Practice", null, "Math Exercises Problems Trigonometric Equations And Inequalities", null, "Solving Trig Equations Type 1 Writing", null, "Trigonometry", null, "Christina School District Assignment Board Student S First Last Name Id Lunch", null, "Exam Questions Trigonometric Identities Examsolutions", null, "Derivatives Of Trigonometric Functions Product Rule Ient Chain Calculus Tutorial You", null, "Practice Trig Word Problems", null, "Igcse 0580 Trig Graphs Solving Equations Recognising Worked Solutions New Syllabus You", null, "Solving Trigonometric Equations With Infinite Solutions Lesson Transcript Study Com", null, "Ncert Exemplar Class 12 Maths Chapter 2 Inverse Trigonometric Functions Learn Cbse", null, "Topic 3 Geometry Trigonometry Math Ysis Approaches Dp Hl Libguides At Concordian International School Thailand", null, "Worksheet On Trigonometric Identities Establishing Hints", null, "5 1 Trigonometric Identities", null, "Trigonometric Functions Algebra All Content Math Khan Academy", null, "Document", null, "The 36 Trig Identities You Need To Know", null, "Trigonometry Word Problems With Solutions", null, "Ncert Exemplar Class 11 Maths Chapter 3 Trigonometric Functions Learn Cbse", null, "Jee Main Important Questions Of Trigonometry With Solutions Free Pdf\n\nN12treqs simple trig equations pdf trigonometric and inequalities solving worksheet trigonometry worksheets for type 1 christina school district assignment exam questions derivatives of functions practice word problems\n\nThis site uses Akismet to reduce spam. Learn how your comment data is processed." ]
[ null, "https://i1.wp.com/img.yumpu.com/24643703/1/500x640/n12treqs-simple-trig-equationspdf-abbynet.jpg", null, "https://i0.wp.com/www.math-exercises.com/images/math-exercises/exercises/025%20Trigonometric%20equations%20and%20inequalities%2002.png", null, "https://i1.wp.com/img.yumpu.com/18562092/1/500x640/solving-trigonometric-equations-worksheet.jpg", null, "https://i0.wp.com/trigidentities.net/wp-content/uploads/2021/08/Trigonometry-Worksheets.jpeg", null, "https://i2.wp.com/www.math-exercises.com/images/math-exercises/exercises/025%20Trigonometric%20equations%20and%20inequalities%2001.png", null, "https://i0.wp.com/i.pinimg.com/736x/26/dc/58/26dc5889385ab395da04e92dc3012473.jpg", null, "https://i0.wp.com/x-raw-image:///ed527bc22e3f89afad6345c9823e7ecd0272ee4551e2a278b04f5bca930a07fb", null, "https://i2.wp.com/x-raw-image:///58f8e1fdf02de5bab25aee7de372508383676a0fd1f8a2205775e1c261e2adf4", null, "https://i2.wp.com/www.examsolutions.net/wp-content/uploads/2014/02/q4-c2-january-2008-edexcel.png", null, "https://i2.wp.com/i.ytimg.com/vi/_niP0JaOgHY/hqdefault.jpg", null, "https://i2.wp.com/x-raw-image:///aaa05e0100d2a7c7953ea7b50f678307e3fd6e1876ca126adf6dd2cb2b33236f", null, "https://i1.wp.com/i.ytimg.com/vi/DMTlirjuvjg/sddefault.jpg", null, "https://i0.wp.com/study.com/cimages/videopreview/videopreview-full/screen_shot_2014-10-22_at_11.00.45_pm_133324.jpg", null, "https://i0.wp.com/www.learncbse.in/wp-content/uploads/2022/06/NCERT-Exemplar-Class-12-Maths-Chapter-2-Inverse-Trigonometric-Functions-Img-1.jpg", null, "https://i3.wp.com/i.ytimg.com/vi/7Eo-fuy0f7g/maxresdefault.jpg", null, "https://i1.wp.com/www.math-only-math.com/images/worksheet-on-trigonometric-identities.png", null, "https://i1.wp.com/s3.studylib.net/store/data/008423367_1-82c38956285012f55f7d8c04d9a3f760-768x994.png", null, "https://i0.wp.com/cdn.kastatic.org/googleusercontent/JTD7xAQhddgrOwp5w5YKUktQrbIe2MYReevXy5pg5wT7dc9-UA50Hp-I9f8Brc6pENZI7QFr8eOuC5TJNvgg-lBP", null, "https://i1.wp.com/s2.studylib.net/store/data/018465666_1-757c754e76eff98af25f7416f538d148-768x994.png", null, "https://i2.wp.com/blog.prepscholar.com/hubfs/feature_trigidentities.png", null, "https://i1.wp.com/www.onlinemath4all.com/images/practprob2.png", null, "https://i3.wp.com/www.learncbse.in/wp-content/uploads/2022/06/NCERT-Exemplar-Class-11-Maths-Chapter-3-Trigonometric-Functions-Img-2.png", null, "https://i1.wp.com/www.vedantu.com/content-images/iit-jee/jee-main-trigonometry-important-questions/21.webp", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6788408,"math_prob":0.9298809,"size":1876,"snap":"2023-14-2023-23","text_gpt3_token_len":398,"char_repetition_ratio":0.2013889,"word_repetition_ratio":0.075,"special_character_ratio":0.14232409,"punctuation_ratio":0.0,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99281216,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,2,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-04-02T00:18:19Z\",\"WARC-Record-ID\":\"<urn:uuid:03ea8fb2-4c91-4afd-9382-39b067c244fc>\",\"Content-Length\":\"55841\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:83690d9d-12dc-408c-a4f3-663311d4d9e6>\",\"WARC-Concurrent-To\":\"<urn:uuid:e41a346b-ca24-4587-88a7-43065963d642>\",\"WARC-IP-Address\":\"172.67.187.229\",\"WARC-Target-URI\":\"https://www.tessshebaylo.com/solving-trig-equations-practice-problems-pdf/\",\"WARC-Payload-Digest\":\"sha1:HSX7TU3LOSO6RZFMPVOMBUIX23DJO2UK\",\"WARC-Block-Digest\":\"sha1:H4VTQD5GIKJ53B4G63LYRARDHA4EKX3U\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296950363.89_warc_CC-MAIN-20230401221921-20230402011921-00051.warc.gz\"}"}
https://www.rapapaing.com/blog/2015/08/practical-2d-collision-detection-part-2/
[ "# Practical 2D collision detection – Part 2\n\nIn our last article, we made a very simple program that helped us detect when two circles were colliding. However, 2D games are usually much more complex than just circles. I shall now introduce the next shape: the rectangle.\n\nBy now you probably noticed that, for the screenshots I’m using a program called “Collision Test”. This is a small tool I made to help me visualize all this stuff I’m talking about. I used this program to build the collision detection/resolution framework for an indie top-down adventure game I was involved in. I will be talking more about this tool in future articles.\n\nNow, there are many ways to represent a rectangle. I will be representing them as five numbers: The center coordinates, width, height and the rotation angle:\n\n```public class CollisionRectangle\n{\npublic float X { get; set; }\npublic float Y { get; set; }\npublic float Width { get; set; }\npublic float Height { get; set; }\npublic float Rotation { get; set; }\n\npublic CollisionRectangle(float x, float y, float width, float height, float rotation)\n{\nX = x;\nY = y;\nWidth = width;\nHeight = height;\nRotation = rotation\n}\n}```\n\nNow, for our first collision, we will collide a circle and a rectangle. There are two types of collisions to consider: When the circle is entirely inside the rectangle…\n\n…And when the circle is partly inside the rectangle, that is, it is touching the border\n\nThese are two different types of collisions, and use different algorithms to determine whether or not there is a collision.\n\nBut first, let’s forget about the rectangle’s position and rotation. Our first approach will deal with a rectangle centered in the world, and not rotated:\n\nUnder these constraints, the circle is inside the rectangle when both the X coordinate of the circle is between the left and right borders, and the Y coordinate is between the top and bottom borders, like so:\n\n```public static bool IsCollision(CollisionCircle a, CollisionRectangle b)\n{\n// For now, we will suppose b.X==0, b.Y==0 and b.Rotation==0\nfloat halfWidth = b.Width / 2.0f;\nfloat halfHeight = b.Height / 2.0f;\nif (a.X >= -halfWidth && a.X <= halfWidth && a.Y >= -halfHeight && a.Y <= halfHeight)\n{\n// Circle is inside the rectangle\nreturn true;\n}\nreturn false; // We're not finished yet...\n}```\n\nBut this is not enough. This only works when the center of the circle is inside the rectangle. 
There are plenty of situations where the center of the circle is outside the rectangle, but the circle is still touching the rectangle.\n\nIn this case, we first find the point in the rectangle which is closest to the circle, and if the distance between this point and the center of the circle is smaller than the radius, then the circle is touching the border of the rectangle.\n\nWe find the closest point for the X and Y coordinates separately:\n\n```float closestX, closestY;\n\n// Find the closest point in the X axis\nif (a.X < -halfWidth)\nclosestX = -halfWidth;\nelse if (a.X > halfWidth)\nclosestX = halfWidth;\nelse\nclosestX = a.X;\n\n// Find the closest point in the Y axis\nif (a.Y < -halfHeight)\nclosestY = -halfHeight;\nelse if (a.Y > halfHeight)\nclosestY = halfHeight;\nelse\nclosestY = a.Y;```\n\nAnd now we bring it all together:\n\n```public static bool IsCollision(CollisionCircle a, CollisionRectangle b)\n{\n// For now, we will suppose b.X==0, b.Y==0 and b.Rotation==0\nfloat halfWidth = b.Width / 2.0f;\nfloat halfHeight = b.Height / 2.0f;\n\nif (a.X >= -halfWidth && a.X <= halfWidth && a.Y >= -halfHeight && a.Y <= halfHeight)\n{\n// Circle is inside the rectangle\nreturn true;\n}\n\nfloat closestX, closestY;\n\n// Find the closest point in the X axis\nif (a.X < -halfWidth)\nclosestX = -halfWidth;\nelse if (a.X > halfWidth)\nclosestX = halfWidth;\nelse\nclosestX = a.X;\n\n// Find the closest point in the Y axis\nif (a.Y < -halfHeight)\nclosestY = -halfHeight;\nelse if (a.Y > halfHeight)\nclosestY = halfHeight;\nelse\nclosestY = a.Y;\n\n// Compare squared distance to the closest point against the squared radius\n// (avoids a square root).\nfloat deltaX = a.X - closestX;\nfloat deltaY = a.Y - closestY;\nfloat distanceSquared = deltaX * deltaX + deltaY * deltaY;\n\nif (distanceSquared <= a.R * a.R)\nreturn true;\n\nreturn false;\n}```\n\nLooks good, but we're still operating under the assumption that the rectangle is centered and not rotated.\n\nTo overcome this limitation, we can move the entire world (that is, both the rectangle and the circle) so that the rectangle ends up centered and not rotated:\n\nIn other words, we have to find the position of the circle relative to the rectangle. This is pretty straightforward trigonometry:\n\n```float relativeX = a.X - b.X;\nfloat relativeY = a.Y - b.Y;\nfloat relativeDistance = (float)Math.Sqrt(relativeX * relativeX + relativeY * relativeY);\nfloat relativeAngle = (float)Math.Atan2(relativeY, relativeX);\nfloat newX = relativeDistance * (float)Math.Cos(relativeAngle - b.Rotation);\nfloat newY = relativeDistance * (float)Math.Sin(relativeAngle - b.Rotation);```\n\nAnd then put it all together:\n\n```public class CollisionRectangle\n{\npublic float X { get; set; }\npublic float Y { get; set; }\npublic float Width { get; set; }\npublic float Height { get; set; }\npublic float Rotation { get; set; }\n\npublic CollisionRectangle(float x, float y, float width, float height, float rotation)\n{\nX = x;\nY = y;\nWidth = width;\nHeight = height;\nRotation = rotation;\n}\n\npublic static bool IsCollision(CollisionCircle a, CollisionRectangle b)\n{\n// Express the circle's center in the rectangle's local (unrotated) frame.\nfloat relativeX = a.X - b.X;\nfloat relativeY = a.Y - b.Y;\nfloat relativeDistance = (float)Math.Sqrt(relativeX * relativeX + relativeY * relativeY);\nfloat relativeAngle = (float)Math.Atan2(relativeY, relativeX);\nfloat newX = relativeDistance * (float)Math.Cos(relativeAngle - b.Rotation);\nfloat newY = relativeDistance * (float)Math.Sin(relativeAngle - b.Rotation);\nfloat halfWidth = b.Width / 2.0f;\nfloat halfHeight = b.Height / 2.0f;\n\nif (newX >= -halfWidth && newX <= halfWidth && newY >= -halfHeight && newY <= halfHeight)\n{\n// Circle is inside the rectangle\nreturn true;\n}\n\nfloat closestX, closestY;\n\n// Find the closest point in the X axis\nif (newX < -halfWidth)\nclosestX = -halfWidth;\nelse if (newX > halfWidth)\nclosestX = halfWidth;\nelse\nclosestX = newX;\n\n// Find the closest point in the Y axis\nif (newY < -halfHeight)\nclosestY = -halfHeight;\nelse if (newY > halfHeight)\nclosestY = halfHeight;\nelse\nclosestY = newY;\n\nfloat deltaX = newX - closestX;\nfloat deltaY = newY - closestY;\nfloat distanceSquared = deltaX * deltaX + deltaY * deltaY;\n\nif (distanceSquared <= a.R * a.R)\nreturn true;\n\nreturn false;\n}\n}```\n\nIn the next article, we'll put some structure to all of this." ]
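The frame-change trigonometry in the article above can be sanity-checked independently of the C# code. A minimal sketch (not part of the original post; the circle position and rectangle pose are made-up sample values) confirming that the distance/angle form used in IsCollision matches a plain rotation-matrix form:

```python
import math

# Hypothetical circle center and rectangle pose (world coordinates).
ax, ay = 3.0, 2.0             # circle center
bx, by, rot = 1.0, 0.5, 0.6   # rectangle center and rotation (radians)

# Distance/angle form used in the article.
rel_x, rel_y = ax - bx, ay - by
dist = math.hypot(rel_x, rel_y)
ang = math.atan2(rel_y, rel_x)
new_x = dist * math.cos(ang - rot)
new_y = dist * math.sin(ang - rot)

# Equivalent rotation-matrix form: rotate the relative vector by -rot.
rx = rel_x * math.cos(-rot) - rel_y * math.sin(-rot)
ry = rel_x * math.sin(-rot) + rel_y * math.cos(-rot)

print(new_x, new_y)  # both forms should print the same local coordinates
print(rx, ry)
```

Both prints should show the same local coordinates; the rotation-matrix form is a common alternative because it avoids the square root and the Atan2 call.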
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6838979,"math_prob":0.99027044,"size":7925,"snap":"2021-31-2021-39","text_gpt3_token_len":1961,"char_repetition_ratio":0.20729706,"word_repetition_ratio":0.63581187,"special_character_ratio":0.2622082,"punctuation_ratio":0.18796484,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9984943,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-19T20:14:31Z\",\"WARC-Record-ID\":\"<urn:uuid:bcfb2924-ad19-46f3-b483-289ee4b86a55>\",\"Content-Length\":\"29697\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a2c3aca1-49b6-4d91-a3f0-15c355db907f>\",\"WARC-Concurrent-To\":\"<urn:uuid:c60711f7-ae88-4f44-8b9f-6f45dc183e65>\",\"WARC-IP-Address\":\"18.182.114.137\",\"WARC-Target-URI\":\"https://www.rapapaing.com/blog/2015/08/practical-2d-collision-detection-part-2/\",\"WARC-Payload-Digest\":\"sha1:ND3KDUQWHAMZTWYAL75ABMASDHTVXXDD\",\"WARC-Block-Digest\":\"sha1:77YO7ASALBMAM5SD6B6KLQCXGK25NSIB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056900.32_warc_CC-MAIN-20210919190128-20210919220128-00262.warc.gz\"}"}
https://www.mepiforum.org/essay/essay-tower-cranes-55039
[ "Tower Cranes\n\nPublished: 2021-08-06 05:50:05", null, "", null, "Category: Examples\n\nType of paper: Essay\n\nThis essay has been submitted by a student. This is not an example of the work written by our professional essay writers.\n\nHey! We can write a custom essay for you.\n\nAll possible types of assignments. Written by academics\n\nGET MY ESSAY", null, "Force Force can be defined as that which causes a mass to accelerate. Force has common units of pounds force (lbs) or Newtons ? Acceleration (F=M·A). In other words 1 Newton is the force required to accelerate 1 kilogram by 1 m/sec2, or 1 pound force is the force required to accelerate 1 slug by 1 foot/ sec2. You will notice that the imperial unit for force is pounds force and not just pounds. There is a common inaccuracy in our language that is only really important when talking about physics.\nThe word weight truly refers to a force – this is why your weight on the moon is not the same as your weight on earth. To fully understand this we need to dissect the mathematical meaning behind the force term. Two components go into calculating a force; the first is mass, the second is acceleration. What is mass? Mass is the amount of stuff present in a given sample, lets say a person. A person’s mass will be the same whether on earth or the moon – in both places that person is made up of the same amount of stuff.\nMass has two common units; kilograms (kg) and slugs. So a person might have a mass of 70 kg or 4. 78 slugs. For the example of weight, or the downward static force exerted by an object, the acceleration of interest is the acceleration due to gravity. The acceleration due to gravity can be defined as the pull one object exerts on another. For this pull to be felt, one of the objects has to be extremely massive. For most people the most massive object they will encounter is the earth. The acceleration due to gravity on the earth is 9. meters/sec2 or 32. 2 feet/sec2. So a person on earth might weigh (70kg x 9. 8m/sec2) = 686 Newtons or (4. 78 slugs x 32. 2 feet/sec2) = 154 lbs. On the moon the same person will weigh (70kg x 1. 62 m/sec2) = 113 Newtons or (4. 78 slugs x 5. 32 ft/ sec2) = 25 lbs. So when a person says they weigh 154 lbs they are being true to physics, but when they say they weigh 154 kg, they’re actually referring to their mass. As a further twist, it’s also interesting to note that the acceleration due to gravity changes with altitude.\nSo your weight at sea level will be slightly different that your weight at the top of a mountain (Newton’s law of gravitation Fg = G ? gravitational constant). ? kg ? m ? . The equation used to mathematically define force is Force = Mass x 2 ? ? sec ? m1 ? m2 , where G is the r2 Stress Stress is defined as force per unit area and has the common units of Pounds force per Square Inch (psi) or Pascals (Pa) (a Pascal is a Newton per square meter or kg/m sec2). In construction there are five basic types of stress which concern engineers.\nThese are bending, tensile, compressive, shear, and torsional stress (see picture below). For the purpose of building Popsicle stick bridges we are really only interested in bending, compression, and tensile stresses. When we take a close look at bending we’ll see that it is just a combination of tensile and compressive stresses. Of these three types of stress tensile is perhaps the easiest to measure. 
As a result engineers will take samples of material and, using special machines, subject them to higher and higher tensile loads until they break.\nBy dividing the force at which the sample breaks by the cross sectional area of the sample the materials Ultimate Tensile Stress (UTS) can be determined. The ultimate tensile stress is given the symbol ? (Greek letter sigma), and essentially represents the strength of a material. For comparisons sake a sample of plain carbon steel might have a UTS of 50,000 psi, while pine (which is what Popsicle sticks are made of) might have a UTS of 1,000 psi. It is important to recognize that UTS is not the only important consideration when selecting a material, but material selection is a bit outside the scope of this summary.\nLet’s take a closer look at tension and compression. Tension is the stress an element experiences when exposed to a pulling force. To get a feeling for tension think about a piece of string. String can only experience tension; it is not able to resist pushing or bending. Compression is the opposite of tension; it’s the stress an element experiences when exposed to a pushing force. Sand is an example of a substance which can only experience compression. A column of sand can support a large load, but is unable to resist any pulling force.\nAs most materials have different tensile and compressive loading potentials, it is important to know what sort of forces will be exerted on every member in a building or bridge. Bending combines both tensile and compressive forces in a single element. To demonstrate this, take a look at the picture below. It’s pretty obvious from this picture that bending puts one face into tension while the other is in compression. It also logically follows from this conclusion that at some point between the two faces there must be a point where there is no tension or compression.\nThis point is called the neutral axis. The mass of material above and below the neutral axis will always be equal. So in a symmetrical member the neutral axis will be along the midline, but will not necessarily be along the midline in an irregularly shaped member. This simple concept of leverage can be used to explain several more complex concepts in structural engineering. The first is why it’s easier to break a Popsicle stick when it’s bent on its flat side as opposed to its edge. To explain this we have to explain the concept of leverage.\nThis one is pretty simple and can easily be demonstrated by the classroom door. Leverage (also called moment or torque) occurs when a force is applied to an object which can rotate about a pivot point. In the case of the classroom door the pivot is the hinge and the force applied comes from the person wanting to open the door. In the case of bending a Popsicle stick the pivot is the neutral axis and the force we’re concerned with is the tension or compression on the outside faces. Moment is calculated by multiplying the force applied by the distance from the point of force application to the pivot.\nIf you increase the applied force, or the distance from the pivot point, the moment increases. That’s why door handles are put as far from the hinge as possible – we make the distance from the point of force application to the pivot point as large as possible, that way a small applied force will create a large moment. So the Popsicle stick is harder to break when bent on edge because we’ve increased the distance from the neutral axis to the point of maximum force. 
Explain the difference between tensile, bending, and compressive forces with examples of the equations used to calculate each.\nExplain truss elements and why they are a superior way of building a bridge. Sample FEM output for simple bridge design o Calculate the amount of popsicle sticks required to make a simple beam with the same strength as a truss element. Hints on building a strong bridge o Truss o Strength comes from the Popsicle sticks, not the glue – but well glued joints are a must. Additional information: http://andrew. triumf. ca/andrew/popsicle-bridge/ http://www. eir. ca/resources/presentations/Bridges%20-%20By%20Doug%20Knight. doc\n\nWarning! This essay is not original. Get 100% unique essay within 45 seconds!\n\nGET UNIQUE ESSAY\n\nWe can write your paper just for 11.99\\$\n\ni want to copy...\n\nThis essay has been submitted by a student and contain not unique content" ]
[ null, "https://www.mepiforum.org/img/essay_screen.png", null, "https://www.mepiforum.org/img/essay_mark.png", null, "https://www.mepiforum.org/img/essay-list.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9316719,"math_prob":0.96226114,"size":13883,"snap":"2022-05-2022-21","text_gpt3_token_len":3086,"char_repetition_ratio":0.11189567,"word_repetition_ratio":0.9640404,"special_character_ratio":0.21918894,"punctuation_ratio":0.09107402,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9853975,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-17T19:38:01Z\",\"WARC-Record-ID\":\"<urn:uuid:8f2b100d-7f23-4d87-af70-704f56e12b99>\",\"Content-Length\":\"51552\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f8b93f53-7f8a-4a67-8927-b9afbecfa01f>\",\"WARC-Concurrent-To\":\"<urn:uuid:42d05c83-565f-4cc6-b268-59913e27ba66>\",\"WARC-IP-Address\":\"172.67.162.115\",\"WARC-Target-URI\":\"https://www.mepiforum.org/essay/essay-tower-cranes-55039\",\"WARC-Payload-Digest\":\"sha1:3Z45YMWUUC2RCVNQ3VHDOFKCJBVR2OJA\",\"WARC-Block-Digest\":\"sha1:SY7T5UFISBQT4BTGO2GSFNIZLBN5Y2KE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320300616.11_warc_CC-MAIN-20220117182124-20220117212124-00697.warc.gz\"}"}
https://mathspace.co/textbooks/syllabuses/Syllabus-453/topics/Topic-8395/subtopics/Subtopic-110936/
[ "", null, "# Evaluate expressions with integers using order of operations\n\nLesson\n\nNow you know the correct order of operations (see Maths in Order for a refresher), you can use it to solve problems with positive and negative numbers that have more than one operation.\n\n#### Examples\n\n##### Question 1\n\nEvaluate: $\\left(48\\div12+5\\right)\\times3$(48÷​12+5)×3\n\nThink: We need to simplify the problem by using our order of operation rules. Firstly, we perform any operations inside the brackets; division first followed by addition. Then we perform any other multiplication or division that is remaining, working from left to right.\n\nDo\n\n $\\left(48\\div12+5\\right)\\times3$(48÷​12+5)×3 $=$= $\\left(4+5\\right)\\times3$(4+5)×3 $=$= $9\\times3$9×3 $=$= $27$27\n\nHere's another example.\n\n##### question 2\n\nEvaluate: $48-6\\times\\left(8-4\\right)$486×(84)\n\nThink: Using our order of operations we want to first perform the subtraction in the brackets. We then want to evaluate the multiplication. Finally, we can subtract the product from $48$48.\n\nDo:\n\n $48-6\\times\\left(8-4\\right)$48−6×(8−4) $=$= $48-6\\times4$48−6×4 $=$= $48-24$48−24 $=$= $24$24\n\n##### Question 3\n\nEvaluate $70-8\\times\\left(-7\\right)$708×(7)\n\n##### Question 4\n\nEvaluate $\\left(-14\\right)\\div2-18\\div\\left(-2\\right)$(14)÷​218÷​(2)" ]
[ null, "https://mathspace-production-static.mathspace.co/permalink/badges/v3/directed-numbers.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8059646,"math_prob":0.9997192,"size":1118,"snap":"2023-14-2023-23","text_gpt3_token_len":377,"char_repetition_ratio":0.14452423,"word_repetition_ratio":0.0,"special_character_ratio":0.3765653,"punctuation_ratio":0.084070794,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994499,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-21T16:56:18Z\",\"WARC-Record-ID\":\"<urn:uuid:8fcaa745-5c02-4a32-aaf9-14a489356f77>\",\"Content-Length\":\"482232\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6a483ec3-fdf1-43fa-8433-6cecd2397940>\",\"WARC-Concurrent-To\":\"<urn:uuid:1969de15-1404-427a-9111-4dd4b2239213>\",\"WARC-IP-Address\":\"104.22.56.207\",\"WARC-Target-URI\":\"https://mathspace.co/textbooks/syllabuses/Syllabus-453/topics/Topic-8395/subtopics/Subtopic-110936/\",\"WARC-Payload-Digest\":\"sha1:KJWSHJM4HRX7TNH2IU2472BVVKXFQLWB\",\"WARC-Block-Digest\":\"sha1:V35S2EC64MHJCXJSXMNM2PINAB7HNFIP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943704.21_warc_CC-MAIN-20230321162614-20230321192614-00701.warc.gz\"}"}
https://physics.stackexchange.com/questions/758470/how-does-special-relativity-help-us-explain-the-movement-of-and-forces-between-t
[ "# How does special relativity help us explain the movement of and forces between two charged particles moving in parallel?\n\nI am learning about special relativity, and I just read about how it helps us to solve some \"puzzles\" that arise in questions involving charged particles. The first \"puzzle\" was about how a charged particle travelling parallel to a neutral wire behaves, and I understand this completely. What I do not understand is another \"puzzle\", which involves two like charges travelling in parallel at the same velocity.\n\nThe setup is simple: two positive charges are moving in parallel at identical velocities \"v\". Observer 1 travels at the same velocity, and so observes the particles to be stationary; their repulsion, therefore, is simply explained by electrostatic repulsion. However, for observer 2, who is stationary, the particles are moving, creating an attractive magnetic force between them; despite this, the charges are still repelled.\n\nI have read 3 different explanations from 3 different textbooks:\n\n1. \"...the repulsive electric field is increased (through relativistic length contraction). There is also a magnetic field that was not apparent when the observer was stationary relative to the charges. This is because a moving charge gives rise to a magnetic field. Each charge now appears to be moving within the magnetic field due to the other charge and consequently there is an attraction that the observer describes as magnetic in origin. There is both an increased (electrostatic) repulsion and a new (magnetic) attraction compared with the stationary observer frame. Again, a mathematical analysis shows that the force between the charges is identical for all observer inertial frames of reference.\"\n\n2. \"Examining the details of this situation shows that the two observers will reach consistent results only if time runs differently in the two different frames.\"\n\n3. \"This means that an observer in the laboratory will record the electrical force to be just as strong as before but will also record an attractive magnetic force between the two electrons, so the total force will now be smaller than it was in the electrons’ frame of reference. This means that the total force experienced by the electrons depends on the relative velocity of the frame of reference that is being measured. Lorentz calculated the transformation that makes it possible to easily calculate how this force varies from one reference frame to another using the Lorentz factor, γ.\"\n\nOnly explanation 3 makes sense to me, and it seems to be the only one consistent with other answers I've read here (the best example of which is Can relativity explain the magnetic attraction between two parallel electrons or electron beams comoving in a vacuum? (No wires)). But does it not flatly contradict explanation 1? And what is explanation 2 talking about -- how does time dilation have anything to do with this?\n\nThese explanations don't really seem to explain what's going on. For explanation 1, why is the electric field increased due to length contraction? I understand that electric field depends on distance between charges, but the distance between them isn't contracting, as contraction only takes place in the direction of motion, and they're travelling in parallel. For explanation 2, I don't see how time dilation has anything to do with the effect. 
Explanation 3 makes the most sense to me: the electrons can still be repelled for both observers, but the stationary observer will see them being repelled less because of the attractive magnetic force. But how does the Lorentz factor apply here? Would the reduced force calculated by observer 2 be lower by a factor of γ? And if that is true, then it seems to flatly contradict explanation 1.\n\nAny help is appreciated!\n\nThe length-contracted E field is somewhat colloquial. Yes, a stationary spherical field is squished into a disk at ultra-relativistic speeds, but that's just because\n\n$$E'_{\parallel} = E_{\parallel}$$\n\n(parallel to the velocity), while\n\n$$E'_{\perp} = \gamma E_{\perp}$$\n\nfor $$B_{\perp} = 0$$. Meanwhile:\n\n$$B'_{\perp} = -\gamma \frac v {c^2} E_{\perp}$$\n\n(Here the charges are stationary in the unprimed frame.)\n\nIn the primed frame, there is a Lorentz force $$F'=q(E' + v' \times B')$$ that is still repulsive, but you need to use the correct transformations of force/acceleration to get agreement.\n\nSo our task is to solve some puzzles involving magnetic fields. The proper way to do that is to put on rubber gloves, grab the magnetic fields and the attached problems, and haul that stuff into a rubbish bin.\n\nNow we have a clean table. So two charges are exerting a force on each other.\n\nLet's ask our buddy A to put two charges at such a distance that the force is 1 N, and to hold them there with two hands for one second. Each hand receives an impulse whose magnitude is 1 N·s.\n\nWe also ask our buddy B to observe while moving at speed 0.87 c relative to the charges. He says that buddy A put the two charges at such a distance that the force was 0.5 N, and held them there with two hands for two seconds. Each hand received an impulse whose magnitude was 1 N·s.\n\nThat's how it works: relativity has time dilation, impulse is force times time, and the impulse comes out the same for both observers." ]
[ null ]
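A small numerical sketch of the frame agreement discussed in the thread above (an illustrative addition, not part of the original posts; the charge, separation, and speed are arbitrary sample values). It applies the field transformations quoted in the first answer to two like charges moving side by side and compares the lab-frame Lorentz force with the rest-frame Coulomb force.

```python
import math

# Arbitrary sample values (SI units).
q = 1.6e-19               # charge of each particle, C
d = 1.0e-3                # separation, m (perpendicular to the motion)
c = 2.99792458e8
v = 0.87 * c              # speed of the pair in the lab frame, m/s
k = 8.9875517873681764e9  # Coulomb constant, N*m^2/C^2

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Rest frame of the charges: pure Coulomb repulsion.
E_rest = k * q / d**2     # field of one charge at the other's position
F_rest = q * E_rest

# Lab frame: transformed fields (E'_perp = gamma*E_perp, B'_perp = gamma*v*E_perp/c^2),
# then the net Lorentz force q*(E' - v*B') on the other moving charge.
E_lab = gamma * E_rest
B_lab = gamma * v * E_rest / c**2
F_lab = q * (E_lab - v * B_lab)

print(F_rest, F_lab, F_rest / gamma)
```

The printout shows F_lab equal to F_rest/γ: the pair still repels in the lab frame, just more weakly, which is the picture explanation 3 describes.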
{"ft_lang_label":"__label__en","ft_lang_prob":0.9631943,"math_prob":0.95725065,"size":3641,"snap":"2023-40-2023-50","text_gpt3_token_len":710,"char_repetition_ratio":0.13087709,"word_repetition_ratio":0.0034129692,"special_character_ratio":0.19060698,"punctuation_ratio":0.09104938,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9931082,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T13:22:19Z\",\"WARC-Record-ID\":\"<urn:uuid:a2623c0b-08d5-4785-b0d4-74fe7760f42a>\",\"Content-Length\":\"165701\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ac292dfc-df60-483a-a958-c30c13f6d4cf>\",\"WARC-Concurrent-To\":\"<urn:uuid:bd207de6-d381-424b-9537-001c837d24cd>\",\"WARC-IP-Address\":\"104.18.10.86\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/758470/how-does-special-relativity-help-us-explain-the-movement-of-and-forces-between-t\",\"WARC-Payload-Digest\":\"sha1:TFCJ2GHPEHCAXXI2ZPCOX7WMFAW2WBCX\",\"WARC-Block-Digest\":\"sha1:DMQ2Y7Y7RTPWJE4YYHV4TN5HRLB44HQN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510297.25_warc_CC-MAIN-20230927103312-20230927133312-00008.warc.gz\"}"}