URL: string, lengths 15 to 1.68k
text_list: sequence, lengths 1 to 199
image_list: sequence, lengths 1 to 199
metadata: string, lengths 1.19k to 3.08k
https://www.mathewsopenaccess.com/full-text/finite-element-analysis-f-e-a-an-insight
[ "### Current Issue Volume 3, Issue 2 - 2018\n\nCommentary Article\n\n# Finite Element Analysis (F E A) - An Insight\n\nRohit Kulshrestha*\n\nSenior Lecturer, Department of Orthodontics and Dentofacial Orthopedics, Terna Dental College and Hospital, Navi Mumbai, Maharashtra, India.\n\nCorresponding Author: Rohit Kulshrestha, Senior Lecturer, Department of Orthodontics and Dentofacial Orthopedics, Terna Dental College and Hospital, Navi Mumbai, Maharashtra, India.\n\nE-Mail: [email protected]\n\nAccepted Date: 30 Oct 2018\nPublished Date: 31 Oct 2018\n\nCitation: Kulshrestha R. (2018). Finite Element Analysis (F E A) - An Insight. Mathews J Dentistry. 3(2): 022.\n\nINTRODUCTION\n\nFinite Element Analysis is a numerical method of structure analysis based on principle of dividing an infinite structure into a finite number of small elements connects each other at the corner points or nodes. It is a means of discretizing a continuous structure into sub-domains called Finite Elements. Essentially an attempt at simulating a physical object and analysing it's behaviour when subjected to various circumstances. This is a well-sophisticated engineering tool, which has been used extensively in design optimization and structural analysis. It is one of the most significant developments in the history of computational methods. This method is originated in aerospace industry to study stresses in complex airframe structures. Modern version of finite element method first used in engineering by Turner et al. The term finite element was coined by Argyris and Clough in 1960. First introduced to the dental arena in the 1970's and growth model was documented by MOSS in 1980.\n\n1. Accuracy\n2. Reproducibility\n3. No usage of materials\n4. Generation of intra-material results\n\nBasic steps involved in FEA\n\n1. Pre-processing\n2. Processing\n3. Post-processing\n\nPre-processing\nPre-Processing basically involves modelling of the structure being studied. It is the most crucial step in the finite element analysis. In Pre-processing, the structure being studied is discretised into smaller units termed the elements. Each element is free to get displaced in all the three planes of space.\n\nThe element co-ordinates (x,y,z) can be either\n\n1. Global Co-ordinate system or\n2. The Local Co-ordinate system\n\nVarious categories of elements exist. Examples are\n\n• Shell element\n• Beam element\n• Truss element etc.\n\nNewer possibilities of modelling of complex structures includes\n\n1. 3-D CT scanning\n2. 3-D Laser scanner\n3. Voxel modelling\n\nThese elements are connected at certain points termed 'Nodes'. The joining of elements into nodes and eliminating duplicate nodes is termed as 'Meshing'.\n\nThe mesh size is a crucial determinant of the accuracy of the result. However, it is inversely related to the time involved in the analysis. The meshed model is now a free-floating body. To simulate the exact structure, the material properties are assigned and boundary conditions enforced.\n\nMATERIAL PROPERTIES\n\nThe minimum properties to be assigned are\n\n1. The Modulus of elasticity\n2. Poisson's ratio\n\nModulus of elasticity (Young's modulus) refers to the stiffness of the material within its elastic range.\n\nE = Stress/ Strain\n\nModulus of elasticity of dental structures\n\n1. Enamel - 65 GPa\n2. Dentin - 15 GPa\n3. Alveolar bone - 10 GPa\n4. 
Periodontal ligament - 0.05 GPa\n\nPoisson's ratio denotes the strain imposed on the material relative to the axis of the load applied.\n\nP = Strain perpendicular to the force/ Strain parallel to the force\n\nPoisson's ratios for dental structures\n\n1. Enamel - 0.32\n2. Dentin - 0.28\n3. Alveolar bone - 0.33\n4. Periodontal ligament - 0.3\n\nAfter assigning the material properties, the material is constrained identical to the real situation. The freedom of the body to be displaced is termed as the \"degrees of freedom\". Each element has six degrees of freedom. The final step in Pre-processing is the application of loads. These can be either force or moments and be directed at any node in all the three planes of space.\n\nPROCESSING\n\n1. ESolving of differential equations\n2. Assemblage into matrices\n3. Summation of the matrix equations\n\nThe equation for the simplest linear static analysis is represented as\n\n[F] = {K} {u}\n\nThe non-linear analysis is solved usually by what is termed as the \"Newton-Raphson method\".\n\nPOST-PROCESSING\n\n1. Graphical output\n2. Numerical output\n3. Animated output", null, "" ]
[ null, "https://i.creativecommons.org/l/by/4.0/88x31.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9086543,"math_prob":0.80342305,"size":3593,"snap":"2020-45-2020-50","text_gpt3_token_len":767,"char_repetition_ratio":0.1128448,"word_repetition_ratio":0.05882353,"special_character_ratio":0.19510159,"punctuation_ratio":0.09822866,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95351666,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-29T21:59:43Z\",\"WARC-Record-ID\":\"<urn:uuid:add9fbe5-4ebb-468d-9181-c1ec10782085>\",\"Content-Length\":\"31865\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0eba9479-10e6-4904-8f62-9c73f087e439>\",\"WARC-Concurrent-To\":\"<urn:uuid:b0ad58dd-11f9-4175-b9d7-2634a247199d>\",\"WARC-IP-Address\":\"165.22.76.182\",\"WARC-Target-URI\":\"https://www.mathewsopenaccess.com/full-text/finite-element-analysis-f-e-a-an-insight\",\"WARC-Payload-Digest\":\"sha1:VZD4SLZIRE4AD6TJMEPVXJB75GMLTJMM\",\"WARC-Block-Digest\":\"sha1:5LUPUFHO6HYAIF3EXAS3Y6Y3MG3U5Z7X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107905965.68_warc_CC-MAIN-20201029214439-20201030004439-00165.warc.gz\"}"}
https://www.studypug.com/ca/ib-chemistry/ka-and-kb-calculations
[ "# Ka and Kb calculations\n\n### Ka and Kb calculations\n\n#### Lessons\n\nIn this lesson, we will learn:\n\n• How to apply the Ka expression to find the pH of weak acid solutions.\n• The assumptions made in Ka calculations at equilibrium and how to justify them.\n• How to solve for Kb using Kw and find the pH of weak base solutions.\n\nNotes:\n\n• We know that weak acids and bases are any acid/base species that does not completely dissociate in water. Dissolving a weak acid in water then has two chemical effects:\n• Some of the weak acid, HX, will interact with water and dissociate into H3O+ and X- ions.\n• The rest of the HX will stay un-dissociated.\nWe can write an equilibrium for the dissociation of weak acid HX (with A concentration) in water, and show amounts in a table, in a ‘before and after’ format like below:\n\nHX + H2O $\\rightleftharpoons$ H3O+ + Cl-\n\n Start concentration (M) Equilibrium conc. (M) HX A A - B H3O+ $\\approx$ 0 B X- 0 B\n\nThe acid dissociation constant, Ka can be expressed in these terms:\n\nKa = $\\frac{[X^{-}][H_3O^+]}{[HX]}$ = $\\frac{[X^2]}{[A] -B}$\n\n• Taking an example with 0.1M methanoic acid HCOOH, we can write the following:\n Start concentration (M) Equilibrium conc. (M) HCOOH 0.1 0.1 - B H3O+ 0 B X- 0 B\n\nWe can apply the acid dissociation constant, Ka to this equilibrium. Methanoic acid1 has a Ka value of 1.8$*$10-4 so the Ka equation can be written fully:\n\nKa = $\\frac{[HCOO^-][H_3O^+]}{[HCOOH]}$ = $\\frac{[H_3O^+]^2}{[0.1] -[H_3O^+]}$ = 1.8 $*$ 10-4\n\nSome assumptions are made to complete this calculation:\n• The starting concentration of H3O+ ions, in neutral water is only 1*10-7 M (see Autoionization of water) and an equally tiny amount of hydroxide ions are also present. This is an incredibly small amount, so this H3O+ is not taken into the calculation; only H3O+ due to the weak acid is used in the calculation.\n• With weak acids, we assume that the acid is weak enough that the amount of dissociation doesn’t affect acid concentration. Using the table above, 0.1 M HCOOH added to neutral water will still have concentration of approximately 0.1M at equilibrium, and we can ignore the ‘– B’ in ‘A-B’. Where [HX]eq = equilibrium concentration, [HX]i = start concentration of HX:\n\n[HX]eq - B $\\cong$ [HX]i\n\nYOU MUST STATE THIS ASSUMPTION IN CALCULATIONS.\n\nWith the assumptions, we have a final expression:\n\nKa = $\\frac{[HCOO^-][H_3O^+]}{[HCOOH]}$ = $\\frac{[H_3O^+]^2}{[0.1] }$ = 1.8 $*$ 10-4\n\n0.1 $*$ 1.8 $*$ 10-4 = [H3O+]2\n\n$\\small\\sqrt{1.8 * 10^{-5}}$ = [H3O+] = 4.24 $*$ 10-3\n\npH = -log [H3O+] = 2.37\n\nNote that the assumption we made was that [HX] – B is approximately equal to [HX]. We now know that B = 4.24*10-3, so our assumption in this example was to say that 0.1 – 4.24*10-3 = 0.0958.\nThe assumption can be justified if percentage dissociation is less than 5% which we can work out:\n\n% dissociation = $\\frac{[H_3O^+]_{eq}}{[HX]_i}$ $*$ 100\n\nTherefore:\n\n% dissociation = $\\frac{[4.24 *10^{-3}]}{[0.1]}$ $*$ 100 = 4.24%\n\nAs the calculation shows, the assumption was justified as only 4.24% dissociation occurs.\n\n• Kb calculations are similar to Ka calculations with some changes:\n• Because acidity strength tables give only Ka, Kb of a weak base will need to be found by the calculation in the autoionization of water expression. 
You will need to find the Ka of the conjugate acid in the acidity strength table to do this.\n• The equilibrium concentrations you obtain using Kb will give you [OH-], so pH will need to be found by solving: pH = 14 – pOH.\n\nTaking an example with 0.5M of the weak base ammonia, NH3, we can write the following:\n\nNH3 + H2O $\\rightleftharpoons$ NH4+ + OH-\n\n Start concentration (M) Equilibrium conc. (M) NH3 0.5 0.5 - B NH3+ 0 B OH- 0 B\n\nThe conjugate acid of ammonia is the ammonium ion, NH4+ which has a Ka value1 of 5.6 $*$ 10-10. Solving the autoionization expression for Kb(NH3) gives:\n\nKb = $\\frac{K_w}{K_a}$ = $\\frac{10^{-14}}{5.6 * 10^{-10}}$ = 1.79 $*$ 10-5\n\nUsing our value for Kb (now rounding to 1.8 $*$ 10-5 or 2 significant figures) we can solve for the equilibrium concentration of hydroxide ions. Again, we make the assumption that the concentration of NH3 isn’t significantly affected by the dissociation into NH4+:\n\n[NH3]eq - B $\\cong$ [NH3]i\n\nNow we can find the hydroxide ion concentration:\n\nKb = $\\frac{[NH_{4}^{+}][OH^-]{}} {[NH_3]}$ = $\\frac{[OH^-]^2}{[0.5]}$ = 1.8 $*$ 10-5\n\n[OH-] = $\\sqrt{((1.8 * 10^{-5}) * 0.5)}$ = 3 $*$ 10-3\n\npOH = -log[OH-] = -log (3 $*$ 10-3) = 2.52\n\npH = 14 - pOH = 14 - 2.52 = 11.48\n\nTesting the assumption can now be done:\n\n% dissociation = $\\frac{[OH^-]_{eq}}{[NH_3]_i}$ $*$ 100 = $\\frac{3 * 10^{-3}}{0.5}$ $*$ 100 = 0.6 %\n\nWith only 0.6% dissociation, the assumption is justified and the pH has been found.\n\n• Another calculation that shows the difference between strong and weak acids and bases is the effect of dilution on pH.\nThe effect on strong acids is straightforward, because we assume 100% dissociation:\n• If a solution of strong acid, e.g. HCl is 1M, then [H3O+ (aq)] = 1M. Taking the negative log of this:\npH = -log = 0\nDiluting this by a factor of 10 will give a concentration of 0.1M\npH = -log[0.1] = 1\nA further 10, or 100 fold from the original:\npH = -log[0.01] = 2\nIn short, diluting a strong acid or base has a direct logarithmic effect on pH.\nThe effect of dilution on weak acids and bases is different:\n• If a solution of weak acid, e.g. CH3COOH is 1M, then pH and [H3O+ (aq)] is worked out using the Ka expression:\nKa (CH3COOH) = 1.4*10-5\n\nKa = $\\frac{[H_{3}O^{+}][CH_{3}COO^{-}]}{[CH_{3}COOH]}$\n\nThe calculation to find pH using this expression has been explained above, so moving forward to an answer (using the assumptions needed)\n\n1.4 * 10-5 = $\\frac{[H_{3}O^{+}][CH_{3}COO^{-}]}{}$ where [H3O+] = [CH3COOH-]\n\n$\\sqrt{1.4 * 10^{-5}}$ = [H3O+] = 3.74 * 10-3\n\npH = -log[ 3.74 * 10-3] = 2.42\n\nA dilution of this weak acid solution to make it 0.1M would have the following effect on the calculation:\n\n1.4 * 10-5 = $\\frac{[H_{3}O^{+}][CH_{3}COO^{-}]}{[0.1]}$ where [H3O+] = [CH3COOH-]\n\n$\\sqrt{1.4 * 10^{-6}}$ = [H3O+] = 1.18 * 10-3\n\npH = -log[ 1.18 * 10-3] = 2.92\n\nAnother dilution by a factor of ten:\n\n1.4 * 10-5 = $\\frac{[H_{3}O^{+}][CH_{3}COO^{-}]}{[0.01]}$ where [H3O+] = [CH3COOH-]\n\n$\\sqrt{1.4 * 10^{-7}}$ = [H3O+] = 3.74 * 10-4\n\npH = -log[ 3.74 * 10-4] = 3.42\n\nIn short, diluting a weak acid has a lesser effect on pH than in strong acids.\n• Introduction\nApplying the Ka expression\na)\nWeak acids/bases at equilibrium.\n\nb)\n\nc)\nCalculating pH and percentage dissociation.\n\nd)\nCalculations using Kb.\n\n• 1.\nFind the pH of the weak acid solution and the percentage dissociation of the weak acid/base.\na)\nWhat is the pH of a solution of 0.5 M ethanoic acid, CH3COOH? 
Find the percentage dissociation of this ethanoic acid solution.\n\nb)\nWhat is the pH of a solution of 0.1 M ammonia, NH3? Find the percentage dissociation of this ammonia solution.\n\n• 2.\nFind the Ka of an unknown weak acid, given pH and concentration.\na)\nA solution of carbonic acid had a pH of 3.72. What was the initial concentration of this acid solution?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9006494,"math_prob":0.9996306,"size":6007,"snap":"2019-51-2020-05","text_gpt3_token_len":1850,"char_repetition_ratio":0.15808763,"word_repetition_ratio":0.0866426,"special_character_ratio":0.31030464,"punctuation_ratio":0.12509713,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99988115,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-19T01:51:43Z\",\"WARC-Record-ID\":\"<urn:uuid:ec6cffd5-f27b-4a23-b53d-1b97dc7fc341>\",\"Content-Length\":\"291584\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a2ea4622-f1bc-4046-b9cc-2d9b5844de81>\",\"WARC-Concurrent-To\":\"<urn:uuid:07dab1ec-1a30-4252-8096-1bd0ba3ca301>\",\"WARC-IP-Address\":\"34.200.169.6\",\"WARC-Target-URI\":\"https://www.studypug.com/ca/ib-chemistry/ka-and-kb-calculations\",\"WARC-Payload-Digest\":\"sha1:OEIYZTD3M4OCFSJPATO5TRNXEPEB75PR\",\"WARC-Block-Digest\":\"sha1:LNUVZRFAAMI2LA2PL6B3ZDERAFGJDWHX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250594101.10_warc_CC-MAIN-20200119010920-20200119034920-00397.warc.gz\"}"}
https://percent.info/of/52/how-to-calculate-52-percent-of-278000.html
[ "52 percent of 278000", null, "", null, "Here we will show you how to calculate fifty-two percent of two hundred seventy-eight thousand. Before we continue, note that 52 percent of 278000 is the same as 52% of 278000. We will write it both ways throughout this tutorial to remind you that it is the same.\n\n52 percent means that for each 100, there are 52 of something. This page will teach you three different methods you can use to calculate 52 percent of 278000.\n\nWe think that illustrating multiple ways of calculating 52 percent of 278000 will give you a comprehensive understanding of what 52% of 278000 means, and provide you with percent knowledge that you can use to calculate any percentage in the future.\n\nTo solidify your understanding of 52 percent of 278000 even further, we have also created a pie chart showing 52% of 278000. On top of that, we will explain and calculate \"What is not 52 percent of 278000?\"\n\nCalculate 52 percent of 278000 using a formula\nThis is the most common method to calculate 52% of 278000. 278000 is the Whole, 52 is the Percent, and the Part is what we are calculating. Below is the math and answer to \"What is 52% of 278000?\" using the percent formula.\n\n(Whole × Percent)/100 = Part\n(278000 × 52)/100 = 144560\n52% of 278000 = 144560\n\nGet 52 percent of 278000 with a percent decimal number\nYou can convert any percent, such as 52.00%, to 52 percent as a decimal by dividing the percent by one hundred. Therefore, 52% as a decimal is 0.52. Here is how to calculate 52 percent of 278000 with percent as a decimal.\n\nWhole × Percent as a Decimal = Part\n278000 × 0.52 = 144560\n52% of 278000 = 144560\n\nGet 52 percent of 278000 with a fraction function\nThis is our favorite method of calculating 52% of 278000 because it best illustrates what 52 percent of 278000 really means. The facts are that it is 52 per 100 and we want to find parts per 278000. Here is how to illustrate and show you the answer using a function with fractions.\n\n Part 278000\n=\n 52 100\n\nPart = 144560\n\n52% of 278000 = 144560\n\nNote: To solve the equation above, we first multiplied both sides by 278000 and then divided the left side to get the answer.\n\n52 percent of 278000 illustrated\nBelow is a pie chart illustrating 52 percent of 278000. The pie contains 278000 parts, and the blue part of the pie is 144560 parts or 52 percent of 278000.", null, "Note that it does not matter what the parts are. It could be 52 percent of 278000 dollars, 52 percent of 278000 people, and so on. The pie chart of 52% of 278000 will look the same regardless what it is.\n\nWhat is not 52 percent of 278000?\nWhat is not 52 percent of 278000? In other words, what is the red part of our pie above? We know that the total is 100 percent, so to calculate \"What is not 52%?\" you deduct 52% from 100% and then take that percent from 278000:\n\n100% - 52% = 48%\n(278000 × 48)/100 = 133440\n\nAnother way of calculating the red part is to subtract 144560 from 278000.\n\n278000 - 144560 = 133440\n\nThat is the end of our tutorial folks. We hope we accomplished our goal of making you a percent expert - at least when it comes to calculating 52 percent of 278000.\n\nPercent of a Number\nGo here if you need to calculate the percent of a different number.\n\n52 percent of 279000\nHere is the next percent tutorial on our list that may be of interest.\n\nCopyright  |   Privacy Policy  |   Disclaimer  |   Contact" ]
[ null, "https://percent.info/images/percent-of/52-percent.jpg", null, "https://percent.info/images/percent-of/of-278000.jpg", null, "https://percent.info/images/pie-charts/pie-chart-showing-52-percent.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91788507,"math_prob":0.9962296,"size":3234,"snap":"2021-04-2021-17","text_gpt3_token_len":856,"char_repetition_ratio":0.24458204,"word_repetition_ratio":0.05756579,"special_character_ratio":0.3521954,"punctuation_ratio":0.082822084,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994167,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-18T08:40:04Z\",\"WARC-Record-ID\":\"<urn:uuid:fa20cccc-cda7-41f9-b286-a5e626d97f41>\",\"Content-Length\":\"9434\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c43ad319-2943-44ff-b262-6969039d1007>\",\"WARC-Concurrent-To\":\"<urn:uuid:fe323988-5691-4c1d-a3dd-22590dfcdf6c>\",\"WARC-IP-Address\":\"13.32.200.19\",\"WARC-Target-URI\":\"https://percent.info/of/52/how-to-calculate-52-percent-of-278000.html\",\"WARC-Payload-Digest\":\"sha1:PQB4CX5TXAH5IMCFF4MFP322VVY5FRD4\",\"WARC-Block-Digest\":\"sha1:57M67SZSMX3ABVOVT5BQS5IZ5T6H2BII\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038469494.59_warc_CC-MAIN-20210418073623-20210418103623-00548.warc.gz\"}"}
http://msduncanchem.com/Unit_10/gas_stoich_std.htm
[ "Name:     ID:\n\nEmail:\n\nGas Stoichiometry\n\nMultiple Choice\nIdentify the letter of the choice that best completes the statement or answers the question.\n\n1.\n\nGiven the reaction: 2 PbO --> 2 Pb + O2\nWhat is the total volume of O2, measured at STP, produced when 1.00 mole of PbO decomposes?\n a. 44.8 L b. 22.4 L c. 11.2 L d. 5.60 L\n\n2.\n\nGiven the balanced equation: Ca + 2 H2O --> Ca(OH)2 + H2\nWhen 80.2 grams of Ca react completely with the water, what is the total volume, at STP, of H2 produced?\n a. 44.8 L b. 22.4 L c. 2.00 L d. 1.00 L\n\n3.\n\nGiven the reaction at STP:     2 KClO3 (s)  -->  2 KCl (s) + 3 O2 (g)\nWhat mass of KClO3 (s) is required to produce 32.0 liters of O2 (g)?\n a. 0.952 g b. 21.3 g c. 117 g d. 263 g\n\n4.\n\nGiven the balanced equation: C3H8 + 5 O2 --> 3 CO2 + 4 H2O\nWhat is the total number of liters of CO2 produced when 20.0 liters of O2 are completely consumed?\n a. 12.0 L b. 22.4 L c. 3.00 L d. 5.00 L" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7290856,"math_prob":0.99380517,"size":976,"snap":"2022-40-2023-06","text_gpt3_token_len":415,"char_repetition_ratio":0.113168724,"word_repetition_ratio":0.038793102,"special_character_ratio":0.42110655,"punctuation_ratio":0.18456376,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9934794,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-31T02:58:10Z\",\"WARC-Record-ID\":\"<urn:uuid:88d82ec2-66d4-46dc-9c3f-4f29f4aacd50>\",\"Content-Length\":\"22062\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:55b3d881-fb69-4c27-8c5e-a07deb755739>\",\"WARC-Concurrent-To\":\"<urn:uuid:e9ee1d2c-98b1-4227-b0c9-b5c832c2a73a>\",\"WARC-IP-Address\":\"54.156.116.15\",\"WARC-Target-URI\":\"http://msduncanchem.com/Unit_10/gas_stoich_std.htm\",\"WARC-Payload-Digest\":\"sha1:JBDK5TVMOMQXDE64WBPI2E4SLIG6MJ4F\",\"WARC-Block-Digest\":\"sha1:NQDYTUOA5I2ZICZUXY3RMZYHKYVBETZK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499842.81_warc_CC-MAIN-20230131023947-20230131053947-00551.warc.gz\"}"}
https://www.themathcitadel.com/exploiting-chemistry-for-better-packet-flow-management-3-formal-analysis/
[ "\nExploiting Chemistry for Better Packet Flow Management 3: Formal Analysis\n\n# Exploiting Chemistry for Better Packet Flow Management 3: Formal Analysis\n\nThe previous two posts introduced the ideas of Meyer and Tschudin involving the application and exploitation of chemical kinetic theory to flow management in computer networking. The first part introduced the ideas and gave an overview of the entire work, and the second part took a deeper look into the formal model of a packet chemistry. This section discusses the analysis options available once a packet chemistry model has been created.\n\nThis section can also be skipped for those less interested in the formal mathematics. Suffice it to say that there are a multitude of already created methods now available for the elegant analysis of computer networks when modeled by an artificial packet chemistry.\n\n## Formal Analysis of Artificial Packet Chemistry\n\nBy representing packet flow in a computer network as an artificial chemistry, a multitude of analyses are available, from high to low granularity. The authors give a heavily brief survey (and a good bibliography) of works that can be utilized to analyze these networks pulled from the physics and chemistry literature. A particular advantage of this method is the ability to study the transient states of the network rather than just steady states. The authors also claim the ability to determine the stability of the network flow based only on topology, a heavy advantage in design.\n\n### Stochastic Analysis at the Microscopic Level\n\nThe stochastic behavior of chemical reaction networks is described by the chemical master equation which takes the form\n$$\\frac{\\text{d}\\mathbf{P}}{\\text{d}t} = \\mathbf{A}\\mathbf{P}$$\n\nwhich is a differential equation describing the evolution of state probabilities for a system. Here the states are discrete, and time is continuous. The matrix $\\mathbf{A}$ describes the transition rates (which can also be kinetic or reaction rates), and the stochastic process described is a Markov jump-process Since we’re on a network, the Markov jump process exists in an $\\mathcal{S}$-dimensional integer lattice. Some work has been done to analyze several classes of chemical reaction networks to find the steady-state probability distribution of the state space. For example, if the total number of packets in the network has a bound, and the network contains only first order (unimolecular to unimolecular) reactions, the steady state probability distribution for the lengths of the queues in the network is a multinomial distribution. On the other hand, if the network is open (we allow packets to exit the network completely), then the steady state probability distribution of the lengths of the queues follows a product of Poisson distributions (which is also Poisson). (This is an extremely desirable property, called a product-form.)\n\n### Deterministic Approximations\n\nThis is the most common approach utilized in computer network analysis today, simply because networks are so large and complex that stochastic modeling becomes too cumbersome. Here, the average trajectory is represented by a system of ordinary differential equations, building a fluid model. One downside to this in the networking space is that the analysis of protocols by this method requires manual extraction from source code and accuracy is uncertain.\n\nIn the chemistry sector (and now in the packet chemistry model), obtaining a fluid approximation is not only easier, but shown to be accurate. 
There are links between the stochastic master equation to several approximations[5,6] including a deterministic ODE model. Gillespie showed that the ODE model accurately predicts the network flow trajectory in many cases.\n\nOne thing the authors note here is that the ODE model can be directly and automatically generated from the network topology. For example, a single server with a single queue (M/M/1) is simply modeled as one chemical species $X$. The arrival rate (inflow) is $\\lambda$, and the service rate is proportional to the queue length, so $\\mu = kx$, where $x$ is the queue length. Then we get a simple differential equation\n$$\\dot{x} = \\lambda-kx$$ describing the change in queue length as the difference of inflow and outflow. In the steady state, $\\dot{x} = 0$, which lets us look for a fixed point $\\hat{x} = \\frac{\\lambda}{k}$. This is the steady-state queue length, which allows us to derive the expected waiting time $T = \\frac{1}{k}$, showing that the latency of a packet under this model is independent of the arrival rate and fill level. This model when implemented automatically adjusts the service rate such that in the steady state, every packet sees the same latency.\n\nIt’s also important to determine just how stable this steady state is by analyzing the sensitivity of the network and states to perturbations. The authors list several citations to show that no new approaches are needed to do this; one can look to signal and control theory literature. In particular, a network designer would desire to predict the stability of a complex network by studying the topology as opposed to an analysis of the system of ODEs. Fortunately, modeling a network this way allows for the use of the Deficiency Zero Theorem for complex chemical networks that gives conditions for stability of steady-state[2,7].", null, "The authors give a formal convergence proof that the example network above converges to a stable fixed point and is asymptotically stable, comparing it to the proof of a similar protocol Push-Sum (a gossip protocol in computer networks).\n\n## Continuation\n\nThe next post in this series will discuss Meyer and Tschudin’s implementation of a scheduler based on the principles discussed thus far.\n\n## References\n\n1. Dittrich, P., Ziegler, J., and Banzhaf, W. Artificial chemistries – a review. Artificial Life 7(2001), 225–275.\n1. Feinburg, M. Complex balancing in general kinetic systems. Archive for Rational Mechanics and Analysis 49 (1972).\n2. Gadgil, C., Lee, C., and Othmer, H. A stochastic analysis of first-order reaction networks. Bulletin of Mathematical Biology 67 (2005), 901–946.\n3. Gibson, M., and Bruck, J. Effcient stochastic simulation of chemical systems with many species and many channels. Journal of Physical Chemistry 104 (2000), 1876–1889.\n4. Gillespie, D. The chemical langevin equation. Journal of Chemical Physics 113 (2000).\n5. Gillespie, D. The chemical langevin and fokker-planck equations for the reversible isomerizationreaction. Journal of Physical Chemistry 106 (2002), 5063–5071.\n6. Horn, F. On a connexion between stability and graphs in chemical kinetics. Proceedings of the RoyalSociety of London 334 (1973), 299–330.\n7. Kamimura, K., Hoshino, H., and Shishikui, Y. Constant delay queuing for jitter-sensitive iptvdistribution on home network. IEEE Global Telecommunications Conference (2008).\n8. Laidler, K. Chemical Kinetics. McGraw-Hill, 1950.\n9. McQuarrie, D. Stochastic approach to chemical kinetics. 
Journal of Applied Probability 4 (1967), 413–478.\n10.  Meyer, T., and Tschudin, C. Flow management in packet networks through interacting queues and law-of-mass-action-scheduling. Technical report, University of Basel.\n11.  Pocher, H. L., Leung, V., and Gilles, D. An application- and management-based approach to atm scheduling. Telecommunication Systems 12 (1999), 103–122.\n12. Tschudin, C. Fraglets- a metabolistic execution model for communication protocols. Proceedings of the 2nd annual symposium on autonomous intelligent networks and systems (2003).\n\n### Attachments\n\n•", null, "chem-queuing-report" ]
[ null, "http://www.themathcitadel.com/wp-content/uploads/2019/01/Screen-Shot-2019-01-10-at-8.52.15-AM-1024x418.png", null, "https://www.themathcitadel.com/wp-content/plugins/download-attachments/images/ext/pdf.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8843673,"math_prob":0.90502816,"size":7519,"snap":"2021-43-2021-49","text_gpt3_token_len":1615,"char_repetition_ratio":0.12867598,"word_repetition_ratio":0.0070113936,"special_character_ratio":0.21239527,"punctuation_ratio":0.12091988,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98683065,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-29T15:18:27Z\",\"WARC-Record-ID\":\"<urn:uuid:5c3b22e8-0ede-4e01-8cc5-f8b90b319f4d>\",\"Content-Length\":\"51630\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e28c6b4a-cae3-4ed4-b8d2-eeb0d90f926e>\",\"WARC-Concurrent-To\":\"<urn:uuid:bd0593e2-4125-4ab1-984b-16d2ae10d4c1>\",\"WARC-IP-Address\":\"165.227.218.96\",\"WARC-Target-URI\":\"https://www.themathcitadel.com/exploiting-chemistry-for-better-packet-flow-management-3-formal-analysis/\",\"WARC-Payload-Digest\":\"sha1:BHQ745RKASZCS74O72B675YRMPH53IQN\",\"WARC-Block-Digest\":\"sha1:QEKWKEC6HDWLU3BFHTOOJ37I5333O236\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358774.44_warc_CC-MAIN-20211129134323-20211129164323-00623.warc.gz\"}"}
https://gdl.graphisoft.com/forums/reply/2415
[ "#2415\n\nYour code would work with the following corrections:\n\n``````dim varia[]\nvaria=a1\nvaria=a2\nvaria=a3\nvaria=a4\nvaria=a5``````\n\nQuotation marks not needed.\n\n`if varia­[LIT][i][/LIT]=1 then`\nparameters set with values{2} can be used as numbers.\n\nPéter Baksa\nLibrary Platform, Software Engineer\nGRAPHISOFT SE" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6445532,"math_prob":0.93586177,"size":460,"snap":"2023-40-2023-50","text_gpt3_token_len":150,"char_repetition_ratio":0.13815789,"word_repetition_ratio":0.0,"special_character_ratio":0.28478262,"punctuation_ratio":0.083333336,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9584256,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-29T18:38:12Z\",\"WARC-Record-ID\":\"<urn:uuid:4dd1047e-e11a-4441-a23d-f28a7c6c774d>\",\"Content-Length\":\"38804\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:915a30ea-81f6-40a3-98ae-830abff1374d>\",\"WARC-Concurrent-To\":\"<urn:uuid:2a332525-c672-48d2-b5ef-6a76180d9a68>\",\"WARC-IP-Address\":\"34.77.123.245\",\"WARC-Target-URI\":\"https://gdl.graphisoft.com/forums/reply/2415\",\"WARC-Payload-Digest\":\"sha1:IBQJP2VDKU4H5YIIJQJM2LSJBHEJIGSB\",\"WARC-Block-Digest\":\"sha1:SZZDCUKJJ2GJWDKV6JXYMXNIQEPAJ624\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100135.11_warc_CC-MAIN-20231129173017-20231129203017-00600.warc.gz\"}"}
https://arxiv.org/abs/1406.5372
[ "astro-ph.GA\n\nTitle:Two Conditions for Galaxy Quenching: Compact Centres and Massive Haloes\n\nAbstract: We investigate the roles of two classes of quenching mechanisms for central and satellite galaxies in the SDSS ($z<0.075$): those involving the halo and those involving the formation of a compact centre. For central galaxies with inner compactness $\\Sigma_{\\rm 1kpc} \\sim 10^{9-9.4}M_{\\odot} {\\rm kpc}^{-2}$, the quenched fraction $f_{q}$ is strongly correlated with $\\Sigma_{\\rm 1kpc}$ with only weak halo mass $M_{\\rm h}$ dependence. However, at higher and lower $\\Sigma_{\\rm 1kpc}$, sSFR is a strong function of $M_{\\rm h}$ and mostly independent of $\\Sigma_{\\rm 1kpc}$. In other words, $\\Sigma_{\\rm 1kpc} \\sim 10^{9-9.4} M_{\\odot} {\\rm kpc}^{-2}$ divides galaxies into those with high sSFR below and low sSFR above this range. In both the upper and lower regimes, increasing $M_{\\rm h}$ shifts the entire sSFR distribtuion to lower sSFR without a qualitative change in shape. This is true even at fixed $M_{*}$, but varying $M_{*}$ at fixed $M_{\\rm h}$ adds no quenching information. Most of the quenched centrals with $M_{\\rm h} > 10^{11.8}M_{\\odot}$ are dense ($\\Sigma_{\\rm 1kpc} > 10^{9}~ M_{\\odot} {\\rm kpc}^{-2}$), suggesting compaction-related quenching maintained by halo-related quenching. However, 21% are diffuse, indicating only halo quenching. For satellite galaxies in the outskirts of halos, quenching is a strong function of compactness and a weak function of host $M_{\\rm h}$. In the inner halo, $M_{\\rm h}$ dominates quenching, with $\\sim 90\\%$ of the satellites being quenched once $M_{\\rm h} > 10^{13}M_{\\odot}$. This regional effect is greatest for the least massive satellites. As demonstrated via semi-analytic modelling with simple prescriptions for quenching, the observed correlations can be explained if quenching due to central compactness is rapid while quenching due to halo mass is slow.\n Comments: 16 pages, 11 figures, MNRAS accepted Subjects: Astrophysics of Galaxies (astro-ph.GA) DOI: 10.1093/mnras/stu2755 Cite as: arXiv:1406.5372 [astro-ph.GA] (or arXiv:1406.5372v3 [astro-ph.GA] for this version)\n\nSubmission history\n\nFrom: Joanna Woo [view email]\n[v1] Fri, 20 Jun 2014 13:04:00 UTC (181 KB)\n[v2] Thu, 4 Dec 2014 22:49:21 UTC (200 KB)\n[v3] Fri, 16 Jan 2015 16:38:50 UTC (201 KB)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7553819,"math_prob":0.976279,"size":2322,"snap":"2019-43-2019-47","text_gpt3_token_len":701,"char_repetition_ratio":0.13373598,"word_repetition_ratio":0.005830904,"special_character_ratio":0.30534023,"punctuation_ratio":0.11261261,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98476064,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-19T14:45:47Z\",\"WARC-Record-ID\":\"<urn:uuid:e570c6a2-993e-47f8-8988-c4a85271a03b>\",\"Content-Length\":\"21541\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:493b9746-9cce-4391-beb4-80969d3e4187>\",\"WARC-Concurrent-To\":\"<urn:uuid:e5a4ffa9-7f71-463b-ac66-899dd5a84979>\",\"WARC-IP-Address\":\"128.84.21.199\",\"WARC-Target-URI\":\"https://arxiv.org/abs/1406.5372\",\"WARC-Payload-Digest\":\"sha1:5PYF5THS24YX57WNAWSDIM55N3FGJPYH\",\"WARC-Block-Digest\":\"sha1:526N3H6AZLVWSTBWNUDWHFMAALDYWX5J\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986696339.42_warc_CC-MAIN-20191019141654-20191019165154-00316.warc.gz\"}"}
http://lunlun.com/2021/06/
[ "## Solve x+3<=0 (first degree inequality)\n\n0 Solve x+3<=0 (first degree inequality) Subcribe now to LUNLUN.com newsletter and you will get the cheatsheet “Top 10 Trigonometry\n\n## Solve x+2<=0 (first degree inequality)\n\n0 Solve x+2<=0 (first degree inequality) Subcribe now to LUNLUN.com newsletter and you will get the cheatsheet “Top 10 Trigonometry\n\n## Solve x+2>=0 (first degree inequality)\n\n0 Solve x+2>=0 (first degree inequality) Subcribe now to LUNLUN.com newsletter and you will get the cheatsheet “Top 10 Trigonometry\n\n## Solve x+4=0 (first degree equation)\n\n0 Solve x+4=0 (first degree equation) Subcribe now to LUNLUN.com newsletter and you will get the cheatsheet “Top 10 Trigonometry\n\n## Solve x+1<=0 (first degree inequality)\n\n0 Solve x+1<=0 (first degree inequality) Subcribe now to LUNLUN.com newsletter and you will get the cheatsheet “Top 10 Trigonometry\n\n## Solve x+1>=0 (first degree inequality)\n\n0 Solve x+1>=0 (first degree inequality) Subcribe now to LUNLUN.com newsletter and you will get the cheatsheet “Top 10 Trigonometry\n\n## Solve x+3<0 (first degree inequality)\n\n0 Solve x+3<0 (first degree inequality) Subcribe now to LUNLUN.com newsletter and you will get the cheatsheet “Top 10 Trigonometry\n\n## Solve x+3>0 (first degree inequality)\n\n0 Solve x+3>0 (first degree inequality) Subcribe now to LUNLUN.com newsletter and you will get the cheatsheet “Top 10 Trigonometry" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8493061,"math_prob":0.97863054,"size":1830,"snap":"2021-31-2021-39","text_gpt3_token_len":505,"char_repetition_ratio":0.15607886,"word_repetition_ratio":0.62352943,"special_character_ratio":0.23825137,"punctuation_ratio":0.034591194,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9640619,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-26T03:56:25Z\",\"WARC-Record-ID\":\"<urn:uuid:8d41b97c-7bbc-40c6-8c3a-0b7553f05293>\",\"Content-Length\":\"36045\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:402a5456-32a4-47cf-b638-1fdb337b999e>\",\"WARC-Concurrent-To\":\"<urn:uuid:489aecd5-5263-4751-b0f3-f54bd00fc1b0>\",\"WARC-IP-Address\":\"166.62.119.124\",\"WARC-Target-URI\":\"http://lunlun.com/2021/06/\",\"WARC-Payload-Digest\":\"sha1:KZJBU2RYOCTJDFEQVHSERJ46KCG7JHWY\",\"WARC-Block-Digest\":\"sha1:GLLCG5CERGKRZCI2YWENUZCKRTHYZAYK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057796.87_warc_CC-MAIN-20210926022920-20210926052920-00706.warc.gz\"}"}
https://www.learndartprogramming.com/fundamentals/for-loop-in-dart/
[ "", null, "# For Loop in Dart\n\nPublished on 29 March 2020\nLast Updated on 29 March 2020\n\nFor loop is one of the control flow statements in Dart.\n\n## Use Of For Loop In Dart\n\nA For loop is used for performing repeated execution of instructions. Using for loops in Dart, we can iterate over same set of instructions over and over again.\n\n## Simple For Loop\n\n``````// for loop in dart\nmain(List<String> args) {\nfor (var i = 0; i < 4; i++) {\nprint('i is: \\${i}');\n}\n}\n``````\n\nAbove program produces following output:\n\n``````i is: 0\ni is: 1\ni is: 2\ni is: 3\n``````\n\n## For in loop in Dart\n\nFor in loop in Dart is used for iterating through object’s properties.\n\n``````// for in loop in dart\nmain(List<String> args) {\nvar collection = [1, 2, 3];\nfor (var obj in collection) {\nprint('obj is \\${obj}');\n}\n}\n``````\n\nAbove program produces following output:\n\n``````obj is 1\nobj is 2\nobj is 3\n``````\n\nAbove program demonstrates how we can iterate through elements in a list object through for in loop.\n\n## Nested For Loop in Dart\n\nWhen one or more for loops are placed into one another then they are called nested for loops.\n\n``````// nested for loop\nmain(List<String> args) {\nfor (var i = 0; i < 3; i++) {\nfor (var j = 0; j < 3; j++) {\nprint('i is: \\${i}, j is \\${j}');\n}\n}\n}\n``````\n\nAbove program produces following output:\n\n``````i is: 0, j is 0\ni is: 0, j is 1\ni is: 0, j is 2\ni is: 1, j is 0\ni is: 1, j is 1\ni is: 1, j is 2\ni is: 2, j is 0\ni is: 2, j is 1\ni is: 2, j is 2\n\nProcess finished with exit code 0\n``````\n\nWhile creating nested for loops one must make sure not to create an infinite loop.\n\n## Use of break statement in for loop\n\nOnce break is called inside a for loop, the program exits the loop. Subsequent statements placed after break statement are not executed.\n\n``````// use of break in for loop\nmain(List<String> args) {\nfor (var i = 0; i < 3; i++) {\nif (i == 3) {\n// for loop will exit when i becomes 3\nbreak;\n}\nprint('i is: \\${i}');\n}\n}\n``````\n\nAbove program produces following output:\n\n``````i is: 0\ni is: 1\ni is: 2\n``````\n\n## Use for continue statement in a for loop\n\nContinue statement is used for skipping statements that are placed after the continue statement in the current iteration and returning program execution pointer back to the beginning of the loop.\n\n``````// for loop continue\nmain(List<String> args) {\nfor (var i = 0; i < 4; i++) {\nif (i == 2) {\n// for loop will skip execution when i becomes 2\ncontinue;\n}\nprint('i is: \\${i}');\n}\n}\n``````\n\nAbove program produces following output:\n\n``````i is: 0\ni is: 1\ni is: 3\n``````" ]
[ null, "https://www.learndartprogramming.com/thumbnail/dart_3.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84290844,"math_prob":0.8446108,"size":2165,"snap":"2020-34-2020-40","text_gpt3_token_len":653,"char_repetition_ratio":0.1443776,"word_repetition_ratio":0.26741573,"special_character_ratio":0.32840645,"punctuation_ratio":0.14516129,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9838503,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-29T01:10:21Z\",\"WARC-Record-ID\":\"<urn:uuid:b2bd5475-e29a-4504-be57-6330c63c06c8>\",\"Content-Length\":\"28092\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2d59928f-0303-4cbe-b043-96362385ff1f>\",\"WARC-Concurrent-To\":\"<urn:uuid:40bd31c5-af3d-41af-96e3-65d8bf4317cd>\",\"WARC-IP-Address\":\"151.101.1.195\",\"WARC-Target-URI\":\"https://www.learndartprogramming.com/fundamentals/for-loop-in-dart/\",\"WARC-Payload-Digest\":\"sha1:N5E6MOHWPIQOHIKATGMEQFQ6YJC2LDHN\",\"WARC-Block-Digest\":\"sha1:XG5RO3S7LFXZIDAMPHFCTTGNXHR3BD3W\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401617641.86_warc_CC-MAIN-20200928234043-20200929024043-00299.warc.gz\"}"}
https://git.dynare.org/Dynare/dseries/commit/14478ca16c2cbdc5c6d922d131518be742d552b9
[ "### Added the ability to tag variables in dseries objects.\n\n``` - New class member tags\n\n- Member tags is a structure, initialized to empty.\n\n- Each user-defined field of this structure must be a vobs(o)*1 cell\narray (each element is associated to a variable).\n\n- To add a new tag name:\n\n>> o.tag('type')\n\n- To associate a tag value to a variable:\n\n>> o.tag('type', 'Consumption', 'Flow')\n\nthe first argument is the tag name, the second argument is the name\nof the variable, the last argument is the value of the tag.```\nparent 03b223fa\n ... ... @@ -29,6 +29,7 @@ p.name = o.name; p.tex = o.tex; p.dates = o.dates; p.ops = o.ops; p.tags = o.tags; %@test:1 %\\$ % Define a dates object ... ...\n ... ... @@ -23,6 +23,7 @@ properties tex = {}; % TeX names of the variables. dates = dates(); % Dates associated to the observations. ops = {}; % History of operations on the variables. tags = struct(); % User defined tags on the variables. end methods ... ... @@ -54,6 +55,7 @@ methods o.tex = {}; o.dates = dates(); o.ops = {}; o.tags = struct(); return case 1 if isdates(varargin{1}) ... ... @@ -67,17 +69,19 @@ methods o.tex = {}; o.dates = varargin{1}; o.ops = {}; o.tags = struct(); otherwise error('dseries:WrongInputArguments', 'Input (identified as a dates object) must have a unique element!'); end return elseif ischar(varargin{1}) [init, data, varlist, tex, ops] = load_data(varargin{1}); [init, data, varlist, tex, ops, tags] = load_data(varargin{1}); o.data = data; o.name = varlist; o.dates = init:init+(nobs(o)-1); o.tex = tex; o.ops = ops; o.tags = tags; elseif ~isoctave() && istable(varargin{1}) % It is assumed that the dates are in the first column. thistable = varargin{1}; ... ... @@ -86,36 +90,40 @@ methods o.data = varargin{1}{:,2:end}; o.dates = dates(varargin{1}{1,1}{1})+(0:size(varargin{1}, 1)-1); o.ops = cell(length(o.name), 1); o.tags = struct(); elseif isnumeric(varargin{1}) && isequal(ndims(varargin{1}),2) o.data = varargin{1}; o.name = default_name(vobs(o)); o.tex = name2tex(o.name); o.dates = dates(1,1):dates(1,1)+(nobs(o)-1); o.ops = cell(length(o.name), 1); o.tags = struct(); end case {2,3,4} if isequal(nargin,2) && ischar(varargin{1}) && isdates(varargin{2}) % Instantiate dseries object with a data file and force the initial date to % be as given by the second input argument (initial period represented % with a dates object). [init, data, varlist, tex, ops] = load_data(varargin{1}); [init, data, varlist, tex, ops, tags] = load_data(varargin{1}); o.data = data; o.name = varlist; o.dates = varargin{2}:varargin{2}+(nobs(o)-1); o.tex = tex; o.ops = ops; o.tags = tags; return end if isequal(nargin,2) && ischar(varargin{1}) && ischar(varargin{2}) && isdate(varargin{2}) % Instantiate dseries object with a data file and force the initial date to % be as given by the second input argument (initial period represented with a % string). [init, data, varlist, tex, ops] = load_data(varargin{1}); [init, data, varlist, tex, ops, tags] = load_data(varargin{1}); o.data = data; o.name = varlist; o.dates = dates(varargin{2}):dates(varargin{2})+(nobs(o)-1); o.tex = tex; o.ops = ops; o.tags = tags; return end a = varargin{1}; ... ... @@ -177,6 +185,7 @@ methods o.name = default_name(vobs(o)); end o.ops = cell(length(o.name), 1); o.tags = struct(); if ~isempty(d) if vobs(o)==length(d) for i=1:vobs(o) ... ...\n ... ... 
@@ -78,6 +78,25 @@ end a.ops = vertcat(b.ops,c.ops); a.name = vertcat(b.name,c.name); a.tex = vertcat(b.tex,c.tex); btagnames = fieldnames(b.tags); ctagnames = fieldnames(c.tags); atagnames = union(btagnames, ctagnames); if isempty(atagnames) a.tags = struct(); else for i=1:length(atagnames) if ismember(atagnames{i}, btagnames) && ismember(atagnames{i}, ctagnames) a.tags.(atagnames{i}) = vertcat(b.tags.(atagnames{i}), b.tags.(atagnames{i})); elseif ismember(atagnames{i}, btagnames) a.tags.(atagnames{i}) = vertcat(b.tags.(atagnames{i}), cell(vobs(c), 1)); elseif ismember(atagnames{i}, ctagnames) a.tags.(atagnames{i}) = vertcat(cell(vobs(b), 1), c.tags.(atagnames{i})); else error('dseries::horzcat: This is a bug!') end end end if ~( d_nobs_flag(1) || d_init_flag(1) ) a.data = [b.data,c.data]; a.dates = b.dates; ... ... @@ -331,3 +350,48 @@ end %\\$ %\\$ T = t; %@eof:7 %@test:8 %\\$ % Define a data set. %\\$ A = [transpose(1:10),2*transpose(1:10)]; %\\$ B = [transpose(1:10),2*transpose(1:10)]; %\\$ %\\$ % Define names %\\$ A_name = {'A1';'A2'}; %\\$ B_name = {'B1';'B2'}; %\\$ %\\$ % Define expected results. %\\$ e.init = dates(1,1); %\\$ e.freq = 1; %\\$ e.name = {'A1';'A2';'B1';'B2'}; %\\$ e.data = [A,B]; %\\$ %\\$ % Instantiate two time series objects. %\\$ ts1 = dseries(A,[],A_name,[]); %\\$ ts2 = dseries(B,[],B_name,[]); %\\$ ts1.tag('t1'); %\\$ ts1.tag('t1', 'A1', 'Stock'); %\\$ ts1.tag('t1', 'A2', 'Flow'); %\\$ ts2.tag('t2'); %\\$ ts2.tag('t2', 'B1', 0); %\\$ ts2.tag('t2', 'B2', 1); %\\$ %\\$ % Call the tested method. %\\$ try %\\$ ts3 = [ts1,ts2]; %\\$ t(1) = true; %\\$ catch %\\$ t(1) = false; %\\$ end %\\$ %\\$ % Check the results. %\\$ if t(1) %\\$ t(2) = dassert(ts3.init,e.init); %\\$ t(3) = dassert(ts3.freq,e.freq); %\\$ t(4) = dassert(ts3.data,e.data); %\\$ t(5) = dassert(ts3.name,e.name); %\\$ t(6) = dassert(ts3.tags.t1,{'Stock';'Flow';[];[]}); %\\$ t(7) = dassert(ts3.tags.t2,{[];[];0;1}); %\\$ end %\\$ T = all(t); %@eof:8\n ... ... @@ -48,23 +48,64 @@ end % Keep the second input argument constant. p = copy(p); % Add NaNs if necessary. [o, p] = align(o, p); n = length(id); % Get tag names in p ptagnames = fieldnames(p.tags); if n>1 [id, jd] = sort(id); p.data = p.data(:,jd); p.name = p.name(jd); p.tex = p.tex(jd); p.ops = p.ops(jd); if ~isempty(ptagnames) for i = 1:length(ptagnames) p.tags.(ptagnames{i}) = p.tags.(ptagnames{i})(jd); end end end % Get tag names in o otagnames = fieldnames(o.tags); % Merge tag names if isempty(otagnames) && isempty(ptagnames) notags = true; else notags = false; dtagnames_o = setdiff(ptagnames, otagnames); dtagnames_p = setdiff(otagnames, ptagnames); if ~isempty(dtagnames_o) % If p has tags that are not in o... for i=1:length(dtagnames_o) o.tags.(dtagnames_o{i}) = cell(vobs(o), 1); end end if ~isempty(dtagnames_p) % If o has tags that are not in p... for i=1:length(dtagnames_p) p.tags.(dtagnames_p{i}) = cell(vobs(p), 1); end end end % Update list of tag names in o. otagnames = fieldnames(o.tags); for i=1:n o.data = insert_column_vector_in_a_matrix(o.data, p.data(:,i),id(i)); o.name = insert_object_in_a_one_dimensional_cell_array(o.name, p.name{i}, id(i)); o.tex = insert_object_in_a_one_dimensional_cell_array(o.tex, p.tex{i}, id(i)); o.ops = insert_object_in_a_one_dimensional_cell_array(o.ops, p.ops{i}, id(i)); if ~notags for j=1:length(otagnames) o.tags.(otagnames{j}) = insert_object_in_a_one_dimensional_cell_array(o.tags.(otagnames{j}), p.tags.(otagnames{j}){i}, id(i)); end end id = id+1; end ... ... 
@@ -83,6 +124,16 @@ end %\\$ % Instantiate two dseries objects. %\\$ ts1 = dseries(A, A_init, A_name,[]); %\\$ ts2 = dseries(B, B_init, B_name,[]); %\\$ ts1.tag('t1'); %\\$ ts1.tag('t1','A1',1); %\\$ ts1.tag('t1','A2',1); %\\$ ts1.tag('t1','A3',0); %\\$ ts2.tag('t1'); %\\$ ts2.tag('t1','B1',1); %\\$ ts2.tag('t1','B2',1); %\\$ ts2.tag('t2'); %\\$ ts2.tag('t2','B1','toto'); %\\$ ts2.tag('t2','B2','titi'); %\\$ %\\$ try %\\$ ts1 = insert(ts1,ts2,[1,2]); ... ... @@ -91,11 +142,14 @@ end %\\$ t = 0; %\\$ end %\\$ %\\$ if length(t)>1 %\\$ t(2) = dassert(ts1.vobs,{'B1';'A1';'B2';'A3'}); %\\$ %\\$ if t(1) %\\$ t(2) = dassert(ts1.name,{'B1';'A1';'B2';'A2';'A3'}); %\\$ t(3) = dassert(ts1.nobs,10); %\\$ eB = [NaN(2,2); B; NaN(3,2)]; %\\$ t(4) = dassert(ts1.data,[eB(:,1), A(:,1), eB(:,2), A(:,2:3)], 1e-15); %\\$ t(5) = dassert(ts1.tags.t1,{1; 1; 1; 1; 0}); %\\$ t(6) = dassert(ts1.tags.t2,{'toto'; []; 'titi'; []; []}); %\\$ end %\\$ T = all(t); %@eof:1 ... ...\n ... ... @@ -73,6 +73,13 @@ if ~isequal(o.ops, p.ops) warning on backtrace end if ~isequal(o.tags, p.tags) warning off backtrace warning('dseries::isequal: Both input arguments have different tags!') warning on backtrace end if nargin<3 b = isequal(o.data, p.data); else ... ...\n ... ... @@ -42,12 +42,35 @@ if ~isequal(frequency(o), frequency(p)) end q = dseries(); [q.name, IBC, junk] = unique([o.name; p.name], 'last'); tex = [o.tex; p.tex]; q.tex = tex(IBC); ops = [o.ops; p.ops]; q.ops = ops(IBC); otagnames = fieldnames(o.tags); ptagnames = fieldnames(p.tags); qtagnames = union(otagnames, ptagnames); if isempty(qtagnames) q.tags = struct(); else for i=1:length(qtagnames) if ismember(qtagnames{i}, otagnames) && ismember(qtagnames{i}, ptagnames) q.tags.(qtagnames{i}) = vertcat(o.tags.(otagnames{i}), p.tags.(ptagnames{i})); elseif ismember(qtagnames{i}, otagnames) q.tags.(qtagnames{i}) = vertcat(o.tags.(qtagnames{i}), cell(vobs(p), 1)); elseif ismember(qtagnames{i}, ptagnames) q.tags.(qtagnames{i}) = vertcat(cell(vobs(o), 1), p.tags.(qtagnames{i})); else error('dseries::horzcat: This is a bug!') end q.tags.(qtagnames{i}) = q.tags.(qtagnames{i})(IBC); end end if nobs(o) == 0 q = copy(p); elseif nobs(p) == 0 ... ... @@ -93,22 +116,26 @@ q.dates = q_init:q_init+(nobs(q)-1); %\\$ % Define names %\\$ A_name = {'A1';'A2'}; B_name = {'A1'}; %\\$ %\\$ t = zeros(4,1); %\\$ %\\$ % Instantiate a time series object. %\\$ % Instantiate two time series objects and merge. %\\$ try %\\$ ts1 = dseries(A,[],A_name,[]); %\\$ ts1.tag('type'); %\\$ ts1.tag('type', 'A1', 'Stock'); %\\$ ts1.tag('type', 'A2', 'Flow'); %\\$ ts2 = dseries(B,[],B_name,[]); %\\$ ts2.tag('type'); %\\$ ts2.tag('type', 'A1', 'Flow'); %\\$ ts3 = merge(ts1,ts2); %\\$ t(1) = 1; %\\$ catch %\\$ t = 0; %\\$ end %\\$ %\\$ if length(t)>1 %\\$ if t(1) %\\$ t(2) = dassert(ts3.vobs,2); %\\$ t(3) = dassert(ts3.nobs,10); %\\$ t(4) = dassert(ts3.data,[B, A(:,2)],1e-15); %\\$ t(5) = dassert(ts3.tags.type, {'Flow';'Flow'}); %\\$ end %\\$ T = all(t); %@eof:1 ... ... @@ -120,12 +147,15 @@ q.dates = q_init:q_init+(nobs(q)-1); %\\$ % Define names %\\$ A_name = {'A1';'A2'}; B_name = {'B1'}; %\\$ %\\$ t = zeros(4,1); %\\$ %\\$ % Instantiate a time series object. %\\$ % Instantiate two time series objects and merge them. %\\$ try %\\$ ts1 = dseries(A,[],A_name,[]); %\\$ ts1.tag('t1'); %\\$ ts1.tag('t1', 'A1', 'Stock'); %\\$ ts1.tag('t1', 'A2', 'Flow'); %\\$ ts2 = dseries(B,[],B_name,[]); %\\$ ts2.tag('t2'); %\\$ ts2.tag('t2', 'B1', 1); %\\$ ts3 = merge(ts1,ts2); %\\$ t(1) = 1; %\\$ catch ... ... 
@@ -136,6 +166,8 @@ q.dates = q_init:q_init+(nobs(q)-1); %\\$ t(2) = dassert(ts3.vobs,3); %\\$ t(3) = dassert(ts3.nobs,10); %\\$ t(4) = dassert(ts3.data,[A, B],1e-15); %\\$ t(5) = dassert(ts3.tags.t1, {'Flow';'Flow';[]}); %\\$ t(6) = dassert(ts3.tags.t2, {[];[];1}); %\\$ end %\\$ T = all(t); %@eof:2\n ... ... @@ -69,6 +69,10 @@ if ~isequal(o.tex, p.tex) warning('dseries::ne: Both input arguments do not have the same tex names!') end if ~isequal(o.tags, p.tags) warning('dseries::ne: Both input arguments do not have the same tags!') end b = ne(o.data, p.data); %@test:1 ... ...\n ... ... @@ -42,6 +42,10 @@ o.data(:,id) = []; o.name(id) = []; o.tex(id) = []; o.ops(id) = []; otagnames = fieldnames(o.tags); for i=1:length(otagnames) o.tags.(otagnames{i})(id) = []; end %@test:1 %\\$ % Define a datasets. ... ... @@ -53,16 +57,21 @@ o.ops(id) = []; %\\$ % Instantiate a time series object. %\\$ try %\\$ ts1 = dseries(A,[],A_name,[]); %\\$ ts1.tag('type'); %\\$ ts1.tag('type', 'A1', 1); %\\$ ts1.tag('type', 'A2', 2); %\\$ ts1.tag('type', 'A3', 3); %\\$ ts1.pop_('A2'); %\\$ t(1) = 1; %\\$ catch %\\$ t(1) = 0; %\\$ end %\\$ %\\$ if length(t)>1 %\\$ if t(1) %\\$ t(2) = dassert(ts1.vobs,2); %\\$ t(3) = dassert(ts1.nobs,10); ts1 %\\$ t(3) = dassert(ts1.nobs,10); %\\$ t(4) = dassert(ts1.data,[A(:,1), A(:,3)],1e-15); %\\$ t(5) = dassert(ts1.tags.type, {1;3}); %\\$ end %\\$ T = all(t); %@eof:1 \\ No newline at end of file\n ... ... @@ -65,10 +65,31 @@ switch format fprintf(fid,[ '''' o.ops{i} '''']); end if i\n ... ... @@ -65,7 +65,7 @@ function B = subsref(A, S) % --*-- Unitary tests --*-- switch S(1).type case '.' switch S(1).subs case {'data','name','tex','dates','ops'} % Public members. case {'data','name','tex','dates','ops', 'tags'} % Public members. if length(S)>1 && isequal(S(2).type,'()') && isempty(S(2).subs) error(['dseries::subsref: ' S(1).subs ' is not a method but a member!']) end ... ... @@ -169,7 +169,7 @@ switch S(1).type else error('dseries::subsref: Call to size method must come in last position!') end case {'set_names','rename','rename_','tex_rename','tex_rename_'} case {'set_names','rename','rename_','tex_rename','tex_rename_', 'tag'} B = feval(S(1).subs,A,S(2).subs{:}); S = shiftS(S,1); case {'disp'} ... ...\n function o = tag(o, a, b, c) % --*-- Unitary tests --*-- % Add tag to a dseries oject (in place modification). % INPUTS % - o [dseries] % - a [string] Name of the tag. % - b [string] Name of the variable. % - c [any] Value of the variable tag. % % OUTPUT % - o [dseries] Updated with tag % Copyright (C) 2017 Dynare Team % % This file is part of Dynare. % % Dynare is free software: you can redistribute it and/or modify % it under the terms of the GNU General Public License as published by % the Free Software Foundation, either version 3 of the License, or % (at your option) any later version. % % Dynare is distributed in the hope that it will be useful, % but WITHOUT ANY WARRANTY; without even the implied warranty of % MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the % GNU General Public License for more details. % % You should have received a copy of the GNU General Public License % along with Dynare. If not, see . 
if nargin<3 % Initialize a new tag name if ~ismember(a, fieldnames(o.tags)) o.tags.(a) = cell(vobs(o), 1); end else % Test if tag name (a) exists if ~ismember(a, fieldnames(o.tags)) error('dseries::tag: Tag name %s is unknown!', a) end % Test if variable (b) exists if ~ismember(b, o.name) error('dseries::tag: Variable %s is unknown!', b) else id = strmatch(b, o.name, 'exact'); end o.tags.(a)(id) = {c}; end %@test:1 %\\$ ts = dseries(randn(10, 3)); %\\$ try %\\$ tag(ts, 'name'); %\\$ tag(ts, 'name', 'Variable_1', 'Flow'); %\\$ tag(ts, 'name', 'Variable_2', 'Stock'); %\\$ tag(ts, 'name', 'Variable_3', 'Flow'); %\\$ t(1) = 1; %\\$ catch %\\$ t(1) = 0; %\\$ end %\\$ %\\$ if t(1) %\\$ t(2) = dassert(ts.tags.name, {'Flow'; 'Stock'; 'Flow'}); %\\$ end %\\$ %\\$ T = all(t); %@eof:1 %@test:2 %\\$ ts = dseries(randn(10, 3)); %\\$ try %\\$ tag(ts, 'name'); %\\$ tag(ts, 'name', 'Variable_1', 'Flow'); %\\$ tag(ts, 'name', 'Variable_3', 'Flow'); %\\$ t(1) = 1; %\\$ catch %\\$ t(1) = 0; %\\$ end %\\$ %\\$ if t(1) %\\$ t(2) = dassert(ts.tags.name, {'Flow'; []; 'Flow'}); %\\$ end %\\$ %\\$ T = all(t); %@eof:2 %@test:3 %\\$ ts = dseries(randn(10, 3)); %\\$ try %\\$ tag(ts, 'name'); %\\$ tag(ts, 'name', 'Variable_1', 'Flow'); %\\$ tag(ts, 'noname', 'Variable_3', 1); %\\$ t(1) = 0; %\\$ catch %\\$ t(1) = 1; %\\$ end %\\$ %\\$ if t(1) %\\$ t(2) = dassert(ts.tags.name, {'Flow'; []; []}); %\\$ end %\\$ %\\$ T = all(t); %@eof:3 %@test:4 %\\$ ts = dseries(randn(10, 3)); %\\$ try %\\$ ts.tag('name'); %\\$ ts.tag('name', 'Variable_1', 'Flow'); %\\$ ts.tag('name', 'Variable_2', 'Stock'); %\\$ ts.tag('name', 'Variable_3', 'Flow'); %\\$ t(1) = 1; %\\$ catch %\\$ t(1) = 0; %\\$ end %\\$ %\\$ if t(1) %\\$ t(2) = dassert(ts.tags.name, {'Flow'; 'Stock'; 'Flow'}); %\\$ end %\\$ %\\$ T = all(t); %@eof:4" ]
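To summarise the tag bookkeeping this commit introduces (take the union of the two series' tag names, pad missing entries with empty cells, then reorder by the index returned by `unique`), here is a hypothetical Python illustration of that merge rule; it is not Dynare's MATLAB code, and every name in it is invented for the example.

```python
# Illustrative sketch of the tag-merging rule used in the horzcat diff above:
# union of tag names, vertical concatenation padded with None, then reorder.
def merge_tags(o_tags, p_tags, o_nvars, p_nvars, order):
    """o_tags/p_tags map tag name -> list of per-variable values.
    order plays the role of IBC from MATLAB's unique(): it selects the
    surviving variables after duplicate names are removed."""
    merged = {}
    for name in set(o_tags) | set(p_tags):
        left = o_tags.get(name, [None] * o_nvars)    # cell(vobs(o),1) analogue
        right = p_tags.get(name, [None] * p_nvars)   # cell(vobs(p),1) analogue
        stacked = left + right                        # vertcat of the tag columns
        merged[name] = [stacked[i] for i in order]
    return merged

# Two one-variable series; only the first carries a 'type' tag.
print(merge_tags({'type': ['Stock']}, {}, 1, 1, order=[0, 1]))
# -> {'type': ['Stock', None]}
```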
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6630029,"math_prob":0.9988449,"size":555,"snap":"2020-10-2020-16","text_gpt3_token_len":140,"char_repetition_ratio":0.13793103,"word_repetition_ratio":0.0,"special_character_ratio":0.27747747,"punctuation_ratio":0.123893805,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9948908,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-29T09:53:28Z\",\"WARC-Record-ID\":\"<urn:uuid:e32161e0-c684-4c00-b765-571bb9fb4f63>\",\"Content-Length\":\"1049684\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5a565dc1-694b-4ac6-9991-8235cfcece5f>\",\"WARC-Concurrent-To\":\"<urn:uuid:2d1fb171-1b96-4ed6-aedb-d0f3125d49a3>\",\"WARC-IP-Address\":\"217.70.191.81\",\"WARC-Target-URI\":\"https://git.dynare.org/Dynare/dseries/commit/14478ca16c2cbdc5c6d922d131518be742d552b9\",\"WARC-Payload-Digest\":\"sha1:DGBJFLYYYLXHKYL5C4QFASTUEKRA3XV7\",\"WARC-Block-Digest\":\"sha1:5OKIXN27GHOAN7TABSJVCV5DBWNPMXH7\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875148850.96_warc_CC-MAIN-20200229083813-20200229113813-00323.warc.gz\"}"}
https://git.sesse.net/?p=pitch;a=blobdiff;f=pitchdetector.cpp;h=cf0abe0d01fc313dd596aa44cdc956053c6b77cf;hp=71e5875c7b328a31a38ba45540d967c861af316f;hb=af5720a7e5ece0711550fa86f30a59f30b819bbd;hpb=61ad39f32700534f5750d9405efd05d33e12ad14;ds=sidebyside
[ "index 71e5875..cf0abe0 100644 (file)\n@@ -38,7 +38,7 @@ PitchDetector::~PitchDetector()\nstd::pair<double, double> PitchDetector::detect_pitch(short *buf)\n{\nunsigned buf_len = fft_length / pad_factor / overlap;\nstd::pair<double, double> PitchDetector::detect_pitch(short *buf)\n{\nunsigned buf_len = fft_length / pad_factor / overlap;\n-       memmove(in, in + buf_len, (fft_length - buf_len) * sizeof(double));\n+       memmove(in, in + buf_len, (fft_length / pad_factor - buf_len) * sizeof(double));\n\nfor (unsigned i = 0; i < buf_len; ++i)\nin[i + (fft_length / pad_factor - buf_len)] = double(buf[i]);\n\nfor (unsigned i = 0; i < buf_len; ++i)\nin[i + (fft_length / pad_factor - buf_len)] = double(buf[i]);" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.66055435,"math_prob":0.93393964,"size":1080,"snap":"2020-10-2020-16","text_gpt3_token_len":325,"char_repetition_ratio":0.1598513,"word_repetition_ratio":0.31578946,"special_character_ratio":0.32407406,"punctuation_ratio":0.21787709,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97698975,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-23T21:18:23Z\",\"WARC-Record-ID\":\"<urn:uuid:59e61b27-0aeb-4854-af3f-8fca97efee1b>\",\"Content-Length\":\"8876\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:280cdf37-f09a-43d1-940e-9a531041437f>\",\"WARC-Concurrent-To\":\"<urn:uuid:be36f0ed-0569-4b2d-96ea-2bf4811d06bd>\",\"WARC-IP-Address\":\"193.35.52.50\",\"WARC-Target-URI\":\"https://git.sesse.net/?p=pitch;a=blobdiff;f=pitchdetector.cpp;h=cf0abe0d01fc313dd596aa44cdc956053c6b77cf;hp=71e5875c7b328a31a38ba45540d967c861af316f;hb=af5720a7e5ece0711550fa86f30a59f30b819bbd;hpb=61ad39f32700534f5750d9405efd05d33e12ad14;ds=sidebyside\",\"WARC-Payload-Digest\":\"sha1:INUUXCSK2L4E4PNZCPG7DTR65SLHYZLF\",\"WARC-Block-Digest\":\"sha1:5X2FAJVYUZKPSKDZAONB3IPJIKEDBIP7\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145839.51_warc_CC-MAIN-20200223185153-20200223215153-00495.warc.gz\"}"}
https://stats.stackexchange.com/questions/64562/compare-two-sets-of-data
[ "# Compare two sets of data\n\nI have three sets of data, measured by three different devices: A,B and C of air balloon whose fall is influenced by wind. Each data sheet looks like:\n\nA:\n\nLongitude Latitude Altitude Weight Rotation(about main axis) Time\n... ... ... ... ... ...\n\nB:\n\nLongitude Latitude Altitude Weight Rotation(about main axis) Time\n... ... ... ... ... ...\n\n\nand similarly for C.\n\nAre there standard techniques to compare the error measurement of B and C with respect to A (considered standard)?.\n\nNote that except time, all other parameter can both increase and decrease, and longitude/latitude and rotation are the only parameters that can be positive and negative.\n\nThe problem with regression is that I do not have independent/dependent variables. The method of error analysis I generally use is to calculate ${\\rm Err}_{x}$, ${\\rm Err}_{y}$, and ${\\rm Err}_{z}$, and combine them suitably when I have a function explaining something. I do not have 'the function'. (I mean I do not have a function to use 'propagation of error'.)\n\nPrecisely, I am trying to measure accuracy of devices B and C\n\nSide note: I could not find an appropriate tag.\n\n• And where is the question? Jul 17, 2013 at 7:41\n• @sashkello The was small thinking error. I intended what are? but wrote these are. Sorry. Jul 17, 2013 at 7:44\n• Can you give examples of results you expect or even hypotheses? This will influence the answer. For example, I suppose combined change over time in longitude and latitude (i.e. speed) might be interesting. Or only change in altitude over time? On the other hand, change in weight over time maybe not? Jul 17, 2013 at 8:56\n• @robert, thank you for your comment. I expect change in all quantities. Longitude and latitude and altitude definitely change. Weight also change slightly when you consider significant change in altitude. I am merely measuring accuracy of two devices B and C with respect to A(consider standard). Hypothesis would be accurate measurement. I don't know what else? Jul 17, 2013 at 10:19\n• For clarification: (1) You want to know how much measurement error there is in the data that come from each device; & (2) A is considered the 'gold standard' / measured w/o error. Is that correct? Jul 17, 2013 at 12:49\n\n1. Compute relative differences, e.g. $(longitude_{B}-longitude_{A})/longitude_{A}$ for all variables at all time points (assuming the devices measured all variables at the same time points). Then look at the distribution and change over time of the differences.\n2. If the differences increase over time (which might or might not be plausible) you could use linear regression with time as predictor and relative difference in e.g., longitude, as variable you want to explain. This could be done for each variable and pair of devices.\n• If you only care about total error then you don't need linear regression. However, if (and only if) the error changes over time I could imagine it would make a lot of sense to include that aspect in your analysis. For example, sensor error could increase over time because the accuracy of the sensors deteriorates over time or because measurements at $t_2$ depend on measurements at $t_1$ (and error accumulates). Whether such a scenario makes sense or not is of course up to you to decide. Jul 17, 2013 at 14:33" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85301155,"math_prob":0.86023283,"size":1204,"snap":"2022-27-2022-33","text_gpt3_token_len":281,"char_repetition_ratio":0.12166667,"word_repetition_ratio":0.10152284,"special_character_ratio":0.25415283,"punctuation_ratio":0.23371647,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9910997,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-30T03:34:14Z\",\"WARC-Record-ID\":\"<urn:uuid:80bccdef-6b7c-416b-886f-21ec04050546>\",\"Content-Length\":\"230795\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:813f661b-b238-43d5-9b70-c0899cbe6605>\",\"WARC-Concurrent-To\":\"<urn:uuid:aa82d024-a730-4893-81dc-7c48cc6422d2>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/64562/compare-two-sets-of-data\",\"WARC-Payload-Digest\":\"sha1:4Q75IHWRSLIE6P2OLV7AARF6PNKWEWV2\",\"WARC-Block-Digest\":\"sha1:QKRTKHY65WHDLEVEIDZT7GL6LY34YWHA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103661137.41_warc_CC-MAIN-20220630031950-20220630061950-00059.warc.gz\"}"}
https://wiki.haskell.org/index.php?title=Examples/Random_list&diff=15445&printable=yes
[ "# Difference between revisions of \"Examples/Random list\"\n\n## Create a random list\n\nGenerate a random list of numbers, without using the System.Random.randoms method:\n\n```import System.Random\nimport Data.List\n\nmain = do\nseed <- newStdGen\nlet rs = randomlist 10 seed\nprint rs\n\nrandomlist :: Int -> StdGen -> [Int]\nrandomlist n = take n . unfoldr (Just . random)\n```\n\n## Delete an element at random\n\n``` unpick and unpick' are by osfameron and are from http://osfameron.vox.com/library/post/more-random-fun.html (no explicit license)\nremoveOne is by Chris Kuklewicz (BSD3 licence, 2007)\n\n> import System.Random\n> import Debug.Trace -- for removeOne' demonstration\n\nThe unpick function and its helper unpick' are strict in the entire\nlist being operated on (forcing it all into memory at once). And IO\n[a] cannot lazily return any initial values.\n\n> unpick :: [a] -> IO [a]\n> unpick [] = undefined\n> unpick [x] = do return []\n> unpick (x:xs) = do zs <- unpick' [] [x] xs 2\n> return (reverse zs)\n>\n> unpick' :: (Num p, Random p) => [t] -> [t] -> [t] -> p -> IO [t]\n> unpick' curr orig [] _\n> = do return curr\n> unpick' curr orig (next:rest) prob\n> = do r <- getStdRandom (randomR (1,prob))\n> let curr' = if r == 1 then orig else (next:curr)\n> unpick' curr' (next:orig) rest (prob+1)\n\nTo run in the IO Monad just use (getStdRandom . removeOne) :: [a] -> IO [a].\n\nremoveOne returns the output list lazily as soon as it has decided\nnot to delete any element in a prefix of the input list.\nThe resulting list is constructed efficiently, with no wasted\nintermediate list construction. removeOne allows any output it\ngenerates to be garbage collected, it holds no references to it.\n\n\"removeOne\" is presented in curried form, without a binding for the\nRandomGen g. The StdGen is hidden inside a State\nmonad. removeOne is designed for use with Strict.Lazy. It may not be\noptimal to use with Strict.Strict.\n\nLike \"tail\" this function is partial and will produce an error if\ngiven the empty list.\n\n> removeOne :: (RandomGen g) => [a] -> g -> ([a],g)\n> removeOne [] = error \"Cannot removeOne from empty list\"\n> removeOne whole@(_:xs) = runState (helper whole xs 0 1) where\n\nThe laziness is needed in helper to make \"rest\" a lazy thunk. The\n\"start\" list parameter to helper is a suffix of \"whole\" that has the\ncurrent candidate for deletion as its head. \"oldIndex\" is the index\nof the current candidate for deletion in the \"whole\" list. \"here\" is a\nsuffix of \"whole\" with the \"index\" element of whole as its head. The\nrandomR decides if the head of \"here\" replaces the head of \"start\" as\nthe candidate to remove. If it does replace the old candidate then\na prefix of \"start\" of length \"(index-oldIndex)\" is immediately\noutput, counted off by prependSome.\n\nAssert \"start\" is never [].\nAssert 0 <= oldIndex < index.\n\n> helper start [] oldIndex index = return (tail start)\n> helper start here@(_:ys) oldIndex index = do\n> r <- State (randomR (0,index))\n> if r==0 then do rest <- helper here ys index \\$! succ index\n> return (prependSome (index-oldIndex) start rest)\n> else helper start ys oldIndex \\$! succ index\n\nI assert that \"prependSome n xs ys == take n xs ++ ys\" but slightly\noptimized (without depending on the compiler). 
Assert n >= length xs.\n\n> prependSome :: Int -> [a] -> [a] -> [a]\n> prependSome 0 _ rest = rest\n> prependSome n (x:xs) rest = x : prependSome (pred n) xs rest\n> prependSome _ [] _ = error \"impossible error in removeOne.prependSome\"\n\n\"removeOne'\" is a tracing version for demonstration below:\n\n> removeOne' :: (Show a,RandomGen g) => [a] -> g -> ([a],g)\n> removeOne' [] _ = error \"Cannot removeOne from empty list\"\n> removeOne' whole@(x:xs) g = runState (helper whole xs 0 1) g where\n> helper start [] oldIndex index = return (tail start)\n> helper start here@(_:ys) oldIndex index = do\n> r <- State (randomR (0,index))\n> if r==0 then do rest <- helper here ys index \\$! succ index\n> let rest' = trace \".\" rest\n> return (prependSome (index-oldIndex) start rest')\n> else do let ys' = trace \"_\" ys\n> helper start ys' oldIndex \\$! succ index\n\nUse \"removeOne'\" to demonstrate when random decisions to drop\nelements are made. This also demonstrates that removeOne is lazy,\nreturning elements as soon as the removal decision has moved on to a\nlater element (the \".\" is output instead of \"_\").\n\nThe element after the last \".\" is the one actually removed,\ndefaulting to the first element.\n\nSince the probability of \".\" decreases, the average length of the\nrun of output produced by appendSome increases as the list is\nprocessed.\n\n*Main> getStdRandom (removeOne' [1..10])\n[1.\n_\n_\n,2,3,4.\n_\n_\n_\n_\n,5,6,7,8,9.\n]\n*Main> getStdRandom (removeOne' [1..10])\n_\n_\n[1,2,3.\n_\n_\n_\n_\n_\n_\n,5,6,7,8,9,10]\n*Main> getStdRandom (removeOne' [1..10])\n_\n_\n_\n_\n_\n_\n_\n_\n_\n[2,3,4,5,6,7,8,9,10]\n*Main> getStdRandom (removeOne' [1..10])\n[1.\n_\n_\n_\n_\n_\n_\n_\n_\n,3,4,5,6,7,8,9,10]\n*Main> getStdRandom (removeOne' [1..10])\n[1.\n,2.\n_\n_\n_\n_\n_\n_\n_\n,4,5,6,7,8,9,10]\n*Main> getStdRandom (removeOne' [1..10])\n[1.\n_\n_\n_\n_\n,2,3,4,5,6.\n_\n_\n,7,8,9.\n]\n\nIf I use \":m + Data.List\" then I can demonstrate how fair the removal is:\n\n*Main Data.List> sequence (replicate 1000 \\$ getStdRandom (removeOne [1..4])) >>= return . map length . group . sort\n[241,255,239,265]\n\nwhere a perfect balance would be [250,250,250,250]\n```" ]
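The decision schedule above (keep a candidate index and replace it with probability 1/(i+1) when element i arrives) is reservoir sampling of the index to delete, which is what makes the removal fair. A tiny eager Python analogue using only the standard library; it deliberately drops the laziness that is the point of the Haskell version, to expose the probability schedule.

```python
import random
from collections import Counter

def remove_one(xs):
    """Delete one element chosen uniformly at random, deciding online."""
    if not xs:
        raise ValueError("cannot remove from an empty list")
    drop = 0                                 # first element is the initial candidate
    for i in range(1, len(xs)):
        if random.randrange(i + 1) == 0:     # replace candidate with prob 1/(i+1)
            drop = i
    return xs[:drop] + xs[drop + 1:]

# Fairness check, mirroring the wiki's [241,255,239,265] experiment:
print(Counter(tuple(remove_one([1, 2, 3, 4])) for _ in range(1000)))
```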
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.73738384,"math_prob":0.945395,"size":11585,"snap":"2020-34-2020-40","text_gpt3_token_len":3723,"char_repetition_ratio":0.17787756,"word_repetition_ratio":0.5686801,"special_character_ratio":0.380492,"punctuation_ratio":0.16689466,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97339994,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-23T02:41:23Z\",\"WARC-Record-ID\":\"<urn:uuid:48bdd15d-cf29-407a-a6db-4ff52dcdab4a>\",\"Content-Length\":\"81833\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0dffb103-2445-40d5-b769-017ca33685d8>\",\"WARC-Concurrent-To\":\"<urn:uuid:883f7221-98ae-4030-9083-f2274b8e158a>\",\"WARC-IP-Address\":\"199.232.65.175\",\"WARC-Target-URI\":\"https://wiki.haskell.org/index.php?title=Examples/Random_list&diff=15445&printable=yes\",\"WARC-Payload-Digest\":\"sha1:HMYWPITGIP7IG6VEDAW2TXC2JAJYKG2G\",\"WARC-Block-Digest\":\"sha1:YJE234N67DZQF3G46HHLDMV3FIPHDKDC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400209665.4_warc_CC-MAIN-20200923015227-20200923045227-00187.warc.gz\"}"}
http://dreamhosting.xyz/hvds-case-study-63/
[ "# HVDS CASE STUDY\n\nThe feeder voltage and feeder current are two constraints which should be within the standard range. The power factor assumed to be used as 0. Now voltage drop can be estimated at various power factors and also at various temperatures of conductors. The proposed case study includes the conversion of existing LVDS into HVDS in order to minimise the distribution losses and pilferage thereby, improving the voltage profile and quality of supply to the consumers in existing 11kV feeder. Log In Sign Up. This paper demonstrates the capability of load factor and load loss factor to calculate the power losses of the network. Nuclear Power 20 nuclear reactors in operation, generating 5.", null, "Total losses per annum The total installed capacity in India is Thus, this sector especially, the distribution sectors require economical system to provide electrical energy at a suitable prize and at a minimum voltage drop to reduce the voltage regulation. Estimation of Current at different power factor On the basis of the current estimation at reference power factor of 0. Thus, the net power loss per annum can be calculated as considering working for hours of a day with days per annum. The performance of the feeder may be improved by the HVDS system installation, which results in saving of total losses per annum and an annual savings of In 1st stage, the power losses including the theft and losses in the line and transformer losses for both LT and HT systems are determined and then in 2 nd stage, determination of annual savings and payback period is carried out alongwith the complete comparison of LVDS and HVDS system of 11kV feeder.\n\nThus, this sector especially, the distribution sectors require economical system to provide electrical energy at a suitable prize and at a minimum voltage drop to reduce the voltage regulation. The results obtained can be used for financial loss calculation and can be presented to regulate wtudy tariff determination process. The results are satisfactorily obtained in the above case study, theoretically.\n\n# Case Study: High Voltage Distribution System (HVDS) Implementation in BESCOM and MGVCL\n\nThe present paper emphasis on the re-designing for the existing distribution network for fulfilment of the above objectives of distribution feeder. Estimation of Current at different power factor On the basis of the current estimation at reference power factor of 0. Calculative Analysis of 11kV Distribution Feeder. Thus core and copper losses occurred in the transformer also contributed to the total power losses per annum in LT system So, for kVA rating transformer, the fixed value of no-load losses and full load losses are W and W respectively.\n\nDISSERTATION SUR LE MARIAGE PUTATIF\n\nHelp Center Find new research papers in: Therefore, loss minimization in power system has assumed greater significance HVDS scheme has led to the formulation of new strategy of energy conservation and minimization of transmission and distribution losses by reducing the power theft. The installation of HVDS system in considerable area of the sub-division, is the main technique which is also applied in the present work to evaluate the proposed re-designing of existing distribution network and its future planning.\n\nGeneration Total capacity till Calculation of voltage drop at various power factors and temperature As the values of current at various power factors had been determined as per above table. 
Installed capacity generation in Punjab IV.\n\nThis results in improving the stability as well as energy handling capacity atudy the system at minimum cost. Thus, the net power loss per annum can be calculated as considering working for hours stydy a day with days per annum.\n\nFrom the result, it is also realised that the causes of voltage drop on the feeder was mainly due to high impedance level as compared to the permissible value and this high impedance is caused by poor jointing and terminations, use of undersized conductors and different types of conductor materials etc.\n\nNuclear Power 20 nuclear reactors in operation, generating 5. Ritula Thakur et al in presented a paper analysing and designing with the observation that the existing feeder is to be operated on 0. The subdivision is working on the installation of HVDS system in this feeder and calculations made above is an attempt for success of the above work made by State Electricity Board. Sub- transmission and distribution systems constitute the link between electricity utilities and consumers.\n\nLOKTAK LAKE ESSAY\n\nIn fact, it has become essential ingredient for improving the quality of life and its absence is associated with poverty and poor quality of life. Pilferage on HT system is assumed to negligible. Total load losses per annum units hvvds 5. Thus required calculation of HT transformer losses are as under in table no. Total power losses Sub-transmission and distribution systems constitute the link between electricity utilities and consumers.\n\nThis work was mainly focused on low voltage distribution system LVDS. As the distribution network is located far away from the sources of power generation and the other infrastructure of electrical power system.\n\n## Case Study: High Voltage Distribution System (HVDS) Implementation in BESCOM and MGVCL\n\nBut the weight of 80mm 2 conductor is The power factor assumed to be used as 0. Total iron losses per annum units units 4. Due to this unplanned expansion in the system, the supply conditions were sacrificed to meet the required targets.", null, "The performance of the feeder may be hvrs by the HVDS system installation, which results in saving of total losses per annum and an annual savings of LT Transformer losses As per the information derived from the 33kV State Electricity board of subdivision, on LT side, a large number of transformer of capacity of kVA is used to supply the power to the consumers at the end point of each section.\n\nThe technical losses are the losses occurred in the electrical elements during of transmission of energy from source to consumer and mainly comprises of ohmic losses. The details are as under in table 1 .", null, "Efficient functioning of these segments of the electricity utility is essential to sustain the growth of the power sector and the economy of the country." ]
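The paper's actual figures were lost in extraction (hence the blanks above), but the loss-and-payback arithmetic it describes is simple: annual energy saving = power-loss reduction × hours per day × days per year, and payback = conversion cost ÷ annual saving. A sketch with openly invented numbers, purely to show the shape of the calculation; none of these values come from the case study.

```python
# All figures below are hypothetical placeholders, NOT from the paper.
loss_kw_lvds = 40.0         # assumed feeder power loss under LVDS, kW
loss_kw_hvds = 12.0         # assumed feeder power loss under HVDS, kW
hours_per_day = 24
days_per_year = 365
tariff = 5.0                # assumed cost per kWh (local currency)
capital_cost = 2_000_000.0  # assumed cost of the HVDS conversion

saved_kwh = (loss_kw_lvds - loss_kw_hvds) * hours_per_day * days_per_year
annual_saving = saved_kwh * tariff
payback_years = capital_cost / annual_saving
print(f"energy saved: {saved_kwh:.0f} kWh/yr, payback: {payback_years:.1f} years")
```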
[ null, "http://dreamhosting.xyz/essay.png", null, "https://cdn.slidesharecdn.com/ss_thumbnails/tpddlcasestudy-141230062517-conversion-gate02-thumbnail-4.jpg", null, "https://imgv2-1-f.scribdassets.com/img/document/49451269/original/364fc74a27/1545306266", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9359864,"math_prob":0.9164949,"size":6195,"snap":"2020-10-2020-16","text_gpt3_token_len":1180,"char_repetition_ratio":0.14165725,"word_repetition_ratio":0.1998002,"special_character_ratio":0.1803067,"punctuation_ratio":0.06481481,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9579147,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,4,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-06T15:57:01Z\",\"WARC-Record-ID\":\"<urn:uuid:18f8881b-6595-41ba-8d09-3562b6af0792>\",\"Content-Length\":\"29069\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:16271137-3bd5-4c7b-9415-5bd4e0707845>\",\"WARC-Concurrent-To\":\"<urn:uuid:490e5c03-c440-4c21-8a18-831d2c5eaa7e>\",\"WARC-IP-Address\":\"104.31.70.247\",\"WARC-Target-URI\":\"http://dreamhosting.xyz/hvds-case-study-63/\",\"WARC-Payload-Digest\":\"sha1:7T5K6LWPIOMNBACJMRD7UD5SZCW3OK4D\",\"WARC-Block-Digest\":\"sha1:J65U42M7XMKLU6RSWZLUNLMI6BQ5WW7O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371637684.76_warc_CC-MAIN-20200406133533-20200406164033-00245.warc.gz\"}"}
https://www.univ-smb.fr/listic/en/production-scientifique/revue-busefal/version-electronique/ebusefal-86/
[ "# eBUSEFAL #86\n\n Amiya Kumar Shyamal and Madhumangal Pal Distances Between Interval-Valued Intuitionistic Fuzzy Sets Liu Huawen Decompositions of Intuitionistic Fuzzy Sets Ma Baoguo On Fuzzy weakly semi-precontinuous multifunctions Liu Jinlu Pseudo-fuzzy Linear Inequation and Equation Wang Hongxu and Li Guishan Notes on the decision theorems that the equation type II of a fuzzy matrix has the solutions when the index is one Witold Pedrycz and George Vukovich An algorithmic Framework of Collaborative Clustering Witold Pedrycz and George Vukovich Relevance and consistency in rule-based systems Zou Li (u,v)-Implication of IOFL Jacek M. Leski An e-insensitive fuzzy c-means clustering Wang Xin and Liu Xiadong The Base of Finite EI Algebra 1 Wen-Xiang Gu and Su-yun Li The necessary and sufficient condition that a fuzzy set is a pointwise fuzzy group Bai Shi-Zhong Fuzzy Pre-Urysohn Spaces Bai Shi-Zhong Countably Strong Lowen's Compactness in L-fts Yang Jie The decision theorems that the equation type III of a fuzzy matrix has a solution when the index is one Zhang Chengyi and Su Jinlin Fuzzy Congruence on Rings Liu Jinlu On A Kind of Fuzzy Relational Equation" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7848195,"math_prob":0.8213113,"size":1185,"snap":"2023-14-2023-23","text_gpt3_token_len":359,"char_repetition_ratio":0.11346316,"word_repetition_ratio":0.057142857,"special_character_ratio":0.19324894,"punctuation_ratio":0.010256411,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9905475,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-07T18:02:28Z\",\"WARC-Record-ID\":\"<urn:uuid:1632a9c8-cad0-44c5-992f-e0a6df6301c8>\",\"Content-Length\":\"693724\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e9a06353-bbfb-42d3-964d-9a1f0d5d3c21>\",\"WARC-Concurrent-To\":\"<urn:uuid:9803d22f-28d8-4db2-825f-d7f09a5e2b0c>\",\"WARC-IP-Address\":\"145.239.14.239\",\"WARC-Target-URI\":\"https://www.univ-smb.fr/listic/en/production-scientifique/revue-busefal/version-electronique/ebusefal-86/\",\"WARC-Payload-Digest\":\"sha1:4V4QO6L44OWRAUWE4SVMAHCQ3VZMFACY\",\"WARC-Block-Digest\":\"sha1:JLPO2AZDYULMHXUBNTQRW5GH2NZQGRNH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224654012.67_warc_CC-MAIN-20230607175304-20230607205304-00430.warc.gz\"}"}
https://stuffsure.com/what-is-4-out-of-16-as-a-percentage/
[ "# What is 4 out of 16 as a Percentage?\n\nIf you’re not sure what 4 out of 16 as a percentage, don’t worry – we’ll show you how to calculate it. Just follow these simple steps and you’ll have your answer in no time.\n\nCheckout this video:\n\n## Introduction\n\n4 out of 16 as a percentage is equal to 25 percent. To find the percentage, divide 4 by 16 and then multiply the answer by 100. The answer will be 25 percent.\n\n## What is 4 out of 16 as a Percentage?\n\n4 out of 16 as a percentage is 25%.\n\n## Conclusion\n\n4 out of 16 is equivalent to 25 percent." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9275967,"math_prob":0.9681767,"size":491,"snap":"2023-40-2023-50","text_gpt3_token_len":132,"char_repetition_ratio":0.20123203,"word_repetition_ratio":0.125,"special_character_ratio":0.29531568,"punctuation_ratio":0.096491225,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.988052,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-08T16:04:28Z\",\"WARC-Record-ID\":\"<urn:uuid:a537b59c-2530-4ef5-86bd-db34378a9a69>\",\"Content-Length\":\"55751\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:929384c3-c6c8-4122-94e3-d0ac282512dc>\",\"WARC-Concurrent-To\":\"<urn:uuid:90f5b7ad-d600-4933-ae3f-2196e5a5befc>\",\"WARC-IP-Address\":\"50.16.223.119\",\"WARC-Target-URI\":\"https://stuffsure.com/what-is-4-out-of-16-as-a-percentage/\",\"WARC-Payload-Digest\":\"sha1:4OPFDDODLRHJA73OGMGUDSOXT6VBMMNC\",\"WARC-Block-Digest\":\"sha1:3IP4DWHH2YA3ZNEW6WYSNOXVJ4EVWY64\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100762.64_warc_CC-MAIN-20231208144732-20231208174732-00721.warc.gz\"}"}
https://www.mathworksheets4kids.com/volume-mixed-pyramids.php
[ "1. Worksheets>\n2. Math>\n3. Geometry>\n4. Volume>\n5. Mixed Pyramids\n\n# Volume of a Pyramid Worksheets\n\nWork out the volume of the pyramids with rectangular, triangular and polygonal base faces. Calculate the volume by plugging in the measures expressed as integers and decimals in the appropriate formulas. The 8th grade and high school printable worksheets are classified into two levels. Level 1 comprises polygons with 3 or 4-sided base faces, while level 2 includes polygonal base faces. Practice finding the missing measures as well. Explore some of these worksheets for free!\n\nSelect the Measurement Units\n\nFinding Volume using Base Area | Integers\n\nJump-start your learning with this batch of PDF worksheets for 8th grade and 9th grade students on mixed pyramids featuring rectangular and triangular pyramids. Apply apt formulas to find the volume using the base area measure expressed as integers.\n\nFinding Volume using Base Area | Decimals\n\nAdvance to the next level consisting of pyramids whose base faces are rectangles or triangles, and the base area is indicated as decimals. Multiply the base area with the height and divide by 3 to compute the volume.\n\nVolume of Rectangular Pyramids\n\nExclusively dealing with rectangular pyramids, these PDF worksheets for 9th grade and 10th grade students are a must-have for thorough knowledge and practice in finding the volume of rectangular pyramids offering varied levels of difficulty.\n\n(24 Worksheets)\n\nVolume of Triangular Pyramids\n\nMake strides in your practice with these printable worksheets on finding the volume of triangular pyramids or tetrahedrons. Try the easy, moderate and challenging levels with integer and decimal dimensions.\n\n(24 Worksheets)\n\nFinding Volume of Polygonal Pyramids using Apothem | Level 1\n\nAnother essential step is to determine the volume of pyramids with polygonal base faces. Plug in the apothem(a), number of sides(n), side length(s) and height(h) in the formula V = 1/6 * ansh and compute the volume.\n\nVolume of Polygonal Pyramids using Side length or Perimeter | Level 2\n\nThe side length or perimeter and height are provided. First, find the apothem (apothem = side / 2 tan (180/n)), and then assign the known values in the volume formula to solve for volume of polygonal pyramids.\n\nVolume of Pyramids | Level 1 - Integers - Easy\n\nThe easy level contains pyramids with integer dimensions ≤ 20. One-third of the base area of the square, rectangle or a triangle multiplied by the height results in the volume of the pyramid. Recommended for grade 8 and grade 9 children.\n\nVolume of Pyramids | Level 1 - Integers - Moderate\n\nPolish up your skills in finding the volume of pyramids with this section of high school geometry worksheets presenting pyramids as 3D shapes and in the word format with 3 or 4-sided base faces involving integer dimensions ≥ 20.\n\nVolume of Pyramids | Level 1 - Decimals\n\nScale new heights as you practice solving for volume with these pdf worksheets posing a challenge with decimal dimensions. Apply relevant formula, substitute and calculate the volume of each pyramid.\n\nVolume of Pyramids | Level 2\n\nWith the sides of the base face, increases the level of difficulty. Figure out the volume of the triangular, rectangular and polygonal pyramids applying suitable formulas and bolster skills.\n\nVolume of Pyramids | Missing Measure\n\nMotivate learners to give these printable worksheets a quick try. 
All they have to do is simply rearrange the formula, making the missing measure (x) the subject, plug in the known values and solve!\n\nRelated Worksheets" ]
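A quick sketch of the two formulas quoted above (volume from the apothem, and the apothem from the side length), with a regular hexagon chosen arbitrarily as the example:

```python
import math

def apothem(n, s):
    """Apothem of a regular n-gon with side length s: a = s / (2 tan(pi/n))."""
    return s / (2 * math.tan(math.pi / n))

def pyramid_volume(n, s, h):
    """V = (1/6) * a * n * s * h, i.e. one third of the base area times the height."""
    return apothem(n, s) * n * s * h / 6

# Hexagonal pyramid with side 4 and height 9 (numbers chosen for illustration):
print(round(pyramid_volume(6, 4, 9), 2))   # ~124.71
```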
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86409026,"math_prob":0.9495832,"size":3441,"snap":"2023-40-2023-50","text_gpt3_token_len":722,"char_repetition_ratio":0.1754437,"word_repetition_ratio":0.052158274,"special_character_ratio":0.20023249,"punctuation_ratio":0.0713073,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98783106,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-07T19:40:00Z\",\"WARC-Record-ID\":\"<urn:uuid:c551efc3-336a-4570-86d7-39322f5aca2b>\",\"Content-Length\":\"40665\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:76804cb3-33fb-417a-9d69-97ed2a2ea4e0>\",\"WARC-Concurrent-To\":\"<urn:uuid:cc27697f-6928-429c-9607-eb556322afb8>\",\"WARC-IP-Address\":\"67.225.178.45\",\"WARC-Target-URI\":\"https://www.mathworksheets4kids.com/volume-mixed-pyramids.php\",\"WARC-Payload-Digest\":\"sha1:ZI2DN2JT4IX666H5UP72XF54TVKHIZKQ\",\"WARC-Block-Digest\":\"sha1:4BSQOA2UPDLRGZSRIIDN6CO245NZLWMM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100686.78_warc_CC-MAIN-20231207185656-20231207215656-00511.warc.gz\"}"}
https://www.mediagust.com/2019/11/artificial-intelligence-ideal-mapping.html
[ "Once we realize that an agent's behavior depends only on its percept sequence to date, then we can describe any particular agent by making a table of the action it takes in response to each possible percept sequence. (For most agents, this would be a very long list—infinite, in fact, unless we place a bound on the length of percept sequences we want to consider.) Such a list is called a mapping from percept sequences to actions. We can, in principle, find out which mapping correctly describes an agent by trying out all possible percept sequences and recording which actions the agent does in response. (If the agent uses some randomization in its computations, then we would have to try some percept sequences several times to get a good idea of the agent's average behavior.) And if mappings describe agents, then ideal mappings describe ideal agents.\n\nSpecifying which action an agent ought to take in response to any given percept sequence provides a design for an ideal agent. This does not mean, of course, that we have to create an explicit table with an entry for every possible percept sequence. It is possible to define a specification of the mapping without exhaustively enumerating it. Consider a very simple agent: the square-root function on a calculator. The percept sequence for this agent is a sequence of keystrokes representing a number, and the action is to display a number on the display screen. The ideal mapping is that when the percept is a positive number x, the right action is to display a positive number z such that z 2 « x, accurate to, say, 15 decimal places.\n\nThis specification of the ideal mapping does not require the designer to actually construct a table of square roots. Nor does the square-root function have to use a table to behave correctly: Figure 2.2 shows part of the ideal mapping and a simple program that implements the mapping using Newton's method. The square-root example illustrates the relationship between the ideal mapping and an ideal agent design, for a very restricted task. Whereas the table is very large, the agent is a nice,; compact program. It turns out that it is possible to design nice, compact agents that implement part of the ideal mapping for the square-root problem (accurate to 1 5 digits), and a corresponding program that implements the ideal mapping." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.895473,"math_prob":0.9700656,"size":2420,"snap":"2019-51-2020-05","text_gpt3_token_len":508,"char_repetition_ratio":0.15811259,"word_repetition_ratio":0.009876544,"special_character_ratio":0.19545455,"punctuation_ratio":0.09409191,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9635108,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-09T10:14:54Z\",\"WARC-Record-ID\":\"<urn:uuid:928962a7-db19-4176-b374-66db91cf4a33>\",\"Content-Length\":\"364457\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a62c9eba-0ad9-44e5-b7f8-03ee1bfdf0b6>\",\"WARC-Concurrent-To\":\"<urn:uuid:b2a1590f-63bd-43f0-8139-1253675b5f2b>\",\"WARC-IP-Address\":\"172.217.164.179\",\"WARC-Target-URI\":\"https://www.mediagust.com/2019/11/artificial-intelligence-ideal-mapping.html\",\"WARC-Payload-Digest\":\"sha1:HETYZW5HJJQ7SLUVAO2ITRIXMBE4COKC\",\"WARC-Block-Digest\":\"sha1:YEH4CHEJJ3DCRGN64KPAYKSHOB2YZF6E\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540518627.72_warc_CC-MAIN-20191209093227-20191209121227-00052.warc.gz\"}"}
https://nl.mathworks.com/help/deeplearning/ref/dlarray.dlconv.html
[ "# dlconv\n\nDeep learning convolution\n\n## Syntax\n\n``dlY = dlconv(dlX,weights,bias)``\n``dlY = dlconv(dlX,weights,bias,'DataFormat',FMT)``\n``dlY = dlconv(___Name,Value)``\n\n## Description\n\nThe convolution operation applies sliding filters to the input data. Use 1-D and 2-D filters with ungrouped or grouped convolutions and 3-D filters with ungrouped convolutions.\n\nUse grouped convolution for channel-wise separable (also known as depth-wise separable) convolution. For each group, the operation convolves the input by moving filters along spatial dimensions of the input data, computing the dot product of the weights and the data and adding a bias. If the number of groups is equal to the number of channels, then this function performs channel-wise convolution. If the number of groups is equal to `1`, this function performs ungrouped convolution.\n\nNote\n\nThis function applies the deep learning convolution operation to `dlarray` data. If you want to apply convolution within a `layerGraph` object or `Layer` array, use one of the following layers:\n\nexample\n\n````dlY = dlconv(dlX,weights,bias)` computes the deep learning convolution of the input `dlX` using sliding convolutional filters defined by `weights`, and adds a constant `bias`. The input `dlX` is a formatted `dlarray` with dimension labels. Convolution acts on dimensions that you specify as `'S'` dimensions. The output `dlY` is a formatted `dlarray` with the same dimension labels as `dlX`.```\n\nexample\n\n````dlY = dlconv(dlX,weights,bias,'DataFormat',FMT)` also specifies dimension format `FMT` when `dlX` is not a formatted `dlarray`. The output `dlY` is an unformatted `dlarray` with the same dimension order as `dlX`. ```\n\nexample\n\n````dlY = dlconv(___Name,Value)` specifies options using one or more name-value pair arguments in addition to the input arguments in previous syntaxes. For example, `'Stride',3` sets the stride of the convolution operation. ```\n\n## Examples\n\ncollapse all\n\nConvolve all channels of an image input using a single filter.\n\nImport the image data and convert it to a `dlarray`.\n\n```X = imread('sherlock.jpg'); dlX = dlarray(single(X),'SSC');```\n\nDisplay the image.\n\n`imshow(X,'DisplayRange',[])`", null, "Initialize the convolutional filters. Specify an ungrouped convolution that applies a single filter to all three channels of the input data.\n\n```filterHeight = 10; filterWidth = 10; numChannelsPerGroup = 3; numFiltersPerGroup = 1; numGroups = 1; weights = rand(filterHeight,filterWidth,numChannelsPerGroup,numFiltersPerGroup,numGroups);```\n\nInitialize the bias term.\n\n`bias = rand(numFiltersPerGroup*numGroups,1);`\n\nPerform the convolution. Use a `'Stride'` value of `2` and a `'DilationFactor'` value of `2`.\n\n`dlY = dlconv(dlX,weights,bias,'Stride',2,'DilationFactor',2);`\n\nDisplay the convolved image.\n\n```Y = extractdata(dlY); imshow(Y,'DisplayRange',[])```", null, "Convolve the input data in three groups of two channels each. Apply four filters per group.\n\nCreate the input data as 10 observations of size 100-by-100 with six channels.\n\n```height = 100; width = 100; channels = 6; numObservations = 10; X = rand(height,width,channels,numObservations); dlX = dlarray(X,'SSCB');```\n\nInitialize the convolutional filters. 
Specify three groups of convolutions that each apply four convolution filters to two channels of the input data.

```filterHeight = 8; filterWidth = 8; numChannelsPerGroup = 2; numFiltersPerGroup = 4; numGroups = 3; weights = rand(filterHeight,filterWidth,numChannelsPerGroup,numFiltersPerGroup,numGroups);```

Initialize the bias term.

`bias = rand(numFiltersPerGroup*numGroups,1);`

Perform the convolution.

```dlY = dlconv(dlX,weights,bias); size(dlY)```
```ans = 1×4 93 93 12 10 ```
`dims(dlY)`
```ans = 'SSCB' ```

The 12 channels of the convolution output represent the three groups of convolutions with four filters per group.

Separate the input data into channels and perform convolution on each channel separately.

Create the input data as a single observation with a size of 64-by-64 and 10 channels. Create the data as an unformatted `dlarray`.

```height = 64; width = 64; channels = 10; X = rand(height,width,channels); dlX = dlarray(X);```

Initialize the convolutional filters. Specify a channel-wise (grouped) convolution that applies a single filter to each of the 10 channels of the input data.

```filterHeight = 8; filterWidth = 8; numChannelsPerGroup = 1; numFiltersPerGroup = 1; numGroups = channels; weights = rand(filterHeight,filterWidth,numChannelsPerGroup,numFiltersPerGroup,numGroups);```

Initialize the bias term.

`bias = rand(numFiltersPerGroup*numGroups,1);`

Perform the convolution. Specify the dimension labels of the input data using the `'DataFormat'` option.

```dlY = dlconv(dlX,weights,bias,'DataFormat','SSC'); size(dlY)```
```ans = 1×3 57 57 10 ```

Each channel is convolved separately, so there are 10 channels in the output.

## Input Arguments

Input data, specified as a `dlarray` with or without dimension labels or a numeric array. When `dlX` is not a formatted `dlarray`, you must specify the dimension label format using `'DataFormat',FMT`. If `dlX` is a numeric array, at least one of `weights` or `bias` must be a `dlarray`.

Convolution acts on dimensions that you specify as spatial dimensions using the `'S'` dimension label. You can specify up to three dimensions in `dlX` as `'S'` dimensions.

Data Types: `single` | `double`

Convolutional filters, specified as a `dlarray` with or without labels or a numeric array. The `weights` argument specifies the size and values of the filters, as well as the number of filters and the number of groups for grouped convolutions.

Specify weights as a `filterSize`-by-`numChannelsPerGroup`-by-`numFiltersPerGroup`-by-`numGroups` array.

• `filterSize` — Size of the convolutional filters. `filterSize` can have up to three dimensions, depending on the number of spatial dimensions in the input data:

- 1-D input: `h`, where h corresponds to the height of the filter
- 2-D input: `h`-by-`w`, where h and w correspond to the height and width of the filter, respectively
- 3-D input: `h`-by-`w`-by-`d`, where h, w, and d correspond to the height, width, and depth of the filter, respectively

• `numChannelsPerGroup` — Number of channels to convolve within each group. `numChannelsPerGroup` must equal the number of channels in the input data divided by `numGroups`, the number of groups. For ungrouped convolutions, where `numGroups = 1`, `numChannelsPerGroup` must equal the number of channels in the input data.

• `numFiltersPerGroup` — Number of filters to apply within each group.

• `numGroups` — Number of groups (optional). 
When `numGroups > 1`, the function performs grouped convolutions. Grouped convolutions are not supported for input data with more than two `'S'` dimensions. When `numGroups = 1`, the function performs ungrouped convolutions; in this case, this dimension is singleton and can be omitted.

If `weights` is a formatted `dlarray`, it can have multiple spatial dimensions labeled `'S'`, one channel dimension labeled `'C'`, and up to two other dimensions labeled `'U'`. The number of `'S'` dimensions must match the number of `'S'` dimensions of the input data. The labeled dimensions correspond to the filter specifications as follows:

- `filterSize`: up to three `'S'` dimensions
- `numChannelsPerGroup`: the `'C'` dimension
- `numFiltersPerGroup`: the first `'U'` dimension
- `numGroups` (optional): the second `'U'` dimension

Data Types: `single` | `double`

Bias constant, specified as a `dlarray` vector or `dlarray` scalar with or without labels, a numeric vector, or a numeric scalar.

• If `bias` is a scalar or has only singleton dimensions, the same bias is applied to each output.

• If `bias` has a nonsingleton dimension, each element of `bias` is the bias applied to the corresponding convolutional filter specified by `weights`. The number of elements of `bias` must match the number of filters specified by `weights`.

• If `bias` is a scalar numeric array with value `0`, the bias term is disabled and no bias is added during the convolution operation.

If `bias` is a formatted `dlarray`, the nonsingleton dimension must be a channel dimension labeled `'C'`.

Data Types: `single` | `double`

### Name-Value Pair Arguments

Specify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside quotes. You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.

Example: `'DilationFactor',2` sets the dilation factor for each convolutional filter to `2`.

Dimension order of unformatted input data, specified as the comma-separated pair consisting of `'DataFormat'` and a character array or string `FMT` that provides a label for each dimension of the data. Each character in `FMT` must be one of the following:

• `'S'` — Spatial

• `'C'` — Channel

• `'B'` — Batch (for example, samples and observations)

• `'T'` — Time (for example, sequences)

• `'U'` — Unspecified

You can specify multiple dimensions labeled `'S'` or `'U'`. You can use the labels `'C'`, `'B'`, and `'T'` at most once.

You must specify `'DataFormat'` when the input data `dlX` is not a formatted `dlarray`.

Example: `'DataFormat','SSCB'`

Data Types: `char` | `string`

Step size for traversing the input data, specified as the comma-separated pair consisting of `'Stride'` and a numeric scalar or numeric vector. If you specify `'Stride'` as a scalar, the same value is used for all spatial dimensions. 
If you specify `'Stride'` as a vector of the same size as the number of spatial dimensions of the input data, the vector values are used for the corresponding spatial dimensions.

The default value of `'Stride'` is `1`.

Example: `'Stride',3`

Data Types: `single` | `double`

Filter dilation factor, specified as the comma-separated pair consisting of `'DilationFactor'` and one of the following.

• Numeric scalar — The same dilation factor value is applied for all spatial dimensions.

• Numeric vector — A different dilation factor value is applied along each spatial dimension. Use a vector of size `d`, where `d` is the number of spatial dimensions of the input data. The `i`th element of the vector specifies the dilation factor applied to the `i`th spatial dimension.

Use the dilation factor to increase the receptive field of the filter (the area of the input that the filter can see) on the input data. Using a dilation factor corresponds to an effective filter size of `filterSize + (filterSize-1)*(dilationFactor-1)`.

Example: `'DilationFactor',2`

Data Types: `single` | `double`

Size of padding applied to edges of data, specified as the comma-separated pair consisting of `'Padding'` and one of the following:

• `'same'` — Padding size is set so that the output size is the same as the input size when the stride is `1`. More generally, the output size of each spatial dimension is `ceil(inputSize/stride)`, where `inputSize` is the size of the input along a spatial dimension.

• Numeric scalar — The same amount of padding is applied to both ends of all spatial dimensions.

• Numeric vector — A different amount of padding is applied along each spatial dimension. Use a vector of size `d`, where `d` is the number of spatial dimensions of the input data. The `i`th element of the vector specifies the size of padding applied to the start and the end along the `i`th spatial dimension.

• Numeric matrix — A different amount of padding is applied to the start and end of each spatial dimension. Use a matrix of size 2-by-`d`, where `d` is the number of spatial dimensions of the input data. The element `(1,d)` specifies the size of padding applied to the start of spatial dimension `d`. The element `(2,d)` specifies the size of padding applied to the end of spatial dimension `d`. For example, in 2-D, the format is ```[top, left; bottom, right]```.

In each case, the input data is padded with zeros.

Example: `'Padding','same'`

Data Types: `single` | `double`

## Output Arguments

Convolved feature map, returned as a `dlarray`. The output `dlY` has the same underlying data type as the input `dlX`.

If the input data `dlX` is a formatted `dlarray`, `dlY` has the same dimension labels as `dlX`. If the input data is not a formatted `dlarray`, `dlY` is an unformatted `dlarray` with the same dimension order as the input data.

The size of the `'C'` channel dimension of `dlY` depends on the size of the `weights` input. The size of the `'C'` dimension of output `Y` is the product of the size of the dimensions `numFiltersPerGroup` and `numGroups` in the `weights` argument. If `weights` is a formatted `dlarray`, this product is the same as the product of the size of the `'U'` dimensions.

### Deep Learning Convolution

The `dlconv` function applies sliding convolution filters to the spatial dimensions of the input data. The `dlconv` function supports convolution in one, two, or three spatial dimensions. 
For more information, see the definition of convolutional layer on the `convolution2dLayer` reference page." ]
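The output spatial sizes in the examples above (100 → 93 and 64 → 57 with an 8-wide filter) follow the usual size formula for zero-padded strided convolution. The helper below states that standard formula as an editorial aid, consistent with the `'same'`-padding and dilation rules quoted in the documentation; it is not code taken from MathWorks.

```python
def conv_output_size(input_size, filter_size, stride=1, padding=0, dilation=1):
    """floor((input + 2*padding - effective_filter) / stride) + 1, where
    effective_filter = filter + (filter - 1) * (dilation - 1)."""
    effective = filter_size + (filter_size - 1) * (dilation - 1)
    return (input_size + 2 * padding - effective) // stride + 1

print(conv_output_size(100, 8))   # 93, matching the grouped example
print(conv_output_size(64, 8))    # 57, matching the channel-wise example
```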
[ null, "https://nl.mathworks.com/help/examples/nnet/win64/PerformUngroupedConvolutionExample_01.png", null, "https://nl.mathworks.com/help/examples/nnet/win64/PerformUngroupedConvolutionExample_02.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7025217,"math_prob":0.9862056,"size":1551,"snap":"2020-34-2020-40","text_gpt3_token_len":368,"char_repetition_ratio":0.16548158,"word_repetition_ratio":0.041666668,"special_character_ratio":0.19406834,"punctuation_ratio":0.09328358,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.995226,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-18T18:05:15Z\",\"WARC-Record-ID\":\"<urn:uuid:d2acf812-ee25-4595-ab27-68b086520983>\",\"Content-Length\":\"123004\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d0a3fc98-3681-406f-8908-547eb6e172ec>\",\"WARC-Concurrent-To\":\"<urn:uuid:6a3d086c-ef39-4de4-9041-a57b8156484a>\",\"WARC-IP-Address\":\"96.7.70.236\",\"WARC-Target-URI\":\"https://nl.mathworks.com/help/deeplearning/ref/dlarray.dlconv.html\",\"WARC-Payload-Digest\":\"sha1:3GJ4LDS2ZIJEHUFADDVXUJJGKPKFNY57\",\"WARC-Block-Digest\":\"sha1:PI6665DS2TXN7UEHZC3ASGJTKYNLSEB3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400188049.8_warc_CC-MAIN-20200918155203-20200918185203-00395.warc.gz\"}"}
https://socratic.org/questions/58cbf8e77c01492623633e19
[ "# Question 33e19\n\nMar 17, 2017\n\nThe divisor is\n\n${x}^{2} - 5 x + 6$\n\n$= {x}^{2} - 3 x - 2 x + 6$\n\n$= x \\left(x - 3\\right) - 2 \\left(x - 3\\right)$\n\n$= \\left(x - 3\\right) \\left(x - 2\\right)$\nso it has two linear factors $\\left(x - 3\\right) \\mathmr{and} \\left(x - 2\\right)$\n\nThe polynomial of $x$ to be divided is\n\n$P \\left(x\\right) = a {x}^{3} - 9 {x}^{2} + b x + 3 a$\nThis is to be exactly divisible by the divisor containing two linear factors $\\left(x - 3\\right) \\mathmr{and} \\left(x - 2\\right)$.\n\nSo\n\n$P \\left(3\\right) = 0$\n\n$\\implies a \\cdot {3}^{3} - 9 \\cdot {3}^{2} + b \\cdot 3 + 3 a = 0$\n\n$\\implies 30 a + 3 b = 729$\n\n$\\implies 10 a + b = 243. \\ldots \\ldots . \\left(1\\right)$\n\nAgain\n\n$P \\left(2\\right) = 0$\n\n$\\implies a \\cdot {2}^{3} - 9 \\cdot {2}^{2} + b \\cdot 2 + 3 a = 0$\n\n$\\implies 11 a + 2 b = 36. \\ldots \\ldots . \\left(2\\right)$\n\nMultiplying (1) by 2 and subtracting (2) from the product we get\n\n$20 a - 11 a = 486 - 36$\n\n$\\implies 9 a = 450$\n\n$\\implies a = \\frac{450}{9} = 50$\n\nInserting the value of a in equation (1)\n\n=>10xx50+b=243)#\n\n$\\implies b = 243 - 500 = - 257$\n\nMar 17, 2017\n\n$a = 2 , b = 7$\n\n#### Explanation:\n\nIf $a {x}^{3} - 9 {x}^{2} + b x + 3 a$ is exactly divisible by ${x}^{2} - 5 x + 6$ then\n\n$a {x}^{3} - 9 {x}^{2} + b x + 3 a \\equiv 0 \\mod {x}^{2} - 5 x + 6$\n\nthen $\\exists \\left(c x + d\\right) | a {x}^{3} - 9 {x}^{2} + b x + 3 a = \\left(c x + d\\right) \\left({x}^{2} - 5 x + 6\\right)$\n\nor\n\n$\\left(a - c\\right) {x}^{3} + \\left(5 c - 9 - d\\right) {x}^{2} + \\left(5 d - 6 c + b\\right) x + 3 a - 6 d = 0$\n\nSolving\n\n$\\left\\{\\begin{matrix}a - c = 0 \\\\ 5 c - 9 - d = 0 \\\\ 5 d - 6 c + b = 0 \\\\ 3 a - 6 d = 0\\end{matrix}\\right.$\n\nwe have\n\n$a = 2 , b = 7 , c = 2 , d = 1$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85162044,"math_prob":1.00001,"size":368,"snap":"2020-45-2020-50","text_gpt3_token_len":93,"char_repetition_ratio":0.10714286,"word_repetition_ratio":0.0,"special_character_ratio":0.2201087,"punctuation_ratio":0.015151516,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000033,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-05T09:27:28Z\",\"WARC-Record-ID\":\"<urn:uuid:6f401be7-0f46-41a6-b173-3c097a940287>\",\"Content-Length\":\"35972\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ceea8afe-e6bf-49fc-8887-379b9dda2807>\",\"WARC-Concurrent-To\":\"<urn:uuid:fae76dd8-aac2-4f09-b729-df1db552a0d1>\",\"WARC-IP-Address\":\"216.239.38.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/58cbf8e77c01492623633e19\",\"WARC-Payload-Digest\":\"sha1:2WLPY7NYXUGV7Q37LLQE26CUE6AAD2Q2\",\"WARC-Block-Digest\":\"sha1:GHU7HWNM6YWCCVXFUIZEQETDXTEQYCUZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141747323.98_warc_CC-MAIN-20201205074417-20201205104417-00583.warc.gz\"}"}
https://socratic.org/questions/how-do-you-write-the-equation-of-a-circle-with-a-center-at-6-7-and-a-diameter-of
[ "# How do you write the equation of a circle with a center at (6, 7) and a diameter of 4?\n\n${\\left(x - 6\\right)}^{2} + {\\left(y - 7\\right)}^{2} = {2}^{2}$\n${\\left(x - h\\right)}^{2} + {\\left(y - k\\right)}^{2} = {r}^{2}$ is the general equation where $\\left(h , k\\right)$ is the center. $r$ is radius thus it equals to $\\frac{1}{2}$ of diameter, $r$= 2." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6922903,"math_prob":1.00001,"size":249,"snap":"2020-24-2020-29","text_gpt3_token_len":65,"char_repetition_ratio":0.114285715,"word_repetition_ratio":0.0,"special_character_ratio":0.26104417,"punctuation_ratio":0.08,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000012,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-08T01:46:36Z\",\"WARC-Record-ID\":\"<urn:uuid:a59ec779-ab6b-4b60-9a2e-281e79d8f561>\",\"Content-Length\":\"32813\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7e1ea4f9-2bb1-4245-891d-078b808b4717>\",\"WARC-Concurrent-To\":\"<urn:uuid:d654405f-b07d-4c0d-9594-6d7f9882c112>\",\"WARC-IP-Address\":\"216.239.38.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-do-you-write-the-equation-of-a-circle-with-a-center-at-6-7-and-a-diameter-of\",\"WARC-Payload-Digest\":\"sha1:BULHBMT733RSO4LQR4XBGLC7GKSUCM2U\",\"WARC-Block-Digest\":\"sha1:YWSSF7AVKQSFYKZGWD2E4YFCWUHETUBF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655896169.35_warc_CC-MAIN-20200708000016-20200708030016-00493.warc.gz\"}"}
https://homework.cpm.org/category/CC/textbook/cca2/chapter/8/lesson/8.3.2/problem/8-158
[ "", null, "", null, "### Home > CCA2 > Chapter 8 > Lesson 8.3.2 > Problem8-158\n\n8-158.\n\nSolve each equation.\n\n1. $\\log_3(2x - 1) = -2$\n\nConvert the equation into exponential form.\n$3^{-2} = 2x - 1$\n\nSolve for $x$.\n\n$x = \\frac{5}{9}$\n\n1. $5^{\\log_5(x)} = 3$\n\nRemember the Inverse Relationship of Logarithms:\n\n$b^{ \\log_b (N)}$\n\n$x = 3$\n\n1. $\\log_2(x) − \\log_2(3) = 4$\n\nRemember the Quotient Property of Logarithms:\n\n1. $\\log_3(5) = x$" ]
[ null, "https://homework.cpm.org/dist/7d633b3a30200de4995665c02bdda1b8.png", null, "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAfQAAABDCAYAAABqbvfzAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAyRpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuMC1jMDYxIDY0LjE0MDk0OSwgMjAxMC8xMi8wNy0xMDo1NzowMSAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bWxuczp4bXBNTT0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL21tLyIgeG1sbnM6c3RSZWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJjZVJlZiMiIHhtcDpDcmVhdG9yVG9vbD0iQWRvYmUgUGhvdG9zaG9wIENTNS4xIE1hY2ludG9zaCIgeG1wTU06SW5zdGFuY2VJRD0ieG1wLmlpZDo5QzA0RUVFMzVFNDExMUU1QkFCNEYxREYyQTk4OEM5NCIgeG1wTU06RG9jdW1lbnRJRD0ieG1wLmRpZDo5QzA0RUVFNDVFNDExMUU1QkFCNEYxREYyQTk4OEM5NCI+IDx4bXBNTTpEZXJpdmVkRnJvbSBzdFJlZjppbnN0YW5jZUlEPSJ4bXAuaWlkOjlDMDRFRUUxNUU0MTExRTVCQUI0RjFERjJBOTg4Qzk0IiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOjlDMDRFRUUyNUU0MTExRTVCQUI0RjFERjJBOTg4Qzk0Ii8+IDwvcmRmOkRlc2NyaXB0aW9uPiA8L3JkZjpSREY+IDwveDp4bXBtZXRhPiA8P3hwYWNrZXQgZW5kPSJyIj8+RSTQtAAAG9JJREFUeNrsXQmYXEW1Pj09PVtmJjsBDGFXiCKKIBJ2REEQQdaARBBiFFRAnrIoyhqCgLwnEfEpPMAgggsGJG7w2MMuiuwkJDGQINmTycxklu62/r5/0ZWaur3M9GQCc/7vO1/fvrfuvXXr1q3/nFOnqhLZbFYUCoVCoVC8u1GlRaBQKBQKhRK6QqFQKBQKJXSFQqFQKBRK6AqFQqFQKJTQFQqFQqFQQlcoFAqFQqGErlAoFAqFonKoLveE2jM+uTHk+zNGjjZyj5EXqJhgQH3KyClGOo1MNbK2vzOSTWakbmWTjHp+69y2QqFQKBQW85+avvES+kaCKUaOMHK8kcWS9zQkjYzj9l1Gnuj3nCSykuxIaa1VKBQKxbvLQt9I0Gjk30YehtPA2d9tZJGRPYxs0++EnjCaRFe1NC4emSN2hUKhUCiU0MtDjZE3jRwXODaRhP5hI7f1ZyayVRmpWdMoqbb63LZCoVAoFAOFd2tQHHzcWxppChwbxt89+zsTWWOV161okkQ6oTVJoVAoFErovQA8C6OMjA0csy74nSXfn155GA6vXlcj9cuHqnWuUCgUCiX0XqDByOiIUnNu9ThCh/W+T79Z54bEa1c1SnVbjdnW/nOFQqFQKKGXi/cbeR+3Px44PtrZPrw/M1K/vDlSKxQKhUKhUEIvG/tK1IcO7CE9KXVn/v7ZyAFGNqm4dY6hautqpGZNg7rbFQqFQqGE3sv8gtDXOeTt9pMPN/Ixh9CNCS2HVJzQq7JSu3qIJDtTaqErFAqFQgm9FwBZY/z520ZWS9Sfvrdz/AjHeke6RyWaOa6iwJBzuNsTyuYKhUKhUELvFdAn/rREQ9NeN/KkkaN4bAQJ/x7+hy/8RhL+DpVk86p0taRadOy5QqFQKJTQe4NtSNog8aESzdf+RyOfolX+ZSMPSDRbHIBhbXcaaTcyuVKZQP95am2dVHelctsKhUKhUAxGQoeP+hoj1xu5yciFZZwLUv6NRIuwWMKeLdGscRdLFN3+O8lHuY800mbkdiOnSn7CmT4Sukj9imZJZHShOoVCoVAMXkLH/bBc2ywj5xg5wcjnSjgP4803owU+kvsQ8PaskYeMnGbkCu6vd44D15LMT6yIRmLUiZq19WqdKxQKhWJQE/q2Eo0hR7/3GCMLJFoGddciefymkR/zfyN/U7TO20niNhjOTizTwN9/GPmrkfMcsu+ddV6VkVR7nVS31mn/uUKhUCgGNaGDyP9l5F6J3OMdRr5n5FwjH4w55wwjrxj5G/+787dfQwsd/eZf5b46z1IHLqUicVLfzHOR6vYaqepOas1RKBQKxaAldIwXR7/3XIn6wVskcp+D4NEHfomRXbxzDpJorPkPnX2WsDHm/FEeQ/Db13j9as9CF6bDuPSLJLygS4xFns1Z4lYy1encdK+JjA5XUygUCsXgJfQvGblDIrc7VkI71sh2Rg418gKtdFjrdknUCUYmSdTX3u1c533O9uP8vZrKAYLfugKEDpwvkZv/nFIzjGj2mtUNuRnhILWrhkhVV1LXPlcoFArFRocNtR76YUbeMrKElvqJJGlMDvNFWta3GDmGFjf2wa89xchSI0NoqeM6n3KuO4q//5Ro7fPvS34WOZ/Q0ZeO6PoLmPblYpke8crmhtRr1198pSohmaT2nysUCoVi8BH6hySa8AWBaacbSUvUdw7vAJjyK0a+bmSakVVGWiVykSPgDUPVOmlZg/zv4q+d3rXOuQ/c9kdKNFY9ROjAd5nmBiN7SX4IXBCIZI/c7vlkiYS62xUKxYbH/KemayEoCqI/Xe4YKnYKyXO8kZslmhBmUyM/kshNjpXTrpNoARUExX2e5yVI7BCYwwh8m0kLf0vnHm7g22u00LMFCH0l8zSBaRUKhUKhUAvdA4aLoX97FxL19iTVZ0nMcHnDHf5Vh4hB1KOYbpGRtRJN07o/rfKmInm8yMhEEjWC69p4D1x/SMw5mF3uKp77dyN3azVQKBQKhRJ6HqMlH8X+iJHlsn4wW7kAIY+k9b41lYQPkPDx20zLf3zM+bDkEdmO/vUXjbxqZB6tfATGITjvVxK53v+uVUGhUCgUg4rQs15AWCL9jtf+TUrkMM86vyGgfzr3E9sn3WrObzWJFprtZ5z9uOHmRnYzcqCR/WJIHX3wB1GEOYGSgWC4xySKuMc1fm9kHyMLtTooFAqFYtAQet2yJvJxQjLVGelsbn9nnDb25Qg+QzLPRPSbSaZzc59Ho72iKPFkR7VUmbSZmgJG
fO787DtR5bx+xlEefk/ixopqCKA7TOJd7Ql6EPaW/JKrrUyPceyH0HpXKBQKheK9T+gjX9jCsZWz0l3XJV2N7dLZtC43RrtueWN+nXCQfqpb2ke1SMfwVknXduUixhsXDZfGN0fkyD+TSsdb6WZ/d32ndAxtM+SfkM7GDllnrgXNAJO7MPocUfD/TxkvmcRZ5nqnSmkBf5b8ETX/oERD2u7UaqFQKBSK9zyh+y736vaUVLfVSMPbCE5ff4hXDu01UruqIWfNg5xxvHZ1Q2TVGx5PdhbOAqZaradXAOfAI9A+eo20jVljlIeGnMcAln7HsFbpauh8KV3XNaW7oeN2c+1rEunEeEPuXQVvkIAHAHnOol/+DpN+lsnYmWb/v8p1Xkjk1u/QaqVQKBSKjZ7QexB8jsCzBQZ0g+SjrVRrtG4KplB1jPBid3jnfCA3c1tLvQxZNCJH9u+wqSF2XCpd0w3Sv79t9JqPdA5vHZdOdVfB2x6arjVrlIzkulR2yOLmNnMcD5HoGtIxdN3IlrebFozOXb+HghKPL0i0UMxtWq0UCoVC8a4jdAJ907tLNIkMItPB2JgZDtHjz5DofHLEvdFv3SSFJ3gBE6+QaJz569ZDUN2Rst6CKl5naBb6QXcyR+5GMplU98PrRrQuXjt2ec6yr0onc3ey+WhcOFIaI8XgIJuPbFUmaxSOj1V1VafM9bHe+vz1lICsYf2wEgL3va7aolAoFIp3JaFjKVPMwY7JWjaPSYOo8usoLuCixpKoW5R4Lyzmgrnb/8fIn5z1yJO8TjThDAztZHQskU7OHvLvofvVL2/sXrPlMml934qc6z/VWifD5mwqtSuHIP0hhsBnradBGOKnsnCyT+gFACVG54RVKBQKxYCgLzPFYeKY+yUKJNu8QLodSbhYLrXZNXYlmgimVMCC/rREE8P8oKTrJLJ7GgI/VjJVMmzupjLipbHSvHCUjP77VjkyN6RdY6z1qYHz7FaXVhGFQqFQvJcJHdO3wqrdrYxzMIf6LVIZtzQmhil16taLDUE3od8ervjm18fkoutpgcOz8BGtBgqFQqEYrIR+JS30cnGERCupVQJYaAV99sVmo8MSrWfkTHlD4jkijyzwkfQuKBQKhUIxKAkds7JNjDn2N4lWTcPCK/MKWNcIT0/HHEcA3F8kWp0NU7c+GZMO1zi1xDz/l0TLtrr4tqy/trpCoVAoFO9a9CYoDv3YqcB+zNp2vOTHYWNd8wckmnvdBf7vIdHCLCE8Z+RgT+k4wciNJHEXmLK1toByYDGc1vgU/se88F/T169QKBSKwWyhfzSwL03L3J1U5d8S9XPPpcyhzCepJ0pUMtDZfatEAXg+xkq03Gop0eUnG9mV25dIFKGvUCgUCsWgtdBDEe1wky8I7P+NkT95+0DkiB6vr0D+s5JfBqYY4FU4z8i1Ro7ZCN8FFIzNJD+Gvz2QppZeiqxXnp0SnqEuxXJexzSFUMf0uG9cXEKC10tKgWV3nGtUM72ftkviZ9SrYV46me+4Z+qKKSMAK/8hRgLL8S6SwvMcWDQzvascJkuopwm+szYqyA2SH3kRum89v6EE33NrjKLdwLy0Ffh2G4qUg32uVon3YtWxXrWXUEd8FCqftTH765n3cuqEC7zXUczvGyW8W5TzFrwvFmda1k/5wn0wEqelQJ7qWX/XlHC9Jr6z9hLrr0LRKws9tPhJS4FKutaTFjbUcSQcIhO48vcP7F9sZHWJhA58zshvpW/D9SoNNFAIMkRXQ27yHInWkL+ADa2LqTyGCXv+6ciz9GLs7aWfxLT3s4GIAxq8x5n2oALpQCB38X7PeXlw5bNM/2mmfdY59jz/38HjPr7BfFwVk4ejeXxG4NhHeN2XJJr/AOWJlfWOK/IO7D0v8fbv4z0Xnvlv3vNAfsf07+exh6ic+cR5Ae9jPVbYvijwbhDvMZv32jMmz0fy/FsK1P+TmZ9rCjz7VF7nm72ou7vElAfK6RGWq0/4tzL9PwJ1Au/04zH3QnDrLyRaCvkVvtvZRd7tRL7/13gOzv2l9OwGRPndXCBfuO8nipSFfbffKpBmBtNMLXKtk5gOsUTDlKYU/WmhZ2MIvbNCefqQ00BmaG3tE9Nozab2HCLoNY5G7Fp3owNp0T0wpgzFoFLYjB6Mnfn/VeYRDc6lEi0aM9GxEDZhwybcZxeoBfHbYMVT2ABZLX8bCqam/WlMPr4i+eF7Q4rkGaMbtuS76QqUWcJpxOud/HY69cfm91iS6IWedY38xgUsDuXxVd7+/VlvhrNsXmR5oSG+nedMi7EyJ/P4ZCoSqx2PyFjHE5Ry6ppb31c639P2tIirPCX4VxKtBgjMo/W1PZ/9Uzy2wrnODvRWYA6HCQEr3JbDigIWHIJGtyWxX0GPgA+U89Ysq3JRRyXGWrJZx1BA3vYyciiVsLWO8rgd03YG6vBRVODvcu6D7+MevosMFTYowntQcPw7Xt6+4xDnElrmyOsJLG8onU85dXIrJ1+2TXHzdQzzNTNG0Z1MRWwyvYAhq34sy+Ub/BbfiCnT8/jemjYy40PxHrTQQ+iqoFtoNK2PI9kQ7BtDtLDkf+6QiA806D8q4X7PsdFMDED5X83GaIFEa7uPpxxPUsAwv9O9cgZ+xgZ/R/4iNuA2ktN0yc++57pZz2BjEfIQuKMFisUjWCI7xcmDK+PZ+LrXQgO8k5Nmd8fC/j6f3ffQxE3qkw4QKkj8Jv7+kff6MJXDHzLNZVSQfNgpi4VKneuheJjPY8t5MvfPoQJkn/dwrx52eN/Dt0jYq1incc4H+X6XkbAv9JTmDsfrcEGJ5eBiJz4b0OwoE6FvN84zVgz2/UKp2I1ltAOf78tU9A/y6rDN77leHd6dym09CXGYo1TdSDKczfLYieV3GdOc79WhfRwyv5RpbZ14gG3M9Z4HzObrvJh81Xn58pXJcY6XZq8i3w6I+rSYNJ93PAgdou52xQAQ+kBgKt1icV6GIbRKFhS5DhqDtwcg/2igPsftMyVa/jXDjxgW5ZU8dnbAbbmazzWPv3B7TqIS00wLxMeOtH58wHrbtBf5X+TkwZW5bMh90niNx+fTMsJ8BLMc5aAv+CS9Bkv4PHNYlktIpo+wrp8ZOHcij83l/0nOsTbut+X8hkN+9nlej7G0xCGkE7l9Cb0IHSyTu0ggQqKPc69+m5ZoOTiGHoV5zO+kfqzLackHvM7n9g2S78I4WnpOKLXUq8OoEyfxnYEcd2G63aiItbKePM93i/7w7xm5m+lOdK5tn/XPVBiX8ZyX6alq4/UPCTwL7v8vL1+TuB+KcqhLwN77Nf6eUEKZTQ54C1EPz1JaUgw0oW/oRUlg2V5cJE2t89HH4T5q300DUPZoHBpp3TweOD6dpPftwHtKxlhLL3M7zl39TU8Bgqvwq45VWA7K6a6B5VoT2P9bx5rsSx3awfG2LA0cn0Kiv9Xb30yLKMuyWUhLb8uY+6Sc56ktMW9Qlmx/+gOB4w+R3DeR9fvdq0g8C3jfH5dxT6Q71lEGXqVC8MF+qstx5fG04wWqLaH+LCVxAkMdi1e
oWL0WOOde/m7r7NveO+biLXrAzohRxEL5Wu7UK1/p2oyKwTpes4WK+ogSPJH+PBoHSnwMgULRL4Qeck03SnhseiXRzgbxMDZSxQjIRr+jEX8wcBxW0jkFnqm/Yee1XynhaG7sn0Fr3Y+E7o7xSNh+8IXesQdo2XzMs0pgOW1HC/8fZea/EjETbzl5b+jDdWwjG+dpQUAUgsf+GmhA4SlBlwC6CeBih2v1iAq+5yaSWafk+9r9et1CIqnzvrMsLbZVtCi/U+I94fL9AOsBvAD3U2Hqr9EdWQlH2u/rELVfx0PR+weQjLO08oHhzjUk5juxdci2aU1F6sPdVJifCRwL5etAyceCvOwd+yy/ZVjyCGJDtwCi8A8t0Hb+kt/w1x3FxSrcwEyJjw1SKCpiZbkNUKjRapJ8UE9fAGviSoeQYXku4wf+ai8UljQVgNmelfgTiSJJB7rsu6T8/stNaNW6VuC32OgsCxAXgv4w8c+1THc3G3jr3kMU9GllNN7AFWwwk16D9b2YhlJilCrrceiLhZ4sUDcLwbpGf+80pCdy/3SpzOp5SckPLQzFBXQ7+xMBJe0JiVzXeEfnUvF4usg9j3eIK81fBGIhIvxyqVwAq1uXMT/FWueZP8P8WgLzyxJW7OZMm6FX5EQqP4gHedF7t+uKKJZJpwxD9WFXfjdZJ13I6j/Cy9dYenf8fPllfadThw5mHZoRk2d8n2OoKEyi9wWWOUZ9wN3/fxLFZWj/uaLfCT2k9Q7nR+AT+v5s4NNO5QSp3sCPI4TFrNCVBAgGQTBnOhbs1AEue7dhKddDcDLFByL7vyw9o5mHsnFBfy2Gtu1GBeyjtDhmUukpB3EL8/y0DEJ3yyJbobIsFWioD2KjbUdVII5hCZ9tl148R2/ec7H3D+/Xj0jGu7Px372AEjhC8gFwv+bvoxL1Ce9A6/3+CtdlfP+PxRybwW/Px3HSc8hZG7/9s5xyK/ZuE166uHNQhhO8c690lA6LYwKeDHjIEIB7tqeYjGd5tku+L38W0+9PBXtujBJyNQkdVvr/UuGCAYKA1/kyMF5DxSAk9BcC+6C9fs2z8rDvssBHBFxVwPqp7qdnRV6OYkOOhV2WD3DZ9+WDfZtKSZKNACwjuPxulsi1HipTuG2voyJzjuOt+G82pMky84358Z+UvFswUaB+FPKgDFRZHk6yhJvddjesIrmfxkb9mQrlLdGH57CW4mkkzY+TBBbFXOMztEThfXrEsW7RdQOX/cR+IPRuWq7dfKcZEtmdjlLhA11hiB9AVx2i4D9EMjy1l+82UeQcxGu8QuPCkm1XgXwlWc7IF0ZOTAmktYGHs0jCwJtMj2NHSj641QW6l+5gvUM3GQJz0RXWQkLfSqlJsaEI/a8kR/+jQXAV+o7gEkRf4BdjyBxE9KCEg6T6E8v4cR0vPYOjBgJtzsddI4XXhk94FsgvJN//Xw5gZaCf7mj+XyDR+OjeAIQxu49lYPu+OyTvUrWKRZzClw4oA+scS7FURcK6SuGh2JPfQkbyoyKg/F1c5L2Ugg5aZPUSjhOwM9+JxA/Vs+WNbo6LJBri9ouYdLYb4SXvuawCcBjLaWUF6/JKWqpryzgHwai3OSQICxf90RjG+ZyTrt3xMoUwxClnW286vPplFVeLmwsQ+h+db+JNtmeH0ZvldtHVOJb8K3z+JOuntcqhPP1Qes7SZ2daRJ5ukXyA73S2Ux9QalL0Br2xkBBA9ZeYY0fzY/lpDJkDP6FLKjUAz3ujQ2YDjVX8qEfHNFZoQOACnik9I2t7a9kulfUnl7mOjXBvrldXgTKw0elLnEbYTuoyJuacTZ3ycz0WwLiYc6ZQibya/3eSfDQxJtV5lMdhrf+A+xE1vW8FnnEFSQllHJo2eRRJqU16Dvfzgbw9zXNs95Gr6CHP+3H7C95zXeeU38H94G0q1zho8Ej0CSo2/ph7G/W+eUybMc6rD1lHWdk65t7betcOKQhW6XhM8rP8uXBHDZxHb8iD/D2f+6Gc7FqgDOyshlYpvVYpSbGhCd0O8elNANzj1EIH0ipevJGU/Rx6K+okP3TMfS/Q2g8gma8ONKC9xfW0gEAMN/XhOi1lpE1Lz0AsDEeyE7Xc5+x/mL8TAoQKIjuJ2+5qfU84SpAfXTyWFu2+TkNvXaVv0Br7jSP4/6pDin3FUsfiDAUens73PUcKj2e3jf43aFmGukg+T6JEEOTtged6vsBztffxOftSJ9P0PgBwU3/CMyDWkZxPCNSHL3h1QBzP0XHSc6w3vAC7sx17rEi+YO3b2QWP8IwU6+GZS0+DW9b4P9/zBMV5by6nV+g6Cfe3KxQlo7f91a+wgt9awCoKWfbHSt9dmO8VrGUjdj01fFikGGJUS9I6hA3Kd6Uy0dYWi9lgurOR9QYns4FLBOoUvAovelb1+ZJ3PW5FTwkaW7g1f+aR80zWL/R7wmWJvkaMrf86FYGF9LZYPMWG9Bg2pldTYRlH5RPW3WtsNF1X6eUSng4XZT+Lv2OkbxMPZfme9yPBQIGzUd/HOXkBcZQy2uFJWuoXBAh1IrevlfA0txNIdgfwHSxwjkHhCc15kKLy9Eg/fw/38N1/gs/2WYcwf05FBvVkRyp9GP+Ncd8Y5vaW5GeNBG6gVwZu9XtZHkizN89JUZl9roR8WSt9Ar/FQ6lkH+5Y578LnIeI/RlUsnBea8z1URf+UKaCrFBUlNCFHzg+kMvYKMW5YGHJ3yzR0JvVXgPUHEhf7rKmdpUjH0PLuEbcilH93c8PMkFUMmaz+hLFAtbk2bJ+P7V1B5Y6ZrsupkxDQ4CaS3hmt6xPLZBuCQndXmszkqePZ+ideMuziibz3EMCxPQyFZ63A+ckaeH5i6y8SOsObtmjqBRkJD9TnY+H+Qyb0AK8xiub5hiLtNqpey4xoovqFF7ncIcMrKcDBHaHsy/pvOOQJY5vDv26OzvvAwqDndp2ZsxzQcnBzHbbsq5d6NxnP8m7631MjyF06wIfVoa3z9az2oCVPo1K7aFU6OxznMO6jzI8V9aPTH+ZyqXr3XiLRHozy+hG716/ooLgoqlIvv7A+ngg68WmrE9xAYb30usxjnVyRoF7rIkp16GiY9EVG4jQhZYSgt8QbIbpRnciQWXo9kODfZ/0nOjEupum8eNIO/mZ1wt33Q9oSaWdRnCJlD4U6kESjjseGNd4dgO8g8tpBdg5vrtpOaCBn+OlvZ3l83AZStc0elSKWZFX0QouZLV08nqjC3gNkpJ3f2Jq3qmyflBQgiSGYw9IeEz0clpoIL6DmS8ohugT/rX07IKwjeJRJDpEem9BpegR75x2PkMhFze8J6eTIBd75DGNhNEZ4/24hPfw83gTlbOJJJkEy+D2wPtZRpJHw7405tuBBXi8971cwW8t7n2jfqPvfU/nPFiIr0p+oZQQad8Xc715VC7WluF5g7W8jazvIreAgnUWyTLlKaCnsqxQJ7Zk+T7EfS0xyuIEltFeJMc3SMx/jsnXdgXydSYV03rWtWl8f3HBhVA4v0KPwhpHMYIy9XiRMprH72ZlActeoehpcWWz5Q
3/3WrX0wZ7kUmiKjjC62w25NdrtVIoFJXG/KemayEo+tVCH3x0noiN/XlaCg87UigUCoVi47HQFQqFQqFQbHzQgAuFQqFQKJTQFQqFQqFQKKErFAqFQqGoCP4jwADQNvw20jA5ogAAAABJRU5ErkJggg==", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6832811,"math_prob":1.0000083,"size":403,"snap":"2020-45-2020-50","text_gpt3_token_len":103,"char_repetition_ratio":0.15037593,"word_repetition_ratio":0.53571427,"special_character_ratio":0.22332506,"punctuation_ratio":0.16901408,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999989,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-25T14:38:35Z\",\"WARC-Record-ID\":\"<urn:uuid:78172935-02dd-4f9a-bd0d-ee1c55e0cd88>\",\"Content-Length\":\"42184\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2f4f060d-47f7-46d1-b8ab-4db472cd23e9>\",\"WARC-Concurrent-To\":\"<urn:uuid:e6e2906b-c6db-4fc9-98c4-839b8b9fcf24>\",\"WARC-IP-Address\":\"172.67.70.60\",\"WARC-Target-URI\":\"https://homework.cpm.org/category/CC/textbook/cca2/chapter/8/lesson/8.3.2/problem/8-158\",\"WARC-Payload-Digest\":\"sha1:ELMS2MALCMKK4O4FQFLDL7E3BJJCMNHY\",\"WARC-Block-Digest\":\"sha1:CWWV3H6Q3PTPLRMDDVFJCCHDGCQAOXCL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141182794.28_warc_CC-MAIN-20201125125427-20201125155427-00525.warc.gz\"}"}
https://www.khanacademy.org/math/geometry/hs-geo-foundations/hs-geo-area/v/area-of-a-kite
[ "Main content\n\n# Area of kites\n\nCCSS Math: 6.G.A.1\n\n## Video transcript\n\nWhat is the area of this figure? And this figure right over here is sometimes called a kite for obvious reasons. If you tied some string here, you might want to fly it at the beach. And another way to think about what a kite is, it's a quadrilateral that is symmetric around a diagonal. So this right over here is the diagonal of this quadrilateral. And it's symmetric around it. This top part and this bottom part are mirror images. And to think about how we might find the area of it given that we've been given essentially the width of this kite, and we've also been given the height of this kite, or if you view this as a sideways kite, you could view this is the height and that the eight centimeters as the width. Given that we've got those dimensions, how can we actually figure out its area? So to do that, let me actually copy and paste half of the kite. So this is the bottom half of the kite. And then let's take the top half of the kite and split it up into sections. So I have this little red section here. I have this red section here. And actually, I'm going to try to color the actual lines here so that we can keep track of those as well. So I'll make this line green and I'll make this line purple. So imagine taking this little triangle right over here-- and actually, let me do this one too in blue. So this one over here is blue. You get the picture. Let me try to color it in at least reasonably. So I'll color it in. And then I could make this segment right over here, I'm going to make orange. So let's start focusing on this red triangle here. Imagine flipping it over and then moving it down here. So what would it look like? Well then the green side is going to now be over here. This kind of mauve colored side is still on the bottom. And my red triangle is going to look something like this. My red triangle is going to look like that. Now let's do the same thing with this bigger blue triangle. Let's flip it over and then move it down here. So this green side, since we've flipped it, is now over here. And this orange side is now over here. And we have this blue right over here. And the reason that we know that it definitely fits is the fact that it is symmetric around this diagonal, that this length right over here is equivalent to this length right over here. That's why it fits perfectly like this. Now, what we just constructed is clearly a rectangle, a rectangle that is 14 centimeters wide and not 8 centimeters high, it's half of 8 centimeters high. So it's 8 centimeters times 1/2 or 4 centimeters high. And we know how to find the area of this. This is 4 centimeters times 14 centimeters. So the area is equal to 4 centimeters times 14 centimeters which is equal to-- let's see, that's 40 plus 16-- 56 square centimeters. So if you're taking the area of a kite, you're really just taking 1/2 the width times the height, or 1/2 the width times the height, any way you want to think about it." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9281157,"math_prob":0.94198525,"size":3012,"snap":"2019-35-2019-39","text_gpt3_token_len":775,"char_repetition_ratio":0.15026596,"word_repetition_ratio":0.017421603,"special_character_ratio":0.23373175,"punctuation_ratio":0.10138249,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9946609,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-17T15:43:27Z\",\"WARC-Record-ID\":\"<urn:uuid:86de4927-879f-4389-989c-0357eef7f11f>\",\"Content-Length\":\"194938\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:23495942-b383-4e8a-8c59-4eca8833b0ed>\",\"WARC-Concurrent-To\":\"<urn:uuid:ce6e6ae6-bba7-478f-8252-ebd3a1e2e681>\",\"WARC-IP-Address\":\"151.101.249.42\",\"WARC-Target-URI\":\"https://www.khanacademy.org/math/geometry/hs-geo-foundations/hs-geo-area/v/area-of-a-kite\",\"WARC-Payload-Digest\":\"sha1:MSDAHTHXCJO5AR4ULM5HFTQGZ4P4TEDR\",\"WARC-Block-Digest\":\"sha1:6E4ZNTQEMOGONRLDP57CY4MHT424WMBM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027313428.28_warc_CC-MAIN-20190817143039-20190817165039-00087.warc.gz\"}"}
https://math.stackexchange.com/questions/tagged/closed-form
[ "Questions tagged [closed-form]\n\nA \"closed form expression\" is any representation of a mathematical expression in terms of \"known\" functions, \"known\" usually being replaced with \"elementary\".\n\n2,397 questions\n26 views\n\n24 views\n\nSolve and asymptotic expansion of $\\sum_{a=1}^{H} \\sum_{b=a+1}^{H} \\left\\lfloor{\\frac{H}{a\\, b}}\\right\\rfloor$\n\nI am solving constrained polynomial systems resulting in constrained sums. I am looking to see if $$\\sum_{a=1}^{H} \\sum_{b=a+1}^{H} \\left\\lfloor{\\frac{H}{a\\, b}}\\right\\rfloor$$ is expressible in ...\n63 views\n\n152 views\n\n114 views\n\n431 views\n\nIntegral $\\int_0^1 \\frac{\\ln(1+x+x^2)\\ln(1-x+x^2)}{x}dx$\n\nProve $$\\sf I=\\int_0^1 \\frac{\\ln(1+x+x^2)\\ln(1-x+x^2)}{x}dx=\\frac{\\pi}{6\\sqrt{3}}\\psi_1\\left(\\frac{1}{3}\\right)-\\frac{\\pi^3}{9\\sqrt{3}}-\\frac{19}{18}\\zeta(3).$$ I have thought about the integral ...\n96 views\n\nClosed form for $\\int_0^t(x+c)^p(1-2x)^{N-1}\\text{d}x$\n\nI am trying to find a closed form for the integral $$I\\equiv\\int_0^t(x+c)^p(1-2x)^{N-1}\\text{d}x,$$ where $N\\in\\mathbb{N}$, $p>0$, $c\\ge 0$ and $t\\in\\left(0,\\frac{1}{2}\\right)$. I thought to ...\n41 views\n\nFinding closed form of exponential generating function involving identity permutation\n\nFix a prime number $p > 1$ and for a positive integer $n$, let $a_n$ be the number of permutations $π ∈ S_n$ such that $π^p = id$, where $id$ is the identity permutation. Find a closed form for the ...\n101 views" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.710646,"math_prob":0.99993,"size":14916,"snap":"2019-26-2019-30","text_gpt3_token_len":5674,"char_repetition_ratio":0.14511803,"word_repetition_ratio":0.011789925,"special_character_ratio":0.3795924,"punctuation_ratio":0.106796116,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000008,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-26T01:42:16Z\",\"WARC-Record-ID\":\"<urn:uuid:3d0d71bb-79e1-4b65-a69c-644a69acf81a>\",\"Content-Length\":\"240588\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:df57a462-944f-4b7d-b75a-b2c334ea6d96>\",\"WARC-Concurrent-To\":\"<urn:uuid:a465a3fe-0d44-423a-9250-4e3d125c28ee>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/tagged/closed-form\",\"WARC-Payload-Digest\":\"sha1:QBC2XO5UHD7QQH7TEBEFZFCXTWN2SX6K\",\"WARC-Block-Digest\":\"sha1:EQIZYIZOCKXQGNWSUOZIHWK5TZ5PHMUO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560628000044.37_warc_CC-MAIN-20190626013357-20190626035357-00507.warc.gz\"}"}
http://topcoder.bgcoder.com/print.php?id=858
[ "### Problem Statement\n\nDetermine the number of ways to cut a convex polygon with n sides if the only cuts allowed are from vertex to vertex, each cut divides exactly one polygon into exactly two polygons, and you must end up with exactly k polygons. Consider each vertex distinct. For example, there are three ways to cut a square - the two diagonals and not cutting at all - but only two ways to cut it to form 2 polygons, and only one way to cut it to form 1 polygon. The order of cuts does not matter. Since the number of ways is very large, you should return the number taken modulo 1000000000 (one billion). In other words, if the answer would have at least 10 digits, return only the 9 least significant. If there is no way to cut the polygon into k pieces, return -1.\n\n### Definition\n\n Class: PolygonDecomposition Method: howMany Parameters: int, int Returns: int Method signature: int howMany(int n, int k) (be sure your method is public)\n\n### Notes\n\n-The vertices are distinct - there are 5 ways to cut a pentagon into 3 triangles, not just one way.\n-Only one polygon at a time may be cut - you cannot cut two triangles into four triangles with one cut.\n\n### Constraints\n\n-n is between 3 and 100, inclusive.\n-k is between 1 and 100, inclusive.\n\n### Examples\n\n0)\n\n `4` `2`\n`Returns: 2`\n A quadrilateral can be cut into two triangles in two different ways, one for each diagonal.\n1)\n\n `100` `1`\n`Returns: 1`\n Any polygon can be cut into one polygon by not cutting at all, but no other way.\n2)\n\n `6` `4`\n`Returns: 14`\n3)\n\n `31` `20`\n`Returns: 956146480`\n The actual number of ways is about 6.5 x 10^18, but we return only the final 9 digits.\n4)\n\n `3` `4`\n`Returns: -1`\n\n#### Problem url:\n\nhttp://www.topcoder.com/stat?c=problem_statement&pm=4445\n\n#### Problem stats url:\n\nhttp://www.topcoder.com/tc?module=ProblemDetail&rd=7998&pm=4445\n\nEnogipe\n\n#### Testers:\n\nPabloGilberto , brett1479 , Olexiy\n\n#### Problem categories:\n\nDynamic Programming" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87897855,"math_prob":0.95873,"size":1609,"snap":"2020-24-2020-29","text_gpt3_token_len":425,"char_repetition_ratio":0.12523365,"word_repetition_ratio":0.0066225166,"special_character_ratio":0.28216285,"punctuation_ratio":0.11890244,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98866546,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-14T14:13:28Z\",\"WARC-Record-ID\":\"<urn:uuid:a01897ac-b266-4444-bf3f-8efecda2831d>\",\"Content-Length\":\"7081\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e949b2ca-038f-49be-ac5a-a193171a12f2>\",\"WARC-Concurrent-To\":\"<urn:uuid:bda1e739-a57c-46f9-b83f-78800199f241>\",\"WARC-IP-Address\":\"78.128.6.10\",\"WARC-Target-URI\":\"http://topcoder.bgcoder.com/print.php?id=858\",\"WARC-Payload-Digest\":\"sha1:IXVNJDB5BP3G2I224CP3IMOK3HYESEUC\",\"WARC-Block-Digest\":\"sha1:VWXWNOEFKZA3ULEADMZMIYGG5LDZACOQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655880665.3_warc_CC-MAIN-20200714114524-20200714144524-00274.warc.gz\"}"}
https://silently9527.cn/?p=52
[ "# 图算法系列之无向图的数据结构", null, "2021-08-14 / 0 评论 / 410 阅读\n\n## 前言\n\n1. 基于散列函数将被查找键转换为数组的下标\n2. 处理散列值冲突的情况,有两种方式来处理冲突:拉链式和线性探测\n\n## 散列函数\n\njava中的约定:如果两个对象的equals相等,那么hashCode一定相同;如果hashCode相同,equals不一定相同。对于自定义类型的键我们通常需要自定义实现hashCode和equals;默认的hashCode返回的是对象的内存地址,这种散列值不会太好。\n\n#### Integer\n\n``````@Override\npublic int hashCode() {\nreturn Integer.hashCode(value);\n}\npublic static int hashCode(int value) {\nreturn value;\n}\n``````\n\n#### Long\n\nJava中Long类型的hashCode计算是先把值无符号右移32位,之后再与值相异或,保证每一位都用用到,最后强制转换成int值\n\n``````@Override\npublic int hashCode() {\nreturn Long.hashCode(value);\n}\n\npublic static int hashCode(long value) {\nreturn (int)(value ^ (value >>> 32));\n}\n``````\n\n#### Double、Float\n\n``````public static int hashCode(float value) {\nreturn floatToIntBits(value);\n}\npublic static int floatToIntBits(float value) {\nint result = floatToRawIntBits(value);\n// Check for NaN based on values of bit fields, maximum\n// exponent and nonzero significand.\nif ( ((result & FloatConsts.EXP_BIT_MASK) ==\nresult = 0x7fc00000;\nreturn result;\n}\n``````\n\n#### String\n\njava中每个char都可以表示成一个int值,所以字符串转换成一个int值\n\n``````public int hashCode() {\nint h = hash;\nif (h == 0 && value.length > 0) {\nchar val[] = value;\n\nfor (int i = 0; i < value.length; i++) {\nh = 31 * h + val[i];\n}\nhash = h;\n}\nreturn h;\n}\n\n``````\n\n## 拉链式的散列表", null, "``````public class SeparateChainingHashMap<K, V> implements Map<K, V> {\n\nprivate int size;\n\npublic SeparateChainingHashMap(int capacity) {\nfor (int i = 0; i < capacity; i++) {\n}\n}\n\n@Override\npublic void put(K key, V value) {\nthis.table[hash(key)].put(key, value);\nsize++;\n}\n\nprivate int hash(K key) {\nreturn (key.hashCode() & 0x7fffffff) % table.length;\n}\n\n@Override\npublic V get(K key) {\nreturn this.table[hash(key)].get(key);\n}\n\n@Override\npublic void delete(K key) {\nthis.table[hash(key)].delete(key);\nsize--;\n}\n\n@Override\npublic int size() {\nreturn size;\n}\n\n}\n\n``````\n\n## 线性探测式散列表\n\n1. 下一个位置和待插入的键相等,那么值就修改值\n2. 下一个位置和待插入的键不相等,那么索引加一继续查找\n3. 如果下一个位置还是一个空位,那么直接把待插入对象放入到这个空位\n\n#### 初始化\n\n``````private int size;\nprivate int capacity;\nprivate K[] keys;\nprivate V[] values;\n\npublic LinearProbingHashMap(int capacity) {\nthis.capacity = capacity;\nthis.keys = (K[]) new Object[capacity];\nthis.values = (V[]) new Object[capacity];\n}\n\n``````\n\n#### 插入\n\n1. 当插入键的位置超过了数组的大小,就需要回到数组的开始位置继续查找,直到找到一个位置为null的才结束;`index = (index + 1) % capacity`\n2. 
当数组已存放的容量超过了数组总容量的一半,就需要扩容到原来的2倍\n``````@Override\npublic void put(K key, V value) {\nif (Objects.isNull(key)) {\nthrow new IllegalArgumentException(\"Key can not null\");\n}\nif (this.size > this.capacity / 2) {\nresize(2 * this.capacity);\n}\nint index;\nfor (index = hash(key); this.keys[index] != null; index = (index + 1) % capacity) {\nif (this.keys[index].equals(key)) {\nthis.values[index] = value;\nreturn;\n}\n}\nthis.keys[index] = key;\nthis.values[index] = value;\nsize++;\n}\n``````\n\n#### 动态调整数组的大小\n\n``````private void resize(int cap) {\nLinearProbingHashMap<K, V> linearProbingHashMap = new LinearProbingHashMap<>(cap);\nfor (int i = 0; i < capacity; i++) {\nlinearProbingHashMap.put(keys[i], values[i]);\n}\nthis.keys = linearProbingHashMap.keys;\nthis.values = linearProbingHashMap.values;\nthis.capacity = linearProbingHashMap.capacity;\n}\n``````\n\n#### 查询\n\n``````@Override\npublic V get(K key) {\nif (Objects.isNull(key)) {\nthrow new IllegalArgumentException(\"Key can not null\");\n}\nint index;\nfor (index = hash(key); this.keys[index] != null; index = (index + 1) % capacity) {\nif (this.keys[index].equals(key)) {\nreturn this.values[index];\n}\n}\nreturn null;\n}\n``````\n\n#### 删除元素\n\n``````@Override\npublic void delete(K key) {\nif (Objects.isNull(key)) {\nthrow new IllegalArgumentException(\"Key can not null\");\n}\nint index;\nfor (index = hash(key); this.keys[index] != null; index = (index + 1) % capacity) {\nif (this.keys[index].equals(key)) {\nthis.keys[index] = null;\nthis.values[index] = null;\nbreak;\n}\n}\n\nfor (index = (index + 1) % capacity; this.keys[index] != null; index = (index + 1) % capacity) {\nthis.size--;\nthis.put(this.keys[index], this.values[index]);\nthis.keys[index] = null;\nthis.values[index] = null;\n}\nthis.size--;\nif (size > 0 && size < capacity / 4) {\nresize(capacity / 2);\n}\n\n}\n``````\n\nhttps://github.com/silently9527/JavaCore" ]
[ null, "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAFAAAABQCAMAAAC5zwKfAAAC/VBMVEUAAAD87++g2veg2ff++fmg2feg2fb75uag2fag2fag2fag2fag2feg2vah2fef2POg2feg2vag2fag2fag2fag2fag2vah2fag2vb7u3Gg2fag2fb0tLSg2fb3vHig2ff0s7P2wMD0s7Og2fXzs7Pzs7Of2fWh2veh2vf+/v7///+g2vf9/f2e1/ag2fSg2/mg3PT3r6+30tSh2fb+0Hj76ev4u3P6u3K11dr60H3UyKr+/v766On80Hz49vj2xcXm5u3z0IfUx6v2u7vazKTn0pfi6PKg2fbztLT///+g2faf2fag2vf///+g2feg2fe63O6l3vb///+g2fb80Kb8um+x1uD80Hv86er+0Hf73tb0s7P10YX/0Hiq2Or+/v6g2vbe0qL60YT+/v6y1NzuvoS20dSz09ru0Y6z3fTI1MDbxp+h2fag2fb////O4PDuv4XA3/LOz7bh06Du0o/1t7ex3PP+/v6h2ffSzrLdxZ3s5u3/2qag2fb7+/z40NCg2fb9/f2f2PWf2PX0tLT+/v70s7P+/v7M7Pyf1/b1s7P////zs7P0tbWZ2fL20dH+/v7+0Hep2vWl2O+x2/P+/v641tbI1b7C1cf8xpCz0tj1wMD1x8fTya392KPo0ZT56ez4vXbN1bn26Orh0p3x8/jbxZ/CzcT8xo7327DV1tHt0Y7u8/n759661tLyy6L049710IK8z870s7PX1a3xvX/y6OzA1cvBzsXI1cG30dP+38D73Mn/0oX3ysrpwYzv5+zo0pXv5+zH4PDW4e/n5O3+/v786+vN4vP9/f30s7P9/f2f2fSu0er//Pzgu8X///+4zOD////z8/OW0vCq1f+g2fb86er0s7P+z3f8um/+/v72xcX948ym2O/85+T839D8v3v86ej54eH828X+3Kz80qz8w4T8u3Oq2/Wq1ees2Ob64OCx1d/F2N785tv529v94MH82b/1vb382bj93LD91pf91ZH+04b+0X2p2er+2aH8zJ78yZX8yJU3IRXQAAAA1nRSTlMA8PbEz5vhv1X6Y0wzrX9A8/DJt6mHsnH98uzo4NzY19DJwKGAf3tpZmVVSD86LysgIP787ejn4uHf29jW1M3MysnHxcK+vbywn5ONg39wW0AlIBr8+/f29PTx7+rm5eTj4+Df29nX1tLR0dHQz8zKyMXFxcPCwL+9u7u5t7KsqaObmH1wbWBcVVJQSUAwFA34+Pbz8vHx8O7u7ero6Ofl4ODf3t7d3Nvb2djY19fU1NLS0M/NzcrJycjHx8LCwcHAwL68uraxr5SSkId4X1NTNTItFREGybAGmgAABQNJREFUWMOl13N0HEEcwPFp2lzTpElq20jTpLZt27Zt27Zt27b7m9vbpqlt+3Xvdvd2ZncWufv+e+993t7saJFJ0wL8M1UKjJ4yTpyU0QMrZfIPmIa8qLZ/edBU3r+2Z1pY5qGg09DMYVHmsicCwxJljxIXnABMSxBsmcsxAiw1IoclLtQXLOcbau75tYAo1MLPzMsEUSyTsZceolx6Iy86eFB0fS8ZeFQyPS85eFhythcfPC4+y0sIXpRQ6yUGr0qs9vzBy/xpLwC8LsDghXj/YvzApJdgHrmsB4BuzfaXKVkwT6u6+VL1KNXOEBygeNVBrwJlm3LOlj13OEtV6r6BWN10Cc/rwEl9rOMQy1fIYFGbTZk9Mzm5iEYOubYFTKdOPPa/LckpvccP3WLSUnpgPOkIAVb1CnJEGP9xKHXWE8VDpgowekt5PzD+5CDSG8gqLrALaHvdhCP7hnHkQ1Jcyga7OL3YwGgNR/UUY1yHBOvmYouxdbatBRzdRwF84CBrq7+NpQZN91vR3s9HWOifw3wYUyOUE7St4uh+Y6x5xHzALCeaCNo2q8AI7OoZJbJHcSLKDJp+cepXIhb5nATXMcHMKAg0zedUc0buATl1kjLBIOQLmlqqn08RXxAic+PxRYyL5XLS+4rJnhD/+hXzIsraGYhV8j0C00U+kx7yxd937P3BBprqu5fw10dY04Mnn748exKJMRO0oVhA16l3h40u8ef3L5HYqO2DetXTgLGQD1CVFajDOCIi4j02a6HDkb+NGvRR3ZA4Z0OwlcQtd5Hm3pRSO2GOWvKKiLNRNXlSoq7kLsi5arjVCniEuXt3pU68Thxn/T9vEMGVqpOPWinysVTUgrfDIdVetVKygFIeGTxhDm6SwYEUmIU8AZpxUgN7mnqnIL8EHqfPAPKmflDy8syGwSZe3n4wSAJTUfd36ibXWwJPAtiKGINnANo4pHKTdzrqLrxT9PqAUD9D7ywIHUgqgu2omzF5qDR0eWXB1WkDb7W4XneJw1iGPFLIu9c2J9dU+DkJOCunP4A2EGu/1wn2UN+/RoNYH2G+9PIRPBGEnnnZXom4irA+lSAeArnRiHF1SOIe5DklGNyK7kCV6+2r+8qkYX2C5iZ2yI6DG9BcgxIvLXyYBtNbpAASZDllAj3a130WGBWMpAIpkNpyEwTVrnmh3Ja1xYoVG3atFgqtVl7fC2R/9vj4EFz2kKojeaL+VW/FrhTH/NNnFBP0rZExBq/pfMabVeKyvFFIKcxGgNIYpr6asbFdAh9/XlxRBmPaG2cMDdR6tjACJDexONLjXU9ht8vgG3sK1NoN2u27p1bTgFkQVaAK9Btutysg/jA8K6+AQuP8NG+ErqaNAoOz3ZNBORpMN5YWbTWRKvfvcV0erwKbt6bBvvz4YPrLUVNCBQzKxtPg48/pkBrkswWRd2tGCWQwdY3CIki9FBoszfOFa8R1z1fEzFecNlC9Iq8C8YfHvAbkR1ZzH3U6VRaveJN5AqSiQX6yuJVWRrq5RiWgmwJG09bI7iwtL9QtQLwFG5QYIN54XgbZKSCf1QaxsiPDYkPl/tbBYVfi3UEm3Z3AWwfnTkDmjbUEFuddVUUWylrYKtg8K7LU7cszLIEXpyOr1arILzEGj/HnQswUmgyZeimNnpZmTHjIDeRB4WMYZoVx4ciLwqdMypChQroUwmOlq5Ahw6QpZuP2HxxXd11eM9wcAAAAAElFTkSuQmCC", null, "https://tva1.sinaimg.cn/large/008eGmZEgy1gpg7i9y3qoj30ct06ra9w.jpg", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.64887166,"math_prob":0.9786487,"size":5912,"snap":"2023-14-2023-23","text_gpt3_token_len":2953,"char_repetition_ratio":0.14759648,"word_repetition_ratio":0.23076923,"special_character_ratio":0.23393099,"punctuation_ratio":0.19278607,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9898664,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-10T11:35:01Z\",\"WARC-Record-ID\":\"<urn:uuid:e0715a53-9dbc-46d2-9e48-ef868ae6ae3f>\",\"Content-Length\":\"112994\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fe65b569-12a5-4281-afd7-345ce84e0161>\",\"WARC-Concurrent-To\":\"<urn:uuid:f2f2c2de-03db-4b1b-80f2-1f56db9a3242>\",\"WARC-IP-Address\":\"110.40.143.67\",\"WARC-Target-URI\":\"https://silently9527.cn/?p=52\",\"WARC-Payload-Digest\":\"sha1:2EHG4MOK2P7DRTXQ36TQFC4YHFZKRU4Z\",\"WARC-Block-Digest\":\"sha1:P2N4C6FKHC2JA3ZNUDSPJ3CILBMOY3KJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224657169.98_warc_CC-MAIN-20230610095459-20230610125459-00416.warc.gz\"}"}
https://statkat.com/stattest.php?t=7
[ "# Paired sample t test - overview\n\nThis page offers structured overviews of one or more selected methods. Add additional methods for comparisons by clicking on the dropdown button in the right-hand column. To practice with a specific method click the button at the bottom row of the table\n\nPaired sample $t$ test\nIndependent variable\n2 paired groups\nDependent variable\nOne quantitative of interval or ratio level\nNull hypothesis\nH0: $\\mu = \\mu_0$\n\nHere $\\mu$ is the population mean of the difference scores, and $\\mu_0$ is the population mean of the difference scores according to the null hypothesis, which is usually 0. A difference score is the difference between the first score of a pair and the second score of a pair.\nAlternative hypothesis\nH1 two sided: $\\mu \\neq \\mu_0$\nH1 right sided: $\\mu > \\mu_0$\nH1 left sided: $\\mu < \\mu_0$\nAssumptions\n• Difference scores are normally distributed in the population\n• Sample of difference scores is a simple random sample from the population of difference scores. That is, difference scores are independent of one another\nTest statistic\n$t = \\dfrac{\\bar{y} - \\mu_0}{s / \\sqrt{N}}$\nHere $\\bar{y}$ is the sample mean of the difference scores, $\\mu_0$ is the population mean of the difference scores according to the null hypothesis, $s$ is the sample standard deviation of the difference scores, and $N$ is the sample size (number of difference scores).\n\nThe denominator $s / \\sqrt{N}$ is the standard error of the sampling distribution of $\\bar{y}$. The $t$ value indicates how many standard errors $\\bar{y}$ is removed from $\\mu_0$.\nSampling distribution of $t$ if H0 were true\n$t$ distribution with $N - 1$ degrees of freedom\nSignificant?\nTwo sided:\nRight sided:\nLeft sided:\n$C\\%$ confidence interval for $\\mu$\n$\\bar{y} \\pm t^* \\times \\dfrac{s}{\\sqrt{N}}$\nwhere the critical value $t^*$ is the value under the $t_{N-1}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20).\n\nThe confidence interval for $\\mu$ can also be used as significance test.\nEffect size\nCohen's $d$:\nStandardized difference between the sample mean of the difference scores and $\\mu_0$: $$d = \\frac{\\bar{y} - \\mu_0}{s}$$ Cohen's $d$ indicates how many standard deviations $s$ the sample mean of the difference scores $\\bar{y}$ is removed from $\\mu_0.$\nVisual representation\nEquivalent to\n• One sample $t$ test on the difference scores.\n• Repeated measures ANOVA with one dichotomous within subjects factor.\nExample context\nIs the average difference between the mental health scores before and after an intervention different from $\\mu_0 = 0$?\nSPSS\nAnalyze > Compare Means > Paired-Samples T Test...\n• Put the two paired variables in the boxes below Variable 1 and Variable 2\nJamovi\nT-Tests > Paired Samples T-Test\n• Put the two paired variables in the box below Paired Variables, one on the left side of the vertical line and one on the right side of the vertical line\n• Under Hypothesis, select your alternative hypothesis\nPractice questions" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81178,"math_prob":0.99873954,"size":2411,"snap":"2022-27-2022-33","text_gpt3_token_len":664,"char_repetition_ratio":0.16701289,"word_repetition_ratio":0.09186352,"special_character_ratio":0.28204066,"punctuation_ratio":0.071599044,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999843,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-07T13:27:55Z\",\"WARC-Record-ID\":\"<urn:uuid:d08c6f66-a729-410b-81b6-db04e45b8861>\",\"Content-Length\":\"17978\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fc752211-39d7-4e4f-b6c8-927628244f98>\",\"WARC-Concurrent-To\":\"<urn:uuid:ba2b13e1-029c-4d36-8595-f5594ab9605c>\",\"WARC-IP-Address\":\"141.138.168.125\",\"WARC-Target-URI\":\"https://statkat.com/stattest.php?t=7\",\"WARC-Payload-Digest\":\"sha1:AMLH56C4UUANZEZTENS3I3N4UEB7O6ZT\",\"WARC-Block-Digest\":\"sha1:2XZGQITFUSVRUZRDQWBPTKOKT4BVNPZX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104692018.96_warc_CC-MAIN-20220707124050-20220707154050-00737.warc.gz\"}"}
https://zbmath.org/?q=an:0873.45006
[ "## About the non cutoff Kac equation: Uniqueness and asymptotic behaviour.(English)Zbl 0873.45006\n\nThe authors investigate the asymptotic behaviour of the solution of Kac’s equation by showing uniform boundedness in time of various convex functionals of the solution. The uniqueness of the solution of the non cutoff variant of Kac equation has been proved by means of Tanaka’s functional.\n\n### MSC:\n\n 45K05 Integro-partial differential equations 45M05 Asymptotics of solutions to integral equations\n\n### Keywords:\n\nnon cutoff Kac equation; asymptotic behaviour" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.788694,"math_prob":0.98703015,"size":821,"snap":"2022-27-2022-33","text_gpt3_token_len":237,"char_repetition_ratio":0.12974297,"word_repetition_ratio":0.052173913,"special_character_ratio":0.26187575,"punctuation_ratio":0.19875777,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9736755,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-27T15:01:11Z\",\"WARC-Record-ID\":\"<urn:uuid:ec62a5cb-7215-4d1b-b01f-da10ca38b853>\",\"Content-Length\":\"49535\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:abc249ae-70d8-407c-a5ad-4186ced73cfa>\",\"WARC-Concurrent-To\":\"<urn:uuid:ae20ce77-e19b-485a-b583-3c49279a5128>\",\"WARC-IP-Address\":\"141.66.194.2\",\"WARC-Target-URI\":\"https://zbmath.org/?q=an:0873.45006\",\"WARC-Payload-Digest\":\"sha1:RIMN2Y5PQTGCDRF5YRUHSWGUKSJ4YA22\",\"WARC-Block-Digest\":\"sha1:FLGFKIP4QWUA3B6VLYXHCQYBOB4KCLW7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103334753.21_warc_CC-MAIN-20220627134424-20220627164424-00550.warc.gz\"}"}
https://matplotlib.org/3.0.2/api/_as_gen/matplotlib.pyplot.fill.html
[ "# matplotlib.pyplot.fill¶\n\nmatplotlib.pyplot.fill(*args, data=None, **kwargs)[source]\n\nPlot filled polygons.\n\nParameters: args : sequence of x, y, [color] Each polygon is defined by the lists of x and y positions of its nodes, optionally followed by a color specifier. See matplotlib.colors for supported color specifiers. The standard color cycle is used for polygons without a color specifier. You can plot multiple polygons by providing multiple x, y, [color] groups. For example, each of the following is legal: ax.fill(x, y) # a polygon with default color ax.fill(x, y, \"b\") # a blue polygon ax.fill(x, y, x2, y2) # two polygons ax.fill(x, y, \"b\", x2, y2, \"r\") # a blue and a red polygon a list of :class:~matplotlib.patches.Polygon **kwargs : Polygon properties\n\nNotes\n\nUse fill_between() if you would like to fill the region between two curves.\n\nNote\n\nIn addition to the above described arguments, this function can take a data keyword argument. If such a data argument is given, the following arguments are replaced by data[<arg>]:\n\n• All arguments with the following names: 'x', 'y'.\n\nObjects passed as data must support item access (data[<arg>]) and membership test (<arg> in data).\n\n## Examples using matplotlib.pyplot.fill¶", null, "Interactive functions", null, "Fill Spiral" ]
[ null, "https://matplotlib.org/3.0.2/_images/sphx_glr_ginput_manual_clabel_sgskip_thumb.png", null, "https://matplotlib.org/3.0.2/_images/sphx_glr_fill_spiral_thumb.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.55996287,"math_prob":0.58768785,"size":1270,"snap":"2021-21-2021-25","text_gpt3_token_len":323,"char_repetition_ratio":0.13112165,"word_repetition_ratio":0.0,"special_character_ratio":0.25826773,"punctuation_ratio":0.20689656,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98239744,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-14T04:30:50Z\",\"WARC-Record-ID\":\"<urn:uuid:0c67513c-69e7-40a8-aad5-f314461dd638>\",\"Content-Length\":\"14172\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a5bf4c12-e38c-4ad4-95f9-803bf352156a>\",\"WARC-Concurrent-To\":\"<urn:uuid:5c7bacb5-91b9-45c0-a331-0f7e8e8d4f9b>\",\"WARC-IP-Address\":\"104.26.0.8\",\"WARC-Target-URI\":\"https://matplotlib.org/3.0.2/api/_as_gen/matplotlib.pyplot.fill.html\",\"WARC-Payload-Digest\":\"sha1:XDVP2A2B37EQA3KFQ5YTN7EFRI4DQ7KR\",\"WARC-Block-Digest\":\"sha1:V7OIFGQRPF4U7BYEUPOQBMRJDO7XWK64\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991737.39_warc_CC-MAIN-20210514025740-20210514055740-00351.warc.gz\"}"}
https://thetextchemistry.org/qa/question-how-much-is-1-cm-on-a-ruler.html
[ "", null, "# Question: How Much Is 1 Cm On A Ruler?\n\n## Is centimeter a metric unit?\n\nIn addition to the difference in the basic units, the metric system is based on 10s, and different measures for length include kilometer, meter, decimeter, centimeter, and millimeter.\n\nThis means that a meter is 100 times larger than a centimeter, and a kilogram is 1,000 times heavier than a gram..\n\n## How can I measure cm without a ruler?\n\nOne inch (2.5 cm) is roughly the measurement from the top knuckle on your thumb to your thumb tip. Measure yours to see how close it is to 1 inch. After all, you should always have a thumb handy for a guide for measuring items under 6 inches (15cm)!\n\n## Is 1 cm the same as 1 inch?\n\nOne inch is equal to 2.54 centimeters, so reverse it. For example, if you have 50 cm. and the problem is to convert into inches. … One inch equals 2.54 centimeters, so 62 inches would be 157.5 centimeters.\n\n## What stage is a 2 cm tumor?\n\nIn general, stage IIB describes invasive breast cancer in which: the tumor is larger than 2 cm but no larger than 5 centimeters; small groups of breast cancer cells — larger than 0.2 mm but not larger than 2 mm — are found in the lymph nodes or.\n\n## What is the centimeter symbol?\n\nA centimetre (international spelling) or centimeter (American spelling) (SI symbol cm) is a unit of length in the metric system, equal to one hundredth of a metre, centi being the SI prefix for a factor of 1100.\n\n## How many cm exactly is an inch?\n\n2.54cmHow many cm in an inch? 1 inch = 2.54cm. To convert inches to centimeters multiply your figure by 2.54.\n\n## How big is a pea in CM?\n\nLNCtips.com: Wound SizingCMInchesObject0.1 cm0.04 inchesGrain of sugar0.5 cm0.2 inchesPea0.6 cm0.2 inchesPencil eraser0.9 cm0.4 inchesLadybug20 more rows\n\n## How thick is a centimeter?\n\nEquals: 0.39 inches thickness (in) in dimension. Converting centimeter thickness to inches thickness value in the thickness units scale.\n\n## What size is 2.5 cm in inches?\n\nTo convert 2.5 centimeters to inches you have to divide the value in cm by 2.54. 2.5 cm in inches: 2.5 cm are equal to 2.5/2.54 = 0.98425 inches.\n\n## What is a sentence for centimeter?\n\nCentimeter sentence examples. It’s a 38 centimeter chain with the typical Chanel logo at the end, studded in pave cubic zirconias.\n\n## How long is an inch on your finger?\n\n* Use your own body for fast, approximate measuring. The first joint of an index finger is about 1 inch long. When a hand is spread wide, the span from the tip of the thumb to the tip of the pinkie is about 9 inches; from the tip of the thumb to the tip of the index finger, around 6 inches.\n\n## Do you count the zero on a ruler?\n\nStart Measuring With Your Ruler Place the flat end of the ruler against whatever it is you’re measuring, and line the zero mark on the ruler up with one end of the object to be measured. … Remember that however many marks you’ve counted along the ruler equal the number of millimeters you’ve measured.\n\n## What objects are 1 cm long?\n\nA centimeter (cm) is about:about as long as a staple.the width of a highlighter.the diameter of a belly button.the width of 5 CD’s stacked on top of each other.the thickness of a notepad.the radius (half the diameter) of a US penny.\n\n## Do you start measuring at 0 or 1?\n\nPlace the starting end of the measuring tool where it says “0” against the closest edge of the object or distance you’re trying to measure. 
Make sure the starting edge of the measuring tool and the edge of the object are perfectly aligned in order to get an accurate measurement.\n\n## What’s a centimeter look like?\n\nA centimeter is a metric unit of length. … 1 centimeter is equal to 0.3937 inches, or 1 inch is equal to 2.54 centimeters. In other words, 1 centimeter is less than half as big as an inch, so you need about two-and-a-half centimeters to make one inch.\n\n## What length is represented by 1 centimeter?\n\n0.39370 inchesThe centimetre is a unit of length in the metric system, equal to one-hundredth of a metre. 1cm is equivalent to 0.39370 inches.\n\n## Is 1 cm half an inch?\n\nInches to centimeters conversion tableInches (“)Centimeters (cm)1/2 in1.27 cm1 in2.54 cm2 in5.08 cm3 in7.62 cm23 more rows\n\n## What does 1 cm look like on a ruler?\n\nThe longest line represents the biggest unit on the ruler: 1 cm. Each centimeter is labeled on the ruler (1-30). Example: You take out a ruler to measure the width of your fingernail. The ruler stops at 1 cm, meaning that your nail is precisely 1 cm wide.\n\n## How big is a 2 cm tumor?\n\nPrimary breast tumors vary in shape and size. The smallest lesion that can be felt by hand is typically 1.5 to 2 centimeters (about 1/2 to 3/4 inch) in diameter. Sometimes tumors that are 5 centimeters (about 2 inches) — or even larger — can be found in the breast.\n\n## Does tumor size determine stage?\n\nIn general, the smaller the tumor, the better the prognosis tends to be . Tumor size is part of breast cancer staging. In the TNM staging system, a “T” followed by a number shows the size of the tumor. In some cases, the size of the tumor cannot be determined (TX) or a tumor cannot be found (T0).\n\n## What is something that is 1 cm?\n\nNote a few objects that are roughly 1 cm wide. The easiest objects to use are a standard pencil, pen, or highlighter. The width of a pencil is close to 1 cm. Other options include the length of a staple, the width of five CDs or DVDs stacked together, the thickness of a standard notepad, and the radius of a U.S. penny." ]
[ null, "https://mc.yandex.ru/watch/68554720", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89741576,"math_prob":0.9659364,"size":6022,"snap":"2020-45-2020-50","text_gpt3_token_len":1601,"char_repetition_ratio":0.1540379,"word_repetition_ratio":0.12982456,"special_character_ratio":0.264364,"punctuation_ratio":0.13101406,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95611846,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-04T02:28:08Z\",\"WARC-Record-ID\":\"<urn:uuid:72b8443d-33f8-4386-8b2c-fdd1e78c68bb>\",\"Content-Length\":\"36225\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1e3e7623-b4de-4ce7-ae5d-89da851d611e>\",\"WARC-Concurrent-To\":\"<urn:uuid:c89c2713-b526-44e7-aa6f-434ef093d8d7>\",\"WARC-IP-Address\":\"87.236.16.33\",\"WARC-Target-URI\":\"https://thetextchemistry.org/qa/question-how-much-is-1-cm-on-a-ruler.html\",\"WARC-Payload-Digest\":\"sha1:FJKB27RKA6DLOE7265MF2XSMSDPQI7N4\",\"WARC-Block-Digest\":\"sha1:YHJW3TAQGMKGYH2NF26O7R723YQX7XGW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141733120.84_warc_CC-MAIN-20201204010410-20201204040410-00177.warc.gz\"}"}
http://cpr-mathph.blogspot.com/2013/07/10094062-jesper-l-jacobsen-et-al.html
[ "## Is the five-flow conjecture almost false?    [PDF]\n\nJesper L. Jacobsen, Jesus Salas\nThe number of nowhere zero Z_Q flows on a graph G can be shown to be a polynomial in Q, defining the flow polynomial \\Phi_G(Q). According to Tutte's five-flow conjecture, \\Phi_G(5) > 0 for any bridgeless G.A conjecture by Welsh that \\Phi_G(Q) has no real roots for Q \\in (4,\\infty) was recently disproved by Haggard, Pearce and Royle. These authors conjectured the absence of roots for Q \\in [5,\\infty). We study the real roots of \\Phi_G(Q) for a family of non-planar cubic graphs known as generalised Petersen graphs G(m,k). We show that the modified conjecture on real flow roots is also false, by exhibiting infinitely many real flow roots Q>5 within the class G(nk,k). In particular, we compute explicitly the flow polynomial of G(119,7), showing that it has real roots at Q\\approx 5.0000197675 and Q\\approx 5.1653424423. We moreover prove that the graph families G(6n,6) and G(7n,7) possess real flow roots that accumulate at Q=5 as n\\to\\infty (in the latter case from above and below); and that Q_c(7)\\approx 5.2352605291 is an accumulation point of real zeros of the flow polynomials for G(7n,7) as n\\to\\infty.\nView original: http://arxiv.org/abs/1009.4062" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8842048,"math_prob":0.9576258,"size":1223,"snap":"2020-24-2020-29","text_gpt3_token_len":347,"char_repetition_ratio":0.12797375,"word_repetition_ratio":0.0,"special_character_ratio":0.28209323,"punctuation_ratio":0.123595506,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.984818,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-28T15:34:26Z\",\"WARC-Record-ID\":\"<urn:uuid:3f29deb9-a63e-42ef-be48-d7330de5f471>\",\"Content-Length\":\"202683\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8c2b78e5-01fd-4082-9c24-37d36eecfd15>\",\"WARC-Concurrent-To\":\"<urn:uuid:cf8845e2-80c5-4023-9c2f-c8456a379bed>\",\"WARC-IP-Address\":\"172.217.13.225\",\"WARC-Target-URI\":\"http://cpr-mathph.blogspot.com/2013/07/10094062-jesper-l-jacobsen-et-al.html\",\"WARC-Payload-Digest\":\"sha1:6P4PKJN4ZWKOVVC2XSSYWKQVNSFAYIWH\",\"WARC-Block-Digest\":\"sha1:BYPPRPKDDJTGWUVIDOAG23FK5WFZSGFP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347399820.9_warc_CC-MAIN-20200528135528-20200528165528-00088.warc.gz\"}"}
https://socratic.org/questions/how-do-you-determine-the-theoretical-probability-of-rolling-a-five-on-a-die
[ "# How do you determine the theoretical probability of rolling a five on a die?\n\nFeb 13, 2015\n\nThe probability of any event is based on given probabilities of elementary events and a composition of our event as a set of elementary events.\n\nIf we know the probabilities of elementary events and a composition of our event, the probability of our event is a sum of probabilities of all elementary events that comprise it.\n\nIn case of a die, the elementary events are numbers rolled on this die. The so-called \"fair\" die has all its 6 numbers equally probable. Since the total probability always equals to 1, the probability of each elementary event (rolling each number from 1 to 6) equals to\n$P \\left\\{1\\right\\} = P \\left\\{2\\right\\} = P \\left\\{3\\right\\} = P \\left\\{4\\right\\} = P \\left\\{5\\right\\} = P \\left\\{6\\right\\} = \\frac{1}{6}$.\n\nAn event offered in the problem, \"rolling a five on a die\", contains only one elementary event - number 5, whose probability we know as being equal to $\\frac{1}{6}$. Therefore, the probability of rolling five on a die equals to\n$P \\left\\{5\\right\\} = \\frac{1}{6}$.\n\nJust as an example, an event \"Rolling a number that is less than five\" contains 4 different elementary events - rolling 1, 2, 3 or 4. Each of them has a probability $\\frac{1}{6}$. The sum of all elementary events comprising our event, that is the probability of our event, is\n$P \\left\\{< 5\\right\\} = P \\left\\{1 \\mathmr{and} 2 \\mathmr{and} 3 \\mathmr{and} 4\\right\\} = \\frac{4}{6} = \\frac{2}{3}$." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9430478,"math_prob":0.99980134,"size":1044,"snap":"2020-34-2020-40","text_gpt3_token_len":226,"char_repetition_ratio":0.21538462,"word_repetition_ratio":0.05464481,"special_character_ratio":0.21360153,"punctuation_ratio":0.07462686,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99984825,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-09T21:19:44Z\",\"WARC-Record-ID\":\"<urn:uuid:818697bd-edcb-4125-b60e-042392ed7b04>\",\"Content-Length\":\"35820\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:939870bc-684b-41eb-9d7a-fd40667bf436>\",\"WARC-Concurrent-To\":\"<urn:uuid:8f965b34-88fb-4d34-988e-f6b3e31e9150>\",\"WARC-IP-Address\":\"216.239.38.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-do-you-determine-the-theoretical-probability-of-rolling-a-five-on-a-die\",\"WARC-Payload-Digest\":\"sha1:4O3R6NGJAKZHN4POUDNUV3EN7LNKAAVR\",\"WARC-Block-Digest\":\"sha1:2F4NPXPOWHWICC3CZYXG3UXRS5A4KASJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738573.99_warc_CC-MAIN-20200809192123-20200809222123-00402.warc.gz\"}"}
http://currency7.com/MWK-to-BOB-exchange-rate-converter?amount=300
[ "# 300 Malawian Kwacha (MWK) to Bolivian Boliviano (BOB)\n\nThe currency calculator will convert exchange rate of Malawian kwacha (MWK) to Bolivian boliviano (BOB).\n\n• Malawian kwacha\nThe Malawian kwacha (MWK) is the currency of Malawi. The currency code is MWK and currency symbol is MK. The Malawian kwacha is subdivided into 100 tambala. Plural of kwacha is kwacha. Frequently used Malawian kwacha coins are in denominations of MK 1, MK 5, MK 10. Frequently used Malawian kwacha banknotes are in denominations of MK 20, MK 50, MK 100, MK 200, MK 500, MK 1000.\n• Bolivian boliviano\nThe Bolivian boliviano (BOB) is the currency of Bolivia. The currency code is BOB and currency symbol is Bs. The Bolivian boliviano is subdivided into 100 centavos (singular: centavo; symbol: Cvs.). Frequently used Bolivian boliviano coins are in denominations of Bs.1, Bs.2, Bs.5, 10 centavos, 20 centavos, 50 centavos. Frequently used Bolivian boliviano banknotes are in denominations of Bs.10, Bs.20, Bs.50, Bs.100, Bs.200.\n• 100 MWK = 0.41 BOB\n• 200 MWK = 0.82 BOB\n• 500 MWK = 2.06 BOB\n• 1,000 MWK = 4.11 BOB\n• 2,000 MWK = 8.22 BOB\n• 5,000 MWK = 20.55 BOB\n• 6,000 MWK = 24.66 BOB\n• 10,000 MWK = 41.10 BOB\n• 20,000 MWK = 82.20 BOB\n• 30,000 MWK = 123.31 BOB\n• 50,000 MWK = 205.51 BOB\n• 100,000 MWK = 411.02 BOB\n• 500,000 MWK = 2,055.11 BOB\n• 1,000,000 MWK = 4,110.22 BOB\n• 5,000,000 MWK = 20,551.11 BOB\n• 1 BOB = 243.30 MWK\n• 5 BOB = 1,216.48 MWK\n• 10 BOB = 2,432.96 MWK\n• 20 BOB = 4,865.92 MWK\n• 25 BOB = 6,082.40 MWK\n• 50 BOB = 12,164.80 MWK\n• 100 BOB = 24,329.59 MWK\n• 200 BOB = 48,659.18 MWK\n• 250 BOB = 60,823.98 MWK\n• 500 BOB = 121,647.95 MWK\n• 1,000 BOB = 243,295.91 MWK\n• 2,000 BOB = 486,591.81 MWK\n• 2,500 BOB = 608,239.77 MWK\n• 3,500 BOB = 851,535.67 MWK\n• 5,000 BOB = 1,216,479.53 MWK\n\n## Popular MWK pairing\n\n` <a href=\"http://currency7.com/MWK-to-BOB-exchange-rate-converter?amount=300\">300 MWK in BOB</a> `" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.529612,"math_prob":0.99880385,"size":2933,"snap":"2023-40-2023-50","text_gpt3_token_len":1117,"char_repetition_ratio":0.26254696,"word_repetition_ratio":0.029354207,"special_character_ratio":0.37572452,"punctuation_ratio":0.18032786,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9559932,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-07T03:43:54Z\",\"WARC-Record-ID\":\"<urn:uuid:65fd71b0-ab09-40fc-93a3-6eab25d14318>\",\"Content-Length\":\"29275\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d9d6beb3-cd69-430b-acf3-2eb6a0477f81>\",\"WARC-Concurrent-To\":\"<urn:uuid:fc660320-5545-4fa5-b827-e584865cdc77>\",\"WARC-IP-Address\":\"70.35.206.41\",\"WARC-Target-URI\":\"http://currency7.com/MWK-to-BOB-exchange-rate-converter?amount=300\",\"WARC-Payload-Digest\":\"sha1:AR6SONG2R7DKUQJJID4BQKPPJW6LN6KE\",\"WARC-Block-Digest\":\"sha1:VZFUTKVQIEFJLMDVIZLM4SUMYUZ4CYLJ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100632.0_warc_CC-MAIN-20231207022257-20231207052257-00307.warc.gz\"}"}
http://gpl4you.com/discussion.php?q=mcq&company=HCL&type=Mathematical%20Skills&subtype=Number%20System
[ "## Company\n\n•", null, "### which is the 4 digit number whose second digit is thrice the first digit and 3'rd digit is sum of 1’st and 2'nd and last digit is twice the second digit. 1. 2674 2. 1349 3. 3343 4. 3678 (HCL Question)\n\nlast reply by sandeep tiwari  •  4 years ago  •  asked by Saikat\n\n•", null, "### How many integers n greater than and less than 100 are there such that,if the digits of n are reversed, the resulting integer is n+9 ? (A) 5 (B) 6 (C) 7 (D) 8 (E) 9 (HCL Question)\n\nlast reply by saranya  •  5 years ago  •  asked by Saikat\n\n•", null, "### What does the hex number E78 correspond to in radix 7 ? a) 12455 b) 14153 c) 14256 d) 13541 e) 13112 (HCL Question)\n\nlast reply by rakhi  •  7 years ago  •  asked by Saikat\n\n•", null, "### How many integers n greater than and less than 100 are there such that,if the digits of n are reversed, the resulting integer is n+9 ? (A)5 (B)6 (C)7 (D)8 (E)9 (HCL Question)\n\nlast reply by rakhi  •  7 years ago  •  asked by Bhushan\n\n•", null, "### How many integers n greater than and less than 100 are there such that, if the digits of n are reversed, the resulting integer is n+9 ? (A)5 (B)6 (C)7 (D)8 (E)9 (HCL Question)\n\nlast reply by Anita  •  8 years ago  •  asked by Bhushan\n\n•", null, "" ]
[ null, "http://gpl4you.com/images/noavatar_small.gif", null, "http://gpl4you.com/images/noavatar_small.gif", null, "http://gpl4you.com/images/noavatar_small.gif", null, "http://gpl4you.com/images/noavatar_small.gif", null, "http://gpl4you.com/images/noavatar_small.gif", null, "http://gpl4you.com/images/noavatar_small.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80843616,"math_prob":0.944885,"size":1115,"snap":"2019-13-2019-22","text_gpt3_token_len":305,"char_repetition_ratio":0.115211524,"word_repetition_ratio":0.02970297,"special_character_ratio":0.26816145,"punctuation_ratio":0.043956045,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95179063,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-20T05:00:55Z\",\"WARC-Record-ID\":\"<urn:uuid:f2ff5336-cb4a-4e4c-805f-cce8114a0124>\",\"Content-Length\":\"96307\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:223cf0c1-b393-4ba8-9b39-7ad5079353b2>\",\"WARC-Concurrent-To\":\"<urn:uuid:ecba1d25-cc7c-49cc-b9fe-e2dd8c822d2b>\",\"WARC-IP-Address\":\"13.127.215.39\",\"WARC-Target-URI\":\"http://gpl4you.com/discussion.php?q=mcq&company=HCL&type=Mathematical%20Skills&subtype=Number%20System\",\"WARC-Payload-Digest\":\"sha1:SOHFABTLU5BDAG3DW6F3IYND72D5W4JC\",\"WARC-Block-Digest\":\"sha1:XONJ6T7QEGQD43P3QB5H6BGTG7TMBT5R\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202299.16_warc_CC-MAIN-20190320044358-20190320070358-00340.warc.gz\"}"}
https://crypto.stackexchange.com/questions/67592/el-gamal-cryptosystem-proof-step-by-step-explanation-needed-i-feel-like-i-don
[ "# El Gamal Cryptosystem Proof, Step-by-Step Explanation Needed. I feel like I don't understand what I have on this proof\n\nCan someone provide a good and thorough explanation of the El Gamal proof? Basically, I need a step-by-step breakdown of what is happening at each important part in the algorithm.\n\nI haven't been able to find any good notes on El Gamala that explain the proof in an easy to understand format.\n\nHere is the proof in this image:", null, "• Generally when people talk about the \"proof of ElGamal encryption\" they refer to the proof of security. What you have there is just the derivation that shows that it is correct, i.e. that decryption actually works. Are you aware of that or is that part of the confusion? – Maeher Feb 25 '19 at 7:36\n• Oh dear, I must be so confused then. This is what I took from my professor's notes and I assumed it was the proof because it is labeled as such. – John Doe X Feb 25 '19 at 8:10\n• I think I mean that I need an explanation of the derivation, which is the image posted in the body of my message. – John Doe X Feb 25 '19 at 8:10\n\nFirst, we should make clear what we're proving here. The derivation you're showing is part of a proof of correctness of ElGamal encryption, not security.\n\nPerfect correctness (also referred to as completeness) of a public key encryption scheme is defined as follows.\n\nLet $$(\\mathsf{Gen},\\mathsf{Enc},\\mathsf{Dec})$$ be a public key encryption scheme. The scheme is said to be perfectly correct if it holds that for any security parameter $$n\\in\\mathbb{N}$$, any key pair $$(\\mathsf{ek},\\mathsf{dk})\\gets\\mathsf{Gen}(1^n)$$, any message $$m$$ from the message space, and any ciphertext $$c \\gets \\mathsf{Enc}(\\mathsf{ek},m)$$ it holds that $$\\mathsf{Dec}(\\mathsf{dk},c)=m$$.\n\nTo see whether ElGamal encryption is correct, we first recall the definition of ElGamal encryption. I will try to follow the notation in your notes as close as possible. From the notes it's not clear what kind of group is being used. But $$e_1$$ is a generator of some subgroup of $$\\mathbb{Z}_p^*$$. Let $$q$$ be the order of that subgroup. (If $$e_1$$ is a generator of $$\\mathbb{Z}_p^*$$, then $$q=p-1$$, if we are in a safe prime setting, then $$q=(p-1)/2$$ and prime.\n\n\\begin{align} &\\mathsf{Gen}(1^n) && \\mathsf{Enc}(e_2,P) && \\mathsf{Dec}(d,(c_1,c_2))\\\\ &d\\gets\\mathbb{Z}_q&&r \\gets \\mathbb{Z}_q&&P' := c_2\\cdot (c_1^d)^{-1} \\bmod p\\\\ &e_2 := e_1^d \\bmod p&&c_1:= e_1^r\\bmod p&&\\text{return }P'\\\\ &\\text{return } (e_2,d)&&c_2 := P\\cdot e_2^r\\bmod p&\\\\ &&&\\text{return } (c_1,c_2)&&\\\\ \\end{align}\n\nTo verify that the scheme is correct, we need to verify that for any choice of key pair, and message it always holds that $$\\mathsf{Dec}(d,\\mathsf{Enc}(e_2,P)) = P$$. This is where the derivation you're looking at comes in.\n\n\\begin{align} P' =& c_2\\cdot (c_1^d)^{-1} \\bmod p \\tag{1}\\\\ =& c_2\\cdot (e_1^{rd})^{-1} \\bmod p\\tag{2}\\\\ =& P\\cdot e_2^r\\cdot (e_1^{rd})^{-1} \\bmod p\\tag{3}\\\\ =& P\\cdot e_1^{rd}\\cdot (e_1^{rd})^{-1} \\bmod p\\tag{4}\\\\ =& P\\cdot e_1^{rd}\\cdot e_1^{-rd} \\bmod p\\tag{5}\\\\ =& P\\cdot e_1^{rd-rd} \\bmod p\\tag{6}\\\\ =& P \\bmod p\\tag{7}\\\\ \\end{align}\n\nNow let's go through this line by line.\n\n1. In line (1) we simply have the definition of the decryption algorithm.\n2. In line (2) we use the definition of $$c_1 = e_1^r\\bmod p$$ from the encryption algorithm and replace $$c_1$$ by its definition.\n3. 
In line (3) we do exactly the same with $$c_2$$ and replace it with its definition $$c_2 := P\\cdot e_2^r\\bmod p$$ from the encryption algorithm.\n4. In line (4) we now look at the key generation algorithm and replace $$e_2$$ by its definition from there $$e_2 := e_1^d \\bmod p$$. Note that we have not changed anything. We have simply replaced the variables $$c_1,c_2,e_2$$ by their respective definitions.\n5. In line (5) we use a basic rule of exponentiation, namely that $$(x^a)^b = x^{a\\cdot b}$$, therefore $$(e_1^{rd})^{-1} = e_1^{-rd}$$.\n6. In line (6) we again use a basic rule of exponentiation, namely that $$x^a\\cdot x^b = x^{a+ b}$$, therefore $$e_1^{rd}\\cdot e_1^{-rd} = e_1^{rd-rd}$$.\n7. Since $$rd-rd=0$$ and anything raised to the $$0$$th power equals the identity, we are left with $$P\\cdot 1 \\bmod p$$ and thereby, since $$1$$ is the identity of the group with $$P$$ in line (7).\n\nSince this derivation makes no assumptions whatsoever about $$P,d,e_2,c_1$$ and $$c_2$$, besides that they were generated according to the above definition of ElGamal encryption, it holds for any choice of key-pair and any plaintext $$P$$.\n\nThe derivation thus shows that decryption always works and results in the same plaintext that was encrypted, thus showing that the encryption scheme is in fact correct.\n\nWhat it does not show is anything about the security of the scheme.\n\n• This is amazing and exactly what I needed. However, I think my professor is using this as a proof for El Gamal's encryption and decryption process and why it works, and I don't think she is using it as a proof of the security of El Gamal. She didn't really specify, but thanks for clearing this up. – John Doe X Mar 1 '19 at 8:59" ]
[ null, "https://i.stack.imgur.com/Po1m7.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8156,"math_prob":0.9990081,"size":3587,"snap":"2021-04-2021-17","text_gpt3_token_len":1197,"char_repetition_ratio":0.1373151,"word_repetition_ratio":0.018348623,"special_character_ratio":0.33398384,"punctuation_ratio":0.09375,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998696,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-24T00:25:57Z\",\"WARC-Record-ID\":\"<urn:uuid:4fdc1ee9-106e-4377-9601-f4c27de437ad>\",\"Content-Length\":\"156246\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3b09e708-b95e-4658-ab5f-61abae7c510b>\",\"WARC-Concurrent-To\":\"<urn:uuid:8056e076-ddb5-41db-bd01-65a774060412>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://crypto.stackexchange.com/questions/67592/el-gamal-cryptosystem-proof-step-by-step-explanation-needed-i-feel-like-i-don\",\"WARC-Payload-Digest\":\"sha1:C4DIARUWXHV6LILLFV3MVT3RWZSOJRTU\",\"WARC-Block-Digest\":\"sha1:CC77KRZXE5UF6U4NJSUY4DZ3OFL4XSER\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703538741.56_warc_CC-MAIN-20210123222657-20210124012657-00401.warc.gz\"}"}
http://ffmpeg.org/pipermail/ffmpeg-cvslog/2006-June/003120.html
[ "# [Ffmpeg-cvslog] r5493 - in trunk/libavcodec/ppc: dsputil_altivec.c dsputil_h264_template_altivec.c\n\nlu_zero subversion\nSat Jun 17 20:46:07 CEST 2006\n\n```Author: lu_zero\nDate: Sat Jun 17 20:46:06 2006\nNew Revision: 5493\n\nModified:\ntrunk/libavcodec/ppc/dsputil_altivec.c\ntrunk/libavcodec/ppc/dsputil_h264_template_altivec.c\n\nLog:\nCosmetics: should not hurt performance, scream if are\n\nModified: trunk/libavcodec/ppc/dsputil_altivec.c\n==============================================================================\n--- trunk/libavcodec/ppc/dsputil_altivec.c\t(original)\n+++ trunk/libavcodec/ppc/dsputil_altivec.c\tSat Jun 17 20:46:06 2006\n@@ -1311,9 +1311,9 @@\nint hadamard8_diff8x8_altivec(/*MpegEncContext*/ void *s, uint8_t *dst, uint8_t *src, int stride, int h){\nint sum;\nregister const_vector unsigned char vzero = (const_vector unsigned char)vec_splat_u8(0);\nregister vector signed short temp0, temp1, temp2, temp3, temp4, temp5, temp6, temp7;\n{\nregister const_vector signed short vprod1 = (const_vector signed short)AVV( 1,-1, 1,-1, 1,-1, 1,-1);\nregister const_vector signed short vprod2 = (const_vector signed short)AVV( 1, 1,-1,-1, 1, 1,-1,-1);\n@@ -1338,6 +1338,8 @@\n{ \\\nregister vector unsigned char src1, src2, srcO; \\\nregister vector unsigned char dst1, dst2, dstO; \\\n+ register vector signed short srcV, dstV; \\\n+ register vector signed short but0, but1, but2, op1, op2, op3; \\\nsrc1 = vec_ld(stride * i, src); \\\nif ((((stride * i) + (unsigned long)src) & 0x0000000F) > 8) \\\nsrc2 = vec_ld((stride * i) + 16, src); \\\n@@ -1348,17 +1350,19 @@\ndstO = vec_perm(dst1, dst2, vec_lvsl(stride * i, dst)); \\\n/* promote the unsigned chars to signed shorts */ \\\n/* we're in the 8x8 function, we only care for the first 8 */ \\\n- register vector signed short srcV = \\\n- (vector signed short)vec_mergeh((vector signed char)vzero, (vector signed char)srcO); \\\n- register vector signed short dstV = \\\n- (vector signed short)vec_mergeh((vector signed char)vzero, (vector signed char)dstO); \\\n+ srcV = \\\n+ (vector signed short)vec_mergeh((vector signed char)vzero, \\\n+ (vector signed char)srcO); \\\n+ dstV = \\\n+ (vector signed short)vec_mergeh((vector signed char)vzero, \\\n+ (vector signed char)dstO); \\\n/* substractions inside the first butterfly */ \\\n- register vector signed short but0 = vec_sub(srcV, dstV); \\\n- register vector signed short op1 = vec_perm(but0, but0, perm1); \\\n- register vector signed short but1 = vec_mladd(but0, vprod1, op1); \\\n- register vector signed short op2 = vec_perm(but1, but1, perm2); \\\n- register vector signed short but2 = vec_mladd(but1, vprod2, op2); \\\n- register vector signed short op3 = vec_perm(but2, but2, perm3); \\\n+ but0 = vec_sub(srcV, dstV); \\\n+ op1 = vec_perm(but0, but0, perm1); \\\n+ but1 = vec_mladd(but0, vprod1, op1); \\\n+ op2 = vec_perm(but1, but1, perm2); \\\n+ but2 = vec_mladd(but1, vprod2, op2); \\\n+ op3 = vec_perm(but2, but2, perm3); \\\nres = vec_mladd(but2, vprod3, op3); \\\n}\nONEITERBUTTERFLY(0, temp0);\n@@ -1481,37 +1485,63 @@\n\n#define ONEITERBUTTERFLY(i, res1, res2) \\\n{ \\\n- register vector unsigned char src1 REG_v(v22), src2 REG_v(v23); \\\n- register vector unsigned char dst1 REG_v(v24), dst2 REG_v(v25); \\\n+ register vector unsigned char src1 REG_v(v22), \\\n+ src2 REG_v(v23), \\\n+ dst1 REG_v(v24), \\\n+ dst2 REG_v(v25), \\\n+ srcO REG_v(v22), \\\n+ dstO REG_v(v23); \\\n+ \\\n+ register vector signed short srcV REG_v(v24), \\\n+ dstV REG_v(v25), \\\n+ srcW REG_v(v26), \\\n+ dstW 
REG_v(v27), \\\n+ but0 REG_v(v28), \\\n+ but0S REG_v(v29), \\\n+ op1 REG_v(v30), \\\n+ but1 REG_v(v22), \\\n+ op1S REG_v(v23), \\\n+ but1S REG_v(v24), \\\n+ op2 REG_v(v25), \\\n+ but2 REG_v(v26), \\\n+ op2S REG_v(v27), \\\n+ but2S REG_v(v28), \\\n+ op3 REG_v(v29), \\\n+ op3S REG_v(v30); \\\n+ \\\nsrc1 = vec_ld(stride * i, src); \\\nsrc2 = vec_ld((stride * i) + 16, src); \\\n- register vector unsigned char srcO REG_v(v22) = vec_perm(src1, src2, vec_lvsl(stride * i, src)); \\\n+ srcO = vec_perm(src1, src2, vec_lvsl(stride * i, src)); \\\ndst1 = vec_ld(stride * i, dst); \\\ndst2 = vec_ld((stride * i) + 16, dst); \\\n- register vector unsigned char dstO REG_v(v23) = vec_perm(dst1, dst2, vec_lvsl(stride * i, dst)); \\\n+ dstO = vec_perm(dst1, dst2, vec_lvsl(stride * i, dst)); \\\n/* promote the unsigned chars to signed shorts */ \\\n- register vector signed short srcV REG_v(v24) = \\\n- (vector signed short)vec_mergeh((vector signed char)vzero, (vector signed char)srcO); \\\n- register vector signed short dstV REG_v(v25) = \\\n- (vector signed short)vec_mergeh((vector signed char)vzero, (vector signed char)dstO); \\\n- register vector signed short srcW REG_v(v26) = \\\n- (vector signed short)vec_mergel((vector signed char)vzero, (vector signed char)srcO); \\\n- register vector signed short dstW REG_v(v27) = \\\n- (vector signed short)vec_mergel((vector signed char)vzero, (vector signed char)dstO); \\\n+ srcV = \\\n+ (vector signed short)vec_mergeh((vector signed char)vzero, \\\n+ (vector signed char)srcO); \\\n+ dstV = \\\n+ (vector signed short)vec_mergeh((vector signed char)vzero, \\\n+ (vector signed char)dstO); \\\n+ srcW = \\\n+ (vector signed short)vec_mergel((vector signed char)vzero, \\\n+ (vector signed char)srcO); \\\n+ dstW = \\\n+ (vector signed short)vec_mergel((vector signed char)vzero, \\\n+ (vector signed char)dstO); \\\n/* substractions inside the first butterfly */ \\\n- register vector signed short but0 REG_v(v28) = vec_sub(srcV, dstV); \\\n- register vector signed short but0S REG_v(v29) = vec_sub(srcW, dstW); \\\n- register vector signed short op1 REG_v(v30) = vec_perm(but0, but0, perm1); \\\n- register vector signed short but1 REG_v(v22) = vec_mladd(but0, vprod1, op1); \\\n- register vector signed short op1S REG_v(v23) = vec_perm(but0S, but0S, perm1); \\\n- register vector signed short but1S REG_v(v24) = vec_mladd(but0S, vprod1, op1S); \\\n- register vector signed short op2 REG_v(v25) = vec_perm(but1, but1, perm2); \\\n- register vector signed short but2 REG_v(v26) = vec_mladd(but1, vprod2, op2); \\\n- register vector signed short op2S REG_v(v27) = vec_perm(but1S, but1S, perm2); \\\n- register vector signed short but2S REG_v(v28) = vec_mladd(but1S, vprod2, op2S); \\\n- register vector signed short op3 REG_v(v29) = vec_perm(but2, but2, perm3); \\\n+ but0 = vec_sub(srcV, dstV); \\\n+ but0S = vec_sub(srcW, dstW); \\\n+ op1 = vec_perm(but0, but0, perm1); \\\n+ but1 = vec_mladd(but0, vprod1, op1); \\\n+ op1S = vec_perm(but0S, but0S, perm1); \\\n+ but1S = vec_mladd(but0S, vprod1, op1S); \\\n+ op2 = vec_perm(but1, but1, perm2); \\\n+ but2 = vec_mladd(but1, vprod2, op2); \\\n+ op2S = vec_perm(but1S, but1S, perm2); \\\n+ but2S = vec_mladd(but1S, vprod2, op2S); \\\n+ op3 = vec_perm(but2, but2, perm3); \\\nres1 = vec_mladd(but2, vprod3, op3); \\\n- register vector signed short op3S REG_v(v30) = vec_perm(but2S, but2S, perm3); \\\n+ op3S = vec_perm(but2S, but2S, perm3); \\\nres2 = vec_mladd(but2S, vprod3, op3S); \\\n}\nONEITERBUTTERFLY(0, temp0, temp0S);\n@@ -1526,6 +1556,12 @@\n#undef 
ONEITERBUTTERFLY\n{\nregister vector signed int vsum;\n+ register vector signed short line0S, line1S, line2S, line3S, line4S,\n+ line5S, line6S, line7S, line0BS,line2BS,\n+ line1BS,line3BS,line4BS,line6BS,line5BS,\n+ line7BS,line0CS,line4CS,line1CS,line5CS,\n+ line2CS,line6CS,line3CS,line7CS;\n+\nregister vector signed short line0 = vec_add(temp0, temp1);\nregister vector signed short line1 = vec_sub(temp0, temp1);\nregister vector signed short line2 = vec_add(temp2, temp3);\n@@ -1562,32 +1598,32 @@\nvsum = vec_sum4s(vec_abs(line6C), vsum);\nvsum = vec_sum4s(vec_abs(line7C), vsum);\n\n- register vector signed short line0S = vec_add(temp0S, temp1S);\n- register vector signed short line1S = vec_sub(temp0S, temp1S);\n- register vector signed short line2S = vec_add(temp2S, temp3S);\n- register vector signed short line3S = vec_sub(temp2S, temp3S);\n- register vector signed short line4S = vec_add(temp4S, temp5S);\n- register vector signed short line5S = vec_sub(temp4S, temp5S);\n- register vector signed short line6S = vec_add(temp6S, temp7S);\n- register vector signed short line7S = vec_sub(temp6S, temp7S);\n-\n- register vector signed short line0BS = vec_add(line0S, line2S);\n- register vector signed short line2BS = vec_sub(line0S, line2S);\n- register vector signed short line1BS = vec_add(line1S, line3S);\n- register vector signed short line3BS = vec_sub(line1S, line3S);\n- register vector signed short line4BS = vec_add(line4S, line6S);\n- register vector signed short line6BS = vec_sub(line4S, line6S);\n- register vector signed short line5BS = vec_add(line5S, line7S);\n- register vector signed short line7BS = vec_sub(line5S, line7S);\n-\n- register vector signed short line0CS = vec_add(line0BS, line4BS);\n- register vector signed short line4CS = vec_sub(line0BS, line4BS);\n- register vector signed short line1CS = vec_add(line1BS, line5BS);\n- register vector signed short line5CS = vec_sub(line1BS, line5BS);\n- register vector signed short line2CS = vec_add(line2BS, line6BS);\n- register vector signed short line6CS = vec_sub(line2BS, line6BS);\n- register vector signed short line3CS = vec_add(line3BS, line7BS);\n- register vector signed short line7CS = vec_sub(line3BS, line7BS);\n+ line1S = vec_sub(temp0S, temp1S);\n+ line3S = vec_sub(temp2S, temp3S);\n+ line5S = vec_sub(temp4S, temp5S);\n+ line7S = vec_sub(temp6S, temp7S);\n+\n+ line2BS = vec_sub(line0S, line2S);\n+ line3BS = vec_sub(line1S, line3S);\n+ line6BS = vec_sub(line4S, line6S);\n+ line7BS = vec_sub(line5S, line7S);\n+\n+ line4CS = vec_sub(line0BS, line4BS);\n+ line5CS = vec_sub(line1BS, line5BS);\n+ line6CS = vec_sub(line2BS, line6BS);\n+ line7CS = vec_sub(line3BS, line7BS);\n\nvsum = vec_sum4s(vec_abs(line0CS), vsum);\nvsum = vec_sum4s(vec_abs(line1CS), vsum);\n\nModified: trunk/libavcodec/ppc/dsputil_h264_template_altivec.c\n==============================================================================\n--- trunk/libavcodec/ppc/dsputil_h264_template_altivec.c\t(original)\n+++ trunk/libavcodec/ppc/dsputil_h264_template_altivec.c\tSat Jun 17 20:46:06 2006\n@@ -19,13 +19,13 @@\n/* this code assume that stride % 16 == 0 */\nvoid PREFIX_h264_chroma_mc8_altivec(uint8_t * dst, uint8_t * src, int stride, int h, int x, int y) {\nPOWERPC_PERF_DECLARE(PREFIX_h264_chroma_mc8_num, 1);\n- POWERPC_PERF_START_COUNT(PREFIX_h264_chroma_mc8_num, 1);\n- signed int ABCD __attribute__((aligned(16)));\n+ signed int ABCD __attribute__((aligned(16))) =\n+ {((8 - x) * (8 - y)),\n+ ((x) * (8 - y)),\n+ ((8 - x) * (y)),\n+ ((x) * (y))};\nregister int i;\n- 
ABCD = ((8 - x) * (8 - y));\n- ABCD = ((x) * (8 - y));\n- ABCD = ((8 - x) * (y));\n- ABCD = ((x) * (y));\n+ vector unsigned char fperm;\nconst vector signed int vABCD = vec_ld(0, ABCD);\nconst vector signed short vA = vec_splat((vector signed short)vABCD, 1);\nconst vector signed short vB = vec_splat((vector signed short)vABCD, 3);\n@@ -34,55 +34,61 @@\nconst vector signed int vzero = vec_splat_s32(0);\nconst vector signed short v32ss = vec_sl(vec_splat_s16(1),vec_splat_u16(5));\nconst vector unsigned short v6us = vec_splat_u16(6);\n+ register int loadSecond = (((unsigned long)src) % 16) <= 7 ? 0 : 1;\n+ register int reallyBadAlign = (((unsigned long)src) % 16) == 15 ? 1 : 0;\n\n- vector unsigned char fperm;\n+ vector unsigned char vsrcAuc, vsrcBuc, vsrcperm0, vsrcperm1;\n+ vector unsigned char vsrc0uc, vsrc1uc;\n+ vector signed short vsrc0ssH, vsrc1ssH;\n+ vector unsigned char vsrcCuc, vsrc2uc, vsrc3uc;\n+ vector signed short vsrc2ssH, vsrc3ssH, psum;\n+ vector unsigned char vdst, ppsum, vfdst, fsum;\n+\n+ POWERPC_PERF_START_COUNT(PREFIX_h264_chroma_mc8_num, 1);\n\nif (((unsigned long)dst) % 16 == 0) {\n- fperm = (vector unsigned char)AVV(0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,\n- 0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F);\n+ fperm = (vector unsigned char)AVV(0x10, 0x11, 0x12, 0x13,\n+ 0x14, 0x15, 0x16, 0x17,\n+ 0x08, 0x09, 0x0A, 0x0B,\n+ 0x0C, 0x0D, 0x0E, 0x0F);\n} else {\n- fperm = (vector unsigned char)AVV(0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,\n- 0x18, 0x19, 0x1A, 0x1B, 0x1C, 0x1D, 0x1E, 0x1F);\n+ fperm = (vector unsigned char)AVV(0x00, 0x01, 0x02, 0x03,\n+ 0x04, 0x05, 0x06, 0x07,\n+ 0x18, 0x19, 0x1A, 0x1B,\n+ 0x1C, 0x1D, 0x1E, 0x1F);\n}\n\n- register int loadSecond = (((unsigned long)src) % 16) <= 7 ? 0 : 1;\n- register int reallyBadAlign = (((unsigned long)src) % 16) == 15 ? 
1 : 0;\n-\n- vector unsigned char vsrcAuc;\n- vector unsigned char vsrcBuc;\n- vector unsigned char vsrcperm0;\n- vector unsigned char vsrcperm1;\nvsrcAuc = vec_ld(0, src);\n+\nvsrcBuc = vec_ld(16, src);\nvsrcperm0 = vec_lvsl(0, src);\nvsrcperm1 = vec_lvsl(1, src);\n\n- vector unsigned char vsrc0uc;\n- vector unsigned char vsrc1uc;\nvsrc0uc = vec_perm(vsrcAuc, vsrcBuc, vsrcperm0);\nvsrc1uc = vsrcBuc;\nelse\nvsrc1uc = vec_perm(vsrcAuc, vsrcBuc, vsrcperm1);\n\n- vector signed short vsrc0ssH = (vector signed short)vec_mergeh((vector unsigned char)vzero, (vector unsigned char)vsrc0uc);\n- vector signed short vsrc1ssH = (vector signed short)vec_mergeh((vector unsigned char)vzero, (vector unsigned char)vsrc1uc);\n+ vsrc0ssH = (vector signed short)vec_mergeh((vector unsigned char)vzero,\n+ (vector unsigned char)vsrc0uc);\n+ vsrc1ssH = (vector signed short)vec_mergeh((vector unsigned char)vzero,\n+ (vector unsigned char)vsrc1uc);\n\nfor (i = 0 ; i < h ; i++) {\n- vector unsigned char vsrcCuc;\n+\n+\nvsrcCuc = vec_ld(stride + 0, src);\n\n- vector unsigned char vsrc2uc;\n- vector unsigned char vsrc3uc;\nvsrc2uc = vec_perm(vsrcCuc, vsrcCuc, vsrcperm0);\nvsrc3uc = vec_perm(vsrcCuc, vsrcCuc, vsrcperm1);\n\n- vector signed short vsrc2ssH = (vector signed short)vec_mergeh((vector unsigned char)vzero, (vector unsigned char)vsrc2uc);\n- vector signed short vsrc3ssH = (vector signed short)vec_mergeh((vector unsigned char)vzero, (vector unsigned char)vsrc3uc);\n-\n- vector signed short psum;\n+ vsrc2ssH = (vector signed short)vec_mergeh((vector unsigned char)vzero,\n+ (vector unsigned char)vsrc2uc);\n+ vsrc3ssH = (vector signed short)vec_mergeh((vector unsigned char)vzero,\n+ (vector unsigned char)vsrc3uc);\n\n@@ -91,11 +97,9 @@\npsum = vec_sra(psum, v6us);\n\n- vector unsigned char vdst = vec_ld(0, dst);\n- vector unsigned char ppsum = (vector unsigned char)vec_packsu(psum, psum);\n-\n- vector unsigned char vfdst = vec_perm(vdst, ppsum, fperm);\n- vector unsigned char fsum;\n+ vdst = vec_ld(0, dst);\n+ ppsum = (vector unsigned char)vec_packsu(psum, psum);\n+ vfdst = vec_perm(vdst, ppsum, fperm);\n\nOP_U8_ALTIVEC(fsum, vfdst, vdst);\n\n@@ -108,24 +112,21 @@\nsrc += stride;\n}\n} else {\n- for (i = 0 ; i < h ; i++) {\n- vector unsigned char vsrcCuc;\nvector unsigned char vsrcDuc;\n+ for (i = 0 ; i < h ; i++) {\nvsrcCuc = vec_ld(stride + 0, src);\nvsrcDuc = vec_ld(stride + 16, src);\n\n- vector unsigned char vsrc2uc;\n- vector unsigned char vsrc3uc;\nvsrc2uc = vec_perm(vsrcCuc, vsrcDuc, vsrcperm0);\nvsrc3uc = vsrcDuc;\nelse\nvsrc3uc = vec_perm(vsrcCuc, vsrcDuc, vsrcperm1);\n\n- vector signed short vsrc2ssH = (vector signed short)vec_mergeh((vector unsigned char)vzero, (vector unsigned char)vsrc2uc);\n- vector signed short vsrc3ssH = (vector signed short)vec_mergeh((vector unsigned char)vzero, (vector unsigned char)vsrc3uc);\n-\n- vector signed short psum;\n+ vsrc2ssH = (vector signed short)vec_mergeh((vector unsigned char)vzero,\n+ (vector unsigned char)vsrc2uc);\n+ vsrc3ssH = (vector signed short)vec_mergeh((vector unsigned char)vzero,\n+ (vector unsigned char)vsrc3uc);\n\n@@ -134,11 +135,9 @@\npsum = vec_sr(psum, v6us);\n\n- vector unsigned char vdst = vec_ld(0, dst);\n- vector unsigned char ppsum = (vector unsigned char)vec_pack(psum, psum);\n-\n- vector unsigned char vfdst = vec_perm(vdst, ppsum, fperm);\n- vector unsigned char fsum;\n+ vdst = vec_ld(0, dst);\n+ ppsum = (vector unsigned char)vec_pack(psum, psum);\n+ vfdst = vec_perm(vdst, ppsum, fperm);\n\nOP_U8_ALTIVEC(fsum, vfdst, vdst);\n\n@@ -157,7 
+156,6 @@\n/* this code assume stride % 16 == 0 */\nstatic void PREFIX_h264_qpel16_h_lowpass_altivec(uint8_t * dst, uint8_t * src, int dstStride, int srcStride) {\nPOWERPC_PERF_DECLARE(PREFIX_h264_qpel16_h_lowpass_num, 1);\n- POWERPC_PERF_START_COUNT(PREFIX_h264_qpel16_h_lowpass_num, 1);\nregister int i;\n\nconst vector signed int vzero = vec_splat_s32(0);\n@@ -172,13 +170,30 @@\nconst vector signed short v20ss = vec_sl(vec_splat_s16(5),vec_splat_u16(2));\nconst vector signed short v16ss = vec_sl(vec_splat_s16(1),vec_splat_u16(4));\nconst vector unsigned char dstperm = vec_lvsr(0, dst);\n- const vector unsigned char neg1 = (const vector unsigned char)vec_splat_s8(-1);\n- const vector unsigned char dstmask = vec_perm((const vector unsigned char)vzero, neg1, dstperm);\n+ const vector unsigned char neg1 =\n+ (const vector unsigned char) vec_splat_s8(-1);\n+\n+ const vector unsigned char dstmask =\n+ vec_perm((const vector unsigned char)vzero,\n+ neg1, dstperm);\n+\n+ vector unsigned char srcM2, srcM1, srcP0, srcP1, srcP2, srcP3;\n\nregister int align = ((((unsigned long)src) - 2) % 16);\n\n+ vector signed short srcP0A, srcP0B, srcP1A, srcP1B,\n+ srcP2A, srcP2B, srcP3A, srcP3B,\n+ srcM1A, srcM1B, srcM2A, srcM2B,\n+ sum1A, sum1B, sum2A, sum2B, sum3A, sum3B,\n+ pp1A, pp1B, pp2A, pp2B, pp3A, pp3B,\n+ psumA, psumB, sumA, sumB;\n+\n+ vector unsigned char sum, dst1, dst2, vdst, fsum,\n+ rsum, fdst1, fdst2;\n+\n+ POWERPC_PERF_START_COUNT(PREFIX_h264_qpel16_h_lowpass_num, 1);\n+\nfor (i = 0 ; i < 16 ; i ++) {\n- vector unsigned char srcM2, srcM1, srcP0, srcP1, srcP2, srcP3;\nvector unsigned char srcR1 = vec_ld(-2, src);\nvector unsigned char srcR2 = vec_ld(14, src);\n\n@@ -237,55 +252,54 @@\n} break;\n}\n\n- const vector signed short srcP0A = (vector signed short)vec_mergeh((vector unsigned char)vzero, srcP0);\n- const vector signed short srcP0B = (vector signed short)vec_mergel((vector unsigned char)vzero, srcP0);\n- const vector signed short srcP1A = (vector signed short)vec_mergeh((vector unsigned char)vzero, srcP1);\n- const vector signed short srcP1B = (vector signed short)vec_mergel((vector unsigned char)vzero, srcP1);\n-\n- const vector signed short srcP2A = (vector signed short)vec_mergeh((vector unsigned char)vzero, srcP2);\n- const vector signed short srcP2B = (vector signed short)vec_mergel((vector unsigned char)vzero, srcP2);\n- const vector signed short srcP3A = (vector signed short)vec_mergeh((vector unsigned char)vzero, srcP3);\n- const vector signed short srcP3B = (vector signed short)vec_mergel((vector unsigned char)vzero, srcP3);\n-\n- const vector signed short srcM1A = (vector signed short)vec_mergeh((vector unsigned char)vzero, srcM1);\n- const vector signed short srcM1B = (vector signed short)vec_mergel((vector unsigned char)vzero, srcM1);\n- const vector signed short srcM2A = (vector signed short)vec_mergeh((vector unsigned char)vzero, srcM2);\n- const vector signed short srcM2B = (vector signed short)vec_mergel((vector unsigned char)vzero, srcM2);\n-\n- const vector signed short sum1A = vec_adds(srcP0A, srcP1A);\n- const vector signed short sum1B = vec_adds(srcP0B, srcP1B);\n- const vector signed short sum2A = vec_adds(srcM1A, srcP2A);\n- const vector signed short sum2B = vec_adds(srcM1B, srcP2B);\n- const vector signed short sum3A = vec_adds(srcM2A, srcP3A);\n- const vector signed short sum3B = vec_adds(srcM2B, srcP3B);\n-\n- const vector signed short pp1A = vec_mladd(sum1A, v20ss, v16ss);\n- const vector signed short pp1B = vec_mladd(sum1B, v20ss, v16ss);\n-\n- const vector 
signed short pp2A = vec_mladd(sum2A, v5ss, (vector signed short)vzero);\n- const vector signed short pp2B = vec_mladd(sum2B, v5ss, (vector signed short)vzero);\n-\n- const vector signed short pp3A = vec_add(sum3A, pp1A);\n- const vector signed short pp3B = vec_add(sum3B, pp1B);\n-\n- const vector signed short psumA = vec_sub(pp3A, pp2A);\n- const vector signed short psumB = vec_sub(pp3B, pp2B);\n-\n- const vector signed short sumA = vec_sra(psumA, v5us);\n- const vector signed short sumB = vec_sra(psumB, v5us);\n-\n- const vector unsigned char sum = vec_packsu(sumA, sumB);\n-\n- const vector unsigned char dst1 = vec_ld(0, dst);\n- const vector unsigned char dst2 = vec_ld(16, dst);\n- const vector unsigned char vdst = vec_perm(dst1, dst2, vec_lvsl(0, dst));\n+ srcP0A = vec_mergeh((vector unsigned char)vzero, srcP0);\n+ srcP0B = vec_mergel((vector unsigned char)vzero, srcP0);\n+ srcP1A = vec_mergeh((vector unsigned char)vzero, srcP1);\n+ srcP1B = vec_mergel((vector unsigned char)vzero, srcP1);\n+\n+ srcP2A = vec_mergeh((vector unsigned char)vzero, srcP2);\n+ srcP2B = vec_mergel((vector unsigned char)vzero, srcP2);\n+ srcP3A = vec_mergeh((vector unsigned char)vzero, srcP3);\n+ srcP3B = vec_mergel((vector unsigned char)vzero, srcP3);\n+\n+ srcM1A = vec_mergeh((vector unsigned char)vzero, srcM1);\n+ srcM1B = vec_mergel((vector unsigned char)vzero, srcM1);\n+ srcM2A = vec_mergeh((vector unsigned char)vzero, srcM2);\n+ srcM2B = vec_mergel((vector unsigned char)vzero, srcM2);\n+\n+\n+ pp1A = vec_mladd(sum1A, v20ss, v16ss);\n+ pp1B = vec_mladd(sum1B, v20ss, v16ss);\n+\n+ pp2A = vec_mladd(sum2A, v5ss, (vector signed short)vzero);\n+ pp2B = vec_mladd(sum2B, v5ss, (vector signed short)vzero);\n+\n+\n+ psumA = vec_sub(pp3A, pp2A);\n+ psumB = vec_sub(pp3B, pp2B);\n+\n+ sumA = vec_sra(psumA, v5us);\n+ sumB = vec_sra(psumB, v5us);\n+\n+ sum = vec_packsu(sumA, sumB);\n+\n+ dst1 = vec_ld(0, dst);\n+ dst2 = vec_ld(16, dst);\n+ vdst = vec_perm(dst1, dst2, vec_lvsl(0, dst));\n\n- vector unsigned char fsum;\nOP_U8_ALTIVEC(fsum, sum, vdst);\n\n- const vector unsigned char rsum = vec_perm(fsum, fsum, dstperm);\n- const vector unsigned char fdst1 = vec_sel(dst1, rsum, dstmask);\n- const vector unsigned char fdst2 = vec_sel(rsum, dst2, dstmask);\n+ rsum = vec_perm(fsum, fsum, dstperm);\n+ fdst1 = vec_sel(dst1, rsum, dstmask);\n+ fdst2 = vec_sel(rsum, dst2, dstmask);\n\nvec_st(fdst1, 0, dst);\nvec_st(fdst2, 16, dst);\n@@ -299,7 +313,6 @@\n/* this code assume stride % 16 == 0 */\nstatic void PREFIX_h264_qpel16_v_lowpass_altivec(uint8_t * dst, uint8_t * src, int dstStride, int srcStride) {\nPOWERPC_PERF_DECLARE(PREFIX_h264_qpel16_v_lowpass_num, 1);\n- POWERPC_PERF_START_COUNT(PREFIX_h264_qpel16_v_lowpass_num, 1);\n\nregister int i;\n\n@@ -318,49 +331,71 @@\nconst vector unsigned char srcM2a = vec_ld(0, srcbis);\nconst vector unsigned char srcM2b = vec_ld(16, srcbis);\nconst vector unsigned char srcM2 = vec_perm(srcM2a, srcM2b, perm);\n- srcbis += srcStride;\n- const vector unsigned char srcM1a = vec_ld(0, srcbis);\n+// srcbis += srcStride;\n+ const vector unsigned char srcM1a = vec_ld(0, srcbis += srcStride);\nconst vector unsigned char srcM1b = vec_ld(16, srcbis);\nconst vector unsigned char srcM1 = vec_perm(srcM1a, srcM1b, perm);\n- srcbis += srcStride;\n- const vector unsigned char srcP0a = vec_ld(0, srcbis);\n+// srcbis += srcStride;\n+ const vector unsigned char srcP0a = vec_ld(0, srcbis += srcStride);\nconst vector unsigned char srcP0b = vec_ld(16, srcbis);\nconst vector unsigned char srcP0 = vec_perm(srcP0a, 
srcP0b, perm);\n- srcbis += srcStride;\n- const vector unsigned char srcP1a = vec_ld(0, srcbis);\n+// srcbis += srcStride;\n+ const vector unsigned char srcP1a = vec_ld(0, srcbis += srcStride);\nconst vector unsigned char srcP1b = vec_ld(16, srcbis);\nconst vector unsigned char srcP1 = vec_perm(srcP1a, srcP1b, perm);\n- srcbis += srcStride;\n- const vector unsigned char srcP2a = vec_ld(0, srcbis);\n+// srcbis += srcStride;\n+ const vector unsigned char srcP2a = vec_ld(0, srcbis += srcStride);\nconst vector unsigned char srcP2b = vec_ld(16, srcbis);\nconst vector unsigned char srcP2 = vec_perm(srcP2a, srcP2b, perm);\n- srcbis += srcStride;\n+// srcbis += srcStride;\n+\n+ vector signed short srcM2ssA = (vector signed short)\n+ vec_mergeh((vector unsigned char)vzero, srcM2);\n+ vector signed short srcM2ssB = (vector signed short)\n+ vec_mergel((vector unsigned char)vzero, srcM2);\n+ vector signed short srcM1ssA = (vector signed short)\n+ vec_mergeh((vector unsigned char)vzero, srcM1);\n+ vector signed short srcM1ssB = (vector signed short)\n+ vec_mergel((vector unsigned char)vzero, srcM1);\n+ vector signed short srcP0ssA = (vector signed short)\n+ vec_mergeh((vector unsigned char)vzero, srcP0);\n+ vector signed short srcP0ssB = (vector signed short)\n+ vec_mergel((vector unsigned char)vzero, srcP0);\n+ vector signed short srcP1ssA = (vector signed short)\n+ vec_mergeh((vector unsigned char)vzero, srcP1);\n+ vector signed short srcP1ssB = (vector signed short)\n+ vec_mergel((vector unsigned char)vzero, srcP1);\n+ vector signed short srcP2ssA = (vector signed short)\n+ vec_mergeh((vector unsigned char)vzero, srcP2);\n+ vector signed short srcP2ssB = (vector signed short)\n+ vec_mergel((vector unsigned char)vzero, srcP2);\n+\n+ vector signed short pp1A, pp1B, pp2A, pp2B, pp3A, pp3B,\n+ psumA, psumB, sumA, sumB,\n+ srcP3ssA, srcP3ssB,\n+ sum1A, sum1B, sum2A, sum2B, sum3A, sum3B;\n\n- vector signed short srcM2ssA = (vector signed short)vec_mergeh((vector unsigned char)vzero, srcM2);\n- vector signed short srcM2ssB = (vector signed short)vec_mergel((vector unsigned char)vzero, srcM2);\n- vector signed short srcM1ssA = (vector signed short)vec_mergeh((vector unsigned char)vzero, srcM1);\n- vector signed short srcM1ssB = (vector signed short)vec_mergel((vector unsigned char)vzero, srcM1);\n- vector signed short srcP0ssA = (vector signed short)vec_mergeh((vector unsigned char)vzero, srcP0);\n- vector signed short srcP0ssB = (vector signed short)vec_mergel((vector unsigned char)vzero, srcP0);\n- vector signed short srcP1ssA = (vector signed short)vec_mergeh((vector unsigned char)vzero, srcP1);\n- vector signed short srcP1ssB = (vector signed short)vec_mergel((vector unsigned char)vzero, srcP1);\n- vector signed short srcP2ssA = (vector signed short)vec_mergeh((vector unsigned char)vzero, srcP2);\n- vector signed short srcP2ssB = (vector signed short)vec_mergel((vector unsigned char)vzero, srcP2);\n+ vector unsigned char sum, dst1, dst2, vdst, fsum, rsum, fdst1, fdst2,\n+ srcP3a, srcP3b, srcP3;\n+\n+ POWERPC_PERF_START_COUNT(PREFIX_h264_qpel16_v_lowpass_num, 1);\n\nfor (i = 0 ; i < 16 ; i++) {\n- const vector unsigned char srcP3a = vec_ld(0, srcbis);\n- const vector unsigned char srcP3b = vec_ld(16, srcbis);\n- const vector unsigned char srcP3 = vec_perm(srcP3a, srcP3b, perm);\n- const vector signed short srcP3ssA = (vector signed short)vec_mergeh((vector unsigned char)vzero, srcP3);\n- const vector signed short srcP3ssB = (vector signed short)vec_mergel((vector unsigned char)vzero, srcP3);\n- srcbis += 
srcStride;\n-\n- const vector signed short sum1A = vec_adds(srcP0ssA, srcP1ssA);\n- const vector signed short sum1B = vec_adds(srcP0ssB, srcP1ssB);\n- const vector signed short sum2A = vec_adds(srcM1ssA, srcP2ssA);\n- const vector signed short sum2B = vec_adds(srcM1ssB, srcP2ssB);\n- const vector signed short sum3A = vec_adds(srcM2ssA, srcP3ssA);\n- const vector signed short sum3B = vec_adds(srcM2ssB, srcP3ssB);\n+ srcP3a = vec_ld(0, srcbis += srcStride);\n+ srcP3b = vec_ld(16, srcbis);\n+ srcP3 = vec_perm(srcP3a, srcP3b, perm);\n+ srcP3ssA = (vector signed short)\n+ vec_mergeh((vector unsigned char)vzero, srcP3);\n+ srcP3ssB = (vector signed short)\n+ vec_mergel((vector unsigned char)vzero, srcP3);\n+// srcbis += srcStride;\n+\n\nsrcM2ssA = srcM1ssA;\nsrcM2ssB = srcM1ssB;\n@@ -373,33 +408,32 @@\nsrcP2ssA = srcP3ssA;\nsrcP2ssB = srcP3ssB;\n\n- const vector signed short pp1A = vec_mladd(sum1A, v20ss, v16ss);\n- const vector signed short pp1B = vec_mladd(sum1B, v20ss, v16ss);\n+ pp1A = vec_mladd(sum1A, v20ss, v16ss);\n+ pp1B = vec_mladd(sum1B, v20ss, v16ss);\n\n- const vector signed short pp2A = vec_mladd(sum2A, v5ss, (vector signed short)vzero);\n- const vector signed short pp2B = vec_mladd(sum2B, v5ss, (vector signed short)vzero);\n+ pp2A = vec_mladd(sum2A, v5ss, (vector signed short)vzero);\n+ pp2B = vec_mladd(sum2B, v5ss, (vector signed short)vzero);\n\n- const vector signed short pp3A = vec_add(sum3A, pp1A);\n- const vector signed short pp3B = vec_add(sum3B, pp1B);\n\n- const vector signed short psumA = vec_sub(pp3A, pp2A);\n- const vector signed short psumB = vec_sub(pp3B, pp2B);\n+ psumA = vec_sub(pp3A, pp2A);\n+ psumB = vec_sub(pp3B, pp2B);\n\n- const vector signed short sumA = vec_sra(psumA, v5us);\n- const vector signed short sumB = vec_sra(psumB, v5us);\n+ sumA = vec_sra(psumA, v5us);\n+ sumB = vec_sra(psumB, v5us);\n\n- const vector unsigned char sum = vec_packsu(sumA, sumB);\n+ sum = vec_packsu(sumA, sumB);\n\n- const vector unsigned char dst1 = vec_ld(0, dst);\n- const vector unsigned char dst2 = vec_ld(16, dst);\n- const vector unsigned char vdst = vec_perm(dst1, dst2, vec_lvsl(0, dst));\n+ dst1 = vec_ld(0, dst);\n+ dst2 = vec_ld(16, dst);\n+ vdst = vec_perm(dst1, dst2, vec_lvsl(0, dst));\n\n- vector unsigned char fsum;\nOP_U8_ALTIVEC(fsum, sum, vdst);\n\n- const vector unsigned char rsum = vec_perm(fsum, fsum, dstperm);\n- const vector unsigned char fdst1 = vec_sel(dst1, rsum, dstmask);\n- const vector unsigned char fdst2 = vec_sel(rsum, dst2, dstmask);\n+ rsum = vec_perm(fsum, fsum, dstperm);\n+ fdst1 = vec_sel(dst1, rsum, dstmask);\n+ fdst2 = vec_sel(rsum, dst2, dstmask);\n\nvec_st(fdst1, 0, dst);\nvec_st(fdst2, 16, dst);\n@@ -412,7 +446,6 @@\n/* this code assume stride % 16 == 0 *and* tmp is properly aligned */\nstatic void PREFIX_h264_qpel16_hv_lowpass_altivec(uint8_t * dst, int16_t * tmp, uint8_t * src, int dstStride, int tmpStride, int srcStride) {\nPOWERPC_PERF_DECLARE(PREFIX_h264_qpel16_hv_lowpass_num, 1);\n- POWERPC_PERF_START_COUNT(PREFIX_h264_qpel16_hv_lowpass_num, 1);\nregister int i;\nconst vector signed int vzero = vec_splat_s32(0);\nconst vector unsigned char permM2 = vec_lvsl(-2, src);\n@@ -430,8 +463,38 @@\n\nregister int align = ((((unsigned long)src) - 2) % 16);\n\n- src -= (2 * srcStride);\n+ const vector unsigned char neg1 = (const vector unsigned char)\n+ vec_splat_s8(-1);\n+\n+ vector signed short srcP0A, srcP0B, srcP1A, srcP1B,\n+ srcP2A, srcP2B, srcP3A, srcP3B,\n+ srcM1A, srcM1B, srcM2A, srcM2B,\n+ sum1A, sum1B, sum2A, sum2B, sum3A, sum3B,\n+ pp1A, 
pp1B, pp2A, pp2B, psumA, psumB;\n+\n+ const vector unsigned char dstperm = vec_lvsr(0, dst);\n+\n+ const vector unsigned char dstmask = vec_perm((const vector unsigned char)vzero, neg1, dstperm);\n+\n+ const vector unsigned char mperm = (const vector unsigned char)\n+ AVV(0x00, 0x08, 0x01, 0x09, 0x02, 0x0A, 0x03, 0x0B,\n+ 0x04, 0x0C, 0x05, 0x0D, 0x06, 0x0E, 0x07, 0x0F);\n+ int16_t *tmpbis = tmp;\n\n+ vector signed short tmpM1ssA, tmpM1ssB, tmpM2ssA, tmpM2ssB,\n+ tmpP0ssA, tmpP0ssB, tmpP1ssA, tmpP1ssB,\n+ tmpP2ssA, tmpP2ssB;\n+\n+ vector signed int pp1Ae, pp1Ao, pp1Be, pp1Bo, pp2Ae, pp2Ao, pp2Be, pp2Bo,\n+ pp3Ae, pp3Ao, pp3Be, pp3Bo, pp1cAe, pp1cAo, pp1cBe, pp1cBo,\n+ pp32Ae, pp32Ao, pp32Be, pp32Bo, sumAe, sumAo, sumBe, sumBo,\n+ ssumAe, ssumAo, ssumBe, ssumBo;\n+ vector unsigned char fsum, sumv, sum, dst1, dst2, vdst,\n+ rsum, fdst1, fdst2;\n+ vector signed short ssume, ssumo;\n+\n+ POWERPC_PERF_START_COUNT(PREFIX_h264_qpel16_hv_lowpass_num, 1);\n+ src -= (2 * srcStride);\nfor (i = 0 ; i < 21 ; i ++) {\nvector unsigned char srcM2, srcM1, srcP0, srcP1, srcP2, srcP3;\nvector unsigned char srcR1 = vec_ld(-2, src);\n@@ -492,36 +555,48 @@\n} break;\n}\n\n- const vector signed short srcP0A = (vector signed short)vec_mergeh((vector unsigned char)vzero, srcP0);\n- const vector signed short srcP0B = (vector signed short)vec_mergel((vector unsigned char)vzero, srcP0);\n- const vector signed short srcP1A = (vector signed short)vec_mergeh((vector unsigned char)vzero, srcP1);\n- const vector signed short srcP1B = (vector signed short)vec_mergel((vector unsigned char)vzero, srcP1);\n-\n- const vector signed short srcP2A = (vector signed short)vec_mergeh((vector unsigned char)vzero, srcP2);\n- const vector signed short srcP2B = (vector signed short)vec_mergel((vector unsigned char)vzero, srcP2);\n- const vector signed short srcP3A = (vector signed short)vec_mergeh((vector unsigned char)vzero, srcP3);\n- const vector signed short srcP3B = (vector signed short)vec_mergel((vector unsigned char)vzero, srcP3);\n-\n- const vector signed short srcM1A = (vector signed short)vec_mergeh((vector unsigned char)vzero, srcM1);\n- const vector signed short srcM1B = (vector signed short)vec_mergel((vector unsigned char)vzero, srcM1);\n- const vector signed short srcM2A = (vector signed short)vec_mergeh((vector unsigned char)vzero, srcM2);\n- const vector signed short srcM2B = (vector signed short)vec_mergel((vector unsigned char)vzero, srcM2);\n-\n- const vector signed short sum1A = vec_adds(srcP0A, srcP1A);\n- const vector signed short sum1B = vec_adds(srcP0B, srcP1B);\n- const vector signed short sum2A = vec_adds(srcM1A, srcP2A);\n- const vector signed short sum2B = vec_adds(srcM1B, srcP2B);\n- const vector signed short sum3A = vec_adds(srcM2A, srcP3A);\n- const vector signed short sum3B = vec_adds(srcM2B, srcP3B);\n+ srcP0A = (vector signed short)\n+ vec_mergeh((vector unsigned char)vzero, srcP0);\n+ srcP0B = (vector signed short)\n+ vec_mergel((vector unsigned char)vzero, srcP0);\n+ srcP1A = (vector signed short)\n+ vec_mergeh((vector unsigned char)vzero, srcP1);\n+ srcP1B = (vector signed short)\n+ vec_mergel((vector unsigned char)vzero, srcP1);\n+\n+ srcP2A = (vector signed short)\n+ vec_mergeh((vector unsigned char)vzero, srcP2);\n+ srcP2B = (vector signed short)\n+ vec_mergel((vector unsigned char)vzero, srcP2);\n+ srcP3A = (vector signed short)\n+ vec_mergeh((vector unsigned char)vzero, srcP3);\n+ srcP3B = (vector signed short)\n+ vec_mergel((vector unsigned char)vzero, srcP3);\n+\n+ srcM1A = (vector signed 
short)\n+ vec_mergeh((vector unsigned char)vzero, srcM1);\n+ srcM1B = (vector signed short)\n+ vec_mergel((vector unsigned char)vzero, srcM1);\n+ srcM2A = (vector signed short)\n+ vec_mergeh((vector unsigned char)vzero, srcM2);\n+ srcM2B = (vector signed short)\n+ vec_mergel((vector unsigned char)vzero, srcM2);\n+\n\n- const vector signed short pp1A = vec_mladd(sum1A, v20ss, sum3A);\n- const vector signed short pp1B = vec_mladd(sum1B, v20ss, sum3B);\n+ pp1A = vec_mladd(sum1A, v20ss, sum3A);\n+ pp1B = vec_mladd(sum1B, v20ss, sum3B);\n\n- const vector signed short pp2A = vec_mladd(sum2A, v5ss, (vector signed short)vzero);\n- const vector signed short pp2B = vec_mladd(sum2B, v5ss, (vector signed short)vzero);\n+ pp2A = vec_mladd(sum2A, v5ss, (vector signed short)vzero);\n+ pp2B = vec_mladd(sum2B, v5ss, (vector signed short)vzero);\n\n- const vector signed short psumA = vec_sub(pp1A, pp2A);\n- const vector signed short psumB = vec_sub(pp1B, pp2B);\n+ psumA = vec_sub(pp1A, pp2A);\n+ psumB = vec_sub(pp1B, pp2B);\n\nvec_st(psumA, 0, tmp);\nvec_st(psumB, 16, tmp);\n@@ -530,35 +605,25 @@\ntmp += tmpStride; /* int16_t*, and stride is 16, so it's OK here */\n}\n\n- const vector unsigned char dstperm = vec_lvsr(0, dst);\n- const vector unsigned char neg1 = (const vector unsigned char)vec_splat_s8(-1);\n- const vector unsigned char dstmask = vec_perm((const vector unsigned char)vzero, neg1, dstperm);\n- const vector unsigned char mperm = (const vector unsigned char)\n- AVV(0x00, 0x08, 0x01, 0x09, 0x02, 0x0A, 0x03, 0x0B,\n- 0x04, 0x0C, 0x05, 0x0D, 0x06, 0x0E, 0x07, 0x0F);\n-\n- int16_t *tmpbis = tmp - (tmpStride * 21);\n-\n- vector signed short tmpM2ssA = vec_ld(0, tmpbis);\n- vector signed short tmpM2ssB = vec_ld(16, tmpbis);\n+ tmpM2ssA = vec_ld(0, tmpbis);\n+ tmpM2ssB = vec_ld(16, tmpbis);\ntmpbis += tmpStride;\n- vector signed short tmpM1ssA = vec_ld(0, tmpbis);\n- vector signed short tmpM1ssB = vec_ld(16, tmpbis);\n+ tmpM1ssA = vec_ld(0, tmpbis);\n+ tmpM1ssB = vec_ld(16, tmpbis);\ntmpbis += tmpStride;\n- vector signed short tmpP0ssA = vec_ld(0, tmpbis);\n- vector signed short tmpP0ssB = vec_ld(16, tmpbis);\n+ tmpP0ssA = vec_ld(0, tmpbis);\n+ tmpP0ssB = vec_ld(16, tmpbis);\ntmpbis += tmpStride;\n- vector signed short tmpP1ssA = vec_ld(0, tmpbis);\n- vector signed short tmpP1ssB = vec_ld(16, tmpbis);\n+ tmpP1ssA = vec_ld(0, tmpbis);\n+ tmpP1ssB = vec_ld(16, tmpbis);\ntmpbis += tmpStride;\n- vector signed short tmpP2ssA = vec_ld(0, tmpbis);\n- vector signed short tmpP2ssB = vec_ld(16, tmpbis);\n+ tmpP2ssA = vec_ld(0, tmpbis);\n+ tmpP2ssB = vec_ld(16, tmpbis);\ntmpbis += tmpStride;\n\nfor (i = 0 ; i < 16 ; i++) {\nconst vector signed short tmpP3ssA = vec_ld(0, tmpbis);\nconst vector signed short tmpP3ssB = vec_ld(16, tmpbis);\n- tmpbis += tmpStride;\n\nconst vector signed short sum1A = vec_adds(tmpP0ssA, tmpP1ssA);\nconst vector signed short sum1B = vec_adds(tmpP0ssB, tmpP1ssB);\n@@ -567,6 +632,8 @@\nconst vector signed short sum3A = vec_adds(tmpM2ssA, tmpP3ssA);\nconst vector signed short sum3B = vec_adds(tmpM2ssB, tmpP3ssB);\n\n+ tmpbis += tmpStride;\n+\ntmpM2ssA = tmpM1ssA;\ntmpM2ssB = tmpM1ssB;\ntmpM1ssA = tmpP0ssA;\n@@ -578,57 +645,56 @@\ntmpP2ssA = tmpP3ssA;\ntmpP2ssB = tmpP3ssB;\n\n- const vector signed int pp1Ae = vec_mule(sum1A, v20ss);\n- const vector signed int pp1Ao = vec_mulo(sum1A, v20ss);\n- const vector signed int pp1Be = vec_mule(sum1B, v20ss);\n- const vector signed int pp1Bo = vec_mulo(sum1B, v20ss);\n-\n- const vector signed int pp2Ae = vec_mule(sum2A, v5ss);\n- const vector signed 
int pp2Ao = vec_mulo(sum2A, v5ss);\n- const vector signed int pp2Be = vec_mule(sum2B, v5ss);\n- const vector signed int pp2Bo = vec_mulo(sum2B, v5ss);\n-\n- const vector signed int pp3Ae = vec_sra((vector signed int)sum3A, v16ui);\n- const vector signed int pp3Ao = vec_mulo(sum3A, v1ss);\n- const vector signed int pp3Be = vec_sra((vector signed int)sum3B, v16ui);\n- const vector signed int pp3Bo = vec_mulo(sum3B, v1ss);\n-\n- const vector signed int pp1cAe = vec_add(pp1Ae, v512si);\n- const vector signed int pp1cAo = vec_add(pp1Ao, v512si);\n- const vector signed int pp1cBe = vec_add(pp1Be, v512si);\n- const vector signed int pp1cBo = vec_add(pp1Bo, v512si);\n-\n- const vector signed int pp32Ae = vec_sub(pp3Ae, pp2Ae);\n- const vector signed int pp32Ao = vec_sub(pp3Ao, pp2Ao);\n- const vector signed int pp32Be = vec_sub(pp3Be, pp2Be);\n- const vector signed int pp32Bo = vec_sub(pp3Bo, pp2Bo);\n-\n- const vector signed int sumAe = vec_add(pp1cAe, pp32Ae);\n- const vector signed int sumAo = vec_add(pp1cAo, pp32Ao);\n- const vector signed int sumBe = vec_add(pp1cBe, pp32Be);\n- const vector signed int sumBo = vec_add(pp1cBo, pp32Bo);\n-\n- const vector signed int ssumAe = vec_sra(sumAe, v10ui);\n- const vector signed int ssumAo = vec_sra(sumAo, v10ui);\n- const vector signed int ssumBe = vec_sra(sumBe, v10ui);\n- const vector signed int ssumBo = vec_sra(sumBo, v10ui);\n-\n- const vector signed short ssume = vec_packs(ssumAe, ssumBe);\n- const vector signed short ssumo = vec_packs(ssumAo, ssumBo);\n-\n- const vector unsigned char sumv = vec_packsu(ssume, ssumo);\n- const vector unsigned char sum = vec_perm(sumv, sumv, mperm);\n-\n- const vector unsigned char dst1 = vec_ld(0, dst);\n- const vector unsigned char dst2 = vec_ld(16, dst);\n- const vector unsigned char vdst = vec_perm(dst1, dst2, vec_lvsl(0, dst));\n+ pp1Ae = vec_mule(sum1A, v20ss);\n+ pp1Ao = vec_mulo(sum1A, v20ss);\n+ pp1Be = vec_mule(sum1B, v20ss);\n+ pp1Bo = vec_mulo(sum1B, v20ss);\n+\n+ pp2Ae = vec_mule(sum2A, v5ss);\n+ pp2Ao = vec_mulo(sum2A, v5ss);\n+ pp2Be = vec_mule(sum2B, v5ss);\n+ pp2Bo = vec_mulo(sum2B, v5ss);\n+\n+ pp3Ae = vec_sra((vector signed int)sum3A, v16ui);\n+ pp3Ao = vec_mulo(sum3A, v1ss);\n+ pp3Be = vec_sra((vector signed int)sum3B, v16ui);\n+ pp3Bo = vec_mulo(sum3B, v1ss);\n+\n+\n+ pp32Ae = vec_sub(pp3Ae, pp2Ae);\n+ pp32Ao = vec_sub(pp3Ao, pp2Ao);\n+ pp32Be = vec_sub(pp3Be, pp2Be);\n+ pp32Bo = vec_sub(pp3Bo, pp2Bo);\n+\n+\n+ ssumAe = vec_sra(sumAe, v10ui);\n+ ssumAo = vec_sra(sumAo, v10ui);\n+ ssumBe = vec_sra(sumBe, v10ui);\n+ ssumBo = vec_sra(sumBo, v10ui);\n+\n+ ssume = vec_packs(ssumAe, ssumBe);\n+ ssumo = vec_packs(ssumAo, ssumBo);\n+\n+ sumv = vec_packsu(ssume, ssumo);\n+ sum = vec_perm(sumv, sumv, mperm);\n+\n+ dst1 = vec_ld(0, dst);\n+ dst2 = vec_ld(16, dst);\n+ vdst = vec_perm(dst1, dst2, vec_lvsl(0, dst));\n\n- vector unsigned char fsum;\nOP_U8_ALTIVEC(fsum, sum, vdst);\n\n- const vector unsigned char rsum = vec_perm(fsum, fsum, dstperm);\n- const vector unsigned char fdst1 = vec_sel(dst1, rsum, dstmask);\n- const vector unsigned char fdst2 = vec_sel(rsum, dst2, dstmask);\n+ rsum = vec_perm(fsum, fsum, dstperm);\n+ fdst1 = vec_sel(dst1, rsum, dstmask);\n+ fdst2 = vec_sel(rsum, dst2, dstmask);\n\nvec_st(fdst1, 0, dst);\nvec_st(fdst2, 16, dst);\n\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5322732,"math_prob":0.99971193,"size":41620,"snap":"2022-27-2022-33","text_gpt3_token_len":14818,"char_repetition_ratio":0.37218857,"word_repetition_ratio":0.50117135,"special_character_ratio":0.35504565,"punctuation_ratio":0.24784878,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9951869,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-30T21:46:25Z\",\"WARC-Record-ID\":\"<urn:uuid:96b53827-bc5e-425c-9ada-11de5b8ffc35>\",\"Content-Length\":\"52875\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ac02d575-f15a-4e67-bad4-de98ce1f8094>\",\"WARC-Concurrent-To\":\"<urn:uuid:8f669a5e-af6d-4263-b2fe-da6448a3e0fa>\",\"WARC-IP-Address\":\"79.124.17.100\",\"WARC-Target-URI\":\"http://ffmpeg.org/pipermail/ffmpeg-cvslog/2006-June/003120.html\",\"WARC-Payload-Digest\":\"sha1:V32Z3HJ2G7H6TZKBEKP5H55JW4ERCVLV\",\"WARC-Block-Digest\":\"sha1:EE26S45Q3CRZ37QHNJXZU3XMM2NN3UV2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103915196.47_warc_CC-MAIN-20220630213820-20220701003820-00735.warc.gz\"}"}
https://encyclopediaofmath.org/wiki/Subrepresentation_of_a_representation
[ "# Subrepresentation of a representation\n\nA linear representation $\\rho$ in an invariant subspace $F \\subset E$ of a representation $\\pi$ of a group (algebra, ring or semi-group) $X$ in a (topological) vector space $E$ defined by the formula $\\rho ( x) \\xi = \\pi ( x) \\xi$ for all $\\xi \\in F$, $x \\in X$. If $\\pi$ is a continuous representation (of a topological group, algebra, ring, or semi-group), then any subrepresentation of it is also continuous." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7200645,"math_prob":0.99802554,"size":1027,"snap":"2021-21-2021-25","text_gpt3_token_len":251,"char_repetition_ratio":0.2170088,"word_repetition_ratio":0.07092199,"special_character_ratio":0.24732229,"punctuation_ratio":0.1597633,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99975365,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-10T04:15:38Z\",\"WARC-Record-ID\":\"<urn:uuid:418e6c4d-87a0-4a17-a554-a84b35f289c0>\",\"Content-Length\":\"14700\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3d9ff18d-1bba-4a58-a982-17a19d908cb3>\",\"WARC-Concurrent-To\":\"<urn:uuid:6ccdf969-3162-4df6-b920-bc958936343d>\",\"WARC-IP-Address\":\"34.96.94.55\",\"WARC-Target-URI\":\"https://encyclopediaofmath.org/wiki/Subrepresentation_of_a_representation\",\"WARC-Payload-Digest\":\"sha1:JSIFCL7T3FZ6JARO7GDBZ66OYTD4IG3J\",\"WARC-Block-Digest\":\"sha1:SW2YPTZJN7ALYHVTL2FZ35VJLVKDTWGK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989030.87_warc_CC-MAIN-20210510033850-20210510063850-00172.warc.gz\"}"}
https://www.learnatnoon.com/s/a-pass-on-is-numbered-so-that-its-faces-show/6036/
[ "A pass on is numbered so that its faces show the numbers 1, 2, 2, 3, 3, 6. It is tossed multiple times and the complete score in two tosses is noted. Complete the accompanying table which gives a couple of upsides of the absolute score on the two tosses:\nA pass on is numbered so that its faces show the numbers 1, 2, 2, 3, 3, 6. It is tossed multiple times and the complete score in two tosses is noted. Complete the accompanying table which gives a couple of upsides of the absolute score on the two tosses:\n\nWhat is the likelihood that the absolute score is\n\n(I) even?\n\n(ii) 6?\n\n(iii) something like 6?\n\nSolution:\n\nThe table will be as per the following:\n\n• 1 2 2 3 3 6\n\n1 2 3 3 4 4 7\n\n2 3 4 4 5 5 8\n\n2 3 4 4 5 5 8\n\n3 4 5 5 6 6 9\n\n3 4 5 5 6 6 9\n\n6 7 8 8 9 9 12\n\nAlong these lines, the all out number of results = 6×6 = 36\n\n(I) E (Even) = 18\n\nP (Even) = 18/36 = ½\n\n(ii) E (aggregate is 6) = 4\n\nP (aggregate is 6) = 4/36 = 1/9\n\n(iii) E (aggregate is atleast 6) = 15\n\nP (aggregate is atleast 6) = 15/36 = 5/12" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86242723,"math_prob":0.99917173,"size":479,"snap":"2022-40-2023-06","text_gpt3_token_len":242,"char_repetition_ratio":0.15157895,"word_repetition_ratio":0.11023622,"special_character_ratio":0.5407098,"punctuation_ratio":0.045801528,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9848254,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-05T04:47:52Z\",\"WARC-Record-ID\":\"<urn:uuid:44cdacc4-4049-4e96-ad2e-8891a678ff54>\",\"Content-Length\":\"110916\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c0b02362-6a60-40b6-8f9b-124173347461>\",\"WARC-Concurrent-To\":\"<urn:uuid:f669b5df-a91c-464e-951c-238186a996dd>\",\"WARC-IP-Address\":\"18.158.64.80\",\"WARC-Target-URI\":\"https://www.learnatnoon.com/s/a-pass-on-is-numbered-so-that-its-faces-show/6036/\",\"WARC-Payload-Digest\":\"sha1:KXZLIVADGHVCU4L42DV6SVMP45676GWE\",\"WARC-Block-Digest\":\"sha1:4MA22ZNN7XRLY4TGBL3HMAEE2SHSXOLB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500215.91_warc_CC-MAIN-20230205032040-20230205062040-00537.warc.gz\"}"}
https://javascript.tutorialink.com/opencv-js-perspective-transform/
[ "# opencv.js perspective transform\n\nI'm trying to use opencv.js to find a document in a provided image (detect edges, apply perspective transform, etc.).\n\nI've got a reasonable set of code that (occasionally) detects edges of a document and grabs the bounding box for that. However, I'm struggling to do the perspective transform steps. There are some helpers for this (not in JS) here and here.\n\nUnfortunately I'm getting stuck on something simple. I can find the matching `Mat` that has 4 edges. Displaying that shows it to be accurate. However, I have no idea how to get some simple X/Y info out of that `Mat`. I thought `minMaxLoc()` would be a good option, but I keep getting an error passing in my matching `Mat`. Any idea why I can draw `foundContour` and get bounding box info from it, but I can't call `minMaxLoc` on it?\n\nCode:\n\n```//<Get Image>\n//<Convert to Gray, do GaussianBlur, and do Canny edge detection>\nlet contours = new cv.MatVector();\nlet hierarchy = new cv.Mat();\ncv.findContours(matDestEdged, contours, hierarchy, cv.RETR_LIST, cv.CHAIN_APPROX_SIMPLE);\n\n//<Sort resulting contours by area to get largest>\n\nlet foundContour = null;\nfor (let sortableContour of sortableContours) {\nlet peri = cv.arcLength(sortableContour.contour, true);\nlet approx = new cv.Mat();\ncv.approxPolyDP(sortableContour.contour, approx, 0.1 * peri, true);\n\nif (approx.rows == 4) {\nconsole.log('found it');\nfoundContour = approx\nbreak;\n}\nelse {\napprox.delete();\n}\n}\n\n//<Draw foundContour and a bounding box to ensure it's accurate>\n\n//TODO: Do a perspective transform\nlet result = cv.minMaxLoc(foundContour);\n```\n\nThe last line above results in a runtime error (`Uncaught (in promise): 6402256 - Exception catching is disabled`). I can run `minMaxLoc()` on other `Mat` objects.\n\nFor anyone else looking to do this in OpenCV.JS, what I commented above seems to still be accurate. The contour found can't be used with `minMaxLoc` (which expects a single-channel matrix, while a contour `Mat` stores its points across two channels), but the X/Y data can be pulled out of `data32S[]`. That should be all that's needed to do this perspective transform. Some code is below.\n\n```//Find all contours\nlet contours = new cv.MatVector();\nlet hierarchy = new cv.Mat();\ncv.findContours(matDest, contours, hierarchy, cv.RETR_LIST, cv.CHAIN_APPROX_SIMPLE);\n\n//Get area for all contours so we can find the biggest\nlet sortableContours: SortableContour[] = [];\nfor (let i = 0; i < contours.size(); i++) {\nlet cnt = contours.get(i);\nlet area = cv.contourArea(cnt, false);\nlet perim = cv.arcLength(cnt, false);\n\nsortableContours.push(new SortableContour({ areaSize: area, perimiterSize: perim, contour: cnt }));\n}\n\n//Sort 'em\nsortableContours = sortableContours.sort((item1, item2) => { return (item1.areaSize > item2.areaSize) ? -1 : (item1.areaSize < item2.areaSize) ? 1 : 0; }).slice(0, 5);\n\n//Ensure the top area contour has 4 corners (NOTE: This is not a perfect science and likely needs more attention)\nlet approx = new cv.Mat();\ncv.approxPolyDP(sortableContours[0].contour, approx, .05 * sortableContours[0].perimiterSize, true);\n\nif (approx.rows == 4) {\nconsole.log('Found a 4-corner approx');\nfoundContour = approx;\n}\nelse{\nconsole.log('No 4-corner large contour!');\nreturn;\n}\n\n//Find the corners\n//foundContour has 2 channels (seemingly x/y), has a depth of 4, and a type of 12. Seems to show it's a CV_32S \"type\", so the valid data is in data32S\nlet corner1 = new cv.Point(foundContour.data32S[0], foundContour.data32S[1]);\nlet corner2 = new cv.Point(foundContour.data32S[2], foundContour.data32S[3]);\nlet corner3 = new cv.Point(foundContour.data32S[4], foundContour.data32S[5]);\nlet corner4 = new cv.Point(foundContour.data32S[6], foundContour.data32S[7]);\n\n//Order the corners\nlet cornerArray = [{ corner: corner1 }, { corner: corner2 }, { corner: corner3 }, { corner: corner4 }];\n//Sort by Y position (to get top-down)\ncornerArray.sort((item1, item2) => { return (item1.corner.y < item2.corner.y) ? -1 : (item1.corner.y > item2.corner.y) ? 1 : 0; });\n\n//Determine left/right based on x position of top and bottom 2\nlet tl = cornerArray[0].corner.x < cornerArray[1].corner.x ? cornerArray[0] : cornerArray[1];\nlet tr = cornerArray[0].corner.x > cornerArray[1].corner.x ? cornerArray[0] : cornerArray[1];\nlet bl = cornerArray[2].corner.x < cornerArray[3].corner.x ? cornerArray[2] : cornerArray[3];\nlet br = cornerArray[2].corner.x > cornerArray[3].corner.x ? cornerArray[2] : cornerArray[3];\n\n//Calculate the max width/height\nlet widthBottom = Math.hypot(br.corner.x - bl.corner.x, br.corner.y - bl.corner.y);\nlet widthTop = Math.hypot(tr.corner.x - tl.corner.x, tr.corner.y - tl.corner.y);\nlet theWidth = (widthBottom > widthTop) ? widthBottom : widthTop;\nlet heightRight = Math.hypot(tr.corner.x - br.corner.x, tr.corner.y - br.corner.y);\nlet heightLeft = Math.hypot(tl.corner.x - bl.corner.x, tl.corner.y - bl.corner.y);\nlet theHeight = (heightRight > heightLeft) ? heightRight : heightLeft;\n\n//Transform!\nlet finalDestCoords = cv.matFromArray(4, 1, cv.CV_32FC2, [0, 0, theWidth - 1, 0, theWidth - 1, theHeight - 1, 0, theHeight - 1]);\nlet srcCoords = cv.matFromArray(4, 1, cv.CV_32FC2, [tl.corner.x, tl.corner.y, tr.corner.x, tr.corner.y, br.corner.x, br.corner.y, bl.corner.x, bl.corner.y]);\nlet dsize = new cv.Size(theWidth, theHeight);\nlet M = cv.getPerspectiveTransform(srcCoords, finalDestCoords);\ncv.warpPerspective(matDestTransformed, finalDest, M, dsize, cv.INTER_LINEAR, cv.BORDER_CONSTANT, new cv.Scalar());\n```\n\nFor reference, here is the class definition I was using for `SortableContour`. The code above is meant as a guide, not as something that can run on its own.\n\n```export class SortableContour {\nperimiterSize: number;\nareaSize: number;\ncontour: any;\n\nconstructor(fields: Partial<SortableContour>) {\nObject.assign(this, fields);\n}\n}\n```" ]
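For anyone following along in Python rather than opencv.js, here is a minimal sketch of the same order-the-corners-then-warp idea with cv2. It is an illustration under the assumption that `pts` already holds the four detected quad corners; the function and variable names are invented, not taken from the answer above:

```python
# Minimal sketch of the order-corners-then-warp idea in Python/OpenCV.
# Assumes `pts` is a (4, 2) array of detected quad corners; names are
# illustrative, not taken from the original answer.
import cv2
import numpy as np

def four_point_warp(image, pts):
    pts = np.asarray(pts, dtype=np.float32)
    ys = pts[np.argsort(pts[:, 1])]              # sort by y: top pair, bottom pair
    (tl, tr) = ys[:2][np.argsort(ys[:2][:, 0])]  # split top pair by x
    (bl, br) = ys[2:][np.argsort(ys[2:][:, 0])]  # split bottom pair by x
    # Output size: the larger of each pair of opposing edge lengths
    width = int(max(np.hypot(*(br - bl)), np.hypot(*(tr - tl))))
    height = int(max(np.hypot(*(tr - br)), np.hypot(*(tl - bl))))
    src = np.array([tl, tr, br, bl], dtype=np.float32)
    dst = np.array([[0, 0], [width - 1, 0],
                    [width - 1, height - 1], [0, height - 1]], dtype=np.float32)
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, (width, height))
```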
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6003427,"math_prob":0.7680381,"size":5644,"snap":"2022-05-2022-21","text_gpt3_token_len":1602,"char_repetition_ratio":0.18457447,"word_repetition_ratio":0.0232859,"special_character_ratio":0.2834869,"punctuation_ratio":0.27483442,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9602185,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-26T01:55:01Z\",\"WARC-Record-ID\":\"<urn:uuid:4c612283-60c5-4d03-b4b5-596cda62f4da>\",\"Content-Length\":\"41563\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:06d7d279-0231-4c97-8c29-601cc6fd858c>\",\"WARC-Concurrent-To\":\"<urn:uuid:1d880aef-37d9-4d3a-ac4c-910581be7084>\",\"WARC-IP-Address\":\"104.21.20.67\",\"WARC-Target-URI\":\"https://javascript.tutorialink.com/opencv-js-perspective-transform/\",\"WARC-Payload-Digest\":\"sha1:SS5542SDCSSFBPTIKIUBYYQUTZPC2EGZ\",\"WARC-Block-Digest\":\"sha1:OPRFMQPRE5WYZP2MBVVJJLGFWLGPNI2D\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662595559.80_warc_CC-MAIN-20220526004200-20220526034200-00461.warc.gz\"}"}
http://crux.ms/commands/spectral-counts.html
[ "# spectral-counts\n\n## Usage:\n\ncrux spectral-counts [options] <input PSMs>\n\n## Description:\n\nGiven a collection of scored PSMs, produce a list of proteins or peptides ranked by a quantification score. Spectral-counts supports four types of quantification: Normalized Spectral Abundance Factor (NSAF), Distributed Normalized Spectral Abundance Factor (dNSAF), Normalized Spectral Index (SIN) and Exponentially Modified Protein Abundance Index (emPAI). The NSAF method is from Paoletti et al. (2006). The SIN method is from Griffin et al. (2010). The emPAI method was first described in Ishihama et al. (2005). The quantification methods are defined below and in the following paper:\n\nS McIlwain, M Mathews, M Bereman, EW Rubel, MJ MacCoss, and WS Noble. \"Estimating relative abundances of proteins from shotgun proteomics data.\" BMC Bioinformatics. 13:308, 2012.\n\n### Protein Quantification\n\n1. For each protein in a given database, the NSAF score is:\n$$NSAF_N=\\frac{S_N/L_N}{\\sum_{i=1}^nS_i/L_i}$$\nwhere:\n• N is the protein index\n• S_N is the number of peptide spectra matched to the protein\n• L_N is the length of protein N\n• n is the total number of proteins in the input database\n2. For each protein in a given database, the dNSAF score is:\n$$dNSAF_N=\\frac{\\frac{uSpc_N+(d)sSpc_N}{uL_N+sL_N}}{\\sum_{i=1}^n\\frac{uSpc_i+(d)sSpc_i}{uL_i+sL_i}}$$\nwhere:\n• N is the protein index\n• uSpc_N is the number of unique spectra matched to the protein index\n• sSpc_N is the number of shared peptide spectra matched to the protein index\n• L_N is the length of protein N\n• n is the total number of proteins in the input database\n• d is the distribution factor of peptide K to protein N, given by\n$$d=\\frac{uSpc_N}{\\sum_{i=1}^nuSpc_i}$$\n3. For each protein in a given database, the SIN score is:\n$$SI_N=\\frac{\\sum_{j=1}^{p_N}(\\sum_{k=1}^{s_j}i_k)}{L_N(\\sum_{j=1}^nSI_j)}$$\nwhere:\n• N is the protein index\n• p_N is the number of unique peptides in protein N\n• s_j is the number of spectra assigned to peptide j\n• i_k is the total fragment ion intensity of spectrum k\n• L_N is the length of protein N\n4. For each protein in a given database, the emPAI score is:\n$$emPAI=10^{\\frac{N_{observed}}{N_{observable}}}-1$$\nwhere:\n• N_observed is the number of experimentally observed peptides with scores above a specified threshold.\n• N_observable is the calculated number of observable peptides for the protein given the search constraints.\n\n### Peptide Quantification\n\n1. For each peptide in a given database, the NSAF score is:\n$$NSAF_N=\\frac{S_N/L_N}{\\sum_{i=1}^nS_i/L_i}$$\nwhere:\n• N is the peptide index\n• S_N is the number of spectra matched to peptide N\n• L_N is the length of peptide N\n• n is the total number of peptides in the input database\n2. For each peptide in a given database, the SIN score is:\n$$SI_N=\\frac{(\\sum_{k=1}^{S_N}i_k)}{L_N(\\sum_{j=1}^nSI_j)}$$\nwhere:\n• N is the peptide index\n• S_N is the number of spectra assigned to peptide N\n• i_k is the total fragment ion intensity of spectrum k\n• L_N is the length of peptide N\n\n## Input:\n\n• input PSMs – A PSM file in either tab delimited text format (as produced by percolator), or pepXML format.\n\n## Output:\n\nThe program writes files to the folder crux-output by default. The name of the output folder can be set by the user using the --output-dir option. 
The following files will be created:\n\n• spectral-counts.target.txt – a tab-delimited text file containing the protein IDs and their corresponding scores, in sorted order.\n• spectral-counts.params.txt – a file containing the name and value of all parameters/options for the current operation. Not all parameters in the file may have been used in the operation. The resulting file can be used with the --parameter-file option for other Crux programs.\n• spectral-counts.log.txt – All messages written to standard error.\n\n## Options:\n\n• ### spectral-counts options\n\n• --parsimony none|simple|greedy – Perform a parsimony analysis on the proteins, and report a \"parsimony rank\" column in the output file. This column contains integers indicating the protein's rank in a list sorted by spectral counts. If the parsimony analysis results in two proteins being merged, then their parsimony rank is the same. In such a case, the rank is assigned based on the largest spectral count of any protein in the merged meta-protein. The \"simple\" parsimony algorithm only merges two proteins A and B if the peptides identified in protein A are the same as or a subset of the peptides identified in protein B. The \"greedy\" parsimony algorithm does additional merging, by identifying the longest protein (i.e., the protein with the most peptides) that contains one or more shared peptides. The shared peptides are assigned to the identified protein and removed from any other proteins that contain them, and the process is then repeated. Note that, with this option, some proteins end up being assigned no peptides at all; these orphan proteins are not reported in the output. Default = none.\n• --threshold <float> – Only consider PSMs with a threshold value. By default, q-values are thresholded using a specified threshold value. This behavior can be changed using the --custom-threshold and --threshold-min parameters. Default = 0.01.\n• --threshold-type none|qvalue|custom – Determines what type of threshold to use when filtering matches. none : read all matches, qvalue : use calculated q-value from percolator, custom : use --custom-threshold-name and --custom-threshold-min parameters. Default = qvalue.\n• --input-ms2 <string> – MS2 file corresponding to the psm file. Required to measure the SIN. Ignored for NSAF, dNSAF and EMPAI. Default = <empty>.\n• --unique-mapping T|F – Ignore peptides that map to multiple proteins. Default = false.\n• --quant-level protein|peptide – Quantification at protein or peptide level. Default = protein.\n• --measure RAW|NSAF|dNSAF|SIN|EMPAI – Type of analysis to make on the match results: (RAW|NSAF|dNSAF|SIN|EMPAI). With exception of the RAW metric, the database of sequences need to be provided using --protein-database. Default = NSAF.\n• --custom-threshold-name <string> – Specify which field to apply the threshold to. The direction of the threshold (<= or >=) is governed by --custom-threshold-min. By default, the threshold applies to the q-value, specified by \"percolator q-value\", \"decoy q-value (xcorr)\". Default = <empty>.\n• --custom-threshold-min T|F – When selecting matches with a custom threshold, custom-threshold-min determines whether to filter matches with custom-threshold-name values that are greater-than or equal (F) or less-than or equal (T) than the threshold. Default = true.\n• --mzid-use-pass-threshold T|F – Use mzid's passThreshold attribute to filter matches. Default = false.\n• --protein-database <string> – The name of the file in FASTA format. 
Default = <empty>.\n• ### Input and output\n\n• --verbosity <integer> – Specify the verbosity of the current processes. Each level prints the following messages, including all those at lower verbosity levels: 0-fatal errors, 10-non-fatal errors, 20-warnings, 30-information on the progress of execution, 40-more progress information, 50-debug info, 60-detailed debug info. Default = 30.\n• --parameter-file <string> – A file containing parameters. See the parameter documentation page for details. Default = <empty>.\n• --spectrum-parser pwiz|mstoolkit – Specify the parser to use for reading in MS/MS spectra. The default, ProteoWizard parser can read the MS/MS file formats listed here. The alternative is MSToolkit parser. If the ProteoWizard parser fails to read your files properly, you may want to try the MSToolkit parser instead. Default = pwiz.\n• --fileroot <string> – The fileroot string will be added as a prefix to all output file names. Default = <empty>.\n• --output-dir <string> – The name of the directory where output files will be created. Default = crux-output.\n• --overwrite T|F – Replace existing files if true or fail when trying to overwrite a file if false. Default = false." ]
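As a sanity check on the NSAF definition above, here is a small Python sketch that computes NSAF values from per-protein spectral counts and lengths. The counts, lengths, and protein names are invented illustration data, not crux output:

```python
# Toy NSAF computation following the formula above:
#   NSAF_N = (S_N / L_N) / sum_i (S_i / L_i)
# Counts and lengths are invented illustration data, not crux output.
spectral_counts = {"protA": 40, "protB": 10, "protC": 25}    # S_N
lengths = {"protA": 400, "protB": 250, "protC": 125}         # L_N

saf = {p: spectral_counts[p] / lengths[p] for p in spectral_counts}
total = sum(saf.values())
nsaf = {p: v / total for p, v in saf.items()}

print(nsaf)
assert abs(sum(nsaf.values()) - 1.0) < 1e-12   # NSAF sums to 1 by construction
```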
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.76553607,"math_prob":0.81404495,"size":7817,"snap":"2021-21-2021-25","text_gpt3_token_len":1975,"char_repetition_ratio":0.15295021,"word_repetition_ratio":0.14652957,"special_character_ratio":0.2362799,"punctuation_ratio":0.10846746,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9665535,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-06T08:43:55Z\",\"WARC-Record-ID\":\"<urn:uuid:e8edddd6-458e-495e-8379-fe7ea3d68600>\",\"Content-Length\":\"15536\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8eaf5992-986a-40e1-a307-968f0ba1fa7c>\",\"WARC-Concurrent-To\":\"<urn:uuid:0e789043-2a9a-4e02-8ee6-31ee1ee3e6e0>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"http://crux.ms/commands/spectral-counts.html\",\"WARC-Payload-Digest\":\"sha1:WNSTCJL4YVGX752VHTQOJ5QX6ZLOH6NU\",\"WARC-Block-Digest\":\"sha1:HKFBBETMQHOMEMJO5GYQHYJWPNQTF5E3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988753.91_warc_CC-MAIN-20210506083716-20210506113716-00245.warc.gz\"}"}
http://forums.wolfram.com/mathgroup/archive/2014/May/msg00013.html
[ "Re: How to avoid repeated recalculation of the same function\n\n• To: mathgroup at smc.vnet.net\n• Subject: [mg132677] Re: How to avoid repeated recalculation of the same function\n• From: Murray Eisenberg <murray at math.umass.edu>\n• Date: Sat, 3 May 2014 03:41:15 -0400 (EDT)\n• References: <[email protected]>\n\n```Did you try using Set ( = ) instead of SetDelayed ( := ) when defining f? That is, try:\n\nf[x_] = expression\n\nOn May 2, 2014, at 2:18 AM, pgeipi10 at gmail.com wrote:\n\n> Hi,\n>\n> I'm doing a calculation that's purely symbolic (no graphing, numerical integration, etc.).\n>\n> Suppose I have a function f[x_]:=... that's very complex to build. In fact, f[x] ends up being a manageable expression (about 30 characters) but it takes Mathematica about 30 min to build that expression.\n>\n> Another function g[] uses the function f[x] and references it many times. I've discovered that g[] actually builds f[x] every time it's referenced, which takes 30 minutes each time. Theoretically, Mathematica could build it once and then use the resulting expression every time it's referenced.\n>\n> So how do I accomplish that? That is, how do I make it build f[x] once and then use the resulting expression when it's needed?\n>\n> Thanks,\n>\n>\n> Pavel\n>\n\nMurray Eisenberg murray at math.umass.edu\nMathematics & Statistics Dept.\nLederle Graduate Research Tower phone 240 246-7240 (H)\nUniversity of Massachusetts\n710 North Pleasant Street\nAmherst, MA 01003-9305\n\n```" ]
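The Set-versus-SetDelayed suggestion amounts to "build the expression once, then reuse it". As a cross-language illustration only (Python, not Mathematica), memoization gives the same compute-once behavior that `f[x_] = expression` achieves at definition time; the sleep delay and function bodies below are invented stand-ins for the slow symbolic construction:

```python
# Cross-language illustration of the compute-once idea (Python, not
# Mathematica): caching plays the role of f[x_] = expression, which
# evaluates the right-hand side once instead of on every call.
from functools import lru_cache
import time

def slow_build(x):
    time.sleep(2)            # stand-in for the 30-minute symbolic construction
    return x ** 2 + 1        # stand-in for the resulting manageable expression

@lru_cache(maxsize=None)
def f(x):
    return slow_build(x)     # built once per distinct x, then reused

def g():
    return f(3) + f(3) + f(3)   # slow_build runs only once here

print(g())                      # ~2 s total, not ~6 s
```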
[ null, "http://forums.wolfram.com/mathgroup/images/head_mathgroup.gif", null, "http://forums.wolfram.com/mathgroup/images/head_archive.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/2.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/0.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/1.gif", null, "http://forums.wolfram.com/mathgroup/images/numbers/4.gif", null, "http://forums.wolfram.com/mathgroup/images/search_archive.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82049173,"math_prob":0.5346915,"size":1803,"snap":"2020-34-2020-40","text_gpt3_token_len":503,"char_repetition_ratio":0.106170096,"word_repetition_ratio":0.10909091,"special_character_ratio":0.28840822,"punctuation_ratio":0.1846591,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9691688,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-12T10:24:35Z\",\"WARC-Record-ID\":\"<urn:uuid:44799567-4f3f-4e20-b854-1782ea158af5>\",\"Content-Length\":\"45506\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ab5b53be-2b70-449f-bce9-dcd17e530fdc>\",\"WARC-Concurrent-To\":\"<urn:uuid:1a7058e9-dd96-4d0d-87bc-7f8bba9172f9>\",\"WARC-IP-Address\":\"140.177.205.73\",\"WARC-Target-URI\":\"http://forums.wolfram.com/mathgroup/archive/2014/May/msg00013.html\",\"WARC-Payload-Digest\":\"sha1:DCXHQ54PPDQPGMZL4MJY6KUNG5BYAXJS\",\"WARC-Block-Digest\":\"sha1:4DYKGUTNRJNJCORZO7UXFCDFD266DFPJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738888.13_warc_CC-MAIN-20200812083025-20200812113025-00208.warc.gz\"}"}
http://old.fieldtriptoolbox.org/reference/ft_datatype_segmentation
[ "Note that this reference documentation is identical to the help that is displayed in MATLAB when you type “help ft_datatype_segmentation”.\n\n``` FT_DATATYPE_SEGMENTATION describes the FieldTrip MATLAB structure for segmented\nvoxel-based data and atlases. A segmentation can either be indexed or probabilistic\n(see below).\n\nA segmentation is a volumetric description which is usually derived from an anatomical\nMRI, which describes for each voxel the tissue type. It for example distinguishes\nbetween white matter, grey matter, csf, skull and skin. It is mainly used for masking\nin visualization, construction of volume conduction models and for construction of\ncortical sheets. A volume-based atlas is basically a very detailed segmentation with\nan anatomical label for each voxel.\n\nFor example, the AFNI TTatlas+tlrc segmented brain atlas (which can be created\nwith FT_READ_ATLAS) looks like this\n\ndim: [161 191 141] the size of the 3D volume in voxels\ntransform: [4x4 double] affine transformation matrix for mapping the voxel coordinates to head coordinate system\ncoordsys: 'tal' the transformation matrix maps the voxels into this (head) coordinate system\nunit: 'mm' the units in which the coordinate system is expressed\nbrick0: [161x191x141 uint8] integer values from 1 to N, the value 0 means unknown\nbrick1: [161x191x141 uint8] integer values from 1 to M, the value 0 means unknown\nbrick0label: {Nx1 cell}\nbrick1label: {Mx1 cell}\n\nAn example segmentation with binary values that can be used for construction of a\nBEM volume conduction model of the head looks like this\n\ndim: [256 256 256] the dimensionality of the 3D volume\ntransform: [4x4 double] affine transformation matrix for mapping the voxel coordinates to head coordinate system\ncoordsys: 'ctf' the transformation matrix maps the voxels into this (head) coordinate system\nunit: 'mm' the units in which the coordinate system is expressed\nbrain: [256x256x256 logical] binary map representing the voxels which belong to the brain\nscalp: [256x256x256 logical] binary map representing the voxels which belong to the scalp\nskull: [256x256x256 logical] binary map representing the voxels which belong to the skull\n\nAn example of a whole-brain anatomical MRI that was segmented using FT_VOLUMESEGMENT\nlooks like this\n\ndim: [256 256 256] the size of the 3D volume in voxels\ntransform: [4x4 double] affine transformation matrix for mapping the voxel coordinates to head coordinate system\ncoordsys: 'ctf' the transformation matrix maps the voxels into this (head) coordinate system\nunit: 'mm' the units in which the coordinate system is expressed\ngray: [256x256x256 double] probabilistic map of the gray matter\nwhite: [256x256x256 double] probabilistic map of the white matter\ncsf: [256x256x256 double] probabilistic map of the cerebrospinal fluid\n\nThe examples above demonstrate that a segmentation can be indexed, i.e. consisting of\nsubsequent integer numbers (1, 2, ...) or probabilistic, consisting of real numbers\nranging from 0 to 1 that represent probabilities between 0% and 100%. An extreme case\nis one where the probability is either 0 or 1, in which case the probability can be\nrepresented as a binary or logical array.\n\nThe only difference to the volume data representation is that the segmentation\nstructure contains the additional fields xxx and xxxlabel. See FT_DATATYPE_VOLUME for\nfurther details.\n\nRequired fields:\n- dim, transform\n\nOptional fields:\n- coordsys, unit\n\nDeprecated fields:\n- none\n\nObsoleted fields:\n- none\n\nRevision history:\n(2012/latest) The explicit distinction between the indexed and probabilistic" ]
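To make the indexed-versus-probabilistic distinction concrete outside MATLAB, here is a small illustrative NumPy sketch mirroring the convention described above. This is not FieldTrip code; the tiny 2x2x1 volume and the tissue names are invented for illustration:

```python
# Illustrative contrast between an indexed and a probabilistic segmentation,
# mirroring the FieldTrip convention above. Not FieldTrip code; the tiny
# 2x2x1 volume and tissue names are invented.
import numpy as np

# Indexed (atlas-style): integer tissue labels per voxel, 0 = unknown
seg_indexed = np.array([[[1], [2]], [[2], [0]]], dtype=np.uint8)
tissuelabel = ["gray", "white"]              # value i+1 names tissue i

# Probabilistic: one map of values in [0, 1] per tissue type
gray = np.array([[[0.9], [0.2]], [[0.1], [0.0]]])
white = np.array([[[0.1], [0.7]], [[0.8], [0.0]]])

# A binary (logical) map is the extreme case of a probabilistic one,
# as in the BEM example above
brain = (gray + white) > 0.5
print(seg_indexed.shape, brain.dtype)        # (2, 2, 1) bool
```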
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8124281,"math_prob":0.9287993,"size":3808,"snap":"2020-34-2020-40","text_gpt3_token_len":842,"char_repetition_ratio":0.13590957,"word_repetition_ratio":0.28494623,"special_character_ratio":0.22373949,"punctuation_ratio":0.10047847,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9666953,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-06T01:02:08Z\",\"WARC-Record-ID\":\"<urn:uuid:f8d47dfa-3f6f-4e30-80c4-8c2338667783>\",\"Content-Length\":\"24387\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e09d66d6-d353-4842-8e4f-807c5e6ec53f>\",\"WARC-Concurrent-To\":\"<urn:uuid:1a487621-3749-44f8-acaf-ef1af6add18f>\",\"WARC-IP-Address\":\"131.174.44.34\",\"WARC-Target-URI\":\"http://old.fieldtriptoolbox.org/reference/ft_datatype_segmentation\",\"WARC-Payload-Digest\":\"sha1:ZNITF4AILRCDTHVLZPSU7XP2VPK2W7SP\",\"WARC-Block-Digest\":\"sha1:KZWPVB4IEWLM7GC4ACUBQFHS3DQDJZWE\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735990.92_warc_CC-MAIN-20200806001745-20200806031745-00229.warc.gz\"}"}
http://babyhunters.com/scatterplot-worksheet/
[ "", null, "Worksheets\n\n# Scatterplot Worksheet\n\nSp 1 creating scatter plots mathops plots. Mr matts math classes assignment scatter plot worksheet worksheet. Sp 2 scatter plots and correlation mathops plots. 3 2 relationships and lines of best fit scatter plots trends mfm1p foundations mathematics grade 9 applied math resourc. Worksheets on scatter plots livinghealthybulletin worksheet for every plot th grade.", null, "## Sp 1 creating scatter plots mathops plots", null, "## Mr matts math classes assignment scatter plot worksheet worksheet", null, "## Sp 2 scatter plots and correlation mathops plots", null, "## 3 2 relationships and lines of best fit scatter plots trends mfm1p foundations mathematics grade 9 applied math resourc", null, "## Worksheets on scatter plots livinghealthybulletin worksheet for every plot th grade", null, "## Scatter plot correlation worksheet worksheets for all download and worksheet", null, "## Scatter plot worksheet jmap worksheets by topic graphs and statistics plots", null, "## Quiz worksheet scatter plots word problems study com print how to use solve worksheet", null, "## Llr a", null, "## Scatter plot worksheet math ideas its a thing pinterest maths teaching and worksheets", null, "## Scatter plot wikipedia", null, "## Scatter plots and correlation worksheet worksheets for all worksheet", null, "## Quiz worksheet interpreting scatterplots study com print scatterplot and correlation definition example analysis worksheet", null, "## Worksheet scatterplot fun study site scatter plots aba pinterest plot maths algebra and algebra", null, "## Excel 2013 manually adding multiple data sets to scatter plot youtube", null, "## Scatter plots 2", null, "Related Posts\n\n### Cube Root Worksheet", null, "" ]
[ null, "https://i.pinimg.com/originals/ae/4f/96/ae4f96e076d3e45f7071792bd6c95fe0.png", null, "http://www.mathops.com/free/standards/images/sp01-a1db007ws-a.jpg", null, "https://1.bp.blogspot.com/-o5yMDJJG7dA/U4UqZj04RDI/AAAAAAAADII/CxOVx6UzTPQ/s1600/1HW.bmp", null, "http://www.mathops.com/free/standards/images/sp02-a1db001ws.jpg", null, "https://i.pinimg.com/originals/ae/4f/96/ae4f96e076d3e45f7071792bd6c95fe0.png", null, "http://cdn.bbcpc.org/worksheet/scatter-plot-worksheet-middle-school-math/worksheet-scatter-plots-for-every-on-scatter-plot-worksheet-th-grade-the-best-worksheets-i.jpg", null, "https://bonlacfoods.com/images/scatter-plot-correlation-worksheet/scatter-plot-correlation-worksheet-7.jpg", null, "https://i.pinimg.com/originals/64/44/c7/6444c7f0cbf2d2d6fa48fe6ffa1e225b.png", null, "https://study.com/academy/practice/quiz-worksheet-scatter-plots-word-problems.jpg", null, "x-raw-image:/aab42b923e52342c1ea54e04ced3df2112049e2261af1d801070280a31f3f892", null, "https://i.pinimg.com/originals/4f/c3/fd/4fc3fd2412c8e415b35e89257cb07b9a.jpg", null, "https://upload.wikimedia.org/wikipedia/commons/thumb/a/af/Scatter_diagram_for_quality_characteristic_XXX.svg/1200px-Scatter_diagram_for_quality_characteristic_XXX.svg.png", null, "https://bonlacfoods.com/images/scatter-plots-and-correlation-worksheet/scatter-plots-and-correlation-worksheet-2.jpg", null, "https://study.com/academy/practice/quiz-worksheet-interpreting-scatterplots.jpg", null, "https://s-media-cache-ak0.pinimg.com/originals/ac/51/76/ac5176cb2e445edeb770840b7ddb5a07.jpg", null, "https://i.ytimg.com/vi/gL59N66AHUQ/maxresdefault.jpg", null, "x-raw-image:/b0d50bceaada5e4a0e067269d9a01f6a8733f86fb756b682dfd7332cb9dd944a", null, "https://www.math-drills.com/numbersense/images/cube_roots_001_pin2.jpg", null, "https://location-voiture-crete-aeroport.com/wp-content/uploads/2018/10/properties-of-logarithms-worksheet-new-properties-logarithms-worksheet-answers-livinghealthybulletin-of-properties-of-logarithms-", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7346141,"math_prob":0.50583696,"size":1580,"snap":"2019-35-2019-39","text_gpt3_token_len":308,"char_repetition_ratio":0.29060915,"word_repetition_ratio":0.26548672,"special_character_ratio":0.16202532,"punctuation_ratio":0.021551725,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99706864,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,null,null,null,null,2,null,null,null,null,null,3,null,2,null,3,null,6,null,null,null,5,null,4,null,2,null,2,null,2,null,3,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-22T01:28:28Z\",\"WARC-Record-ID\":\"<urn:uuid:f95120e4-a5ad-4d17-bf11-48ead6611823>\",\"Content-Length\":\"20244\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f882929a-833e-41f9-b854-68190ee60131>\",\"WARC-Concurrent-To\":\"<urn:uuid:39a305f6-8d9e-4fec-a059-ec43e828c0ad>\",\"WARC-IP-Address\":\"104.24.101.81\",\"WARC-Target-URI\":\"http://babyhunters.com/scatterplot-worksheet/\",\"WARC-Payload-Digest\":\"sha1:RVO3WQT7TK673EJ2XRBJIHVCH42QPYEB\",\"WARC-Block-Digest\":\"sha1:QSMXLSDWB22YKYMAWHZ3VIM7HHTH5VB2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027316555.4_warc_CC-MAIN-20190822000659-20190822022659-00255.warc.gz\"}"}
https://www.gerardoschiavone.com/tool/moire/
[ "", null, "MOIRÉ", null, "The Moiré effect is achieved from the interaction of two graphic grid expressions: a CRT monitor grid expression multiplied by a moiré line grid. It is meant to give good control and easy animation based on camera movements.", null, "You can disable the moiré grid if you need the CRT pattern only.\n\nHere are the expressions:\n\nred channel:\n\nfmod((x/2),2)==0?fmod(y,2)==0?1:0:0\n\ngreen channel:\n\nfmod((x/2),2)==0.5?fmod(y,2)==0?1:0:0\n\nblue channel:\n\nfmod((x/2),2)==1?fmod(y,2)==0?1:0:0", null, "", null, "Moiré lines expression:\n\ncos((x+(vx*100))*((y+(vy*100))/scale/80))\n\nKNOBS\n\nCRT pattern – Customize the CRT pattern\n\nMoiré – Animate or link to a camera rotation/translation" ]
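Outside the tool, the quoted expressions can be rasterized directly. Below is a small NumPy sketch that evaluates the three CRT channel expressions and the moiré line expression over a pixel grid; the grid size and the vx/vy/scale values stand in for the camera-linked knobs and are illustrative assumptions:

```python
# Rasterize the CRT-pattern and moiré expressions quoted above with NumPy.
# The grid size and the vx/vy/scale values are illustrative stand-ins for
# the camera-linked knobs.
import numpy as np

h, w = 256, 256
y, x = np.mgrid[0:h, 0:w].astype(float)

rgb = np.zeros((h, w, 3))
rgb[..., 0] = (np.fmod(x / 2, 2) == 0.0) & (np.fmod(y, 2) == 0)   # red
rgb[..., 1] = (np.fmod(x / 2, 2) == 0.5) & (np.fmod(y, 2) == 0)   # green
rgb[..., 2] = (np.fmod(x / 2, 2) == 1.0) & (np.fmod(y, 2) == 0)   # blue

vx, vy, scale = 0.3, 0.1, 1.0
moire = np.cos((x + vx * 100) * ((y + vy * 100) / scale / 80))

out = rgb * ((moire[..., None] + 1) / 2)    # modulate pattern by moiré lines
print(out.shape, out.min(), out.max())
```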
[ null, "https://www.gerardoschiavone.com/wp-content/uploads/2019/08/GS_Icon_Moiré-e1566462637776.png", null, "https://www.gerardoschiavone.com/wp-content/uploads/2019/08/GIF3.gif", null, "https://www.gerardoschiavone.com/wp-content/uploads/2019/08/CRT_Pattern-300x300.jpg", null, "https://www.gerardoschiavone.com/wp-content/uploads/2019/08/Img.jpg", null, "https://www.gerardoschiavone.com/wp-content/uploads/2019/08/Img-300x225.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5257541,"math_prob":0.9929278,"size":682,"snap":"2023-40-2023-50","text_gpt3_token_len":216,"char_repetition_ratio":0.13126844,"word_repetition_ratio":0.0,"special_character_ratio":0.30058652,"punctuation_ratio":0.17948718,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9804352,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-05T13:00:18Z\",\"WARC-Record-ID\":\"<urn:uuid:963ea8d3-e4d4-4d6a-8bd2-6c7871637088>\",\"Content-Length\":\"32367\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:97fb07fc-9980-46e8-8efc-944aa850d612>\",\"WARC-Concurrent-To\":\"<urn:uuid:7979aae6-db8e-4dc0-a2d6-baa0b8fe7608>\",\"WARC-IP-Address\":\"89.46.108.69\",\"WARC-Target-URI\":\"https://www.gerardoschiavone.com/tool/moire/\",\"WARC-Payload-Digest\":\"sha1:XOOEJGTM2ADMCVD6U6XJAFV6YXUZKNM4\",\"WARC-Block-Digest\":\"sha1:ZFGYWSB5CLUAZURPOT3WFVFW57S57J4O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100551.17_warc_CC-MAIN-20231205105136-20231205135136-00359.warc.gz\"}"}
http://perplexus.info/show.php?pid=12249&cid=63241
[ "perplexus dot info", null, "Max angle in triangle (Posted on 2021-03-26)", null, "D is midpoint of AC for triangle ABC. Bisectors of ∠ACB and ∠ABD are perpendicular. Find the max value for ∠BAC.\n\nNo Solution Yet. Submitted by Danish Ahmed Khan. No Rating.", null, "Long solution", null, "| Comment 1 of 2\nI assume there's an easier way, but here's my approach:\n\nUsing coordinates, B=(0,0), D=(1,a), A=(2,2a) which are on a line with slope a, so C is on a line with slope -a. C=(c,-ac) for some c.\n\nThe bisector of B is just the x-axis, so the perpendicular is a vertical line through C. This means the two sides of ACD have opposite slopes.\n\nSlope CD = (a+ac)/(1-c)\nSlope AC = (2a+ac)/(2-c)\nSetting the sum of these slopes to 0 and solving for c in terms of a just gives c = sqrt(2).\nC = (sqrt(2), -a*sqrt(2))\n\nNow to find BAC = arctan(slope CA) - arctan(slope AB)\nSlope CA simplifies to a(3+2sqrt(2))\nSlope AB = a\nUsing the arctan difference formula gives\nBAC = arctan[2a(1+sqrt(2))/(1+a^2(3+2sqrt(2)))]\n\nSince arctangent is an increasing function, we just need to maximize the argument.\nUsing calculus to maximize\nf(a) = 2a(1+sqrt(2))/(1+a^2(3+2sqrt(2)))\nits derivative is too messy to type out here, but setting it equal to zero and solving gives\na = sqrt(2)-1\nand\nf(a) = 1\n\nFinally arctan(1) = 45 degrees.\n\nRemarks: At this maximum, the arctangents of CA and AB are 67.5 and 22.5 respectively, and ACB is an isosceles right triangle.\n\nPosted by Jer on 2021-03-26 11:09:27" ]
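The calculus step above is easy to check mechanically. A quick SymPy sketch (with the factor of 2 from the arctangent argument included in f) confirms the critical point a = sqrt(2) - 1 and the 45-degree maximum:

```python
# Mechanically verify the maximization step above:
# maximize f(a) = 2a(1+sqrt(2)) / (1 + a^2(3+2sqrt(2))) over a > 0.
import sympy as sp

a = sp.symbols("a", positive=True)
f = 2 * a * (1 + sp.sqrt(2)) / (1 + a**2 * (3 + 2 * sp.sqrt(2)))

crit = sp.solve(sp.diff(f, a), a)[0]           # the positive critical point
print(sp.N(crit))                              # 0.4142... = sqrt(2) - 1
print(sp.N(f.subs(a, crit)))                   # 1.0000..., so tan(BAC) = 1
print(sp.N(sp.deg(sp.atan(f.subs(a, crit)))))  # 45.0 degrees
```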
[ null, "http://perplexus.info/images/flooble.gif", null, "http://perplexus.info/images/dot.gif", null, "http://perplexus.info/images/dot.gif", null, "http://perplexus.info/images/dot.gif", null, "http://perplexus.info/images/dot.gif", null, "http://perplexus.info/images/perplexus/diff/3.gif", null, "http://perplexus.info/images/dot_black.gif", null, "http://perplexus.info/images/perplexus/icons/solution.gif", null, "http://perplexus.info/images/perplexus/icons/up.gif", null, "http://perplexus.info/images/dot.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83608,"math_prob":0.98805666,"size":1262,"snap":"2023-40-2023-50","text_gpt3_token_len":428,"char_repetition_ratio":0.11605723,"word_repetition_ratio":0.009615385,"special_character_ratio":0.32012677,"punctuation_ratio":0.10508475,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995321,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-08T16:16:57Z\",\"WARC-Record-ID\":\"<urn:uuid:d512149f-31d0-42c5-9e7a-b1022fbcc065>\",\"Content-Length\":\"13481\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:810b5bd1-6ec8-47e0-b2ef-4ea6c01c855c>\",\"WARC-Concurrent-To\":\"<urn:uuid:d8326854-707e-4235-936e-ccd165b518ef>\",\"WARC-IP-Address\":\"65.98.103.122\",\"WARC-Target-URI\":\"http://perplexus.info/show.php?pid=12249&cid=63241\",\"WARC-Payload-Digest\":\"sha1:VYQIBG6QDAEZFOQSZQUWXZXUQZRBVUCV\",\"WARC-Block-Digest\":\"sha1:IYUOL2QGE5XPB2MK2EHRB6QL4FBTO3PU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100762.64_warc_CC-MAIN-20231208144732-20231208174732-00801.warc.gz\"}"}
https://ccrma.stanford.edu/~jos/st/Power_Spectral_Density_Estimation.html
[ "## Power Spectral Density Estimation\n\nWelch's method (or the periodogram method) for estimating power spectral densities (PSD) is carried out by dividing the time signal into successive blocks, and averaging squared-magnitude DFTs of the signal blocks. Let $x_m(n)$, $m=0,1,\\ldots,K-1$, denote the $m$th block of the signal $x$, with $K$ denoting the number of blocks. Then the Welch PSD estimate is given by\n\n$$\\hat{S}^W_x(\\omega_k) = \\frac{1}{K}\\sum_{m=0}^{K-1}\\left|\\mathrm{DFT}_k(x_m)\\right|^2 \\triangleq \\left\\langle\\left|X_m(\\omega_k)\\right|^2\\right\\rangle_m \\qquad (8.3)$$\n\nwhere $\\langle\\cdot\\rangle_m$ denotes time averaging across blocks (or \"frames\") of data indexed by $m$. The function pwelch implements Welch's method in Octave (Octave-Forge collection) and Matlab (Signal Processing Toolbox).\n\nRecall that $\\left|X_m(\\omega_k)\\right|^2 = \\mathrm{DFT}_k(x_m\\star x_m)$, where $x_m\\star x_m$ is circular (cyclic) autocorrelation. To obtain an acyclic autocorrelation instead, we may use zero padding in the time domain, as described in §8.4.2. That is, we can replace $x_m$ above by its zero-padded version $\\mathrm{ZeroPad}(x_m)$. Although this fixes the \"wrap-around problem\", the estimator is still biased because its expected value is the true autocorrelation weighted by the triangular factor $N-|l|$ at lag $l$. This bias is equivalent to multiplying the correlation in the \"lag domain\" by a triangular window (also called a \"Bartlett window\"). The bias can be removed by simply dividing it out, as in Eq.(8.2), but it is common to retain the Bartlett weighting since it merely corresponds to smoothing the power spectrum (or cross-spectrum) with a sinc² kernel; it also down-weights the less reliable large-lag estimates, weighting each lag by the number of lagged products that were summed.\n\nSince $x_m\\star x_m \\leftrightarrow |X_m|^2$, and since the DFT is a linear operator (§7.4.1), averaging magnitude-squared DFTs $|X_m(\\omega_k)|^2$ is equivalent, in principle, to estimating block autocorrelations $x_m\\star x_m$, averaging them, and taking a DFT of the average. However, this would normally be slower.\n\nWe return to power spectral density estimation in Book IV of the music signal processing series." ]
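Eq.(8.3) is straightforward to implement directly. Below is a minimal NumPy sketch using non-overlapping, rectangular-windowed blocks (pwelch's defaults add overlap and a tapered window, so this illustrates the averaging idea rather than replacing pwelch); the block length, normalization convention, and test signal are arbitrary choices:

```python
# Minimal NumPy sketch of the block-averaged periodogram in Eq.(8.3),
# using non-overlapping rectangular-windowed blocks for simplicity
# (pwelch's defaults differ; normalization conventions also vary).
import numpy as np

def welch_psd(x, N):
    K = len(x) // N                               # number of whole blocks
    blocks = np.reshape(x[:K * N], (K, N))        # x_m, m = 0 .. K-1
    X = np.fft.fft(blocks, axis=1)                # DFT of each block
    return np.mean(np.abs(X) ** 2, axis=0) / N    # averaged periodogram

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)        # white noise: flat true PSD
print(welch_psd(x, 256)[:4])         # roughly constant across bins
```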
[ null, "https://ccrma.stanford.edu/~jos/st/img1586.png", null, "https://ccrma.stanford.edu/~jos/st/img1068.png", null, "https://ccrma.stanford.edu/~jos/st/img1024.png", null, "https://ccrma.stanford.edu/~jos/st/img1587.png", null, "https://ccrma.stanford.edu/~jos/st/img240.png", null, "https://ccrma.stanford.edu/~jos/st/img1588.png", null, "https://ccrma.stanford.edu/~jos/st/img1589.png", null, "https://ccrma.stanford.edu/~jos/st/img1024.png", null, "https://ccrma.stanford.edu/~jos/st/img1590.png", null, "https://ccrma.stanford.edu/~jos/st/img1591.png", null, "https://ccrma.stanford.edu/~jos/st/img1592.png", null, "https://ccrma.stanford.edu/~jos/st/img1594.png", null, "https://ccrma.stanford.edu/~jos/st/img1595.png", null, "https://ccrma.stanford.edu/~jos/st/img1596.png", null, "https://ccrma.stanford.edu/~jos/st/img1599.png", null, "https://ccrma.stanford.edu/~jos/st/img1600.png", null, "https://ccrma.stanford.edu/~jos/st/img1601.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92485166,"math_prob":0.92414814,"size":1536,"snap":"2020-24-2020-29","text_gpt3_token_len":348,"char_repetition_ratio":0.11422977,"word_repetition_ratio":0.0,"special_character_ratio":0.2233073,"punctuation_ratio":0.13013698,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9896503,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34],"im_url_duplicate_count":[null,3,null,5,null,7,null,3,null,null,null,3,null,1,null,7,null,1,null,2,null,3,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-14T17:01:53Z\",\"WARC-Record-ID\":\"<urn:uuid:2c5920ba-73db-4adc-8f44-c5becf6822b4>\",\"Content-Length\":\"15813\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:82ec237a-b176-4d3c-b005-5624bb4eb0ea>\",\"WARC-Concurrent-To\":\"<urn:uuid:4672cb09-f6ae-49f5-9b10-99170da40a45>\",\"WARC-IP-Address\":\"171.64.197.141\",\"WARC-Target-URI\":\"https://ccrma.stanford.edu/~jos/st/Power_Spectral_Density_Estimation.html\",\"WARC-Payload-Digest\":\"sha1:TRSGHAOP6KBFDHHG4XVFPCRI3D56KIEP\",\"WARC-Block-Digest\":\"sha1:LEK3AHTC4KTK5ANE2IHM7Y55XNUZFWVK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655897168.4_warc_CC-MAIN-20200714145953-20200714175953-00595.warc.gz\"}"}
https://number.academy/101717
[ "# Number 101717\n\nNumber 101,717 spelled out 🔊, written in words: one hundred and one thousand, seven hundred and seventeen. Ordinal number 101717th is said 🔊 and written: one hundred and one thousand, seven hundred and seventeenth. Color #101717. The meaning of the number 101717 in maths: Is it prime? Factorization and prime factors tree. The square root and cube root of 101717. What is 101717 in computer science, numerology, codes and images, writing and naming in other languages. Other interesting facts related to 101717.\n\n## What is 101,717 in other units\n\nThe decimal (Arabic) number 101717 converted to a Roman number is (C)MDCCXVII. Roman and decimal number conversions.\n\n#### Weight conversion\n\n101717 kilograms (kg) = 224245.3 pounds (lbs)\n101717 pounds (lbs) = 46138.5 kilograms (kg)\n\n#### Length conversion\n\n101717 kilometers (km) equals 63204 miles (mi).\n101717 miles (mi) equals 163698 kilometers (km).\n101717 meters (m) equals 333714 feet (ft).\n101717 feet (ft) equals 31004 meters (m).\n101717 centimeters (cm) equals 40046.1 inches (in).\n101717 inches (in) equals 258361.2 centimeters (cm).\n\n#### Temperature conversion\n\n101717° Fahrenheit (°F) equals 56491.7° Celsius (°C)\n101717° Celsius (°C) equals 183122.6° Fahrenheit (°F)\n\n#### Time conversion\n\n(hours, minutes, seconds, days, weeks)\n101717 seconds equals 1 day, 4 hours, 15 minutes, 17 seconds\n101717 minutes equals 2 months, 2 weeks, 15 hours, 17 minutes\n\n### Codes and images of the number 101717\n\nNumber 101717 morse code: .---- ----- .---- --... .---- --...\nSign language for number 101717:", null, "", null, "", null, "", null, "", null, "", null, "Number 101717 in braille:", null, "QR code. Bar code, type 39.", null, "## Mathematics of no. 101717\n\n### Multiplications\n\n#### Multiplication table of 101717\n\n101717 multiplied by two equals 203434 (101717 x 2 = 203434).\n101717 multiplied by three equals 305151 (101717 x 3 = 305151).\n101717 multiplied by four equals 406868 (101717 x 4 = 406868).\n101717 multiplied by five equals 508585 (101717 x 5 = 508585).\n101717 multiplied by six equals 610302 (101717 x 6 = 610302).\n101717 multiplied by seven equals 712019 (101717 x 7 = 712019).\n101717 multiplied by eight equals 813736 (101717 x 8 = 813736).\n101717 multiplied by nine equals 915453 (101717 x 9 = 915453).\n\n### Fractions: decimal fraction and common fraction\n\n#### Fraction table of 101717\n\nHalf of 101717 is 50858.5 (101717 / 2 = 50858.5 = 50858 1/2).\nOne third of 101717 is 33905.6667 (101717 / 3 = 33905.6667 = 33905 2/3).\nOne quarter of 101717 is 25429.25 (101717 / 4 = 25429.25 = 25429 1/4).\nOne fifth of 101717 is 20343.4 (101717 / 5 = 20343.4 = 20343 2/5).\nOne sixth of 101717 is 16952.8333 (101717 / 6 = 16952.8333 = 16952 5/6).\nOne seventh of 101717 is 14531 (101717 / 7 = 14531).\nOne eighth of 101717 is 12714.625 (101717 / 8 = 12714.625 = 12714 5/8).\nOne ninth of 101717 is 11301.8889 (101717 / 9 = 11301.8889 = 11301 8/9).\n\n### Advanced math operations\n\n#### Is Prime?\n\nThe number 101717 is not a prime number. 
The closest prime numbers are 101701, 101719.\n\n#### Factorization and factors (divisors)\n\nThe prime factors of 101717 are 7 * 11 * 1321\nThe factors of 101717 are 1, 7, 11, 77, 1321, 9247, 14531, 101717\nTotal factors 8.\nSum of factors 126912 (25195 excluding the number itself).\n\n#### Powers\n\nThe second power, 101717², is 10,346,348,089.\nThe third power, 101717³, is 1,052,399,488,568,813.\n\n#### Roots\n\nThe square root √101717 is 318.931027.\nThe cube root ³√101717 is 46.680036.\n\n#### Logarithms\n\nThe natural logarithm ln 101717 = loge 101717 = 11.52995.\nThe logarithm to base 10, log10 101717 = 5.007394.\nThe Napierian logarithm log1/e 101717 = -11.52995.\n\n### Trigonometric functions\n\nThe cosine of 101717 is 0.08376.\nThe sine of 101717 is -0.996486.\nThe tangent of 101717 is -11.896898.\n\n### Properties of the number 101717\n\nIs a Friedman number: No\nIs a Fibonacci number: No\nIs a Bell number: No\nIs a palindromic number: No\nIs a pentagonal number: No\nIs a perfect number: No\n\n## Number 101717 in Computer Science\n\nCode type: Code value\nPIN 101717: It's recommendable to use 101717 as a password or PIN.\nNumber of bytes: 99.3KB\nCSS Color #101717: hexadecimal to red, green and blue (RGB) (16, 23, 23)\nUnix time: Unix time 101717 is equal to Friday Jan. 2, 1970, 4:15:17 a.m. GMT\nIPv4, IPv6: Number 101717 internet address in dotted format v4 0.1.141.85, v6 ::1:8d55\n101717 Decimal = 11000110101010101 Binary\n101717 Decimal = 12011112022 Ternary\n101717 Decimal = 306525 Octal\n101717 Decimal = 18D55 Hexadecimal (0x18d55 hex)\n101717 BASE64 MTAxNzE3\n101717 MD5 06cd416767b39a6af50a668305668d5c\n101717 SHA1 f11dcc821e04204f19f2e6206d4746b3bab9f03b\n101717 SHA256 47161cc243b404725fb0350759854c062c066bbc46d4795f2bb45ca2c0ebe72d\n101717 SHA384 ac4e67201cb7830667527c825d13ca84899b8384bddbea48abcb1b421f45b36ae2e49ac968f76baef19849b38ab6a564\nMore SHA codes related to the number 101717 ...\n\n## Numerology 101717\n\n### Character frequency in number 101717\n\nCharacter frequency for numerology: the digit 1 appears 3 times, the digit 0 appears 1 time, and the digit 7 appears 2 times.\n\n### Classical numerology\n\nAccording to classical numerology, to know what each number means, you have to reduce it to a single figure. With the number 101717, the digits are added: 1+0+1+7+1+7 = 17, and 1+7 = 8, so the meaning of the number 8 is sought.\n\n## Interesting facts about the number 101717\n\n### Asteroids\n\n• (101717) 1999 DR7 is asteroid number 101717. It was discovered by LONEOS from Anderson Mesa on 2/18/1999.\n\n## № 101,717 in other languages\n\nHow to say or write the number one hundred and one thousand, seven hundred and seventeen in Spanish, German, French and other languages. 
Note that the thousands separator varies by language.\n\nSpanish: 🔊 (número 101.717) ciento uno mil setecientos diecisiete\nGerman: 🔊 (Anzahl 101.717) einhunderteinstausendsiebenhundertsiebzehn\nFrench: 🔊 (nombre 101 717) cent un mille sept cent dix-sept\nPortuguese: 🔊 (número 101 717) cento e um mil, setecentos e dezessete\nChinese: 🔊 (数 101 717) 十万一千七百一十七\nArabian: 🔊 (عدد 101,717) مائة و واحد ألف و سبعمائة و سبعة عشر\nCzech: 🔊 (číslo 101 717) sto jedna tisíc sedmset sedmnáct\nKorean: 🔊 (번호 101,717) 십만 천칠백십칠\nDanish: 🔊 (nummer 101 717) ethundrede og ettusindsyvhundrede og sytten\nDutch: 🔊 (nummer 101 717) honderdéénduizendzevenhonderdzeventien\nJapanese: 🔊 (数 101,717) 十万千七百十七\nIndonesian: 🔊 (jumlah 101.717) seratus satu ribu tujuh ratus tujuh belas\nItalian: 🔊 (numero 101 717) centounomilasettecentodiciassette\nNorwegian: 🔊 (nummer 101 717) en hundre og en tusen, syv hundre og sytten\nPolish: 🔊 (liczba 101 717) sto jeden tysięcy siedemset siedemnaście\nRussian: 🔊 (номер 101 717) сто одна тысяча семьсот семнадцать\nTurkish: 🔊 (numara 101,717) yüzbinyediyüzonyedi\nThai: 🔊 (จำนวน 101 717) หนึ่งแสนหนึ่งพันเจ็ดร้อยสิบเจ็ด\nUkrainian: 🔊 (номер 101 717) сто одна тисяча сiмсот сiмнадцять\nVietnamese: 🔊 (con số 101.717) một trăm lẻ một nghìn bảy trăm mười bảy" ]
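The factorization and divisor figures quoted above are easy to recompute. A short Python check using trial division (perfectly adequate at this size):

```python
# Recompute the factorization and divisor sums quoted above for 101717.
n = 101717

factors, d, m = [], 2, n
while d * d <= m:
    while m % d == 0:
        factors.append(d)
        m //= d
    d += 1
if m > 1:
    factors.append(m)
print(factors)        # [7, 11, 1321]

divisors = [k for k in range(1, n + 1) if n % k == 0]
print(divisors)       # [1, 7, 11, 77, 1321, 9247, 14531, 101717]
print(sum(divisors), sum(divisors) - n)   # 126912 25195
```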
[ null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-1.png", null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-0.png", null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-1.png", null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-7.png", null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-1.png", null, "https://numero.wiki/s/senas/lenguaje-de-senas-numero-7.png", null, "https://number.academy/img/braille-101717.svg", null, "https://numero.wiki/img/codigo-qr-101717.png", null, "https://numero.wiki/img/codigo-barra-101717.png", null, "https://numero.wiki/img/a-101717.jpg", null, "https://numero.wiki/img/b-101717.jpg", null, "https://numero.wiki/s/share-desktop.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5327395,"math_prob":0.80916774,"size":7117,"snap":"2022-40-2023-06","text_gpt3_token_len":2581,"char_repetition_ratio":0.16041051,"word_repetition_ratio":0.014427412,"special_character_ratio":0.43473375,"punctuation_ratio":0.16133942,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9922716,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,2,null,2,null,2,null,2,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-06T00:36:44Z\",\"WARC-Record-ID\":\"<urn:uuid:8833b86d-bd8a-4fb3-8711-69a170330505>\",\"Content-Length\":\"41041\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0da29ae3-0d8e-41f3-8a2c-8809ac922b9c>\",\"WARC-Concurrent-To\":\"<urn:uuid:352bfc86-0e91-4af6-9d47-07ddadd1ef8e>\",\"WARC-IP-Address\":\"162.0.227.212\",\"WARC-Target-URI\":\"https://number.academy/101717\",\"WARC-Payload-Digest\":\"sha1:44SRNW45BDIOJZUQS2N6X6NROVQO5M4T\",\"WARC-Block-Digest\":\"sha1:7L3JQZZY5LWQTJNBYPUKZBHNE2A2JLRP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337680.35_warc_CC-MAIN-20221005234659-20221006024659-00131.warc.gz\"}"}
https://www.geeksforgeeks.org/m-coloring-problem/
[ "# m Coloring Problem\n\nGiven an undirected graph and a number m, the task is to color the given graph with at most m colors such that no two adjacent vertices of the graph are colored with the same color.\n\nNote: Here coloring of a graph means the assignment of colors to all vertices.\n\nBelow is an example of a graph that can be colored with 3 different colors:", null, "Examples:\n\nInput:  graph = {0, 1, 1, 1},\n{1, 0, 1, 0},\n{1, 1, 0, 1},\n{1, 0, 1, 0}\nOutput: Solution Exists: Following are the assigned colors: 1  2  3  2\nExplanation: By coloring the vertices with the following colors,\nadjacent vertices do not have the same color\n\nInput: graph = {1, 1, 1, 1},\n{1, 1, 1, 1},\n{1, 1, 1, 1},\n{1, 1, 1, 1}\n\nOutput: Solution does not exist\nExplanation: No solution exists\n\n## Naive Approach for “m Coloring Problem”:\n\nGenerate all possible configurations of colors. Since each node can be colored using any of the m available colors, the total number of color configurations possible is m^V. After generating a configuration of colors, check if the adjacent vertices have the same color or not. If the conditions are met, print the combination.\n\nTime Complexity: O(m^V). There are O(m^V) possible combinations of colors\nAuxiliary Space: O(V). The recursive stack of the graphColoring(…) function will require O(V) space.\n\n## m Coloring Problem using Backtracking:\n\nAssign colors one by one to different vertices, starting from vertex 0. Before assigning a color, check for safety by considering already assigned colors to the adjacent vertices, i.e. check if the adjacent vertices have the same color or not. If there is any color assignment that does not violate the conditions, mark the color assignment as part of the solution. If no assignment of color is possible then backtrack and return false.\n\nFollow the given steps to solve the problem:\n\n• Create a recursive function that takes the graph, current index, number of vertices, and color array.\n• If the current index is equal to the number of vertices, print the color configuration in the color array.\n• Assign a color to a vertex from the range (1 to m).\n• For every assigned color, check if the configuration is safe (i.e. check that the adjacent vertices do not have the same color) and recursively call the function with the next index and number of vertices; otherwise, return false.\n• If any recursive function returns true then return true.\n• If no recursive function returns true then return false.\n\n#### Illustration:\n\n• To color the graph, color each node one by one.\n• To color the first node there are 3 choices of colors Red, Green and Blue, so let's take the Red color for the first node.\n• After the Red color for the first node is fixed, we make a choice for the second node in a similar manner as we did for the first node, then for the 3rd node, and so on.\n• There is one important point to remember: 
while choosing color for the node, it should not be same as the color of the adjacent node.\n• As shown in the above diagram, all the solutions are shown by coloring the first node in Red.\n• Let’s choose Green color for the first node and explore the options for the remaining nodes.\n\n• As shown in the above diagram, all the solutions are shown by coloring the first node in Green.\n• Let’s choose Blue color for the first node and explore the options for the remaining nodes.\n\nBelow is the implementation of the above approach:\n\nC++\n\n``````// C++ program for solution of M\n// Coloring problem using backtracking\n\n#include <bits/stdc++.h>\nusing namespace std;\n\n// Number of vertices in the graph\n#define V 4\n\nvoid printSolution(int color[]);\n\n/* A utility function to check if\nthe current color assignment\nis safe for vertex v i.e. checks\nwhether the edge exists or not\n(i.e, graph[v][i]==1). If exist\nthen checks whether the color to\nbe filled in the new vertex(c is\nsent in the parameter) is already\nnot (i.e, color[i]==c) */\nbool isSafe(int v, bool graph[V][V], int color[], int c)\n{\nfor (int i = 0; i < V; i++)\nif (graph[v][i] && c == color[i])\nreturn false;\n\nreturn true;\n}\n\n/* A recursive utility function\nto solve m coloring problem */\nbool graphColoringUtil(bool graph[V][V], int m, int color[],\nint v)\n{\n\n/* base case: If all vertices are\nassigned a color then return true */\nif (v == V)\nreturn true;\n\n/* Consider this vertex v and\ntry different colors */\nfor (int c = 1; c <= m; c++) {\n\n/* Check if assignment of color\nc to v is fine*/\nif (isSafe(v, graph, color, c)) {\ncolor[v] = c;\n\n/* recur to assign colors to\nrest of the vertices */\nif (graphColoringUtil(graph, m, color, v + 1)\n== true)\nreturn true;\n\n/* If assigning color c doesn't\nlead to a solution then remove it */\ncolor[v] = 0;\n}\n}\n\n/* If no color can be assigned to\nthis vertex then return false */\nreturn false;\n}\n\n/* This function solves the m Coloring\nproblem using Backtracking. It mainly\nuses graphColoringUtil() to solve the\nproblem. It returns false if the m\ncolors cannot be assigned, otherwise\nreturn true and prints assignments of\ncolors to all vertices. 
Please note that there may be more than one solution;
this function prints one of the feasible solutions. */
bool graphColoring(bool graph[V][V], int m)
{
    // Initialize all color values as 0.
    // This initialization is needed for the
    // correct functioning of isSafe().
    int color[V];
    for (int i = 0; i < V; i++)
        color[i] = 0;

    // Call graphColoringUtil() for vertex 0
    if (graphColoringUtil(graph, m, color, 0) == false) {
        cout << "Solution does not exist";
        return false;
    }

    // Print the solution
    printSolution(color);
    return true;
}

/* A utility function to print the solution */
void printSolution(int color[])
{
    cout << "Solution Exists:"
         << " Following are the assigned colors"
         << "\n";
    for (int i = 0; i < V; i++)
        cout << " " << color[i] << " ";

    cout << "\n";
}

// Driver code
int main()
{
    /* Create the following graph and test
       whether it is 3-colorable
        (3)---(2)
         |   / |
         |  /  |
         | /   |
        (0)---(1)
    */
    bool graph[V][V] = {
        { 0, 1, 1, 1 },
        { 1, 0, 1, 0 },
        { 1, 1, 0, 1 },
        { 1, 0, 1, 0 },
    };

    // Number of colors
    int m = 3;

    // Function call
    graphColoring(graph, m);
    return 0;
}

// This code is contributed by Shivani
``````

C

``````
// C program for solution of M
// Coloring problem using backtracking

#include <stdbool.h>
#include <stdio.h>

// Number of vertices in the graph
#define V 4

void printSolution(int color[]);

/* A utility function to check if the current
   color assignment is safe for vertex v: it
   checks whether an edge exists
   (graph[v][i] == 1) and, if it does, whether
   the color c to be filled in the new vertex
   (sent in the parameter) is already assigned
   to that adjacent vertex (color[i] == c). */
bool isSafe(int v, bool graph[V][V], int color[], int c)
{
    for (int i = 0; i < V; i++)
        if (graph[v][i] && c == color[i])
            return false;
    return true;
}

/* A recursive utility function
   to solve the m coloring problem */
bool graphColoringUtil(bool graph[V][V], int m, int color[],
                       int v)
{
    /* Base case: if all vertices are
       assigned a color then return true */
    if (v == V)
        return true;

    /* Consider this vertex v and
       try different colors */
    for (int c = 1; c <= m; c++) {
        /* Check if assignment of color
           c to v is fine */
        if (isSafe(v, graph, color, c)) {
            color[v] = c;

            /* Recur to assign colors to
               the rest of the vertices */
            if (graphColoringUtil(graph, m, color, v + 1)
                == true)
                return true;

            /* If assigning color c doesn't
               lead to a solution then remove it */
            color[v] = 0;
        }
    }

    /* If no color can be assigned to
       this vertex then return false */
    return false;
}

/* This function solves the m Coloring
   problem using Backtracking. It mainly
   uses graphColoringUtil() to solve the
   problem. It returns false if the m
   colors cannot be assigned; otherwise it
   returns true and prints the assignment of
   colors to all vertices. Please note
   that there may be more than one solution;
   this function prints one of the
   feasible solutions. */
bool graphColoring(bool graph[V][V], int m)
{
    // Initialize all color values as 0.
    // This initialization is needed for the
    // correct functioning of isSafe().
    int color[V];
    for (int i = 0; i < V; i++)
        color[i] = 0;

    // Call graphColoringUtil() for vertex 0
    if (graphColoringUtil(graph, m, color, 0) == false) {
        printf("Solution does not exist");
        return false;
    }

    // Print the solution
    printSolution(color);
    return true;
}

/* A utility function to print the solution */
void printSolution(int color[])
{
    printf("Solution Exists:"
           " Following are the assigned colors \n");
    for (int i = 0; i < V; i++)
        printf(" %d ", color[i]);
    printf("\n");
}

// Driver code
int main()
{
    /* Create the following graph and test
       whether it is 3-colorable
        (3)---(2)
         |   / |
         |  /  |
         | /   |
        (0)---(1)
    */
    bool graph[V][V] = {
        { 0, 1, 1, 1 },
        { 1, 0, 1, 0 },
        { 1, 1, 0, 1 },
        { 1, 0, 1, 0 },
    };
    int m = 3; // Number of colors

    // Function call
    graphColoring(graph, m);
    return 0;
}
``````

Java

``````
/* Java program for solution of
   M Coloring problem using backtracking */

public class mColoringProblem {
    final int V = 4;
    int color[];

    /* A utility function to check
       if the current color assignment
       is safe for vertex v */
    boolean isSafe(int v, int graph[][], int color[], int c)
    {
        for (int i = 0; i < V; i++)
            if (graph[v][i] == 1 && c == color[i])
                return false;
        return true;
    }

    /* A recursive utility function
       to solve the m coloring problem */
    boolean graphColoringUtil(int graph[][], int m,
                              int color[], int v)
    {
        /* Base case: if all vertices are
           assigned a color then return true */
        if (v == V)
            return true;

        /* Consider this vertex v and try
           different colors */
        for (int c = 1; c <= m; c++) {
            /* Check if assignment of color c to v
               is fine */
            if (isSafe(v, graph, color, c)) {
                color[v] = c;

                /* Recur to assign colors to the rest
                   of the vertices */
                if (graphColoringUtil(graph, m, color,
                                      v + 1))
                    return true;

                /* If assigning color c doesn't lead
                   to a solution then remove it */
                color[v] = 0;
            }
        }

        /* If no color can be assigned to
           this vertex then return false */
        return false;
    }

    /* This function solves the m Coloring problem using
       Backtracking. It mainly uses graphColoringUtil()
       to solve the problem. It returns false if the m
       colors cannot be assigned; otherwise it returns true
       and prints the assignment of colors to all vertices.
       Please note that there may be more than one
       solution; this function prints one of the
       feasible solutions. */
    boolean graphColoring(int graph[][], int m)
    {
        // Initialize all color values as 0. This
        // initialization is needed for the correct
        // functioning of isSafe().
        color = new int[V];
        for (int i = 0; i < V; i++)
            color[i] = 0;

        // Call graphColoringUtil() for vertex 0
        if (!graphColoringUtil(graph, m, color, 0)) {
            System.out.println("Solution does not exist");
            return false;
        }

        // Print the solution
        printSolution(color);
        return true;
    }

    /* A utility function to print the solution */
    void printSolution(int color[])
    {
        System.out.println("Solution Exists: Following"
                           + " are the assigned colors");
        for (int i = 0; i < V; i++)
            System.out.print(" " + color[i] + " ");
        System.out.println();
    }

    // Driver code
    public static void main(String args[])
    {
        mColoringProblem coloring = new mColoringProblem();
        /* Create the following graph and
           test whether it is 3-colorable
            (3)---(2)
             |   / |
             |  /  |
             | /   |
            (0)---(1)
        */
        int graph[][] = {
            { 0, 1, 1, 1 },
            { 1, 0, 1, 0 },
            { 1, 1, 0, 1 },
            { 1, 0, 1, 0 },
        };
        int m = 3; // Number of colors

        // Function call
        coloring.graphColoring(graph, m);
    }
}
// This code is contributed by Abhishek Shankhadhar
``````

Python3

``````
# Python3 program for solution of M Coloring
# problem using backtracking

class Graph():

    def __init__(self, vertices):
        self.V = vertices
        self.graph = [[0 for column in range(vertices)]
                      for row in range(vertices)]

    # A utility function to check
    # if the current color assignment
    # is safe for vertex v
    def isSafe(self, v, colour, c):
        for i in range(self.V):
            if self.graph[v][i] == 1 and colour[i] == c:
                return False
        return True

    # A recursive utility function to solve the
    # m coloring problem
    def graphColourUtil(self, m, colour, v):
        # Base case: all vertices have been assigned a color
        if v == self.V:
            return True

        for c in range(1, m + 1):
            if self.isSafe(v, colour, c):
                colour[v] = c
                if self.graphColourUtil(m, colour, v + 1):
                    return True
                # Backtrack: remove the color and try the next one
                colour[v] = 0

        # No color can be assigned to this vertex
        return False

    def graphColouring(self, m):
        # Initialize all color values as 0, which isSafe() relies on
        colour = [0] * self.V
        if not self.graphColourUtil(m, colour, 0):
            return False

        # Print the solution
        print("Solution exists and the following are the assigned colours:")
        for c in colour:
            print(c, end=' ')
        return True

# Driver code
if __name__ == '__main__':
    g = Graph(4)
    g.graph = [[0, 1, 1, 1], [1, 0, 1, 0], [1, 1, 0, 1], [1, 0, 1, 0]]
    m = 3

    # Function call
    g.graphColouring(m)

# This code is contributed by Divyanshu Mehta
``````

C#

``````
/* C# program for solution of M Coloring problem
   using backtracking */
using System;

class GFG {
    // Number of vertices in the graph
    readonly int V = 4;
    int[] color;

    /* A utility function to check if the current
       color assignment is safe for vertex v */
    bool isSafe(int v, int[, ] graph, int[] color, int c)
    {
        for (int i = 0; i < V; i++)
            if (graph[v, i] == 1 && c == color[i])
                return false;
        return true;
    }

    /* A recursive utility function to solve the m
       coloring problem */
    bool graphColoringUtil(int[, ] graph, int m,
                           int[] color, int v)
    {
        /* Base case: if all vertices are assigned
           a color then return true */
        if (v == V)
            return true;

        /* Consider this vertex v and try different
           colors */
        for (int c = 1; c <= m; c++) {
            /* Check if assignment of color c to v
               is fine */
            if (isSafe(v, graph, color, c)) {
                color[v] = c;

                /* Recur to assign colors to the rest
                   of the vertices */
                if (graphColoringUtil(graph, m, color,
                                      v + 1))
                    return true;

                /* If assigning color c doesn't lead
                   to a solution then remove it */
                color[v] = 0;
            }
        }

        /* If no color can be assigned to this vertex
           then return false */
        return false;
    }

    /* This function solves the m Coloring problem using
       Backtracking. It mainly uses graphColoringUtil()
       to solve the problem. It returns false if the m
       colors cannot be assigned; otherwise it returns true
       and prints the assignment of colors to all vertices.
       Please note that there may be more than one
       solution; this function prints one of the
       feasible solutions. */
    bool graphColoring(int[, ] graph, int m)
    {
        // Initialize all color values as 0. This
        // initialization is needed for the correct
        // functioning of isSafe().
        color = new int[V];
        for (int i = 0; i < V; i++)
            color[i] = 0;

        // Call graphColoringUtil() for vertex 0
        if (!graphColoringUtil(graph, m, color, 0)) {
            Console.WriteLine("Solution does not exist");
            return false;
        }

        // Print the solution
        printSolution(color);
        return true;
    }

    /* A utility function to print the solution */
    void printSolution(int[] color)
    {
        Console.WriteLine("Solution Exists: Following"
                          + " are the assigned colors");
        for (int i = 0; i < V; i++)
            Console.Write(" " + color[i] + " ");
        Console.WriteLine();
    }

    // Driver code
    public static void Main(String[] args)
    {
        GFG coloring = new GFG();

        /* Create the following graph and test whether
           it is 3-colorable
            (3)---(2)
             |   / |
             |  /  |
             | /   |
            (0)---(1)
        */
        int[, ] graph = { { 0, 1, 1, 1 },
                          { 1, 0, 1, 0 },
                          { 1, 1, 0, 1 },
                          { 1, 0, 1, 0 } };
        int m = 3; // Number of colors

        // Function call
        coloring.graphColoring(graph, m);
    }
}

// This code is contributed by PrinciRaj1992
``````

Javascript

``````
<script>

/* JavaScript program for solution of
   M Coloring problem using backtracking */

let V = 4;
let color;

/* A utility function to check
   if the current color assignment
   is safe for vertex v */
function isSafe(v, graph, color, c)
{
    for (let i = 0; i < V; i++)
        if (graph[v][i] == 1 && c == color[i])
            return false;
    return true;
}

/* A recursive utility function
   to solve the m coloring problem */
function graphColoringUtil(graph, m, color, v)
{
    /* Base case: if all vertices are
       assigned a color then return true */
    if (v == V)
        return true;

    /* Consider this vertex v and try
       different colors */
    for (let c = 1; c <= m; c++)
    {
        /* Check if assignment of color c to v
           is fine */
        if (isSafe(v, graph, color, c))
        {
            color[v] = c;

            /* Recur to assign colors to the rest
               of the vertices */
            if (graphColoringUtil(graph, m, color, v + 1))
                return true;

            /* If assigning color c doesn't lead
               to a solution then remove it */
            color[v] = 0;
        }
    }

    /* If no color can be assigned to
       this vertex then return false */
    return false;
}

/* This function solves the m Coloring problem using
   Backtracking. It mainly uses graphColoringUtil()
   to solve the problem. It returns false if the m
   colors cannot be assigned; otherwise it returns true
   and prints the assignment of colors to all vertices.
   Please note that there may be more than one
   solution; this function prints one of the
   feasible solutions. */
function graphColoring(graph, m)
{
    // Initialize all color values as 0. This
    // initialization is needed for the correct
    // functioning of isSafe().
    color = new Array(V);
    for (let i = 0; i < V; i++)
        color[i] = 0;

    // Call graphColoringUtil() for vertex 0
    if (!graphColoringUtil(graph, m, color, 0))
    {
        document.write("Solution does not exist<br>");
        return false;
    }

    // Print the solution
    printSolution(color);
    return true;
}

/* A utility function to print the solution */
function printSolution(color)
{
    document.write("Solution Exists: Following"
                   + " are the assigned colors<br>");
    for (let i = 0; i < V; i++)
        document.write(" " + color[i] + " ");
    document.write("<br>");
}

// Driver program to test the above functions
/* Create the following graph and
   test whether it is 3-colorable
    (3)---(2)
     |   / |
     |  /  |
     | /   |
    (0)---(1)
*/
let graph = [
    [ 0, 1, 1, 1 ],
    [ 1, 0, 1, 0 ],
    [ 1, 1, 0, 1 ],
    [ 1, 0, 1, 0 ],
];
let m = 3; // Number of colors
graphColoring(graph, m);

// This code is contributed by ab2127

</script>
``````

Output

```
Solution Exists: Following are the assigned colors
1 2 3 2
```

Time Complexity: O(m^V). There is a total of O(m^V) combinations of colors, so the upper bound on the time complexity remains the same, but the average time taken will be less because infeasible branches are pruned early.
Auxiliary Space: O(V). The recursion stack of the graph coloring function requires O(V) space.
[ null, "https://media.geeksforgeeks.org/wp-content/uploads/mcolor.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.64313066,"math_prob":0.98877805,"size":17243,"snap":"2023-40-2023-50","text_gpt3_token_len":4776,"char_repetition_ratio":0.18104298,"word_repetition_ratio":0.6301109,"special_character_ratio":0.31670824,"punctuation_ratio":0.16265415,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99545217,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-28T22:12:42Z\",\"WARC-Record-ID\":\"<urn:uuid:4099bbb9-4020-4c48-8c18-5236ad242e6e>\",\"Content-Length\":\"276668\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:78a49ca5-490e-487a-af39-c9364ebe21cc>\",\"WARC-Concurrent-To\":\"<urn:uuid:d3e584c7-470d-439a-a8c2-0b39e5bb78d0>\",\"WARC-IP-Address\":\"23.205.105.7\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/m-coloring-problem/\",\"WARC-Payload-Digest\":\"sha1:63ESHA2QZ7ZLQF5UK2YN4M4HP3W2OWBE\",\"WARC-Block-Digest\":\"sha1:UOL2XKR4HI3YPYWEA75HEI75X5P4LAAJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510454.60_warc_CC-MAIN-20230928194838-20230928224838-00367.warc.gz\"}"}
https://en.wikipedia.org/wiki/Empirical_orthogonal_function
[ "# Empirical orthogonal functions\n\n(Redirected from Empirical orthogonal function)\nJump to navigation Jump to search\n\nIn statistics and signal processing, the method of empirical orthogonal function (EOF) analysis is a decomposition of a signal or data set in terms of orthogonal basis functions which are determined from the data. The term is also interchangeable with the geographically weighted PCAs in geophysics.\n\nThe i th basis function is chosen to be orthogonal to the basis functions from the first through i − 1, and to minimize the residual variance. That is, the basis functions are chosen to be different from each other, and to account for as much variance as possible.\n\nThe method of EOF analysis is similar in spirit to harmonic analysis, but harmonic analysis typically uses predetermined orthogonal functions, for example, sine and cosine functions at fixed frequencies. In some cases the two methods may yield essentially the same results.\n\nThe basis functions are typically found by computing the eigenvectors of the covariance matrix of the data set. A more advanced technique is to form a kernel out of the data, using a fixed kernel. The basis functions from the eigenvectors of the kernel matrix are thus non-linear in the location of the data (see Mercer's theorem and the kernel trick for more information).\n\n## References and notes\n\n1. ^ Stephenson, David B.; Benestad, Rasmus E. (2000-09-02). \"Empirical Orthogonal Function analysis\". Environmental statistics for climate researchers. Retrieved 2013-02-28." ]
https://math.stackexchange.com/questions/2903310/question-about-the-definition-of-the-least-upper-bound-property
[ "# Question about the definition of the least upper bound property\n\nDefinition: Let $A$ the set with order relation. We say that the set $A$ has least upper bound property if any $A_0\\subset A$, $A_0\\neq \\varnothing$ which has upper bound has the least upper bound.\n\nQuestion 1: When we say \"has upper bound...\" do we mean that its upper bound is in $A$?\n\nQuestion 2: When we say \"has the least upper bound...\" do we mean that its least upper bound is in $A$?\n\nExample: Consider the set $A=(-1,1)$ of real numbers in the usual order. Assuming the fact that the real numbers have least upper bound property, it follows that the set $A$ has the least upper bound property (why?). For given any subset of $A$ having an upper bound in $A$ , it follows that its least upper bound must be in $A$. For example, the subset $\\{-1/2n: n\\in \\mathbb{N}\\}$ of $A$, thought it has no largest element, does have a least upper bound in $A$, the number $0$.\n\n$\\quad$ On the other hand, the set $B=(-1,0)\\cup (0,1)$ does not have th least upper bound property . The subset $\\{-1/2n: n\\in > \\mathbb{N}\\}$ of $B$ is bounded above by any element of $(0,1)$, but it has no least upper bound in $B$.\n\nI have read this example very carefully and I guess that it provides an example of subsets of reals which has LUB-property and has not, respectively.\n\nDo I correctly interpreted the meaning of above example?\n\nYour point is that if a set $A$ has the least upper bound property, it does not imply that every subset of A also has the least upper bound property." ]
https://encyclopedia2.thefreedictionary.com/Iterated+Logarithm%2C+Law+of
[ "# Iterated Logarithm, Law of\n\nThe following article is from The Great Soviet Encyclopedia (1979). It might be outdated or ideologically biased.\n\n## Iterated Logarithm, Law of\n\na limit theorem in probability theory similar in sense to the law of large numbers. Under certain conditions, the law of iterated logarithm defines the exact order of increase of the sums of independent random variables as the number of terms increases.\n\nFor example, suppose that the random variables X1, X2, …, Xn, … are independent and that each variable takes on the values +1 and -1, the probability of each value being ½. Let sn = X1 + … + Xn. The probability is then unity that for any δ > 0", null, "for all n greater than some number N, depending on the particular case, and", null, "for an infinite sequence of numbers n. The law derives its name from the factor In In n occurring in the above expressions.\n\nThe law of iterated logarithm developed out of the metric theory of numbers. The first result involving the law was obtained in 1924 by A. Ia. Khinchin. Further important advances in the study of the conditions under which the law can be applied were made by A. N. Kolmogorov in 1929 and by W. Feller in 1943.\n\n### REFERENCE\n\nFeller, W. Vvedenie v teoriiu veroiatnostei i ee prilozheniia, 2nd ed., vol. 1. Moscow, 1967. (Translated from English.)\n\nIU. V. PROKHOROV" ]
[ null, "https://img.tfd.com/ggse/68/gsed_0001_0020_0_img5509.png", null, "https://img.tfd.com/ggse/ba/gsed_0001_0020_0_img5510.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90500975,"math_prob":0.9631929,"size":1208,"snap":"2021-31-2021-39","text_gpt3_token_len":314,"char_repetition_ratio":0.11046512,"word_repetition_ratio":0.0,"special_character_ratio":0.23841059,"punctuation_ratio":0.14979757,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9952324,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-25T22:39:53Z\",\"WARC-Record-ID\":\"<urn:uuid:756b53dd-49f0-4327-9e79-8ada7059ba23>\",\"Content-Length\":\"40353\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f32685f4-f529-44eb-8b07-cf4bf42fef18>\",\"WARC-Concurrent-To\":\"<urn:uuid:7c73dfa5-c0ef-4818-b141-6a3c449c0190>\",\"WARC-IP-Address\":\"91.204.210.226\",\"WARC-Target-URI\":\"https://encyclopedia2.thefreedictionary.com/Iterated+Logarithm%2C+Law+of\",\"WARC-Payload-Digest\":\"sha1:4NBG2HAFJUJVGWXMTXXDW3GFDU76OA4E\",\"WARC-Block-Digest\":\"sha1:R5G5QANKPGNVSIGKUPZMFUU4JYFWSZRX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046151866.98_warc_CC-MAIN-20210725205752-20210725235752-00126.warc.gz\"}"}
https://nays3dv-documents.readthedocs.io/en/v3_en/03_Examples/09_Simulation%20of%20tidal%20oscilation.html
[ "# Example 09: Tidal oscilation\n\n## Purpose\n\nTo calculate the open channel flow with a tidal oscilation at a boundary. In this example an open channel with dowsntream and upstream boundaries with constant discharge at upstream and oscilating water surface at downstream will be simulated. For simplicity only water is simulated and can add density flow if needed.\n\n## Creation of calculation grid and setting initial conditions\n\nAs explained in the other examples and the introduction, create the grid using, [Grid], [Select Algorithm to Create Grid] and then select [Grid Generator for Nays3DV]. Then the grid creation window will appear.\n\nIn grid creation window, give channel shape parameters as shown in Figure 133.\n\nThen we can give Channel bed condition. As here we use the default condition flat(no bar) no modifications are needed.\n\nIf new grids are added or width is varied it is possible to set them. As in this example no grids added and no width variations, no modifications are needed in them.\n\nInitial water surface profile tab is used to give downstream depth, water surface slope and initial water surface purtavation. It can be seen as shown in Figure 134 and click on [Create Grid]. Here the bed is given as a sloped bed varying linearly in x direction.\n\nThen the grid is created and a confirmation message box will appear asking to map the geographic data as shown in Figure 135 and click on [Yes].", null, "Figure 135 : Grid creation : Mapping geographic data to the grid\n\nThis will map the geographic data to the grid and the mapped grid can be seen as shown in Figure 136.", null, "Figure 136 : Grid creation : Mapping geographic data to the grid\n\nNow save the project with [File] [Save project as .ipro].\n\n## Setting the calculation conditions and simulation\n\nSet the calculation conditions with [Calculation Condition], [Setting].\n\nCalculation condition window will open.\n\nSet computational parameters as shown in Figure 137.\n\nThen give hydraulic boundary conditions. Since the boundary conditions are open boundaries , boundary condition needs to be given as shown in Figure 138.\n\nThen give initial and Boundary concentrations as shown in Figure 139. Here only water is used for simulation, initial and boundary concentartion window is inactive.\n\nThen the time and iteration parameters are give as shown in Figure 140.\n\nThen give the physical parameters as given in Figure 141.\n\nAfter setting the calculation conditions, save the project by clicking on save tab. Now start simulation by, [Simulation] [Run]. Simulation will start and after some time it will finish showing the message the solver finished the calculation.\n\n## Visualization of results\n\nOpen 3D post processing window by selecting, [Calculation Results] [Open new 3D Post-Processing Window].\n\nIn this example, linear plots will be demonstrated. For linear graphs, click on linear graph icon in top of the window as shown in Figure 142. Or else go to [Calculation Results] - [Open new graph window]. Then the data source setting window will appear as shown in figure.", null, "Figure 142 : Visualization of Results : Data sorce setting window\n\nHere x axis and the 3dimensional data need be selected. Here Position vs Distance is selceted to plot. Therefore, axis is set i and position is selected for 3dimensional data. 
The resulting plot is as shown in Figure 143.", null, "Figure 143 : Visualization of Results : Position vs Distane plot\n\nIf we need to plot position vs time, data source setting has to be done as shown in Figure 144." ]
[ null, "https://nays3dv-documents.readthedocs.io/en/v3_en/_images/09_Grid_Creation_03.png", null, "https://nays3dv-documents.readthedocs.io/en/v3_en/_images/09_Grid_Creation_04.png", null, "https://nays3dv-documents.readthedocs.io/en/v3_en/_images/09_Visualization_of_Results_01.png", null, "https://nays3dv-documents.readthedocs.io/en/v3_en/_images/09_Visualization_of_Results_02.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88091874,"math_prob":0.9696281,"size":3381,"snap":"2023-40-2023-50","text_gpt3_token_len":700,"char_repetition_ratio":0.1649393,"word_repetition_ratio":0.019642858,"special_character_ratio":0.20792665,"punctuation_ratio":0.09615385,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9904516,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-08T02:15:07Z\",\"WARC-Record-ID\":\"<urn:uuid:e75d4c03-b65c-4dbf-8ed7-50e54593bd01>\",\"Content-Length\":\"23910\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:303ca37a-12d1-4694-85cf-7b274bfe67cf>\",\"WARC-Concurrent-To\":\"<urn:uuid:619df82b-5365-4ad7-bfae-64ed90fab556>\",\"WARC-IP-Address\":\"104.17.32.82\",\"WARC-Target-URI\":\"https://nays3dv-documents.readthedocs.io/en/v3_en/03_Examples/09_Simulation%20of%20tidal%20oscilation.html\",\"WARC-Payload-Digest\":\"sha1:7SBYXQOS4HMN7Z2QJFW2NAYDXPOT5EP6\",\"WARC-Block-Digest\":\"sha1:F7MU7LKCCQUWOFBX42TODM7RQHWWFRZI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100710.22_warc_CC-MAIN-20231208013411-20231208043411-00821.warc.gz\"}"}
https://metanumbers.com/27395
[ "# 27395 (number)\n\n27,395 (twenty-seven thousand three hundred ninety-five) is an odd five-digits composite number following 27394 and preceding 27396. In scientific notation, it is written as 2.7395 × 104. The sum of its digits is 26. It has a total of 2 prime factors and 4 positive divisors. There are 21,912 positive integers (up to 27395) that are relatively prime to 27395.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Odd\n• Number length 5\n• Sum of Digits 26\n• Digital Root 8\n\n## Name\n\nShort name 27 thousand 395 twenty-seven thousand three hundred ninety-five\n\n## Notation\n\nScientific notation 2.7395 × 104 27.395 × 103\n\n## Prime Factorization of 27395\n\nPrime Factorization 5 × 5479\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 2 Total number of distinct prime factors Ω(n) 2 Total number of prime factors rad(n) 27395 Product of the distinct prime numbers λ(n) 1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 27,395 is 5 × 5479. Since it has a total of 2 prime factors, 27,395 is a composite number.\n\n## Divisors of 27395\n\n1, 5, 5479, 27395\n\n4 divisors\n\n Even divisors 0 4 2 2\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 4 Total number of the positive divisors of n σ(n) 32880 Sum of all the positive divisors of n s(n) 5485 Sum of the proper positive divisors of n A(n) 8220 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 165.514 Returns the nth root of the product of n divisors H(n) 3.33273 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 27,395 can be divided by 4 positive divisors (out of which 0 are even, and 4 are odd). The sum of these divisors (counting 27,395) is 32,880, the average is 8,220.\n\n## Other Arithmetic Functions (n = 27395)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 21912 Total number of positive integers not greater than n that are coprime to n λ(n) 10956 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 2996 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 21,912 positive integers (less than 27,395) that are coprime with 27,395. 
And there are approximately 2,996 prime numbers less than or equal to 27,395.\n\n## Divisibility of 27395\n\n m n mod m 2 3 4 5 6 7 8 9 1 2 3 0 5 4 3 8\n\nThe number 27,395 is divisible by 5.\n\n## Classification of 27395\n\n• Arithmetic\n• Semiprime\n• Deficient\n\n• Polite\n\n• Square Free\n\n### Other numbers\n\n• LucasCarmichael\n\n## Base conversion (27395)\n\nBase System Value\n2 Binary 110101100000011\n3 Ternary 1101120122\n4 Quaternary 12230003\n5 Quinary 1334040\n6 Senary 330455\n8 Octal 65403\n10 Decimal 27395\n12 Duodecimal 13a2b\n20 Vigesimal 389f\n36 Base36 l4z\n\n## Basic calculations (n = 27395)\n\n### Multiplication\n\nn×y\n n×2 54790 82185 109580 136975\n\n### Division\n\nn÷y\n n÷2 13697.5 9131.67 6848.75 5479\n\n### Exponentiation\n\nny\n n2 750486025 20559564654875 563229273720300625 15429665953567635621875\n\n### Nth Root\n\ny√n\n 2√n 165.514 30.1456 12.8652 7.71852\n\n## 27395 as geometric shapes\n\n### Circle\n\n Diameter 54790 172128 2.35772e+09\n\n### Sphere\n\n Volume 8.61197e+13 9.43089e+09 172128\n\n### Square\n\nLength = n\n Perimeter 109580 7.50486e+08 38742.4\n\n### Cube\n\nLength = n\n Surface area 4.50292e+09 2.05596e+13 47449.5\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 82185 3.2497e+08 23724.8\n\n### Triangular Pyramid\n\nLength = n\n Surface area 1.29988e+09 2.42297e+12 22367.9" ]
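Most of the arithmetic facts above are easy to re-derive. A small SymPy sketch (our own illustration; the printed values should match the page):

```python
from sympy import factorint, divisors, totient, primepi

n = 27395

print(factorint(n))   # {5: 1, 5479: 1}  -> 5 × 5479
d = divisors(n)
print(d)              # [1, 5, 5479, 27395]
print(sum(d))         # 32880  (sigma, the sum of divisors)
print(sum(d) - n)     # 5485   (aliquot sum)
print(totient(n))     # 21912  (Euler phi)
print(primepi(n))     # count of primes <= n (about 2996 per the page)
```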
https://www.colorhexa.com/00e2cb
[ "# #00e2cb Color Information\n\nIn a RGB color space, hex #00e2cb is composed of 0% red, 88.6% green and 79.6% blue. Whereas in a CMYK color space, it is composed of 100% cyan, 0% magenta, 10.2% yellow and 11.4% black. It has a hue angle of 173.9 degrees, a saturation of 100% and a lightness of 44.3%. #00e2cb color hex could be obtained by blending #00ffff with #00c597. Closest websafe color is: #00cccc.\n\n• R 0\n• G 89\n• B 80\nRGB color chart\n• C 100\n• M 0\n• Y 10\n• K 11\nCMYK color chart\n\n#00e2cb color description : Pure (or mostly pure) cyan.\n\n# #00e2cb Color Conversion\n\nThe hexadecimal color #00e2cb has RGB values of R:0, G:226, B:203 and CMYK values of C:1, M:0, Y:0.1, K:0.11. Its decimal value is 58059.\n\nHex triplet RGB Decimal 00e2cb `#00e2cb` 0, 226, 203 `rgb(0,226,203)` 0, 88.6, 79.6 `rgb(0%,88.6%,79.6%)` 100, 0, 10, 11 173.9°, 100, 44.3 `hsl(173.9,100%,44.3%)` 173.9°, 100, 88.6 00cccc `#00cccc`\nCIE-LAB 81.127, -50.397, -1.652 37.972, 58.7, 65.826 0.234, 0.361, 58.7 81.127, 50.424, 181.878 81.127, -65.107, 5.35 76.616, -45.611, 2.692 00000000, 11100010, 11001011\n\n# Color Schemes with #00e2cb\n\n• #00e2cb\n``#00e2cb` `rgb(0,226,203)``\n• #e20017\n``#e20017` `rgb(226,0,23)``\nComplementary Color\n• #00e25a\n``#00e25a` `rgb(0,226,90)``\n• #00e2cb\n``#00e2cb` `rgb(0,226,203)``\n• #0088e2\n``#0088e2` `rgb(0,136,226)``\nAnalogous Color\n• #e25a00\n``#e25a00` `rgb(226,90,0)``\n• #00e2cb\n``#00e2cb` `rgb(0,226,203)``\n• #e20088\n``#e20088` `rgb(226,0,136)``\nSplit Complementary Color\n• #e2cb00\n``#e2cb00` `rgb(226,203,0)``\n• #00e2cb\n``#00e2cb` `rgb(0,226,203)``\n• #cb00e2\n``#cb00e2` `rgb(203,0,226)``\n• #17e200\n``#17e200` `rgb(23,226,0)``\n• #00e2cb\n``#00e2cb` `rgb(0,226,203)``\n• #cb00e2\n``#cb00e2` `rgb(203,0,226)``\n• #e20017\n``#e20017` `rgb(226,0,23)``\n• #009686\n``#009686` `rgb(0,150,134)``\n• #00af9d\n``#00af9d` `rgb(0,175,157)``\n• #00c9b4\n``#00c9b4` `rgb(0,201,180)``\n• #00e2cb\n``#00e2cb` `rgb(0,226,203)``\n• #00fce2\n``#00fce2` `rgb(0,252,226)``\n• #16ffe7\n``#16ffe7` `rgb(22,255,231)``\n• #30ffea\n``#30ffea` `rgb(48,255,234)``\nMonochromatic Color\n\n# Alternatives to #00e2cb\n\nBelow, you can see some colors close to #00e2cb. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #00e293\n``#00e293` `rgb(0,226,147)``\n• #00e2a5\n``#00e2a5` `rgb(0,226,165)``\n• #00e2b8\n``#00e2b8` `rgb(0,226,184)``\n• #00e2cb\n``#00e2cb` `rgb(0,226,203)``\n• #00e2de\n``#00e2de` `rgb(0,226,222)``\n• #00d3e2\n``#00d3e2` `rgb(0,211,226)``\n• #00c1e2\n``#00c1e2` `rgb(0,193,226)``\nSimilar Colors\n\n# #00e2cb Preview\n\nThis text has a font color of #00e2cb.\n\n``<span style=\"color:#00e2cb;\">Text here</span>``\n#00e2cb background color\n\nThis paragraph has a background color of #00e2cb.\n\n``<p style=\"background-color:#00e2cb;\">Content here</p>``\n#00e2cb border color\n\nThis element has a border color of #00e2cb.\n\n``<div style=\"border:1px solid #00e2cb;\">Content here</div>``\nCSS codes\n``.text {color:#00e2cb;}``\n``.background {background-color:#00e2cb;}``\n``.border {border:1px solid #00e2cb;}``\n\n# Shades and Tints of #00e2cb\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000a09 is the darkest color, while #f6fffe is the lightest one.\n\n• #000a09\n``#000a09` `rgb(0,10,9)``\n• #001e1b\n``#001e1b` `rgb(0,30,27)``\n• #00312c\n``#00312c` `rgb(0,49,44)``\n• #00453e\n``#00453e` `rgb(0,69,62)``\n• #005950\n``#005950` `rgb(0,89,80)``\n• #006c61\n``#006c61` `rgb(0,108,97)``\n• #008073\n``#008073` `rgb(0,128,115)``\n• #009485\n``#009485` `rgb(0,148,133)``\n• #00a796\n``#00a796` `rgb(0,167,150)``\n• #00bba8\n``#00bba8` `rgb(0,187,168)``\n• #00ceb9\n``#00ceb9` `rgb(0,206,185)``\n• #00e2cb\n``#00e2cb` `rgb(0,226,203)``\n• #00f6dd\n``#00f6dd` `rgb(0,246,221)``\n• #0affe6\n``#0affe6` `rgb(10,255,230)``\n• #1effe8\n``#1effe8` `rgb(30,255,232)``\n• #31ffea\n``#31ffea` `rgb(49,255,234)``\n• #45ffec\n``#45ffec` `rgb(69,255,236)``\n• #59ffee\n``#59ffee` `rgb(89,255,238)``\n• #6cfff0\n``#6cfff0` `rgb(108,255,240)``\n• #80fff2\n``#80fff2` `rgb(128,255,242)``\n• #94fff4\n``#94fff4` `rgb(148,255,244)``\n• #a7fff6\n``#a7fff6` `rgb(167,255,246)``\n• #bbfff8\n``#bbfff8` `rgb(187,255,248)``\n• #cefffa\n``#cefffa` `rgb(206,255,250)``\n• #e2fffc\n``#e2fffc` `rgb(226,255,252)``\n• #f6fffe\n``#f6fffe` `rgb(246,255,254)``\nTint Color Variation\n\n# Tones of #00e2cb\n\nA tone is produced by adding gray to any pure hue. In this case, #687a78 is the less saturated color, while #00e2cb is the most saturated one.\n\n• #687a78\n``#687a78` `rgb(104,122,120)``\n• #60827f\n``#60827f` `rgb(96,130,127)``\n• #578b86\n``#578b86` `rgb(87,139,134)``\n• #4e948d\n``#4e948d` `rgb(78,148,141)``\n• #469c94\n``#469c94` `rgb(70,156,148)``\n• #3da59b\n``#3da59b` `rgb(61,165,155)``\n• #34aea1\n``#34aea1` `rgb(52,174,161)``\n• #2bb7a8\n``#2bb7a8` `rgb(43,183,168)``\n• #23bfaf\n``#23bfaf` `rgb(35,191,175)``\n• #1ac8b6\n``#1ac8b6` `rgb(26,200,182)``\n• #11d1bd\n``#11d1bd` `rgb(17,209,189)``\n• #09d9c4\n``#09d9c4` `rgb(9,217,196)``\n• #00e2cb\n``#00e2cb` `rgb(0,226,203)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #00e2cb is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
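The hex → RGB → HSL round trip shown above can be checked with Python's standard library (a small sketch of our own; nothing here comes from the page itself):

```python
import colorsys

hex_color = "00e2cb"
r, g, b = (int(hex_color[i:i + 2], 16) for i in (0, 2, 4))
print(r, g, b)  # 0 226 203

# colorsys works on floats in [0, 1] and returns (h, l, s) for HLS.
h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
print(round(h * 360, 1), round(s * 100, 1), round(l * 100, 1))
# ~173.9 100.0 44.3  -> matches hsl(173.9, 100%, 44.3%)
```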
https://tcagley.wordpress.com/2016/02/27/how-to-measure-anything-chapter-10-bayes-adding-to-what-you-know-now/
[ "", null, "How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition\n\nChapter 10 of How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition is titled, Bayes: Adding to What You Know Now.  Here is a summary of the chapter in a few bullet points:\n\n• Prior knowledge influences how we process new information.\n• Bayesian statistics help us move from what we know to what we don’t know.\n• We are all Bayesian to some extent, but maybe not enough.\n• Many myths about using information are just plain wrong, and Bayes proves it.\n\nConventional statistics make some simplifying assumptions.  Two of the big assumptions are:\n\n1. The observer has no prior information about the range of possible values, and\n2. The observer does not have prior knowledge of the distribution of the population.\n\nOften, both are bad assumptions.  Enter the concepts of Bayesian statistics.  Bayesian statistics deal with how we update prior knowledge with new information.  Hubbard uses the example of the determining the probability that it raining if you have an accident based on the known probability of having an accident if it is raining.  This is called the Bayesian inversion.  In figure 10.1 Hubbard lists and reviews a number of basic probability concepts that allow us to logically flip the question being asked around and then to determine the probability of the flipped question. A Bayesian inversion gives you a path from things that seem easier to quantify to something you believe to be harder to quantify.\n\nI spent a semester in undergrad and several classes during grad school learning Bayesian statistics.  Bayesian statistics is powerful, but difficult to learn (at least in my case). Hubbard makes the point that people are intuitively Bayesian.  It is in our nature to begin with an estimate, gather information and then update the estimates (this is of course unless you are a fan of the Cleveland Browns).  Being intuitively Bayesian means we  understand that we begin with prior knowledge and that we can update that knowledge with new information.  The process happens in our heads without Microsoft Excel or even math.\n\nA deeper problem tends to be a tendency to ignore how the data is distributed when presented with new information. This phenomenon is called base rate neglect.  Wikipedia presents two excellent examples.  Boiling the concept down, when presented with specific information related to a broader pool of answers.  For example, let’s say my Cleveland Browns win the first game of the next football season, I might immeability jump to the conclusion that they will win the Super Bowl for that year even they have not won more than 50% of their games in YEARS. I am neglecting what is known about the distribution of success for the Cleveland Brown’s based on the most immediate observation. Hubbard suggests that one simple defense against base rate neglect is to simply be aware that the whole set of observations must be taken into account. Secondly, calibrated estimators  are better at leveraging Bayesian concepts than un-calibrated estimators.\n\nHubbard summarizes Bayes by pointing out that often a measurement question is more approachable if  we begin with a proposition that we understand and then invert the question, this is the heart of the Bayesian Inversion. Hubbard uses Bayesian statistics to debunk four myths.\n\n1. Myth: “Absence of evidence is not evidence of absence” is wrong.  
The absence of evidence is data that, when inverted, provides information that reduces uncertainty.  In the simple example used in the Chapter, Hubbard began with the known probability of accidents occurring in the rain. Does the a lack of accidents at a particular time tell us anything about whether it raining?  Through a Bayesian Inversion, the answer is yes if we know there was no accident at a particular time we know something about the probability that it was raining. We have reframed the question to use the absence of something to tell us something about the probability of something else.\n2. Myth: Correlation is not evidence of causation. The logical proof that correlation is not evidence of causation follows a path similar to the proof of the absence of evidence discussed above.  The classic example that correlation does not establish causation uses the relationship between the sun setting and crickets chirping. The sun goes down and almost all of the time, crickets begin chirp (a strong positive correlation).  As with the absence of evidence, correlation can constitute evidence and additionally, correlation does increase the probability of causation.\n3. Myth: Ambiguous results tell us nothing.  Assuming we are looking for evidence, the fact that that an observation is ambiguous or that we don’t see what we are looking for does not mean we have not learned anything.  The lack of evidence provides information that is useful for understanding the probability of whether what we are looking for exists. Using the Bayesian Statistics, if we’re looking for evidence, the fact that we don’t see it or that the results are ambiguous provides information that changes the original estimated probability that what we are looking for exists.\n4. Myth: Each observation alone tells us nothing. Bayesian Statistics drives the point home that every observation provides information that changes what we knew before.  Debunking this myth is important to organizations that are investigating the concept of software development productivity.  Software development productivity is a complex concept and is affected by a myriad of factors.  However, if knowing something about a single variable helps reduce uncertainty when considered among many other variables, then it is useful even in isolation.\n\nUnderstanding Bayes is important, even if we can be instinctively Bayesian.  Many estimation problems, ranging from story points to portfolio-level valuation,s use analogies.  The use of analogies in estimation is an example of the use of Bayes Theorem. Analogies are a set of observation known observations and the understanding of the distribution of those observations. We choose an analogy from a set of observations and then use what we know about the present to determine how that effects the analogy. Chapter 10 re-jumpstarted my knowledge of Baye, but I had to crack open a couple of my university textbooks and my copy of Schaum’s Outline of Business Statistics in order re-baseline my knowledge of Bayes Theorem and Bayesian Statistics.\n\nA parting comment . . . if I publish Re-Read Saturday on 90% of the Saturdays in a year, what is the probability that if I publish a blog on Saturday that it will be part of a Re-Read? 
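The rain/accident inversion is easy to try with made-up numbers. In the sketch below, every probability is an invented assumption purely for illustration (none of them come from Hubbard's book):

```python
# Invented priors, for illustration only.
p_rain = 0.30                 # P(rain)
p_accident_given_rain = 0.05  # P(accident | rain)
p_accident_given_dry = 0.01   # P(accident | no rain)

# Total probability of an accident.
p_accident = (p_accident_given_rain * p_rain
              + p_accident_given_dry * (1 - p_rain))

# Bayesian inversion: P(rain | accident).
p_rain_given_accident = p_accident_given_rain * p_rain / p_accident
print(round(p_rain_given_accident, 3))  # 0.682

# The "absence of evidence" direction: P(rain | no accident).
p_rain_given_no_accident = ((1 - p_accident_given_rain) * p_rain
                            / (1 - p_accident))
print(round(p_rain_given_no_accident, 3))  # 0.291, just under the 0.30 prior
```

Observing an accident pulls the probability of rain up from the 30% prior to about 68%; observing no accident nudges it down slightly, which is exactly the sense in which the absence of evidence is still evidence.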
A parting comment . . . if I publish Re-Read Saturday on 90% of the Saturdays in a year, what is the probability that, if I publish a blog entry on a Saturday, it will be part of a Re-Read? If you need help, check out the downloads available at http://www.howtomeasureanything.com/

How To Measure Anything, Third Edition, Introduction

Chapter 1: The Challenge of Intangibles

Chapter 2: An Intuitive Measurement Habit: Eratosthenes, Enrico, and Emily

Chapter 3: The Illusions of Intangibles: Why Immeasurables Aren't

Chapter 4: Clarifying the Measurement Problem

Chapter 5: Calibrated Estimates: How Much Do You Know Now?

Chapter 6: Quantifying Risk Through Modeling

Chapter 7: Quantifying The Value of Information

Chapter 8: The Transition: From What to Measure to How to Measure

Chapter 9: Sampling Reality: How Observing Some Things Tells Us about All Things
[ null, "https://tcagley.files.wordpress.com/2015/12/htma.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.920294,"math_prob":0.8652091,"size":8753,"snap":"2022-05-2022-21","text_gpt3_token_len":1871,"char_repetition_ratio":0.12561436,"word_repetition_ratio":0.05821918,"special_character_ratio":0.21604021,"punctuation_ratio":0.101591185,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9641197,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-27T15:16:00Z\",\"WARC-Record-ID\":\"<urn:uuid:7d76d6b2-b27e-453c-b5b0-1ca70917c83a>\",\"Content-Length\":\"110173\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:795db6c0-f7fb-411c-9237-5c405b3e9946>\",\"WARC-Concurrent-To\":\"<urn:uuid:4b1f3c60-8615-48f6-9c68-0ead477116b3>\",\"WARC-IP-Address\":\"192.0.78.12\",\"WARC-Target-URI\":\"https://tcagley.wordpress.com/2016/02/27/how-to-measure-anything-chapter-10-bayes-adding-to-what-you-know-now/\",\"WARC-Payload-Digest\":\"sha1:EVOXOFZ3VLVYBOH2A4MQZSUXPBXJW2YJ\",\"WARC-Block-Digest\":\"sha1:IP527FXAHTR26CK6KA4AKHOLZSVKUSWL\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662658761.95_warc_CC-MAIN-20220527142854-20220527172854-00227.warc.gz\"}"}
https://www.arxiv-vanity.com/papers/2103.06877/
[ "# Fast and Accurate Model Scaling\n\nPiotr Dollár  Mannat Singh  Ross Girshick\nFacebook AI Research (FAIR)\n###### Abstract\n\nIn this work we analyze strategies for convolutional neural network scaling; that is, the process of scaling a base convolutional network to endow it with greater computational complexity and consequently representational power. Example scaling strategies may include increasing model width, depth, resolution, \\etc. While various scaling strategies exist, their tradeoffs are not fully understood. Existing analysis typically focuses on the interplay of accuracy and flops (floating point operations). Yet, as we demonstrate, various scaling strategies affect model parameters, activations, and consequently actual runtime quite differently. In our experiments we show the surprising result that numerous scaling strategies yield networks with similar accuracy but with widely varying properties. This leads us to propose a simple fast compound scaling strategy that encourages primarily scaling model width, while scaling depth and resolution to a lesser extent. Unlike currently popular scaling strategies, which result in about increase in model activation \\wrtscaling flops by a factor of , the proposed fast compound scaling results in close to increase in activations, while achieving excellent accuracy. Fewer activations leads to speedups on modern memory-bandwidth limited hardware (\\eg, GPUs). More generally, we hope this work provides a framework for analyzing scaling strategies under various computational constraints.\n\n## 1 Introduction", null, "Figure 1: An analysis of four model scaling strategies: width scaling (w), in which only the width of a base model is scaled; compound scaling (dwr), in which the width, depth, and resolution are all scaled in roughly equal proportions; depth and width scaling (dw); and the proposed fast compound scaling (dWr), which emphasizes scaling primarily, but not only, the model width. (Top): We apply the four scaling strategies to two base models (EfficientNet-B0 and RegNetZ-500MF). Compound and fast scaling result in highest accuracy models, and both outperform width scaling. (Bottom-left): The scaling strategies have asymptotically different behavior in how they affect model activations. Given a scale factor of s, activations increase with about O(√s) for w and dWr scaling compared to almost O(s) for dwr and dw scaling. (Bottom-right): Runtime of a model (EfficientNet-B0) scaled using the four scaling strategies. Fast scaling results in models nearly as fast as w scaling (but with higher accuracy), and much faster than dwr and dw scaling, closely reflecting model activations.\n\nAdvances in modern hardware for training and running convolutional neural networks over the past several years have been impressive. Highly-parallel hardware accelerators, such as GPUs and TPUs, allow for training and deploying ever larger and more accurate networks.\n\nInterestingly, this rapid advancement has greatly benefited our ability to optimize models for the low-compute regime. In particular, whether via manual design, random search, or more complex neural architecture search strategies [Zoph2017], it has become feasible to train a large number of small models and select the best one, in terms of both accuracy and speed. At intermediate-compute regimes, efficient search [Liu2018] or efficient design spaces [Radosavovic2019, Radosavovic2020] can still provide the ability to directly optimize neural networks. 
However, regardless of computational resources, there will necessarily exist a high-compute regime where it may only be feasible to train a handful of models, or possibly even only a single model. This regime motivates our work.

In the high-compute regime, network scaling, the process by which a lower-complexity model is enlarged by expanding one or more of its dimensions (e.g., depth or width), becomes essential. Scaling has proven effective in terms of obtaining larger models with good accuracy [Tan2019]. However, existing work on model scaling focuses on model accuracy. In this work, we are interested in large, accurate models that are fast enough to deploy and use in practice.

The concept of network scaling emerged naturally in deep learning, with early work focused on scaling networks by increasing depth [Simonyan2015, Szegedy2015, He2016]. However, gains from depth scaling plateaued, leading to explorations of scaling width [Zagoruyko2016] and resolution [Howard2017]. More recently, scaling multiple dimensions at once, coined compound scaling [Tan2019], has been shown to achieve excellent accuracy.

Existing explorations of model scaling typically focus on maximizing accuracy versus flops. Yet, as we will show, two scaled models with the same flops can have very different runtimes on modern accelerators. This leads us to the central question explored in our work: can we design scaling strategies that optimize both accuracy and model runtime?

Our first core observation is that there exist multiple scaling strategies that can yield similar-accuracy models at the same flops. In Figure 1, top, we show that multiple scaling strategies can result in models with high accuracy. We will expand on this result in §6.

However, scaling a model to a fixed target flops using two different scaling strategies can result in widely different runtimes; see Figure 1, bottom-right. To better understand this behavior at a more fundamental level, in §3 we develop a framework for analyzing the complexity of various scaling strategies, in terms of not just flops but also parameters and activations. In particular, we show that different strategies scale activations at different asymptotic rates relative to flops. E.g., when scaling a model from f flops to s·f flops by scaling width, activations increase by O(√s), compared to nearly O(s) for compound scaling. Figure 1, bottom-left, shows this asymptotic behavior for a few select strategies.

In §4 we will show that, within a flop range of practical interest, on modern accelerators the runtime of a scaled model is more strongly correlated with activations than flops. We emphasize that this correlation holds over a diverse set of scaling strategies, which enables us to use activations as a proxy for predicting a scaled model's runtime.

Based on our analysis, in §5 we introduce a new family of scaling strategies parameterized by a single parameter α that controls the relative scaling along model width versus the other dimensions. This lets us carefully control the asymptotic rate at which model activations scale. We show that an intermediate value (α = 0.8) yields models that are both fast and accurate. We refer to this scaling strategy as fast compound model scaling, or simply fast scaling for brevity.

As we will show in §6, fast scaling allows us to obtain large models that are as accurate as the state of the art but faster. As a concrete example, we apply fast scaling to scale a RegNetY-4GF [Radosavovic2020] model to 16GF (gigaflops), and find it uses less memory and is faster (and more accurate) than EfficientNet-B4 [Tan2019], a model with 4× fewer flops.

In order to facilitate future research we will release all code and pretrained models introduced in this work.

## 2 Related Work

#### Manual network design.

Since the impressive success of AlexNet [Krizhevsky2012], and with the steady progress of hardware accelerators, the community has pushed toward ever larger and more accurate models. Increasing model depth led to rapid gains; notable examples include VGG [Simonyan2015] and Inception [Szegedy2015, Szegedy2016a]. This trend culminated with the introduction of residual networks [He2016]. Next, wider models proved not only effective but particularly efficient [Zagoruyko2016, Howard2017]. The use of depthwise [Chollet2017] and group convolution [Xie2017] enabled even higher-capacity models. Other notable design elements that led to larger and more accurate models include the inverted bottleneck [Sandler2018], SE [Hu2018], and new nonlinearities [Hendrycks2016, Ramachandran2017].

#### Automated network design.

With the rapid advancement of hardware for training deep models, it has become more feasible to automate network design. Neural architecture search [Zoph2017, Zoph2018, Real2018] has turned into a thriving research area and led to highly efficient models, especially in the low-compute regime. Model search is computationally expensive when training larger models; this has led to interest in developing efficient search algorithms [Liu2018, Pham2018, Liu2019]. For example, DARTS [Liu2019] proposed a differentiable search strategy that does not require training multiple separate models to optimize model structure. Nevertheless, in practice search is most effective in low- or medium-compute regimes.

#### Design space design.

Despite the effectiveness of model search, the paradigm has limitations. The outcome of a search is a single model instance tuned to a specific setting (e.g., dataset or flop regime). As an alternative, Radosavovic et al. [Radosavovic2020] recently introduced the idea of designing design spaces, and designed a low-dimensional design space consisting of simple, easy-to-tune models. Given a new dataset or compute regime, a model can be selected from this design space by tuning a handful of parameters, allowing for highly efficient random search. This allows for optimizing models directly in fairly high-compute regimes. We utilize these efficient design spaces in our experiments.

#### Network scaling.

Regardless of the model design strategy, there will exist some computational regime in which it is not feasible to train and compare a large number of models. Thus model scaling becomes crucial. Popular scaling strategies include scaling depth [Simonyan2015, Szegedy2015, He2016], width [Zagoruyko2016, Howard2017], and resolution [Howard2017, Huang2019gpipe]. The recently introduced compound scaling strategy [Tan2019b], which scales along all three dimensions at once, achieves an excellent accuracy-versus-flops tradeoff and serves as a core baseline in our work.

#### Going bigger.

There is substantial interest in scaling to massive datasets [Sun2017jft, Mahajan2018] and compute regimes [Huang2019gpipe].
Moreover, recent progress in unsupervised learning [He2020moco, Chen2020simclr, Caron2020swav] may create the potential to train with essentially unlimited data. These efforts motivate our work: we aim to enable scaling models to the size necessary for these brave new regimes.\n\n## 3 Complexity of Scaled Models\n\nIn this section we present a general framework for analyzing the complexity of various network scaling strategies. While the framework is simple and intuitive, it proves powerful in understanding and extending model scaling.\n\n### 3.1 Complexity Metrics\n\nThe three most relevant properties of models we consider are their flops (), parameters (), and activations (). Following common practice, we use flops to mean multiply-adds and parameters to denote the number of free variables in a model. We define activations as the number of elements in the output tensors of convolutional (conv) layers.\n\nFlops and parameters are popular complexity measures of neural networks. We note, however, that parameters of a convolution are independent of input resolution and hence do not fully reflect the actual capacity or runtime of a convolutional network. Therefore, given that we study networks with varying input resolution, we report parameters but we focus on flops as a primary complexity measure.\n\nActivations are less often reported but as we demonstrate play a key role in determining network speed on modern memory-bandwidth limited hardware. Hence, we carefully analyze the interplay between scaling and activations.\n\n### 3.2 Network Complexity\n\nWhile conv networks are composed of many heterogeneous layers, we focus our complexity analysis on conv layers. First, many layers such as normalization, pooling, or activation often account for a small percentage of a model’s compute. Second, the number and complexity of these layers tends to be proportional to the number and size of conv layers (\\eg, every conv may be followed by an activation). For these reasons analyzing convs serves as an excellent proxy of how model scaling affects an entire network.\n\nConsider a conv layer with width (number of channels) and spatial resolution . The layer takes in a feature map of size , and for each of the patches of size the network applies dot products of size . Therefore the complexity of a conv layer is given by:\n\n f=w2r2k2,p=k2w2,a=wr2 (1)\n\nAs is not scaled, we let without loss of generality.\n\nCommon networks are composed of stages, where each stage consists of uniform conv layers, each with the same and . The complexity of a stage of depth is:\n\n f=dw2r2,p=dw2,a=dwr2 (2)\n\nIn subsequent analysis we will show how different scaling strategies affect the complexity of a single stage. For simplicity, we use the same scaling for each network stage, thus our complexity analysis applies to the entire network.\n\n### 3.3 Complexity of Simple Scaling\n\nWe define simple scaling of a stage as scaling a stage along a single dimension. In particular, we consider width (), depth (), and resolution () scaling. In addition to the scaling dimension, we define the scaling factor to be the amount by which scaling increases model flops. Increasing by , by , or by all increase flops by (for simplicity we ignore quantization effects).\n\nTable 1 shows the complexity of scaling a stage by a factor of along different scaling dimensions. While in each case the resulting flops are the same (by design), the parameters and activations vary. 
In particular, activations increase by √s when scaling width, compared to by s when scaling along resolution or depth. This observation will play a central role in how we design new scaling strategies.

### 3.4 Complexity of Compound Scaling

Rather than scaling along a single dimension, an intuitive approach is to scale along multiple dimensions at once. Coined compound scaling by [Tan2019], such an approach has been shown to achieve higher accuracy than simple scaling.

In Table 2 we show the complexity for scaling along either two or three dimensions. In each case, we select ratios such that scaling is uniform w.r.t. flops along each dimension. E.g., if scaling along all dimensions (dwr), we scale d by s^(1/3), w by s^(1/6), and r by s^(1/6), such that flops increase by s^(1/3) when scaling each dimension and by s in total.

Interestingly, the compound scaling rule discovered empirically in [Tan2019] scaled by 1.2, 1.1, and 1.15 along d, w, and r, which corresponds roughly to uniform compound scaling with s ≈ 2 (since 1.2 · 1.1² · 1.15² ≈ 2). We thus use uniform compound scaling as a simple proxy for the purpose of our analysis. Observe that for uniform compound scaling, activations increase nearly linearly with flops.

### 3.5 Complexity of Group Width Scaling

Many top-performing networks rely heavily on group conv and depthwise conv. A group conv with channel width w and group width g is equivalent to splitting the w channels into w/g groups each of width g, applying a regular conv to each group, and concatenating the results. Depthwise conv is a special case with g = 1. Therefore, its complexity (with k = 1 as before) is:

    f = wgr²,  p = wg,  a = wr²    (3)

In Table 3 we show three basic strategies for scaling group conv. We observe that to obtain scaling behavior similar to scaling regular conv, both the channel width and the group width must be scaled. Therefore, unless otherwise noted, we scale g proportionally to w. For networks that use depthwise conv (g = 1), as in previous work [Tan2019], we do not scale g.

Finally, we note that when scaling w, we must ensure w is divisible by g. To address this, we set g′ = w′ if g = w, and round w′ to be divisible by g′ otherwise (w′ will change by at most 1/3 under such a strategy [Radosavovic2020]).

## 4 Runtime of Scaled Models

Our motivation is to design scaling strategies that result in fast and accurate models. In §3 we analyzed the behavior of flops, parameters, and activations for various scaling strategies. In this section we examine the relationship between these complexity metrics and model runtime. This will allow us to design new fast scaling strategies in §5.

How are the complexity metrics we analyzed in §3 related to model runtime on modern accelerators? To answer this question, in Figure 2 we report runtime for a large number of models scaled from three base models as a function of flops, parameters, and activations. From these plots we can make two observations: flops and parameters are only weakly predictive of runtime when scaling a single model via different scaling strategies; however, activations are strongly predictive of runtime for a model regardless of the scaling strategy. See Figure 2 for additional details.

Figure 2: Model runtime as a function of various complexity metrics. (Top-left): We scale EfficientNet-B0 (EN-B0) using four scaling strategies (dwr, dw, dWr, w) with a wide range of scaling factors (s < 100). For each scaling strategy we plot epoch time versus flops for each model (along with a best fit line). For a single scaling strategy (e.g., w), runtime is highly correlated with flops (e.g., Pearson's r = 0.99).
However, when comparing scaled versions of the same model using different scaling strategies, flops are only weakly predictive of runtime (r = 0.81). (Top-right): Using the same set of models, we plot runtime versus parameters, and observe parameters are even more weakly correlated with runtime (r = 0.56). (Bottom-left): Repeating the same analysis for runtime versus activations, we see that activations are strongly predictive of runtime regardless of the scaling strategy (r = 0.99). (Bottom-right): We repeat the analysis of runtime versus activations for three models (see §6.1 for model details). For scaled versions of each model, activations are highly predictive of runtime (r ≥ 0.99), and only very large models tend to be flop bound. This makes activations an excellent proxy for runtime. We note, however, that activations are less predictive of runtime when comparing scaled versions of different models (r = 0.95).

This simple result leads us to use model activations as a proxy for runtime. Specifically, for scaled versions of a single model, the Pearson correlation between runtime and activations is r ≥ 0.99, regardless of the scaling strategy, while the correlation with flops and parameters is far lower (r of 0.81 and 0.56, respectively). We caution, however, that activations cannot perfectly predict runtime across heterogeneous models (r = 0.95), as models may use operations with different runtimes, e.g., ReLU vs. SiLU. Moreover, some big models have runtimes higher than predicted from their activations, indicating these models are flop bound.

#### Implementation details.

We report the time to perform one epoch of training on ImageNet [Deng2009], which contains 1.2M training images. For each model, we use the largest batch size that fits in memory. We note that inference time is highly correlated with training time, but we report epoch time as it is easy to interpret (inference performance depends heavily on the use case). We time all models using PyTorch and 8 32GB Volta GPUs. Runtime is of course hardware dependent; however, we believe timing on GPUs is reasonable for two reasons. First, hardware accelerators (such as GPUs, TPUs, etc.) are highly prevalent. Second, accelerators are extremely efficient in terms of compute but tend to be memory-bandwidth bound [Yang2019], and this trend is expected to become more pronounced.

## 5 Fast Compound Model Scaling

Given the strong dependency of runtime on activations, we aim to design scaling strategies that minimize the increase in model activations. As our results from Tables 1-3 indicate, of all scaling strategies that involve scaling width, depth, and resolution, scaling a network by increasing its channel width and group width results in the smallest increase in activations. Indeed, it is well known that wide networks are quite efficient in wall-clock time [Zagoruyko2016]. Unfortunately, wide networks may not always achieve top results compared to deeper or higher-resolution models [He2016, Tan2019].

To address this, in this work we introduce the concept of fast compound model scaling, or simply fast scaling for brevity. The idea is simple: we design and test scaling strategies that primarily increase model width, but also increase depth and resolution to a lesser extent.

We formalize this by introducing a family of scaling strategies parameterized by α. Given 0 ≤ α ≤ 1, we define:

    e_d = (1−α)/2,  e_w = α,  e_r = (1−α)/2    (4)

and when scaling a network by a factor of s, we set:

    d′ = s^(e_d)·d,  w′ = √(s^(e_w))·w,  r′ = √(s^(e_r))·r    (5)
If using group conv, we also set g′ = √(s^(e_w))·g (same scaling as for w). The resulting complexity of the scaled model is:

    f′ = s·f,  p′ = s^((1+α)/2)·p,  a′ = s^(1−α/2)·a    (6)

Instantiations for scaling strategies using various α are shown in Table 4. Setting α = 1 results in width (w) scaling (lowest activations). Setting α = 0 results in depth and resolution (dr) scaling (highest activations). α = 1/3 corresponds to uniform compound scaling (dwr).

The interesting new regime we explore is 1/3 < α < 1. In particular, we refer to scaling strategies with α near 1 as fast scaling. Unless specified, we use α = 4/5 by default, which we denote using dWr (a compact sketch of this rule appears in §6.1 below). Next, in §6 we show that fast scaling results in good speed and accuracy.

## 6 Experiments

In this section we evaluate the effectiveness of our proposed fast scaling strategy. We introduce the baseline networks we test along with optimization settings in §6.1. In §6.2 we evaluate existing scaling strategies; then we perform extensive experiments and comparisons of fast scaling in §6.3. Finally, we compare scaling vs. random search in §6.4 and compare larger models in §6.5.

### 6.1 Baselines and Optimization Settings

#### Baseline networks.

In this work we evaluate scaling strategies on three network families: EfficientNet [Tan2019], RegNetY [Radosavovic2020], and RegNetZ (described below). We chose these models as they are representative of the state-of-the-art and are well suited for our scaling experiments. Moreover, EfficientNet was introduced in the context of model scaling work [Tan2019], making it an excellent candidate for our study.

#### EfficientNet.

EfficientNets have been shown to achieve a good flop-to-accuracy tradeoff. These models use inverted bottlenecks [Sandler2018], depthwise conv, and the SiLU nonlinearity [Hendrycks2016] (also popularly known as Swish [Ramachandran2017]). An EfficientNet is composed of seven stages with varying width, depth, stride, and kernel size. The original model (EfficientNet-B0) was optimized in the mobile regime (400MF) using neural architecture search [Tan2019b] and scaled to larger sizes (B1-B7) via compound scaling. For further details, please see [Tan2019].

Note that EfficientNets are specified by 30 parameters (input resolution, 7 stages with 4 parameters each, and stem and head width). Given this high-dimensional search space, optimizing an EfficientNet is only feasible in a low-compute regime, and scaling must be used to obtain larger models.

#### RegNets.

As an alternative to neural architecture search, Radosavovic et al. [Radosavovic2020] introduced the idea of designing design spaces, where a design space is a parameterized population of models. Using this methodology, [Radosavovic2020] designed a design space consisting of simple, regular networks called RegNets that are effective across a wide range of block types and flop regimes. Importantly for our work, a RegNet model is specified by a handful of parameters (6), which then allows for fast model selection using random search. Thus, unlike EfficientNets, RegNets allow us to compare large models obtained either via scaling or random search.

A RegNet consists of a stem, a body with four stages, and a head. Each stage consists of a sequence of identical blocks. The block type can vary depending on the model (the two block types we use are shown in Figure 3). Importantly, the widths and depths of a RegNet are not specified independently per stage, but are determined by a quantized linear function which has 4 parameters (d, w_0, w_a, w_m); for details see [Radosavovic2020].
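Since a RegNet stage is specified by exactly these kinds of quantities, the §5 scaling rule can be stated compactly. The sketch below is our own illustration of Eqs. (4)-(5), including the group-width handling of §3.5; the stage sizes are hypothetical and this is not the authors' released implementation:

```python
# Fast compound scaling of one stage (Eqs. 4-5) with group-width rounding (Sec. 3.5).
def fast_scale(d, w, r, g, s, alpha=4/5):
    """Scale depth d, width w, resolution r, group width g by flop factor s."""
    e_d, e_w, e_r = (1 - alpha) / 2, alpha, (1 - alpha) / 2
    d2 = round(s**e_d * d)
    w2 = round(s**(e_w / 2) * w)
    r2 = round(s**(e_r / 2) * r)
    g2 = w2 if g == w else round(s**(e_w / 2) * g)  # g scales along with w
    w2 = round(w2 / g2) * g2                        # keep w divisible by g
    return d2, w2, r2, g2

# Hypothetical stage scaled 8x in flops with dWr (alpha = 4/5):
print(fast_scale(d=4, w=64, r=56, g=16, s=8))  # -> (5, 148, 62, 37)
```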
Any other block parameters (like group width or bottleneck ratio) are kept constant across stages.

Figure 3: RegNet blocks. Each stage consists of a stride s=2 block that halves r and increases w, followed by multiple stride s=1 blocks with constant r and w. (a-b) The Y block is based on residual bottlenecks with group conv [Xie2017]. Each block consists of a 1×1 conv, a 3×3 group conv, and a final 1×1 conv. The 1×1 convs can change w via the bottleneck ratio b; however, we set b=1 following [Radosavovic2020]. BatchNorm [Ioffe2015] and ReLU follow each conv. (c-d) We introduce the Z block based on inverted bottlenecks [Sandler2018]. The Z block is similar to the Y block with 4 differences: (1) no non-linearity follows the final 1×1 conv, (2) SiLU [Hendrycks2016] is used in place of ReLU, (3) the stride 2 variant of the block has no residual, and (4) b<1 (we use b=1/4 in all experiments). Finally, a Squeeze-and-Excitation (SE) op [Hu2018] (reduction ratio of 1/4) follows the 3×3 conv for both the Y and Z blocks (not shown).

#### RegNetY.

The RegNetY block (Y) is shown in Figure 3 (a-b). The Y block resembles the standard residual bottleneck block with group conv [Xie2017]. Additionally, it uses a Squeeze-and-Excitation (SE) layer [Hu2018]. Following [Radosavovic2020], we set the bottleneck ratio to 1 (effectively no bottleneck). A RegNetY model is thus fully specified with 5 parameters: d, w_0, w_a, w_m, and g. Unlike [Radosavovic2020], we additionally vary the image input resolution (bringing the total parameters to 6).

#### RegNetZ.

We introduce a new Z block based on inverted bottlenecks [Sandler2018]. The Z block resembles the Y block except it omits the last nonlinearity and inverts the bottleneck (we use b = 1/4 in all experiments). See Figure 3 (c-d) for additional details. A RegNetZ model, built using the Z block, is fully specified with the same 6 parameters as a RegNetY model. We note that EfficientNet also uses inverted bottlenecks, but we introduce RegNetZ to allow us to compare large models obtained via scaling and random search.

#### Optimization settings.

Our goal is to enable fair and reproducible results. However, we also aim to achieve state-of-the-art results. This creates a tension between using a simple yet weak optimization setup (e.g., [Radosavovic2020]) versus a strong setup that yields good results but may be difficult to reproduce (e.g., [Tan2019]). To address this, we use a training setup that effectively balances between these two objectives.

Our setup is as follows: we use SGD with a momentum of 0.9, label smoothing [Szegedy2016a], mixup [Zhang2018mixup], AutoAugment [Cubuk2018], stochastic weight averaging (SWA) [Dai2020], and mixed precision training [Micikevicius2018]. For all models we use 5 epochs of gradual warmup [Goyal2017]. We use an exponential learning rate schedule with a batch size of 1024 (distributed on 8 32GB GPUs), an initial learning rate λ₀, and a total decay γ.² (²We parameterize the exponential learning rate via λ(e) = λ₀·γ^(e/E), where e is the current epoch, E the final epoch, λ₀ the initial learning rate, and λ₀·γ the final learning rate. We use this parameterization (as opposed to λ(e) = λ₀·γ^e) as it allows us to use a single setting for the decay regardless of the schedule length; setting the per-epoch decay to γ^(1/E) makes the two equivalent.) For RegNets we use a weight decay of 2e-5 and for EfficientNets we use 1e-5. Batch norm parameters are not decayed. For large models we reduce the batch size and learning rate proportionally as in [Goyal2017].
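A few lines of Python make the footnote's schedule concrete. This is our reconstruction of the parameterization from the surrounding text (the symbols λ₀, γ, E are as defined above, and the numeric values below are hypothetical), not the authors' training code:

```python
# Exponential LR schedule: lr(e) = lr0 * gamma**(e / E).
# The total decay gamma is fixed no matter how long the schedule is.
def lr_at_epoch(e, E, lr0, gamma):
    return lr0 * gamma ** (e / E)

for E in (100, 200, 400):                      # 1x, 2x, 4x schedules
    print(E, [round(lr_at_epoch(e, E, lr0=1.0, gamma=0.02), 4)
              for e in (0, E // 2, E)])        # start / middle / end
# Every schedule starts at lr0 and ends at lr0 * gamma (= 0.02 here),
# regardless of its length E.
```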
For reproducibility, we will release code for our setup.

#### EfficientNet baselines.

In Table 5 we report EfficientNet results using our optimization setup versus results from [Tan2019]. We report our results using a 1×, 2×, or 4× schedule (corresponding to 100, 200, and 400 epochs, respectively). Our 2× schedule achieves competitive results, and our 4× schedule outperforms the originally reported results for all but the largest model tested. We use the 2× schedule in all following experiments unless otherwise noted.

#### RegNet baselines.

In Table 6 we report results for baseline RegNet models. We obtain these models via random search as in [Radosavovic2020].³ Note that there are two versions of the 4GF RegNets (using default and discovered resolutions).

³We sample RegNet model configurations until we obtain 32 models in a given flop regime, train each of these models using the 1× schedule, and finally select the best one. Sampling just 32 random models in a given flop regime is typically sufficient to obtain accurate models, as shown in [Radosavovic2020].

Figure 4: Compound scaling: EfficientNet. (Left) Uniform compound scaling (dwr) offers the best accuracy relative to simple scaling along depth (d), width (w), or resolution (r). All models are scaled from EfficientNet-B0 (400MF) up to at most 4GF. (Right) Models obtained with w scaling are much faster than those from dwr scaling. Both of these results are expected. However, as we will show, it is possible to obtain models that are both fast and accurate. For reference, we also show the original EfficientNet models (orig) obtained via non-uniform compound scaling [Tan2019]; the results closely match uniform compound scaling (dwr).

Figure 5: Compound scaling: RegNet. We apply simple and compound scaling to RegNetY-500MF (left) and RegNetZ-500MF (right). As in Figure 4, dwr scaling achieves the best error, but at a significant increase in runtime (see appendix) relative to w scaling.

### 6.2 Simple and Compound Scaling

We now turn to the evaluation of simple and compound scaling [Tan2019], described in §3.3 and §3.4, respectively. For these experiments we scale the baseline models from §6.1.

In Figure 4, we evaluate the accuracy (left) and runtime (right) of EfficientNet-B0 scaled either via simple scaling along width (w), depth (d), or resolution (r), or via uniform compound scaling (dwr). As expected, dwr scaling provides the best accuracy, but results in slower models than w scaling. This suggests a tradeoff between speed and accuracy, but as we will show shortly, this need not be the case. Finally, we tested uniform scaling along pairs of dimensions (see Table 2), but dwr scaling proved best (not shown).

We also compare uniform compound scaling (dwr) to the original compound scaling rule (orig) from [Tan2019], which empirically set the per-dimension scaling factors. As expected from our analysis in §3.4, dwr scaling is close in both accuracy and runtime to the original compound scaling rule, without the need to optimize individual scaling factors.

In Figure 5 we repeat the same experiment but for the RegNetY-500MF and RegNetZ-500MF baselines. We see a similar behavior, where dwr scaling achieves the strongest results. Runtimes (see appendix) exhibit very similar behaviors (w scaling is much faster). Note that, as discussed, the group width g is scaled proportionally to the width w.

Figure 6: Fast scaling: EfficientNet. We test scaling EfficientNet-B0 using our family of scaling strategies parameterized by α (see Table 4).
(Left) Scaling with any α < 1 achieves good accuracy and results in a sizable gap in error to scaling with α = 1 (w). The exact value of α < 1 does not greatly influence the error. (Right) While all scaling strategies with α < 1 give good accuracy, their runtimes differ substantially. A setting of α = 4/5 (dWr) gives the best of both worlds: models that are both fast and accurate.

Figure 7: Fast scaling: RegNet. We apply scaling with different α to RegNetY-500MF (left) and RegNetZ-500MF (right). As in Figure 6, dWr scaling yields good accuracy and speed (see appendix for runtimes). We note that α could potentially be further tuned to trade off speed and accuracy, but we use α = 4/5 in this work.

### 6.3 Fast Scaling

We now perform an empirical analysis of the effectiveness of our fast scaling strategy. Recall that in §5 we introduced a family of scaling strategies parameterized by α that interpolates between uniform compound scaling (dwr) when α = 1/3 and width scaling (w) when α = 1. As α goes toward 1, the model activations increase least as we scale a model, resulting in faster models. In particular, we define α = 4/5 as fast scaling, and denote it by dWr.

In Figure 6, we evaluate the accuracy (left) and runtime (right) of EfficientNet-B0 scaled with various settings of α. Interestingly, for all tested values of α < 1 model accuracy was quite similar and substantially higher than for w scaling (α = 1), especially for larger models. In terms of runtime, dWr scaling is nearly as fast as w scaling, and substantially faster than dwr scaling. We emphasize that the differences in memory and speed increase asymptotically, hence the difference in runtime for models scaled with different α becomes more pronounced at larger scales.

In Figure 7 we repeat the same experiment but for the RegNet baselines. Results are similar: dWr scaling achieves excellent accuracy and runtime. Finally, we observe that for RegNets, w scaling is more effective than it is for EfficientNet. This can be partially explained by the fact that for RegNets we scale the group width g along with the width w (EfficientNet always uses g = 1); indeed, setting g = 1 and scaling RegNets by just w performs worse (see appendix).

### 6.4 Scaling versus Search

Figure 8: Large models. We scale four models via fast scaling (dWr) up to 16GF (1× to 32× scaling). We include the original EfficientNet model for reference. All results use our 2× schedule. See §6.5 for details and discussion.

How do scaled models compare to models obtained via random search? Recall that RegNets have only 6 free parameters, so optimizing a RegNet directly by random search in an intermediate flop regime is feasible (see §6.1).

Table 7 compares three sets of models. First, we compare RegNetY at 4GF obtained either via scaling (denoted by RegNetY-500MF→4GF) or search (RegNetY-4GF) in rows 1-2. The best sampled model outperforms the scaled model by 0.6% with a 4× schedule. We repeat this analysis for RegNetZ (rows 3-4) and find the best sampled model outperforms the scaled model by 0.1%. These results indicate that scaling a high-accuracy model is not guaranteed to yield an optimal model. Nevertheless, scaling is often necessary for targeting high compute regimes where model optimization is not feasible.

The above results suggest a hybrid scaling strategy, in which we optimize a model at an intermediate flop regime prior to scaling the model to larger sizes. In Table 7, rows 5-6, we compare two 16GF RegNetY models, one scaled by 32× from a 500MF model and one scaled by 4× from an optimized 4GF model.
The model obtained with the hybrid strategy of scaling an intermediate model is 0.3% better.

Finally, observe that the best sampled models have far fewer parameters than the scaled models. We found that at higher flop regimes, optimized models have fewer blocks in the last stage, which greatly reduces their parameters. This shows a limitation of uniformly scaling model stages without redistributing blocks across stages.

### 6.5 Comparison of Large Models

The primary benefit of model scaling is that it allows us to scale to larger models where optimization is not feasible. In Figure 8, we scale four models up to 16GF using fast scaling. We make the following observations:

1. Model ranking is consistent across flop regimes, with scaled versions of RegNetZ achieving the best accuracy.

2. All models obtained via fast scaling (dWr) are asymptotically faster than the original EfficientNet models, including our scaled versions of EfficientNet-B0.

3. The gap between the highest and lowest error models (RegNetY and RegNetZ) shrinks from 2.2% at 500MF to 0.8% at 16GF, implying that on ImageNet model optimization may be less important at high flop regimes.

4. The hybrid approach of scaling an intermediate flop regime model to higher flops (4GF→16GF) closes much of the gap between RegNetY and RegNetZ.

5. RegNetY is the fastest model tested and a good choice if runtime is constrained, especially at higher flops.

In Table 8 we give further details of the 4GF and 16GF models we tested, along with additional baselines. We note that RegNetY-4GF→16GF uses less memory and is faster than EfficientNet-B4, even though this RegNetY model has 4× as many flops. This emphasizes the importance of looking at metrics beyond flops when comparing models.

## 7 Discussion

In this work we presented a general framework for analyzing model scaling strategies that takes into account not just flops but also other network properties, including activations, which we showed are highly correlated with runtime on modern hardware. Given our analysis, we presented a fast scaling strategy that primarily, but not exclusively, scales model width. Fast scaling results in accurate models that also have fast runtimes. While the optimal scaling approach may be task dependent, we hope our work provides a general framework for reasoning about model scaling.

## Appendix

Figure 9: Large Models Additional Analysis (see also Figure 8). (Left): Plotting error versus runtime shows that scaled versions of RegNetY and RegNetZ offer the best speed versus accuracy tradeoff. However, the exact speed of these models is implementation dependent and may change with additional optimizations. (Right): For a given model type, activations of these large state-of-the-art models are strongly predictive of runtime, as expected.

#### Large models additional analysis.

In Figure 9 we show further analysis of the models from Figure 8. The left plot shows error versus runtime, with RegNetY and RegNetZ offering the best speed versus accuracy tradeoff. While offering a useful perspective, the relative ranking of methods is implementation dependent and may change with additional optimizations. For example, group conv seems to be underoptimized relative to depthwise or full-width conv, so a better implementation could lead to speedups for models that rely on group conv.
On the other hand, activations are highly predictive of the runtime of a scaled model (right plot), which we expect to hold regardless of implementation.

Figure 10: Group vs. Depthwise Conv. EfficientNet uses depthwise conv while RegNetZ uses group conv, but otherwise the models use fairly similar components (inverted bottlenecks, SiLU, SE). To study this difference, we introduce RegNetZ-G1, which is like RegNetZ but uses depthwise conv. At higher flops, RegNetZ shows gains over RegNetZ-G1 and EfficientNet, demonstrating that group conv may be a better option at higher compute regimes.

#### Group vs. depthwise conv.

EfficientNet [Tan2019] uses depthwise conv while RegNetZ uses group conv. Does this explain the accuracy difference between them? To answer this, we introduce a variant of RegNetZ which is constrained to use depthwise conv, denoted RegNetZ-G1. In Figure 10 we plot scaled versions of EfficientNet-B0, RegNetZ-500MF, and RegNetZ-G1-500MF (using dWr scaling). Interestingly, RegNetZ-G1 achieves better accuracy than EfficientNet, which is surprising as they use similar components and EfficientNet-B0 was obtained with a more sophisticated search. Nevertheless, we see that indeed much of the improvement of RegNetZ over EfficientNet, especially at higher flops, comes from using group conv.

Figure 11: Compound scaling: RegNet. Activations (top) and runtime (bottom) versus flops for the RegNetY (left) and RegNetZ (right) scaled models from Figure 5; shown here for completeness. Results are as expected, with activations being highly predictive of runtime and with w scaling resulting in the fastest scaled models.

Figure 12: Fast scaling: RegNet. Activations (top) and runtime (bottom) versus flops for the RegNetY (left) and RegNetZ (right) scaled models from Figure 7; shown here for completeness. Results are as expected, with activations being highly predictive of runtime and with large α resulting in the fastest scaled models.

#### RegNet timing results.

For completeness, activation and runtime results for RegNetY and RegNetZ corresponding to the scaling strategies from Figures 5 and 7 are shown in Figures 11 and 12, respectively. In both figures, activations are shown at the top and timings at the bottom, and RegNetY is shown on the left and RegNetZ on the right. First, observe that the model timing plots closely follow the model activation plots in all cases. This is expected since activations and timings are highly correlated (see §4). Second, as expected, in Figure 11 we see that w scaling results in the lowest activations/runtime, and in Figure 12 we see that using a large α results in the lowest activations/runtime for all models.

## Acknowledgements

We would like to thank Xiaoliang Dai for help with the simple yet strong training setup used in this work, and Kaiming He and Ilija Radosavovic for valuable discussions and feedback.
[ null, "https://media.arxiv-vanity.com/render-output/5933737/x1.png", null, "https://media.arxiv-vanity.com/render-output/5933737/x5.png", null, "https://media.arxiv-vanity.com/render-output/5933737/x9.png", null, "https://media.arxiv-vanity.com/render-output/5933737/x10.png", null, "https://media.arxiv-vanity.com/render-output/5933737/x12.png", null, "https://media.arxiv-vanity.com/render-output/5933737/x14.png", null, "https://media.arxiv-vanity.com/render-output/5933737/x16.png", null, "https://media.arxiv-vanity.com/render-output/5933737/x18.png", null, "https://media.arxiv-vanity.com/render-output/5933737/x20.png", null, "https://media.arxiv-vanity.com/render-output/5933737/x22.png", null, "https://media.arxiv-vanity.com/render-output/5933737/x24.png", null, "https://media.arxiv-vanity.com/render-output/5933737/x28.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88219434,"math_prob":0.9242134,"size":36183,"snap":"2022-40-2023-06","text_gpt3_token_len":8915,"char_repetition_ratio":0.15016998,"word_repetition_ratio":0.03534716,"special_character_ratio":0.25160986,"punctuation_ratio":0.14493379,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9710538,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-27T05:14:21Z\",\"WARC-Record-ID\":\"<urn:uuid:4802d982-579d-48ba-a985-3bcebf347fa5>\",\"Content-Length\":\"528278\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5506dba9-2827-4497-8183-1ca4e0fd0fd2>\",\"WARC-Concurrent-To\":\"<urn:uuid:96888f8c-a414-4d51-935a-8260c6a36e8f>\",\"WARC-IP-Address\":\"104.21.14.110\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/2103.06877/\",\"WARC-Payload-Digest\":\"sha1:54ENC2247GQ2VSD4LVVXFMB2GDYPAGD5\",\"WARC-Block-Digest\":\"sha1:ACWQINWREDYUG5OBEXINMZ7LCBADCRBS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334987.39_warc_CC-MAIN-20220927033539-20220927063539-00146.warc.gz\"}"}
https://www.codespeedy.com/convert-a-map-to-a-vector-in-cpp/
[ "# Convert a map to a vector in C++\n\nThis is a tutorial on how to convert a map to a vector in C++. Maps are used to store key-value pairs in an ordered way. And so, to store these we need a vector of paired values.\n\nSuppose we have a map of integer keys and integer values.\n\n```map<int,int> mp; //mp[key] =value\nmp = 10;\nmp = 2;\nmp = 6;\n\n```\n\nTo store these we need a vector of integer paired with an integer.\n\n```vector<pair<int,int>> vec;\n//this vec will store the map as:\n[[5,10] , [8,2] , [9,6]]```\n\nNow let us assume a map of string keys and integer values:\n\n```map<string,int> mp1;\nmp1[\"cake\"] = 500;\nmp1[\"jam\"] = 100;\nmp1[\"pizza\"] =400;```\n\nTo store these we need a vector of string paired with an integer.\n\n```vector<pair<string,int>> vec1;\n//this vec1 will store the map as:\n[[\"cake\",500] , [\"jam\",100] , [\"pizza\",400]]```\n\nSo let’s see how we can do this.\n\n## Map to vector\n\nThe idea to convert a map to vector is to iterate over the map and store the key-value pairs into vector one by one. This can be understood by the following examples:\n\nExample1: Integer key and integer value\n\n```#include <bits/stdc++.h>\nusing namespace std;\n\nint main() {\nmap<int,int> mp;\nmp = 10;\nmp = 16;\nmp = 7;\nmp = 6;\n\nvector<pair<int,int>> vec;\n\nfor(auto i : mp) //inserting map values into vector\n{\nvec.push_back(make_pair(i.first,i.second));\n}\n\nfor(auto j : vec)\ncout<<j.first<<\" : \"<<j.second<<endl;\n\nreturn 0;\n}\n```\n\nOutput:\n\n```2 : 16\n5 : 10\n9 : 7\n10 : 6```\n\nExmaple2: String key and integer values\n\n```#include <bits/stdc++.h>\nusing namespace std;\n\nint main() {\nmap<string,int> mp1;\nmp1[\"cake\"] = 500;\nmp1[\"jam\"] = 100;\nmp1[\"pizza\"] = 400;\n\nvector<pair<string,int>> vec1;\n\nfor(auto i : mp1) //inserting map values into vector\n{\nvec1.push_back(make_pair(i.first,i.second));\n}\n\nfor(auto j : vec1)\ncout<<j.first<<\" : \"<<j.second<<endl;\n\nreturn 0;\n}\n```\n\nOutput:\n\n```cake : 500\njam : 100\npizza : 400```\n\nThat’s it on how to convert a map into a vector. I hope you understood it." ]
https://answers.everydaycalculation.com/compare-fractions/10-15-and-9-8
[ "Solutions by everydaycalculation.com\n\n## Compare 10/15 and 9/8\n\n1st number: 10/15, 2nd number: 1 1/8\n\n10/15 is smaller than 9/8\n\n#### Steps for comparing fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 15 and 8 is 120\n\nNext, find the equivalent fraction of both fractional numbers with denominator 120\n2. For the 1st fraction, since 15 × 8 = 120,\n10/15 = 10 × 8/15 × 8 = 80/120\n3. Likewise, for the 2nd fraction, since 8 × 15 = 120,\n9/8 = 9 × 15/8 × 15 = 135/120\n4. Since the denominators are now the same, the fraction with the bigger numerator is the greater fraction\n5. 80/120 < 135/120 or 10/15 < 9/8\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8572369,"math_prob":0.9915008,"size":477,"snap":"2022-05-2022-21","text_gpt3_token_len":210,"char_repetition_ratio":0.33403805,"word_repetition_ratio":0.0,"special_character_ratio":0.5052411,"punctuation_ratio":0.055555556,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99583644,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-21T18:08:23Z\",\"WARC-Record-ID\":\"<urn:uuid:2f4ac29d-8184-4874-9722-4e784cd4ae83>\",\"Content-Length\":\"8503\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:72dcefb7-a959-4426-ba17-aeecf01a1c52>\",\"WARC-Concurrent-To\":\"<urn:uuid:54a67604-cae6-42ac-adbf-aeb9eeb9305a>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/compare-fractions/10-15-and-9-8\",\"WARC-Payload-Digest\":\"sha1:QI4I5DGXOLFQPXXFHKWHRXCIM7EFZWGE\",\"WARC-Block-Digest\":\"sha1:RNMQNKCDS25NTOAHQ4RFKSWN7T52XNUU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662540268.46_warc_CC-MAIN-20220521174536-20220521204536-00784.warc.gz\"}"}
https://socratic.org/questions/how-many-kilojoules-are-released-when-8-2-g-of-water-condenses-at-100-c-and-cool
[ "# How many kilojoules are released when 8.2 g of water condenses at 100°C and cools to 15°C?\n\nJun 2, 2016\n\n$- 185 , 402 + \\left(- 2 , 913.46\\right) = - 188315.46 J$\n\n$Q = 1.9 x {10}^{5} J$\n\n$Q = 1.9 x {10}^{2} k J$\n\n#### Explanation:\n\nThere are two steps to this thermochemistry process", null, "First we are going to calculate the condensation process at ${100}^{o} C$.\nThen calculate the cooling of the liquid from ${100}^{o} C \\to {15}^{o} C$\n\nStep 1 $Q = m {C}_{p}$\n\n$Q = 8.2 g \\left(- 2261 \\frac{J}{g}\\right) = - 185 , 402 J$\n\nStep 2 $Q = m \\left({T}_{f} - {T}_{i}\\right) {C}_{p}$\n\n$Q = 8.2 g \\left({15}^{o} - {100}^{o} C\\right) 4.18 \\frac{J}{{g}^{o} C} = - 2 , 913.46 J$\n\n$- 185 , 402 + \\left(- 2 , 913.46\\right) = - 188315.46 J$\n\n$Q = 1.9 x {10}^{5} J$\n\n$Q = 1.9 x {10}^{2} k J$" ]
[ null, "https://useruploads.socratic.org/hUCvu3vRuK8gDK0Z88PU_Screen%20Shot%202016-06-01%20at%206.42.45+PM.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8078649,"math_prob":1.0000073,"size":424,"snap":"2020-24-2020-29","text_gpt3_token_len":108,"char_repetition_ratio":0.09761905,"word_repetition_ratio":0.0,"special_character_ratio":0.22877358,"punctuation_ratio":0.05,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99999785,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-01T12:32:24Z\",\"WARC-Record-ID\":\"<urn:uuid:62fa174a-3493-4d39-9d62-b8594b846088>\",\"Content-Length\":\"33522\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3e99fd21-3227-40a9-a44c-eb717feb02ef>\",\"WARC-Concurrent-To\":\"<urn:uuid:cfd6404e-2d97-4866-82e6-4f6491ec2479>\",\"WARC-IP-Address\":\"216.239.34.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-many-kilojoules-are-released-when-8-2-g-of-water-condenses-at-100-c-and-cool\",\"WARC-Payload-Digest\":\"sha1:EZ7QMRJG6AZESOUB2D6WKLSTLIUWPZLV\",\"WARC-Block-Digest\":\"sha1:JDWWPXWUPRIZ3WPQ6KGQJDH3FGQGYWPG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347417746.33_warc_CC-MAIN-20200601113849-20200601143849-00578.warc.gz\"}"}
https://studysoup.com/tsg/879490/differential-equations-and-their-applications-an-introduction-to-applied-mathematics-3-edition-chapter-1-8-problem-11
[ "×\n×\n\n# A 500 gallon tank originally contains 100 gallons of fresh water. Beginning at time t =", null, "ISBN: 9780387908069 381\n\n## Solution for problem 11 Chapter 1.8\n\nDifferential Equations and Their Applications: An Introduction to Applied Mathematics | 3rd Edition\n\n• Textbook Solutions\n• 2901 Step-by-step solutions solved by professors and subject experts\n• Get 24/7 help from StudySoup virtual teaching assistants", null, "Differential Equations and Their Applications: An Introduction to Applied Mathematics | 3rd Edition\n\n4 5 1 354 Reviews\n16\n0\nProblem 11\n\nA 500 gallon tank originally contains 100 gallons of fresh water. Beginning at time t = 0, water containing 50 percent pollutants flows into the tank at the rate of 2 gal/min, and the well-stirred mixture leaves at the rate of 1 gal/min. Find the concentration of pollutants in the tank at the moment it overflows.\n\nStep-by-Step Solution:\nStep 1 of 3\n\nL30 - 2 ex. Find the most general antiderivative of the following: 1) f(x)=sec xtanx 2) f(x)= e 5x n NOTE: If f(x)= x...\n\nStep 2 of 3\n\nStep 3 of 3\n\n##### ISBN: 9780387908069\n\nDifferential Equations and Their Applications: An Introduction to Applied Mathematics was written by and is associated to the ISBN: 9780387908069. The full step-by-step solution to problem: 11 from chapter: 1.8 was answered by , our top Math solution expert on 03/13/18, 07:00PM. The answer to “A 500 gallon tank originally contains 100 gallons of fresh water. Beginning at time t = 0, water containing 50 percent pollutants flows into the tank at the rate of 2 gal/min, and the well-stirred mixture leaves at the rate of 1 gal/min. Find the concentration of pollutants in the tank at the moment it overflows.” is broken down into a number of easy to follow steps, and 56 words. Since the solution to 11 from 1.8 chapter was answered, more than 213 students have viewed the full step-by-step answer. This full solution covers the following key subjects: . This expansive textbook survival guide covers 65 chapters, and 855 solutions. This textbook survival guide was created for the textbook: Differential Equations and Their Applications: An Introduction to Applied Mathematics, edition: 3.\n\nUnlock Textbook Solution" ]
[ null, "https://studysoup.com/cdn/26cover_2673355", null, "https://studysoup.com/cdn/26cover_2673355", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9376547,"math_prob":0.87112796,"size":1091,"snap":"2020-45-2020-50","text_gpt3_token_len":247,"char_repetition_ratio":0.11407544,"word_repetition_ratio":0.058139537,"special_character_ratio":0.2465628,"punctuation_ratio":0.13551402,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98694617,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-29T01:07:39Z\",\"WARC-Record-ID\":\"<urn:uuid:ea489d3c-ccdc-415a-9cd1-d8cb22b09874>\",\"Content-Length\":\"81641\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aac659ac-15bd-4bfb-97ea-229ac70d7c4d>\",\"WARC-Concurrent-To\":\"<urn:uuid:0d162d66-801b-42e6-ac16-00323903edfb>\",\"WARC-IP-Address\":\"54.189.254.180\",\"WARC-Target-URI\":\"https://studysoup.com/tsg/879490/differential-equations-and-their-applications-an-introduction-to-applied-mathematics-3-edition-chapter-1-8-problem-11\",\"WARC-Payload-Digest\":\"sha1:AC7MPTVOZGW43MLTXL52I5E6DGTNEXJH\",\"WARC-Block-Digest\":\"sha1:I63BPXRZWAT2N4D6SBDE7WHDDPETQLOQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141195967.34_warc_CC-MAIN-20201129004335-20201129034335-00234.warc.gz\"}"}
https://phys.libretexts.org/Bookshelves/Conceptual_Physics/Book%3A_Body_Physics_-_Motion_to_Metabolism_(Davis)/14%3A_Lab_Extension_Activities/14.01%3A_Unit_9_Lab_Extension_Part_II-_Limits_on_Human_Performance*
[ "$$\\require{cancel}$$\n\n# 14.1: Unit 9 Lab Extension Part II- Limits on Human Performance*\n\n## Limits on Human Performance\n\nWhat is the ultimate strength of the Achilles Tendon\n\nFind or calculate the cross-sectional area of the Achilles Tendon. Cite any sources.\n\nBased on your answer, what force can the typical Achilles Tendon supply before rupture?\n\nIn order to transfer the the force on the balls of the feet directly to the lower legs during the jump, the force on the Achilles needs to be roughly twice the force on the balls of the feet. Given the maximum force the Achilles can handle, how large of a force can be applied to the balls of the feet during the jump without rupture?\n\nIf you apply that force to the floor, what force is supplied back on your feet (Newton’s 3rd Law)?\n\nIf that peak force were supplied during launch phase, what would be the peak net force? (Don’t forget about gravity cancelling out some of the upward force supplied by the floor).\n\nIf the peak net force was what you found above, what would be the average net force (assuming the force curve peak-to-average force ratio as your own jump).\n\nIf that average net force were supplied over the same launch time as your jump, what would be the impulse?\n\nWhat would be the change momentum during the launch?\n\nWhat would be the final velocity at the end of launch phase?\n\nHow long would it take for your velocity to become zero at the peak of the jump?\n\nWhat is the maximum hang time possible given the limitations of the strength of the Achilles tendon?\n\nDetermine the maximum kinetic energy a person can gain during the launch phase.\n\nDetermine the maximum height that a person can jump based on that kinetic energy (Use conservation of Energy).\n\n## Additional Limits on Human Performance\n\nHaving already found the maximum kinetic energy a person can gain during the launch phase, what is net work that would be done during the launch phase. (Work-Energy Theorem)\n\nUsing the distance the center of mass traveled during your own launch phase, calculate the work done by gravity during launch (Work equation).\n\nDetermine the work that would be done by the jumper during launch.\n\nIf the work was done over the same time interval as your launch phase, what would be the power output of the person." ]
https://www.physicsoverflow.org/39550/remaining-variables-magnitudes-correctly-conserved-magnitude
[ "#", null, "In the angular momentum equation L = r x p, which one of the remaining variables’ magnitudes is correctly conserved when the magnitude of the radius changes?\n\n+ 0 like - 1 dislike\n1240 views\n\nFor the equation L = r x p, assuming that the implied rotation occurs around a central point.\n\nPremise 1:\n\nThere is a force at all times directed from the point mass along the radius toward the centre of rotation (centripetal force).\n\nPremise 2:\n\nA change in the magnitude of radius is conducted by altering the magnitude of this force.\n\nPremise 3:\n\nThere can be no component of this force perpendicular to the radius.\n\nPremise 4:\n\nIn order to affect the magnitude of the component of momentum perpendicular to the radius, one must apply a parallel component of force (Newton’s first law).\n\nDeduction:\n\nA change in the magnitude of the radius cannot affect the magnitude of the component of momentum perpendicular to the radius.\n\nConclusion:\n\nIn the equation L = r x p, assuming that the implied rotation occurs around a central point, it is the magnitude of the component of momentum perpendicular to the radius that must be conserved when the magnitude of the radius changes.\n\nClosed as per community consensus as the post is neither graduate-level, nor coherent nor a question\nrecategorized Aug 29, 2017\n\nThis is not graduate-level, voting to close.\n\nAre you proposing that an absolute proof that the laws of physics are flawed and require a change is something that should be dealt with at a level below graduate?\n\nYou must employ the equations of motion $\\dot{{\\bf{p}}}=\\bf{F}$ in order to derive your conclusions. If there is no force perpendicular to the radius, then the corresponding part of momentum is conserved." ]
[ null, "https://www.physicsoverflow.org/qa-plugin/po-printer-friendly/print_on.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.819218,"math_prob":0.97845924,"size":965,"snap":"2023-14-2023-23","text_gpt3_token_len":200,"char_repetition_ratio":0.18106139,"word_repetition_ratio":0.30674848,"special_character_ratio":0.20103627,"punctuation_ratio":0.09289618,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9975717,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-28T19:14:56Z\",\"WARC-Record-ID\":\"<urn:uuid:f9f6e1e9-a25c-49d1-bd4a-1c9e4e9657cf>\",\"Content-Length\":\"127191\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0950f74b-7e2f-4267-af5a-cf7da74a02c5>\",\"WARC-Concurrent-To\":\"<urn:uuid:00d0469d-fc25-415f-a3ad-970e4404a0ca>\",\"WARC-IP-Address\":\"129.70.43.86\",\"WARC-Target-URI\":\"https://www.physicsoverflow.org/39550/remaining-variables-magnitudes-correctly-conserved-magnitude\",\"WARC-Payload-Digest\":\"sha1:XTZF2NCS7WVBM5BDASRCYL4W4NUMSMXP\",\"WARC-Block-Digest\":\"sha1:ZGDJWOPTZ2IAKMM4Q3QAM2NPKO7BFBR3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296948868.90_warc_CC-MAIN-20230328170730-20230328200730-00323.warc.gz\"}"}
https://www.techwhiff.com/learn/end-benchmark-2016-2015-ind-avg-liquidity-current/322426
[ "# End Benchmark 2016 2015 Ind. Avg. LIQUIDITY Current Quick 1.60 X 0.90 x 1.90 1.20 1.80...\n\n###### Question:", null, "end Benchmark 2016 2015 Ind. Avg. LIQUIDITY Current Quick 1.60 X 0.90 x 1.90 1.20 1.80 1.00 ASSET MANAGEMENT 2.80 Inventory Turnover Days Sales Outstanding 125.00 days 125.00 Fixed Asset Turnover Total Asset Turnover 2.75 130.00 0.80 0.40 2.60x 0.80 X 0.40 x 0.90 0.45 Consider the table above, which of the following is true? O The quick ratio indicates a possible short term liquidity problem. O The current ratio indicates a high level of profitability. O The days sales outstanding is improving relative to last year. The inventory turnover is better than the benchmark\n\n#### Similar Solved Questions\n\n##### Describe from a nursing perspective how a community health nurse would design prevention strategies aimed at...\nDescribe from a nursing perspective how a community health nurse would design prevention strategies aimed at improving the goal(s) for each level of prevention. Please give three examples Primary...\n##### A Kelvin Bridge schematic is shown in Figure P4.7. At null, Vk-Vp R1 R. Ry Null...\nA Kelvin Bridge schematic is shown in Figure P4.7. At null, Vk-Vp R1 R. Ry Null Detector Ri Rx FIGURE P4.7 (A) Find expressions for Vk, Ix and lab (B) Derive an expression for Rx at null (note that it has two terms). (C) At null: Ra 1000.6 S2. Calculate Rx using the simple first term in the equation...\n##### Find the complex zeros of the following polynomial function. Write fin factored form. f(x) = x3...\nFind the complex zeros of the following polynomial function. Write fin factored form. f(x) = x3 - 13x² + 59x - 87 The complex zeros off are (Simplify your answer. Type an exact answer, using radicals and i as needed. Use integers or fractions for any numbers in the expression. Use a comma to se...\n##### Problem 14-4A Straight-Line: Amortization of bond discount LO P2 [The following information applies to the questions...\nProblem 14-4A Straight-Line: Amortization of bond discount LO P2 [The following information applies to the questions displayed below.] Legacy issues $740,000 of 7.5%, four-year bonds dated January 1, 2019, that pay interest semiannually on June 30 and December 31. They are issued at$680,186 when t...\n##### 9. Identify each of the following molecules as aromatic, antiaromatic or nonaromatic. (For this problem, you...\n9. Identify each of the following molecules as aromatic, antiaromatic or nonaromatic. (For this problem, you may assume that each molecule can and will be planar, if that influences the analysis.) NUMBERS 10 and 11 are BIZZZZ-ONUSI (.e., bonus) 10. In learning about 1,2-vs. 1,4-addition to dienes, w...\n##### Sandhill Corporation issued $680,000, 7%, 20-year bonds on January 1, 2020, for$613,236. This price resulted...\nSandhill Corporation issued $680,000, 7%, 20-year bonds on January 1, 2020, for$613,236. This price resulted in an effective-interest rate of 8% on the bonds. Interest is payable annually on January 1. Sandhill uses the effective-interest method to amortize bond premium or discount. Prepare the jou..." ]
[ null, "https://img.homeworklib.com/questions/18983fd0-6f62-11ea-8418-63d685fe8393.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90709764,"math_prob":0.93410665,"size":6165,"snap":"2022-40-2023-06","text_gpt3_token_len":1579,"char_repetition_ratio":0.0990099,"word_repetition_ratio":0.35438266,"special_character_ratio":0.28353608,"punctuation_ratio":0.16945289,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95579296,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-07T01:23:22Z\",\"WARC-Record-ID\":\"<urn:uuid:145c440d-c1dd-4774-8bbf-66d899e8fe67>\",\"Content-Length\":\"49840\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:098a0baf-809a-4a1a-83fb-9158371b2bfb>\",\"WARC-Concurrent-To\":\"<urn:uuid:426fe32d-033b-4942-aa17-bb1814c0004a>\",\"WARC-IP-Address\":\"172.67.177.68\",\"WARC-Target-URI\":\"https://www.techwhiff.com/learn/end-benchmark-2016-2015-ind-avg-liquidity-current/322426\",\"WARC-Payload-Digest\":\"sha1:5SLHEIUIQGCZHZGCCLQJDZTPEQRZFOWF\",\"WARC-Block-Digest\":\"sha1:DIEPPILSCUAIUMIUN7F27E2SKDVXIMXO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500368.7_warc_CC-MAIN-20230207004322-20230207034322-00862.warc.gz\"}"}
https://www.wordaz.com/Anova.html
[ " Definition of Anova. Meaning of Anova. Synonyms of Anova\n\n# Definition of Anova. Meaning of Anova. Synonyms of Anova\n\nHere you will find one or more explanations in English for the word Anova. Also in the bottom left of the page several parts of wikipedia pages related to the word Anova and, of course, Anova synonyms and on the right images related to the word Anova.\n\n## Definition of Anova\n\nNo result for Anova. Showing similar results...\n\n## Meaning of Anova from wikipedia\n\n- Analysis of variance (ANOVA) is a collection of statistical models and their ****ociated estimation procedures (such as the \"variation\" among and between...\n- In statistics, one-way analysis of variance (abbreviated one-way ANOVA) is a technique that can be used to compare means of two or more samples (using...\n- best-known F-test, and plays an important role in the analysis of variance (ANOVA). The hypothesis that a proposed regression model fits the data well. See...\n- Anova Culinary (known legally as Anova Applied Electronics, Inc.) is a San Francisco-based smart kitchen company that provides connected precision cooking...\n- ANOVA gauge repeatability and reproducibility is a measurement systems analysis technique that uses an analysis of variance (ANOVA) random effects model...\n- In statistics, the two-way analysis of variance (ANOVA) is an extension of the one-way ANOVA that examines the influence of two different categorical...\n- In statistics, one purpose for the analysis of variance (ANOVA) is to analyze differences in means between groups. The test statistic, F, ****umes independence...\n- response between repetitions. Repeated measures analysis of variance (rANOVA) is a commonly used statistical approach to repeated measure designs. With...\n- Analysis of covariance (ANCOVA) is a general linear model which blends ANOVA and regression. ANCOVA evaluates whether the means of a dependent variable...\n- a mixed-design analysis of variance model, also known as a split-plot ANOVA, is used to test for differences between two or more independent groups..." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80960953,"math_prob":0.896996,"size":452,"snap":"2020-34-2020-40","text_gpt3_token_len":102,"char_repetition_ratio":0.18303572,"word_repetition_ratio":0.0,"special_character_ratio":0.19911504,"punctuation_ratio":0.14444445,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9771353,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-20T17:33:11Z\",\"WARC-Record-ID\":\"<urn:uuid:81468b0a-02f6-41b3-9e5e-2208ccd70ca5>\",\"Content-Length\":\"13656\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:28bc6668-12f6-46d9-988a-dad52eeef94e>\",\"WARC-Concurrent-To\":\"<urn:uuid:5d58c396-653f-4938-8746-1bfc498539c7>\",\"WARC-IP-Address\":\"138.197.68.142\",\"WARC-Target-URI\":\"https://www.wordaz.com/Anova.html\",\"WARC-Payload-Digest\":\"sha1:ZNTFAZJTVHPL3XURGBG2GJZGKVETMMF5\",\"WARC-Block-Digest\":\"sha1:YQHQO3AIP2F5GXYVR5HD2KLCUMH7KKTI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400198287.23_warc_CC-MAIN-20200920161009-20200920191009-00222.warc.gz\"}"}
https://www.databasestar.com/oracle-to_binary_double-to_binary_float-functions/
[ "", null, "In this article, I’ll explain what the TO_BINARY_DOUBLE and TO_BINARY_FLOAT functions do and show you some examples.\n\n## Purpose of the Oracle TO_BINARY_DOUBLE Function\n\nThe TO_BINARY_DOUBLE function converts an expression into a BINARY_DOUBLE data type.", null, "This is a double-precision floating-point number. If you work with BINARY_DOUBLE data types (which I’ve written about here (TODO link)) then you will find this function useful.\n\n## Purpose of the Oracle TO_BINARY_FLOAT Function\n\nThis function converts a number to a single-precision floating point number. It’s similar to TO_BINARY_DOUBLE, but the output data type is a FLOAT.\n\n## Syntax\n\nThe syntax of the TO_BINARY_DOUBLE function is:\n\nTO_BINARY_DOUBLE (expression [, format_mask [, nls_parameter] ])\n\nThe syntax of the TO_BINARY_FLOAT function is:\n\nTO_BINARY_FLOAT (expression [, format_mask [, nls_parameter] ])\n\nThey both take the same parameters.\n\n## Parameters\n\nThe parameters of the TO_BINARY_DOUBLE function are:\n\n• expression (mandatory): The expression to convert to a BINARY_DOUBLE. It can be a string, or a numeric value of type NUMBER, BINARY_FLOAT, or BINARY_DOUBLE.\n• format_mask (optional): This parameter specifies the format of the input value, if the input value is a character type that can be converted to a BINARY_DOUBLE.\n• nls_parameter (optional): This allows you to set the NLS_PARAMETER if the input value is a character type.\n\nThe parameters of the TO_BINARY_FLOAT are the same:\n\n• expression (mandatory): The expression to convert to a BINARY_FLOAT. It can be a string, or a numeric value of type NUMBER, BINARY_FLOAT, or BINARY_DOUBLE.\n• format_mask (optional): This parameter specifies the format of the input value, if the input value is a character type that can be converted to a BINARY_FLOAT.\n• nls_parameter (optional): This allows you to set the NLS_PARAMETER if the input value is a character type.\n\n## Examples of the TO_BINARY_DOUBLE and TO_BINARY_FLOAT Functions\n\nHere are some examples of these two function. 
I find that examples are the best way for me to learn about code, even with the explanation above.\n\nFor these examples, I’ll create a test table which contains numbers in different data types.\n\n``````CREATE TABLE double_test (\n  num_val NUMBER(8, 2),\n  bin_double_val BINARY_DOUBLE,\n  bin_float_val BINARY_FLOAT,\n  char_val VARCHAR2(10)\n);\n\nINSERT INTO double_test (num_val, bin_double_val, bin_float_val, char_val)\nVALUES (2468.12, 2468.12, 2468.12, '2468.12');``````\n\nNow, we can use this data in our examples.\n\n### Example 1: Number Value\n\nThis example uses TO_BINARY_DOUBLE and TO_BINARY_FLOAT on a NUMBER value.\n\n``````SELECT num_val,\n  TO_BINARY_DOUBLE(num_val) AS bin_double,\n  TO_BINARY_FLOAT(num_val) AS bin_float\nFROM double_test;``````\n\nResults:\n\n| NUM_VAL | BIN_DOUBLE | BIN_FLOAT |\n| --- | --- | --- |\n| 2468.12 | 2468.12 | 2468.12 |\n\n### Example 2: Binary Double Value\n\nThis example shows using the TO_BINARY_DOUBLE function on a value that is already a BINARY_DOUBLE.\n\n``````SELECT bin_double_val,\n  TO_BINARY_DOUBLE(bin_double_val) AS bin_double,\n  TO_BINARY_FLOAT(bin_double_val) AS bin_float\nFROM double_test;``````\n\nResults:\n\n| BIN_DOUBLE_VAL | BIN_DOUBLE | BIN_FLOAT |\n| --- | --- | --- |\n| 2468.12 | 2468.12 | 2468.12 |\n\n### Example 3: Binary Float Value\n\nThis example shows using this function on a value that is a BINARY_FLOAT.\n\n``````SELECT bin_float_val,\n  TO_BINARY_DOUBLE(bin_float_val) AS bin_double,\n  TO_BINARY_FLOAT(bin_float_val) AS bin_float\nFROM double_test;``````\n\nResults:\n\n| BIN_FLOAT_VAL | BIN_DOUBLE | BIN_FLOAT |\n| --- | --- | --- |\n| 2468.12 | 2468.1201171875 | 2468.12 |\n\n### Example 4: Character Value\n\nThis example uses a number that is stored inside a character field.\n\n``````SELECT char_val,\n  TO_BINARY_DOUBLE(char_val) AS bin_double,\n  TO_BINARY_FLOAT(char_val) AS bin_float\nFROM double_test;``````\n\nResults:\n\n| CHAR_VAL | BIN_DOUBLE | BIN_FLOAT |\n| --- | --- | --- |\n| 2468.12 | 2468.12 | 2468.12 |\n\n## Similar Functions\n\nSome functions which are similar to the TO_BINARY_DOUBLE function are:\n\n• TO_NUMBER: Converts a value to a NUMBER data type.\n• TO_CHAR: Converts a value to a VARCHAR2 data type.\n\nIf you want to know more about SQL functions, you can find a full list of Oracle SQL functions here." ]
[ null, "https://www.facebook.com/tr", null, "data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.62238616,"math_prob":0.9449026,"size":4162,"snap":"2020-10-2020-16","text_gpt3_token_len":979,"char_repetition_ratio":0.1938432,"word_repetition_ratio":0.22931035,"special_character_ratio":0.2503604,"punctuation_ratio":0.15384616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9954739,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-28T14:27:18Z\",\"WARC-Record-ID\":\"<urn:uuid:e2fbc5cc-5ead-4c44-9f67-6cd774a20ee4>\",\"Content-Length\":\"59699\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d4c3b024-0891-4540-9f0d-842869bf78e3>\",\"WARC-Concurrent-To\":\"<urn:uuid:8742866b-2b7a-4673-ab9b-8f7a14429c3e>\",\"WARC-IP-Address\":\"104.28.11.169\",\"WARC-Target-URI\":\"https://www.databasestar.com/oracle-to_binary_double-to_binary_float-functions/\",\"WARC-Payload-Digest\":\"sha1:3V7ONKJA5V5FDT7X63TI7TPHTICT6CLS\",\"WARC-Block-Digest\":\"sha1:5VO3JM6X5VZANOV2XFI6PW4CW5DVRPRL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875147234.52_warc_CC-MAIN-20200228135132-20200228165132-00378.warc.gz\"}"}
https://codeblog.vurdalakov.net/2016/02/number-of-bits-needed-to-represent-integer.html
[ "## Tuesday, February 23, 2016\n\n### Number of bits needed to represent an integer\n\nThe following C# function calculates the number of bits that are needed to represent a random integer number.\n\n``````int NumberOfBits(int number)\n{\nvar bits = 1;\n\nwhile ((number >>= 1) != 0)\n{\nbits++;\n}\n\nreturn bits;\n}\n``````" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7531134,"math_prob":0.99538875,"size":345,"snap":"2019-51-2020-05","text_gpt3_token_len":85,"char_repetition_ratio":0.1319648,"word_repetition_ratio":0.0,"special_character_ratio":0.28115943,"punctuation_ratio":0.14285715,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9943885,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-25T02:50:31Z\",\"WARC-Record-ID\":\"<urn:uuid:415e6daf-83d0-45b4-906f-8f8d5c45b2ce>\",\"Content-Length\":\"37547\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:86f9085d-e572-4f8c-92dc-f501cec305cd>\",\"WARC-Concurrent-To\":\"<urn:uuid:ab7aa34a-ebd2-4115-9efd-ae05119ea38d>\",\"WARC-IP-Address\":\"172.217.15.115\",\"WARC-Target-URI\":\"https://codeblog.vurdalakov.net/2016/02/number-of-bits-needed-to-represent-integer.html\",\"WARC-Payload-Digest\":\"sha1:3YYYSFOVO43FHAA2S4Q6YD3POINBWDGI\",\"WARC-Block-Digest\":\"sha1:XVW6VW5GKZWJMIASGNJ66SPKCI2VDERT\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250628549.43_warc_CC-MAIN-20200125011232-20200125040232-00522.warc.gz\"}"}
https://www.physicsforums.com/threads/double-delta-function-potential-well.682666/
[ "# Double Delta function potential well\n\nConsider a one-dimensional system described by a particle of mass m in the presence\nof a pair of delta function wells of strength Wo > 0 located at x = \u0006L, i.e.\nV(x) = -Wo \u000e(x + L) - Wo\u000e(x - L) This is a rough but illuminating toy model of an electron in the presence of two positive.\ncharges located at x = \u0006L.\n\n(a) Derive a transcendental equation for the allowed eigenenergies of any bound states.\nExpress your result in terms of the dimensionless quantities go = mLWo/hbar^2 and ε = κL where E = -hbar^2*κ^2 / 2m is the (negative) energy of the bound state.\n\n(b) Solve your transcendental equation(s) graphically / numerically to identify all\nbound state energy eigenvalues. How many bound states exist? Does the number\ndepend on go?\n\n(c) Plot all bound state energy eigenfunctions for go = 0.1, go = 0.5 and go = 10.\n\n(d) How does the energy of the most tightly bound state vary as you vary L? Include\na plot (with axes and units labeled) which shows the energy as a function of L.\nNote: You do not have to do this analytically; this is most easily done numerically\nusing Mathematica.\n\n(e) Suppose we place the particle in the lowest energy bound state. Do the two delta\nfunctions want to be close together or far apart? Plot the induced force between\nthe delta functions as a function of L. Again, best done numerically.\n\n(f) Use the above to suggest a plausible explanation for why H+2\nis a stable molecule.\n\n(g) How does the splitting between levels change as you increase the separation between the wells? Why does the 2rd excited state have the same number of nodes inside each well as the 3rd excited state, but not the same number as the 4th?\n\nSo I think I did a). I got ε = go(1-e^(-2ε)) and ε = go(1+e^(-2ε)). I'm not suite sure what b) is. I thought it was 2. The thing I'm getting hung up on c) -g) bc I can't use mathematica to its full 'potential' I guess. Help please? How do I proceed." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92666245,"math_prob":0.9762256,"size":3723,"snap":"2019-51-2020-05","text_gpt3_token_len":985,"char_repetition_ratio":0.12100027,"word_repetition_ratio":0.9617021,"special_character_ratio":0.26322857,"punctuation_ratio":0.09020618,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9974837,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-28T06:14:17Z\",\"WARC-Record-ID\":\"<urn:uuid:2da21c96-4fc2-4c7b-9b3b-8bf7b7ea0402>\",\"Content-Length\":\"62786\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9b1d2441-c019-4ae6-a2fa-f0b8f46ed96a>\",\"WARC-Concurrent-To\":\"<urn:uuid:dc52e380-8ce5-44af-9920-b70b4c3f35a6>\",\"WARC-IP-Address\":\"23.111.143.85\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/double-delta-function-potential-well.682666/\",\"WARC-Payload-Digest\":\"sha1:BJ5OMJUKCEEMBPERQD5V5VCOPCUZHXN5\",\"WARC-Block-Digest\":\"sha1:NM72YETEQJUJUDZQE3HLVYUFOTLLFCZM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251776516.99_warc_CC-MAIN-20200128060946-20200128090946-00300.warc.gz\"}"}
https://themindofjoe.blogspot.com/2017/01/what-is-it-number-1729-is-very-dear-to.html
[ "## Friday, January 27, 2017\n\n### Taxi Cab Numbers\n\nThe number 1729 is very dear to the hearts of mathematicians.\n\nThe story goes back to a 1919 conversation between the famous British mathematician G. H. Hardy and the Indian genius mathematician Srinivasa Ramanujan. Ramanujan asked the taxi number that Hardy had ridden in on the way. Hardy replied that it was number 1729 and mentioned that the number “seemed to be rather a dull one”.\n\n“No”, Ramanujan replied, “it is a very interesting number; it is the smallest number expressible as the sum of two [positive] cubes in two different ways.”\n\nAnd it’s true. The number 1729 can be expressed as 13 + 123 and also as 93 + 103 and is the smallest number for which that is true.\n\nIn the world of recreational mathematics — yes, there is such a thing — such numbers are now known as Taxi Cab Numbers. They even have their own web site.\n\nThe next number in the sequence is 4,104 = 23 + 163 = 9 3 + 15 3, then 13,832 = 23 + 243 = 183 + 203, then 20,683 = 103 + 273 = 193 + 243 and so on.\n\nThis gives rise to a variety interesting math problems. For example, can you write a computer program that calculates such numbers? Sure. In fact, here are 25 of them.\n\nThe series is infinite. In other words, given enough computing power, you will always be able to find a next-higher number. Always.\n\nAnd this why stop at two different ways? For example, what’s the smallest number that can be expressed as the sum of two cubes in three different ways? That number is 87,539,319 which is 2283 + 4233 and 1673 + 4363 AND 2553 + 4143\n\nHow about in four ways? It’s 6,963,472,309,248, which is 13,3223 + 16,6303 and 10,2003 + 18,0723 and 5,4363 + 18,9483 and 2,4213 + 19,0833 .\n\nYou can imagine, they get crazy-big after that.\n\nOh, why stop with just adding two numbers together? What about adding three numbers? Why not include negative numbers? And exponents other than cubes?\n\nIn other words, the sum of A numbers raised to the power of B, C different ways.\n\nYep. There are an infinite number of all these variations. I didn’t even get into the possibility of subtracting numbers, not just adding them.\n\nThat’s the cool thing about math. Almost everything in math is infinite. No matter what cool thing you find, somebody with enough imagination — and perhaps enough computer power — will be able to figure out the next thing one bigger.\n\nWant a billion digits of pi? Okay. Heck, how about five billion?\n\nHow about ten million digits of the square root of two?\n\nI could do this all day.\n\nNumbers extend forever. And since numbers are really just a construct of our mind, you could argue that the mind could extend forever.\n\nAnd only you can decide if that’s a comforting thought ... or a scary one." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93538994,"math_prob":0.9871608,"size":2691,"snap":"2020-45-2020-50","text_gpt3_token_len":690,"char_repetition_ratio":0.11202084,"word_repetition_ratio":0.004040404,"special_character_ratio":0.29319954,"punctuation_ratio":0.14457831,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9961109,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-24T12:40:28Z\",\"WARC-Record-ID\":\"<urn:uuid:44e52006-7851-4371-9fd3-c47fe8f23269>\",\"Content-Length\":\"123908\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:53654646-8bdb-4d49-a0e5-d74bee890667>\",\"WARC-Concurrent-To\":\"<urn:uuid:35c65c67-bc05-4a75-8669-c068549e7fe9>\",\"WARC-IP-Address\":\"172.217.164.129\",\"WARC-Target-URI\":\"https://themindofjoe.blogspot.com/2017/01/what-is-it-number-1729-is-very-dear-to.html\",\"WARC-Payload-Digest\":\"sha1:GTLNAJX4JV7APIQ2C2HPCY5KHLTLZ3HZ\",\"WARC-Block-Digest\":\"sha1:MU6VBYYFXHRD7QD2UQQR3JMI5STMZN5I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107882581.13_warc_CC-MAIN-20201024110118-20201024140118-00312.warc.gz\"}"}
https://www.codingninjas.com/codestudio/problem-details/tiling-problem_630464
[ "# Tiling Problem\n\nPosted: 25 Jul, 2020\nDifficulty: Hard\n\n## PROBLEM STATEMENT\n\n#### You have been given a board where there are '2' rows and 'N' columns. You have an infinite supply of 2x1 tiles, and you can place a tile in the following ways:\n\n``````1. Horizontally as 1x2 tile\n2. Vertically as 2x1 tile\n``````\n\n#### Count the number of ways to tile the given board using the available tiles.\n\n##### Note :\n``````The number of ways might be large so output your answer modulo 10^9 + 7.\n``````\n\n#### Here an example of tile and board for 'N' = 4 :", null, "##### Input format :\n``````The first and only line of each test case contains an Integer 'N' which denotes the size of the board, i.e. '2' rows and 'N' columns.\n``````\n##### Output format :\n``````For each test case, print the number of ways to tile the board modulo 10^9 + 7.\n``````\n##### Note:\n``````You are not required to print the output explicitly, it has already been taken care of. Just implement the function.\n``````\n##### Constraints :\n``````1 <= N <= 10^18\n\nWhere 'N' is the number of columns in the board.\n\nTime limit: 1 sec\n``````", null, "Approach 1\n\nTry to place the tile to fill the unit column and calculate the number of ways from smaller sub-problems. Then use memoization to convert O(2^N) solution to an O(N) solution.\n\n1. At any point we are at ‘idx’ column then we can place our tile in two ways to fill this column.\n1. Option 1 -  1 Horizontal Tile\n\nWe can place in this way where we have ‘idx-1’ column filled.\n\n2.   Option 2 - 2 Vertical Tiles\n\nWe can place in this way where we have ‘idx-2’ column filled.\n\n2. So, numberOfWays(n) = numberOfWays(n-1) + numberOfWays(n-2)\n\n3. Base cases are:\n\n1. When n = 1 there is only 1 way - Placing 1 Vertical Tile\n2. When n = 2 there are two ways - Placing 2 Vertical Tile and Placing 2 Horizontal Tiles.\n3. Also, take care of overflow using modulo 10^9 + 7.\n4. Lastly, use a DP Array of size N for memoization to save time over repetitive calls." ]
[ null, "https://files.codingninjas.in/0000000000004263.png", null, "https://s3-ap-southeast-1.amazonaws.com/codestudio.codingninjas.com/codestudio/assets/icons/tick-comment-bubble.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7627711,"math_prob":0.93088585,"size":2168,"snap":"2022-27-2022-33","text_gpt3_token_len":614,"char_repetition_ratio":0.11367837,"word_repetition_ratio":0.054054055,"special_character_ratio":0.2836716,"punctuation_ratio":0.12418301,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9955128,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-29T19:16:33Z\",\"WARC-Record-ID\":\"<urn:uuid:5e7f2300-47ee-4c9c-a3bd-3f8c58b00ea0>\",\"Content-Length\":\"145582\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e7fb8530-abc0-4d51-b974-91ca345f0829>\",\"WARC-Concurrent-To\":\"<urn:uuid:433704e8-4c1c-43e7-8584-96f92ceafd08>\",\"WARC-IP-Address\":\"52.85.132.88\",\"WARC-Target-URI\":\"https://www.codingninjas.com/codestudio/problem-details/tiling-problem_630464\",\"WARC-Payload-Digest\":\"sha1:UNJCARH2HTKHGKLW75FEVPHAQYP2JUEH\",\"WARC-Block-Digest\":\"sha1:QMCTAZZFG6Z7EKPFABROLWCHK7AJ54CY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103642979.38_warc_CC-MAIN-20220629180939-20220629210939-00060.warc.gz\"}"}
https://www.gcflcm.com/lcm-of-6-and-23
[ "# What is the Least Common Multiple of 6 and 23?\n\nLeast common multiple or lowest common denominator (lcd) can be calculated in two way; with the LCM formula calculation of greatest common factor (GCF), or multiplying the prime factors with the highest exponent factor.\n\nLeast common multiple (LCM) of 6 and 23 is 138.\n\nLCM(6,23) = 138\n\nLCM Calculator and\nand\n\n## Least Common Multiple of 6 and 23 with GCF Formula\n\nThe formula of LCM is LCM(a,b) = ( a × b) / GCF(a,b).\nWe need to calculate greatest common factor 6 and 23, than apply into the LCM equation.\n\nGCF(6,23) = 1\nLCM(6,23) = ( 6 × 23) / 1\nLCM(6,23) = 138 / 1\nLCM(6,23) = 138\n\n## Least Common Multiple (LCM) of 6 and 23 with Primes\n\nLeast common multiple can be found by multiplying the highest exponent prime factors of 6 and 23. First we will calculate the prime factors of 6 and 23.\n\n### Prime Factorization of 6\n\nPrime factors of 6 are 2, 3. Prime factorization of 6 in exponential form is:\n\n6 = 21 × 31\n\n### Prime Factorization of 23\n\nPrime factors of 23 are 23. Prime factorization of 23 in exponential form is:\n\n23 = 231\n\nNow multiplying the highest exponent prime factors to calculate the LCM of 6 and 23.\n\nLCM(6,23) = 21 × 31 × 231\nLCM(6,23) = 138" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8775833,"math_prob":0.99903464,"size":1159,"snap":"2023-40-2023-50","text_gpt3_token_len":363,"char_repetition_ratio":0.17489177,"word_repetition_ratio":0.10045662,"special_character_ratio":0.3373598,"punctuation_ratio":0.101626016,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999981,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-09T10:47:21Z\",\"WARC-Record-ID\":\"<urn:uuid:6de1478c-ead0-4d38-b90f-8046290e5301>\",\"Content-Length\":\"20470\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8df5e4e4-1316-4bc6-ad99-6e5074e71a6b>\",\"WARC-Concurrent-To\":\"<urn:uuid:7e61da92-b115-4162-a7dc-d8adc7ff8843>\",\"WARC-IP-Address\":\"34.133.163.157\",\"WARC-Target-URI\":\"https://www.gcflcm.com/lcm-of-6-and-23\",\"WARC-Payload-Digest\":\"sha1:5QGWL2G2KMY2QQCL4HS4VIILCIPNLAO5\",\"WARC-Block-Digest\":\"sha1:E3UM7VLE4NSAOYX4SMB2PWNT3IKR4D2S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100909.82_warc_CC-MAIN-20231209103523-20231209133523-00436.warc.gz\"}"}
http://methods.sagepub.com/Reference/encyclopedia-of-survey-research-methods/n277.xml
[ "# List Sampling\n\nEncyclopedia\nEdited by: Published: 2008\n\n• ## Subject Index\n\nList sampling is one of the basic ways that survey samples can be created. The basic concept of list sampling is deceptively simple. The process is to choose a subset of the elements (the sample) from a listing of all elements (the sampling frame) using a specific selection process. The selection process may have several features, for example, sampling with replacement or sampling without replacement.\n\nIn list sampling, as in other sample selection processes, issues arise about whether the sample estimate is an unbiased and reliable estimate for the characteristic or attribute in the full list of elements. Bias and reliability are measures of how well the estimator for the attribute computed using list sample data corresponds to the true value for the attribute in the ...\n\n• All\n• A\n• B\n• C\n• D\n• E\n• F\n• G\n• H\n• I\n• J\n• K\n• L\n• M\n• N\n• O\n• P\n• Q\n• R\n• S\n• T\n• U\n• V\n• W\n• X\n• Y\n• Z\n\n## Methods Map", null, "Research Methods\n\nCopy and paste the following HTML into your website" ]
[ null, "http://methods.sagepub.com/images/img-bg.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81185555,"math_prob":0.72405297,"size":1488,"snap":"2019-51-2020-05","text_gpt3_token_len":349,"char_repetition_ratio":0.13477089,"word_repetition_ratio":0.0,"special_character_ratio":0.2170699,"punctuation_ratio":0.063025214,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9515792,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-14T17:14:41Z\",\"WARC-Record-ID\":\"<urn:uuid:104f9eb8-2774-4a34-a4bc-0100e7f995f8>\",\"Content-Length\":\"266406\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:959ea643-02ef-487d-a7aa-31a6f9e7814c>\",\"WARC-Concurrent-To\":\"<urn:uuid:99200562-8022-4a67-855a-c3d759c854b8>\",\"WARC-IP-Address\":\"128.121.3.195\",\"WARC-Target-URI\":\"http://methods.sagepub.com/Reference/encyclopedia-of-survey-research-methods/n277.xml\",\"WARC-Payload-Digest\":\"sha1:3XW6U36PKXK2HBOCUEY5II5S5KTU2LIN\",\"WARC-Block-Digest\":\"sha1:7EOZXF5UE3GDJSMY3GRBPTT7ZJDQK3TT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541281438.51_warc_CC-MAIN-20191214150439-20191214174439-00014.warc.gz\"}"}
https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html?highlight=streaminglinearregressionwithsgd
[ "# pyspark.mllib package¶\n\n## pyspark.mllib.classification module¶\n\nclass pyspark.mllib.classification.LogisticRegressionModel(weights, intercept, numFeatures, numClasses)[source]\n\nClassification model trained using Multinomial/Binary Logistic Regression.\n\nParameters\n• weights – Weights computed for every feature.\n\n• intercept – Intercept computed for this model. (Only used in Binary Logistic Regression. In Multinomial Logistic Regression, the intercepts will not bea single value, so the intercepts will be part of the weights.)\n\n• numFeatures – The dimension of the features.\n\n• numClasses – The number of possible outcomes for k classes classification problem in Multinomial Logistic Regression. By default, it is binary logistic regression so numClasses will be set to 2.\n\n>>> data = [\n... LabeledPoint(0.0, [0.0, 1.0]),\n... LabeledPoint(1.0, [1.0, 0.0]),\n... ]\n>>> lrm = LogisticRegressionWithSGD.train(sc.parallelize(data), iterations=10)\n>>> lrm.predict([1.0, 0.0])\n1\n>>> lrm.predict([0.0, 1.0])\n0\n>>> lrm.predict(sc.parallelize([[1.0, 0.0], [0.0, 1.0]])).collect()\n[1, 0]\n>>> lrm.clearThreshold()\n>>> lrm.predict([0.0, 1.0])\n0.279...\n\n>>> sparse_data = [\n... LabeledPoint(0.0, SparseVector(2, {0: 0.0})),\n... LabeledPoint(1.0, SparseVector(2, {1: 1.0})),\n... LabeledPoint(0.0, SparseVector(2, {0: 1.0})),\n... LabeledPoint(1.0, SparseVector(2, {1: 2.0}))\n... ]\n>>> lrm = LogisticRegressionWithSGD.train(sc.parallelize(sparse_data), iterations=10)\n>>> lrm.predict(array([0.0, 1.0]))\n1\n>>> lrm.predict(array([1.0, 0.0]))\n0\n>>> lrm.predict(SparseVector(2, {1: 1.0}))\n1\n>>> lrm.predict(SparseVector(2, {0: 1.0}))\n0\n>>> import os, tempfile\n>>> path = tempfile.mkdtemp()\n>>> lrm.save(sc, path)\n>>> sameModel.predict(array([0.0, 1.0]))\n1\n>>> sameModel.predict(SparseVector(2, {0: 1.0}))\n0\n>>> from shutil import rmtree\n>>> try:\n... rmtree(path)\n... except:\n... pass\n>>> multi_class_data = [\n... LabeledPoint(0.0, [0.0, 1.0, 0.0]),\n... LabeledPoint(1.0, [1.0, 0.0, 0.0]),\n... LabeledPoint(2.0, [0.0, 0.0, 1.0])\n... ]\n>>> data = sc.parallelize(multi_class_data)\n>>> mcm = LogisticRegressionWithLBFGS.train(data, iterations=10, numClasses=3)\n>>> mcm.predict([0.0, 0.5, 0.0])\n0\n>>> mcm.predict([0.8, 0.0, 0.0])\n1\n>>> mcm.predict([0.0, 0.0, 0.3])\n2\n\n\nNew in version 0.9.0.\n\nclearThreshold()\n\nClears the threshold so that predict will output raw prediction scores. It is used for binary classification only.\n\nNew in version 1.4.0.\n\nproperty intercept\n\nIntercept computed for this model.\n\nNew in version 1.0.0.\n\nclassmethod load(sc, path)[source]\n\nLoad a model from the given path.\n\nNew in version 1.4.0.\n\nproperty numClasses\n\nNumber of possible outcomes for k classes classification problem in Multinomial Logistic Regression.\n\nNew in version 1.4.0.\n\nproperty numFeatures\n\nDimension of the features.\n\nNew in version 1.4.0.\n\npredict(x)[source]\n\nPredict values for a single data point or an RDD of points using the model trained.\n\nNew in version 0.9.0.\n\nsave(sc, path)[source]\n\nSave this model to the given path.\n\nNew in version 1.4.0.\n\nsetThreshold(value)\n\nSets the threshold that separates positive predictions from negative predictions. An example with prediction score greater than or equal to this threshold is identified as a positive, and negative otherwise. 
It is used for binary classification only.\n\nNew in version 1.4.0.\n\nproperty threshold\n\nReturns the threshold (if any) used for converting raw prediction scores into 0/1 predictions. It is used for binary classification only.\n\nNew in version 1.4.0.\n\nproperty weights\n\nWeights computed for every feature.\n\nNew in version 1.0.0.\n\nclass pyspark.mllib.classification.LogisticRegressionWithSGD[source]\n\nNew in version 0.9.0.\n\nNote\n\nDeprecated in 2.0.0. Use ml.classification.LogisticRegression or LogisticRegressionWithLBFGS.\n\nclassmethod train(data, iterations=100, step=1.0, miniBatchFraction=1.0, initialWeights=None, regParam=0.01, regType='l2', intercept=False, validateData=True, convergenceTol=0.001)[source]\n\nTrain a logistic regression model on the given data.\n\nParameters\n• data – The training data, an RDD of LabeledPoint.\n\n• iterations – The number of iterations. (default: 100)\n\n• step – The step parameter used in SGD. (default: 1.0)\n\n• miniBatchFraction – Fraction of data to be used for each SGD iteration. (default: 1.0)\n\n• initialWeights – The initial weights. (default: None)\n\n• regParam – The regularizer parameter. (default: 0.01)\n\n• regType\n\nThe type of regularizer used for training our model. Supported values:\n\n• ”l1” for using L1 regularization\n\n• ”l2” for using L2 regularization (default)\n\n• None for no regularization\n\n• intercept – Boolean parameter which indicates the use or not of the augmented representation for training data (i.e., whether bias features are activated or not). (default: False)\n\n• validateData – Boolean parameter which indicates if the algorithm should validate data before training. (default: True)\n\n• convergenceTol – A condition which decides iteration termination. (default: 0.001)\n\nNew in version 0.9.0.\n\nclass pyspark.mllib.classification.LogisticRegressionWithLBFGS[source]\n\nNew in version 1.2.0.\n\nclassmethod train(data, iterations=100, initialWeights=None, regParam=0.0, regType='l2', intercept=False, corrections=10, tolerance=1e-06, validateData=True, numClasses=2)[source]\n\nTrain a logistic regression model on the given data.\n\nParameters\n• data – The training data, an RDD of LabeledPoint.\n\n• iterations – The number of iterations. (default: 100)\n\n• initialWeights – The initial weights. (default: None)\n\n• regParam – The regularizer parameter. (default: 0.0)\n\n• regType\n\nThe type of regularizer used for training our model. Supported values:\n\n• ”l1” for using L1 regularization\n\n• ”l2” for using L2 regularization (default)\n\n• None for no regularization\n\n• intercept – Boolean parameter which indicates the use or not of the augmented representation for training data (i.e., whether bias features are activated or not). (default: False)\n\n• corrections – The number of corrections used in the LBFGS update. If a known updater is used for binary classification, it calls the ml implementation and this parameter will have no effect. (default: 10)\n\n• tolerance – The convergence tolerance of iterations for L-BFGS. (default: 1e-6)\n\n• validateData – Boolean parameter which indicates if the algorithm should validate data before training. (default: True)\n\n• numClasses – The number of classes (i.e., outcomes) a label can take in Multinomial Logistic Regression. (default: 2)\n\n>>> data = [\n... LabeledPoint(0.0, [0.0, 1.0]),\n... LabeledPoint(1.0, [1.0, 0.0]),\n... 
]\n>>> lrm = LogisticRegressionWithLBFGS.train(sc.parallelize(data), iterations=10)\n>>> lrm.predict([1.0, 0.0])\n1\n>>> lrm.predict([0.0, 1.0])\n0\n\n\nNew in version 1.2.0.\n\nclass pyspark.mllib.classification.SVMModel(weights, intercept)[source]\n\nModel for Support Vector Machines (SVMs).\n\nParameters\n• weights – Weights computed for every feature.\n\n• intercept – Intercept computed for this model.\n\n>>> data = [\n... LabeledPoint(0.0, [0.0]),\n... LabeledPoint(1.0, [1.0]),\n... LabeledPoint(1.0, [2.0]),\n... LabeledPoint(1.0, [3.0])\n... ]\n>>> svm = SVMWithSGD.train(sc.parallelize(data), iterations=10)\n>>> svm.predict([1.0])\n1\n>>> svm.predict(sc.parallelize([[1.0]])).collect()\n[1]\n>>> svm.clearThreshold()\n>>> svm.predict(array([1.0]))\n1.44...\n\n>>> sparse_data = [\n... LabeledPoint(0.0, SparseVector(2, {0: -1.0})),\n... LabeledPoint(1.0, SparseVector(2, {1: 1.0})),\n... LabeledPoint(0.0, SparseVector(2, {0: 0.0})),\n... LabeledPoint(1.0, SparseVector(2, {1: 2.0}))\n... ]\n>>> svm = SVMWithSGD.train(sc.parallelize(sparse_data), iterations=10)\n>>> svm.predict(SparseVector(2, {1: 1.0}))\n1\n>>> svm.predict(SparseVector(2, {0: -1.0}))\n0\n>>> import os, tempfile\n>>> path = tempfile.mkdtemp()\n>>> svm.save(sc, path)\n>>> sameModel = SVMModel.load(sc, path)\n>>> sameModel.predict(SparseVector(2, {1: 1.0}))\n1\n>>> sameModel.predict(SparseVector(2, {0: -1.0}))\n0\n>>> from shutil import rmtree\n>>> try:\n... rmtree(path)\n... except:\n... pass\n\n\nNew in version 0.9.0.\n\nclearThreshold()\n\nClears the threshold so that predict will output raw prediction scores. It is used for binary classification only.\n\nNew in version 1.4.0.\n\nproperty intercept\n\nIntercept computed for this model.\n\nNew in version 1.0.0.\n\nclassmethod load(sc, path)[source]\n\nLoad a model from the given path.\n\nNew in version 1.4.0.\n\npredict(x)[source]\n\nPredict values for a single data point or an RDD of points using the model trained.\n\nNew in version 0.9.0.\n\nsave(sc, path)[source]\n\nSave this model to the given path.\n\nNew in version 1.4.0.\n\nsetThreshold(value)\n\nSets the threshold that separates positive predictions from negative predictions. An example with prediction score greater than or equal to this threshold is identified as a positive, and negative otherwise. It is used for binary classification only.\n\nNew in version 1.4.0.\n\nproperty threshold\n\nReturns the threshold (if any) used for converting raw prediction scores into 0/1 predictions. It is used for binary classification only.\n\nNew in version 1.4.0.\n\nproperty weights\n\nWeights computed for every feature.\n\nNew in version 1.0.0.\n\nclass pyspark.mllib.classification.SVMWithSGD[source]\n\nNew in version 0.9.0.\n\nclassmethod train(data, iterations=100, step=1.0, regParam=0.01, miniBatchFraction=1.0, initialWeights=None, regType='l2', intercept=False, validateData=True, convergenceTol=0.001)[source]\n\nTrain a support vector machine on the given data.\n\nParameters\n• data – The training data, an RDD of LabeledPoint.\n\n• iterations – The number of iterations. (default: 100)\n\n• step – The step parameter used in SGD. (default: 1.0)\n\n• regParam – The regularizer parameter. (default: 0.01)\n\n• miniBatchFraction – Fraction of data to be used for each SGD iteration. (default: 1.0)\n\n• initialWeights – The initial weights. (default: None)\n\n• regType\n\nThe type of regularizer used for training our model. 
Allowed values:\n\n• ”l1” for using L1 regularization\n\n• ”l2” for using L2 regularization (default)\n\n• None for no regularization\n\n• intercept – Boolean parameter which indicates the use or not of the augmented representation for training data (i.e. whether bias features are activated or not). (default: False)\n\n• validateData – Boolean parameter which indicates if the algorithm should validate data before training. (default: True)\n\n• convergenceTol – A condition which decides iteration termination. (default: 0.001)\n\nNew in version 0.9.0.\n\nclass pyspark.mllib.classification.NaiveBayesModel(labels, pi, theta)[source]\n\nModel for Naive Bayes classifiers.\n\nParameters\n• labels – List of labels.\n\n• pi – Log of class priors, whose dimension is C, number of labels.\n\n• theta – Log of class conditional probabilities, whose dimension is C-by-D, where D is number of features.\n\n>>> data = [\n... LabeledPoint(0.0, [0.0, 0.0]),\n... LabeledPoint(0.0, [0.0, 1.0]),\n... LabeledPoint(1.0, [1.0, 0.0]),\n... ]\n>>> model = NaiveBayes.train(sc.parallelize(data))\n>>> model.predict(array([0.0, 1.0]))\n0.0\n>>> model.predict(array([1.0, 0.0]))\n1.0\n>>> model.predict(sc.parallelize([[1.0, 0.0]])).collect()\n[1.0]\n>>> sparse_data = [\n... LabeledPoint(0.0, SparseVector(2, {1: 0.0})),\n... LabeledPoint(0.0, SparseVector(2, {1: 1.0})),\n... LabeledPoint(1.0, SparseVector(2, {0: 1.0}))\n... ]\n>>> model = NaiveBayes.train(sc.parallelize(sparse_data))\n>>> model.predict(SparseVector(2, {1: 1.0}))\n0.0\n>>> model.predict(SparseVector(2, {0: 1.0}))\n1.0\n>>> import os, tempfile\n>>> path = tempfile.mkdtemp()\n>>> model.save(sc, path)\n>>> sameModel = NaiveBayesModel.load(sc, path)\n>>> sameModel.predict(SparseVector(2, {0: 1.0})) == model.predict(SparseVector(2, {0: 1.0}))\nTrue\n>>> from shutil import rmtree\n>>> try:\n... rmtree(path)\n... except OSError:\n... pass\n\n\nNew in version 0.9.0.\n\nclassmethod load(sc, path)[source]\n\nLoad a model from the given path.\n\nNew in version 1.4.0.\n\npredict(x)[source]\n\nReturn the most likely class for a data vector or an RDD of vectors\n\nNew in version 0.9.0.\n\nsave(sc, path)[source]\n\nSave this model to the given path.\n\nclass pyspark.mllib.classification.NaiveBayes[source]\n\nNew in version 0.9.0.\n\nclassmethod train(data, lambda_=1.0)[source]\n\nTrain a Naive Bayes model given an RDD of (label, features) vectors.\n\nThis is the Multinomial NB (U{http://tinyurl.com/lsdw6p}) which can handle all kinds of discrete data. For example, by converting documents into TF-IDF vectors, it can be used for document classification. By making every vector a 0-1 vector, it can also be used as Bernoulli NB (U{http://tinyurl.com/p7c96j6}). The input feature values must be nonnegative.\n\nParameters\n• data – RDD of LabeledPoint.\n\n• lambda – The smoothing parameter. (default: 1.0)\n\nNew in version 0.9.0.\n\nclass pyspark.mllib.classification.StreamingLogisticRegressionWithSGD(stepSize=0.1, numIterations=50, miniBatchFraction=1.0, regParam=0.0, convergenceTol=0.001)[source]\n\nTrain or predict a logistic regression model on streaming data. Training uses Stochastic Gradient Descent to update the model based on each new batch of incoming data from a DStream.\n\nEach batch of data is assumed to be an RDD of LabeledPoints. The number of data points per batch can vary, but the number of features must be constant. An initial weight vector must be provided.\n\nParameters\n• stepSize – Step size for each iteration of gradient descent. 
(default: 0.1)\n\n• numIterations – Number of iterations run for each batch of data. (default: 50)\n\n• miniBatchFraction – Fraction of each batch of data to use for updates. (default: 1.0)\n\n• regParam – L2 Regularization parameter. (default: 0.0)\n\n• convergenceTol – Value used to determine when to terminate iterations. (default: 0.001)\n\nNew in version 1.5.0.\n\nlatestModel()\n\nReturns the latest model.\n\nNew in version 1.5.0.\n\npredictOn(dstream)\n\nUse the model to make predictions on batches of data from a DStream.\n\nReturns\n\nDStream containing predictions.\n\nNew in version 1.5.0.\n\npredictOnValues(dstream)\n\nUse the model to make predictions on the values of a DStream and carry over its keys.\n\nReturns\n\nDStream containing the input keys and the predictions as values.\n\nNew in version 1.5.0.\n\nsetInitialWeights(initialWeights)[source]\n\nSet the initial value of weights.\n\nThis must be set before running trainOn and predictOn.\n\nNew in version 1.5.0.\n\ntrainOn(dstream)[source]\n\nTrain the model on the incoming dstream.\n\nNew in version 1.5.0.\n\n## pyspark.mllib.clustering module¶\n\nclass pyspark.mllib.clustering.BisectingKMeansModel(java_model)[source]\n\nA clustering model derived from the bisecting k-means method.\n\n>>> data = array([0.0,0.0, 1.0,1.0, 9.0,8.0, 8.0,9.0]).reshape(4, 2)\n>>> bskm = BisectingKMeans()\n>>> model = bskm.train(sc.parallelize(data, 2), k=4)\n>>> p = array([0.0, 0.0])\n>>> model.predict(p)\n0\n>>> model.k\n4\n>>> model.computeCost(p)\n0.0\n\n\nNew in version 2.0.0.\n\nproperty clusterCenters\n\nGet the cluster centers, represented as a list of NumPy arrays.\n\nNew in version 2.0.0.\n\ncomputeCost(x)[source]\n\nReturn the Bisecting K-means cost (sum of squared distances of points to their nearest center) for this model on the given data. If provided with an RDD of points returns the sum.\n\nParameters\n\npoint – A data point (or RDD of points) to compute the cost(s).\n\nNew in version 2.0.0.\n\nproperty k\n\nGet the number of clusters\n\nNew in version 2.0.0.\n\npredict(x)[source]\n\nFind the cluster that each of the points belongs to in this model.\n\nParameters\n\nx – A data point (or RDD of points) to determine cluster index.\n\nReturns\n\nPredicted cluster index or an RDD of predicted cluster indices if the input is an RDD.\n\nNew in version 2.0.0.\n\nclass pyspark.mllib.clustering.BisectingKMeans[source]\n\nA bisecting k-means algorithm based on the paper “A comparison of document clustering techniques” by Steinbach, Karypis, and Kumar, with modification to fit Spark. The algorithm starts from a single cluster that contains all points. Iteratively it finds divisible clusters on the bottom level and bisects each of them using k-means, until there are k leaf clusters in total or no leaf clusters are divisible. The bisecting steps of clusters on the same level are grouped together to increase parallelism. 
If bisecting all divisible clusters on the bottom level would result in more than k leaf clusters, larger clusters get higher priority.\n\nBased on U{http://glaros.dtc.umn.edu/gkhome/fetch/papers/docclusterKDDTMW00.pdf} Steinbach, Karypis, and Kumar, A comparison of document clustering techniques, KDD Workshop on Text Mining, 2000.\n\nNew in version 2.0.0.\n\nclassmethod train(rdd, k=4, maxIterations=20, minDivisibleClusterSize=1.0, seed=-1888008604)[source]\n\nRuns the bisecting k-means algorithm and returns the model.\n\nParameters\n• rdd – Training points as an RDD of Vector or convertible sequence types.\n\n• k – The desired number of leaf clusters. The actual number could be smaller if there are no divisible leaf clusters. (default: 4)\n\n• maxIterations – Maximum number of iterations allowed to split clusters. (default: 20)\n\n• minDivisibleClusterSize – Minimum number of points (if >= 1.0) or the minimum proportion of points (if < 1.0) of a divisible cluster. (default: 1)\n\n• seed – Random seed value for cluster initialization. (default: -1888008604 from classOf[BisectingKMeans].getName.##)\n\nNew in version 2.0.0.\n\nclass pyspark.mllib.clustering.KMeansModel(centers)[source]\n\nA clustering model derived from the k-means method.\n\n>>> data = array([0.0,0.0, 1.0,1.0, 9.0,8.0, 8.0,9.0]).reshape(4, 2)\n>>> model = KMeans.train(\n... sc.parallelize(data), 2, maxIterations=10, initializationMode="random",\n... seed=50, initializationSteps=5, epsilon=1e-4)\n>>> model.predict(array([0.0, 0.0])) == model.predict(array([1.0, 1.0]))\nTrue\n>>> model.predict(array([8.0, 9.0])) == model.predict(array([9.0, 8.0]))\nTrue\n>>> model.k\n2\n>>> model.computeCost(sc.parallelize(data))\n2.0000000000000004\n>>> model = KMeans.train(sc.parallelize(data), 2)\n>>> sparse_data = [\n... SparseVector(3, {1: 1.0}),\n... SparseVector(3, {1: 1.1}),\n... SparseVector(3, {2: 1.0}),\n... SparseVector(3, {2: 1.1})\n... ]\n>>> model = KMeans.train(sc.parallelize(sparse_data), 2, initializationMode="k-means||",\n... seed=50, initializationSteps=5, epsilon=1e-4)\n>>> model.predict(array([0., 1., 0.])) == model.predict(array([0, 1.1, 0.]))\nTrue\n>>> model.predict(array([0., 0., 1.])) == model.predict(array([0, 0, 1.1]))\nTrue\n>>> model.predict(sparse_data[0]) == model.predict(sparse_data[1])\nTrue\n>>> model.predict(sparse_data[2]) == model.predict(sparse_data[3])\nTrue\n>>> isinstance(model.clusterCenters, list)\nTrue\n>>> import os, tempfile\n>>> path = tempfile.mkdtemp()\n>>> model.save(sc, path)\n>>> sameModel = KMeansModel.load(sc, path)\n>>> sameModel.predict(sparse_data[0]) == model.predict(sparse_data[0])\nTrue\n>>> from shutil import rmtree\n>>> try:\n... rmtree(path)\n... except OSError:\n... pass\n\n>>> data = array([-383.1,-382.9, 28.7,31.2, 366.2,367.3]).reshape(3, 2)\n>>> model = KMeans.train(sc.parallelize(data), 3, maxIterations=0,\n... 
initialModel = KMeansModel([(-1000.0,-1000.0),(5.0,5.0),(1000.0,1000.0)]))\n>>> model.clusterCenters\n[array([-1000., -1000.]), array([ 5., 5.]), array([ 1000., 1000.])]\n\n\nNew in version 0.9.0.\n\nproperty clusterCenters\n\nGet the cluster centers, represented as a list of NumPy arrays.\n\nNew in version 1.0.0.\n\ncomputeCost(rdd)[source]\n\nReturn the K-means cost (sum of squared distances of points to their nearest center) for this model on the given data.\n\nParameters\n\nrdd – The RDD of points to compute the cost on.\n\nNew in version 1.4.0.\n\nproperty k\n\nTotal number of clusters.\n\nNew in version 1.4.0.\n\nclassmethod load(sc, path)[source]\n\nLoad a model from the given path.\n\nNew in version 1.4.0.\n\npredict(x)[source]\n\nFind the cluster that each of the points belongs to in this model.\n\nParameters\n\nx – A data point (or RDD of points) to determine cluster index.\n\nReturns\n\nPredicted cluster index or an RDD of predicted cluster indices if the input is an RDD.\n\nNew in version 0.9.0.\n\nsave(sc, path)[source]\n\nSave this model to the given path.\n\nNew in version 1.4.0.\n\nclass pyspark.mllib.clustering.KMeans[source]\n\nNew in version 0.9.0.\n\nclassmethod train(rdd, k, maxIterations=100, runs=1, initializationMode='k-means||', seed=None, initializationSteps=2, epsilon=0.0001, initialModel=None)[source]\n\nTrain a k-means clustering model.\n\nParameters\n• rdd – Training points as an RDD of Vector or convertible sequence types.\n\n• k – Number of clusters to create.\n\n• maxIterations – Maximum number of iterations allowed. (default: 100)\n\n• runs – This param has no effect since Spark 2.0.0.\n\n• initializationMode – The initialization algorithm. This can be either “random” or “k-means||”. (default: “k-means||”)\n\n• seed – Random seed value for cluster initialization. Set as None to generate seed based on system time. (default: None)\n\n• initializationSteps – Number of steps for the k-means|| initialization mode. This is an advanced setting – the default of 2 is almost always enough. (default: 2)\n\n• epsilon – Distance threshold within which a center will be considered to have converged. If all centers move less than this Euclidean distance, iterations are stopped. (default: 1e-4)\n\n• initialModel – Initial cluster centers can be provided as a KMeansModel object rather than using the random or k-means|| initializationModel. (default: None)\n\nNew in version 0.9.0.\n\nclass pyspark.mllib.clustering.GaussianMixtureModel(java_model)[source]\n\nA clustering model derived from the Gaussian Mixture Model method.\n\n>>> from pyspark.mllib.linalg import Vectors, DenseMatrix\n>>> from numpy.testing import assert_equal\n>>> from shutil import rmtree\n>>> import os, tempfile\n\n>>> clusterdata_1 = sc.parallelize(array([-0.1,-0.05,-0.01,-0.1,\n... 0.9,0.8,0.75,0.935,\n... -0.83,-0.68,-0.91,-0.76 ]).reshape(6, 2), 2)\n>>> model = GaussianMixture.train(clusterdata_1, 3, convergenceTol=0.0001,\n... maxIterations=50, seed=10)\n>>> labels = model.predict(clusterdata_1).collect()\n>>> labels[0]==labels[1]\nFalse\n>>> labels[1]==labels[2]\nFalse\n>>> labels[4]==labels[5]\nTrue\n>>> model.predict([-0.1,-0.05])\n0\n>>> softPredicted = model.predictSoft([-0.1,-0.05])\n>>> abs(softPredicted[0] - 1.0) < 0.001\nTrue\n>>> abs(softPredicted[1] - 0.0) < 0.001\nTrue\n>>> abs(softPredicted[2] - 0.0) < 0.001\nTrue\n\n>>> path = tempfile.mkdtemp()\n>>> model.save(sc, path)\n>>> sameModel = GaussianMixtureModel.load(sc, path)\n>>> assert_equal(model.weights, sameModel.weights)\n>>> mus, sigmas = list(\n... 
zip(*[(g.mu, g.sigma) for g in model.gaussians]))\n>>> sameMus, sameSigmas = list(\n... zip(*[(g.mu, g.sigma) for g in sameModel.gaussians]))\n>>> mus == sameMus\nTrue\n>>> sigmas == sameSigmas\nTrue\n>>> from shutil import rmtree\n>>> try:\n... rmtree(path)\n... except OSError:\n... pass\n\n>>> data = array([-5.1971, -2.5359, -3.8220,\n... -5.2211, -5.0602, 4.7118,\n... 6.8989, 3.4592, 4.6322,\n... 5.7048, 4.6567, 5.5026,\n... 4.5605, 5.2043, 6.2734])\n>>> clusterdata_2 = sc.parallelize(data.reshape(5,3))\n>>> model = GaussianMixture.train(clusterdata_2, 2, convergenceTol=0.0001,\n... maxIterations=150, seed=4)\n>>> labels = model.predict(clusterdata_2).collect()\n>>> labels[0]==labels[1]\nTrue\n>>> labels[2]==labels[3]==labels[4]\nTrue\n\n\nNew in version 1.3.0.\n\nproperty gaussians\n\nArray of MultivariateGaussian where gaussians[i] represents the Multivariate Gaussian (Normal) Distribution for Gaussian i.\n\nNew in version 1.4.0.\n\nproperty k\n\nNumber of gaussians in mixture.\n\nNew in version 1.4.0.\n\nclassmethod load(sc, path)[source]\n\nParameters\n• sc – SparkContext.\n\n• path – Path to where the model is stored.\n\nNew in version 1.5.0.\n\npredict(x)[source]\n\nFind the cluster to which the point ‘x’ or each point in RDD ‘x’ has maximum membership in this model.\n\nParameters\n\nx – A feature vector or an RDD of vectors representing data points.\n\nReturns\n\nPredicted cluster label or an RDD of predicted cluster labels if the input is an RDD.\n\nNew in version 1.3.0.\n\npredictSoft(x)[source]\n\nFind the membership of point ‘x’ or each point in RDD ‘x’ to all mixture components.\n\nParameters\n\nx – A feature vector or an RDD of vectors representing data points.\n\nReturns\n\nThe membership value to all mixture components for vector ‘x’ or each vector in RDD ‘x’.\n\nNew in version 1.3.0.\n\nproperty weights\n\nWeights for each Gaussian distribution in the mixture, where weights[i] is the weight for Gaussian i, and weights.sum == 1.\n\nNew in version 1.4.0.\n\nclass pyspark.mllib.clustering.GaussianMixture[source]\n\nLearning algorithm for Gaussian Mixtures using the expectation-maximization algorithm.\n\nNew in version 1.3.0.\n\nclassmethod train(rdd, k, convergenceTol=0.001, maxIterations=100, seed=None, initialModel=None)[source]\n\nTrain a Gaussian Mixture clustering model.\n\nParameters\n• rdd – Training points as an RDD of Vector or convertible sequence types.\n\n• k – Number of independent Gaussians in the mixture model.\n\n• convergenceTol – Maximum change in log-likelihood at which convergence is considered to have occurred. (default: 1e-3)\n\n• maxIterations – Maximum number of iterations allowed. (default: 100)\n\n• seed – Random seed for initial Gaussian distribution. Set as None to generate seed based on system time. (default: None)\n\n• initialModel – Initial GMM starting point, bypassing the random initialization. (default: None)\n\nNew in version 1.3.0.\n\nclass pyspark.mllib.clustering.PowerIterationClusteringModel(java_model)[source]\n\nModel produced by [[PowerIterationClustering]].\n\n>>> import math\n>>> def genCircle(r, n):\n... points = []\n... for i in range(0, n):\n... theta = 2.0 * math.pi * i / n\n... points.append((r * math.cos(theta), r * math.sin(theta)))\n... return points\n>>> def sim(x, y):\n... dist2 = (x[0] - y[0]) * (x[0] - y[0]) + (x[1] - y[1]) * (x[1] - y[1])\n... 
return math.exp(-dist2 / 2.0)\n>>> r1 = 1.0\n>>> n1 = 10\n>>> r2 = 4.0\n>>> n2 = 40\n>>> n = n1 + n2\n>>> points = genCircle(r1, n1) + genCircle(r2, n2)\n>>> similarities = [(i, j, sim(points[i], points[j])) for i in range(1, n) for j in range(0, i)]\n>>> rdd = sc.parallelize(similarities, 2)\n>>> model = PowerIterationClustering.train(rdd, 2, 40)\n>>> model.k\n2\n>>> result = sorted(model.assignments().collect(), key=lambda x: x.id)\n>>> result[0].cluster == result[1].cluster == result[2].cluster == result[3].cluster\nTrue\n>>> result[4].cluster == result[5].cluster == result[6].cluster == result[7].cluster\nTrue\n>>> import os, tempfile\n>>> path = tempfile.mkdtemp()\n>>> model.save(sc, path)\n>>> sameModel = PowerIterationClusteringModel.load(sc, path)\n>>> sameModel.k\n2\n>>> result = sorted(model.assignments().collect(), key=lambda x: x.id)\n>>> result[0].cluster == result[1].cluster == result[2].cluster == result[3].cluster\nTrue\n>>> result[4].cluster == result[5].cluster == result[6].cluster == result[7].cluster\nTrue\n>>> from shutil import rmtree\n>>> try:\n... rmtree(path)\n... except OSError:\n... pass\n\n\nNew in version 1.5.0.\n\nassignments()[source]\n\nReturns the cluster assignments of this model.\n\nNew in version 1.5.0.\n\nproperty k\n\nReturns the number of clusters.\n\nNew in version 1.5.0.\n\nclassmethod load(sc, path)[source]\n\nLoad a model from the given path.\n\nNew in version 1.5.0.\n\nclass pyspark.mllib.clustering.PowerIterationClustering[source]\n\nPower Iteration Clustering (PIC), a scalable graph clustering algorithm developed by [[http://www.cs.cmu.edu/~frank/papers/icml2010-pic-final.pdf Lin and Cohen]]. From the abstract: PIC finds a very low-dimensional embedding of a dataset using truncated power iteration on a normalized pair-wise similarity matrix of the data.\n\nNew in version 1.5.0.\n\nclass Assignment[source]\n\nRepresents an (id, cluster) tuple.\n\nNew in version 1.5.0.\n\nclassmethod train(rdd, k, maxIterations=100, initMode='random')[source]\nParameters\n• rdd – An RDD of (i, j, sij) tuples representing the affinity matrix, which is the matrix A in the PIC paper. The similarity sij must be nonnegative. This is a symmetric matrix and hence sij = sji. For any (i, j) with nonzero similarity, there should be either (i, j, sij) or (j, i, sji) in the input. Tuples with i = j are ignored, because it is assumed sij = 0.0.\n\n• k – Number of clusters.\n\n• maxIterations – Maximum number of iterations of the PIC algorithm. (default: 100)\n\n• initMode – Initialization mode. This can be either “random” to use a random vector as vertex properties, or “degree” to use normalized sum similarities. (default: “random”)\n\nNew in version 1.5.0.\n\nclass pyspark.mllib.clustering.StreamingKMeans(k=2, decayFactor=1.0, timeUnit='batches')[source]\n\nProvides methods to set k, decayFactor, timeUnit to configure the KMeans algorithm for fitting and predicting on incoming dstreams. More details on how the centroids are updated are provided under the docs of StreamingKMeansModel.\n\nParameters\n• k – Number of clusters. (default: 2)\n\n• decayFactor – Forgetfulness of the previous centroids. (default: 1.0)\n\n• timeUnit – Can be “batches” or “points”. If points, then the decay factor is raised to the power of number of new points and if batches, then decay factor will be used as is. (default: “batches”)\n\nNew in version 1.5.0.\n\nlatestModel()[source]\n\nReturn the latest model\n\nNew in version 1.5.0.\n\npredictOn(dstream)[source]\n\nMake predictions on a dstream. 
Returns a transformed dstream object\n\nNew in version 1.5.0.\n\npredictOnValues(dstream)[source]\n\nMake predictions on a keyed dstream. Returns a transformed dstream object.\n\nNew in version 1.5.0.\n\nsetDecayFactor(decayFactor)[source]\n\nSet decay factor.\n\nNew in version 1.5.0.\n\nsetHalfLife(halfLife, timeUnit)[source]\n\nSet number of batches after which the centroids of that particular batch has half the weightage.\n\nNew in version 1.5.0.\n\nsetInitialCenters(centers, weights)[source]\n\nSet initial centers. Should be set before calling trainOn.\n\nNew in version 1.5.0.\n\nsetK(k)[source]\n\nSet number of clusters.\n\nNew in version 1.5.0.\n\nsetRandomCenters(dim, weight, seed)[source]\n\nSet the initial centres to be random samples from a gaussian population with constant weights.\n\nNew in version 1.5.0.\n\ntrainOn(dstream)[source]\n\nTrain the model on the incoming dstream.\n\nNew in version 1.5.0.\n\nclass pyspark.mllib.clustering.StreamingKMeansModel(clusterCenters, clusterWeights)[source]\n\nClustering model which can perform an online update of the centroids.\n\nThe update formula for each centroid is given by\n\n• c_t+1 = ((c_t * n_t * a) + (x_t * m_t)) / (n_t + m_t)\n\n• n_t+1 = n_t * a + m_t\n\nwhere\n\n• c_t: Centroid at the n_th iteration.\n\n• n_t: Number of samples (or) weights associated with the centroid\n\nat the n_th iteration.\n\n• x_t: Centroid of the new data closest to c_t.\n\n• m_t: Number of samples (or) weights of the new data closest to c_t\n\n• c_t+1: New centroid.\n\n• n_t+1: New number of weights.\n\n• a: Decay Factor, which gives the forgetfulness.\n\nNote\n\nIf a is set to 1, it is the weighted mean of the previous and new data. If it set to zero, the old centroids are completely forgotten.\n\nParameters\n• clusterCenters – Initial cluster centers.\n\n• clusterWeights – List of weights assigned to each cluster.\n\n>>> initCenters = [[0.0, 0.0], [1.0, 1.0]]\n>>> initWeights = [1.0, 1.0]\n>>> stkm = StreamingKMeansModel(initCenters, initWeights)\n>>> data = sc.parallelize([[-0.1, -0.1], [0.1, 0.1],\n... [0.9, 0.9], [1.1, 1.1]])\n>>> stkm = stkm.update(data, 1.0, u\"batches\")\n>>> stkm.centers\narray([[ 0., 0.],\n[ 1., 1.]])\n>>> stkm.predict([-0.1, -0.1])\n0\n>>> stkm.predict([0.9, 0.9])\n1\n>>> stkm.clusterWeights\n[3.0, 3.0]\n>>> decayFactor = 0.0\n>>> data = sc.parallelize([DenseVector([1.5, 1.5]), DenseVector([0.2, 0.2])])\n>>> stkm = stkm.update(data, 0.0, u\"batches\")\n>>> stkm.centers\narray([[ 0.2, 0.2],\n[ 1.5, 1.5]])\n>>> stkm.clusterWeights\n[1.0, 1.0]\n>>> stkm.predict([0.2, 0.2])\n0\n>>> stkm.predict([1.5, 1.5])\n1\n\n\nNew in version 1.5.0.\n\nproperty clusterWeights\n\nReturn the cluster weights.\n\nNew in version 1.5.0.\n\nupdate(data, decayFactor, timeUnit)[source]\n\nUpdate the centroids, according to data\n\nParameters\n• data – RDD with new data for the model update.\n\n• decayFactor – Forgetfulness of the previous centroids.\n\n• timeUnit – Can be “batches” or “points”. If points, then the decay factor is raised to the power of number of new points and if batches, then decay factor will be used as is.\n\nNew in version 1.5.0.\n\nclass pyspark.mllib.clustering.LDA[source]\n\nNew in version 1.5.0.\n\nclassmethod train(rdd, k=10, maxIterations=20, docConcentration=-1.0, topicConcentration=-1.0, seed=None, checkpointInterval=10, optimizer='em')[source]\n\nTrain a LDA model.\n\nParameters\n• rdd – RDD of documents, which are tuples of document IDs and term (word) count vectors. 
latestModel()[source]\n\nReturn the latest model.\n\nNew in version 1.5.0.\n\npredictOn(dstream)[source]\n\nMake predictions on a dstream. Returns a transformed dstream object.\n\nNew in version 1.5.0.\n\npredictOnValues(dstream)[source]\n\nMake predictions on a keyed dstream. Returns a transformed dstream object.\n\nNew in version 1.5.0.\n\nsetDecayFactor(decayFactor)[source]\n\nSet decay factor.\n\nNew in version 1.5.0.\n\nsetHalfLife(halfLife, timeUnit)[source]\n\nSet the number of batches after which the centroids of a particular batch have half their weight.\n\nNew in version 1.5.0.\n\nsetInitialCenters(centers, weights)[source]\n\nSet initial centers. Should be set before calling trainOn.\n\nNew in version 1.5.0.\n\nsetK(k)[source]\n\nSet number of clusters.\n\nNew in version 1.5.0.\n\nsetRandomCenters(dim, weight, seed)[source]\n\nSet the initial centers to be random samples from a Gaussian population with constant weights.\n\nNew in version 1.5.0.\n\ntrainOn(dstream)[source]\n\nTrain the model on the incoming dstream.\n\nNew in version 1.5.0.\n\nclass pyspark.mllib.clustering.StreamingKMeansModel(clusterCenters, clusterWeights)[source]\n\nClustering model which can perform an online update of the centroids.\n\nThe update formula for each centroid is given by\n\n• c_t+1 = ((c_t * n_t * a) + (x_t * m_t)) / (n_t * a + m_t)\n\n• n_t+1 = n_t * a + m_t\n\nwhere\n\n• c_t: Centroid at the t-th iteration.\n\n• n_t: Number of samples (or) weights associated with the centroid at the t-th iteration.\n\n• x_t: Centroid of the new data closest to c_t.\n\n• m_t: Number of samples (or) weights of the new data closest to c_t.\n\n• c_t+1: New centroid.\n\n• n_t+1: New number of weights.\n\n• a: Decay factor, which gives the forgetfulness.\n\nNote\n\nIf a is set to 1, it is the weighted mean of the previous and new data. If it is set to zero, the old centroids are completely forgotten.\n\nParameters\n• clusterCenters – Initial cluster centers.\n\n• clusterWeights – List of weights assigned to each cluster.\n\n>>> initCenters = [[0.0, 0.0], [1.0, 1.0]]\n>>> initWeights = [1.0, 1.0]\n>>> stkm = StreamingKMeansModel(initCenters, initWeights)\n>>> data = sc.parallelize([[-0.1, -0.1], [0.1, 0.1],\n... [0.9, 0.9], [1.1, 1.1]])\n>>> stkm = stkm.update(data, 1.0, u"batches")\n>>> stkm.centers\narray([[ 0., 0.],\n[ 1., 1.]])\n>>> stkm.predict([-0.1, -0.1])\n0\n>>> stkm.predict([0.9, 0.9])\n1\n>>> stkm.clusterWeights\n[3.0, 3.0]\n>>> decayFactor = 0.0\n>>> data = sc.parallelize([DenseVector([1.5, 1.5]), DenseVector([0.2, 0.2])])\n>>> stkm = stkm.update(data, 0.0, u"batches")\n>>> stkm.centers\narray([[ 0.2, 0.2],\n[ 1.5, 1.5]])\n>>> stkm.clusterWeights\n[1.0, 1.0]\n>>> stkm.predict([0.2, 0.2])\n0\n>>> stkm.predict([1.5, 1.5])\n1\n\n\nNew in version 1.5.0.\n\nproperty clusterWeights\n\nReturn the cluster weights.\n\nNew in version 1.5.0.\n\nupdate(data, decayFactor, timeUnit)[source]\n\nUpdate the centroids, according to data.\n\nParameters\n• data – RDD with new data for the model update.\n\n• decayFactor – Forgetfulness of the previous centroids.\n\n• timeUnit – Can be “batches” or “points”. If points, then the decay factor is raised to the power of number of new points and if batches, then decay factor will be used as is.\n\nNew in version 1.5.0.
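A quick arithmetic check of the update rule documented above (plain Python, no Spark required; the numbers are made up):\n\n>>> c_t, n_t, a = 1.0, 2.0, 0.5   # old centroid, old weight, decay factor\n>>> x_t, m_t = 3.0, 2.0           # centroid and weight of the new data\n>>> ((c_t * n_t * a) + (x_t * m_t)) / (n_t * a + m_t)   # c_t+1\n2.3333333333333335\n>>> n_t * a + m_t                                       # n_t+1\n3.0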
class pyspark.mllib.clustering.LDA[source]\n\nNew in version 1.5.0.\n\nclassmethod train(rdd, k=10, maxIterations=20, docConcentration=-1.0, topicConcentration=-1.0, seed=None, checkpointInterval=10, optimizer='em')[source]\n\nTrain an LDA model.\n\nParameters\n• rdd – RDD of documents, which are tuples of document IDs and term (word) count vectors. The term count vectors are “bags of words” with a fixed-size vocabulary (where the vocabulary size is the length of the vector). Document IDs must be unique and >= 0.\n\n• k – Number of topics to infer, i.e., the number of soft cluster centers. (default: 10)\n\n• maxIterations – Maximum number of iterations allowed. (default: 20)\n\n• docConcentration – Concentration parameter (commonly named “alpha”) for the prior placed on documents’ distributions over topics (“theta”). (default: -1.0)\n\n• topicConcentration – Concentration parameter (commonly named “beta” or “eta”) for the prior placed on topics’ distributions over terms. (default: -1.0)\n\n• seed – Random seed for cluster initialization. Set as None to generate seed based on system time. (default: None)\n\n• checkpointInterval – Period (in iterations) between checkpoints. (default: 10)\n\n• optimizer – LDAOptimizer used to perform the actual calculation. Currently “em”, “online” are supported. (default: “em”)\n\nNew in version 1.5.0.\n\nclass pyspark.mllib.clustering.LDAModel(java_model)[source]\n\nA clustering model derived from the LDA method.\n\nLatent Dirichlet Allocation (LDA), a topic model designed for text documents. Terminology:\n\n• “word” = “term”: an element of the vocabulary\n\n• “token”: instance of a term appearing in a document\n\n• “topic”: multinomial distribution over words representing some concept\n\nReference: the original LDA paper (journal version): Blei, Ng, and Jordan. “Latent Dirichlet Allocation.” JMLR, 2003.\n\n>>> from pyspark.mllib.linalg import Vectors\n>>> from numpy.testing import assert_almost_equal, assert_equal\n>>> data = [\n... [1, Vectors.dense([0.0, 1.0])],\n... [2, SparseVector(2, {0: 1.0})],\n... ]\n>>> rdd = sc.parallelize(data)\n>>> model = LDA.train(rdd, k=2, seed=1)\n>>> model.vocabSize()\n2\n>>> model.describeTopics()\n[([1, 0], [0.5..., 0.49...]), ([0, 1], [0.5..., 0.49...])]\n>>> model.describeTopics(1)\n[([1], [0.5...]), ([0], [0.5...])]\n\n>>> topics = model.topicsMatrix()\n>>> topics_expect = array([[0.5, 0.5], [0.5, 0.5]])\n>>> assert_almost_equal(topics, topics_expect, 1)\n\n>>> import os, tempfile\n>>> from shutil import rmtree\n>>> path = tempfile.mkdtemp()\n>>> model.save(sc, path)\n>>> sameModel = LDAModel.load(sc, path)\n>>> assert_equal(sameModel.topicsMatrix(), model.topicsMatrix())\n>>> sameModel.vocabSize() == model.vocabSize()\nTrue\n>>> try:\n... rmtree(path)\n... except OSError:\n... pass\n\n\nNew in version 1.5.0.\n\ndescribeTopics(maxTermsPerTopic=None)[source]\n\nReturn the topics described by weighted terms.\n\nWARNING: If vocabSize and k are large, this can return a large object!\n\nParameters\n\nmaxTermsPerTopic – Maximum number of terms to collect for each topic. (default: vocabulary size)\n\nReturns\n\nArray over topics. Each topic is represented as a pair of matching arrays: (term indices, term weights in topic). Each topic’s terms are sorted in order of decreasing weight.\n\nNew in version 1.6.0.\n\nclassmethod load(sc, path)[source]\n\nParameters\n• sc – SparkContext.\n\n• path – Path to where the model is stored.\n\nNew in version 1.5.0.\n\ntopicsMatrix()[source]\n\nInferred topics, where each topic is represented by a distribution over terms.\n\nNew in version 1.5.0.\n\nvocabSize()[source]\n\nVocabulary size (number of terms in the vocabulary).\n\nNew in version 1.5.0.\n\n## pyspark.mllib.evaluation module¶\n\nclass pyspark.mllib.evaluation.BinaryClassificationMetrics(scoreAndLabels)[source]\n\nEvaluator for binary classification.\n\nParameters\n\nscoreAndLabels – an RDD of (score, label) pairs\n\n>>> scoreAndLabels = sc.parallelize([\n... (0.1, 0.0), (0.1, 1.0), (0.4, 0.0), (0.6, 0.0), (0.6, 1.0), (0.6, 1.0), (0.8, 1.0)], 2)\n>>> metrics = BinaryClassificationMetrics(scoreAndLabels)\n>>> metrics.areaUnderROC\n0.70...\n>>> metrics.areaUnderPR\n0.83...\n>>> metrics.unpersist()\n\n\nNew in version 1.4.0.\n\nproperty areaUnderPR\n\nComputes the area under the precision-recall curve.\n\nNew in version 1.4.0.\n\nproperty areaUnderROC\n\nComputes the area under the receiver operating characteristic (ROC) curve.\n\nNew in version 1.4.0.\n\nunpersist()[source]\n\nUnpersists intermediate RDDs used in the computation.\n\nNew in version 1.4.0.\n\nclass pyspark.mllib.evaluation.RegressionMetrics(predictionAndObservations)[source]\n\nEvaluator for regression.\n\nParameters\n\npredictionAndObservations – an RDD of (prediction, observation) pairs.\n\n>>> predictionAndObservations = sc.parallelize([\n... (2.5, 3.0), (0.0, -0.5), (2.0, 2.0), (8.0, 7.0)])\n>>> metrics = RegressionMetrics(predictionAndObservations)\n>>> metrics.explainedVariance\n8.859...\n>>> metrics.meanAbsoluteError\n0.5...\n>>> metrics.meanSquaredError\n0.37...\n>>> metrics.rootMeanSquaredError\n0.61...\n>>> metrics.r2\n0.94...\n\n\nNew in version 1.4.0.\n\nproperty explainedVariance\n\nReturns the explained variance regression score. explainedVariance = $$1 - \\frac{variance(y - \\hat{y})}{variance(y)}$$\n\nNew in version 1.4.0.\n\nproperty meanAbsoluteError\n\nReturns the mean absolute error, which is a risk function corresponding to the expected value of the absolute error loss or l1-norm loss.\n\nNew in version 1.4.0.\n\nproperty meanSquaredError\n\nReturns the mean squared error, which is a risk function corresponding to the expected value of the squared error loss or quadratic loss.\n\nNew in version 1.4.0.\n\nproperty r2\n\nReturns R^2, the coefficient of determination.\n\nNew in version 1.4.0.\n\nproperty rootMeanSquaredError\n\nReturns the root mean squared error, which is defined as the square root of the mean squared error.\n\nNew in version 1.4.0.\n\nclass pyspark.mllib.evaluation.MulticlassMetrics(predictionAndLabels)[source]\n\nEvaluator for multiclass classification.\n\nParameters\n\npredictionAndLabels – an RDD of (prediction, label) pairs.\n\n>>> predictionAndLabels = sc.parallelize([(0.0, 0.0), (0.0, 1.0), (0.0, 0.0),\n... (1.0, 0.0), (1.0, 1.0), (1.0, 1.0), (1.0, 1.0), (2.0, 2.0), (2.0, 0.0)])\n>>> metrics = MulticlassMetrics(predictionAndLabels)\n>>> metrics.confusionMatrix().toArray()\narray([[ 2., 1., 1.],\n[ 1., 3., 0.],\n[ 0., 0., 1.]])\n>>> metrics.falsePositiveRate(0.0)\n0.2...\n>>> metrics.precision(1.0)\n0.75...\n>>> metrics.recall(2.0)\n1.0...\n>>> metrics.fMeasure(0.0, 2.0)\n0.52...\n>>> metrics.accuracy\n0.66...\n>>> metrics.weightedFalsePositiveRate\n0.19...\n>>> metrics.weightedPrecision\n0.68...\n>>> metrics.weightedRecall\n0.66...\n>>> metrics.weightedFMeasure()\n0.66...\n>>> metrics.weightedFMeasure(2.0)\n0.65...\n\n\nNew in version 1.4.0.\n\nproperty accuracy\n\nReturns accuracy (equals to the total number of correctly classified instances out of the total number of instances).\n\nNew in version 2.0.0.\n\nconfusionMatrix()[source]\n\nReturns confusion matrix: predicted classes are in columns, they are ordered by class label ascending, as in “labels”.\n\nNew in version 1.4.0.\n\nfMeasure(label=None, beta=None)[source]\n\nReturns f-measure or f-measure for a given label (category) if specified.\n\nNew in version 1.4.0.
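A hand check of fMeasure(0.0, 2.0) from the example above, assuming the standard F-beta definition (this derivation is not part of the upstream docs): from the confusion matrix, precision(0.0) = 2/3 and recall(0.0) = 2/4.\n\n>>> p, r, beta = 2.0 / 3.0, 2.0 / 4.0, 2.0\n>>> (1 + beta**2) * p * r / (beta**2 * p + r)   # matches the 0.52... above\n0.5263157894736842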
falsePositiveRate(label)[source]\n\nReturns false positive rate for a given label (category).\n\nNew in version 1.4.0.\n\nprecision(label=None)[source]\n\nReturns precision or precision for a given label (category) if specified.\n\nNew in version 1.4.0.\n\nrecall(label=None)[source]\n\nReturns recall or recall for a given label (category) if specified.\n\nNew in version 1.4.0.\n\ntruePositiveRate(label)[source]\n\nReturns true positive rate for a given label (category).\n\nNew in version 1.4.0.\n\nweightedFMeasure(beta=None)[source]\n\nReturns weighted averaged f-measure.\n\nNew in version 1.4.0.\n\nproperty weightedFalsePositiveRate\n\nReturns weighted false positive rate.\n\nNew in version 1.4.0.\n\nproperty weightedPrecision\n\nReturns weighted averaged precision.\n\nNew in version 1.4.0.\n\nproperty weightedRecall\n\nReturns weighted averaged recall. (equal to precision, recall and f-measure)\n\nNew in version 1.4.0.\n\nproperty weightedTruePositiveRate\n\nReturns weighted true positive rate. (equal to precision, recall and f-measure)\n\nNew in version 1.4.0.\n\nclass pyspark.mllib.evaluation.RankingMetrics(predictionAndLabels)[source]\n\nEvaluator for ranking algorithms.\n\nParameters\n\npredictionAndLabels – an RDD of (predicted ranking, ground truth set) pairs.\n\n>>> predictionAndLabels = sc.parallelize([\n... ([1, 6, 2, 7, 8, 3, 9, 10, 4, 5], [1, 2, 3, 4, 5]),\n... ([4, 1, 5, 6, 2, 7, 3, 8, 9, 10], [1, 2, 3]),\n... ([1, 2, 3, 4, 5], [])])\n>>> metrics = RankingMetrics(predictionAndLabels)\n>>> metrics.precisionAt(1)\n0.33...\n>>> metrics.precisionAt(5)\n0.26...\n>>> metrics.precisionAt(15)\n0.17...\n>>> metrics.meanAveragePrecision\n0.35...\n>>> metrics.ndcgAt(3)\n0.33...\n>>> metrics.ndcgAt(10)\n0.48...\n\n\nNew in version 1.4.0.\n\nproperty meanAveragePrecision\n\nReturns the mean average precision (MAP) of all the queries. If a query has an empty ground truth set, the average precision will be zero and a log warning is generated.\n\nNew in version 1.4.0.\n\nndcgAt(k)[source]\n\nCompute the average NDCG value of all the queries, truncated at ranking position k. The discounted cumulative gain at position k is computed as sum_{i=1}^{k} (2^{relevance of the i-th item} - 1) / log(i + 1), and the NDCG is obtained by dividing it by the DCG value on the ground truth set. In the current implementation, the relevance value is binary. If a query has an empty ground truth set, zero will be used as NDCG together with a log warning.\n\nNew in version 1.4.0.\n\nprecisionAt(k)[source]\n\nCompute the average precision of all the queries, truncated at ranking position k.\n\nIf for a query, the ranking algorithm returns n (n < k) results, the precision value will be computed as #(relevant items retrieved) / k. This formula also applies when the size of the ground truth set is less than k.\n\nIf a query has an empty ground truth set, zero will be used as precision together with a log warning.\n\nNew in version 1.4.0.
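A hand check of precisionAt(1) from the example above (not part of the upstream docs): the three queries' top-1 items are 1 (relevant), 4 (not in the ground truth set {1, 2, 3}), and 1 (the empty ground truth set contributes zero), so the average is:\n\n>>> (1.0 + 0.0 + 0.0) / 3   # matches the 0.33... above\n0.3333333333333333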
## pyspark.mllib.feature module¶\n\nPython package for feature in MLlib.\n\nclass pyspark.mllib.feature.Normalizer(p=2.0)[source]\n\nBases: pyspark.mllib.feature.VectorTransformer\n\nNormalizes samples individually to unit L^p norm.\n\nFor any 1 <= p < float('inf'), normalizes samples using sum(abs(vector)^p)^(1/p) as norm.\n\nFor p = float('inf'), max(abs(vector)) will be used as norm for normalization.\n\nParameters\n\np – Normalization in L^p space, p = 2 by default.\n\n>>> v = Vectors.dense(range(3))\n>>> nor = Normalizer(1)\n>>> nor.transform(v)\nDenseVector([0.0, 0.3333, 0.6667])\n\n>>> rdd = sc.parallelize([v])\n>>> nor.transform(rdd).collect()\n[DenseVector([0.0, 0.3333, 0.6667])]\n\n>>> nor2 = Normalizer(float(\"inf\"))\n>>> nor2.transform(v)\nDenseVector([0.0, 0.5, 1.0])\n\n\nNew in version 1.2.0.\n\ntransform(vector)[source]\n\nApplies unit length normalization on a vector.\n\nParameters\n\nvector – vector or RDD of vector to be normalized.\n\nReturns\n\nnormalized vector. If the norm of the input is zero, it will return the input vector.\n\nNew in version 1.2.0.\n\nclass pyspark.mllib.feature.StandardScalerModel(java_model)[source]\n\nBases: pyspark.mllib.feature.JavaVectorTransformer\n\nRepresents a StandardScaler model that can transform vectors.\n\nNew in version 1.2.0.\n\nproperty mean\n\nReturn the column mean values.\n\nNew in version 2.0.0.\n\nsetWithMean(withMean)[source]\n\nSetter of the boolean which decides whether it uses mean or not.\n\nNew in version 1.4.0.\n\nsetWithStd(withStd)[source]\n\nSetter of the boolean which decides whether it uses std or not.\n\nNew in version 1.4.0.\n\nproperty std\n\nReturn the column standard deviation values.\n\nNew in version 2.0.0.\n\ntransform(vector)[source]\n\nApplies standardization transformation on a vector.\n\nNote\n\nIn Python, transform cannot currently be used within an RDD transformation or action. Call transform directly on the RDD instead.\n\nParameters\n\nvector – Vector or RDD of Vector to be standardized.\n\nReturns\n\nStandardized vector. If the variance of a column is zero, it will return default 0.0 for the column with zero variance.\n\nNew in version 1.2.0.\n\nproperty withMean\n\nReturns if the model centers the data before scaling.\n\nNew in version 2.0.0.\n\nproperty withStd\n\nReturns if the model scales the data to unit standard deviation.\n\nNew in version 2.0.0.\n\nclass pyspark.mllib.feature.StandardScaler(withMean=False, withStd=True)[source]\n\nBases: object\n\nStandardizes features by removing the mean and scaling to unit variance using column summary statistics on the samples in the training set.\n\nParameters\n• withMean – False by default. Centers the data with mean before scaling. It will build a dense output, so take care when applying to sparse input.\n\n• withStd – True by default. Scales the data to unit standard deviation.\n\n>>> vs = [Vectors.dense([-2.0, 2.3, 0]), Vectors.dense([3.8, 0.0, 1.9])]\n>>> dataset = sc.parallelize(vs)\n>>> standardizer = StandardScaler(True, True)\n>>> model = standardizer.fit(dataset)\n>>> result = model.transform(dataset)\n>>> for r in result.collect(): r\nDenseVector([-0.7071, 0.7071, -0.7071])\nDenseVector([0.7071, -0.7071, 0.7071])\n>>> int(model.std[0])\n4\n>>> int(model.mean[0]*10)\n9\n>>> model.withStd\nTrue\n>>> model.withMean\nTrue\n\n\nNew in version 1.2.0.\n\nfit(dataset)[source]\n\nComputes the mean and variance and stores as a model to be used for later scaling.\n\nParameters\n\ndataset – The data used to compute the mean and variance to build the transformation model.\n\nReturns\n\na StandardScalerModel\n\nNew in version 1.2.0.\n\nclass pyspark.mllib.feature.HashingTF(numFeatures=1048576)[source]\n\nBases: object\n\nMaps a sequence of terms to their term frequencies using the hashing trick.\n\nNote\n\nThe terms must be hashable (cannot be dict/set/list, …).\n\nParameters\n\nnumFeatures – number of features (default: 2^20)\n\n>>> htf = HashingTF(100)\n>>> doc = \"a a b b c d\".split(\" \")\n>>> htf.transform(doc)\nSparseVector(100, {...})\n\n\nNew in version 1.2.0.\n\nindexOf(term)[source]\n\nReturns the index of the input term.\n\nNew in version 1.2.0.\n\nsetBinary(value)[source]\n\nIf True, term frequency vector will be binary such that non-zero term counts will be set to 1. (default: False)\n\nNew in version 2.0.0.\n\ntransform(document)[source]\n\nTransforms the input document (list of terms) to term frequency vectors, or transform the RDD of document to RDD of term frequency vectors.\n\nNew in version 1.2.0.
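A short sketch of the binary mode described under setBinary (not part of the upstream doctests; the hashed indices are elided because they depend on the hash function):\n\n>>> htf2 = HashingTF(100).setBinary(True)\n>>> htf2.transform("a a b".split(" "))   # both hashed terms get value 1.0\nSparseVector(100, {...})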
class pyspark.mllib.feature.IDFModel(java_model)[source]\n\nBases: pyspark.mllib.feature.JavaVectorTransformer\n\nRepresents an IDF model that can transform term frequency vectors.\n\nNew in version 1.2.0.\n\nidf()[source]\n\nReturns the current IDF vector.\n\nNew in version 1.4.0.\n\ntransform(x)[source]\n\nTransforms term frequency (TF) vectors to TF-IDF vectors.\n\nIf minDocFreq was set for the IDF calculation, the terms which occur in fewer than minDocFreq documents will have an entry of 0.\n\nNote\n\nIn Python, transform cannot currently be used within an RDD transformation or action. Call transform directly on the RDD instead.\n\nParameters\n\nx – an RDD of term frequency vectors or a term frequency vector\n\nReturns\n\nan RDD of TF-IDF vectors or a TF-IDF vector\n\nNew in version 1.2.0.\n\nclass pyspark.mllib.feature.IDF(minDocFreq=0)[source]\n\nBases: object\n\nInverse document frequency (IDF).\n\nThe standard formulation is used: idf = log((m + 1) / (d(t) + 1)), where m is the total number of documents and d(t) is the number of documents that contain term t.\n\nThis implementation supports filtering out terms which do not appear in a minimum number of documents (controlled by the variable minDocFreq). For terms that are not in at least minDocFreq documents, the IDF is found as 0, resulting in TF-IDFs of 0.\n\nParameters\n\nminDocFreq – minimum number of documents in which a term should appear for filtering\n\n>>> n = 4\n>>> freqs = [Vectors.sparse(n, (1, 3), (1.0, 2.0)),\n... Vectors.dense([0.0, 1.0, 2.0, 3.0]),\n... Vectors.sparse(n, [1], [1.0])]\n>>> data = sc.parallelize(freqs)\n>>> idf = IDF()\n>>> model = idf.fit(data)\n>>> tfidf = model.transform(data)\n>>> for r in tfidf.collect(): r\nSparseVector(4, {1: 0.0, 3: 0.5754})\nDenseVector([0.0, 0.0, 1.3863, 0.863])\nSparseVector(4, {1: 0.0})\n>>> model.transform(Vectors.dense([0.0, 1.0, 2.0, 3.0]))\nDenseVector([0.0, 0.0, 1.3863, 0.863])\n>>> model.transform([0.0, 1.0, 2.0, 3.0])\nDenseVector([0.0, 0.0, 1.3863, 0.863])\n>>> model.transform(Vectors.sparse(n, (1, 3), (1.0, 2.0)))\nSparseVector(4, {1: 0.0, 3: 0.5754})\n\n\nNew in version 1.2.0.\n\nfit(dataset)[source]\n\nComputes the inverse document frequency.\n\nParameters\n\ndataset – an RDD of term frequency vectors\n\nNew in version 1.2.0.
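A sketch of the minDocFreq filtering described above, reusing the data RDD from the IDF example (the expected output is computed by hand from idf = log((m + 1) / (d(t) + 1)) and may differ in rounding): term 2 appears in only one document, so with minDocFreq=2 its IDF, and hence its TF-IDF, becomes 0, while term 3 (present in two documents) keeps idf = log(4/3) ≈ 0.2877.\n\n>>> model2 = IDF(minDocFreq=2).fit(data)\n>>> model2.transform(Vectors.dense([0.0, 1.0, 2.0, 3.0]))\nDenseVector([0.0, 0.0, 0.0, 0.863])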
class pyspark.mllib.feature.Word2Vec[source]\n\nBases: object\n\nWord2Vec creates vector representations of words in a text corpus. The algorithm first constructs a vocabulary from the corpus and then learns vector representations of the words in the vocabulary. These vector representations can be used as features in natural language processing and machine learning algorithms.\n\nWe use the skip-gram model in our implementation and the hierarchical softmax method to train the model. The variable names in the implementation match the original C implementation.\n\nFor the original C implementation, see https://code.google.com/p/word2vec/ For research papers, see Efficient Estimation of Word Representations in Vector Space and Distributed Representations of Words and Phrases and their Compositionality.\n\n>>> sentence = \"a b \" * 100 + \"a c \" * 10\n>>> localDoc = [sentence, sentence]\n>>> doc = sc.parallelize(localDoc).map(lambda line: line.split(\" \"))\n>>> model = Word2Vec().setVectorSize(10).setSeed(42).fit(doc)\n\n\nQuerying for synonyms of a word will not return that word:\n\n>>> syms = model.findSynonyms(\"a\", 2)\n>>> [s[0] for s in syms]\n['b', 'c']\n\n\nBut querying for synonyms of a vector may return the word whose representation is that vector:\n\n>>> vec = model.transform(\"a\")\n>>> syms = model.findSynonyms(vec, 2)\n>>> [s[0] for s in syms]\n['a', 'b']\n\n>>> import os, tempfile\n>>> path = tempfile.mkdtemp()\n>>> model.save(sc, path)\n>>> sameModel = Word2VecModel.load(sc, path)\n>>> model.transform(\"a\") == sameModel.transform(\"a\")\nTrue\n>>> syms = sameModel.findSynonyms(\"a\", 2)\n>>> [s[0] for s in syms]\n['b', 'c']\n>>> from shutil import rmtree\n>>> try:\n... rmtree(path)\n... except OSError:\n... pass\n\n\nNew in version 1.2.0.\n\nfit(data)[source]\n\nComputes the vector representation of each word in vocabulary.\n\nParameters\n\ndata – training data. RDD of list of string\n\nReturns\n\nWord2VecModel instance\n\nNew in version 1.2.0.\n\nsetLearningRate(learningRate)[source]\n\nSets initial learning rate (default: 0.025).\n\nNew in version 1.2.0.\n\nsetMinCount(minCount)[source]\n\nSets minCount, the minimum number of times a token must appear to be included in the word2vec model’s vocabulary (default: 5).\n\nNew in version 1.4.0.\n\nsetNumIterations(numIterations)[source]\n\nSets number of iterations (default: 1), which should be smaller than or equal to number of partitions.\n\nNew in version 1.2.0.\n\nsetNumPartitions(numPartitions)[source]\n\nSets number of partitions (default: 1).
Use a small number for accuracy.\n\nNew in version 1.2.0.\n\nsetSeed(seed)[source]\n\nSets random seed.\n\nNew in version 1.2.0.\n\nsetVectorSize(vectorSize)[source]\n\nSets vector size (default: 100).\n\nNew in version 1.2.0.\n\nsetWindowSize(windowSize)[source]\n\nSets window size (default: 5).\n\nNew in version 2.0.0.\n\nclass pyspark.mllib.feature.Word2VecModel(java_model)[source]\n\nBases: pyspark.mllib.feature.JavaVectorTransformer, pyspark.mllib.util.JavaSaveable, pyspark.mllib.util.JavaLoader\n\nclass for Word2Vec model\n\nNew in version 1.2.0.\n\nfindSynonyms(word, num)[source]\n\nFind synonyms of a word\n\nParameters\n• word – a word or a vector representation of word\n\n• num – number of synonyms to find\n\nReturns\n\narray of (word, cosineSimilarity)\n\nNote\n\nLocal use only\n\nNew in version 1.2.0.\n\ngetVectors()[source]\n\nReturns a map of words to their vector representations.\n\nNew in version 1.4.0.\n\nclassmethod load(sc, path)[source]\n\nLoad a model from the given path.\n\nNew in version 1.5.0.\n\ntransform(word)[source]\n\nTransforms a word to its vector representation\n\nNote\n\nLocal use only\n\nParameters\n\nword – a word\n\nReturns\n\nvector representation of word(s)\n\nNew in version 1.2.0.\n\nclass pyspark.mllib.feature.ChiSqSelector(numTopFeatures=50, selectorType='numTopFeatures', percentile=0.1, fpr=0.05, fdr=0.05, fwe=0.05)[source]\n\nBases: object\n\nCreates a ChiSquared feature selector. The selector supports different selection methods: numTopFeatures, percentile, fpr, fdr, fwe.\n\n• numTopFeatures chooses a fixed number of top features according to a chi-squared test.\n\n• percentile is similar but chooses a fraction of all features instead of a fixed number.\n\n• fpr chooses all features whose p-values are below a threshold, thus controlling the false positive rate of selection.\n\n• fdr uses the Benjamini-Hochberg procedure to choose all features whose false discovery rate is below a threshold.\n\n• fwe chooses all features whose p-values are below a threshold. The threshold is scaled by 1/numFeatures, thus controlling the family-wise error rate of selection.\n\nBy default, the selection method is numTopFeatures, with the default number of top features set to 50.\n\n>>> data = sc.parallelize([\n... LabeledPoint(0.0, SparseVector(3, {0: 8.0, 1: 7.0})),\n... LabeledPoint(1.0, SparseVector(3, {1: 9.0, 2: 6.0})),\n... LabeledPoint(1.0, [0.0, 9.0, 8.0]),\n... LabeledPoint(2.0, [7.0, 9.0, 5.0]),\n... LabeledPoint(2.0, [8.0, 7.0, 3.0])\n... ])\n>>> model = ChiSqSelector(numTopFeatures=1).fit(data)\n>>> model.transform(SparseVector(3, {1: 9.0, 2: 6.0}))\nSparseVector(1, {})\n>>> model.transform(DenseVector([7.0, 9.0, 5.0]))\nDenseVector([7.0])\n>>> model = ChiSqSelector(selectorType=\"fpr\", fpr=0.2).fit(data)\n>>> model.transform(SparseVector(3, {1: 9.0, 2: 6.0}))\nSparseVector(1, {})\n>>> model.transform(DenseVector([7.0, 9.0, 5.0]))\nDenseVector([7.0])\n>>> model = ChiSqSelector(selectorType=\"percentile\", percentile=0.34).fit(data)\n>>> model.transform(DenseVector([7.0, 9.0, 5.0]))\nDenseVector([7.0])\n\n\nNew in version 1.4.0.\n\nfit(data)[source]\n\nReturns a ChiSquared feature selector.\n\nParameters\n\ndata – an RDD[LabeledPoint] containing the labeled dataset with categorical features. Real-valued features will be treated as categorical for each distinct value. Apply feature discretizer before using this function.\n\nNew in version 1.4.0.\n\nsetFdr(fdr)[source]\n\nset FDR [0.0, 1.0] for feature selection by FDR. 
Only applicable when selectorType = “fdr”.\n\nNew in version 2.2.0.\n\nsetFpr(fpr)[source]\n\nset FPR [0.0, 1.0] for feature selection by FPR. Only applicable when selectorType = “fpr”.\n\nNew in version 2.1.0.\n\nsetFwe(fwe)[source]\n\nset FWE [0.0, 1.0] for feature selection by FWE. Only applicable when selectorType = “fwe”.\n\nNew in version 2.2.0.\n\nsetNumTopFeatures(numTopFeatures)[source]\n\nset numTopFeatures for feature selection by number of top features. Only applicable when selectorType = “numTopFeatures”.\n\nNew in version 2.1.0.\n\nsetPercentile(percentile)[source]\n\nset percentile [0.0, 1.0] for feature selection by percentile. Only applicable when selectorType = “percentile”.\n\nNew in version 2.1.0.\n\nsetSelectorType(selectorType)[source]\n\nset the selector type of the ChiSqSelector. Supported options: “numTopFeatures” (default), “percentile”, “fpr”, “fdr”, “fwe”.\n\nNew in version 2.1.0.\n\nclass pyspark.mllib.feature.ChiSqSelectorModel(java_model)[source]\n\nBases: pyspark.mllib.feature.JavaVectorTransformer\n\nRepresents a Chi Squared selector model.\n\nNew in version 1.4.0.\n\ntransform(vector)[source]\n\nApplies transformation on a vector.\n\nParameters\n\nvector – Vector or RDD of Vector to be transformed.\n\nReturns\n\ntransformed vector.\n\nNew in version 1.4.0.\n\nclass pyspark.mllib.feature.ElementwiseProduct(scalingVector)[source]\n\nBases: pyspark.mllib.feature.VectorTransformer\n\nScales each column of the vector with the supplied weight vector, i.e., the elementwise product.\n\n>>> weight = Vectors.dense([1.0, 2.0, 3.0])\n>>> eprod = ElementwiseProduct(weight)\n>>> a = Vectors.dense([2.0, 1.0, 3.0])\n>>> eprod.transform(a)\nDenseVector([2.0, 2.0, 9.0])\n>>> b = Vectors.dense([9.0, 3.0, 4.0])\n>>> rdd = sc.parallelize([a, b])\n>>> eprod.transform(rdd).collect()\n[DenseVector([2.0, 2.0, 9.0]), DenseVector([9.0, 6.0, 12.0])]\n\n\nNew in version 1.5.0.\n\ntransform(vector)[source]\n\nComputes the Hadamard product of the vector.\n\nNew in version 1.5.0.\n\n## pyspark.mllib.fpm module¶\n\nclass pyspark.mllib.fpm.FPGrowth[source]\n\nA Parallel FP-growth algorithm to mine frequent itemsets.\n\nNew in version 1.4.0.\n\nclass FreqItemset[source]\n\nRepresents an (items, freq) tuple.\n\nNew in version 1.4.0.\n\nclassmethod train(data, minSupport=0.3, numPartitions=-1)[source]\n\nComputes an FP-Growth model that contains frequent itemsets.\n\nParameters\n• data – The input data set, each element contains a transaction.\n\n• minSupport – The minimal support level. (default: 0.3)\n\n• numPartitions – The number of partitions used by parallel FP-growth. A value of -1 will use the same number as input data. (default: -1)\n\nNew in version 1.4.0.
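A quick note on minSupport (not from the upstream docs): the support threshold is a fraction of the number of transactions, so in the FPGrowthModel example below, minSupport=0.6 over 4 transactions keeps only itemsets occurring in at least ceil(0.6 * 4) = 3 of them.\n\n>>> import math\n>>> math.ceil(0.6 * 4)   # minimum absolute count for a frequent itemset\n3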
class pyspark.mllib.fpm.FPGrowthModel(java_model)[source]\n\nAn FP-Growth model for mining frequent itemsets using the Parallel FP-Growth algorithm.\n\n>>> data = [[\"a\", \"b\", \"c\"], [\"a\", \"b\", \"d\", \"e\"], [\"a\", \"c\", \"e\"], [\"a\", \"c\", \"f\"]]\n>>> rdd = sc.parallelize(data, 2)\n>>> model = FPGrowth.train(rdd, 0.6, 2)\n>>> sorted(model.freqItemsets().collect())\n[FreqItemset(items=['a'], freq=4), FreqItemset(items=['c'], freq=3), ...\n>>> model_path = temp_path + \"/fpm\"\n>>> model.save(sc, model_path)\n>>> sameModel = FPGrowthModel.load(sc, model_path)\n>>> sorted(model.freqItemsets().collect()) == sorted(sameModel.freqItemsets().collect())\nTrue\n\n\nNew in version 1.4.0.\n\nfreqItemsets()[source]\n\nReturns the frequent itemsets of this model.\n\nNew in version 1.4.0.\n\nclassmethod load(sc, path)[source]\n\nLoad a model from the given path.\n\nNew in version 2.0.0.\n\nclass pyspark.mllib.fpm.PrefixSpan[source]\n\nA parallel PrefixSpan algorithm to mine frequent sequential patterns. The PrefixSpan algorithm is described in J. Pei, et al., PrefixSpan: Mining Sequential Patterns Efficiently by Prefix-Projected Pattern Growth (http://doi.org/10.1109/ICDE.2001.914830).\n\nNew in version 1.6.0.\n\nclass FreqSequence[source]\n\nRepresents a (sequence, freq) tuple.\n\nNew in version 1.6.0.\n\nclassmethod train(data, minSupport=0.1, maxPatternLength=10, maxLocalProjDBSize=32000000)[source]\n\nFinds the complete set of frequent sequential patterns in the input sequences of itemsets.\n\nParameters\n• data – The input data set, each element contains a sequence of itemsets.\n\n• minSupport – The minimal support level of the sequential pattern, any pattern that appears more than (minSupport * size-of-the-dataset) times will be output. (default: 0.1)\n\n• maxPatternLength – The maximal length of the sequential pattern; any pattern longer than maxPatternLength will not be output. (default: 10)\n\n• maxLocalProjDBSize – The maximum number of items (including delimiters used in the internal storage format) allowed in a projected database before local processing. If a projected database exceeds this size, another iteration of distributed prefix growth is run. (default: 32000000)\n\nNew in version 1.6.0.\n\nclass pyspark.mllib.fpm.PrefixSpanModel(java_model)[source]\n\nModel fitted by PrefixSpan.\n\n>>> data = [\n... [[\"a\", \"b\"], [\"c\"]],\n... [[\"a\"], [\"c\", \"b\"], [\"a\", \"b\"]],\n... [[\"a\", \"b\"], [\"e\"]],\n... [[\"f\"]]]\n>>> rdd = sc.parallelize(data, 2)\n>>> model = PrefixSpan.train(rdd)\n>>> sorted(model.freqSequences().collect())\n[FreqSequence(sequence=[['a']], freq=3), FreqSequence(sequence=[['a'], ['a']], freq=1), ...\n\n\nNew in version 1.6.0.\n\nfreqSequences()[source]\n\nGets frequent sequences.\n\nNew in version 1.6.0.\n\n## pyspark.mllib.linalg module¶\n\nMLlib utilities for linear algebra. For dense vectors, MLlib uses the NumPy array type, so you can simply pass NumPy arrays around. For sparse vectors, users can construct a SparseVector object from MLlib or pass SciPy scipy.sparse column vectors if SciPy is available in their environment.\n\nclass pyspark.mllib.linalg.Vector[source]\n\nBases: object\n\nasML()[source]\n\nConvert this vector to the new mllib-local representation. This does NOT copy the data; it copies references.\n\nReturns\n\npyspark.ml.linalg.Vector\n\ntoArray()[source]\n\nConvert the vector into a numpy.ndarray.\n\nReturns\n\nnumpy.ndarray\n\nclass pyspark.mllib.linalg.DenseVector(ar)[source]\n\nA dense vector represented by a value array.
We use a numpy array for storage, and arithmetic is delegated to the underlying numpy array.\n\n>>> v = Vectors.dense([1.0, 2.0])\n>>> u = Vectors.dense([3.0, 4.0])\n>>> v + u\nDenseVector([4.0, 6.0])\n>>> 2 - v\nDenseVector([1.0, 0.0])\n>>> v / 2\nDenseVector([0.5, 1.0])\n>>> v * u\nDenseVector([3.0, 8.0])\n>>> u / v\nDenseVector([3.0, 2.0])\n>>> u % 2\nDenseVector([1.0, 0.0])\n>>> -v\nDenseVector([-1.0, -2.0])\n\nasML()[source]\n\nConvert this vector to the new mllib-local representation. This does NOT copy the data; it copies references.\n\nReturns\n\npyspark.ml.linalg.DenseVector\n\nNew in version 2.0.0.\n\ndot(other)[source]\n\nCompute the dot product of two Vectors. We support (Numpy array, list, SparseVector, or SciPy sparse) and a target NumPy array that is either 1- or 2-dimensional. Equivalent to calling numpy.dot of the two vectors.\n\n>>> dense = DenseVector(array.array('d', [1., 2.]))\n>>> dense.dot(dense)\n5.0\n>>> dense.dot(SparseVector(2, [0, 1], [2., 1.]))\n4.0\n>>> dense.dot(range(1, 3))\n5.0\n>>> dense.dot(np.array(range(1, 3)))\n5.0\n>>> dense.dot([1.,])\nTraceback (most recent call last):\n...\nAssertionError: dimension mismatch\n>>> dense.dot(np.reshape([1., 2., 3., 4.], (2, 2), order='F'))\narray([ 5., 11.])\n>>> dense.dot(np.reshape([1., 2., 3.], (3, 1), order='F'))\nTraceback (most recent call last):\n...\nAssertionError: dimension mismatch\n\nnorm(p)[source]\n\nCalculates the norm of a DenseVector.\n\n>>> a = DenseVector([0, -1, 2, -3])\n>>> a.norm(2)\n3.7...\n>>> a.norm(1)\n6.0\n\nnumNonzeros()[source]\n\nNumber of nonzero elements. This scans all active values and counts nonzeros.\n\nstatic parse(s)[source]\n\nParse string representation back into the DenseVector.\n\n>>> DenseVector.parse(' [ 0.0,1.0,2.0, 3.0]')\nDenseVector([0.0, 1.0, 2.0, 3.0])\n\nsquared_distance(other)[source]\n\nSquared distance of two Vectors.\n\n>>> dense1 = DenseVector(array.array('d', [1., 2.]))\n>>> dense1.squared_distance(dense1)\n0.0\n>>> dense2 = np.array([2., 1.])\n>>> dense1.squared_distance(dense2)\n2.0\n>>> dense3 = [2., 1.]\n>>> dense1.squared_distance(dense3)\n2.0\n>>> sparse1 = SparseVector(2, [0, 1], [2., 1.])\n>>> dense1.squared_distance(sparse1)\n2.0\n>>> dense1.squared_distance([1.,])\nTraceback (most recent call last):\n...\nAssertionError: dimension mismatch\n>>> dense1.squared_distance(SparseVector(1, [0,], [1.,]))\nTraceback (most recent call last):\n...\nAssertionError: dimension mismatch\n\ntoArray()[source]\n\nReturns a numpy.ndarray.\n\nproperty values\n\nReturns a list of values.\n\nclass pyspark.mllib.linalg.SparseVector(size, *args)[source]\n\nA simple sparse vector class for passing data to MLlib. Users may alternatively pass SciPy’s {scipy.sparse} data types.\n\nasML()[source]\n\nConvert this vector to the new mllib-local representation.
This does NOT copy the data; it copies references.\n\nReturns\n\npyspark.ml.linalg.SparseVector\n\nNew in version 2.0.0.\n\ndot(other)[source]\n\nDot product with a SparseVector or 1- or 2-dimensional Numpy array.\n\n>>> a = SparseVector(4, [1, 3], [3.0, 4.0])\n>>> a.dot(a)\n25.0\n>>> a.dot(array.array('d', [1., 2., 3., 4.]))\n22.0\n>>> b = SparseVector(4, [2], [1.0])\n>>> a.dot(b)\n0.0\n>>> a.dot(np.array([[1, 1], [2, 2], [3, 3], [4, 4]]))\narray([ 22., 22.])\n>>> a.dot([1., 2., 3.])\nTraceback (most recent call last):\n...\nAssertionError: dimension mismatch\n>>> a.dot(np.array([1., 2.]))\nTraceback (most recent call last):\n...\nAssertionError: dimension mismatch\n>>> a.dot(DenseVector([1., 2.]))\nTraceback (most recent call last):\n...\nAssertionError: dimension mismatch\n>>> a.dot(np.zeros((3, 2)))\nTraceback (most recent call last):\n...\nAssertionError: dimension mismatch\n\nindices = None\n\nA list of indices corresponding to active entries.\n\nnorm(p)[source]\n\nCalculates the norm of a SparseVector.\n\n>>> a = SparseVector(4, [0, 1], [3., -4.])\n>>> a.norm(1)\n7.0\n>>> a.norm(2)\n5.0\n\nnumNonzeros()[source]\n\nNumber of nonzero elements. This scans all active values and counts nonzeros.\n\nstatic parse(s)[source]\n\nParse string representation back into the SparseVector.\n\n>>> SparseVector.parse(' (4, [0,1 ],[ 4.0,5.0] )')\nSparseVector(4, {0: 4.0, 1: 5.0})\n\nsize = None\n\nSize of the vector.\n\nsquared_distance(other)[source]\n\nSquared distance from a SparseVector or 1-dimensional NumPy array.\n\n>>> a = SparseVector(4, [1, 3], [3.0, 4.0])\n>>> a.squared_distance(a)\n0.0\n>>> a.squared_distance(array.array('d', [1., 2., 3., 4.]))\n11.0\n>>> a.squared_distance(np.array([1., 2., 3., 4.]))\n11.0\n>>> b = SparseVector(4, [2], [1.0])\n>>> a.squared_distance(b)\n26.0\n>>> b.squared_distance(a)\n26.0\n>>> b.squared_distance([1., 2.])\nTraceback (most recent call last):\n...\nAssertionError: dimension mismatch\n>>> b.squared_distance(SparseVector(3, [1,], [1.0,]))\nTraceback (most recent call last):\n...\nAssertionError: dimension mismatch\n\ntoArray()[source]\n\nReturns a copy of this SparseVector as a 1-dimensional NumPy array.\n\nvalues = None\n\nA list of values corresponding to active entries.\n\nclass pyspark.mllib.linalg.Vectors[source]\n\nBases: object\n\nFactory methods for working with vectors.\n\nNote\n\nDense vectors are simply represented as NumPy array objects, so there is no need to convert them for use in MLlib. For sparse vectors, the factory methods in this class create an MLlib-compatible type, or users can pass in SciPy’s scipy.sparse column vectors.\n\nstatic dense(*elements)[source]\n\nCreate a dense vector of 64-bit floats from a Python list or numbers.\n\n>>> Vectors.dense([1, 2, 3])\nDenseVector([1.0, 2.0, 3.0])\n>>> Vectors.dense(1.0, 2.0)\nDenseVector([1.0, 2.0])\n\nstatic fromML(vec)[source]\n\nConvert a vector from the new mllib-local representation. This does NOT copy the data; it copies references.\n\nParameters\n\nvec – a pyspark.ml.linalg.Vector\n\nReturns\n\na pyspark.mllib.linalg.Vector\n\nNew in version 2.0.0.\n\nstatic norm(vector, p)[source]\n\nFind norm of the given vector.\n\nstatic parse(s)[source]\n\nParse a string representation back into the Vector.\n\n>>> Vectors.parse('[2,1,2 ]')\nDenseVector([2.0, 1.0, 2.0])\n>>> Vectors.parse(' ( 100, [0], [2])')\nSparseVector(100, {0: 2.0})\n\nstatic sparse(size, *args)[source]\n\nCreate a sparse vector, using either a dictionary, a list of (index, value) pairs, or two separate arrays of indices and values (sorted by index).\n\nParameters\n• size – Size of the vector.\n\n• args – Non-zero entries, as a dictionary, list of tuples, or two sorted lists containing indices and values.\n\n>>> Vectors.sparse(4, {1: 1.0, 3: 5.5})\nSparseVector(4, {1: 1.0, 3: 5.5})\n>>> Vectors.sparse(4, [(1, 1.0), (3, 5.5)])\nSparseVector(4, {1: 1.0, 3: 5.5})\n>>> Vectors.sparse(4, [1, 3], [1.0, 5.5])\nSparseVector(4, {1: 1.0, 3: 5.5})\n\nstatic squared_distance(v1, v2)[source]\n\nSquared distance between two vectors. v1 and v2 can be of type SparseVector, DenseVector, np.ndarray or array.array.\n\n>>> a = Vectors.sparse(4, [(0, 1), (3, 4)])\n>>> b = Vectors.dense([2, 5, 4, 1])\n>>> a.squared_distance(b)\n51.0\n\nstatic stringify(vector)[source]\n\nConverts a vector into a string, which can be recognized by Vectors.parse().\n\n>>> Vectors.stringify(Vectors.sparse(2, [1], [1.0]))\n'(2,[1],[1.0])'\n>>> Vectors.stringify(Vectors.dense([0.0, 1.0]))\n'[0.0,1.0]'\n\nstatic zeros(size)[source]\nclass pyspark.mllib.linalg.Matrix(numRows, numCols, isTransposed=False)[source]\n\nBases: object\n\nasML()[source]\n\nConvert this matrix to the new mllib-local representation. This does NOT copy the data; it copies references.\n\ntoArray()[source]\n\nReturns its elements in a NumPy ndarray.\n\nclass pyspark.mllib.linalg.DenseMatrix(numRows, numCols, values, isTransposed=False)[source]\n\nColumn-major dense matrix.\n\nasML()[source]\n\nConvert this matrix to the new mllib-local representation. This does NOT copy the data; it copies references.\n\nReturns\n\npyspark.ml.linalg.DenseMatrix\n\nNew in version 2.0.0.\n\ntoArray()[source]\n\nReturn a numpy.ndarray.\n\n>>> m = DenseMatrix(2, 2, range(4))\n>>> m.toArray()\narray([[ 0., 2.],\n[ 1., 3.]])\n\ntoSparse()[source]\n\nConvert to SparseMatrix.\n\nclass pyspark.mllib.linalg.SparseMatrix(numRows, numCols, colPtrs, rowIndices, values, isTransposed=False)[source]\n\nSparse Matrix stored in CSC format.\n\nasML()[source]\n\nConvert this matrix to the new mllib-local representation. This does NOT copy the data; it copies references.\n\nReturns\n\npyspark.ml.linalg.SparseMatrix\n\nNew in version 2.0.0.\n\ntoArray()[source]\n\nReturn a numpy.ndarray.\n\ntoDense()[source]\nclass pyspark.mllib.linalg.Matrices[source]\n\nBases: object\n\nstatic dense(numRows, numCols, values)[source]\n\nCreate a DenseMatrix.\n\nstatic fromML(mat)[source]\n\nConvert a matrix from the new mllib-local representation. This does NOT copy the data; it copies references.\n\nParameters\n\nmat – a pyspark.ml.linalg.Matrix\n\nReturns\n\na pyspark.mllib.linalg.Matrix\n\nNew in version 2.0.0.\n\nstatic sparse(numRows, numCols, colPtrs, rowIndices, values)[source]\n\nCreate a SparseMatrix.
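A small sketch of the CSC arguments to Matrices.sparse (not from the upstream doctests): colPtrs gives, for each column, the range of positions in rowIndices/values that belong to it.\n\n>>> # 3x2 matrix: column 0 holds 9.0 at row 0; column 1 holds 8.0 and 6.0\n>>> # at rows 1 and 2 (colPtrs [0, 1, 3] splits rowIndices/values at 1).\n>>> m = Matrices.sparse(3, 2, [0, 1, 3], [0, 1, 2], [9.0, 8.0, 6.0])\n>>> m.toArray()\narray([[ 9., 0.],\n[ 0., 8.],\n[ 0., 6.]])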
class pyspark.mllib.linalg.QRDecomposition(Q, R)[source]\n\nBases: object\n\nRepresents QR factors.\n\nproperty Q\n\nAn orthogonal matrix Q in a QR decomposition.
May be null if not computed.\n\nNew in version 2.0.0.\n\nproperty R\n\nAn upper triangular matrix R in a QR decomposition.\n\nNew in version 2.0.0.\n\n## pyspark.mllib.linalg.distributed module¶\n\nPackage for distributed linear algebra.\n\nclass pyspark.mllib.linalg.distributed.BlockMatrix(blocks, rowsPerBlock, colsPerBlock, numRows=0, numCols=0)[source]\n\nRepresents a distributed matrix in blocks of local matrices.\n\nParameters\n• blocks – An RDD of sub-matrix blocks ((blockRowIndex, blockColIndex), sub-matrix) that form this distributed matrix. If multiple blocks with the same index exist, the results for operations like add and multiply will be unpredictable.\n\n• rowsPerBlock – Number of rows that make up each block. The blocks forming the final rows are not required to have the given number of rows.\n\n• colsPerBlock – Number of columns that make up each block. The blocks forming the final columns are not required to have the given number of columns.\n\n• numRows – Number of rows of this matrix. If the supplied value is less than or equal to zero, the number of rows will be calculated when numRows is invoked.\n\n• numCols – Number of columns of this matrix. If the supplied value is less than or equal to zero, the number of columns will be calculated when numCols is invoked.\n\nadd(other)[source]\n\nAdds two block matrices together. The matrices must have the same size and matching rowsPerBlock and colsPerBlock values. If one of the sub matrix blocks that are being added is a SparseMatrix, the resulting sub matrix block will also be a SparseMatrix, even if it is being added to a DenseMatrix. If two dense sub matrix blocks are added, the output block will also be a DenseMatrix.\n\n>>> dm1 = Matrices.dense(3, 2, [1, 2, 3, 4, 5, 6])\n>>> dm2 = Matrices.dense(3, 2, [7, 8, 9, 10, 11, 12])\n>>> sm = Matrices.sparse(3, 2, [0, 1, 3], [0, 1, 2], [7, 11, 12])\n>>> blocks1 = sc.parallelize([((0, 0), dm1), ((1, 0), dm2)])\n>>> blocks2 = sc.parallelize([((0, 0), dm1), ((1, 0), dm2)])\n>>> blocks3 = sc.parallelize([((0, 0), sm), ((1, 0), dm2)])\n>>> mat1 = BlockMatrix(blocks1, 3, 2)\n>>> mat2 = BlockMatrix(blocks2, 3, 2)\n>>> mat3 = BlockMatrix(blocks3, 3, 2)\n\n>>> mat1.add(mat2).toLocalMatrix()\nDenseMatrix(6, 2, [2.0, 4.0, 6.0, 14.0, 16.0, 18.0, 8.0, 10.0, 12.0, 20.0, 22.0, 24.0], 0)\n\n>>> mat1.add(mat3).toLocalMatrix()\nDenseMatrix(6, 2, [8.0, 2.0, 3.0, 14.0, 16.0, 18.0, 4.0, 16.0, 18.0, 20.0, 22.0, 24.0], 0)\n\nproperty blocks\n\nThe RDD of sub-matrix blocks ((blockRowIndex, blockColIndex), sub-matrix) that form this distributed matrix.\n\n>>> mat = BlockMatrix(\n... sc.parallelize([((0, 0), Matrices.dense(3, 2, [1, 2, 3, 4, 5, 6])),\n... ((1, 0), Matrices.dense(3, 2, [7, 8, 9, 10, 11, 12]))]), 3, 2)\n>>> blocks = mat.blocks\n>>> blocks.first()\n((0, 0), DenseMatrix(3, 2, [1.0, 2.0, 3.0, 4.0, 5.0, 6.0], 0))\n\ncache()[source]\n\nCaches the underlying RDD.\n\nNew in version 2.0.0.\n\nproperty colsPerBlock\n\nNumber of columns that make up each block.\n\n>>> blocks = sc.parallelize([((0, 0), Matrices.dense(3, 2, [1, 2, 3, 4, 5, 6])),\n... ((1, 0), Matrices.dense(3, 2, [7, 8, 9, 10, 11, 12]))])\n>>> mat = BlockMatrix(blocks, 3, 2)\n>>> mat.colsPerBlock\n2\n\nmultiply(other)[source]\n\nLeft multiplies this BlockMatrix by other, another BlockMatrix. The colsPerBlock of this matrix must equal the rowsPerBlock of other. If other contains any SparseMatrix blocks, they will have to be converted to DenseMatrix blocks. The output BlockMatrix will only consist of DenseMatrix blocks. 
This may cause some performance issues until support for multiplying two sparse matrices is added.\n\n>>> dm1 = Matrices.dense(2, 3, [1, 2, 3, 4, 5, 6])\n>>> dm2 = Matrices.dense(2, 3, [7, 8, 9, 10, 11, 12])\n>>> dm3 = Matrices.dense(3, 2, [1, 2, 3, 4, 5, 6])\n>>> dm4 = Matrices.dense(3, 2, [7, 8, 9, 10, 11, 12])\n>>> sm = Matrices.sparse(3, 2, [0, 1, 3], [0, 1, 2], [7, 11, 12])\n>>> blocks1 = sc.parallelize([((0, 0), dm1), ((0, 1), dm2)])\n>>> blocks2 = sc.parallelize([((0, 0), dm3), ((1, 0), dm4)])\n>>> blocks3 = sc.parallelize([((0, 0), sm), ((1, 0), dm4)])\n>>> mat1 = BlockMatrix(blocks1, 2, 3)\n>>> mat2 = BlockMatrix(blocks2, 3, 2)\n>>> mat3 = BlockMatrix(blocks3, 3, 2)\n\n>>> mat1.multiply(mat2).toLocalMatrix()\nDenseMatrix(2, 2, [242.0, 272.0, 350.0, 398.0], 0)\n\n>>> mat1.multiply(mat3).toLocalMatrix()\nDenseMatrix(2, 2, [227.0, 258.0, 394.0, 450.0], 0)\n\nproperty numColBlocks\n\nNumber of columns of blocks in the BlockMatrix.\n\n>>> blocks = sc.parallelize([((0, 0), Matrices.dense(3, 2, [1, 2, 3, 4, 5, 6])),\n... ((1, 0), Matrices.dense(3, 2, [7, 8, 9, 10, 11, 12]))])\n>>> mat = BlockMatrix(blocks, 3, 2)\n>>> mat.numColBlocks\n1\n\nnumCols()[source]\n\nGet or compute the number of cols.\n\n>>> blocks = sc.parallelize([((0, 0), Matrices.dense(3, 2, [1, 2, 3, 4, 5, 6])),\n... ((1, 0), Matrices.dense(3, 2, [7, 8, 9, 10, 11, 12]))])\n\n>>> mat = BlockMatrix(blocks, 3, 2)\n>>> print(mat.numCols())\n2\n\n>>> mat = BlockMatrix(blocks, 3, 2, 7, 6)\n>>> print(mat.numCols())\n6\n\nproperty numRowBlocks\n\nNumber of rows of blocks in the BlockMatrix.\n\n>>> blocks = sc.parallelize([((0, 0), Matrices.dense(3, 2, [1, 2, 3, 4, 5, 6])),\n... ((1, 0), Matrices.dense(3, 2, [7, 8, 9, 10, 11, 12]))])\n>>> mat = BlockMatrix(blocks, 3, 2)\n>>> mat.numRowBlocks\n2\n\nnumRows()[source]\n\nGet or compute the number of rows.\n\n>>> blocks = sc.parallelize([((0, 0), Matrices.dense(3, 2, [1, 2, 3, 4, 5, 6])),\n... ((1, 0), Matrices.dense(3, 2, [7, 8, 9, 10, 11, 12]))])\n\n>>> mat = BlockMatrix(blocks, 3, 2)\n>>> print(mat.numRows())\n6\n\n>>> mat = BlockMatrix(blocks, 3, 2, 7, 6)\n>>> print(mat.numRows())\n7\n\npersist(storageLevel)[source]\n\nPersists the underlying RDD with the specified storage level.\n\nNew in version 2.0.0.\n\nproperty rowsPerBlock\n\nNumber of rows that make up each block.\n\n>>> blocks = sc.parallelize([((0, 0), Matrices.dense(3, 2, [1, 2, 3, 4, 5, 6])),\n... ((1, 0), Matrices.dense(3, 2, [7, 8, 9, 10, 11, 12]))])\n>>> mat = BlockMatrix(blocks, 3, 2)\n>>> mat.rowsPerBlock\n3\n\nsubtract(other)[source]\n\nSubtracts the given block matrix other from this block matrix: this - other. The matrices must have the same size and matching rowsPerBlock and colsPerBlock values. If one of the sub matrix blocks that are being subtracted is a SparseMatrix, the resulting sub matrix block will also be a SparseMatrix, even if it is being subtracted from a DenseMatrix. 
If two dense sub matrix blocks are subtracted, the output block will also be a DenseMatrix.\n\n>>> dm1 = Matrices.dense(3, 2, [3, 1, 5, 4, 6, 2])\n>>> dm2 = Matrices.dense(3, 2, [7, 8, 9, 10, 11, 12])\n>>> sm = Matrices.sparse(3, 2, [0, 1, 3], [0, 1, 2], [1, 2, 3])\n>>> blocks1 = sc.parallelize([((0, 0), dm1), ((1, 0), dm2)])\n>>> blocks2 = sc.parallelize([((0, 0), dm2), ((1, 0), dm1)])\n>>> blocks3 = sc.parallelize([((0, 0), sm), ((1, 0), dm2)])\n>>> mat1 = BlockMatrix(blocks1, 3, 2)\n>>> mat2 = BlockMatrix(blocks2, 3, 2)\n>>> mat3 = BlockMatrix(blocks3, 3, 2)\n\n>>> mat1.subtract(mat2).toLocalMatrix()\nDenseMatrix(6, 2, [-4.0, -7.0, -4.0, 4.0, 7.0, 4.0, -6.0, -5.0, -10.0, 6.0, 5.0, 10.0], 0)\n\n>>> mat2.subtract(mat3).toLocalMatrix()\nDenseMatrix(6, 2, [6.0, 8.0, 9.0, -4.0, -7.0, -4.0, 10.0, 9.0, 9.0, -6.0, -5.0, -10.0], 0)\n\n\nNew in version 2.0.0.\n\ntoCoordinateMatrix()[source]\n\nConvert this matrix to a CoordinateMatrix.\n\n>>> blocks = sc.parallelize([((0, 0), Matrices.dense(1, 2, [1, 2])),\n... ((1, 0), Matrices.dense(1, 2, [7, 8]))])\n>>> mat = BlockMatrix(blocks, 1, 2).toCoordinateMatrix()\n>>> mat.entries.take(3)\n[MatrixEntry(0, 0, 1.0), MatrixEntry(0, 1, 2.0), MatrixEntry(1, 0, 7.0)]\n\ntoIndexedRowMatrix()[source]\n\nConvert this matrix to an IndexedRowMatrix.\n\n>>> blocks = sc.parallelize([((0, 0), Matrices.dense(3, 2, [1, 2, 3, 4, 5, 6])),\n... ((1, 0), Matrices.dense(3, 2, [7, 8, 9, 10, 11, 12]))])\n>>> mat = BlockMatrix(blocks, 3, 2).toIndexedRowMatrix()\n\n>>> # This BlockMatrix will have 6 effective rows, due to\n>>> # having two sub-matrix blocks stacked, each with 3 rows.\n>>> # The ensuing IndexedRowMatrix will also have 6 rows.\n>>> print(mat.numRows())\n6\n\n>>> # This BlockMatrix will have 2 effective columns, due to\n>>> # having two sub-matrix blocks stacked, each with 2 columns.\n>>> # The ensuing IndexedRowMatrix will also have 2 columns.\n>>> print(mat.numCols())\n2\n\ntoLocalMatrix()[source]\n\nCollect the distributed matrix on the driver as a DenseMatrix.\n\n>>> blocks = sc.parallelize([((0, 0), Matrices.dense(3, 2, [1, 2, 3, 4, 5, 6])),\n... ((1, 0), Matrices.dense(3, 2, [7, 8, 9, 10, 11, 12]))])\n>>> mat = BlockMatrix(blocks, 3, 2).toLocalMatrix()\n\n>>> # This BlockMatrix will have 6 effective rows, due to\n>>> # having two sub-matrix blocks stacked, each with 3 rows.\n>>> # The ensuing DenseMatrix will also have 6 rows.\n>>> print(mat.numRows)\n6\n\n>>> # This BlockMatrix will have 2 effective columns, due to\n>>> # having two sub-matrix blocks stacked, each with 2\n>>> # columns. The ensuing DenseMatrix will also have 2 columns.\n>>> print(mat.numCols)\n2\n\ntranspose()[source]\n\nTranspose this BlockMatrix. Returns a new BlockMatrix instance sharing the same underlying data. Is a lazy operation.\n\n>>> blocks = sc.parallelize([((0, 0), Matrices.dense(3, 2, [1, 2, 3, 4, 5, 6])),\n... 
((1, 0), Matrices.dense(3, 2, [7, 8, 9, 10, 11, 12]))])\n>>> mat = BlockMatrix(blocks, 3, 2)\n\n>>> mat_transposed = mat.transpose()\n>>> mat_transposed.toLocalMatrix()\nDenseMatrix(2, 6, [1.0, 4.0, 2.0, 5.0, 3.0, 6.0, 7.0, 10.0, 8.0, 11.0, 9.0, 12.0], 0)\n\n\nNew in version 2.0.0.\n\nvalidate()[source]\n\nValidates the block matrix info against the matrix data (blocks) and throws an exception if any error is found.\n\nNew in version 2.0.0.\n\nclass pyspark.mllib.linalg.distributed.CoordinateMatrix(entries, numRows=0, numCols=0)[source]\n\nRepresents a matrix in coordinate format.\n\nParameters\n• entries – An RDD of MatrixEntry inputs or (long, long, float) tuples.\n\n• numRows – Number of rows in the matrix. A non-positive value means unknown, at which point the number of rows will be determined by the max row index plus one.\n\n• numCols – Number of columns in the matrix. A non-positive value means unknown, at which point the number of columns will be determined by the max column index plus one.\n\nproperty entries\n\nEntries of the CoordinateMatrix stored as an RDD of MatrixEntries.\n\n>>> mat = CoordinateMatrix(sc.parallelize([MatrixEntry(0, 0, 1.2),\n... MatrixEntry(6, 4, 2.1)]))\n>>> entries = mat.entries\n>>> entries.first()\nMatrixEntry(0, 0, 1.2)\n\nnumCols()[source]\n\nGet or compute the number of cols.\n\n>>> entries = sc.parallelize([MatrixEntry(0, 0, 1.2),\n... MatrixEntry(1, 0, 2),\n... MatrixEntry(2, 1, 3.7)])\n\n>>> mat = CoordinateMatrix(entries)\n>>> print(mat.numCols())\n2\n\n>>> mat = CoordinateMatrix(entries, 7, 6)\n>>> print(mat.numCols())\n6\n\nnumRows()[source]\n\nGet or compute the number of rows.\n\n>>> entries = sc.parallelize([MatrixEntry(0, 0, 1.2),\n... MatrixEntry(1, 0, 2),\n... MatrixEntry(2, 1, 3.7)])\n\n>>> mat = CoordinateMatrix(entries)\n>>> print(mat.numRows())\n3\n\n>>> mat = CoordinateMatrix(entries, 7, 6)\n>>> print(mat.numRows())\n7\n\ntoBlockMatrix(rowsPerBlock=1024, colsPerBlock=1024)[source]\n\nConvert this matrix to a BlockMatrix.\n\nParameters\n• rowsPerBlock – Number of rows that make up each block. The blocks forming the final rows are not required to have the given number of rows.\n\n• colsPerBlock – Number of columns that make up each block. The blocks forming the final columns are not required to have the given number of columns.\n\n>>> entries = sc.parallelize([MatrixEntry(0, 0, 1.2),\n... MatrixEntry(6, 4, 2.1)])\n>>> mat = CoordinateMatrix(entries).toBlockMatrix()\n\n>>> # This CoordinateMatrix will have 7 effective rows, due to\n>>> # the highest row index being 6, and the ensuing\n>>> # BlockMatrix will have 7 rows as well.\n>>> print(mat.numRows())\n7\n\n>>> # This CoordinateMatrix will have 5 columns, due to the\n>>> # highest column index being 4, and the ensuing\n>>> # BlockMatrix will have 5 columns as well.\n>>> print(mat.numCols())\n5\n\ntoIndexedRowMatrix()[source]\n\nConvert this matrix to an IndexedRowMatrix.\n\n>>> entries = sc.parallelize([MatrixEntry(0, 0, 1.2),\n...
MatrixEntry(6, 4, 2.1)])\n>>> mat = CoordinateMatrix(entries).toIndexedRowMatrix()\n\n>>> # This CoordinateMatrix will have 7 effective rows, due to\n>>> # the highest row index being 6, and the ensuing\n>>> # IndexedRowMatrix will have 7 rows as well.\n>>> print(mat.numRows())\n7\n\n>>> # This CoordinateMatrix will have 5 columns, due to the\n>>> # highest column index being 4, and the ensuing\n>>> # IndexedRowMatrix will have 5 columns as well.\n>>> print(mat.numCols())\n5\n\ntoRowMatrix()[source]\n\nConvert this matrix to a RowMatrix.\n\n>>> entries = sc.parallelize([MatrixEntry(0, 0, 1.2),\n... MatrixEntry(6, 4, 2.1)])\n>>> mat = CoordinateMatrix(entries).toRowMatrix()\n\n>>> # This CoordinateMatrix will have 7 effective rows, due to\n>>> # the highest row index being 6, but the ensuing RowMatrix\n>>> # will only have 2 rows since there are only entries on 2\n>>> # unique rows.\n>>> print(mat.numRows())\n2\n\n>>> # This CoordinateMatrix will have 5 columns, due to the\n>>> # highest column index being 4, and the ensuing RowMatrix\n>>> # will have 5 columns as well.\n>>> print(mat.numCols())\n5\n\ntranspose()[source]\n\nTranspose this CoordinateMatrix.\n\n>>> entries = sc.parallelize([MatrixEntry(0, 0, 1.2),\n... MatrixEntry(1, 0, 2),\n... MatrixEntry(2, 1, 3.7)])\n>>> mat = CoordinateMatrix(entries)\n>>> mat_transposed = mat.transpose()\n\n>>> print(mat_transposed.numRows())\n2\n\n>>> print(mat_transposed.numCols())\n3\n\n\nNew in version 2.0.0.\n\nclass pyspark.mllib.linalg.distributed.DistributedMatrix[source]\n\nBases: object\n\nRepresents a distributively stored matrix backed by one or more RDDs.\n\nnumCols()[source]\n\nGet or compute the number of cols.\n\nnumRows()[source]\n\nGet or compute the number of rows.\n\nclass pyspark.mllib.linalg.distributed.IndexedRow(index, vector)[source]\n\nBases: object\n\nRepresents a row of an IndexedRowMatrix.\n\nJust a wrapper over a (long, vector) tuple.\n\nParameters\n• index – The index for the given row.\n\n• vector – The row in the matrix at the given index.\n\nclass pyspark.mllib.linalg.distributed.IndexedRowMatrix(rows, numRows=0, numCols=0)[source]\n\nRepresents a row-oriented distributed Matrix with indexed rows.\n\nParameters\n• rows – An RDD of IndexedRows or (long, vector) tuples.\n\n• numRows – Number of rows in the matrix. A non-positive value means unknown, at which point the number of rows will be determined by the max row index plus one.\n\n• numCols – Number of columns in the matrix. A non-positive value means unknown, at which point the number of columns will be determined by the size of the first row.\n\ncolumnSimilarities()[source]\n\nCompute all cosine similarities between columns.\n\n>>> rows = sc.parallelize([IndexedRow(0, [1, 2, 3]),\n... IndexedRow(6, [4, 5, 6])])\n>>> mat = IndexedRowMatrix(rows)\n>>> cs = mat.columnSimilarities()\n>>> print(cs.numCols())\n3\n\ncomputeGramianMatrix()[source]\n\nComputes the Gramian matrix A^T A.\n\nNote\n\nThis cannot be computed on matrices with more than 65535 columns.\n\n>>> rows = sc.parallelize([IndexedRow(0, [1, 2, 3]),\n... 
IndexedRow(1, [4, 5, 6])])\n>>> mat = IndexedRowMatrix(rows)\n\n>>> mat.computeGramianMatrix()\nDenseMatrix(3, 3, [17.0, 22.0, 27.0, 22.0, 29.0, 36.0, 27.0, 36.0, 45.0], 0)\n\n\nNew in version 2.0.0.\n\ncomputeSVD(k, computeU=False, rCond=1e-09)[source]\n\nComputes the singular value decomposition of the IndexedRowMatrix.\n\nThe given row matrix A of dimension (m X n) is decomposed into U * s * V^T, where\n\n• U: (m X k) (left singular vectors) is an IndexedRowMatrix whose columns are the eigenvectors of (A X A^T)\n\n• s: DenseVector consisting of the square roots of the eigenvalues (singular values) in descending order.\n\n• V: (n X k) (right singular vectors) is a Matrix whose columns are the eigenvectors of (A^T X A)\n\nFor more specific details on implementation, please refer to the Scala documentation.\n\nParameters\n• k – Number of leading singular values to keep (0 < k <= n). It might return less than k if there are numerically zero singular values or there are not enough Ritz values converged before the maximum number of Arnoldi update iterations is reached (in case that matrix A is ill-conditioned).\n\n• computeU – Whether or not to compute U. If set to be True, then U is computed by A * V * s^-1.\n\n• rCond – Reciprocal condition number. All singular values smaller than rCond * s are treated as zero where s is the largest singular value.\n\nReturns\n\nSingularValueDecomposition object\n\n>>> rows = [(0, (3, 1, 1)), (1, (-1, 3, 1))]\n>>> irm = IndexedRowMatrix(sc.parallelize(rows))\n>>> svd_model = irm.computeSVD(2, True)\n>>> svd_model.U.rows.collect()\n[IndexedRow(0, [-0.707106781187,0.707106781187]), IndexedRow(1, [-0.707106781187,-0.707106781187])]\n>>> svd_model.s\nDenseVector([3.4641, 3.1623])\n>>> svd_model.V\nDenseMatrix(3, 2, [-0.4082, -0.8165, -0.4082, 0.8944, -0.4472, 0.0], 0)\n\n\nNew in version 2.2.0.\n\nmultiply(matrix)[source]\n\nMultiply this matrix by a local dense matrix on the right.\n\nParameters\n\nmatrix – a local dense matrix whose number of rows must match the number of columns of this matrix\n\nReturns\n\nIndexedRowMatrix\n\n>>> mat = IndexedRowMatrix(sc.parallelize([(0, (0, 1)), (1, (2, 3))]))\n>>> mat.multiply(DenseMatrix(2, 2, [0, 2, 1, 3])).rows.collect()\n[IndexedRow(0, [2.0,3.0]), IndexedRow(1, [6.0,11.0])]\n\n\nNew in version 2.2.0.\n\nnumCols()[source]\n\nGet or compute the number of cols.\n\n>>> rows = sc.parallelize([IndexedRow(0, [1, 2, 3]),\n... IndexedRow(1, [4, 5, 6]),\n... IndexedRow(2, [7, 8, 9]),\n... IndexedRow(3, [10, 11, 12])])\n\n>>> mat = IndexedRowMatrix(rows)\n>>> print(mat.numCols())\n3\n\n>>> mat = IndexedRowMatrix(rows, 7, 6)\n>>> print(mat.numCols())\n6\n\nnumRows()[source]\n\nGet or compute the number of rows.\n\n>>> rows = sc.parallelize([IndexedRow(0, [1, 2, 3]),\n... IndexedRow(1, [4, 5, 6]),\n... IndexedRow(2, [7, 8, 9]),\n... IndexedRow(3, [10, 11, 12])])\n\n>>> mat = IndexedRowMatrix(rows)\n>>> print(mat.numRows())\n4\n\n>>> mat = IndexedRowMatrix(rows, 7, 6)\n>>> print(mat.numRows())\n7\n\nproperty rows\n\nRows of the IndexedRowMatrix stored as an RDD of IndexedRows.\n\n>>> mat = IndexedRowMatrix(sc.parallelize([IndexedRow(0, [1, 2, 3]),\n... IndexedRow(1, [4, 5, 6])]))\n>>> rows = mat.rows\n>>> rows.first()\nIndexedRow(0, [1.0,2.0,3.0])\n\ntoBlockMatrix(rowsPerBlock=1024, colsPerBlock=1024)[source]\n\nConvert this matrix to a BlockMatrix.\n\nParameters\n• rowsPerBlock – Number of rows that make up each block.
The blocks forming the final rows are not required to have the given number of rows.\n\n• colsPerBlock – Number of columns that make up each block. The blocks forming the final columns are not required to have the given number of columns.\n\n>>> rows = sc.parallelize([IndexedRow(0, [1, 2, 3]),\n... IndexedRow(6, [4, 5, 6])])\n>>> mat = IndexedRowMatrix(rows).toBlockMatrix()\n\n>>> # This IndexedRowMatrix will have 7 effective rows, due to\n>>> # the highest row index being 6, and the ensuing\n>>> # BlockMatrix will have 7 rows as well.\n>>> print(mat.numRows())\n7\n\n>>> print(mat.numCols())\n3\n\ntoCoordinateMatrix()[source]\n\nConvert this matrix to a CoordinateMatrix.\n\n>>> rows = sc.parallelize([IndexedRow(0, [1, 0]),\n... IndexedRow(6, [0, 5])])\n>>> mat = IndexedRowMatrix(rows).toCoordinateMatrix()\n>>> mat.entries.take(3)\n[MatrixEntry(0, 0, 1.0), MatrixEntry(0, 1, 0.0), MatrixEntry(6, 0, 0.0)]\n\ntoRowMatrix()[source]\n\nConvert this matrix to a RowMatrix.\n\n>>> rows = sc.parallelize([IndexedRow(0, [1, 2, 3]),\n... IndexedRow(6, [4, 5, 6])])\n>>> mat = IndexedRowMatrix(rows).toRowMatrix()\n>>> mat.rows.collect()\n[DenseVector([1.0, 2.0, 3.0]), DenseVector([4.0, 5.0, 6.0])]\n\nclass pyspark.mllib.linalg.distributed.MatrixEntry(i, j, value)[source]\n\nBases: object\n\nRepresents an entry of a CoordinateMatrix.\n\nJust a wrapper over a (long, long, float) tuple.\n\nParameters\n• i – The row index of the matrix.\n\n• j – The column index of the matrix.\n\n• value – The (i, j)th entry of the matrix, as a float.\n\nclass pyspark.mllib.linalg.distributed.RowMatrix(rows, numRows=0, numCols=0)[source]\n\nRepresents a row-oriented distributed Matrix with no meaningful row indices.\n\nParameters\n• rows – An RDD of vectors.\n\n• numRows – Number of rows in the matrix. A non-positive value means unknown, at which point the number of rows will be determined by the number of records in the rows RDD.\n\n• numCols – Number of columns in the matrix. A non-positive value means unknown, at which point the number of columns will be determined by the size of the first row.\n\ncolumnSimilarities(threshold=0.0)[source]\n\nCompute similarities between columns of this matrix.\n\nThe threshold parameter is a trade-off knob between estimate quality and computational cost.\n\nThe default threshold setting of 0 guarantees deterministically correct results, but uses the brute-force approach of computing normalized dot products.\n\nSetting the threshold to positive values uses a sampling approach and incurs strictly less computational cost than the brute-force approach. However the similarities computed will be estimates.\n\nThe sampling guarantees relative-error correctness for those pairs of columns that have similarity greater than the given similarity threshold.\n\nTo describe the guarantee, we set some notation:\n• Let A be the smallest in magnitude non-zero element of this matrix.\n\n• Let B be the largest in magnitude non-zero element of this matrix.\n\n• Let L be the maximum number of non-zeros per row.\n\nFor example, for {0,1} matrices: A=B=1. 
Another example, for the Netflix matrix: A=1, B=5.\n\nFor those column pairs that are above the threshold, the computed similarity is correct to within 20% relative error with probability at least 1 - (0.981)^(10/B)\n\nThe shuffle size is bounded by the smaller of the following two expressions:\n\n• O(n log(n) L / (threshold * A))\n\n• O(m L^2)\n\nThe latter is the cost of the brute-force approach, so for non-zero thresholds, the cost is always cheaper than the brute-force approach.\n\nParam\n\nthreshold: Set to 0 for deterministic guaranteed correctness. Similarities above this threshold are estimated with the cost vs estimate quality trade-off described above.\n\nReturns\n\nAn n x n sparse upper-triangular CoordinateMatrix of cosine similarities between columns of this matrix.\n\n>>> rows = sc.parallelize([[1, 2], [1, 5]])\n>>> mat = RowMatrix(rows)\n\n>>> sims = mat.columnSimilarities()\n>>> sims.entries.first().value\n0.91914503...\n\n\nNew in version 2.0.0.\n\ncomputeColumnSummaryStatistics()[source]\n\nComputes column-wise summary statistics.\n\nReturns\n\nMultivariateStatisticalSummary object containing column-wise summary statistics.\n\n>>> rows = sc.parallelize([[1, 2, 3], [4, 5, 6]])\n>>> mat = RowMatrix(rows)\n\n>>> colStats = mat.computeColumnSummaryStatistics()\n>>> colStats.mean()\narray([ 2.5, 3.5, 4.5])\n\n\nNew in version 2.0.0.\n\ncomputeCovariance()[source]\n\nComputes the covariance matrix, treating each row as an observation.\n\nNote\n\nThis cannot be computed on matrices with more than 65535 columns.\n\n>>> rows = sc.parallelize([[1, 2], [2, 1]])\n>>> mat = RowMatrix(rows)\n\n>>> mat.computeCovariance()\nDenseMatrix(2, 2, [0.5, -0.5, -0.5, 0.5], 0)\n\n\nNew in version 2.0.0.\n\ncomputeGramianMatrix()[source]\n\nComputes the Gramian matrix A^T A.\n\nNote\n\nThis cannot be computed on matrices with more than 65535 columns.\n\n>>> rows = sc.parallelize([[1, 2, 3], [4, 5, 6]])\n>>> mat = RowMatrix(rows)\n\n>>> mat.computeGramianMatrix()\nDenseMatrix(3, 3, [17.0, 22.0, 27.0, 22.0, 29.0, 36.0, 27.0, 36.0, 45.0], 0)\n\n\nNew in version 2.0.0.\n\ncomputePrincipalComponents(k)[source]\n\nComputes the k principal components of the given row matrix\n\nNote\n\nThis cannot be computed on matrices with more than 65535 columns.\n\nParameters\n\nk – Number of principal components to keep.\n\nReturns\n\npyspark.mllib.linalg.DenseMatrix\n\n>>> rows = sc.parallelize([[1, 2, 3], [2, 4, 5], [3, 6, 1]])\n>>> rm = RowMatrix(rows)\n\n>>> # Returns the two principal components of rm\n>>> pca = rm.computePrincipalComponents(2)\n>>> pca\nDenseMatrix(3, 2, [-0.349, -0.6981, 0.6252, -0.2796, -0.5592, -0.7805], 0)\n\n>>> # Transform into new dimensions with the greatest variance.\n>>> rm.multiply(pca).rows.collect()\n[DenseVector([0.1305, -3.7394]), DenseVector([-0.3642, -6.6983]), DenseVector([-4.6102, -4.9745])]\n\n\nNew in version 2.2.0.\n\ncomputeSVD(k, computeU=False, rCond=1e-09)[source]\n\nComputes the singular value decomposition of the RowMatrix.\n\nThe given row matrix A of dimension (m X n) is decomposed into U * s * V^T where\n\n• U: (m X k) (left singular vectors) is a RowMatrix whose\n\ncolumns are the eigenvectors of (A X A’)\n\n• s: DenseVector consisting of square root of the eigenvalues\n\n(singular values) in descending order.\n\n• v: (n X k) (right singular vectors) is a Matrix whose columns\n\nare the eigenvectors of (A’ X A)\n\nFor more specific details on implementation, please refer to the Scala documentation.\n\nParameters\n• k – Number of leading singular values to keep (0 < 
k <= n). It might return less than k if there are numerically zero singular values or there are not enough Ritz values converged before the maximum number of Arnoldi update iterations is reached (in case that matrix A is ill-conditioned).\n\n• computeU – Whether or not to compute U. If set to be True, then U is computed by A * V * s^-1\n\n• rCond – Reciprocal condition number. All singular values smaller than rCond * s are treated as zero where s is the largest singular value.\n\nReturns\n\nSingularValueDecomposition\n\n>>> rows = sc.parallelize([[3, 1, 1], [-1, 3, 1]])\n>>> rm = RowMatrix(rows)\n\n>>> svd_model = rm.computeSVD(2, True)\n>>> svd_model.U.rows.collect()\n[DenseVector([-0.7071, 0.7071]), DenseVector([-0.7071, -0.7071])]\n>>> svd_model.s\nDenseVector([3.4641, 3.1623])\n>>> svd_model.V\nDenseMatrix(3, 2, [-0.4082, -0.8165, -0.4082, 0.8944, -0.4472, 0.0], 0)\n\n\nNew in version 2.2.0.\n\nmultiply(matrix)[source]\n\nMultiply this matrix by a local dense matrix on the right.\n\nParameters\n\nmatrix – a local dense matrix whose number of rows must match the number of columns of this matrix\n\nReturns\n\nRowMatrix\n\n>>> rm = RowMatrix(sc.parallelize([[0, 1], [2, 3]]))\n>>> rm.multiply(DenseMatrix(2, 2, [0, 2, 1, 3])).rows.collect()\n[DenseVector([2.0, 3.0]), DenseVector([6.0, 11.0])]\n\n\nNew in version 2.2.0.\n\nnumCols()[source]\n\nGet or compute the number of cols.\n\n>>> rows = sc.parallelize([[1, 2, 3], [4, 5, 6],\n... [7, 8, 9], [10, 11, 12]])\n\n>>> mat = RowMatrix(rows)\n>>> print(mat.numCols())\n3\n\n>>> mat = RowMatrix(rows, 7, 6)\n>>> print(mat.numCols())\n6\n\nnumRows()[source]\n\nGet or compute the number of rows.\n\n>>> rows = sc.parallelize([[1, 2, 3], [4, 5, 6],\n... [7, 8, 9], [10, 11, 12]])\n\n>>> mat = RowMatrix(rows)\n>>> print(mat.numRows())\n4\n\n>>> mat = RowMatrix(rows, 7, 6)\n>>> print(mat.numRows())\n7\n\nproperty rows\n\nRows of the RowMatrix stored as an RDD of vectors.\n\n>>> mat = RowMatrix(sc.parallelize([[1, 2, 3], [4, 5, 6]]))\n>>> rows = mat.rows\n>>> rows.first()\nDenseVector([1.0, 2.0, 3.0])\n\ntallSkinnyQR(computeQ=False)[source]\n\nCompute the QR decomposition of this RowMatrix.\n\nThe implementation is designed to optimize the QR decomposition (factorization) for the RowMatrix of a tall and skinny shape.\n\nReference:\n\nPaul G. Constantine, David F. Gleich. 
“Tall and skinny QR factorizations in MapReduce architectures” (http://dx.doi.org/10.1145/1996092.1996103)\n\nParam\n\ncomputeQ: whether to compute Q\n\nReturns\n\nQRDecomposition(Q: RowMatrix, R: Matrix), where Q = None if computeQ = False.\n\n>>> rows = sc.parallelize([[3, -6], [4, -8], [0, 1]])\n>>> mat = RowMatrix(rows)\n>>> decomp = mat.tallSkinnyQR(True)\n>>> Q = decomp.Q\n>>> R = decomp.R\n\n>>> # Test with absolute values\n>>> absQRows = Q.rows.map(lambda row: abs(row.toArray()).tolist())\n>>> absQRows.collect()\n[[0.6..., 0.0], [0.8..., 0.0], [0.0, 1.0]]\n\n>>> # Test with absolute values\n>>> abs(R.toArray()).tolist()\n[[5.0, 10.0], [0.0, 1.0]]\n\n\nNew in version 2.0.0.\n\nclass pyspark.mllib.linalg.distributed.SingularValueDecomposition(java_model)[source]\n\nBases: pyspark.mllib.common.JavaModelWrapper\n\nRepresents singular value decomposition (SVD) factors.\n\nNew in version 2.2.0.\n\nproperty U\n\nReturns a distributed matrix whose columns are the left singular vectors of the SingularValueDecomposition if computeU was set to be True.\n\nNew in version 2.2.0.\n\nproperty V\n\nReturns a DenseMatrix whose columns are the right singular vectors of the SingularValueDecomposition.\n\nNew in version 2.2.0.\n\nproperty s\n\nReturns a DenseVector with singular values in descending order.\n\nNew in version 2.2.0.\n\n## pyspark.mllib.random module¶\n\nPython package for random data generation.\n\nclass pyspark.mllib.random.RandomRDDs[source]\n\nGenerator methods for creating RDDs comprised of i.i.d. samples from some distribution.\n\nNew in version 1.1.0.\n\nstatic exponentialRDD(sc, mean, size, numPartitions=None, seed=None)[source]\n\nGenerates an RDD comprised of i.i.d. samples from the Exponential distribution with the input mean.\n\nParameters\n• sc – SparkContext used to create the RDD.\n\n• mean – Mean, or 1 / lambda, for the Exponential distribution.\n\n• size – Size of the RDD.\n\n• numPartitions – Number of partitions in the RDD (default: sc.defaultParallelism).\n\n• seed – Random seed (default: a random long integer).\n\nReturns\n\nRDD of float comprised of i.i.d. samples ~ Exp(mean).\n\n>>> mean = 2.0\n>>> x = RandomRDDs.exponentialRDD(sc, mean, 1000, seed=2)\n>>> stats = x.stats()\n>>> stats.count()\n1000\n>>> abs(stats.mean() - mean) < 0.5\nTrue\n>>> from math import sqrt\n>>> abs(stats.stdev() - sqrt(mean)) < 0.5\nTrue\n\n\nNew in version 1.3.0.\n\nstatic exponentialVectorRDD(sc, mean, numRows, numCols, numPartitions=None, seed=None)[source]\n\nGenerates an RDD comprised of vectors containing i.i.d. samples drawn from the Exponential distribution with the input mean.\n\nParameters\n• sc – SparkContext used to create the RDD.\n\n• mean – Mean, or 1 / lambda, for the Exponential distribution.\n\n• numRows – Number of Vectors in the RDD.\n\n• numCols – Number of elements in each Vector.\n\n• numPartitions – Number of partitions in the RDD (default: sc.defaultParallelism)\n\n• seed – Random seed (default: a random long integer).\n\nReturns\n\nRDD of Vector with vectors containing i.i.d. samples ~ Exp(mean).\n\n>>> import numpy as np\n>>> mean = 0.5\n>>> rdd = RandomRDDs.exponentialVectorRDD(sc, mean, 100, 100, seed=1)\n>>> mat = np.mat(rdd.collect())\n>>> mat.shape\n(100, 100)\n>>> abs(mat.mean() - mean) < 0.5\nTrue\n>>> from math import sqrt\n>>> abs(mat.std() - sqrt(mean)) < 0.5\nTrue\n\n\nNew in version 1.3.0.\n\nstatic gammaRDD(sc, shape, scale, size, numPartitions=None, seed=None)[source]\n\nGenerates an RDD comprised of i.i.d. 
samples from the Gamma distribution with the input shape and scale.\n\nParameters\n• sc – SparkContext used to create the RDD.\n\n• shape – shape (> 0) parameter for the Gamma distribution\n\n• scale – scale (> 0) parameter for the Gamma distribution\n\n• size – Size of the RDD.\n\n• numPartitions – Number of partitions in the RDD (default: sc.defaultParallelism).\n\n• seed – Random seed (default: a random long integer).\n\nReturns\n\nRDD of float comprised of i.i.d. samples ~ Gamma(shape, scale).\n\n>>> from math import sqrt\n>>> shape = 1.0\n>>> scale = 2.0\n>>> expMean = shape * scale\n>>> expStd = sqrt(shape * scale * scale)\n>>> x = RandomRDDs.gammaRDD(sc, shape, scale, 1000, seed=2)\n>>> stats = x.stats()\n>>> stats.count()\n1000\n>>> abs(stats.mean() - expMean) < 0.5\nTrue\n>>> abs(stats.stdev() - expStd) < 0.5\nTrue\n\n\nNew in version 1.3.0.\n\nstatic gammaVectorRDD(sc, shape, scale, numRows, numCols, numPartitions=None, seed=None)[source]\n\nGenerates an RDD comprised of vectors containing i.i.d. samples drawn from the Gamma distribution.\n\nParameters\n• sc – SparkContext used to create the RDD.\n\n• shape – Shape (> 0) of the Gamma distribution\n\n• scale – Scale (> 0) of the Gamma distribution\n\n• numRows – Number of Vectors in the RDD.\n\n• numCols – Number of elements in each Vector.\n\n• numPartitions – Number of partitions in the RDD (default: sc.defaultParallelism).\n\n• seed – Random seed (default: a random long integer).\n\nReturns\n\nRDD of Vector with vectors containing i.i.d. samples ~ Gamma(shape, scale).\n\n>>> import numpy as np\n>>> from math import sqrt\n>>> shape = 1.0\n>>> scale = 2.0\n>>> expMean = shape * scale\n>>> expStd = sqrt(shape * scale * scale)\n>>> mat = np.matrix(RandomRDDs.gammaVectorRDD(sc, shape, scale, 100, 100, seed=1).collect())\n>>> mat.shape\n(100, 100)\n>>> abs(mat.mean() - expMean) < 0.1\nTrue\n>>> abs(mat.std() - expStd) < 0.1\nTrue\n\n\nNew in version 1.3.0.\n\nstatic logNormalRDD(sc, mean, std, size, numPartitions=None, seed=None)[source]\n\nGenerates an RDD comprised of i.i.d. samples from the log normal distribution with the input mean and standard deviation.\n\nParameters\n• sc – SparkContext used to create the RDD.\n\n• mean – mean for the log Normal distribution\n\n• std – std for the log Normal distribution\n\n• size – Size of the RDD.\n\n• numPartitions – Number of partitions in the RDD (default: sc.defaultParallelism).\n\n• seed – Random seed (default: a random long integer).\n\nReturns\n\nRDD of float comprised of i.i.d. samples ~ log N(mean, std).\n\n>>> from math import sqrt, exp\n>>> mean = 0.0\n>>> std = 1.0\n>>> expMean = exp(mean + 0.5 * std * std)\n>>> expStd = sqrt((exp(std * std) - 1.0) * exp(2.0 * mean + std * std))\n>>> x = RandomRDDs.logNormalRDD(sc, mean, std, 1000, seed=2)\n>>> stats = x.stats()\n>>> stats.count()\n1000\n>>> abs(stats.mean() - expMean) < 0.5\nTrue\n>>> from math import sqrt\n>>> abs(stats.stdev() - expStd) < 0.5\nTrue\n\n\nNew in version 1.3.0.\n\nstatic logNormalVectorRDD(sc, mean, std, numRows, numCols, numPartitions=None, seed=None)[source]\n\nGenerates an RDD comprised of vectors containing i.i.d. 
samples drawn from the log normal distribution.\n\nParameters\n• sc – SparkContext used to create the RDD.\n\n• mean – Mean of the log normal distribution\n\n• std – Standard Deviation of the log normal distribution\n\n• numRows – Number of Vectors in the RDD.\n\n• numCols – Number of elements in each Vector.\n\n• numPartitions – Number of partitions in the RDD (default: sc.defaultParallelism).\n\n• seed – Random seed (default: a random long integer).\n\nReturns\n\nRDD of Vector with vectors containing i.i.d. samples ~ log N(mean, std).\n\n>>> import numpy as np\n>>> from math import sqrt, exp\n>>> mean = 0.0\n>>> std = 1.0\n>>> expMean = exp(mean + 0.5 * std * std)\n>>> expStd = sqrt((exp(std * std) - 1.0) * exp(2.0 * mean + std * std))\n>>> m = RandomRDDs.logNormalVectorRDD(sc, mean, std, 100, 100, seed=1).collect()\n>>> mat = np.matrix(m)\n>>> mat.shape\n(100, 100)\n>>> abs(mat.mean() - expMean) < 0.1\nTrue\n>>> abs(mat.std() - expStd) < 0.1\nTrue\n\n\nNew in version 1.3.0.\n\nstatic normalRDD(sc, size, numPartitions=None, seed=None)[source]\n\nGenerates an RDD comprised of i.i.d. samples from the standard normal distribution.\n\nTo transform the distribution in the generated RDD from standard normal to some other normal N(mean, sigma^2), use RandomRDDs.normalRDD(sc, n, p, seed).map(lambda v: mean + sigma * v)\n\nParameters\n• sc – SparkContext used to create the RDD.\n\n• size – Size of the RDD.\n\n• numPartitions – Number of partitions in the RDD (default: sc.defaultParallelism).\n\n• seed – Random seed (default: a random long integer).\n\nReturns\n\nRDD of float comprised of i.i.d. samples ~ N(0.0, 1.0).\n\n>>> x = RandomRDDs.normalRDD(sc, 1000, seed=1)\n>>> stats = x.stats()\n>>> stats.count()\n1000\n>>> abs(stats.mean() - 0.0) < 0.1\nTrue\n>>> abs(stats.stdev() - 1.0) < 0.1\nTrue\n\n\nNew in version 1.1.0.\n\nstatic normalVectorRDD(sc, numRows, numCols, numPartitions=None, seed=None)[source]\n\nGenerates an RDD comprised of vectors containing i.i.d. samples drawn from the standard normal distribution.\n\nParameters\n• sc – SparkContext used to create the RDD.\n\n• numRows – Number of Vectors in the RDD.\n\n• numCols – Number of elements in each Vector.\n\n• numPartitions – Number of partitions in the RDD (default: sc.defaultParallelism).\n\n• seed – Random seed (default: a random long integer).\n\nReturns\n\nRDD of Vector with vectors containing i.i.d. samples ~ N(0.0, 1.0).\n\n>>> import numpy as np\n>>> mat = np.matrix(RandomRDDs.normalVectorRDD(sc, 100, 100, seed=1).collect())\n>>> mat.shape\n(100, 100)\n>>> abs(mat.mean() - 0.0) < 0.1\nTrue\n>>> abs(mat.std() - 1.0) < 0.1\nTrue\n\n\nNew in version 1.1.0.\n\nstatic poissonRDD(sc, mean, size, numPartitions=None, seed=None)[source]\n\nGenerates an RDD comprised of i.i.d. samples from the Poisson distribution with the input mean.\n\nParameters\n• sc – SparkContext used to create the RDD.\n\n• mean – Mean, or lambda, for the Poisson distribution.\n\n• size – Size of the RDD.\n\n• numPartitions – Number of partitions in the RDD (default: sc.defaultParallelism).\n\n• seed – Random seed (default: a random long integer).\n\nReturns\n\nRDD of float comprised of i.i.d. 
samples ~ Pois(mean).\n\n>>> mean = 100.0\n>>> x = RandomRDDs.poissonRDD(sc, mean, 1000, seed=2)\n>>> stats = x.stats()\n>>> stats.count()\n1000\n>>> abs(stats.mean() - mean) < 0.5\nTrue\n>>> from math import sqrt\n>>> abs(stats.stdev() - sqrt(mean)) < 0.5\nTrue\n\n\nNew in version 1.1.0.\n\nstatic poissonVectorRDD(sc, mean, numRows, numCols, numPartitions=None, seed=None)[source]\n\nGenerates an RDD comprised of vectors containing i.i.d. samples drawn from the Poisson distribution with the input mean.\n\nParameters\n• sc – SparkContext used to create the RDD.\n\n• mean – Mean, or lambda, for the Poisson distribution.\n\n• numRows – Number of Vectors in the RDD.\n\n• numCols – Number of elements in each Vector.\n\n• numPartitions – Number of partitions in the RDD (default: sc.defaultParallelism)\n\n• seed – Random seed (default: a random long integer).\n\nReturns\n\nRDD of Vector with vectors containing i.i.d. samples ~ Pois(mean).\n\n>>> import numpy as np\n>>> mean = 100.0\n>>> rdd = RandomRDDs.poissonVectorRDD(sc, mean, 100, 100, seed=1)\n>>> mat = np.mat(rdd.collect())\n>>> mat.shape\n(100, 100)\n>>> abs(mat.mean() - mean) < 0.5\nTrue\n>>> from math import sqrt\n>>> abs(mat.std() - sqrt(mean)) < 0.5\nTrue\n\n\nNew in version 1.1.0.\n\nstatic uniformRDD(sc, size, numPartitions=None, seed=None)[source]\n\nGenerates an RDD comprised of i.i.d. samples from the uniform distribution U(0.0, 1.0).\n\nTo transform the distribution in the generated RDD from U(0.0, 1.0) to U(a, b), use RandomRDDs.uniformRDD(sc, n, p, seed).map(lambda v: a + (b - a) * v)\n\nParameters\n• sc – SparkContext used to create the RDD.\n\n• size – Size of the RDD.\n\n• numPartitions – Number of partitions in the RDD (default: sc.defaultParallelism).\n\n• seed – Random seed (default: a random long integer).\n\nReturns\n\nRDD of float comprised of i.i.d. samples ~ U(0.0, 1.0).\n\n>>> x = RandomRDDs.uniformRDD(sc, 100).collect()\n>>> len(x)\n100\n>>> max(x) <= 1.0 and min(x) >= 0.0\nTrue\n>>> RandomRDDs.uniformRDD(sc, 100, 4).getNumPartitions()\n4\n>>> parts = RandomRDDs.uniformRDD(sc, 100, seed=4).getNumPartitions()\n>>> parts == sc.defaultParallelism\nTrue\n\n\nNew in version 1.1.0.
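\n\nThe U(0.0, 1.0) to U(a, b) transformation described above can be sketched as follows (a and b are illustrative values, and sc is assumed to be an existing SparkContext, as in the other examples):\n\n>>> a, b = -1.0, 1.0\n>>> y = RandomRDDs.uniformRDD(sc, 100, seed=3).map(lambda v: a + (b - a) * v).collect()\n>>> max(y) <= b and min(y) >= a\nTrue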
\n\nstatic uniformVectorRDD(sc, numRows, numCols, numPartitions=None, seed=None)[source]\n\nGenerates an RDD comprised of vectors containing i.i.d. samples drawn from the uniform distribution U(0.0, 1.0).\n\nParameters\n• sc – SparkContext used to create the RDD.\n\n• numRows – Number of Vectors in the RDD.\n\n• numCols – Number of elements in each Vector.\n\n• numPartitions – Number of partitions in the RDD.\n\n• seed – Seed for the RNG that generates the seed for the generator in each partition.\n\nReturns\n\nRDD of Vector with vectors containing i.i.d. samples ~ U(0.0, 1.0).\n\n>>> import numpy as np\n>>> mat = np.matrix(RandomRDDs.uniformVectorRDD(sc, 10, 10).collect())\n>>> mat.shape\n(10, 10)\n>>> mat.max() <= 1.0 and mat.min() >= 0.0\nTrue\n>>> RandomRDDs.uniformVectorRDD(sc, 10, 10, 4).getNumPartitions()\n4\n\n\nNew in version 1.1.0.\n\n## pyspark.mllib.recommendation module¶\n\nclass pyspark.mllib.recommendation.MatrixFactorizationModel(java_model)[source]\n\nA matrix factorisation model trained by regularized alternating least-squares.\n\n>>> r1 = (1, 1, 1.0)\n>>> r2 = (1, 2, 2.0)\n>>> r3 = (2, 1, 2.0)\n>>> ratings = sc.parallelize([r1, r2, r3])\n>>> model = ALS.trainImplicit(ratings, 1, seed=10)\n>>> model.predict(2, 2)\n0.4...\n\n>>> testset = sc.parallelize([(1, 2), (1, 1)])\n>>> model = ALS.train(ratings, 2, seed=0)\n>>> model.predictAll(testset).collect()\n[Rating(user=1, product=1, rating=1.0...), Rating(user=1, product=2, rating=1.9...)]\n\n>>> model = ALS.train(ratings, 4, seed=10)\n>>> model.userFeatures().collect()\n[(1, array('d', [...])), (2, array('d', [...]))]\n\n>>> model.recommendUsers(1, 2)\n[Rating(user=2, product=1, rating=1.9...), Rating(user=1, product=1, rating=1.0...)]\n>>> model.recommendProducts(1, 2)\n[Rating(user=1, product=2, rating=1.9...), Rating(user=1, product=1, rating=1.0...)]\n>>> model.rank\n4\n\n>>> first_user = model.userFeatures().take(1)[0]\n>>> latents = first_user[1]\n>>> len(latents)\n4\n\n>>> model.productFeatures().collect()\n[(1, array('d', [...])), (2, array('d', [...]))]\n\n>>> first_product = model.productFeatures().take(1)[0]\n>>> latents = first_product[1]\n>>> len(latents)\n4\n\n>>> products_for_users = model.recommendProductsForUsers(1).collect()\n>>> len(products_for_users)\n2\n>>> products_for_users[0]\n(1, (Rating(user=1, product=2, rating=...),))\n\n>>> users_for_products = model.recommendUsersForProducts(1).collect()\n>>> len(users_for_products)\n2\n>>> users_for_products[0]\n(1, (Rating(user=2, product=1, rating=...),))\n\n>>> model = ALS.train(ratings, 1, nonnegative=True, seed=10)\n>>> model.predict(2, 2)\n3.73...\n\n>>> df = sqlContext.createDataFrame([Rating(1, 1, 1.0), Rating(1, 2, 2.0), Rating(2, 1, 2.0)])\n>>> model = ALS.train(df, 1, nonnegative=True, seed=10)\n>>> model.predict(2, 2)\n3.73...\n\n>>> model = ALS.trainImplicit(ratings, 1, nonnegative=True, seed=10)\n>>> model.predict(2, 2)\n0.4...\n\n>>> import os, tempfile\n>>> path = tempfile.mkdtemp()\n>>> model.save(sc, path)\n>>> sameModel = MatrixFactorizationModel.load(sc, path)\n>>> sameModel.predict(2, 2)\n0.4...\n>>> sameModel.predictAll(testset).collect()\n[Rating(...\n>>> from shutil import rmtree\n>>> try:\n... rmtree(path)\n... except OSError:\n... 
pass\n\n\nNew in version 0.9.0.\n\nclassmethod load(sc, path)[source]\n\nLoad a model from the given path.\n\nNew in version 1.3.1.\n\npredict(user, product)[source]\n\nPredicts rating for the given user and product.\n\nNew in version 0.9.0.\n\npredictAll(user_product)[source]\n\nReturns a list of predicted ratings for input user and product pairs.\n\nNew in version 0.9.0.\n\nproductFeatures()[source]\n\nReturns a paired RDD, where the first element is the product and the second is an array of features corresponding to that product.\n\nNew in version 1.2.0.\n\nproperty rank\n\nRank for the features in this model\n\nNew in version 1.4.0.\n\nrecommendProducts(user, num)[source]\n\nRecommends the top “num” number of products for a given user and returns a list of Rating objects sorted by the predicted rating in descending order.\n\nNew in version 1.4.0.\n\nrecommendProductsForUsers(num)[source]\n\nRecommends the top “num” number of products for all users. The number of recommendations returned per user may be less than “num”.\n\nrecommendUsers(product, num)[source]\n\nRecommends the top “num” number of users for a given product and returns a list of Rating objects sorted by the predicted rating in descending order.\n\nNew in version 1.4.0.\n\nrecommendUsersForProducts(num)[source]\n\nRecommends the top “num” number of users for all products. The number of recommendations returned per product may be less than “num”.\n\nuserFeatures()[source]\n\nReturns a paired RDD, where the first element is the user and the second is an array of features corresponding to that user.\n\nNew in version 1.2.0.\n\nclass pyspark.mllib.recommendation.ALS[source]\n\nAlternating Least Squares matrix factorization\n\nNew in version 0.9.0.\n\nclassmethod train(ratings, rank, iterations=5, lambda_=0.01, blocks=-1, nonnegative=False, seed=None)[source]\n\nTrain a matrix factorization model given an RDD of ratings by users for a subset of products. The ratings matrix is approximated as the product of two lower-rank matrices of a given rank (number of features). To solve for these features, ALS is run iteratively with a configurable level of parallelism.\n\nParameters\n• ratings – RDD of Rating or (userID, productID, rating) tuple.\n\n• rank – Number of features to use (also referred to as the number of latent factors).\n\n• iterations – Number of iterations of ALS. (default: 5)\n\n• lambda_ – Regularization parameter. (default: 0.01)\n\n• blocks – Number of blocks used to parallelize the computation. A value of -1 will use an auto-configured number of blocks. (default: -1)\n\n• nonnegative – A value of True will solve least-squares with nonnegativity constraints. (default: False)\n\n• seed – Random seed for initial matrix factorization model. A value of None will use system time as the seed. (default: None)\n\nNew in version 0.9.0.
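\n\nA minimal usage sketch of train (the ratings RDD reuses the (user, product, rating) tuples from the MatrixFactorizationModel example above; the printed prediction is indicative only):\n\n>>> ratings = sc.parallelize([(1, 1, 1.0), (1, 2, 2.0), (2, 1, 2.0)])\n>>> model = ALS.train(ratings, rank=2, iterations=10, seed=0)\n>>> model.predict(1, 2)\n1.9...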
\n\nclassmethod trainImplicit(ratings, rank, iterations=5, lambda_=0.01, blocks=-1, alpha=0.01, nonnegative=False, seed=None)[source]\n\nTrain a matrix factorization model given an RDD of ‘implicit preferences’ of users for a subset of products. The ratings matrix is approximated as the product of two lower-rank matrices of a given rank (number of features). To solve for these features, ALS is run iteratively with a configurable level of parallelism.\n\nParameters\n• ratings – RDD of Rating or (userID, productID, rating) tuple.\n\n• rank – Number of features to use (also referred to as the number of latent factors).\n\n• iterations – Number of iterations of ALS. (default: 5)\n\n• lambda_ – Regularization parameter. (default: 0.01)\n\n• blocks – Number of blocks used to parallelize the computation. A value of -1 will use an auto-configured number of blocks. (default: -1)\n\n• alpha – A constant used in computing confidence. (default: 0.01)\n\n• nonnegative – A value of True will solve least-squares with nonnegativity constraints. (default: False)\n\n• seed – Random seed for initial matrix factorization model. A value of None will use system time as the seed. (default: None)\n\nNew in version 0.9.0.\n\nclass pyspark.mllib.recommendation.Rating[source]\n\nRepresents a (user, product, rating) tuple.\n\n>>> r = Rating(1, 2, 5.0)\n>>> (r.user, r.product, r.rating)\n(1, 2, 5.0)\n>>> (r[0], r[1], r[2])\n(1, 2, 5.0)\n\n\nNew in version 1.2.0.\n\n## pyspark.mllib.regression module¶\n\nclass pyspark.mllib.regression.LabeledPoint(label, features)[source]\n\nClass that represents the features and labels of a data point.\n\nParameters\n• label – Label for this data point.\n\n• features – Vector of features for this point (NumPy array, list, pyspark.mllib.linalg.SparseVector, or scipy.sparse column matrix).\n\nNote\n\n‘label’ and ‘features’ are accessible as class attributes.\n\nNew in version 1.0.0.\n\nclass pyspark.mllib.regression.LinearModel(weights, intercept)[source]\n\nA linear model that has a vector of coefficients and an intercept.\n\nParameters\n• weights – Weights computed for every feature.\n\n• intercept – Intercept computed for this model.\n\nNew in version 0.9.0.\n\nproperty intercept\n\nIntercept computed for this model.\n\nNew in version 1.0.0.\n\nproperty weights\n\nWeights computed for every feature.\n\nNew in version 1.0.0.\n\nclass pyspark.mllib.regression.LinearRegressionModel(weights, intercept)[source]\n\nA linear regression model derived from a least-squares fit.\n\n>>> from pyspark.mllib.regression import LabeledPoint\n>>> data = [\n... LabeledPoint(0.0, [0.0]),\n... LabeledPoint(1.0, [1.0]),\n... LabeledPoint(3.0, [2.0]),\n... LabeledPoint(2.0, [3.0])\n... ]\n>>> lrm = LinearRegressionWithSGD.train(sc.parallelize(data), iterations=10,\n... initialWeights=np.array([1.0]))\n>>> abs(lrm.predict(np.array([0.0])) - 0) < 0.5\nTrue\n>>> abs(lrm.predict(np.array([1.0])) - 1) < 0.5\nTrue\n>>> abs(lrm.predict(SparseVector(1, {0: 1.0})) - 1) < 0.5\nTrue\n>>> abs(lrm.predict(sc.parallelize([[1.0]])).collect()[0] - 1) < 0.5\nTrue\n>>> import os, tempfile\n>>> path = tempfile.mkdtemp()\n>>> lrm.save(sc, path)\n>>> sameModel = LinearRegressionModel.load(sc, path)\n>>> abs(sameModel.predict(np.array([0.0])) - 0) < 0.5\nTrue\n>>> abs(sameModel.predict(np.array([1.0])) - 1) < 0.5\nTrue\n>>> abs(sameModel.predict(SparseVector(1, {0: 1.0})) - 1) < 0.5\nTrue\n>>> from shutil import rmtree\n>>> try:\n... rmtree(path)\n... except:\n... pass\n>>> data = [\n... LabeledPoint(0.0, SparseVector(1, {0: 0.0})),\n... LabeledPoint(1.0, SparseVector(1, {0: 1.0})),\n... LabeledPoint(3.0, SparseVector(1, {0: 2.0})),\n... LabeledPoint(2.0, SparseVector(1, {0: 3.0}))\n... ]\n>>> lrm = LinearRegressionWithSGD.train(sc.parallelize(data), iterations=10,\n... initialWeights=array([1.0]))\n>>> abs(lrm.predict(array([0.0])) - 0) < 0.5\nTrue\n>>> abs(lrm.predict(SparseVector(1, {0: 1.0})) - 1) < 0.5\nTrue\n>>> lrm = LinearRegressionWithSGD.train(sc.parallelize(data), iterations=10, step=1.0,\n... miniBatchFraction=1.0, initialWeights=array([1.0]), regParam=0.1, regType=\"l2\",\n... 
intercept=True, validateData=True)\n>>> abs(lrm.predict(array([0.0])) - 0) < 0.5\nTrue\n>>> abs(lrm.predict(SparseVector(1, {0: 1.0})) - 1) < 0.5\nTrue\n\n\nNew in version 0.9.0.\n\nproperty intercept\n\nIntercept computed for this model.\n\nNew in version 1.0.0.\n\nclassmethod load(sc, path)[source]\n\nNew in version 1.4.0.\n\npredict(x)\n\nPredict the value of the dependent variable given a vector or an RDD of vectors containing values for the independent variables.\n\nNew in version 0.9.0.\n\nsave(sc, path)[source]\n\nSave a LinearRegressionModel.\n\nNew in version 1.4.0.\n\nproperty weights\n\nWeights computed for every feature.\n\nNew in version 1.0.0.\n\nclass pyspark.mllib.regression.LinearRegressionWithSGD[source]\n\nNew in version 0.9.0.\n\nNote\n\nDeprecated in 2.0.0. Use ml.regression.LinearRegression.\n\nclassmethod train(data, iterations=100, step=1.0, miniBatchFraction=1.0, initialWeights=None, regParam=0.0, regType=None, intercept=False, validateData=True, convergenceTol=0.001)[source]\n\nTrain a linear regression model using Stochastic Gradient Descent (SGD). This solves the least squares regression formulation\n\nf(weights) = 1/(2n) ||A weights - y||^2\n\nwhich is the mean squared error. Here the data matrix has n rows, and the input RDD holds the set of rows of A, each with its corresponding right hand side label y. See also the documentation for the precise formulation.\n\nParameters\n• data – The training data, an RDD of LabeledPoint.\n\n• iterations – The number of iterations. (default: 100)\n\n• step – The step parameter used in SGD. (default: 1.0)\n\n• miniBatchFraction – Fraction of data to be used for each SGD iteration. (default: 1.0)\n\n• initialWeights – The initial weights. (default: None)\n\n• regParam – The regularizer parameter. (default: 0.0)\n\n• regType\n\nThe type of regularizer used for training our model. Supported values:\n\n• ”l1” for using L1 regularization\n\n• ”l2” for using L2 regularization\n\n• None for no regularization (default)\n\n• intercept – Boolean parameter which indicates the use or not of the augmented representation for training data (i.e., whether bias features are activated or not). (default: False)\n\n• validateData – Boolean parameter which indicates if the algorithm should validate data before training. (default: True)\n\n• convergenceTol – A condition which decides iteration termination. (default: 0.001)\n\nNew in version 0.9.0.\n\nclass pyspark.mllib.regression.RidgeRegressionModel(weights, intercept)[source]\n\nA linear regression model derived from a least-squares fit with an l_2 penalty term.\n\n>>> from pyspark.mllib.regression import LabeledPoint\n>>> data = [\n... LabeledPoint(0.0, [0.0]),\n... LabeledPoint(1.0, [1.0]),\n... LabeledPoint(3.0, [2.0]),\n... LabeledPoint(2.0, [3.0])\n... ]\n>>> lrm = RidgeRegressionWithSGD.train(sc.parallelize(data), iterations=10,\n... initialWeights=array([1.0]))\n>>> abs(lrm.predict(np.array([0.0])) - 0) < 0.5\nTrue\n>>> abs(lrm.predict(np.array([1.0])) - 1) < 0.5\nTrue\n>>> abs(lrm.predict(SparseVector(1, {0: 1.0})) - 1) < 0.5\nTrue\n>>> abs(lrm.predict(sc.parallelize([[1.0]])).collect()[0] - 1) < 0.5\nTrue\n>>> import os, tempfile\n>>> path = tempfile.mkdtemp()\n>>> lrm.save(sc, path)\n>>> sameModel = RidgeRegressionModel.load(sc, path)\n>>> abs(sameModel.predict(np.array([0.0])) - 0) < 0.5\nTrue\n>>> abs(sameModel.predict(np.array([1.0])) - 1) < 0.5\nTrue\n>>> abs(sameModel.predict(SparseVector(1, {0: 1.0})) - 1) < 0.5\nTrue\n>>> from shutil import rmtree\n>>> try:\n... rmtree(path)\n... except:\n... pass\n>>> data = [\n... 
LabeledPoint(0.0, SparseVector(1, {0: 0.0})),\n... LabeledPoint(1.0, SparseVector(1, {0: 1.0})),\n... LabeledPoint(3.0, SparseVector(1, {0: 2.0})),\n... LabeledPoint(2.0, SparseVector(1, {0: 3.0}))\n... ]\n>>> lrm = LinearRegressionWithSGD.train(sc.parallelize(data), iterations=10,\n... initialWeights=array([1.0]))\n>>> abs(lrm.predict(np.array([0.0])) - 0) < 0.5\nTrue\n>>> abs(lrm.predict(SparseVector(1, {0: 1.0})) - 1) < 0.5\nTrue\n>>> lrm = RidgeRegressionWithSGD.train(sc.parallelize(data), iterations=10, step=1.0,\n... regParam=0.01, miniBatchFraction=1.0, initialWeights=array([1.0]), intercept=True,\n... validateData=True)\n>>> abs(lrm.predict(np.array([0.0])) - 0) < 0.5\nTrue\n>>> abs(lrm.predict(SparseVector(1, {0: 1.0})) - 1) < 0.5\nTrue\n\n\nNew in version 0.9.0.\n\nproperty intercept\n\nIntercept computed for this model.\n\nNew in version 1.0.0.\n\nclassmethod load(sc, path)[source]\n\nNew in version 1.4.0.\n\npredict(x)\n\nPredict the value of the dependent variable given a vector or an RDD of vectors containing values for the independent variables.\n\nNew in version 0.9.0.\n\nsave(sc, path)[source]\n\nSave a RidgeRegressionModel.\n\nNew in version 1.4.0.\n\nproperty weights\n\nWeights computed for every feature.\n\nNew in version 1.0.0.\n\nclass pyspark.mllib.regression.RidgeRegressionWithSGD[source]\n\nNew in version 0.9.0.\n\nNote\n\nDeprecated in 2.0.0. Use ml.regression.LinearRegression with elasticNetParam = 0.0. Note the default regParam is 0.01 for RidgeRegressionWithSGD, but is 0.0 for LinearRegression.\n\nclassmethod train(data, iterations=100, step=1.0, regParam=0.01, miniBatchFraction=1.0, initialWeights=None, intercept=False, validateData=True, convergenceTol=0.001)[source]\n\nTrain a regression model with L2-regularization using Stochastic Gradient Descent. This solves the l2-regularized least squares regression formulation\n\nf(weights) = 1/(2n) ||A weights - y||^2 + regParam/2 ||weights||^2\n\nHere the data matrix has n rows, and the input RDD holds the set of rows of A, each with its corresponding right hand side label y. See also the documentation for the precise formulation.\n\nParameters\n• data – The training data, an RDD of LabeledPoint.\n\n• iterations – The number of iterations. (default: 100)\n\n• step – The step parameter used in SGD. (default: 1.0)\n\n• regParam – The regularizer parameter. (default: 0.01)\n\n• miniBatchFraction – Fraction of data to be used for each SGD iteration. (default: 1.0)\n\n• initialWeights – The initial weights. (default: None)\n\n• intercept – Boolean parameter which indicates the use or not of the augmented representation for training data (i.e. whether bias features are activated or not). (default: False)\n\n• validateData – Boolean parameter which indicates if the algorithm should validate data before training. (default: True)\n\n• convergenceTol – A condition which decides iteration termination. (default: 0.001)\n\nNew in version 0.9.0.\n\nclass pyspark.mllib.regression.LassoModel(weights, intercept)[source]\n\nA linear regression model derived from a least-squares fit with an l_1 penalty term.\n\n>>> from pyspark.mllib.regression import LabeledPoint\n>>> data = [\n... LabeledPoint(0.0, [0.0]),\n... LabeledPoint(1.0, [1.0]),\n... LabeledPoint(3.0, [2.0]),\n... LabeledPoint(2.0, [3.0])\n... 
]\n>>> lrm = LassoWithSGD.train(sc.parallelize(data), iterations=10, initialWeights=array([1.0]))\n>>> abs(lrm.predict(np.array([0.0])) - 0) < 0.5\nTrue\n>>> abs(lrm.predict(np.array([1.0])) - 1) < 0.5\nTrue\n>>> abs(lrm.predict(SparseVector(1, {0: 1.0})) - 1) < 0.5\nTrue\n>>> abs(lrm.predict(sc.parallelize([[1.0]])).collect()[0] - 1) < 0.5\nTrue\n>>> import os, tempfile\n>>> path = tempfile.mkdtemp()\n>>> lrm.save(sc, path)\n>>> sameModel = LassoModel.load(sc, path)\n>>> abs(sameModel.predict(np.array([0.0])) - 0) < 0.5\nTrue\n>>> abs(sameModel.predict(np.array([1.0])) - 1) < 0.5\nTrue\n>>> abs(sameModel.predict(SparseVector(1, {0: 1.0})) - 1) < 0.5\nTrue\n>>> from shutil import rmtree\n>>> try:\n... rmtree(path)\n... except:\n... pass\n>>> data = [\n... LabeledPoint(0.0, SparseVector(1, {0: 0.0})),\n... LabeledPoint(1.0, SparseVector(1, {0: 1.0})),\n... LabeledPoint(3.0, SparseVector(1, {0: 2.0})),\n... LabeledPoint(2.0, SparseVector(1, {0: 3.0}))\n... ]\n>>> lrm = LinearRegressionWithSGD.train(sc.parallelize(data), iterations=10,\n... initialWeights=array([1.0]))\n>>> abs(lrm.predict(np.array([0.0])) - 0) < 0.5\nTrue\n>>> abs(lrm.predict(SparseVector(1, {0: 1.0})) - 1) < 0.5\nTrue\n>>> lrm = LassoWithSGD.train(sc.parallelize(data), iterations=10, step=1.0,\n... regParam=0.01, miniBatchFraction=1.0, initialWeights=array([1.0]), intercept=True,\n... validateData=True)\n>>> abs(lrm.predict(np.array([0.0])) - 0) < 0.5\nTrue\n>>> abs(lrm.predict(SparseVector(1, {0: 1.0})) - 1) < 0.5\nTrue\n\n\nNew in version 0.9.0.\n\nproperty intercept\n\nIntercept computed for this model.\n\nNew in version 1.0.0.\n\nclassmethod load(sc, path)[source]\n\nNew in version 1.4.0.\n\npredict(x)\n\nPredict the value of the dependent variable given a vector or an RDD of vectors containing values for the independent variables.\n\nNew in version 0.9.0.\n\nsave(sc, path)[source]\n\nSave a LassoModel.\n\nNew in version 1.4.0.\n\nproperty weights\n\nWeights computed for every feature.\n\nNew in version 1.0.0.\n\nclass pyspark.mllib.regression.LassoWithSGD[source]\n\nNew in version 0.9.0.\n\nNote\n\nDeprecated in 2.0.0. Use ml.regression.LinearRegression with elasticNetParam = 1.0. Note the default regParam is 0.01 for LassoWithSGD, but is 0.0 for LinearRegression.\n\nclassmethod train(data, iterations=100, step=1.0, regParam=0.01, miniBatchFraction=1.0, initialWeights=None, intercept=False, validateData=True, convergenceTol=0.001)[source]\n\nTrain a regression model with L1-regularization using Stochastic Gradient Descent. This solves the l1-regularized least squares regression formulation\n\nf(weights) = 1/(2n) ||A weights - y||^2 + regParam ||weights||_1\n\nHere the data matrix has n rows, and the input RDD holds the set of rows of A, each with its corresponding right hand side label y. See also the documentation for the precise formulation.\n\nParameters\n• data – The training data, an RDD of LabeledPoint.\n\n• iterations – The number of iterations. (default: 100)\n\n• step – The step parameter used in SGD. (default: 1.0)\n\n• regParam – The regularizer parameter. (default: 0.01)\n\n• miniBatchFraction – Fraction of data to be used for each SGD iteration. (default: 1.0)\n\n• initialWeights – The initial weights. (default: None)\n\n• intercept – Boolean parameter which indicates the use or not of the augmented representation for training data (i.e. whether bias features are activated or not). (default: False)\n\n• validateData – Boolean parameter which indicates if the algorithm should validate data before training. 
(default: True)\n\n• convergenceTol – A condition which decides iteration termination. (default: 0.001)\n\nNew in version 0.9.0.\n\nclass pyspark.mllib.regression.IsotonicRegressionModel(boundaries, predictions, isotonic)[source]\n\nRegression model for isotonic regression.\n\nParameters\n• boundaries – Array of boundaries for which predictions are known. Boundaries must be sorted in increasing order.\n\n• predictions – Array of predictions associated to the boundaries at the same index. Results of isotonic regression and therefore monotone.\n\n• isotonic – Indicates whether this is isotonic or antitonic.\n\n>>> data = [(1, 0, 1), (2, 1, 1), (3, 2, 1), (1, 3, 1), (6, 4, 1), (17, 5, 1), (16, 6, 1)]\n>>> irm = IsotonicRegression.train(sc.parallelize(data))\n>>> irm.predict(3)\n2.0\n>>> irm.predict(5)\n16.5\n>>> irm.predict(sc.parallelize([3, 5])).collect()\n[2.0, 16.5]\n>>> import os, tempfile\n>>> path = tempfile.mkdtemp()\n>>> irm.save(sc, path)\n>>> sameModel = IsotonicRegressionModel.load(sc, path)\n>>> sameModel.predict(3)\n2.0\n>>> sameModel.predict(5)\n16.5\n>>> from shutil import rmtree\n>>> try:\n... rmtree(path)\n... except OSError:\n... pass\n\n\nNew in version 1.4.0.\n\nclassmethod load(sc, path)[source]\n\nNew in version 1.4.0.\n\npredict(x)[source]\n\nPredict labels for provided features using a piecewise linear function.\n\n1) If x exactly matches a boundary then associated prediction is returned. In case there are multiple predictions with the same boundary then one of them is returned. Which one is undefined (same as java.util.Arrays.binarySearch).\n\n2) If x is lower or higher than all boundaries then first or last prediction is returned respectively. In case there are multiple predictions with the same boundary then the lowest or highest is returned respectively.\n\n3) If x falls between two values in boundary array then prediction is treated as piecewise linear function and interpolated value is returned. In case there are multiple values with the same boundary then the same rules as in 2) are used.\n\nParameters\n\nx – Feature or RDD of Features to be labeled.\n\nNew in version 1.4.0.\n\nsave(sc, path)[source]\n\nSave an IsotonicRegressionModel.\n\nNew in version 1.4.0.\n\nclass pyspark.mllib.regression.IsotonicRegression[source]\n\nIsotonic regression. Currently implemented using parallelized pool adjacent violators algorithm. Only univariate (single feature) algorithm supported.\n\nSequential PAV implementation based on:\n\nTibshirani, Ryan J., Holger Hoefling, and Robert Tibshirani. “Nearly-isotonic regression.” Technometrics 53.1 (2011): 54-61. Available from http://www.stat.cmu.edu/~ryantibs/papers/neariso.pdf\n\nSequential PAV parallelization based on:\n\nKearsley, Anthony J., Richard A. Tapia, and Michael W. Trosset. “An approach to parallelizing isotonic regression.” Applied Mathematics and Parallel Computing. Physica-Verlag HD, 1996. 141-147. Available from http://softlib.rice.edu/pub/CRPC-TRs/reports/CRPC-TR96640.pdf\n\nNew in version 1.4.0.\n\nclassmethod train(data, isotonic=True)[source]\n\nTrain an isotonic regression model on the given data.\n\nParameters\n• data – RDD of (label, feature, weight) tuples.\n\n• isotonic – Whether this is isotonic (which is default) or antitonic. 
(default: True)\n\nNew in version 1.4.0.\n\nclass pyspark.mllib.regression.StreamingLinearAlgorithm(model)[source]\n\nBase class that has to be inherited by any StreamingLinearAlgorithm.\n\nPrevents reimplementation of methods predictOn and predictOnValues.\n\nNew in version 1.5.0.\n\nlatestModel()[source]\n\nReturns the latest model.\n\nNew in version 1.5.0.\n\npredictOn(dstream)[source]\n\nUse the model to make predictions on batches of data from a DStream.\n\nReturns\n\nDStream containing predictions.\n\nNew in version 1.5.0.\n\npredictOnValues(dstream)[source]\n\nUse the model to make predictions on the values of a DStream and carry over its keys.\n\nReturns\n\nDStream containing the input keys and the predictions as values.\n\nNew in version 1.5.0.\n\nclass pyspark.mllib.regression.StreamingLinearRegressionWithSGD(stepSize=0.1, numIterations=50, miniBatchFraction=1.0, convergenceTol=0.001)[source]\n\nTrain or predict a linear regression model on streaming data. Training uses Stochastic Gradient Descent to update the model based on each new batch of incoming data from a DStream (see LinearRegressionWithSGD for model equation).\n\nEach batch of data is assumed to be an RDD of LabeledPoints. The number of data points per batch can vary, but the number of features must be constant. An initial weight vector must be provided.\n\nParameters\n• stepSize – Step size for each iteration of gradient descent. (default: 0.1)\n\n• numIterations – Number of iterations run for each batch of data. (default: 50)\n\n• miniBatchFraction – Fraction of each batch of data to use for updates. (default: 1.0)\n\n• convergenceTol – Value used to determine when to terminate iterations. (default: 0.001)\n\nNew in version 1.5.0.\n\nlatestModel()\n\nReturns the latest model.\n\nNew in version 1.5.0.\n\npredictOn(dstream)\n\nUse the model to make predictions on batches of data from a DStream.\n\nReturns\n\nDStream containing predictions.\n\nNew in version 1.5.0.\n\npredictOnValues(dstream)\n\nUse the model to make predictions on the values of a DStream and carry over its keys.\n\nReturns\n\nDStream containing the input keys and the predictions as values.\n\nNew in version 1.5.0.\n\nsetInitialWeights(initialWeights)[source]\n\nSet the initial value of weights.\n\nThis must be set before running trainOn and predictOn.\n\nNew in version 1.5.0.\n\ntrainOn(dstream)[source]\n\nTrain the model on the incoming dstream.\n\nNew in version 1.5.0.
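\n\nA minimal usage sketch of the streaming workflow (trainingStream and testStream are assumed to be existing DStreams of LabeledPoint built from a StreamingContext; they are illustrative names, not part of this module):\n\n>>> model = StreamingLinearRegressionWithSGD(stepSize=0.2, numIterations=25)\n>>> model.setInitialWeights([0.0])\n>>> model.trainOn(trainingStream)\n>>> predictions = model.predictOnValues(\n... testStream.map(lambda lp: (lp.label, lp.features)))\n\nThe model is then updated with every incoming batch, and predictions carries (label, predicted value) pairs once the StreamingContext is started.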
## pyspark.mllib.stat module¶\n\nPython package for statistical functions in MLlib.\n\nclass pyspark.mllib.stat.Statistics[source]\nstatic chiSqTest(observed, expected=None)[source]\n\nIf observed is Vector, conduct Pearson’s chi-squared goodness of fit test of the observed data against the expected distribution, or against the uniform distribution (by default), with each category having an expected frequency of 1 / len(observed).\n\nIf observed is a matrix, conduct Pearson’s independence test on the input contingency matrix, which cannot contain negative entries or columns or rows that sum up to 0.\n\nIf observed is an RDD of LabeledPoint, conduct Pearson’s independence test for every feature against the label across the input RDD. For each feature, the (feature, label) pairs are converted into a contingency matrix for which the chi-squared statistic is computed. All label and feature values must be categorical.\n\nNote\n\nobserved cannot contain negative values\n\nParameters\n• observed – it could be a vector containing the observed categorical counts/relative frequencies, or the contingency matrix (containing either counts or relative frequencies), or an RDD of LabeledPoint containing the labeled dataset with categorical features. Real-valued features will be treated as categorical for each distinct value.\n\n• expected – Vector containing the expected categorical counts/relative frequencies. expected is rescaled if the expected sum differs from the observed sum.\n\nReturns\n\nChiSquaredTest object containing the test statistic, degrees of freedom, p-value, the method used, and the null hypothesis.\n\n>>> from pyspark.mllib.linalg import Vectors, Matrices\n>>> observed = Vectors.dense([4, 6, 5])\n>>> pearson = Statistics.chiSqTest(observed)\n>>> print(pearson.statistic)\n0.4\n>>> pearson.degreesOfFreedom\n2\n>>> print(round(pearson.pValue, 4))\n0.8187\n>>> pearson.method\n'pearson'\n>>> pearson.nullHypothesis\n'observed follows the same distribution as expected.'\n\n>>> observed = Vectors.dense([21, 38, 43, 80])\n>>> expected = Vectors.dense([3, 5, 7, 20])\n>>> pearson = Statistics.chiSqTest(observed, expected)\n>>> print(round(pearson.pValue, 4))\n0.0027\n\n>>> data = [40.0, 24.0, 29.0, 56.0, 32.0, 42.0, 31.0, 10.0, 0.0, 30.0, 15.0, 12.0]\n>>> chi = Statistics.chiSqTest(Matrices.dense(3, 4, data))\n>>> print(round(chi.statistic, 4))\n21.9958\n\n>>> data = [LabeledPoint(0.0, Vectors.dense([0.5, 10.0])),\n... LabeledPoint(0.0, Vectors.dense([1.5, 20.0])),\n... LabeledPoint(1.0, Vectors.dense([1.5, 30.0])),\n... LabeledPoint(0.0, Vectors.dense([3.5, 30.0])),\n... LabeledPoint(0.0, Vectors.dense([3.5, 40.0])),\n... LabeledPoint(1.0, Vectors.dense([3.5, 40.0])),]\n>>> rdd = sc.parallelize(data, 4)\n>>> chi = Statistics.chiSqTest(rdd)\n>>> print(chi[0].statistic)\n0.75\n>>> print(chi[1].statistic)\n1.5\n\nstatic colStats(rdd)[source]\n\nComputes column-wise summary statistics for the input RDD[Vector].\n\nParameters\n\nrdd – an RDD[Vector] for which column-wise summary statistics are to be computed.\n\nReturns\n\nMultivariateStatisticalSummary object containing column-wise summary statistics.\n\n>>> from pyspark.mllib.linalg import Vectors\n>>> rdd = sc.parallelize([Vectors.dense([2, 0, 0, -2]),\n... Vectors.dense([4, 5, 0, 3]),\n... Vectors.dense([6, 7, 0, 8])])\n>>> cStats = Statistics.colStats(rdd)\n>>> cStats.mean()\narray([ 4., 4., 0., 3.])\n>>> cStats.variance()\narray([ 4., 13., 0., 25.])\n>>> cStats.count()\n3\n>>> cStats.numNonzeros()\narray([ 3., 2., 0., 3.])\n>>> cStats.max()\narray([ 6., 7., 0., 8.])\n>>> cStats.min()\narray([ 2., 0., 0., -2.])\n\nstatic corr(x, y=None, method=None)[source]\n\nCompute the correlation (matrix) for the input RDD(s) using the specified method. Methods currently supported: pearson (default), spearman.\n\nIf a single RDD of Vectors is passed in, a correlation matrix comparing the columns in the input RDD is returned. Use method= to specify the method to be used for single RDD input. If two RDDs of floats are passed in, a single float is returned.\n\nParameters\n• x – an RDD of vector for which the correlation matrix is to be computed, or an RDD of float of the same cardinality as y when y is specified.\n\n• y – an RDD of float of the same cardinality as x.\n\n• method – String specifying the method to use for computing correlation. 
Supported: pearson (default), spearman\n\nReturns\n\nCorrelation matrix comparing columns in x.\n\n>>> x = sc.parallelize([1.0, 0.0, -2.0], 2)\n>>> y = sc.parallelize([4.0, 5.0, 3.0], 2)\n>>> zeros = sc.parallelize([0.0, 0.0, 0.0], 2)\n>>> abs(Statistics.corr(x, y) - 0.6546537) < 1e-7\nTrue\n>>> Statistics.corr(x, y) == Statistics.corr(x, y, \"pearson\")\nTrue\n>>> Statistics.corr(x, y, \"spearman\")\n0.5\n>>> from math import isnan\n>>> isnan(Statistics.corr(x, zeros))\nTrue\n>>> from pyspark.mllib.linalg import Vectors\n>>> rdd = sc.parallelize([Vectors.dense([1, 0, 0, -2]), Vectors.dense([4, 5, 0, 3]),\n... Vectors.dense([6, 7, 0, 8]), Vectors.dense([9, 0, 0, 1])])\n>>> pearsonCorr = Statistics.corr(rdd)\n>>> print(str(pearsonCorr).replace('nan', 'NaN'))\n[[ 1. 0.05564149 NaN 0.40047142]\n[ 0.05564149 1. NaN 0.91359586]\n[ NaN NaN 1. NaN]\n[ 0.40047142 0.91359586 NaN 1. ]]\n>>> spearmanCorr = Statistics.corr(rdd, method=\"spearman\")\n>>> print(str(spearmanCorr).replace('nan', 'NaN'))\n[[ 1. 0.10540926 NaN 0.4 ]\n[ 0.10540926 1. NaN 0.9486833 ]\n[ NaN NaN 1. NaN]\n[ 0.4 0.9486833 NaN 1. ]]\n>>> try:\n... Statistics.corr(rdd, \"spearman\")\n... print(\"Method name as second argument without 'method=' shouldn't be allowed.\")\n... except TypeError:\n... pass\n\nstatic kolmogorovSmirnovTest(data, distName='norm', *params)[source]\n\nPerforms the Kolmogorov-Smirnov (KS) test for data sampled from a continuous distribution. It tests the null hypothesis that the data is generated from a particular distribution.\n\nThe given data is sorted and the Empirical Cumulative Distribution Function (ECDF) is calculated which for a given point is the number of points having a CDF value lesser than it divided by the total number of points.\n\nSince the data is sorted, this is a step function that rises by (1 / length of data) for every ordered point.\n\nThe KS statistic gives us the maximum distance between the ECDF and the CDF. Intuitively if this statistic is large, the probability that the null hypothesis is true becomes small. For specific details of the implementation, please have a look at the Scala documentation.\n\nParameters\n• data – RDD, samples from the data\n\n• distName – string, currently only “norm” is supported. (Normal distribution) to calculate the theoretical distribution of the data.\n\n• params – additional values which need to be provided for a certain distribution. 
If not provided, the default values are used.\n\nReturns\n\nKolmogorovSmirnovTestResult object containing the test statistic, degrees of freedom, p-value, the method used, and the null hypothesis.\n\n>>> kstest = Statistics.kolmogorovSmirnovTest\n>>> data = sc.parallelize([-1.0, 0.0, 1.0])\n>>> ksmodel = kstest(data, \"norm\")\n>>> print(round(ksmodel.pValue, 3))\n1.0\n>>> print(round(ksmodel.statistic, 3))\n0.175\n>>> ksmodel.nullHypothesis\n'Sample follows theoretical distribution'\n\n>>> data = sc.parallelize([2.0, 3.0, 4.0])\n>>> ksmodel = kstest(data, \"norm\", 3.0, 1.0)\n>>> print(round(ksmodel.pValue, 3))\n1.0\n>>> print(round(ksmodel.statistic, 3))\n0.175\n\nclass pyspark.mllib.stat.MultivariateStatisticalSummary(java_model)[source]\n\nTrait for multivariate statistical summary of a data matrix.\n\ncount()[source]\nmax()[source]\nmean()[source]\nmin()[source]\nnormL1()[source]\nnormL2()[source]\nnumNonzeros()[source]\nvariance()[source]\nclass pyspark.mllib.stat.ChiSqTestResult(java_model)[source]\n\nContains test results for the chi-squared hypothesis test.\n\nproperty method\n\nName of the test method\n\nclass pyspark.mllib.stat.MultivariateGaussian[source]\n\nRepresents a (mu, sigma) tuple\n\n>>> m = MultivariateGaussian(Vectors.dense([11,12]),DenseMatrix(2, 2, (1.0, 3.0, 5.0, 2.0)))\n>>> (m.mu, m.sigma.toArray())\n(DenseVector([11.0, 12.0]), array([[ 1., 5.],[ 3., 2.]]))\n>>> (m[0], m[1].toArray())\n(DenseVector([11.0, 12.0]), array([[ 1., 5.],[ 3., 2.]]))\n\nclass pyspark.mllib.stat.KernelDensity[source]\n\nEstimate probability density at required points given an RDD of samples from the population.\n\n>>> kd = KernelDensity()\n>>> sample = sc.parallelize([0.0, 1.0])\n>>> kd.setSample(sample)\n>>> kd.estimate([0.0, 1.0])\narray([ 0.12938758, 0.12938758])\n\nestimate(points)[source]\n\nEstimate the probability density at points.\n\nsetBandwidth(bandwidth)[source]\n\nSet bandwidth of each sample. Defaults to 1.0.\n\nsetSample(sample)[source]\n\nSet sample points from the population. Should be an RDD.
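\n\nA sketch of adjusting the bandwidth before estimating (the bandwidth of 3.0 is an illustrative value; the exact densities depend on the chosen bandwidth, so the output is elided here):\n\n>>> kd = KernelDensity()\n>>> kd.setSample(sc.parallelize([0.0, 1.0]))\n>>> kd.setBandwidth(3.0)\n>>> kd.estimate([0.0, 1.0])\narray([...])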
Labels should take values {0, 1, …, numClasses-1}.\n\n• numClasses – Number of classes for classification.\n\n• categoricalFeaturesInfo – Map storing arity of categorical features. An entry (n -> k) indicates that feature n is categorical with k categories indexed from 0: {0, 1, …, k-1}.\n\n• impurity – Criterion used for information gain calculation. Supported values: “gini” or “entropy”. (default: “gini”)\n\n• maxDepth – Maximum depth of tree (e.g. depth 0 means 1 leaf node, depth 1 means 1 internal node + 2 leaf nodes). (default: 5)\n\n• maxBins – Number of bins used for finding splits at each node. (default: 32)\n\n• minInstancesPerNode – Minimum number of instances required at child nodes to create the parent split. (default: 1)\n\n• minInfoGain – Minimum info gain required to create a split. (default: 0.0)\n\nReturns\n\nDecisionTreeModel.\n\nExample usage:\n\n>>> from numpy import array\n>>> from pyspark.mllib.regression import LabeledPoint\n>>> from pyspark.mllib.tree import DecisionTree\n>>>\n>>> data = [\n... LabeledPoint(0.0, [0.0]),\n... LabeledPoint(1.0, [1.0]),\n... LabeledPoint(1.0, [2.0]),\n... LabeledPoint(1.0, [3.0])\n... ]\n>>> model = DecisionTree.trainClassifier(sc.parallelize(data), 2, {})\n>>> print(model)\nDecisionTreeModel classifier of depth 1 with 3 nodes\n\n>>> print(model.toDebugString())\nDecisionTreeModel classifier of depth 1 with 3 nodes\nIf (feature 0 <= 0.5)\nPredict: 0.0\nElse (feature 0 > 0.5)\nPredict: 1.0\n\n>>> model.predict(array([1.0]))\n1.0\n>>> model.predict(array([0.0]))\n0.0\n>>> rdd = sc.parallelize([[1.0], [0.0]])\n>>> model.predict(rdd).collect()\n[1.0, 0.0]\n\n\nNew in version 1.1.0.\n\nclassmethod trainRegressor(data, categoricalFeaturesInfo, impurity='variance', maxDepth=5, maxBins=32, minInstancesPerNode=1, minInfoGain=0.0)[source]\n\nTrain a decision tree model for regression.\n\nParameters\n• data – Training data: RDD of LabeledPoint. Labels are real numbers.\n\n• categoricalFeaturesInfo – Map storing arity of categorical features. An entry (n -> k) indicates that feature n is categorical with k categories indexed from 0: {0, 1, …, k-1}.\n\n• impurity – Criterion used for information gain calculation. The only supported value for regression is “variance”. (default: “variance”)\n\n• maxDepth – Maximum depth of tree (e.g. depth 0 means 1 leaf node, depth 1 means 1 internal node + 2 leaf nodes). (default: 5)\n\n• maxBins – Number of bins used for finding splits at each node. (default: 32)\n\n• minInstancesPerNode – Minimum number of instances required at child nodes to create the parent split. (default: 1)\n\n• minInfoGain – Minimum info gain required to create a split. (default: 0.0)\n\nReturns\n\nDecisionTreeModel.\n\nExample usage:\n\n>>> from pyspark.mllib.regression import LabeledPoint\n>>> from pyspark.mllib.tree import DecisionTree\n>>> from pyspark.mllib.linalg import SparseVector\n>>>\n>>> sparse_data = [\n... LabeledPoint(0.0, SparseVector(2, {0: 0.0})),\n... LabeledPoint(1.0, SparseVector(2, {1: 1.0})),\n... LabeledPoint(0.0, SparseVector(2, {0: 0.0})),\n... LabeledPoint(1.0, SparseVector(2, {1: 2.0}))\n... 
]\n>>>\n>>> model = DecisionTree.trainRegressor(sc.parallelize(sparse_data), {})\n>>> model.predict(SparseVector(2, {1: 1.0}))\n1.0\n>>> model.predict(SparseVector(2, {1: 0.0}))\n0.0\n>>> rdd = sc.parallelize([[0.0, 1.0], [0.0, 0.0]])\n>>> model.predict(rdd).collect()\n[1.0, 0.0]\n\n\nNew in version 1.1.0.\n\nclass pyspark.mllib.tree.RandomForestModel(java_model)[source]\n\nRepresents a random forest model.\n\nNew in version 1.2.0.\n\ncall(name, *a)\n\nCall method of java_model\n\nclassmethod load(sc, path)\n\nLoad a model from the given path.\n\nNew in version 1.3.0.\n\nnumTrees()\n\nGet number of trees in ensemble.\n\nNew in version 1.3.0.\n\npredict(x)\n\nPredict values for a single data point or an RDD of points using the model trained.\n\nNote\n\nIn Python, predict cannot currently be used within an RDD transformation or action. Call predict directly on the RDD instead.\n\nNew in version 1.3.0.\n\nsave(sc, path)\n\nSave this model to the given path.\n\nNew in version 1.3.0.\n\ntoDebugString()\n\nFull model\n\nNew in version 1.3.0.\n\ntotalNumNodes()\n\nGet total number of nodes, summed over all trees in the ensemble.\n\nNew in version 1.3.0.\n\nclass pyspark.mllib.tree.RandomForest[source]\n\nLearning algorithm for a random forest model for classification or regression.\n\nNew in version 1.2.0.\n\nsupportedFeatureSubsetStrategies = ('auto', 'all', 'sqrt', 'log2', 'onethird')\nclassmethod trainClassifier(data, numClasses, categoricalFeaturesInfo, numTrees, featureSubsetStrategy='auto', impurity='gini', maxDepth=4, maxBins=32, seed=None)[source]\n\nTrain a random forest model for binary or multiclass classification.\n\nParameters\n• data – Training dataset: RDD of LabeledPoint. Labels should take values {0, 1, …, numClasses-1}.\n\n• numClasses – Number of classes for classification.\n\n• categoricalFeaturesInfo – Map storing arity of categorical features. An entry (n -> k) indicates that feature n is categorical with k categories indexed from 0: {0, 1, …, k-1}.\n\n• numTrees – Number of trees in the random forest.\n\n• featureSubsetStrategy – Number of features to consider for splits at each node. Supported values: “auto”, “all”, “sqrt”, “log2”, “onethird”. If “auto” is set, this parameter is set based on numTrees: if numTrees == 1, set to “all”; if numTrees > 1 (forest) set to “sqrt”. (default: “auto”)\n\n• impurity – Criterion used for information gain calculation. Supported values: “gini” or “entropy”. (default: “gini”)\n\n• maxDepth – Maximum depth of tree (e.g. depth 0 means 1 leaf node, depth 1 means 1 internal node + 2 leaf nodes). (default: 4)\n\n• maxBins – Maximum number of bins used for splitting features. (default: 32)\n\n• seed – Random seed for bootstrapping and choosing feature subsets. Set as None to generate seed based on system time. (default: None)\n\nReturns\n\nRandomForestModel that can be used for prediction.\n\nExample usage:\n\n>>> from pyspark.mllib.regression import LabeledPoint\n>>> from pyspark.mllib.tree import RandomForest\n>>>\n>>> data = [\n... LabeledPoint(0.0, [0.0]),\n... LabeledPoint(0.0, [1.0]),\n... LabeledPoint(1.0, [2.0]),\n... LabeledPoint(1.0, [3.0])\n... 
]\n>>> model = RandomForest.trainClassifier(sc.parallelize(data), 2, {}, 3, seed=42)\n>>> model.numTrees()\n3\n>>> model.totalNumNodes()\n7\n>>> print(model)\nTreeEnsembleModel classifier with 3 trees\n\n>>> print(model.toDebugString())\nTreeEnsembleModel classifier with 3 trees\n\nTree 0:\nPredict: 1.0\nTree 1:\nIf (feature 0 <= 1.5)\nPredict: 0.0\nElse (feature 0 > 1.5)\nPredict: 1.0\nTree 2:\nIf (feature 0 <= 1.5)\nPredict: 0.0\nElse (feature 0 > 1.5)\nPredict: 1.0\n\n>>> model.predict([2.0])\n1.0\n>>> model.predict([0.0])\n0.0\n>>> rdd = sc.parallelize([[3.0], [1.0]])\n>>> model.predict(rdd).collect()\n[1.0, 0.0]\n\n\nNew in version 1.2.0.\n\nclassmethod trainRegressor(data, categoricalFeaturesInfo, numTrees, featureSubsetStrategy='auto', impurity='variance', maxDepth=4, maxBins=32, seed=None)[source]\n\nTrain a random forest model for regression.\n\nParameters\n• data – Training dataset: RDD of LabeledPoint. Labels are real numbers.\n\n• categoricalFeaturesInfo – Map storing arity of categorical features. An entry (n -> k) indicates that feature n is categorical with k categories indexed from 0: {0, 1, …, k-1}.\n\n• numTrees – Number of trees in the random forest.\n\n• featureSubsetStrategy – Number of features to consider for splits at each node. Supported values: “auto”, “all”, “sqrt”, “log2”, “onethird”. If “auto” is set, this parameter is set based on numTrees: if numTrees == 1, set to “all”; if numTrees > 1 (forest) set to “onethird” for regression. (default: “auto”)\n\n• impurity – Criterion used for information gain calculation. The only supported value for regression is “variance”. (default: “variance”)\n\n• maxDepth – Maximum depth of tree (e.g. depth 0 means 1 leaf node, depth 1 means 1 internal node + 2 leaf nodes). (default: 4)\n\n• maxBins – Maximum number of bins used for splitting features. (default: 32)\n\n• seed – Random seed for bootstrapping and choosing feature subsets. Set as None to generate seed based on system time. (default: None)\n\nReturns\n\nRandomForestModel that can be used for prediction.\n\nExample usage:\n\n>>> from pyspark.mllib.regression import LabeledPoint\n>>> from pyspark.mllib.tree import RandomForest\n>>> from pyspark.mllib.linalg import SparseVector\n>>>\n>>> sparse_data = [\n... LabeledPoint(0.0, SparseVector(2, {0: 1.0})),\n... LabeledPoint(1.0, SparseVector(2, {1: 1.0})),\n... LabeledPoint(0.0, SparseVector(2, {0: 1.0})),\n... LabeledPoint(1.0, SparseVector(2, {1: 2.0}))\n... ]\n>>>\n>>> model = RandomForest.trainRegressor(sc.parallelize(sparse_data), {}, 2, seed=42)\n>>> model.numTrees()\n2\n>>> model.totalNumNodes()\n4\n>>> model.predict(SparseVector(2, {1: 1.0}))\n1.0\n>>> model.predict(SparseVector(2, {0: 1.0}))\n0.5\n>>> rdd = sc.parallelize([[0.0, 1.0], [1.0, 0.0]])\n>>> model.predict(rdd).collect()\n[1.0, 0.5]\n\n\nNew in version 1.2.0.\n\nclass pyspark.mllib.tree.GradientBoostedTreesModel(java_model)[source]\n\nNew in version 1.3.0.\n\ncall(name, *a)\n\nCall method of java_model\n\nclassmethod load(sc, path)\n\nLoad a model from the given path.\n\nNew in version 1.3.0.\n\nnumTrees()\n\nGet number of trees in ensemble.\n\nNew in version 1.3.0.\n\npredict(x)\n\nPredict values for a single data point or an RDD of points using the model trained.\n\nNote\n\nIn Python, predict cannot currently be used within an RDD transformation or action. 
Call predict directly on the RDD instead.\n\nNew in version 1.3.0.\n\nsave(sc, path)\n\nSave this model to the given path.\n\nNew in version 1.3.0.\n\ntoDebugString()\n\nFull model\n\nNew in version 1.3.0.\n\ntotalNumNodes()\n\nGet total number of nodes, summed over all trees in the ensemble.\n\nNew in version 1.3.0.\n\nclass pyspark.mllib.tree.GradientBoostedTrees[source]\n\nLearning algorithm for a gradient boosted trees model for classification or regression.\n\nNew in version 1.3.0.\n\nclassmethod trainClassifier(data, categoricalFeaturesInfo, loss='logLoss', numIterations=100, learningRate=0.1, maxDepth=3, maxBins=32)[source]\n\nTrain a gradient-boosted trees model for classification.\n\nParameters\n• data – Training dataset: RDD of LabeledPoint. Labels should take values {0, 1}.\n\n• categoricalFeaturesInfo – Map storing arity of categorical features. An entry (n -> k) indicates that feature n is categorical with k categories indexed from 0: {0, 1, …, k-1}.\n\n• loss – Loss function used for minimization during gradient boosting. Supported values: “logLoss”, “leastSquaresError”, “leastAbsoluteError”. (default: “logLoss”)\n\n• numIterations – Number of iterations of boosting. (default: 100)\n\n• learningRate – Learning rate for shrinking the contribution of each estimator. The learning rate should be in the interval (0, 1]. (default: 0.1)\n\n• maxDepth – Maximum depth of tree (e.g. depth 0 means 1 leaf node, depth 1 means 1 internal node + 2 leaf nodes). (default: 3)\n\n• maxBins – Maximum number of bins used for splitting features. DecisionTree requires maxBins >= max categories. (default: 32)\n\nReturns\n\nGradientBoostedTreesModel that can be used for prediction.\n\nExample usage:\n\n>>> from pyspark.mllib.regression import LabeledPoint\n>>> from pyspark.mllib.tree import GradientBoostedTrees\n>>>\n>>> data = [\n... LabeledPoint(0.0, [0.0]),\n... LabeledPoint(0.0, [1.0]),\n... LabeledPoint(1.0, [2.0]),\n... LabeledPoint(1.0, [3.0])\n... ]\n>>>\n>>> model = GradientBoostedTrees.trainClassifier(sc.parallelize(data), {}, numIterations=10)\n>>> model.numTrees()\n10\n>>> model.totalNumNodes()\n30\n>>> print(model) # it already has newline\nTreeEnsembleModel classifier with 10 trees\n\n>>> model.predict([2.0])\n1.0\n>>> model.predict([0.0])\n0.0\n>>> rdd = sc.parallelize([[2.0], [0.0]])\n>>> model.predict(rdd).collect()\n[1.0, 0.0]\n\n\nNew in version 1.3.0.\n\nclassmethod trainRegressor(data, categoricalFeaturesInfo, loss='leastSquaresError', numIterations=100, learningRate=0.1, maxDepth=3, maxBins=32)[source]\n\nTrain a gradient-boosted trees model for regression.\n\nParameters\n• data – Training dataset: RDD of LabeledPoint. Labels are real numbers.\n\n• categoricalFeaturesInfo – Map storing arity of categorical features. An entry (n -> k) indicates that feature n is categorical with k categories indexed from 0: {0, 1, …, k-1}.\n\n• loss – Loss function used for minimization during gradient boosting. Supported values: “logLoss”, “leastSquaresError”, “leastAbsoluteError”. (default: “leastSquaresError”)\n\n• numIterations – Number of iterations of boosting. (default: 100)\n\n• learningRate – Learning rate for shrinking the contribution of each estimator. The learning rate should be in the interval (0, 1]. (default: 0.1)\n\n• maxDepth – Maximum depth of tree (e.g. depth 0 means 1 leaf node, depth 1 means 1 internal node + 2 leaf nodes). (default: 3)\n\n• maxBins – Maximum number of bins used for splitting features. DecisionTree requires maxBins >= max categories.
(default: 32)\n\nReturns\n\nGradientBoostedTreesModel that can be used for prediction.\n\nExample usage:\n\n>>> from pyspark.mllib.regression import LabeledPoint\n>>> from pyspark.mllib.tree import GradientBoostedTrees\n>>> from pyspark.mllib.linalg import SparseVector\n>>>\n>>> sparse_data = [\n... LabeledPoint(0.0, SparseVector(2, {0: 1.0})),\n... LabeledPoint(1.0, SparseVector(2, {1: 1.0})),\n... LabeledPoint(0.0, SparseVector(2, {0: 1.0})),\n... LabeledPoint(1.0, SparseVector(2, {1: 2.0}))\n... ]\n>>>\n>>> data = sc.parallelize(sparse_data)\n>>> model = GradientBoostedTrees.trainRegressor(data, {}, numIterations=10)\n>>> model.numTrees()\n10\n>>> model.totalNumNodes()\n12\n>>> model.predict(SparseVector(2, {1: 1.0}))\n1.0\n>>> model.predict(SparseVector(2, {0: 1.0}))\n0.0\n>>> rdd = sc.parallelize([[0.0, 1.0], [1.0, 0.0]])\n>>> model.predict(rdd).collect()\n[1.0, 0.0]\n\n\nNew in version 1.3.0.\n\n## pyspark.mllib.util module¶\n\nclass pyspark.mllib.util.JavaLoader[source]\n\nMixin for classes which can load saved models using their Scala implementation.\n\nNew in version 1.3.0.\n\nclassmethod load(sc, path)[source]\n\nLoad a model from the given path.\n\nNew in version 1.3.0.\n\nclass pyspark.mllib.util.JavaSaveable[source]\n\nMixin for models that provide save() through their Scala implementation.\n\nNew in version 1.3.0.\n\nsave(sc, path)[source]\n\nSave this model to the given path.\n\nNew in version 1.3.0.\n\nclass pyspark.mllib.util.LinearDataGenerator[source]\n\nUtils for generating linear data.\n\nNew in version 1.5.0.\n\nstatic generateLinearInput(intercept, weights, xMean, xVariance, nPoints, seed, eps)[source]\n\nParameters\n• intercept – bias factor, the term c in X’w + c\n\n• weights – feature vector, the term w in X’w + c\n\n• xMean – point around which the data X is centered\n\n• xVariance – variance of the given data\n\n• nPoints – number of points to be generated\n\n• seed – random seed\n\n• eps – used to scale the noise; the higher eps is set, the more Gaussian noise is added\n\nReturns\n\na list of LabeledPoints of length nPoints\n\nNew in version 1.5.0.\n\nstatic generateLinearRDD(sc, nexamples, nfeatures, eps, nParts=2, intercept=0.0)[source]\n\nGenerate an RDD of LabeledPoints.\n\nNew in version 1.5.0.\n\nclass pyspark.mllib.util.Loader[source]\n\nMixin for classes which can load saved models from files.\n\nNew in version 1.3.0.\n\nclassmethod load(sc, path)[source]\n\nLoad a model from the given path. The model should have been saved using Saveable.save().\n\nParameters\n• sc – Spark context\n\n• path – Path specifying the directory to which the model was saved.\n\nReturns\n\nmodel instance\n\nclass pyspark.mllib.util.MLUtils[source]\n\nHelper methods to load, save and pre-process data used in MLlib.\n\nNew in version 1.0.0.\n\nstatic appendBias(data)[source]\n\nReturns a new vector with 1.0 (bias) appended to the end of the input vector.\n\nNew in version 1.5.0.\n\nstatic convertMatrixColumnsFromML(dataset, *cols)[source]\n\nConverts matrix columns in an input DataFrame to the pyspark.mllib.linalg.Matrix type from the new pyspark.ml.linalg.Matrix type under the spark.ml package.\n\nParameters\n• dataset – input dataset\n\n• cols – a list of matrix columns to be converted. Old matrix columns will be ignored.
If unspecified, all new matrix columns will be converted except nested ones.\n\nReturns\n\nthe input dataset with new matrix columns converted to the old matrix type\n\n>>> import pyspark\n>>> from pyspark.ml.linalg import Matrices\n>>> from pyspark.mllib.util import MLUtils\n>>> df = spark.createDataFrame(\n... [(0, Matrices.sparse(2, 2, [0, 2, 3], [0, 1, 1], [2, 3, 4]),\n... Matrices.dense(2, 2, range(4)))], [\"id\", \"x\", \"y\"])\n>>> r1 = MLUtils.convertMatrixColumnsFromML(df).first()\n>>> isinstance(r1.x, pyspark.mllib.linalg.SparseMatrix)\nTrue\n>>> isinstance(r1.y, pyspark.mllib.linalg.DenseMatrix)\nTrue\n>>> r2 = MLUtils.convertMatrixColumnsFromML(df, \"x\").first()\n>>> isinstance(r2.x, pyspark.mllib.linalg.SparseMatrix)\nTrue\n>>> isinstance(r2.y, pyspark.ml.linalg.DenseMatrix)\nTrue\n\n\nNew in version 2.0.0.\n\nstatic convertMatrixColumnsToML(dataset, *cols)[source]\n\nConverts matrix columns in an input DataFrame from the pyspark.mllib.linalg.Matrix type to the new pyspark.ml.linalg.Matrix type under the spark.ml package.\n\nParameters\n• dataset – input dataset\n\n• cols – a list of matrix columns to be converted. New matrix columns will be ignored. If unspecified, all old matrix columns will be converted except nested ones.\n\nReturns\n\nthe input dataset with old matrix columns converted to the new matrix type\n\n>>> import pyspark\n>>> from pyspark.mllib.linalg import Matrices\n>>> from pyspark.mllib.util import MLUtils\n>>> df = spark.createDataFrame(\n... [(0, Matrices.sparse(2, 2, [0, 2, 3], [0, 1, 1], [2, 3, 4]),\n... Matrices.dense(2, 2, range(4)))], [\"id\", \"x\", \"y\"])\n>>> r1 = MLUtils.convertMatrixColumnsToML(df).first()\n>>> isinstance(r1.x, pyspark.ml.linalg.SparseMatrix)\nTrue\n>>> isinstance(r1.y, pyspark.ml.linalg.DenseMatrix)\nTrue\n>>> r2 = MLUtils.convertMatrixColumnsToML(df, \"x\").first()\n>>> isinstance(r2.x, pyspark.ml.linalg.SparseMatrix)\nTrue\n>>> isinstance(r2.y, pyspark.mllib.linalg.DenseMatrix)\nTrue\n\n\nNew in version 2.0.0.\n\nstatic convertVectorColumnsFromML(dataset, *cols)[source]\n\nConverts vector columns in an input DataFrame to the pyspark.mllib.linalg.Vector type from the new pyspark.ml.linalg.Vector type under the spark.ml package.\n\nParameters\n• dataset – input dataset\n\n• cols – a list of vector columns to be converted. Old vector columns will be ignored. If unspecified, all new vector columns will be converted except nested ones.\n\nReturns\n\nthe input dataset with new vector columns converted to the old vector type\n\n>>> import pyspark\n>>> from pyspark.ml.linalg import Vectors\n>>> from pyspark.mllib.util import MLUtils\n>>> df = spark.createDataFrame(\n... [(0, Vectors.sparse(2, [1], [1.0]), Vectors.dense(2.0, 3.0))],\n... [\"id\", \"x\", \"y\"])\n>>> r1 = MLUtils.convertVectorColumnsFromML(df).first()\n>>> isinstance(r1.x, pyspark.mllib.linalg.SparseVector)\nTrue\n>>> isinstance(r1.y, pyspark.mllib.linalg.DenseVector)\nTrue\n>>> r2 = MLUtils.convertVectorColumnsFromML(df, \"x\").first()\n>>> isinstance(r2.x, pyspark.mllib.linalg.SparseVector)\nTrue\n>>> isinstance(r2.y, pyspark.ml.linalg.DenseVector)\nTrue\n\n\nNew in version 2.0.0.\n\nstatic convertVectorColumnsToML(dataset, *cols)[source]\n\nConverts vector columns in an input DataFrame from the pyspark.mllib.linalg.Vector type to the new pyspark.ml.linalg.Vector type under the spark.ml package.\n\nParameters\n• dataset – input dataset\n\n• cols – a list of vector columns to be converted. New vector columns will be ignored.
If unspecified, all old vector columns will be converted except nested ones.\n\nReturns\n\nthe input dataset with old vector columns converted to the new vector type\n\n>>> import pyspark\n>>> from pyspark.mllib.linalg import Vectors\n>>> from pyspark.mllib.util import MLUtils\n>>> df = spark.createDataFrame(\n... [(0, Vectors.sparse(2, [1], [1.0]), Vectors.dense(2.0, 3.0))],\n... [\"id\", \"x\", \"y\"])\n>>> r1 = MLUtils.convertVectorColumnsToML(df).first()\n>>> isinstance(r1.x, pyspark.ml.linalg.SparseVector)\nTrue\n>>> isinstance(r1.y, pyspark.ml.linalg.DenseVector)\nTrue\n>>> r2 = MLUtils.convertVectorColumnsToML(df, \"x\").first()\n>>> isinstance(r2.x, pyspark.ml.linalg.SparseVector)\nTrue\n>>> isinstance(r2.y, pyspark.mllib.linalg.DenseVector)\nTrue\n\n\nNew in version 2.0.0.\n\nstatic loadLabeledPoints(sc, path, minPartitions=None)[source]\n\nLoad labeled points saved using RDD.saveAsTextFile.\n\nParameters\n• sc – Spark context\n\n• path – file or directory path in any Hadoop-supported file system URI\n\n• minPartitions – min number of partitions\n\nReturns\n\nlabeled data stored as an RDD of LabeledPoint\n\n>>> from tempfile import NamedTemporaryFile\n>>> from pyspark.mllib.util import MLUtils\n>>> from pyspark.mllib.regression import LabeledPoint\n>>> from pyspark.mllib.linalg import Vectors\n>>> examples = [LabeledPoint(1.1, Vectors.sparse(3, [(0, -1.23), (2, 4.56e-7)])),\n... LabeledPoint(0.0, Vectors.dense([1.01, 2.02, 3.03]))]\n>>> tempFile = NamedTemporaryFile(delete=True)\n>>> tempFile.close()\n>>> sc.parallelize(examples, 1).saveAsTextFile(tempFile.name)\n>>> MLUtils.loadLabeledPoints(sc, tempFile.name).collect()\n[LabeledPoint(1.1, (3,[0,2],[-1.23,4.56e-07])), LabeledPoint(0.0, [1.01,2.02,3.03])]\n\n\nNew in version 1.1.0.\n\nstatic loadLibSVMFile(sc, path, numFeatures=-1, minPartitions=None, multiclass=None)[source]\n\nLoads labeled data in the LIBSVM format into an RDD of LabeledPoint. The LIBSVM format is a text-based format used by LIBSVM and LIBLINEAR. Each line represents a labeled sparse feature vector using the following format:\n\nlabel index1:value1 index2:value2 …\n\nwhere the indices are one-based and in ascending order. This method parses each line into a LabeledPoint, where the feature indices are converted to zero-based.\n\nParameters\n• sc – Spark context\n\n• path – file or directory path in any Hadoop-supported file system URI\n\n• numFeatures – number of features, which will be determined from the input data if a nonpositive value is given.
This is useful when the dataset is already split into multiple files and you want to load them separately, because some features may not be present in certain files, which leads to inconsistent feature dimensions.\n\n• minPartitions – min number of partitions\n\nReturns\n\nlabeled data stored as an RDD of LabeledPoint\n\n>>> from tempfile import NamedTemporaryFile\n>>> from pyspark.mllib.util import MLUtils\n>>> from pyspark.mllib.regression import LabeledPoint\n>>> tempFile = NamedTemporaryFile(delete=True)\n>>> _ = tempFile.write(b\"+1 1:1.0 3:2.0 5:3.0\\n-1\\n-1 2:4.0 4:5.0 6:6.0\")\n>>> tempFile.flush()\n>>> examples = MLUtils.loadLibSVMFile(sc, tempFile.name).collect()\n>>> tempFile.close()\n>>> examples[0]\nLabeledPoint(1.0, (6,[0,2,4],[1.0,2.0,3.0]))\n>>> examples[1]\nLabeledPoint(-1.0, (6,[],[]))\n>>> examples[2]\nLabeledPoint(-1.0, (6,[1,3,5],[4.0,5.0,6.0]))\n\n\nNew in version 1.0.0.\n\nstatic loadVectors(sc, path)[source]\n\nLoads vectors saved using RDD[Vector].saveAsTextFile with the default number of partitions.\n\nNew in version 1.5.0.\n\nstatic saveAsLibSVMFile(data, dir)[source]\n\nSave labeled data in LIBSVM format.\n\nParameters\n• data – an RDD of LabeledPoint to be saved\n\n• dir – directory to save the data\n\n>>> from tempfile import NamedTemporaryFile\n>>> from fileinput import input\n>>> from pyspark.mllib.regression import LabeledPoint\n>>> from pyspark.mllib.linalg import Vectors\n>>> from glob import glob\n>>> from pyspark.mllib.util import MLUtils\n>>> examples = [LabeledPoint(1.1, Vectors.sparse(3, [(0, 1.23), (2, 4.56)])),\n... LabeledPoint(0.0, Vectors.dense([1.01, 2.02, 3.03]))]\n>>> tempFile = NamedTemporaryFile(delete=True)\n>>> tempFile.close()\n>>> MLUtils.saveAsLibSVMFile(sc.parallelize(examples), tempFile.name)\n>>> ''.join(sorted(input(glob(tempFile.name + \"/part-0000*\"))))\n'0.0 1:1.01 2:2.02 3:3.03\\n1.1 1:1.23 3:4.56\\n'\n\n\nNew in version 1.0.0.\n\nclass pyspark.mllib.util.Saveable[source]\n\nMixin for models and transformers which may be saved as files.\n\nNew in version 1.3.0.\n\nsave(sc, path)[source]\n\nSave this model to the given path.\n\nThis saves:
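As an added illustration (not part of the captured page), here is a minimal, runnable sketch tying together the save(sc, path) and load(sc, path) methods shown above for the tree models. It assumes an active SparkContext named `sc`, and the output path is a hypothetical choice:

```python
# Save/load round trip using only methods documented above.
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.tree import DecisionTree, DecisionTreeModel

data = sc.parallelize([LabeledPoint(0.0, [0.0]), LabeledPoint(1.0, [1.0])])
model = DecisionTree.trainClassifier(data, 2, {})

model.save(sc, "/tmp/dtModel")                          # hypothetical path
sameModel = DecisionTreeModel.load(sc, "/tmp/dtModel")  # round trip
assert sameModel.predict([1.0]) == model.predict([1.0])
```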
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.59710044,"math_prob":0.9758285,"size":158417,"snap":"2020-24-2020-29","text_gpt3_token_len":46897,"char_repetition_ratio":0.18054704,"word_repetition_ratio":0.45805317,"special_character_ratio":0.31966266,"punctuation_ratio":0.2675741,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9925713,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-31T03:20:22Z\",\"WARC-Record-ID\":\"<urn:uuid:2a91c1be-b542-438a-994b-1f7d5a36357b>\",\"Content-Length\":\"814523\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b7618a51-f384-4aac-a100-0e4273a9ff91>\",\"WARC-Concurrent-To\":\"<urn:uuid:2af55956-6563-42e5-af31-dd7abfe49d3a>\",\"WARC-IP-Address\":\"40.79.78.1\",\"WARC-Target-URI\":\"https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html?highlight=streaminglinearregressionwithsgd\",\"WARC-Payload-Digest\":\"sha1:7HDVE5OTB5YVW5QXWTPHJJ3KD6LSDYIP\",\"WARC-Block-Digest\":\"sha1:FUXDKUOJESYUZSXX6BXKSU5DUCVLQD4F\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347410745.37_warc_CC-MAIN-20200531023023-20200531053023-00581.warc.gz\"}"}
https://homework.cpm.org/category/CC/textbook/cca/chapter/11/lesson/11.1.1/problem/11-9
[ "", null, "", null, "### Home > CCA > Chapter 11 > Lesson 11.1.1 > Problem11-9\n\n11-9.\n\nWrite an equation or system of equations to solve this problem.\n\nThe number of students attending the spring play was $150$ more than the number of adults attending. Student tickets cost $\\3$ and adult tickets cost $\\5$. A total of $\\4730$ was collected. How many students attended the play?\n\n$685$ students attended" ]
[ null, "https://homework.cpm.org/dist/7d633b3a30200de4995665c02bdda1b8.png", null, "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAfQAAABDCAYAAABqbvfzAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAyRpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuMC1jMDYxIDY0LjE0MDk0OSwgMjAxMC8xMi8wNy0xMDo1NzowMSAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bWxuczp4bXBNTT0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL21tLyIgeG1sbnM6c3RSZWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJjZVJlZiMiIHhtcDpDcmVhdG9yVG9vbD0iQWRvYmUgUGhvdG9zaG9wIENTNS4xIE1hY2ludG9zaCIgeG1wTU06SW5zdGFuY2VJRD0ieG1wLmlpZDo5QzA0RUVFMzVFNDExMUU1QkFCNEYxREYyQTk4OEM5NCIgeG1wTU06RG9jdW1lbnRJRD0ieG1wLmRpZDo5QzA0RUVFNDVFNDExMUU1QkFCNEYxREYyQTk4OEM5NCI+IDx4bXBNTTpEZXJpdmVkRnJvbSBzdFJlZjppbnN0YW5jZUlEPSJ4bXAuaWlkOjlDMDRFRUUxNUU0MTExRTVCQUI0RjFERjJBOTg4Qzk0IiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOjlDMDRFRUUyNUU0MTExRTVCQUI0RjFERjJBOTg4Qzk0Ii8+IDwvcmRmOkRlc2NyaXB0aW9uPiA8L3JkZjpSREY+IDwveDp4bXBtZXRhPiA8P3hwYWNrZXQgZW5kPSJyIj8+RSTQtAAAG9JJREFUeNrsXQmYXEW1Pj09PVtmJjsBDGFXiCKKIBJ2REEQQdaARBBiFFRAnrIoyhqCgLwnEfEpPMAgggsGJG7w2MMuiuwkJDGQINmTycxklu62/r5/0ZWaur3M9GQCc/7vO1/fvrfuvXXr1q3/nFOnqhLZbFYUCoVCoVC8u1GlRaBQKBQKhRK6QqFQKBQKJXSFQqFQKBRK6AqFQqFQKJTQFQqFQqFQQlcoFAqFQqGErlAoFAqFonKoLveE2jM+uTHk+zNGjjZyj5EXqJhgQH3KyClGOo1MNbK2vzOSTWakbmWTjHp+69y2QqFQKBQW85+avvES+kaCKUaOMHK8kcWS9zQkjYzj9l1Gnuj3nCSykuxIaa1VKBQKxbvLQt9I0Gjk30YehtPA2d9tZJGRPYxs0++EnjCaRFe1NC4emSN2hUKhUCiU0MtDjZE3jRwXODaRhP5hI7f1ZyayVRmpWdMoqbb63LZCoVAoFAOFd2tQHHzcWxppChwbxt89+zsTWWOV161okkQ6oTVJoVAoFErovQA8C6OMjA0csy74nSXfn155GA6vXlcj9cuHqnWuUCgUCiX0XqDByOiIUnNu9ThCh/W+T79Z54bEa1c1SnVbjdnW/nOFQqFQKKGXi/cbeR+3Px44PtrZPrw/M1K/vDlSKxQKhUKhUEIvG/tK1IcO7CE9KXVn/v7ZyAFGNqm4dY6hautqpGZNg7rbFQqFQqGE3sv8gtDXOeTt9pMPN/Ixh9CNCS2HVJzQq7JSu3qIJDtTaqErFAqFQgm9FwBZY/z520ZWS9Sfvrdz/AjHeke6RyWaOa6iwJBzuNsTyuYKhUKhUELvFdAn/rREQ9NeN/KkkaN4bAQJ/x7+hy/8RhL+DpVk86p0taRadOy5QqFQKJTQe4NtSNog8aESzdf+RyOfolX+ZSMPSDRbHIBhbXcaaTcyuVKZQP95am2dVHelctsKhUKhUAxGQoeP+hoj1xu5yciFZZwLUv6NRIuwWMKeLdGscRdLFN3+O8lHuY800mbkdiOnSn7CmT4Sukj9imZJZHShOoVCoVAMXkLH/bBc2ywj5xg5wcjnSjgP4803owU+kvsQ8PaskYeMnGbkCu6vd44D15LMT6yIRmLUiZq19WqdKxQKhWJQE/q2Eo0hR7/3GCMLJFoGddciefymkR/zfyN/U7TO20niNhjOTizTwN9/GPmrkfMcsu+ddV6VkVR7nVS31mn/uUKhUCgGNaGDyP9l5F6J3OMdRr5n5FwjH4w55wwjrxj5G/+787dfQwsd/eZf5b46z1IHLqUicVLfzHOR6vYaqepOas1RKBQKxaAldIwXR7/3XIn6wVskcp+D4NEHfomRXbxzDpJorPkPnX2WsDHm/FEeQ/Db13j9as9CF6bDuPSLJLygS4xFns1Z4lYy1encdK+JjA5XUygUCsXgJfQvGblDIrc7VkI71sh2Rg418gKtdFjrdknUCUYmSdTX3u1c533O9uP8vZrKAYLfugKEDpwvkZv/nFIzjGj2mtUNuRnhILWrhkhVV1LXPlcoFArFRocNtR76YUbeMrKElvqJJGlMDvNFWta3GDmGFjf2wa89xchSI0NoqeM6n3KuO4q//5Ro7fPvS34WOZ/Q0ZeO6PoLmPblYpke8crmhtRr1198pSohmaT2nysUCoVi8BH6hySa8AWBaacbSUvUdw7vAJjyK0a+bmSakVVGWiVykSPgDUPVOmlZg/zv4q+d3rXOuQ/c9kdKNFY9ROjAd5nmBiN7SX4IXBCIZI/c7vlkiYS62xUKxYbH/KemayEoCqI/Xe4YKnYKyXO8kZslmhBmUyM/kshNjpXTrpNoARUExX2e5yVI7BCYwwh8m0kLf0vnHm7g22u00LMFCH0l8zSBaRUKhUKhUAvdA4aLoX97FxL19iTVZ0nMcHnDHf5Vh4hB1KOYbpGRtRJN07o/rfKmInm8yMhEEjWC69p4D1x/SMw5mF3uKp77dyN3azVQKBQKhRJ6HqMlH8X+iJHlsn4wW7kAIY+k9b41lYQPkPDx20zLf3zM+bDkEdmO/vUXjbxqZB6tfATGITjvVxK53v+uVUGhUCgUg4rQs15AWCL9jtf+TUrkMM86vyGgfzr3E9sn3WrObzWJFprtZ5z9uOHmRnYzcqCR/WJIHX3wB1GEOYGSgWC4xySKuMc1fm9kHyMLtTooFAqFYtAQet2yJvJxQjLVGelsbn9nnDb25Qg+QzLPRPSbSaZzc59Ho72iKPFkR7VUmbSZmgJG
fO787DtR5bx+xlEefk/ixopqCKA7TOJd7Ql6EPaW/JKrrUyPceyH0HpXKBQKheK9T+gjX9jCsZWz0l3XJV2N7dLZtC43RrtueWN+nXCQfqpb2ke1SMfwVknXduUixhsXDZfGN0fkyD+TSsdb6WZ/d32ndAxtM+SfkM7GDllnrgXNAJO7MPocUfD/TxkvmcRZ5nqnSmkBf5b8ETX/oERD2u7UaqFQKBSK9zyh+y736vaUVLfVSMPbCE5ff4hXDu01UruqIWfNg5xxvHZ1Q2TVGx5PdhbOAqZaradXAOfAI9A+eo20jVljlIeGnMcAln7HsFbpauh8KV3XNaW7oeN2c+1rEunEeEPuXQVvkIAHAHnOol/+DpN+lsnYmWb/v8p1Xkjk1u/QaqVQKBSKjZ7QexB8jsCzBQZ0g+SjrVRrtG4KplB1jPBid3jnfCA3c1tLvQxZNCJH9u+wqSF2XCpd0w3Sv79t9JqPdA5vHZdOdVfB2x6arjVrlIzkulR2yOLmNnMcD5HoGtIxdN3IlrebFozOXb+HghKPL0i0UMxtWq0UCoVC8a4jdAJ907tLNIkMItPB2JgZDtHjz5DofHLEvdFv3SSFJ3gBE6+QaJz569ZDUN2Rst6CKl5naBb6QXcyR+5GMplU98PrRrQuXjt2ec6yr0onc3ey+WhcOFIaI8XgIJuPbFUmaxSOj1V1VafM9bHe+vz1lICsYf2wEgL3va7aolAoFIp3JaFjKVPMwY7JWjaPSYOo8usoLuCixpKoW5R4Lyzmgrnb/8fIn5z1yJO8TjThDAztZHQskU7OHvLvofvVL2/sXrPlMml934qc6z/VWifD5mwqtSuHIP0hhsBnradBGOKnsnCyT+gFACVG54RVKBQKxYCgLzPFYeKY+yUKJNu8QLodSbhYLrXZNXYlmgimVMCC/rREE8P8oKTrJLJ7GgI/VjJVMmzupjLipbHSvHCUjP77VjkyN6RdY6z1qYHz7FaXVhGFQqFQvJcJHdO3wqrdrYxzMIf6LVIZtzQmhil16taLDUE3od8ervjm18fkoutpgcOz8BGtBgqFQqEYrIR+JS30cnGERCupVQJYaAV99sVmo8MSrWfkTHlD4jkijyzwkfQuKBQKhUIxKAkds7JNjDn2N4lWTcPCK/MKWNcIT0/HHEcA3F8kWp0NU7c+GZMO1zi1xDz/l0TLtrr4tqy/trpCoVAoFO9a9CYoDv3YqcB+zNp2vOTHYWNd8wckmnvdBf7vIdHCLCE8Z+RgT+k4wciNJHEXmLK1toByYDGc1vgU/se88F/T169QKBSKwWyhfzSwL03L3J1U5d8S9XPPpcyhzCepJ0pUMtDZfatEAXg+xkq03Gop0eUnG9mV25dIFKGvUCgUCsWgtdBDEe1wky8I7P+NkT95+0DkiB6vr0D+s5JfBqYY4FU4z8i1Ro7ZCN8FFIzNJD+Gvz2QppZeiqxXnp0SnqEuxXJexzSFUMf0uG9cXEKC10tKgWV3nGtUM72ftkviZ9SrYV46me+4Z+qKKSMAK/8hRgLL8S6SwvMcWDQzvascJkuopwm+szYqyA2SH3kRum89v6EE33NrjKLdwLy0Ffh2G4qUg32uVon3YtWxXrWXUEd8FCqftTH765n3cuqEC7zXUczvGyW8W5TzFrwvFmda1k/5wn0wEqelQJ7qWX/XlHC9Jr6z9hLrr0LRKws9tPhJS4FKutaTFjbUcSQcIhO48vcP7F9sZHWJhA58zshvpW/D9SoNNFAIMkRXQ27yHInWkL+ADa2LqTyGCXv+6ciz9GLs7aWfxLT3s4GIAxq8x5n2oALpQCB38X7PeXlw5bNM/2mmfdY59jz/38HjPr7BfFwVk4ejeXxG4NhHeN2XJJr/AOWJlfWOK/IO7D0v8fbv4z0Xnvlv3vNAfsf07+exh6ic+cR5Ae9jPVbYvijwbhDvMZv32jMmz0fy/FsK1P+TmZ9rCjz7VF7nm72ou7vElAfK6RGWq0/4tzL9PwJ1Au/04zH3QnDrLyRaCvkVvtvZRd7tRL7/13gOzv2l9OwGRPndXCBfuO8nipSFfbffKpBmBtNMLXKtk5gOsUTDlKYU/WmhZ2MIvbNCefqQ00BmaG3tE9Nozab2HCLoNY5G7Fp3owNp0T0wpgzFoFLYjB6Mnfn/VeYRDc6lEi0aM9GxEDZhwybcZxeoBfHbYMVT2ABZLX8bCqam/WlMPr4i+eF7Q4rkGaMbtuS76QqUWcJpxOud/HY69cfm91iS6IWedY38xgUsDuXxVd7+/VlvhrNsXmR5oSG+nedMi7EyJ/P4ZCoSqx2PyFjHE5Ry6ppb31c639P2tIirPCX4VxKtBgjMo/W1PZ/9Uzy2wrnODvRWYA6HCQEr3JbDigIWHIJGtyWxX0GPgA+U89Ysq3JRRyXGWrJZx1BA3vYyciiVsLWO8rgd03YG6vBRVODvcu6D7+MevosMFTYowntQcPw7Xt6+4xDnElrmyOsJLG8onU85dXIrJ1+2TXHzdQzzNTNG0Z1MRWwyvYAhq34sy+Ub/BbfiCnT8/jemjYy40PxHrTQQ+iqoFtoNK2PI9kQ7BtDtLDkf+6QiA806D8q4X7PsdFMDED5X83GaIFEa7uPpxxPUsAwv9O9cgZ+xgZ/R/4iNuA2ktN0yc++57pZz2BjEfIQuKMFisUjWCI7xcmDK+PZ+LrXQgO8k5Nmd8fC/j6f3ffQxE3qkw4QKkj8Jv7+kff6MJXDHzLNZVSQfNgpi4VKneuheJjPY8t5MvfPoQJkn/dwrx52eN/Dt0jYq1incc4H+X6XkbAv9JTmDsfrcEGJ5eBiJz4b0OwoE6FvN84zVgz2/UKp2I1ltAOf78tU9A/y6rDN77leHd6dym09CXGYo1TdSDKczfLYieV3GdOc79WhfRwyv5RpbZ14gG3M9Z4HzObrvJh81Xn58pXJcY6XZq8i3w6I+rSYNJ93PAgdou52xQAQ+kBgKt1icV6GIbRKFhS5DhqDtwcg/2igPsftMyVa/jXDjxgW5ZU8dnbAbbmazzWPv3B7TqIS00wLxMeOtH58wHrbtBf5X+TkwZW5bMh90niNx+fTMsJ8BLMc5aAv+CS9Bkv4PHNYlktIpo+wrp8ZOHcij83l/0nOsTbut+X8hkN+9nlej7G0xCGkE7l9Cb0IHSyTu0ggQqKPc69+m5ZoOTiGHoV5zO+kfqzLackHvM7n9g2S78I4WnpOKLXUq8OoEyfxnYEcd2G63aiItbKePM93i/7w7xm5m+lOdK5tn/XPVBiX8ZyX6alq4/UPCTwL7v8vL1+TuB+KcqhLwN77Nf6eUEKZTQ54C1EPz1JaUgw0oW/oRUlg2V5cJE2t89HH4T5q300DUPZoHBpp3TweOD6dpPftwHtKxlhLL3M7zl39TU8Bgqvwq45VWA7K6a6B5VoT2P9bx5rsSx3awfG2LA0cn0Kiv9Xb30yLKMuyWUhLb8uY+6Sc56ktMW9Qlmx/+gOB4w+R3DeR9fvdq0g8C3jfH5dxT6Q71lEGXqVC8MF+qstx5fG04wWqLaH+LCVxAkMdi1e
oWL0WOOde/m7r7NveO+biLXrAzohRxEL5Wu7UK1/p2oyKwTpes4WK+ogSPJH+PBoHSnwMgULRL4Qeck03SnhseiXRzgbxMDZSxQjIRr+jEX8wcBxW0jkFnqm/Yee1XynhaG7sn0Fr3Y+E7o7xSNh+8IXesQdo2XzMs0pgOW1HC/8fZea/EjETbzl5b+jDdWwjG+dpQUAUgsf+GmhA4SlBlwC6CeBih2v1iAq+5yaSWafk+9r9et1CIqnzvrMsLbZVtCi/U+I94fL9AOsBvAD3U2Hqr9EdWQlH2u/rELVfx0PR+weQjLO08oHhzjUk5juxdci2aU1F6sPdVJifCRwL5etAyceCvOwd+yy/ZVjyCGJDtwCi8A8t0Hb+kt/w1x3FxSrcwEyJjw1SKCpiZbkNUKjRapJ8UE9fAGviSoeQYXku4wf+ai8UljQVgNmelfgTiSJJB7rsu6T8/stNaNW6VuC32OgsCxAXgv4w8c+1THc3G3jr3kMU9GllNN7AFWwwk16D9b2YhlJilCrrceiLhZ4sUDcLwbpGf+80pCdy/3SpzOp5SckPLQzFBXQ7+xMBJe0JiVzXeEfnUvF4usg9j3eIK81fBGIhIvxyqVwAq1uXMT/FWueZP8P8WgLzyxJW7OZMm6FX5EQqP4gHedF7t+uKKJZJpwxD9WFXfjdZJ13I6j/Cy9dYenf8fPllfadThw5mHZoRk2d8n2OoKEyi9wWWOUZ9wN3/fxLFZWj/uaLfCT2k9Q7nR+AT+v5s4NNO5QSp3sCPI4TFrNCVBAgGQTBnOhbs1AEue7dhKddDcDLFByL7vyw9o5mHsnFBfy2Gtu1GBeyjtDhmUukpB3EL8/y0DEJ3yyJbobIsFWioD2KjbUdVII5hCZ9tl148R2/ec7H3D+/Xj0jGu7Px372AEjhC8gFwv+bvoxL1Ce9A6/3+CtdlfP+PxRybwW/Px3HSc8hZG7/9s5xyK/ZuE166uHNQhhO8c690lA6LYwKeDHjIEIB7tqeYjGd5tku+L38W0+9PBXtujBJyNQkdVvr/UuGCAYKA1/kyMF5DxSAk9BcC+6C9fs2z8rDvssBHBFxVwPqp7qdnRV6OYkOOhV2WD3DZ9+WDfZtKSZKNACwjuPxulsi1HipTuG2voyJzjuOt+G82pMky84358Z+UvFswUaB+FPKgDFRZHk6yhJvddjesIrmfxkb9mQrlLdGH57CW4mkkzY+TBBbFXOMztEThfXrEsW7RdQOX/cR+IPRuWq7dfKcZEtmdjlLhA11hiB9AVx2i4D9EMjy1l+82UeQcxGu8QuPCkm1XgXwlWc7IF0ZOTAmktYGHs0jCwJtMj2NHSj641QW6l+5gvUM3GQJz0RXWQkLfSqlJsaEI/a8kR/+jQXAV+o7gEkRf4BdjyBxE9KCEg6T6E8v4cR0vPYOjBgJtzsddI4XXhk94FsgvJN//Xw5gZaCf7mj+XyDR+OjeAIQxu49lYPu+OyTvUrWKRZzClw4oA+scS7FURcK6SuGh2JPfQkbyoyKg/F1c5L2Ugg5aZPUSjhOwM9+JxA/Vs+WNbo6LJBri9ouYdLYb4SXvuawCcBjLaWUF6/JKWqpryzgHwai3OSQICxf90RjG+ZyTrt3xMoUwxClnW286vPplFVeLmwsQ+h+db+JNtmeH0ZvldtHVOJb8K3z+JOuntcqhPP1Qes7SZ2daRJ5ukXyA73S2Ux9QalL0Br2xkBBA9ZeYY0fzY/lpDJkDP6FLKjUAz3ujQ2YDjVX8qEfHNFZoQOACnik9I2t7a9kulfUnl7mOjXBvrldXgTKw0elLnEbYTuoyJuacTZ3ycz0WwLiYc6ZQibya/3eSfDQxJtV5lMdhrf+A+xE1vW8FnnEFSQllHJo2eRRJqU16Dvfzgbw9zXNs95Gr6CHP+3H7C95zXeeU38H94G0q1zho8Ej0CSo2/ph7G/W+eUybMc6rD1lHWdk65t7betcOKQhW6XhM8rP8uXBHDZxHb8iD/D2f+6Gc7FqgDOyshlYpvVYpSbGhCd0O8elNANzj1EIH0ipevJGU/Rx6K+okP3TMfS/Q2g8gma8ONKC9xfW0gEAMN/XhOi1lpE1Lz0AsDEeyE7Xc5+x/mL8TAoQKIjuJ2+5qfU84SpAfXTyWFu2+TkNvXaVv0Br7jSP4/6pDin3FUsfiDAUens73PUcKj2e3jf43aFmGukg+T6JEEOTtged6vsBztffxOftSJ9P0PgBwU3/CMyDWkZxPCNSHL3h1QBzP0XHSc6w3vAC7sx17rEi+YO3b2QWP8IwU6+GZS0+DW9b4P9/zBMV5by6nV+g6Cfe3KxQlo7f91a+wgt9awCoKWfbHSt9dmO8VrGUjdj01fFikGGJUS9I6hA3Kd6Uy0dYWi9lgurOR9QYns4FLBOoUvAovelb1+ZJ3PW5FTwkaW7g1f+aR80zWL/R7wmWJvkaMrf86FYGF9LZYPMWG9Bg2pldTYRlH5RPW3WtsNF1X6eUSng4XZT+Lv2OkbxMPZfme9yPBQIGzUd/HOXkBcZQy2uFJWuoXBAh1IrevlfA0txNIdgfwHSxwjkHhCc15kKLy9Eg/fw/38N1/gs/2WYcwf05FBvVkRyp9GP+Ncd8Y5vaW5GeNBG6gVwZu9XtZHkizN89JUZl9roR8WSt9Ar/FQ6lkH+5Y578LnIeI/RlUsnBea8z1URf+UKaCrFBUlNCFHzg+kMvYKMW5YGHJ3yzR0JvVXgPUHEhf7rKmdpUjH0PLuEbcilH93c8PMkFUMmaz+hLFAtbk2bJ+P7V1B5Y6ZrsupkxDQ4CaS3hmt6xPLZBuCQndXmszkqePZ+ideMuziibz3EMCxPQyFZ63A+ckaeH5i6y8SOsObtmjqBRkJD9TnY+H+Qyb0AK8xiub5hiLtNqpey4xoovqFF7ncIcMrKcDBHaHsy/pvOOQJY5vDv26OzvvAwqDndp2ZsxzQcnBzHbbsq5d6NxnP8m7631MjyF06wIfVoa3z9az2oCVPo1K7aFU6OxznMO6jzI8V9aPTH+ZyqXr3XiLRHozy+hG716/ooLgoqlIvv7A+ngg68WmrE9xAYb30usxjnVyRoF7rIkp16GiY9EVG4jQhZYSgt8QbIbpRnciQWXo9kODfZ/0nOjEupum8eNIO/mZ1wt33Q9oSaWdRnCJlD4U6kESjjseGNd4dgO8g8tpBdg5vrtpOaCBn+OlvZ3l83AZStc0elSKWZFX0QouZLV08nqjC3gNkpJ3f2Jq3qmyflBQgiSGYw9IeEz0clpoIL6DmS8ohugT/rX07IKwjeJRJDpEem9BpegR75x2PkMhFze8J6eTIBd75DGNhNEZ4/24hPfw83gTlbOJJJkEy+D2wPtZRpJHw7405tuBBXi8971cwW8t7n2jfqPvfU/nPFiIr0p+oZQQad8Xc715VC7WluF5g7W8jazvIreAgnUWyTLlKaCnsqxQJ7Zk+T7EfS0xyuIEltFeJMc3SMx/jsnXdgXydSYV03rWtWl8f3HBhVA4v0KPwhpHMYIy9XiRMprH72ZlActeoehpcWWz5Q
3/3WrX0wZ7kUmiKjjC62w25NdrtVIoFJXG/KemayEo+tVCH3x0noiN/XlaCg87UigUCoVi47HQFQqFQqFQbHzQgAuFQqFQKJTQFQqFQqFQKKErFAqFQqGoCP4jwADQNvw20jA5ogAAAABJRU5ErkJggg==", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90826535,"math_prob":0.99178815,"size":294,"snap":"2021-43-2021-49","text_gpt3_token_len":70,"char_repetition_ratio":0.13103448,"word_repetition_ratio":0.6530612,"special_character_ratio":0.25510204,"punctuation_ratio":0.0862069,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9913266,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-23T18:02:24Z\",\"WARC-Record-ID\":\"<urn:uuid:880e7b9a-97b3-4a8d-9d4f-a9d6b9f32ede>\",\"Content-Length\":\"34909\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d89a8c9b-f724-45fb-a6f7-e05fd238d29c>\",\"WARC-Concurrent-To\":\"<urn:uuid:b24a3f44-e24e-4ea7-90cc-b180de143296>\",\"WARC-IP-Address\":\"104.26.6.16\",\"WARC-Target-URI\":\"https://homework.cpm.org/category/CC/textbook/cca/chapter/11/lesson/11.1.1/problem/11-9\",\"WARC-Payload-Digest\":\"sha1:I2I55ZTRIWWF4JQHT5ORUHZBUNHJGKUU\",\"WARC-Block-Digest\":\"sha1:CPVSG36XGZKUNR5IVCZ7UEXEQG6AZQXH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585737.45_warc_CC-MAIN-20211023162040-20211023192040-00169.warc.gz\"}"}
https://www.mathworks.com/help/finance/default-portfolio-problem-mad.html
[ "Main Content\n\n## Default Portfolio Problem\n\nThe default portfolio optimization problem has a risk and return proxy associated with a given problem, and a portfolio set that specifies portfolio weights to be nonnegative and to sum to `1`. The lower bound combined with the budget constraint is sufficient to ensure that the portfolio set is nonempty, closed, and bounded. The default portfolio optimization problem characterizes a long-only investor who is fully invested in a collection of assets.\n\n• For mean-variance portfolio optimization, it is sufficient to set up the default problem. After setting up the problem, data in the form of a mean and covariance of asset returns are then used to solve portfolio optimization problems.\n\n• For conditional value-at-risk portfolio optimization, the default problem requires the additional specification of a probability level that must be set explicitly. Generally, “typical” values for this level are 0.90 or 0.95. After setting up the problem, data in the form of scenarios of asset returns are then used to solve portfolio optimization problems.\n\n• For MAD portfolio optimization, it is sufficient to set up the default problem. After setting up the problem, data in the form of scenarios of asset returns are then used to solve portfolio optimization problems.\n\nDownload ebook" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8377362,"math_prob":0.82610536,"size":1299,"snap":"2021-21-2021-25","text_gpt3_token_len":253,"char_repetition_ratio":0.2,"word_repetition_ratio":0.37688443,"special_character_ratio":0.18398768,"punctuation_ratio":0.096491225,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95788145,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-22T02:47:44Z\",\"WARC-Record-ID\":\"<urn:uuid:e4c74397-c34b-4fd8-bf58-cf41f0db3c74>\",\"Content-Length\":\"69478\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d6133b7b-d69e-4ed6-94ee-9b83a28c6b2e>\",\"WARC-Concurrent-To\":\"<urn:uuid:ba95ca7f-8a7a-42d3-b22e-88ae86dbaaf8>\",\"WARC-IP-Address\":\"23.220.132.54\",\"WARC-Target-URI\":\"https://www.mathworks.com/help/finance/default-portfolio-problem-mad.html\",\"WARC-Payload-Digest\":\"sha1:EJCRSWVO2YSEAIVN2TCSUIEWH4Q2FHKC\",\"WARC-Block-Digest\":\"sha1:62MHMBAY55DDUIW36TFY76YBUFXVRXGV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488504969.64_warc_CC-MAIN-20210622002655-20210622032655-00098.warc.gz\"}"}
https://phys.libretexts.org/Courses/Muhlenberg_College/Physics_122%3A_General_Physics_II_(Collett)/00%3A_Front_Matter/03%3A_Table_of_Contents
[ "$$\\require{cancel}$$\n\n• ## 1: Electric Charges and Fields\n\nIn this chapter, we begin the study of the electric force, which acts on all objects with a property called charge. The electric force is much stronger than gravity (in most systems where both appear), but it can be a force of attraction or a force of repulsion, which leads to very different effects on objects. The electric force helps keep atoms together, so it is of fundamental importance in matter.\n• ## 2: Gauss's Law\n\nSo far, we have found that the electrostatic field begins and ends at point charges and that the field of a point charge varies inversely with the square of the distance from that charge. These characteristics of the electrostatic field lead to an important mathematical relationship known as Gauss’s law. Gauss’s law gives us an elegantly simple way of finding the electric field, and, as you will see, it can be much easier to use than the integration method described in the previous chapter.\n• ## 3: Electric Potential\n\nIn this chapter, we examine the relationship between voltage and electrical energy, and begin to explore some of the many applications of electricity.\n• ## 4: Current and Resistance\n\nIn this chapter, we study the electrical current through a material, where the electrical current is the rate of flow of charge. We also examine a characteristic of materials known as the resistance. Resistance is a measure of how much a material impedes the flow of charge, and it will be shown that the resistance depends on temperature. In general, a good conductor, such as copper, gold, or silver, has very low resistance.\n• ## 5: Direct-Current Circuits\n\nIn this chapter, we use these electric components in circuits. A circuit is a collection of electrical components connected to accomplish a specific task. The second section of this chapter covers the analysis of series and parallel circuits that consist of resistors. We also introduce the basic equations and techniques to analyze any circuit, including those that are not reducible through simplifying parallel and series elements. But first, we need to understand how to power a circuit.\n• ## 6: Magnetic Forces and Fields\n\nFor the past few chapters, we have been studying electrostatic forces and fields, which are caused by electric charges at rest. These electric fields can move other free charges, such as producing a current in a circuit; however, the electrostatic forces and fields themselves come from other static charges. In this chapter, we see that when an electric charge moves, it generates other forces and fields. These additional forces and fields are what we commonly call magnetism.\n• ## 7: Sources of Magnetic Fields\n\nIn this chapter, we examine how magnetic fields are created by arbitrary distributions of electric current, using the Biot-Savart law. Then we look at how current-carrying wires create magnetic fields and deduce the forces that arise between two current-carrying wires due to these magnetic fields. We also study the torques produced by the magnetic fields of current loops. We then generalize these results to an important law of electromagnetism, called Ampère’s law.\n• ## 8: Electromagnetic Induction\n\nIn this and the next several chapters, you will see a wonderful symmetry in the behavior exhibited by time-varying electric and magnetic fields. Mathematically, this symmetry is expressed by an additional term in Ampère’s law and by another key equation of electromagnetism called Faraday’s law. 
We also discuss how moving a wire through a magnetic field produces an emf or voltage.\n• ## 9: The Nature of Light\n\nIn this chapter, we study the basic properties of light. In the next few chapters, we investigate the behavior of light when it interacts with optical devices such as mirrors, lenses, and apertures.\n• ## 10: Geometric Optics and Image Formation\n\nThis chapter introduces the major ideas of geometric optics, which describe the formation of images due to reflection and refraction.\n• ## 11: Interference\n\nThe most certain indication of a wave is interference. This wave characteristic is most prominent when the wave interacts with an object that is not large compared with the wavelength. Interference is observed for water waves, sound waves, light waves, and, in fact, all types of waves.\n• ## 12: Diffraction\n\nIn the preceding chapter, we implicitly regarded slits as objects with positions but no size. The widths of the slits were considered negligible. When the slits have finite widths, each point along the opening can be considered a point source of light—a foundation of Huygens’s principle. Because real-world optical instruments must have finite apertures (otherwise, no light can enter), diffraction plays a major role in the way we interpret the output of these optical instruments." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9357739,"math_prob":0.96107405,"size":4739,"snap":"2021-43-2021-49","text_gpt3_token_len":951,"char_repetition_ratio":0.1374868,"word_repetition_ratio":0.010638298,"special_character_ratio":0.18822536,"punctuation_ratio":0.110857144,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9854068,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-27T14:38:20Z\",\"WARC-Record-ID\":\"<urn:uuid:15bbed9d-43d3-4543-b3af-8135784fcde7>\",\"Content-Length\":\"136472\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e560589d-a3db-4612-9fa4-7f0a7f82c4dc>\",\"WARC-Concurrent-To\":\"<urn:uuid:bb9e22c3-b58d-4769-86b3-bafaec820d9d>\",\"WARC-IP-Address\":\"13.249.38.111\",\"WARC-Target-URI\":\"https://phys.libretexts.org/Courses/Muhlenberg_College/Physics_122%3A_General_Physics_II_(Collett)/00%3A_Front_Matter/03%3A_Table_of_Contents\",\"WARC-Payload-Digest\":\"sha1:FFP4HVGHDEGPJZFMEOVU5O5XDYZVKZNN\",\"WARC-Block-Digest\":\"sha1:GEDA5WOP6BK7KOCUQPBWC3OSR2ZZ5OUU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358189.36_warc_CC-MAIN-20211127133237-20211127163237-00556.warc.gz\"}"}
https://stats.stackexchange.com/questions/200708/model-has-a-great-fit-significant-variables-but-residuals-are-not-normally-dist
[ "# Model has a Great Fit, Significant Variables but Residuals are Not Normally Distributed. How should we proceed?\n\nI have a data on some overall conversion rates (i.e. out of x users visiting, y buy something hence y/x is my conversion rate, essentially proportions) over a time period, now this overall proportion can be broken by if they came from channel 1, channel 2 or channel 3 and for each channel there would be again similar proportions. My objective is to see how these proportions from different channels impact the overall proportion\n\nI have run a simple linear regression in R and below is the result.\n\nCall:\nlm(formula = target_variable ~ . - date, data = data_lcr)\n\nResiduals:\nMin 1Q Median 3Q Max\n-0.034173 -0.003217 -0.000704 0.002331 0.073845\n\nCoefficients:\nEstimate Std. Error t value Pr(>|t|)\n(Intercept) -0.0049876 0.0006139 -8.124 7.4e-15 ***\nexp1 0.0785438 0.0086230 9.109 < 2e-16 ***\nexp2 0.0290531 0.0175517 1.655 0.0987 .\nexp3 -0.1026385 0.0080550 -12.742 < 2e-16 ***\nexp4 1.0760312 0.0669632 16.069 < 2e-16 ***\nexp5 0.2466149 0.0195844 12.592 < 2e-16 ***\n---\nSignif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1\n\nResidual standard error: 0.007503 on 358 degrees of freedom\nMultiple R-squared: 0.9843, Adjusted R-squared: 0.9841\nF-statistic: 4503 on 5 and 358 DF, p-value: < 2.2e-16\n\n\nThe Model has great R-squared which is significant, all variables turn out to be significant. Next I am checking if my residuals are normally distributed\n\n> skewness(fitlm$residuals) 2.863341 > kurtosis(fitlm$residuals)\n 33.83711\n\nShapiro-Wilk normality test\n\ndata: fitlm$residuals W = 0.72781, p-value < 2.2e-16 Anderson-Darling normality test data: fitlm$residuals\nA = 17.485, p-value < 2.2e-16\n\n\nThese tests suggest that my residuals are not normally distributed. Should I still consider the model based on R-squared and F-Value or make some corrections? Please suggest\n\nHere is the residual plot:", null, "", null, "EDIT After removing outliers:", null, "• What does the plot of the residuals look like? Also, can you explain the data setup? What is the response here? My most natural instinct is a transformation of the response. – Greenparker Mar 9 '16 at 9:17\n• As @Greenparker said, how does residual plots look like? See also: stats.stackexchange.com/questions/2492/… – Tim Mar 9 '16 at 9:20\n• @AnuragH, a plot of the residuals means predicted values on the x axes and the residuals on the y axes. If you used lm() in R, then you can get the diagnostic plot when you do plot(lm.object). Also, it looks like you might have at least one outlier that might be affecting your results. The QQ plot looks a little heavy tailed, but nothing that suggests anything too crazy. – Greenparker Mar 9 '16 at 10:13\n• @Greenparker I am updating the post with the diagnostic plots and yes I found about 13 outliers using the Box Plot, even after removing them I do not see great improvement. I am attaching the plot(lm.object) after removing the outliers, do let me know what can I do next. Thanks – Anurag H Mar 9 '16 at 10:32\n• @AnuragH I edited your question to rollback the previous version of the question and add the new plots. Your last edit changed your question into a totally different problem. – Tim Mar 9 '16 at 11:12\n\nIt is always good to look at plotted data. In case of regression, it is good to look at residuals plots. In your case residuals seem to come from distribution with longer tails than normal. 
Your data are closer to a $t$-distribution; in fact, I was able to produce a similar data example from a $t_2$ distribution.", null, "It even produces quite close Shapiro-Wilk estimates:\n\n> shapiro.test(x)\n\nShapiro-Wilk normality test\n\ndata: x\nW = 0.78051, p-value < 0.00000000000000022\n\n\nBut this is of lesser importance. What the residual plot really shows you is that your distribution has longer tails, i.e. it has some outliers. Now you should ask yourself: what are the outlying values? Identify and check them. Why are they outlying? Did the measured phenomenon's long-tailed distribution produce them, or were there issues with measurement (are they erroneous)? The outlying values can influence your final estimate, so you have to make a number of decisions on what to do with them. Check also the How should outliers be dealt with in linear regression analysis? and Interpreting the residuals vs. fitted values plot for verifying the assumptions of a linear model threads.\n\n• Thanks Tim for the assistance. If I remove the outliers and run the model and I still tend to have non-normal residuals, what is the impact I make in accepting the model given it's goodness of fit to be good – Anurag H Mar 9 '16 at 10:56\n• @AnuragH I see that you edited your question - it would be better if you rather left the previous plots and added the new ones since your last edit changes your question to totally different one. As about new plots - they seem to show a totally different pattern: with some small \"cluster\" of outlying values and overall linear trend in residuals. This needs further investigation (search this site for residuals regression for multiple similar cases and examples). – Tim Mar 9 '16 at 11:09\n\nWhen data are not normally distributed, the inferential results on p-values (and on F-statistics) do not hold, and so it is not correct to look at them.\n\nHowever, the least squares fit does not rely on the normality assumption, so if the fit is good and R-squared is high there is no reason to discard the model.\n\n• You are partially right, but notice that $R^2$ can be misleading, e.g. stats.stackexchange.com/questions/13314/… – Tim Mar 9 '16 at 11:15\n• Yes, R squared is highly misleading, but (according to my knowledge) not for reasons related to lack of normality in the data. Correlation between predictors is one of them for example. – adaien Mar 9 '16 at 12:14\n• I meant your suggestion to rely of $R^2$ when believing that there is \"no reason to discard the model\". The example in the question (after we learned about the residuals distribution - what you obviously did not know at the moment of answering) shows that there are issues with residuals - outliers, linearity etc. that can influence the results. – Tim Mar 9 '16 at 12:19\n• His question is what to do when model has a great fit but data are not normally distributed, my answer is that the inferential results on significance do not hold, but this does not contradict the other results. Of course there could other reasons (like outliers) why the fit should not be good, but this was not his question for what I have understood. – adaien Mar 9 '16 at 12:40\n• Completely agree on that – adaien Mar 9 '16 at 12:47" ]
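The accepted answer's point can be reproduced in a few lines. This sketch uses Python with numpy/scipy rather than the thread's R, purely as an illustration; the seed is arbitrary and the sample size of 364 roughly matches the question's data (358 residual degrees of freedom plus 6 coefficients):

```python
# Heavy-tailed (t_2) residuals fail normality tests even when a
# least-squares fit looks excellent, mirroring the thread's output.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)           # arbitrary seed
resid = rng.standard_t(df=2, size=364)   # t with 2 df: long tails

print(stats.skew(resid), stats.kurtosis(resid))  # typically large in magnitude
w, p = stats.shapiro(resid)
print(w, p)  # p is tiny, so normality is rejected, as in the question
```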
[ null, "https://i.stack.imgur.com/pUIfB.png", null, "https://i.stack.imgur.com/ZWTux.png", null, "https://i.stack.imgur.com/bmCBB.png", null, "https://i.stack.imgur.com/YwbTH.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7481648,"math_prob":0.85008365,"size":1772,"snap":"2020-34-2020-40","text_gpt3_token_len":607,"char_repetition_ratio":0.10294118,"word_repetition_ratio":0.0,"special_character_ratio":0.40688488,"punctuation_ratio":0.18461539,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96664566,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-13T13:04:07Z\",\"WARC-Record-ID\":\"<urn:uuid:6538ccaa-3c42-4917-9fc9-7aa58b2eb1a0>\",\"Content-Length\":\"172720\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:edd1d9ec-5916-4853-8cf2-02f2cef8513c>\",\"WARC-Concurrent-To\":\"<urn:uuid:9670a4e8-d6dd-46da-ad83-60064cc9ba7a>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/200708/model-has-a-great-fit-significant-variables-but-residuals-are-not-normally-dist\",\"WARC-Payload-Digest\":\"sha1:ZZUSBA4MQDCLJMOOFYKYMGFLJ4YMMLXR\",\"WARC-Block-Digest\":\"sha1:2TBN3WFI5LZ3S3YHLF2W5VQUNWBOSWRF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738982.70_warc_CC-MAIN-20200813103121-20200813133121-00474.warc.gz\"}"}
https://www.cbsencertsolution.com/2016/05/pair-of-linear-equations-in-two.html
[ "# Pair of Linear Equations in Two Variables - Class 10 Math CBSE Guide NCERT Solutions of Exercise 3.1\n\n## Class 10 Mathematics - CBSE Guide NCERT Solutions\n\n### Chapter 3 Pair of Linear Equations in Two Variables\n\n#### NCERT Solutions of Math Textbook Exercise 3.1\n\nQuestion 1:  Aftab tells his daughter, “Seven years ago, I was seven times as old as you were then. Also, three years from now, I shall be three times as old as you will be.” (isn't this interesting?) Represent this situation algebraically and graphically.\nSolution:\nLet the present age of Aftab and his daughter are be x and y respectively.\nSeven years ago,\nAge of Aftab = x − 7\nAge of his daughter = y − 7\nAccording to the question,\n( x – 7 ) = 7 (y – 7)\nx – 7 = 7y – 49\nx – 7y = – 42 ………………… (1)\nThree years hence,\nAge of Aftab = x + 3 and,\nAge of his daughter = y + 3\nBy the given condition,\n(x + 3) = 3(y + 3),\nx – 3y = 6 … … … … … .. (2)\nTherefore, the algebraic representation is -\nx – 7y = – 42\nx – 3y = 6\nFor  x – 7y = – 42,\nx = – 42 + 7y\nThe solution table is -\nx = 6 + 3y\nThe solution table is -\n\nQuestion 2: The coach of a cricket team buys 3 bats and 6 balls for Rs 3900. Later, she buys another bat and 2 more balls of the same kind for Rs 1300. Represent this situation algebraically and geometrically.\nSolution:\nLet the cost of a bat be Rs x.\nAnd, cost of a ball = Rs y\nAccording to the question, the algebraic representation is -\n3x + 6y = 3900\nx + 2y = 1300\nFor 3x + 6y = 3900,\n\nQuestion 3: The cost of 2 kg of apples and 1 kg of grapes on a day was found to be Rs 160. After a month, the cost of 4 kg of apples and 2 kg of grapes is Rs 300. Represent the situation algebraically and geometrically.\nSolution:\n(Student may try to solve taking hint from above problems).\nThe graphical representation will be as follows -" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92716813,"math_prob":0.99600303,"size":1766,"snap":"2020-24-2020-29","text_gpt3_token_len":551,"char_repetition_ratio":0.11180477,"word_repetition_ratio":0.06084656,"special_character_ratio":0.33125708,"punctuation_ratio":0.10989011,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99797934,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-11T05:00:03Z\",\"WARC-Record-ID\":\"<urn:uuid:d29752a3-3264-4879-95af-d190f0cad07f>\",\"Content-Length\":\"243614\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:16d0c849-6686-44b7-ac6e-f4d57aeb6586>\",\"WARC-Concurrent-To\":\"<urn:uuid:e27df1f6-5d94-4fdd-8f07-39e470280326>\",\"WARC-IP-Address\":\"172.217.2.115\",\"WARC-Target-URI\":\"https://www.cbsencertsolution.com/2016/05/pair-of-linear-equations-in-two.html\",\"WARC-Payload-Digest\":\"sha1:TUJDSP4NVQJBXNJWUWKLQPHIUYL4CD7Y\",\"WARC-Block-Digest\":\"sha1:74B6SKBRG6G4EWCGRKYCGVA5PXVFQ57D\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655921988.66_warc_CC-MAIN-20200711032932-20200711062932-00489.warc.gz\"}"}
https://la.mathworks.com/help/pde/ug/pde.stationaryresults.html
[ "# StationaryResults\n\nTime-independent PDE solution and derived quantities\n\n## Description\n\nA `StationaryResults` object contains the solution of a PDE and its gradients in a form convenient for plotting and postprocessing.\n\n• A `StationaryResults` object contains the solution and its gradient calculated at the nodes of the triangular or tetrahedral mesh, generated by `generateMesh`.\n\n• Solution values at the nodes appear in the `NodalSolution` property.\n\n• The three components of the gradient of the solution values at the nodes appear in the `XGradients`, `YGradients`, and `ZGradients` properties.\n\n• The array dimensions of `NodalSolution`, `XGradients`, `YGradients`, and `ZGradients` enable you to extract solution and gradient values for specified equation indices in a PDE system.\n\nTo interpolate the solution or its gradient to a custom grid (for example, specified by `meshgrid`), use `interpolateSolution` or `evaluateGradient`.\n\n## Creation\n\nThere are several ways to create a `StationaryResults` object:\n\n• Solve a time-independent problem using the `solvepde` function. This function returns a PDE solution as a `StationaryResults` object. This is the recommended approach.\n\n• Solve a time-independent problem using the `assempde` or `pdenonlin` function. Then use the `createPDEResults` function to obtain a `StationaryResults` object from a PDE solution returned by `assempde` or `pdenonlin`. Note that `assempde` and `pdenonlin` are legacy functions. They are not recommended for solving PDE problems.\n\n## Properties\n\nexpand all\n\nFinite element mesh, returned as a FEMesh Properties object.\n\nSolution values at the nodes, returned as a vector or array. For details about the dimensions of `NodalSolution`, see Dimensions of Solutions, Gradients, and Fluxes.\n\nData Types: `double`\n\nx-component of the gradient at the nodes, returned as a vector or array. For details about the dimensions of `XGradients`, see Dimensions of Solutions, Gradients, and Fluxes.\n\nData Types: `double`\n\ny-component of the gradient at the nodes, returned as a vector or array. For details about the dimensions of `YGradients`, see Dimensions of Solutions, Gradients, and Fluxes.\n\nData Types: `double`\n\nz-component of the gradient at the nodes, returned as a vector or array. For details about the dimensions of `ZGradients`, see Dimensions of Solutions, Gradients, and Fluxes.\n\nData Types: `double`\n\n## Object Functions\n\n `evaluateCGradient` Evaluate flux of PDE solution `evaluateGradient` Evaluate gradients of PDE solutions at arbitrary points `interpolateSolution` Interpolate PDE solution to arbitrary points\n\n## Examples\n\ncollapse all\n\nCreate a PDE model for a system of three equations. Import the geometry of a bracket and plot the face labels.\n\n```model = createpde(3); importGeometry(model,\"BracketWithHole.stl\"); figure pdegplot(model,\"FaceLabels\",\"on\") view(30,30) title(\"Bracket with Face Labels\")```", null, "```figure pdegplot(model,\"FaceLabels\",\"on\") view(-134,-32) title(\"Bracket with Face Labels, Rear View\")```", null, "Set boundary conditions such that face 4 is immobile, and face 8 has a force in the negative `z` direction.\n\n```applyBoundaryCondition(model,\"dirichlet\",\"Face\",4,\"u\",[0,0,0]); applyBoundaryCondition(model,\"neumann\",\"Face\",8,\"g\",[0,0,-1e4]);```\n\nSet coefficients that represent the equations of linear elasticity. See Linear Elasticity Equations.\n\n```E = 200e9; nu = 0.3; specifyCoefficients(model,\"m\",0,... \"d\",0,... 
\"c\",elasticityC3D(E,nu),... \"a\",0,... \"f\",[0;0;0]);```\n\nCreate a mesh.\n\n`generateMesh(model,\"Hmax\",1e-2);`\n\nSolve the PDE.\n\n`results = solvepde(model)`\n```results = StationaryResults with properties: NodalSolution: [14002x3 double] XGradients: [14002x3 double] YGradients: [14002x3 double] ZGradients: [14002x3 double] Mesh: [1x1 FEMesh] ```\n\nAccess the solution at the nodal locations.\n\n`u = results.NodalSolution;`\n\nPlot the solution for the `z`-component, which is component 3.\n\n`pdeplot3D(model,\"ColorMapData\",u(:,3))`", null, "Obtain a `StationaryResults` object from a legacy solver together with `createPDEResults`.\n\nCreate a PDE model for a system of three equations. Import the geometry of a bracket and plot the face labels.\n\n```model = createpde(3); importGeometry(model,\"BracketWithHole.stl\"); figure pdegplot(model,\"FaceLabels\",\"on\") view(30,30) title(\"Bracket with Face Labels\")```", null, "```figure pdegplot(model,\"FaceLabels\",\"on\") view(-134,-32) title(\"Bracket with Face Labels, Rear View\")```", null, "Set boundary conditions such that `F4` is immobile, and `F8` has a force in the negative `z` direction.\n\n```applyBoundaryCondition(model,\"dirichlet\",\"Face\",4,\"u\",[0,0,0]); applyBoundaryCondition(model,\"neumann\",\"Face\",8,\"g\",[0,0,-1e4]);```\n\nSet coefficients for a legacy solver that represent the equations of linear elasticity. See Linear Elasticity Equations.\n\n```E = 200e9; nu = 0.3; c = elasticityC3D(E,nu); a = 0; f = [0;0;0];```\n\nCreate a mesh.\n\n`generateMesh(model,\"Hmax\",1e-2);`\n\nSolve the problem using a legacy solver.\n\n`u = assempde(model,c,a,f);`\n\nCreate a `StationaryResults` object from the solution.\n\n`results = createPDEResults(model,u)`\n```results = StationaryResults with properties: NodalSolution: [14002x3 double] XGradients: [14002x3 double] YGradients: [14002x3 double] ZGradients: [14002x3 double] Mesh: [1x1 FEMesh] ```\n\nAccess the solution at the nodal locations.\n\n`u = results.NodalSolution;`\n\nPlot the solution for the `z`-component, which is component 3.\n\n`pdeplot3D(model,\"ColorMapData\",u(:,3))`", null, "## Version History\n\nIntroduced in R2016a\n\nexpand all" ]
[ null, "https://la.mathworks.com/help/examples/pde/win64/ObtainAStationaryResultsObjectFromSolvepdeExample_01.png", null, "https://la.mathworks.com/help/examples/pde/win64/ObtainAStationaryResultsObjectFromSolvepdeExample_02.png", null, "https://la.mathworks.com/help/examples/pde/win64/ObtainAStationaryResultsObjectFromSolvepdeExample_03.png", null, "https://la.mathworks.com/help/examples/pde/win64/ResultsFromCreatePDEResultsExample_01.png", null, "https://la.mathworks.com/help/examples/pde/win64/ResultsFromCreatePDEResultsExample_02.png", null, "https://la.mathworks.com/help/examples/pde/win64/ResultsFromCreatePDEResultsExample_03.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7508649,"math_prob":0.9622225,"size":5733,"snap":"2023-14-2023-23","text_gpt3_token_len":1487,"char_repetition_ratio":0.15116076,"word_repetition_ratio":0.44065484,"special_character_ratio":0.22640851,"punctuation_ratio":0.19865642,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9894638,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-29T00:10:52Z\",\"WARC-Record-ID\":\"<urn:uuid:9e9186d9-8d93-4e59-a001-ee53e2af39d3>\",\"Content-Length\":\"105077\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b91a77f0-a288-4352-b331-eb78dad716b4>\",\"WARC-Concurrent-To\":\"<urn:uuid:44fe11a0-6a1c-4e56-9f03-06d96466ebdd>\",\"WARC-IP-Address\":\"23.34.160.82\",\"WARC-Target-URI\":\"https://la.mathworks.com/help/pde/ug/pde.stationaryresults.html\",\"WARC-Payload-Digest\":\"sha1:2Y5YW4TIXWCOFHJK6EWFMQZI6MS4GAKT\",\"WARC-Block-Digest\":\"sha1:ALGB2RAVY7DBVZJZXQYKL7OHXBSVHPQ2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296948900.50_warc_CC-MAIN-20230328232645-20230329022645-00144.warc.gz\"}"}
http://vitalonlineexperts.com/rolex-paris-fikpaoa/state-division-algorithm-30c345
[ "a\\}. Recall that the HCF of two positive integers a and b is the largest positive integer d that divides both a and b. If a number $N$ is a factor of two number $s$ and $t$, then it is also a factor of the sum of and the difference between $s$ and $t$; and 4. Add some text here. The work in Preview Activity $$\\PageIndex{1}$$ provides some rationale that this is a reasonable axiom. A prime is an integer greater than 1 whose only positive divisors are 1 and itself. We say an integer $n$ is a linear combination of $a$ and $b$ if there exists integers $x$ and $y$ such that $n=ax+by.$ For example, $7$ is a linear combination of $3$ and $2$ since $7=2(2)+1(3).$. Lemma. \\begin{array} { r l l } (Division Algorithm) If $a$ and $b$ are nonzero positive integers, then there are unique positive integers $q$ and $r$ such that $a=bq+r$ where $0\\leq r < b.$. Greatest Common Divisor / Lowest Common Multiple, https://brilliant.org/wiki/division-algorithm/. Extend the Division Algorithm by allowing negative divisors. Notice S is nonempty since ab>a. By the Well-Ordering Axiom, S must contain a least element, say bk. Since k\\not= 0, there exists a natural number q such that k=q+1. Notice b q\\leq a since bk is the least multiple of b greater than a. Thus there exists a natural number r such that a=bq+r. Notice 0\\leq r. Assume, r\\geq b. Then there exists a natural number m\\geq 0 such that b+m=r. By substitution, a=b(q+1)+m and so bk=b(q+1)\\leq a. This contradiction shows r< b as needed. 24 is a multiple of 8. Similarly, q_2< q_1 cannot happen either, and thus q_1=q_2 as desired. a(x)=b(x)×d(x)+r(x), a(x) = b(x) \\times d(x) + r(x),a(x)=b(x)×d(x)+r(x). Exercise. For if a|n where a and n are positive integers, then n=ak for some integer k. Since k is a positive integer, we see that n=ak\\geq a. Hence any nonzero integer n can have at most 2|n| divisors. Modular arithmetic is a system of arithmetic for integers, where we only perform calculations by considering their remainder with respect to the modulus. This is described in detail in the division algorithm presented in section 4.3.1 of Knuth, The art of computer programming, Volume 2, Seminumerical algorithms - the standard reference. Definition. It actually has deeper connections into many other areas of mathematics, and we will highlight a few of them. We are now unable to give each person a slice. Division algorithms fall into two main categories: slow division and fast division. So let's have some practice and solve the following problems: (Assume that) Today is a Friday. If c\\neq 0 and a|b then a c|b c.. A division algorithm provides a quotient and a remainder when we divide two number. [thm5]The Division Algorithm If a and b are integers such that b > 0, then there exist unique integers q and r such that a = bq + r where 0 ≤ r < b. But since one person couldn't make it to the party, those slices were eventually distributed evenly among 4 people, with each person getting 1 additional slice than originally planned and two slices left over. Then I prove the Division Algorithm in great detail based on the Well-Ordering Axiom. Show that if a and b are positive integers and a|b, then a\\leq b., Exercise. Since a|b certainly implies a|b, the case for k=1 is trivial. (Antisymmetric Property of Divisibility) Let a and b be nonzero positive integers. We’ll then look at the ASMD (Algorithmic State Machine with a Data path) chart and the VHDL code of this binary divider. 
Thus, if we only wish to consider integers, we simply can not take any two integers and divide them. More clearly, The Division Algorithm can be proven, but we have not yet studied the methods that are usually used to do so. For example. There are 24 hours in one complete day. Let P be the set of natural number for which 7^n-2^n is divisible by 5. Clearly, 7^1-2^1=5 is divisible by 5, so P is nonempty with 0\\in P. Assume k\\in P. We find \\begin{align*} 7^{k+1}-2^{k+1} & = 7\\cdot 7^k-2\\cdot 2^k \\\\ & = 7\\cdot 7^k-7\\cdot 2^k+7\\cdot 2^k-2\\cdot 2^k \\\\ & = 7(7^k- 2^k)+2^k(7 -2) \\end{align*} The induction hypothesis is that (7^k- 2^k) is divisible by 5. Learn about Euclid’s Division Algorithm in a way never done before. To get the number of days in 2500 hours, we need to divide 2500 by 24. Any integer n, except 0, has just a finite number of divisors. Consider the set A = {a − bk ≥ 0 ∣ k ∈ Z}. We will use mathematical induction. Proof. We then give each person another slice, so we give out another 3 slices leaving 4−3=1 4 - 3 = 1 4−3=1. State the Division Algorithm. We need to show that m(m+1)(m+2) is of the form 6 k. The division algorithm yields that m is either even or odd. In addition to showing the divisibility relationship between any two non zero integers, it is worth noting that such relationships are characterized by certain properties. Definition 17.2. According to the algorithm, in this case, the divisor is 25. The advantage of the Division Algorithm is that it allows us to prove statements about the positive integers (integers) by considering only a finite number of cases. a = bq + r, 0 ≤ r < b. These extensions will help you develop a further appreciation of this basic concept, so you are encouraged to explore them further! Dividend = Quotient × Divisor + Remainder The Euclidean Algorithm. -6 & +5 & = -1 \\\\ There are integers a, b, and c such that a|bc, but a\\nmid b and a\\nmid c., Exercise. We have x a+y b=x(m c)+y(n c)= c(x m+ y n) $$Since x m+ y n \\in \\mathbb{Z} we see that c|(x a+y b) as desired. You are walking along a row of trees numbered from 789 to 954. Expert Answer 100% (1 rating) Previous question Next question 15 \\equiv 29 \\pmod{7} . where the remainder r(x)r(x)r(x) is a polynomial with degree smaller than the degree of the divisor d(x)d(x) d(x). We say an integer a is of the form bq+r if there exists integers b, q, and r such that a=bq+r. Notice that the division algorithm, in a certain sense, measures the divisibility of a by b using a remainder r. This is an incredible important and powerful statement. We then give a few examples followed by several basic lemmas on divisibility. Therefore, k+1\\in P and so P=\\mathbb{N} by mathematical induction. (Transitive Property of Divisibility) Let a, b, and c be integers. We begin by stating the definition of divisibility, the main topic of discussion.$$ Thus, $n m=1$ and so in particular $n= 1.$ Whence, $a= b$ as desired. Show transcribed image text. The division algorithm for integers states that given any two integers a and b, with b > 0, we can find integers q and r such that 0 < r < b and a = bq + r.. 
Euclid’s Division Lemma says that for any two positive integers suppose a and b there exist two novel whole numbers say q and r, such that, a = bq+r, where 0≤rPhd In Global Nutrition, Lodges In Scotland With Hot Tubs, Econ 307 Duke, Lodges In Scotland With Hot Tubs, Glucose Is A Polar Molecule, Glucose Is A Polar Molecule, Kind Of Blue Sales Figures, Hks Hi-power Muffler 4 Inch, Hawaii State Public Library Staff Directory, Corporate Treasury Analyst Goldman Sachs, \" /> a\\}. Recall that the HCF of two positive integers a and b is the largest positive integer d that divides both a and b. If a number $N$ is a factor of two number $s$ and $t$, then it is also a factor of the sum of and the difference between $s$ and $t$; and 4. Add some text here. The work in Preview Activity $$\\PageIndex{1}$$ provides some rationale that this is a reasonable axiom. A prime is an integer greater than 1 whose only positive divisors are 1 and itself. We say an integer $n$ is a linear combination of $a$ and $b$ if there exists integers $x$ and $y$ such that $n=ax+by.$ For example, $7$ is a linear combination of $3$ and $2$ since $7=2(2)+1(3).$. Lemma. \\begin{array} { r l l } (Division Algorithm) If $a$ and $b$ are nonzero positive integers, then there are unique positive integers $q$ and $r$ such that $a=bq+r$ where $0\\leq r < b.$. Greatest Common Divisor / Lowest Common Multiple, https://brilliant.org/wiki/division-algorithm/. Extend the Division Algorithm by allowing negative divisors. Notice S is nonempty since ab>a. By the Well-Ordering Axiom, S must contain a least element, say bk. Since k\\not= 0, there exists a natural number q such that k=q+1. Notice b q\\leq a since bk is the least multiple of b greater than a. Thus there exists a natural number r such that a=bq+r. Notice 0\\leq r. Assume, r\\geq b. Then there exists a natural number m\\geq 0 such that b+m=r. By substitution, a=b(q+1)+m and so bk=b(q+1)\\leq a. This contradiction shows r< b as needed. 24 is a multiple of 8. Similarly, q_2< q_1 cannot happen either, and thus q_1=q_2 as desired. a(x)=b(x)×d(x)+r(x), a(x) = b(x) \\times d(x) + r(x),a(x)=b(x)×d(x)+r(x). Exercise. For if a|n where a and n are positive integers, then n=ak for some integer k. Since k is a positive integer, we see that n=ak\\geq a. Hence any nonzero integer n can have at most 2|n| divisors. Modular arithmetic is a system of arithmetic for integers, where we only perform calculations by considering their remainder with respect to the modulus. This is described in detail in the division algorithm presented in section 4.3.1 of Knuth, The art of computer programming, Volume 2, Seminumerical algorithms - the standard reference. Definition. It actually has deeper connections into many other areas of mathematics, and we will highlight a few of them. We are now unable to give each person a slice. Division algorithms fall into two main categories: slow division and fast division. So let's have some practice and solve the following problems: (Assume that) Today is a Friday. If c\\neq 0 and a|b then a c|b c.. A division algorithm provides a quotient and a remainder when we divide two number. [thm5]The Division Algorithm If a and b are integers such that b > 0, then there exist unique integers q and r such that a = bq + r where 0 ≤ r < b. But since one person couldn't make it to the party, those slices were eventually distributed evenly among 4 people, with each person getting 1 additional slice than originally planned and two slices left over. 
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8837267,"math_prob":0.9972898,"size":17621,"snap":"2021-31-2021-39","text_gpt3_token_len":5007,"char_repetition_ratio":0.16285406,"word_repetition_ratio":0.072780676,"special_character_ratio":0.28636286,"punctuation_ratio":0.11519155,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99991107,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-17T22:45:47Z\",\"WARC-Record-ID\":\"<urn:uuid:115be29f-a39c-4722-824e-b36cd5c16204>\",\"Content-Length\":\"109109\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1197c9dc-007d-4836-94b1-bae64d591687>\",\"WARC-Concurrent-To\":\"<urn:uuid:b826f51a-9cb9-4b0f-873c-be33e7ddd5ab>\",\"WARC-IP-Address\":\"172.67.177.182\",\"WARC-Target-URI\":\"http://vitalonlineexperts.com/rolex-paris-fikpaoa/state-division-algorithm-30c345\",\"WARC-Payload-Digest\":\"sha1:N6EC75MF5E2EO6D3QASXHTSPIWRCIDDW\",\"WARC-Block-Digest\":\"sha1:IGXOGLMBXMHJCGTYYTNV66FUX6SQDKNT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780055808.78_warc_CC-MAIN-20210917212307-20210918002307-00222.warc.gz\"}"}
https://metanumbers.com/18208
[ "# 18208 (number)\n\n18,208 (eighteen thousand two hundred eight) is an even five-digits composite number following 18207 and preceding 18209. In scientific notation, it is written as 1.8208 × 104. The sum of its digits is 19. It has a total of 6 prime factors and 12 positive divisors. There are 9,088 positive integers (up to 18208) that are relatively prime to 18208.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Even\n• Number length 5\n• Sum of Digits 19\n• Digital Root 1\n\n## Name\n\nShort name 18 thousand 208 eighteen thousand two hundred eight\n\n## Notation\n\nScientific notation 1.8208 × 104 18.208 × 103\n\n## Prime Factorization of 18208\n\nPrime Factorization 25 × 569\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 2 Total number of distinct prime factors Ω(n) 6 Total number of prime factors rad(n) 1138 Product of the distinct prime numbers λ(n) 1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 0 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 18,208 is 25 × 569. Since it has a total of 6 prime factors, 18,208 is a composite number.\n\n## Divisors of 18208\n\n1, 2, 4, 8, 16, 32, 569, 1138, 2276, 4552, 9104, 18208\n\n12 divisors\n\n Even divisors 10 2 2 0\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 12 Total number of the positive divisors of n σ(n) 35910 Sum of all the positive divisors of n s(n) 17702 Sum of the proper positive divisors of n A(n) 2992.5 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 134.937 Returns the nth root of the product of n divisors H(n) 6.08454 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 18,208 can be divided by 12 positive divisors (out of which 10 are even, and 2 are odd). The sum of these divisors (counting 18,208) is 35,910, the average is 299,2.5.\n\n## Other Arithmetic Functions (n = 18208)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 9088 Total number of positive integers not greater than n that are coprime to n λ(n) 1136 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 2085 Total number of primes less than or equal to n r2(n) 8 The number of ways n can be represented as the sum of 2 squares\n\nThere are 9,088 positive integers (less than 18,208) that are coprime with 18,208. 
And there are approximately 2,085 prime numbers less than or equal to 18,208.\n\n## Divisibility of 18208\n\n m n mod m 2 3 4 5 6 7 8 9 0 1 0 3 4 1 0 1\n\nThe number 18,208 is divisible by 2, 4 and 8.\n\n• Deficient\n\n• Polite\n\n• Frugal\n\n## Base conversion (18208)\n\nBase System Value\n2 Binary 100011100100000\n3 Ternary 220222101\n4 Quaternary 10130200\n5 Quinary 1040313\n6 Senary 220144\n8 Octal 43440\n10 Decimal 18208\n12 Duodecimal a654\n20 Vigesimal 25a8\n36 Base36 e1s\n\n## Basic calculations (n = 18208)\n\n### Multiplication\n\nn×y\n n×2 36416 54624 72832 91040\n\n### Division\n\nn÷y\n n÷2 9104 6069.33 4552 3641.6\n\n### Exponentiation\n\nny\n n2 331531264 6036521254912 109912979009437696 2001295521803841568768\n\n### Nth Root\n\ny√n\n 2√n 134.937 26.308 11.6162 7.11299\n\n## 18208 as geometric shapes\n\n### Circle\n\n Diameter 36416 114404 1.04154e+09\n\n### Sphere\n\n Volume 2.52857e+13 4.16614e+09 114404\n\n### Square\n\nLength = n\n Perimeter 72832 3.31531e+08 25750\n\n### Cube\n\nLength = n\n Surface area 1.98919e+09 6.03652e+12 31537.2\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 54624 1.43557e+08 15768.6\n\n### Triangular Pyramid\n\nLength = n\n Surface area 5.74229e+08 7.11411e+11 14866.8\n\n## Cryptographic Hash Functions\n\nmd5 29eb72af70b45ea8994b6d0256b1b97f 9e9bc686d0bc546a61c8165a4777593bf8a66b48 305f7429498ccd9e275f944f7a10094ad84af83c2d287dcd9fbcc2fbb1f3ba77 400e3ef3a4a2c9d5f3495de5ff556ccc38be5765863a73e2a1c99f42a90f225542e96364e6d2bcde4b72790300ce5e75b12674a1eed8b93a9f5efdfc0f2a1229 2ac469809663c2bf8d52605909fa94d399db3ee2" ]
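Editorial note: the arithmetic functions quoted for 18208 are small enough to verify by brute force. The following check is my addition; it reproduces the page's values for the divisor list, τ(n), σ(n), s(n), and φ(n):

```python
import math

n = 18208
divisors = [d for d in range(1, n + 1) if n % d == 0]
assert divisors == [1, 2, 4, 8, 16, 32, 569, 1138, 2276, 4552, 9104, 18208]

print(len(divisors))        # tau(n)   = 12
print(sum(divisors))        # sigma(n) = 35910
print(sum(divisors) - n)    # s(n)     = 17702 (aliquot sum)
print(sum(1 for k in range(1, n + 1) if math.gcd(k, n) == 1))   # phi(n) = 9088
```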
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.610813,"math_prob":0.98063046,"size":4468,"snap":"2021-31-2021-39","text_gpt3_token_len":1595,"char_repetition_ratio":0.12096774,"word_repetition_ratio":0.025487257,"special_character_ratio":0.4514324,"punctuation_ratio":0.07891332,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9955,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-22T17:40:50Z\",\"WARC-Record-ID\":\"<urn:uuid:0f2ec378-659e-417d-99c9-09ccadd33c54>\",\"Content-Length\":\"39589\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8b75a5b4-caef-4f88-8eb6-ec8cf591c44e>\",\"WARC-Concurrent-To\":\"<urn:uuid:174a146d-477e-45aa-89a8-8cd59a502bac>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/18208\",\"WARC-Payload-Digest\":\"sha1:ZCT6NX6OWSTI72FXKYDZ6F37MB4FJFY6\",\"WARC-Block-Digest\":\"sha1:SSBF3HO224USFKHLZLDPWA7FOFQGCDUE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057371.69_warc_CC-MAIN-20210922163121-20210922193121-00556.warc.gz\"}"}
https://www.semanticscholar.org/paper/A-global-analysis-proof-of-the-stability-of-space-Hintz-Vasy/a73d7a585fa40dbd3484d1a3c12cc1c05908165f
[ "Corpus ID: 119134342\n\n# A global analysis proof of the stability of Minkowski space and the polyhomogeneity of the metric\n\n```@article{Hintz2017AGA,\ntitle={A global analysis proof of the stability of Minkowski space and the polyhomogeneity of the metric},\nauthor={P. Hintz and A. Vasy},\njournal={arXiv: Analysis of PDEs},\nyear={2017}\n}```\n• Published 2017\n• Mathematics, Physics\n• arXiv: Analysis of PDEs\nWe first give a new proof of the non-linear stability of the \\$(3+1)\\$-dimensional Minkowski spacetime as a solution of the Einstein vacuum equation. We then show that the metric admits a full asymptotic expansion at infinity, more precisely at the boundary hypersurfaces (corresponding to spacelike, null, and timelike infinity) of a suitable compactification of \\$\\mathbb{R}^4\\$ adapted to the bending of outgoing light cones. We work in a wave map/DeTurck gauge closely related to the standard wave… Expand\nLinear stability of slowly rotating Kerr black holes\n• Physics, Mathematics\n• 2019\nWe prove the linear stability of slowly rotating Kerr black holes as solutions of the Einstein vacuum equations: linearized perturbations of a Kerr metric decay at an inverse polynomial rate to aExpand\nAsymptotic Stability of Minkowski Space-Time with non-compactly supported massless Vlasov matter\n• Mathematics, Physics\n• 2020\nWe prove the global asymptotic stability of the Minkowski space for the massless Einstein-Vlasov system in wave coordinates. In contrast with previous work on the subject, no compact supportExpand\nConformal Scale Geometry of Spacetime.\nWe devise a new approach for the study of the issue of singularities and black holes based on a new mass function on a phase space of the conformal space-time, and on the almost time-independentExpand\nThe linear stability of the Schwarzschild solution to gravitational perturbations in the generalised wave gauge\nWe prove in this paper that the Schwarzschild famiily of black holes are linearly stable as a family of solutions to the system of equations that result from expressing the Einstein vacuum equationsExpand\nAsymptotic structure of a massless scalar field and its dual two-form field at spatial infinity\n• Physics\n• 2018\nA bstractRelativistic field theories with a power law decay in r−k at spatial infinity generically possess an infinite number of conserved quantities because of Lorentz invariance. Most of these areExpand\nHamiltonian structure and asymptotic symmetries of the Einstein-Maxwell system at spatial infinity\n• Physics\n• 2018\nA bstractWe present a new set of asymptotic conditions for gravity at spatial infinity that includes gravitational magnetic-type solutions, allows for a non-trivial Hamiltonian action of the completeExpand\nThe linear stability of the Schwarzschild spacetime in the harmonic gauge: odd part\nIn this paper, we study the odd solution of the linearlized Einstein equation on the Schwarzschild background and in the harmonic gauge. With the aid of Regge-Wheeler quantities, we are able toExpand\nCharacteristic Cauchy problem on the light cone for the Einstein–Vlasov system in temporal gauge\nThis article is concerned with the Cauchy problem on the initial light cone for geometric-transport equations in general relativity when temporal gauge is considered. A novel hierarchy ofExpand\nThe role of trapping in black hole spacetimes\nIn the here presented work we discuss a series of results that are all in one way or another connected to the phenomenon of trapping in black hole spacetimes. 
First we present a comprehensive reviewExpand\nOn the choice of a conformal Gauss gauge near the cylinder representing spatial infinity.\nA convenient approach to analyze spatial infinity is to use a cylinder representation \\$I\\$ and impose a gauge based on a congruence of conformal geodesics. This so-called conformal Gauss gauge comesExpand" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8222813,"math_prob":0.8865902,"size":7140,"snap":"2021-31-2021-39","text_gpt3_token_len":1597,"char_repetition_ratio":0.15316704,"word_repetition_ratio":0.065298505,"special_character_ratio":0.19313726,"punctuation_ratio":0.046491228,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97352135,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-24T15:33:25Z\",\"WARC-Record-ID\":\"<urn:uuid:d2017e9b-2177-4e94-bed8-8807cb324da9>\",\"Content-Length\":\"430702\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a8f2af3e-1a58-4ece-92db-d0b7d6951305>\",\"WARC-Concurrent-To\":\"<urn:uuid:a69a4aa0-061d-4bd2-bb51-7abc7634b024>\",\"WARC-IP-Address\":\"13.32.208.17\",\"WARC-Target-URI\":\"https://www.semanticscholar.org/paper/A-global-analysis-proof-of-the-stability-of-space-Hintz-Vasy/a73d7a585fa40dbd3484d1a3c12cc1c05908165f\",\"WARC-Payload-Digest\":\"sha1:SZADUL6TJ5J6QGGUQWPURIIA4Q64HYH6\",\"WARC-Block-Digest\":\"sha1:SRQG2DN2HQOGARJV73NYFGPT7CMJ2JUO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057558.23_warc_CC-MAIN-20210924140738-20210924170738-00020.warc.gz\"}"}
https://themortgagestudent.com/tag/550a36-2020-2021-wall-calendar
[ "# 2020 2021 wall calendar\n\nYet they are different, and have there own advantages, and disadvantages . At 5 minutes, 1 apple is sold. 2. The figure below shows what the graph of a discrete data will look like for the table immediately above. Progress % Practice Now. Preview; Assign Practice; Preview. Practice. Visually, this can be depicted as a smooth graph that gives a value for every point along an axis. At 10 minutes, another apple is sold. Continuous- Advantages. Assign to Class. that a broken-line graph shows a relationship between one set of continuous data and one set of discrete data. Some of the worksheets for this concept are Continuity date period, Discrete and continuous domains, Discrete and continuous variables, Discrete and continuous domains, Examples of domains and ranges from graphs, Name class date 2 6, Discrete and continuous random variables, A guide to data handling. Discrete graphs represent values at specific points along the number line. Continuous And Discrete Graphs - Displaying top 8 worksheets found for this concept.. On the other hand, continuous data includes any value within range. Notice that we cannot connect the points since the numbers between 1 and 2, 2 and 3, 3 and 4 do not exist. Continuous and discrete data are both number based forms and are useful for all different types of graphs. Contrary to discrete data, continuous data is mutually … In this data set, every 5 minutes, one or some apples are sold. Here is what the graph of a continuous data will look like. Continuous data possess the measurable nature and unlike discrete data continuous data can take any values from the sequential pattern and that why they are tabulated in grouped frequency mode. Graphs for Discrete and for Continuous Data. Choosing the correct display for given data sets % Progress . Line graphs, frequency polygons, histograms, and stem-and-leaf plots all involve numerical data, or quantitative data, as is shown in the remaining graphs. Discrete data is information that can be counted. On the other hand, a continuous data can have numbers between any two data values … Therefore, when you talk about discrete and continuous data, you are talking about numerical data. The Difference Between Continuous & Discrete Graphs Continuous Graphs. Statistics Visualizing Data ..... All Modalities. Numerical data involves measuring or counting a numerical value. Tabulation of discrete data, done against a single value, is called as an ungrouped … Generally discrete data are … Discrete data is graphically represented by bar graph whereas a histogram is used to represent continuous data graphically. Continuous graphs represent functions that are continuous along their entire domain. These functions... Discrete Graphs. Continuous data is information that can be measured at infinite points. At 15 minutes, 2 apples are sold, etc. Discrete data is countable while continuous data is measurable. This can be visually depicted as a bar chart. The figure below shows what the graph of a discrete data will look like for the table immediately above. Grouping data: Discrete and Continuous data Theory: Generally the data collected are unorganized. They are very easy to use and you will find that most survey answers will be using that for of data! Continuous data is represented on the histogram and shows connected points on the graph depicting the continuous sequence of data. MEMORY METER. For example: In this graph you could say that the value (y) is the amount of apples sold from a stand. 
The discrete data is usually shown on the horizontal axis. More All Modalities; Share with Classes. Before any statistical analysis, one needs to organize the data in such a way that one can extract maximum information or inferences from the collected data. They can even be integrated to work with each other in certain graphs. Create Assignment. Discrete data: Any ungrouped raw data, mostly represented as whole numbers are called discrete data. (3, 9) of course means that 3 pounds cost 9 dollars. The simplest and the most popular type of chart. In the graph above, we show the points (1 3), (2, 6), (3, 9), and (4, 12). The following charts work especially well for representing the discrete data: Bar chart; Stacked bar chart; Column chart; Stacked column chart; Spider chart; Bar chart. Examples of non-discrete (continuous) data: height, weight, length, income, temperature. This indicates how strong in your memory this concept is. Discrete data contains distinct or separate values." ]
[ null ]
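Editorial note: a small plotting sketch may make the page's discrete-versus-continuous distinction concrete. This is my addition (Python/Matplotlib); the first three apple values follow the page's example, and the later two are invented to extend it:

```python
import matplotlib.pyplot as plt

# discrete data: apples sold at each 5-minute mark (countable, isolated values)
minutes = [5, 10, 15, 20, 25]     # 20 and 25 are hypothetical extensions
apples = [1, 1, 2, 1, 3]

fig, (left, right) = plt.subplots(1, 2, figsize=(8, 3))
left.scatter(minutes, apples)     # plot the points only -- do not connect them
left.set_title("Discrete: apples sold")

# continuous data: measurable at every instant, so a connected curve is appropriate
times = [t / 10 for t in range(251)]
temps = [20 + 0.02 * t for t in times]
right.plot(times, temps)
right.set_title("Continuous: temperature")
plt.show()
```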
{"ft_lang_label":"__label__en","ft_lang_prob":0.9090821,"math_prob":0.9592338,"size":4817,"snap":"2021-04-2021-17","text_gpt3_token_len":995,"char_repetition_ratio":0.17785165,"word_repetition_ratio":0.045454547,"special_character_ratio":0.19991696,"punctuation_ratio":0.14587973,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9879857,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-16T19:18:58Z\",\"WARC-Record-ID\":\"<urn:uuid:5b553017-06ae-4ce1-9b4c-65cd09e41300>\",\"Content-Length\":\"54655\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ceb6149a-85ec-4631-b353-542009fdc7c1>\",\"WARC-Concurrent-To\":\"<urn:uuid:a5bb3e45-9e0f-4f59-9aa4-85f633bf75d7>\",\"WARC-IP-Address\":\"172.67.166.143\",\"WARC-Target-URI\":\"https://themortgagestudent.com/tag/550a36-2020-2021-wall-calendar\",\"WARC-Payload-Digest\":\"sha1:HYHFTCRTVFZJ5AVQNLSH2XHDBNKJLJQK\",\"WARC-Block-Digest\":\"sha1:TIQTML5YEJF75FJAJX2B253NMDDCVLYO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038089289.45_warc_CC-MAIN-20210416191341-20210416221341-00540.warc.gz\"}"}
https://codereview.stackexchange.com/questions/51702/replacing-strings-of-blanks-by-tabs-and-blanks-to-achieve-same-spacing
[ "# Replacing strings of blanks by tabs and blanks to achieve same spacing\n\n#include <stdio.h>\n\n#define MAXLINE 1000 /* Maximum length of a line */\n#define TABSTOP 4 /* Length of tabstop */\n\nint getLine(char line[], int limit);\nint lookAhead(char line[], int start, int end);\n\n/*\n* Exercise 1-21\n* Write a program entab that replaces strings of blanks\n* by the minimum number of tabs and blanks to achieve the\n* same spacing.\n*/\n\nint main(void)\n{\nint tabIndex, i;\nchar line[MAXLINE];\n\nwhile ((getLine(line, MAXLINE)) > 0) {\ni = 0;\nwhile (line[i] != '\\0') {\nif (line[i] == ' ') {\ntabIndex = i + (TABSTOP - (i % TABSTOP));\nif ((lookAhead(line, i + 1, tabIndex)) == 1) {\nputchar('\\t');\ni = tabIndex;\n} else {\nputchar(line[i]);\n++i;\n}\n} else {\nputchar(line[i]);\n++i;\n}\n}\n}\nreturn 0;\n}\n\nint getLine(char line[], int limit)\n{\nint inputVal, i;\n\nfor (i=0; i < (limit - 1) && (inputVal = getchar()) != EOF &&\ninputVal != '\\n'; ++i) {\nline[i] = inputVal;\n}\nif (inputVal == '\\n') {\nline[i] = inputVal;\n++i;\n}\nline[i] = '\\0';\nreturn i;\n}\n\nint lookAhead(char line[], int start, int end)\n{\nint i, clearPath;\n\nclearPath = 1;\nfor (i = start; i < end; ++i) {\nif (line[i] != ' '){\nclearPath = 0;\n}\n}\nreturn clearPath;\n}\n\n\n## 3 Answers\n\nI don't think a proper getline() function should return an int, whether it's based on the attempted read (a boolean) or something else.\n\nI believe you're trying to imitate fgets(), which instead returns a char* (the extracted file line) if the read was successful or NULL if it failed. A proper I/O function should set an error flag upon a failed read attempt, and this is not done by returning an int. Plus, there will be issues if the caller of your function expects a string or a failbit, which should be the case if the caller expects proper feedback.\n\nUnless fgets() really doesn't satisfy your needs here, I'd recommend using that or another library function that handles this properly. Recreating an I/O function can cause problems if not written carefully, and even then you shouldn't be doing that.\n\n• The chapter has you build bare bones versions of several std lib functions which is why I included them. I believe the usage of std lib functions will be detailed in later chapters. Perhaps I should have mentioned that. – albertjorlando May 25 '14 at 20:15\n• @ao2130: Yes, that would be a good idea, but at least you've learned something about it. :-) – Jamal May 25 '14 at 20:35\n• @Jama Curious, do you consider fgets() an improper I/O function because it does not, per C spec, set an error flag anymore than OP's getLine(). Maybe NULL is an error flag in that context? – chux Jun 7 '14 at 6:15\n• @chux: I think it's okay in regards to error flags. The OP's function, however, doesn't return NULL at all, so the calling code won't know if an error occurred. – Jamal Jun 7 '14 at 13:44\n\nI believe you have two ways of interpreting \"to achieve the same spacing\":\n\n1. Replace consecutive space anywhere in a line.\n2. Replace only spaces at the beginning of a line that occur before any non-space character.\n\nEither way, this is a very difficult problem to solve correctly. You'll need to implement a complete tab-stop interpreter. 
In addition, with Interpretation #1, you may need to implement a source code parser, if your goal is to be able to retab source code without corrupting literal strings that may be embedded in the text.\n\nConsider the following input text as an example:\n\n/*\nA B C D E F G H I J K L M N O P Q R\n012301230123012301230123012301230123012301230123012301230123012301230123\n*/\nvar␣d␣=␣b␣*␣b␣-␣4␣*␣a␣*␣c;␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣//␣discriminant⏎\nif (d␣>=␣0)␣{␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣//␣only␣real␣solutions⏎\n␣␣␣␣var␣root1␣=␣(-b␣+␣sqrt(d))␣/␣(2␣*␣a);␣␣␣//␣the␣larger␣root⏎\n␣␉ var␣root2␣=␣(-b␣+␣sqrt(d))␣/␣(2␣*␣a);␣␣␣//␣the␣smaller␣root⏎\n}⏎\n\n\nI believe that the first line should get five Tabs, with the first Tab representing just two spaces (G2 and G3). Also, the var root2 line has a superfluous leading space that should be discarded.\n\nWith Interpretation #1, the output should be:\n\nvar␣d␣=␣b␣*␣b␣-␣4␣*␣a␣*␣c;␉␉␉␉␉//␣discriminant⏎\nif (d␣>=␣0)␣{␉␉␉␉␉␉␉␉//␣only␣real␣solutions⏎\n␉var␣root1␣=␣(-b␣+␣sqrt(d))␣/␣(2␣*␣a);␉//␣the␣larger␣root⏎\n␉var␣root2␣=␣(-b␣+␣sqrt(d))␣/␣(2␣*␣a);␉//␣the␣smaller␣root⏎\n}⏎\n\n\nWith Interpretation #2, the output would be:\n\nvar␣d␣=␣b␣*␣b␣-␣4␣*␣a␣*␣c;␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣//␣discriminant⏎\nif (d␣>=␣0)␣{␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣␣//␣only␣real␣solutions⏎\n␉var␣root1␣=␣(-b␣+␣sqrt(d))␣/␣(2␣*␣a);␣␣␣//␣the␣larger␣root⏎\n␉var␣root2␣=␣(-b␣+␣sqrt(d))␣/␣(2␣*␣a);␣␣␣//␣the␣smaller␣root⏎\n}⏎\n\n\nThere is a case to be made for interpretation #2, which is more conservative about which spaces are safe to replace while achieving the \"same\" spacing. Here is a real-life example of a bug that was introduced due to a careless spaces-to-tabs transformation that meddled with a literal string within some source code. If there is any doubt about correctness and safety, I think a good policy is to leave the text untransformed.\n\nMinor: int getLine(char line[], int limit)\n\nFollowing code has trouble if limit is small as inputVal is tested without prior setting and line[i] is not known to be in range.\n\nint inputVal;\n...\nif (inputVal == '\\n') {\nline[i] = inputVal;\n++i;\n}\nline[i] = '\\0';\n\n\nSuggest:\n\n// top of function\nif (limit < 1) return 0;\nint inputVal = 0;\n...\nif (i < (limit - 1) && inputVal == '\\n') {\nline[i] = inputVal;\n++i;\n}\n\n\nA re-write of this function could be\n\nsize_t getLine(char line[], size_t limit) {\nif (limit < 1) return 0;\nlimit--; // Room for \\0\n\nsize_t i = 0;\nint inputVal;\nwhile (i < limit && (inputVal = getchar()) != EOF) {\nline[i] = inputVal;\n++i;\nif (inputVal == '\\n') break;\n}\n\nline[i] = '\\0';\nreturn i;\n}" ]
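Editorial note: for comparison with the C under review, here is a compact Python sketch of the same look-ahead idea (my translation, not part of the thread; like the original it uses the character index as the column, so it assumes the input contains no pre-existing tabs):

```python
TABSTOP = 4

def entab(line: str) -> str:
    """Replace a full run of blanks up to the next tab stop with one tab."""
    out, i = [], 0
    while i < len(line):
        if line[i] == " ":
            stop = i + (TABSTOP - i % TABSTOP)        # next tab stop
            if stop <= len(line) and all(c == " " for c in line[i:stop]):
                out.append("\t")                      # whole span was blank
                i = stop
                continue
        out.append(line[i])
        i += 1
    return "".join(out)

assert entab("a       b") == "a\t\tb"   # 'b' still lands on column 8
```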
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.73499393,"math_prob":0.9102234,"size":5679,"snap":"2019-35-2019-39","text_gpt3_token_len":2408,"char_repetition_ratio":0.122290745,"word_repetition_ratio":0.054143645,"special_character_ratio":0.30463108,"punctuation_ratio":0.13277623,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9673902,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-22T18:59:22Z\",\"WARC-Record-ID\":\"<urn:uuid:98a08f00-5ae0-46a2-8953-1dcb041de571>\",\"Content-Length\":\"160085\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a8686a33-ac55-42c1-9abf-ad088f3e5de3>\",\"WARC-Concurrent-To\":\"<urn:uuid:b21b69af-82b8-49e6-af2c-bea9b37d55ed>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://codereview.stackexchange.com/questions/51702/replacing-strings-of-blanks-by-tabs-and-blanks-to-achieve-same-spacing\",\"WARC-Payload-Digest\":\"sha1:EGAQLNKJDGYFUQQFK53E2QAN7TFAZ74C\",\"WARC-Block-Digest\":\"sha1:AKC7YOG377BUAY54UBYO4AXEH52OMV2K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027317339.12_warc_CC-MAIN-20190822172901-20190822194901-00536.warc.gz\"}"}
https://ubuntuforums.org/archive/index.php/t-1649953.html?s=52e3174a48a421b05de62def0cae3e6a
[ "PDA\n\nView Full Version : BASh: array element w/in arithmetic function odd behavior\n\njamesisin\nDecember 21st, 2010, 03:17 AM\nI am building a script to change time stamps in cue files. I have managed to parse out the bit I need to convert from frames (ff) to milliseconds (nnn), but when I run the array element through the arithmetic function in bash I get an error.\n\nHere is the section of script in question:\n\nfor (( i=0 ; i < \\${#cuefind[@]} ; i++ )) ; do\n# path is cue iteration less file name\ncuefolder=\"\\${cuefind[i]%/*.*}\"\ncat \"\\${cuefind[i]}\" | grep INDEX | awk -F':' '{print \\$3}' > /tmp/whyme2\ndeclare -a ff\nlet ii=0\nwhile read ffline; do\nff[\\$i]=\\$ffline\necho \"ffline is \" \\$ffline\nnnn=\\$(( ( ffline * 1000 ) / 75 ))\necho \"nnn is \" \\$nnn\n((ii++))\ndone < /tmp/whyme2\ndone\n\nAnd here is the error I receive:\n\nffline is 00\n\")syntax error: invalid arithmetic operator (error token is \"\n\nHere is the script running without the variable in the equation (I just stuck in the real number 8 ) :\n\nffline is 00\nnnn is 106\nffline is 38\nnnn is 106\nffline is 38\nnnn is 106\nffline is 56\nnnn is 106\nffline is 38\nnnn is 106\nffline is 38\nnnn is 106\nffline is 28\nnnn is 106\nffline is 38\nnnn is 106\nffline is 19\nnnn is 106\nffline is 13\nnnn is 106\nffline is 19\nnnn is 106\nffline is 19\nnnn is 106\nffline is 66\nnnn is 106\nffline is 00\nnnn is 106\nffline is 38\nnnn is 106\nffline is 38\nnnn is 106\nffline is 56\nnnn is 106\nffline is 00\nnnn is 106\nffline is 38\nnnn is 106\n\nClearly the problem is in the array somehow. I don't have the variable quoted which I think is fine because each element is just a two digit number, so I'm confused where the equation is bumping into a quotation mark.\n\nA little help?\n\njamesisin\nDecember 21st, 2010, 05:02 AM\nMaybe this additional information will be useful:\n\n\\$ x=1000\n\\$ new=\\$(( x * 3 ))\n\\$ echo \\$new\n3000\n\\$ new=\\$(( ( x * 3 ) / 5 ))\n\\$ echo \\$new\n600\n\\$ cat /tmp/whyme2\n00\n38\n38\n56\n38\n38\n28\n38\n19\n13\n19\n19\n66\n00\n38\n38\n56\n00\n38\n\\$\n\njamesisin\nDecember 21st, 2010, 08:23 AM\nAlso, I tried including the dollar sign on the variable (even though within the double-parenthetical equations you are not supposed to) and received a slightly different error:\n\nffline is 00\n* 1000 ) / 75 \")syntax error: invalid arithmetic operator (error token is \"\n\nDoes any of this make sense?\n\nI don't use these mathematical functions much.\n\nI know I have the equation correct because it works outside the script. It's just that in the script I get this error (when using the variable—if I substitute the number 8 I get exactly the same as posted above).\n\ngmargo\nDecember 21st, 2010, 05:35 PM\nYour input file, /tmp/whyme2, is probably in DOS format (CR/LF line endings). Correct it to UNIX format (LF line endings) and your code works.\n\njamesisin\nDecember 21st, 2010, 08:56 PM\nThat seems odd. The temp file is created from standard out (using the > in BASh) on an Ubuntu system. Is there a systemic way to force the other type?\n\ncat \"\\${cuefind[i]}\" | grep INDEX | awk -F':' '{print \\$3}' > /tmp/whyme2\n\ngmargo\nDecember 22nd, 2010, 01:10 AM\nFirst check if my supposition was right. Use xxd to dump your file and verify the line endings. 
I'm guessed this was the problem since that's the only way I managed to generate your error.\n\njamesisin\nDecember 22nd, 2010, 07:18 PM\nHere's the hex-dump:\n\n\\$ xxd /tmp/whyme2\n0000000: 3030 0d0a 3338 0d0a 3338 0d0a 3536 0d0a 00..38..38..56..\n0000010: 3338 0d0a 3338 0d0a 3238 0d0a 3338 0d0a 38..38..28..38..\n0000020: 3139 0d0a 3133 0d0a 3139 0d0a 3139 0d0a 19..13..19..19..\n0000030: 3636 0d0a 3030 0d0a 3338 0d0a 3338 0d0a 66..00..38..38..\n0000040: 3536 0d0a 3030 0d0a 3338 0d0a 56..00..38..\n\nSo, yes, that looks like the DOS interpretation of Enter. So how can I force standard out to print only the 0a into the file?\n\ngmargo\nDecember 22nd, 2010, 09:20 PM\nThere are many ways. Here are three:\n\nAdd this to pipeline:\n\nsed 's/\\r//'\n\nOr, add this to pipeline:\n\nperl -n -e 's/[[:space:]]+\\z//s; print \"\\$_\\n\"'\n\nOr, add this to pipeline (from the tofrodos package):\n\nfromdos" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8948273,"math_prob":0.9156768,"size":2889,"snap":"2019-43-2019-47","text_gpt3_token_len":1027,"char_repetition_ratio":0.31785095,"word_repetition_ratio":0.90387857,"special_character_ratio":0.35929388,"punctuation_ratio":0.055374593,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9872077,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-18T04:10:18Z\",\"WARC-Record-ID\":\"<urn:uuid:af6acc2c-3529-4a0c-832b-d17c21ff8831>\",\"Content-Length\":\"9438\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:87e39ea4-1a4b-4b9d-bad3-5a56d39ae022>\",\"WARC-Concurrent-To\":\"<urn:uuid:842a89e9-7242-424d-917e-b40efa2c0a64>\",\"WARC-IP-Address\":\"91.189.94.12\",\"WARC-Target-URI\":\"https://ubuntuforums.org/archive/index.php/t-1649953.html?s=52e3174a48a421b05de62def0cae3e6a\",\"WARC-Payload-Digest\":\"sha1:VE7E52MCVYWVY2HKKVJAFPO5345QUZ5N\",\"WARC-Block-Digest\":\"sha1:ZCKFW4P6QLZ5VH4B5M6AM5XRW7IVB6I4\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986677884.28_warc_CC-MAIN-20191018032611-20191018060111-00077.warc.gz\"}"}
https://tex.stackexchange.com/questions/195005/how-to-expand-a-macro-to-use-it-inside-a-question-in-amc/195010
[ "# Context\n\nI'm trying to use AMC to produce an exam with code in it.\n\nHowever, the use of verbatim code inside the questions is beyond the scope of the package. They suggest to declare boxes and used them inside each question. But, I found problematic to create a \\newbox per question and insert it by hand.\n\nThus, I searched for a solution on how to create and insert boxes automatically (following this question and this answer).\n\n# Problem\n\nBut now, I'm stuck with the expansion of the macros. As the macro I'm using inserts always the last created box. As it seems that the AMC package post processes all the elements (questions) in the \\onecopy macro, and it expands my macro \\insertbox then. However, I need to insert the expanded version of the macro in every element, in order to insert the expanded name of the box I created.\n\nThus, how can I expand the definition of the box and insert it in each question?\n\nI tried to store the \\savebox definition in another macro an insert it after, using \\edef but that doesn't work either.\n\nI will like to redefine \\insertbox in such a way that is expanded with the name of the temporal box I created, instead of being expanded until the call of \\onecopy.\n\n# Code\n\n\\documentclass{article}\n\n\\usepackage[box]{automultiplechoice}\n\\usepackage{listings}\n\n% a simple wrapper to create boxes automatically\n\\makeatletter\n\\newcounter{myboxcounter}\n\\newenvironment{mybox}{%\n\\expandafter\\newsavebox\\csname foobox\\roman{myboxcounter}\\endcsname\n\\global\\expandafter\\setbox\\csname foobox\\roman{myboxcounter}\\endcsname\\hbox\\bgroup\\color@setgroup\\ignorespaces\n}{%\n\\color@endgroup\\egroup\n}\n% first try\n% \\newcommand{\\insertbox}{\\expandafter\\usebox\\csname\\name\\endcsname}\n% second one\n\\newcommand{\\insertbox}{\\edef\\name{foobox\\roman{myboxcounter}}\\edef\\x{\\expandafter\\usebox\\csname\\name\\endcsname}\\x}\n\\makeatother\n\n\\begin{document}\n\n%%% preparation of the groups\n\\begin{mybox}\n\\begin{lstlisting}[language=C++]\nint a = 10;\na = a + 10;\n\\end{lstlisting}\n\\end{mybox}\n\\element{code}{\n\\begin{question}{code 1}\nWhich is the result of \\texttt{a}?\n\n\\insertbox\n\\begin{choices}\n\\correctchoice{10}\n\\wrongchoice{20}\n\\wrongchoice{0}\n\\wrongchoice{30}\n\\end{choices}\n\\end{question}\n}\n\n\\begin{mybox}\n\\begin{lstlisting}[language=C++]\nint a = 10;\na = a++;\n\\end{lstlisting}\n\\end{mybox}\n\\element{code}{\n\\begin{question}{code 2}\nWhich is the result of \\texttt{a}?\n\n\\insertbox\n\\begin{choices}\n\\correctchoice{10}\n\\wrongchoice{11}\n\\wrongchoice{12}\n\\wrongchoice{0}\n\\end{choices}\n\\end{question}\n}\n\n%%% copies\n\\onecopy{1}{\n\\insertgroup{code}\n}\n\n\\end{document}\n\n\nAs you can see in the image below, both inserted codes belong to the last box created. 
The macro seems to expand later, instead of when it is called in the \element macro.", null, "Here, I reset the counter to 0 before the final \onecopy and redefine \insertbox to step the counter with its output.

\documentclass{article}

\usepackage[box]{automultiplechoice}
\usepackage{listings}

% a simple wrapper to create boxes automatically
\makeatletter
\newcounter{myboxcounter}
\newenvironment{mybox}{%
\stepcounter{myboxcounter}%
\expandafter\newsavebox\csname foobox\roman{myboxcounter}\endcsname
\global\expandafter\setbox\csname foobox\roman{myboxcounter}\endcsname\hbox\bgroup\color@setgroup\ignorespaces
}{%
\color@endgroup\egroup
}
% first try
% \newcommand{\insertbox}{\expandafter\usebox\csname\name\endcsname}
% second one
\newcommand{\insertbox}{\stepcounter{myboxcounter}%
\edef\name{foobox\roman{myboxcounter}}\edef\x{%
\expandafter\usebox\csname\name\endcsname}\x}
\makeatother

\begin{document}

%%% preparation of the groups
\begin{mybox}
\begin{lstlisting}[language=C++]
int a = 10;
a = a + 10;
\end{lstlisting}
\end{mybox}
\element{code}{
\begin{question}{code 1}
Which is the result of \texttt{a}?

\insertbox
\begin{choices}
\correctchoice{10}
\wrongchoice{20}
\wrongchoice{0}
\wrongchoice{30}
\end{choices}
\end{question}
}

\begin{mybox}
\begin{lstlisting}[language=C++]
int a = 10;
a = a++;
\end{lstlisting}
\end{mybox}
\element{code}{
\begin{question}{code 2}
Which is the result of \texttt{a}?

\insertbox
\begin{choices}
\correctchoice{10}
\wrongchoice{11}
\wrongchoice{12}
\wrongchoice{0}
\end{choices}
\end{question}
}

%%% copies
\setcounter{myboxcounter}{0}
\onecopy{1}{
\insertgroup{code}
}

\end{document}", null, "• That is a nice workaround. However, if I use the functionality of AMC to shuffle the questions I will have problems, as the questions are not necessarily in the same order as they were input in the text. – adn Aug 5 '14 at 20:19
• @adn True enough. But I don't have a (workaround)^2 yet... – Steven B. Segletes Aug 5 '14 at 20:29
• any workaround yet? – adn Mar 30 '15 at 1:25
• @adn I'm sorry, but I would have to know a lot more about how the shuffle operation of automultiplechoice operates, in order to even have an idea. Unfortunately, I have not tried to get to the bottom of that. Perhaps you could compose an MWE that makes use of the shuffle (showing the problem of renumbering) and ask it as a new question? – Steven B. Segletes Mar 30 '15 at 2:31

I'm afraid you have to name the boxes, so that they can be retrieved in the right order.
You can use the question ID as a name for the boxes:

\documentclass{article}

\usepackage[box]{automultiplechoice}
\usepackage{listings}

% a simple wrapper to create boxes automatically;
% the single argument is the question ID used to name the box
\makeatletter
\newenvironment{mybox}[1]{%
\expandafter\newsavebox\csname foobox#1\endcsname
\global\expandafter\setbox\csname foobox#1\endcsname\hbox\bgroup\color@setgroup\ignorespaces
}{%
\color@endgroup\egroup
}
\newcommand{\insertbox}{\edef\name{foobox\AMCid@name}\edef\x{\expandafter\usebox\csname\name\endcsname}\x}
\makeatother

\begin{document}

%%% preparation of the groups
\begin{mybox}{code 1}
\begin{lstlisting}[language=C++]
int a = 10;
a = a + 10;
\end{lstlisting}
\end{mybox}
\element{code}{
\begin{question}{code 1}
Which is the result of \texttt{a}?

\insertbox
\begin{choices}
\correctchoice{10}
\wrongchoice{20}
\wrongchoice{0}
\wrongchoice{30}
\end{choices}
\end{question}
}

\begin{mybox}{code 2}
\begin{lstlisting}[language=C++]
int a = 10;
a = a++;
\end{lstlisting}
\end{mybox}
\element{code}{
\begin{question}{code 2}
Which is the result of \texttt{a}?

\insertbox
\begin{choices}
\correctchoice{10}
\wrongchoice{11}
\wrongchoice{12}
\wrongchoice{0}
\end{choices}
\end{question}
}

%%% copies
\onecopy{5}{
\shufflegroup{code}
\insertgroup{code}
}

\end{document}" ]
[ null, "https://i.stack.imgur.com/MojIc.jpg", null, "https://i.stack.imgur.com/ml1rl.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.70992076,"math_prob":0.6056494,"size":2736,"snap":"2019-51-2020-05","text_gpt3_token_len":779,"char_repetition_ratio":0.13543192,"word_repetition_ratio":0.05142857,"special_character_ratio":0.23684211,"punctuation_ratio":0.07064018,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99045086,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-11T16:48:31Z\",\"WARC-Record-ID\":\"<urn:uuid:0eadf66c-a5a6-4bda-84c1-50e8f62d0210>\",\"Content-Length\":\"148799\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:309d66e8-282a-4167-8ebb-f47c91b4b0b0>\",\"WARC-Concurrent-To\":\"<urn:uuid:4d18ff03-afad-4bf5-bf67-70484136a8a7>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://tex.stackexchange.com/questions/195005/how-to-expand-a-macro-to-use-it-inside-a-question-in-amc/195010\",\"WARC-Payload-Digest\":\"sha1:YSDDMLVBLPCKRO6VKMHOTB4P33DKNLVD\",\"WARC-Block-Digest\":\"sha1:JH7B34ESYHVP5GFRGQUL4NIQZJPKXVVJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540531974.7_warc_CC-MAIN-20191211160056-20191211184056-00454.warc.gz\"}"}
https://math.stackexchange.com/questions/2437321/inscribing-a-bezier-curve-into-a-rectangle
[ "# Inscribing a Bezier curve into a rectangle\n\nMy goal is to determine the coordinates of the rectangle where a cubic Bezier curve is inscribed. I only know the Start and End points and the two Control points coordinates. Is there a simple formula to determine the rectangle coordinates?\n\n• Do you want the smallest axis-aligned bounding box?\n– lhf\nSep 20 '17 at 11:38\n• – lhf\nSep 20 '17 at 11:43\n• @lhf. Yes that's what I want. But instead of interpolating all curve's points, i want a direct approach that leads to the same result. do you think it it is possible? Sep 20 '17 at 12:06\n• @user2383818 Why, the method Pomax presents is the direct method! Sep 20 '17 at 12:10\n\nIf the Bézier curve is given by $(x(t),y(t))$, where $t\\in [0,1]$ and $x(t)$ and $y(t)$ are cubic polynomials, then the bounding box is given by $[x_{\\text{min}},x_{\\text{max}}] \\times [y_{\\text{min}},y_{\\text{max}}]$, where $x_{\\text{min}}$ is the minimum value attained by $x(t)$ for $t \\in [0,1]$, and analogously for the others.\nMinimizing $x(t)$ for $t \\in [0,1]$ to find $x_{\\text{min}}$ reduces to solving a quadratic equation. Don't forget to consider $x(0)$ and $x(1)$." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8314955,"math_prob":0.9989821,"size":2002,"snap":"2021-43-2021-49","text_gpt3_token_len":575,"char_repetition_ratio":0.11961962,"word_repetition_ratio":0.006688963,"special_character_ratio":0.2962038,"punctuation_ratio":0.1296758,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999082,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-26T08:42:34Z\",\"WARC-Record-ID\":\"<urn:uuid:7437b4fd-1467-49f4-af62-6a963adc1be6>\",\"Content-Length\":\"167553\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:be4e8e6f-9d24-433c-a789-3eec4be2a6f9>\",\"WARC-Concurrent-To\":\"<urn:uuid:58f32d12-d6b6-4c78-8297-0cbfdc2f952a>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2437321/inscribing-a-bezier-curve-into-a-rectangle\",\"WARC-Payload-Digest\":\"sha1:KA7BEN3HQ6RAIX6RMIXBT43DGIIAF5QH\",\"WARC-Block-Digest\":\"sha1:SHYSBKHRHETBUSPR7L5SNHGXYFNM6GLE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587854.13_warc_CC-MAIN-20211026072759-20211026102759-00641.warc.gz\"}"}
https://golem.ph.utexas.edu/category/2010/10/pictures_of_modular_curves_ii.html
[ "## October 29, 2010\n\n### Pictures of Modular Curves (II)\n\n#### Posted by Guest", null, "guest post by Tim Silverman\n\nWelcome to the second part of this series of posts on picturing modular curves.\n\nCatching up\n\nLast time, I zipped through some of the maths relating (among other things) Farey sequences, modular arithmetic, the hyperbolic plane, Platonic solids, the rational projective line, and the complex upper half plane, accompanied by pretty pictures of these often rather photogenic objects. But, during the attempts to get some of the svg for the pictures to work with the blog format, some of the pictures near the end got missed off! So I’m going to present them here.\n\nBut, also, I packed rather a lot of stuff into one post, and since I’m creating an extra post to put the pictures in, and since some people grumbled at the frenetic speed of the last post, I thought I’d take the opportunity to be more explicit about some of that maths I zipped through.\n\nSome linear actions\n\nNow, this series of posts is (in some sense) really about various actions of the group $PSL(2, \\mathbb{Z})$ and some of its subgroups and quotients, so I want to talk about $PSL(2,\\mathbb{Z})$; but I think I’ll start off by taking a step back, and talking instead about the group $SL(2,\\mathbb{Z})$.\n\n$SL(2,\\mathbb{Z})$ is the group of $2\\times 2$ matrices with integer entries and a determinant of $1$. For instance, $\\left(\\array{3&5\\\\7&12}\\right)$, $\\left(\\array{1&1\\\\9&10}\\right)$ or $\\left(\\array{1&-2\\\\-3&7}\\right)$.\n\nWe can get this group to act naturally on ordered pairs of integers, by putting the pairs of integers into column vectors and multiplying them by matrices on the left. E.g.\n\n$\\left(\\array{3&5\\\\7&12}\\right)\\left(\\array{-4\\\\3}\\right)=\\left(\\array{3\\\\8}\\right)$.\n\nAnd so forth.\n\nWe’re not restricted to acting on pairs of integers, either. Obviously, $SL(2,\\mathbb{Z})$ can act in the same way on pairs of numbers from larger number systems containing the integers: for instance, pairs of rational numbers, pairs of real numbers, or pairs of complex numbers.\n\nWe can take these ordered pairs of numbers to be coordinates of a plane. For instance, here is the $\\mathbb{Z}^2$ plane sitting inside the $\\mathbb{R}^2$ plane. I’ve drawn some extra features as well (some lines and a square) and coloured one of the points red so we can see what happens when we act on these planes with an element of $SL(2,\\mathbb{Z})$.\n\n-3 -2 -1 0 1 2 3 3 2 1 0 -1 -2 -3\nThe ℤ2 plane in the ℝ2 plane\n\nActing on it with $\\left(\\array{2&1\\\\1&1}\\right)$, the points get rearranged, and the lines and the blue square get sent to other lines and a blue parallelogram. In fact, we get the picture below. (I’ve left the coordinate labels in place while everything else moves under them.) The points with integer coordinates get sent to other points with integer coordinates, so we can’t tell they’ve moved without the aid of the extra features—but they have, and the black lines, blue parallelogram and red dot indicate how.\n\n-3 -2 -1 0 1 2 3 3 2 1 0 -1 -2 -3\nThe ℤ2 plane in the ℝ2 plane, after action\n\nInvariants\n\nNow, $SL(2,\\mathbb{Z})$ preserves a bunch of things about the planes shown above when it acts on them. For instance, since the matrices that make up its elements have determinant $1$, the action preserves area in the plane $\\mathbb{R}^2$. So the thin blue parallelogram has an area of $1$, just like the square it came from. 
Area is of course a geometric property associated with $\mathbb{R}^2$, but in this context we can give it an algebraic formulation, which allows us to express the area-preserving property in an equivalent way which is meaningful for $\mathbb{Z}^2$ too. For the area of a parallelogram is the magnitude of the vector cross-product of two adjacent sides.

For instance, the blue square had one side going from $(0,0)$ to $(1,0)$—the vector difference being $(1,0)$—and another going from $(0,0)$ to $(0,1)$, with vector difference $(0,1)$. In the blue parallelogram, these vectors get sent to $(2, 1)$ and $(1, 1)$.

The vector cross-product needs $3$ dimensions to work, so we add in a $z$-coordinate, and set it to $0$ in the plane. Then the vector cross-product of $(x_1, y_1, 0)$ and $(x_2, y_2, 0)$ has magnitude $\vert x_1 y_2-x_2 y_1\vert$. And, restricting to parallelograms with one vertex at the origin, we can conclude that for two points in the plane with coordinates $(x_1, y_1)$ and $(x_2, y_2)$, the quantity $\vert x_1 y_2-x_2 y_1\vert$ is preserved by the action of $SL(2,\mathbb{Z})$. Indeed, since the determinants of the matrices are $1$, the signed magnitude $x_1 y_2-x_2 y_1$ is also preserved. This algebraic quantity is of course perfectly well defined for pairs of integers. Last time, I called the coordinates by different names—instead of $(x_1, y_1)$ and $(x_2, y_2)$, I called the points $(a, c)$ and $(b, d)$. This was so I could line up the two column vectors of coordinates side by side like this:

$\left(\array{a\\c}\right)\left(\array{b\\d}\right)$

and say that the quantity $a d-b c$ is preserved, which (not coincidentally) is the determinant of the matrix $\left(\array{a&b\\c&d}\right)$. There’s a bit more to say about this, but I’ll put it off until I’ve talked about some other stuff that I need to discuss first.

So anyway, one of the things that is preserved is the vector cross-product.

A second is the vector sum:

$\left(\array{x_1\\y_1}\right)+\left(\array{x_2\\y_2}\right)=\left(\array{x_1+x_2\\y_1+y_2}\right)$

Obviously, the sum-of-vectors operation commutes with the linear transformations given by elements of $SL(2, \mathbb{Z})$.

And thirdly, the action of $SL(2,\mathbb{Z})$ preserves the property “being a line through the origin”. Indeed, we get a transitive action on the set of lines through the origin, and I want to say a bit about this next.

Lines through the origin

Lines through the origin are another geometric concept with an easy algebraic characterisation. For a line in the $\mathbb{R}^2$ plane, we pick a point $(x, y)$ on the line (other than the origin), and the line is the points $(\lambda x, \lambda y)$ for all real $\lambda$. For lines in the $\mathbb{Z}^2$ plane, we can pick one of the points on it closest to the origin, $(m, n)$, and the line is all points $(k m, k n)$ for integer $k$.

(Of course, in the latter case there are two points closest to the origin, $(m, n)$ and $(-m, -n)$.)

Alternatively, we can characterise the lines in terms of a formal ratio of $y$ to $x$. Thus the line made of points of the form $(\lambda x, \lambda y)$ can be characterised by the ratio $\frac{y}{x}$, which is invariant under multiplication of both top and bottom by $\lambda$. For non-zero $x$, this is an actual ratio—for lines in $\mathbb{R}^2$ we get a real number and for lines in $\mathbb{Z}^2$ we get a rational number. But we also have the extra line given by $\frac{1}{0}$.
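In code, one can represent a line through the origin in $\mathbb{Z}^2$ by a reduced pair with a fixed sign convention, and let a matrix act on it. A small Python sketch (the matrix and sample lines are arbitrary choices of mine; conventions as in the text, with column vectors acted on from the left):

```python
from math import gcd

# A line through the origin in Z^2 is stored as a reduced pair (x, y),
# with ratio y/x; the pair (0, 1) is the extra line "1/0".
def reduce_line(x, y):
    g = gcd(x, y)
    x, y = x // g, y // g
    if x < 0 or (x == 0 and y < 0):   # fix the (m, n) ~ (-m, -n) ambiguity
        x, y = -x, -y
    return x, y

def act_on_line(m, line):
    # Apply the matrix to the point (x, y) and re-reduce: this is the
    # induced action on the set of lines, i.e. on the projective line.
    (a, b), (c, d) = m
    x, y = line
    return reduce_line(a*x + b*y, c*x + d*y)

m = ((2, 1), (1, 1))                           # the matrix from the text
for line in [(1, 0), (0, 1), (1, 1), (2, -3)]:
    print(line, '->', act_on_line(m, line))
```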
The lines through the origin in these planes together constitute the points on the projective lines over $\mathbb{R}$ and $\mathbb{Z}$ respectively, with $\frac{1}{0}$ being the “point at infinity”.

And as we might expect, this also works for other number systems such as $\mathbb{Q}$ and $\mathbb{C}$.

Projective invariants

Now we want to return to those invariants on $\mathbb{Z}^2$ and $\mathbb{R}^2$ (etc) that we looked at earlier, namely, for two points $(x_1, y_1)$ and $(x_2, y_2)$, the parallelogram area $x_1 y_2 - x_2 y_1$ and the sum relation $(x_1, y_1) + (x_2, y_2) = (x_1 + x_2, y_1 + y_2)$. Have these been disrupted by going from points to lines containing those points?

At this point a crucial difference arises between $\mathbb{R}$ and $\mathbb{C}$ on the one hand, and $\mathbb{Z}$ and $\mathbb{Q}$ on the other. Considering area first: replacing $(x_1, y_1)$ by $(\lambda x_1, \lambda y_1)$ has the effect of multiplying $x_1 y_2 - x_2 y_1$ by $\lambda$. So we appear to have lost our invariant.

However, for $\mathbb{Z}$ or $\mathbb{Q}$, we can still identify, and cancel out, the factors of $\lambda$, by reducing the fraction to its lowest form. This leaves only a sign ambiguity (between $(m, n)$ and $(-m, -n)$). And by picking the absolute value $\vert x_1 y_2 - x_2 y_1\vert$ coming from the reduced form of the fraction, we can get rid of the sign ambiguity too, and get a true invariant of pairs of points of the projective line. And thus for any natural number $k$, we get an invariant relation on the projective line given by picking out the pairs of points with $\vert x_1 y_2 - x_2 y_1\vert = k$. In particular, this is true of $k=1$, which is the case for which we joined the fractions by an edge last time.

Now, for the sum, we can do the same thing, reducing fractions to their lowest form, and extracting the numerator and denominator. As in the previous paragraph, there is still a sign ambiguity, so what we get is an invariant ternary relation among triplets of points, given by $\pm\mathbf{a}\pm\mathbf{b}\pm\mathbf{c}=0$. By this, I mean that, given three points $a$, $b$ and $c$ on the projective line over (say) $\mathbb{Q}$, we look at these as lines in $\mathbb{Z}^2$, take one of the two points closest to the origin on each of those lines, think of those points as position vectors $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$, and then fiddle with the signs of those vectors to see if any combination gives a triplet that sums to zero. If so, then this relation will be invariant under $SL(2, \mathbb{Z})$. Given any two lines, identified by the points, $\pm\mathbf{a}$ and $\pm\mathbf{b}$, there will be two lines $\pm\mathbf{c}$ which enter into such a relation with them: viz. $\pm(\mathbf{a}+\mathbf{b})$ and $\pm(\mathbf{a}-\mathbf{b})$.

So now we have our binary relation giving edges, and our ternary relation giving triangles.

Finally, we notice that $SL(2, \mathbb{Z})$ does not act freely on the lines in a plane: the matrix $\left(\array{-1&0\\0&-1}\right)$ belongs to this group, and sends $(m, n)$ to $(-m, -n)$, corresponding to the same line. This is the only non-identity matrix in $SL(2, \mathbb{Z})$ which preserves lines like this, so if we quotient $SL(2, \mathbb{Z})$ out by the subgroup $\left\{\left(\array{1&0\\0&1}\right),\left(\array{-1&0\\0&-1}\right)\right\}$, we get a free action on the projective line.
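Here is a small Python check of these two relations on a couple of reduced fractions (the sample fractions are arbitrary choices of mine; fractions are written as (numerator, denominator) pairs, matching the $(a, c)$, $(b, d)$ columns above):

```python
# The "edge" relation |a*d - b*c| = 1 and the two "mediant" partners
# +-(a + b) and +-(a - b) for a pair of reduced fractions.
def is_edge(p, q):
    (a, c), (b, d) = p, q
    return abs(a*d - b*c) == 1

def mediant_partners(p, q):
    (a, c), (b, d) = p, q
    return [(a + b, c + d), (a - b, c - d)]

half, third = (1, 2), (1, 3)
print(is_edge(half, third))           # True: |1*3 - 1*2| = 1
print(mediant_partners(half, third))  # [(2, 5), (0, -1)], i.e. 2/5 and 0/1
```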
And that quotient group is just the projective group $PSL(2, \mathbb{Z})$.

The complex case

Now let’s try the same thing over $\mathbb{C}$. So we take the plane consisting of pairs of complex numbers $(z, w)$. (This is not the same as the thing often called the “complex plane”—which is characterised by a single complex number for each point! The latter is two dimensional as a real manifold, but only one dimensional as a complex manifold.) We can act on this plane $\mathbb{C}^2$ with $SL(2, \mathbb{Z})$ as before.

Now take the lines in this two-complex-dimensional plane—each line being the set of points of the form $(\lambda z, \lambda w)$ for a given $z$ and $w$ and all $\lambda\in\mathbb{C}$. We can act on the set of lines with $SL(2, \mathbb{Z})$ or $PSL(2, \mathbb{Z})$, and algebraically this works in just the same way as over $\mathbb{R}$ or $\mathbb{Q}$ or $\mathbb{Z}$. As before, we can characterise the lines by formal ratios of complex numbers, which means either an actual ratio—a single complex number—or $\frac{1}{0}$, the point at infinity.

Something interesting shows up when we calculate what this action actually does to a complex number acting as a ratio, i.e. when we act on $z$ with $\left(\array{a&b\\c&d}\right)$ to get $\frac{a z+b}{c z+d}$. It turns out that although the real part of the result is a bit complicated, the imaginary part of the result is just $\frac{Im(z)}{\vert c z+d\vert^2}$. That is, we divide the imaginary part of $z$ by a positive real number. And this in turn means that the upper half of the complex plane is sent to itself (as are the lower half of the complex plane, and the real line). By “plane” I now mean the plane given by a single complex number (which is a ratio)!

So we can consider the action of $PSL(2, \mathbb{Z})$ on the complex upper half-plane on its own.

Conformal and metric structures

Now we’ll take a more elementary geometric turn. A complex structure on a surface—I mean something that makes it locally look like pieces of complex plane—an example being if it’s an actual region of the complex plane—implies a conformal structure. That is, a structure that defines angles between any intersecting curves at any point. We get this, basically, because multiplication of complex numbers involves rotation—in particular, multiplication by $i$ gives a rotation by $90^\circ$. Moreover, a large class of nice functions on complex surfaces preserves the conformal structure, except possibly at some isolated points, which is why we care about it. In fact, on a surface with complex structure (a Riemann surface) the conformal structure tells us everything there is to know about its behaviour qua Riemann surface. A complex structure and a conformal structure are equivalent.

Now a conformal structure gives you somewhat less than a Riemannian metric; a Riemannian metric tells you not only angles but also distances. Of course, given a Riemannian metric, you get a conformal structure for free, by throwing away the distances and keeping the angles. Somewhat surprisingly, there’s a theorem that enables us to go in the other direction in a particularly nice way: given a Riemann surface, there’s a Riemannian metric on the surface, of constant curvature, which is geodesically complete, and which implies the conformal structure (i.e. agrees with it about all angles).
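This is easy to confirm numerically. A small Python check (the matrix entries and the test point are arbitrary choices of mine):

```python
# Confirm Im((a*z + b)/(c*z + d)) = Im(z) / |c*z + d|^2
# for a determinant-1 integer matrix.
a, b, c, d = 2, 1, 1, 1                   # det = 2*1 - 1*1 = 1
z = complex(0.3, 0.7)                     # a point in the upper half plane
w = (a*z + b) / (c*z + d)
print(w.imag, z.imag / abs(c*z + d)**2)   # the two values agree
```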
And, up to a scaling factor, this metric is the unique one with these properties, on the given Riemann surface!

(Geodesically complete means that, if you travel along any geodesic in either direction from any starting point, at a constant distance per unit time, you can keep going forever. You’ll either go round and round a closed loop—as on a sphere—or head off toward infinity. So, for instance, the Euclidean plane (with its usual metric) is geodesically complete, because you can keep going forever in any direction from anywhere; but the Euclidean plane with a point removed is no longer geodesically complete, since if you travel along certain geodesics, you’ll run into the missing point after finite time, and be forced to stop.)

In particular, the upper half of the complex plane gives rise to a Riemannian metric of constant negative curvature which makes it isometric to the hyperbolic plane. The action of the group $PSL(2, \mathbb{Z})$ always preserves the conformal structure everywhere on the upper half plane, and since the hyperbolic metric is implied by the conformal structure, this group also acts as isometries of the hyperbolic plane.

Conformal structures are also sometimes nice for putting drawings of curved surfaces in books. For instance, one obviously can’t isometrically embed pieces of a sphere into the Euclidean plane, but one can do so conformally. If the sphere in question is the earth, this is particularly helpful for navigators since it preserves the relative bearing of different courses from a given location. And most of the maps you see (or should I say, the maps whose images you see) in atlases of the world are conformal. (Straight rhumb lines are better still.)

For navigators on hyperbolic seas, conformally accurate pictures are nice too. And such is the Poincaré Disc that I used last time to show the tiling of the hyperbolic plane by triangles. The whole hyperbolic plane is conformally shrunk down to an open disc, and geodesics are either diameters, or arcs of circles that intersect the boundary of the disc at right angles. I like this picture because it’s bounded, so you can see everything at once. If you prefer to see the same tilings of the upper half plane in its more conventional representation as … well … as the upper half of a plane, then they’re all over the internet. Here’s Wikipedia’s example:", null, "Tiled Upper Half Plane

All the figures are triangles although, as you can see, some of them have one vertex off at $\infty$, off the top of the diagram.

Now, what about when we take quotients of the hyperbolic plane by the action of $PSL(2, \mathbb{Z})$ or one of its large and interesting subgroups? What we tend to get is a surface from which protrude several long, tapering spines that stretch off to infinity without ever quite reaching a tip. So we have a noncompact surface, even if it only takes a finite number of our triangular tiles to cover it.

(The triangles are also long and tapering, and have their apices missing. This is true of every triangle when tiling the whole hyperbolic plane, but then there are an infinite number of them, so perhaps the non-compactness doesn’t seem so bothersome.)

However, there is a remedy for this.

First, throw away the Riemannian metric, but keep the conformal structure.

Now, each long, tapering spine is conformally equivalent to a disc with a point removed from its interior.
So what we’ll do is mentally shrink the spines down to punctured discs, and then add an extra point to each disc to fill in the puncture. Then we paint over the new points with some topology and conformal structure, blending tastefully in with the existing topology and conformal structure, so it looks like new. The extra points are called cusps. And they are the apices of the triangles, and are therefore where we stick our fractions-mod-$N$ as labels.

We now have a compact surface—indeed, a compact Riemann surface. So we can imply a whole new constant-curvature, geodesically complete Riemannian metric. Adding the cusps will have changed what it takes to make the surface geodesically complete, so the new metric can be very different from the old one, and indeed need not even have negative curvature any more. Which is how we can end up with the tiled spheres that were supposed to appear at the end of the last article, as the final stage of the preceding polyhedra.

So, having explained myself, I hope, a bit more carefully than last time, I hereby present the missing spherical polyhedra. These are just the nice smooth constant-curvature versions of the dual polyhedra I showed last time.

Tetrahedron:

[Interactive figure: labels $\frac{1}{1}$, $\frac{2}{1}$, $\frac{1}{0}$, $\frac{0}{1}$]
Spherical tetrahedron: N = 3

And now for the cube:

[Interactive figure: labels $\frac{2}{1}$, $\frac{3}{1}$, $\frac{1}{2}$, $\frac{1}{0}$, $\frac{1}{1}$, $\frac{0}{1}$]
Spherical cube: N = 4

And finally we have the dodecahedron:

[Interactive figure: labels $\frac{1}{0}$, $\frac{2}{1}$, $\frac{3}{1}$, $\frac{4}{1}$, $\frac{3}{2}$, $\frac{0}{2}$, $\frac{2}{2}$, $\frac{2}{0}$, $\frac{1}{1}$, $\frac{0}{1}$, $\frac{1}{2}$, $\frac{4}{2}$]
Spherical dodecahedron: N = 5

Another advantage of the spherical representation is that it enables us to properly display the tiling for $N=2$:

[Interactive figure: labels $\frac{1}{1}$, $\frac{0}{1}$, $\frac{1}{0}$]
Bigons: N = 2

Let’s interpret this slightly boggolating image. There are just three reduced fractions mod $2$, viz. $\frac{1}{0}$, $\frac{0}{1}$ and $\frac{1}{1}$. These form a mediant triplet, so it looks as though the tiling mod $2$ consists of a single triangle. However, it is better to think of this as two triangles back-to-back. This is a bit silly as a polyhedron, but on the sphere, each “face” forms a different hemisphere, with the three “edges” forming three segments of the equator. This isn’t the picture shown above.
Rather, dually, we get a segmentation of the sphere into three bigons, each labelled by one of the fractions—like some kind of mutant orange—and that’s what we see above.

Note that rotating around $\frac{1}{0}$ still adds $1$, and rotating around the edge between $\frac{1}{0}$ and $\frac{0}{1}$ still sends $q\rightarrow\frac{-1}{q}$, even though these operations are almost confusingly simple at this point.

We could even go to $N=1$, but the resulting single triangle, even drawn on a sphere, is rather degenerate and so I shan’t bother (it isn’t a simple monohedron, though such things exist on the sphere—it’s a triangle folded back on itself to look like a simple monohedron).

I hope that makes things a bit clearer.

Next time, we’ll look at $N>5$.

Posted at October 29, 2010 2:11 PM UTC

### Re: Pictures of Modular Curves (II)

Pictures do help! Thanks

Posted by: jim stasheff on October 31, 2010 2:36 PM

### Martin Farquhar Tupper; Re: Pictures of Modular Curves (II)

Let bigons be bigons.

[pun on:
“Let byegones be byegones,” — they foolishly say,
And bid me be wise and forget them;
But old recollections are active to-day,
And I can do nought but regret them….”
– Martin Farquhar Tupper [17 July 1810 – November 1880], English writer, and poet, and author of Proverbial Philosophy.]

Posted by: Jonathan Vos Post on October 31, 2010 5:48 PM

### Re: Martin Farquhar Tupper; Re: Pictures of Modular Curves (II)

I wonder how many of us have used that pun
though knowing only the first line.

Posted by: jim stasheff on November 1, 2010 1:31 PM

### Re: Pictures of Modular Curves (II)

Tim wrote:

Now, each long, tapering spine is conformally equivalent to a disc with a point removed from its interior. So what we’ll do is mentally shrink the spines down to punctured discs, and then add an extra point to each disc to fill in the puncture. Then we paint over the new points with some topology and conformal structure, extending the existing topology and conformal structure, so it looks like new. The extra points are called cusps. And they are the apices of the triangles, and are therefore where we stick our fractions-mod-N as labels.

We now have a compact surface—indeed, a compact Riemann surface. So we can imply a whole new constant-curvature, geodesically complete Riemannian metric. Adding the cusps will have changed what it takes to make the surface geodesically complete, so the new metric can be very different from the old one, and indeed need not even have negative curvature any more.

I was very confused at first when someone told me about the 24 cusps on Klein’s quartic curve, because it seemed so compact and un-cuspy. Somehow I eventually sorted it out in my mind. But if someone had told me what you just said, it would have been much easier!

But I guess we’ll be seeing Klein’s quartic curve soon enough…

Posted by: John Baez on November 2, 2010 1:53 PM

### Re: Pictures of Modular Curves (II)

It took me a long time to understand it too, and I’m still a bit vague about some of the algebraic-geometric aspects of this. The information one needs never seems to be gathered nicely together in one place.
The first time I saw the theorem about the unique metric on a Riemann surface, I’m pretty sure it was missing the geodesic completeness condition—which was really confusing.

I wonder how much time mathematicians spend massaging others’ results into forms that they can understand.

Posted by: Tim Silverman on November 2, 2010 3:11 PM

### Re: Pictures of Modular Curves (II)

I wonder how much time mathematicians spend massaging others’ results into forms that they can understand.

A lot.

Posted by: Todd Trimble on November 2, 2010 4:12 PM

### Greg Egan, Polymath, Wikiscience; Re: Pictures of Modular Curves (II)

I admire the web pages at Greg Egan’s web site, where he has programmed his own beautiful user-friendly Java apps to show 4-D geometry, Quantum Mechanics, and the like. In this century, lovely crafted prose typeset in TeX is becoming less important, and open source software by the research team, as an appendix to the arXiv or dead tree publication is becoming more important. This continues, in the limit as we accelerate into “Wikiscience” and “Polymath.”

Posted by: Jonathan Vos Post on November 2, 2010 5:53 PM

### Re: Greg Egan, Polymath, Wikiscience; Re: Pictures of Modular Curves (II)

While it is refreshing not to be reminded, for a change, of the number of publications you’ve had as poet, scientist, science-fiction author, pal of Feynman, contributor to the OEIS, etc., etc., I’m still a little baffled by this:

In this century, lovely crafted prose typeset in TeX is becoming less important, and open source software by the research team, as an appendix to the arXiv or dead tree publication is becoming more important.

(Lovely crafted) prose in TeX is becoming less important? And, is this supposed to be cause for celebration?

Posted by: Todd Trimble on November 2, 2010 7:20 PM

### Re: Greg Egan, Polymath, Wikiscience; Re: Pictures of Modular Curves (II)

Todd wrote:

(Lovely crafted) prose in TeX is becoming less important?

I don’t think so. After all, Tim’s blog entry, the one we’re supposedly talking about here — it’s full of lovely crafted prose in TeX!

What’s new is the blending of lovely crafted prose in TeX with the interactive nature of the Web, and its ability to deliver pictures, movies, etc.

Posted by: John Baez on November 3, 2010 12:38 AM

### Agreement reached; Re: Greg Egan, Polymath, Wikiscience; Re: Pictures of Modular Curves (II)

Thank you, John Baez. I happily accept your friendly amendment. I did indeed mean a reduction in “merely lovely crafted prose in TeX” because of an increase, taking nothing away from that, in interactivity and virtuality and collaborationware. Both text-based and web-based interactive segments grow over time. I like Greg Egan in ASCII, but like him more in ASCII and interactive color mathematical physics simulations. I like your papers in arXiv and AMS journals, but I like you even more in those plus n-Category Cafe plus Azimuth.

Posted by: Jonathan Vos Post on November 3, 2010 5:13 AM

### Re: Pictures of Modular Curves (II)

Amen! though occasionally the disciples obfuscate the master, e.g.
Dirac as master.

Posted by: jim stasheff on November 3, 2010 1:09 PM

### Re: Pictures of Modular Curves (II)

Hi all,

Nice to know you. Very interesting blog.
Very interesting also these Klein quartic curves.
Indeed dear Tim, the sphere helps always.

PS: there exists a specific fractal of spheres, or of the sphere. This universal series can help everywhere. If the volumes are correlated….

Regards

Steve

Posted by: Steve Dufourny on November 2, 2010 2:53 PM

### Re: Pictures of Modular Curves (II)

One thing I’d like to understand is the relationship between these modular curves and equations of the type:
$X^aY^bZ^c + Z^aX^bY^c + Y^aZ^bX^c = 0$
The most famous example is Klein’s quartic equation:
$X^3Y + Z^3X + Y^3Z = 0$
Also Fermat’s quartic equation:
$X^4 + Z^4 + Y^4 = 0$
The simplest example is the “triangle equation”:
$X + Y + Z = 0$
In the complex plane, the X, Y and Z solutions of this correspond just to a triangle, with X, Y and Z the translations in the complex plane that add to zero.
This may seem a bit boring, since the solution is simply $Z = -(X + Y)$.
But it becomes somewhat more interesting when we “don’t care about”, or “mod out”, an arbitrary factor. This is like considering triangles of unit circumference, aligned to the x-axis. (Remember we are modding out by a *complex* number, so both size and rotation are standardized.) In this case, the equation gets more interesting: it contains all the structure of trigonometry, e.g. what are the angles (phases) of a triangle, given its lengths, and the like.
We could say the equation has a symmetry group of order 6, consisting of the permutations of {X,Y,Z}.
Now Klein’s quartic can be mapped on the triangle equation by setting:
$X^3Y = u$, $Y^3Z = v$, $Z^3X = w$.
The inverse of this is interesting:
$X^{28} = w^3u^9v^{-3}$, $Y^{28} = u^3v^9w^{-3}$, $Z^{28} = v^3w^9u^{-3}$
The 28th powers suggest a 28-fold symmetry, which combined with the aforementioned 6-fold symmetry gives the 168-fold symmetry of the Klein quartic?
As I said, I don’t understand this as well as I’d like. I feel there is some more cool stuff out there…

Gerard

Posted by: Gerard Westendorp on November 2, 2010 11:17 PM
Read the post Pictures of Modular Curves (III)
Weblog: The n-Category Café
Excerpt: Tilings of modular curves of level greater than 5, with non-positive curvature.
Tracked: November 14, 2010 10:57 PM" ]
[ null, "https://golem.ph.utexas.edu/~distler/blog/images/MathML.png", null, "http://upload.wikimedia.org/wikipedia/commons/a/ad/ModularGroup-FundamentalDomain-01.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93790656,"math_prob":0.99590325,"size":16946,"snap":"2021-43-2021-49","text_gpt3_token_len":3660,"char_repetition_ratio":0.13522607,"word_repetition_ratio":0.0073260074,"special_character_ratio":0.20665644,"punctuation_ratio":0.10804393,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9961526,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-18T05:18:18Z\",\"WARC-Record-ID\":\"<urn:uuid:c690091d-23f6-4b48-a371-2cab00b1dbe7>\",\"Content-Length\":\"136985\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f062b80e-6384-4005-a6c7-5064b2324d36>\",\"WARC-Concurrent-To\":\"<urn:uuid:2d84862a-73b4-4277-a1cf-d5e2ec714ba6>\",\"WARC-IP-Address\":\"128.83.16.204\",\"WARC-Target-URI\":\"https://golem.ph.utexas.edu/category/2010/10/pictures_of_modular_curves_ii.html\",\"WARC-Payload-Digest\":\"sha1:RFMVK4U5QRUA4TKXXYEZD7P3ZAI4RC73\",\"WARC-Block-Digest\":\"sha1:A4MWXBOHCV64AWECYJLSODNY3W3GIW5K\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585196.73_warc_CC-MAIN-20211018031901-20211018061901-00577.warc.gz\"}"}
https://homework.cpm.org/category/CCI_CT/textbook/int1/chapter/4/lesson/4.2.1/problem/4-62
[ "", null, "", null, "### Home > INT1 > Chapter 4 > Lesson 4.2.1 > Problem4-62\n\n4-62.\n\nRobbie’s class collected the following view tube data in problem 4-1.\n\n Distance from wall (in) Width of field of view (in) $144$ $20.7$ $132$ $19.6$ $120$ $17.3$ $96$ $14.8$ $84$ $13.1$ $72$ $11.4$ $60$ $9.3$ checksum $\\it 708$ checksum $\\it 106.2$\n1. Use your calculator to make a scatterplot and graph the least squares regression line (LSRL). Sketch the graph and LSRL on your paper. Remember to put a scale on the $x$axis and $y$axis of your sketch. Write the equation of the LSRL rounded to four decimal places.\n\nSee scatterplot below. $y = 1.6568 + 0.1336x$", null, "2. Using your calculator, determine the residuals. Make a table with the distance from wall (inches) as the first column and residual (inches) in the second column. What is the sum of the squares of the residuals?\n\n• See table below.\nUse your calculator to find the sum of the squares of the residuals.\n\n Distance from wall (in) Residual (in) 144 -0.198 132 0.305 120 -0.391 96 0.316 84 0.219 72 0.123 60 -0.374" ]
[ null, "https://homework.cpm.org/dist/7d633b3a30200de4995665c02bdda1b8.png", null, "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAfQAAABDCAYAAABqbvfzAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAyRpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuMC1jMDYxIDY0LjE0MDk0OSwgMjAxMC8xMi8wNy0xMDo1NzowMSAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bWxuczp4bXBNTT0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL21tLyIgeG1sbnM6c3RSZWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJjZVJlZiMiIHhtcDpDcmVhdG9yVG9vbD0iQWRvYmUgUGhvdG9zaG9wIENTNS4xIE1hY2ludG9zaCIgeG1wTU06SW5zdGFuY2VJRD0ieG1wLmlpZDo5QzA0RUVFMzVFNDExMUU1QkFCNEYxREYyQTk4OEM5NCIgeG1wTU06RG9jdW1lbnRJRD0ieG1wLmRpZDo5QzA0RUVFNDVFNDExMUU1QkFCNEYxREYyQTk4OEM5NCI+IDx4bXBNTTpEZXJpdmVkRnJvbSBzdFJlZjppbnN0YW5jZUlEPSJ4bXAuaWlkOjlDMDRFRUUxNUU0MTExRTVCQUI0RjFERjJBOTg4Qzk0IiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOjlDMDRFRUUyNUU0MTExRTVCQUI0RjFERjJBOTg4Qzk0Ii8+IDwvcmRmOkRlc2NyaXB0aW9uPiA8L3JkZjpSREY+IDwveDp4bXBtZXRhPiA8P3hwYWNrZXQgZW5kPSJyIj8+RSTQtAAAG9JJREFUeNrsXQmYXEW1Pj09PVtmJjsBDGFXiCKKIBJ2REEQQdaARBBiFFRAnrIoyhqCgLwnEfEpPMAgggsGJG7w2MMuiuwkJDGQINmTycxklu62/r5/0ZWaur3M9GQCc/7vO1/fvrfuvXXr1q3/nFOnqhLZbFYUCoVCoVC8u1GlRaBQKBQKhRK6QqFQKBQKJXSFQqFQKBRK6AqFQqFQKJTQFQqFQqFQQlcoFAqFQqGErlAoFAqFonKoLveE2jM+uTHk+zNGjjZyj5EXqJhgQH3KyClGOo1MNbK2vzOSTWakbmWTjHp+69y2QqFQKBQW85+avvES+kaCKUaOMHK8kcWS9zQkjYzj9l1Gnuj3nCSykuxIaa1VKBQKxbvLQt9I0Gjk30YehtPA2d9tZJGRPYxs0++EnjCaRFe1NC4emSN2hUKhUCiU0MtDjZE3jRwXODaRhP5hI7f1ZyayVRmpWdMoqbb63LZCoVAoFAOFd2tQHHzcWxppChwbxt89+zsTWWOV161okkQ6oTVJoVAoFErovQA8C6OMjA0csy74nSXfn155GA6vXlcj9cuHqnWuUCgUCiX0XqDByOiIUnNu9ThCh/W+T79Z54bEa1c1SnVbjdnW/nOFQqFQKKGXi/cbeR+3Px44PtrZPrw/M1K/vDlSKxQKhUKhUEIvG/tK1IcO7CE9KXVn/v7ZyAFGNqm4dY6hautqpGZNg7rbFQqFQqGE3sv8gtDXOeTt9pMPN/Ixh9CNCS2HVJzQq7JSu3qIJDtTaqErFAqFQgm9FwBZY/z520ZWS9Sfvrdz/AjHeke6RyWaOa6iwJBzuNsTyuYKhUKhUELvFdAn/rREQ9NeN/KkkaN4bAQJ/x7+hy/8RhL+DpVk86p0taRadOy5QqFQKJTQe4NtSNog8aESzdf+RyOfolX+ZSMPSDRbHIBhbXcaaTcyuVKZQP95am2dVHelctsKhUKhUAxGQoeP+hoj1xu5yciFZZwLUv6NRIuwWMKeLdGscRdLFN3+O8lHuY800mbkdiOnSn7CmT4Sukj9imZJZHShOoVCoVAMXkLH/bBc2ywj5xg5wcjnSjgP4803owU+kvsQ8PaskYeMnGbkCu6vd44D15LMT6yIRmLUiZq19WqdKxQKhWJQE/q2Eo0hR7/3GCMLJFoGddciefymkR/zfyN/U7TO20niNhjOTizTwN9/GPmrkfMcsu+ddV6VkVR7nVS31mn/uUKhUCgGNaGDyP9l5F6J3OMdRr5n5FwjH4w55wwjrxj5G/+787dfQwsd/eZf5b46z1IHLqUicVLfzHOR6vYaqepOas1RKBQKxaAldIwXR7/3XIn6wVskcp+D4NEHfomRXbxzDpJorPkPnX2WsDHm/FEeQ/Db13j9as9CF6bDuPSLJLygS4xFns1Z4lYy1encdK+JjA5XUygUCsXgJfQvGblDIrc7VkI71sh2Rg418gKtdFjrdknUCUYmSdTX3u1c533O9uP8vZrKAYLfugKEDpwvkZv/nFIzjGj2mtUNuRnhILWrhkhVV1LXPlcoFArFRocNtR76YUbeMrKElvqJJGlMDvNFWta3GDmGFjf2wa89xchSI0NoqeM6n3KuO4q//5Ro7fPvS34WOZ/Q0ZeO6PoLmPblYpke8crmhtRr1198pSohmaT2nysUCoVi8BH6hySa8AWBaacbSUvUdw7vAJjyK0a+bmSakVVGWiVykSPgDUPVOmlZg/zv4q+d3rXOuQ/c9kdKNFY9ROjAd5nmBiN7SX4IXBCIZI/c7vlkiYS62xUKxYbH/KemayEoCqI/Xe4YKnYKyXO8kZslmhBmUyM/kshNjpXTrpNoARUExX2e5yVI7BCYwwh8m0kLf0vnHm7g22u00LMFCH0l8zSBaRUKhUKhUAvdA4aLoX97FxL19iTVZ0nMcHnDHf5Vh4hB1KOYbpGRtRJN07o/rfKmInm8yMhEEjWC69p4D1x/SMw5mF3uKp77dyN3azVQKBQKhRJ6HqMlH8X+iJHlsn4wW7kAIY+k9b41lYQPkPDx20zLf3zM+bDkEdmO/vUXjbxqZB6tfATGITjvVxK53v+uVUGhUCgUg4rQs15AWCL9jtf+TUrkMM86vyGgfzr3E9sn3WrObzWJFprtZ5z9uOHmRnYzcqCR/WJIHX3wB1GEOYGSgWC4xySKuMc1fm9kHyMLtTooFAqFYtAQet2yJvJxQjLVGelsbn9nnDb25Qg+QzLPRPSbSaZzc59Ho72iKPFkR7VUmbSZmgJG
fO787DtR5bx+xlEefk/ixopqCKA7TOJd7Ql6EPaW/JKrrUyPceyH0HpXKBQKheK9T+gjX9jCsZWz0l3XJV2N7dLZtC43RrtueWN+nXCQfqpb2ke1SMfwVknXduUixhsXDZfGN0fkyD+TSsdb6WZ/d32ndAxtM+SfkM7GDllnrgXNAJO7MPocUfD/TxkvmcRZ5nqnSmkBf5b8ETX/oERD2u7UaqFQKBSK9zyh+y736vaUVLfVSMPbCE5ff4hXDu01UruqIWfNg5xxvHZ1Q2TVGx5PdhbOAqZaradXAOfAI9A+eo20jVljlIeGnMcAln7HsFbpauh8KV3XNaW7oeN2c+1rEunEeEPuXQVvkIAHAHnOol/+DpN+lsnYmWb/v8p1Xkjk1u/QaqVQKBSKjZ7QexB8jsCzBQZ0g+SjrVRrtG4KplB1jPBid3jnfCA3c1tLvQxZNCJH9u+wqSF2XCpd0w3Sv79t9JqPdA5vHZdOdVfB2x6arjVrlIzkulR2yOLmNnMcD5HoGtIxdN3IlrebFozOXb+HghKPL0i0UMxtWq0UCoVC8a4jdAJ907tLNIkMItPB2JgZDtHjz5DofHLEvdFv3SSFJ3gBE6+QaJz569ZDUN2Rst6CKl5naBb6QXcyR+5GMplU98PrRrQuXjt2ec6yr0onc3ey+WhcOFIaI8XgIJuPbFUmaxSOj1V1VafM9bHe+vz1lICsYf2wEgL3va7aolAoFIp3JaFjKVPMwY7JWjaPSYOo8usoLuCixpKoW5R4Lyzmgrnb/8fIn5z1yJO8TjThDAztZHQskU7OHvLvofvVL2/sXrPlMml934qc6z/VWifD5mwqtSuHIP0hhsBnradBGOKnsnCyT+gFACVG54RVKBQKxYCgLzPFYeKY+yUKJNu8QLodSbhYLrXZNXYlmgimVMCC/rREE8P8oKTrJLJ7GgI/VjJVMmzupjLipbHSvHCUjP77VjkyN6RdY6z1qYHz7FaXVhGFQqFQvJcJHdO3wqrdrYxzMIf6LVIZtzQmhil16taLDUE3od8ervjm18fkoutpgcOz8BGtBgqFQqEYrIR+JS30cnGERCupVQJYaAV99sVmo8MSrWfkTHlD4jkijyzwkfQuKBQKhUIxKAkds7JNjDn2N4lWTcPCK/MKWNcIT0/HHEcA3F8kWp0NU7c+GZMO1zi1xDz/l0TLtrr4tqy/trpCoVAoFO9a9CYoDv3YqcB+zNp2vOTHYWNd8wckmnvdBf7vIdHCLCE8Z+RgT+k4wciNJHEXmLK1toByYDGc1vgU/se88F/T169QKBSKwWyhfzSwL03L3J1U5d8S9XPPpcyhzCepJ0pUMtDZfatEAXg+xkq03Gop0eUnG9mV25dIFKGvUCgUCsWgtdBDEe1wky8I7P+NkT95+0DkiB6vr0D+s5JfBqYY4FU4z8i1Ro7ZCN8FFIzNJD+Gvz2QppZeiqxXnp0SnqEuxXJexzSFUMf0uG9cXEKC10tKgWV3nGtUM72ftkviZ9SrYV46me+4Z+qKKSMAK/8hRgLL8S6SwvMcWDQzvascJkuopwm+szYqyA2SH3kRum89v6EE33NrjKLdwLy0Ffh2G4qUg32uVon3YtWxXrWXUEd8FCqftTH765n3cuqEC7zXUczvGyW8W5TzFrwvFmda1k/5wn0wEqelQJ7qWX/XlHC9Jr6z9hLrr0LRKws9tPhJS4FKutaTFjbUcSQcIhO48vcP7F9sZHWJhA58zshvpW/D9SoNNFAIMkRXQ27yHInWkL+ADa2LqTyGCXv+6ciz9GLs7aWfxLT3s4GIAxq8x5n2oALpQCB38X7PeXlw5bNM/2mmfdY59jz/38HjPr7BfFwVk4ejeXxG4NhHeN2XJJr/AOWJlfWOK/IO7D0v8fbv4z0Xnvlv3vNAfsf07+exh6ic+cR5Ae9jPVbYvijwbhDvMZv32jMmz0fy/FsK1P+TmZ9rCjz7VF7nm72ou7vElAfK6RGWq0/4tzL9PwJ1Au/04zH3QnDrLyRaCvkVvtvZRd7tRL7/13gOzv2l9OwGRPndXCBfuO8nipSFfbffKpBmBtNMLXKtk5gOsUTDlKYU/WmhZ2MIvbNCefqQ00BmaG3tE9Nozab2HCLoNY5G7Fp3owNp0T0wpgzFoFLYjB6Mnfn/VeYRDc6lEi0aM9GxEDZhwybcZxeoBfHbYMVT2ABZLX8bCqam/WlMPr4i+eF7Q4rkGaMbtuS76QqUWcJpxOud/HY69cfm91iS6IWedY38xgUsDuXxVd7+/VlvhrNsXmR5oSG+nedMi7EyJ/P4ZCoSqx2PyFjHE5Ry6ppb31c639P2tIirPCX4VxKtBgjMo/W1PZ/9Uzy2wrnODvRWYA6HCQEr3JbDigIWHIJGtyWxX0GPgA+U89Ysq3JRRyXGWrJZx1BA3vYyciiVsLWO8rgd03YG6vBRVODvcu6D7+MevosMFTYowntQcPw7Xt6+4xDnElrmyOsJLG8onU85dXIrJ1+2TXHzdQzzNTNG0Z1MRWwyvYAhq34sy+Ub/BbfiCnT8/jemjYy40PxHrTQQ+iqoFtoNK2PI9kQ7BtDtLDkf+6QiA806D8q4X7PsdFMDED5X83GaIFEa7uPpxxPUsAwv9O9cgZ+xgZ/R/4iNuA2ktN0yc++57pZz2BjEfIQuKMFisUjWCI7xcmDK+PZ+LrXQgO8k5Nmd8fC/j6f3ffQxE3qkw4QKkj8Jv7+kff6MJXDHzLNZVSQfNgpi4VKneuheJjPY8t5MvfPoQJkn/dwrx52eN/Dt0jYq1incc4H+X6XkbAv9JTmDsfrcEGJ5eBiJz4b0OwoE6FvN84zVgz2/UKp2I1ltAOf78tU9A/y6rDN77leHd6dym09CXGYo1TdSDKczfLYieV3GdOc79WhfRwyv5RpbZ14gG3M9Z4HzObrvJh81Xn58pXJcY6XZq8i3w6I+rSYNJ93PAgdou52xQAQ+kBgKt1icV6GIbRKFhS5DhqDtwcg/2igPsftMyVa/jXDjxgW5ZU8dnbAbbmazzWPv3B7TqIS00wLxMeOtH58wHrbtBf5X+TkwZW5bMh90niNx+fTMsJ8BLMc5aAv+CS9Bkv4PHNYlktIpo+wrp8ZOHcij83l/0nOsTbut+X8hkN+9nlej7G0xCGkE7l9Cb0IHSyTu0ggQqKPc69+m5ZoOTiGHoV5zO+kfqzLackHvM7n9g2S78I4WnpOKLXUq8OoEyfxnYEcd2G63aiItbKePM93i/7w7xm5m+lOdK5tn/XPVBiX8ZyX6alq4/UPCTwL7v8vL1+TuB+KcqhLwN77Nf6eUEKZTQ54C1EPz1JaUgw0oW/oRUlg2V5cJE2t89HH4T5q300DUPZoHBpp3TweOD6dpPftwHtKxlhLL3M7zl39TU8Bgqvwq45VWA7K6a6B5VoT2P9bx5rsSx3awfG2LA0cn0Kiv9Xb30yLKMuyWUhLb8uY+6Sc56ktMW9Qlmx/+gOB4w+R3DeR9fvdq0g8C3jfH5dxT6Q71lEGXqVC8MF+qstx5fG04wWqLaH+LCVxAkMdi1e
oWL0WOOde/m7r7NveO+biLXrAzohRxEL5Wu7UK1/p2oyKwTpes4WK+ogSPJH+PBoHSnwMgULRL4Qeck03SnhseiXRzgbxMDZSxQjIRr+jEX8wcBxW0jkFnqm/Yee1XynhaG7sn0Fr3Y+E7o7xSNh+8IXesQdo2XzMs0pgOW1HC/8fZea/EjETbzl5b+jDdWwjG+dpQUAUgsf+GmhA4SlBlwC6CeBih2v1iAq+5yaSWafk+9r9et1CIqnzvrMsLbZVtCi/U+I94fL9AOsBvAD3U2Hqr9EdWQlH2u/rELVfx0PR+weQjLO08oHhzjUk5juxdci2aU1F6sPdVJifCRwL5etAyceCvOwd+yy/ZVjyCGJDtwCi8A8t0Hb+kt/w1x3FxSrcwEyJjw1SKCpiZbkNUKjRapJ8UE9fAGviSoeQYXku4wf+ai8UljQVgNmelfgTiSJJB7rsu6T8/stNaNW6VuC32OgsCxAXgv4w8c+1THc3G3jr3kMU9GllNN7AFWwwk16D9b2YhlJilCrrceiLhZ4sUDcLwbpGf+80pCdy/3SpzOp5SckPLQzFBXQ7+xMBJe0JiVzXeEfnUvF4usg9j3eIK81fBGIhIvxyqVwAq1uXMT/FWueZP8P8WgLzyxJW7OZMm6FX5EQqP4gHedF7t+uKKJZJpwxD9WFXfjdZJ13I6j/Cy9dYenf8fPllfadThw5mHZoRk2d8n2OoKEyi9wWWOUZ9wN3/fxLFZWj/uaLfCT2k9Q7nR+AT+v5s4NNO5QSp3sCPI4TFrNCVBAgGQTBnOhbs1AEue7dhKddDcDLFByL7vyw9o5mHsnFBfy2Gtu1GBeyjtDhmUukpB3EL8/y0DEJ3yyJbobIsFWioD2KjbUdVII5hCZ9tl148R2/ec7H3D+/Xj0jGu7Px372AEjhC8gFwv+bvoxL1Ce9A6/3+CtdlfP+PxRybwW/Px3HSc8hZG7/9s5xyK/ZuE166uHNQhhO8c690lA6LYwKeDHjIEIB7tqeYjGd5tku+L38W0+9PBXtujBJyNQkdVvr/UuGCAYKA1/kyMF5DxSAk9BcC+6C9fs2z8rDvssBHBFxVwPqp7qdnRV6OYkOOhV2WD3DZ9+WDfZtKSZKNACwjuPxulsi1HipTuG2voyJzjuOt+G82pMky84358Z+UvFswUaB+FPKgDFRZHk6yhJvddjesIrmfxkb9mQrlLdGH57CW4mkkzY+TBBbFXOMztEThfXrEsW7RdQOX/cR+IPRuWq7dfKcZEtmdjlLhA11hiB9AVx2i4D9EMjy1l+82UeQcxGu8QuPCkm1XgXwlWc7IF0ZOTAmktYGHs0jCwJtMj2NHSj641QW6l+5gvUM3GQJz0RXWQkLfSqlJsaEI/a8kR/+jQXAV+o7gEkRf4BdjyBxE9KCEg6T6E8v4cR0vPYOjBgJtzsddI4XXhk94FsgvJN//Xw5gZaCf7mj+XyDR+OjeAIQxu49lYPu+OyTvUrWKRZzClw4oA+scS7FURcK6SuGh2JPfQkbyoyKg/F1c5L2Ugg5aZPUSjhOwM9+JxA/Vs+WNbo6LJBri9ouYdLYb4SXvuawCcBjLaWUF6/JKWqpryzgHwai3OSQICxf90RjG+ZyTrt3xMoUwxClnW286vPplFVeLmwsQ+h+db+JNtmeH0ZvldtHVOJb8K3z+JOuntcqhPP1Qes7SZ2daRJ5ukXyA73S2Ux9QalL0Br2xkBBA9ZeYY0fzY/lpDJkDP6FLKjUAz3ujQ2YDjVX8qEfHNFZoQOACnik9I2t7a9kulfUnl7mOjXBvrldXgTKw0elLnEbYTuoyJuacTZ3ycz0WwLiYc6ZQibya/3eSfDQxJtV5lMdhrf+A+xE1vW8FnnEFSQllHJo2eRRJqU16Dvfzgbw9zXNs95Gr6CHP+3H7C95zXeeU38H94G0q1zho8Ej0CSo2/ph7G/W+eUybMc6rD1lHWdk65t7betcOKQhW6XhM8rP8uXBHDZxHb8iD/D2f+6Gc7FqgDOyshlYpvVYpSbGhCd0O8elNANzj1EIH0ipevJGU/Rx6K+okP3TMfS/Q2g8gma8ONKC9xfW0gEAMN/XhOi1lpE1Lz0AsDEeyE7Xc5+x/mL8TAoQKIjuJ2+5qfU84SpAfXTyWFu2+TkNvXaVv0Br7jSP4/6pDin3FUsfiDAUens73PUcKj2e3jf43aFmGukg+T6JEEOTtged6vsBztffxOftSJ9P0PgBwU3/CMyDWkZxPCNSHL3h1QBzP0XHSc6w3vAC7sx17rEi+YO3b2QWP8IwU6+GZS0+DW9b4P9/zBMV5by6nV+g6Cfe3KxQlo7f91a+wgt9awCoKWfbHSt9dmO8VrGUjdj01fFikGGJUS9I6hA3Kd6Uy0dYWi9lgurOR9QYns4FLBOoUvAovelb1+ZJ3PW5FTwkaW7g1f+aR80zWL/R7wmWJvkaMrf86FYGF9LZYPMWG9Bg2pldTYRlH5RPW3WtsNF1X6eUSng4XZT+Lv2OkbxMPZfme9yPBQIGzUd/HOXkBcZQy2uFJWuoXBAh1IrevlfA0txNIdgfwHSxwjkHhCc15kKLy9Eg/fw/38N1/gs/2WYcwf05FBvVkRyp9GP+Ncd8Y5vaW5GeNBG6gVwZu9XtZHkizN89JUZl9roR8WSt9Ar/FQ6lkH+5Y578LnIeI/RlUsnBea8z1URf+UKaCrFBUlNCFHzg+kMvYKMW5YGHJ3yzR0JvVXgPUHEhf7rKmdpUjH0PLuEbcilH93c8PMkFUMmaz+hLFAtbk2bJ+P7V1B5Y6ZrsupkxDQ4CaS3hmt6xPLZBuCQndXmszkqePZ+ideMuziibz3EMCxPQyFZ63A+ckaeH5i6y8SOsObtmjqBRkJD9TnY+H+Qyb0AK8xiub5hiLtNqpey4xoovqFF7ncIcMrKcDBHaHsy/pvOOQJY5vDv26OzvvAwqDndp2ZsxzQcnBzHbbsq5d6NxnP8m7631MjyF06wIfVoa3z9az2oCVPo1K7aFU6OxznMO6jzI8V9aPTH+ZyqXr3XiLRHozy+hG716/ooLgoqlIvv7A+ngg68WmrE9xAYb30usxjnVyRoF7rIkp16GiY9EVG4jQhZYSgt8QbIbpRnciQWXo9kODfZ/0nOjEupum8eNIO/mZ1wt33Q9oSaWdRnCJlD4U6kESjjseGNd4dgO8g8tpBdg5vrtpOaCBn+OlvZ3l83AZStc0elSKWZFX0QouZLV08nqjC3gNkpJ3f2Jq3qmyflBQgiSGYw9IeEz0clpoIL6DmS8ohugT/rX07IKwjeJRJDpEem9BpegR75x2PkMhFze8J6eTIBd75DGNhNEZ4/24hPfw83gTlbOJJJkEy+D2wPtZRpJHw7405tuBBXi8971cwW8t7n2jfqPvfU/nPFiIr0p+oZQQad8Xc715VC7WluF5g7W8jazvIreAgnUWyTLlKaCnsqxQJ7Zk+T7EfS0xyuIEltFeJMc3SMx/jsnXdgXydSYV03rWtWl8f3HBhVA4v0KPwhpHMYIy9XiRMprH72ZlActeoehpcWWz5Q
3/3WrX0wZ7kUmiKjjC62w25NdrtVIoFJXG/KemayEo+tVCH3x0noiN/XlaCg87UigUCoVi47HQFQqFQqFQbHzQgAuFQqFQKJTQFQqFQqFQKKErFAqFQqGoCP4jwADQNvw20jA5ogAAAABJRU5ErkJggg==", null, "https://s3-us-west-2.amazonaws.com/c3po-media-dev/files/057c2d50-25a0-11e9-ab6b-d7741872f579/cca 6-42a copy_original.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8428361,"math_prob":0.9997414,"size":880,"snap":"2021-31-2021-39","text_gpt3_token_len":241,"char_repetition_ratio":0.12785389,"word_repetition_ratio":0.039473683,"special_character_ratio":0.29204544,"punctuation_ratio":0.11290322,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999271,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-22T17:36:06Z\",\"WARC-Record-ID\":\"<urn:uuid:19f67adc-9da1-4a98-a3ce-34998c7f434e>\",\"Content-Length\":\"56291\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:43942436-ce9f-4f89-aea0-621c754da499>\",\"WARC-Concurrent-To\":\"<urn:uuid:adde837f-a898-4939-89d9-174541641eca>\",\"WARC-IP-Address\":\"172.67.70.60\",\"WARC-Target-URI\":\"https://homework.cpm.org/category/CCI_CT/textbook/int1/chapter/4/lesson/4.2.1/problem/4-62\",\"WARC-Payload-Digest\":\"sha1:M35P2QOGL6BQ3SXMTE6CCQZR3KTS6NZV\",\"WARC-Block-Digest\":\"sha1:XMXUJQIS3UL75I265KXYJNEX4NAZICOY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057371.69_warc_CC-MAIN-20210922163121-20210922193121-00113.warc.gz\"}"}
https://mathhelpboards.com/threads/solve-for-x.1964/
[ "# solve for x\n\n#### goosey00\n\n##### Member\nSolve for x:e^-0.38x=.3\nI got .4387 Is that correct\n\n#### SuperSonic4\n\n##### Well-known member\nMHB Math Helper\nSolve for x:e^-0.38x=.3\nI got .4387 Is that correct\nYou can check by evaluating $e^{-0.38*0.4387}$\nIf we use google calculator we end up with 0.8467 (4sf) so 0.4387 is not correct.\n\nWhat do you know about solving exponential equations and/or the natural logarithm?\n\n#### goosey00\n\n##### Member\nI just can't remember how to put it in my calculator again. How did you get the .4387 to times by it-[FONT=MathJax_Math-italic-Web]e[/FONT] [FONT=MathJax_Main-Web]−[/FONT][FONT=MathJax_Main-Web]0.38[/FONT][FONT=MathJax_Main-Web]∗[/FONT][FONT=MathJax_Main-Web]0.4387[/FONT]\n\n#### SuperSonic4\n\n##### Well-known member\nMHB Math Helper\nI just can't remember how to put it in my calculator again. How did you get the .4387 to times by it-[FONT=MathJax_Math-italic-Web]e[/FONT] [FONT=MathJax_Main-Web]−[/FONT][FONT=MathJax_Main-Web]0.38[/FONT][FONT=MathJax_Main-Web]∗[/FONT][FONT=MathJax_Main-Web]0.4387[/FONT]\nEither\nCode:\n [2nd] [ln] [(] [-][0.38] [x] [0.4387][)][=]\nor\nCode:\n [(] [-][0.38] [x] [0.4387][)][2nd] [ln][=]\nYou can also use an online calculator to check answers - I used google which you can see in the link above and there is also a MHB calculator which works. For your own calculator it may be prudent to find the manual online (search for \"Ti30x user manual\") so you're not stuck in an exam.\n\nBear in mind that was just a test to see if your answer was right (it isn't). You need to use the natural logarithm (ln) to find x.\n\n$-0.38\\ln(x) = ln(0.3)$\n\n#### soroban\n\n##### Well-known member\nHello, goosey00!\n\nSolve for $$x:\\;e^{-0.38x}\\:=\\:0.3$$\n\nI got 0.4387 . Is that correct?\n$$\\text{We have: }\\:e^{-0.38x} \\;=\\;0.3$$\n$$\\text{Take logs: }\\:\\ln(e^{-0.38x}) \\;=\\;\\ln(0.3) \\quad\\Rightarrow\\quad \\text{-}0.38x\\underbrace{\\ln e}_{\\text{This is 1}} \\;=\\;\\ln(0.3)$$\n. . . $$\\text{-}0.38x \\;=\\;\\ln(0.3) \\quad\\Rightarrow\\quad x \\;=\\;\\frac{\\ln(0.3)}{\\text{-}0.38}$$\n. . . . . $$x \\;=\\;3.168\\,349\\,485$$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.67611176,"math_prob":0.99371284,"size":592,"snap":"2020-45-2020-50","text_gpt3_token_len":242,"char_repetition_ratio":0.17006803,"word_repetition_ratio":0.0,"special_character_ratio":0.44763514,"punctuation_ratio":0.17910448,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9983826,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-30T04:54:33Z\",\"WARC-Record-ID\":\"<urn:uuid:bedfc7cf-41aa-4870-a811-62365fcbcf7a>\",\"Content-Length\":\"71596\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c59bca39-eefa-4d14-ba70-62e36c1beff3>\",\"WARC-Concurrent-To\":\"<urn:uuid:64dd494c-6a84-43d9-b54f-6795fbaf4fd5>\",\"WARC-IP-Address\":\"50.31.99.218\",\"WARC-Target-URI\":\"https://mathhelpboards.com/threads/solve-for-x.1964/\",\"WARC-Payload-Digest\":\"sha1:MBHMJIXWIDLT5XSTWOQJWWVFS6LGOEVD\",\"WARC-Block-Digest\":\"sha1:6RHO2P25AD2G5AZ4ANJ6SMFZAQUWHELR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141205147.57_warc_CC-MAIN-20201130035203-20201130065203-00469.warc.gz\"}"}
https://www.massage-energie.be/May_21/how-to-use-equation-for-corrosion-of-calcium-metal.html
[ "Email:\n\n# how to use equation for corrosion of calcium metal", null, "### How to Calculate the Rate of Metal Corrosion\n\n8/12/2019· Converting Corrosion Rates. To convert the corrosion rate between the mils per year (MPY) and the metric equivalent millimeter per year (MM/Y), you can use the following equation to convert mils per year to micrometers per year (MicroM/Y): 1 MPY = 0.0254 MM / Y = 25.4 MicroM / Y.", null, "### Metal extraction slides\n\nThis is called electrolysis. 2KCl (l) 2K (l) + Cl2 (g) 12. • The method used to extract a metal depends on the reactivity of the metal. • Unreactive metals, such as gold, are often found as free (uncoined) elements. • Less reactive metals are extracted by heating their oxides with carbon.", null, "### Deicing Salt – Recognizing The Corrosion Threat\n\nF C Sodium Chloride Calcium Chloride Magnesium Chloride Salt Corrosion For corrosion to occur, a material’s surface must be dampened by an electrolyte, which is a water solution that can conduct an electric current. There is a direct correlation between", null, "### Drinking Water Problems: Corrosion\n\nLimestone (calcium carbonate) and dolomite (calcium magnesium carbonate) in the soil neutralize the acid and the water is usually alkaline—pH between 7 and 8—and “hard” due to the carbonates. If there is no limestone or dolomite, the groundwater will remain", null, "### Write the balanced equation for the reaction of calcium …\n\nThe unbalanced reaction equation is: {eq}Ca (s) + Fe(NO_3)_3 (aq) \\rightarrow Ca(NO_3)_2 (aq) + Fe (s) {/eq} We balance the nitrate anions on both sides by adjusting two coefficients simultaneously:", null, "### (PDF) Factors influencing corrosion of metal pipes in soils\n\nAlthough pH affects corrosion, there is no relationship between corrosion and pH and the corrosion rates of buried pipes are inversely proportional to soil resistivity.", null, "### Rebar Corrosion - an overview | ScienceDirect Topics\n\nCarbonation is the process of formation of calcium carbonate through reaction of carbon dioxide with concrete constituents of calcium hydroxide, aluminates, and silies. This process results in the initiation of rebar corrosion in the reinforced concrete ( Liachiya et al., 2012 ).", null, "### Calcium Metal - an overview | ScienceDirect Topics\n\nSmelting to give the metals involves metallothermic reduction of fluorides or oxides, or electrochemical methods. Metal oxides are converted to fluorides by HF/Ar and purified by melting in an HF/Ar atmosphere. The fluorides are then reduced by the more electropositive calcium metal. 2 LnF 3 + 3 Ca → 3 CaF 2 + 2 Ln.", null, "### Water Research Center - Drinking Water Corrosion, …\n\nIn order to use this index, the following laboratory analysis is needed: pH, conductivity, total dissolved solids, alkalinity, and total hardness. In manipulating the data, the actual pH of the water is compared to the theoretical pH (pHs) based on the chemical analysis. The Saturation Index =. SI = pH - pHs.", null, "### Corrosion of Carbon Steel - Total Materia\n\nThe weight loss and maximum pit depth in soil corrosion can be represented by an equation of the form: Z = a·t m Where: Z - either the weight of loss of maximum pit depth T - time of exposure a and m - constants that depend on the specific soil corrosion", null, "### How to Calculate the Rate of Metal Corrosion\n\n8/12/2019· Converting Corrosion Rates. 
To convert the corrosion rate between the mils per year (MPY) and the metric equivalent millimeter per year (MM/Y), you can use the following equation to convert mils per year to micrometers per year (MicroM/Y): 1 MPY = 0.0254 MM / Y = 25.4 MicroM / Y.", null, "### Drinking Water Problems: Corrosion\n\nLimestone (calcium carbonate) and dolomite (calcium magnesium carbonate) in the soil neutralize the acid and the water is usually alkaline—pH between 7 and 8—and “hard” due to the carbonates. If there is no limestone or dolomite, the groundwater will remain", null, "### calcium metal and water balanced equation processing\n\nsides of the equation 1. Calcium metal reacts with water to form solid calcium hydroxide and hydrogen gas. Ca + 2H2O → Ca(OH)2 (s) + H2 (g) 2. Zinc hydroxide solution reacts with lithium to form lithium hydroxide solution and zinc metal. balanced chemical", null, "### (PDF) Factors influencing corrosion of metal pipes in …\n\nAlthough pH affects corrosion, there is no relationship between corrosion and pH and the corrosion rates of buried pipes are inversely proportional to soil resistivity.", null, "### (PDF) Factors influencing corrosion of metal pipes in …\n\nAlthough pH affects corrosion, there is no relationship between corrosion and pH and the corrosion rates of buried pipes are inversely proportional to soil resistivity.", null, "### PAPER OPEN ACCESS …\n\nwith the behavior of NaCl, which is due to the calcium and magnesium ions contained in the metal surface precipitation of calcium carbonate and magnesium hydroxide precipitation, the metal has a certain protective effect.", null, "### Corrosion of Eedded Materials - PCA\n\nIn the reaction with calcium hydroxide, calcium carbonate is formed: Ca(OH) 2 + CO 2 → CaCO 3 + H 2 O This reaction reduces the pH of the pore solution to as low as 8.5, at which level the passive film on the steel is not stable.", null, "### How to calculate corrosion rate of metal-coated steel …\n\nThe corrosion rate can be calculated in millimeter per year, (mm/yr) on the basis of the apparent surface area using equation. Corrosion Rate (mm/yr) = Weight loss x K/Density x Area Time = W.K", null, "### Corrosion Doctors - hodic Processes\n\nThus, almost every case of aqueous corrosion can be reduced to these equations, either singly or in coination. Consider the corrosion of zinc by water or moist air. By multiplying the zinc oxidation reaction by 2 and summing this with the oxygen reduction", null, "### 17.6 Corrosion – Chemistry\n\nIron will rust when it is exposed to oxygen and water. The main steps in the rusting of iron appear to involve the following ( Figure 2 ). Once exposed to the atmosphere, iron rapidly oxidizes. anode: Fe(s) Fe2+(aq) + 2e− E∘ Fe2+/Fe = −0.44 V anode: Fe ( s) Fe 2 + ( a …", null, "### CORROSION AND SCALING - Geothermal Communities\n\nSolubility of heavy metal sulphides at 2 N NaCl solutions as a function(a) of pH at a constant temperature of 250ºC and (b) of temperature at pH=7. A part of the dissolved iron in the fluids comes from the geothermal formation, its concentration being usually less than 1 mg/L.", null, "### What is the chemical formula of corrosion? - Quora\n\nCorrosion occurs as a metal is oxidized usually to form the metal oxide. But the common form of corrosion is the rusting of iron, it always requires water and FeO (OH) is the most common form of rust. 
4Fe (s) + 3O2 (g) + 2H2O (l) → 4FeO (OH) (s) 447 views · Answer requested by", null, "### 19.9: Corrosion- Undesirable Redox Reactions - …\n\nOne way to avoid these problems is to use a more easily oxidized metal to protect iron from corrosion. In this approach, called hodic protection, a more reactive metal such as $$\\ce{Zn}$$ (E° = −0.76 V for $$\\ce{Zn^{2+} + 2e^{−} -> Zn}$$) becomes the anode, and iron becomes the hode.", null, "### Metal extraction slides\n\nThis is called electrolysis. 2KCl (l) 2K (l) + Cl2 (g) 12. • The method used to extract a metal depends on the reactivity of the metal. • Unreactive metals, such as gold, are often found as free (uncoined) elements. • Less reactive metals are extracted by heating their oxides with carbon.", null, "### Corrosion Doctors - hodic Processes\n\nThus, almost every case of aqueous corrosion can be reduced to these equations, either singly or in coination. Consider the corrosion of zinc by water or moist air. By multiplying the zinc oxidation reaction by 2 and summing this with the oxygen reduction", null, "### Corrosion of Eedded Materials - PCA\n\nIn the reaction with calcium hydroxide, calcium carbonate is formed: Ca(OH) 2 + CO 2 → CaCO 3 + H 2 O This reaction reduces the pH of the pore solution to as low as 8.5, at which level the passive film on the steel is not stable.", null, "### What Is Metal Corrosion and Why Does It Occur?\n\n16/5/2019· It is caused by chemical or electrochemical reactions. While general attack corrosion can cause a metal to fail, it is also a known and predictable issue. As a result, it is possible to plan for and manage general attack corrosion. Localized Corrosion: This corrosion attacks only portions of a metal …", null, "### Calcium chloride — Materials Technology\n\nCorrosion rate less than 0.1 mm/year. The material is corrosion proof. 1 Corrosion rate 0.1—1.0 mm/year. The material is not corrosion proof, but useful in certain cases. 2 Corrosion rate over 1.0 mm/year. Serious corrosion. The material is not usable. p, P c, C" ]
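Editor's illustration (not from any of the pages quoted above): a small Python sketch of the two rate formulas in this record, namely the MPY to mm/yr unit conversion and the weight-loss corrosion-rate equation as reconstructed here. The constant K = 87,600 follows the ASTM G1 convention for mm/yr with grams, g/cm³, cm² and hours; that convention is my assumption, not something stated on the source pages.

```python
def mpy_to_mm_per_year(mpy: float) -> float:
    """Convert mils per year to millimeters per year (1 mil = 0.0254 mm)."""
    return mpy * 0.0254


def corrosion_rate_mm_per_year(weight_loss_g: float, density_g_cm3: float,
                               area_cm2: float, hours: float) -> float:
    """Weight-loss rate CR = K*W / (D*A*T); K = 87,600 gives mm/yr for these
    units (assumed ASTM G1 convention, not quoted from the source pages)."""
    K = 87_600
    return K * weight_loss_g / (density_g_cm3 * area_cm2 * hours)


print(mpy_to_mm_per_year(1.0))  # 0.0254, matching the 1 MPY = 0.0254 MM/Y figure above
```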
[ null, "https://www.massage-energie.be/fy/419.jpg", null, "https://www.massage-energie.be/fy/451.jpg", null, "https://www.massage-energie.be/fy/554.jpg", null, "https://www.massage-energie.be/fy/297.jpg", null, "https://www.massage-energie.be/fy/40.jpg", null, "https://www.massage-energie.be/fy/433.jpg", null, "https://www.massage-energie.be/fy/410.jpg", null, "https://www.massage-energie.be/fy/9.jpg", null, "https://www.massage-energie.be/fy/396.jpg", null, "https://www.massage-energie.be/fy/13.jpg", null, "https://www.massage-energie.be/fy/351.jpg", null, "https://www.massage-energie.be/fy/274.jpg", null, "https://www.massage-energie.be/fy/398.jpg", null, "https://www.massage-energie.be/fy/458.jpg", null, "https://www.massage-energie.be/fy/286.jpg", null, "https://www.massage-energie.be/fy/501.jpg", null, "https://www.massage-energie.be/fy/386.jpg", null, "https://www.massage-energie.be/fy/464.jpg", null, "https://www.massage-energie.be/fy/412.jpg", null, "https://www.massage-energie.be/fy/388.jpg", null, "https://www.massage-energie.be/fy/413.jpg", null, "https://www.massage-energie.be/fy/157.jpg", null, "https://www.massage-energie.be/fy/68.jpg", null, "https://www.massage-energie.be/fy/489.jpg", null, "https://www.massage-energie.be/fy/247.jpg", null, "https://www.massage-energie.be/fy/517.jpg", null, "https://www.massage-energie.be/fy/22.jpg", null, "https://www.massage-energie.be/fy/499.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90559274,"math_prob":0.94565856,"size":7137,"snap":"2022-27-2022-33","text_gpt3_token_len":1808,"char_repetition_ratio":0.13612786,"word_repetition_ratio":0.39951178,"special_character_ratio":0.23931624,"punctuation_ratio":0.09677419,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97646195,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56],"im_url_duplicate_count":[null,2,null,5,null,2,null,5,null,1,null,2,null,1,null,2,null,2,null,1,null,2,null,1,null,1,null,1,null,1,null,1,null,2,null,1,null,2,null,1,null,1,null,1,null,2,null,3,null,1,null,1,null,4,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-06T17:09:16Z\",\"WARC-Record-ID\":\"<urn:uuid:078bae2c-4ad2-4679-af65-fd0ff7c0f430>\",\"Content-Length\":\"26203\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d4d242d4-b859-4c1c-b9cd-63bbe0e88b6e>\",\"WARC-Concurrent-To\":\"<urn:uuid:193f9b94-3826-46b9-ab18-636f25368e5d>\",\"WARC-IP-Address\":\"104.21.53.90\",\"WARC-Target-URI\":\"https://www.massage-energie.be/May_21/how-to-use-equation-for-corrosion-of-calcium-metal.html\",\"WARC-Payload-Digest\":\"sha1:EKWX4HZPG2P5NNIK7PQBJKFJHQ6VUQTY\",\"WARC-Block-Digest\":\"sha1:ZLFRX76TX7XDWPVR54OL45TUN6LJA6MV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104675818.94_warc_CC-MAIN-20220706151618-20220706181618-00022.warc.gz\"}"}
https://www.studyadda.com/notes/1st-class/mathematics/geometrical-shapes/geometrical-shapes/15825
[ "# 1st Class Mathematics Geometrical Shapes\n\nGeometrical Shapes\n\nCategory : 1st Class\n\nGeometrical Shapes\n\nGeometrical Shapes\n\nIn this chapter, we will know about the basic geometrical shapes, which we often see around us.\n\nLine\n\nA line is a collection of points. The lines may be straight or curved.\n\nSee the picture of lines given below:\n\n1. Straight lines", null, "1. Curved lines", null, "Angles\n\nWhen two lines meet at a point, an angle is formed.\n\nSee the following figures:", null, "", null, "(i)                    (ii)\n\nThese are angles.\n\n• Example:\n\nWhich one of the following is not an angle?\n\n(a)", null, "(b)", null, "(c)", null, "(d)", null, "(e)   None of these\n\nExplanation: Option (d) is correct because in (d), two lines do not meet to make an angle.\n\nTriangle\n\nA triangle has three sides and three angles. Let's see the following figures:", null, "Quadrilateral has four sides and four angles. Let's see the following figures:", null, "Thus it is clear that a quadrilateral has four sides whether they are equal or not.\n\nRectangle\n\nA rectangle has equal angles and equal opposite sides. Let's see the following rectangles:", null, "Square\n\nSquare has 4 equal sides and 4 angles. Let's see the squares given below:", null, "Rhombus\n\nRhombus is a quadrilateral in which all four sides are equal. Let's see the pictures of rhombus given below.", null, "• Example:\n\nHow many lines are required to make 2 square?\n\n(a) 4                                          (b) 8\n\n(c) 12                                        (d) 3\n\n(e) None of these\n\nYou will be redirected in 3 sec", null, "" ]
[ null, "https://www.studyadda.com/upload/html_folder/7_Geometrical_Shapes-1_ITHO_NZ/7_Geometrical_Shapes-1_ITHO_NZ_files/image001.jpg", null, "https://www.studyadda.com/upload/html_folder/7_Geometrical_Shapes-1_ITHO_NZ/7_Geometrical_Shapes-1_ITHO_NZ_files/image002.jpg", null, "https://www.studyadda.com/upload/html_folder/7_Geometrical_Shapes-1_ITHO_NZ/7_Geometrical_Shapes-1_ITHO_NZ_files/image003.jpg", null, "https://www.studyadda.com/upload/html_folder/7_Geometrical_Shapes-1_ITHO_NZ/7_Geometrical_Shapes-1_ITHO_NZ_files/image004.jpg", null, "https://www.studyadda.com/upload/html_folder/7_Geometrical_Shapes-1_ITHO_NZ/7_Geometrical_Shapes-1_ITHO_NZ_files/image005.jpg", null, "https://www.studyadda.com/upload/html_folder/7_Geometrical_Shapes-1_ITHO_NZ/7_Geometrical_Shapes-1_ITHO_NZ_files/image006.jpg", null, "https://www.studyadda.com/upload/html_folder/7_Geometrical_Shapes-1_ITHO_NZ/7_Geometrical_Shapes-1_ITHO_NZ_files/image007.jpg", null, "https://www.studyadda.com/upload/html_folder/7_Geometrical_Shapes-1_ITHO_NZ/7_Geometrical_Shapes-1_ITHO_NZ_files/image008.jpg", null, "https://www.studyadda.com/upload/html_folder/7_Geometrical_Shapes-1_ITHO_NZ/7_Geometrical_Shapes-1_ITHO_NZ_files/image009.jpg", null, "https://www.studyadda.com/upload/html_folder/7_Geometrical_Shapes-1_ITHO_NZ/7_Geometrical_Shapes-1_ITHO_NZ_files/image010.jpg", null, "https://www.studyadda.com/upload/html_folder/7_Geometrical_Shapes-1_ITHO_NZ/7_Geometrical_Shapes-1_ITHO_NZ_files/image011.jpg", null, "https://www.studyadda.com/upload/html_folder/7_Geometrical_Shapes-1_ITHO_NZ/7_Geometrical_Shapes-1_ITHO_NZ_files/image012.jpg", null, "https://www.studyadda.com/upload/html_folder/7_Geometrical_Shapes-1_ITHO_NZ/7_Geometrical_Shapes-1_ITHO_NZ_files/image013.jpg", null, "https://www.studyadda.com/assets/frontend/images/msg-gif.GIF", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8920294,"math_prob":0.9884206,"size":1435,"snap":"2020-10-2020-16","text_gpt3_token_len":371,"char_repetition_ratio":0.123689726,"word_repetition_ratio":0.038910504,"special_character_ratio":0.24181184,"punctuation_ratio":0.10726643,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9993129,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-21T10:21:43Z\",\"WARC-Record-ID\":\"<urn:uuid:172509b1-e979-4027-9bc3-bdeced14f03b>\",\"Content-Length\":\"113988\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1236c0e7-f71d-475c-b782-70648155cefe>\",\"WARC-Concurrent-To\":\"<urn:uuid:e4ed6eaf-6f42-43b9-a562-6a32d32efa13>\",\"WARC-IP-Address\":\"50.62.56.142\",\"WARC-Target-URI\":\"https://www.studyadda.com/notes/1st-class/mathematics/geometrical-shapes/geometrical-shapes/15825\",\"WARC-Payload-Digest\":\"sha1:AKNVGY22L3WKS3CXDLWT4JN2V723TTGE\",\"WARC-Block-Digest\":\"sha1:7SK74AKCMOUKAKA3UIDICWYYYD4FFLQI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145500.90_warc_CC-MAIN-20200221080411-20200221110411-00356.warc.gz\"}"}
https://artofproblemsolving.com/wiki/index.php/1988_AIME_Problems/Problem_7
[ "# 1988 AIME Problems/Problem 7\n\n## Problem\n\nIn triangle", null, "$ABC$,", null, "$\\tan \\angle CAB = 22/7$, and the altitude from", null, "$A$ divides", null, "$BC$ into segments of length 3 and 17. What is the area of triangle", null, "$ABC$?\n\n## Solution", null, "Let", null, "$D$ be the intersection of the altitude with", null, "$\\overline{BC}$, and", null, "$h$ be the length of the altitude. Without loss of generality, let", null, "$BD = 17$ and", null, "$CD = 3$. Then", null, "$\\tan \\angle DAB = \\frac{17}{h}$ and", null, "$\\tan \\angle CAD = \\frac{3}{h}$. Using the tangent sum formula,", null, "\\begin{align*} \\tan CAB &= \\tan (DAB + CAD)\\\\ \\frac{22}{7} &= \\frac{\\tan DAB + \\tan CAD}{1 - \\tan DAB \\cdot \\tan CAD} \\\\ &=\\frac{\\frac{17}{h} + \\frac{3}{h}}{1 - \\left(\\frac{17}{h}\\right)\\left(\\frac{3}{h}\\right)} \\\\ \\frac{22}{7} &= \\frac{20h}{h^2 - 51}\\\\ 0 &= 22h^2 - 140h - 22 \\cdot 51\\\\ 0 &= (11h + 51)(h - 11) \\end{align*}\n\nThe positive value of", null, "$h$ is", null, "$11$, so the area is", null, "$\\frac{1}{2}(17 + 3)\\cdot 11 = \\boxed{110}$.\n\nThe problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions.", null, "" ]
[ null, "https://latex.artofproblemsolving.com/e/2/a/e2a559986ed5a0ffc5654bd367c29dfc92913c36.png ", null, "https://latex.artofproblemsolving.com/3/8/8/388a19ae81f89ad673481e0fd1fdc6a4788f3907.png ", null, "https://latex.artofproblemsolving.com/0/1/9/019e9892786e493964e145e7c5cf7b700314e53b.png ", null, "https://latex.artofproblemsolving.com/6/c/5/6c52a41dcbd739f1d026c5d4f181438b75b76976.png ", null, "https://latex.artofproblemsolving.com/e/2/a/e2a559986ed5a0ffc5654bd367c29dfc92913c36.png ", null, "https://wiki-images.artofproblemsolving.com//d/d6/AIME_1988_Solution_07.png", null, "https://latex.artofproblemsolving.com/9/f/f/9ffb448918db29f2a72f8f87f421b3b3cad18f95.png ", null, "https://latex.artofproblemsolving.com/e/3/3/e33fe7d65facd8868f58b6e94ddc7f153a5a3f9f.png ", null, "https://latex.artofproblemsolving.com/8/1/8/8189a5b5a0917b8c93350827be4038af1839139d.png ", null, "https://latex.artofproblemsolving.com/e/e/b/eeb056cc061add1b8cbccee5632157a65474d77f.png ", null, "https://latex.artofproblemsolving.com/b/5/3/b53e0064e5c0075ceeb38a4e29b4fa1ed396dd85.png ", null, "https://latex.artofproblemsolving.com/f/9/8/f985109e1cf1d4c26c989000561cc78e76bbb3f1.png ", null, "https://latex.artofproblemsolving.com/9/4/4/944aeb671fb3cbc2842f22d0de09f282a86afd8a.png ", null, "https://latex.artofproblemsolving.com/9/2/e/92e1d7ca995b0bf10d17fbf6bcbead26dc09f234.png ", null, "https://latex.artofproblemsolving.com/8/1/8/8189a5b5a0917b8c93350827be4038af1839139d.png ", null, "https://latex.artofproblemsolving.com/c/6/8/c6878713578626763c38433b3f4c8c2205ad0c15.png ", null, "https://latex.artofproblemsolving.com/0/0/2/0025063120a0262512fd9be280584c2f6afb6853.png ", null, "https://wiki-images.artofproblemsolving.com//8/8b/AMC_logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6742252,"math_prob":1.0000021,"size":684,"snap":"2019-51-2020-05","text_gpt3_token_len":190,"char_repetition_ratio":0.1367647,"word_repetition_ratio":0.0,"special_character_ratio":0.30847952,"punctuation_ratio":0.10236221,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999937,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,10,null,null,null,null,null,null,null,10,null,null,null,10,null,10,null,9,null,null,null,null,null,7,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-18T14:56:58Z\",\"WARC-Record-ID\":\"<urn:uuid:810196e0-89ec-4c53-9cae-7ae3773cfc8b>\",\"Content-Length\":\"40335\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a8d1f6e0-cad6-48b6-a142-64e03d82b6a6>\",\"WARC-Concurrent-To\":\"<urn:uuid:1710ccc3-6be3-4a69-a7d7-07ea28728dc7>\",\"WARC-IP-Address\":\"96.126.112.194\",\"WARC-Target-URI\":\"https://artofproblemsolving.com/wiki/index.php/1988_AIME_Problems/Problem_7\",\"WARC-Payload-Digest\":\"sha1:3H647S5TPY7ICF4N57VQMJ4Y4BPDKW66\",\"WARC-Block-Digest\":\"sha1:RQKMYTZDM7H7GRHHPZO4HZWS3UU7RLTG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250592636.25_warc_CC-MAIN-20200118135205-20200118163205-00529.warc.gz\"}"}
https://dup4.com/Leetcode/problems/762.prime-number-of-set-bits-in-binary-representation/
[ "# 762.prime-number-of-set-bits-in-binary-representation\n\n## Statement\n\n• 例如, `21` 的二进制表示 `10101` 有 `3` 个计算置位。\n\n``````输入:left = 6, right = 10\n\n6 -> 110 (2 个计算置位,2 是质数)\n7 -> 111 (3 个计算置位,3 是质数)\n9 -> 1001 (2 个计算置位,2 是质数)\n10-> 1010 (2 个计算置位,2 是质数)\n\n``````\n\n``````输入:left = 10, right = 15\n\n10 -> 1010 (2 个计算置位, 2 是质数)\n11 -> 1011 (3 个计算置位, 3 是质数)\n12 -> 1100 (2 个计算置位, 2 是质数)\n13 -> 1101 (3 个计算置位, 3 是质数)\n14 -> 1110 (3 个计算置位, 3 是质数)\n15 -> 1111 (4 个计算置位, 4 不是质数)\n\n``````\n\n• `1 <= left <= right <= 106`\n• `0 <= right - left <= 104`\n\nGiven two integers `left` and `right`, return the count of numbers in the inclusive range `[left, right]` having a prime number of set bits in their binary representation.\n\nRecall that the number of set bits an integer has is the number of `1`'s present when written in binary.\n\n• For example, `21` written in binary is `10101`, which has `3` set bits.\n\nExample 1:\n\n``````Input: left = 6, right = 10\nOutput: 4\nExplanation:\n6 -> 110 (2 set bits, 2 is prime)\n7 -> 111 (3 set bits, 3 is prime)\n8 -> 1000 (1 set bit, 1 is not prime)\n9 -> 1001 (2 set bits, 2 is prime)\n10 -> 1010 (2 set bits, 2 is prime)\n4 numbers have a prime number of set bits.\n``````\n\nExample 2:\n\n``````Input: left = 10, right = 15\nOutput: 5\nExplanation:\n10 -> 1010 (2 set bits, 2 is prime)\n11 -> 1011 (3 set bits, 3 is prime)\n12 -> 1100 (2 set bits, 2 is prime)\n13 -> 1101 (3 set bits, 3 is prime)\n14 -> 1110 (3 set bits, 3 is prime)\n15 -> 1111 (4 set bits, 4 is not prime)\n5 numbers have a prime number of set bits.\n``````\n\nConstraints:\n\n• `1 <= left <= right <= 106`\n• `0 <= right - left <= 104`\n\n## Solution\n\n``````#include <bits/stdc++.h>\n#include <ext/pb_ds/assoc_container.hpp>\n#include <ext/pb_ds/tree_policy.hpp>\n\n#define endl \"\\n\"\n#define fi first\n#define se second\n#define all(x) begin(x), end(x)\n#define rall rbegin(a), rend(a)\n#define bitcnt(x) (__builtin_popcountll(x))\n#define complete_unique(a) a.erase(unique(begin(a), end(a)), end(a))\n#define mst(x, a) memset(x, a, sizeof(x))\n#define MP make_pair\n\nusing ll = long long;\nusing ull = unsigned long long;\nusing db = double;\nusing ld = long double;\nusing VLL = std::vector<ll>;\nusing VI = std::vector<int>;\nusing PII = std::pair<int, int>;\nusing PLL = std::pair<ll, ll>;\n\nusing namespace __gnu_pbds;\nusing namespace std;\ntemplate <typename T>\nusing ordered_set = tree<T, null_type, less<T>, rb_tree_tag, tree_order_statistics_node_update>;\nconst ll mod = 1e9 + 7;\n\ntemplate <typename T, typename S>\ninline bool chmax(T &a, const S &b) {\nreturn a < b ? a = b, 1 : 0;\n}\n\ntemplate <typename T, typename S>\ninline bool chmin(T &a, const S &b) {\nreturn a > b ? a = b, 1 : 0;\n}\n\n#ifdef LOCAL\n#include <debug.hpp>\n#else\n#define dbg(...)\n#endif\n\nint is_prime[] = {0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0};\n\nclass Solution {\npublic:\nint countPrimeSetBits(int left, int right) {\nint res = 0;\n\nfor (int i = left; i <= right; i++) {\nres += is_prime[__builtin_popcount(i)];\n}\n\nreturn res;\n}\n};\n\n#ifdef LOCAL\n\nint main() {\nreturn 0;\n}\n\n#endif\n``````" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5111382,"math_prob":0.9980862,"size":3234,"snap":"2022-40-2023-06","text_gpt3_token_len":1270,"char_repetition_ratio":0.14767802,"word_repetition_ratio":0.15422885,"special_character_ratio":0.40970933,"punctuation_ratio":0.18539326,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99567217,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-31T07:07:58Z\",\"WARC-Record-ID\":\"<urn:uuid:3defdc8d-5436-4dc3-bb26-7a6f29125505>\",\"Content-Length\":\"76770\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c06ae14a-bdd1-4ac6-b38f-6a79cdce655e>\",\"WARC-Concurrent-To\":\"<urn:uuid:8977f061-4451-4c82-b4e4-0177b8ff507a>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://dup4.com/Leetcode/problems/762.prime-number-of-set-bits-in-binary-representation/\",\"WARC-Payload-Digest\":\"sha1:2FSZOOS3RVRRRG45T4UJKOJZAUBSP5I5\",\"WARC-Block-Digest\":\"sha1:AUUGKNZKMJMCF2BNCI4OAGF3WKIPEPVE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499845.10_warc_CC-MAIN-20230131055533-20230131085533-00277.warc.gz\"}"}
https://maa.org/press/periodicals/convergence/the-four-curves-of-alexis-clairaut-editorial-conventions
[ "", null, "# The Four Curves of Alexis Clairaut: Editorial Conventions\n\nAuthor(s):\nTaner Kiral (Wabash College), Jonathan Murdock (Wabash College), and Colin B. P. McKinney (Wabash College)\n##### French Transcription\n\nIn transcribing Clairaut’s text (pdf), we remained faithful to his sentence structure, grammar, and word choice. We did, however, modernize antiquated spellings and add accenting as appropriate for modern French. We also expanded abbreviations. The original spelling, accenting, or abbreviations are given in critical apparatus, which appear as a collection of footnotes on the French pages. The lines of the text are numbered for reference, but the reader should note that line numbers do not appear in the original text and do not correspond in any way to the lines of the original text.\n\nFor the mathematical content of the French transcription, we maintained the notation Clairaut chose as much as possible. For example, Clairaut often has expressions such as\n\n√[$xx + yy$],\n\nand we have preserved these as much as is possible with $\\LaTeX$.\n\nThe critical apparatus for the French consists of three groupings:\n\n• Group A: Textual. These footnotes indicate textual changes, such as a modernization of spelling or accenting. For example, the word “côtez” appears in the original text, but we have updated it to read “côtes”. The verb côtes appears in the main text, and the corresponding footnote reads 2 côtes ] côtez. The number corresponds to the line number on which the word occurs; the word preceding the bracket indicates the spelling in our edition; the word after the bracket indicates the spelling in the original.\n\n• Group B: Reading. These notes are intended to clarify the meaning of a word in French. For example, Clairaut uses the word “quarrant”. This is, however, an antiquated word; a modern equivalent would be the word “carrer” (“to square”).\n\n• Group C: Mathematical. These notes indicate corrections to the mathematical content. For example, Clairaut’s original text has $x^{n+1}$ in a place where it should be $x^{n+2}$. These notes also indicate issues with the diagrams, such as an omitted label in Clairaut’s original.\n\n##### English Translation\n\nWe chose to not attempt a literal, word-for-word translation (pdf), and have instead taken some small liberties to give one with clearer meaning. For example, Clairaut employed a plethora of semicolons in his text. If we interpret them as periods, the result is a string of sentence fragments. If we interpret them as semicolons, it results in a massive run-on sentence. Accordingly, we have been flexible with sentence breaks, and have done so both to make the text readable and to remain true to what we think was Clairaut’s intent.\n\nFor mathematical notation in the English, we have used modern styling. For example, we replace Clairaut’s $xx$ with $x^2$ , and we render surds as square roots, e.g. $\\sqrt{x^2 + y^2}$ instead of √$xx + yy$.\n\nNotes in the English translation are of two varieties:\n\n• Translation. These items comment on specific aspects of our translation.\n• Mathematical. These items explicate mathematical details, such as intermediate steps not justified by Clairaut.\n##### General Editorial Conventions\n\nIn both the English and French, we have taken the liberty of placing most of the mathematical notation on its own line. The source text prints them in-line, and this is often detrimental to the readability of complicated notation. 
Equations count as a new line for the purposes of line numbering, and each equation is numbered in the usual $\LaTeX$ style.\n\nThe mathematical expressions are formatted in two ways. Points are given in “math bold” style, e.g. the point $\mathbf{A}$ or the point $\mathbf{n}$. Algebraic quantities are given in standard $\LaTeX$ italics, e.g. $x, y, a, m, n$. This was necessary in order to distinguish the points $\mathbf{a}$ and $\mathbf{n}$ from the algebraic quantities $a$ and $n$, as Clairaut used both throughout the paper. This is also advantageous because it distinguishes between the point $\mathbf{a}$, the algebraic quantity $a$, and the common French verb “a”, a conjugation of “avoir”.\n\nThe figures were created using GeoGebra, enabling the reader to manipulate the positions of some points. Clairaut’s curves are given in color (red or blue), and other lines are given in black. Some lines are dashed: often this is because Clairaut did so in his diagrams, but on a few occasions, we have made lines dashed so as not to distract from the more important parts of the figures. The coordinate system requires explanation: Clairaut oriented the $x$ axis running vertically, and the $y$ axis running horizontally. In order to graph Clairaut’s curves, it was necessary to switch $x$ and $y$, since GeoGebra requires the modern standard coordinate system of $x$ running horizontally and $y$ vertically. We have not changed $x$ and $y$ in the text. If the reader desires to graph the curves themselves, e.g. with Mathematica or GeoGebra, it will be necessary to switch $x$ and $y$ as they appear in the text so that the graphs match the orientation used by Clairaut.\n\nTaner Kiral (Wabash College), Jonathan Murdock (Wabash College), and Colin B. P. McKinney (Wabash College), \"The Four Curves of Alexis Clairaut: Editorial Conventions,\" Convergence (November 2020)" ]
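An editor's illustration of the axis-swap convention described above (the sample curve is a hypothetical placeholder, not one of Clairaut's four curves): to reproduce Clairaut's orientation in a standard plotting library, pass the coordinates in (y, x) order.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical sample curve written in Clairaut's variables, x = y^2 / a,
# with his x meant to run vertically and his y horizontally.
a = 2.0
y = np.linspace(-3, 3, 200)
x = y**2 / a

# Swap the arguments so the plot matches Clairaut's orientation:
# his y goes on the horizontal axis, his x on the vertical axis.
plt.plot(y, x, color="red")
plt.xlabel("y (Clairaut's horizontal axis)")
plt.ylabel("x (Clairaut's vertical axis)")
plt.show()
```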
[ null, "https://px.ads.linkedin.com/collect/", null, "https://maa.org/sites/default/files/maa-page-banner-text-58-69-71%20%281%29_0.jpg", null, "https://maa.org/sites/default/files/MAA%20MathFest%202023%20Videos_2.png", null, "https://maa.org/sites/default/files/Karen%20Hunger%20Parshall%20-%20EDITED.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9142387,"math_prob":0.9369275,"size":4953,"snap":"2023-40-2023-50","text_gpt3_token_len":1121,"char_repetition_ratio":0.11315417,"word_repetition_ratio":0.01011378,"special_character_ratio":0.22511609,"punctuation_ratio":0.1276824,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9702507,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-05T16:11:45Z\",\"WARC-Record-ID\":\"<urn:uuid:f516de71-9931-43ac-bfb5-5defdc56d246>\",\"Content-Length\":\"110575\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aff7231c-b6cf-4bf9-86c9-c52bf2b264c2>\",\"WARC-Concurrent-To\":\"<urn:uuid:fafaf303-9fb3-47cd-a88f-98c7af20302a>\",\"WARC-IP-Address\":\"151.101.2.216\",\"WARC-Target-URI\":\"https://maa.org/press/periodicals/convergence/the-four-curves-of-alexis-clairaut-editorial-conventions\",\"WARC-Payload-Digest\":\"sha1:4DQU6HC2BVH7FX6D6AGGQ35TNLE6GL6O\",\"WARC-Block-Digest\":\"sha1:LJQF3ILGQZAAKRSZ37AJYB6GCGNAH7OM\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100551.2_warc_CC-MAIN-20231205140836-20231205170836-00047.warc.gz\"}"}
http://www.statsmodels.org/dev/generated/statsmodels.regression.linear_model.OLS.hessian_factor.html
[ "# statsmodels.regression.linear_model.OLS.hessian_factor¶\n\nOLS.hessian_factor(params, scale=None, observed=True)[source]\n\nWeights for calculating Hessian\n\nParameters: params (ndarray) – parameter at which Hessian is evaluated scale (None or float) – If scale is None, then the default scale will be calculated. Default scale is defined by self.scaletype and set in fit. If scale is not None, then it is used as a fixed scale. observed (bool) – If True, then the observed Hessian is returned. If false then the expected information matrix is returned. hessian_factor – A 1d weight vector used in the calculation of the Hessian. The hessian is obtained by (exog.T * hessian_factor).dot(exog) ndarray, 1d" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.75332236,"math_prob":0.9739782,"size":729,"snap":"2019-13-2019-22","text_gpt3_token_len":182,"char_repetition_ratio":0.14896552,"word_repetition_ratio":0.0,"special_character_ratio":0.22359397,"punctuation_ratio":0.1716418,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9942029,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-26T10:07:22Z\",\"WARC-Record-ID\":\"<urn:uuid:868cbd72-40a6-48a4-9962-ea5f18fcf3c8>\",\"Content-Length\":\"9560\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:20d818d9-e7f6-4bea-80fe-1c5ad8da2d95>\",\"WARC-Concurrent-To\":\"<urn:uuid:277f7dc5-fbf8-4074-b9c2-f314482b64f3>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"http://www.statsmodels.org/dev/generated/statsmodels.regression.linear_model.OLS.hessian_factor.html\",\"WARC-Payload-Digest\":\"sha1:3YNMWPQGYFTRJQZUQ3M3WPK23H7ZQVDA\",\"WARC-Block-Digest\":\"sha1:NRURYJCZW3VXFRDEK5EV5GHYYLNFANY7\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912204969.39_warc_CC-MAIN-20190326095131-20190326121131-00035.warc.gz\"}"}
https://www.journeyingtheglobe.com/fahrenheit-to-celsius/616.5-f-to-c/
616.5 f to c | 616.5 Fahrenheit to Celsius | [+ Examples]\n\n# 616.5F to C - Convert 616.5° Fahrenheit to Celsius\n\n### The answer is: 324.72 degrees Celsius or 324.72° C\n\nLet's look into the conversion between the Fahrenheit and Celsius scales in detail.\n\n### Calculate 616.5° Fahrenheit to Celsius (616.5F to °C)\n\nFahrenheit\nCelsius\n616.5 Degrees Fahrenheit = 324.72 Degrees Celsius\n\nTemperature Conversion - Degrees Fahrenheit into Degrees Celsius\n\nThe Fahrenheit to Celsius conversion formula is all about converting a temperature denoted in Fahrenheit to Celsius. Since the freezing point of water is 0 degrees on the Celsius scale and 32 degrees on the Fahrenheit scale, the formula to convert F to C is\n\n### °C = (°F − 32) x 5/9\n\nThe math here is fairly simple and can be easily understood by an example. Let's say we need to convert 616.5 Fahrenheit to Celsius.\n\n## How To Convert 616.5 F to C?\n\nTo convert 616.5 degrees Fahrenheit to Celsius, all one needs to do is put the values into the converter equation -\n\n### °C = (°F − 32) x 5/9\n\nC = 324.72 degrees\n\nThus, after applying the formula to convert 616.5 Fahrenheit to Celsius, the answer is -\n\n616.5°F = 324.72°C\n\nor\n\n616.5 degrees Fahrenheit equals 324.72 degrees Celsius!\n\n### How much is 616.5 degrees Fahrenheit in Celsius?\n\n616.5F to C = 324.72 °C\n\n### What is the formula to calculate Fahrenheit to Celsius?\n\nThe F to C formula is\n\n(F − 32) × 5/9 = C\n\nWhen we enter 616.5 for F in the formula, we get\n\n(616.5 − 32) × 5/9 = 324.72 C\n\nTo solve the (616.5 − 32) × 5/9 equation, we first subtract 32 from 616.5, then we multiply the difference by 5, and finally we divide the product by 9 to get the answer in Celsius.\n\n### What is the simplest way of converting Fahrenheit into Celsius?\n\nThe freezing temperature of water is 32 degrees in Fahrenheit and 0 degrees in Celsius, and the exact formula for the conversion is\n\nC = (F − 32) × 5/9\n\nFor a quick approximation of Fahrenheit into Celsius, you can use this formula – (Fahrenheit Temperature – 32) / 2 ≈ Celsius Temperature.\n\nBut this is only an approximation, not an exact conversion, and it doesn't give out the exact number.\n\nAnother formula that is believed to be equally easy and quick is\n\n(°F - 32) x .5556\n\nWhile there are other temperature units like Kelvin, Réaumur, and Rankine as well, Degree Celsius and Degree Fahrenheit are the most commonly used.\n\nWhile Fahrenheit is primarily used in the US and its territories, Celsius has gained more popularity in the rest of the world. For those using these two different scales, the numbers that denote the same temperature are quite different.\n\nFor example, water freezes at zero degrees Celsius and boils at 100 degrees, while the corresponding Fahrenheit readings are 32 degrees for the freezing point of water and 212 degrees for boiling.\n\n## For Celsius Conversions\n\nFor an approximate conversion to Celsius, all you need to do is start with the temperature in Fahrenheit. 
Subtract 30 from the resultant figure, and finally, divide your answer by 2!\n\n### Key Inferences about Fahrenheit and Celsius\n\n• Celsius and Fahrenheit are commonly misspelled as Celcius and Farenheit.\n• The formula to find a Fahrenheit temperature from Celsius is:  °F = (°C × 9/5) + 32\n• The formula to find a Celsius temperature from Fahrenheit is:  °C = (°F - 32) × 5/9\n• The two temperature scales are equal at -40°.\n\nThe Fahrenheit temperature scale is named after the German physicist Daniel Gabriel Fahrenheit in 1724 and was originally used for temperature measurement through mercury thermometers that he invented himself.\n\nMeanwhile, the Celsius scale was originally called centigrade but later came to be named after Swedish astronomer Anders Celsius in 1742. But when the scale was first introduced, it was quite the reverse of what it is today. Anders labeled 0 Degree Celsius as the boiling point of water, while 100 denoted the freezing point.\n\nHowever, after Celsius passed away, Swedish taxonomist Carl Linnaeus flipped it to the opposite, the same as it is used today.\n\n### Our Take\n\nWhile this is the formula used for the conversion from Fahrenheit to Celsius, there are a few caveats, and the quick methods are not always a perfect conversion, which makes it slightly more difficult than it appears to be.\n\nAll said and done, one must understand that since both scales are offset (neither of them starts from the same zero point), there is a slightly complicated angle to the above-mentioned formulas.\n\nBesides, the two scales each add a different value for every unit of heat. This is why it is not always possible to get an exact value for the conversion by applying the approximate formulas.\n\nFahrenheit to Celsius Conversion Table (nearby values)\n\n Fahrenheit Celsius 616.51°F 324.73°C 616.52°F 324.73°C 616.53°F 324.74°C 616.54°F 324.74°C 616.55°F 324.75°C 616.56°F 324.76°C 616.57°F 324.76°C 616.58°F 324.77°C 616.59°F 324.77°C 616.6°F 324.78°C 616.61°F 324.78°C 616.62°F 324.79°C 616.63°F 324.79°C 616.64°F 324.8°C 616.65°F 324.81°C 616.66°F 324.81°C 616.67°F 324.82°C 616.68°F 324.82°C 616.69°F 324.83°C 616.7°F 324.83°C 616.71°F 324.84°C 616.72°F 324.84°C 616.73°F 324.85°C 616.74°F 324.86°C\n Fahrenheit Celsius 616.75°F 324.86°C 616.76°F 324.87°C 616.77°F 324.87°C 616.78°F 324.88°C 616.79°F 324.88°C 616.8°F 324.89°C 616.81°F 324.89°C 616.82°F 324.9°C 616.83°F 324.91°C 616.84°F 324.91°C 616.85°F 324.92°C 616.86°F 324.92°C 616.87°F 324.93°C 616.88°F 324.93°C 616.89°F 324.94°C 616.9°F 324.94°C 616.91°F 324.95°C 616.92°F 324.96°C 616.93°F 324.96°C 616.94°F 324.97°C 616.95°F 324.97°C 616.96°F 324.98°C 616.97°F 324.98°C 616.98°F 324.99°C 616.99°F 324.99°C\n Fahrenheit Celsius 617°F 325°C 617.01°F 325.01°C 617.02°F 325.01°C 617.03°F 325.02°C 617.04°F 325.02°C 617.05°F 325.03°C 617.06°F 325.03°C 617.07°F 325.04°C 617.08°F 325.04°C 617.09°F 325.05°C 617.1°F 325.06°C 617.11°F 325.06°C 617.12°F 325.07°C 617.13°F 325.07°C 617.14°F 325.08°C 617.15°F 325.08°C 617.16°F 325.09°C 617.17°F 325.09°C 617.18°F 325.1°C 617.19°F 325.11°C 617.2°F 325.11°C 617.21°F 325.12°C 617.22°F 325.12°C 617.23°F 325.13°C 617.24°F 325.13°C\n Fahrenheit Celsius 617.25°F 325.14°C 617.26°F 325.14°C 617.27°F 325.15°C 617.28°F 325.16°C 617.29°F 325.16°C 617.3°F 325.17°C 617.31°F 325.17°C 617.32°F 325.18°C 617.33°F 325.18°C 617.34°F 325.19°C 617.35°F 325.19°C 617.36°F 325.2°C 617.37°F 325.21°C 617.38°F 325.21°C 
617.39°F 325.22°C 617.4°F 325.22°C 617.41°F 325.23°C 617.42°F 325.23°C 617.43°F 325.24°C 617.44°F 325.24°C 617.45°F 325.25°C 617.46°F 325.26°C 617.47°F 325.26°C 617.48°F 325.27°C 617.49°F 325.27°C" ]
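A small Python sketch of the exact conversion formulas used on this page (an editor's illustration, not code from the original site):

```python
def fahrenheit_to_celsius(f: float) -> float:
    """Exact conversion: °C = (°F - 32) × 5/9."""
    return (f - 32) * 5 / 9


def celsius_to_fahrenheit(c: float) -> float:
    """Exact conversion: °F = °C × 9/5 + 32."""
    return c * 9 / 5 + 32


print(round(fahrenheit_to_celsius(616.5), 2))  # 324.72, as computed above
print(fahrenheit_to_celsius(-40) == -40)       # True: the two scales agree at -40°
```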
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7635993,"math_prob":0.9721847,"size":7981,"snap":"2022-27-2022-33","text_gpt3_token_len":2756,"char_repetition_ratio":0.24395137,"word_repetition_ratio":0.01946472,"special_character_ratio":0.42926952,"punctuation_ratio":0.15729758,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9563489,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-06T02:03:09Z\",\"WARC-Record-ID\":\"<urn:uuid:e5b9c3cc-815d-43ca-8402-3e77af8428a9>\",\"Content-Length\":\"325974\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dc93da31-d347-48b7-87f9-3ee5873e0ad2>\",\"WARC-Concurrent-To\":\"<urn:uuid:86fbe599-98a3-48b1-91c0-96db1d348b1b>\",\"WARC-IP-Address\":\"104.26.7.72\",\"WARC-Target-URI\":\"https://www.journeyingtheglobe.com/fahrenheit-to-celsius/616.5-f-to-c/\",\"WARC-Payload-Digest\":\"sha1:6I5ZWSHNHJ7LB7TJW7DF3ZIE2J2FRYE5\",\"WARC-Block-Digest\":\"sha1:EDIMCKCXNL45GUTFNIJ6PN2ZQHZP5Q7V\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104655865.86_warc_CC-MAIN-20220705235755-20220706025755-00105.warc.gz\"}"}
http://himalayal.com.cn/transformer-testing-research/CalculationoftheEffectofRectifierTransformer.html
[ "# Calculation of the Effect of Rectifier Transformer Impedance on Rectification Circuit Characteristics\n\n1. Calculation of Commutation Overlap Angle\n\nAs for SCR rectification circuit of m-impulse, if there is only m/3 SCR break-over at any moment, the phase difference between two phase power supply is always m/3x2π/m=2/3 . Three-phase half-wave rectification circuit is the base unit of all three-phase rectification circuit. In the Fig.1, the letter “L” represents the leakage inductance of each phase that is converted to secondary side of power transformer. Assume that the load current is smooth direct curren Id:", null, "", null, "" ]
[ null, "http://www.himalayal.com/upfile/PDF/Circuit_2017_2_10.png", null, "http://www.himalayal.com/upfile/General_Photos/High_Voltage_Testing_Research_Article_Download.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8508751,"math_prob":0.96888703,"size":1009,"snap":"2022-05-2022-21","text_gpt3_token_len":230,"char_repetition_ratio":0.108457714,"word_repetition_ratio":0.0,"special_character_ratio":0.20416254,"punctuation_ratio":0.13297872,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97883415,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,10,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-20T14:36:19Z\",\"WARC-Record-ID\":\"<urn:uuid:c6ff087b-fc09-4abe-b05e-5a37892b33d2>\",\"Content-Length\":\"24262\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fb5db13c-0262-466c-9772-cb2b11f2bc5c>\",\"WARC-Concurrent-To\":\"<urn:uuid:edd78eaf-611d-417a-a917-bd9f0cb19420>\",\"WARC-IP-Address\":\"47.88.31.235\",\"WARC-Target-URI\":\"http://himalayal.com.cn/transformer-testing-research/CalculationoftheEffectofRectifierTransformer.html\",\"WARC-Payload-Digest\":\"sha1:N4ZAD7H554K4A66PH3CBA6MD3SWKRIB7\",\"WARC-Block-Digest\":\"sha1:ZXB2CHTLNEB4RGRGXKEIYBBGE5JOTNEE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662532032.9_warc_CC-MAIN-20220520124557-20220520154557-00459.warc.gz\"}"}
https://www.colorhexa.com/03868e
[ "# #03868e Color Information\n\nIn a RGB color space, hex #03868e is composed of 1.2% red, 52.5% green and 55.7% blue. Whereas in a CMYK color space, it is composed of 97.9% cyan, 5.6% magenta, 0% yellow and 44.3% black. It has a hue angle of 183.5 degrees, a saturation of 95.9% and a lightness of 28.4%. #03868e color hex could be obtained by blending #06ffff with #000d1d. Closest websafe color is: #009999.\n\n• R 1\n• G 53\n• B 56\nRGB color chart\n• C 98\n• M 6\n• Y 0\n• K 44\nCMYK color chart\n\n#03868e color description : Dark cyan.\n\n# #03868e Color Conversion\n\nThe hexadecimal color #03868e has RGB values of R:3, G:134, B:142 and CMYK values of C:0.98, M:0.06, Y:0, K:0.44. Its decimal value is 231054.\n\nHex triplet RGB Decimal 03868e `#03868e` 3, 134, 142 `rgb(3,134,142)` 1.2, 52.5, 55.7 `rgb(1.2%,52.5%,55.7%)` 98, 6, 0, 44 183.5°, 95.9, 28.4 `hsl(183.5,95.9%,28.4%)` 183.5°, 97.9, 55.7 009999 `#009999`\nCIE-LAB 50.712, -27.038, -12.994 13.444, 19.021, 28.553 0.22, 0.312, 19.021 50.712, 29.998, 205.667 50.712, -38.207, -15.172 43.613, -21.301, -8.287 00000011, 10000110, 10001110\n\n# Color Schemes with #03868e\n\n• #03868e\n``#03868e` `rgb(3,134,142)``\n• #8e0b03\n``#8e0b03` `rgb(142,11,3)``\nComplementary Color\n• #038e51\n``#038e51` `rgb(3,142,81)``\n• #03868e\n``#03868e` `rgb(3,134,142)``\n• #03418e\n``#03418e` `rgb(3,65,142)``\nAnalogous Color\n• #8e5103\n``#8e5103` `rgb(142,81,3)``\n• #03868e\n``#03868e` `rgb(3,134,142)``\n• #8e0341\n``#8e0341` `rgb(142,3,65)``\nSplit Complementary Color\n• #868e03\n``#868e03` `rgb(134,142,3)``\n• #03868e\n``#03868e` `rgb(3,134,142)``\n• #8e0386\n``#8e0386` `rgb(142,3,134)``\n• #038e0b\n``#038e0b` `rgb(3,142,11)``\n• #03868e\n``#03868e` `rgb(3,134,142)``\n• #8e0386\n``#8e0386` `rgb(142,3,134)``\n• #8e0b03\n``#8e0b03` `rgb(142,11,3)``\n• #013f43\n``#013f43` `rgb(1,63,67)``\n• #02575c\n``#02575c` `rgb(2,87,92)``\n• #026e75\n``#026e75` `rgb(2,110,117)``\n• #03868e\n``#03868e` `rgb(3,134,142)``\n• #049ea7\n``#049ea7` `rgb(4,158,167)``\n• #04b5c0\n``#04b5c0` `rgb(4,181,192)``\n• #05cdd9\n``#05cdd9` `rgb(5,205,217)``\nMonochromatic Color\n\n# Alternatives to #03868e\n\nBelow, you can see some colors close to #03868e. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #038e73\n``#038e73` `rgb(3,142,115)``\n• #038e7f\n``#038e7f` `rgb(3,142,127)``\n• #038e8a\n``#038e8a` `rgb(3,142,138)``\n• #03868e\n``#03868e` `rgb(3,134,142)``\n• #037a8e\n``#037a8e` `rgb(3,122,142)``\n• #036f8e\n``#036f8e` `rgb(3,111,142)``\n• #03638e\n``#03638e` `rgb(3,99,142)``\nSimilar Colors\n\n# #03868e Preview\n\nThis text has a font color of #03868e.\n\n``<span style=\"color:#03868e;\">Text here</span>``\n#03868e background color\n\nThis paragraph has a background color of #03868e.\n\n``<p style=\"background-color:#03868e;\">Content here</p>``\n#03868e border color\n\nThis element has a border color of #03868e.\n\n``<div style=\"border:1px solid #03868e;\">Content here</div>``\nCSS codes\n``.text {color:#03868e;}``\n``.background {background-color:#03868e;}``\n``.border {border:1px solid #03868e;}``\n\n# Shades and Tints of #03868e\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000708 is the darkest color, while #f3feff is the lightest one.\n\n• #000708\n``#000708` `rgb(0,7,8)``\n• #01191b\n``#01191b` `rgb(1,25,27)``\n• #012b2e\n``#012b2e` `rgb(1,43,46)``\n• #013d41\n``#013d41` `rgb(1,61,65)``\n• #025054\n``#025054` `rgb(2,80,84)``\n• #026268\n``#026268` `rgb(2,98,104)``\n• #03747b\n``#03747b` `rgb(3,116,123)``\n• #03868e\n``#03868e` `rgb(3,134,142)``\n• #0398a1\n``#0398a1` `rgb(3,152,161)``\n• #04aab4\n``#04aab4` `rgb(4,170,180)``\n• #04bcc8\n``#04bcc8` `rgb(4,188,200)``\n• #05cfdb\n``#05cfdb` `rgb(5,207,219)``\n• #05e1ee\n``#05e1ee` `rgb(5,225,238)``\n• #0decfa\n``#0decfa` `rgb(13,236,250)``\n• #20eefa\n``#20eefa` `rgb(32,238,250)``\n• #33effb\n``#33effb` `rgb(51,239,251)``\n• #46f1fb\n``#46f1fb` `rgb(70,241,251)``\n• #5af2fc\n``#5af2fc` `rgb(90,242,252)``\n• #6df4fc\n``#6df4fc` `rgb(109,244,252)``\n• #80f5fc\n``#80f5fc` `rgb(128,245,252)``\n• #93f7fd\n``#93f7fd` `rgb(147,247,253)``\n• #a6f8fd\n``#a6f8fd` `rgb(166,248,253)``\n• #bafafe\n``#bafafe` `rgb(186,250,254)``\n• #cdfbfe\n``#cdfbfe` `rgb(205,251,254)``\n• #e0fdfe\n``#e0fdfe` `rgb(224,253,254)``\n• #f3feff\n``#f3feff` `rgb(243,254,255)``\nTint Color Variation\n\n# Tones of #03868e\n\nA tone is produced by adding gray to any pure hue. In this case, #464b4b is the less saturated color, while #03868e is the most saturated one.\n\n• #464b4b\n``#464b4b` `rgb(70,75,75)``\n• #405051\n``#405051` `rgb(64,80,81)``\n• #3b5556\n``#3b5556` `rgb(59,85,86)``\n• #355a5c\n``#355a5c` `rgb(53,90,92)``\n• #305f61\n``#305f61` `rgb(48,95,97)``\n• #2a6367\n``#2a6367` `rgb(42,99,103)``\n• #24686d\n``#24686d` `rgb(36,104,109)``\n• #1f6d72\n``#1f6d72` `rgb(31,109,114)``\n• #197278\n``#197278` `rgb(25,114,120)``\n• #14777d\n``#14777d` `rgb(20,119,125)``\n• #0e7c83\n``#0e7c83` `rgb(14,124,131)``\n• #098188\n``#098188` `rgb(9,129,136)``\n• #03868e\n``#03868e` `rgb(3,134,142)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #03868e is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
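An illustrative Python sketch (not part of the original page) that reproduces the hex to RGB and hex to HSL figures quoted above, using the standard library's colorsys module:

```python
import colorsys

hex_color = "03868e"
r, g, b = (int(hex_color[i:i + 2], 16) for i in (0, 2, 4))
print(r, g, b)  # 3 134 142

# colorsys works on floats in [0, 1] and returns hue, lightness, saturation
h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
print(round(h * 360, 1), round(s * 100, 1), round(l * 100, 1))  # 183.5 95.9 28.4
```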
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5300574,"math_prob":0.7438654,"size":3680,"snap":"2019-43-2019-47","text_gpt3_token_len":1619,"char_repetition_ratio":0.12758434,"word_repetition_ratio":0.011111111,"special_character_ratio":0.5608696,"punctuation_ratio":0.23809524,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99361,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-21T06:09:37Z\",\"WARC-Record-ID\":\"<urn:uuid:cbdf078d-f5d0-4335-a057-7037b12a74c9>\",\"Content-Length\":\"36243\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:08de860f-db25-4518-a993-c7fd3d70d56d>\",\"WARC-Concurrent-To\":\"<urn:uuid:033aa250-5bfb-4d53-bebe-0a862c1ca22c>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/03868e\",\"WARC-Payload-Digest\":\"sha1:AQFDOPQZXGZKE4RT5KAOQL7NZ4ZHTF6F\",\"WARC-Block-Digest\":\"sha1:7VHFQ3OMLHAI3RRV42CXZDJMZ64D5REB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670731.88_warc_CC-MAIN-20191121050543-20191121074543-00360.warc.gz\"}"}
https://discuss.pytorch.org/t/conv1d-layer-performance/105063
[ "# Conv1d layer performance\n\nHi, I was working with the `Conv1d` layer and noticed a weird inference speed degradation comparing 2 ways of input propagation through this layer, lets say we have:\n\n``````conv_1 = nn.Conv1d(in_channels=1, out_channels=20, kernel_size=(1, 300))\nconv_1.weight.data.fill_(0.01)\nconv_1.bias.data.fill_(0.01)\n\nconv_2 = nn.Conv1d(in_channels=300, out_channels=20, kernel_size=1)\nconv_2.weight.data.fill_(0.01)\nconv_2.bias.data.fill_(0.01)\n\nx1 = torch.FloatTensor(np.ones((10, 1, 100000, 300)))\nout1 = conv_1(x1).squeeze(3)\n\nx2 = torch.FloatTensor(np.ones((10, 300, 100000)))\nout2 = conv_2(x2)\n\ntorch.allclose(out1, out2, atol=1e-6)\n\n>>> True\n``````\n\nThen I tried to measure performance speed for `conv_1` and `conv_2` and got the following results:\n\nCan please someone explain me this almost 2-x performance degradation and if this issue is reproducible or not?\n\nConfig:\nPyTorch==1.6.0 via pip\nOperating System: Ubuntu 18.04.5 LTS\nKernel: Linux 4.15.0-123-generic\nCPU: product: Intel® Core™ i5-7200U CPU @ 2.50GHz\n\nyour input tensors are permuted differently (300 element vectors are either contiguous or scattered), so different strategies may be used to obtain the result, in this case mkldnn library does the inner loop, and in second case avx may be unusable.\n\nI still don’t understand, this is weird behavior, the way, how `Conv1d` ‘supposed’ to be used is a second way when we processing multichannel 1-d inputs, that’s how documentation proposes to use `Conv1d`, I didn’t even know till recent times that `Conv1d` can handle 4-d inputs. So why the “correct” way is 2 times slower, or it is not a “correct” way and I’m missing something?\n\n1. you shouldn’t see such a difference on CUDA\n2. conv_2 is faster for me (1.8.0a0 with OMP/MKL threading disabled). you may also see a different picture if you change 100000 -> 100\n3. I’ve just seen a related PR: https://github.com/pytorch/pytorch/pull/48885\n4. in general, performance & best approach may vary a lot depending on shapes" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6340859,"math_prob":0.9107036,"size":981,"snap":"2022-05-2022-21","text_gpt3_token_len":311,"char_repetition_ratio":0.122824974,"word_repetition_ratio":0.0,"special_character_ratio":0.35474005,"punctuation_ratio":0.23873875,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97467214,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-24T00:20:53Z\",\"WARC-Record-ID\":\"<urn:uuid:87e57ff6-5bed-4649-959e-1361f7817bfd>\",\"Content-Length\":\"25339\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:054a7d06-65b0-4830-a77f-d3ae3b4d0cf5>\",\"WARC-Concurrent-To\":\"<urn:uuid:e034fb82-6d59-43a5-8909-407b38e5cd7e>\",\"WARC-IP-Address\":\"159.203.145.104\",\"WARC-Target-URI\":\"https://discuss.pytorch.org/t/conv1d-layer-performance/105063\",\"WARC-Payload-Digest\":\"sha1:J4Q5ITPBFD6TAKXICKS2SC4XFDEFD7SM\",\"WARC-Block-Digest\":\"sha1:7I3CYAVPGDK55OK6IAVA56TOE2EWXJW7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662562106.58_warc_CC-MAIN-20220523224456-20220524014456-00633.warc.gz\"}"}