https://insideaiml.com/blog/Python-3---Numbers-470
[ "", null, "", null, "#### Top Courses", null, "#### Machine Learning with Python & Statistics", null, "4 (4,001 Ratings)", null, "218 Learners\n\n#### Live Masterclass on \"Python for Artificial Intelligence\"", null, "Dec 4th (7:00 PM) 208 Registered\nMore webinars\n\n# Python 3 - Numbers", null, "Neha Kumawat\n\na year ago\n\nNumber data types store numeric values. They are immutable data types. This means, changing the value of a number data type results in a newly allocated object.\nNumber objects are created when you assign a value to them. For example −\n``````\nvar1 = 1\nvar2 = 10\n``````\nYou can also delete the reference to a number object by using the del statement. The syntax of the del statement is −\n``````\ndel var1[,var2[,var3[....,varN]]]]\n``````\nYou can delete a single object or multiple objects by using the del statement. For example −\n``````\ndel var\ndel var_a, var_b\n``````\nPython supports different numerical types −\n• int (signed integers) − They are often called just integers or ints. They are positive or negative whole numbers with no decimal point. Integers in Python 3 are of unlimited size. Python 2 has two integer types - int and long. There is no 'long integer' in Python 3 anymore.\nint (signed integers) − They are often called just integers or ints. They are positive or negative whole numbers with no decimal point. Integers in Python 3 are of unlimited size. Python 2 has two integer types - int and long. There is no 'long integer' in Python 3 anymore.\n• float (floating point real values) − Also called floats, they represent real numbers and are written with a decimal point dividing the integer and the fractional parts. Floats may also be in scientific notation, with E or e indicating the power of 10 (2.5e2 = 2.5 x 102 = 250).\nfloat (floating point real values) − Also called floats, they represent real numbers and are written with a decimal point dividing the integer and the fractional parts. 
Floats may also be in scientific notation, with E or e indicating the power of 10 (2.5e2 = 2.5 x 102 = 250).\n• complex (complex numbers) − are of the form a + bJ, where a and b are floats and J (or j) represents the square root of -1 (which is an imaginary number). The real part of the number is a, and the imaginary part is b. Complex numbers are not used much in Python programming.\ncomplex (complex numbers) − are of the form a + bJ, where a and b are floats and J (or j) represents the square root of -1 (which is an imaginary number). The real part of the number is a, and the imaginary part is b. Complex numbers are not used much in Python programming.\nIt is possible to represent an integer in hexa-decimal or octal form\n``````\n&gt;&gt;&gt; number = 0xA0F #Hexa-decimal\n&gt;&gt;&gt; number\n2575\n\n&gt;&gt;&gt; number = 0o37 #Octal\n&gt;&gt;&gt; number\n31\n``````\n\n### Examples\n\nHere are some examples of numbers.\nA complex number consists of an ordered pair of real floating-point numbers denoted by a &plus bj, where a is the real part and b is the imaginary part of the complex number.\n\n## Number Type Conversion\n\nPython converts numbers internally in an expression containing mixed types to a common type for evaluation. Sometimes, you need to coerce a number explicitly from one type to another to satisfy the requirements of an operator or function parameter.\n• Type int(x) to convert x to a plain integer.\nType int(x) to convert x to a plain integer.\n• Type long(x) to convert x to a long integer.\nType long(x) to convert x to a long integer.\n• Type float(x) to convert x to a floating-point number.\nType float(x) to convert x to a floating-point number.\n• Type complex(x) to convert x to a complex number with real part x and imaginary part zero.\nType complex(x) to convert x to a complex number with real part x and imaginary part zero.\n• Type complex(x, y) to convert x and y to a complex number with real part x and imaginary part y. 
x and y are numeric expressions\nType complex(x, y) to convert x and y to a complex number with real part x and imaginary part y. x and y are numeric expressions\n\n## Mathematical Functions\n\nPython includes the following functions that perform mathematical calculations.\nThe absolute value of x: the (positive) distance between x and zero.\nThe ceiling of x: the smallest integer not less than x.\ncmp(x, y)\n-1 if x < y, 0 if x == y, or 1 if x > y. Deprecated in Python 3. Instead use return (x>y)-(x\nThe exponential of x: ex\nThe absolute value of x.\nThe floor of x: the largest integer not greater than x.\nThe natural logarithm of x, for x > 0.\nThe base-10 logarithm of x for x > 0.\nThe largest of its arguments: the value closest to positive infinity\nThe smallest of its arguments: the value closest to negative infinity.\nThe fractional and integer parts of x in a two-item tuple. Both parts have the same sign as x. The integer part is returned as a float.\nThe value of x**y.\nx rounded to n digits from the decimal point. Python rounds away from zero as a tie-breaker: round(0.5) is 1.0 and round(-0.5) is -1.0.\nThe square root of x for x > 0.\n\n## Random Number Functions\n\nRandom numbers are used for games, simulations, testing, security, and privacy applications. Python includes the following functions that are commonly used.\nA random item from a list, tuple, or string.\nA randomly selected element from range(start, stop, step).\nA random float r, such that 0 is less than or equal to r and r is less than 1\nSets the integer starting value used in generating random numbers. Call this function before calling any other random module function. Returns None.\nRandomizes the items of a list in place. 
Returns None.\nA random float r, such that x is less than or equal to r and r is less than y.\n\n## Trigonometric Functions\n\nPython includes the following functions that perform trigonometric calculations.\nReturn the arc cosine of x, in radians.\nReturn the arc sine of x, in radians.\nReturn the arc tangent of x, in radians.\nReturn atan(y / x), in radians.\nReturn the cosine of x radians.\nReturn the Euclidean norm, sqrt(x*x + y*y).\nReturn the sine of x radians.\nReturn the tangent of x radians.\nConverts angle x from radians to degrees.\nConverts angle x from degrees to radians.\n\n## Mathematical Constants\n\nThe module also defines two mathematical constants −\npi\nThe mathematical constant pi.\ne\nThe mathematical constant e." ]
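The tables above can be exercised directly in the interpreter. A short sketch pulling together the literals, conversions, and a few math-module calls:

```python
import math

# Hexadecimal and octal integer literals
print(0xA0F)  # 2575
print(0o37)   # 31

# Explicit type conversion
print(int(3.7))       # 3 (truncates toward zero)
print(float(5))       # 5.0
print(complex(3, 5))  # (3+5j)

# A few functions from the math module
print(math.floor(-45.17))  # -46 (largest integer not greater than x)
print(math.ceil(-45.17))   # -45 (smallest integer not less than x)
print(math.sqrt(100))      # 10.0
print(math.modf(100.72))   # fractional and integer parts, both floats

# Python 3 rounds ties to the nearest even integer
print(round(0.5), round(1.5))  # 0 2
```

Note how floor and ceil behave on negative numbers: -46 is the largest integer not greater than -45.17, even though -45 is "closer" to zero.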
[ null, "https://www.facebook.com/tr", null, "https://insideaiml.com/assets/images/footer-logo.png", null, "https://ambrapaliaidata.blob.core.windows.net/ai-storage/landingPage/exploreCourses/machine%20learning%20with%20python.webp", null, "https://insideaiml.com/assets/images/staar.png", null, "https://insideaiml.com/assets/images/v.svg", null, "https://ambrapaliaidata.blob.core.windows.net/ai-storage/webinar%20images/4th%20dec%20without%20register.jpg", null, "https://insideaiml.com/assets/images/01img.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5870766,"math_prob":0.9946702,"size":394,"snap":"2021-43-2021-49","text_gpt3_token_len":130,"char_repetition_ratio":0.14358975,"word_repetition_ratio":0.0,"special_character_ratio":0.33756346,"punctuation_ratio":0.15277778,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9985768,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-03T13:24:08Z\",\"WARC-Record-ID\":\"<urn:uuid:9b832d0a-4f14-4561-a0fc-c517852b3be6>\",\"Content-Length\":\"256544\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3bc94448-8038-4335-bde3-d7187ca88d31>\",\"WARC-Concurrent-To\":\"<urn:uuid:1c1ce2f2-d117-41fa-b74e-889c6d7ec74b>\",\"WARC-IP-Address\":\"143.110.253.149\",\"WARC-Target-URI\":\"https://insideaiml.com/blog/Python-3---Numbers-470\",\"WARC-Payload-Digest\":\"sha1:HUXEZ33K23EVOQKYTZULWAG2U7XL4RLK\",\"WARC-Block-Digest\":\"sha1:2CPNBERUDOD2HVIUWQZAMHHO7LLNWJSJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362879.45_warc_CC-MAIN-20211203121459-20211203151459-00259.warc.gz\"}"}
https://www.domestic-engineering.com/about/ignorance/rebellion.html
[ "# My rebellion against ignorance", null, "Photo credit to Torsten Dederichs\n\n## Math is for people who like understanding things\n\nThe world is full of literature about money for the mathematically ignorant and emotionally undisciplined. I am not writing to those people. If this blog is confusing or the equations unclear, the reader does not need me to hold their hand through an abbreviated explanation of calculus, they need a calculus and/or differential equations course which is available for free online in many places. No portion of this text requires more than a cursory understanding of calculus and the ability to follow step by step derivations. Most of the broad conclusions can be understood with algebra skills that all reasonably educated adults possess.\n\n## Ubiquitous and terrible ways to avoid writing equations\n\nPerhaps the most common ignorance inducing statement made in the financial math world is the attempt to describe the interest on a loan with a couple of examples.\n\nFrom Nerdwallet.com retrieved 2017-05-22\n\na 30-year, fixed-rate $300,000 mortgage with a 6% APR . . . for a total repayment amount of$647,515. If, however, you took out the same mortgage and paid $40,000 in one-time fees upfront, you would have a 4% APR and end up paying . . . a total cost of$555,607.\n\nFrom DaveRamsey.com retrieved 2017-05-22\n\nA $175,000, 30-year mortgage with a 4% interest rate will cost you$68,000 more over the life of the loan than a 15-year mortgage will.\n\nThat is to say, describe the following equation\n\n$$\\text{Interest} = f(B_0,r,t_\\text{term})$$\n\nusing only a few example points. $$B_0 =$$ loan amount, $$r =$$ interest rate, and $$t_\\text{term} =$$ the loan term. This is fatuously incomplete because if we know nothing else about that equation, we have no idea what kinds of functions we might be dealing with. Is interest a linear or exponential function of $$B_0$$? of $$r$$? logarithmic? sinusoidal? 
The author imparts almost no information with their examples except illustrations of their "Debt is really costly" or "Low interest rates are preferable to high ones" thesis.

The real equation is not as complicated as the black-box version suggests it might be, because it is not an independent function of $$r$$ and $$t_\text{term}$$; it is a function of their product $$rt_\text{term}$$. It also scales linearly with $$B_0$$. That information takes the mystery from a function of 3 variables to a scaling relation of 1.

$$\frac{\text{Interest}}{B_0} = f(rt_\text{term})= \frac{rt_\text{term}}{1-e^{-rt_\text{term}}}-1$$

This equation is simple to understand, easy to apply, and relevant to every loan with a constant repayment rate. For reasons I cannot understand, it also seems to be a giant secret in the finance guru business. This problem is widespread in the world of financial literature, and is not limited to describing the interest on a loan (though that is the most common case).

## Business Insider thinks $$e$$ is too confusing to publish

Consider this pile of garbage from businessinsider.com in an article "11 Personal Finance Equations You Need To Know", which is introduced with the overselling line "We've rounded up 11 math equations that can be used every single day. Write them down, whip out your pencil, and prepare to budget like a genius" despite the fact that only one of their 11 equations has anything to do with budgeting, and it's so tautological it must be a joke. (Variables changed for consistency with this blog.)

$$P = \frac{B_0 \frac{r}{f}}{1-\left(1+\frac{r}{f}\right)^{-t_\text{term} f}}$$

The equation describes the payments on a loan as a function of the loan size, the payment frequency, the loan term, and the interest rate.
It is misleading or insulting for three reasons: it appears to be derived from a finite difference analysis of the differential equation of a loan balance, which it is not; no work is shown; and the approximations made are not stated.

If we look at the real equation, the one which is the result of a 1st-order ODE, we see that the authors at Business Insider decided the readers would not know what $$e$$ is, so they gave it a cumbersome approximation. They assume a sizable portion of their readership would be able to properly multiply and convert units in frequency and interest rate, raise quantities to fractional powers, and apply order of operations to a complex fraction, but $$e$$ was too foreign. Replacing $$e$$ with 2.71 was also not an option, for some indefensible, unknown reason.

$$P = \frac{B_0 \frac{r}{f}}{1-e^{-rt_\text{term}}}$$

From the same article is this ridiculous attempt to compute the net present value of an annuity. $$F$$ is the cash flow rate ($ per year), $$r$$ is the rate of return, and $$t$$ is the length of the annuity.

$$\text{NPV} = F\left( \frac{1}{r}-\frac{1}{r(1+r)^t}\right)$$

The real equation, which any 2nd-year college student in a STEM curriculum should be able to derive, is both more correct and concise.

$$\text{NPV} = \frac{F}{r} \left( 1 - e^{-rt} \right)$$

Again, Business Insider believes its readership is too uneducated to understand $$e$$? Do they think the same thing about $$\pi$$? What is wrong with their editors? Why is their instinct to publish cumbersome, inaccurate formulas instead of combating ignorance with explanation?
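These formulas are easy to check numerically against the quoted examples. A minimal sketch (the function names are mine; the continuous-time expressions are the ones derived in this post, the discrete payment formula is Business Insider's):

```python
import math

def interest_ratio(r, t):
    """Total interest per dollar borrowed at constant repayment rate:
    Interest/B0 = r*t / (1 - e^(-r*t)) - 1."""
    x = r * t
    return x / (1.0 - math.exp(-x)) - 1.0

def payment_discrete(b0, r, t, f=12):
    """Business Insider's discrete payment formula (f payments per year)."""
    i = r / f
    return b0 * i / (1.0 - (1.0 + i) ** (-t * f))

def npv_annuity(flow, r, t):
    """Continuous-time net present value: NPV = F/r * (1 - e^(-r*t))."""
    return flow / r * (1.0 - math.exp(-r * t))

# Nerdwallet's first example: $300,000 at 6% APR over 30 years.
monthly = payment_discrete(300_000, 0.06, 30)        # about $1,798.65/month
total_repaid = monthly * 12 * 30                     # about $647,515
interest_cont = interest_ratio(0.06, 30) * 300_000   # about $346,938
print(round(total_repaid), round(total_repaid - 300_000), round(interest_cont))
```

The continuous-time scaling relation reproduces the quoted interest ($347,515) to within a fraction of a percent, which is the point: one curve in $$rt_\text{term}$$ covers every constant-repayment loan.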
[ null, "https://www.domestic-engineering.com/about/ignorance/tagimage_mini.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9610002,"math_prob":0.9940554,"size":5619,"snap":"2023-14-2023-23","text_gpt3_token_len":1210,"char_repetition_ratio":0.10828139,"word_repetition_ratio":0.0021367522,"special_character_ratio":0.22192562,"punctuation_ratio":0.0900474,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9969987,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-04-01T03:52:16Z\",\"WARC-Record-ID\":\"<urn:uuid:62dc098a-7314-465f-8696-db9fac15edf9>\",\"Content-Length\":\"12185\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2468a929-bbb6-4901-b34b-83ad5690c208>\",\"WARC-Concurrent-To\":\"<urn:uuid:c7c11166-c7e0-4162-bb36-d59c5b1fb8df>\",\"WARC-IP-Address\":\"172.102.240.42\",\"WARC-Target-URI\":\"https://www.domestic-engineering.com/about/ignorance/rebellion.html\",\"WARC-Payload-Digest\":\"sha1:GA5QOKKEAUA2ICHOG7YAMQ4PDZQTLZEN\",\"WARC-Block-Digest\":\"sha1:3WVYCH5VLJ6UCGSAM5FIYS33WWJLI2VA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296949701.0_warc_CC-MAIN-20230401032604-20230401062604-00679.warc.gz\"}"}
https://tech-story.net/basic-electronics-diodes/
[ "", null, "Basic Electronics - Diodes", null, "# Basic Electronics – Diodes\n\nAfter having known about various components, let us focus on another important component in the field of electronics, known as a Diode. A semiconductor diode is a two terminal electronic component with a PN junction. This is also called as a Rectifier.", null, "The anode which is the positive terminal of a diode is represented with A and the cathode, which is the negative terminal is represented with K. To know the anode and cathode of a practical diode, a fine line is drawn on the diode which means cathode, while the other end represents anode.", null, "As we had already discussed about the P-type and N-type semiconductors, and the behavior of their carriers, let us now try to join these materials together to see what happens.\n\n## Formation of a Diode\n\nIf a P-type and an N-type material are brought close to each other, both of them join to form a junction, as shown in the figure below.", null, "A P-type material has holes as the majority carriers and an N-type material has electrons as the majority carriers. As opposite charges attract, few holes in P-type tend to go to n-side, whereas few electrons in N-type tend to go to P-side.\n\nAs both of them travel towards the junction, holes and electrons recombine with each other to neutralize and forms ions. Now, in this junction, there exists a region where the positive and negative ions are formed, called as PN junction or junction barrier as shown in the figure.", null, "The formation of negative ions on P-side and positive ions on N-side results in the formation of a narrow charged region on either side of the PN junction. This region is now free from movable charge carriers. The ions present here have been stationary and maintain a region of space between them without any charge carriers.\n\nAs this region acts as a barrier between P and N type materials, this is also called as Barrier junction. 
This has another name called as Depletion region meaning it depletes both the regions. There occurs a potential difference VD due to the formation of ions, across the junction called as Potential Barrier as it prevents further movement of holes and electrons through the junction.\n\n## Biasing of a Diode\n\nWhen a diode or any two-terminal component is connected in a circuit, it has two biased conditions with the given supply. They are Forward biased condition and Reverse biased condition. Let us know them in detail.\n\n### Forward Biased Condition\n\nWhen a diode is connected in a circuit, with its anode to the positive terminal and cathode to the negative terminal of the supply, then such a connection is said to be forward biased condition. This kind of connection makes the circuit more and more forward biased and helps in more conduction. A diode conducts well in forward biased condition.\n\n### Reverse Biased Condition\n\nWhen a diode is connected in a circuit, with its anode to the negative terminal and cathode to the positive terminal of the supply, then such a connection is said to be Reverse biased condition. This kind of connection makes the circuit more and more reverse biased and helps in minimizing and preventing the conduction. A diode cannot conduct in reverse biased condition.", null, "Let us now try to know what happens if a diode is connected in forward biased and in reverse biased conditions.\n\n## Working under Forward Biased\n\nWhen an external voltage is applied to a diode such that it cancels the potential barrier and permits the flow of current is called as forward bias. When anode and cathode are connected to positive and negative terminals respectively, the holes in P-type and electrons in N-type tend to move across the junction, breaking the barrier. 
There exists a free flow of current with this, almost eliminating the barrier.", null, "With the repulsive force provided by positive terminal to holes and by negative terminal to electrons, the recombination takes place in the junction. The supply voltage should be such high that it forces the movement of electrons and holes through the barrier and to cross it to provide forward current.\n\nForward Current is the current produced by the diode when operating in forward biased condition and it is indicated by If.\n\n## Working under Reverse Biased\n\nWhen an external voltage is applied to a diode such that it increases the potential barrier and restricts the flow of current is called as Reverse bias. When anode and cathode are connected to negative and positive terminals respectively, the electrons are attracted towards the positive terminal and holes are attracted towards the negative terminal. Hence both will be away from the potential barrier increasing the junction resistance and preventing any electron to cross the junction.\n\nThe following figure explains this. The graph of conduction when no field is applied and when some external field is applied are also drawn.", null, "With the increasing reverse bias, the junction has few minority carriers to cross the junction. This current is normally negligible. This reverse current is almost constant when the temperature is constant. But when this reverse voltage increases further, then a point called reverse breakdown occurs, where an avalanche of current flows through the junction. This high reverse current damages the device.\n\nReverse current is the current produced by the diode when operating in reverse biased condition and it is indicated by Ir. Hence a diode provides high resistance path in reverse biased condition and doesn’t conduct, where it provides a low resistance path in forward biased condition and conducts. 
Thus we can conclude that a diode is a one-way device which conducts in forward bias and acts as an insulator in reverse bias. This behavior makes it work as a rectifier, which converts AC to DC.\n\n### Peak Inverse Voltage\n\nPeak Inverse Voltage is shortly called as PIV. It states the maximum voltage applied in reverse bias. The Peak Inverse Voltage can be defined as “The maximum reverse voltage that a diode can withstand without being destroyed”. Hence, this voltage is considered during reverse biased condition. It denotes how a diode can be safely operated in reverse bias.\n\n## Purpose of a Diode\n\nA diode is used to block the electric current flow in one direction, i.e. in forward direction and to block in reverse direction. This principle of diode makes it work as a Rectifier.\n\nFor a circuit to allow the current flow in one direction but to stop in the other direction, the rectifier diode is the best choice. Thus the output will be DC removing the AC components. The circuits such as half wave and full wave rectifiers are made using diodes, which can be studied in Electronic Circuits tutorials.\n\nA diode is also used as a Switch. It helps a faster ON and OFF for the output that should occur in a quick rate.\n\n## V – I Characteristics of a Diode\n\nA Practical circuit arrangement for a PN junction diode is as shown in the following figure. An ammeter is connected in series and voltmeter in parallel, while the supply is controlled through a variable resistor.", null, "During the operation, when the diode is in forward biased condition, at some particular voltage, the potential barrier gets eliminated. Such a voltage is called as Cut-off Voltage or Knee Voltage. 
If the forward voltage exceeds beyond the limit, the forward current rises up exponentially and if this is done further, the device is damaged due to overheating.\n\nThe following graph shows the state of diode conduction in forward and reverse biased conditions.", null, "During the reverse bias, current produced through minority carriers exist known as “Reverse current”. As the reverse voltage increases, this reverse current increases and it suddenly breaks down at a point, resulting in the permanent destruction of the junction.", null, "" ]
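The exponential forward current and the negligible reverse saturation current described above are captured by the ideal-diode (Shockley) equation. The article does not state the equation itself; this sketch assumes representative values for the saturation current and thermal voltage, purely for illustration:

```python
import math

def diode_current(v, i_s=1e-12, n=1.0, v_t=0.02585):
    """Ideal-diode (Shockley) equation: I = I_s * (e^(V / (n*V_t)) - 1).
    i_s: reverse saturation current (A), n: ideality factor,
    v_t: thermal voltage (about 25.85 mV near room temperature)."""
    return i_s * (math.exp(v / (n * v_t)) - 1.0)

# Forward bias: current rises exponentially with voltage
print(diode_current(0.60))  # about 0.012 A for this i_s
print(diode_current(0.66))  # roughly 10x larger for only +60 mV

# Reverse bias: only the negligible saturation current flows
print(diode_current(-1.0))  # about -1e-12 A
```

Note that this model does not include the avalanche breakdown region; beyond the device's PIV, the real reverse current departs sharply from the tiny saturation value.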
[ null, "https://mc.yandex.ru/watch/74485888", null, "https://tech-story.net/basic-electronics-diodes/", null, "https://tech-story.net/basic-electronics-diodes/image/gif,GIF89a%01%00%01%00%80%00%00%00%00%00%FF%FF%FF%21%F9%04%01%00%00%00%00%2C%00%00%00%00%01%00%01%00%00%02%01D%00%3B", null, "https://tech-story.net/basic-electronics-diodes/image/gif,GIF89a%01%00%01%00%80%00%00%00%00%00%FF%FF%FF%21%F9%04%01%00%00%00%00%2C%00%00%00%00%01%00%01%00%00%02%01D%00%3B", null, "https://tech-story.net/basic-electronics-diodes/image/gif,GIF89a%01%00%01%00%80%00%00%00%00%00%FF%FF%FF%21%F9%04%01%00%00%00%00%2C%00%00%00%00%01%00%01%00%00%02%01D%00%3B", null, "https://tech-story.net/basic-electronics-diodes/image/gif,GIF89a%01%00%01%00%80%00%00%00%00%00%FF%FF%FF%21%F9%04%01%00%00%00%00%2C%00%00%00%00%01%00%01%00%00%02%01D%00%3B", null, "https://tech-story.net/basic-electronics-diodes/image/gif,GIF89a%01%00%01%00%80%00%00%00%00%00%FF%FF%FF%21%F9%04%01%00%00%00%00%2C%00%00%00%00%01%00%01%00%00%02%01D%00%3B", null, "https://tech-story.net/basic-electronics-diodes/image/gif,GIF89a%01%00%01%00%80%00%00%00%00%00%FF%FF%FF%21%F9%04%01%00%00%00%00%2C%00%00%00%00%01%00%01%00%00%02%01D%00%3B", null, "https://tech-story.net/basic-electronics-diodes/image/gif,GIF89a%01%00%01%00%80%00%00%00%00%00%FF%FF%FF%21%F9%04%01%00%00%00%00%2C%00%00%00%00%01%00%01%00%00%02%01D%00%3B", null, "https://tech-story.net/basic-electronics-diodes/image/gif,GIF89a%01%00%01%00%80%00%00%00%00%00%FF%FF%FF%21%F9%04%01%00%00%00%00%2C%00%00%00%00%01%00%01%00%00%02%01D%00%3B", null, "https://tech-story.net/basic-electronics-diodes/image/gif,GIF89a%01%00%01%00%80%00%00%00%00%00%FF%FF%FF%21%F9%04%01%00%00%00%00%2C%00%00%00%00%01%00%01%00%00%02%01D%00%3B", null, "https://i1.wp.com/www.paypal.com/en_FR/i/scr/pixel.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9531726,"math_prob":0.8550034,"size":7657,"snap":"2021-43-2021-49","text_gpt3_token_len":1514,"char_repetition_ratio":0.15248922,"word_repetition_ratio":0.09034268,"special_character_ratio":0.18597361,"punctuation_ratio":0.08028169,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96114784,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,null,null,1,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,9,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-09T07:51:31Z\",\"WARC-Record-ID\":\"<urn:uuid:447a1055-eaee-47b9-a135-cd555c34b755>\",\"Content-Length\":\"187320\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:524a9fb6-7562-4290-b41b-ec13cadfd9d9>\",\"WARC-Concurrent-To\":\"<urn:uuid:7badd1ed-d6f8-49ab-9aff-3a6bc01de63c>\",\"WARC-IP-Address\":\"172.67.182.101\",\"WARC-Target-URI\":\"https://tech-story.net/basic-electronics-diodes/\",\"WARC-Payload-Digest\":\"sha1:QG4E63ZIKHMZM4TTE5QKXUWHG5AUDTE2\",\"WARC-Block-Digest\":\"sha1:SS7UR6DGLD372HN27FMGK2MY77U3732E\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363689.56_warc_CC-MAIN-20211209061259-20211209091259-00480.warc.gz\"}"}
https://mb3is.megx.net/gustame/constrained-analyses/cca/partial-canonical-correspondence-analysis
[ "### Partial Canonical Correspondence Analysis\n\n#### b\n\nFigure 1: An illustration of \"partialling out\" the influence of a set of variables (W) from a model. a) Both the explanatory variable(s) in matrices X and Y explain a portion of the variation in the response data (Y). b) After the partialling out the effect of W (which may be a single variable or a set of variables), only the variance in the response variables (Y) which can be exclusively explained by the variance in one set of explanatory variables (X) is retained.\n\nPartial canonical correspondence analysis (pCCA) is an extension of CCA wherein the influence of a set of variables stored in an additional matrix can be controlled for. The concept is related to partial correlation.\n\nThis is particularly useful when one wishes to control for a set of variables whose influence is known or at least anticipated and which are not of immediate interest. Examples include geographic distance, latitudinal temperature gradients, or depth-dependent photogradients.\n\nControlling for the effect of different sampling or measurement times or locations between samples is also possible. It must be determined whether time and/or space are best represented by dummy variables or appropriately transformed quantitative variables. As CCA is a method tuned to represent centroids (multivariate means) of data sets, if the influence of time and/or space is restricted to shifting these centroids then its effect can be well controlled for. However, if time/space effects show more complex influence or interact with other explanatory or control variables, these factors cannot be controlled for by pCCA.\n\nThe method can also be applied to examine the effect of a single variable in a matrix of explanatory variables using pCCA, while controlling for the other variables. This is done by placing all other explanatory variables in a matrix of control variables. Their effects may then be partialled out. 
A single canonical axis and eigenvalue will be generated which express the variation that the variable of interest is responsible for.\n\nMASAME pCCA app\n\nReferences" ]
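The partialling-out idea in Figure 1 can be illustrated outside the CCA setting with plain least-squares residualization on synthetic data (a sketch of the concept, not MASAME's implementation): regressing W out of both the explanatory and response variables removes the variation they share only through W.

```python
import random
from statistics import mean

random.seed(0)

# Synthetic data: a covariate w drives both x and y,
# creating a spurious x-y association.
n = 500
w = [random.gauss(0, 1) for _ in range(n)]
x = [2.0 * wi + random.gauss(0, 1) for wi in w]
y = [3.0 * wi + random.gauss(0, 1) for wi in w]

def corr(a, b):
    """Pearson correlation coefficient."""
    ma, mb = mean(a), mean(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

def residualize(v, w):
    """Remove the part of v linearly explained by w (simple regression)."""
    mw, mv = mean(w), mean(v)
    beta = sum((wi - mw) * (vi - mv) for wi, vi in zip(w, v)) \
        / sum((wi - mw) ** 2 for wi in w)
    return [vi - (mv + beta * (wi - mw)) for wi, vi in zip(w, v)]

print(corr(x, y))                                  # strong, induced by w
print(corr(residualize(x, w), residualize(y, w)))  # near zero
```

pCCA performs the analogous operation in the multivariate, chi-square-distance setting of CCA, but the interpretation is the same: only the variance in Y exclusively attributable to X survives.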
https://www.colorhexa.com/180216
[ "# #180216 Color Information\n\nIn a RGB color space, hex #180216 is composed of 9.4% red, 0.8% green and 8.6% blue. Whereas in a CMYK color space, it is composed of 0% cyan, 91.7% magenta, 8.3% yellow and 90.6% black. It has a hue angle of 305.5 degrees, a saturation of 84.6% and a lightness of 5.1%. #180216 color hex could be obtained by blending #30042c with #000000. Closest websafe color is: #000000.\n\n• R 9\n• G 1\n• B 9\nRGB color chart\n• C 0\n• M 92\n• Y 8\n• K 91\nCMYK color chart\n\n#180216 color description : Very dark (mostly black) magenta.\n\n# #180216 Color Conversion\n\nThe hexadecimal color #180216 has RGB values of R:24, G:2, B:22 and CMYK values of C:0, M:0.92, Y:0.08, K:0.91. Its decimal value is 1573398.\n\nHex triplet RGB Decimal 180216 `#180216` 24, 2, 22 `rgb(24,2,22)` 9.4, 0.8, 8.6 `rgb(9.4%,0.8%,8.6%)` 0, 92, 8, 91 305.5°, 84.6, 5.1 `hsl(305.5,84.6%,5.1%)` 305.5°, 91.7, 9.4 000000 `#000000`\nCIE-LAB 2.67, 10.744, -6.66 0.543, 0.296, 0.787 0.334, 0.182, 0.296 2.67, 12.641, 328.205 2.67, 3.409, -3.675 5.437, 8.321, -4.782 00011000, 00000010, 00010110\n\n# Color Schemes with #180216\n\n• #180216\n``#180216` `rgb(24,2,22)``\n• #021804\n``#021804` `rgb(2,24,4)``\nComplementary Color\n• #0f0218\n``#0f0218` `rgb(15,2,24)``\n• #180216\n``#180216` `rgb(24,2,22)``\n• #18020b\n``#18020b` `rgb(24,2,11)``\nAnalogous Color\n• #02180f\n``#02180f` `rgb(2,24,15)``\n• #180216\n``#180216` `rgb(24,2,22)``\n• #0b1802\n``#0b1802` `rgb(11,24,2)``\nSplit Complementary Color\n• #021618\n``#021618` `rgb(2,22,24)``\n• #180216\n``#180216` `rgb(24,2,22)``\n• #161802\n``#161802` `rgb(22,24,2)``\n• #040218\n``#040218` `rgb(4,2,24)``\n• #180216\n``#180216` `rgb(24,2,22)``\n• #161802\n``#161802` `rgb(22,24,2)``\n• #021804\n``#021804` `rgb(2,24,4)``\n• #000000\n``#000000` `rgb(0,0,0)``\n• #000000\n``#000000` `rgb(0,0,0)``\n• #000000\n``#000000` `rgb(0,0,0)``\n• #180216\n``#180216` `rgb(24,2,22)``\n• #30042c\n``#30042c` `rgb(48,4,44)``\n• #470641\n``#470641` 
`rgb(71,6,65)``\n• #5f0857\n``#5f0857` `rgb(95,8,87)``\nMonochromatic Color\n\n# Alternatives to #180216\n\nBelow, you can see some colors close to #180216. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #150218\n``#150218` `rgb(21,2,24)``\n• #160218\n``#160218` `rgb(22,2,24)``\n• #180218\n``#180218` `rgb(24,2,24)``\n• #180216\n``#180216` `rgb(24,2,22)``\n• #180214\n``#180214` `rgb(24,2,20)``\n• #180212\n``#180212` `rgb(24,2,18)``\n• #180211\n``#180211` `rgb(24,2,17)``\nSimilar Colors\n\n# #180216 Preview\n\nThis text has a font color of #180216.\n\n``<span style=\"color:#180216;\">Text here</span>``\n#180216 background color\n\nThis paragraph has a background color of #180216.\n\n``<p style=\"background-color:#180216;\">Content here</p>``\n#180216 border color\n\nThis element has a border color of #180216.\n\n``<div style=\"border:1px solid #180216;\">Content here</div>``\nCSS codes\n``.text {color:#180216;}``\n``.background {background-color:#180216;}``\n``.border {border:1px solid #180216;}``\n\n# Shades and Tints of #180216\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #060005 is the darkest color, while #fef3fd is the lightest one.\n\n• #060005\n``#060005` `rgb(6,0,5)``\n• #180216\n``#180216` `rgb(24,2,22)``\n• #2a0427\n``#2a0427` `rgb(42,4,39)``\n• #3c0537\n``#3c0537` `rgb(60,5,55)``\n• #4e0748\n``#4e0748` `rgb(78,7,72)``\n• #600858\n``#600858` `rgb(96,8,88)``\n• #730a69\n``#730a69` `rgb(115,10,105)``\n• #850b7a\n``#850b7a` `rgb(133,11,122)``\n• #970d8a\n``#970d8a` `rgb(151,13,138)``\n• #a90e9b\n``#a90e9b` `rgb(169,14,155)``\n• #bb10ab\n``#bb10ab` `rgb(187,16,171)``\n• #cd11bc\n``#cd11bc` `rgb(205,17,188)``\n• #df13cd\n``#df13cd` `rgb(223,19,205)``\n``#ec1ad9` `rgb(236,26,217)``\n• #ed2cdc\n``#ed2cdc` `rgb(237,44,220)``\n• #ef3edf\n``#ef3edf` `rgb(239,62,223)``\n• #f050e2\n``#f050e2` `rgb(240,80,226)``\n• #f262e5\n``#f262e5` `rgb(242,98,229)``\n• #f374e8\n``#f374e8` `rgb(243,116,232)``\n• #f586eb\n``#f586eb` `rgb(245,134,235)``\n• #f698ee\n``#f698ee` `rgb(246,152,238)``\n• #f8aaf1\n``#f8aaf1` `rgb(248,170,241)``\n• #f9bcf4\n``#f9bcf4` `rgb(249,188,244)``\n• #fbcff7\n``#fbcff7` `rgb(251,207,247)``\n• #fce1fa\n``#fce1fa` `rgb(252,225,250)``\n• #fef3fd\n``#fef3fd` `rgb(254,243,253)``\nTint Color Variation\n\n# Tones of #180216\n\nA tone is produced by adding gray to any pure hue. 
In this case, #0e0c0e is the less saturated color, while #1a0018 is the most saturated one.\n\n• #0e0c0e\n``#0e0c0e` `rgb(14,12,14)``\n• #0f0b0f\n``#0f0b0f` `rgb(15,11,15)``\n• #100a0f\n``#100a0f` `rgb(16,10,15)``\n• #110910\n``#110910` `rgb(17,9,16)``\n• #120811\n``#120811` `rgb(18,8,17)``\n• #130712\n``#130712` `rgb(19,7,18)``\n• #140613\n``#140613` `rgb(20,6,19)``\n• #150514\n``#150514` `rgb(21,5,20)``\n• #160414\n``#160414` `rgb(22,4,20)``\n• #170315\n``#170315` `rgb(23,3,21)``\n• #180216\n``#180216` `rgb(24,2,22)``\n• #190117\n``#190117` `rgb(25,1,23)``\n• #1a0018\n``#1a0018` `rgb(26,0,24)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #180216 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
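The hex-to-RGB and RGB-to-CMYK conversion values quoted above (R 24, G 2, B 22; C 0, M 92, Y 8, K 91 for #180216) can be reproduced with a short Python sketch using the standard conversion formulas; the function names here are illustrative, not from any particular library:

```python
def hex_to_rgb(hex_color):
    """Convert a hex triplet like '#180216' to an (R, G, B) tuple of 0-255 ints."""
    h = hex_color.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_cmyk(r, g, b):
    """Convert 0-255 RGB to CMYK fractions in [0, 1] (standard naive formula)."""
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0  # pure black
    r_, g_, b_ = r / 255, g / 255, b / 255
    k = 1 - max(r_, g_, b_)
    c = (1 - r_ - k) / (1 - k)
    m = (1 - g_ - k) / (1 - k)
    y = (1 - b_ - k) / (1 - k)
    return c, m, y, k

print(hex_to_rgb('#180216'))            # (24, 2, 22)
c, m, y, k = rgb_to_cmyk(24, 2, 22)
print(round(m, 2), round(k, 2))         # 0.92 0.91
```

Rounding the CMYK fractions to two places gives the 0/92/8/91 percentages listed on the page.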
https://calculator.name/numberstowords/2
[ "# Write 2 in English Words\n\nHow to write 2 in english words? - The number 2 written in english words is \"two\". Spell, say, write number 2 in english by using our numbers to words calculator.\n\n 2 in words two 2 spelling two\nNumbers to Words Converter\n\n# 2 spelling\n\nHow do you spell number in currency spelling? Here we have made a list of the currency names you would need to write spellings in order to deposit money against your currency cheques, DD, loan payments or more.\nJust find the currency and get spelling for it.\n\n• Afghanistan → Afghani → two Afghani\n• Albania → Leke → two Leke\n• Algeria → Dinar → two Dinar\n• Andorra → Euro → two Euro\n• Angola → Kwanza → two Kwanza\n• Antigua and Barbuda → Dollar → two dollar\n• Argentina → Peso → two Peso\n• Armenia → Dram → two Dram\n• Australia → Dollar → two dollar\n• Austria → Euro → two Euro\n• Azerbaijan → Manat → two Manat\n• Bahamas → Dollar → two dollar\n• Bahrain → Dinar → two dinar\n• Bangladesh → Taka → two Taka\n• Barbados → Dollar → two dollar\n• Belarus → Ruble → two ruble\n• Belgium → Euro → two Euro\n• Belize → Dollar → two dollar\n• Benin → Franc → two Franc\n• Bhutan → Ngultrum → two Ngultrum\n• Bolivia → Boliviano → two Boliviano\n• Bosnia and Herzegovina → Marka → two Marka\n• Botswana → Pula → two Pula\n• Brazil → Real → two Real\n• Brunei → Dollar → two dollar\n• Bulgaria → Lev → two Lev\n• Burkina Faso → Franc → two Franc\n• Burundi → Franc → two franc\n• Cambodia → Riel → two Riel\n• Cameroon → Franc → two Franc\n• Canada → Dollar → two dollar\n• Cape Verde → Escudo → two escudo\n• Central African Republic → Franc → two Franc\n• Chad → Franc → two Franc\n• Chile → Peso → two Peso\n• China → Yuan → two Yuan\n• Colombia → Peso → two Peso\n• Costa Rica → Colon → two Colon\n• Croatia → Kuna → two Kuna\n• Cuba → Peso → two Peso\n• Cyprus → Pound → two pound\n• Czech Republic → Koruna → two Koruna\n• Denmark → Krone → two Krone\n• Djibouti → Franc → two franc\n• Dominica → Dollar → two 
dollar\n• Dominican Republic → Peso → two Peso\n• East Timor → Dollar → two dollar\n• Ecuador → Dollar → two dollar\n• Egypt → Pound → two pound\n• El Salvador → Dollar → two dollar\n• Equatorial Guinea → Franc → two Franc\n• Eritrea → Nakfa → two Nakfa\n• Estonia → Kroon → two Kroon\n• Ethiopia → Birr → two Birr\n• Fiji → Dollar → two dollar\n• Finland → Euro → two Euro\n• France → Euro → two Euro\n• Gabon → Franc → two Franc\n• Gambia → Dalasi → two Dalasi\n• Georgia → Lari → two Lari\n• Germany → Euro → two Euro\n• Ghana → Cedi → two Cedi\n• Greece → Euro → two Euro\n• Grenada → Dollar → two dollar\n• Guatemala → Quetzal → two Quetzal\n• Guinea → Franc → two franc\n• Guinea-Bissau → Franc → two Franc\n• Guyana → Dollar → two dollar\n• Haiti → Gourde → two Gourde\n• Honduras → Lempira → two Lempira\n• Hungary → Forint → two Forint\n• Iceland → Krona → two krona\n• India → Rupee → two Rupee\n• Indonesia → Rupiah → two Rupiah\n• Iraq → Dollar → two dollar\n• Ireland → Euro → two Euro\n• Israel → Shekel → two Shekel\n• Italy → Euro → two Euro\n• Jamaica → Dollar → two dollar\n• Japan → Yen → two Yen\n• Jordan → Dinar → two dinar\n• Kazakhstan → Tenge → two Tenge\n• Kenya → Shilling → two shilling\n• Kiribati → Dollar → two dollar\n• Korea, North → Won → two Won\n• Korea, South → Won → two Won\n• Kuwait → Dinar → two dinar\n• Kyrgyzstan → Som → two Som\n• Laos → Kip → two Kip\n• Latvia → Lats → two Lats\n• Lebanon → Pound → two pound\n• Lesotho → Maluti → two Maluti\n• Liberia → Dollar → two dollar\n• Libya → Dinar → two dinar\n• Liechtenstein → Franc → two franc\n• Lithuania → Litas → two Litas\n• Luxembourg → Euro → two Euro\n• Macedonia → Denar → two Denar\n• Madagascar → Franc → two franc\n• Malawi → Kwacha → two Kwacha\n• Malaysia → Ringgit → two Ringgit\n• Maldives → Rufiya → two Rufiya\n• Mali → Franc → two Franc\n• Malta → Euro → two Euro\n• Mauritania → Ouguiya → two Ouguiya\n• Mauritius → Rupee → two rupee\n• Mexico → Peso → two peso\n• Moldova → Leu → two 
Leu\n• Monaco → Euro → two Euro\n• Mongolia → Tugrik → two Tugrik\n• Montenegro → Euro → two Euro\n• Morocco → Dirham → two Dirham\n• Mozambique → Metical → two Metical\n• Myanmar → Kyat → two Kyat\n• Namibia → Dollar → two dollar\n• Nauru → Dollar → two dollar\n• Nepal → Rupee → two rupee\n• Netherlands → Euro → two Euro\n• New Zealand → Dollar → two dollar\n• Nicaragua → Cordoba → two cordoba\n• Niger → Franc → two Franc\n• Nigeria → Naira → two Naira\n• Norway → Krone → two krone\n• Oman → Rial → two rial\n• Pakistan → Rupee → two rupee\n• Palau → Dollar → two dollar\n• Panama → Dollar → two dollar\n• Papua New Guinea → Kina → two Kina\n• Paraguay → Guaraní → two Guaraní\n• Philippines → Peso → two Peso\n• Poland → Zloty → two Zloty\n• Portugal → Escudo → two escudo\n• Qatar → Riyal → two riyal\n• Romania → Leu → two Leu\n• Russia → Ruble → two Ruble\n• Rwanda → Franc → two franc\n• St. Kitts and Nevis → Dollar → two dollar\n• St. Lucia → Dollar → two dollar\n• St. Vincent and the Grena → Dollar → two dollar\n• Samoa → Tala → two Tala\n• San Marino → Euro → two Euro\n• São Tomé and Príncipe → Dobra → two Dobra\n• Saudi Arabia → Riyal → two Riyal\n• Senegal → Franc → two Franc\n• Seychelles → Rupee → two rupee\n• Sierra Leone → Leone → two Leone\n• Singapore → Dollar → two dollar\n• Slovakia → Koruna → two Koruna\n• Slovenia → Euro → two euro\n• Solomon Islands → Dollar → two dollar\n• Somalia → Shilling → two shilling\n• South Africa → Rand → two Rand\n• Spain → Euro → two Euro\n• Sri Lanka → Rupee → two rupee\n• Sudan → Dinar → two Dinar\n• Suriname → Dollar → two dollar\n• Swaziland → Lilangeni → two Lilangeni\n• Sweden → Krona → two Krona\n• Switzerland → Franc → two franc\n• Syria → Pound → two pound\n• Taiwan → Dollar → two dollar\n• Tajikistan → Somoni → two somoni\n• Tanzania → Shilling → two shilling\n• Thailand → Baht → two baht\n• Togo → Franc → two Franc\n• Tonga → Pa'anga → two Pa'anga\n• Trinidad and Tobago → Dollar → two dollar\n• Tunisia → Dinar → 
two dinar\n• Turkey → Lira → two lira\n• Turkmenistan → Manat → two Manat\n• Tuvalu → Dollar → two dollar\n• Uganda → Shilling → two shilling\n• Ukraine → Hryvna → two Hryvna\n• United Arab Emirates → Dirham → two dirham\n• United Kingdom → Pound → two Pound\n• United States → Dollar → two dollar\n• Uruguay → Peso → two peso\n• Uzbekistan → Sum → two sum\n• Vatican City → Euro → two Euro\n• Venezuela → Bolivar → two Bolivar\n• Vietnam → Dong → two Dong\n• Western Sahara → Tala → two Tala\n• Yemen → Rial → two Rial\n• Zambia → Kwacha → two Kwacha\n• Zimbabwe → Dollar → two dollar\n\nHere are some more examples of numbers to words converter" ]
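The spelled-out forms above all come from a fixed digits-to-words mapping. A minimal sketch of such a converter for 0-99 (the site's converter presumably handles arbitrarily large numbers; this is just the core idea):

```python
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def number_to_words(n):
    """Spell out an integer in the range 0-99 in English words."""
    if n < 20:
        return ONES[n]
    tens, ones = divmod(n, 10)
    # Hyphenate compound numbers like "forty-two"
    return TENS[tens] + ("-" + ONES[ones] if ones else "")

print(number_to_words(2))   # two
print(number_to_words(42))  # forty-two
```

Appending a currency name ("two dollar", "two Euro", and so on) is then just string concatenation, which is all the table above does.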
https://www.rdocumentation.org/packages/spatstat/versions/1.2-1/topics/quadscheme
# Generate a Quadrature Scheme from a Point Pattern

Generates a quadrature scheme (an object of class `"quad"`) from point patterns of data and dummy points.

Keywords: spatial

## Usage

```r
quadscheme(data)
quadscheme(data, dummy=default.dummy(data), method="grid", ...)
```

## Arguments

- `data`: The observed data point pattern. An object of class `"ppp"` or in a format recognised by `as.ppp()`.
- `dummy`: The pattern of dummy points for the quadrature. An object of class `"ppp"` or in a format recognised by `as.ppp()`.
- `method`: The name of the method for calculating quadrature weights: either `"grid"` or `"dirichlet"`.
- `...`: Parameters of the weighting method (see below).

## Details

This is the primary method for producing quadrature schemes for use by `mpl`. The function `mpl` fits a point process model to an observed point pattern using the Berman-Turner quadrature approximation (Berman and Turner, 1992; Baddeley and Turner, 2000) to the pseudolikelihood of the model. It requires a quadrature scheme consisting of the original data point pattern, an additional pattern of dummy points, and a vector of quadrature weights for all these points. Such quadrature schemes are represented by objects of class `"quad"`. See `quad.object` for a description of this class.

Quadrature schemes are created by the function `quadscheme`. The arguments `data` and `dummy` specify the data and dummy points, respectively. There is a sensible default for the dummy points (provided by `default.dummy`). Alternatively the dummy points may be specified arbitrarily and given in any format recognised by `as.ppp`. There are also functions for creating dummy patterns, including `corners`, `gridcentres`, `stratrand` and `spokes`.

The quadrature region is the region over which we are integrating and approximating integrals by finite sums. If `dummy` is a point pattern object (class `"ppp"`), the quadrature region is taken to be `dummy$window`. If `dummy` is just a list of x, y coordinates, the quadrature region defaults to the observation window of the data pattern, `data$window`.

If `method = "grid"`, the optional arguments (for `...`) are `(nx = default.ngrid(data), ny = nx)`. The quadrature region is divided into an `nx` by `ny` grid of rectangular tiles. The weight for each quadrature point is the area of a tile divided by the number of quadrature points in that tile.

If `method = "dirichlet"`, the optional arguments are `(exact=TRUE)`. The quadrature points (both data and dummy) are used to construct the Dirichlet tessellation. The quadrature weight of each point is the area of its Dirichlet tile inside the quadrature region.

## Value

An object of class `"quad"` describing the quadrature scheme (data points, dummy points, and quadrature weights), suitable as the argument `Q` of the function `mpl()` for fitting a point process model.

## References

Baddeley, A. and Turner, R. Practical maximum pseudolikelihood for spatial point patterns. Australian and New Zealand Journal of Statistics 42 (2000) 283-322.

Berman, M. and Turner, T.R. Approximating point process likelihoods with GLIM. Applied Statistics 41 (1992) 31-38.

## See Also

`mpl`, `as.ppp`, `quad.object`, `gridweights`, `dirichlet.weights`, `corners`, `gridcentres`, `stratrand`, `spokes`

## Examples

```r
library(spatstat)
data(simdat)
P <- simdat
D <- default.dummy(P, 100)
Q <- quadscheme(P, D, "grid")
```
https://en.wikipedia.org/wiki/Woodbury_matrix_identity
# Woodbury matrix identity

In mathematics (specifically linear algebra), the Woodbury matrix identity, named after Max A. Woodbury, says that the inverse of a rank-k correction of some matrix can be computed by doing a rank-k correction to the inverse of the original matrix. Alternative names for this formula are the matrix inversion lemma, Sherman–Morrison–Woodbury formula or just Woodbury formula. However, the identity appeared in several papers before the Woodbury report.

The Woodbury matrix identity is

$$\left(A+UCV\right)^{-1}=A^{-1}-A^{-1}U\left(C^{-1}+VA^{-1}U\right)^{-1}VA^{-1},$$

where $A$, $U$, $C$ and $V$ are conformable matrices: $A$ is n×n, $C$ is k×k, $U$ is n×k, and $V$ is k×n. This can be derived using blockwise matrix inversion.

While the identity is primarily used on matrices, it holds in a general ring or in an Ab-category.

The Woodbury matrix identity allows cheap computation of inverses and solutions to linear equations. However, little is known about the numerical stability of the formula. There are no published results concerning its error bounds. Anecdotal evidence suggests that it may diverge even for seemingly benign examples (when both the original and modified matrices are well-conditioned).

## Discussion

To prove this result, we will start by proving a simpler one. Replacing $A$ and $C$ with the identity matrix $I$, we obtain another identity which is a bit simpler:

$$\left(I+UV\right)^{-1}=I-U\left(I+VU\right)^{-1}V.$$

To recover the original equation from this reduced identity, set $U=A^{-1}X$ and $V=CY$.

This identity itself can be viewed as the combination of two simpler identities. We obtain the first identity from

$$I=(I+P)^{-1}(I+P)=(I+P)^{-1}+(I+P)^{-1}P,$$

thus,

$$(I+P)^{-1}=I-(I+P)^{-1}P,$$

and similarly

$$(I+P)^{-1}=I-P(I+P)^{-1}.$$

The second identity is the so-called push-through identity

$$(I+UV)^{-1}U=U(I+VU)^{-1}$$

that we obtain from

$$U(I+VU)=(I+UV)U$$

after multiplying by $(I+VU)^{-1}$ on the right and by $(I+UV)^{-1}$ on the left.

Putting all together,

$$\left(I+UV\right)^{-1}=I-UV\left(I+UV\right)^{-1}=I-U\left(I+VU\right)^{-1}V,$$

where the first and second equality come from the first and second identity, respectively.

### Special cases

When $V,U$ are vectors, the identity reduces to the Sherman–Morrison formula.

In the scalar case, the reduced version is simply

$$\frac{1}{1+uv}=1-\frac{uv}{1+uv}.$$

#### Inverse of a sum

If n = k and $U=V=I_n$ is the identity matrix, then

$$\begin{aligned}\left(A+B\right)^{-1}&=A^{-1}-A^{-1}(B^{-1}+A^{-1})^{-1}A^{-1}\\&=A^{-1}-A^{-1}\left(AB^{-1}+I\right)^{-1}.\end{aligned}$$

Continuing with the merging of the terms of the far right-hand side of the above equation results in Hua's identity

$$\left(A+B\right)^{-1}=A^{-1}-\left(A+AB^{-1}A\right)^{-1}.$$

Another useful form of the same identity is

$$\left(A-B\right)^{-1}=A^{-1}+A^{-1}B\left(A-B\right)^{-1},$$

which, unlike those above, is valid even if $B$ is singular, and has a recursive structure that yields

$$\left(A-B\right)^{-1}=\sum_{k=0}^{\infty}\left(A^{-1}B\right)^{k}A^{-1}$$

if the spectral radius of $A^{-1}B$ is less than one. That is, if the above sum converges then it is equal to $(A-B)^{-1}$.

This form can be used in perturbative expansions where $B$ is a perturbation of $A$.

### Variations

#### Binomial inverse theorem

If $A$, $B$, $U$, $V$ are matrices of sizes n×n, k×k, n×k, k×n, respectively, then

$$\left(A+UBV\right)^{-1}=A^{-1}-A^{-1}UB\left(B+BVA^{-1}UB\right)^{-1}BVA^{-1}$$

provided $A$ and $B+BVA^{-1}UB$ are nonsingular. Nonsingularity of the latter requires that $B^{-1}$ exist since it equals $B(I+VA^{-1}UB)$ and the rank of the latter cannot exceed the rank of $B$.

Since $B$ is invertible, the two $B$ terms flanking the parenthetical quantity inverse in the right-hand side can be replaced with $(B^{-1})^{-1}$, which results in the original Woodbury identity.

A variation for when $B$ is singular and possibly even non-square:

$$(A+UBV)^{-1}=A^{-1}-A^{-1}U(I+BVA^{-1}U)^{-1}BVA^{-1}.$$

Formulas also exist for certain cases in which $A$ is singular.

#### Pseudoinverse with positive semidefinite matrices

In general Woodbury's identity is not valid if one or more inverses are replaced by (Moore–Penrose) pseudoinverses. However, if $A$ and $C$ are positive semidefinite, and $V=U^{\mathrm{H}}$ (implying that $A+UCV$ is itself positive semidefinite), then the following formula provides a generalization:

$$\begin{aligned}(XX^{\mathrm{H}}+YY^{\mathrm{H}})^{+}&=(ZZ^{\mathrm{H}})^{+}+(I-YZ^{+})^{\mathrm{H}}X^{+\mathrm{H}}EX^{+}(I-YZ^{+}),\\Z&=(I-XX^{+})Y,\\E&=I-X^{+}Y(I-Z^{+}Z)F^{-1}(X^{+}Y)^{\mathrm{H}},\\F&=I+(I-Z^{+}Z)Y^{\mathrm{H}}(XX^{\mathrm{H}})^{+}Y(I-Z^{+}Z),\end{aligned}$$

where $A+UCU^{\mathrm{H}}$ can be written as $XX^{\mathrm{H}}+YY^{\mathrm{H}}$ because any positive semidefinite matrix is equal to $MM^{\mathrm{H}}$ for some $M$.

## Derivations

### Direct proof

The formula can be proven by checking that $(A+UCV)$ times its alleged inverse on the right side of the Woodbury identity gives the identity matrix:

$$\begin{aligned}&\left(A+UCV\right)\left[A^{-1}-A^{-1}U\left(C^{-1}+VA^{-1}U\right)^{-1}VA^{-1}\right]\\={}&\left\{I-U\left(C^{-1}+VA^{-1}U\right)^{-1}VA^{-1}\right\}+\left\{UCVA^{-1}-UCVA^{-1}U\left(C^{-1}+VA^{-1}U\right)^{-1}VA^{-1}\right\}\\={}&\left\{I+UCVA^{-1}\right\}-\left\{U\left(C^{-1}+VA^{-1}U\right)^{-1}VA^{-1}+UCVA^{-1}U\left(C^{-1}+VA^{-1}U\right)^{-1}VA^{-1}\right\}\\={}&I+UCVA^{-1}-\left(U+UCVA^{-1}U\right)\left(C^{-1}+VA^{-1}U\right)^{-1}VA^{-1}\\={}&I+UCVA^{-1}-UC\left(C^{-1}+VA^{-1}U\right)\left(C^{-1}+VA^{-1}U\right)^{-1}VA^{-1}\\={}&I+UCVA^{-1}-UCVA^{-1}\\={}&I.\end{aligned}$$

### Alternative proofs

**Algebraic proof**

First consider these useful identities,

$$\begin{aligned}U+UCVA^{-1}U&=UC\left(C^{-1}+VA^{-1}U\right)=\left(A+UCV\right)A^{-1}U\\\left(A+UCV\right)^{-1}UC&=A^{-1}U\left(C^{-1}+VA^{-1}U\right)^{-1}\end{aligned}$$

Now,

$$\begin{aligned}A^{-1}&=\left(A+UCV\right)^{-1}\left(A+UCV\right)A^{-1}\\&=\left(A+UCV\right)^{-1}\left(I+UCVA^{-1}\right)\\&=\left(A+UCV\right)^{-1}+\left(A+UCV\right)^{-1}UCVA^{-1}\\&=\left(A+UCV\right)^{-1}+A^{-1}U\left(C^{-1}+VA^{-1}U\right)^{-1}VA^{-1}.\end{aligned}$$

**Derivation via blockwise elimination**

Deriving the Woodbury matrix identity is easily done by solving the following block matrix inversion problem

$$\begin{bmatrix}A&U\\V&-C^{-1}\end{bmatrix}\begin{bmatrix}X\\Y\end{bmatrix}=\begin{bmatrix}I\\0\end{bmatrix}.$$

Expanding, we can see that the above reduces to

$$\begin{cases}AX+UY=I\\VX-C^{-1}Y=0\end{cases}$$

which is equivalent to $(A+UCV)X=I$. Eliminating the first equation, we find that $X=A^{-1}(I-UY)$, which can be substituted into the second to find $VA^{-1}(I-UY)=C^{-1}Y$. Expanding and rearranging, we have $VA^{-1}=\left(C^{-1}+VA^{-1}U\right)Y$, or $\left(C^{-1}+VA^{-1}U\right)^{-1}VA^{-1}=Y$. Finally, we substitute into our $AX+UY=I$, and we have $AX+U\left(C^{-1}+VA^{-1}U\right)^{-1}VA^{-1}=I$. Thus,

$$(A+UCV)^{-1}=X=A^{-1}-A^{-1}U\left(C^{-1}+VA^{-1}U\right)^{-1}VA^{-1}.$$

We have derived the Woodbury matrix identity.

**Derivation from LDU decomposition**

We start by the matrix

$$\begin{bmatrix}A&U\\V&C\end{bmatrix}$$

By eliminating the entry under the $A$ (given that $A$ is invertible) we get

$$\begin{bmatrix}I&0\\-VA^{-1}&I\end{bmatrix}\begin{bmatrix}A&U\\V&C\end{bmatrix}=\begin{bmatrix}A&U\\0&C-VA^{-1}U\end{bmatrix}$$

Likewise, eliminating the entry above $C$ gives

$$\begin{bmatrix}A&U\\V&C\end{bmatrix}\begin{bmatrix}I&-A^{-1}U\\0&I\end{bmatrix}=\begin{bmatrix}A&0\\V&C-VA^{-1}U\end{bmatrix}$$

Now combining the above two, we get

$$\begin{bmatrix}I&0\\-VA^{-1}&I\end{bmatrix}\begin{bmatrix}A&U\\V&C\end{bmatrix}\begin{bmatrix}I&-A^{-1}U\\0&I\end{bmatrix}=\begin{bmatrix}A&0\\0&C-VA^{-1}U\end{bmatrix}$$

Moving to the right side gives

$$\begin{bmatrix}A&U\\V&C\end{bmatrix}=\begin{bmatrix}I&0\\VA^{-1}&I\end{bmatrix}\begin{bmatrix}A&0\\0&C-VA^{-1}U\end{bmatrix}\begin{bmatrix}I&A^{-1}U\\0&I\end{bmatrix}$$

which is the LDU decomposition of the block matrix into lower triangular, diagonal, and upper triangular matrices.

Now inverting both sides gives

$$\begin{aligned}\begin{bmatrix}A&U\\V&C\end{bmatrix}^{-1}&=\begin{bmatrix}I&A^{-1}U\\0&I\end{bmatrix}^{-1}\begin{bmatrix}A&0\\0&C-VA^{-1}U\end{bmatrix}^{-1}\begin{bmatrix}I&0\\VA^{-1}&I\end{bmatrix}^{-1}\\[8pt]&=\begin{bmatrix}I&-A^{-1}U\\0&I\end{bmatrix}\begin{bmatrix}A^{-1}&0\\0&\left(C-VA^{-1}U\right)^{-1}\end{bmatrix}\begin{bmatrix}I&0\\-VA^{-1}&I\end{bmatrix}\\[8pt]&=\begin{bmatrix}A^{-1}+A^{-1}U\left(C-VA^{-1}U\right)^{-1}VA^{-1}&-A^{-1}U\left(C-VA^{-1}U\right)^{-1}\\-\left(C-VA^{-1}U\right)^{-1}VA^{-1}&\left(C-VA^{-1}U\right)^{-1}\end{bmatrix}\qquad\mathrm{(1)}\end{aligned}$$

We could equally well have done it the other way (provided that $C$ is invertible) i.e.

$$\begin{bmatrix}A&U\\V&C\end{bmatrix}=\begin{bmatrix}I&UC^{-1}\\0&I\end{bmatrix}\begin{bmatrix}A-UC^{-1}V&0\\0&C\end{bmatrix}\begin{bmatrix}I&0\\C^{-1}V&I\end{bmatrix}$$

Now again inverting both sides,

$$\begin{aligned}\begin{bmatrix}A&U\\V&C\end{bmatrix}^{-1}&=\begin{bmatrix}I&0\\C^{-1}V&I\end{bmatrix}^{-1}\begin{bmatrix}A-UC^{-1}V&0\\0&C\end{bmatrix}^{-1}\begin{bmatrix}I&UC^{-1}\\0&I\end{bmatrix}^{-1}\\[8pt]&=\begin{bmatrix}I&0\\-C^{-1}V&I\end{bmatrix}\begin{bmatrix}\left(A-UC^{-1}V\right)^{-1}&0\\0&C^{-1}\end{bmatrix}\begin{bmatrix}I&-UC^{-1}\\0&I\end{bmatrix}\\[8pt]&=\begin{bmatrix}\left(A-UC^{-1}V\right)^{-1}&-\left(A-UC^{-1}V\right)^{-1}UC^{-1}\\-C^{-1}V\left(A-UC^{-1}V\right)^{-1}&C^{-1}+C^{-1}V\left(A-UC^{-1}V\right)^{-1}UC^{-1}\end{bmatrix}\qquad\mathrm{(2)}\end{aligned}$$

Now comparing elements (1, 1) of the RHS of (1) and (2) above gives the Woodbury formula

$$\left(A-UC^{-1}V\right)^{-1}=A^{-1}+A^{-1}U\left(C-VA^{-1}U\right)^{-1}VA^{-1}.$$

## Applications

This identity is useful in certain numerical computations where $A^{-1}$ has already been computed and it is desired to compute $(A+UCV)^{-1}$. With the inverse of $A$ available, it is only necessary to find the inverse of $C^{-1}+VA^{-1}U$ in order to obtain the result using the right-hand side of the identity. If $C$ has a much smaller dimension than $A$, this is more efficient than inverting $A+UCV$ directly. A common case is finding the inverse of a low-rank update $A+UCV$ of $A$ (where $U$ only has a few columns and $V$ only a few rows), or finding an approximation of the inverse of the matrix $A+B$ where the matrix $B$ can be approximated by a low-rank matrix $UCV$, for example using the singular value decomposition.

This is applied, e.g., in the Kalman filter and recursive least squares methods, to replace the parametric solution, requiring inversion of a state vector sized matrix, with a condition equations based solution. In case of the Kalman filter this matrix has the dimensions of the vector of observations, i.e., as small as 1 in case only one new observation is processed at a time. This significantly speeds up the often real time calculations of the filter.

In the case when $C$ is the identity matrix $I$, the matrix $I+VA^{-1}U$ is known in numerical linear algebra and numerical partial differential equations as the capacitance matrix.
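The identity can be checked numerically. A minimal NumPy sketch, assuming randomly generated well-conditioned matrices (which are invertible with probability one); note that only a k×k inverse is needed on the Woodbury side:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned n x n
U = rng.standard_normal((n, k))
C = rng.standard_normal((k, k)) + k * np.eye(k)   # invertible k x k
V = rng.standard_normal((k, n))

A_inv = np.linalg.inv(A)

# Woodbury: (A + U C V)^-1 = A^-1 - A^-1 U (C^-1 + V A^-1 U)^-1 V A^-1
small = np.linalg.inv(np.linalg.inv(C) + V @ A_inv @ U)   # only a k x k inverse
woodbury = A_inv - A_inv @ U @ small @ V @ A_inv

direct = np.linalg.inv(A + U @ C @ V)
print(np.allclose(woodbury, direct))  # True
```

This is exactly the low-rank-update situation described above: when k is much smaller than n and `A_inv` is already known, the Woodbury route avoids re-inverting an n×n matrix.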
[ null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/3ffa2c14bb438728d93f2cdf7ea6657338ab8fb7", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/5eeb70301ac7e343fb8623767de8a0649d6e283e", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/e63b0d78508c049b7171f2c8012f60c0c077a9c5", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/d39971493c85bd36755ac1ef34a843f675cdc6a4", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/214a791b48df362d0ba51fbe3c6bc4c9ce5b207c", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/4a9318fa3e0d6d84c24d203af60746322ff4e5e9", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/2f9e786fe782f80b6a05d3b5edce07fad20f3cdd", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/36b66a9b677368d8edcaf5d21d922f146ab309b5", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/241c862faf39bc56440f36e59e89b0bc8571cf6c", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/b16583a2884bada989d0334d4d239ae31e82056a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/9d661d6a315db3a9c67760d4f919a68b780bf8db", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/fc688655921eee5770abae3247732c121f6f3ce4", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/9d7bfa2d63f8a7831ac7071fa93448a784ff5827", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/aa9234ecf8e9513f4ba528c2234d3911fad1f43d", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/894a196d83f5ecb7f9fc06224c485f63b4d4d71b", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/84753f5c2751e02d352db6ac93ae43d27612976b", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/23b59c8d6c11965349bdd95afb7f99bda2373e62", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/47136aad860d145f75f3eed3022df827cee94d7a", null, 
"https://wikimedia.org/api/rest_v1/media/math/render/svg/3631f9e2e4d5ecef6be7922bb9dd074e69ad9834", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/520d0fea745399f5c14a8dade852d33754d5a109", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/51b11876c5308eb01e8b7f0e4c2fe9451947d54e", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/e9c239cf0c7127b53e05d2c9db0cfc910451df35", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/78190ec51dacc60deadaee7a31c11354a3636e4d", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7daff47fa58cdfd29dc333def748ff5fa4c923e3", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/4fc55753007cd3c18576f7933f6f089196732029", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f4e28751409bfc76cfdfe48be21f50cd0f2b33e5", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/21b58496cdd1b3cd7406f45f252432edd0f2a6e0", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/1493dc0c75118030c1b0420550f12af707b7c95a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/890554ca4e9c7a4315cfdf6c4203fa622916d97c", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/5639b32cf07247bbcf0cf35978b462173cddbaf9", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/48bda59fd276b77b0b440535dfc90b438c68ab80", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f82cade9898ced02fdd08712e5f0c0151758a0dd", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/8aa8914521ec82b48663270b4ea75fdee65f3006", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7f9188a55ebfb78625ce03beb899b67c010612cc", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/71219180b403f904079aa30b4aa81bb696001436", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/2f7a2c689f3884727285d9f3ada94c0eee642130", null, 
"https://wikimedia.org/api/rest_v1/media/math/render/svg/555b9267ee8ea7fbe44fca4efd3426162dd8ce52", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f7ef6e74ab6eebff296e143092b0cb11b305ed8a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/9f89deb10fe1b6f92cb4fa0c3bcf4d1194cc973b", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a6ecb5362137511612aa4f1cb46d05c7f4ba7d52", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c5e02e4d71024f248369feaccebd0cc28f04104d", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/d1cfe021b2937c28ba26ca2a67da6e6480e833ce", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f9ca968fa0e9e36044a2eea1abba79f06c5a4d5f", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/54722932a7e2896a8445b92cae78356d12e16ad6", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/23c49de625321339e1654025d653dd5685b7e5b9", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/fee15967a325eff9f44e31c6d121e4cc16af8a5c", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/5907bea5a46759431d5debc3e5b030c05feca890", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/13751c4095c8b244967b8df948560ef7a1f711c4", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/2535c11301ce63d2af12832cd8b3803d76313230", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/fab20d08b45a7eec8dcb8532feca18e173839ddc", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a645a429503f2f36b3ea16e547d7046cc572dd64", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/fd80a045fa9157c9185e1805320cacf91ce7b40e", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/3ee9bdd73665f01d460a9bdea121a5abc867a671", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/70dd74f08ea4769fe59dc5765fa134270b371d60", null, 
"https://wikimedia.org/api/rest_v1/media/math/render/svg/820dfc232fefe118477b1765d422831bf6b081d8", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/82cb2844d84822859cdc652114210c356f4b36ce", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8334456,"math_prob":0.9998833,"size":8096,"snap":"2023-40-2023-50","text_gpt3_token_len":2022,"char_repetition_ratio":0.14508156,"word_repetition_ratio":0.0015360983,"special_character_ratio":0.25296444,"punctuation_ratio":0.15147705,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999157,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112],"im_url_duplicate_count":[null,6,null,4,null,4,null,4,null,2,null,3,null,3,null,3,null,3,null,3,null,3,null,1,null,4,null,4,null,3,null,4,null,4,null,null,null,1,null,5,null,1,null,4,null,4,null,null,null,null,null,1,null,1,null,1,null,1,null,1,null,1,null,null,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,3,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-28T21:21:04Z\",\"WARC-Record-ID\":\"<urn:uuid:c53ab48c-450c-4447-9b67-04f37cee2683>\",\"Content-Length\":\"257562\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:282239e0-3527-4038-b6ff-4df315609700>\",\"WARC-Concurrent-To\":\"<urn:uuid:fe2e03b6-4c3a-44c6-b1cb-d7a4ba8e82bb>\",\"WARC-IP-Address\":\"208.80.154.224\",\"WARC-Target-URI\":\"https://en.wikipedia.org/wiki/Woodbury_matrix_identity\",\"WARC-Payload-Digest\":\"sha1:USM7QVE6TCD6P6LCHTAHNCDHDTCG6QSD\",\"WARC-Block-Digest\":\"sha1:ET25RCPOH5KQCOZVWYL5IMYGTCGQQPEU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510454.60_warc_CC-MAIN-20230928194838-20230928224838-00892.warc.gz\"}"}
http://engineeronadisk.com/V3/engineeronadisk-164.html
• Interfacing for acquisition of signals from sensors and generation of signals for actuators.

• Used by computers, PLCs, PID controllers, etc.

• Computers are designed to handle both input and output (I/O) data.

• There are two main types of data I/O for computers: analog and digital.

• A continuous signal is sampled by the computer.

• The computer uses approximation techniques to estimate the analog value during the sampling window.

• An example of A/D, D/A control of a process is shown below.

• Multiplexers are used when a number of signals are to be input to a single A/D converter. This allows each of a number of channels to be sampled, one at a time.

• Signal conditioners are often used to amplify or filter signals coming from transducers before they are read by the A/D converter.

• Output drivers and amplifiers are often required to drive output devices when using D/A converters.

• Sampling problems occur with A/D conversion. Because readings are taken periodically (not continually), the Nyquist criterion specifies that the sampling frequency should be at least twice the frequency of the signal being measured, otherwise aliasing will occur.

• Since the sampling window for a signal is short, noise will have an added effect on the signal read. For example, a momentary voltage spike might result in a higher than normal reading.

• When an analog value is converted to or from digital values, a quantization error is involved. The digital numbering scheme means that for an 8 bit A/D converter, there is a resolution of 256 values between maximum and minimum.
This means that there is a round-off error of approximately 0.4%.

#### 26.1.1 Analog To Digital Conversions

• When there are analog values outside a computer, and we plan to read these as digital values, there are a variety of factors to consider:

when the sample is requested, a short period of time passes before the final sample value is obtained.

the sample value is 'frozen' after a sample interval.

after the sample is taken, the system may change.

sample values can be very sensitive to noise.

the continuous values of the signal lose some accuracy when converted to a digital number.

• Consider the conversion process pictured below.

• Once this signal is processed through a typical A/D converter we get the following relations (these may vary slightly for different types of A/D converters).

Problem 26.1 We are given a 12 bit analog input with a range of -10V to 10V. If we put in 2.735V, what will the integer value be after the A/D conversion? What is the error? What voltage can we calculate?

• In most applications a sample is taken at regular intervals, with a period of 'T' seconds.

• In practice the sample interval is kept as small as possible (i.e., tau << T).

• If we are sampling a periodic signal that changes near or faster than the sampling rate, there is a chance that we will get a signal that appears chaotic, or seems to be of a lower frequency. This phenomenon is known as aliasing.

• Quite often an A/D converter will multiplex between various inputs. As it switches, the voltage will be sampled by a 'sample and hold' circuit and then converted to a digital value. The sample and hold circuits can be used before the multiplexer to collect data values at the same instant in time.

• A simple type of A/D converter, known as a successive approximation type, is shown below.

#### 26.1.2 Analog Inputs With a PLC

• To input analog values into a PLC we use the block transfer commands.
These allow control information to be sent to the input card and results to be retrieved.

• The example below shows ladder logic to do an analog input.

• The block that needs to be written to an 1771-IFE analog input card is shown below. This is a 12 bit card, so the range will have up to 2**12 = 4096 values.

• After the input card reads the values, the results are returned in a block. The structure of the block is shown below.

26.2 Analog Outputs

• After we have used a controller equation to estimate a value to put into our process, we must convert this from a digital value in the computer's memory to a physical voltage.

• The output current is typically limited to 20mA on most computer boards, and drawing near this limit reduces the accuracy and life of the board.

• A simple circuit for a digital to analog converter is shown below.

• The calculations for the A/D converter resolution and accuracy still apply.

Problem 26.2 We need to select a digital to analog converter for an application. The output will vary from -5V to 10V DC, and we need to be able to specify the voltage to within 50mV. What resolution will be required? How many bits will this D/A converter need? What will the accuracy be?

#### 26.2.1 Analog Outputs With A PLC

• An example of an output card is the 1771-OFE.

• To output a value we only need to write a single value to the output card.

• The format for the block that is to be written to the card is shown below.

26.3 Design Cases

#### 26.3.1 Oven Temperature Control

• Design an analog controller that will read an oven temperature, and when it passes 1200 degrees the oven will be turned off. The voltage from the thermocouple is passed through a signal conditioner that gives 1V at 500F and 3V at 1500F.
The controller should have a start button and an E-stop.

#### 26.3.2 Statistical Process Control (SPC)

• We can do SPC checking using analog inputs and built-in statistics functions.

• Recall the basic equations for a control chart.

• The general flow would be:

1. Read sampled inputs.

2. Randomly select values, calculate the average, and store it in memory. Calculate the standard deviation of the stored values.

3. Compare the inputs to the standard deviation. If an input is more than 3 deviations from the mean, halt the process.

4. If it is more than 2 deviations away, increase a counter A; if it is more than 1 deviation away, increase a second counter B. If it is less than 1 deviation away, reset the counters.

5. If counter A reaches 3, or counter B reaches 5, then shut down.

6. Go to 1.

26.4 Problems

Problem 26.3 Write a program that will input an analog voltage, do the calculation below, and output an analog voltage.

Problem 26.4 The following calculation will be made when input 'A' is true. If the result 'x' is between 1 and 10 then the output 'B' will be turned on. The value of 'x' will be output as an analog voltage. Create a ladder logic program to perform these tasks.

Problem 26.5 You are developing a controller for a game that measures hand strength. To do this a 'START' button is pushed; 3 seconds later a 'LIGHT' is turned on for one second to let the user know when to start squeezing. The analog value is read 0.3s after the light comes on. The value is converted to a force 'F' with the equation below. The force is displayed by converting it to BCD and writing it to an output card (O:001). If the value exceeds 100 then a 'BIG_LIGHT' and 'SIREN' are turned on for 5 sec. Use a structured design technique to develop the ladder logic.
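The conversion relations above can be checked numerically. Below is a minimal sketch of the quantization arithmetic using the Problem 26.1 numbers; the floor-to-count mapping, the midpoint reconstruction, and the helper names `adc_counts` / `counts_to_volts` are my assumptions, since the exact relations vary between converters:

```python
def adc_counts(v, v_min=-10.0, v_max=10.0, bits=12):
    # Map an input voltage onto an unsigned integer count, 0 .. 2**bits - 1.
    levels = 2 ** bits                  # 4096 levels for a 12 bit card
    step = (v_max - v_min) / levels     # volts per count (~4.88 mV here)
    n = int((v - v_min) / step)
    return min(n, levels - 1)           # clamp a full-scale input

def counts_to_volts(n, v_min=-10.0, v_max=10.0, bits=12):
    # Reconstruct the voltage at the midpoint of quantization step n.
    step = (v_max - v_min) / 2 ** bits
    return v_min + (n + 0.5) * step

# Problem 26.1 style numbers: 2.735 V into a 12 bit, -10 V to 10 V input.
n = adc_counts(2.735)       # 2608
v = counts_to_volts(n)      # ~2.7368 V
err = v - 2.735             # well under one step (20 V / 4096, ~4.9 mV)
```

The same arithmetic applies to Problem 26.2: one common reading is that covering the 15V span to within 50mV needs at least 15/0.05 = 300 levels, so a 9 bit (512 level) D/A converter, with a step of about 29mV.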
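The six-step SPC flow above can be sketched in software. This is a minimal sketch assuming one sample is checked at a time, with step 4's zones read as exclusive; the class name `SpcMonitor` is hypothetical:

```python
import statistics

class SpcMonitor:
    # Tracks samples against control-chart limits derived from a baseline.
    def __init__(self, baseline):
        self.mean = statistics.mean(baseline)
        self.sigma = statistics.stdev(baseline)
        self.count_a = 0   # consecutive samples beyond 2 sigma
        self.count_b = 0   # consecutive samples beyond 1 sigma

    def check(self, x):
        # Returns True if the process should be halted.
        dev = abs(x - self.mean) / self.sigma
        if dev > 3:
            return True                    # outside the control limits
        if dev > 2:
            self.count_a += 1
        elif dev > 1:
            self.count_b += 1
        else:
            self.count_a = self.count_b = 0
        return self.count_a >= 3 or self.count_b >= 5
```

For example, a monitor built from a tight baseline halts immediately on a sample far outside 3 sigma, and resets its counters whenever a sample falls back inside 1 sigma.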
http://ixtrieve.fh-koeln.de/birds/litie/document/4566
# Document (#4566)

Author
Miller, P.L.
Title
Prototyping an institutional IAIMS/UMLS information environment for an academic medical center
Source
Bulletin of the Medical Library Association. 80(1992) no.3, S.281-287
Year
1992
Abstract
A prototype design is discussed which shows the link between the US National Library of Medicine Integrated Academic Information Management System (IAIMS) and the Unified Medical Language System (UMLS).
Field
Medizin
Object
IAIMS
UMLS

## Similar documents (author)

1. Miller, G.A.: The magical number, seven plus or minus two : some limits on our capacity for processing information (1956) 4.45
2. Miller, M.L.: Automation and LCSH (1986) 4.45
3. Miller, G.A.: Psychology and information (1968) 4.45
4. Miller, D.C.: Evaluating CD-ROMs : to buy or what to buy (1987) 4.45
5. Miller, D.J.: Advanced Freestyle searching with Lexis-Nexis (1997) 4.45

(Each author match scores identically: a single occurrence of 'miller' with idf = 7.1195955 (docFreq=93, maxDocs=42740) and fieldNorm = 0.625 gives a ClassicSimilarity weight of 4.449747.)

## Similar documents (content)

1. Squires, S.J.: Access to biomedical information : the Unified Medical Language System (1993) 0.79
2. Stuart, S.J.; Powell, T.; Humphreys, B.L.: The Unified Medical Language System (UMLS) project (2002) 0.55
3. Nelson, S.J.; Powell, T.; Srinivasan, S.; Humphreys, B.L.: Unified Medical Language System® (UMLS®) Project (2009) 0.53
4. Lindberg, D.A.B.; Humphreys, B.L.: The UMLS project : making the conceptual connection between users and the information they need (1993) 0.48
5. Humphreys, B.L.: The 1994 Unified Medical Language System Knowledge Sources (1994) 0.38

(The content scores sum per-term ClassicSimilarity weights over abstract terms such as 'umls', 'medical', 'unified', 'medicine', and 'information', scaled by a coordination factor coord(matched/22).)
fieldNorm(doc=1665)\n0.36363637 = coord(8/22)\n```" ]
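The explain trees above all follow Lucene's ClassicSimilarity formula: each term's contribution is queryWeight × fieldWeight, where queryWeight = boost × idf × queryNorm and fieldWeight = √tf × idf × fieldNorm. A small sketch (the function name is ours, not Lucene's) that recomputes the `umls` term score for doc 3796 from the figures reported in the dump:

```python
import math

def classic_similarity_term_score(tf, idf, boost, query_norm, field_norm):
    """Lucene ClassicSimilarity: queryWeight * fieldWeight for one term."""
    query_weight = boost * idf * query_norm           # query-side factor
    field_weight = math.sqrt(tf) * idf * field_norm   # default tf() is sqrt(freq)
    return query_weight * field_weight

# Figures taken from "weight(abstract_txt:umls in 3796)" above
score = classic_similarity_term_score(
    tf=3.0, idf=8.295595, boost=5.655779,
    query_norm=0.013862374, field_norm=0.078125)

print(score)  # ~0.730089, the value reported in the explain output
```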
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6720041,"math_prob":0.9980824,"size":20888,"snap":"2020-34-2020-40","text_gpt3_token_len":8037,"char_repetition_ratio":0.25248995,"word_repetition_ratio":0.5054065,"special_character_ratio":0.5395442,"punctuation_ratio":0.28509045,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998822,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-22T16:33:08Z\",\"WARC-Record-ID\":\"<urn:uuid:63b09331-bf22-4c0e-99dc-b5d04d4c11cf>\",\"Content-Length\":\"36226\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:38b1fb15-c1df-42ca-afa5-e6216eea898a>\",\"WARC-Concurrent-To\":\"<urn:uuid:2650d3b7-a234-494c-bfe2-475ac2be95af>\",\"WARC-IP-Address\":\"139.6.160.6\",\"WARC-Target-URI\":\"http://ixtrieve.fh-koeln.de/birds/litie/document/4566\",\"WARC-Payload-Digest\":\"sha1:BR7EHVU2RGRNR7OVPVDEJXLJMF23UVV3\",\"WARC-Block-Digest\":\"sha1:6WYHIFUBO2PVSSSENZF2PQI63Z6OLCR7\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400206329.28_warc_CC-MAIN-20200922161302-20200922191302-00071.warc.gz\"}"}
https://www.colorhexa.com/22c5b3
[ "# #22c5b3 Color Information\n\nIn an RGB color space, hex #22c5b3 is composed of 13.3% red, 77.3% green and 70.2% blue. Whereas in a CMYK color space, it is composed of 82.7% cyan, 0% magenta, 9.1% yellow and 22.7% black. It has a hue angle of 173.4 degrees, a saturation of 70.6% and a lightness of 45.3%. #22c5b3 color hex could be obtained by blending #44ffff with #008b67. Closest websafe color is: #33cccc.\n\n• R 13\n• G 77\n• B 70\nRGB color chart\n• C 83\n• M 0\n• Y 9\n• K 23\nCMYK color chart\n\n#22c5b3 color description : Strong cyan.\n\n# #22c5b3 Color Conversion\n\nThe hexadecimal color #22c5b3 has RGB values of R:34, G:197, B:179 and CMYK values of C:0.83, M:0, Y:0.09, K:0.23. Its decimal value is 2278835.\n\n• Hex triplet: 22c5b3 `#22c5b3`\n• RGB decimal: 34, 197, 179 `rgb(34,197,179)`\n• RGB percent: 13.3, 77.3, 70.2 `rgb(13.3%,77.3%,70.2%)`\n• CMYK: 83, 0, 9, 23\n• HSL: 173.4°, 70.6, 45.3 `hsl(173.4,70.6%,45.3%)`\n• HSV (or HSB): 173.4°, 82.7, 77.3\n• Web safe: 33cccc `#33cccc`\n• CIE-LAB: 71.909, -43.245, -2.248\n• XYZ: 28.76, 43.524, 49.531\n• xyY: 0.236, 0.357, 43.524\n• CIE-LCH: 71.909, 43.303, 182.976\n• CIE-LUV: 71.909, -55.411, 3.264\n• Hunter-Lab: 65.973, -37.639, 1.668\n• Binary: 00100010, 11000101, 10110011\n\n# Color Schemes with #22c5b3\n\n• #22c5b3\n``#22c5b3` `rgb(34,197,179)``\n• #c52234\n``#c52234` `rgb(197,34,52)``\nComplementary Color\n• #22c562\n``#22c562` `rgb(34,197,98)``\n• #22c5b3\n``#22c5b3` `rgb(34,197,179)``\n• #2286c5\n``#2286c5` `rgb(34,134,197)``\nAnalogous Color\n• #c56222\n``#c56222` `rgb(197,98,34)``\n• #22c5b3\n``#22c5b3` `rgb(34,197,179)``\n• #c52286\n``#c52286` `rgb(197,34,134)``\nSplit Complementary Color\n• #c5b322\n``#c5b322` `rgb(197,179,34)``\n• #22c5b3\n``#22c5b3` `rgb(34,197,179)``\n• #b322c5\n``#b322c5` `rgb(179,34,197)``\nTriadic Color\n• #34c522\n``#34c522` `rgb(52,197,34)``\n• #22c5b3\n``#22c5b3` `rgb(34,197,179)``\n• #b322c5\n``#b322c5` `rgb(179,34,197)``\n• #c52234\n``#c52234` `rgb(197,34,52)``\nTetradic Color\n• #178478\n``#178478` `rgb(23,132,120)``\n• #1a9a8b\n``#1a9a8b` `rgb(26,154,139)``\n• #1eaf9f\n``#1eaf9f` `rgb(30,175,159)``\n• 
#22c5b3\n``#22c5b3` `rgb(34,197,179)``\n• #27dac6\n``#27dac6` `rgb(39,218,198)``\n• #3dddcc\n``#3dddcc` `rgb(61,221,204)``\n• #52e1d1\n``#52e1d1` `rgb(82,225,209)``\nMonochromatic Color\n\n# Alternatives to #22c5b3\n\nBelow, you can see some colors close to #22c5b3. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #22c58a\n``#22c58a` `rgb(34,197,138)``\n• #22c598\n``#22c598` `rgb(34,197,152)``\n• #22c5a5\n``#22c5a5` `rgb(34,197,165)``\n• #22c5b3\n``#22c5b3` `rgb(34,197,179)``\n• #22c5c1\n``#22c5c1` `rgb(34,197,193)``\n• #22bcc5\n``#22bcc5` `rgb(34,188,197)``\n• #22aec5\n``#22aec5` `rgb(34,174,197)``\nSimilar Colors\n\n# #22c5b3 Preview\n\nText with hexadecimal color #22c5b3\n\nThis text has a font color of #22c5b3.\n\n``<span style=\"color:#22c5b3;\">Text here</span>``\n#22c5b3 background color\n\nThis paragraph has a background color of #22c5b3.\n\n``<p style=\"background-color:#22c5b3;\">Content here</p>``\n#22c5b3 border color\n\nThis element has a border color of #22c5b3.\n\n``<div style=\"border:1px solid #22c5b3;\">Content here</div>``\nCSS codes\n``.text {color:#22c5b3;}``\n``.background {background-color:#22c5b3;}``\n``.border {border:1px solid #22c5b3;}``\n\n# Shades and Tints of #22c5b3\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #020d0c is the darkest color, while #fbfefe is the lightest one.\n\n• #020d0c\n``#020d0c` `rgb(2,13,12)``\n• #051e1b\n``#051e1b` `rgb(5,30,27)``\n• #082e2a\n``#082e2a` `rgb(8,46,42)``\n• #0b3f39\n``#0b3f39` `rgb(11,63,57)``\n• #0e5049\n``#0e5049` `rgb(14,80,73)``\n• #116158\n``#116158` `rgb(17,97,88)``\n• #147167\n``#147167` `rgb(20,113,103)``\n• #168276\n``#168276` `rgb(22,130,118)``\n• #199385\n``#199385` `rgb(25,147,133)``\n• #1ca495\n``#1ca495` `rgb(28,164,149)``\n• #1fb4a4\n``#1fb4a4` `rgb(31,180,164)``\n• #22c5b3\n``#22c5b3` `rgb(34,197,179)``\n• #25d6c2\n``#25d6c2` `rgb(37,214,194)``\nShade Color Variation\n• #33dcc9\n``#33dcc9` `rgb(51,220,201)``\n• #43dfcd\n``#43dfcd` `rgb(67,223,205)``\n• #54e1d2\n``#54e1d2` `rgb(84,225,210)``\n• #65e4d6\n``#65e4d6` `rgb(101,228,214)``\n• #75e7db\n``#75e7db` `rgb(117,231,219)``\n• #86eadf\n``#86eadf` `rgb(134,234,223)``\n• #97ede4\n``#97ede4` `rgb(151,237,228)``\n• #a8f0e8\n``#a8f0e8` `rgb(168,240,232)``\n• #b8f3ec\n``#b8f3ec` `rgb(184,243,236)``\n• #c9f6f1\n``#c9f6f1` `rgb(201,246,241)``\n• #daf9f5\n``#daf9f5` `rgb(218,249,245)``\n• #ebfbfa\n``#ebfbfa` `rgb(235,251,250)``\n• #fbfefe\n``#fbfefe` `rgb(251,254,254)``\nTint Color Variation\n\n# Tones of #22c5b3\n\nA tone is produced by adding gray to any pure hue. 
In this case, #727575 is the least saturated color, while #07e0c8 is the most saturated one.\n\n• #727575\n``#727575` `rgb(114,117,117)``\n• #697e7c\n``#697e7c` `rgb(105,126,124)``\n• #608783\n``#608783` `rgb(96,135,131)``\n• #579089\n``#579089` `rgb(87,144,137)``\n• #4e9990\n``#4e9990` `rgb(78,153,144)``\n• #46a197\n``#46a197` `rgb(70,161,151)``\n• #3daa9e\n``#3daa9e` `rgb(61,170,158)``\n• #34b3a5\n``#34b3a5` `rgb(52,179,165)``\n• #2bbcac\n``#2bbcac` `rgb(43,188,172)``\n• #22c5b3\n``#22c5b3` `rgb(34,197,179)``\n• #19ceba\n``#19ceba` `rgb(25,206,186)``\n• #10d7c1\n``#10d7c1` `rgb(16,215,193)``\n• #07e0c8\n``#07e0c8` `rgb(7,224,200)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #22c5b3 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
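The HSL figures quoted above (173.4°, 70.6%, 45.3%) follow directly from the RGB values. A quick sketch using Python's standard colorsys module (note that colorsys works in 0–1 ranges and returns HLS, not HSL, ordering):

```python
import colorsys

r, g, b = 0x22, 0xc5, 0xb3  # 34, 197, 179
h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)

hsl = (round(h * 360, 1), round(s * 100, 1), round(l * 100, 1))
print(f"hsl({hsl[0]}, {hsl[1]}%, {hsl[2]}%)")  # hsl(173.4, 70.6%, 45.3%)
```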
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.53020036,"math_prob":0.7182757,"size":3713,"snap":"2019-13-2019-22","text_gpt3_token_len":1700,"char_repetition_ratio":0.12105689,"word_repetition_ratio":0.011111111,"special_character_ratio":0.5542688,"punctuation_ratio":0.23809524,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9796325,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-22T19:23:39Z\",\"WARC-Record-ID\":\"<urn:uuid:5542d4f2-8547-4901-874e-d7519c3acade>\",\"Content-Length\":\"36448\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:75c38525-5c45-4edf-9e2f-04a54c6ae098>\",\"WARC-Concurrent-To\":\"<urn:uuid:41777565-cc00-44fe-9db7-81677affce02>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/22c5b3\",\"WARC-Payload-Digest\":\"sha1:QQUSKYBXTLK2KWUAKDE3EAK6BZR3S7TO\",\"WARC-Block-Digest\":\"sha1:OEQH3KVJPHF3QT33INZ2C735ESRWS3O6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232256948.48_warc_CC-MAIN-20190522183240-20190522205240-00285.warc.gz\"}"}
https://www.scienceforums.com/topic/37923-minkowski-spacetime-diagrams-re-assigned/page/3/
[ "Science Forums\n\n# Minkowski SpaceTime diagrams re assigned\n\n## Recommended Posts\n\nAnd here's another paradox for you (a paradox is an unsolvable problem, like the twin paradox, still unsolved despite many weak attempts).\n\nIn the pic, from the development of the Lorentz equation by Sal of Khan Academy, but also as presented by many other uni professors...\n\nNow because we also know that\n\nx = ct\n\nand also that\n\nx' = ct' (because we set this up to be exactly like this)\n\nTherefore these two equations end up proving that\n\n0 = 0.\n\nBut I could have told you that in primary school", null, "• Replies 157\n• Created\n\n#### Posted Images\n\n4 minutes ago, TheProdigalProdigy said:\n\nI don't see how y=1 here as you claim.\n\nI think your problem is in arithmetic. And even if it was 1, it's being multiplied by c, so it's still equal\n\n\"y\" is the vertical axis, correct? (just checking)\n\nIt was Time, but that was replaced with the Distance \"ct\", which we have set on our \"y\" axis scales to be equal to one unit of distance of the horizontal axis, so one unit of x distance is exactly the same as one unit of the \"y\" axis distance. That's why Light is at 45 degrees.\n\nSo \"y\" is one unit, and so is X one identical unit. 1:1 ratio.\n\n##### Share on other sites\n\n3 minutes ago, TheProdigalProdigy said:\n\nBecause ct, or x, can't be zero. First of all because c is 299mill\n\nPlease work out the algebra, simplify these equations. The end result is simply 0=0\n\n##### Share on other sites\n\n2 minutes ago, TheProdigalProdigy said:\n\nYou're welcome\n\nThat does not help with the problem, does it?\n\nYou are ignoring the issues and talking in circles.\n\nDid you run those equations with real values?
no.\n\nDid you do the algebra on the other equations that end up proving that 0 = 0?\n\nNo.\n\n##### Share on other sites\n\nJust now, TheProdigalProdigy said:\n\nYou do know beta is like 8,000 something, right?\n\nThe proton electron mass ratio?\n\nBeta is NOT 8000 in this example; beta here is the ratio of the observer's velocity over the velocity of light.\n\nBut even if it was 8000, it matters not, as it cancels itself out perfectly.\n\n0 still = 0\n\nDO THE ALGEBRA\n\n##### Share on other sites\n\n3 minutes ago, TheProdigalProdigy said:\n\nWhat did you admit on the first post in this page? That where y=1 it actually equals c. Right? Which makes xy a 1:1 ratio in both pictures.\n\nSo again, you're welcome. The issue you had was a non-issue.\n\nIt's still the same issue. Even if you DRAW the graph so that a division of one second on the y axis is the same physical length as a division on the x axis, which is 30,000,000 meters, the unit of one second is NOT identical to a distance of 3 million meters. One is SECONDS, the other is Distance.\n\nSo on casual appearance, because you have set the distances between units on both axes as identical, which allowed you to substitute for one second the distance of 3 million... this is going to cause a problem when your original equations all worked in seconds and meters, not in meters and meters.\n\nYou can't deduct a velocity from a distance. You can divide a distance by a velocity, giving time. But there is no valid equation that allows deducting a velocity from a distance.\n\n##### Share on other sites\n\n8 minutes ago, TheProdigalProdigy said:\n\nRemember what I said about philosophizing about the axiomatic nature of velocity or distance?\n\nI said it has no place in math.\n\nYou simply CANNOT deduct 3 miles per hour from 6 miles. Period, no philosophy involved.\n\n##### Share on other sites\n\n7 minutes ago, TheProdigalProdigy said:\n\nAlso a rule there, velocity of observer < c.
But that's already a rule because of time dilation. So that's why it's set up that way. Nice\n\nHere's why the Lorentz transform ends up as zero according to the Khan Academy derivation.", null, "##### Share on other sites\n\n7 hours ago, TheProdigalProdigy said:\n\nAlready came to the same result.\n\nOK, take the first equation, x' = gamma(x - beta*ct)\n\nlet x = 10\n\nrewrite x' = gamma ( 10 - beta 10)\n\nlet beta = 0.9 (can be anything less than 1)\n\nx' = gamma 9\n\nlet gamma = 0.9 (can be anything less than 1)\n\nx' = 8.1\n\nNOW carry on with the second equation.\n\nct' = 0.9( 10 - 0.9 * 10)\n\nx' = 10\n\nSo pick a result... is x' = 8.1 or does x' = 10?\n\n##### Share on other sites\n\n18 minutes ago, TheProdigalProdigy said:\n\nNeither, it's .9 & for ct'\n\nCrap, I did it wrong again.\n\nUsing 90% light speed means that beta is 1.11111 and gamma is 2.294\n\nLet x = 10\n\nNOW we do it with actual calculated values for gamma and beta and known distance for x\n\nwe get x' = minus 2.29\n\nBut x' was not going backwards! He was moving in the same direction as the light.\n\nAnyway, it's true that both equations balance. The results are minus 2.29 for both equations, and so all you have proven is that minus 2.29 is equal to minus 2.29,\n\nwhich is not useful, as it could be any number, i.e., 42 = 42\n\nBut the number minus 2.29 is NOT the distance x' moved anyway!
X' must be a positive distance.\n\nIf x' moved backwards, the input value for x should have been minus x.\n\nThe whole math was set up for both light and x' moving in the same direction.\n\nAlso, I'm no mathematician; I'm figuring it out myself here, so mistakes are possible.\n\nBut I'm right about Lorentz being crap, and so too is Einstein.\n\nIf I get stuck with the math (so far I'm OK) I can just ask my ex-wife, who is a maths professor at a Chinese university, or my son, who is following in her footsteps.\n\nThe upshot so far regarding any equation that just states that x=x is that it's a useless equation.\n\nWhich is exactly what we have with x' = gamma(x - beta *ct)\n\na useless equation that just says that x=x.\n\n##### Share on other sites\n\n1 hour ago, TheProdigalProdigy said:\n\nNeither, it's .9 & for ct'\n\nThe error that Sal makes is when he substitutes ct for x, and ct' for x', in the first equations picture.\n\nHe did this because he said that x=ct and x'=ct'\n\nBUT this is ONLY true in a special case, that is, referring to LIGHT, AND the graph has light at 45 degrees.\n\nIf you draw a graph with light at some other angle, by having different scale factors (which is more logical), then x cannot = ct, and x' cannot = ct'.\n\nPlugging ct into the equation replacing x is a massive error, because the x in this case refers to the location of the observer who is NOT doing light speed, so his x can never equal ct.\n\n##### Share on other sites\n\nGreat, so you don't have an answer, so you resort to this personal attack.\n\nIs there some problem with my math this last time?\n\nWhat is incorrect with this statement: \"Plugging ct into the equation replacing x is a massive error. Because the x in this case refers to the location of the observer who is NOT doing light speed, so his x can never equal ct.
\"\n\n##### Share on other sites\n\n5 hours ago, TheProdigalProdigy said:\n\nThe observer is (0,0)\n\nExcellent, but as the claim is ALSO made that x' = ct', so the stationary observer and the MOVING observer are STILL at 0,0, and time is zero.\n\nSo nothing has occurred, but they use that particular time, when nothing has occurred in the experiment, to come up with the result that Time has shrunk for the guy who might move one day?\n\n##### Share on other sites\n\n2 minutes ago, TheProdigalProdigy said:\n\nc will still race ahead of an observer at ~300mill m/s no matter how close to the speed of light that observer gets.\n\nAnd that statement has nothing whatever to do with what I just asked you.\n\nI said that t, t', x, x' must all be set to zero, meaning that the experiment has not commenced, so there is no data to run through your equation.\n\nAt least this is the condition that you gave to counter my argument. Your \"solution\" to negate my claim that it's wrong to use x = ct was to set t and x to zero. So light's not going anywhere either; not that that would change anything, the equations for Lorentz are still nonsense as I explained.\n\n##### Share on other sites\n\n1 hour ago, TheProdigalProdigy said:\n\nc will still race ahead of an observer at ~300mill m/s no matter how close to the speed of light that observer gets.\n\nAnd that claim, that light always goes at c regardless of the speed of the one measuring it, is unsupported by either logic or experiment. That's the postulate of Einstein's hypothesis, so it's only a guess until his hypothesis is at least shown to make some sense! Which it clearly does not.\n\nUsing x = ct and x' = ct' is nonsense
I pity your comprehension powers.\n\nBut the Minkowski spacetime diagram is probably the most useful invention since Edison invented electricity! Denial of Minkowski space spells imminent treason and dishonor\n\nWhat the hell is this?\n\nCan you at least TRY to respond to my criticism?\n\nHopping sideways is not getting around the problem I raised.\n\n##### Share on other sites\n\n2 hours ago, TheProdigalProdigy said:\n\nFinal warning\n\nYou're treading on thin ice whilst entering dangerous territory, making allegations online. 🛂 You must CHOOSE your next words carefully ✍️\n\nShould you spend so much time playing video games? It's messing with your brain.\n\nMinkowski's diagram is a load of rubbish, the Lorentz transform is also math nonsense, but together, Einstein has combined them both into the biggest load of steaming crap ever. And they collectively have put out some real zingers.\n\nAnyway, nothing you have said has solved the problem I raised about the failure of the development of the Lorentz equation, so let's just agree that I'm correct, and pop pseudo-science (mainstream science) is baseless nonsense.", null, "
[ null, "https://www.scienceforums.com/uploads/monthly_2021_03/561808220_40-IntroductiontotheLorentztransform.thumb.jpg.5a57d1d78e6d20dc2db979650873f661.jpg", null, "https://www.scienceforums.com/uploads/monthly_2021_03/396676630_Untitled2.thumb.jpg.04c92ff80c6533df763820b670bb106b.jpg", null, "https://www.scienceforums.com/uploads/set_resources_1/84c1e40ea0e759e3f1505eb1788ddf3c_default_photo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.974964,"math_prob":0.9505996,"size":641,"snap":"2023-40-2023-50","text_gpt3_token_len":147,"char_repetition_ratio":0.09576138,"word_repetition_ratio":0.0,"special_character_ratio":0.23712948,"punctuation_ratio":0.11023622,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9697469,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,3,null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-03T01:24:05Z\",\"WARC-Record-ID\":\"<urn:uuid:37b0cb8d-94d2-4ff9-bf5c-46b32fdc476f>\",\"Content-Length\":\"298700\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e216d817-13a4-4053-b297-5169ad72c4b7>\",\"WARC-Concurrent-To\":\"<urn:uuid:85af0665-7c35-42fa-a7ad-b9b959044d2a>\",\"WARC-IP-Address\":\"104.21.51.222\",\"WARC-Target-URI\":\"https://www.scienceforums.com/topic/37923-minkowski-spacetime-diagrams-re-assigned/page/3/\",\"WARC-Payload-Digest\":\"sha1:B5RD6T3RLDMNTXPP4XU7A7VFQFOAIBYX\",\"WARC-Block-Digest\":\"sha1:KQ47P26SLEI7AUPOMVYNNADPNBQOZIWD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511023.76_warc_CC-MAIN-20231002232712-20231003022712-00067.warc.gz\"}"}
https://numbermatics.com/n/1520/
[ "# 1520\n\n## 1,520 is an even composite number composed of three prime numbers multiplied together.\n\nWhat does the number 1520 look like?\n\nThis visualization shows the relationship between its 3 prime factors (large circles) and 20 divisors.\n\n1520 is an even composite number. It is composed of three distinct prime numbers multiplied together. It has a total of twenty divisors.\n\n## Prime factorization of 1520:\n\n### 2⁴ × 5 × 19\n\n(2 × 2 × 2 × 2 × 5 × 19)\n\nSee below for interesting mathematical facts about the number 1520 from the Numbermatics database.\n\n### Names of 1520\n\n• Cardinal: 1520 can be written as One thousand, five hundred twenty.\n\n### Scientific notation\n\n• Scientific notation: 1.52 × 10³\n\n### Factors of 1520\n\n• Number of distinct prime factors ω(n): 3\n• Total number of prime factors Ω(n): 6\n• Sum of prime factors: 26\n\n### Divisors of 1520\n\n• Number of divisors d(n): 20\n• Complete list of divisors: 1, 2, 4, 5, 8, 10, 16, 19, 20, 38, 40, 76, 80, 95, 152, 190, 304, 380, 760, 1520\n• Sum of all divisors σ(n): 3720\n• Sum of proper divisors (its aliquot sum) s(n): 2200\n• 1520 is an abundant number, because the sum of its proper divisors (2200) is greater than itself. Its abundance is 680\n\n### Bases of 1520\n\n• Binary: 10111110000₂\n• Base-36: 168\n\n### Squares and roots of 1520\n\n• 1520 squared (1520²) is 2310400\n• 1520 cubed (1520³) is 3511808000\n• The square root of 1520 is 38.9871773795\n• The cube root of 1520 is 11.4977941579\n\n### Scales and comparisons\n\nHow big is 1520?\n• 1,520 seconds is equal to 25 minutes, 20 seconds.\n• To count from 1 to 1,520 would take you about twenty-five minutes.\n\nThis is a very rough estimate, based on a speaking rate of half a second every third order of magnitude. If you speak quickly, you could probably say any randomly-chosen number between one and a thousand in around half a second. Very big numbers obviously take longer to say, so we add half a second for every extra x1000. 
(We do not count involuntary pauses, bathroom breaks or the necessity of sleep in our calculation!)\n\n• A cube with a volume of 1520 cubic inches would be around 1 foot tall.\n\n### Recreational maths with 1520\n\n• 1520 backwards is 0251\n• 1520 is a Harshad number.\n• The number of decimal digits it has is: 4\n• The sum of 1520's digits is 8\n• More coming soon!" ]
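The divisor figures above are easy to check against the factorization 2⁴ × 5 × 19. A short sketch that recomputes d(n), σ(n), the aliquot sum, and the Harshad property for 1520 by brute force:

```python
def divisors(n):
    """All positive divisors of n, by trial division."""
    return [d for d in range(1, n + 1) if n % d == 0]

n = 1520
divs = divisors(n)
sigma = sum(divs)           # sum of all divisors, sigma(n)
aliquot = sigma - n         # sum of proper divisors, s(n)
digit_sum = sum(int(c) for c in str(n))

print(len(divs), sigma, aliquot)  # 20 divisors, sigma = 3720, s(n) = 2200
print(aliquot > n)                # abundant: True (abundance 2200 - 1520 = 680)
print(n % digit_sum == 0)         # Harshad: digit sum 8 divides 1520
```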
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8567589,"math_prob":0.97160923,"size":2620,"snap":"2021-21-2021-25","text_gpt3_token_len":705,"char_repetition_ratio":0.112767585,"word_repetition_ratio":0.029953917,"special_character_ratio":0.30839694,"punctuation_ratio":0.1618497,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9935881,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-07T15:35:36Z\",\"WARC-Record-ID\":\"<urn:uuid:ca919d7c-8f8f-4ea4-ade9-7e9571a7085b>\",\"Content-Length\":\"18042\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a5993aec-19d5-44a1-a2cd-a301a8d4650c>\",\"WARC-Concurrent-To\":\"<urn:uuid:8c099eb1-42f9-45df-8209-5542e5361233>\",\"WARC-IP-Address\":\"72.44.94.106\",\"WARC-Target-URI\":\"https://numbermatics.com/n/1520/\",\"WARC-Payload-Digest\":\"sha1:FCAHJ6OPOOS3CMAHLO5NLAP5NJDHRX2I\",\"WARC-Block-Digest\":\"sha1:ICTUCUZHFIBZOMOUZDMJHHTZL5EY6326\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988796.88_warc_CC-MAIN-20210507150814-20210507180814-00194.warc.gz\"}"}
https://gis.stackexchange.com/questions/317659/google-earth-engine-error-on-point-value-extraction
[ "# Google Earth Engine: Error on point value extraction\n\nI am trying to extract rainfall (CHIRPS) values for a set of locations, but I got the following error:\n\nImage.reduceRegions: Unable to find a crs\n\nThis does not happen with other datasets (such as TerraClimate).\n\nThe error arises when I start downloading the table from Tasks.\n\nHere is the link to the code I have run: https://code.earthengine.google.com/c27f2156e81824b0990dcfe0b0a6f455\n\nThe error should be here:\n\n``````// do extraction\nvar ft = ee.FeatureCollection(ee.List([]));\n\n//Function to extract values from image collection based on point file and export as a table\nvar fill = function(img, ini) {\nvar inift = ee.FeatureCollection(ini);\nvar scale = ee.Image(MM.first()).projection().nominalScale().getInfo()\nvar ft2 = img.reduceRegions(pts, ee.Reducer.first(),scale);\nvar date = img.date().format(\"YYYYMM\");\nvar ft3 = ft2.map(function(f){return f.set(\"date\", date)});\nreturn inift.merge(ft3);\n};\n\n// Iterates over the ImageCollection\nvar profile = ee.FeatureCollection(MM.iterate(fill, ft));\n``````\n• May I ask what the 'ini' means in function(img, ini), and why should I convert this 'ini' into a FeatureCollection? – Zhou XF Apr 15 at 2:40\n\n## 1 Answer\n\nSome of the images inside your collection do not contain bands. Therefore, the error is thrown. Filter out images which do not contain a band using (assuming the first image correctly contains a band):\n\n``````// Filter out empty images\nMM = MM.filter(ee.Filter.listContains('system:band_names', MM.first().bandNames().get(0)))\n``````\n\nFurthermore, I think you will need to get the scale using the following, based on this post:\n\n``````// get scale\nvar scale = MM.first().projection().nominalScale().multiply(0.05); print(scale)\n``````\n\nAs your feature collection was not shared, I drew a simple feature collection: Link script\n\n• It works perfectly. Thanks – Gianca Apr 5 at 10:02" ]
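The fix in the accepted answer is a general guard: drop collection elements that lack the expected band before reducing. The same idea in plain Python over a list of dicts (a stand-in for the image collection; the names are illustrative, not the Earth Engine API):

```python
# Toy "image collection": one entry per monthly image
images = [
    {"id": "2001_01", "bands": {"precipitation": 4.2}},
    {"id": "2001_02", "bands": {}},                      # empty image, no bands
    {"id": "2001_03", "bands": {"precipitation": 7.9}},
]

# Take the first image's first band name, then keep only images that have it,
# mirroring ee.Filter.listContains('system:band_names', ...)
first_band = next(iter(images[0]["bands"]))
usable = [img for img in images if first_band in img["bands"]]

print([img["id"] for img in usable])  # the empty image is filtered out
```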
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.55112666,"math_prob":0.80304265,"size":985,"snap":"2019-35-2019-39","text_gpt3_token_len":251,"char_repetition_ratio":0.117227316,"word_repetition_ratio":0.0,"special_character_ratio":0.26395938,"punctuation_ratio":0.20725389,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9535912,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-24T17:30:44Z\",\"WARC-Record-ID\":\"<urn:uuid:3f795a81-e107-4356-bb9c-91788c2a4a92>\",\"Content-Length\":\"135248\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aced3456-0f0f-478e-89c9-713ea840864a>\",\"WARC-Concurrent-To\":\"<urn:uuid:802119dd-5fc9-4038-b37e-213b5f9611ce>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://gis.stackexchange.com/questions/317659/google-earth-engine-error-on-point-value-extraction\",\"WARC-Payload-Digest\":\"sha1:6CJ3PSTOEPAMZUFHI3XREAFECROECS7B\",\"WARC-Block-Digest\":\"sha1:HARH52T7JTYAVRHFVJC52K2W5BKQQXEM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027321351.87_warc_CC-MAIN-20190824172818-20190824194818-00039.warc.gz\"}"}
https://www.geeksforgeeks.org/python-program-to-find-n-largest-elements-from-a-list/
[ "Python program to find N largest elements from a list\n• Difficulty Level : Easy\n• Last Updated : 24 Apr, 2020\n\nGiven a list of integers, the task is to find N largest elements, assuming the size of the list is greater than or equal to N.\n\nExamples :\n\n```Input : [4, 5, 1, 2, 9]\nN = 2\nOutput : [9, 5]\n\nInput : [81, 52, 45, 10, 3, 2, 96]\nN = 3\nOutput : [81, 96, 52]\n```\n\nA simple solution traverses the given list N times. In every traversal, find the maximum, add it to the result, and remove it from the list. Below is the implementation :\n\n```# Python program to find N largest\n# elements from given list of integers\n\n# Function returns N largest elements\ndef Nmaxelements(list1, N):\n    final_list = []\n\n    for i in range(0, N):\n        max1 = list1[0]\n\n        for j in range(len(list1)):\n            if list1[j] > max1:\n                max1 = list1[j]\n\n        list1.remove(max1)\n        final_list.append(max1)\n\n    print(final_list)\n\n# Driver code\nlist1 = [2, 6, 41, 85, 0, 3, 7, 6, 10]\nN = 2\n\n# Calling the function\nNmaxelements(list1, N)\n```\n\nOutput :\n\n`[85, 41]`\n\nTime Complexity : O(N * size) where size is the size of the given list.\n\nMethod 2:\n\n```# Python program to find N largest\n# elements from given list of integers\n\nl = [1000, 298, 3579, 100, 200, -45, 900]\nn = 4\n\nl.sort()\nprint(l[-n:])\n```\n\nOutput:\n\n```[298, 900, 1000, 3579]\n```\n\nPlease refer to k largest (or smallest) elements in an array for more efficient solutions to this problem." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5573988,"math_prob":0.9197699,"size":1515,"snap":"2021-04-2021-17","text_gpt3_token_len":484,"char_repetition_ratio":0.11647915,"word_repetition_ratio":0.06711409,"special_character_ratio":0.37623763,"punctuation_ratio":0.22089553,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97105396,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-23T22:06:27Z\",\"WARC-Record-ID\":\"<urn:uuid:ef9213e1-e2d6-47c5-93f8-c6360e0f43b0>\",\"Content-Length\":\"124828\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:77a2f218-5443-492a-accf-058f30a1ae7e>\",\"WARC-Concurrent-To\":\"<urn:uuid:67bc0af1-4e1a-4a59-a275-9097e03b2632>\",\"WARC-IP-Address\":\"23.12.144.19\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/python-program-to-find-n-largest-elements-from-a-list/\",\"WARC-Payload-Digest\":\"sha1:2C5VCNZ4ZUIG6VX3ZYLOZHHCA3ACNPGX\",\"WARC-Block-Digest\":\"sha1:LOXQFTADA33EDWH447NBK2XCBSUYV6I2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703538431.77_warc_CC-MAIN-20210123191721-20210123221721-00535.warc.gz\"}"}
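The article's two approaches (repeated max-removal, O(N·size); full sort, O(size·log size)) can also be done with the standard library's `heapq.nlargest`, which runs in O(size·log N) and does not mutate the input. This is a minimal sketch added for illustration, not part of the original article:

```python
import heapq

def n_largest(items, n):
    # heapq.nlargest returns the n largest elements in descending order,
    # without modifying the input list (unlike the remove-the-max approach).
    return heapq.nlargest(n, items)

print(n_largest([2, 6, 41, 85, 0, 3, 7, 6, 10], 2))  # -> [85, 41]
print(n_largest([81, 52, 45, 10, 3, 2, 96], 3))      # -> [96, 81, 52]
```

Like the article's first method, the result comes back in descending order, so it matches the `[85, 41]` output shown above.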
https://www.jianshu.com/p/6d16805537ef
[ "[Translated] RxJava transformation operators: comparing .concatMap( ) and .flatMap( )\n\nObservable transformation\n\npublic class DataManager {\nprivate final List<Integer> numbers;\nprivate final Executor jobExecutor;\n\npublic DataManager() {\nthis.numbers = new ArrayList<>(Arrays.asList(2, 3, 4, 5, 6, 7, 8, 9, 10));\njobExecutor = JobExecutor.getInstance();\n}\n\npublic Observable<Integer> getNumbers() {\nreturn Observable.from(numbers);\n}\n\npublic List<Integer> getNumbersSync() {\nreturn this.numbers;\n}\n\npublic Observable<Integer> squareOf(int number) {\nreturn Observable.just(number * number).subscribeOn(Schedulers.from(this.jobExecutor));\n}\n}\n\nprivate final Func1<Integer, Observable<Integer>> SQUARE_OF_NUMBER =\nnew Func1<Integer, Observable<Integer>>() {\n@Override public Observable<Integer> call(Integer number) {\nreturn dataManager.squareOf(number);\n}\n};\n\npublic Observable<Integer> squareOf(int number) {\nreturn Observable.just(number * number).subscribeOn(Schedulers.from(this.jobExecutor));\n}\n\nlogcat output\n\nComparing flatMap() and concatMap()\n\nThe flatMap() operator creates a new Observable from each event the original Observable would have emitted, and returns an Observable that itself emits the merged results. It can be applied to any event emitted by the original Observable, emitting the merged results. Keep in mind that flatMap() may interleave the emitted events, so the order of the final results may not match the order in which the original Observable emitted them. To prevent interleaving, use the similar concatMap() operator instead.\n\nMerge operator\n\nConcat operator\n\nProblem solved\n\nconcatMap() to the rescue. Replace flatMap() with concatMap() and the problem is solved. You may ask: why not read the documentation first (credit to the RxJava contributors)? Sometimes we are simply lazy and will not consult the docs until we absolutely have to. This figure shows the final result after testing (sample code can be found at the bottom):" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.8504084,"math_prob":0.7831907,"size":2854,"snap":"2019-43-2019-47","text_gpt3_token_len":1556,"char_repetition_ratio":0.17894737,"word_repetition_ratio":0.062176164,"special_character_ratio":0.19411352,"punctuation_ratio":0.12426036,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96408397,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-23T00:54:45Z\",\"WARC-Record-ID\":\"<urn:uuid:78c9389f-3514-4545-b620-db820539994c>\",\"Content-Length\":\"111749\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2d0250fb-dadb-4d25-9a29-d8d3ce79a4a2>\",\"WARC-Concurrent-To\":\"<urn:uuid:c2fa3dd6-9e5b-4d02-af47-765e5ebac442>\",\"WARC-IP-Address\":\"47.246.24.229\",\"WARC-Target-URI\":\"https://www.jianshu.com/p/6d16805537ef\",\"WARC-Payload-Digest\":\"sha1:FZCVKLCWG34SIIBVHGDTY2WRULY2YKC5\",\"WARC-Block-Digest\":\"sha1:JPCNB4KKTEXVOB7XWDXO6VW5YLWV56UB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987826436.88_warc_CC-MAIN-20191022232751-20191023020251-00206.warc.gz\"}"}
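The ordering difference the article describes (flatMap may interleave results; concatMap preserves source order) can be illustrated outside RxJava with a small Python sketch. This is a loose analogy of my own, not RxJava: `Executor.map` returns results in submission order (like concatMap), while `as_completed` yields them in completion order (like flatMap's merge):

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def square(n):
    # Simulate async work that finishes at unpredictable times.
    time.sleep(random.uniform(0, 0.05))
    return n * n

numbers = [2, 3, 4, 5, 6]

with ThreadPoolExecutor(max_workers=5) as pool:
    # Like concatMap: results come back in the order of the inputs,
    # regardless of which task finished first.
    ordered = list(pool.map(square, numbers))

    # Like flatMap's merge: results arrive in completion order,
    # which may be interleaved relative to the input order.
    futures = [pool.submit(square, n) for n in numbers]
    merged = [f.result() for f in as_completed(futures)]

print(ordered)         # always [4, 9, 16, 25, 36]
print(sorted(merged))  # same elements, but arrival order can differ run to run
```

Both streams contain the same values; only the delivery order differs, which is exactly the bug the article hits before switching to concatMap().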
https://scirp.org/journal/paperinformation.aspx?paperid=58317
[ "On a System of Second-Order Nonlinear Difference Equations\n\nAbstract\n\nThis paper is concerned with dynamics of the solution to the system of two second-order nonlinear difference equations", null, ",", null, ",", null, ", where", null, ",", null, ",", null, ", i = 0, 1. Moreover, the rate of convergence of a solution that converges to the equilibrium of the system is discussed. Finally, some numerical examples are considered to show the results obtained.\n\nKeywords\n\nShare and Cite:\n\nBao, H. (2015) On a System of Second-Order Nonlinear Difference Equations. Journal of Applied Mathematics and Physics, 3, 903-910. doi: 10.4236/jamp.2015.37110.\n\nConflicts of Interest\n\nThe authors declare no conflicts of interest.\n\n Papaschinopoulos, G. and Schinas, C.J. (1998) On a System of Two Nonlinear Difference Equations. Journal of Mathematical Analysis and Applications, 219, 415-426. http://dx.doi.org/10.1006/jmaa.1997.5829 Clark, D., Kulenovic, M.R.S. and Selgrade, J.F. (2003) Global Asymptotic Behavior of a Two-Dimensional Difference Equation Modelling Competition. Nonlinear Analysis, 52, 1765-1776. http://dx.doi.org/10.1016/S0362-546X(02)00294-8 Clark, D. and Kulenovic, M.R.S. (2003) A Coupled System of Rational Difference Equations. Computers and Mathematics with Applications, 43, 849-867. http://dx.doi.org/10.1016/S0898-1221(01)00326-1 Yang, X. (2005) On the System of Rational Difference Equations . Journal of Mathematical Analysis and Applications, 307, 305-311. http://dx.doi.org/10.1016/j.jmaa.2004.10.045 Zhang, Q., Yang, L. and Liu, J. (2012) Dynamics of a System of Rational Third-Order Difference Equation. Advances in Difference Equations, 136, 1-8. http://dx.doi.org/10.1186/1687-1847-2012-136 Zhang, Q., Liu, J. and Luo, Z. (2015) Dynamical Behavior of a System of Third-Order Rational Difference Equation. Discrete Dynamics in Nature and Society, 2015, Article ID: 530453. http://dx.doi.org/10.1155/2015/530453 Ibrahim, T.F. 
(2012) Two-Dimensional Fractional System of Nonlinear Difference Equations in the Modeling Competitive Populations. International Journal of Basic & Applied Sciences, 12, 103-121. Din, Q., Qureshi, M.N. and Khan, A.Q. (2012) Dynamics of a Fourth-Order System of Rational Difference Equations. Advances in Difference Equations, 2012, 215. http://dx.doi.org/10.1186/1687-1847-2012-215 Kocic, V.L. and Ladas, G. (1993) Global Behavior of Nonlinear Difference Equations of Higher Order with Application. Kluwer Academic Publishers, Dordrecht. http://dx.doi.org/10.1007/978-94-017-1703-8 Liu, K., Zhao, Z., Li, X. and Li, P. (2011) More on Three-Dimensional Systems of Rational Difference Equations. Discrete Dynamics in Nature and Society, 2011, Article ID: 178483. Ibrahim, T.F. and Zhang, Q. (2013) Stability of an Anti-Competitive System of Rational Difference Equations. Archives Des Sciences, 66, 44-58. Zayed E.M.E. and El-Moneam, M.A. (2011) On the Global Attractivity of Two Nonlinear Difference Equations. Journal of Mathematical Sciences, 177, 487-499. http://dx.doi.org/10.1007/s10958-011-0474-8 Touafek, N. and Elsayed, E.M. (2012) On the Periodicity of Some Systems of Nonlinear Difference Equations. Bulletin Mathématiques de la Société des Sciences Mathématiques de Roumanie, 2, 217-224. Touafek, N. and Elsayed, E.M. (2012) On the Solutions of Systems of Rational Difference Equations. Mathematical and Computer Modelling, 55, 1987-1997. http://dx.doi.org/10.1016/j.mcm.2011.11.058 Kalabusic, S., Kulenovic, M.R.S. and Pilav, E. (2011) Dynamics of a Two-Dimensional System of Rational Difference Equations of Leslie-Gower Type. Advances in Difference Equations, 2011, 29. http://dx.doi.org/10.1186/1687-1847-2011-29 Ibrahim, T.F. (2012) Boundedness and Stability of a Rational Difference Equation with Delay. Revue Roumaine de Mathématiques Pures et Appliquées, 57, 215-224. Ibrahim, T.F. and Touafek, N. (2013) On a Third-Order Rational Difference Equation with Variable Coefficients. 
DCDIS Series B: Applications & Algorithms, 20, 251-264. Ibrahim, T.F. (2013) Oscillation, Non-Oscillation, and Asymptotic Behavior for Third Order Nonlinear Difference Equations. Dynamics of Continuous, Discrete and Impulsive Systems, Series A: Mathematical Analysis, 20, 523-532. Zhang, Q. and Zhang, W. (2014) On a System of Two High-Order Nonlinear Difference Equations. Advances in Mathematical Physics, 2014, Article ID: 729273. Pituk, M. (2002) More on Poincare's and Perron's Theorems for Difference Equations. Journal of Difference Equations and Applications, 8, 201-216. http://dx.doi.org/10.1080/10236190211954", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "" ]
[ null, "https://file.scirp.org/image/Edit_b2aa5cd5-da42-4984-afac-c85ca32e5ca8.jpg", null, "https://file.scirp.org/image/Edit_746d9359-6c11-472d-afad-d30dec3491f2.jpg", null, "https://file.scirp.org/image/Edit_ab589f7b-f7ee-45a0-888a-353f0ac276d2.jpg", null, "https://file.scirp.org/image/Edit_6e0a337c-1315-456f-9b59-51d7b81a8987.jpg", null, "https://file.scirp.org/image/Edit_77ca3611-5a59-49bd-ad5c-64822edbea27.jpg", null, "https://file.scirp.org/image/Edit_b8614f00-b50a-409b-a2fb-595bb497c770.jpg", null, "https://scirp.org/images/Twitter.svg", null, "https://scirp.org/images/fb.svg", null, "https://scirp.org/images/in.svg", null, "https://scirp.org/images/weibo.svg", null, "https://scirp.org/images/emailsrp.png", null, "https://scirp.org/images/whatsapplogo.jpg", null, "https://scirp.org/Images/qq25.jpg", null, "https://scirp.org/images/weixinlogo.jpg", null, "https://scirp.org/images/weixinsrp120.jpg", null, "https://scirp.org/Images/ccby.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.59748477,"math_prob":0.41143405,"size":4015,"snap":"2021-43-2021-49","text_gpt3_token_len":1310,"char_repetition_ratio":0.19870357,"word_repetition_ratio":0.066287875,"special_character_ratio":0.37384808,"punctuation_ratio":0.28460687,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98655677,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-29T00:26:21Z\",\"WARC-Record-ID\":\"<urn:uuid:c0253407-e8b4-475c-b077-6cab169a0451>\",\"Content-Length\":\"90647\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7e076977-03f8-4ffa-a9e2-eb14d0e7d2ed>\",\"WARC-Concurrent-To\":\"<urn:uuid:9e63f355-54a3-409d-a732-4e7f1ab1f69f>\",\"WARC-IP-Address\":\"144.126.144.39\",\"WARC-Target-URI\":\"https://scirp.org/journal/paperinformation.aspx?paperid=58317\",\"WARC-Payload-Digest\":\"sha1:T2LXSMDARLO3XBG4NEIODSR73BNFF7SC\",\"WARC-Block-Digest\":\"sha1:D7YAQ6MD6DTICUY5RAGGVGD3O4VJVQXC\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358673.74_warc_CC-MAIN-20211128224316-20211129014316-00031.warc.gz\"}"}
https://eudml.org/search/page?q=sc.general*op.AND*l_0*c_0author_0eq%253A1.S.+Sengupta&qt=SEARCH
[ "## Currently displaying 1 – 9 of 9\n\nMetrika\n\nMetrika\n\nMetrika\n\n### Unbiased estimation of reliability for two-parameter exponential distribution under time censored sampling\n\nApplicationes Mathematicae\n\nThe problem considered is that of unbiased estimation of reliability for a two-parameter exponential distribution under time censored sampling. We give necessary and sufficient conditions for the existence of uniformly minimum variance unbiased estimator and also provide a characterization of a complete class of unbiased estimators in situations where unbiased estimators exist.\n\n### Estimation of the size of a closed population\n\nApplicationes Mathematicae\n\nThe problem considered is that of estimation of the size (N) of a closed population under three sampling schemes admitting unbiased estimation of N. It is proved that for each of these schemes, the uniformly minimum variance unbiased estimator (UMVUE) of N is inadmissible under square error loss function. For the first scheme, the UMVUE is also the maximum likelihood estimator (MLE) of N. For the second scheme and a special case of the third, it is shown respectively that an MLE and an estimator...\n\n### Unbiased estimation for two-parameter exponential distribution under time censored sampling\n\nApplicationes Mathematicae\n\nThe problem considered is that of unbiased estimation for a two-parameter exponential distribution under time censored sampling. We obtain a necessary form of an unbiasedly estimable parametric function and prove that there does not exist any unbiased estimator of the parameters and the mean of the distribution. For reliability estimation at a specified time point, we give a necessary and sufficient condition for the existence of an unbiased estimator and suggest an unbiased estimator based on a...\n\nMetrika\n\nMetrika" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8726098,"math_prob":0.9548276,"size":1441,"snap":"2021-21-2021-25","text_gpt3_token_len":263,"char_repetition_ratio":0.18162839,"word_repetition_ratio":0.110091746,"special_character_ratio":0.1721027,"punctuation_ratio":0.07438017,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9870169,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-24T18:39:49Z\",\"WARC-Record-ID\":\"<urn:uuid:369dbf02-e275-4794-a0db-cfa513f4a4ec>\",\"Content-Length\":\"72070\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:00330c06-bb17-492c-aab9-6b7d21f50a69>\",\"WARC-Concurrent-To\":\"<urn:uuid:1f19ccf8-abf3-4350-853c-56bf2a0b5a8b>\",\"WARC-IP-Address\":\"213.135.60.110\",\"WARC-Target-URI\":\"https://eudml.org/search/page?q=sc.general*op.AND*l_0*c_0author_0eq%253A1.S.+Sengupta&qt=SEARCH\",\"WARC-Payload-Digest\":\"sha1:D6PV3TUII46F6U3QK5TBLSI6LG6N2SG2\",\"WARC-Block-Digest\":\"sha1:E72EU47H4UHYCV5NCN5RKWQBCBLESAK3\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488556482.89_warc_CC-MAIN-20210624171713-20210624201713-00240.warc.gz\"}"}
https://linuxtut.com/en/b759028d3609a11072e0/
[ "# [JAVA] I want to perform Group By processing with Stream (group-by-count, group-by-sum, group-by-max)\n\nFor example, suppose a user and the amount paid by that user are given in the following form:\n\n``````public class Payment {\n\npublic static void main(String[] args) {\nvar payments = List.of(\nnew Payment(\"A\", 10),\nnew Payment(\"B\", 20),\nnew Payment(\"B\", 30),\nnew Payment(\"C\", 40),\nnew Payment(\"C\", 50),\nnew Payment(\"C\", 60)\n);\n}\n\nprivate String name;\nprivate int value;\n\npublic Payment(String name, int value) {\nthis.name = name;\nthis.value = value;\n}\npublic String getName() { return name; }\npublic int getValue() { return value; }\n}\n``````\n\nNow, for each user, we want to find the number of payments, the total amount paid, or the maximum amount. In SQL, you can easily find these by combining `GROUP BY` with aggregate functions, but in Java Stream, how should you write it?\n\n``````select name, count(*) from payment group by name;\nselect name, sum(value) from payment group by name;\nselect name, max(value) from payment group by name;\n``````\n\nThe overall policy is to use `Collectors.groupingBy`. First, the number of payments, that is, `group-by-count`.\n\n``````var counts = payments.stream().collect(Collectors.groupingBy(Payment::getName, Collectors.counting()));\ncounts.entrySet().stream().map(e -> e.getKey() + \"=\" + e.getValue()).forEach(System.out::println);\n// A=1\n// B=2\n// C=3\n``````\n\nThere is a well-known method called `Collectors.counting`, so it seems good to use it. Next, the total amount paid. 
The point is `group-by-sum`, but this also has a method with the descriptive name `Collectors.summingInt`, so just use it.\n\n``````var sums = payments.stream().collect(Collectors.groupingBy(Payment::getName, Collectors.summingInt(Payment::getValue)));\nsums.entrySet().stream().map(e -> e.getKey() + \"=\" + e.getValue()).forEach(System.out::println);\n// A=10\n// B=50\n// C=150\n``````\n\nFinally, \"maximum amount paid\" = `group-by-max`, which I personally find the most debatable. As a basic policy, it seems quickest to use `Collectors.maxBy`.\n\n``````var maxs = payments.stream().collect(Collectors.groupingBy(Payment::getName, Collectors.maxBy(Comparator.comparingInt(Payment::getValue))));\nmaxs.entrySet().stream().map(e -> e.getKey() + \"=\" + e.getValue().get().getValue()).forEach(System.out::println);\n// A=10\n// B=30\n// C=60\n``````\n\nAt this time, the type of the variable `maxs` is `Map<String, Optional<Payment>>`. `Optional` is like a marker to remind you that a value may be null, but here, in the business logic, the values in `maxs` can never be null. In short, `Optional` doesn't add much meaning here, so I want to get rid of it. In other words, I want the type of `maxs` to be `Map<String, Payment>`; in such a case, it seems quickest to do the following.\n\n``````var maxs = payments.stream().collect(Collectors.groupingBy(Payment::getName, Collectors.collectingAndThen(Collectors.maxBy(Comparator.comparing(Payment::getValue)), Optional::get)));\nmaxs.entrySet().stream().map(e -> e.getKey() + \"=\" + e.getValue().getValue()).forEach(System.out::println);\n// A=10\n// B=30\n// C=60\n``````\n\nHowever, at this point it starts to feel like black magic, so I want to use it in moderation (´ ・ ω ・ `)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7551405,"math_prob":0.791863,"size":3027,"snap":"2023-40-2023-50","text_gpt3_token_len":782,"char_repetition_ratio":0.14125042,"word_repetition_ratio":0.059808612,"special_character_ratio":0.2831186,"punctuation_ratio":0.24846625,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.967807,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-29T23:58:05Z\",\"WARC-Record-ID\":\"<urn:uuid:bbe848f2-48fc-4dd2-ae88-8ce32dda2aa9>\",\"Content-Length\":\"18817\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e79cf284-c9e8-4276-86c2-3c3fa645fe19>\",\"WARC-Concurrent-To\":\"<urn:uuid:4b812974-21db-4fd4-8ac9-18c1809e0c2f>\",\"WARC-IP-Address\":\"104.21.71.77\",\"WARC-Target-URI\":\"https://linuxtut.com/en/b759028d3609a11072e0/\",\"WARC-Payload-Digest\":\"sha1:IB3SBECC5DVLTNVWRLDG2ZFSIU2GH2P3\",\"WARC-Block-Digest\":\"sha1:FPWW62QQ2WR2XXAXCJUPAW53XDR6IEUQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510529.8_warc_CC-MAIN-20230929222230-20230930012230-00382.warc.gz\"}"}
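For comparison, the same three group-by aggregations (count, sum, max) can be sketched in Python with a plain dictionary pass — a rough, illustrative equivalent of `Collectors.groupingBy`, reusing the article's sample data:

```python
from collections import defaultdict

payments = [("A", 10), ("B", 20), ("B", 30), ("C", 40), ("C", 50), ("C", 60)]

counts = defaultdict(int)  # group-by-count
sums = defaultdict(int)    # group-by-sum
maxs = {}                  # group-by-max

for name, value in payments:
    counts[name] += 1
    sums[name] += value
    # dict.get with the current value as default avoids the Optional-style
    # "no value yet" problem the Java version runs into.
    maxs[name] = max(maxs.get(name, value), value)

print(dict(counts))  # {'A': 1, 'B': 2, 'C': 3}
print(dict(sums))    # {'A': 10, 'B': 50, 'C': 150}
print(maxs)          # {'A': 10, 'B': 30, 'C': 60}
```

Note how the `maxs.get(name, value)` default sidesteps the empty-group case entirely, which is the role `Optional` (and `collectingAndThen(..., Optional::get)`) plays in the Java version.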
https://www.psenterprise.com/concepts/model-targeted-experimentation
[ "# Model-targeted experimentation\n\n## Performing experiments to improve the model, not the process\n\nThe guiding principle of model-targeted experimentation (also known as model-centric experimentation) is that experimentation is used to improve the accuracy of the model rather than to improve the process itself. The model is then used to optimize the process design or operation.\n\nIf the empirical parameters – for example, reaction kinetic parameters, heat transfer coefficients – of a model are made as accurate and scale-invariant as possible, the model can be used to examine a wide design space much more rapidly and effectively than via experimentation alone.\n\nModel-targeted experimentation is also a key component of model-based innovation (MBI), a similar procedure applied at R&D level typically for early-stage process or product development.\n\n## What does model-targeted experimentation involve?\n\nIn a nutshell, model-targeted experimentation starts with a high-fidelity predictive model of the experimentation process and the key phenomena being studied.\n\nExperimental data is fitted to these models to calculate parameter values. Related model-based data analysis facilities also provide parameter confidence information, showing how accurately the proposed model represents what is observed.\n\nIf there is significant uncertainty in the parameters:\n\n• the model may not accurately reflect what is happening in real life. For example, a proposed reaction set may be missing one or two key reactions.\n• the experimental data may not contain sufficient or sufficiently-accurate data to calculate accurate and independent parameter values. In this case, the confidence information can be used as a guide in the design of subsequent experiments targeted at maximizing parameter accuracy.\n\n## Model-targeted experimentation – step-by-step\n\nThe key steps in model-targeted experimentation are:", null, "### Step 1. 
Construct first-principles models of the experiment", null, "This involves building a first-principles model of the experimental setup used for gathering data, including representation of the key fundamental phenomena being studied (e.g. a detailed reaction set model).\n\nFor example, the model of a simple bench-scale stirred reactor experiment may involve:\n\n• a model of the stirred tank, complete with heat and material balances\n• a model of the cooling/heating jacket, with appropriate heat transfer equations\n• the reaction rate equations and species balance for the reactions occurring in the vessel.\n\nThe experiment model is constructed in a modular form, allowing the components to be easily implemented within the full equipment model in Step 3.\n\nInitial reaction kinetic parameter values, if not readily available, are typically found by literature search. The initial values are progressively refined in the steps below.\n\nModels may be written from scratch – for example, using equations found in research literature – but are typically taken from libraries such as the PSE Advanced Model Library for Fixed-Bed Catalytic Reactors (AML:FBCR).\n\nAn important note: when conducting model-targeted experiments, it is essential to use small-scale experimental apparatus where the phenomena being studied can be isolated as far as possible.\n\nFor example, experiments aimed at determining reaction kinetic parameter values should be as close to isothermal as possible to minimise temperature effects.\n\nLikewise, equipment should be small-scale to minimise the impact of mixing effects on results.\n\n### Step 2(a). 
Estimate the model parameters from data and analyse uncertainty", null, "The model constructed in Step 1 is used to estimate model parameters – typically kinetic constants or heat transfer coefficients – from initial experimental data using gPROMS’ parameter estimation techniques.\n\nIn addition to parameter values, model-based data analysis techniques built into the parameter estimation facility also yield estimates of the accuracy of these values in the form of confidence intervals, as well as estimates of the error behaviour of the measurement instruments.\n\nThis information is used to determine whether the parameters are sufficiently accurate for subsequent design and operational purposes. It is in fact possible to relate key performance indicators (KPIs) for the process back to uncertainty in parameter values, providing an indication of where further experimentation should be concentrated in order to minimise subsequent design risk.\n\n### Step 2(b). Design additional experiments, if necessary", null, "If the data analysis in Step 2 identifies areas of data uncertainty that are not within acceptable risk limits, additional experiments may need to be carried out.\n\nThese experiments can now be designed specifically to maximize information in the areas of interest, either informally or by using the model-based experiment design (MBED) techniques provided as a gPROMS option.\n\nMBED takes advantage of the significant amount of information that is already available in the form of the  mathematical model to design the optimal next experiment that yields the maximum amount of parameter information (i.e. to minimise the uncertainty in the estimated parameters).\n\nMBED helps to achieve the required parameter accuracy with the minimum number – and hence time and cost – of experiments.\n\nModel-targeted experimentation may require a shift in thinking. 
Rather than aiming experiments at, for example, maximizing the yield of the main product, more useful model information may be obtained by maximizing the yield of an impurity. The latter provides richer information for the model in characterising side reactions, which will be important in subsequent optimization calculations.\n\n### Step 2(c). Execute experiments and iterate if necessary", null, "Now carry out the experiment designed in Step 2(b). Repeat Steps 2(a)-2(b)-2(c)-2(a) until parameter values are within acceptable accuracy.\n\n## Repeating the cycle at different scales\n\nSteps 1 and 2 may be repeated to develop sub-models at different scales, covering different phenomena. Typically this proceeds in sequential fashion, keeping the parameters previously estimated constant at each subsequent stage.\n\nFor example, in a catalytic bed reactor, initial experiments may focus on small samples of catalyst in isolated conditions. The kinetic parameters estimated from these experiments are then fixed in subsequent experiments – for example to determine bed heat transfer characteristics.\n\n### Moving on to Step 3", null, "If experiments have been conducted under suitable conditions (e.g. small-scale, isothermal experiments for determining kinetic parameters), at the end of Step 2 you will have:\n\n• high-fidelity models of all key phenomena that closely represent the experimental data\n• scale-invariant parameters, meaning that the model can predict phenomena accurately over a range of scales and conditions\n• a good understanding of where the design risk lies and where further experimentation – if any – is required.\n\nThe model has effectively been validated against experimental data. Now, and only now, are you ready to build the full equipment model and proceed with design or operational analysis.", null, "" ]
[ null, "https://psewp.wpfuel.co.uk/wp-content/uploads/2021/01/mbe_cycle-1.png", null, "https://psewp.wpfuel.co.uk/wp-content/uploads/2021/01/mbe_step1.png", null, "https://psewp.wpfuel.co.uk/wp-content/uploads/2021/01/mbe_step2a.png", null, "https://psewp.wpfuel.co.uk/wp-content/uploads/2021/01/mbe_step2b.png", null, "https://psewp.wpfuel.co.uk/wp-content/uploads/2021/01/mbe_step2c.png", null, "https://psewp.wpfuel.co.uk/wp-content/uploads/2021/01/mbe_step3.png", null, "https://www.psenterprise.com/concepts/model-targeted-experimentation", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8690666,"math_prob":0.91909015,"size":7091,"snap":"2021-31-2021-39","text_gpt3_token_len":1263,"char_repetition_ratio":0.16410328,"word_repetition_ratio":0.0,"special_character_ratio":0.1745875,"punctuation_ratio":0.08089501,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96901584,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,4,null,4,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-29T02:25:49Z\",\"WARC-Record-ID\":\"<urn:uuid:6909cb2b-4b9a-41f1-a2d4-9b84092249ef>\",\"Content-Length\":\"57438\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6b6e2ff7-cd43-4885-be33-f05770e1d7f5>\",\"WARC-Concurrent-To\":\"<urn:uuid:95c597b4-8aee-458a-ba71-668f45599565>\",\"WARC-IP-Address\":\"104.21.27.172\",\"WARC-Target-URI\":\"https://www.psenterprise.com/concepts/model-targeted-experimentation\",\"WARC-Payload-Digest\":\"sha1:OM2JK723ZVPQGASLYCPA7QS7ERGGTV3M\",\"WARC-Block-Digest\":\"sha1:Q3S4AFUU4VY6UDRQPF5UEKVY6NSBBNU4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780061350.42_warc_CC-MAIN-20210929004757-20210929034757-00434.warc.gz\"}"}
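Step 2(a)'s idea — estimate parameters from experimental data and quantify their uncertainty — can be sketched in miniature with NumPy. This is a generic illustration under assumed conditions (synthetic isothermal first-order reaction data, a linear least-squares fit, a rough confidence interval from the parameter covariance), not gPROMS' actual estimation machinery:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic isothermal batch data for a first-order reaction:
# C(t) = C0 * exp(-k t), so ln C is linear in t with slope -k.
k_true, C0 = 0.30, 1.0
t = np.linspace(0.0, 10.0, 20)
C = C0 * np.exp(-k_true * t) * (1 + 0.01 * rng.standard_normal(t.size))

# Linear least-squares fit of ln C vs t; cov=True also returns the
# parameter covariance matrix, from which confidence intervals follow.
(slope, intercept), cov = np.polyfit(t, np.log(C), 1, cov=True)
k_est = -slope
k_stderr = np.sqrt(cov[0, 0])

print(f"k = {k_est:.3f} +/- {2 * k_stderr:.3f} (approx. 95% interval)")
```

A wide interval here would play the role of the "significant uncertainty" case above, signalling that further (or better-designed) experiments are needed before the parameter is trusted in design calculations.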
https://www.xtcsyb.com/wlbc/430.html
[ "# python魔法方法-属性转换和类的表示详解,python详\n\n``````D:Python27python.exe D:/HttpRunnerManager-master/HttpRunnerManager-master/test.py\nFalse\nFalse\n12\n234\nFalse\n``````\n\n## python魔法方法-属性转换和类的表示详解,python详解\n\n•__int__(self)\n\n•转换成整型,对应int函数。\n\n•__long__(self)\n\n•转换成长整型,对应long函数。\n\n•__float__(self)\n\n•转换成浮点型,对应float函数。\n\n•__complex__(self)\n\n•转换成 复数型,对应complex函数。\n\n•__oct__(self)\n\n•转换成八进制,对应oct函数。\n\n•__hex__(self)\n\n•转换成十六进制,对应hex函数。\n\n•__index__(self)\n\n•首先,这个方法应该返回一个整数,可以是int或者long。这个方法在两个地方有效,首先是 operator 模块中的index函数得到的值就是这个方法的返回值,其次是用于切片操作,下面会专门进行代码演示。\n\n•__trunc__(self)\n\n•当 math.trunc(self) 使用时被调用.__trunc__返回自身类型的整型截取 (通常是一个长整型).\n\n•__coerce__(self, other)\n\n•实现了类型的强制转换,这个方法对应于 coerce 内建函数的结果(python3.0开始去掉了此函数,也就是该魔法方法也没意义了,至于后续的版本是否重新加入支持,要视官方而定。)\n\n•这个函数的作用是强制性地将两个不同的数字类型转换成为同一个类型,例如:", null, "``````class Foo(object):\ndef __init__(self, x):\nself.x = x\n\ndef __int__(self):\nreturn int(self.x) + 1\n\ndef __long__(self):\nreturn long(self.x) + 1\n\na = Foo(123)\nprint int(a)\nprint long(a)\nprint type(int(a))\nprint type(long(a))\n``````", null, "``````def __int__(self):\nreturn str(self.x)\n``````", null, "``````def __int__(self):\nreturn list(self.x)\n``````", null, "``````class Foo(object):\ndef __init__(self, x):\nself.x = x\n\ndef __int__(self):\nreturn long(self.x) + 1\n\ndef __long__(self):\nreturn int(self.x) + 1\n\na = Foo(123)\nprint int(a)\nprint long(a)\nprint type(int(a))\nprint type(long(a))\n``````", null, "__index__(self):\n\n``````import operator\n\nclass Foo(object):\ndef __init__(self, x):\nself.x = x\n\ndef __index__(self):\nreturn self.x + 1\n\na = Foo(10)\nprint operator.index(a)\n``````", null, "``````class Foo(object):\ndef __init__(self, x):\nself.x = x\n\ndef __index__(self):\nreturn 3\n\na = Foo('scolia')\nb = [1, 2, 3, 4, 5]\nprint b[a]\nprint b\n``````", null, "``````class Foo(object):\ndef __init__(self, x):\nself.x = x\n\ndef __index__(self):\nreturn int(self.x)\n\na = Foo('1')\nb = Foo('3')\nc = [1, 2, 3, 4, 
5]\nprint c[a:b]\n``````", null, "``````a = Foo('1')\nb = Foo('3')\nc = slice(a, b)\nprint c\nd = [1, 2, 3, 4, 5]\nprint d[c]\n``````\n\n__coerce__(self, other):\n\n``````class Foo(object):\ndef __init__(self, x):\nself.x = x\n\ndef __coerce__(self, other):\nreturn self.x, str(other.x)\n\nclass Boo(object):\ndef __init__(self, x):\nself.x = x\n\ndef __coerce__(self, other):\nreturn self.x, int(other.x)\n\na = Foo('123')\nb = Boo(123)\nprint coerce(a, b)\nprint coerce(b, a)\n``````", null, "总结:是调用了第一个参数的魔法方法。\n\n•__str__(self)\n\n•定义当 str() 被你的一个类的实例调用时所要产生的行为。因为print默认调用的就是str()函数。\n\n•__repr__(self)\n\n•定义当 repr()  被你的一个类的实例调用时所要产生的行为。 str() 和 repr() 的主要区别是其目标群体。 repr() 返回的是机器可读的输出,而 str() 返回的是人类可读的。  repr() 函数是交换模式默认调用的\n\n•函数。\n\n•__unicode__(self)\n\n•定义当 unicode() 被你的一个类的实例调用时所要产生的行为。 unicode() 和 str() 很相似,但是返回的是unicode字符串。注意,如果对你的类调用 str() 然而你只定义了 __unicode__() ,那么其将不会\n\n•工作。你应该定义 __str__() 来确保调用时能返回正确的值,并不是每个人都有心情去使用unicode()。\n\n•__format__(self, formatstr)\n\n•定义当你的一个类的实例被用来用新式的格式化字符串方法进行格式化时所要产生的行为。例如, \"Hello, {0:abc}!\".format(a) 将会导致调用 a.__format__(\"abc\") 。这对定义你自己的数值或字符串类型\n\n•是十分有意义的,你可能会给出一些特殊的格式化选项。\n\n•__hash__(self)\n\n•定义当 hash()被你的一个类的实例调用时所要产生的行为。它必须返回一个整数,用来在字典中进行快速比较。\n\n•请注意,实现__hash__时通常也要实现__eq__。有下面这样的规则:a == b 暗示着 hash(a) == hash(b) 。也就是说两个魔法方法的返回值最好一致。\n\n•这里引入一个‘可哈希对象'的概念,首先一个可哈希对象的哈希值在其生命周期内应该是不变的,而要得到哈希值就意味要实现__hash__方法。而哈希对象之间是可以比较的,这意味着要实现__eq__或\n\n•者__cmp__方法,而哈希对象相等必须其哈希值相等,要实现这个特性就意味着__eq__的返回值必须和__hash__一样。\n\n•可哈希对象可以作为字典的键和集合的成员,因为这些数据结构内部使用的就是哈希值。python中所有内置的不变的对象都是可哈希的,例如元组、字符串、数字等;而可变对象则不能哈希,例如列表、\n\n•字典等。\n\n•用户定义的类的实例默认是可哈希的,且除了它们本身以外谁也不相等,因为其哈希值来自于 id 函数。但这并不代表 hash(a) == id(a),要注意这个特性。\n\n•__nonzero__(self)\n\n•定义当 bool() 被你的一个类的实例调用时所要产生的行为。本方法应该返回True或者False,取决于你想让它返回的值。(python3.x中改为__bool__)\n\n•__dir__(self)\n\n•定义当 dir() 被你的一个类的实例调用时所要产生的行为。该方法应该返回一个属性的列表给用户。\n\n•__sizeof__(self)\n\n•定义当 sys.getsizeof() 被你的一个类的实例调用时所要产生的行为。该方法应该以字节为单位,返回你的对象的大小。这通常对于以C扩展的形式实现的Python类更加有意义,其有助于理解这些扩展。\n\n``````print 
to_int('str')\nprint to_int('str123')\nprint to_int('12.12')\nprint to_int('234')\nprint to_int('12#\\$%%')\n``````\n\npython学习3群:563227894\n\n``````def to_int(str):\ntry:\nint(str)\nreturn int(str)\nexcept ValueError: #报类型错误,说明不是整型的\ntry:\nfloat(str) #用这个来验证,是不是浮点字符串\nreturn int(float(str))\nexcept ValueError: #如果报错,说明即不是浮点,也不是int字符串。 是一个真正的字符串\nreturn False\n``````\n\n``````s = \"12\"\ns = \"12.12\"\n``````" ]
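Much of the machinery above changed in Python 3: `__long__`, `__oct__`, `__hex__` and `__coerce__` are gone, `oct()`, `hex()`, `bin()` and slicing now all go through `__index__`, and `bool()` calls `__bool__` instead of `__nonzero__`. A minimal Python 3 sketch of the surviving hooks (the `Quantity` class is invented for illustration):

```python
import operator

class Quantity:
    """Toy wrapper showing the conversion hooks that survive in Python 3."""

    def __init__(self, x):
        self.x = x

    def __int__(self):       # int(q)
        return int(self.x)

    def __float__(self):     # float(q)
        return float(self.x)

    def __index__(self):     # operator.index(q), slicing, hex()/oct()/bin()
        return int(self.x)

    def __bool__(self):      # bool(q); was __nonzero__ in Python 2
        return self.x != 0

q = Quantity(3)
print(int(q), float(q), hex(q), operator.index(q))  # 3 3.0 0x3 3
print([10, 20, 30, 40][q])                          # 40
print(bool(Quantity(0)))                            # False
```

Note that a single `__index__` now covers everything index-like, so a class rarely needs more than `__index__`, `__int__`, `__float__` and `__bool__`.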
[ null, "http://www.bkjia.com/uploads/allimg/160729/0305203413-0.png", null, "http://www.bkjia.com/uploads/allimg/160729/0305205500-1.png", null, "http://www.bkjia.com/uploads/allimg/160729/0305201956-2.png", null, "http://www.bkjia.com/uploads/allimg/160729/03052021U-3.png", null, "http://www.bkjia.com/uploads/allimg/160729/03052012L-4.png", null, "http://files.jb51.net/file_images/article/201607/201607220847308.png", null, "http://www.bkjia.com/uploads/allimg/160729/0305205P9-6.png", null, "http://www.bkjia.com/uploads/allimg/160729/0305202023-7.png", null, "http://www.bkjia.com/uploads/allimg/160729/0305205018-8.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.85359806,"math_prob":0.9954821,"size":5104,"snap":"2019-43-2019-47","text_gpt3_token_len":3069,"char_repetition_ratio":0.15078431,"word_repetition_ratio":0.21377672,"special_character_ratio":0.29290754,"punctuation_ratio":0.14108911,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9867558,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-19T08:17:40Z\",\"WARC-Record-ID\":\"<urn:uuid:48d35a7f-adc4-4867-99ef-74adfcc09a5a>\",\"Content-Length\":\"19462\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4d56439f-ab2a-4647-b4a3-666c004e8c93>\",\"WARC-Concurrent-To\":\"<urn:uuid:fd01d0a5-f9aa-4630-abfa-bf8dbd1e422c>\",\"WARC-IP-Address\":\"107.175.199.63\",\"WARC-Target-URI\":\"https://www.xtcsyb.com/wlbc/430.html\",\"WARC-Payload-Digest\":\"sha1:EJS3RI2K2Q2RUK2DZPTBWRKGZV4OAI53\",\"WARC-Block-Digest\":\"sha1:U3IUAOOMHL7HS32PJIGW77FHZQSVIDEE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670036.23_warc_CC-MAIN-20191119070311-20191119094311-00369.warc.gz\"}"}
https://www.nature.com/articles/s41598-018-37471-0
[ "## Introduction\n\nWater, an essential constituent of life1, remains an elusive target for modeling and simulation. Effective coarse-grained (CG) models of liquid water must balance computational savings, by handling fewer degrees of freedom, while at the same time capturing its essential physical properties2,3,4,5,6. CG water models have enabled simulations exceeding micro-meters/seconds that are relevant for processes in biophysical systems that are beyond the reach of conventional atomistic molecular dynamics (MD) simulations. CG modeling entails recasting the complex and detailed atomistic model into a simpler yet accurate representation. A CG model has the ability to model key quantities of interest (QoI) when it captures the effects of the eliminated degrees of freedom (DOFs)7,8,9. The CG process requires: (i) The identification of the system’s optimal resolution. Commonly, groups of atoms are described with pseudo-atoms/interaction sites and a “mapping” function is used to determine the relation between these sites and the atomistic coordinates. For a given system various coarse-graining levels can be employed. For example, existing CG lipid membrane models range from representations with a single anisotropic site10 to three sites per lipid thus differentiating between the head and the tail11,12, or by grouping three or four heavy atoms into beads thus capturing varying degrees of chemical properties13,14,15,16; (ii) The specification of the associated Hamiltonian. Here, DOFs can be reduced by simplifying the form or by neglecting specific terms in the Hamiltonian. For instance, one can neglect the bond/angle vibrations and resort to rigid models17.\n\nIn effective CG models, the removed DOFs are insignificant for the QoI. However, to what degree a specific DOF is negligible for a given observable is hardly ever known beforehand. Thus, the majority of the CG models are designed based on intuition or extrapolations from existing models. 
Additionally, the number of the removed DOFs must be large so that the diminished accuracy compared to the AT models is justified with the substantial computational gains. It is usually assumed that increasing the level of coarse-graining will decrease the model’s accuracy. However, and this is perhaps a key issue, the relation between the number of employed DOF in a model and its accuracy may not be a monotonic function7,18,19. Thus, one can end up (without realizing) in a worst case scenario, where the constructed model is redundant, i.e., better accuracy can be achieved with fewer DOFs (computationally less demanding model). For example, the two-site and four-site models of n-hexane molecule perform reasonably well, while a very similar three-site model fails19. A number of works have addressed the systematic selection of CG models in bio-molecular systems20,21,22,23,24,25.\n\nThe challenge of striking the optimal balance between accuracy and computational cost is crucial for CG models of water. At the same time, obtaining water-water interactions consumes the majority of the computational effort. Thus, many CG models of water were developed. These models differ in the coarse-graining resolution level, i.e., the mapping, which ranges from 1 to 11 water molecules per CG bead2,26. Models also differ in the employed Hamiltonian. For CG models where one bead represents one water molecule (1-to-1 mapping), the Hamiltonian is either derived from the atomistic simulations9,27 or parametrized based on analytic potentials ranging from a simple Lennard-Jones (LJ) to potentials incorporating tetrahedral ordering, dipole moment, and orientation-dependent hydrogen bonding interactions28,29,30,31. On a higher coarse-graining level, it was soon realized that chargeless models, such as the standard MARTINI model32,33, introduce unphysical features when applied to interfaces, such as an interface between water and a lipid membrane34,35,36. 
Thus, new CG models were developed which treat the electrostatics explicitly. In the PCGS model (3-to-1)37, the CG beads carry induced dipoles, in the polarizable MARTINI model (4-to-3)34 the electrostatic is modeled analog to the Drude oscillator, in the BMW model (4-to-3)35 the CG representation resembles a rigid water molecule with a fixed dipole and quadrupole moment, while the GROMOS CG model (5-to-2)38 introduces explicit charges with a fluctuating dipole. Note that in these models the extra interaction sites have no relation to the physical system making the intuitive construction of the model even more difficult.\n\nThus far, studies reporting the effects of the choices made in the coarse-graining level and model structure are relatively few. For water, the mapping was investigated by Hadley et al.39, where the investigated CG models were single-site models and the Hamiltonian was parameterized to reproduce the structural properties of water. The mapping 4-to-1 was found to give the optimal balance between efficiency and accuracy. However, by comparing the properties of the available water models it is hard to extract any physics as the models were developed to reproduce different properties. Furthermore, one should avoid artificially constructed scoring functions that could be biased but rather perform model selection based on rigorous mathematical foundation. In this respect, the Bayesian statistical framework can serve as a powerful tool which has become a popular technique to refine, guide and critically assess the MD models40,41,42,43,44,45.\n\nIn this work, we employ the Bayesian statistical framework to critically assess many CG water models (see Fig. 1). We investigate the biologically relevant CG resolution levels, i.e., mappings, where the number of grouped water molecules ranges from 1 to 6. 
At each resolution, multiple model structures are examined ranging from 1-site to 3-site models, where we additionally investigate the rigid and flexible versions of the 2- and 3-site models for mapping M = 4. Our main objective is to determine the model evidence for all models and thus elucidate the impact of the mapping on the model's performance and the relevant DOFs in CG modeling of water. Furthermore, we evaluate the speed-up for each developed model, which allows us to assess the efficiency-accuracy trade-off. Lastly, we investigate the transferability of the water models to different thermodynamic states, i.e., to different temperatures. To this end, we employ the hierarchical Bayesian framework [46] that can accurately quantify the uncertainty in the parameter space for multiple QoI, i.e., different properties or the same property at different conditions.

## Methods

We investigate a set of CG water models (partially shown in Fig. 1). For all models, we employ the interactions that are implemented in the standard MD packages. In the 1S model, a water cluster is modeled with a single chargeless spherical particle employing the LJ potential U_LJ(r_ij) = 4ε[(σ/r_ij)^12 − (σ/r_ij)^6] between particles i and j. The model parameters are $${\\phi }_{1S}=(\\sigma ,\\epsilon )$$. The 2S model is a two-site model, where the sites are oppositely charged (±q) and constrained to a distance r_0. The negatively charged (blue in Fig. 2) site interacts additionally with the LJ potential. The model parameters are $${\\phi }_{2S}=(\\sigma ,\\epsilon ,q,{r}_{0})$$. In order to satisfy the net neutrality of the water cluster, the three-site model can be constructed in two ways, which we denote as 3S and 3S* models. The 3S model resembles a big water molecule where all three particles are charged. The central (blue) site has a charge of −q, and the other two sites have a charge of +q/2. In the 3S* model, the central site is chargeless and the other two carry a ±q charge.
Both three-site models have the parameters $${\\phi }_{3S\\mathrm{,3}S\\ast }=(\\sigma ,\\epsilon ,q,{r}_{0},{\\vartheta }_{0})$$. For all rigid model structures, we consider four levels of resolution with the number of grouped water molecules equal to 3, 4, 5, and 6. For the 1S model, we additionally consider the M = 1 mapping, while for the models with partial charges we investigate also M = 12. The level of resolution fixes the total mass of the CG representation. The mass ratio between the interaction sites in the two- and three-site models is fixed to 2, with the central particle carrying the larger mass. The electrostatics is in all cases modeled with the Coulomb interaction U_e(r_ij) = q_i q_j / (4πεε_0 r_ij), where we set the global dielectric screening to ε = 2.5. For M = 4, we consider also the flexible analogs of the models with charges. In the 2SF model, the two sites interact with a harmonic potential U_b(r_ij) = k_b (r_ij − r_0)^2 with force constant k_b. Therefore, the model parameters are $${\\phi }_{2SF}=(\\sigma ,\\epsilon ,q,{r}_{0},{k}_{b})$$. For the flexible three-site models 3SF and 3SF*, the angle is unconstrained and modeled with the harmonic angle potential U_a(ϑ_ij) = k_a (ϑ_ij − ϑ_0)^2, thus adding the force constant k_a to the parameter set, i.e., the model parameters are $${\\phi }_{3SF\\mathrm{,3}SF\\ast }=(\\sigma ,\\epsilon ,q,{r}_{0},{\\vartheta }_{0},{k}_{a})$$.

We remark that the data used as target QoI is part of the modeling choice. In this work, we use experimental data of density, dielectric constant, surface tension, isothermal compressibility, and shear viscosity, i.e., mostly thermodynamic properties. These are deemed of key importance for biophysical systems. The data used and the properties of the reference coarse-grained water models are reported in Table 1. The structural properties, e.g. radial distribution function, or the dynamical properties, e.g. 
diffusion constant were not considered in this work because these properties cannot be measured experimentally for M > 1.\n\n### Uncertainty Quantification\n\n#### Bayesian Framework\n\nWe consider a computational model $${\\mathscr{C}}$$ that depends on a set of parameters $${\\phi }_{c}\\in {{\\mathbb{R}}}^{{N}_{\\phi }}$$ and a set of input variables or conditions $${\\boldsymbol{x}}\\in {{\\mathbb{R}}}^{{N}_{x}}$$. In the context of the current work, the computational model is the molecular dynamics solver, the model parameters correspond to the parameters of the potential and the input variables to the temperature of the system. Moreover, we consider an observable function $$F({\\boldsymbol{x}};\\,{\\phi }_{c})\\in {{\\mathbb{R}}}^{N}$$ that represents the output of the computational model. Here, the observable function is an equilibrium property of the system, e.g., the density. We are interested in inferring the parameters $${\\phi }_{c}$$ based on the a set of experimental data d = {di| i = 1, …, N} that correspond to the fixed input parameters of the model x.\n\nIn the frequentist statistics framework, the parameters of the model are obtained by optimizing a distance of the model from the data, usually the likelihood function. In the Bayesian framework, the parameters follow a conditional distribution which is given by Bayes’ theorem,\n\n$$p(\\phi |{\\boldsymbol{d}}, {\\mathcal M} )=\\frac{p({\\boldsymbol{d}}|\\phi , {\\mathcal M} )\\,p(\\phi | {\\mathcal M} )}{p({\\boldsymbol{d}}| {\\mathcal M} )},$$\n(1)\n\nwhere $$p({\\boldsymbol{d}}|\\phi , {\\mathcal M} )$$ is the likelihood function, $$p(\\phi | {\\mathcal M} )$$ is the prior probability distribution and $$p({\\boldsymbol{d}}| {\\mathcal M} )$$ is the model evidence. Here, $$\\phi$$ is the vector containing the computational model parameters $${\\phi }_{c}$$ and any other parameters needed for the definition of the likelihood or the prior density. 
$${\\mathcal M}$$ stands for the model under consideration and contains all the information that describes the computational and the statistical model.\n\nThe likelihood function is a measure of how likely is that the data d are produced by the computational model $${\\mathscr{C}}$$. Here, we make the assumption that the datum di is a sample from the generative model\n\n$${y}_{i}={F}_{i}({\\boldsymbol{x}};{\\phi }_{c})+{\\sigma }_{n}{d}_{i}\\varepsilon ,\\,\\varepsilon \\sim {\\mathscr{N}}\\mathrm{(0,}\\,\\mathrm{1).}$$\n(2)\n\nNamely, yi are random variables independent and normally distributed with mean equal to the observable of the model and standard deviation proportional to the data. The reason we choose this error model is because the set of experimental data d contains elements of different orders of magnitude, e.g., density is of order of 1 and surface tension of order 100. With this model the error allowed by the statistical model becomes proportional to the value of the data we want to fit47. The likelihood of the data $$p({\\boldsymbol{d}}|\\phi )$$ has the form,\n\n$$p({\\boldsymbol{d}}|\\phi )={\\mathscr{N}}({\\boldsymbol{d}}|F({\\boldsymbol{x}},{\\phi }_{c}),{\\rm{\\Sigma }}),\\,{\\rm{\\Sigma }}={\\sigma }_{n}^{2}\\,{\\rm{diag}}\\,(\\,{{\\boldsymbol{d}}}^{2}),$$\n(3)\n\nwhere $$\\phi ={({\\phi }_{c}^{{\\rm{T}}},{\\sigma }_{n})}^{{\\rm{{\\rm T}}}}$$ is the parameter vector that contains the model and the error parameters.\n\nThe denominator of Eq. (1) is defined as the integral of the numerator and is called the model evidence. This quantity can be used for model selection48 as it is discussed in the next section. Finally, the prior probability encodes all the available information on the parameters prior to observing any data. If no prior information is known for the parameters, a non informative distribution can be used, e.g. a uniform distribution. 
In this work we use uniform priors, see SI for detailed information.\n\n#### Model Selection\n\nAssuming we have $${N}_{ {\\mathcal M} }$$ models $${ {\\mathcal M} }_{i},\\,\\,i=1,\\ldots ,{N}_{ {\\mathcal M} }$$ that describe different computational and statistical models, we wish to choose the model that best fits the data. In Bayesian statistics, this is translated into choosing the model with the highest posterior probability,\n\n$$p({ {\\mathcal M} }_{i}|{\\boldsymbol{d}})=\\frac{p({\\boldsymbol{d}}|{ {\\mathcal M} }_{i})p({ {\\mathcal M} }_{i})}{p({\\boldsymbol{d}})},$$\n(4)\n\nwhere $$p({ {\\mathcal M} }_{i})$$ encodes any prior preference to the model $${ {\\mathcal M} }_{i}$$. Assuming all models have equal prior probabilities, the posterior probability of the model depends only on the likelihood of the data. Taking the logarithm of the likelihood and using Eq. (1) we can write\n\n$$\\begin{array}{rcl}\\mathrm{ln}\\,p({\\boldsymbol{d}}|{ {\\mathcal M} }_{i}) & = & \\int \\,\\mathrm{ln}\\,p({\\boldsymbol{d}}|{ {\\mathcal M} }_{i})\\,p(\\phi |{\\boldsymbol{d}},{ {\\mathcal M} }_{i}){\\rm{d}}\\phi \\\\ & = & \\int \\,\\mathrm{ln}\\,\\frac{p({\\boldsymbol{d}}|\\phi ,{ {\\mathcal M} }_{i})\\,p(\\phi |{ {\\mathcal M} }_{i})}{p(\\phi |{\\boldsymbol{d}},{ {\\mathcal M} }_{i})}p(\\phi |{\\boldsymbol{d}},{ {\\mathcal M} }_{i}){\\rm{d}}\\phi \\\\ & = & {\\mathbb{E}}[\\mathrm{ln}\\,p({\\boldsymbol{d}}|\\phi ,{ {\\mathcal M} }_{i})]-{\\mathbb{E}}[\\mathrm{ln}\\,\\frac{p(\\phi |{ {\\mathcal M} }_{i})}{p(\\phi |{\\boldsymbol{d}},{ {\\mathcal M} }_{i})}],\\end{array}$$\n(5)\n\nwhere the expectation is taken with respect to posterior probability $$p(\\phi |{\\boldsymbol{d}},{ {\\mathcal M} }_{i})$$. The first term is the expected fit of the data under the posterior probability of the parameters and is a measure of how well the model fits the data. 
The second term is the Kullback-Leibler (KL) divergence or relative entropy of the posterior from the prior distribution and is a measure of the information gain from data d under the model $${ {\\mathcal M} }_{j}$$. The KL divergence can be seen as a measure of the distance between two probability distributions.\n\nIf one would only consider the first term of Eq. (5) for the model selection, then the model that fits the data best would be selected. However, such an approach is prone to overfitting, i.e., choosing a too complex model, which reduces the predictive capabilities of the model. The second term serves as a penalization term. Models with posterior distributions that differ a lot from the prior, i.e., models that extract a lot of information from the data, are penalized more. Thus, model evidence can be seen as an implementation of the Ockham’s razor that states that simple models (in terms of the number of parameters) that reasonably fit the data should be preferred over more complex models that provide only slight improvements to the fit. For a detailed discussion on model selection and estimators of the model evidence, we refer to refs49,50.\n\n#### Hierarchical Bayesian Framework\n\nWe consider data structured as: $$\\overrightarrow{{\\boldsymbol{d}}}=\\{{{\\boldsymbol{d}}}_{1},\\ldots ,{{\\boldsymbol{d}}}_{{N}_{d}}\\}$$ where $${{\\boldsymbol{d}}}_{i}\\in {{\\mathbb{R}}}^{{N}_{i}}$$ corresponds to the conditions xi. For example, xi may correspond to different thermodynamic conditions under which the experimental data di are produced.\n\nThe classical Bayesian method for inferring the parameters of the computational model is to group all the data and estimate the probability $$p(\\phi |\\overrightarrow{{\\boldsymbol{d}}})$$ (see Fig. 3 left). However, this approach may not be suitable when the uncertainty on $$\\phi$$ is large due to the fact that different parameters may be suitable for different data sets. 
On the opposite side, individual parameters $${\\phi }_{i}$$ can be inferred using only the data set di (see Fig. 3 middle). This approach preserves the individual information but any information that may be contained in other data sets is lost.\n\nFinally, a balance between retaining individual information and sharing information between different data sets can be achieved with the hierarchical Bayesian framework. In this approach, the independent models corresponding to different conditions are connected using a hyper-parameter vector ψ (see Fig. 3 right). The benefits of this approach is twofold: (i) better informed individual probabilities $$p({\\phi }_{i}|\\overrightarrow{{\\boldsymbol{d}}})$$; and (ii) a data informed prior p(ψ|d) is available in case new parameters $${\\phi }^{new}$$ that correspond to unobserved data need to be inferred. A detailed description of the sampling algorithm of this approach is given in Supporting information (SI).\n\n## Results\n\n### Impact of mapping\n\nFirst, we examine the impact of the level of resolution on the model accuracy using density, dielectric constant, surface tension, isothermal compressibility, and shear viscosity experimental data (see SI). In Fig. 4 the model accuracy, as measured by the model evidence, is shown as a function of mapping M, which denotes the number of water molecules represented by a given CG model. It is usually assumed the model’s performance is decreasing with the decreased resolution of the model. Indeed, for the 1S model, we observe precisely this trend. For the charged models, the evidence is still overall monotonically decreasing with M, however, compared to the 1S model the dependency of the evidence on M is much less drastic. To investigate this dependency further, we perform the UQ inference also for the charged models at M = 12. The observed evidences are comparable to the evidence of the 1S model at M = 4. 
Thus, with the models that incorporate partial charges, one can resort to models with higher mappings. According to the UQ, the best model for M = 1, 3 is the 1S model whereas for M > 3 the charged models are superior. However, one should keep in mind that the chargeless and charged models are not comparable as the chargeless models cannot provide the same amount of information, e.g., the dielectric constant is not defined. Comparing the evidences of the 2S, 3S, and 3S* models, we see that the three models rank very closely with the 3S* model being somewhat better than the other two.\n\nWe emphasize that the model evidence encompasses much more than a mere evaluation of the model’s properties at the best parameters. Nonetheless, it is insightful to examine the target QoI and their dependency on the mapping. Figure 5 shows the density ρ, dielectric constant ε, surface tension γ, isothermal compressibility κ, and shear viscosity η for rigid models and mappings 1 to 6. The target QoI are obtained using the maximum a posteriori (MAP) parameters and evaluated as a mean of 5 independent simulation runs with different initial conditions. Note that ε is not defined for the 1S model and it is excluded from the target QoI in the second UQ inference of the charged models. We observe that the ρ and ε are within 10% of the experimental data for all mappings. On the contrary, the γ, κ, and η depend very strongly on the mapping. The general trend is similar for all models, i.e., as we increase the mapping the γ is decreasing, κ is increasing, and η is decreasing. This observation agrees with the general picture of coarse-graining. The more we increase the level of coarse-graining the softer are the interactions between the CG beads which correlates with increased κ and decreased γ and η. We observe that for some models there are no parameters σ and ε of the LJ potential that would fit well a certain target QoI (within the liquid state), in particular, the γ and η. 
A possible solution would be to replace the LJ non-bonded interaction with another interaction, e.g, the Born-Mayer-Huggins interaction that is used in the BMW model35. The 1S model with M = 4 can be directly compared with the MARTINI model as the models are equal but were developed with different target QoI. With our model, we observe very similar properties as reported for the MARTINI model. Additionally, the inferred parameters with the MAP estimates are also very close to those of the MARTINI (see SI).\n\n### Rigid vs. flexible models\n\nFor the mapping M = 4, we examine also the three flexible models 2SF, 3SF, and 3S* F. The resulting evidences for these models are listed in Table 2 along with the model evidences for the rigid counterparts. The physical motivation behind the flexible models is that they encompass the fluctuations in the dipole moment of the water cluster. In the two-site model, we incorporate them via bond vibrations, whereas in the three-site models with the angle fluctuations. Thus, the flexible models have 1 extra DOF compared to the rigid counterparts. However, as can be seen in Table 2, for the three-site models the flexible versions perform worse than the rigid ones. For the two-site model, the flexible model is only slightly better than the rigid model. Nonetheless, as flexible models usually demand smaller integration timesteps and consequently have a higher computational cost the model’s performance should be more substantial to justify the extra computational resources.\n\n### Accuracy vs. efficiency\n\nWe examine the accuracy vs. efficiency trade-off in Fig. 6 where we plot the evidence as a function of the speedup compared to the all-atom simulation. As a test simulation, we choose the NVE ensemble simulation at ambient conditions, a cubic domain with an edge of 5 nm and a simulation length of 10 ns. We also employ the maximal integration timestep still permitted by the model (see SI). 
The variation of the runtime varies extensively between the considered models. The computational cost depends on two factors: (i) on the number of particles, which in turn depends on the employed mapping and the number of interaction sites of the model; (ii) on the integration timestep that is increasing with increased coarse-graining since the interactions are softening. For a given mapping, we observe the smallest computational cost for the 1S model, followed by the 2S, and 3S* model while the 3S model has the highest computational cost. The difference in the 3S and 3S* models is due to the smaller timesteps required by the 3S model.\n\nThe trade-off between the accuracy and efficiency can be formally addressed as a decision problem, where the expected utility51 $${\\mathscr{U}}({ {\\mathcal M} }_{i};{\\boldsymbol{d}})$$ of an individual model $${ {\\mathcal M} }_{i}$$ given data d is given by:\n\n$${\\mathscr{U}}({ {\\mathcal M} }_{i};{\\boldsymbol{d}})=p({ {\\mathcal M} }_{i}|{\\boldsymbol{d}})u({ {\\mathcal M} }_{i}\\mathrm{).}$$\n(6)\n\nWe define the utility function $$u({ {\\mathcal M} }_{i})$$ as the decimal logarithm of the computational speedup over the atomistic model. As shown in Fig. 6c the model with the maximal expected utility is found for the 1S model with M = 1. When we consider the models incorporating the partial charge: the 3S model is the most unfavorable, while the 2S and 3S* models are comparable in terms of their expected utility. In turn, the appropriate choice for the 2S and 3S* models are mappings M = 5 and 3, respectively.\n\n### Transferability to non-ambient TD conditions\n\nOne of the challenges of coarse-graining is the transferability of CG models. Typically, CG models are more sensitive to variations in the thermodynamic conditions than the atomistic models. Furthermore, the more we increase the level of coarse-graining, the more restricted the model is to the thermodynamics state at which it is parametrized. 
One way of making the model more robust to transferability is to parametrize it for different conditions. Within the Bayesian formalism, the hierarchical UQ allows us to merge multiple QoI. We test the transferability of three models 2SF, 3S, and 3S* for mapping M = 4. In Fig. 7, we plot the model evidences for the hierarchical UQ, where the temperatures T = 283, 298, 323 K are merged and the evidences for the classical UQ at each temperature. We observe that the 3S* model is the most transferable, having the highest hierarchical evidence. For the three-site models, we also observe that it is easier for the CG model to fit higher temperatures." ]
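As a coda to the article above, the decomposition in Eq. (5), where the log-evidence equals the expected data fit minus the KL divergence of the posterior from the prior, can be checked numerically on a toy one-parameter problem. This sketch is not from the paper; it assumes NumPy, a uniform prior on a grid, and a single Gaussian datum:

```python
import numpy as np

# One parameter phi on a grid; one datum d = 1.3 with unit-variance Gaussian noise.
phi = np.linspace(-5.0, 5.0, 2001)
dphi = phi[1] - phi[0]
prior = np.full_like(phi, 1.0 / (phi[-1] - phi[0]))          # uniform prior density
lik = np.exp(-0.5 * (1.3 - phi) ** 2) / np.sqrt(2 * np.pi)   # p(d | phi)

evidence = np.sum(lik * prior) * dphi          # p(d) = integral of p(d|phi) p(phi)
post = lik * prior / evidence                  # Bayes' theorem, Eq. (1)

# Eq. (5): ln p(d) = E_post[ln p(d|phi)] - KL(posterior || prior)
expected_fit = np.sum(post * np.log(lik)) * dphi
kl = np.sum(post * np.log(post / prior)) * dphi
print(np.log(evidence), expected_fit - kl)     # both sides agree
```

The KL term is what penalizes models that extract a lot of information from the data, which is the Ockham's razor effect described in the Model Selection section.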
{"ft_lang_label":"__label__en","ft_lang_prob":0.86410993,"math_prob":0.9957567,"size":39407,"snap":"2022-40-2023-06","text_gpt3_token_len":9468,"char_repetition_ratio":0.15229805,"word_repetition_ratio":0.014543051,"special_character_ratio":0.24411906,"punctuation_ratio":0.17490593,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99687475,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-28T01:15:36Z\",\"WARC-Record-ID\":\"<urn:uuid:ad67f79c-42dc-4438-a7ac-4c79d1774687>\",\"Content-Length\":\"356110\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:15417fac-f22c-4062-a8ce-4427a5387452>\",\"WARC-Concurrent-To\":\"<urn:uuid:4d26917a-2388-4636-8834-cff60ea7faf4>\",\"WARC-IP-Address\":\"146.75.32.95\",\"WARC-Target-URI\":\"https://www.nature.com/articles/s41598-018-37471-0?error=cookies_not_supported&code=801f3e6f-32db-40a9-8539-011c7a3cafdd\",\"WARC-Payload-Digest\":\"sha1:KVT7PCTGEBSLZPZCCWTBTMFAMSAJQHU3\",\"WARC-Block-Digest\":\"sha1:5LYRYFYNLSHKUZK5RO3XYEXP7BXKMHWI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499468.22_warc_CC-MAIN-20230127231443-20230128021443-00511.warc.gz\"}"}
https://www.r-bloggers.com/2021/07/double-descent-part-i-sample-wise-non-monotonicity/
[ "Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.\n\nOne of the most fascinating findings in the statistical/machine learning literature of the last 10 years is the phenomenon called double descent. For some reason, extremely over-parameterized models, where the number of parameters to estimate is 10-1000 times more than the number of observations, sometimes win competitions, prove to be the best models. How can this be in the presence of the bias-variance trade-off? How come the variance does not become enormous in these models? One explanation is that the U-shaped bias-variance trade-off curve takes a second downward turn at the end of the U, hence the name double descent. It’s almost magical. It’s almost like we pass through a portal, and in the new bizarro universe, the gravity pushes.\n\nToday, we will briefly talk about the idea behind the second descent and show its existence. In particular, we will see what happens when we know the functional form of the underlying data generating process, but we have small number of observations. This is one of the cases where double descent can occur, and it is called sample-wise double descent.\n\n### Bias-Variance Trade-off and Different Parameterization Regimes\n\nBias-variance trade-off is one of the most basic concepts that pretty much everybody doing prediction/forecasting should know. Briefly, it is saying that very simple models produce predictions with high bias and low variance, and as we construct more complex models to decrease the bias, we tend to increase the variance. At a certain point, the decline in bias is too small compared to the increase in the variance. Since the prediction error depends on both of them (for instance, $$MSPE = Variance + Bias^{2}$$), we need to find the sweet spot where neither the bias nor the variance is minimized, but both are small enough. In Figure 1, it is shown as the black point on the “Variance + Bias$$^2$$” curve. 
It is the point where the prediction error is minimized.1\n\nlibrary(tidyverse)\n#Prediction error\nf <- function(x){\n(-0.4 + 1/(x+0.5)) + (0.5*exp(x))\n}\n#The point where the prediction error is minimized\noptimum <- optimize(f, interval=c(0, 1), maximum=FALSE, tol = 1e-8)\ntemp_data <- data.frame(x = optimum$minimum, y=optimum$objective)\n\nggplot(data = temp_data, aes(x=x, y=y)) +\nxlim(0,1) +\ngeom_function(fun = function(x) 0.5*exp(x), color = \"red\", size = 2, alpha = 0.7) +\ngeom_function(fun = function(x) -0.4 + 1/(x+0.5), color = \"blue\", size = 2, alpha = 0.7) +\ngeom_function(fun = function(x) (-0.4 + 1/(x+0.5)) + (0.5*exp(x)), color = \"forestgreen\", size = 2, alpha = 0.7) +\ngeom_point(size =3) +\ntheme_minimal() + ylab(\"Error\") + xlab(\"Model Complexity\") +\ntheme(axis.text=element_blank(),\naxis.ticks=element_blank()) +\nannotate(\"text\", x=0.2, y=0.11+1/(0.2+0.5)-0.35, label= expression(paste(\"B\", ias^2)), color = \"blue\", size =5) +\nannotate(\"text\", x=0.2, y=0.11+0.5*exp(0.2), label= \"Variance\", color = \"red\", size =5) +\nannotate(\"text\", x=0.32, y=-0.35+ 0.11+(0.5*exp(0.2) + 1/(0.2+0.5)), label= expression(paste(\"MSE = Variance + B\", ias^2)), color = \"forestgreen\", size =5)", null, "The double descent phenomenon, on the other hand, causes the U-shaped prediction error curve to decrease after a certain point. 
The variance starts to decrease as the model has more and more predictors (and parameters to estimate), and the bias is essentially zero.

```r
library(tidyverse)

# Prediction error function (it is piecewise so creating two of them).
f1 <- function(x){
  ifelse(x<=2, (-0.4 + 1/(x+0.5)) + (0.5*exp(x)), NA)
}
f2 <- function(x){
  ifelse(x>=2, (0 + 1/(1/(0.5*exp(4/x)))), NA)
}

# Prediction variance function (it is piecewise so creating two of them).
var_f1 <- function(x){
  ifelse(x<=2, (0.5*exp(x)), NA)
}
var_f2 <- function(x){
  ifelse(x>=2, 1/(1/(0.5*exp(4/x))), NA)
}

# Prediction bias function (it is piecewise so creating two of them).
bias_f1 <- function(x){
  ifelse(x<=2, -0.4 + 1/(x+0.5), NA)
}
bias_f2 <- function(x){
  ifelse(x>=2, 0, NA)
}

ggplot(data = temp_data, aes(x=x, y=y)) +
  xlim(0,4) +
  geom_function(fun = var_f1, color = "red", size = 2, alpha = 0.7) +
  geom_function(fun = var_f2, color = "red", size = 2, alpha = 0.7) +
  geom_function(fun = bias_f1, color = "blue", size = 2, alpha = 0.7) +
  geom_function(fun = bias_f2, color = "blue", size = 2, alpha = 0.7) +
  geom_function(fun = f1, color = "forestgreen", size = 2, alpha = 0.7) +
  geom_function(fun = f2, color = "forestgreen", size = 2, alpha = 0.7) +
  geom_vline(xintercept = 2, linetype = "dashed") +
  geom_point() +
  theme_minimal() + ylab("Error") + xlab("Number of Predictors/Number of observations") +
  theme(axis.text=element_blank(),
        axis.ticks=element_blank()) +
  annotate("text", x=0.32, y=-0.2+1/(0.2+0.5), label= expression(paste("B", ias^2)), color = "blue") +
  annotate("text", x=0.2, y=-0.2+0.5*exp(0.2), label= "Variance", color = "red") +
  annotate("text", x=0.26, y=0.21+(0.5*exp(0.2) + 1/(0.2+0.5)), label= expression(paste("Variance + B", ias^2)), color = "forestgreen") +
  annotate("text", x=2.4, y=-0.2+1/(0.2+0.5), label= "Interpolation limit", color = "black") +
  ggtitle("")
```

Figure 2: Double Descent

What just happened
here? We hit the interpolation limit and reach the over-parameterized regime, and suddenly the variance starts to decrease. How did this happen? Let’s clarify something before we move further: We changed the x-axis label. It is no longer “model complexity”, but the ratio of the number of predictors to the number of observations. Does it matter? Before the interpolation threshold, not so much. After the interpolation threshold, it does. It depends on how we define the term “complexity”. We may say that once we hit the interpolation limit, the complexity no longer increases, because the model is as complex as the training data allows. Or we can define complexity in terms of the $$\ell_{2}$$ norm of the coefficient estimates ($$(\mathbf{\beta^{T}\beta})^{1/2}$$). Then, in fact, as the ratio increases, the norm may decrease.²

Back to the main question: What just happened? We will examine this in more detail in later posts as well. Briefly, when the number of predictors is larger than the number of observations in the training sample, the estimator automatically regularizes itself. This happens even if we do not explicitly apply any of the regularization methods. The estimator becomes picky. Because there are more predictors (p) than observations (n), it can focus on tasks other than minimizing the training error (which is usually 0 when $$n \leq p$$). One of those tasks can be minimizing the norm, which allows the estimator to prefer the most powerful predictors over the others.

### Samplewise Non-monotonicity

Usually, in the double descent literature, the focus is on the number of predictors. In particular, given the sample, whether the neural network should be wider or deeper is the main focus of attention. In this first part, however, we will hold the number of parameters constant and change the sample size.
Since we are interested in the shape of the bias-variance trade-off curve and we have p/n on the x-axis, for our purposes it matters little whether n or p changes, as long as p/n changes.

In this simple example, we consider the case where the underlying data generating process is as follows:

$y = \mathbf{X\beta} + \varepsilon$

where $$\mathbf{X}$$ and $$y$$ are known and $$\mathbf{\beta}$$ and $$\varepsilon$$ are unknown. Let’s say there are 100 known predictors ($$\mathbf{X}$$ has 100 columns). The data is produced as follows:

```r
num_predictor <- 100

create_data <- function(sigma, training_sample_size){
  X <- matrix(rnorm(10000*num_predictor), ncol = num_predictor)

  # Coefficients
  beta <- runif(num_predictor)
  beta <- beta/sqrt(sum(beta^2)) ## Normalizing so that the norm is 1

  # Error term
  e <- rnorm(10000, sd = sigma)
  # The outcome
  y <- X %*% beta + e

  X_train <- X[1:training_sample_size,]
  y_train <- y[1:training_sample_size, , drop = FALSE]

  X_test <- X[(training_sample_size+1):nrow(X),]
  y_test <- y[(training_sample_size+1):nrow(y), , drop = FALSE]

  return(list(y_train=y_train, X_train=X_train,
              y_test=y_test, X_test=X_test,
              e=e, beta=beta))
}
```

#### Linear Models

It is surprisingly straightforward to obtain the sample-wise double descent with the linear model. The trick is that when the number of observations is smaller than the number of predictors (i.e. after the interpolation limit, where there are multiple solutions), one can use the Moore-Penrose inverse to calculate $$\widehat{\mathbf{\beta}}$$, as the $$\mathbf{(X^{T}X)^{-1}}$$ part of the usual least squares solution $$\widehat{\mathbf{\beta}} = \mathbf{(X^{T}X)^{-1}X^{T}y}$$ is undefined. The Moore-Penrose inverse chooses the solution with the smallest Euclidean norm of the coefficients when there are multiple solutions. When the matrix is invertible and there is a unique solution, the Moore-Penrose inverse is numerically the same as the regular inverse of the matrix.
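This minimum-norm property is easy to check numerically. The sketch below (in Python/NumPy rather than the post's R; the variable names are mine) builds an under-determined system, takes the Moore-Penrose solution, and constructs a second exact solution by adding a null-space direction:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 100                      # under-determined: fewer observations than predictors
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p)

beta_mp = np.linalg.pinv(X) @ y     # Moore-Penrose (minimum-norm) solution

# Build another exact solution by adding a direction from the null space of X
_, _, Vt = np.linalg.svd(X)
null_dir = Vt[-1]                   # rows of Vt beyond rank(X) span the null space
beta_alt = beta_mp + 2.0 * null_dir

# Both solutions interpolate the training data ...
assert np.allclose(X @ beta_mp, y)
assert np.allclose(X @ beta_alt, y)
# ... but the Moore-Penrose solution has the smaller Euclidean norm.
assert np.linalg.norm(beta_mp) < np.linalg.norm(beta_alt)
```

Any multiple of a null-space direction could be added without changing the fit, which is exactly why the "smallest norm" tie-breaking rule matters.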
So we use it to obtain $$\widehat{\beta}$$ for any p/n.

Based on Iyar Lin’s reproduction of the “More Data Can Hurt for Linear Regression: Sample-wise Double Descent” paper by Preetum Nakkiran, the following code produces two lines for two different data generating processes: one with no error term ($$\mathbf{y} = \mathbf{X\beta}$$) and another with a normally distributed error term with mean 0 and standard deviation 0.1 ($$\mathbf{y} = \mathbf{X\beta} + \mathbf{\varepsilon}$$).

```r
library(furrr) # For parallel processing
num_workers <- parallel::detectCores() - 1L

# Estimation
LS_estimator <- function(sigma = 0.1, training_sample_size = num_predictor){
  # Putting the number generators inside to obtain the exact same data every time.
  set.seed(5)
  full_data <- create_data(sigma=sigma, training_sample_size = training_sample_size)

  # Estimate coefficients with the training data and predict y_test
  beta_hat <- MASS::ginv(full_data$X_train) %*% full_data$y_train
  y_test_hat <- full_data$X_test %*% beta_hat

  # Return the RMSE for given sigma and sample size.
  data.frame(rmse = sqrt(mean((full_data$y_test - y_test_hat)^2)),
             sigma = sigma,
             training_sample_size = training_sample_size)
}

# We are saving the file to save time.
if (file.exists("all_results_lm.rds")) {
  res <- readRDS("all_results_lm.rds")
} else {
  plan(multisession, workers = num_workers)
  alternative_values <- expand.grid(
    sigma_values = c(0, 0.1),
    training_sample_size_values = c(num_predictor*c(0.5, 0.75, 0.9, 0.95, 1, 1.05, 1.1, 1.25, 1.5))
  )
  res <- future_map2_dfr(.x = alternative_values$sigma_values, .y = alternative_values$training_sample_size_values,
                         .f = ~LS_estimator(sigma = .x,
                                            training_sample_size = .y),
                         .options = furrr_options(scheduling = 25, seed = FALSE))
  plan(sequential)
  saveRDS(res, "all_results_lm.rds")
}

# The plot
dd_plot <- ggplot(data = res) +
  geom_line(aes(x = training_sample_size, y = rmse, group = factor(sigma), color = factor(sigma)), size = 2) +
  geom_point(aes(x = training_sample_size, y = rmse, group = sigma, color = factor(sigma)), size = 3) +
  theme_bw() +
  coord_cartesian(ylim = c(0, 1)) + ylab("RMSE") + xlab("Training Sample Size") +
  theme(text=element_text(size=15))

dd_plot$labels$colour <- "Std. Dev. of Error"

dd_plot
```

Figure 3: Samplewise Double Descent with Linear Estimator

Let’s start with the unsurprising curve, the red one. What happens is that as the training sample size increases (p/n decreases), the RMSE decreases. When the number of observations in the training sample is 100, we have 100 unknown variables and 100 equations, and the system has a unique solution. As a result, the RMSE hits 0, and remains 0 thereafter.

Now, the surprising curve, the turquoise-colored line. In the beginning, everything is as expected: more data brings more information, and more information decreases the RMSE. But then, when p/n is very close to 1, the RMSE suddenly shoots up. It becomes so big that we have to cut the graph (the RMSE is more than 3 when p/n = 1). The error term completely confuses the estimator.
However, the confusion only happens in a small neighborhood of p/n = 1, and the RMSE quickly decreases when the sample size increases.

#### Neural Networks

Another question is whether we see the same pattern with neural networks. Note that the spike in the RMSE occurs when there is a unique set of coefficients that solves the system and there is irreducible error, namely pure noise. So, these coefficients are correct, and they are the only ones that solve the system when $$p/n \leq 1$$. Neural networks tend to use gradient descent methods to obtain the coefficients. These coefficients will be approximately correct. Let’s see how neural networks perform in the same scenarios.

```r
library(torch)

NN_estimator <- function(sigma = 0.1, training_sample_size = num_predictor){
  # Putting the number generators inside to obtain the exact same data every time.
  set.seed(5)
  torch_manual_seed(5)

  full_data <- create_data(sigma=sigma, training_sample_size = training_sample_size)

  full_data$x_train_tensor = torch_tensor(full_data$X_train, dtype = torch_float())
  full_data$y_train_tensor = torch_tensor(full_data$y_train, dtype = torch_float())
  full_data$x_test_tensor = torch_tensor(full_data$X_test, dtype = torch_float())

  torch_dataset <- dataset(
    name = "my_data",
    initialize = function() {
      self$x_train <- full_data$x_train_tensor
      self$y_train <- full_data$y_train_tensor
    },
    .getitem = function(index) {
      x <- self$x_train[index, ]
      y <- self$y_train[index, ]
      list(x, y)
    },
    .length = function() {
      self$x_train$size()[[1]]
    }
  )

  train_ds <- torch_dataset()
  train_dl <- train_ds %>%
    dataloader(batch_size = 25, shuffle = FALSE)

  # Pretty basic 1-layer neural network model (mathematically identical to the least squares estimator)
  # Naturally, this is an extremely inefficient way to do this, but let's use nn_linear.
  net = nn_module(
    "class_net",
    initialize = function(){
      self$linear1 = nn_linear(num_predictor, 1)
    },
    forward = function(x){
      x %>% self$linear1()
    }
  )

  model = net()

  optimizer <- optim_adam(model$parameters)
  n_epochs <- 2000
  device <- "cpu"

  train_batch <- function(b) {
    optimizer$zero_grad()
    output <- model(b[[1]]$to(device = device))
    target <- b[[2]]$to(device = device)
    loss <- nnf_mse_loss(output, target)
    loss$backward()
    optimizer$step()
  }

  for (epoch in 1:n_epochs) {
    model$train()
    coro::loop(for (b in train_dl) {
      loss <- train_batch(b)
    })
    if (epoch %% 100 == 0 ){
      cat(sprintf("\nEpoch %d", epoch))
    }
  }

  model$eval()

  y_test_hat <- model(full_data$x_test_tensor)
  y_test_hat <- as_array(y_test_hat)

  # Return the RMSE for given sigma and sample size.
  data.frame(rmse = sqrt(mean((full_data$y_test - y_test_hat)^2)),
             sigma = sigma,
             training_sample_size = training_sample_size)
}

if (file.exists("all_results_nn.rds")) {
  res_nn <- readRDS("all_results_nn.rds")
} else {
  alternative_values <- expand.grid(
    sigma_values = c(0, 0.1),
    training_sample_size_values = c(num_predictor*c(0.5, 0.75, 0.9, 0.95, 1, 1.05, 1.1, 1.25, 1.5))
  )
  res_nn <- data.frame()
  for (zz in c(1:nrow(alternative_values))){
    res_nn <- bind_rows(res_nn, NN_estimator(
      sigma = alternative_values$sigma_values[zz],
      training_sample_size = alternative_values$training_sample_size_values[zz]
    ))
  }
  saveRDS(res_nn, "all_results_nn.rds")
}

# The plot
dd_plot <- ggplot(data = res_nn) +
  geom_line(aes(x = training_sample_size, y = rmse, group = factor(sigma), color = factor(sigma)), size = 2) +
  geom_point(aes(x = training_sample_size, y = rmse, group = sigma, color = factor(sigma)), size = 3) +
  theme_bw() +
  coord_cartesian(ylim = c(0, 1)) + ylab("RMSE") + xlab("Training Sample Size") +
  theme(text=element_text(size=15))

dd_plot$labels$colour <- "Std. Dev. of Error"

dd_plot
```

Figure 4: Samplewise Double Descent with Neural Network

Wow!!! The sample-wise double descent did not appear as strongly with the neural networks. This is really interesting.
The optimization algorithm has avoided the spike around p/n = 1 (or training sample size = 100).

When the error term is non-existent (red curve), the neural networks did an OK job. The error is not 0 at p/n = 1, as it was for the linear model, but it is not too big either. But when the error term is non-negligible (turquoise curve), allowing some imperfections proved to be very helpful, especially if we are interested in preventing catastrophic failures. There is some increase in the RMSE at p/n = 1, but it is certainly not of the proportions that we saw in the linear model graph.

#### Cost of Perfection

We know that the correct coefficient estimates, the set of coefficients that provide the unique solution, come from the linear model, and we know that the neural network coefficients are, in this sense, approximately correct. By decreasing the learning rate (the size of the steps towards the correct solution that the neural network takes) and trying out alternative numbers of epochs, let’s see at what point we observe the cost of perfection when p/n = 1 and the standard deviation of the error term is not 0. The expectation is that in the beginning, when the number of epochs is too small, the RMSE will be quite high because we would be quite far from the correct solution. As we increase the number of epochs, the RMSE will go down initially.
After a certain point, it will go up again.

```r
library(torch)

NN_estimator <- function(sigma = 0.1, training_sample_size = num_predictor, n_epochs){
  # Putting the number generators inside to obtain the exact same data every time.
  set.seed(5)
  torch_manual_seed(5)

  full_data <- create_data(sigma=sigma, training_sample_size = training_sample_size)

  beta_hat <- MASS::ginv(full_data$X_train) %*% full_data$y_train
  y_test_hat_lm <- full_data$X_test %*% beta_hat
  y_train_hat_lm <- full_data$X_train %*% beta_hat

  full_data$x_train_tensor = torch_tensor(full_data$X_train, dtype = torch_float())
  full_data$y_train_tensor = torch_tensor(full_data$y_train, dtype = torch_float())
  full_data$x_test_tensor = torch_tensor(full_data$X_test, dtype = torch_float())

  torch_dataset <- dataset(
    name = "my_data",
    initialize = function() {
      self$x_train <- full_data$x_train_tensor
      self$y_train <- full_data$y_train_tensor
    },
    .getitem = function(index) {
      x <- self$x_train[index, ]
      y <- self$y_train[index, ]
      list(x, y)
    },
    .length = function() {
      self$x_train$size()[[1]]
    }
  )

  train_ds <- torch_dataset()
  train_dl <- train_ds %>%
    dataloader(batch_size = 25, shuffle = FALSE)

  # Pretty basic 1-layer neural network model (mathematically identical to the least squares estimator)
  # Naturally, this is an extremely inefficient way to do this, but let's use nn_linear.
  net = nn_module(
    "class_net",
    initialize = function(){
      self$linear1 = nn_linear(num_predictor, 1)
    },
    forward = function(x){
      x %>% self$linear1()
    }
  )

  model = net()

  optimizer <- optim_adam(model$parameters, lr = 1e-5)
  device <- "cpu"

  train_batch <- function(b) {
    optimizer$zero_grad()
    output <- model(b[[1]]$to(device = device))
    target <- b[[2]]$to(device = device)
    loss <- nnf_mse_loss(output, target)
    loss$backward()
    optimizer$step()
    loss$item()
  }

  final_data <- data.frame()
  for (epoch in 1:n_epochs) {
    model$train()
    train_loss <- c()
    coro::loop(for (b in train_dl) {
      loss <- train_batch(b)
      train_loss <- c(train_loss, loss)
    })
    model$eval()
    y_test_hat <- model(full_data$x_test_tensor)
    y_test_hat <- as_array(y_test_hat)

    final_data <- bind_rows(final_data,
                            data.frame(rmse = sqrt(mean((full_data$y_test - y_test_hat)^2)),
                                       epoch = epoch,
                                       train_loss = mean(train_loss)))
    if (epoch %% 50 == 0 ){
      cat(sprintf("\nEpoch %d, training: loss: %3.5f \n", epoch, mean(train_loss)))
      cat(sprintf("\nTot Epoch num %d", n_epochs))
    }
  }

  # Return the RMSE for given sigma and sample size.
  final_data <- final_data %>%
    mutate(rmse_lm = sqrt(mean((full_data$y_test - y_test_hat_lm)^2))) %>%
    mutate(rmse_lm_training = sqrt(mean((full_data$y_train - y_train_hat_lm)^2)))

  return(final_data)
}

if (file.exists("cost_of_perfection_nn.rds")) {
  res_nn_cp <- readRDS("cost_of_perfection_nn.rds")
} else {
  res_nn_cp <- NN_estimator(n_epochs = 100000)
  saveRDS(res_nn_cp, "cost_of_perfection_nn.rds")
}

ggplot(data = res_nn_cp) +
  geom_line(aes(x=epoch, y = rmse, color = "Neural network"), size = 2) +
  geom_line(aes(x=epoch, y = rmse_lm, color = "Linear model"), size = 2) +
  theme_bw() +
  scale_color_manual(name = "Models",
                     values = c("Neural network" = "darkgreen",
                                "Linear model" = "red")) +
  ylab("RMSE") + xlab("Number of Epochs") +
  theme(text=element_text(size=15))
```

Figure 5: Cost of Perfection

We do see the expected U-shaped RMSE curve, but the latter half of it never reaches the levels that the linear model reaches. The cost of perfection (though calling it the cost of approximate perfection would be more apt) is quite small in the case of neural networks. Since the gradient descent algorithm never really reaches the exact set of coefficients, it never really pays the full price of the perfectionism.
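The same "approximate is safer than exact" effect can be shown without torch. This sketch (Python/NumPy, plain gradient descent instead of Adam; an illustration, not the post's code) compares the exact interpolating solution with an early-stopped gradient descent solution at p/n = 1:

```python
import numpy as np

rng = np.random.default_rng(5)
p = n = 100                                 # at the interpolation threshold, p/n = 1
sigma = 0.1
beta = rng.uniform(size=p)
beta /= np.linalg.norm(beta)
X = rng.standard_normal((n, p))
y = X @ beta + rng.normal(scale=sigma, size=n)
X_te = rng.standard_normal((2000, p))
y_te = X_te @ beta + rng.normal(scale=sigma, size=2000)

# Exact ("perfect") interpolating solution: training error is exactly 0.
beta_exact = np.linalg.solve(X, y)

# Gradient descent on the MSE, stopped after finitely many small steps.
beta_gd = np.zeros(p)
lr = 0.01
for _ in range(2000):
    beta_gd -= lr * (2.0 / n) * X.T @ (X @ beta_gd - y)

rmse = lambda b: np.sqrt(np.mean((X_te @ b - y_te) ** 2))
print(rmse(beta_exact), rmse(beta_gd))      # the "perfect" solution usually fares far worse
```

Directions with tiny singular values converge last under gradient descent, so stopping early acts like an implicit ridge penalty: exactly the imperfection that protects the neural network here.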
This occurs even when we select a relatively small learning rate ($$10^{-5}$$) and many epochs ($$10^{6}$$), and we achieve a training error that is smaller than $$10^{-13}$$.³ But the main point here is that the training error is not 0, so our solution is only approximately correct for the training data. This has negligible effects for all p/n values other than 1. But when p/n = 1, perfection vs. approximate perfection matters.

The general idea of allowing some imperfections to get better predictions should not be too novel, since the whole idea behind LASSO or Ridge regression is to allow some bias to reduce variance. It is just nice to see how it can reveal itself under different scenarios. We will continue to see this pattern in the coming blog posts related to the double descent phenomenon as well.

1. In this example, we used the MSE as the prediction error. Other accuracy metrics would yield qualitatively similar results in the sense that the sweet spot minimizes neither the variance nor the bias.↩︎

2. This is shown in the paper titled “Triple descent and the Two Kinds of Overfitting: Where & Why do they Appear?” by d’Ascoli, Sagun, and Biroli.↩︎

3. If the learning rate were infinitely small and the number of epochs infinitely large, then we would reach the linear model solution.↩︎
https://projecteuclid.org/journals/electronic-journal-of-statistics/volume-15/issue-2/Normal-approximation-and-confidence-region-of-singular-subspaces/10.1214/21-EJS1876.full
Normal approximation and confidence region of singular subspaces
Dong Xia
Electron. J. Statist. 15(2): 3798-3851 (2021). DOI: 10.1214/21-EJS1876

## Abstract

This paper is on the normal approximation of singular subspaces when the noise matrix has i.i.d. entries. Our contributions are three-fold. First, we derive an explicit representation formula of the empirical spectral projectors. The formula is neat and holds for deterministic matrix perturbations. Second, we calculate the expected projection distance between the empirical singular subspaces and the true singular subspaces. Our method allows obtaining an arbitrary $k$-th order approximation of the expected projection distance. Third, we prove the non-asymptotic normal approximation of the projection distance with different levels of bias corrections. By the $\lceil \log ({d_{1}}+{d_{2}})\rceil$-th order bias corrections, the asymptotic normality holds under the optimal signal-to-noise ratio (SNR) condition, where ${d_{1}}$ and ${d_{2}}$ denote the matrix sizes. In addition, it shows that higher order approximations are unnecessary when $|{d_{1}}-{d_{2}}|=O({({d_{1}}+{d_{2}})^{1/2}})$. Finally, we provide comprehensive simulation results to merit our theoretical discoveries.

Unlike the existing results, our approach is non-asymptotic and the convergence rates are established. Our method allows the rank $r$ to diverge as fast as $o({({d_{1}}+{d_{2}})^{1/3}})$.
Moreover, our method requires no eigen-gap condition (except the SNR) and no constraints between ${d_{1}}$ and ${d_{2}}$.

## Funding Statement

This research is supported partially by Hong Kong RGC Grant ECS 26302019 and GRF 16303320.

## Acknowledgments

The author would like to thank Yik-Man Chiang for the insightful recommendations on applying the Residue theorem, Jeff Yao for the encouragements on improving the former results, and an anonymous referee for pointing out the reference Kato (2013).

## Citation

Dong Xia. "Normal approximation and confidence region of singular subspaces." Electron. J. Statist. 15 (2) 3798 - 3851, 2021. https://doi.org/10.1214/21-EJS1876

## Information

Received: 1 November 2020; Published: 2021
First available in Project Euclid: 29 July 2021

Digital Object Identifier: 10.1214/21-EJS1876

Subjects: Primary: 62H10, 62H25; Secondary: 62G20

Keywords: normal approximation, projection distance, random matrix theory, singular value decomposition, spectral perturbation

Journal article, 54 pages. Electronic Journal of Statistics, Vol. 15, No. 2, 2021.
https://crypto.stackexchange.com/questions/88607/ntru-euclidean-algorithm-the-inverse-of-f-modulo-p
# NTRU Euclidean algorithm the inverse of f modulo p

I am very new to the world of cryptography and have just begun my research in the post-quantum cryptography sector. I have been reading and trying to understand NTRU key generation and am struggling to understand how to practically compute the inverse of f. I know that this is solved using the Euclidean algorithm, but I am unable to figure out what the steps would be. I was hoping someone would be able to show the steps to solve the example shown on the NTRUEncrypt wiki page shown below (fp or fq), or a smaller polynomial that gets the point across. I am more interested in the practical steps than the theory behind it, as I have found many resources for the latter.

Thank you!

[Image: NTRUEncrypt key-generation example from Wikipedia]

• It is found by the extended GCD. If you type "find polynomial inverse modulo" into your search engine you will see. Almost all books mention this, too. See the Handbook of Applied Cryptography, page 82. – Mar 4, 2021 at 7:46

---

I'll try to follow a similar notation to the example on Wikipedia.

Our initial inputs are $$p(x) = x^{11}-1$$, which defines the ring, and $$a(x)=-x^{10}+x^9+x^6-x^4+x^2-1$$, working mod 3 rather than the mod 2 example on Wikipedia.

For step 1 we have quotient $$q_1(x)=2x+2$$ and remainder $$r_1(x)=x^9+x^7+x^6+2x^5+2x^4+x^3+2x^2+1$$, so that $$t_1=x+1$$.

For step 2 we have quotient $$q_2=2x+1$$ and remainder $$r_2=x^8+2x^6+x^4+x^3+2x^2+2x+1$$ and $$t_2=x^2$$.

For step 3 we have quotient $$q_3=x$$ and remainder $$r_3=2x^7+x^6+x^5+x^4+2x^3+2x+1$$ and $$t_3=x^3+x+1$$.

For step 4 we have quotient $$q_4=2x+2$$ and remainder $$r_4=x^6+2x^5+x^4+x^2+2x+2$$ and $$t_4=2x^4+2x^3+2x^2+2x+1$$.

For step 5 we have quotient $$q_5=2x$$ and remainder $$r_5=2x^5+x^4+2x^2+x+1$$ and $$t_5=2x^5+2x^4+x^3+2x^2+2x+1$$.

For step 6 we have quotient $$q_6=2x$$ and remainder $$r_6=x^4+2x^3+2x^2+2$$ and $$t_6=2x^6+2x^5+x^3+x^2+1$$.

For step 7 we have quotient $$q_7=2x$$ and remainder $$r_7=2x^3+2x^2+1$$
and $$t_7=2x^7+2x^6+2x^5+2x^3+2x^2+1$$.

For step 8 we have quotient $$q_8=2x+2$$ and remainder $$r_8=x^2+x$$ and $$t_8=2x^9+x^8+2x^7+x^5+2x^4+2x^3+2x+1$$.

For step 9 we have quotient $$q_9=2x$$ and remainder $$r_9=1$$ and $$t_9=2x^9+x^8+2x^7+x^5+2x^4+2x^3+x+2$$, which is the required inverse.

I'll leave the mod 32 example as an exercise (sagemath is your friend).

• Thank you, this is what I was hoping to see. I think I was using an invalid polynomial for my p(x). Thank you for your input! – Mar 4, 2021 at 19:16

• Thanks, this has been useful for me too. I have a follow-up question: the Euclidean algorithm usually only runs in Euclidean domains (ED), and when working mod 3 you're running it in $\mathbb{Z}_3[X]$ (right?), which I see is an ED because $\mathbb{Z}_3$ is a field. But if you're working mod 32, then could there be problems, as $\mathbb{Z}_{32}[X]$ is no longer an ED? – wdc, May 29, 2021 at 2:15
https://www.arxiv-vanity.com/papers/astro-ph/9711288/
[ "###### Abstract\n\nWe propose a simple model in which the cosmological dark matter consists of particles whose mass increases with the scale factor of the universe. The particle mass is generated by the expectation value of a scalar field which does not have a stable vacuum state, but which is effectively stabilized by the rest energy of the ambient particles. As the universe expands, the density of particles decreases, leading to an increase in the vacuum expectation value of the scalar (and hence the mass of the particle). The energy density of the coupled system of variable-mass particles (“vamps”) redshifts more slowly than that of ordinary matter. Consequently, the age of the universe is larger than in conventional scenarios.\n\nNSF-ITP/97-146\n\nNUHEP-TH-97-14\n\nastro-ph/9711288\n\nDark Matter with Time-Dependent Mass111Based on a talk by SMC at COSMO-97, International Workshop on Particle Physics and the Early Universe, 15-19 September 1997, Ambleside, Lake District, England.\n\nGreg W. Anderson Dept. of Physics and Astronomy, Northwestern University\n\n2145 Sheridan Rd., Evanston, IL 60208-3112, USA\n\nEmail:\n\nSean M. Carroll Institute for Theoretical Physics, University of California,\n\nSanta Barbara, California 93106, USA\n\nEmail:\n\n## 1 Introduction\n\nThe Big Bang model has proven extraordinarily successful as a framework for interpreting the structure and evolution of the universe on large scales. Within that framework, the cold dark matter scenario (featuring massive particles which bring the density of the universe to its critical value, and a scale-free spectrum of Gaussian density perturbations) has provided an elegant theory of structure formation, which unfortunately seems to fall short of perfect agreement with observation. 
Although the precise extent to which CDM disagrees with observation is arguable, there are two important areas in which the discrepancies are particularly troubling: predicting an age for the universe which is larger than the ages of the oldest globular clusters, and matching the COBE-normalized power spectrum of density fluctuations as measured by microwave background anisotropy experiments and direct studies of large-scale structure.\n\nOne way in which the simple CDM scenario may be modified, affecting the age of the universe as well as the evolution of density fluctuations, is to imagine that the closure density is provided by something different than (or in addition to) nonrelativistic particles. In a flat Robertson-Walker universe with metric\n\n ds2=−dt2+a2(t)(dx2+dy2+dz2) (1)\n\nand energy-momentum tensor\n\n Tμν=diag(−ρ,p,p,p) , (2)\n\nthe Friedmann equations imply that the time derivative of the scale factor satisfies\n\n ˙a2=8πG3a2ρ . (3)\n\nThe evolution of is therefore dependent on how the energy density scales with ; if , we have\n\n ¨aa=4πG3(2−n)ρ . (4)\n\nHence, the more slowly the energy density decreases as the universe expands, the more slowly the expansion will decelerate, implying a correspondingly older universe for any given value of the expansion rate today — for a flat universe dominated by such an energy density, the age is , where is the Hubble parameter and the subscript refers to the present time. (Eq. (4) can also be derived by positing an equation of state and using energy conservation; the two parameterizations are related by .) The energy density in a species of ordinary “matter” (a nonrelativistic particle species ) can be written , where is the mass of the particle and is its number density. 
The energy density of a matter-dominated universe is therefore proportional to a^(−3), as the mass stays constant while the number density is inversely proportional to the volume; the age of such a universe is t₀ = (2/3) H₀⁻¹.

Although there is some controversy over the value of the Hubble constant H₀ = 100h km/sec/Mpc, most recent determinations favor values of h between roughly 0.6 and 0.8. The upper limit on the age of a matter-dominated flat universe, t₀ = (2/3) H₀⁻¹ ≈ 6.5 h⁻¹ Gyr, then falls below the ages calculated for the oldest globular clusters, whose central values (and even conservative lower bounds) exceed this limit for favored values of h. The apparent discrepancy between these values may be resolved by a revision in distance determinations to globular clusters, as suggested by recent measurements by the Hipparcos satellite; while this would be the simplest solution, further work is necessary to accept it with confidence.

Alternative resolutions are provided by models in which the matter density parameter is less than one, and some or all of the unseen energy density resides in a component which redshifts more slowly than nonrelativistic matter. The most popular such alternative is the introduction of a cosmological constant Λ, for which n = 0. Such models have some attractive features, but are also plagued with both theoretical and observational disadvantages [4, 5]. A popular variation on this theme is to invoke a slowly-rolling scalar field, or equivalently a cosmological constant whose value varies with time, or simply an unspecified smooth component. More speculative possibilities include a network of cosmic strings or stable textures. We will not enumerate the good and bad qualities of each of these scenarios, noting only that none are sufficiently compelling to discourage the exploration of still further models.

In this paper we propose a simple model in which the dark matter consists of particles whose rest mass increases with time.
This is achieved by having the rest mass derive from the expectation value of a scalar field φ; the potential for φ depends on the number density of the particles, and therefore changes naturally on cosmological timescales as the universe expands. As a result, the particle energy density decreases more slowly than a^(−3), resulting in a larger age for the universe. (There is also a contribution from the potential energy of φ, which redshifts at the same rate.) We discuss some of the cosmological consequences of this proposal, including potential observational tests. The question of structure formation in the presence of such particles, as well as the construction of realistic particle physics models containing the necessary fields, is left for future work.

After this paper was first submitted, we became aware of an earlier proposal for dark matter with time-dependent mass by Casas, García-Bellido, and Quirós. These authors considered models of scalar-tensor gravity, in which the scalar coupled differently to different species of particles.

## 2 Scale factor and age of the universe

The model consists of a scalar φ and a particle species ψ, which can be either bosonic or fermionic for the purposes of this work. The mass of ψ is imagined to come from the vacuum expectation value of φ, with the constant of proportionality some dimensionless parameter λ:

    m_ψ = λ⟨φ⟩ .   (5)

More elaborate dependences of m_ψ on φ are certainly conceivable, but for the purposes of this paper we make this simple choice. The dynamics of φ are determined by a conventional kinetic term and a potential energy U(φ). The notable feature of the model is that we choose the potential to blow up at φ = 0 and roll monotonically to zero as φ → ∞. For simplicity we will write

    U(φ) = u₀ φ^(−p) ,   (6)

although more complicated forms are again possible.
While such a potential seems unusual, this form can arise for example due to nonperturbative effects lifting flat directions in supersymmetric gauge theories, as well as for moduli fields in string theory. (In fact this form of potential is not strictly necessary, as the phenomenon we will describe can occur with almost any potential; however, the effects are most dramatic with this choice.)

This model possesses no stable vacuum state; in empty space φ tends to roll to infinity. We consider instead the behavior of φ in a homogeneous background of ψ's with number density n_ψ. In that case, the dependence of the free energy on the value of φ comes both from the potential U(φ) and the rest energy of the ψ particles, which have a mass proportional to φ. The equilibrium value of a homogeneous φ configuration is therefore one which minimizes an effective potential of the form

    V(φ) = u₀ φ^(−p) + λ n_ψ φ .   (7)

(See Fig. 1.)

Figure 1: Effective potential for φ. The light solid curve is the bare potential U(φ) ∝ φ^(−1). The effective potential at finite density, given by the solid curves, is obtained by adding a contribution linear in φ and proportional to the number density n_ψ. This is plotted for two different values of n_ψ, corresponding to two different stages in the evolution of the universe. As the universe expands, n_ψ decreases, and the equilibrium value of φ increases.

The additional contribution can be thought of as arising because increasing φ increases the energy density in ψ's, since it increases the mass of ψ. The expectation value of φ is therefore

    ⟨φ⟩ = (p u₀ / λ n_ψ)^(1/(1+p)) .   (8)

(Such a configuration is not truly stable, as spatially inhomogeneous perturbations will tend to grow, but it can be stable enough for cosmological purposes.) Density-dependent potentials such as this have been discussed previously in other contexts.

In an expanding universe, the number density n_ψ will change with time; in turn, the mass of both ψ and φ will change, as will the vacuum energy.
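The minimization leading to Eq. (8), and the behavior shown in Fig. 1, can be checked numerically. The following sketch is our own; all parameter values in it are arbitrary and chosen only for illustration.

```python
# Illustrative check (not from the paper): minimize the effective potential
# V(phi) = u0*phi**(-p) + lam*n_psi*phi numerically and compare with the
# analytic minimum <phi> = (p*u0/(lam*n_psi))**(1/(1+p)) of Eq. (8).

def phi_min_analytic(u0, lam, n_psi, p):
    """Analytic equilibrium value of Eq. (8)."""
    return (p * u0 / (lam * n_psi)) ** (1.0 / (1.0 + p))

def phi_min_numeric(u0, lam, n_psi, p, lo=1e-6, hi=1e6, iters=200):
    """Golden-section search for the minimum of V(phi) on [lo, hi]."""
    g = (5 ** 0.5 - 1) / 2
    V = lambda phi: u0 * phi ** (-p) + lam * n_psi * phi
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - g * (b - a), a + g * (b - a)
        if V(c) < V(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

u0, lam, p = 1.0, 0.1, 1
for n_psi in (10.0, 1.0, 0.1):  # expansion dilutes n_psi ...
    print(n_psi, phi_min_numeric(u0, lam, n_psi, p),
          phi_min_analytic(u0, lam, n_psi, p))
# ... and the equilibrium value of phi grows as n_psi drops, as in Fig. 1.
```

The numeric and analytic minima agree, and lowering n_ψ raises ⟨φ⟩, which is the mechanism driving the mass growth.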
After the interactions of ψ have frozen out, the number density can be written n_ψ = n_ψ0 a^(−3), where n_ψ0 is the density when a = 1, which we take to be the present epoch. Then φ evolves as

    φ = φ₀ a^(3/(1+p)) ,   (9)

where φ₀ is the value of (8) at the present time. In terms of these variables the mass of the φ boson is given by

    m_φ² = ∂²V/∂φ² = [p(p+1) u₀ φ₀^(−(p+2))] a^(−3(2+p)/(1+p)) ,   (10)

and the mass of ψ is

    m_ψ = λ φ₀ a^(3/(1+p)) .   (11)

Both φ and ψ are therefore variable-mass particles, or "vamps"; a cosmological model in which vamps are the dominant component of the energy density at late times will be referred to as VDM.

There are a number of contributions to the energy density of the universe in this model. These include the energy in the scalar φ particles, in the ψ particles, in the time derivative of the expectation value of φ, in the potential U(φ), and in ordinary components of matter and radiation. For reasonable values of the parameters, the energies in φ̇ and in φ quanta are small; the former because φ is only changing on cosmological timescales, and the latter because the mass of φ is decreasing with time. (At early times, the φ bosons are very massive and rapidly decay.) The important new contribution is therefore simply ρ_V, the sum of the fundamental potential U(φ) and the rest energy in the ψ's. (We assume for now that ψ is nonrelativistic. As we discuss later, it is most likely that the ψ particles were relativistic when they decoupled, but at late times their momenta have redshifted sufficiently that they are slowly moving today.) Both of these components turn out to depend on the scale factor in the same way; the ratio of the energy density in the potential for φ to that in the ψ particles is simply

    ρ_U(φ) / ρ_ψ = 1/p .   (12)

It is therefore convenient to deal with the sum of these two contributions,

    ρ_V = (1+p) u₀ φ^(−p) = ((1+p)/p) λ φ n_ψ ,   (13)

which evolves as

    ρ_V = ρ_V0 a^(−3p/(1+p)) .   (14)

The parameter characterizing the effective equation of state of the coupled φ/ψ system is therefore w_V = −1/(1+p).

The energy density in ordinary massive particles (baryons plus a possible cold dark matter component) redshifts as a^(−3), more rapidly than ρ_V, and will therefore be the dominant source of energy density in the universe for intermediate redshifts. The redshift at which ρ_V = ρ_M is given by

    1 + z_VM = (ρ_V0/ρ_M0)^((1+p)/3) .   (15)

The age of the universe, meanwhile, will be larger than in conventional flat models. The age corresponding to a redshift z is given by

    t = ∫₀^a da′/ȧ′ = H₀⁻¹ ∫₀^(1/(1+z)) [1 − Ω₀ + Ω_M0 x⁻¹ + Ω_V0 x^((2−p)/(1+p))]^(−1/2) dx .   (16)

For the limiting case Ω_V0 = 1, Ω_M0 = 0, we find that the age of the universe now is simply

    t₀ = (2/3) H₀⁻¹ (1 + p⁻¹) .   (17)

Fig. 2 plots the age of flat universes (Ω_M0 + Ω_V0 = 1) as a function of Ω_V0, for p = 1.

An interesting feature of this model, in comparison with alternative theories of rolling scalar fields and unusual equations of state, is that (because ψ is massive and nonrelativistic at late times) it is at least conceivable that the energy density of the universe is dominated solely by baryons and vamps (without any significant cold dark matter component). For illustrative purposes, let us define the "minimal VDM model" as a flat universe consisting solely of baryons and vamps, with p = 1 and the baryon density consistent with the prediction of Big Bang nucleosynthesis. In this minimal model, vamp-matter equality occurs at a comparatively low redshift z_VM, and the age of the universe turns out to be in good accord with the (pre-Hipparcos) ages of the oldest globular clusters.

Figure 2: Age of the universe in billions of years.
The values in this plot are computed for flat universes consisting only of vamps and nonrelativistic matter, with p = 1.

## 3 Particle parameters and abundances

The properties we have deduced to this point depend on the present energy density ρ_V0, but not on any assumptions about the parameters of the particle physics model in which we imagine the necessary fields and interactions could arise. To understand the formation of large-scale structure in the model, however, it is necessary to know the mass and average velocity of the ψ particles today, and computing these requires some detailed knowledge of the interactions of our two fields. In the absence of a specific model, we will estimate these quantities under the minimal assumptions that ψ was in thermal equilibrium at some high temperature and has evolved freely ever since.

We begin by considering the general problem of the motion of an otherwise free particle whose mass may vary throughout spacetime. The motion of such a particle extremizes the action

    S = ∫ √(−p_μ p^μ) dτ = λ ∫ φ(x^μ) [−g_μν (dx^μ/dτ)(dx^ν/dτ)]^(1/2) dτ ,   (18)

where τ is the proper time along the particle's trajectory and p^μ is the particle's four-momentum. Variation of this action with respect to the path leads to an equation of motion

    Dp^μ/dτ ≡ dp^μ/dτ + Γ^μ_ρσ (dx^ρ/dτ) p^σ = −λ ∇^μ φ ,   (19)

which can be written explicitly in terms of the path as

    d²x^μ/dτ² + Γ^μ_ρσ (dx^ρ/dτ)(dx^σ/dτ) = −[g^μν + (dx^μ/dτ)(dx^ν/dτ)] ∂_ν(ln φ) .   (20)

Since we are assuming that φ is constant along spacelike hypersurfaces of the metric (1), we can solve explicitly for the motion of a particle obeying (19). In terms of the magnitude of the spacelike 3-momentum,

    |p⃗|² = g_ij p^i p^j = a² δ_ij p^i p^j ,   (21)

we find

    |p⃗| ∝ a⁻¹ ,   (22)

just as for conventional (constant-mass) particles. The distinction arises for the velocity; if the four-velocity satisfies u_μ u^μ = −1, the magnitude of the spatial velocity is proportional to (m_ψ a)⁻¹.
Thus, as the particles get more massive with time, they naturally slow down even more rapidly than ordinary test particles.

Although we have not specified any explicit interactions between the vamps and visible matter, we may imagine that such interactions exist, as long as they are sufficiently weak that they do not lead to consequences which would have already been observed. As a result of such interactions, we presume that the ψ's were in thermal equilibrium at some high temperature. Their equilibrium phase-space distribution function is either a Fermi-Dirac or Bose-Einstein distribution,

    f(p) = (g_ψ / h_P³) · 1/(e^(E/kT_ψ) ± 1) ,   (23)

where g_ψ is the number of spin degrees of freedom, h_P is Planck's constant, k is Boltzmann's constant, T_ψ is the temperature of the ψ's, and E is the energy, given by

    E = p⁰ = (m_ψ² + |p⃗|²)^(1/2) .   (24)

Going back in time, the temperature and density increase, so T_ψ goes up while m_ψ goes down. At sufficiently early times, therefore, the ψ particles were relativistic, with E ≈ |p⃗|. Under these circumstances the ψ's behave like ordinary relativistic particles; their temperature redshifts as a⁻¹, and their energy density as a⁻⁴. When they become nonrelativistic, on the other hand, their kinetic temperature will scale as |p⃗|²/m_ψ ∝ a⁻² m_ψ⁻¹; they cool off more rapidly than ordinary matter. (Strictly speaking it is incorrect to speak of a temperature after the particles become nonrelativistic, as the varying rates at which the particles slow down will distort the initially thermal distribution.)

It is reasonable to assume that ψ was relativistic when it decoupled, and we may proceed under this assumption to show that it leads to a consistent picture. In that case the number density of ψ's today is given by the standard formula

    n_ψ0 = 825 r_ψ cm⁻³ ,   (25)

where r_ψ is the ratio of g_ψ,eff, the effective number of degrees of freedom in ψ, to g_*f, the total effective number of relativistic degrees of freedom at freeze-out. (In terms of the number of physical degrees of freedom g_ψ, g_ψ,eff = g_ψ for bosons and g_ψ,eff = (3/4) g_ψ for fermions.)
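The coefficient in Eq. (25) can be cross-checked with the standard frozen-out comoving ratio Y = n/s. This sketch is our own; the present entropy density s ≈ 2970 cm⁻³ used below is an assumed textbook-level value, not a number taken from the paper.

```python
import math

# Rough cross-check (our own, under stated assumptions) of the relic
# formula of Eq. (25).  For a species decoupling while relativistic, the
# comoving ratio Y = n/s freezes at Y = (45*zeta(3)/(2*pi^4)) * r_psi,
# where r_psi = g_eff/g_*f (the 3/4 for fermions is inside g_eff), and
# the present number density is n = Y * s_today.

ZETA3 = 1.2020569        # Riemann zeta(3)
S_TODAY = 2970.0         # present entropy density in cm^-3 (assumed value)

def n_psi_today(r_psi):
    """Present number density of a relativistically decoupled relic, cm^-3."""
    Y = 45.0 * ZETA3 / (2.0 * math.pi ** 4) * r_psi
    return Y * S_TODAY

print(n_psi_today(1.0))  # close to the coefficient 825 of Eq. (25)
```

With the assumed entropy density the prefactor comes out within a percent of 825, so Eq. (25) is the standard result.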
For simplicity let us consider the case p = 1. Then we can directly determine the mass of ψ in terms of the current density parameter Ω_ψ0 and Hubble constant H₀ = 100h km/sec/Mpc:

    m_ψ = 12.7 Ω_ψ0 h² r_ψ⁻¹ a^(3/2) eV .   (26)

In terms of the Yukawa coupling λ, the other relevant parameters of the model are then

    u₀ = 1.02 × 10⁻⁹ (Ω_ψ0² h⁴ / λ r_ψ) (eV)⁵   (27)

and

    m_φ = 1.00 × 10⁻⁶ (λ r_ψ / Ω_ψ0^(1/2) h) a^(−9/4) eV .   (28)

The temperature of the ψ particles (while they are still relativistic) is diluted somewhat with respect to that of the photons, due to entropy production subsequent to the freeze-out of ψ:

    T_ψ = (g_*0/g_*f)^(1/3) T_γ0 a⁻¹ = 3.55 × 10⁻⁴ a⁻¹ g_*f^(−1/3) eV .   (29)

Comparing (26) to (29), we find that the ψ's first become non-relativistic at a redshift of

    z_NR = 66.3 (Ω_ψ0 h² g_*f^(1/3) / r_ψ)^(2/5) .   (30)

## 4 Further consequences

Although the VDM model helps to alleviate the age problem, there are a number of other cosmological tests that could conceivably rule it out. For example, nucleosynthesis places stringent limits on the number of degrees of freedom contributing to the energy density at T ∼ 1 MeV. Particles which decouple at sufficiently high energies are not constrained by this test, as their number density is diluted by entropy production after decoupling. We do not know the temperature at which ψ decouples, although there is no reason to believe that it isn't sufficiently high to evade the nucleosynthesis bound. Meanwhile, the energy density in φ is much less than that in the ψ's at high temperatures, and is therefore even less constrained.

Any model which increases the age of the universe by changing the behavior of the scale factor with time will be subject to various cosmological tests which are sensitive to that relationship; these are conventionally used to place limits on the cosmological constant Λ.
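Eqs. (26), (29) and (30) can be checked for mutual consistency by evaluating them together: at the redshift given by Eq. (30), the mass and the kinetic temperature should be comparable. The parameter choices below (Ω_ψ0 = 1, h = 0.65, r_ψ = 0.1, g_*f = 100) are our own illustrative assumptions, not values fixed by the text.

```python
# Numerical sketch (our own, for p = 1) evaluating Eqs. (26), (29), (30)
# for illustrative parameter choices.

def m_psi_eV(Omega, h, r_psi, a=1.0):
    """Vamp mass of Eq. (26), in eV."""
    return 12.7 * Omega * h**2 / r_psi * a**1.5

def T_psi_eV(g_star_f, a=1.0):
    """Relativistic psi temperature of Eq. (29), in eV."""
    return 3.55e-4 / (a * g_star_f ** (1.0 / 3.0))

def z_NR(Omega, h, r_psi, g_star_f):
    """Redshift of Eq. (30) at which m_psi(a) ~ T_psi(a)."""
    return 66.3 * (Omega * h**2 * g_star_f ** (1.0 / 3.0) / r_psi) ** 0.4

Omega, h, r_psi, gf = 1.0, 0.65, 0.1, 100.0   # assumed parameters
z = z_NR(Omega, h, r_psi, gf)
a_nr = 1.0 / (1.0 + z)
# consistency check: at a_nr the mass and temperature are comparable
print(z, m_psi_eV(Omega, h, r_psi, a_nr), T_psi_eV(gf, a_nr))
```

For these parameters the crossing condition m_ψ(a) ≈ T_ψ(a) is reproduced at the redshift returned by Eq. (30), which confirms the reconstructed form of that equation.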
Currently the most promising such tests are direct measurements of the deviation from the linear Hubble law using high-redshift Type Ia supernovae, and volume/redshift tests provided by the frequency of gravitational lensing of distant quasars by intervening galaxies. These have recently been applied to a number of models with novel dependences of the scale factor on time, very similar to the scenario discussed in this paper. The results to date seem to indicate that these tests do not rule out the kind of models considered here, but may be able to do so in the near future when more data are available.

Our investigation has been exclusively in the context of an unperturbed Robertson-Walker cosmology. The next step is to introduce perturbations and discuss CMB anisotropies and the formation of structure; work in this direction is in progress. However, it is worth noting some important features of the problem. There are two powerful effects which distinguish the growth of perturbations in a VDM cosmology from conventional cold dark matter, and they tend to affect the power spectrum in opposite ways. The first effect is the effectively negative pressure of the coupled φ/ψ system. At zero temperature, perturbations in vamps grow more rapidly than those in CDM; indeed, perturbations tend to grow even in the absence of gravity. The other effect, meanwhile, is the free streaming of the ψ particles. The ψ's decouple while relativistic, and in some respects act as hot dark matter. They will tend to flow out of overdense regions, damping the growth of perturbations until sufficiently late times. An accurate appraisal of the magnitude of this process requires numerical integration of the evolution equations, as the Boltzmann equation does not simplify as it would for massless or completely nonrelativistic particles.
These two competing effects are not the entire story; for example, if the ψ's are fermions they will be prevented from clustering on very small scales by the exclusion principle. The final perturbation spectrum is therefore the result of a number of processes, and cannot be reliably estimated analytically. In addition, of course, the simple model we have investigated here may be modified, either by altering the form of the potential (6) or by introducing other forms of energy in addition to baryons and vamps (e.g., ordinary hot or cold dark matter).

Another direction currently under investigation is the construction of particle physics models in which vamps may arise. A possible origin for the scalar φ is as one of the moduli of string theory; our understanding of the nonperturbative effects which give potentials to such fields is not sufficiently developed to attempt realistic model building at this time. In supersymmetric gauge theories, however, there are (perturbatively) flat directions whose dynamics are somewhat better understood, and in that context the search for a model may be more hopeful. In such a scenario there are a number of potentially dangerous effects which must be avoided; for example, if the expectation value of φ breaks supersymmetry, it may lead to gradual variations in the parameters of the standard model as the universe expands. Such variations are tightly constrained by a variety of data.

## Acknowledgments

We would like to thank Edmund Bertschinger, Edward Farhi, and Mark Srednicki for helpful conversations. This work was supported in part by the National Science Foundation under grant PHY/94-07195.
[ null, "https://media.arxiv-vanity.com/render-output/7320551/x1.png", null, "https://media.arxiv-vanity.com/render-output/7320551/x2.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8952394,"math_prob":0.97261655,"size":24181,"snap":"2023-14-2023-23","text_gpt3_token_len":5818,"char_repetition_ratio":0.14191173,"word_repetition_ratio":0.012787724,"special_character_ratio":0.24990696,"punctuation_ratio":0.17565359,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9763525,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-29T18:28:49Z\",\"WARC-Record-ID\":\"<urn:uuid:c2c669fd-18cc-4227-9f0d-41abf99a4319>\",\"Content-Length\":\"398333\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:de64a056-3efd-4461-9040-7a639190e4fb>\",\"WARC-Concurrent-To\":\"<urn:uuid:28faa0ad-0c7b-426e-ac13-9745ad1b4234>\",\"WARC-IP-Address\":\"172.67.158.169\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/astro-ph/9711288/\",\"WARC-Payload-Digest\":\"sha1:JKTYF7VXTQQI64PDYOOZBE6YPRVT7RIM\",\"WARC-Block-Digest\":\"sha1:XYXLMRTER3VQXU5LVS7HIG4TDG7EBLZG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224644907.31_warc_CC-MAIN-20230529173312-20230529203312-00525.warc.gz\"}"}
https://russianpatents.com/patent/211/2115884.html
[ "# A method of measuring the displacement\n\n(57) Abstract:\n\nA method of measuring the movement relates to fiber-optic transmission systems in measurement technology and is designed to measure displacements of the object. Summarized in the measurement area, the modulated radiation at a frequency of1modulate and lower than1frequency 2and12. Emit the signal of the first harmonic of the modulation frequency 1. Then the signals of the first harmonic and second harmonic served on the block comparison, where they perform the comparison of the latter. The output signal of the block comparison serves to the input of the comparator where it is compared with a reference voltage. The output signal of the comparator is fed to the control input of the sampling device and storage, which includes the modulation signal with a frequency of2and the measured value of relative displacement of the end faces of fiber-optic channels is determined by the output signal of sample and hold. The invention eliminates the effect of multiplicative noise and increase the accuracy of the measurements. 5 Il.\n\nThe invention relates to fiber-optic transmission systems in measurement technology and can be used to measure premasticated the amount of movement of the object , at which measured value the effect on the mutual location of the receiving and transmitting ends of the fiber-optic channels or change the conditions of propagation of radiation between the fixed ends of the receiving and transmitting optical fiber channels. For this monochromatic radiation through the transmitting fibre channel fail in the measurement zone, where they form the flux enclosed in a cone of aperture of the light guide. 
Part of the radiation flux illuminates the input end of the receiving fiber-optic channel, is carried by it out of the measurement zone, and is applied to a photodetector, where the radiation is converted into a proportional electrical signal used to determine the measured physical quantity. The physical basis of this method of measurement is the change, under the action of the measured parameter, in the intensity of the radiation passing from the end face of the transmitting fiber-optic channel to the receiving end face, in accordance with the directivity pattern and light transmission of the fiber-optic channels, the influence of the measured quantity, and various kinds of noise.

However, this method of measuring displacements has drawbacks that reduce the measurement accuracy.

A known work presents a way to measure, for example, the motion of an object by using a Fabry-Perot interferometer: monochromatic radiation is formed, guided into the measurement zone by the transmitting fiber channel, and then brought through a receiving fiber channel to the photodetector, where it is transformed into a proportional electrical signal. It uses homodyne methods of measurement of various physical quantities changing according to a harmonic law, based on the study of the harmonic components of the signal at the output of the homodyne system, with further decoding and analysis of its envelope. To implement one of the methods described, the signal taken from the output of the measuring system is decomposed into its spectrum. The phase difference is set so that its sine equals unity. Then, starting from the state of rest, oscillations are smoothly excited and the first maximum of the amplitude of the harmonic component at the fundamental frequency of the investigated object is found. Then the unknown amplitude is measured: the phase difference is re-established at π/2 and the harmonic component at the fundamental frequency is extracted.
The unknown value is then found from the corresponding formula.

The main disadvantages of the described method are the need to calculate the arguments of Bessel functions and to set fixed values of the phase difference in the measuring system, the limitation of the measurement range associated with the region of uniqueness of the Bessel functions, and the assumption that, while the two required values of the phase difference are being set, the characteristics of the laser radiation (frequency stability, laser intensity noise) and the environmental parameters remain constant. Realizing these conditions in practice is extremely difficult.

Closest to the invention in its technical essence is a method of measuring displacements, selected as the prototype, which consists of the following: monochromatic radiation is formed, its intensity and wavelength are modulated at a frequency ω₁ according to a harmonic law, and the radiation is guided through the transmitting fiber channel to the surface of the object at the measured distance. The radiation experiences interference phenomena, which give rise to nonlinear distortions in the optical system. From the photodetector signal, the second harmonic of the modulation frequency ω₁ is extracted, and the magnitude of its amplitude determines the desired distance. The implementation of the method is based on the following physical phenomenon: the power and the wavelength of a semiconductor laser depend on its pump current.

The disadvantages of this method are the relatively low accuracy of the displacement measurement, poor noise immunity, and complicated implementation. This is because, firstly, there is no accounting for multiplicative noise; secondly, although the second harmonic is a periodic function of the phase difference, its amplitude changes nonlinearly over the period. Therefore, determining the unknown quantity from the amplitude of the second harmonic is inaccurate because of the nonlinearity of the latter.
Let us consider the noise sources, which, as is well known, are divided into multiplicative and additive. The radiation power in the optical channel P can be expressed as follows:

    P = f(t,z)·P₀ + A(t,z) ,   (1)

where

f(t,z) is the expression for the multiplicative noise;
P₀ is the source optical power;
A(t,z) is the expression for the additive noise;
t is time;
z is the external influence.

Additive interference arises at the photodetector. Its suppression is relatively easy: sensitive components should be more thoroughly shielded from external radiation. Multiplicative noise is due to the following factors: instability of the radiation sources; inhomogeneity of the transparent medium of the fiber-optic path associated with aging of the fiber, its microbending, and temperature. Compensating for multiplicative noise requires a fundamental change in the design of the device and in the method for determining the desired quantity.

The disadvantages also include the limitation on the measurement range associated with the region of ambiguity of the function over its period.

The task of the invention is to eliminate these disadvantages, i.e., to improve the reliability and accuracy of measuring the displacement of an object.

This object is achieved by a method of displacement measurement in which monochromatic radiation is formed, its intensity and wavelength are modulated at a frequency ω₁ according to a harmonic law, the modulated radiation is guided by the transmitting fiber channel into the measurement zone, where it illuminates the input end face of the receiving fiber-optic channel located at the measured distance; the radiation returned through the receiving channel is brought to the photodetector, and the signal of the second harmonic of the modulation frequency ω₁ is extracted. The method differs from the known one in that the radiation is also modulated at a lower frequency Ω₂, with ω₁ >> Ω₂; the signal of the first harmonic of the modulation frequency ω₁ is extracted; and then the signals of the first and second harmonics are fed to a comparison block, which compares the
latter; the output signal of the comparison block is fed to the input of a comparator, where it is compared with a reference voltage; the output signal of the comparator is fed to the control input of a sample-and-hold device; the signal input of the sample-and-hold device receives the modulation signal of frequency Ω₂; and the measured relative displacement of the end faces of the fiber-optic channels is determined from the output signal of the sample-and-hold device.

The main features that distinguish the proposed method from the known one are the additional modulation of the radiation at the lower frequency Ω₂ and the additional extraction of the first harmonic of the modulation frequency ω₁, with subsequent comparison of the signals of the first and second harmonics; this defines the novelty. From the above it follows that the proposed method meets the criterion of "inventive step".

This gives the advantage of improved measurement accuracy through improved signal processing.

Fig. 1 presents a functional diagram of a device that implements the proposed method; Fig. 2 is a plot of the ratio between the amplitudes of the second and first harmonics of the modulation frequency ω₁ as a function of the measured distance between the receiving and transmitting ends of the fiber-optic channels; Fig. 3 is a diagram of signal processing for the case of comparison by division; Fig. 4 is a diagram of signal processing for the case of comparison by subtraction; Fig. 5 is a graph of the depth of modulation of the signal at the frequency Ω₂ as a function of the base of the Fabry-Perot interferometer (IFP).

The device that implements the proposed method contains an emitter 1, made in the form of a semiconductor laser, which is a source of monochromatic radiation; a device 2 for modulating the radiation at the frequency Ω₂; a device 3 for modulating the radiation at the frequency ω₁; a DC forming device 4 connected to the emitter 1; and, in sequence, a transmitting fiber-optic channel 5, the input end of which is optically connected
with the emitter 1 and whose output end is located in the measurement zone; a receiving fiber-optic channel 6, whose input end is located in the measurement zone coaxially with the output end of the transmitting channel and whose output end is optically connected with a photodetector 7; further, devices 8 and 9 extracting the signals of the first and second harmonics of the modulation frequency ω₁; a comparison block 10, where the signals coming from devices 8 and 9 are compared; and a comparator 11 connected with a reference-voltage block 12. The output of comparator 11 is connected to the control input of a sample-and-hold device 13, whose signal input is connected with the modulation device 2.

The DC forming device 4 brings the emitter 1 to its operating point; the modulation devices 2 and 3 change the pump current of the emitter 1, which in turn affects the intensity and spectral composition of the radiation of the latter. Device 3 modulates the radiation according to a harmonic law at the high frequency ω₁ (1 - 10 MHz); device 2 modulates the radiation at the lower frequency Ω₂ (1 - 1000 kHz). For simplicity, we assume that the modulation at the frequency Ω₂ follows a sawtooth law: a slow linear increase of the signal to a certain level, after which it quickly drops to the initial level. The value of the frequency ω₁ is chosen according to the capabilities of the hardware implementation of blocks 7 - 10, and the value of the frequency Ω₂ from the condition that the response time of the circuits accommodates both frequencies Ω₂ and ω₁. Modulation of the pump current by device 3 causes modulation of the radiation: in addition to the constant component of the wavelength λ₀, an additional deviation Δλ₁ appears; similarly, device 2 adds to the wavelength λ₀ a deviation Δλ₂.
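The combined pump-current modulation described above (a slow sawtooth sweep plus a small fast harmonic component) can be sketched as follows. This is not from the patent; the frequencies lie within the ranges quoted in the text, but all current levels are illustrative assumptions.

```python
import math

# Illustrative sketch (not from the patent) of the combined pump-current
# modulation: a slow sawtooth at the low frequency Omega_2 plus a small
# harmonic component at the high frequency omega_1.  All numeric values
# below are assumptions chosen only for illustration.

I0 = 50e-3       # constant pump-current component, A (assumed)
i_saw = 5e-3     # sawtooth amplitude i0 (slow wavelength sweep), A (assumed)
i_harm = 0.5e-3  # harmonic amplitude i << i0, A (assumed)
f2 = 10e3        # sawtooth repetition frequency, 10 kHz (within 1-1000 kHz)
f1 = 5e6         # harmonic modulation frequency, 5 MHz (within 1-10 MHz)

def pump_current(t):
    """I(t) = I0 + i0*saw(t) + i*cos(2*pi*f1*t), over each sawtooth period."""
    phase = (t * f2) % 1.0   # sawtooth ramp rising linearly from 0 to 1
    return I0 + i_saw * phase + i_harm * math.cos(2 * math.pi * f1 * t)

samples = [pump_current(k / (50 * f1)) for k in range(1000)]
print(min(samples), max(samples))  # stays within [I0 - i, I0 + i0 + i]
```

The slow ramp sweeps the laser wavelength across one interference order while the fast harmonic probes the local slope of the interferometer transfer curve.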
The value of Δλ₁ is chosen from the condition that the modulation of the radiation at the frequency ω₁ does not lead to the emergence of the next order of interference, i.e.

    Δλ₁ << λ₀/m ,

where m is the order of interference.

The modulation of the radiation at the frequency Ω₂, in contrast, should lead to a shift of the order of the interference by one, which requires

    Δλ₂ ≈ λ₀/m .

Therefore, the amplitude i₀ of the sawtooth modulation of the pump current is calculated from the value of Δλ₂, which in turn is determined directly for each specifically made IFP. The amplitude i of the harmonic modulation of the pump current should be much less than i₀, i.e., i << i₀. The radiation thus formed is supplied by the transmitting fiber-optic channel 5 into the measurement zone. The output end of the transmitting channel 5 and the input end of the receiving channel 6 act as the mirrors of a Fabry-Perot interferometer, and the path difference of the rays depends on the distance between them, as well as on the parameters of the IFP and the supplied radiation.
Under the influence of the measured physical parameter, in particular when the studied object moves, the magnitude of the path difference of the rays in the IFP changes; it depends linearly on the distance between the mirrors:\n\n[formula lost]\n\nwhere lopt is the distance h between the mirrors of the IFP taking into account the refractive index n and the angle of incidence of the rays, lopt = hn.\n\nThe pump current of the emitter 1 is modulated as follows over one period of the "saw":\n\nI = I0 + i0ω2t + i cos(ω1t), (5)\n\nwhere\n\nI0 - constant component of the pump current;\n\ni0 - value small in comparison with I0, representing the sawtooth-law current modulation with frequency ω2;\n\ni - value small in comparison with I0, representing the harmonic-law current modulation with frequency ω1;\n\nω1 - modulation frequency;\n\nt - time.\n\nThen the power of the laser is determined, in a first approximation, by\n\nP0 = aI + b,\n\nwhere\n\na is a constant of order 7.5·10-2 W/A;\n\nb is a constant of order -2.5·10-3 W.\n\nThe transfer characteristic of the classical IFP has the following form:\n\n[formula lost]\n\nwhere\n\nP0 is the radiation power at the input of the IFP;\n\nρ is the reflectance of the IFP mirrors.\n\nBut as a result of the modulation of the pump current, the wavelength is variable and can be expressed as follows:\n\nλ = λ0 + kI, (9)\n\nwhere\n\nλ0 is the wavelength at the constant pump current I0;\n\nk = dλ/dI ≈ 6·10-9 m/A,\n\nor, taking (2) into account, the wavelength is represented as\n\nλ = λ0 + ki0ω2t + ki cos(ω1t). (10)\n\nThe power of the optical radiation transmitted by the IFP (i.e., falling onto the input end of the receiving fiber-optic channel 6 and then onto the photodetector 7), taking into account (4) and (7), can be represented in the form\n\n[formula lost] (11)\n\nThe expression (11) describes the transfer characteristic of the IFP as a function of time and of the modulation frequency ω1 of the pump current. The graph of this function in the coordinates of power PIFP and time ω1t is a curve that contains a number of extrema. 
The number and shape of these extrema depend on the magnitude of the pump-current modulation i, on the parameters of the interferometer and emitter (a, b, k, ρ) and, above all, on the base of the IFP, i.e. the distance between the mirrors h. Thus, as h changes (all other parameters fixed), the shape of the transfer characteristic changes cyclically, with two successive pulses arising from the edges (considered within one period), after which the process repeats.\n\nCurve (11) is periodic, so it is legitimate to consider its decomposition into a Fourier series of harmonics. Of interest here are the amplitudes of the first and second harmonics of the frequency ω1. For this purpose, the signal from the photodetector 7 is fed to the inputs of the devices extracting the signal of the first harmonic 8 and of the second harmonic 9 of the modulation frequency ω1; the output signals of devices 8 and 9 are fed to the input of the comparison block 10. The comparer 10 operates as follows: it either electronically divides the amplitude of the second harmonic by the amplitude of the first harmonic, or adjusts the signals of the first and second harmonics to one level and then subtracts one from the other. Both options make it possible to get rid of the multiplicative noise and are therefore of technical interest. Consider first the division option of signal processing. In this case, the signal at the output of block 10 is\n\nS = I2/I1, (12)\n\nwhere\n\nIi is the amplitude of the i-th harmonic of frequency ω1 in the decomposition of signal (11).\n\nThe graph of the function S versus the distance between the IFP mirrors is periodic; within each period there is a weak minimum and a pronounced maximum (Fig. 2). In the prototype, the second harmonic alone served as the informative signal, which caused errors. First, the multiplicative noise that is superimposed on the radiation at the output of the IFP affects the magnitude of the second harmonic to the same extent. Therefore, using the latter as an informative parameter obviously leads to a distortion of the desired displacement by the amount of the interference. 
Secondly, the amplitude of the second harmonic changes nonlinearly over the period; therefore, determining the desired displacement from the amplitude of the second harmonic is inaccurate.\n\nIn this method it is therefore proposed to use the ratio of the second harmonic to the first as the signal that controls the comparator 11. The latter compares the output signal of block 10 with the reference voltage Uop, which is provided by block 12, i.e. it can be configured for a certain pre-selected mode of operation. This may be tied, for example, to the gauge scale, thus avoiding limitations on the measurement range. In addition, electronic division leads to the suppression of the multiplicative noise contained in the signal and in its harmonics.\n\nConsider, for example, a linear increase of the base of the IFP (the measured displacement) (Fig. 3, a). The comparator 11 is triggered by the level of the incoming signal on its leading edge and controls the sample-and-hold device 13, which also receives the sawtooth signal of frequency ω2 from device 2 (Fig. 3, d). The device 13 generates a signal in accordance with the measured parameter U and stores it until the next control pulse from the comparator 11 (Fig. 3, e). The reference voltage Uop is selected so that the comparator 11 operates in the region where the output signal of the comparer 10 has the highest slope.\n\nIn the second case, the comparison block 10 subtracts the signals from one another, for example the amplitude of the second signal from the amplitude of the first, after adjusting them to intersection. This is possible with the use of an electronic amplifier whose gain kus is chosen as follows: the signal amplitudes of the first and second harmonics become such that the graph of their difference crosses zero (the time axis) in its steepest section. The output signal of block 10 is then\n\nS = I2 - I1,\n\nand within each period the graph of the function S crosses zero twice (Fig. 4, b and c). 
This also makes it possible to set the operating mode of the comparator using a zero reference voltage: the multiplicative noise affects I2 and I1 alike, but their difference is free of this interference at the point where it vanishes. In addition, in the case of subtraction the hardware diagram of the device is simpler than in the case of division. Electronic division requires the use of digital technology, which complicates the implementation of the method, while with analog division the division accuracy does not exceed 0.5 - 1%.\n\nSpecifically, the method can be implemented as follows.\n\nLet us calculate the pump-current amplitudes for a GaAs semiconductor laser, using published numerical data.\n\nThe DC forming device 4 brings the emitter 1 to its operating point, forming a constant current I0 = 100 mA. For simplicity, we assume that the modulation of the pump current by device 2 at the frequency ω2 follows a sawtooth law, adding to the wavelength λ0 the value Δλ2. This modulation of the radiation at the frequency ω2 should lead to a shift of the order of interference by one, which should correspond to (3), or\n\n[formula lost]\n\nwhere\n\nm is the order of interference.\n\nLet us determine the amplitude i0 of the sawtooth-law modulation of the radiation. Here n is the refractive index of the medium between the IFP mirrors and h is the distance between the mirrors (the base of the IFP).\n\nFormulas (14) and (15) lead to the following expression:\n\n[formula lost]\n\nThe wavelength, taking (2) into account, can be represented as\n\n[formula lost]\n\nfrom which we obtain\n\n[formula lost] (18)\n\nFor the calculation we take k = 6·10-9 m/A, λ0 = 8·10-7 m, n = 1. The expression (18) leads to the following result: to measure changes of the IFP base h on the order of several millimeters, it is advisable to set the value of i0 at about 10 mA. A plot of the modulation depth of the signal at the frequency ω2 versus the distance h between the IFP mirrors is shown in Fig. 5. 
In this case the modulation depth is about 10%.\n\nThe amplitude of the pump-current modulation by device 3 must be much less than the value of i0, in order to satisfy condition (2). In this case we can take i = 1 mA.\n\nThus, unlike the prototype, the proposed method makes it possible to eliminate the influence of multiplicative noise by suppressing it through electronic comparison of the signals. Furthermore, the method allows more accurate measurement of displacements, since the informative signal is proportional to the measured modulation.\n\n1. - M.: Energoatomizdat, 1989, pp. 5 - 8.\n\n2. Optical homodyne measurement methods. - "Foreign Radioelectronics" magazine, 1995, N 6, pp. 43 - 48.\n\n3. USSR author's certificate N 1516775, cl. G 01 B 11/14, 1989 (prototype).\n\n4. Butusov M. M. Fiber optics and instrumentation. - M.: Mashinostroenie, 1987, 330 pp.\n\nA method for motion measurement, in which monochromatic radiation is formed; its intensity and wavelength are modulated at a frequency ω1 according to a harmonic law; the modulated radiation is brought by a transmitting fiber-optic channel down to the measurement zone and launched into the end face of a receiving fiber-optic channel located at a distance from the output end of the transmitting fiber-optic channel; then, using the receiving fiber-optic channel, the radiation is led to a photodetector and to a device extracting the signal of the second harmonic of the modulation frequency ω1; characterized in that the radiation is additionally modulated at a lower frequency ω2, with ω1 >> ω2; the signal of the first harmonic of the modulation frequency ω1 is also extracted; the signals of the first and second harmonics are then fed to a comparison block, where they are compared; the output signal of the comparator is fed to the control input of a sample-and-hold device, whose signal input receives the modulation signal with frequency ω2; and the measured value of the relative displacement of the end faces of the fiber-optic channels is determined from the output signal of the sample-and-hold device.\n\nSame patents:", 
null, "The invention relates to the field of measurements, in particular for controlling the position of crane tracks in terms mainly of bridge cranes", null, "Vocabulary sensor // 2107258\nThe invention relates to the field of measurement technology and can be used in automatic process control systems of industrial enterprises", null, "Vocabulary sensor // 2107258\nThe invention relates to the field of measurement technology and can be used in automatic process control systems of industrial enterprises", null, "The invention relates to measuring technique and can be used in machine building, ferrous and nonferrous metallurgy in the production of rent, rubber and chemical industry in the manufacture of tubular products without stopping the process", null, "The invention relates to a device for measuring the size of the periodically moving object containing the optoelectronic measuring device that includes a transmitting-receiving elements located in at least one plane changes, perpendicular to the longitudinal axis of the object, and the processing unit, and the plane of the measuring portal is limited by at least two measuring beams arranged at an angle relative to each other", null, "Fiber optic sensor // 2082086\nThe invention relates to the field of measurement technology, in particular to the field of fiber-optical measuring instruments", null, "The invention relates to measurement devices, namely, devices for measuring the geometric parameters of the shells", null, "The invention relates to measurement techniques, in particular to a method of controlling parameters of objects, and in particular to methods of determining particle sizes, and can be used to determine the size of particles, their size composition and concentration in powders, suspensions, and aerosols", null, "The invention relates to measuring technique and can be used in fiber-optic technology in cable industry in the manufacture of optical fibers and cables, in measurement techniques in the 
creation and study of fiber-optic sensors, etc", null, "The invention relates to the field of measurements, in particular for controlling the position of crane tracks in terms mainly of bridge cranes", null, "The invention relates to the technical equipment and is intended to mark the boundaries of the active layer in the fuel rods in the process of their manufacture", null, "The invention relates to the control and measuring equipment, devices for measuring the temperature of the heated products in high-temperature processes", null, "The invention relates to the control and measurement technology, in particular to devices and devices to the measuring device for checking the alignment of parts, and can be used for the installation of steam turbines", null, "Vocabulary sensor // 2107258\nThe invention relates to the field of measurement technology and can be used in automatic process control systems of industrial enterprises", null, "Vocabulary sensor // 2107258\nThe invention relates to the field of measurement technology and can be used in automatic process control systems of industrial enterprises", null, "Vocabulary sensor // 2107258\nThe invention relates to the field of measurement technology and can be used in automatic process control systems of industrial enterprises", null, "The invention relates to measuring technique and can be used in machine building, ferrous and nonferrous metallurgy in the production of rent, rubber and chemical industry in the manufacture of tubular products without stopping the process", null, "" ]
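The harmonic-ratio scheme of blocks 7 - 10 above can be sketched numerically. The sketch below is an illustration only: it models the IFP with a classical Airy-type transmission, takes the constants a, b, k and λ0 from the text, and assumes its own mirror reflectance R, base h and normalized modulation frequency `f1`; it is not a model of the patented device itself.

```python
import numpy as np

# Illustration of the harmonic-ratio readout: modulate the pump current,
# let wavelength and power follow it, pass the light through an Airy-type
# Fabry-Perot transmission, and take the ratio of the second to the first
# harmonic of the modulation frequency. R, h and f1 are assumed values.

a, b = 7.5e-2, -2.5e-3          # emitter power law P0 = a*I + b (W, A), from the text
k = 6e-9                        # wavelength tuning rate dlambda/dI (m/A), from the text
lam0 = 8e-7                     # wavelength at the DC operating point (m), from the text
I0, i_amp = 0.100, 0.001        # DC pump current 100 mA and harmonic modulation 1 mA
R = 0.5                         # mirror reflectance (assumed)
h = 1.0e-3                      # mirror spacing, n = 1 (assumed)

f1 = 1.0                        # normalized modulation frequency (1 Hz over a 1 s window)
t = np.linspace(0.0, 1.0, 4096, endpoint=False)
I = I0 + i_amp * np.cos(2 * np.pi * f1 * t)
lam = lam0 + k * I              # Eq. (9): wavelength follows the pump current
P0 = a * I + b                  # emitter output power

# Airy-type transmission as a stand-in for the IFP transfer characteristic
F = 4 * R / (1 - R) ** 2
P_ifp = P0 / (1 + F * np.sin(2 * np.pi * h / lam) ** 2)

# Amplitudes of the first and second harmonics of f1, as blocks 8 and 9 extract
spec = np.abs(np.fft.rfft(P_ifp)) / len(t)
I1, I2 = 2 * spec[1], 2 * spec[2]
S = I2 / I1                     # Eq. (12): multiplicative noise cancels in the ratio
print(S)
```

Because any multiplicative factor on `P_ifp` scales `I1` and `I2` equally, it drops out of `S`, which is the point of the division option.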
[ null, "https://img.russianpatents.com/img_data/6/62775-s.jpg", null, "https://img.russianpatents.com/img_data/3/33449-s.jpg", null, "https://img.russianpatents.com/img_data/3/33449-s.jpg", null, "https://img.russianpatents.com/img_data/3/30202-s.jpg", null, "https://img.russianpatents.com/img_data/1/16404-s.jpg", null, "https://img.russianpatents.com/img_data/291/2912456-s.jpg", null, "https://img.russianpatents.com/img_data/319/3199972-s.jpg", null, "https://img.russianpatents.com/img_data/317/3179075-s.jpg", null, "https://img.russianpatents.com/img_data/313/3130679-s.jpg", null, "https://img.russianpatents.com/img_data/6/62775-s.jpg", null, "https://img.russianpatents.com/img_data/4/45088-s.jpg", null, "https://img.russianpatents.com/img_data/4/40896-s.jpg", null, "https://img.russianpatents.com/img_data/3/33450-s.jpg", null, "https://img.russianpatents.com/img_data/3/33449-s.jpg", null, "https://img.russianpatents.com/img_data/3/33449-s.jpg", null, "https://img.russianpatents.com/img_data/3/33449-s.jpg", null, "https://img.russianpatents.com/img_data/3/30202-s.jpg", null, "https://img.russianpatents.com/top.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91037685,"math_prob":0.9285597,"size":25199,"snap":"2019-51-2020-05","text_gpt3_token_len":5186,"char_repetition_ratio":0.19317324,"word_repetition_ratio":0.20805535,"special_character_ratio":0.20096035,"punctuation_ratio":0.08253187,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9618054,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36],"im_url_duplicate_count":[null,4,null,10,null,10,null,4,null,2,null,2,null,2,null,2,null,2,null,4,null,2,null,2,null,2,null,10,null,10,null,10,null,4,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-10T15:54:12Z\",\"WARC-Record-ID\":\"<urn:uuid:7fcae85b-25cf-452b-acef-29d6549b630f>\",\"Content-Length\":\"76081\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c6b6c557-f6f1-414f-8680-08700534fecb>\",\"WARC-Concurrent-To\":\"<urn:uuid:8725d532-f65d-479f-ae3b-f31cf1a70449>\",\"WARC-IP-Address\":\"45.32.90.106\",\"WARC-Target-URI\":\"https://russianpatents.com/patent/211/2115884.html\",\"WARC-Payload-Digest\":\"sha1:QR4HHHEN4XZBGOLS64HVCGAG3KAT5TOI\",\"WARC-Block-Digest\":\"sha1:MAOIQAPGPXH2XHL43WE3FDD7N4NHTPCQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540528457.66_warc_CC-MAIN-20191210152154-20191210180154-00216.warc.gz\"}"}
https://forum.ansys.com/discussion/27275/import-source-in-var-fdtd
[ "# import source in var-fdtd\n\nMember Posts: 1\n\nI would like to get an E-field profile using 3D fdtd and then use var-fdtd to further propagate the light.\n\nFor example, light is coupled to a slab via a grating coupler (simulation using 3D fdtd) and then propagate it in the slab using var-fdtd.\n\nIs it possible to do it in Lumerical? I do not see an option for importing a source in var-fdtd.\n\nTagged:\n\n• An import source is not available in var-FDTD. If you think it can be very useful and has wide use cases, you might want to submit a feature request through the Idea Exchange (IX). As an alternative, you can consider the following workflow:\n\ni) In the 3D FDTD simulation, calculate how much light is coupled into each of the modes the waveguide supports. If you are considering modes with y-polarization, you only need to calculate the coupling coefficients for these modes only. Assuming propagation in the x-direction, the field can be expressed as a superposition of the individual y-polarized modes:\n\nEy(y,z) = a1*Ey,1(y,z) + a2*Ey,2(y,z)+ ....\n\nii) In varFDTD, the 2D field profile ,Ey(y,z), at a waveguide cross-section can be constructed by multiplying the slab mode, E(z), and the field profile, E(y). (See the \"Analyze Results\" slide of the varFDTD - Solver Physics - Algorithm course)\n\nEy(y,z) = Ey(y)*M(z) = [a1*Ey,1(y) + a2*Ey,2(y)+ ....]*M(z)\n\nwhere\n\n• M(z): slab mode in the varFDTD\n• Ey,1(y), Ey,2(y), ....: eigenmodes of the waveguide in the varFDTD (based on the effective index profile)\n\nSo, you can use the coupling coefficients from the FDTD to obtain the results, Ey(y), in the varFDTD:\n\nEy(y) = a1*Ey,1(y) + a2*Ey,2(y)+ ....\n\nIt should be noted that any fields that are not coupled into the waveguide modes with specific polarization are not accounted for in the varFDTD simulations." ]
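Step (i) of the suggested workflow, computing how much light couples into each waveguide mode, amounts to overlap integrals of the FDTD cross-section field with the mode profiles. A minimal sketch with synthetic orthonormal sine modes standing in for the real eigenmodes (nothing here uses the Lumerical API; the field and modes are made up for the demonstration):

```python
import numpy as np

# Project a sampled cross-section field Ey(y) onto orthonormal mode profiles
# via overlap integrals, then reconstruct Ey(y) = a1*Ey,1(y) + a2*Ey,2(y) + ...
# The "modes" are hard-wall slab sines, an assumption for illustration only.

y = np.linspace(0.0, 1.0, 501)
dy = y[1] - y[0]

def mode(m):
    # Orthonormal eigenmodes of a hard-wall slab of unit width
    return np.sqrt(2.0) * np.sin(np.pi * m * y)

# Pretend this came out of the 3D FDTD monitor at the waveguide cross-section
Ey = 0.8 * mode(1) + 0.3 * mode(2) + 0.05 * mode(3)

# Coupling coefficients a_m = <mode_m | Ey> (modes are already normalized)
coeffs = [float(np.sum(mode(m) * Ey) * dy) for m in (1, 2, 3)]

# Reconstructed field to impose in the varFDTD simulation
Ey_rec = sum(a * mode(m) for a, m in zip(coeffs, (1, 2, 3)))
print([round(a, 3) for a in coeffs])
```

The recovered coefficients match the weights used to build `Ey`; in practice the mode profiles would come from a mode solver and the radiation not captured by any guided mode is lost, as the answer notes.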
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87265646,"math_prob":0.9631796,"size":1632,"snap":"2021-21-2021-25","text_gpt3_token_len":436,"char_repetition_ratio":0.10687961,"word_repetition_ratio":0.0,"special_character_ratio":0.2542892,"punctuation_ratio":0.14206128,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98598194,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-07T04:35:09Z\",\"WARC-Record-ID\":\"<urn:uuid:eadae9f9-1489-4a16-8668-464fcba5a176>\",\"Content-Length\":\"57511\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a1fd77e5-f970-4322-8a1e-a32b07835277>\",\"WARC-Concurrent-To\":\"<urn:uuid:9b25bfff-2d66-4d12-bf16-eec9f7c2ccff>\",\"WARC-IP-Address\":\"23.51.165.207\",\"WARC-Target-URI\":\"https://forum.ansys.com/discussion/27275/import-source-in-var-fdtd\",\"WARC-Payload-Digest\":\"sha1:OE633R66IAFLFN3G6BPFMRQ433TFX536\",\"WARC-Block-Digest\":\"sha1:C5XANWQMCOMF5LLBO3C74OQQODXMVVVC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988774.96_warc_CC-MAIN-20210507025943-20210507055943-00224.warc.gz\"}"}
http://arxiv-export-lb.library.cornell.edu/abs/2001.05781?context=math
[ "math\n\n# Title: Optimal parameter for the SOR-like iteration method for solving the system of absolute value equations\n\nAbstract: The absolute value equations (AVE) $Ax - |x| - b = 0$ is of interest of the optimization community. Recently, the SOR-like iteration method has been developed (Ke and Ma [{\\em Appl. Math. Comput.}, 311:195--202, 2017]) and shown to be efficient for numerically solving the AVE with $\\nu=\\|A^{-1}\\|_2<1$ (Ke and Ma [{\\em Appl. Math. Comput.}, 311:195--202, 2017]; Guo, Wu and Li [{\\em Appl. Math. Lett.}, 97:107--113, 2019]). Since the SOR-like iteration method is one-parameter-dependent, it is an important problem to determine the optimal iteration parameter. In this paper, we revisit the convergence conditions of the SOR-like iteration method proposed by Ke and Ma ([{\\em Appl. Math. Comput.}, 311:195--202, 2017]). Furthermore, we explore the optimal parameter which minimizes $\\|T(\\omega)\\|_2$ and the approximate optimal parameter which minimizes $\\eta=\\max\\{|1-\\omega|,\\nu\\omega^2\\}$. The optimal and approximate optimal parameters are iteration-independent. Numerical results demonstrate that the SOR-like iteration method with the optimal parameter is superior to that with the approximate optimal parameter proposed by Guo, Wu and Li ([{\\em Appl. Math. Lett.}, 97:107--113, 2019]).\n Comments: 15 pages, 5 figures, 6 tables Subjects: Numerical Analysis (math.NA); Optimization and Control (math.OC) Cite as: arXiv:2001.05781 [math.NA] (or arXiv:2001.05781v1 [math.NA] for this version)\n\n## Submission history\n\nFrom: Cairong Chen [view email]\n[v1] Thu, 16 Jan 2020 13:13:36 GMT (1226kb)\n\nLink back to: arXiv, form interface, contact." ]
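The SOR-like iteration the abstract refers to splits the AVE into Ax - y = b with y = |x| and relaxes both equations with the parameter ω. A minimal sketch on a 2x2 example (the matrix, right-hand side and choice of ω are illustrative, not taken from the paper):

```python
import numpy as np

# SOR-like iteration of Ke & Ma (2017) for the absolute value equations
# Ax - |x| - b = 0, written as the split system Ax - y = b, y = |x|:
#   x_{k+1} = (1 - w) x_k + w A^{-1} (y_k + b)
#   y_{k+1} = (1 - w) y_k + w |x_{k+1}|
# The example data below are illustrative choices.

A = np.array([[4.0, 1.0], [1.0, 4.0]])   # ||A^{-1}||_2 = 1/3 < 1, so the AVE is well posed
b = np.array([2.0, -3.0])
omega = 1.0                               # iteration parameter

x = np.zeros(2)
y = np.zeros(2)
for _ in range(200):
    x = (1 - omega) * x + omega * np.linalg.solve(A, y + b)
    y = (1 - omega) * y + omega * np.abs(x)

residual = float(np.linalg.norm(A @ x - np.abs(x) - b))
print(residual)
```

With ν = ||A⁻¹||₂ = 1/3 the fixed-point map is a contraction, so the residual is driven to machine precision; the paper's contribution is the choice of ω that makes this contraction fastest.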
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.68069315,"math_prob":0.953036,"size":1647,"snap":"2020-24-2020-29","text_gpt3_token_len":440,"char_repetition_ratio":0.13937919,"word_repetition_ratio":0.04385965,"special_character_ratio":0.30297512,"punctuation_ratio":0.1863354,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9922182,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-31T23:20:56Z\",\"WARC-Record-ID\":\"<urn:uuid:25a68de9-0ccf-44e7-9f4e-f34dc6d513a4>\",\"Content-Length\":\"15695\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:890f3ebe-fa26-4fb1-b3fb-04e77eee1ba2>\",\"WARC-Concurrent-To\":\"<urn:uuid:d9cfcc26-24d9-4142-ab3c-d701c152ba97>\",\"WARC-IP-Address\":\"128.84.21.203\",\"WARC-Target-URI\":\"http://arxiv-export-lb.library.cornell.edu/abs/2001.05781?context=math\",\"WARC-Payload-Digest\":\"sha1:KNJWBEDUNOFW477LB5WMJTE4KOP3L3TI\",\"WARC-Block-Digest\":\"sha1:TXXRRTNIJ6LWS2CW2JOVIF5U66Z5HKZH\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347413786.46_warc_CC-MAIN-20200531213917-20200601003917-00486.warc.gz\"}"}
https://ask.sagemath.org/question/10604/can-sage-be-used-to-solve-the-following-kind-of-problem/
[ "# Can sage be used to solve the following kind of problem?\n\n\"Given that f(1) = 9, f'(1) = 5, g(9) = 6 and g'(9) = 4, what is the approximate value of g(f(1.05))?\"\n\nedit retag close merge delete\n\nSort by » oldest newest most voted", null, "You can just use the following script to do it. You construct two functions f and g such that they have a slope of 5 and 4 respectively and add a constant such that f(1) = 9 and g(9)=6 then evaluate at 1.05 using g(f(1.05)).\n\nf(x)=5*x+4\ng(x)=4*x-30\ng(f(1.05))\n\nmore\n\nYou could even use the differential equation solvers to solve these as initial value problems and stick it in... that said, this sounds like a Hughes-Hallett homework problem." ]
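The answer's affine construction can be checked numerically, and the same value falls out of the chain-rule linearization g(f(1.05)) ≈ g(f(1)) + g'(f(1))·f'(1)·0.05. A quick sketch in plain Python (the Sage-specific `f(x)=...` syntax is replaced by ordinary functions):

```python
# Build affine f and g matching the given values and slopes, then compare
# the composition at 1.05 with the chain-rule linear approximation.

def f(x):
    return 5 * x + 4      # f(1) = 9, f'(1) = 5

def g(x):
    return 4 * x - 30     # g(9) = 6, g'(9) = 4

exact = g(f(1.05))

# g(f(1.05)) ≈ g(f(1)) + g'(f(1)) * f'(1) * 0.05 = 6 + 4 * 5 * 0.05
approx = 6 + 4 * 5 * 0.05
print(exact, approx)
```

Both come out to 7, which is the "approximate value" the textbook problem is after; the agreement is exact here only because f and g were chosen affine.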
[ null, "https://www.gravatar.com/avatar/e2127c859038194c0b0b9178ce60c719", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8214504,"math_prob":0.9972814,"size":917,"snap":"2019-51-2020-05","text_gpt3_token_len":307,"char_repetition_ratio":0.109529026,"word_repetition_ratio":0.2278481,"special_character_ratio":0.35986915,"punctuation_ratio":0.0990566,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994174,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-21T08:42:40Z\",\"WARC-Record-ID\":\"<urn:uuid:fc3c00da-306d-40d6-9aed-b877314270c0>\",\"Content-Length\":\"54556\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3ca86e1a-390a-43f7-9300-d91f4c9bd67d>\",\"WARC-Concurrent-To\":\"<urn:uuid:1d5303c4-b55a-4556-a21b-1b63939d1fd1>\",\"WARC-IP-Address\":\"140.254.118.68\",\"WARC-Target-URI\":\"https://ask.sagemath.org/question/10604/can-sage-be-used-to-solve-the-following-kind-of-problem/\",\"WARC-Payload-Digest\":\"sha1:6B6MRML3TPGCN27TM36REYYW7AFLTAJK\",\"WARC-Block-Digest\":\"sha1:LZ3MEH25SBXJ5FZ6LFL3BONOPGBRNOHE\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250601628.36_warc_CC-MAIN-20200121074002-20200121103002-00504.warc.gz\"}"}
https://www.cses.fi/problemset/task/1073
[ "CSES - Towers\n• Time limit: 1.00 s\n• Memory limit: 512 MB\nYou are given $n$ cubes in a certain order, and your task is to build towers using them. Whenever two cubes are one on top of the other, the upper cube must be smaller than the lower cube.\n\nYou must process the cubes in the given order. You can always either place the cube on top of an existing tower, or begin a new tower. What is the minimum possible number of towers?\n\nInput\n\nThe first input line contains an integer $n$: the number of cubes.\n\nThe next line contains $n$ integers $k_1,k_2,\\ldots,k_n$: the sizes of the cubes.\n\nOutput\n\nPrint one integer: the minimum number of towers.\n\nConstraints\n• $1 \\le n \\le 2 \\cdot 10^5$\n• $1 \\le k_i \\le 10^9$\nExample\n\nInput:\n5\n3 8 2 1 5\n\nOutput:\n2" ]
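A standard greedy fits the limits: keep the current top cube of every tower in a sorted list, and put each new cube on the tower whose top is the smallest size strictly larger than it, starting a new tower only when no such top exists. This is the patience-sorting argument, so the answer equals the length of the longest non-decreasing subsequence; it runs in O(n log n). A sketch:

```python
from bisect import bisect_right

def min_towers(cubes):
    tops = []                      # current top cube of each tower, sorted ascending
    for k in cubes:
        i = bisect_right(tops, k)  # leftmost tower top strictly greater than k
        if i == len(tops):
            tops.append(k)         # no tower can accept this cube: start a new one
        else:
            tops[i] = k            # place the cube there; that tower's top is now k
    return len(tops)

print(min_towers([3, 8, 2, 1, 5]))  # the example from the statement
```

Replacing `tops[i]` in place keeps the list sorted, since `tops[i-1] <= k < tops[i]` after `bisect_right`.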
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.58402807,"math_prob":0.9922737,"size":529,"snap":"2021-31-2021-39","text_gpt3_token_len":163,"char_repetition_ratio":0.13142857,"word_repetition_ratio":0.0,"special_character_ratio":0.32136106,"punctuation_ratio":0.14634146,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99538624,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-02T06:54:22Z\",\"WARC-Record-ID\":\"<urn:uuid:66d2cfa6-8d2f-4cf6-89d0-a2adab44b86e>\",\"Content-Length\":\"4396\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:48047f62-9ff2-4c97-b3d3-2e2b2a029d84>\",\"WARC-Concurrent-To\":\"<urn:uuid:ff6c51a1-29c5-4fef-bd89-4b024933250f>\",\"WARC-IP-Address\":\"188.166.104.231\",\"WARC-Target-URI\":\"https://www.cses.fi/problemset/task/1073\",\"WARC-Payload-Digest\":\"sha1:WYCHXWKO7RPNZFAXXH3JOWEW7XMHY4U2\",\"WARC-Block-Digest\":\"sha1:U2DVOTBDKH5K2TPEOYOTVIGOKWG73KBA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154304.34_warc_CC-MAIN-20210802043814-20210802073814-00252.warc.gz\"}"}
http://planning.cs.uiuc.edu/node800.html
[ "## 15.1.1 Stability\n\nThe subject of stability addresses properties of a vector field with respect to a given point. Let", null, "denote a smooth manifold on which the vector field is defined;", null, "may be a C-space or a phase space. The given point is denoted as", null, "and can be interpreted in motion planning applications as the goal state. Stability characterizes how", null, "is approached from other states in", null, "by integrating the vector field.\n\nThe given vector field", null, "is considered as a velocity field, which is represented as", null, "(15.1)\n\nThis looks like a state transition equation that is missing actions. If a system of the form", null, "is given, then", null, "can be fixed by designing a feedback plan", null, ". This yields", null, ", which is a vector field on", null, "without any further dependency on actions. The dynamic programming approach in Section 14.5 computed such a solution. The process of designing a stable feedback plan is referred to in control literature as feedback stabilization.\n\nSubsections\nSteven M LaValle 2012-04-20" ]
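As a toy instance of feedback stabilization, fix the goal state at the origin for a double integrator and close the loop with a linear feedback plan π(x); integrating the resulting vector field f(x, π(x)) drives the state toward the goal. The gains and the forward-Euler integrator below are arbitrary illustrative choices, not anything prescribed by the text:

```python
import numpy as np

# Double integrator q'' = u with feedback plan pi(x) = -k1*q - k2*q'.
# The closed-loop vector field f(x) = f(x, pi(x)) no longer depends on
# actions, and the chosen gains place both poles of s^2 + k2*s + k1 at -1.

k1, k2 = 1.0, 2.0

def closed_loop(x):
    q, qdot = x
    u = -k1 * q - k2 * qdot     # the feedback plan pi(x)
    return np.array([qdot, u])  # velocity field x' = f(x)

x = np.array([1.0, 0.0])        # start away from the goal state x_G = 0
dt = 0.01
for _ in range(5000):           # integrate the vector field for 50 s
    x = x + dt * closed_loop(x)

print(float(np.linalg.norm(x)))  # distance to x_G
```

The state converges to the goal from this (and in fact any) initial state, which is the asymptotic-stability property the section goes on to formalize.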
[ null, "http://planning.cs.uiuc.edu/img8.gif", null, "http://planning.cs.uiuc.edu/img8.gif", null, "http://planning.cs.uiuc.edu/img215.gif", null, "http://planning.cs.uiuc.edu/img215.gif", null, "http://planning.cs.uiuc.edu/img8.gif", null, "http://planning.cs.uiuc.edu/img14.gif", null, "http://planning.cs.uiuc.edu/img2984.gif", null, "http://planning.cs.uiuc.edu/img2985.gif", null, "http://planning.cs.uiuc.edu/img253.gif", null, "http://planning.cs.uiuc.edu/img2891.gif", null, "http://planning.cs.uiuc.edu/img6388.gif", null, "http://planning.cs.uiuc.edu/img8.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.96256256,"math_prob":0.93722016,"size":486,"snap":"2020-10-2020-16","text_gpt3_token_len":97,"char_repetition_ratio":0.13692947,"word_repetition_ratio":0.0,"special_character_ratio":0.19753087,"punctuation_ratio":0.07608695,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95664024,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,2,null,null,null,null,null,4,null,4,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-18T11:09:51Z\",\"WARC-Record-ID\":\"<urn:uuid:51a7c357-96c0-4bbb-bb87-1e0fd9c619c2>\",\"Content-Length\":\"7315\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e44f01b8-b9d4-41e1-93d6-200cf95e07b5>\",\"WARC-Concurrent-To\":\"<urn:uuid:6e217158-f456-499b-b973-2a4299d72e63>\",\"WARC-IP-Address\":\"192.17.58.220\",\"WARC-Target-URI\":\"http://planning.cs.uiuc.edu/node800.html\",\"WARC-Payload-Digest\":\"sha1:RP4UZMIY5Q3R6Q4XACIOMDMD2C7MTQ34\",\"WARC-Block-Digest\":\"sha1:7YE26PG4WKK7KMJX22SFT5AHLLQ5UXH7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875143646.38_warc_CC-MAIN-20200218085715-20200218115715-00458.warc.gz\"}"}
https://neuropsychology.github.io/NeuroKit/functions/signal.html
[ "# Signal Processing#\n\n## Preprocessing#\n\n### signal_simulate()#\n\nsignal_simulate(duration=10, sampling_rate=1000, frequency=1, amplitude=0.5, noise=0, silent=False, random_state=None)[source]#\n\nSimulate a continuous signal\n\nParameters:\n• duration (float) – Desired length of duration (s).\n\n• sampling_rate (int) – The desired sampling rate (in Hz, i.e., samples/second).\n\n• frequency (float or list) – Oscillatory frequency of the signal (in Hz, i.e., oscillations per second).\n\n• amplitude (float or list) – Amplitude of the oscillations.\n\n• noise (float) – Noise level (amplitude of the laplace noise).\n\n• silent (bool) – If `False` (default), might print warnings if impossible frequencies are queried.\n\n• random_state (None, int, numpy.random.RandomState or numpy.random.Generator) – Seed for the random number generator. See for `misc.check_random_state` for further information.\n\nReturns:\n\narray – The simulated signal.\n\nExamples\n\n```In : import pandas as pd\n\nIn : import neurokit2 as nk\n\nIn : pd.DataFrame({\n...: \"1Hz\": nk.signal_simulate(duration=5, frequency=1),\n...: \"2Hz\": nk.signal_simulate(duration=5, frequency=2),\n...: \"Multi\": nk.signal_simulate(duration=5, frequency=[0.5, 3], amplitude=[0.5, 0.2])\n...: }).plot()\n...:\nOut: <Axes: >\n```", null, "### signal_filter()#\n\nsignal_filter(signal, sampling_rate=1000, lowcut=None, highcut=None, method='butterworth', order=2, window_size='default', powerline=50, show=False)[source]#\n\nSignal filtering\n\nFilter a signal using different methods such as “butterworth”, “fir”, “savgol” or “powerline” filters.\n\nApply a lowpass (if “highcut” frequency is provided), highpass (if “lowcut” frequency is provided) or bandpass (if both are provided) filter to the signal.\n\nParameters:\n• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.\n\n• sampling_rate (int) – The sampling frequency of the signal (in Hz, i.e., 
samples/second).\n\n• lowcut (float) – Lower cutoff frequency in Hz. The default is `None`.\n\n• highcut (float) – Upper cutoff frequency in Hz. The default is `None`.\n\n• method (str) – Can be one of `\"butterworth\"`, `\"fir\"`, `\"bessel\"` or `\"savgol\"`. Note that for Butterworth, the function uses the SOS method from `scipy.signal.sosfiltfilt()`, recommended for general purpose filtering. One can also specify `\"butterworth_ba\"` for a more traditional and legacy method (often implemented in other software).\n\n• order (int) – Only used if `method` is `\"butterworth\"` or `\"savgol\"`. Order of the filter (default is 2).\n\n• window_size (int) – Only used if `method` is `\"savgol\"`. The length of the filter window (i.e. the number of coefficients). Must be an odd integer. If default, will be set to the sampling rate divided by 10 (101 if the sampling rate is 1000 Hz).\n\n• powerline (int) – Only used if `method` is `\"powerline\"`. The powerline frequency (normally 50 Hz or 60Hz).\n\n• show (bool) – If `True`, plot the filtered signal as an overlay of the original.\n\nReturns:\n\narray – Vector containing the filtered signal.\n\nExamples\n\n```In : import numpy as np\n\nIn : import pandas as pd\n\nIn : import neurokit2 as nk\n\nIn : signal = nk.signal_simulate(duration=10, frequency=0.5) # Low freq\n\nIn : signal += nk.signal_simulate(duration=10, frequency=5) # High freq\n\n# Visualize Lowpass Filtered Signal using Different Methods\nIn : fig1 = pd.DataFrame({\"Raw\": signal,\n...: \"Butter_2\": nk.signal_filter(signal, highcut=3, method=\"butterworth\",\n...: order=2),\n...: \"Butter_2_BA\": nk.signal_filter(signal, highcut=3,\n...: method=\"butterworth_ba\", order=2),\n...: \"Butter_5\": nk.signal_filter(signal, highcut=3, method=\"butterworth\",\n...: order=5),\n...: \"Butter_5_BA\": nk.signal_filter(signal, highcut=3,\n...: method=\"butterworth_ba\", order=5),\n...: \"Bessel_2\": nk.signal_filter(signal, highcut=3, method=\"bessel\", order=2),\n...: 
\"Bessel_5\": nk.signal_filter(signal, highcut=3, method=\"bessel\", order=5),\n...: \"FIR\": nk.signal_filter(signal, highcut=3, method=\"fir\")}).plot(subplots=True)\n...:\n```", null, "```# Visualize Highpass Filtered Signal using Different Methods\nIn : fig2 = pd.DataFrame({\"Raw\": signal,\n...: \"Butter_2\": nk.signal_filter(signal, lowcut=2, method=\"butterworth\",\n...: order=2),\n...: \"Butter_2_ba\": nk.signal_filter(signal, lowcut=2,\n...: method=\"butterworth_ba\", order=2),\n...: \"Butter_5\": nk.signal_filter(signal, lowcut=2, method=\"butterworth\",\n...: order=5),\n...: \"Butter_5_BA\": nk.signal_filter(signal, lowcut=2,\n...: method=\"butterworth_ba\", order=5),\n...: \"Bessel_2\": nk.signal_filter(signal, lowcut=2, method=\"bessel\", order=2),\n...: \"Bessel_5\": nk.signal_filter(signal, lowcut=2, method=\"bessel\", order=5),\n...: \"FIR\": nk.signal_filter(signal, lowcut=2, method=\"fir\")}).plot(subplots=True)\n...:\n```", null, "```# Using Bandpass Filtering in real-life scenarios\n# Simulate noisy respiratory signal\nIn : original = nk.rsp_simulate(duration=30, method=\"breathmetrics\", noise=0)\n\nIn : signal = nk.signal_distort(original, noise_frequency=[0.1, 2, 10, 100], noise_amplitude=1,\n...: powerline_amplitude=1)\n...:\n\n# Bandpass between 10 and 30 breaths per minute (respiratory rate range)\nIn : fig3 = pd.DataFrame({\"Raw\": signal,\n....: \"Butter_2\": nk.signal_filter(signal, lowcut=10/60, highcut=30/60,\n....: method=\"butterworth\", order=2),\n....: \"Butter_2_BA\": nk.signal_filter(signal, lowcut=10/60, highcut=30/60,\n....: method=\"butterworth_ba\", order=2),\n....: \"Butter_5\": nk.signal_filter(signal, lowcut=10/60, highcut=30/60,\n....: method=\"butterworth\", order=5),\n....: \"Butter_5_BA\": nk.signal_filter(signal, lowcut=10/60, highcut=30/60,\n....: method=\"butterworth_ba\", order=5),\n....: \"Bessel_2\": nk.signal_filter(signal, lowcut=10/60, highcut=30/60,\n....: method=\"bessel\", order=2),\n....: \"Bessel_5\": 
nk.signal_filter(signal, lowcut=10/60, highcut=30/60,\n....: method=\"bessel\", order=5),\n....: \"FIR\": nk.signal_filter(signal, lowcut=10/60, highcut=30/60,\n....: method=\"fir\"),\n....: \"Savgol\": nk.signal_filter(signal, method=\"savgol\")}).plot(subplots=True)\n....:\n```", null, "### signal_sanitize()#\n\nsignal_sanitize(signal)[source]#\n\nSignal input sanitization\n\nReset indexing for Pandas Series.\n\nParameters:\n\nsignal (Series) – The indexed input signal (e.g., from `pandas.DataFrame.set_index()`)\n\nReturns:\n\nSeries – The default indexed signal\n\nExamples\n\n```In : import pandas as pd\n\nIn : import neurokit2 as nk\n\nIn : signal = nk.signal_simulate(duration=10, sampling_rate=1000, frequency=1)\n\nIn : df = pd.DataFrame({'signal': signal, 'id': [x*2 for x in range(len(signal))]})\n\nIn : df = df.set_index('id')\n\nIn : default_index_signal = nk.signal_sanitize(df.signal)\n```\n\n### signal_resample()#\n\nsignal_resample(signal, desired_length=None, sampling_rate=None, desired_sampling_rate=None, method='interpolation')[source]#\n\nResample a continuous signal to a different length or sampling rate\n\nUp- or down-sample a signal. The user can specify either a desired length for the vector, or input the original sampling rate and the desired sampling rate.\n\nParameters:\n• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.\n\n• desired_length (int) – The desired length of the signal.\n\n• sampling_rate (int) – The original sampling frequency (in Hz, i.e., samples/second).\n\n• desired_sampling_rate (int) – The desired (output) sampling frequency (in Hz, i.e., samples/second).\n\n• method (str) – Can be `\"interpolation\"` (see `scipy.ndimage.zoom()`), `\"numpy\"` for numpy’s interpolation (see `np.interp()`), `\"pandas\"` for Pandas’ time series resampling, `\"poly\"` (see `scipy.signal.resample_poly()`) or `\"FFT\"` (see `scipy.signal.resample()`) for the Fourier method. 
`\"FFT\"` is the most accurate (if the signal is periodic), but becomes exponentially slower as the signal length increases. In contrast, `\"interpolation\"` is the fastest, followed by `\"numpy\"`, `\"poly\"` and `\"pandas\"`.\n\nReturns:\n\narray – Vector containing resampled signal values.\n\nExamples\n\nExample 1: Downsampling\n\n```In : import numpy as np\n\nIn : import pandas as pd\n\nIn : import neurokit2 as nk\n\nIn : signal = nk.signal_simulate(duration=1, sampling_rate=500, frequency=3)\n\n# Downsample\nIn : data = {}\n\nIn : for m in [\"interpolation\", \"FFT\", \"poly\", \"numpy\", \"pandas\"]:\n...: data[m] = nk.signal_resample(signal, sampling_rate=500, desired_sampling_rate=30, method=m)\n...:\n\nIn : nk.signal_plot([data[m] for m in data.keys()])\n```", null, "Example 2: Upsampling\n\n```In : signal = nk.signal_simulate(duration=1, sampling_rate=30, frequency=3)\n\n# Upsample\nIn : data = {}\n\nIn : for m in [\"interpolation\", \"FFT\", \"poly\", \"numpy\", \"pandas\"]:\n....: data[m] = nk.signal_resample(signal, sampling_rate=30, desired_sampling_rate=500, method=m)\n....:\n\nIn : nk.signal_plot([data[m] for m in data.keys()], labels=list(data.keys()))\n```", null, "Example 3: Benchmark\n\n```In : signal = nk.signal_simulate(duration=1, sampling_rate=1000, frequency=3)\n\n# Timing benchmarks\nIn : %timeit nk.signal_resample(signal, method=\"interpolation\",\n....: sampling_rate=1000, desired_sampling_rate=500)\n....: %timeit nk.signal_resample(signal, method=\"FFT\",\n....: sampling_rate=1000, desired_sampling_rate=500)\n....: %timeit nk.signal_resample(signal, method=\"poly\",\n....: sampling_rate=1000, desired_sampling_rate=500)\n....: %timeit nk.signal_resample(signal, method=\"numpy\",\n....: sampling_rate=1000, desired_sampling_rate=500)\n....: %timeit nk.signal_resample(signal, method=\"pandas\",\n....: sampling_rate=1000, desired_sampling_rate=500)\n....:\n```\n\n## Transformation#\n\n### signal_binarize()#\n\nsignal_binarize(signal, 
method='threshold', threshold='auto')[source]#\n\nBinarize a continuous signal\n\nConvert a continuous signal into zeros and ones depending on a given threshold.\n\nParameters:\n• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.\n\n• method (str) – The algorithm used to discriminate between the two states. Can be one of `\"threshold\"` (default) or `\"mixture\"`. If `\"mixture\"`, will use a Gaussian Mixture Model to categorize between the two states. If `\"threshold\"`, will consider as activated all points whose value is above the threshold.\n\n• threshold (float) – If `method` is `\"mixture\"`, then it corresponds to the minimum probability required to be considered as activated (if `\"auto\"`, then 0.5). If `method` is `\"threshold\"`, then it corresponds to the minimum amplitude to detect as onset. If `\"auto\"`, takes the midpoint between the max and the min.\n\nReturns:\n\nlist – A list or array depending on the type passed.\n\nExamples\n\n```In : import neurokit2 as nk\n\nIn : import numpy as np\n\nIn : import pandas as pd\n\nIn : signal = np.cos(np.linspace(start=0, stop=20, num=1000))\n\nIn : binary = nk.signal_binarize(signal)\n\nIn : pd.DataFrame({\"Raw\": signal, \"Binary\": binary}).plot()\nOut: <Axes: >\n```", null, "### signal_decompose()#\n\nsignal_decompose(signal, method='emd', n_components=None, **kwargs)[source]#\n\nDecompose a signal\n\nSignal decomposition into different sources using different methods, such as Empirical Mode Decomposition (EMD) or a Singular Spectrum Analysis (SSA)-based signal separation method.\n\nThe extracted components can then be recombined into meaningful sources using `signal_recompose()`.\n\nParameters:\n• signal (Union[list, np.array, pd.Series]) – Vector of values.\n\n• method (str) – The decomposition method. Can be one of `\"emd\"` or `\"ssa\"`.\n\n• n_components (int) – Number of components to extract. Only used for `\"ssa\"` method. 
If `None`, will default to 50.\n\n• **kwargs – Other arguments passed to other functions.\n\nReturns:\n\nArray – Components of the decomposed signal.\n\nExamples\n\n```In : import neurokit2 as nk\n\nIn : import numpy as np\n\n# Create complex signal\nIn : signal = nk.signal_simulate(duration=10, frequency=1, noise=0.01) # High freq\n\nIn : signal += 3 * nk.signal_simulate(duration=10, frequency=3, noise=0.01) # Higher freq\n\nIn : signal += 3 * np.linspace(0, 2, len(signal)) # Add baseline and trend\n\nIn : signal += 2 * nk.signal_simulate(duration=10, frequency=0.1, noise=0)\n\nIn : nk.signal_plot(signal)\n```", null, "```# Example 1: Using the EMD method\nIn : components = nk.signal_decompose(signal, method=\"emd\")\n\n# Visualize Decomposed Signal Components\nIn : nk.signal_plot(components)\n```", null, "```# Example 2: Using the SSA method\nIn : components = nk.signal_decompose(signal, method=\"ssa\", n_components=5)\n\n# Visualize Decomposed Signal Components\nIn : nk.signal_plot(components) # Visualize components\n```", null, "### signal_recompose()#\n\nsignal_recompose(components, method='wcorr', threshold=0.5, keep_sd=None, **kwargs)[source]#\n\nCombine signal sources after decomposition\n\nCombine and reconstruct meaningful signal sources after signal decomposition.\n\nParameters:\n• components (array) – Array of components obtained via `signal_decompose()`.\n\n• method (str) – The recomposition method. Can be one of `\"wcorr\"` (weighted correlation).\n\n• threshold (float) – The threshold used to group components together.\n\n• keep_sd (float) – If a float is specified, will only keep the reconstructed components that are greater than or equal to that percentage of the max standard deviation (SD) of the components. For instance, `keep_sd=0.01` will remove all components with SD lower than 1% of the max SD. 
This can be used to filter out noise.\n\n• **kwargs – Other arguments used to override, for instance `metric=\"chebyshev\"`.\n\nReturns:\n\nArray – The recomposed signal components.\n\nExamples\n\n```In : import neurokit2 as nk\n\nIn : import numpy as np\n\n# Create complex signal\nIn : signal = nk.signal_simulate(duration=10, frequency=1, noise=0.01) # High freq\n\nIn : signal += 3 * nk.signal_simulate(duration=10, frequency=3, noise=0.01) # Higher freq\n\nIn : signal += 3 * np.linspace(0, 2, len(signal)) # Add baseline and trend\n\nIn : signal += 2 * nk.signal_simulate(duration=10, frequency=0.1, noise=0)\n\n# Decompose signal\nIn : components = nk.signal_decompose(signal, method='emd')\n\n# Recompose\nIn : recomposed = nk.signal_recompose(components, method='wcorr', threshold=0.90)\n\nIn : nk.signal_plot(recomposed) # Visualize recomposed components\n```", null, "### signal_detrend()#\n\nsignal_detrend(signal, method='polynomial', order=1, regularization=500, alpha=0.75, window=1.5, stepsize=0.02, components=[-1], sampling_rate=1000)[source]#\n\nSignal Detrending\n\nApply a baseline (order = 0), linear (order = 1), or polynomial (order > 1) detrending to the signal (i.e., removing a general trend). One can also use other methods, such as the smoothness priors approach described by Tarvainen (2002) or LOESS regression, but these scale badly for long signals.\n\nParameters:\n• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.\n\n• method (str) – Can be one of `\"polynomial\"` (default; traditional detrending of a given order) or `\"tarvainen2002\"` to use the smoothness priors approach described by Tarvainen (2002) (mostly used in HRV analyses as a lowpass filter to remove complex trends), `\"loess\"` for LOESS smoothing trend removal or `\"locreg\"` for local linear regression (the ‘runline’ algorithm from chronux).\n\n• order (int) – Only used if `method` is `\"polynomial\"`. The order of the polynomial. 
0, 1 or > 1 for a baseline (‘constant detrend’, i.e., remove only the mean), linear (remove the linear trend) or polynomial detrending, respectively. Can also be `\"auto\"`, in which case it will attempt to find the optimal order to minimize the RMSE.\n\n• regularization (int) – Only used if `method=\"tarvainen2002\"`. The regularization parameter (defaults to 500).\n\n• alpha (float) – Only used if `method` is “loess”. The parameter which controls the degree of smoothing.\n\n• window (float) – Only used if `method` is “locreg”. The detrending `window` should correspond to 1 divided by the desired low-frequency band to remove (`window = 1 / detrend_frequency`). For instance, to remove frequencies below `0.67 Hz` the window should be `1.5` (`1 / 0.67 = 1.5`).\n\n• stepsize (float) – Only used if `method` is `\"locreg\"`.\n\n• components (list) – Only used if `method` is `\"EMD\"`. Which Intrinsic Mode Functions (IMFs) from the EMD to remove. By default, the last one.\n\n• sampling_rate (int, optional) – Only used if `method` is “locreg”. Sampling rate (Hz) of the signal. If not None, the `stepsize` and `window` arguments will be multiplied by the sampling rate. 
By default 1000.\n\nReturns:\n\narray – Vector containing the detrended signal.\n\nExamples\n\n```In : import numpy as np\n\nIn : import pandas as pd\n\nIn : import neurokit2 as nk\n\nIn : import matplotlib.pyplot as plt\n\n# Simulate signal with low and high frequency\nIn : signal = nk.signal_simulate(frequency=[0.1, 2], amplitude=[2, 0.5], sampling_rate=100)\n\nIn : signal = signal + (3 + np.linspace(0, 6, num=len(signal))) # Add baseline and linear trend\n\n# Apply detrending algorithms\n# ---------------------------\n# Method 1: Default Polynomial Detrending of a Given Order\n# Constant detrend (removes the mean)\nIn : baseline = nk.signal_detrend(signal, order=0)\n\n# Linear Detrend (removes the linear trend)\nIn : linear = nk.signal_detrend(signal, order=1)\n\n# Polynomial Detrend (removes the polynomial trend)\n\nIn : cubic = nk.signal_detrend(signal, order=3) # Cubic detrend\n\nIn : poly10 = nk.signal_detrend(signal, order=10) # Polynomial detrend (10th order)\n\n# Method 2: Tarvainen's smoothness priors approach (Tarvainen et al., 2002)\nIn : tarvainen = nk.signal_detrend(signal, method=\"tarvainen2002\")\n\n# Method 3: LOESS smoothing trend removal\nIn : loess = nk.signal_detrend(signal, method=\"loess\")\n\n# Method 4: Local linear regression (100Hz)\nIn : locreg = nk.signal_detrend(signal, method=\"locreg\",\n....: window=1.5, stepsize=0.02, sampling_rate=100)\n....:\n\n# Method 5: EMD\nIn : emd = nk.signal_detrend(signal, method=\"EMD\", components=[-2, -1])\n\n# Visualize different methods\nIn : axes = pd.DataFrame({\"Original signal\": signal,\n....: \"Baseline\": baseline,\n....: \"Linear\": linear,\n....: \"Cubic\": cubic,\n....: \"Polynomial (10th)\": poly10,\n....: \"Tarvainen\": tarvainen,\n....: \"LOESS\": loess,\n....: \"Local Regression\": locreg,\n....: \"EMD\": emd}).plot(subplots=True)\n....:\n\n# Plot horizontal lines to better visualize the detrending\nIn : for subplot in axes:\n....: subplot.axhline(y=0, color=\"k\", 
linestyle=\"--\")\n....:\n```", null, "References\n\n• Tarvainen, M. P., Ranta-Aho, P. O., & Karjalainen, P. A. (2002). An advanced detrending method with application to HRV analysis. IEEE Transactions on Biomedical Engineering, 49(2), 172-175\n\n### signal_distort()#\n\nsignal_distort(signal, sampling_rate=1000, noise_shape='laplace', noise_amplitude=0, noise_frequency=100, powerline_amplitude=0, powerline_frequency=50, artifacts_amplitude=0, artifacts_frequency=100, artifacts_number=5, linear_drift=False, random_state=None, silent=False)[source]#\n\nSignal distortion\n\nAdd noise of a given frequency, amplitude and shape to a signal.\n\nParameters:\n• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.\n\n• sampling_rate (int) – The sampling frequency of the signal (in Hz, i.e., samples/second).\n\n• noise_shape (str) – The shape of the noise. Can be one of `\"laplace\"` (default) or `\"gaussian\"`.\n\n• noise_amplitude (float) – The amplitude of the noise (the scale of the random function, relative to the standard deviation of the signal).\n\n• noise_frequency (float) – The frequency of the noise (in Hz, i.e., samples/second).\n\n• powerline_amplitude (float) – The amplitude of the powerline noise (relative to the standard deviation of the signal).\n\n• powerline_frequency (float) – The frequency of the powerline noise (in Hz, i.e., samples/second).\n\n• artifacts_amplitude (float) – The amplitude of the artifacts (relative to the standard deviation of the signal).\n\n• artifacts_frequency (int) – The frequency of the artifacts (in Hz, i.e., samples/second).\n\n• artifacts_number (int) – The number of artifact bursts. The bursts have a random duration between 1 and 10% of the signal duration.\n\n• linear_drift (bool) – Whether or not to add linear drift to the signal.\n\n• random_state (None, int, numpy.random.RandomState or numpy.random.Generator) – Seed for the random number generator. 
See `misc.check_random_state` for further information.\n\n• silent (bool) – Whether or not to display warning messages.\n\nReturns:\n\narray – Vector containing the distorted signal.\n\nExamples\n\n```In : import numpy as np\n\nIn : import pandas as pd\n\nIn : import neurokit2 as nk\n\nIn : signal = nk.signal_simulate(duration=10, frequency=0.5)\n\n# Noise\nIn : noise = pd.DataFrame({\"Freq200\": nk.signal_distort(signal, noise_frequency=200),\n...: \"Freq50\": nk.signal_distort(signal, noise_frequency=50),\n...: \"Freq10\": nk.signal_distort(signal, noise_frequency=10),\n...: \"Freq5\": nk.signal_distort(signal, noise_frequency=5),\n...: \"Raw\": signal}).plot()\n...:\n```", null, "```# Artifacts\nIn : artifacts = pd.DataFrame({\"1Hz\": nk.signal_distort(signal, noise_amplitude=0,\n...: artifacts_frequency=1,\n...: artifacts_amplitude=0.5),\n...: \"5Hz\": nk.signal_distort(signal, noise_amplitude=0,\n...: artifacts_frequency=5,\n...: artifacts_amplitude=0.2),\n...: \"Raw\": signal}).plot()\n...:\n```", null, "### signal_flatline()#\n\nsignal_flatline(signal, threshold=0.01)[source]#\n\nReturn the Flatline Percentage of the Signal\n\nParameters:\n• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.\n\n• threshold (float, optional) – Flatline threshold relative to the biggest change in the signal. 
This is the percentage of the maximum value of absolute consecutive differences.\n\nReturns:\n\nfloat – Percentage of signal where the absolute value of the derivative is lower than the threshold.\n\nExamples\n\n```In : import neurokit2 as nk\n\nIn : signal = nk.signal_simulate(duration=5)\n\nIn : nk.signal_flatline(signal)\nOut: 0.008\n```\n\n### signal_interpolate()#\n\nInterpolate a signal\n\nInterpolate a signal using different methods.\n\nParameters:\n• x_values (Union[list, np.array, pd.Series]) – The samples corresponding to the values to be interpolated.\n\n• y_values (Union[list, np.array, pd.Series]) – The values to be interpolated. If not provided, any NaNs in the x_values will be interpolated with `_signal_interpolate_nan()`, considering the x_values as equally spaced.\n\n• x_new (Union[list, np.array, pd.Series] or int) – The samples at which to interpolate the y_values. Samples before the first value in x_values or after the last value in x_values will be extrapolated. If an integer is passed, x_new will be considered as the desired length of the interpolated signal between the first and the last values of x_values. No extrapolation will be done for values before or after the first and the last values of x_values.\n\n• method (str) – Method of interpolation. Can be `\"linear\"`, `\"nearest\"`, `\"zero\"`, `\"slinear\"`, `\"quadratic\"`, `\"cubic\"`, `\"previous\"`, `\"next\"` or `\"monotone_cubic\"`. The methods `\"zero\"`, `\"slinear\"`, `\"quadratic\"` and `\"cubic\"` refer to a spline interpolation of zeroth, first, second or third order; whereas `\"previous\"` and `\"next\"` simply return the previous or next value of the point. See here for details on the `\"monotone_cubic\"` method.\n\n• fill_value (float or tuple or str) – If a ndarray (or float), this value will be used to fill in for requested points outside of the data range. 
If a two-element tuple, then the first element is used as a fill value for x_new < x[0] and the second element is used for x_new > x[-1]. If “extrapolate”, then points outside the data range will be extrapolated. If not provided, then the default is ([y_values[0]], [y_values[-1]]).\n\nReturns:\n\narray – Vector of interpolated samples.\n\nExamples\n\n```In : import numpy as np\n\nIn : import neurokit2 as nk\n\nIn : import matplotlib.pyplot as plt\n\n# Generate Simulated Signal\nIn : signal = nk.signal_simulate(duration=2, sampling_rate=10)\n\n# We want to interpolate to 2000 samples\nIn : x_values = np.linspace(0, 2000, num=len(signal), endpoint=False)\n\nIn : x_new = np.linspace(0, 2000, num=2000, endpoint=False)\n\n# Visualize all interpolation methods\nIn : nk.signal_plot([\n...: nk.signal_interpolate(x_values, signal, x_new=x_new, method=\"zero\"),\n...: nk.signal_interpolate(x_values, signal, x_new=x_new, method=\"linear\"),\n...: nk.signal_interpolate(x_values, signal, x_new=x_new, method=\"cubic\"),\n...: nk.signal_interpolate(x_values, signal, x_new=x_new, method=\"previous\"),\n...: nk.signal_interpolate(x_values, signal, x_new=x_new, method=\"next\"),\n...: nk.signal_interpolate(x_values, signal, x_new=x_new, method=\"monotone_cubic\")\n...: ], labels = [\"Zero\", \"Linear\", \"Cubic\", \"Previous\", \"Next\", \"Monotone Cubic\"])\n...:\n\nIn : plt.scatter(x_values, signal, label=\"original datapoints\", zorder=3)\n```", null, "### signal_merge()#\n\nsignal_merge(signal1, signal2, time1=[0, 10], time2=[0, 10])[source]#\n\nArbitrary addition of two signals with different time ranges\n\nParameters:\n• signal1 (Union[list, np.array, pd.Series]) – The first signal (i.e., a time series) in the form of a vector of values.\n\n• signal2 (Union[list, np.array, pd.Series]) – The second signal (i.e., a time series) in the form of a vector of values.\n\n• time1 (list) – A list containing two numeric values corresponding to the beginning and end of 
`signal1`.\n\n• time2 (list) – Same as above, but for `signal2`.\n\nReturns:\n\narray – Vector containing the sum of the two signals.\n\nExamples\n\n```In : import numpy as np\n\nIn : import pandas as pd\n\nIn : import neurokit2 as nk\n\nIn : signal1 = np.cos(np.linspace(start=0, stop=10, num=100))\n\nIn : signal2 = np.cos(np.linspace(start=0, stop=20, num=100))\n\nIn : signal = nk.signal_merge(signal1, signal2, time1=[0, 10], time2=[-5, 5])\n\nIn : nk.signal_plot(signal)\n```", null, "### signal_noise()#\n\nsignal_noise(duration=10, sampling_rate=1000, beta=1, random_state=None)[source]#\n\nSimulate noise\n\nThis function generates pure Gaussian `(1/f)**beta` noise. The power-spectrum of the generated noise is proportional to `S(f) = (1 / f)**beta`. The following categories of noise have been described:\n\n• violet noise: beta = -2\n\n• blue noise: beta = -1\n\n• white noise: beta = 0\n\n• flicker / pink noise: beta = 1\n\n• brown noise: beta = 2\n\nParameters:\n• duration (float) – Desired length of duration (s).\n\n• sampling_rate (int) – The desired sampling rate (in Hz, i.e., samples/second).\n\n• beta (float) – The noise exponent.\n\n• random_state (None, int, numpy.random.RandomState or numpy.random.Generator) – Seed for the random number generator. 
See `misc.check_random_state` for further information.\n\nReturns:\n\nnoise (array) – The signal of pure noise.\n\nExamples\n\n```In : import neurokit2 as nk\n\nIn : import matplotlib.pyplot as plt\n\n# Generate pure noise\nIn : violet = nk.signal_noise(beta=-2)\n\nIn : blue = nk.signal_noise(beta=-1)\n\nIn : white = nk.signal_noise(beta=0)\n\nIn : pink = nk.signal_noise(beta=1)\n\nIn : brown = nk.signal_noise(beta=2)\n\n# Visualize\nIn : nk.signal_plot([violet, blue, white, pink, brown],\n...: standardize=True,\n...: labels=[\"Violet\", \"Blue\", \"White\", \"Pink\", \"Brown\"])\n...:\n```", null, "```# Visualize spectrum\nIn : psd_violet = nk.signal_psd(violet, sampling_rate=200, method=\"fft\")\n\nIn : psd_blue = nk.signal_psd(blue, sampling_rate=200, method=\"fft\")\n\nIn : psd_white = nk.signal_psd(white, sampling_rate=200, method=\"fft\")\n\nIn : psd_pink = nk.signal_psd(pink, sampling_rate=200, method=\"fft\")\n\nIn : psd_brown = nk.signal_psd(brown, sampling_rate=200, method=\"fft\")\n\nIn : plt.loglog(psd_violet[\"Frequency\"], psd_violet[\"Power\"], c=\"violet\")\n\nIn : plt.loglog(psd_blue[\"Frequency\"], psd_blue[\"Power\"], c=\"blue\")\n\nIn : plt.loglog(psd_white[\"Frequency\"], psd_white[\"Power\"], c=\"grey\")\n\nIn : plt.loglog(psd_pink[\"Frequency\"], psd_pink[\"Power\"], c=\"pink\")\n\nIn : plt.loglog(psd_brown[\"Frequency\"], psd_brown[\"Power\"], c=\"brown\")\n```", null, "### signal_surrogate()#\n\nsignal_surrogate(signal, method='IAAFT', random_state=None, **kwargs)[source]#\n\nCreate Signal Surrogates\n\nGenerate a surrogate version of a signal. Different methods are available, such as:\n\n• random: Performs a random permutation of the signal values. This way, the signal distribution is unaffected and the serial correlations are cancelled, yielding a whitened signal with a distribution identical to that of the original.\n\n• IAAFT: Returns an Iterative Amplitude Adjusted Fourier Transform (IAAFT) surrogate. 
It is a phase-randomized, amplitude-adjusted surrogate that has the same power spectrum (to a very high accuracy) and distribution as the original data, obtained using an iterative scheme.\n\nParameters:\n• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.\n\n• method (str) – Can be `\"random\"` or `\"IAAFT\"`.\n\n• random_state (None, int, numpy.random.RandomState or numpy.random.Generator) – Seed for the random number generator. See `misc.check_random_state` for further information.\n\n• **kwargs – Other keyword arguments, such as `max_iter` (by default 1000).\n\nReturns:\n\nsurrogate (array) – Surrogate signal.\n\nExamples\n\nCreate surrogates using different methods.\n\n```In : import neurokit2 as nk\n\nIn : import matplotlib.pyplot as plt\n\nIn : signal = nk.signal_simulate(duration = 1, frequency = [3, 5], noise = 0.1)\n\nIn : surrogate_iaaft = nk.signal_surrogate(signal, method = \"IAAFT\")\n\nIn : surrogate_random = nk.signal_surrogate(signal, method = \"random\")\n\nIn : plt.plot(surrogate_random, label = \"Random Surrogate\")\n\nIn : plt.plot(surrogate_iaaft, label = \"IAAFT Surrogate\")\n\nIn : plt.plot(signal, label = \"Original\")\n\nIn : plt.legend()\n```", null, "As we can see, the signal pattern is destroyed by random surrogates, but not in the IAAFT one. 
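The `"random"` method described above boils down to a simple permutation of the samples. A minimal NumPy sketch (independent of NeuroKit2; the `lag1_autocorr` helper is illustrative, not part of the library) shows why the value distribution survives while the serial correlations vanish:

```python
import numpy as np

rng = np.random.default_rng(42)

# A 5 Hz sine (1000 samples at 1000 Hz) with a little Gaussian noise
t = np.linspace(0, 1, 1000, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.normal(size=t.size)

# "random" surrogate: a plain permutation of the samples
surrogate = rng.permutation(signal)

def lag1_autocorr(x):
    # Lag-1 autocorrelation: high for a smooth oscillation, near 0 for shuffled data
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

# The sorted values (and hence the distribution) are identical...
assert np.allclose(np.sort(signal), np.sort(surrogate))

# ...but the serial structure is gone: the original is strongly
# autocorrelated, the shuffled surrogate is not.
assert lag1_autocorr(signal) > 0.9
assert abs(lag1_autocorr(surrogate)) < 0.2
```

An IAAFT surrogate additionally matches the power spectrum, which a plain permutation does not.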
And their distributions are identical:\n\n```In : plt.plot(*nk.density(signal), label = \"Original\")\n\nIn : plt.plot(*nk.density(surrogate_iaaft), label = \"IAAFT Surrogate\")\nOut: [<matplotlib.lines.Line2D at 0x20ac9ec2200>]\n\nIn : plt.plot(*nk.density(surrogate_random), label = \"Random Surrogate\")\nOut: [<matplotlib.lines.Line2D at 0x20ac9ec3bb0>]\n\nIn : plt.legend()\n```", null, "However, the power spectrum of the IAAFT surrogate is preserved.\n\n```In : f = nk.signal_psd(signal, max_frequency=20)\n\nIn : f[\"IAAFT\"] = nk.signal_psd(surrogate_iaaft, max_frequency=20)[\"Power\"]\n\nIn : f[\"Random\"] = nk.signal_psd(surrogate_random, max_frequency=20)[\"Power\"]\n\nIn : f.plot(\"Frequency\", [\"Power\", \"IAAFT\", \"Random\"])\nOut: <Axes: xlabel='Frequency'>\n```", null, "References\n\n• Schreiber, T., & Schmitz, A. (1996). Improved surrogate data for nonlinearity tests. Physical Review Letters, 77(4), 635.\n\n## Peaks#\n\n### signal_findpeaks()#\n\nsignal_findpeaks(signal, height_min=None, height_max=None, relative_height_min=None, relative_height_max=None, relative_mean=True, relative_median=False, relative_max=False)[source]#\n\nFind peaks in a signal\n\nLocate peaks (local maxima) in a signal and their related characteristics, such as height (prominence), width and distance to other peaks.\n\nParameters:\n• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.\n\n• height_min (float) – The minimum height (i.e., amplitude in terms of absolute values). For example, `height_min=20` will remove all peaks whose height is smaller than or equal to 20 (in the provided signal’s values).\n\n• height_max (float) – The maximum height (i.e., amplitude in terms of absolute values).\n\n• relative_height_min (float) – The minimum height (i.e., amplitude) relative to the sample (see below). 
For example, `relative_height_min=-2.96` will remove all peaks whose height lies more than 2.96 standard deviations below the mean of the heights.\n\n• relative_height_max (float) – The maximum height (i.e., amplitude) relative to the sample (see below).\n\n• relative_mean (bool) – If a relative threshold is specified, how should it be computed (i.e., relative to what?). `relative_mean=True` will use Z-scores.\n\n• relative_median (bool) – If a relative threshold is specified, how should it be computed (i.e., relative to what?). Relative to median uses a more robust form of standardization (see `standardize()`).\n\n• relative_max (bool) – If a relative threshold is specified, how should it be computed (i.e., relative to what?). Relative to max will consider the maximum height as the reference.\n\nReturns:\n\ndict\n\nReturns a dict containing six arrays:\n\n• `\"Peaks\"`: contains the peak indices (relative to the given signal). For instance, the value 3 means that the third data point of the signal is a peak.\n\n• `\"Distance\"`: contains, for each peak, the closest distance to another peak. Note that these values will be recomputed after filtering to match the selected peaks.\n\n• `\"Height\"`: contains the prominence of each peak. See `scipy.signal.peak_prominences()`.\n\n• `\"Width\"`: contains the width of each peak. 
See `scipy.signal.peak_widths()`.\n\n• `\"Onsets\"`: contains the onset, start (or left trough), of each peak.\n\n• `\"Offsets\"`: contains the offset, end (or right trough), of each peak.\n\nExamples\n\n```In : import neurokit2 as nk\n\n# Simulate a Signal\nIn : signal = nk.signal_simulate(duration=5)\n\nIn : info = nk.signal_findpeaks(signal)\n\n# Visualize Onsets of Peaks and Peaks of Signal\nIn : nk.events_plot([info[\"Onsets\"], info[\"Peaks\"]], signal)\n```", null, "```In : import scipy.datasets\n\nIn : ecg = scipy.datasets.electrocardiogram()\n\nIn : signal = ecg[0:1000]\n\n# Find Unfiltered and Filtered Peaks\nIn : info1 = nk.signal_findpeaks(signal, relative_height_min=0)\n\nIn : info2 = nk.signal_findpeaks(signal, relative_height_min=1)\n\n# Visualize Peaks\nIn : nk.events_plot([info1[\"Peaks\"], info2[\"Peaks\"]], signal)\n```", null, "### signal_fixpeaks()#\n\nsignal_fixpeaks(peaks, sampling_rate=1000, iterative=True, show=False, interval_min=None, interval_max=None, relative_interval_min=None, relative_interval_max=None, robust=False, method='Kubios', **kwargs)[source]#\n\nCorrect Erroneous Peak Placements\n\nIdentify and correct erroneous peak placements based on outliers in peak-to-peak differences (period).\n\nParameters:\n• peaks (list or array or DataFrame or Series or dict) – The samples at which the peaks occur. If an array is passed in, it is assumed that it was obtained with `signal_findpeaks()`. If a DataFrame is passed in, it is assumed to be obtained with `ecg_findpeaks()` or `ppg_findpeaks()` and to be of the same length as the input signal.\n\n• sampling_rate (int) – The sampling frequency of the signal that contains the peaks (in Hz, i.e., samples/second).\n\n• iterative (bool) – Whether or not to apply the artifact correction repeatedly (results in superior artifact correction).\n\n• show (bool) – Whether or not to visualize artifacts and artifact thresholds.\n\n• interval_min (float) – Only when `method = \"neurokit\"`. 
The minimum interval between the peaks.\n\n• interval_max (float) – Only when `method = \"neurokit\"`. The maximum interval between the peaks.\n\n• relative_interval_min (float) – Only when `method = \"neurokit\"`. The minimum interval between the peaks as relative to the sample (expressed in standard deviation from the mean).\n\n• relative_interval_max (float) – Only when `method = \"neurokit\"`. The maximum interval between the peaks as relative to the sample (expressed in standard deviation from the mean).\n\n• robust (bool) – Only when `method = \"neurokit\"`. Use a robust method of standardization (see `standardize()`) for the relative thresholds.\n\n• method (str) – Either `\"Kubios\"` or `\"neurokit\"`. `\"Kubios\"` uses the artifact detection and correction described in Lipponen, J. A., & Tarvainen, M. P. (2019). Note that `\"Kubios\"` is only meant for peaks in ECG or PPG. `\"neurokit\"` can be used with peaks in ECG, PPG, or respiratory data.\n\n• **kwargs – Other keyword arguments.\n\nReturns:\n\n• peaks_clean (array) – The corrected peak locations.\n\n• artifacts (dict) – Only if `method=\"Kubios\"`. 
A dictionary containing the indices of artifacts, accessible with the keys `"ectopic"`, `"missed"`, `"extra"`, and `"longshort"`.\n\n`signal_findpeaks`, `ecg_findpeaks`, `ecg_peaks`, `ppg_findpeaks`, `ppg_peaks`\n\nExamples\n\n```In : import neurokit2 as nk\n\n# Simulate ECG data\nIn : ecg = nk.ecg_simulate(duration=240, noise=0.25, heart_rate=70, random_state=42)\n\n# Identify and Correct Peaks using "Kubios" Method\nIn : rpeaks_uncorrected = nk.ecg_findpeaks(ecg)\n\nIn : artifacts, rpeaks_corrected = nk.signal_fixpeaks(\n...: rpeaks_uncorrected, iterative=True, method="Kubios", show=True\n...: )\n...:\n```", null, "```# Visualize Artifact Correction\nIn : rate_corrected = nk.signal_rate(rpeaks_corrected, desired_length=len(ecg))\n\nIn : rate_uncorrected = nk.signal_rate(rpeaks_uncorrected, desired_length=len(ecg))\n\nIn : nk.signal_plot(\n...: [rate_uncorrected, rate_corrected],\n...: labels=["Heart Rate Uncorrected", "Heart Rate Corrected"]\n...: )\n...:\n```", null, "```In : import numpy as np\n\n# Simulate Abnormal Signals\nIn : signal = nk.signal_simulate(duration=20, sampling_rate=1000, frequency=1)\n\nIn : peaks_true = nk.signal_findpeaks(signal)["Peaks"]\n\nIn : peaks = np.delete(peaks_true, [5, 15]) # create gaps due to missing peaks\n\nIn : peaks = np.sort(np.append(peaks, [1350, 11350, 18350])) # add artifacts\n\n# Identify and Correct Peaks using 'NeuroKit' Method\nIn : peaks_corrected = nk.signal_fixpeaks(\n....: peaks=peaks, interval_min=0.5, interval_max=1.5, method="neurokit"\n....: )\n....:\n\n# Plot and shift original peaks to the right to see the difference.\nIn : nk.events_plot([peaks + 50, peaks_corrected], signal)\n```", null, "References\n\n• Lipponen, J. A., & Tarvainen, M. P. (2019).
A robust algorithm for heart rate variability time series artefact correction using novel beat classification. Journal of Medical Engineering & Technology, 43(3), 173-181. 10.1080/03091902.2019.1640306\n\n## Analysis#\n\n### signal_autocor()#\n\nsignal_autocor(signal, lag=None, demean=True, method='auto', show=False)[source]#\n\nAutocorrelation (ACF)\n\nCompute the autocorrelation of a signal.\n\nParameters:\n• signal (Union[list, np.array, pd.Series]) – Vector of values.\n\n• lag (int) – Time lag. If specified, a single autocorrelation value, between the signal and its lagged self, is returned.\n\n• demean (bool) – If `True`, the mean of the signal will be subtracted from the signal before ACF computation.\n\n• method (str) – Using `"auto"` lets `scipy.signal.correlate` choose the faster algorithm. Other methods, kept for legacy reasons but not recommended, include `"correlation"` (using `np.correlate()`) and `"fft"` (Fast Fourier Transform).\n\n• show (bool) – If `True`, plot the autocorrelation at all values of lag.\n\nReturns:\n\n• r (float) – The cross-correlation of the signal with itself at different time lags. Minimum time lag is 0, maximum time lag is the length of the signal.
Or a correlation value at a specific lag if lag is not `None`.\n\n• info (dict) – A dictionary containing additional information, such as the confidence interval.\n\nExamples\n\n```In : import neurokit2 as nk\n\n# Example 1: Using 'Correlation' Method\nIn : signal = [1, 2, 3, 4, 5]\n\nIn : r, info = nk.signal_autocor(signal, show=True, method='correlation')\n```", null, "```# Example 2: Using 'FFT' Method\nIn : signal = nk.signal_simulate(duration=5, sampling_rate=100, frequency=[5, 6], noise=0.5)\n\nIn : r, info = nk.signal_autocor(signal, lag=2, method='fft', show=True)\n```", null, "### signal_changepoints()#\n\nsignal_changepoints(signal, change='meanvar', penalty=None, show=False)[source]#\n\nChange Point Detection\n\nOnly the PELT method is implemented for now.\n\nParameters:\n• signal (Union[list, np.array, pd.Series]) – Vector of values.\n\n• change (str) – Can be one of `\"meanvar\"` (default), `\"mean\"` or `\"var\"`.\n\n• penalty (float) – The algorithm penalty. Defaults to `np.log(len(signal))`.\n\n• show (bool) – Defaults to `False`.\n\nReturns:\n\n• Array – Values indicating the samples at which the changepoints occur.\n\n• Fig – Figure of plot of signal with markers of changepoints.\n\nExamples\n\n```In : import neurokit2 as nk\n\nIn : signal = nk.emg_simulate(burst_number=3)\n\nIn : nk.signal_changepoints(signal, change=\"var\", show=True)\nOut: array([1751, 2750, 4500, 5500, 7250, 8250])\n```", null, "References\n\n• Killick, R., Fearnhead, P., & Eckley, I. A. (2012). Optimal detection of changepoints with a linear computational cost. Journal of the American Statistical Association, 107(500), 1590-1598.\n\n### signal_period()#\n\nsignal_period(peaks, sampling_rate=1000, desired_length=None, interpolation_method='monotone_cubic')[source]#\n\nCalculate signal period from a series of peaks\n\nParameters:\n• peaks (Union[list, np.array, pd.DataFrame, pd.Series, dict]) – The samples at which the peaks occur. 
If an array is passed in, it is assumed that it was obtained with `signal_findpeaks()`. If a DataFrame is passed in, it is assumed it is of the same length as the input signal in which occurrences of R-peaks are marked as “1”, with such containers obtained with e.g., `ecg_findpeaks()` or `rsp_findpeaks()`.\n\n• sampling_rate (int) – The sampling frequency of the signal that contains peaks (in Hz, i.e., samples/second). Defaults to 1000.\n\n• desired_length (int) – If left at the default `None`, the returned period will have the same number of elements as `peaks`. If set to a value larger than the sample at which the last peak occurs in the signal (i.e., `peaks[-1]`), the returned period will be interpolated between peaks over `desired_length` samples. To interpolate the period over the entire duration of the signal, set `desired_length` to the number of samples in the signal. Cannot be smaller than or equal to the sample at which the last peak occurs in the signal. Defaults to `None`.\n\n• interpolation_method (str) – Method used to interpolate the rate between peaks. See `signal_interpolate()`. `\"monotone_cubic\"` is chosen as the default interpolation method since it ensures monotone interpolation between data points (i.e., it prevents physiologically implausible “overshoots” or “undershoots” in the y-direction). In contrast, the widely used cubic spline interpolation does not ensure monotonicity.\n\nReturns:\n\narray – A vector containing the period.\n\nExamples\n\n```In : import neurokit2 as nk\n\nIn : signal = nk.signal_simulate(duration=10, sampling_rate=1000, frequency=1)\n\nIn : info = nk.signal_findpeaks(signal)\n\nIn : period = nk.signal_period(peaks=info[\"Peaks\"], desired_length=len(signal))\n\nIn : nk.signal_plot(period)\n```", null, "### signal_phase()#\n\nCompute the phase of the signal\n\nThe real phase has the property to rotate uniformly, leading to a uniform distribution density. The prophase typically doesn’t fulfill this property. 
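For background, the prophase mentioned above is conventionally obtained as the angle of the analytic signal computed with the Hilbert transform. A minimal SciPy sketch of that step (an illustration of the general technique, not NeuroKit's actual implementation):

```python
import numpy as np
from scipy.signal import hilbert

# 2 Hz sine sampled at 100 Hz for 5 seconds
fs = 100
t = np.arange(0, 5, 1 / fs)
signal = np.sin(2 * np.pi * 2 * t)

# Analytic signal: signal + i * Hilbert(signal)
analytic = hilbert(signal)

# Instantaneous phase, wrapped from (-pi, pi] to [0, 2*pi)
phase = np.mod(np.angle(analytic), 2 * np.pi)

print(phase.shape)  # (500,)
```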
The following function applies a nonlinear transformation to the phase signal that makes its distribution exactly uniform. If a binary vector is provided (containing 2 unique values), the function will compute the phase of completion of each phase as denoted by each value.\n\nParameters:\n• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.\n\n• method (str) – The values in which the phase is expressed. Can be `"radians"` (default), `"degrees"` (for values between 0 and 360) or `"percents"` (for values between 0 and 1).\n\nReturns:\n\narray – A vector containing the phase of the signal, between 0 and 2*pi.\n\nExamples\n\n```In : import neurokit2 as nk\n\nIn : signal = nk.signal_simulate(duration=10)\n\nIn : phase = nk.signal_phase(signal)\n\nIn : nk.signal_plot([signal, phase])\n```", null, "```# Phase in degrees of a respiratory signal\nIn : rsp = nk.rsp_simulate(duration=30)\n\nIn : phase = nk.signal_phase(rsp, method="degrees")\n\nIn : nk.signal_plot([rsp, phase])\n```\n\n```# Percentage of completion of two phases\nIn : signal = nk.signal_binarize(nk.signal_simulate(duration=10))\n\nIn : phase = nk.signal_phase(signal, method="percents")\n\nIn : nk.signal_plot([signal, phase])\n```", null, "### signal_plot()#\n\nsignal_plot(signal, sampling_rate=None, subplots=False, standardize=False, labels=None, **kwargs)[source]#\n\nPlot signal with events as vertical lines\n\nParameters:\n• signal (array or DataFrame) – Signal array (can be a dataframe with many signals).\n\n• sampling_rate (int) – The sampling frequency of the signal (in Hz, i.e., samples/second). Needs to be supplied if the data should be plotted over time in seconds. Otherwise the data is plotted over samples.
Defaults to `None`.\n\n• subplots (bool) – If `True`, each signal is plotted in a subplot.\n\n• standardize (bool) – If `True`, all signals will have the same scale (useful for visualisation).\n\n• labels (str or list) – Labels for the plotted signals. Defaults to `None`.\n\n• **kwargs (optional) – Arguments passed to matplotlib plotting.\n\n`ecg_plot`, `rsp_plot`, `ppg_plot`, `emg_plot`, `eog_plot`\n\nReturns:\n\nThough the function returns nothing, the figure can be retrieved and saved as follows:\n\n```# To be run after signal_plot()\nfig = plt.gcf()\nfig.savefig("myfig.png")\n```\n\nExamples\n\n```In : import numpy as np\n\nIn : import pandas as pd\n\nIn : import neurokit2 as nk\n\nIn : signal = nk.signal_simulate(duration=10, sampling_rate=1000)\n\nIn : nk.signal_plot(signal, sampling_rate=1000, color="red")\n```", null, "``` # Simulate data\nIn : data = pd.DataFrame({"Signal2": np.cos(np.linspace(start=0, stop=20, num=1000)),\n...: "Signal3": np.sin(np.linspace(start=0, stop=20, num=1000)),\n...: "Signal4": nk.signal_binarize(np.cos(np.linspace(start=0, stop=40, num=1000)))})\n...:\n\n# Process signal\nIn : nk.signal_plot(data, labels=['signal_1', 'signal_2', 'signal_3'], subplots=True)\n\nIn : nk.signal_plot([signal, data], standardize=True)\n```", null, "### signal_power()#\n\nsignal_power(signal, frequency_band, sampling_rate=1000, continuous=False, show=False, normalize=True, **kwargs)[source]#\n\nCompute the power of a signal in a given frequency band\n\nParameters:\n• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.\n\n• frequency_band (tuple or list) – Tuple or list of tuples indicating the range of frequencies to compute the power in.\n\n• sampling_rate (int) – The sampling frequency of the signal (in Hz, i.e., samples/second).\n\n• continuous (bool) – Compute instant frequency, or continuous power.\n\n• show (bool) – If `True`, will return a plot of the power in the given frequency band(s).
Defaults to `False`.\n\n• normalize (bool) – Normalization of power by maximum PSD value. Defaults to `True`. Normalization allows comparison between different PSD methods.\n\n• **kwargs – Keyword arguments to be passed to `signal_psd()`.\n\nReturns:\n\npd.DataFrame – A DataFrame containing the Power Spectrum values and a plot if `show` is `True`.\n\nExamples\n\n```In : import neurokit2 as nk\n\nIn : import numpy as np\n\n# Instant power\nIn : signal = nk.signal_simulate(duration=60, frequency=[10, 15, 20],\n...: amplitude = [1, 2, 3], noise = 2)\n...:\n\nIn : power_plot = nk.signal_power(signal, frequency_band=[(8, 12), (18, 22)], method="welch", show=True)\n```", null, "```# Continuous (simulated signal)\nIn : signal = np.concatenate((nk.ecg_simulate(duration=30, heart_rate=75),\n...: nk.ecg_simulate(duration=30, heart_rate=85)))\n...:\n\nIn : power = nk.signal_power(signal, frequency_band=[(72/60, 78/60), (82/60, 88/60)], continuous=True)\n\nIn : processed, _ = nk.ecg_process(signal)\n\nIn : power["ECG_Rate"] = processed["ECG_Rate"]\n\nIn : nk.signal_plot(power, standardize=True)\n```\n\n```# Continuous (real signal)\nIn : signal = nk.data("bio_eventrelated_100hz")["ECG"]\n\nIn : power = nk.signal_power(signal, sampling_rate=100, frequency_band=[(0.12, 0.15), (0.15, 0.4)], continuous=True)\n\nIn : processed, _ = nk.ecg_process(signal, sampling_rate=100)\n\nIn : power["ECG_Rate"] = processed["ECG_Rate"]\n\nIn : nk.signal_plot(power, standardize=True)\n```", null, "### signal_psd()#\n\nsignal_psd(signal, sampling_rate=1000, method='welch', show=False, normalize=True, min_frequency='default', max_frequency=inf, window=None, window_type='hann', order=16, order_criteria='KIC', order_corrected=True, silent=True, t=None, **kwargs)[source]#\n\nCompute the Power Spectral Density (PSD)\n\nParameters:\n• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.\n\n• sampling_rate
(int) – The sampling frequency of the signal (in Hz, i.e., samples/second).\n\n• method (str) – Either `"welch"` (default), `"fft"`, `"multitapers"` (requires the ‘mne’ package), `"lombscargle"` (requires the ‘astropy’ package) or `"burg"`.\n\n• show (bool) – If `True`, will return a plot. If `False`, will return the density values that can be plotted externally.\n\n• normalize (bool) – Normalization of power by maximum PSD value. Defaults to `True`. Normalization allows comparison between different PSD methods.\n\n• min_frequency (str, float) – The minimum frequency. If `"default"`, min_frequency is chosen based on the sampling rate and length of signal to optimize the frequency resolution.\n\n• max_frequency (float) – The maximum frequency.\n\n• window (int) – Length of each window in seconds (for the `"welch"` method). If `None` (default), the window will be automatically calculated to capture at least 2 cycles of min_frequency. If the length of the recording does not allow for this, the window defaults to half of the length of the recording.\n\n• window_type (str) – Desired window to use. Defaults to `"hann"`. See `scipy.signal.get_window()` for a list of windows.\n\n• order (int) – The order of autoregression (only used for autoregressive (AR) methods such as `"burg"`).\n\n• order_criteria (str) – The criteria to automatically select order in parametric PSD (only used for autoregressive (AR) methods such as `"burg"`).\n\n• order_corrected (bool) – Should the order criteria (AIC or KIC) be corrected? If unsure which method to use to choose the order, rely on the default (i.e., the corrected KIC).\n\n• silent (bool) – If `False`, warnings will be printed. Defaults to `True`.\n\n• t (array) – The timestamps corresponding to each sample in the signal, in seconds (for the `"lombscargle"` method).
Defaults to None.\n\n• **kwargs (optional) – Keyword arguments to be passed to `scipy.signal.welch()`.\n\n`signal_filter`, `mne.time_frequency.psd_array_multitaper`, `scipy.signal.welch`\n\nReturns:\n\ndata (pd.DataFrame) – A DataFrame containing the Power Spectrum values and a plot if `show` is `True`.\n\nExamples\n\n```In : import neurokit2 as nk\n\nIn : signal = nk.signal_simulate(duration=2, frequency=[5, 6, 50, 52, 80], noise=0.5)\n\n# FFT method (based on numpy)\nIn : psd_fft = nk.signal_psd(signal, method="fft", show=True)\n```", null, "```# Welch method (based on scipy)\nIn : psd_welch = nk.signal_psd(signal, method="welch", min_frequency=1, show=True)\n```", null, "```# Multitapers method (requires MNE)\nIn : psd_multitapers = nk.signal_psd(signal, method="multitapers", show=True)\n```", null, "```# Burg method\nIn : psd_burg = nk.signal_psd(signal, method="burg", min_frequency=1, show=True)\n```", null, "```# Lomb method (requires AstroPy)\nIn : psd_lomb = nk.signal_psd(signal, method="lomb", min_frequency=1, show=True)\n```", null, "### signal_rate()#\n\nsignal_rate(peaks, sampling_rate=1000, desired_length=None, interpolation_method='monotone_cubic')[source]#\n\nCompute Signal Rate\n\nCalculate signal rate (per minute) from a series of peaks. It is a general function that works for any series of peaks (i.e., not specific to a particular type of signal). It is computed as `60 / period`, where the period is the time between the peaks (see `signal_period()`).\n\nNote\n\nThis function is implemented under `signal_rate()`, but it is also re-exported under different names, such as `ecg_rate()` or `rsp_rate()`. The aliases are provided for consistency.\n\nParameters:\n• peaks (Union[list, np.array, pd.DataFrame, pd.Series, dict]) – The samples at which the peaks occur. If an array is passed in, it is assumed that it was obtained with `signal_findpeaks()`.
If a DataFrame is passed in, it is assumed it is of the same length as the input signal in which occurrences of R-peaks are marked as “1”, with such containers obtained with e.g., `ecg_findpeaks()` or `rsp_findpeaks()`.\n\n• sampling_rate (int) – The sampling frequency of the signal that contains peaks (in Hz, i.e., samples/second). Defaults to 1000.\n\n• desired_length (int) – If left at the default `None`, the returned rate will have the same number of elements as `peaks`. If set to a value larger than the sample at which the last peak occurs in the signal (i.e., `peaks[-1]`), the returned rate will be interpolated between peaks over `desired_length` samples. To interpolate the rate over the entire duration of the signal, set `desired_length` to the number of samples in the signal. Cannot be smaller than or equal to the sample at which the last peak occurs in the signal. Defaults to `None`.\n\n• interpolation_method (str) – Method used to interpolate the rate between peaks. See `signal_interpolate()`. `"monotone_cubic"` is chosen as the default interpolation method since it ensures monotone interpolation between data points (i.e., it prevents physiologically implausible “overshoots” or “undershoots” in the y-direction).
In contrast, the widely used cubic spline interpolation does not ensure monotonicity.\n\nReturns:\n\narray – A vector containing the rate (peaks per minute).\n\nExamples\n\n```In : import numpy as np\n\nIn : import neurokit2 as nk\n\n# Create signal of varying frequency\nIn : freq = nk.signal_simulate(1, frequency = 1)\n\nIn : signal = np.sin((freq).cumsum() * 0.5)\n\n# Find peaks\nIn : info = nk.signal_findpeaks(signal)\n\n# Compute rate using 2 methods\nIn : rate1 = nk.signal_rate(peaks=info["Peaks"],\n....: desired_length=len(signal),\n....: interpolation_method="nearest")\n....:\n\nIn : rate2 = nk.signal_rate(peaks=info["Peaks"],\n....: desired_length=len(signal),\n....: interpolation_method="monotone_cubic")\n....:\n\n# Visualize signal and rate on the same scale\nIn : nk.signal_plot([signal, rate1, rate2],\n....: labels = ["Original signal", "Rate (nearest)", "Rate (monotone cubic)"],\n....: standardize = True)\n....:\n```", null, "### signal_smooth()#\n\nsignal_smooth(signal, method='convolution', kernel='boxzen', size=10, alpha=0.1)[source]#\n\nSignal smoothing\n\nSignal smoothing can be achieved using either the convolution of a filter kernel with the input signal to compute the smoothed signal (Smith, 1997) or a LOESS regression.\n\nParameters:\n• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.\n\n• method (str) – Can be one of `"convolution"` (default) or `"loess"`.\n\n• kernel (Union[str, np.array]) – Only used if `method` is `"convolution"`. Type of kernel to use; if array, use directly as the kernel.
Can be one of `\"median\"`, `\"boxzen\"`, `\"boxcar\"`, `\"triang\"`, `\"blackman\"`, `\"hamming\"`, `\"hann\"`, `\"bartlett\"`, `\"flattop\"`, `\"parzen\"`, `\"bohman\"`, `\"blackmanharris\"`, `\"nuttall\"`, `\"barthann\"`, `\"kaiser\"` (needs beta), `\"gaussian\"` (needs std), `\"general_gaussian\"` (needs power width), `\"slepian\"` (needs width) or `\"chebwin\"` (needs attenuation).\n\n• size (int) – Only used if `method` is `\"convolution\"`. Size of the kernel; ignored if kernel is an array.\n\n• alpha (float) – Only used if `method` is `\"loess\"`. The parameter which controls the degree of smoothing.\n\nReturns:\n\narray – Smoothed signal.\n\nExamples\n\n```In : import numpy as np\n\nIn : import pandas as pd\n\nIn : import neurokit2 as nk\n\nIn : signal = np.cos(np.linspace(start=0, stop=10, num=1000))\n\nIn : distorted = nk.signal_distort(signal,\n...: noise_amplitude=[0.3, 0.2, 0.1, 0.05],\n...: noise_frequency=[5, 10, 50, 100])\n...:\n\nIn : size = len(signal)/100\n\nIn : signals = pd.DataFrame({\"Raw\": distorted,\n...: \"Median\": nk.signal_smooth(distorted, kernel=\"median\", size=size-1),\n...: \"BoxZen\": nk.signal_smooth(distorted, kernel=\"boxzen\", size=size),\n...: \"Triang\": nk.signal_smooth(distorted, kernel=\"triang\", size=size),\n...: \"Blackman\": nk.signal_smooth(distorted, kernel=\"blackman\", size=size),\n...: \"Loess_01\": nk.signal_smooth(distorted, method=\"loess\", alpha=0.1),\n...: \"Loess_02\": nk.signal_smooth(distorted, method=\"loess\", alpha=0.2),\n...: \"Loess_05\": nk.signal_smooth(distorted, method=\"loess\", alpha=0.5)})\n...:\n\nIn : fig = signals.plot()\n```", null, "```# Magnify the plot\nIn : fig_magnify = signals[50:150].plot()\n```", null, "References\n\n• Smith, S. W. (1997). 
The scientist and engineer’s guide to digital signal processing.\n\n### signal_synchrony()#\n\nsignal_synchrony(signal1, signal2, method='hilbert', window_size=50)[source]#\n\nSynchrony (coupling) between two signals\n\nSignal coherence refers to the strength of the mutual relationship (i.e., the amount of shared information) between two signals. Synchrony is coherence “in phase” (two waveforms are “in phase” when the peaks and troughs occur at the same time). Synchrony will always be coherent, but coherence need not always be synchronous.\n\nThis function computes a continuous index of coupling between two signals, either using the `"hilbert"` method to get the instantaneous phase synchrony, or using a rolling window correlation.\n\nThe instantaneous phase synchrony measures the phase similarities between signals at each timepoint. The phase refers to the angle of the signal, calculated through the Hilbert transform, as it oscillates between -pi and pi radians. When two signals line up in phase, their angular difference becomes zero.\n\nFor less clean signals, windowed correlations are widely used because of their simplicity, and they can be a good and robust approximation of synchrony between two signals. The limitation is the need to select a window size.\n\nParameters:\n• signal1 (Union[list, np.array, pd.Series]) – Time series in the form of a vector of values.\n\n• signal2 (Union[list, np.array, pd.Series]) – Time series in the form of a vector of values.\n\n• method (str) – The method to use. Can be one of `"hilbert"` or `"correlation"`.\n\n• window_size (int) – Only used if `method='correlation'`.
The number of samples to use for rolling correlation.\n\n`scipy.signal.hilbert`, `mutual_information`\n\nReturns:\n\narray – A vector containing the continuous index of synchrony (coupling) between the two signals.\n\nExamples\n\n```In : import neurokit2 as nk\n\nIn : s1 = nk.signal_simulate(duration=10, frequency=1)\n\nIn : s2 = nk.signal_simulate(duration=10, frequency=1.5)\n\nIn : coupling1 = nk.signal_synchrony(s1, s2, method="hilbert")\n\nIn : coupling2 = nk.signal_synchrony(s1, s2, method="correlation", window_size=1000/2)\n\nIn : nk.signal_plot([s1, s2, coupling1, coupling2], labels=["s1", "s2", "hilbert", "correlation"])\n```", null, "### signal_timefrequency()#\n\nsignal_timefrequency(signal, sampling_rate=1000, min_frequency=0.04, max_frequency=None, method='stft', window=None, window_type='hann', mode='psd', nfreqbin=None, overlap=None, analytical_signal=True, show=True)[source]#\n\nQuantify changes of a nonstationary signal’s frequency over time.\n\nThe objective of time-frequency analysis is to offer a more informative description of the signal which reveals the temporal variation of its frequency contents.\n\nThere are many different Time-Frequency Representations (TFRs) available:\n\n• Linear TFRs: efficient but create a tradeoff between time and frequency resolution\n\n• Short Time Fourier Transform (STFT): the time-domain signal is windowed into short segments and the FT is applied to each segment, mapping the signal into the TF plane. This method assumes that the signal is quasi-stationary (stationary over the duration of the window). The width of the window is the trade-off between good time resolution (requires a short-duration window) versus good frequency resolution (requires a long-duration window)\n\n• Wavelet Transform (WT): similar to STFT, but instead of a fixed-duration window function, a varying window length obtained by scaling the axis of the window is used. At low frequencies, the WT provides high spectral resolution but poor temporal resolution.
On the other hand, for high frequencies, the WT provides high temporal resolution but poor spectral resolution.\n\n• Quadratic TFRs: better resolution but computationally expensive, and suffers from cross terms between multiple signal components\n\n• Wigner Ville Distribution (WVD): while providing very good resolution in time and frequency of the underlying signal structure, because of its bilinear nature and the existence of negative values, the WVD yields misleading TF results in the case of multi-component signals such as EEG, due to the presence of cross terms and interference terms. Cross WVD terms can be reduced by using smoothing kernel functions as well as by analyzing the analytic signal (instead of the original signal)\n\n• Smoothed Pseudo Wigner Ville Distribution (SPWVD): to address the problem of cross-term suppression, the SPWVD allows two independent analysis windows, one in the time and the other in the frequency domain.\n\nParameters:\n• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.\n\n• sampling_rate (int) – The sampling frequency of the signal (in Hz, i.e., samples/second).\n\n• method (str) – Time-Frequency decomposition method.\n\n• min_frequency (float) – The minimum frequency.\n\n• max_frequency (float) – The maximum frequency.\n\n• window (int) – Length of each segment in seconds. If `None` (default), the window will be automatically calculated. Only used for the `"stft"` method.\n\n• window_type (str) – Type of window to create, defaults to `"hann"`. See `scipy.signal.get_window()` for the full list of windows. Only used for the `"stft"` method.\n\n• mode (str) – Type of return values for the `"stft"` method. Can be `"psd"`, `"complex"` (equivalent to the output of STFT with no padding or boundary extension), `"magnitude"`, `"angle"` or `"phase"`. Defaults to `"psd"`.\n\n• nfreqbin (int, float) – Number of frequency bins.
If `None` (default), nfreqbin will be set to `0.5*sampling_rate`.\n\n• overlap (int) – Number of points to overlap between segments. If `None`, `noverlap = nperseg // 8`. Defaults to `None`.\n\n• analytical_signal (bool) – If `True`, analytical signal instead of actual signal is used in Wigner Ville Distribution methods.\n\n• show (bool) – If `True`, will return two PSD plots.\n\nReturns:\n\n• frequency (np.array) – Frequency.\n\n• time (np.array) – Time array.\n\n• stft (np.array) – Short Term Fourier Transform. Time increases across its columns and frequency increases down the rows.\n\nExamples\n\n```In : import neurokit2 as nk\n\nIn : sampling_rate = 100\n\nIn : signal = nk.signal_simulate(100, sampling_rate, frequency=[3, 10])\n\n# STFT Method\nIn : f, t, stft = nk.signal_timefrequency(signal,\n...: sampling_rate,\n...: max_frequency=20,\n...: method=\"stft\",\n...: show=True)\n...:\n```", null, "```# CWTM Method\nIn : f, t, cwtm = nk.signal_timefrequency(signal,\n...: sampling_rate,\n...: max_frequency=20,\n...: method=\"cwt\",\n...: show=True)\n...:\n```", null, "```# WVD Method\nIn : f, t, wvd = nk.signal_timefrequency(signal,\n...: sampling_rate,\n...: max_frequency=20,\n...: method=\"wvd\",\n...: show=True)\n...:\n```", null, "```# PWVD Method\nIn : f, t, pwvd = nk.signal_timefrequency(signal,\n...: sampling_rate,\n...: max_frequency=20,\n...: method=\"pwvd\",\n...: show=True)\n...:\n```", null, "### signal_zerocrossings()#\n\nsignal_zerocrossings(signal, direction='both')[source]#\n\nLocate the indices where the signal crosses zero\n\nNote that when the signal crosses zero between two points, the first index is returned.\n\nParameters:\n• signal (Union[list, np.array, pd.Series]) – The signal (i.e., a time series) in the form of a vector of values.\n\n• direction (str) – Direction in which the signal crosses zero, can be `\"positive\"`, `\"negative\"` or `\"both\"` (default).\n\nReturns:\n\narray – Vector containing the indices of zero 
crossings.\n\nExamples\n\n```In : import neurokit2 as nk\n\nIn : signal = nk.signal_simulate(duration=5)\n\nIn : zeros = nk.signal_zerocrossings(signal)\n\nIn : nk.events_plot(zeros, signal)\n```", null, "```# Only upward or downward zerocrossings\nIn : up = nk.signal_zerocrossings(signal, direction="up")\n\nIn : down = nk.signal_zerocrossings(signal, direction="down")\n\nIn : nk.events_plot([up, down], signal)\n```", null, "Any function appearing below this point is not explicitly part of the documentation and should be added. Please open an issue if there is one.\n\nSubmodule for NeuroKit.\n\nsignal_formatpeaks(info, desired_length, peak_indices=None, other_indices=None)[source]#\n\nFormat Peaks\n\nTransforms a peak-info dict to a signal of given length
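The transformation described above — turning peak indices into a marker signal of a given length — can be sketched in plain NumPy. This is a simplified illustration, not the library's actual implementation, and the helper name `format_peaks_sketch` is hypothetical:

```python
import numpy as np

def format_peaks_sketch(peak_indices, desired_length):
    """Return a vector of `desired_length` zeros with 1 at each peak index."""
    occurrences = np.zeros(desired_length, dtype=int)
    occurrences[np.asarray(peak_indices, dtype=int)] = 1
    return occurrences

marks = format_peaks_sketch([3, 7, 12], desired_length=15)
print(marks)  # [0 0 0 1 0 0 0 1 0 0 0 0 1 0 0]
```

Such a 0/1 occurrence vector is the DataFrame-style peak container that functions like `signal_rate()` accept as input.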
https://solvedlib.com/n/0-10-mole-of-argon-gas-is-allowed-to-enter-an-empty-50-cm,21121669
# 0.10 mole of argon gas is allowed to enter an empty 50-cm³ container at 20 °C

###### Question:

0.10 mole of argon gas is allowed to enter an empty 50-cm³ container at 20 °C. The gas is then heated in this sealed container to a temperature of 300 °C. Draw this process on a PV diagram, with proper scales on each axis. Determine the final pressure of the gas after this heating. How much work was done to the gas during this process? Determine the change in the gas's internal energy during this process. The gas is then cooled back to 20 °C in a constant-pressure process, where the lid of the container is allowed to move. Draw this process on your PV diagram. Determine the final volume after this process. How much work was done to the gas during this process? Determine the change in the gas's internal energy during this process.
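The heating step of the argon problem can be checked with the ideal-gas law. This is a quick sketch, assuming argon behaves as an ideal monatomic gas; the sealed rigid container makes the heating isochoric (constant volume), so no work is done and the internal-energy change is n·Cv·ΔT:

```python
# Heating step: n = 0.10 mol of argon, V = 50 cm^3, 20 °C -> 300 °C at
# constant volume (sealed rigid container).
R = 8.314            # J/(mol K), gas constant

n = 0.10             # mol
V = 50e-6            # m^3 (50 cm^3)
T1 = 20.0 + 273.15   # K
T2 = 300.0 + 273.15  # K

P2 = n * R * T2 / V  # ideal gas law: final pressure
W = 0.0              # isochoric process: no volume change, no work
Cv = 1.5 * R         # molar heat capacity of a monatomic ideal gas
dU = n * Cv * (T2 - T1)

print(round(P2 / 1e6, 2))  # final pressure in MPa
print(round(dU, 1))        # change in internal energy in J
```

The pressure comes out near 9.5 MPa and the internal-energy change near 350 J; the subsequent constant-pressure cooling step follows the same pattern with W = P·ΔV.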
http://dimacs.rutgers.edu/archive/Events/2014/abstracts/shpilka.html
### DIMACS Theoretical Computer Science Seminar

Title: Reed-Muller Codes with Respect to Random Errors and Erasures

Speaker: Amir Shpilka, Technion

Date: Wednesday, October 8, 2014 11:00-12:00pm

Location: CoRE Bldg, Room 301A, Rutgers University, Busch Campus, Piscataway, NJ

Abstract:

In TCS we usually study error-correcting codes with respect to the Hamming metric, i.e. we study their behaviour with respect to worst-case errors. However, in coding theory a more common model is that of random errors, where Shannon's results show a much better tradeoff between rate and decoding radius.

We consider the behaviour of Reed-Muller codes in the Shannon model of random errors. In particular, we show that RM codes of either low or high degree (n^(1/2) or n - n^(1/2), respectively) can, with high probability, decode from a 1 - r fraction of random erasures (where r is the rate). In other words, for this range of parameters RM codes achieve capacity for the binary erasure channel (BEC). This result matches experimental observations that RM codes can achieve capacity for the BEC, similarly to polar codes. We also show that RM codes can handle many more random errors than the minimum distance suggests, i.e. roughly n^(d/2) errors for codes of degree n - d (where the minimum distance is only 2^d).

We show that the questions regarding the behaviour of Reed-Muller codes with respect to random errors are tightly connected to the following question: given a random set of vectors in {0,1}^n, what is the probability that their d-th tensor products are linearly independent? We obtain our results by answering this question for a certain range of parameters.

Based on joint work with Emmanuel Abbe and Avi Wigderson.
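The linear-independence question at the end of the abstract can be made concrete with a toy GF(2) computation. The sketch below (illustrative only, unrelated to the paper's actual proof) packs the degree-2 "tensor product" of each 0/1 vector, i.e. all coordinate products v_i·v_j for i ≤ j, into a bitmask and computes the rank of the resulting rows over GF(2):

```python
from itertools import combinations_with_replacement

def gf2_rank(rows):
    """Rank over GF(2) of 0/1 row vectors packed as integer bitmasks."""
    pivots = {}
    rank = 0
    for row in rows:
        while row:
            hb = row.bit_length() - 1      # leading set bit of the row
            if hb in pivots:
                row ^= pivots[hb]          # eliminate it with the pivot row
            else:
                pivots[hb] = row           # new pivot found
                rank += 1
                break
    return rank

def tensor_square(v):
    """Degree-2 tensor product of a 0/1 vector: the products v_i * v_j
    for i <= j, packed into an integer bitmask."""
    bits = 0
    for pos, (i, j) in enumerate(combinations_with_replacement(range(len(v)), 2)):
        if v[i] and v[j]:
            bits |= 1 << pos
    return bits

n = 4
basis = [[1 if t == i else 0 for t in range(n)] for i in range(n)]
vectors = basis + [[1] * n]                # four unit vectors plus all-ones
print(gf2_rank(tensor_square(v) for v in vectors))  # 5: all five are independent
```

The seminar's question asks how this rank behaves when the vectors are drawn uniformly at random rather than chosen deterministically as here.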
https://moam.info/solution-of-the-poisson-equation-by-differential-wiley-online-library_5caa42d5097c473c488b45cf.html
## Solution of the Poisson equation by differential quadrature

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, VOL. 19, 711-724 (1983)

SOLUTION OF THE POISSON EQUATION BY DIFFERENTIAL QUADRATURE

FARUK CIVAN AND C. M. SLIEPCEVICH

University of Oklahoma, Norman, OK, U.S.A.

SUMMARY

The method of differential quadrature is demonstrated by solving the two-dimensional Poisson equation. The results for three test problems are compared with the exact analytical solutions and the numerical solutions obtained by others for the Galerkin, the control-volume and the five-point finite difference methods. The method of differential quadrature leads to more accurate results for comparable levels of computational effort.

INTRODUCTION

Many applications of the Poisson equation to transport processes require numerical solutions, which usually are based on some version of finite elements or finite differences. Ramadhyani and Patankar compared solutions of the two-dimensional Poisson equation by the Galerkin, control-volume and five-point finite difference methods and concluded that the control-volume method produced the most accurate results. The purpose of this paper is to demonstrate the application of the method of differential quadrature to the identical test problems used by Ramadhyani and Patankar, so that direct comparisons can be made with both their analytical and numerical (conventional methods) results. Bellman introduced the method of differential quadrature and applied it to one-dimensional, initial value problems. Subsequently, Mingle applied the method to one-dimensional, initial-boundary value problems. In this paper, the method is extended to the two-dimensional, boundary value problem.
APPROXIMATION OF DERIVATIVES BY DIFFERENTIAL QUADRATURE Consider xi: i = 1 , 2 , . . . ,N are the sample points obtained by subdividing the x-variable into N discrete values and f(xi) are the function values at these points. If aij are the weights to attach to these function values at the sample points, the values of the function derivatives at these points are approximated by a weighted sum of the function values at these points as expressed by the following quadrature formula:'\n\nTo calculate the weighting coefficients the function f(x) is represented by an appropriate analytical function, such as a polynomial, f(x)=xk-*;\n\n0029-5981/83/050711-14\\$01.40 @ 1983 by John Wiley & Sons, Ltd.\n\nk = 1 , 2 , . .., N\n\n(2)\n\nReceived 6 July 1981 Revised 23 March 1982\n\n712\n\nF. CIVAN AND C. M. SLIEPCEVICH\n\nand its derivative,\n\naf(x)/ax\n\n= (k - l)xk-’\n\n(3)\n\nSubstituting equations (2) and (3)into equation (l),N linear algebraic equations are obtained: N\n\nc ui,.xjk-l = ( k - - l ) x f - ’ ;\n\nj=l\n\nk =1, 2 , . . . , N\n\nand i = 1 , 2 , . . . , N\n\n(4)\n\nThis set has a unique solution for the weighting coefficients, uij,because the matrix of elements is a Vandermonde matrix whose inverse can be obtained analytically as shown by Hamming.’ According to Bellman et d.,’the approximation formulae for the partial derivatives of second (and higher) order are obtained by iterating the quadrature approximation formula for the first-order partial derivative, given by equation (1).Hence, for example, the secondorder partial derivative approximation formula is derived from equation (1)by replacing f ( x ) by af(x)/ax,\n\nIn order to reduce the complexity of the derivative approximation formulae and thereby conserve on computational effort, it is advantageous to use quadrature approximation formulae for also the second6 and higher order derivative^.^ Of course, the weighting coefficients for each formula will be different from those for the first-order derivative. 
(This approach does not appear to be feasible for mixed partial derivatives, however.) Thus, for example, the second-order partial derivatives can be approximated by a linear weighted sum of function values at the sample points as\n\nin which bij are the weights attached to the function values at the sample points. As before, the weighting coefficients can be obtained by a procedure similar to that for equation (1). Again, the function given by equation (2) is used so that the second-order derivative is d z f ( x ) / d x 2 = (k - I)(& -2\n\n) ~ ~ - ~\n\n(7)\n\nSubstituting equation (2) and (7)into equation (6) results in a set of N linear algebraic equations for the weighting coefficients, bij, N\n\nC bijx;-’ =(k - l ) ( k - 2 ) ~ : ~ ~ k; = 1,2,. ..,N and i = 1,2,. .. ,N\n\n(8)\n\nj=l\n\nThis equation can be solved in the same manner as indicated for equation (4) above. The derivative approximation formulae derived for a function of one variable can be extended as follows for a function of two variables:\n\nSOLUTION OF POISSON EQUATION\n\n713\n\nwhere Nr and N Y are the number of sample points in the x- and y-directions, respectively. The second-order derivative approximation formulae are\n\nNY\n\nc bykf(Xi,yk)\n\n(12b)\n\nk=l\n\nSimilar formulae can be developed for all of the partial derivatives of any order for multidimensional systems,' but they will not be presented here since they are not needed for the examples which follow. APPLICATION O F APPROXIMATION FORMULAE TO POISSON EQUATION The following summary of the application of differential quadrature is general and, therefore, is not problem dependent. Although the technique illustrated below is specifically for the two-dimensional Poisson equation, it can be readily extended to the three-dimensional case. Consider the two-dimensional Poisson equation in normalized form:\n\nwhere p = L / H represents the aspect ratio and 0 s x , y s 1. 
Here the dimensionless quantities are given by x =X/L\n\nq5=\\$\n\nifS=O\n\nt 14)\n\n(16b)\n\nwhere 4, the dependent variable, is a function of the space co-ordinates, X and Y, and S represents a given source strength. L and H are the characteristic lengths of the physical domain in the X and Y directions. For numerical solution the two-dimensional rectangular grid system of Figure 1 is formed by subdividing the x and y variables into N\" and N Y discrete values, respectively. The spacings need not necessarily be equal; however, in this study they are assigned equally. The resulting discrete points in the x and y directions are indicated by subscripts i and j , respectively. Then the partial derivatives of the function, q5 (x, y), are replaced by the approximation formulae throughout the Poisson equation and the derivative boundary conditions to obtain a set of linear algebraic equations which can be solved using any appropriate methods. To utilize differential quadrature, two options are available. The first approach' is to replace the second-order partial derivatives with respect to x and y variables in equation (13)by the\n\n714\n\nF. CIVAN AND C. M. SLIEPCEVICH\n\n7\n\nh\n\n2\n\nI i - 4\n\ni\n\n----L\n\ni = f , Z , . . . ,N Figure 1. The unit square region and the grid for the normalized Poisson equation\n\napproximation formulae ( I l a ) and (12a) to obtain NY\n\nNY\n\nin which the following shorthand notation is used for convenience: dii=-4(xi, y j ) and Sij=S(xi, y i ) . The second approach6 is to replace the derivatives by the approximation formulae ( l l b ) and (12b), respectively:\n\nOf the two approaches, the latter saves appreciably on computational effort but requires ) additional storage for the bii's. 
Both the equations (17a) and (17b) contain ( N x ) ( N Yfunction values, some of which are the prescribed boundary values given by Dirichlet conditions and others by the boundary conditions involving the function derivatives, such as Neumann or mixed boundary conditions. The first step for the numerical model is to separate terms that are associated with the known boundary values and move these terms, as well as the source term, to the right of equation (17a) or (17b). The terms associated with the interior points, as well as the boundary points upon which boundary conditions involving function derivatives are imposed, must be collected on the left-hand side of equations (17a) or (17b). As a result, these equations can be represented by\n\ng . . = h.. 11\n\n(18)\n\n11\n\nwhich can be written as\n\nag-\n\nP\n\nC 3 d r n= h,; 4 a4rn\n\ni = 2 , 3 , . . . , (N\"-1)\n\nand j = 2 , 3 , .\n\n. . ,(Ivy-1)\n\n(19)\n\nHere, p and q are indices for those mesh points where the function values are to be calculated. In equation (19), agij/aq5, are the elements of the Jacobian matrix obtained from gii. For the first approach, they are given by\n\nSOLUTION OF POISSON EQUATION\n\n715\n\nand for the second approach,\n\nagij/a4,\n\n= ajqb;\n\n+si&'b;q\n\n(20b) Here, S,, are the Kroneker deltas whose values are equal to 1when rn = n but 0 when rn # n. The Jacobian matrix elements are constant numbers and have to be calculated only once for the Poisson equation. However, equation (20b) is preferred for relatively large mesh systems to conserve on computational effort. Since the elements of the Jacobian matrix are constant values, equation (19) represents (N\"- 2 ) ( N y- 2) linear algebraic equations. 
If these equations also contain the boundary values defined by conditions involving function derivatives, such as the Neumann or mixed type, in addition to the interior region function values, the function derivatives in these boundary conditions also need to be replaced by the quadrature approximation formulae to obtain an algebraic relation for the boundary values. For example, consider a boundary condition expressed by\n\nad(X, Y ) / a Y = f ( x > at Y = 1\n\n(21)\n\nwhere f ( x ) is a given function. Replacing the function derivative in equation (21) with the differential quadrature approximation given by equation (10) one obtains\n\nAfter collecting the term associated with the known boundary value (if there is any at all) on the right, equation (22) can also be expressed in a form similar to equation (19),\n\nwhere the Jacobian matrix elements are given by\n\nagiNy/a4, = &,a 'N,Y\n\n(24)\n\nOther types of boundary conditions involving the function derivatives can be treated similarly. The resulting set of algebraic equations are solved simultaneously using, for example, a Gaussian elimination technique for the unknown function values at points (p, 4). However, in the application problems presented next a slightly different approach will be used to reduce the size of the matrix equation that needs to be solved for the unknown function values. For this purpose, equation (22) is solved first for the unknown boundary values as\n\nUsing equation (25) the unknown boundary values will be eliminated throughout the discretized Poisson equation (17a) or (17b). In this manner the number of equations to be solved is reduced because equation (23) is not required any longer. The resulting equation can be expressed in the form of equation (19) according to\n\nag.. 1 -1L4pq=hij; q = 2 a4m\n\n( N x - 1 ) (Ny-1) p=2\n\ni = 2 , 3 , . . . , ( N x -1) and j = 2 , 3 , . . . , ( N y-1)\n\n7 16\n\nF. CIVAN AND C. M. 
SLIEPCEVICH\n\nwhich contains only the function values for the interior region. Upon simultaneous solution of equation (26) the unknown boundary values are obtained from their corresponding expressions, such as equation (25). Because the expressions for the Jacobian matrix elements contain the Kroneker deltas, the resulting Jacobian matrix contains many zero elements and therefore is not a 'full' matrix. One may, of course, take advantage of this situation to develop special matrix solvers to reduce the computing effort and to increase the accuracy of numerical solutions. However, for the sizes of the matrices encountered in the test problems which follow, a Gaussian elimination type of solver for a set of linear equations with a full coefficient matrix is acceptable. TEST PROBLEMS Three test problems involving the Poisson equation, which are identical to Ramadhyani and Patankar,' will be used to demonstrate the quality of the numerical solutions by the method of differential quadrature. Both of the aforementioned approaches were used for each test problem; as would be expected the results were identical. Therefore, in the interest of brevity, only the procedure for the first approach will be presented for the first problem and only the procedure for the second approach will be presented for the second and third problems.\n\nFirst problem Consider the following problem in normalized form:\n\n-+p a24 ax\n\n2a24 y=o;\n\nOsx,ysl\n\naY\n\nThe exact solution in normalized form is given by\n\n4 = sinh [wy/([email protected])]. sin (m/2)/sinh [7r/(2/3)]\n\n(32)\n\nReplacing the partial derivatives with the approximation formulae given by equations (9), ( l l a ) and (12a), the model equations (27)-(31) reduce to\n\n4il=O; i = 2 , 3 , . . . ,N\" 4N i Y = sin (7rxi/2); i = 2,3, . . . ,N\" 4lj=O; j = 1 , 2 , . . . 
, N Y\n\n(34)\n\nSOLUTION OF POISSON EQUATION\n\n717\n\nBy virtue of equation (36), equation (37) leads to the following expression for the value of the function on the symmetry boundary,\n\nUsing equations (34)-(38) and rearranging, equation (33) reduces to the following set of algebraic equations:\n\nfor i = 2 , 3 , . . . , (N\"- 1), and j = 2 , 3 , . . . , ( N y- l),in which the elements of the Jacobian matrix are given by\n\nand NY\n\nhij = -p2 sin (rxi/2)\n\n1 a&& 1=1\n\nSolution of equation (39) has been accomplished with subroutines DECOMP, SOLVE and SING given by Forsythe and Moler? which utilizes the method of Gaussian elimination, on the IBM 370/158 computer facility at the University of Oklahoma. Numerical calculations have been carried out using uniform grid systems, 5 x 5 and 7 x 7. A local error, defined by =- Idexact -4computedl\n\nE\n\n(42)\n\nwas calculated at every point of the uniform grid. , the maximum To facilitate comparisons, only the errors at the centre of the domain, E ~ and are shown in Figure 2 for various values of the aspect ratio L / H . error in the domain, E-, These results are compared directly with those reported by Ramadhyani and Patankar.7 It is evident that the method of differential quadrature using a 7 X 7 grid results in much lower errors than are obtained by the conventional finite element and finite difference techniques using a 7 x 7 grid.\n\nSecond problem The governing equations for the second problem in normalized form are\n\nC#l(x,O)=O;\n\nOcxsl\n\nl ) / a y = 0;\n\nOax s 1\n\n4(0,y)=O;\n\nOayal\n\na4 (I,y)/ax = 0;\n\no =S y a 1\n\na&(X,\n\n718\n\nF. CIVAN AND C. M. SLIEPCEVICH 10.0\n\n?\\$--.,\n\n1.0\n\n. . ... 0.\n\n'0\n\n0.1\n\n\\\n\n-\n\n\\\n\nl o 3 E,\n\n0.01\n\nx-.-\n\nX Galer\n\nin(7X7) Ref.7\n\n+-+Control 0\n\nvolume(7X7)\n\nRef.7\n\n5-Point f i n i t e d i f f . ( 7 X 7 ) Ref.7\n\n0---ODiff.\n\nq u a d . 
Figure 2(a). The centre-point error for the first problem\n\nThe exact solution in normalized form is given by\n\nφ = (1/(2β²)){1 − y² − 4 Σ_{n=0}^{∞} (−1)^n cosh(λ_n βx) cos(λ_n y)/[λ_n³ cosh(λ_n β)]}\n\nwhere λ_n = (2n + 1)π/2. For numerical solutions, the partial derivatives are replaced with the approximation formulae given by the equations (9), (10), (11b) and (12b). By rearranging the model, equations (43)-(47) are reduced to the following set of algebraic equations:\n\nfor i = 2, 3, . . . , (N^x − 1) and j = 2, 3, . . . , (N^y − 1), in which\n\n∂g_{i,j}/∂φ_{p,q} = δ_{j,q}(b^x_{i,p} − b^x_{i,N^x} a^x_{N^x,p}/a^x_{N^x,N^x}) + δ_{i,p} β²(b^y_{j,q} − b^y_{j,N^y} a^y_{N^y,q}/a^y_{N^y,N^y}) (50)\n\nThe right-hand side of equation (49) is simply h_{i,j} = −1. The function values along the symmetry boundaries are calculated by\n\nφ_{N^x,j} = −(Σ_{k=2}^{N^x−1} a^x_{N^x,k} φ_{k,j})/a^x_{N^x,N^x} (51)\n\nand\n\nφ_{i,N^y} = −(Σ_{l=2}^{N^y−1} a^y_{N^y,l} φ_{i,l})/a^y_{N^y,N^y} (52)\n\nThe solution of equation (49) has been carried out using the same subroutines and grid systems as in the first problem. For various values of the aspect ratio L/H, the results in terms of the centre-point error ε₀ and maximum error ε_max obtained via the differential quadrature method are presented in Figure 3. Again the errors for differential quadrature with a 7 × 7 grid are lower than those obtained by the conventional finite element and finite difference techniques using a 7 × 7 grid. In Figure 3(a) it should be noted that the centre-point errors for the finite element and the finite difference techniques increase with increasing aspect ratio; although this behaviour is anomalous, Ramadhyani and Patankar do not offer any comment. On the other hand, the maximum errors for the conventional techniques, shown in Figure 3(b), demonstrate normal behaviour.
By contrast, both the centre-point and maximum errors produced via differential quadrature solutions exhibit the expected decreases with increasing aspect ratio. However, it is somewhat surprising to find that the 5 × 5 grid for differential quadrature gives smaller centre-point errors than the 7 × 7, as can be seen in Figure 3(a), whereas for the maximum error the 7 × 7 differential quadrature is more accurate than the 5 × 5 according to Figure 3(b).\n\n[Figure 3(a) plot: centre-point error versus aspect ratio, comparing Galerkin (7×7), control volume (7×7) and 5-point finite difference (7×7) from Ref. 7 with differential quadrature, present work.]\n\nFigure 3(a). The centre-point error for the second problem\n\n[Figure 3(b) plot: maximum error versus aspect ratio, comparing Galerkin (7×7), control volume (7×7) and 5-point finite difference (7×7) from Ref. 7 with differential quadrature, present work.]\n\nThird problem This problem has been formed by considering a square region between two concentric circles of radii, r₁ and r₂, and treating the one-dimensional problem in polar co-ordinates like a two-dimensional problem in Cartesian co-ordinates for a source strength, S, equal to zero. Moving the Cartesian co-ordinates to the point X₁ = Y₁ = r₁/√2 and using equations (14) and (15) with L = H = (r₂ − r₁)/√2 and equation (16b), the following normalized form of the Poisson equation was obtained:\n\n∂²φ/∂x² + ∂²φ/∂y² = 0; 0 ≤ x, y ≤ 1 (53)\n\nfor which the Dirichlet boundary conditions along the boundaries and the exact solution are given by\n\nφ = ln(r/r₁)/ln(r₂/r₁) (54)\n\nwhere r = (X² + Y²)^{1/2}.
As before, the partial derivatives are replaced by the approximation formulae, equations (11b) and (12b). Then, the terms involving the boundary values are separated and are collected on the right-hand side to obtain the following set of linear algebraic equations:\n\n[Figure 4(a) plot: centre-point error 10⁴ε₀ versus r₂/r₁ (2 to 10), comparing Galerkin (11×11), control volume (11×11) and 5-point finite difference (11×11) from Ref. 7 with differential quadrature (5×5 and 7×7), present work.]\n\nFigure 4(a). The centre-point error for the third problem\n\n[Figure 4(b) plot: maximum error 10⁴ε_max versus r₂/r₁ (2 to 10), comparing Galerkin (11×11), control volume (11×11) and 5-point finite difference (11×11) from Ref. 7 with differential quadrature, present work.]\n\nFigure 4(b). The maximum error for the third problem\n\nUsing the same subroutines to solve equation (55), the resulting values of ε₀ and ε_max are compared with published values7 in Figure 4. It is clearly evident that the differential quadrature solution with a 7 × 7 grid is more accurate than the conventional finite element and finite difference methods with an 11 × 11 grid. DISCUSSION AND CONCLUSIONS Based on the foregoing comparison of results for these three test problems, it is evident that for comparable levels of computational effort, the method of differential quadrature generally produces smaller errors, although it is conceded that the conventional finite element and finite difference techniques probably give sufficiently accurate results for many applications of the Poisson equation.
Thus, the principal advantage of differential quadrature is that it is basically easier to apply than conventional numerical techniques.\n\nAlthough not evident from the test problems used herein, it has been demonstrated2 that the method of differential quadrature can be advantageously employed in solving accurately a variety of practical, multi-dimensional problems with a considerable saving in computational effort. The reason is that this technique is somewhat unique in that it appears to give optimum performance with grids as small as 5 × 5 or 7 × 7, and rarely beyond 15 × 15. In these cases, if solutions at intermediate points are required, they can be easily obtained by an appropriate interpolation, such as the Lagrange method. From the standpoint of the two approaches presented herein for applying the method of differential quadrature, the second approach, equation (17b), requires 25-75 per cent less computational effort than the first approach, equation (17a). ACKNOWLEDGEMENTS\n\nThe authors acknowledge the financial support of University Technologists, Inc. of Norman, Oklahoma, U.S.A. and the Merrick Computing Center of the University of Oklahoma.\n\nREFERENCES\n\n1. R. Bellman, B. G. Kashef and J. Casti, ‘Differential quadrature: a technique for the rapid solution of nonlinear partial differential equations’, J. Comp. Phys., 10, 40-52 (1972). 2. F. Civan, ‘Solution of transport phenomena type models by the method of differential quadratures’, Ph.D. dissert., Univ. of Oklahoma (1978). 3. F. Civan and C. M. Sliepcevich, ‘Application of differential quadrature to transport processes’, J. Math. Anal. Appl., to be published. 4. G. E. Forsythe and C. B. Moler, Computer Solution of Linear Algebraic Systems, sect. 17, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1967. 5. R. W. Hamming, Numerical Methods for Scientists and Engineers, 2nd edn, McGraw-Hill, New York, 1973, p. 124. 6. J. O.
Mingle, ‘Computational considerations in nonlinear diffusion’, Int. J. Num. Meth. Engng, 7, 103-116 (1973). 7. S. Ramadhyani and S. V. Patankar, ‘Solution of the Poisson equation: comparison of the Galerkin and control-volume methods’, Int. J. Num. Meth. Engng, 15, 1395-1402 (1980)." ]
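The differential quadrature idea underlying the paper (each partial derivative along a grid line is replaced by a weighted sum of the function values at all points of that line) can be sketched in a few lines. This is a generic Lagrange-based construction of the first-derivative weights, not the paper's exact formulae; the function name `dq_weights` and the test grid are our own:

```python
# Differential quadrature: f'(x_i) is approximated by sum_j a[i][j] * f(x_j),
# with weights derived from the Lagrange interpolating polynomial on the grid.

def dq_weights(x):
    """First-derivative weight matrix for distinct grid points x."""
    n = len(x)
    # m[i] = prod_{k != i} (x_i - x_k): derivative of prod_k (t - x_k) at x_i
    m = [1.0] * n
    for i in range(n):
        for k in range(n):
            if k != i:
                m[i] *= x[i] - x[k]
    a = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if j != i:
                a[i][j] = m[i] / ((x[i] - x[j]) * m[j])
        # Each row must annihilate constants, fixing the diagonal entry.
        a[i][i] = -sum(a[i][j] for j in range(n) if j != i)
    return a

# The rule is exact for polynomials of degree < n, e.g. f(x) = x**2:
grid = [0.0, 0.25, 0.5, 0.75, 1.0]
A = dq_weights(grid)
f = [t * t for t in grid]
deriv = [sum(A[i][j] * f[j] for j in range(5)) for i in range(5)]
```

Because every derivative couples all points of a grid line, small grids such as the 5×5 and 7×7 systems used in the test problems can already be very accurate for smooth solutions.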
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8816661,"math_prob":0.99498016,"size":21786,"snap":"2021-04-2021-17","text_gpt3_token_len":5870,"char_repetition_ratio":0.14971077,"word_repetition_ratio":0.0816645,"special_character_ratio":0.270403,"punctuation_ratio":0.14079748,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9992724,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-20T10:24:48Z\",\"WARC-Record-ID\":\"<urn:uuid:a5213607-bef4-472f-9768-481820bf727a>\",\"Content-Length\":\"83802\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7d3cc2de-7e99-45b4-8b2b-378cbd387e06>\",\"WARC-Concurrent-To\":\"<urn:uuid:1f92da27-2ba3-4d36-9bb5-e64c97af8b87>\",\"WARC-IP-Address\":\"104.21.68.83\",\"WARC-Target-URI\":\"https://moam.info/solution-of-the-poisson-equation-by-differential-wiley-online-library_5caa42d5097c473c488b45cf.html\",\"WARC-Payload-Digest\":\"sha1:RSJM4BZCHHACOTPTF4RIP7DDT44UKGDU\",\"WARC-Block-Digest\":\"sha1:DYUOBJWVODM3LKW4WD64AOCBSLLSQC7F\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703519984.9_warc_CC-MAIN-20210120085204-20210120115204-00141.warc.gz\"}"}
https://electricmusicstore.com/products/dual-control-voltage-processor-model-257-rev2-0
[ "# DUAL CONTROL VOLTAGE PROCESSOR MODEL 257 REV2.0\n\n## \\$99.00\n\nA set of 1 PCB.\n\nConsists of two identical sections, each of which permits several applied control voltages to define a single output voltage according to the equation:\n\nV_a K + V_b (1 - M) + M * V_c + V_offset = V_out\n\nThe algebraic manipulations possible with this module include addition, subtraction, scaling, inversion, and multiplication. Also incorporated is the capability of using one control voltage (M) to transfer control from one applied voltage (V_b) to another (V_c)." ]
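Since the page states the processing equation explicitly, the arithmetic can be checked numerically. A small sketch (the function and argument names are our own; the module itself is analog hardware, so this only illustrates the equation):

```python
# Model 257 channel: V_out = V_a*K + V_b*(1 - M) + M*V_c + V_offset.
# With M = 0 the output tracks V_b; with M = 1 it tracks V_c, so M
# crossfades control between the two inputs while V_a*K and V_offset add in.

def cv_out(v_a, k, v_b, m, v_c, v_offset=0.0):
    return v_a * k + v_b * (1.0 - m) + m * v_c + v_offset
```

Setting M to 0 or 1 makes the transfer-of-control behaviour described above easy to see.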
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8348807,"math_prob":0.9305117,"size":534,"snap":"2021-31-2021-39","text_gpt3_token_len":129,"char_repetition_ratio":0.11320755,"word_repetition_ratio":0.0,"special_character_ratio":0.2434457,"punctuation_ratio":0.11702128,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95477444,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-22T12:23:45Z\",\"WARC-Record-ID\":\"<urn:uuid:7488a3c4-6ccc-4602-8c2a-97c09cf1bbad>\",\"Content-Length\":\"43862\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:64a82e16-bce5-4806-afa4-8f4ad5c08171>\",\"WARC-Concurrent-To\":\"<urn:uuid:e59dbc95-02d6-4281-9bb0-84cb31df1923>\",\"WARC-IP-Address\":\"23.227.38.32\",\"WARC-Target-URI\":\"https://electricmusicstore.com/products/dual-control-voltage-processor-model-257-rev2-0\",\"WARC-Payload-Digest\":\"sha1:BF66DDKJFJZKJXHVDADWUCF3QFX4PKWC\",\"WARC-Block-Digest\":\"sha1:C5CLW4K6UZUOEVQJYCF3S2763YFMLIJD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057347.80_warc_CC-MAIN-20210922102402-20210922132402-00578.warc.gz\"}"}
https://www.wolframalpha.com/input/?i=2*(Gravitational+constant)%2F(45*(speed+of+light)%5E5)*(electron+mass)%5E2*((hydrogen+atom+radius)%5E4)*(compton+frequency)%5E6*(6.022%C3%9710%5E23)
[ "", null, "2*(Gravitational constant)/(45*(speed of light)^5)*(electron mass)^2*((hydrogen atom radius)^4)*(compton frequency)^6*(6.022×10^23)" ]
[ null, "https://www.wolframalpha.com/_next/static/images/Logo_1tof6SJC.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6614153,"math_prob":0.9975755,"size":378,"snap":"2021-04-2021-17","text_gpt3_token_len":98,"char_repetition_ratio":0.09625668,"word_repetition_ratio":0.0,"special_character_ratio":0.26719576,"punctuation_ratio":0.11764706,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9952455,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-21T15:31:03Z\",\"WARC-Record-ID\":\"<urn:uuid:c1444cbe-5c58-4f99-ab77-911db0cc6492>\",\"Content-Length\":\"15711\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:09fbebf5-84f6-47fb-b740-fede7a6a1437>\",\"WARC-Concurrent-To\":\"<urn:uuid:581c8ed7-e09c-4999-8690-42a6ad0bf679>\",\"WARC-IP-Address\":\"140.177.16.37\",\"WARC-Target-URI\":\"https://www.wolframalpha.com/input/?i=2*(Gravitational+constant)%2F(45*(speed+of+light)%5E5)*(electron+mass)%5E2*((hydrogen+atom+radius)%5E4)*(compton+frequency)%5E6*(6.022%C3%9710%5E23)\",\"WARC-Payload-Digest\":\"sha1:FND75CDLK27NUVWVXUN2X6QSLYME75YV\",\"WARC-Block-Digest\":\"sha1:ZHZXFWPO2DR3Z7HLMYUODAXQVSZZO6WU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703524858.74_warc_CC-MAIN-20210121132407-20210121162407-00000.warc.gz\"}"}
https://discourse.julialang.org/t/replicating-rubys-dig-in-julia/20551
[ "# Replicating Ruby's dig() in Julia

Hello,

In Ruby there's a method useful for navigating nested dictionaries called dig()

``````nested = Dict(
    :a => Dict(
        :b => "b",
        :c => Dict(
            :d => "d"
        )
    )
)

dig(nested, :a, :b) #=> "b"
dig(nested, :a, :c, :d) #=> "d"
dig(nested, :a, :e) #=> nothing
``````
I have written this function. The problem is that `args...` will produce a Tuple. When called recursively, I will have a Tuple of Tuples and I don't know how to flatten it or to avoid this. I can fix it by making `args::Tuple` and calling like this: `dig(nested, (:a, :b))`, but I find this somewhat uglier.

``````function dig(dict::Dict, args...)
    aux = dict[args[1]]
    if length(args[2:end]) > 0
        dig(aux, args[2:end])
    else
        aux
    end
end
``````
How can I "splat" args when making the recursive call or flatten the Tuple?

``````function dig(dict::Dict, args...)
    d = dict
    for key in args
        if haskey(d, key)
            d = d[key]
        else
            return nothing
        end
    end
    d
end
``````
1 Like

``````dig(dict::AbstractDict, key, keys...) = dig(dict[key], keys...)
dig(x) = x
``````

This throws KeyError rather than returning `nothing` on missing key, but should be easy to modify if you really want to return `nothing` as in your example.
EDIT: alternately

``````dig2(dict::AbstractDict, keys...) = foldl(getindex, keys; init=dict)
``````
6 Likes

``````dig(x) = x
dig(d::AbstractDict, key, keys...) = dig(get(d, key, nothing), keys...)
``````

`get(dict, key, default)` is really nice for dictionaries, and also faster than first checking (`haskey`) and then retrieving.

Edit: The `foldl` solution is much faster, but fails when the key isn't found:

``````julia> dig2(d, :a, :e)

``````
3 Likes

@DNF I didn't find `foldl` to be much faster, at least after doing the necessary modifications so it doesn't fail when the key isn't found. And @yha's first approach I'd say is conceptually simpler. 
Your solution is closer, although it would fail when passed `:a, :b, :c` if `:b` doesn't exist already.

Can be fixed like this:

``````dig(x) = x
dig(x::Nothing, keys...) = nothing
dig(d::AbstractDict, key, keys...) = dig(get(d, key, nothing), keys...)
``````

But maybe I shouldn't be trying to navigate through empty keys in the first place…

1 Like

This is just what I was looking for. One nice thing about Ruby's `dig` is it can be used to access nested arrays as well as dicts. This is handy for working with responses from Elasticsearch. So I would expand @Amval's answer to:

``````dig(x) = x
dig(x::Nothing, keys...) = nothing
dig(d::AbstractDict, key, keys...) = dig(get(d, key, nothing), keys...)
dig(a::AbstractArray, i::Integer, keys...) = checkbounds(Bool, a, i) ? dig(a[i], keys...) : nothing
``````

Then it can do:

``````julia> nested = Dict(
    :a => [
        Dict(:b => "b"),
        Dict(:c => "c"),
        Dict(:d => "d")
    ]
)

julia> dig(nested, :a, 2, :c)
"c"
``````
1 Like

You may be interested in Setfield.jl which implements a "lens" API that looks similar to what you're doing.

2 Likes
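The same idea ports directly to Python for readers coming from there (0-based indexing rather than Julia's 1-based; this sketch is our own and returns `None` on any missing key or out-of-range index instead of throwing):

```python
def dig(obj, *keys):
    """Walk nested dicts/lists/tuples; return None instead of raising on a miss."""
    for k in keys:
        if isinstance(obj, dict):
            obj = obj.get(k)
        elif isinstance(obj, (list, tuple)) and isinstance(k, int) \
                and -len(obj) <= k < len(obj):
            obj = obj[k]
        else:
            # Scalar reached (or bad key type) before keys ran out.
            return None
    return obj

nested = {"a": [{"b": "b"}, {"c": "c"}, {"d": "d"}]}
```

As in the Julia `get(d, key, nothing)` version, a miss anywhere along the path short-circuits to `None`.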
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9086488,"math_prob":0.9304401,"size":1050,"snap":"2022-40-2023-06","text_gpt3_token_len":272,"char_repetition_ratio":0.08221798,"word_repetition_ratio":0.0,"special_character_ratio":0.25428572,"punctuation_ratio":0.17073171,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97657484,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-03T15:16:11Z\",\"WARC-Record-ID\":\"<urn:uuid:f4359231-ce59-4220-94e9-f648fce5c9f7>\",\"Content-Length\":\"36491\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3058d95d-2c1f-4d8a-9bf6-dad5c71376ec>\",\"WARC-Concurrent-To\":\"<urn:uuid:ca6f0752-2fed-47a8-a777-4c346a79f18f>\",\"WARC-IP-Address\":\"64.71.144.205\",\"WARC-Target-URI\":\"https://discourse.julialang.org/t/replicating-rubys-dig-in-julia/20551\",\"WARC-Payload-Digest\":\"sha1:WPSOPNCA3WA5NIO53XNGRRBTRQNJFELG\",\"WARC-Block-Digest\":\"sha1:ITMHZK4EJ2KQ5IZYASQ2KMVNXHXK57HA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337421.33_warc_CC-MAIN-20221003133425-20221003163425-00797.warc.gz\"}"}
https://uk.mathworks.com/help/stats/nonparametric-methods.html
[ "## Nonparametric Methods\n\n### Introduction to Nonparametric Methods\n\nStatistics and Machine Learning Toolbox™ functions include nonparametric versions of one-way and two-way analysis of variance. Unlike classical tests, nonparametric tests make only mild assumptions about the data, and are appropriate when the distribution of the data is non-normal. On the other hand, they are less powerful than classical methods for normally distributed data.\n\nBoth of the nonparametric functions described here will return a `stats` structure that can be used as an input to the `multcompare` function for multiple comparisons.\n\n### Kruskal-Wallis Test\n\nThe example Perform One-Way ANOVA uses one-way analysis of variance to determine if the bacteria counts of milk varied from shipment to shipment. The one-way analysis rests on the assumption that the measurements are independent, and that each has a normal distribution with a common variance and with a mean that was constant in each column. You can conclude that the column means were not all the same. The following example repeats that analysis using a nonparametric procedure.\n\nThe Kruskal-Wallis test is a nonparametric version of one-way analysis of variance. The assumption behind this test is that the measurements come from a continuous distribution, but not necessarily a normal distribution. The test is based on an analysis of variance using the ranks of the data values, not the data values themselves. Output includes a table similar to an ANOVA table, and a box plot.\n\nYou can run this test as follows:\n\n```load hogg p = kruskalwallis(hogg) p = 0.0020 ```\n\nThe low p value means the Kruskal-Wallis test results agree with the one-way analysis of variance results.\n\n### Friedman's Test\n\nPerform Two-Way ANOVA uses two-way analysis of variance to study the effect of car model and factory on car mileage. 
The example tests whether either of these factors has a significant effect on mileage, and whether there is an interaction between these factors. The conclusion of the example is there is no interaction, but that each individual factor has a significant effect. The next example examines whether a nonparametric analysis leads to the same conclusion.\n\nFriedman's test is a nonparametric test for data having a two-way layout (data grouped by two categorical factors). Unlike two-way analysis of variance, Friedman's test does not treat the two factors symmetrically and it does not test for an interaction between them. Instead, it is a test for whether the columns are different after adjusting for possible row differences. The test is based on an analysis of variance using the ranks of the data across categories of the row factor. Output includes a table similar to an ANOVA table.\n\nYou can run Friedman's test as follows.\n\n```load mileage p = friedman(mileage,3) p = 7.4659e-004```\n\nRecall the classical analysis of variance gave a p value to test column effects, row effects, and interaction effects. This p value is for column effects. Using either this p value or the p value from ANOVA (p < 0.0001), you conclude that there are significant column effects.\n\nIn order to test for row effects, you need to rearrange the data to swap the roles of the rows in columns. For a data matrix `x` with no replications, you could simply transpose the data and type\n\n`p = friedman(x')`\n\nWith replicated data it is slightly more complicated. 
A simple way is to transform the matrix into a three-dimensional array with the first dimension representing the replicates, swapping the other two dimensions, and restoring the two-dimensional shape.\n\n```x = reshape(mileage, [3 2 3]); x = permute(x,[1 3 2]); x = reshape(x,[9 2]) x = 33.3000 32.6000 33.4000 32.5000 32.9000 33.0000 34.5000 33.4000 34.8000 33.7000 33.8000 33.9000 37.4000 36.6000 36.8000 37.0000 37.6000 36.7000 friedman(x,3) ans = 0.0082```\n\nAgain, the conclusion is similar to that of the classical analysis of variance. Both this p value and the one from ANOVA (p = 0.0039) lead you to conclude that there are significant row effects.\n\nYou cannot use Friedman's test to test for interactions between the row and column factors." ]
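The rank-based statistic behind `kruskalwallis` is simple enough to compute by hand. A pure-Python sketch of the Kruskal-Wallis H statistic (assuming no tied observations; MATLAB additionally applies a tie correction that is omitted here, and the function name is our own):

```python
# H = 12/(N(N+1)) * sum_i R_i**2 / n_i - 3(N+1), where R_i is the sum of the
# pooled ranks falling in group i and N is the total number of observations.

def kruskal_wallis_h(groups):
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n_total = len(pooled)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    return 12.0 / (n_total * (n_total + 1)) * sum(
        r * r / len(g) for r, g in zip(rank_sums, groups)
    ) - 3.0 * (n_total + 1)
```

Well-separated groups give a large H; the p value then comes from comparing H against a chi-squared distribution with (number of groups − 1) degrees of freedom.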
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8971346,"math_prob":0.96060777,"size":3988,"snap":"2020-45-2020-50","text_gpt3_token_len":919,"char_repetition_ratio":0.13202812,"word_repetition_ratio":0.049535602,"special_character_ratio":0.23520562,"punctuation_ratio":0.106355384,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9926399,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-28T03:43:03Z\",\"WARC-Record-ID\":\"<urn:uuid:d3149df1-0ad5-49b4-ba7e-03a3bf55aa2d>\",\"Content-Length\":\"67838\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a6136d6f-737f-46b9-9d6e-0f52cdb336d0>\",\"WARC-Concurrent-To\":\"<urn:uuid:8b382492-3f58-441d-96fb-b35bd2cdceac>\",\"WARC-IP-Address\":\"23.212.144.59\",\"WARC-Target-URI\":\"https://uk.mathworks.com/help/stats/nonparametric-methods.html\",\"WARC-Payload-Digest\":\"sha1:DAPCK3SB42JPHX5TCLFVFC3OZ3PMC7CY\",\"WARC-Block-Digest\":\"sha1:64Z5JRLTDED5ZQ7PEBNJYZYGWYAHUSAZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107896048.53_warc_CC-MAIN-20201028014458-20201028044458-00555.warc.gz\"}"}
http://www.timbertoolbox.com/Calcs/modulusEcalc.htm
[ "Finding Modulus of Elasticity from Deflection", null, "Total Dimensional Change(inches) Length (inches) Force in Pounds Width of beam (inches) Depth of beam (inches) Modulus of Elasticity (E)(PSI) Fiberstress in Bending (Fb)(PSI) Pressure on Width (PSI) Length / Deflection = 1 /" ]
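The calculator's inputs match the standard three-point bend relation, although the page does not state its loading case. A sketch assuming a simply supported rectangular beam with a single centre point load (the loading assumption, function name and parameter names are all our own):

```python
# delta = F * L**3 / (48 * E * I) with I = b * d**3 / 12, solved for E
# (psi when inputs are pounds and inches). Assumed case: centre point load
# on a simply supported span; a uniform load would use 384/5 in place of 48.

def modulus_from_deflection(force_lb, length_in, width_in, depth_in, deflection_in):
    inertia = width_in * depth_in ** 3 / 12.0  # rectangular cross-section
    return force_lb * length_in ** 3 / (48.0 * deflection_in * inertia)
```

For example, a 100 lb centre load on a 36 in span of nominal 2×4 (1.5 in × 3.5 in) deflecting 0.1 in gives E of roughly 181,000 psi under these assumptions.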
[ null, "http://www.timbertoolbox.com/Calcs/modE.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78732425,"math_prob":0.87305164,"size":270,"snap":"2022-27-2022-33","text_gpt3_token_len":80,"char_repetition_ratio":0.15413533,"word_repetition_ratio":0.0,"special_character_ratio":0.22592592,"punctuation_ratio":0.0,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9654054,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-12T07:01:49Z\",\"WARC-Record-ID\":\"<urn:uuid:5a894198-d91e-44af-9c62-20e64301868e>\",\"Content-Length\":\"2969\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:646d7cbc-6af1-4db6-9543-e362988ee291>\",\"WARC-Concurrent-To\":\"<urn:uuid:53917884-af1c-496b-9b5e-334e98b42a8f>\",\"WARC-IP-Address\":\"69.65.3.210\",\"WARC-Target-URI\":\"http://www.timbertoolbox.com/Calcs/modulusEcalc.htm\",\"WARC-Payload-Digest\":\"sha1:C7JJV4FS37BCWZFS4JNFN62OGBHPG3HC\",\"WARC-Block-Digest\":\"sha1:PXIJRBAPKFX3ILHYC6PHEWY3ZKJFYXSL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571584.72_warc_CC-MAIN-20220812045352-20220812075352-00079.warc.gz\"}"}
https://www.wikitechy.com/technology/java-programming-count-number-binary-strings-without-consecutive-1s/
[ "# Java Programming – Count number of binary strings without consecutive 1's

Given a positive integer N, count all possible distinct binary strings of length N such that there are no consecutive 1's.

Examples:

``````
Input: N = 2
Output: 3
// The 3 strings are 00, 01, 10

Input: N = 3
Output: 5
// The 5 strings are 000, 001, 010, 100, 101
``````
This problem can be solved using Dynamic Programming. Let a[i] be the number of binary strings of length i which do not contain any two consecutive 1's and which end in 0. Similarly, let b[i] be the number of such strings which end in 1. We can append either 0 or 1 to a string ending in 0, but we can only append 0 to a string ending in 1. This yields the recurrence relation:

a[i] = a[i – 1] + b[i – 1]
b[i] = a[i – 1]

The base cases of the above recurrence are a[0] = b[0] = 1. The total number of strings of length i is just a[i] + b[i].

Following is the implementation of the above solution. In the following implementation, indexes start from 0. So a[i] represents the number of binary strings for input length i+1. Similarly, b[i] represents binary strings for input length i+1.

Java Program
``````class Subset_sum
{
    static int countStrings(int n)
    {
        int a[] = new int[n];
        int b[] = new int[n];
        a[0] = b[0] = 1;
        for (int i = 1; i < n; i++)
        {
            a[i] = a[i-1] + b[i-1];
            b[i] = a[i-1];
        }
        return a[n-1] + b[n-1];
    }

    /* Driver program to test above function */
    public static void main (String args[])
    {
        System.out.println(countStrings(3));
    }
}
``````

Output:

`5`

If we take a closer look at the pattern, we can observe that the count is actually the (n+2)'th Fibonacci number for n >= 1. 
The Fibonacci Numbers are 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ….

```n = 1, count = 2 = fib(3)
n = 2, count = 3 = fib(4)
n = 3, count = 5 = fib(5)
n = 4, count = 8 = fib(6)
n = 5, count = 13 = fib(7)
................
```

Therefore we can count the strings in O(log n) time.

#### About the author

#### Wikitechy Editor
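The Fibonacci observation is easy to verify by running the article's DP next to a plain Fibonacci generator (a pure-Python sketch; with fib(0) = 0 and fib(1) = 1, the count for length n equals fib(n+2)):

```python
def count_strings(n):
    # a = strings of length i ending in 0, b = strings ending in 1
    # (no "11" anywhere); both are 1 for length 1.
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = a + b, a
    return a + b

def fib(k):
    x, y = 0, 1
    for _ in range(k):
        x, y = y, x + y
    return x
```

The O(log n) claim then follows from evaluating fib(n+2) with fast doubling or matrix exponentiation instead of the linear loop above.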
[ null, "https://www.wikitechy.com/technology/wp-content/uploads/2018/10/IMG-20160229-WA0017-150x150.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82470196,"math_prob":0.99708045,"size":1439,"snap":"2021-04-2021-17","text_gpt3_token_len":458,"char_repetition_ratio":0.13867596,"word_repetition_ratio":0.027303753,"special_character_ratio":0.3648367,"punctuation_ratio":0.18487395,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99863183,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-17T09:20:28Z\",\"WARC-Record-ID\":\"<urn:uuid:141f83d1-363d-488a-aa64-702d01618f00>\",\"Content-Length\":\"101840\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6ffa0ada-3e4d-4017-b9d5-aa2fde9cfa4c>\",\"WARC-Concurrent-To\":\"<urn:uuid:4641982c-6b3d-46b3-8a93-2f4d9dea5622>\",\"WARC-IP-Address\":\"104.21.82.82\",\"WARC-Target-URI\":\"https://www.wikitechy.com/technology/java-programming-count-number-binary-strings-without-consecutive-1s/\",\"WARC-Payload-Digest\":\"sha1:4IGHB5ZY6EIEEV55H5JMWTOOYZRV4LYW\",\"WARC-Block-Digest\":\"sha1:DFYR3NXHNJH75SNPV5WSCP6KSMR2FQVV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038118762.49_warc_CC-MAIN-20210417071833-20210417101833-00552.warc.gz\"}"}
https://www.geeksforgeeks.org/minimize-steps-required-to-obtain-sorted-order-of-an-array/?ref=rp
[ "# Minimize Steps required to obtain Sorted Order of an Array

• Difficulty Level : Expert
• Last Updated : 10 May, 2021

Given an array arr[] consisting of a permutation of integers [1, N], derived by rearranging the sorted order [1, N], the task is to find the minimum number of steps after which the sorted order [1, N] is repeated, by repeating the same process by which arr[] is obtained from the sorted sequence at each step.

Examples:

Input: arr[ ] = {3, 6, 5, 4, 1, 2}
Output: 6
Explanation:
Increasing Permutation: {1, 2, 3, 4, 5, 6}
Step 1 : arr[] = {3, 6, 5, 4, 1, 2} (Given array)
Step 2 : arr[] = {5, 2, 1, 4, 3, 6}
Step 3 : arr[] = {1, 6, 3, 4, 5, 2}
Step 4 : arr[] = {3, 2, 5, 4, 1, 6}
Step 5 : arr[] = {5, 6, 1, 4, 3, 2}
Step 6 : arr[] = {1, 2, 3, 4, 5, 6} (Increasing Permutation)
Therefore, the total number of steps required is 6.

Input: arr[ ] = [5, 1, 4, 3, 2]
Output: 6

Approach:
This problem can be solved simply by using the concept of Direct Addressing. 
Follow the steps given below to solve the problem:\n\n• Initialize an array dat[] as a Direct Address Table, storing the current index of every value in [1, N].\n• Iterate over [1, N] and, for each value, count the number of steps needed to reach its index in the sorted sequence by repeatedly following dat[] (i.e., the length of its permutation cycle), storing the counts in an array b[].\n• Calculate the LCM of the cycle lengths stored in b[].\n• Now, print the obtained LCM as the minimum number of steps required to obtain the sorted order.\n\nBelow is the implementation of the above approach:\n\n## C++14\n\n```cpp\n// C++ program to implement\n// the above approach\n#include <bits/stdc++.h>\nusing namespace std;\n\n// Function to find\n// GCD of two numbers\nint gcd(int a, int b)\n{\n    if (b == 0)\n        return a;\n\n    return gcd(b, a % b);\n}\n\n// Function to calculate the\n// LCM of array elements\nint findlcm(int arr[], int n)\n{\n    // Initialize result\n    int ans = 1;\n\n    for (int i = 1; i <= n; i++)\n        ans = (arr[i] * ans) / gcd(arr[i], ans);\n\n    return ans;\n}\n\n// Function to find minimum steps\n// required to obtain sorted sequence\nvoid minimumSteps(int arr[], int n)\n{\n    // Initialize dat[] array for the\n    // Direct Address Table\n    int i, dat[n + 1];\n\n    for (i = 1; i <= n; i++)\n        dat[arr[i - 1]] = i;\n\n    int b[n + 1], j = 0, c;\n\n    // Calculate the steps required for each\n    // element to reach its sorted position\n    for (i = 1; i <= n; i++) {\n        c = 1;\n        j = dat[i];\n        while (j != i) {\n            c++;\n            j = dat[j];\n        }\n        b[i] = c;\n    }\n\n    // Calculate LCM of the array\n    cout << findlcm(b, n);\n}\n\n// Driver Code\nint main()\n{\n    int arr[] = { 5, 1, 4, 3, 2, 7, 6 };\n\n    // Number of elements\n    int N = sizeof(arr) / sizeof(arr[0]);\n\n    minimumSteps(arr, N);\n\n    return 0;\n}\n```\n\n## Java\n\n```java\n// Java program to implement\n// the above approach\nclass GFG{\n\n// Function to find\n// GCD of two numbers\nstatic int gcd(int a, int b)\n{\n    if (b == 0)\n        return a;\n\n    return gcd(b, a % b);\n}\n\n// Function to calculate the\n// LCM of array elements\nstatic int findlcm(int arr[], int n)\n{\n    // Initialize result\n    int ans = 1;\n\n    for(int i = 1; i <= n; i++)\n        ans = (arr[i] * ans) / gcd(arr[i], ans);\n\n    return ans;\n}\n\n// Function to find minimum steps\n// required to obtain sorted sequence\nstatic void minimumSteps(int arr[], int n)\n{\n    // Initialize dat[] array for the\n    // Direct Address Table\n    int i;\n    int dat[] = new int[n + 1];\n\n    for(i = 1; i <= n; i++)\n        dat[arr[i - 1]] = i;\n\n    int b[] = new int[n + 1];\n    int j = 0, c;\n\n    // Calculate the steps required for each\n    // element to reach its sorted position\n    for(i = 1; i <= n; i++)\n    {\n        c = 1;\n        j = dat[i];\n\n        while (j != i)\n        {\n            c++;\n            j = dat[j];\n        }\n        b[i] = c;\n    }\n\n    // Calculate LCM of the array\n    System.out.println(findlcm(b, n));\n}\n\n// Driver code\npublic static void main(String[] args)\n{\n    int arr[] = { 5, 1, 4, 3, 2, 7, 6 };\n\n    int N = arr.length;\n\n    minimumSteps(arr, N);\n}\n}\n\n// This code is contributed by rutvik_56\n```\n\n## Python3\n\n```python\n# Python3 program to implement\n# the above approach\n\n# Function to find\n# GCD of two numbers\ndef gcd(a, b):\n\n    if (b == 0):\n        return a\n\n    return gcd(b, a % b)\n\n# Function to calculate the\n# LCM of array elements\ndef findlcm(arr, n):\n\n    # Initialize result\n    ans = 1\n\n    for i in range(1, n + 1):\n        ans = (arr[i] * ans) // gcd(arr[i], ans)\n\n    return ans\n\n# Function to find minimum steps\n# required to obtain sorted sequence\ndef minimumSteps(arr, n):\n\n    # Initialize dat[] array for the\n    # Direct Address Table\n    dat = [0] * (n + 1)\n\n    for i in range(1, n + 1):\n        dat[arr[i - 1]] = i\n\n    b = [0] * (n + 1)\n    j = 0\n\n    # Calculate the steps required for each\n    # element to reach its sorted position\n    for i in range(1, n + 1):\n        c = 1\n        j = dat[i]\n        while (j != i):\n            c += 1\n            j = dat[j]\n\n        b[i] = c\n\n    # Calculate LCM of the array\n    print(findlcm(b, n))\n\n# Driver Code\narr = [ 5, 1, 4, 3, 2, 7, 6 ]\n\nN = len(arr)\n\nminimumSteps(arr, N)\n\n# This code is contributed by Shivam Singh\n```\n\n## C#\n\n```csharp\n// C# program to implement\n// the above approach\nusing System;\n\nclass GFG{\n\n// Function to find\n// GCD of two numbers\nstatic int gcd(int a, int b)\n{\n    if (b == 0)\n        return a;\n\n    return gcd(b, a % b);\n}\n\n// Function to calculate the\n// LCM of array elements\nstatic int findlcm(int[] arr, int n)\n{\n    // Initialize result\n    int ans = 1;\n\n    for(int i = 1; i <= n; i++)\n        ans = (arr[i] * ans) / gcd(arr[i], ans);\n\n    return ans;\n}\n\n// Function to find minimum steps\n// required to obtain sorted sequence\nstatic void minimumSteps(int[] arr, int n)\n{\n    // Initialize dat[] array for the\n    // Direct Address Table\n    int i;\n    int[] dat = new int[n + 1];\n\n    for(i = 1; i <= n; i++)\n        dat[arr[i - 1]] = i;\n\n    int[] b = new int[n + 1];\n    int j = 0, c;\n\n    // Calculate the steps required for each\n    // element to reach its sorted position\n    for(i = 1; i <= n; i++)\n    {\n        c = 1;\n        j = dat[i];\n\n        while (j != i)\n        {\n            c++;\n            j = dat[j];\n        }\n        b[i] = c;\n    }\n\n    // Calculate LCM of the array\n    Console.WriteLine(findlcm(b, n));\n}\n\n// Driver code\npublic static void Main(String[] args)\n{\n    int[] arr = { 5, 1, 4, 3, 2, 7, 6 };\n\n    int N = arr.Length;\n\n    minimumSteps(arr, N);\n}\n}\n\n// This code is contributed by gauravrajput1\n```\n\nOutput:\n\n```\n6\n```\n\nTime Complexity: O(N^2) in the worst case, since every element traverses its entire cycle\nAuxiliary Space: O(N)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5901185,"math_prob":0.9922899,"size":7163,"snap":"2021-43-2021-49","text_gpt3_token_len":2465,"char_repetition_ratio":0.120268196,"word_repetition_ratio":0.46476063,"special_character_ratio":0.38768673,"punctuation_ratio":0.18906753,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998035,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-27T14:37:32Z\",\"WARC-Record-ID\":\"<urn:uuid:70495ec7-8cb0-4df6-907c-8ac460f19790>\",\"Content-Length\":\"172456\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e184834d-4212-4f21-a14e-7febfc4d3b6b>\",\"WARC-Concurrent-To\":\"<urn:uuid:1591129c-e9bd-488b-a31d-8d161db06307>\",\"WARC-IP-Address\":\"23.207.202.207\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/minimize-steps-required-to-obtain-sorted-order-of-an-array/?ref=rp\",\"WARC-Payload-Digest\":\"sha1:EVPGB3BIA5SSAWVFQLENUYH554T4P3UD\",\"WARC-Block-Digest\":\"sha1:4MUNUEQ7O673FEQFCMC6ALUPGP4JSU7L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588153.7_warc_CC-MAIN-20211027115745-20211027145745-00162.warc.gz\"}"}
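The cycle-length/LCM approach in the article above can be cross-checked against a direct simulation of the repeated rearrangement. The following is my own illustrative Python sketch, not code from the article; the function names are made up for this example.

```python
from math import gcd

def min_steps_lcm(arr):
    """Answer = order of the permutation = LCM of its cycle lengths."""
    n = len(arr)
    dat = [0] * (n + 1)              # dat[v] = 1-based index of value v
    for i, v in enumerate(arr, start=1):
        dat[v] = i
    ans = 1
    seen = [False] * (n + 1)
    for i in range(1, n + 1):
        c, j = 0, i
        while not seen[j]:           # walk each cycle exactly once
            seen[j] = True
            c += 1
            j = dat[j]
        if c:                        # skip values already visited (c == 0)
            ans = ans * c // gcd(ans, c)
    return ans

def min_steps_bruteforce(arr):
    """Reapply the rearrangement until the sorted order reappears.
    The given array counts as step 1, as in the worked example."""
    perm, cur, steps = arr[:], arr[:], 1
    target = sorted(arr)
    while cur != target:
        cur = [cur[p - 1] for p in perm]
        steps += 1
    return steps
```

For the array {3, 6, 5, 4, 1, 2} from the example, both functions return 6, matching the step-by-step trace.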
https://jobs-on-monster.com/recommendations/how-to-work-out-how-many-calories.html
[ "# How To Work Out How Many Calories?\n\n## How do I calculate my daily calorie needs?\n\nTo determine your total daily calorie needs, multiply your BMR by the appropriate activity factor, as follows:\n\n• If you are sedentary (little or no exercise) : Calorie-Calculation = BMR x 1.2.\n• If you are lightly active (light exercise/sports 1-3 days/week) : Calorie-Calculation = BMR x 1.375.\n\n## How do you calculate calories burned?\n\nHere’s your equation: MET value multiplied by weight in kilograms tells you calories burned per hour (MET*weight in kg=calories/hour). If you only want to know how many calories you burned in a half hour, divide that number by two. If you want to know about 15 minutes, divide that number by four." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9065629,"math_prob":0.9558281,"size":1329,"snap":"2020-34-2020-40","text_gpt3_token_len":315,"char_repetition_ratio":0.12830189,"word_repetition_ratio":0.04366812,"special_character_ratio":0.23702031,"punctuation_ratio":0.11956522,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9949102,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-21T02:03:57Z\",\"WARC-Record-ID\":\"<urn:uuid:5525a131-0507-42cd-8d77-aa75240bef39>\",\"Content-Length\":\"15784\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0f40e343-27a2-4e74-9810-f680707eb113>\",\"WARC-Concurrent-To\":\"<urn:uuid:c5573eb2-48ed-4473-b27e-66506159ef46>\",\"WARC-IP-Address\":\"172.67.211.214\",\"WARC-Target-URI\":\"https://jobs-on-monster.com/recommendations/how-to-work-out-how-many-calories.html\",\"WARC-Payload-Digest\":\"sha1:GW7VYMZXDTY6JMPMQFNL2CVB7CL7BDIR\",\"WARC-Block-Digest\":\"sha1:PLBCQTA6PDIETOLOWRKCM3RV54D7ZQI7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400198887.3_warc_CC-MAIN-20200921014923-20200921044923-00262.warc.gz\"}"}
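The two rules of thumb above translate directly into arithmetic. Below is a small Python sketch of my own, not code from the page; the activity factors 1.2 and 1.375 come from the text, while the BMR of 1500 kcal, the MET value of 8, and the 70 kg body weight in the usage lines are hypothetical illustrative inputs.

```python
def daily_calories(bmr, activity_factor):
    """Total daily calorie needs: BMR multiplied by an activity factor
    (1.2 for sedentary, 1.375 for lightly active, per the text)."""
    return bmr * activity_factor

def calories_burned(met, weight_kg, minutes=60):
    """MET value * weight in kg = calories burned per hour;
    scale by the fraction of an hour actually spent exercising."""
    return met * weight_kg * (minutes / 60)

# Hypothetical example: BMR of 1500 kcal, sedentary lifestyle.
tdee = daily_calories(1500, 1.2)             # 1800 kcal/day

# Hypothetical example: 8-MET activity, 70 kg person, half an hour.
burned = calories_burned(8, 70, minutes=30)  # 280 kcal
```

Halving or quartering the per-hour figure, as the page suggests for 30- and 15-minute sessions, is exactly the `minutes / 60` scaling above.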
https://proofwiki.org/wiki/Real_Numbers_form_Ordered_Integral_Domain
[ "# Real Numbers form Ordered Integral Domain\n\n## Theorem\n\nThe set of real numbers $\\R$ forms an ordered integral domain under addition and multiplication: $\\struct {\\R, +, \\times, \\le}$.\n\n## Proof 1\n\nThis follows directly from Real Numbers form Totally Ordered Field.\n\nThe set of real numbers $\\R$ forms a totally ordered field under addition and multiplication: $\\struct {\\R, +, \\times, \\le}$.\n\nA totally ordered field is also an ordered integral domain.\n\n$\\blacksquare$\n\n## Proof 2\n\nWe have that the real numbers $\\struct {\\R, +, \\times}$ form an integral domain.\n\nIt remains to specify a property $P$ on $\\R$ such that:\n\n$(1): \\quad \\forall a, b \\in \\R: \\map P a \\land \\map P b \\implies \\map P {a + b}$\n$(2): \\quad \\forall a, b \\in \\R: \\map P a \\land \\map P b \\implies \\map P {a \\times b}$\n$(3): \\quad \\forall a \\in \\R: \\map P a \\lor \\map P {-a} \\lor a = 0$\n\nWe have that the rational numbers $\\struct {\\Q, +, \\times}$ form an ordered integral domain, where $\\Q$ denotes the set of rational numbers.\n\nLet $P'$ be the (strict) positivity property on $\\struct {\\Q, +, \\times}$.\n\nLet us define the property $P$ on $\\R$ as:\n\n$\\forall a \\in \\R: \\map P a \\iff a = \\eqclass {\\sequence {a_n} } {}: \\forall n \\in \\N: \\map {P'} {a_n}$\n\nThat is, an element $a = \\eqclass {\\sequence {a_n} } {}$ has $P$ if and only if $a_n$ has the (strict) positivity property in $\\Q$ for all $n \\in \\N$.\n\nNow let $a = \\eqclass {\\sequence {a_n} } {}$ and $b = \\eqclass {\\sequence {b_n} } {}$ such that $\\map P a$ and $\\map P b$.\n\nThen by definition of real addition and real multiplication:\n\n$\\eqclass {\\sequence {a_n} } {} + \\eqclass {\\sequence {b_n} } {} = \\eqclass {\\sequence {a_n + b_n} } {}$\n$\\eqclass {\\sequence {a_n} } {} \\times \\eqclass {\\sequence {b_n} } {} = \\eqclass {\\sequence {a_n \\times b_n} } {}$\n\nIt can be seen from the definition of (strict) positivity $P'$ on $\\Q$ that $\\map P {a + b}$ and $\\map P {a 
\\times b}$.\n\nIt can be seen that if $\\map P a$ then $\\neg \\map P {-a}$ and vice versa.\n\nAlso we note that $\\neg \\map P 0$ and of course $\\neg \\map P {-0}$.\n\nSo the property $P$ we defined fulfils the criteria for the (strict) positivity property.\n\nHence the result.\n\n$\\blacksquare$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.68518937,"math_prob":0.9999789,"size":2316,"snap":"2019-51-2020-05","text_gpt3_token_len":731,"char_repetition_ratio":0.1535467,"word_repetition_ratio":0.15764706,"special_character_ratio":0.3605354,"punctuation_ratio":0.13616072,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99999654,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-13T23:58:41Z\",\"WARC-Record-ID\":\"<urn:uuid:62d1a0c2-c88c-412b-b0d3-da477914ffb5>\",\"Content-Length\":\"36348\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:78cdac7c-5c20-4ab2-965c-b741ec693837>\",\"WARC-Concurrent-To\":\"<urn:uuid:91b71d5f-dcff-4a87-b2e6-3e94370c0db7>\",\"WARC-IP-Address\":\"104.27.168.113\",\"WARC-Target-URI\":\"https://proofwiki.org/wiki/Real_Numbers_form_Ordered_Integral_Domain\",\"WARC-Payload-Digest\":\"sha1:HQFAATVVBCSL7FYDDG5JXFWXPOZJ6OFO\",\"WARC-Block-Digest\":\"sha1:67JJPBZQ7RJ5Y6V2BHI6IBROBNQW3AQV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540569332.20_warc_CC-MAIN-20191213230200-20191214014200-00390.warc.gz\"}"}
http://derivatives.garven.com/page/2/
[ "# On the role of replicating portfolios in the pricing of financial derivatives in general\n\nReplicating portfolios play a central role in terms of pricing financial derivatives. Here is what we have learned so far about replicating portfolios in Finance 4366:\n\n1. Replicating portfolios for long and short forward contracts. Buying forward is equivalent to buying the underlying on margin, and selling forward is equivalent to shorting the underlying and lending money. Like options, forwards and futures are priced by pricing the replicating portfolio and invoking the “no-arbitrage” condition. If the forward/futures price is too low, then one can earn positive returns with zero risk and zero net investment by buying forward, shorting the underlying and lending money. Similarly, if the forward/futures price is too high, one can earn positive returns with zero risk and zero net investment by selling forward and buying the underlying with borrowed money. This is commonly referred to as “riskless arbitrage”; it’s riskless because you’re perfectly hedged, and it’s arbitrage because you are buying low and selling high.\n2. Replicating portfolio for a call option. The replicating portfolio for a call option is a margined investment in the underlying. For example, in my teaching note entitled “A Simple Model of a Financial Market”, I provide a numerical example where the interest rate is zero, there are two states of the world, a bond which pays off \\$1 in both states is worth \\$1 today, and a stock that pays off \\$2 in one state and \\$.50 in the other state is also worth one dollar. In that example, the replicating portfolio for a European call option with an exercise price of \\$1 consists of 2/3 of 1 share of stock (costing \\$0.66) and a margin balance consisting of a short position in 1/3 of a bond (which is worth -\\$0.33). Thus, the arbitrage-free value of the call option is \\$0.66 – \\$0.33 = \\$0.33.\n3. Replicating portfolio for a put option. 
Since the replicating portfolio for a call option is a margined investment in the underlying, it should come as no surprise that the replicating portfolio for a put option consists of a short position in the underlying combined with lending. Thus, in order to price the put, you need to determine and price the components of the replicating portfolio.  Continuing with the same numerical example described for the call option in part 2 above,  the replicating portfolio for an otherwise identical European put option with an exercise price of \\$1 consists of a short position in 1/3 of 1 share of stock (which is worth -\\$0.33), combined with a long position in 2/3 of a bond (which is worth \\$0.66). Thus, the arbitrage-free value of the put option is  – \\$0.33 + \\$0.66 = \\$0.33.\n4. Put-Call Parity. Note also that if you know the value of a call, the underlying, and the present value of the exercise price, then you can apply the put-call parity equation to figure out the price for the put option; i.e.,", null, "${C_0} + PV(K) = {P_0} + {S_0} \\Rightarrow {P_0} = {C_0} + PV(K) - {S_0}.$ Since we know the price of the call (\\$0.33), the present value of the exercise price (\\$1), and the stock price (\\$1), then it follows from the put-call parity equation that the value of the put is also 33 cents. More generally, if you know the values of three of the four securities that are included in the put-call parity equation, then you can infer the “no-arbitrage” value of the fourth security.\n\n# Problem Set 3, due Thursday, 2/6, is now available\n\nI just posted Problem Set 3 on the course website. It is due at the beginning of class on Thursday, 2/6, and it is based on material that we will cover during tomorrow’s class meeting.\n\n# Mark your calendars – Finance 4366 extra credit opportunity!\n\nI have decided to offer the following extra credit opportunity for Finance 4366. You can earn extra credit by attending and reporting on Dr. 
Michael Rectenwald’s upcoming talk entitled “Woke Capitalism: Qui Bono?” Dr. Rectenwald’s talk is scheduled for Thursday, February 13 from 5:15-6:30 p.m. in Foster 250.\n\nIf you decide to take advantage of this opportunity, I will use the grade you earn on your report to replace your lowest quiz grade in Finance 4366 (assuming that your grade on the extra credit is higher than your lowest quiz grade). The report should be in the form of a 1-2 page executive summary in which you provide a critical analysis of Dr. Rectenwald’s lecture. In order to receive credit, the report must be submitted via email to [email protected] in either Word or PDF format by no later than Monday, February 17 at 5 p.m.", null, "# The latest prediction market assessment\n\nIn a blog post last week entitled “Prediction markets’ take on removal of POTUS from office”, I wrote about the PredictIt.org prediction market, where one could purchase a “share” which pays \\$1 if the answer to the question, “Will the Senate convict Donald Trump in his first term?”, turns out to be “yes”. Since the price of that share at that time was 8 cents, this meant that investors at the time were placing an 8% probability that POTUS would be removed from office as a consequence of the Senate trial.\n\nSince last week, the daily closing prices of the share price have fluctuated between a high of 11 cents (on 1/27 and 1/28) and a low of 4 cents (on 1/30); see the graph below:", null, "It will be interesting to see how the share price changes going forward. Given the political composition of the US Senate, it is not surprising that the odds of conviction are as low as they are, but expect more share price volatility as this political drama plays out. After all, the only two possibilities are for the share price to go to either \\$0 or \\$1.\n\n# Examples of commodity and financial futures contracts traded on U.S. 
exchanges", null, "For a non-technical introduction to forward and futures contracts, it’s hard to beat the following video tutorial on this topic:\n\n# Finance 4366 Grades on Canvas\n\nI have posted Finance 4366 numeric course grades to Canvas; the FIN 4366 grade book is at https://baylor.instructure.com/courses/108333/gradebook. To date, we have had five class meetings, three quizzes (two of which have been graded), and three problem sets (problem sets 1, 2, and the student questionnaire which was graded on a (0, 100) basis; the grade for problem set 2 is still pending). Since we haven’t had any exams yet, the course grade which now appears on Canvas was calculated using the following equation:\n\nCurrent Course Numeric Grade = (.10(Class Attendance) + .10(Quizzes) + .20(Problem Sets))/.4\n\nOf course, this is simply a special case of the final course numeric grade equation given in the course syllabus:\n\nFinal Course Numeric Grade = .10(Class Attendance) + .10(Quizzes) + .20(Problem Sets) + Max{.20(Midterm Exam 1) + .20(Midterm Exam 2) + .20(Final Exam), .20(Midterm Exam 1) + .40(Final Exam), .20(Midterm Exam 2) + .40(Final Exam)}\n\nAs time passes and I continue to collect grade data from you, the grades as reported on Canvas will be posted on a timely basis. 
Once Midterm 1 grades are determined, then the Canvas course grade will be calculated using the following equation:\n\nCourse Numeric Grade after Midterm 1 = (.10(Class Attendance) + .10(Quizzes) + .20(Problem Sets) + .20(Midterm 1))/.6\n\nOnce Midterm 2 grades are determined, then the following equation will be used:\n\nCourse Numeric Grade after Midterm 2 = (.10(Class Attendance) + .10(Quizzes) + .20(Problem Sets) + .20(Midterm 1) + .20(Midterm 2))/.8\n\nAfter I record final exam grades, I will use the Final Course Numeric Grade equation above to determine your final course numeric grade, and (as also noted in the course syllabus), the final course letter grade will be based upon the following schedule of final course numeric grades:\n\nA: 93-100% | A-: 90-93% | B+: 87-90% | B: 83-87% | B-: 80-83% | C+: 77-80%\nC: 73-77% | C-: 70-73% | D+: 67-70% | D: 63-67% | D-: 60-63% | F: <60%\n\n# Problem Set 2 question\n\nHere’s a brief Q&A that I had last evening with a student about Problem Set 2:\n\nQ: Hi Dr. Garven, I was wondering if I should use =stdev.p or =stdev.s when solving for the standard deviation on Excel. I understand this is a sample so =stdev.s is the formula I presume that I should use but I’m not quite sure. Which should I use?\n\nA: Neither – the “=stdev.p” and “=stdev.s” commands in Excel are useful for calculating the standard deviation of observed realized values of a random variable. In problem set 2, the investor contemplates expectations of future state-contingent returns on securities. Since standard deviation is the square root of the variance, start by calculating variance (see p. 9 of the Statistics Tutorial, Part 1 lecture note)." ]
[ null, "http://s0.wp.com/latex.php", null, "http://risk.garven.com/wp-content/uploads/2020/02/Free-Enterprise-Forum-with-Michael-Rectenwald-2-13-2020.png", null, "http://risk.garven.com/wp-content/uploads/2020/01/trumpcontract-1-31-2020-1.png", null, "http://derivatives.garven.com/wp-content/uploads/2016/09/image001-e1473195426839.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9405499,"math_prob":0.91388553,"size":6427,"snap":"2020-10-2020-16","text_gpt3_token_len":1453,"char_repetition_ratio":0.1276662,"word_repetition_ratio":0.06896552,"special_character_ratio":0.2307453,"punctuation_ratio":0.106481485,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96504486,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-25T18:47:34Z\",\"WARC-Record-ID\":\"<urn:uuid:442cc29f-1104-4d95-8700-b1c2209dea4c>\",\"Content-Length\":\"49494\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:48f6b398-bb00-415f-9f70-2fbe9091228c>\",\"WARC-Concurrent-To\":\"<urn:uuid:d472d51a-934d-4d4b-b20e-01f34e0b049a>\",\"WARC-IP-Address\":\"107.180.0.214\",\"WARC-Target-URI\":\"http://derivatives.garven.com/page/2/\",\"WARC-Payload-Digest\":\"sha1:P5VZYUTEQCSMQ3O62EXIBJ42NF7VM4PZ\",\"WARC-Block-Digest\":\"sha1:Z656KNFLBZKBFPAW6APYPTH4MHLMEVCU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146127.10_warc_CC-MAIN-20200225172036-20200225202036-00539.warc.gz\"}"}
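The call and put prices quoted in the blog post above can be reproduced with a short two-state pricing sketch. This is my own illustrative Python (solving the 2x2 replication system by Cramer's rule), not code from the post; it uses the post's market: a \$1 bond paying \$1 in both states, a \$1 stock paying \$2 or \$0.50, a zero interest rate, and exercise price K = \$1.

```python
def replicate(payoff, stock_pay, bond_pay):
    """Solve shares * stock_pay[i] + bonds * bond_pay[i] = payoff[i]
    (i = 0, 1) by Cramer's rule: the two-state replicating portfolio."""
    det = stock_pay[0] * bond_pay[1] - stock_pay[1] * bond_pay[0]
    shares = (payoff[0] * bond_pay[1] - payoff[1] * bond_pay[0]) / det
    bonds = (stock_pay[0] * payoff[1] - stock_pay[1] * payoff[0]) / det
    return shares, bonds

# Market from the note: zero interest rate; a bond paying $1 in both
# states costs $1; a stock paying $2 or $0.50 costs $1; strike K = $1.
S0, B0, K = 1.0, 1.0, 1.0
stock_pay, bond_pay = (2.0, 0.5), (1.0, 1.0)

call_shares, call_bonds = replicate((1.0, 0.0), stock_pay, bond_pay)
C0 = call_shares * S0 + call_bonds * B0   # 2/3 share - 1/3 bond = $0.33

put_shares, put_bonds = replicate((0.0, 0.5), stock_pay, bond_pay)
P0 = put_shares * S0 + put_bonds * B0     # -1/3 share + 2/3 bond = $0.33

# Put-call parity with r = 0 (so PV(K) = K): P0 = C0 + PV(K) - S0
parity_P0 = C0 + K - S0
```

Pricing the put directly from its replicating portfolio and inferring it from put-call parity give the same \$1/3 value, which is the "no-arbitrage" consistency the post describes.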
https://www.ig.com/au/glossary-trading-terms/rate-of-return-definition
[ "CFDs are complex instruments and come with a high risk of losing money rapidly due to leverage. Please ensure you fully understand the risks involved.\n\n# Rate of return definition\n\n## What is rate of return?\n\nRate of return (ROR) is the loss or gain of an investment over a certain period, expressed as a percentage of the initial cost of the investment. A positive ROR means the position has made a profit, while a negative ROR means a loss. You will have a rate of return on any investment you make.\n\nTo calculate the rate of return for an investment, subtract the starting value of the investment from its final value (remember to include dividends and interest). Then, divide this amount by the starting value of the investment, and multiply that figure by 100. This will give you the RoR, expressed as a percentage.\n\n### Rate of return in trading and investing\n\nA rate of return can give traders and investors key information for future trades or investments. The rate of return can be used by traders to evaluate the outcome of their trades. However, it is more commonly used as a long-term calculation by investors – to determine whether the cost of an investment is worth the potential profit or loss.\n\n## Shares vs bonds rates of return\n\nThe calculations for the rate of return for shares and the rate of return for bonds are different because shares yield dividends, while bonds carry interest.\n\n### Example rate of return calculation for shares\n\nLet’s say that you own two ABC Limited shares, which you bought for \\$40 each. 
This would mean that your initial investment was worth \$80.\n\nOver the course of one year, ABC Limited pays out dividends of \$2 per share – giving you a total of \$4 – and the share price goes up to \$50. This means that your total investment would be worth \$104 (the value of the shares plus dividend payments). You would then subtract the original value of your investment (\$80) from the new value (\$104) and divide this by \$80. To get your rate of return as a percentage, you would multiply this figure by 100. This gives an annual rate of return of 30%.\n\n### Example rate of return calculation for bonds\n\nAlternatively, if you own a \$100,000 bond with a 5% interest rate, which reaches maturity after four years, you will earn \$5000 income every year (bond value multiplied by interest rate). If you sell the bond for \$120,000 after one year, the appreciation – or growth – of the bond is \$20,000 (subtract original bond value from new bond value).\n\nThe calculation of the rate of return is the interest plus appreciation, divided by original bond price – expressed as a percentage. The rate of return after one year is therefore 25% (\$5000 plus \$20,000, divided by \$100,000, multiplied by 100)." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9306304,"math_prob":0.97242534,"size":1503,"snap":"2023-40-2023-50","text_gpt3_token_len":335,"char_repetition_ratio":0.15410273,"word_repetition_ratio":0.015209125,"special_character_ratio":0.26347306,"punctuation_ratio":0.0945946,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9957157,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-25T22:30:23Z\",\"WARC-Record-ID\":\"<urn:uuid:b87e0ffc-e923-434c-835a-7237d42541aa>\",\"Content-Length\":\"86706\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9f435881-52a5-4cf6-9d47-f2048c446c76>\",\"WARC-Concurrent-To\":\"<urn:uuid:b7edbb53-7a6b-47fd-894b-942d26df29f6>\",\"WARC-IP-Address\":\"23.6.77.230\",\"WARC-Target-URI\":\"https://www.ig.com/au/glossary-trading-terms/rate-of-return-definition\",\"WARC-Payload-Digest\":\"sha1:2IWTPV3GSEOCQU23DUYKUMFVW4UUHYXN\",\"WARC-Block-Digest\":\"sha1:IBUN3EQROOBOOSQIYD74NEAPUZ7QDHU4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510100.47_warc_CC-MAIN-20230925215547-20230926005547-00172.warc.gz\"}"}
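Both worked examples in the glossary entry above follow the same formula, which can be sketched in a few lines of Python (my own illustration, not code from the page):

```python
def rate_of_return(initial_value, final_value, income=0.0):
    """RoR % = (final value + income - initial value) / initial value * 100,
    where income is dividends (for shares) or interest (for bonds)."""
    return (final_value + income - initial_value) / initial_value * 100

# Shares: two shares at $40 ($80 total), worth $50 each after a year,
# plus $2 per share in dividends ($4 total)  ->  30%
shares_ror = rate_of_return(80, 100, income=4)

# Bond: $100,000 bond with $5,000 annual interest, sold for $120,000  ->  25%
bond_ror = rate_of_return(100_000, 120_000, income=5_000)
```

A negative result marks a losing position, matching the definition at the top of the entry.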
https://www.colorhexa.com/04543d
[ "# #04543d Color Information\n\nIn the RGB color space, hex #04543d is composed of 1.6% red, 32.9% green and 23.9% blue, whereas in the CMYK color space it is composed of 95.2% cyan, 0% magenta, 27.4% yellow and 67.1% black. It has a hue angle of 162.8 degrees, a saturation of 90.9% and a lightness of 17.3%. The hex color #04543d can be obtained by blending #08a87a with #000000. The closest websafe color is #006633.\n\nRGB color chart:\n• R 2\n• G 33\n• B 24\n\nCMYK color chart:\n• C 95\n• M 0\n• Y 27\n• K 67\n\n#04543d color description: Very dark cyan - lime green.\n\n# #04543d Color Conversion\n\nThe hexadecimal color #04543d has RGB values of R:4, G:84, B:61 and CMYK values of C:0.95, M:0, Y:0.27, K:0.67. Its decimal value is 283709.\n\nHex triplet: #04543d | RGB: rgb(4,84,61) | RGB percent: rgb(1.6%,32.9%,23.9%) | CMYK: 95, 0, 27, 67 | HSL: hsl(162.8,90.9%,17.3%) | HSV: 162.8°, 95.2, 32.9 | Web safe: #006633\nCIE-LAB: 31.121, -28.291, 7.337 | XYZ: 4.062, 6.703, 5.494 | xyY: 0.25, 0.412, 6.703 | CIE-LCH: 31.121, 29.226, 165.462 | CIE-LUV: 31.121, -25.75, 12.081 | Hunter-Lab: 25.89, -17.3, 5.541 | Binary: 00000100, 01010100, 00111101\n\n# Color Schemes with #04543d\n\nComplementary Color:\n• #04543d rgb(4,84,61)\n• #54041b rgb(84,4,27)\n\nAnalogous Color:\n• #045415 rgb(4,84,21)\n• #04543d rgb(4,84,61)\n• #044354 rgb(4,67,84)\n\nSplit Complementary Color:\n• #541504 rgb(84,21,4)\n• #04543d rgb(4,84,61)\n• #540443 rgb(84,4,67)\n\nTriadic Color:\n• #543d04 rgb(84,61,4)\n• #04543d rgb(4,84,61)\n• #3d0454 rgb(61,4,84)\n\nTetradic Color:\n• #1b5404 rgb(27,84,4)\n• #04543d rgb(4,84,61)\n• #3d0454 rgb(61,4,84)\n• #54041b rgb(84,4,27)\n\nMonochromatic Color:\n• #010b08 rgb(1,11,8)\n• #02231a rgb(2,35,26)\n• #033c2b rgb(3,60,43)\n• #04543d rgb(4,84,61)\n• #056c4f rgb(5,108,79)\n• #068560 rgb(6,133,96)\n• #079d72 rgb(7,157,114)\n\n# Alternatives to #04543d\n\nBelow, you can see some colors close to #04543d. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\nSimilar Colors:\n• #045429 rgb(4,84,41)\n• #045430 rgb(4,84,48)\n• #045436 rgb(4,84,54)\n• #04543d rgb(4,84,61)\n• #045444 rgb(4,84,68)\n• #04544a rgb(4,84,74)\n• #045451 rgb(4,84,81)\n\n# #04543d Preview\n\nThis text has a font color of #04543d.\n\n`<span style=\"color:#04543d;\">Text here</span>`\n\n#04543d background color\n\nThis paragraph has a background color of #04543d.\n\n`<p style=\"background-color:#04543d;\">Content here</p>`\n\n#04543d border color\n\nThis element has a border color of #04543d.\n\n`<div style=\"border:1px solid #04543d;\">Content here</div>`\n\nCSS codes:\n`.text {color:#04543d;}`\n`.background {background-color:#04543d;}`\n`.border {border:1px solid #04543d;}`\n\n# Shades and Tints of #04543d\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #000907 is the darkest color, while #f5fffc is the lightest one.\n\nShade and Tint Color Variation:\n• #000907 rgb(0,9,7)\n• #011c14 rgb(1,28,20)\n• #022f22 rgb(2,47,34)\n• #03412f rgb(3,65,47)\n• #04543d rgb(4,84,61)\n• #05674b rgb(5,103,75)\n• #067958 rgb(6,121,88)\n• #078c66 rgb(7,140,102)\n• #089f73 rgb(8,159,115)\n• #08b281 rgb(8,178,129)\n• #09c48f rgb(9,196,143)\n• #0ad79c rgb(10,215,156)\n• #0beaaa rgb(11,234,170)\n• #15f4b4 rgb(21,244,180)\n• #27f5ba rgb(39,245,186)\n• #3af6c0 rgb(58,246,192)\n• #4df7c6 rgb(77,247,198)\n• #60f7cc rgb(96,247,204)\n• #72f8d2 rgb(114,248,210)\n• #85f9d8 rgb(133,249,216)\n• #98fade rgb(152,250,222)\n• #aafbe4 rgb(170,251,228)\n• #bdfcea rgb(189,252,234)\n• #d0fdf0 rgb(208,253,240)\n• #e3fef6 rgb(227,254,246)\n• #f5fffc rgb(245,255,252)\n\n# Tones of #04543d\n\nA tone is produced by adding gray to any pure hue. In this case, #292f2d is the least saturated color, while #01573e is the most saturated one.\n\nTone Color Variation:\n• #292f2d rgb(41,47,45)\n• #26322f rgb(38,50,47)\n• #223630 rgb(34,54,48)\n• #1f3931 rgb(31,57,49)\n• #1c3c33 rgb(28,60,51)\n• #184034 rgb(24,64,52)\n• #154336 rgb(21,67,54)\n• #124637 rgb(18,70,55)\n• #0e4a39 rgb(14,74,57)\n• #0b4d3a rgb(11,77,58)\n• #07513c rgb(7,81,60)\n• #04543d rgb(4,84,61)\n• #01573e rgb(1,87,62)\n\n# Color Blindness Simulator\n\nBelow, you can see how #04543d is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy:\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\n\nDichromacy:\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\n\nTrichromacy:\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5737366,"math_prob":0.69487876,"size":3668,"snap":"2020-24-2020-29","text_gpt3_token_len":1635,"char_repetition_ratio":0.124181226,"word_repetition_ratio":0.011029412,"special_character_ratio":0.56352234,"punctuation_ratio":0.23730685,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99304765,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-05T07:42:58Z\",\"WARC-Record-ID\":\"<urn:uuid:df5d8318-23cd-4c07-aed9-1a4e0a42a8ee>\",\"Content-Length\":\"36189\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:13adbcef-e2b7-4442-b9a1-1aec5aa0ef40>\",\"WARC-Concurrent-To\":\"<urn:uuid:15e98d27-485f-419b-82a9-d924a1e5a655>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/04543d\",\"WARC-Payload-Digest\":\"sha1:2D6S37Q4FKP45JM5XT4OO4SNW2VBGJDL\",\"WARC-Block-Digest\":\"sha1:U5L4HA74PTE4TFA65NEMFL2IVDOCAREY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590348493151.92_warc_CC-MAIN-20200605045722-20200605075722-00350.warc.gz\"}"}
https://www.rdocumentation.org/packages/spatstat/versions/1.42-2/topics/runiflpp
[ "# runiflpp\n\n##### Uniform Random Points on a Linear Network\n\nGenerates $n$ random points, independently and uniformly distributed, on a linear network.\n\nKeywords\nspatial, datagen\n##### Usage\nruniflpp(n, L, nsim=1)\n##### Arguments\nn\nNumber of random points to generate. A nonnegative integer, or a vector of integers specifying the number of points of each type.\nL\nA linear network (object of class \"linnet\", see linnet).\nnsim\nNumber of simulated realisations to generate.\n##### Details\n\nThis function uses runifpointOnLines to generate the random points.\n\n##### Value\n\n• If nsim = 1, a point pattern on the linear network, i.e. an object of class \"lpp\". If nsim > 1, a list of such point patterns.\n\nrpoislpp, lpp, linnet\n\n• runiflpp\n##### Examples\ndata(simplenet)\nX <- runiflpp(10, simplenet)\nplot(X)\n# marked\nZ <- runiflpp(c(a=10, b=3), simplenet)\nDocumentation reproduced from package spatstat, version 1.42-2, License: GPL (>= 2)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5663528,"math_prob":0.9593762,"size":823,"snap":"2019-43-2019-47","text_gpt3_token_len":232,"char_repetition_ratio":0.12087912,"word_repetition_ratio":0.0,"special_character_ratio":0.24787363,"punctuation_ratio":0.14935064,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9877186,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-20T21:55:12Z\",\"WARC-Record-ID\":\"<urn:uuid:f6ebf136-1b94-4559-bb7b-045ee51ddae7>\",\"Content-Length\":\"14845\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2b37db93-51b1-4d1f-bfc8-9a550b996a1b>\",\"WARC-Concurrent-To\":\"<urn:uuid:2137a28e-9f82-4e61-b339-b35800bacc87>\",\"WARC-IP-Address\":\"52.21.93.115\",\"WARC-Target-URI\":\"https://www.rdocumentation.org/packages/spatstat/versions/1.42-2/topics/runiflpp\",\"WARC-Payload-Digest\":\"sha1:YDSWI3EJUVFYWNU54G4YG3JZEE3ROBTQ\",\"WARC-Block-Digest\":\"sha1:AI6OLEGLGXLSZWQIQTIKP7CWU3NYGNUO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670635.48_warc_CC-MAIN-20191120213017-20191121001017-00513.warc.gz\"}"}
http://andras.tantosonline.com/bit-block.htm
[ "# A new A/D converter design. Bit-block converters.\n\nUpdate: After a discussion on a newsgroup I realized that this idea is not totally new. It has been used in various A/D converters in the previous decades. I'll keep it on my website, however, since I believe it's still a good description of a lesser-known A/D conversion technique.\n\n## Introduction\n\nThis paper presents a new analog-to-digital converter implementation. The technique described here is based on the well-known process of converting base-10 fractional numbers between 0 and 1 to their base-2 representation. Different configurations featuring different speed, complexity and accuracy will be introduced. Converters with a continuous-time analog domain and converters with logarithmic output will also be presented.\n\nIn the past, various methods have been invented to convert analog values to their digital representation. Devices have used various electrical or mathematical approaches to convert an analog voltage or current value to its digital representation. The class of converters introduced in this paper uses a mathematical approach. All share one common central idea: a building block that produces one bit of digital information (or, in one variant, one digit of information) at a time. Using this building block, many different configurations can be constructed, with different trade-offs in speed, complexity and accuracy. I will describe a configuration which can achieve speed comparable to successive approximation converters with constant complexity (i.e. the complexity of the converter is independent of the resolution). I will also describe a converter that performs comparably to (pipelined) flash converters with log2(n) complexity (the complexity is proportional to the number of output bits, not the number of output values). 
Further modifications of the basic idea lead to a continuous-time A/D converter where the sampling occurs in the digital domain, and to a converter that performs logarithmic conversion.\n\n## The basic idea\n\nIt is a well-known technique to convert fractional base-10 numbers between 0 and 1 into their base-2 representation. I will first describe it briefly, just for completeness:\n\nLet's have a number N1 between 0 inclusive and 1 exclusive in its base-10 representation. We need its base-2 digits. Its first digit is of course 0, and then comes the fractional sign, '.'. The first digit after the fractional sign is '1' if N1 >= 0.5 and '0' otherwise. Let's subtract 0.5 from N1 if it is at least 0.5, and multiply the result by 2. This way we get a new number, N2, between 0 and 1 just as N1 was. Its first digit after the fractional sign is the second digit of N1, so we can deduce N1's second digit with the same method, only using N2 instead of N1. We can continue this method until we reach an Ni equal to 0, in which case all further digits are zeros, or until we reach the desired accuracy. Formally:\n\n1. Let's have a number 0 <= N1 < 1. Let's call our output number's i'th digit after the fractional sign Oi. Let's define an iteration counter i and initialize it with 1.\n\n2. In iteration i, let Oi = 1 if Ni >= 0.5, 0 otherwise. Also let Ni+1 = 2*(Ni - Oi/2), which is: Ni+1 = 2*Ni - Oi\n\n3. Increment the iteration counter i by one and continue with step 2 until Ni becomes 0 or the desired accuracy is reached.\n\nAn example:\n\n| i | Ni | Oi | Ni+1 |\n|---|------|----|---------------------|\n| 1 | 0.41 | 0 | 2 * 0.41 - 0 = 0.82 |\n| 2 | 0.82 | 1 | 2 * 0.82 - 1 = 0.64 |\n| 3 | 0.64 | 1 | 2 * 0.64 - 1 = 0.28 |\n| 4 | 0.28 | 0 | 2 * 0.28 - 0 = 0.56 |\n| 5 | 0.56 | 1 | 2 * 0.56 - 1 = 0.12 |\n| 6 | 0.12 | 0 | ... |\n\nSo the first 6 digits of 0.41 in base-2 are 0.011010.\n\nThe main point is that we didn't need much knowledge about the source representation. We didn't actually use that the source representation was base-10. 
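As a quick cross-check of the digit-extraction loop above (a software illustration of mine, not part of the converter hardware), a few lines of Python reproduce the example table:

```python
def to_base2_digits(x, n_digits):
    """Extract base-2 digits of 0 <= x < 1:
    O_i = 1 if N_i >= 0.5 else 0, then N_(i+1) = 2*N_i - O_i."""
    digits = []
    for _ in range(n_digits):
        o = 1 if x >= 0.5 else 0  # the comparator decision (O_i)
        digits.append(o)
        x = 2 * x - o             # double and conditionally subtract (N_(i+1))
    return digits

print(to_base2_digits(0.41, 6))  # [0, 1, 1, 0, 1, 0], i.e. 0.011010
```

Note that the loop body needs exactly the three operations named here: a comparison against 0.5, a doubling, and a conditional subtraction.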
We didn't even use that the representation is a number. What we did use is a method to decide whether a value is greater than 0.5 or not, a method to multiply a value by two, and a method to decrease a value by 1 if desired. If these functions are available in any source representation, we can do the conversion. All the described functionality is available in the analog domain, so there is a possibility to design a circuit that implements the above algorithm. This construct will be introduced in the next chapter.\n\n## The basic circuitry\n\nThe core of the algorithm described in the previous chapter is point 2. It can be considered as a black box, with one analog input (Ni), one analog output (Ni+1) and one digital output (Oi). For easier reference let's rename these signals as follows:\n\n| Old name | New name |\n|----------|----------|\n| Ni | AIN |\n| Ni+1 | AOUT |\n| Oi | DOUT |\n\nWe also have the transfer functions defined:\n\nDOUT = (AIN >= 0.5) ? 1 : 0\n\nAOUT = 2*AIN - DOUT\n\nWe can construct a plot for these functions:", null, "Bit-block's transfer function\n\nAnd we can easily create a circuit that implements those transfer functions:", null, "A simple bit-block implementation\n\nIn this design the operational amplifiers are considered to be ideal, except that their outputs saturate to the power rails. One possible interpretation of this circuit is a 1-bit A/D converter with a conversion error output amplified to full scale. Let's call this building block a bit-block. Next, we will use this block in larger constructs, so it is practical to have a symbol representing the above circuit:", null, "Bit-block symbol\n\nWe will also need a special track-and-hold circuit (a sample-and-hold circuit in its strict sense). Traditional T/H devices have two states. In state one (track) the input theoretically equals the output. In state two (hold) the output equals the last input value of the track state. We will need a different behavior. 
The output should change only on the state 1-2 transition and should keep its value constant in both state one and state two. The simplest implementation of this could be to cascade two traditional track-and-hold devices with inverted control inputs:", null, "A simple sample-and-hold circuit\n\nIn the following figures this symbol will refer to the above circuit:", null, "Sample-and-hold circuit symbol\n\nWith these components at hand we can now construct the complete circuit that implements the described algorithm, thus resembling an A/D converter. Actually, we can construct at least three basic structures. In the first case we use the strict algorithm definition above:", null, "Converter design A\n\nThis circuit will work as an MSB-first serial A/D converter. The analog input of the converter is In. The converter produces one bit on the output (Sout) for each clock pulse on Clk. The conversion starts with a one-clock-wide pulse on the Start pin. The switch is shown in the position it takes when Start is not active. This converter type provides more or less the same features to the outside world as a serial-output successive approximation converter. Note, however, that the converter's complexity is independent of the precision of the conversion. One consequence of this feature is that the precision (the number of bits per sample) can be adjusted by control only, and the same device can be used as an 8-bit or a 12-bit converter, for example.\n\nIf we unfold the loop of the algorithm and implement each iteration with a unique set of components, we get another layout:", null, "Converter design B\n\nThis circuit will provide a parallel output value on D[0..3] on each pulse on Clk. This output will be the converted value of In four clock cycles before. Naturally the circuit can be expanded to any number of bits. This circuit resembles a pipelined A/D converter scheme. 
The complexity of the converter, however, is proportional to the number of output bits (except for the digital delay line, which has a cubic complexity over the number of output bits), as opposed to the number of output values as in conventional flash converters.\n\nIf we sacrifice some performance for simplicity, we can leave most of the sample-and-hold circuits and D flip-flops out of the design. This will lead us to the third A/D converter implementation:", null, "Converter design C\n\nThis converter scheme will produce one set of digital outputs on D[0..3] for each clock pulse on Clk. The output will correspond to the analog value on In at the time of the last clock pulse. Thus this converter behaves like a flash converter with no pipelining effect.\n\n### Performance considerations\n\nDesign A has the unique feature of producing as many bits as desired. It also produces direct serial output, compatible with many current DSPs' serial ports. It can be used with such intelligent devices with nearly no additional logic. Its maximal conversion speed is determined by the delay of the bit-block and the settling time of the sample-and-hold circuit. Of course, it is also determined by the number of required bits:\n\nTmin = n * (Tbb + Tsh), where n is the number of output bits per sample, Tbb is the delay of the bit-block and Tsh is the settling time of the S/H circuit.\n\nIts minimal conversion speed is determined by the required precision and the droop (fall) of the S/H circuit's output. The circuit's precision is affected by the precision of the bit-block and the precision of the S/H circuits used.\n\nDesign B's speed is determined by the delay of the bit-block and the settling time of the S/H circuit. Digital delay lines will probably be much faster than S/H circuits, so their delay isn't a factor.\n\nTmin = Tbb + Tsh\n\nThis converter will perform a complete conversion within this time, so it is n times faster than design A. 
The minimal conversion time is determined by the same features as for design A, but it is also dependent on the output word length (the number of cascaded S/H circuits). This means that this converter cannot be used at such slow conversion rates as design A. Its precision is affected by the precision of the bit-block and the S/H circuits, but it also depends upon the similarity of the many bit-blocks and S/Hs used.\n\nDesign C has a maximum conversion speed between designs A and B. Its conversion time is basically determined by the sum of the delays of the bit-blocks:\n\nTmin = n * Tbb + Tsh\n\nIts minimal conversion time is equal to design A's, because both are determined by the fall time of the sample-and-hold circuit. The precision of this implementation is a function of the precision of one S/H circuit only, and of the precision and similarity of the bit-blocks. As a result its precision is also between design A and B.\n\nWe can summarize the main features in the following table:\n\n| | Complexity | Speed | Accuracy |\n|----------|------------|--------|----------|\n| Design A | low | low | high |\n| Design B | high | high | low |\n| Design C | medium | medium | medium |\n\n## Derivatives\n\n### Direct base-n converters\n\nAs mentioned earlier, the bit-block can be considered as a one-bit A/D converter with an amplified conversion error output. With this in mind, one can generalize the bit-block to other than base-2 converters:", null, "Base-n bit-block\n\nThe resistor values R1 and R2 can be calculated from the criterion that AOUT's span must be equal to AIN's span. Naturally, the digital output DOUT now consists of more than one wire. You can use this bit-block in any of the three basic designs. One benefit is that you convert more than one bit in one step. Another feature is that you are not tied to base-2 any longer. You can, for example, create a decadic flash converter as the A/D converter stage of the bit-block (9 comparators) and thus get direct base-10 (for example BCD-coded) output. 
As an example, let's see the transfer functions of a base-3 bit-block:", null, "Base-n bit-block's transfer functions\n\n### Continuous-time converters\n\nWith a further modification of the transfer function of the bit-blocks, we can construct a converter that has a continuous-time analog and a discrete-time digital domain. To achieve this, we need to use a special coding of numbers, called Gray code. This coding has the feature that neighboring codes differ in only one digit. It is widely used in places where asynchronous signals have to be sampled in a synchronous part of the system, like in positional encoders or for signals crossing clock domains. A Gray-coded number can easily be created from its base-2 representation: if a bit is one, then invert all lower bits of the number. Starting from the MSB and applying this modification to all bits downwards, one gets the Gray-coded version of the original number. A small modification to the transfer characteristics of the bit-block can give us the same result:", null, "Gray-coded bit-block's transfer function\n\nNote that the modified scheme does the same thing. If the bit-block's digital output is one, the analog output, and thus all lower bits, are inverted. Also, the analog output's transfer function has become continuous. This is another implementation of the main idea behind Gray code, that is, a little change in the represented value causes a little change in the bits coding that value. A possible implementation using ideal but saturating operational amplifiers (same as before) would be like this:", null, "A simple Gray-coded bit-block implementation\n\nUsing this modified bit-block, you can leave out even the last S/H circuit from converter design C. 
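The Gray-coding rule just described (scan from the MSB down, and wherever a bit is 1, invert all lower bits) is easy to verify in software. A small Python sketch of mine, checked against the usual closed form b XOR (b >> 1):

```python
def gray_invert_lower(b, nbits):
    """Binary-to-Gray as described in the text: starting from the MSB,
    whenever the (current) bit is 1, invert all bits below it."""
    for i in range(nbits - 1, -1, -1):
        if b & (1 << i):
            b ^= (1 << i) - 1  # invert all bits below bit i
    return b

# Equivalent to the standard one-liner for every 8-bit value:
assert all(gray_invert_lower(b, 8) == b ^ (b >> 1) for b in range(256))
```

Neighboring inputs indeed differ in a single output bit, which is what lets the digital side sample the code without a sample-and-hold in front of it.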
Omitting that last S/H gives us a continuous-time A/D converter, whose output can be sampled in the digital domain:", null, "Converter design D\n\nThe main benefit of this layout is that digital-domain sampling is much easier to do and much more accurate than an S/H circuit can be. Furthermore, this design is able to work at arbitrarily low conversion frequencies, which none of the previous designs could do. There is also a possibility to create \"semi Gray-coded\" base-n converters using the same technique. However, in this case the base (n) should be an even number. For example, a base-4 output converter's semi Gray-coded version would have the following transfer functions:", null, "Gray-coded base-n bit-block's transfer functions\n\nNaturally, DOUT should now be Gray-coded too.\n\n### Logarithmic converters\n\nUntil now we've used the same bit-block for generating all digits of the converted number. If we use different bit-blocks at different bit positions, we can create converters with transfer functions other than linear. For example, we can construct a direct-logarithmic A/D converter. In the following, this technique will be introduced with a 3-bit logarithmic converter which can convert an input value from 1 mV to 256 mV. The output will be log2(In/[mV]) in our example. For a three-bit converter the output values will correspond to the following input ranges:\n\n| input value range (mV) | output value (binary) |\n|------------------------|-----------------------|\n| (1;2] | 000 |\n| (2;4] | 001 |\n| (4;8] | 010 |\n| (8;16] | 011 |\n| (16;32] | 100 |\n| (32;64] | 101 |\n| (64;128] | 110 |\n| (128;256] | 111 |\n\nFor the following explanation it's convenient to extend the input range of our converter to include the upper bound as well, in our case 256 mV. 
For the extended input range the converter's theoretical transfer characteristic should be like this:\n\n| input value range (mV) | output value (binary) |\n|------------------------|-----------------------|\n| (1;2] | 000 |\n| (2;4] | 001 |\n| (4;8] | 010 |\n| (8;16] | 011 |\n| (16;32] | 100 |\n| (32;64] | 101 |\n| (64;128] | 110 |\n| (128;256] | 111 |\n| 256 | 111 |\n\nThe most significant bit changes its state at 16. This means that the MSB's bit-block transfer functions should look like this:", null, "Logarithmic converter's bit-2 bit-block's transfer functions\n\nNote that the AIN-AOUT transfer function has two parts with different slopes. This complicates the implementation of the bit-block a bit, but it is still possible. It can also be noticed that it maps the range 1-16 and the range 16-256 into the same output range (16-256).\n\nThe second bit changes its state at the values 1, 4, 16, 64 of the original input. But we want to connect this bit's bit-block after the MSB's bit-block, which has the previously described mapping behavior. In detail, it maps 1 to 1, 4 to 32, 16 to 1, 64 to 32. The two 0-to-1 transitions are mapped to the same value (32) and the 1-to-0 transitions are mapped to either 16 or 256. Now we can construct the bit-block transfer functions for this bit:", null, "Logarithmic converter's bit-1 bit-block's transfer functions\n\nAs can be seen, these transfer functions are not the same as the previous ones. The third bit changes its state at every integral power of two. You can check that the first and the second bit-blocks together map all 1-to-0 transitions to 128 and all 0-to-1 transitions to either 64 or 256. The transfer functions can be designed as follows:", null, "Logarithmic converter's bit-0 bit-block's transfer functions\n\nHaving these bit-blocks, you can construct the complete 3-bit direct-logarithmic A/D converter. You only have to choose the proper configuration. Because the bit-blocks are different, it calls for design B or C, where each bit has its own dedicated bit-block. 
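The intended overall transfer characteristic is easy to cross-check numerically. The sketch below (my own illustration, working on numbers rather than on the analog signal path) encodes the table rows as output = ceil(log2(input)) - 1 for inputs in the (1;256] mV range:

```python
import math

def log_adc_code(x_mV):
    """3-bit direct-logarithmic code: the range (2^k; 2^(k+1)] maps to code k."""
    assert 1 < x_mV <= 256, "input outside the (1;256] mV range"
    return math.ceil(math.log2(x_mV)) - 1

print([log_adc_code(v) for v in (2, 3, 8, 100, 256)])  # [0, 1, 2, 6, 7]
```

For instance, 100 mV falls into the (64;128] row and yields code 6 (110 in binary), matching the table.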
There's one important note to make here: the input span (along with the output span) of each stage decreases; although the upper bound remains the same, the lower bound increases.\n\nAs a summary, here are the general equations needed to implement an n-bit logarithmic A/D converter.\n\nDesign inputs:\n\n• the lower bound of the input scale: A0 > 0\n\n• the number of bits of the converter: n\n\n• the quotient of the converter: q > 0.\n(This means that the i'th change of the output value corresponds to the input value A0 * q^i.)\n\nDesign outputs:\n\n• the full-scale input value is:", null, "• the upper bound of the input range is:", null, "• for bit-block i (generating bit i) the input scale's lower limit is:", null, "• for bit-block i the input scale's higher limit is:", null, "• for bit-block i the output switch point is at:", null, "## Conclusions\n\nThe basic idea explained at the beginning of this article can be the source of a group of A/D converters. They differ in complexity, speed and accuracy. They also differ in output code (Gray code, binary code, or base-n code including BCD). Their interface can be serial or parallel, and their transfer function can be linear or logarithmic. They can perform similarly to existing converter classes with reduced complexity. Some have unique features not customary in current techniques, like direct logarithmic output, continuous-time analog sampling or variable precision.\n\n## Future work\n\nEach of the building blocks and converter layouts should be tested against various real-world effects, like the non-ideal behavior of the operational amplifiers, non-perfect resistor values, parasitic effects, etc. Because designs A, B and C are all basically discrete-time analog circuits, they call for a switched-capacitor circuit implementation." ]
[ null, "http://andras.tantosonline.com/CIKK12_html_dcb027c.gif", null, "http://andras.tantosonline.com/CIKK12_html_m6d37a143.gif", null, "http://andras.tantosonline.com/CIKK12_html_cf3064f.gif", null, "http://andras.tantosonline.com/CIKK12_html_m1b4e50e8.gif", null, "http://andras.tantosonline.com/CIKK12_html_m5481e626.gif", null, "http://andras.tantosonline.com/CIKK12_html_m8ad0351.gif", null, "http://andras.tantosonline.com/CIKK12_html_m36e2d348.gif", null, "http://andras.tantosonline.com/CIKK12_html_56762c12.gif", null, "http://andras.tantosonline.com/CIKK12_html_67245936.gif", null, "http://andras.tantosonline.com/CIKK12_html_m3e49e601.gif", null, "http://andras.tantosonline.com/CIKK12_html_m6233422e.gif", null, "http://andras.tantosonline.com/CIKK12_html_6caacfd.gif", null, "http://andras.tantosonline.com/CIKK12_html_m46a843c4.gif", null, "http://andras.tantosonline.com/CIKK12_html_m1a7e3f9c.gif", null, "http://andras.tantosonline.com/CIKK12_html_166e7fce.gif", null, "http://andras.tantosonline.com/CIKK12_html_m2c52b0a6.gif", null, "http://andras.tantosonline.com/CIKK12_html_m6fc9c2fc.gif", null, "http://andras.tantosonline.com/CIKK12_html_m62828e7f.gif", null, "http://andras.tantosonline.com/CIKK12_html_m1cdd5389.gif", null, "http://andras.tantosonline.com/CIKK12_html_10327bee.gif", null, "http://andras.tantosonline.com/CIKK12_html_m9a94e40.gif", null, "http://andras.tantosonline.com/CIKK12_html_7e3277fe.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9081176,"math_prob":0.9566396,"size":15993,"snap":"2021-21-2021-25","text_gpt3_token_len":3371,"char_repetition_ratio":0.15348051,"word_repetition_ratio":0.028058361,"special_character_ratio":0.20802851,"punctuation_ratio":0.07828365,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98999137,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-05T21:53:06Z\",\"WARC-Record-ID\":\"<urn:uuid:d5b7e54a-9aa9-49a9-bb29-153a273df631>\",\"Content-Length\":\"38845\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3fa94a28-1ce3-4f33-bf06-ba7ff173a029>\",\"WARC-Concurrent-To\":\"<urn:uuid:abc55106-cb6c-473b-abce-6bddc5527a11>\",\"WARC-IP-Address\":\"35.209.132.195\",\"WARC-Target-URI\":\"http://andras.tantosonline.com/bit-block.htm\",\"WARC-Payload-Digest\":\"sha1:QN5IIQOMJZ2FL3Z4P4RQI3RU6TWWQD34\",\"WARC-Block-Digest\":\"sha1:RNDNSWH34XSIH7B3OHS727JSHUGCKZOC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988696.23_warc_CC-MAIN-20210505203909-20210505233909-00190.warc.gz\"}"}
https://www.komal.hu/feladat?a=honap&l=en&h=200509&t=200509
[ "", null, "Mathematical and Physical Journal\nfor High Schools\nIssued by the MATFUND Foundation\n\n# KöMaL Problems in Mathematics, September 2005\n\nPlease read the rules of the competition.\n\n", null, "## Problems with sign 'K'\n\nDeadline expired on October 10, 2005.\n\nK. 43. Two students are talking after class. ``What is the average of your computer science marks in September?''\n\n-- ``4.6 exactly.''\n\n-- ``That is impossible. The school year has just started, you cannot have so many marks yet.''\n\nWhat may the student in doubt have thought of? (Grades in Hungary vary from 1 to 5.) (Based on the idea of\n\n(6 pont)\n\nsolution (in Hungarian), statistics\n\nK. 44. Determine the acute angles of a right-angled triangle, given that it can be divided into three isosceles triangles as shown in the Figure.", null, "(6 pont)\n\nsolution (in Hungarian), statistics\n\nK. 45. In how many different ways is it possible to place a king and a rook on the chessboard so that neither attacks the other? (The fields of the chessboard are labelled by combinations of letters and numbers in the conventional way. Two configurations count as different if at least one of the two chessmen is placed on a different field in the two cases.)\n\n(6 pont)\n\nsolution (in Hungarian), statistics\n\nK. 46. The value of a gold artefact in Aurum is proportional to the square of its mass. A 100-dollar piece is stolen by thieves. The thieves cut it up into smaller pieces of equal mass to prepare pendants of them, whose total value is 10 dollars. The pendants are bought up by a jeweller who assembles them into bracelets (of not necessarily the same mass). How much are the individual bracelets worth, given that each bracelet is made of a whole number of pendants, and their total value is 46 dollars?\n\n(6 pont)\n\nsolution (in Hungarian), statistics\n\nK. 47. 
There are two kinds of people living on an island: the good and the bad. The good always tell the truth and the bad always lie. Naturally, every inhabitant of the island is either a boy or a girl. Here is a conversation of two young people on the island:\n\nA: ``If I am good, B is bad.''\n\nB: ``If I am a boy, A is a girl.''\n\nFind out whether each of them is good or bad and what sex they belong to.\n\n(6 pont)\n\nsolution (in Hungarian), statistics\n\nK. 48. Bob has three stencils for drawing circles. The areas of the circles are 6, 15 and 83 cm2. Bob wants to draw a few circles that have a total area of 220 cm2. How many of each should he draw? (Suggested by D. Szilágyi, Budapest)\n\n(6 pont)\n\nsolution (in Hungarian), statistics", null, "## Problems with sign 'C'\n\nDeadline expired on October 17, 2005.\n\nC. 815. The product of the real numbers a and b is 1, and", null, "Find the values of a and b.\n\n(5 pont)\n\nsolution (in Hungarian), statistics\n\nC. 816. The price of Aunt Margaret's favourite chocolate has gone up by 30%, while her pension is only raised by 15%. By what percentage will Aunt Margaret's chocolate consumption decrease if she can only spend 15% more on chocolate?\n\n(5 pont)\n\nsolution (in Hungarian), statistics\n\nC. 817. Having calculated that 6² + 8 = 44, Clare noticed that 66² + 88 = 4444 was also true. Is it true for every n that", null, "?\n\n(5 pont)\n\nsolution (in Hungarian), statistics\n\nC. 818. A square tablecloth is laid on a round table so that their centres coincide. The areas of the circle and the square are equal. What percentage of the area of the tabletop is covered by the cloth?\n\n(5 pont)\n\nsolution (in Hungarian), statistics\n\nC. 819. A fly is sitting at the centre K of a regular hexagon ABCDEF, there is another fly at vertex B and a spider sitting at vertex A. The flies start crawling from B towards C and from K towards E simultaneously at the same speed. (The spider stays in place.) 
Show that the three of them form an equilateral triangle at every time instant.\n\n(5 pont)\n\nsolution (in Hungarian), statistics", null, "## Problems with sign 'B'\n\nDeadline expired on October 17, 2005.\n\nB. 3832. P is an arbitrary point of the hypotenuse AB of a right-angled triangle ABC. The foot of the altitude drawn from vertex C is C1. The projection of P onto the leg AC is A1, and its projection onto the leg BC is B1.\n\na) Prove that the points P, A1, C, B1, C1 lie on a circle.\n\nb) Prove that the triangles A1B1C1 and ABC are similar.\n\n(3 pont)\n\nsolution, statistics\n\nB. 3833. Given the points A, B, C and D on the plane, construct a circle passing through A and B, such that the tangents drawn to it from C and D are equal.\n\n(3 pont)\n\nsolution (in Hungarian), statistics\n\nB. 3834. What is the largest value of the whole number n such that with the help of two appropriately chosen weights and an equal-arm balance it is possible to determine the mass of every object that weighs a whole number of kilograms from 1 to n? (It is allowed to carry out as many measurements as needed, using only the two weights and the object to be weighed, and neither the weights nor the objects to be weighed are allowed to be cut into parts.)\n\n(5 pont)\n\nsolution (in Hungarian), statistics\n\nB. 3835. This spring there were three Hungarian teams among the best eight in the EHF women's handball championship. When the eight teams were paired randomly, each Hungarian team found that they were playing an opponent from a foreign country. What was the probability of this?\n\n(3 pont)\n\nsolution, statistics\n\nB. 3836. Represent on the coordinate plane the values of the number pair p, q such that the equation x² - 2px + q = 0\n\na) has two roots;\n\nb) is satisfied by the number 2;\n\nc) is satisfied by the single number 2.\n\n(4 pont)\n\nsolution (in Hungarian), statistics\n\nB. 3837. 
Let P and Q, respectively, denote the centres of the squares ABDE and BCGH drawn on the sides AB and BC of the triangle ABC outwards. The midpoints of the sides AC and DH are R and S, respectively. Show that the points P, Q, R and S are the vertices of a square.\n\n(4 pont)\n\nsolution (in Hungarian), statistics\n\nB. 3838. The binary form of the positive integer A consists of n digits of 1. Prove that the sum of the digits of the number nA in binary notation is n.\n\n(4 pont)\n\nsolution (in Hungarian), statistics\n\nB. 3839. The bisector of the angle A of triangle ABC intersects the side BC at D. The lines b and c drawn through the points B and C, respectively, are parallel to each other and equidistant from A. Let M and N denote the points of the lines b and c such that AB bisects the line segment DM and AC bisects the line segment DN. Prove that DM = DN.\n\n(5 pont)\n\nsolution (in Hungarian), statistics\n\nB. 3840. It is known that the planes of the faces of a tetrahedron divide the space into 15 parts. What is the largest possible number of these parts that a line may pass through?\n\n(4 pont)\n\nsolution (in Hungarian), statistics\n\nB. 3841. The problem below appears in the article ``Szalonpóker'' (Hungarian title) of the current issue. Suppose that a deck of 52 cards is shuffled in the same way. How many times does it need to be shuffled in order to get back the initial order? Solve the problem also for the case when the shuffling starts with the bottom card of the deck on the right, that is, when the card originally in the 26th place is put at the bottom of the new deck.\n\n(Little John and Old Firehand drop in at the famous casino of Black Jacky to play cards. They play with a deck of 32 cards numbered 1 to 32. Before they agree on the rules, Black Jacky shuffles the cards as follows: He places the deck on the table, removes the top 16 cards and puts them on the table, to the right of the remaining deck, without turning them over. 
Then he forms one deck of them, with the bottom card of the deck on the left lying at the bottom, followed by alternating cards from the two decks. Then he repeats the procedure several times with the deck obtained in this way. Little John is sure that this way of shuffling the cards is not fair. Show that repeating the procedure several times will lead to a surprising result.)\n\n(4 pont)\n\nsolution (in Hungarian), statistics", null, "## Problems with sign 'A'\n\nDeadline expired on October 17, 2005.\n\nA. 377. The inscribed circle of the triangle ABC touches the side AB at C1, the side BC at A1, and the side CA at B1. It is known that the line segments AA1, BB1 and CC1 pass through a common point. Let N denote that point. Draw the three circles that pass through N and touches two of the sides. Prove that the six points of tangency are concyclic.\n\n(5 pont)\n\nsolution, statistics\n\nA. 378. Does there exist a function", null, "such that f(x)=-f(y) whenever x and y are different rationals and xy=1 or x+y", null, "{0,1}?\n\n(5 pont)\n\nsolution, statistics\n\nA. 379. Find all real numbers", null, "for which there exists a non-zero polynomial P, such that", null, "for all n. List all such polynomials for", null, "=2.\n\n(5 pont)\n\nsolution, statistics\n\n### Upload your solutions above or send them to the following address:\n\nKöMaL Szerkesztőség (KöMaL feladatok),\nBudapest 112, Pf. 32. 1518, Hungary" ]
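The shuffle in problem B. 3841 is easy to experiment with on a computer. The sketch below is one reading of the described procedure (top half moved to the right, piles merged from the bottom up, starting with the bottom card of the left pile) and simply counts how many repetitions restore a deck of n cards to its original order; the interleaving model and function names are my own, and this is an illustration, not a solution write-up.

```python
# Simulate the shuffle described in B. 3841 and count how many repetitions
# return a deck of n cards to its original order. The interleaving model is
# an assumption based on one reading of the problem text.
def shuffle(deck):
    half = len(deck) // 2
    right, left = deck[:half], deck[half:]   # top half goes to the right pile
    merged = []
    # Build the new deck from the bottom up: the bottom card of the left
    # pile first, then alternate cards from the two piles.
    for l, r in zip(reversed(left), reversed(right)):
        merged += [l, r]
    return merged[::-1]                      # store the deck top-first again

def order(n):
    start = list(range(1, n + 1))
    deck, k = shuffle(start), 1
    while deck != start:
        deck, k = shuffle(deck), k + 1
    return k

print(order(32), order(52))  # → 5 8
```

Under this model the shuffle is a perfect "out" faro shuffle (the top and bottom cards never move), which is why the order comes out so small.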
https://www.colorhexa.com/42e7b7
# #42e7b7 Color Information

In an RGB color space, hex #42e7b7 is composed of 25.9% red, 90.6% green and 71.8% blue. In a CMYK color space, it is composed of 71.4% cyan, 0% magenta, 20.8% yellow and 9.4% black. It has a hue angle of 162.5 degrees, a saturation of 77.5% and a lightness of 58.2%. #42e7b7 color hex could be obtained by blending #84ffff with #00cf6f. The closest websafe color is #33ffcc.

- RGB color chart: R 26, G 91, B 72
- CMYK color chart: C 71, M 0, Y 21, K 9

#42e7b7 color description: bright cyan - lime green.

# #42e7b7 Color Conversion

The hexadecimal color #42e7b7 has RGB values of R:66, G:231, B:183 and CMYK values of C:0.71, M:0, Y:0.21, K:0.09. Its decimal value is 4384695.

| Format | Value |
| --- | --- |
| Hex triplet | 42e7b7 `#42e7b7` |
| RGB decimal | 66, 231, 183 `rgb(66,231,183)` |
| RGB percent | 25.9, 90.6, 71.8 `rgb(25.9%,90.6%,71.8%)` |
| CMYK | 71, 0, 21, 9 |
| HSL | 162.5°, 77.5, 58.2 `hsl(162.5,77.5%,58.2%)` |
| HSV / HSB | 162.5°, 71.4, 90.6 |
| Websafe | 33ffcc `#33ffcc` |
| CIE-LAB | 82.767, -53.014, 11.359 |
| CIE-XYZ | 39.367, 61.725, 54.637 |
| CIE-xyY | 0.253, 0.396, 61.725 |
| CIE-LCH | 82.767, 54.217, 167.907 |
| CIE-LUV | 82.767, -62.82, 25.445 |
| Hunter-Lab | 78.565, -48.049, 13.764 |
| Binary | 01000010, 11100111, 10110111 |

# Color Schemes with #42e7b7

- Complementary color: #e74272
- Analogous colors: #42e765, #42c5e7
- Split complementary colors: #e76542, #e742c5
- Triadic colors: #e7b742, #b742e7
- Tetradic colors: #72e742, #b742e7, #e74272
- Monochromatic colors: #19c492, #1cdaa3, #2be4ae, #42e7b7, #59eac0, #6fedc8, #86f0d1

# Alternatives to #42e7b7

Below, you can see some colors close to #42e7b7. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.

Similar colors: #42e78e, #42e79c, #42e7a9, #42e7b7, #42e7c5, #42e7d3, #42e7e0

# #42e7b7 Preview

This text has a font color of #42e7b7.

``<span style="color:#42e7b7;">Text here</span>``

This paragraph has a background color of #42e7b7.

``<p style="background-color:#42e7b7;">Content here</p>``

This element has a border color of #42e7b7.

``<div style="border:1px solid #42e7b7;">Content here</div>``

CSS codes:

``.text {color:#42e7b7;}``
``.background {background-color:#42e7b7;}``
``.border {border:1px solid #42e7b7;}``

# Shades and Tints of #42e7b7

A shade is achieved by adding black to any pure hue, while a tint is created by mixing white into any pure color. In this example, #000202 is the darkest color, while #f0fdf9 is the lightest one.

Shade and tint variation: #000202, #03140f, #05251c, #073729, #094836, #0b5943, #0e6b50, #107c5d, #128e6a, #149f77, #16b184, #19c291, #1bd39e, #1fe3aa, #31e5b0, #42e7b7, #53e9be, #65ebc4, #76eecb, #88f0d2, #99f2d8, #aaf4df, #bcf6e5, #cdf9ec, #dffbf3, #f0fdf9

# Tones of #42e7b7

A tone is produced by adding gray to any pure hue. In this case, #949595 is the least saturated color, while #32f7be is the most saturated one.

Tone variation: #949595, #8c9d98, #84a59c, #7bae9f, #73b6a2, #6bbea6, #63c6a9, #5bcead, #52d7b0, #4adfb4, #42e7b7, #3aefba, #32f7be

# Color Blindness Simulator

Below, you can see how #42e7b7 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.

- Monochromacy: achromatopsia (0.005% of the population); atypical achromatopsia (0.001% of the population)
- Dichromacy: protanopia (1% of men); deuteranopia (1% of men); tritanopia (0.001% of the population)
- Trichromacy: protanomaly (1% of men, 0.01% of women); deuteranomaly (6% of men, 0.4% of women); tritanomaly (0.01% of the population)
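The CMYK and HSL figures quoted above follow from the raw RGB channels by the standard conversion formulas. A quick sketch (variable names are my own) that reproduces them for #42e7b7:

```python
# Reproduce the CMYK and HSL values quoted above for #42e7b7 from its
# RGB channels, using the standard conversion formulas.
r, g, b = (v / 255 for v in (0x42, 0xE7, 0xB7))

# RGB -> CMYK
k = 1 - max(r, g, b)
c, m, y = ((1 - v - k) / (1 - k) for v in (r, g, b))
print(round(c * 100, 1), round(m * 100, 1), round(y * 100, 1), round(k * 100, 1))
# → 71.4 0.0 20.8 9.4

# RGB -> HSL (for this color, green is the largest channel and lightness > 0.5,
# so only those branches of the general formula are shown)
mx, mn = max(r, g, b), min(r, g, b)
l = (mx + mn) / 2
s = (mx - mn) / (2 - mx - mn)
h = 60 * (2 + (b - r) / (mx - mn))
print(round(h, 1), round(s * 100, 1), round(l * 100, 1))
# → 162.5 77.5 58.2
```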
https://groups.io/g/Interferometry/wiki/3125/113302
For a paraboloid tested at its centre of curvature (CoC), with the test surface imaged onto the detector, the OPD measured in the pupil has the following amplitudes for the primary and secondary spherical aberration Wyant Zernike terms:

C8 = (D/lambda)*(1/(3072*(F/D)^3) - 1/(7*2^20*(F/D)^7))

C15 = (D/lambda)*(1/(5*2^22*(F/D)^7))

NOTES

1) These values were calculated using a reference sphere centred at the paraxial CoC with a radius of curvature equal to the paraxial RoC of the test surface.

2) These values do not correspond to an interferogram with a Zernike defocus term (C3) amplitude of zero.

3) The amplitudes of the various Zernike terms depend on the amount of defocus used in obtaining the interferogram. This effect is particularly important for highly aspheric wavefronts and large amounts of defocus.

D is the diameter of the parabolic mirror, F is the focal length, and lambda is the wavelength of light. They can be in any unit, but they must all be in the same unit, e.g. metres.

For anyone implementing this who wants to test their code (or spreadsheet formula), here is a sample using mm. For a 10 inch f/4 mirror tested with a 650 nm laser: D = 254, F = 1016, lambda = 0.00065. C8 then becomes 1.987552 waves (with only the first term of the formula, C8 is 1.987555) and C15 is 8.8E-9 waves.

For a conicoidal test surface with conic constant k, the corresponding null terms are:

C8 = -k*(D/lambda)*(1/(3072*(F/D)^3) - (1+k)/(65536*(F/D)^5))

C15 = -k*(1+k)*(D/lambda)*(1/(327680*(F/D)^5))
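To make the formulas easy to check against the sample, here is a small sketch (Python, with naming of my own choosing) that evaluates the paraboloid null terms and reproduces the quoted C8 value for the 10 inch f/4 mirror:

```python
# Evaluate the paraboloid null terms C8 and C15 (in waves) from the
# formulas above. D, F and lam must all share the same length unit.
def null_terms(D, F, lam):
    f = F / D  # focal ratio F/D
    C8 = (D / lam) * (1 / (3072 * f**3) - 1 / (7 * 2**20 * f**7))
    C15 = (D / lam) * (1 / (5 * 2**22 * f**7))
    return C8, C15

# 10 inch f/4 mirror, 650 nm laser, everything in mm:
C8, C15 = null_terms(D=254.0, F=1016.0, lam=0.00065)
print(round(C8, 6))  # → 1.987552 waves, matching the sample above
```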
https://www.assignmentexpert.com/blog/how-to-find-derivatives-using-chain-rule/
# How to Find Derivatives Using Chain Rule?

If you're asked to find the derivative of some function, sometimes it's not a trivial thing to do. If you're lucky and the function is among the common functions, you can write down the required derivative right away, using a table of derivatives if necessary. But what if the given expression is more complicated? That's what this section is about. Let's discuss the chain rule for derivatives, also called the outside-inside rule. This rule is needed when we want to find the derivative of a non-common function which is in fact a composition of two or more common functions.

## Composition of functions

First of all, what is a composition of functions? It means a function of another function. For example, consider the following function:

$$h(x)=g(f(x))$$

The composite function $h(x)$ is a composition of the functions $f$ and $g$. To obtain the value of the function $h$ at some point $x$, we first apply the function $f$; then we apply $g$ to the obtained result. In real analysis, it is proved that if the function $f$ has a derivative at the point $x_0$ and the function $g$ has a derivative at the point $f(x_0)$, then the composite function $h(x)=g(f(x))$ also has a derivative at the point $x_0$.

## Chain rule for two functions

The chain rule states that the derivative of the composite function $h(x)$ is found as follows:

$$(h(x))'=(g(f(x)))'=g'(f(x))\cdot f'(x)$$

Let's find out what this rule implies. Here we have two functions $g$ and $f$, and the function $f$, so to speak, is enclosed in the function $g$. Therefore we'll call the function $g$ an outside function and the function $f$ an inside function. Note that these are not scientific terms but just an informal way to refer to the functions for better understanding. Thus, the chain rule states that the derivative of a composite function equals the derivative of the outside function evaluated at the inside function, multiplied by the derivative of the inside function.

## Example: applying the chain rule to find a derivative

Consider the following example:

$$h(x)=\sin{(2x+3)}$$

We see that under the sine there is not simply "$x$" but the polynomial $2x+3$, so we can't find the derivative right away using a table of derivatives for standard functions. We also note that we can't apply the rules for products, quotients or sums/differences, because we can't "divide" the sine into parts. What we mean is that the following is wrong:

$$\sin{(2x+3)}=\sin{(2x)}+\sin{(3)}$$

Thus, we can say for sure that the given function $h(x)=\sin{(2x+3)}$ is composite. Here the polynomial $2x+3$ is the inside function and the sine is the outside function. The first thing you need to do when dealing with a composite function is to find out which function is inside and which is outside. Do not mix this up; be attentive. How do you determine this? There's a simple idea that you can apply mentally or on a draft. Imagine that you need to evaluate the given function $\sin{(2x+3)}$ at some point $x$ using a calculator. Let's pick $x=1$, for example; the particular value doesn't matter here. So what do you do first? Obviously, you'll need to calculate $2\cdot 1+3=5$, therefore the polynomial $2x+3$ is the inside function. Then you'll calculate $\sin{(5)}$, and therefore the sine is the outside function.

Now we can perform the differentiation. To apply the chain rule, we first need to find the derivative of the outside function $g(f)=\sin{(f)}$. Using the standard table of derivatives we find that $(\sin{x})'=\cos{x}$. In our case, instead of $x$ we should put $2x+3$ (the inside function is the argument for the outside function). Thus, we obtain:

$$g'(f(x))=\cos{(2x+3)}$$

Note that we place the polynomial $2x+3$ as the argument and it remains unchanged. Now we need to find the derivative of the inside function:

$$f'(x)=(2x+3)'=2$$

Finally, we substitute the obtained derivatives into the formula for the chain rule:

$$h'(x)=(g(f(x)))'=(\sin{(2x+3)})'=\cos{(2x+3)}\cdot 2=2\cos{(2x+3)}$$

## Chain rule in the case of more than two functions

We've considered the chain rule for the case of exactly one function inside and one outside. But how are we going to find the derivative of something like this?

$$h(x)=e^{\cos{(\ln{x})}}$$

Let's find out which function is inside here. As before, pick an arbitrary value of $x$, for example $x=2$. The first expression to calculate would be $\ln{x}$, so the logarithm is inside. Next in line is the cosine, and the outside function is the exponential.

Generally, the chain rule for the case of more than two functions can be written in the following manner. Suppose we have a composite function $h(x)$:

$$h(x)=f_n(f_{n-1}(\dots(f_1(x))))$$

Then, if the derivatives of the functions $f_k$, $k=1,\dots,n$ exist at the corresponding points, the derivative of the composite function $h(x)$ is:

$$h'(x)=f_n'(f_{n-1}(\dots f_1(x)\dots))\cdot f_{n-1}'(f_{n-2}(\dots f_1(x)\dots))\cdot\ldots\cdot f_1'(x)$$

## Chain rule for three functions

In the case of three functions, the formula for the derivative of the composite function $h(x)=y(g(f(x)))$ looks like this:

$$h'(x)=(y(g(f(x))))'=y'(g(f(x)))\cdot g'(f(x))\cdot f'(x)$$

For our example

$$h(x)=e^{\cos{(\ln{x})}}$$

in terms of the formula above we have

$$y=e^g,\quad g=\cos{f},\quad f=\ln{x}$$

As we've found out, the outside function is the exponential. Using the table of derivatives for standard functions we get $(e^{x})'=e^x$. So the derivative of the outside function is:

$$y'(g(f(x)))=e^{\cos{(\ln{x})}}$$

Remember that we evaluate the derivative of the outside function using the whole expression under the outside function as the argument; in our example it's $\cos{(\ln{x})}$. Now we can find the derivative of the function $g(f)$. As we know, $(\cos{x})'=-\sin{x}$, so we obtain:

$$g'(f(x))=-\sin{(\ln{x})}$$

Finally, we need the derivative of the inside function $f(x)=\ln{x}$. It's one of the standard functions:

$$f'(x)=(\ln{x})'=\frac{1}{x}$$

Now we can apply the chain rule:

$$h'=(e^{\cos{(\ln{x})}})'=e^{\cos{(\ln{x})}}\cdot (-\sin{(\ln{x})})\cdot \frac{1}{x}=-\frac{e^{\cos{(\ln{x})}} \sin{(\ln{x})}}{x}$$

and that's our answer. As you can see, the chain rule in the case of three functions is hardly more complicated than for just two of them.

## Summing up

If you are asked to find the derivative of a composite function, apply the chain rule for derivatives. To do so, follow these steps:

1. Determine how many enclosed functions there are in the given expression and in which order they come: name the inside and outside functions. Use the "calculator approach" described above if needed.

2. Find the derivative of the outside function from the table of derivatives, using the whole enclosed expression as the argument (i.e. substitute it instead of "$x$" in the formula for the derivative from the table).

3. Proceed likewise if there is more than one outside function.

4. Find the derivative of the inside function. This time the argument is simply "$x$" (or some other symbol per the initial notation).

5. Substitute all the derivatives into the formula for the chain rule.

Still have questions on the chain rule? Stuck with your calculus homework? Confused with other math stuff? Ask questions, we are ready to help.

Filed under Math.
https://us.edugain.com/questions/ABCD-is-a-parallelogram-where-P-and-R-are-the-midpoints-of-sides-BC-and-DC-respectively-If-the-line-PR-intersects-the-diagonal-A
### $ABCD$ is a parallelogram where $P$ and $R$ are the midpoints of sides $BC$ and $DC$ respectively. If the line $PR$ intersects the diagonal $AC$ at $Q$, prove that $AC = 4CQ$.

Step by Step Explanation:
1. Let us draw a figure for the situation given in the question. Also, join $BD$, intersecting $AC$ at $O$.
2. It is given that $P$ is the midpoint of $BC$ and $R$ is the midpoint of $DC$. Thus, in triangle $CBD$, by the midpoint theorem, $PR \parallel BD$.
3. As \begin{aligned} & PR \parallel BD \\ \implies & PQ \parallel BO \text{ and } QR \parallel OD. \end{aligned} Now, in triangle $BCO$, we have $PQ \parallel BO$ and $P$ is the midpoint of $BC$. By the converse of the midpoint theorem, $Q$ is the midpoint of $OC$. $$\implies 2CQ = OC$$
4. As the diagonals of a parallelogram bisect each other, $AO = OC$. Thus, \begin{aligned} & AC = AO + OC \\ \implies & AC = 2\,OC && [\because AO = OC] \\ \implies & AC = 2 \times 2CQ && [\because 2CQ = OC] \\ \implies & AC = 4CQ \end{aligned}
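The claim $AC = 4CQ$ is also easy to sanity-check numerically. A small sketch with arbitrary sample coordinates (any non-degenerate parallelogram should give the same ratio):

```python
# Numeric sanity check of AC = 4*CQ for a sample parallelogram ABCD, with
# P, R the midpoints of BC and DC, and Q the intersection of PR with AC.
def midpoint(u, v):
    return ((u[0] + v[0]) / 2, (u[1] + v[1]) / 2)

A, B, D = (0.0, 0.0), (4.0, 1.0), (1.0, 3.0)
C = (B[0] + D[0] - A[0], B[1] + D[1] - A[1])   # parallelogram: C = B + D - A

P, R = midpoint(B, C), midpoint(D, C)

# Intersect line PR with diagonal AC: solve P + t*(R-P) = A + s*(C-A)
# for t and s by Cramer's rule.
rx, ry = R[0] - P[0], R[1] - P[1]
cx, cy = C[0] - A[0], C[1] - A[1]
det = rx * (-cy) - (-cx) * ry
t = ((A[0] - P[0]) * (-cy) - (-cx) * (A[1] - P[1])) / det
Q = (P[0] + t * rx, P[1] + t * ry)

dist = lambda u, v: ((u[0] - v[0])**2 + (u[1] - v[1])**2) ** 0.5
print(round(dist(A, C) / dist(C, Q), 6))  # → 4.0
```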
https://knexcomputer.blogspot.com/2007/01/how-it-works-part-1.html
### Ball Theory

Part 1: The Bit

The position of the ball determines its value. The left column is zero, the right is one, and the middle is indeterminate. Balls pass through the middle column only when they are changing state.

We can get a ball to change its state by removing walls and adding ramps. This animation shows a bit being reset to zero.

Part 2: The NOT Gate

The NOT gate is an inverter; it outputs the opposite of its input. To do this with balls, we need two ramps. The ball's velocity allows it to make the jump over the middle.

We need to design this carefully because balls enter at a variety of speeds and angles. Having small openings enforces a certain trajectory. Otherwise a ball could bounce to the wrong side, causing an error.

Part 3: The AND Gate

Now things start to get tricky. A normal AND gate has two inputs and a single output that is active only when both inputs are active. Since we are using balls, we need to have the same number of inputs as outputs: if two balls go in, two balls had better come out.

The simplest solution we thought of is to have a switch where one ball sets the path of the other ball. We call this a 'stated' AND gate, since the switch can flip between its two different states.

The animation above shows the gate in action. It has two inputs (A, B) and two outputs (A, A&B). The A ball must go first, flipping the switch, which affects the one's path for B. That way the B ball becomes A&B, while the A ball remains A.

Part 4: The XOR Gate

An XOR gate is active when either of its inputs is active, but not both. It can be thought of as a NOT gate which turns on if the A input is on.

Part 5: The Waiting AND Gate

If we did everything with stated gates, we would need extra balls to create copies of values. To avoid extra complication, we use a different type of switch. This forces the A ball to wait until the B ball arrives.

The animation below is the 'waiting' AND gate. The one's path of the A ball is blocked by a counter-balanced switch. If the A ball is a 1, it has to wait until the B ball drops before becoming A&B. The B ball keeps its value.

The waiting AND introduces a new problem. If the gate is designed poorly, a ball can get stuck. This is known as a crash, and the system will not be able to recover.
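The gate behaviours described above are easy to cross-check as truth tables. A small sketch treating each ball as a bit (0 = left column, 1 = right column), with function names of my own choosing; the two-in/two-out gates return a pair of output bits:

```python
# Truth-table model of the ball gates: each ball is a bit, and each
# two-in/two-out gate returns a pair of output bits.
def not_gate(a):
    return 1 - a                 # invert: 0 -> 1, 1 -> 0

def stated_and(a, b):
    # The A ball passes through unchanged and sets the switch;
    # the B ball becomes A AND B.
    return a, a & b

def xor_gate(a, b):
    # A NOT gate on B that is enabled only when A is 1.
    return a, b ^ a

for a in (0, 1):
    for b in (0, 1):
        assert stated_and(a, b) == (a, 1 if (a and b) else 0)
        assert xor_gate(a, b) == (a, (a + b) % 2)
print("truth tables check out")
```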
https://www.mathworks.com/matlabcentral/answers/472671-create-a-double-variable-array?s_tid=prof_contriblnk
# Create a double variable array

James Upton on 21 Jul 2019
Commented: Rik on 21 Jul 2019

The following code won't work because my date1 variable array is a double while the P_D variable is a cell array. I need P_D to be a double array; all the numbers stored in P_D are doubles.

Error using plot
Invalid data argument.
Error in thethird (line 54)
plot(date1, P_D(:,1))

CODE:

P_D(i,2) = {Call};
P_D(i,3) = {Put};
end
date1 = datenum(date,'yyyy-mm-dd HH:MM:SS');
plot(date1, P_D(:,1))
datetick('x', 'mm-dd')

Rik on 21 Jul 2019
Judging from the use of cells in the first place, I would guess either of the other parameters is not a scalar double. If they are scalar doubles, then there is no need to add the complexity of a cell array.
https://www.thinbug.com/q/7622
[ "### Are the shift operators (<< >>) in C arithmetic or logical?\n\n#### 10 Answers:\n\n(As background for readers unfamiliar with the difference: a \"logical\" right shift by 1 bit moves every bit one place to the right and fills the leftmost bit with 0. An \"arithmetic\" shift preserves the leftmost (sign) bit of the original value. The difference becomes important when dealing with negative numbers.)\n\n``````signed int x1 = 5;\nassert((x1 >> 1) == 2);\nsigned int x2 = -5;\nassert((x2 >> 1) == -3);\nunsigned int x3 = (unsigned int)-5;\nassert((x3 >> 1) == 0x7FFFFFFD);\n``````\n\nWikipedia says that C/C++ \"usually\" implements an arithmetic shift on signed values.\n\n``````~0 >> 1\n``````\n\n``````~0U >> 1;\n``````\n\n``````int logicalRightShift(int x, int n) {\n    return (unsigned)x >> n;\n}\nint arithmeticRightShift(int x, int n) {\n    if (x < 0 && n > 0)\n        return x >> n | ~(~0U >> n);\n    else\n        return x >> n;\n}\n``````\n\n`````` x = 5\nx >> 1\nx = 2 ( x=5/2)\n\nx = 5\nx << 1\nx = 10 (x=5*2)\n``````\n\nHowever, C has only one right-shift operator, >>. Many C compilers choose which right shift to perform depending on the type of integer being shifted; signed integers are often shifted using an arithmetic shift, and unsigned integers are shifted using a logical shift.\n\nGCC does:\n\n1. for -ve -> arithmetic shift\n\n2. for +ve -> logical shift\n\n1. `<<` is the arithmetic left shift, i.e. the bitwise left shift.\n2. `>>` is the arithmetic or the bitwise right shift." ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.92968017,"math_prob":0.9992978,"size":1806,"snap":"2021-43-2021-49","text_gpt3_token_len":1514,"char_repetition_ratio":0.085460596,"word_repetition_ratio":0.0,"special_character_ratio":0.30952382,"punctuation_ratio":0.12195122,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9884763,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-08T22:28:42Z\",\"WARC-Record-ID\":\"<urn:uuid:be4ffa03-4dd9-4284-86bd-de94eb661088>\",\"Content-Length\":\"16770\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:11ef5f8a-0954-4e63-bb2c-a18305ad128d>\",\"WARC-Concurrent-To\":\"<urn:uuid:6a3e6704-0ba0-435e-8940-c0ef03b51ca6>\",\"WARC-IP-Address\":\"175.24.230.9\",\"WARC-Target-URI\":\"https://www.thinbug.com/q/7622\",\"WARC-Payload-Digest\":\"sha1:6CEHLKCRNZTTDRD45E5TCLD4QUFK54DS\",\"WARC-Block-Digest\":\"sha1:Y4SZ4RCZFODVNKJYCPHGFXME3NJDYAWI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363598.57_warc_CC-MAIN-20211208205849-20211208235849-00421.warc.gz\"}"}
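The two C helper functions above can be mirrored in Python, whose `>>` on negative integers is always arithmetic (sign-preserving); a logical shift has to be emulated by masking to a fixed width first. A sketch assuming 32-bit values:

```python
WIDTH = 32
MASK = (1 << WIDTH) - 1  # 0xFFFFFFFF: keeps the low 32 bits

def arithmetic_right_shift(x: int, n: int) -> int:
    # Python's >> already preserves the sign, like C's shift on
    # signed ints under most compilers (e.g. GCC).
    return x >> n

def logical_right_shift(x: int, n: int) -> int:
    # Reinterpret x as an unsigned 32-bit value, then shift:
    # the vacated high bits are filled with zeros.
    return (x & MASK) >> n

print(arithmetic_right_shift(-5, 1))  # -3, matching the C example
print(logical_right_shift(-5, 1))     # 2147483645 == 0x7FFFFFFD
```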
https://math.wikia.org/wiki/Order_Of_Operations
[ "The order of operations determines which operations must be performed before others. Parentheses $()$ come first. Then come exponents $a^b$. After that come multiplication $*$ and division $/$. Finally, you add and subtract $+-$. There are also many other functions, such as $sin()$, and other grouping symbols $[]{}$. To remember this rule, say PEMDAS (Parentheses, Exponents, Multiplication, Division, Addition, Subtraction)." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8168796,"math_prob":0.9725621,"size":515,"snap":"2019-51-2020-05","text_gpt3_token_len":117,"char_repetition_ratio":0.14285715,"word_repetition_ratio":0.0,"special_character_ratio":0.2524272,"punctuation_ratio":0.13953489,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9839037,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-13T23:41:08Z\",\"WARC-Record-ID\":\"<urn:uuid:4d03be40-4d1d-455c-8700-3e7e3c7be647>\",\"Content-Length\":\"75630\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9a74441d-124d-4d09-af68-5e994040ad70>\",\"WARC-Concurrent-To\":\"<urn:uuid:651580ab-496a-449e-ae1a-f8c89f8ff6b8>\",\"WARC-IP-Address\":\"151.101.0.194\",\"WARC-Target-URI\":\"https://math.wikia.org/wiki/Order_Of_Operations\",\"WARC-Payload-Digest\":\"sha1:QJXXTNNSM76572GL2O3YT6A5ZHAZSLIU\",\"WARC-Block-Digest\":\"sha1:M4QSTJ3F7PHLQ27Z3L2QVRSPW2SYYO6R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540569332.20_warc_CC-MAIN-20191213230200-20191214014200-00034.warc.gz\"}"}
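Python's operator precedence follows the same PEMDAS ordering described above, which a few expressions make concrete:

```python
# Multiplication binds tighter than addition: 3 * 4 happens first.
print(2 + 3 * 4)    # 14, not 20

# Parentheses override the default ordering.
print((2 + 3) * 4)  # 20

# Exponents outrank multiplication: 3 ** 2 happens first.
print(2 * 3 ** 2)   # 18, not 36
```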
https://www.encyclopediaofmath.org/index.php?title=Number
[ "# Number\n\nA fundamental concept in mathematics, which has taken shape in the course of a long historical development. The origin and formulation of this concept occurred simultaneously with the birth and development of mathematics. The practical activities of mankind on the one hand, and the internal demands of mathematics on the other, have determined the development of the concept of a number.\n\nThe need to count objects led to the origin of the notion of a natural number. All nations that have forms of writing had mastered the concept of a natural number and had developed some counting system. In its early stages, the origin and development of the concept of a number can be judged only from indirect data provided by linguistics and ethnography. Primitive man clearly had no need of counting skills to determine whether or not a given collection was complete.\n\nLater, special names were given to definite objects or events that people often encountered. Thus, in the language of certain nations there were words for such concepts as \"three men\", or \"three boats\", but there was no abstract concept \"three\". In this way, probably, there arose comparatively short number series, used for the identification of individual people, individual boats, individual coconuts, etc. At this stage there was no such thing as an abstract number, and numbers were merely names.\n\nIn some primitive cultures, the necessity to communicate information about the numerical size of this or that collection has led to distinguishing certain standard collections, usually consisting of parts of the human body. In such a system of counting, each part of the body has a definite order and name. Whenever parts of the body are insufficient, bundles of sticks are used. 
The same purpose is served by pebbles, shells, notches on a tree or rock, lines on the ground, strings with knots, etc.\n\nThe next stage in the development of the notion of a number is connected with the transition to counting in groups; pairs, tens, dozens, etc. There arise so-called nodal numbers, and at the same time the concept of an arithmetic operation, which is reflected in the names of numbers. Definite counting methods take shape, special devices are used for counting, and numerical notations emerge. Numbers are separated from the objects being counted, and become abstract. Systems of representations of numbers begin to appear (cf. Numbers, representations of).\n\nThe process of formation of our modern representation system was exceptionally complicated. Only the final part of this process can be judged with definite authenticity. There are many known systems of representation. In Ancient Egypt there were several systems. In one of them there were special symbols for 1, 10, 100, 1000. Other numbers were represented by means of combinations of these symbols. The basic arithmetic operation in Ancient Egypt was addition. Well before 2000 B.C. the Babylonians used a base-60 representation system with the positional principle for writing numbers. They used only two symbols. The ancient Greeks used an alphabetic representation system, which was also used by the Slavs (cf. also Slavic numerals). In India, at the beginning of the new era (A.D.) there was a wide-spread oral positional decimal representation system, with several synonyms for zero (and other digits). A positional decimal representation system also arose there later. By the 8th century A.D. this system had spread as far as the Middle East. 
The Europeans were introduced to it in the 12th century.\n\nThe widening circle of objects to be counted, arising as a result of practical activities of people, and finally the inquisitiveness that is characteristic of mankind, gradually pushed back the limits of counting. The idea arose of the unbounded extension of the sequence of natural numbers, possibly attributable to the Greeks. One of Euclid's theorems states: \"There exist more than any given number of primes\" . Also, Archimedes tried to convince his contemporaries that it is possible to describe a number greater than \"the number of grains of sand in the world\" .\n\nFor the measurement of quantities, fractional numbers were necessary. Fractions were studied in Ancient Egypt and Babylon. Egyptian fractions were usually expressed in terms of aliquot fractions, i.e. fractions with numerator equal to 1. The Babylonians used base-60 fractions. The Chinese and the Indians were using ordinary fractions in the early centuries A.D., and were able to carry out all the arithmetic operations on them. Scholars in Central Asia, no later than the 10th century, used a base-60 positional counting system. This system was particularly widely used in astronomical calculations and tables. Traces of it have been passed on to us in the form of the units used in the measurement of time and angles. Decimal fractions were introduced at the beginning of the 15th century, and were widely used by the Samarkand mathematician Kashi (al'-Kashi). In Europe, decimal fractions became widespread following the publication of the book de Thiende (1585), written by S. Stevin. Before the introduction of decimal fractions, the Europeans had used the decimal system in practice to calculate the integer part of a number, but they used base-60 fractions or ordinary fractions for the fractional part.\n\nThe further development of the concept of number proceeded mainly along with the demands of mathematics itself. 
Negative numbers first appeared in Ancient China. Indian mathematicians discovered negative numbers while trying to formulate an algorithm for the solution of quadratic equations in all cases. Diophantus (3rd century) operated freely with negative numbers. They appear constantly in intermediate calculations in many of the problems in his Aritmetika. In the 16th century and 17th century, however, many European mathematicians did not appreciate negative numbers, and if such numbers occurred in their calculations they were referred to as false, or impossible. The situation changed in the 17th century, when a geometric interpretation of positive and negative numbers was discovered, as oppositely-directed segments.\n\nThe Babylonians had an algorithm for calculating the square root of a number to any accuracy. In the 5th century B.C., Greek mathematicians discovered that the side and diagonal of a square have no common measure. More generally, it turned out that two arbitrary, precisely-given segments are in general not commensurable. The Greek mathematicians did not start introducing new numbers. They avoided the above difficulty by creating a theory of ratios of segments that was independent of the concept of a number.\n\nThe development of algebra, and the techniques of approximate calculation, in connection with the demands of astronomy, led Arab mathematicians to extend the concept of a number. They began to consider ratios of arbitrary quantities, whether commensurable or not, as numbers. As Nasireddin (1201–1278) wrote: \"Each of these ratios can be called by a number, precisely equal to the number one when one term of the ratio agrees with the other term\" . European mathematics was developing in the same direction. Although G. 
Cardano in Practica Arithmeticae Generalis (1539) was still writing about irrational numbers as \"surds\" (from the Latin \"surdus\", \"deaf\"), and as \"impossible to perceive or to imagine\", Stevin in his l'Arithmétique (1585) stated that \"a number is that which is determined by an arbitrary quantity\" and that \"no numbers are absurd, irrational, irregular, inexpressible, or surd\". And finally I. Newton in his Arithmeticae Universalis (1707) gave the following definition: \"By a number we understand not so much a multiple of a unit as an abstract quantity associated in a systematic way to some other quantity of the same kind that is taken as a unit. Numbers arise in three forms: integer, fraction and irrational. An integer is that which can be measured by unity; a fraction is a multiple of a portion of unity; an irrational number is incommensurable with unity\". Related to the fact that, for Newton, a quantity can be either positive or negative, the numbers in his arithmetic can also be either positive, in other words \"greater than nothing\", or negative, in other words \"smaller than nothing\".\n\nImaginary numbers first appeared in the work of Cardano, Ars Magna (The Great Art, 1545). In solving the system of equations $x + y = 10$, $xy = 40$, he found the solutions $5 + \sqrt{-15}$ and $5 - \sqrt{-15}$. Cardano called these solutions \"purely negative\", and later \"sophisticatedly negative\". The first to see a \"real\" use of introducing imaginary numbers was R. Bombelli. In his Algebra (1572) he showed that the real roots of the equation $x^3 = px + q$, $p > 0$, $q > 0$, in the case $(q/2)^2 - (p/3)^3 < 0$, can be expressed in terms of radicals of imaginary numbers. Bombelli defined arithmetic operations on such quantities, and proceeded in this way to the creation of the theory of complex numbers (cf. Complex number). In the 17th century and 18th century, many mathematicians occupied themselves in investigating the properties of imaginary numbers and their applications. 
Thus, L. Euler extended, e.g., the notion of a logarithm to arbitrary complex numbers (1738), and obtained a new method of integration using complex variables (1776), while earlier (1736) A. de Moivre solved the problem of extracting roots of natural degrees of an arbitrary complex number. A successful application of the theory of complex numbers was the fundamental theorem of algebra: \"Every polynomial of degree greater than zero and with real coefficients factorizes as a product of polynomials of degrees one and two with real coefficients\" (Euler, J. d'Alembert, C.F. Gauss). Nevertheless, until a geometric interpretation of complex numbers as points in the plane was given (around the end of the 18th century and in the beginning of the 19th century), many mathematicians remained distrustful of imaginary numbers.\n\nIn the early 19th century, in connection with the great successes of mathematical analysis, many scholars realized the need for a foundation of the fundamentals of analysis — the theory of limits. Mathematicians were no longer satisfied with proofs based on intuition or on geometrical representation. There also remained the problem of constructing a unified theory of numbers. Natural numbers were often thought of as collections of unity, fractions as ratios of quantities, real numbers as lengths of line segments, and complex numbers as points in the plane. There was no complete agreement as to how arithmetic operations on numbers should be introduced. Finally, the question naturally arose of the further development of the concept of a number. In particular, was it possible to introduce new numbers, related to points in space?\n\nThe 19th century saw intensive research in all the above directions. A general principle was formulated according to which any generalization of the concept of number should proceed — the so-called principle of permanence of formal computing laws. 
According to this, when constructing a new number system extending a given system, the operations should generalize in such a way that the existing laws remain in force (G. Peacock, 1834; H. Hankel, 1867). In the second half of the 19th century, the theory of real numbers (cf. Real number) was constructed almost simultaneously by G. Cantor (1879), Ch. Meray (1869), R. Dedekind (1872), and K. Weierstrass (1872). Here Cantor and Meray used Cauchy sequences of rational numbers, Dedekind used cuts in the field of rational numbers, and Weierstrass used infinite decimal expansions.\n\nAs a result of the work of G. Peano (1891), Weierstrass (1878) and H. Grassmann (1861), an axiomatic theory of natural numbers was constructed. W. Hamilton (1837) constructed a theory of complex numbers from pairs of real numbers, Weierstrass constructed a theory of integers from pairs of natural numbers, and J. Tannery (1894) constructed a theory of rational numbers from pairs of integers.\n\nAttempts to find generalizations of the concept of a complex number led to the theory of hypercomplex numbers (cf. Hypercomplex number). Historically, the first such number system was the quaternions (cf. Quaternion), discovered by Hamilton. After much investigation it became clear (Weierstrass, G. Frobenius, B. Pierce) that any extension of the concept of a complex number beyond the system of complex numbers itself is possible only at the cost of some of the usual properties of numbers.\n\nThroughout the 19th century, and into the early 20th century, deep changes were taking place in mathematics. Conceptions about the objects and the aims of mathematics were changing. The axiomatic method of constructing mathematics on set-theoretic foundations was gradually taking shape. In this context, every mathematical theory is the study of some algebraic system. 
In other words, it is the study of a set with distinguished relations, in particular algebraic operations, satisfying some predetermined conditions, or axioms.\n\nFrom this point of view every number system is an algebraic system. For the definition of concrete number systems it is convenient to use the notion of an \"extension of an algebraic system\". This notion makes precise in a natural way the principle of permanence of formal computing laws, which was formulated above. An algebraic system $A'$ is called an extension of an algebraic system $A$ if the underlying set of $A$ is a subset of that of $A'$, if there also exists a bijection from the set of relations of the system $A$ onto that of $A'$, and if for any collection of elements of the system $A$ for which some relation of that system holds, the corresponding relation of the system $A'$ also holds.\n\nFor example, by the system of natural numbers one usually understands the algebraic system $N$ with two algebraic operations, addition $+$ and multiplication $\cdot$, and a distinguished element $1$ (unity), satisfying the following axioms:\n\n1) for each element $a$ in $N$, $a + 1 \ne 1$;\n\n2) associativity of addition: for any elements $a, b, c$ in $N$, $(a + b) + c = a + (b + c)$;\n\n3) commutativity of addition: for any elements $a, b$ in $N$, $a + b = b + a$;\n\n4) cancellation of addition: for any elements $a, b, c$ in $N$, the equation $a + c = b + c$ entails the equation $a = b$;\n\n5) 1 is the neutral element for the multiplication; that is, for any $a$ one has $a \cdot 1 = a$;\n\n6) associativity of multiplication: for any elements $a, b, c$ in $N$, $(a \cdot b) \cdot c = a \cdot (b \cdot c)$;\n\n7) distributivity of multiplication over addition: for any elements $a, b, c$ in $N$, $a \cdot (b + c) = a \cdot b + a \cdot c$;\n\n8) the axiom of induction: if $M$ is a subset of $N$ containing 1 and the element $a + 1$ whenever it contains $a$, then $M = N$.\n\nFrom these axioms it follows that the system of natural 
numbers is a semi-ring under the operations $+$ and $\cdot$. Hence the system of natural numbers can be defined as the minimal semi-ring with a neutral element for multiplication and without a neutral element for the addition.\n\nThe system of integers $Z$ is defined as the minimal ring that is an extension of the semi-ring $N$ of natural numbers. The system of rational numbers $Q$ is defined as the minimal field that is an extension of the ring $Z$. The system of complex numbers $C$ is defined as the minimal field that is an extension of the field $R$ of real numbers containing an element $i$ for which $i^2 = -1$ (cf. also Extension of a field).\n\nBy the system $R$ of real numbers one means the algebraic system with two binary operations $+$ and $\cdot$, two distinguished elements $0$ and $1$, and binary order relation $\le$. The axioms of $R$ are divided into the following groups:\n\n1) the field axiom: the system $R$ is a field;\n\n2) the order axiom: the system $R$ is a totally and strictly ordered field (cf. Ordered field);\n\n3) the Archimedean axiom: for any elements $a > 0$ and $b$ in $R$ there exists a natural number $n$ such that $na > b$;\n\n4) the completeness axiom: every Cauchy sequence $\{a_n\}$ of real numbers converges, i.e. if for any $\varepsilon > 0$ there is a number $N$ such that, for any $m > N$ and $n > N$, the inequality $|a_m - a_n| < \varepsilon$ holds, then the sequence $\{a_n\}$ converges to some element of $R$.\n\nBriefly, the system of real numbers is a complete, totally, strictly-Archimedean ordered field. The system of real numbers can also be defined, in an equivalent way, as a continuous totally ordered field. 
In this case the Archimedean axiom and the completeness axiom are replaced by the continuity axiom:\n\nIf $A$ and $B$ are non-empty subsets of $R$ such that, for any elements $a \in A$, $b \in B$, the inequality $a \le b$ holds, then there exists an element $c \in R$ such that $a \le c \le b$ for all $a \in A$, $b \in B$.\n\nThe construction of real numbers proposed by Cantor and Meray can be used to interpret the first system of axioms for the system of real numbers, while Dedekind's construction can be used to interpret the second system. Analogously, the constructions of Hamilton, Weierstrass and Tannery are interpretations of the systems of axioms for the complex, integer and rational numbers.\n\nAs interpretations of the system of natural numbers one may use the ordinal theory of natural numbers developed by Peano, and the cardinal theory of natural numbers of Cantor.\n\nThe problem of the foundations of the concept of a number, and more broadly, the foundations of mathematics, was clearly set out in the 19th century. This problem became a subject of mathematical logic, the intensive development of which continued into the 20th century." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93530893,"math_prob":0.95858294,"size":19476,"snap":"2019-51-2020-05","text_gpt3_token_len":4311,"char_repetition_ratio":0.15925431,"word_repetition_ratio":0.030987734,"special_character_ratio":0.21775518,"punctuation_ratio":0.13548388,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99194115,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172],"im_url_duplicate_count":[null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-06T15:04:00Z\",\"WARC-Record-ID\":\"<urn:uuid:a2e137d2-2a06-40cf-8502-df02392d9467>\",\"Content-Length\":\"44499\",\"Content-Type\":\"application/http; 
msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:de55c079-fcb2-46c6-9081-08fba503764e>\",\"WARC-Concurrent-To\":\"<urn:uuid:02257923-c56f-41e8-bf4f-499e286fe40f>\",\"WARC-IP-Address\":\"80.242.138.72\",\"WARC-Target-URI\":\"https://www.encyclopediaofmath.org/index.php?title=Number\",\"WARC-Payload-Digest\":\"sha1:F4FAYJ7WPGSAUAALQ7FEJS73SC4PM4EH\",\"WARC-Block-Digest\":\"sha1:ECKDC66EXQDCC3AHIAIAADYPWFJXZHHV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540488870.33_warc_CC-MAIN-20191206145958-20191206173958-00468.warc.gz\"}"}
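Hamilton's 1837 construction mentioned in the article — complex numbers built as ordered pairs of real numbers — is easy to state as code. A minimal sketch (the pair operations are the standard ones; the class name `Pair` is illustrative, not from the article):

```python
from typing import NamedTuple

class Pair(NamedTuple):
    """A complex number (a, b) = a + bi, built only from reals."""
    a: float
    b: float

    def __add__(self, other: "Pair") -> "Pair":
        # Componentwise addition of pairs.
        return Pair(self.a + other.a, self.b + other.b)

    def __mul__(self, other: "Pair") -> "Pair":
        # Hamilton's rule: (a, b)(c, d) = (ac - bd, ad + bc).
        return Pair(self.a * other.a - self.b * other.b,
                    self.a * other.b + self.b * other.a)

i = Pair(0.0, 1.0)
print(i * i)  # Pair(a=-1.0, b=0.0): the element i with i^2 = -1
```

This makes concrete the minimality statement above: the field $C$ arises from $R$ by adjoining a single element $i$ with $i^2 = -1$.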
http://library.wolfram.com/infocenter/Articles/2204/
[ "Title", null, "On Stability Analysis of Linear Stochastic and Time-Varying Deterministic Systems", null, "Author", null, "�. �etinkaya", null, "Journal / Anthology", null, "International Symposium on Symbolic and Algebraic Computation\n Year: 1992", null, "Description", null, "The almost-sure asymptotic stability of finite order linear stochastic systems subjected to non-white parametric excitations can be analyzed via the second method of Lyapunov with a quadratic function. In a previous work, it is reported that this approach produces an inequality condition among system parameters and some free optimization variables for a.s. asymptotic stability. However, in this study, utilizing the properties of the Kronecker products it is shown that this inequality condition can only be used provided that certain other conditions are satisfied. These necessary conditions for fourth order systems are explicitly obtained using the symbolic computation system Mathematica. Even though these conditions for higher order systems (N>4) are not available in an explicit form due to the absence of formulae for the roots of a polynomial whose degree is higher than four, a method proposed here provides a formulation for numerical and/or hybrid calculations (symbolic and numerical variables together). The need for this formulation stems from the fact that a pure numerical approach may not be efficient, if possible at all. In addition to stochastic systems, the inequality can be used to analyze the stability of linear time-varying deterministic systems due to the way the second method of Lyapunov is used. This is exhibited with the stability analysis of the damped Mathieu equation.", null, "Subject", null, "Applied Mathematics > Optimization" ]
[ null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/images/database/subheader.gif", null, "http://library.wolfram.com/images/database/tabsTOP/Courseware.gif", null, "http://library.wolfram.com/images/database/tabsTOP/Demos.gif", null, "http://library.wolfram.com/images/database/tabsTOP/MathSource.gif", null, "http://library.wolfram.com/images/database/tabsTOP/TechNotes.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/images/database/tabsBOTTOM/BySubject-off.gif", null, "http://library.wolfram.com/images/database/tabsBOTTOM/Articles-on.gif", null, "http://library.wolfram.com/images/database/tabsBOTTOM/Books-off.gif", null, "http://library.wolfram.com/images/database/tabsBOTTOM/Conferences-off.gif", null, "http://library.wolfram.com/images/database/grey-square.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/images/database/grey-line.gif", null, "http://library.wolfram.com/images/database/grey-line.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/images/database/grey-line.gif", null, "http://library.wolfram.com/images/database/grey-line.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, 
"http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/images/database/grey-line.gif", null, "http://library.wolfram.com/images/database/grey-line.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/images/database/grey-line.gif", null, "http://library.wolfram.com/images/database/grey-line.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/images/database/grey-line.gif", null, "http://library.wolfram.com/images/database/grey-line.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/images/database/subjects-arrow.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null, "http://library.wolfram.com/common/images/spacer.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8998577,"math_prob":0.9030219,"size":1409,"snap":"2019-13-2019-22","text_gpt3_token_len":261,"char_repetition_ratio":0.12099644,"word_repetition_ratio":0.009478673,"special_character_ratio":0.17246275,"punctuation_ratio":0.068085104,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9625914,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-23T23:53:17Z\",\"WARC-Record-ID\":\"<urn:uuid:122295c6-73af-4d8a-9ae5-f568c74f2f81>\",\"Content-Length\":\"49158\",\"Content-Type\":\"application/http; 
msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aa2b5f6c-c7c4-4b8a-a3eb-c73c38d9b32f>\",\"WARC-Concurrent-To\":\"<urn:uuid:1de70898-a518-46e2-995f-c9afed241eae>\",\"WARC-IP-Address\":\"140.177.205.65\",\"WARC-Target-URI\":\"http://library.wolfram.com/infocenter/Articles/2204/\",\"WARC-Payload-Digest\":\"sha1:ZFF7KEVACNUBX3X7BAR7OBNSSNEATNI4\",\"WARC-Block-Digest\":\"sha1:BNOIUWKE3B7YL64UJNIVVWMPSRBJDPKQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232257432.60_warc_CC-MAIN-20190523224154-20190524010154-00209.warc.gz\"}"}
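The abstract above leans on properties of Kronecker products. As an illustrative sketch (my own code, not from the paper), here is one such property, the mixed-product identity (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD), checked in pure Python; the matrices A–D are arbitrary small examples.

```python
# Mixed-product identity for Kronecker products, checked on 2x2 matrices.
# Pure-Python sketch; the example matrices are made up for illustration.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def kron(A, B):
    """Kronecker product: (A kron B)[i*p+r][j*q+s] = A[i][j] * B[r][s]."""
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)]
            for i in range(len(A) * p)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [1, 1]]
D = [[1, 1], [0, 1]]

lhs = matmul(kron(A, B), kron(C, D))
rhs = kron(matmul(A, C), matmul(B, D))
assert lhs == rhs  # (A kron B)(C kron D) == (AC) kron (BD)
```

Identities like this are what let such stability analyses manipulate large composite matrices symbolically instead of purely numerically.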
https://www.encyclopedia.com/science-and-technology/mathematics/mathematics/real-numbers
[ "# Real Numbers\n\nviews updated May 17 2018\n\n# Real Numbers\n\nA real number is any number which can be represented by a point on the number line. The numbers 3.5, −0.003, 2/3, π, and √2 are all real numbers.\n\nThe real numbers include the rational numbers, which are those which can be expressed as the ratio of two integers, and the irrational numbers, which cannot. (In the list above, all the numbers except π and √2 are rational.)\n\nIt is thought that the first real number to be identified as irrational was discovered by the Pythagoreans in the sixth century BC. Prior to this discovery, people believed that every number could be expressed as the ratio of two natural numbers (negative numbers had not been discovered yet). The Pythagoreans were able to show, however, that the hypotenuse of an isosceles right triangle could not be measured exactly by any scale, no matter how fine, which would exactly measure the legs.\n\nTo see what this means, imagine a number line with an isosceles right triangle drawn upon it, as in Figure 1. Imagine that the legs are one unit long.\n\nThe Pythagoreans were able to show that no matter how finely each unit was subdivided (uniformly), point P would fall somewhere inside one of those subdivisions. Even if there were a million, a billion, a billion and one, or any other number of uniform subdivisions, point P would be missed by every one of them. It would fall inside a subdivision, not at an end. Point P represents a real number because it is a definite point on the number line, but it does not represent any rational number a/b.\n\nPoint P is not the only irrational point. The square root of any prime number is irrational. So is the cube root, or any other root. In fact, by using infinite decimals to represent the real numbers, the mathematician Cantor was able to show that the number of real numbers is uncountable. 
An infinite set of numbers is countable if there is some way of listing them that allows one to reach any particular one of them by reading far enough down the list. The set of natural numbers is countable because the ordinary counting process will, if it is continued long enough, bring one to any particular number in the set. In the case of the irrational numbers, however, there are so many of them that every conceivable listing of them will leave at least one of them out.\n\nThe real numbers have many familiar subsets that are countable. These include the natural numbers, the integers, the rational numbers, and the algebraic numbers (algebraic numbers are those that can be roots of polynomial equations with integral coefficients). The real numbers also include numbers that are none of the above. These are the transcendental numbers, and they are uncountable. Pi is one.\n\nExcept for rare instances such as √2 ÷ √8, computations can be done only with rational numbers. When one wants to use an irrational number such as π, √3, or e in a computation, one must replace it with a rational approximation such as 22/7, 1.73205, or 2.718. The result is never exact. However, one can always come as close to the exact real-number answer as one wishes. If the approximation 3.14 for π does not come close enough for the purpose, then 3.142, 3.1416, or 3.14159 can be used. Each gives a closer approximation.\n\n# real numbers\n\nviews updated May 29 2018\n\nreal numbers (reals) The numbers that allow a numerical quantity to be assigned to every point on an infinite line or continuum. Real numbers are thus used to measure and calculate exactly the sizes of any continuous line segments or quantities. The development of a number system that meets these requirements has proved to be a long and complex process that reached a conclusion only in the 19th century. 
Establishing theoretical foundations for mathematical developments such as the calculus has involved sorting out subtle, conflicting, and inconsistent ideas about the reals (such as infinitesimals). The set of reals is infinite and not countable: there does not exist a method of making finite representations or codings of real numbers. Research on the foundations of the continuum continues – for instance on computation with the reals and on the uses of infinitesimals.\n\nThe real numbers, like the natural numbers, are one of the truly fundamental data types. Unlike the natural numbers, however, reals cannot be represented exactly in computations. They can be approximated to any degree of accuracy by rational numbers.\n\nA real number can be defined in several ways, for example as the limit of a sequence of rational numbers. A real x is represented by a sequence q(0),q(1),… of rational numbers that approximates x in the sense that for any degree of accuracy ϵ there exists some natural number n such that for all k > n, |q(k) – x| < ϵ. A real number is a computable real number if there is an algorithm that allows us to compute an approximation to the number to any given degree of accuracy. A real x is computable if (a) there is an algorithm that lists a sequence q(0),q(1),… of rational numbers that converges to x, and (b) there is an algorithm that to any natural number k finds a natural number p(k) such that for all n > p(k), |q(n) – x| < 2^(−k). Most of the real numbers that we know and use come from solving equations (e.g. the algebraic numbers) and evaluating equationally defined sequences (e.g. e and π) and are computable. However, most real numbers are noncomputable.\n\nThe approximations to real numbers used in computers must have finite representations or codings. In particular, there are gaps and separations between adjacent pairs of the real numbers that are represented (see model numbers). 
The separation may be the same between all numbers (fixed-point) or may vary and depend on the size of the adjacent values (floating-point). Some programming languages ignore this difference, describing floating-point numbers as “real”. Calculations with real numbers on a computer must take account of these approximations.\n\n# Real Numbers\n\nviews updated May 29 2018\n\n# Real numbers\n\nA real number is any number which can be represented by a point on a number line. The numbers 3.5, −0.003, 2/3, π, and √2 are all real numbers.\n\nThe real numbers include the rational numbers, which are those which can be expressed as the ratio of two integers , and the irrational numbers, which cannot. (In the list above, all the numbers except pi and the square root of 2 are rational.)\n\nIt is thought that the first real number to be identified as irrational was discovered by the Pythagoreans in the sixth century b.c. Prior to this discovery, people believed that every number could be expressed as the ratio of two natural numbers (negative numbers had not been discovered yet). The Pythagoreans were able to show, however, that the hypotenuse of an isosceles right triangle could not be measured exactly by any scale, no matter how fine, which would exactly measure the legs.\n\nTo see what this meant, imagine a number line with an isosceles right triangle drawn upon it, as in Figure 1. Imagine that the legs are one unit long.\n\nThe Pythagoreans were able to show that no matter how finely each unit was subdivided (uniformly), point P would fall somewhere inside one of those subdivisions. Even if there were a million, a billion, a billion and one, or any other number of uniform subdivisions, point P would be missed by every one of them. It would fall inside a subdivision, not at an end. Point P represents a real number because it is a definite point on the number line, but it does not represent any rational number a/b.\n\nPoint P is not the only irrational point. 
The square root of any prime number is irrational. So is the cube root, or any other root. In fact, by using infinite decimals to represent the real numbers, the mathematician Cantor was able to show that the number of real numbers is uncountable. An infinite set of numbers is \"countable\" if there is some way of listing them that allows one to reach any particular one of them by reading far enough down the list. The set of natural numbers is countable because the ordinary counting process will, if it is continued long enough, bring one to any particular number in the set. In the case of the irrational numbers, however, there are so many of them that every conceivable listing of them will leave at least one of them out.\n\nThe real numbers have many familiar subsets which are countable. These include the natural numbers, the integers, the rational numbers, and the algebraic numbers (algebraic numbers are those which can be roots of polynomial equations with integral coefficients). The real numbers also include numbers which are \"none of the above.\" These are the transcendental numbers, and they are uncountable. Pi is one.\n\nExcept for rare instances such as √2 ÷ √8, computations can be done only with rational numbers. When one wants to use an irrational number such as π, √3, or e in a computation, one must replace it with a rational approximation such as 22/7, 1.73205, or 2.718. The result is never exact. However, one can always come as close to the exact real-number answer as one wishes. If the approximation 3.14 for π does not come close enough for the purpose, then 3.142, 3.1416, or 3.14159 can be used. Each gives a closer approximation.\n\n# Numbers, Real\n\nviews updated May 14 2018\n\n# Numbers, Real\n\nA real number line is a familiar way to picture various sets of numbers. 
For example, the divisions marked on a number line show the integers, which are the counting numbers {1, 2, 3, …}, with their opposites {−1, −2, −3, …}, with the number 0, which divides the positive numbers on the line from the negative numbers.\n\nBut what other numbers are on a real number line? One could make marks for all the fractions, such as , and so forth, as well as marks for all the decimal fractions, such as 0.1, 0.01, 0.0000001, and so on. Any number that can be written as the ratio of two integers (such as , , and so on), where the divisor is not 0, is called a rational number, and all the rational numbers are on a real number line.\n\nAre there any other kinds of numbers on a real number line in addition to the integers and rational numbers? What about √2, which is approximately 1.4142? Because the decimal equivalent for √2 never ends and never repeats, it is known as an irrational number. The set of real numbers consists of the integers and the rational numbers as well as the irrational numbers. Every real number corresponds to exactly one point on a real number line, and every point on a number line corresponds to exactly one real number.\n\nAre there any \"unreal\" numbers? That is, are there any numbers that are not on a real number line? The set of real numbers is infinitely large, and one might think that it contains all numbers, but that is not so. For example, the solution to the equation x² = −1 does not lie on a real number line. The solution to that equation lies on another number line called an imaginary number line, which is usually drawn at right angles to a real number line.\n\nThere are numbers, called complex numbers, that are the sum of a real number and an imaginary number and are not found on either a real or an imaginary number line. These complex numbers are found in an area called the complex plane. 
So both imaginary and complex numbers are \"unreal,\" so to speak, because they do not lie on a real number line.\n\nThe set of real numbers has several interesting properties. For example, when any two real numbers are added, subtracted, multiplied, or divided (excluding division by zero), the result is always a real number. Therefore, the set of real numbers is called \"closed\" for these four operations.\n\nSimilarly, the real numbers have the commutative, associative, and distributive properties. The real numbers also have an identity element for addition (0) and for multiplication (1) and inverse elements for all four operations. These properties, taken all together, are called the field properties, and the real numbers thus make up a field, mathematically speaking.\n\nsee also Field Properties; Integers; Number Sets; Numbers, Complex; Numbers, Irrational; Numbers, Rational; Numbers, Whole; Number System, Real.\n\nLucia McKay" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.952679,"math_prob":0.99360085,"size":11150,"snap":"2022-05-2022-21","text_gpt3_token_len":2485,"char_repetition_ratio":0.1991746,"word_repetition_ratio":0.51235837,"special_character_ratio":0.22179373,"punctuation_ratio":0.12710363,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99786246,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-19T10:07:42Z\",\"WARC-Record-ID\":\"<urn:uuid:2759ccd7-082c-4f59-9d28-114b46d2505c>\",\"Content-Length\":\"86549\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b82f41c2-2a43-4054-b0a2-2aabd2ce9d52>\",\"WARC-Concurrent-To\":\"<urn:uuid:29010af3-7664-41b5-b75e-4b3d10d1210e>\",\"WARC-IP-Address\":\"104.26.8.13\",\"WARC-Target-URI\":\"https://www.encyclopedia.com/science-and-technology/mathematics/mathematics/real-numbers\",\"WARC-Payload-Digest\":\"sha1:2B7NPZ6SV5FYCZUTBTWPLIGTHH2YTD45\",\"WARC-Block-Digest\":\"sha1:KNKKWJJ2JUGVSTD23AVOS5DRTMP56HFS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662526009.35_warc_CC-MAIN-20220519074217-20220519104217-00188.warc.gz\"}"}
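The computable-real definition above (a rational sequence q(0), q(1), … with |q(k) − x| < 2^(−k)) can be sketched in a few lines. This is my own illustration using √2, not code from the encyclopedia entries.

```python
# Produce rationals q(k) with |q(k) - sqrt(2)| < 2**-k by bisecting an
# interval known to contain sqrt(2). Illustrative sketch only.

from fractions import Fraction

def sqrt2_approx(k):
    """Return a rational within 2**-k of sqrt(2)."""
    lo, hi = Fraction(1), Fraction(2)      # sqrt(2) lies in [1, 2]
    while hi - lo >= Fraction(1, 2 ** k):
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid                       # sqrt(2) is above mid
        else:
            hi = mid                       # sqrt(2) is below mid
    return lo

# Accuracy improves with k, as the definition requires:
# |q - sqrt(2)| < 2**-k implies |q*q - 2| < 3 * 2**-k.
for k in (5, 10, 20):
    q = sqrt2_approx(k)
    assert abs(q * q - 2) < Fraction(3, 2 ** k)
```

This also mirrors the article's point about computation: each q(k) is an exact rational, and one can always get closer to √2, but no single rational equals it.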
https://wikivisually.com/wiki/Calorimeter_constant
[ "# Calorimeter constant\n\nA calorimeter constant (denoted Ccal) is a constant that quantifies the heat capacity of a calorimeter. It may be calculated by applying a known amount of heat to the calorimeter and measuring the calorimeter's corresponding change in temperature. In SI units, the calorimeter constant is then calculated by dividing the change in enthalpy (ΔH) in joules by the change in temperature (ΔT) in kelvins or degrees Celsius:\n\n$C_{\\mathrm {cal} }={\\frac {\\Delta {H}}{\\Delta {T}}}$", null, "The calorimeter constant is usually presented in units of joules per degree Celsius (J/°C) or joules per kelvin (J/K); every calorimeter has a unique calorimeter constant.\n\n## Uses\n\nThe calorimeter constants are used in constant pressure calorimetry to calculate the amount of heat required to achieve a certain rise in the temperature of the calorimeter's contents.\n\n### Example\n\nTo determine the change in enthalpy in a neutralization reaction (ΔHneutralization), a known amount of basic solution may be placed in a calorimeter, and the temperature of this solution alone recorded. Then, a known amount of acidic solution may be added and the change in temperature measured using a thermometer; the difference in temperature (ΔT, in units K or °C) may be calculated by subtracting the initial temperature from the final temperature. The enthalpy of neutralization ΔHneutralization may then be calculated according to the following equation:\n\n$\\Delta {H_{\\mathrm {neutralization} }}=C_{\\mathrm {cal} }\\cdot \\Delta {T}$", null, ".\n\nRegardless of the specific chemical process, with a known calorimeter constant and a known change in temperature the heat added to the system may be calculated by multiplying the calorimeter constant by that change in temperature." ]
[ null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a3db481431878676812dd3b77bd857e2004ca05c", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f5d50f4b772ef80c253835bb0fb08705b6457bc5", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94081026,"math_prob":0.9910127,"size":36362,"snap":"2019-43-2019-47","text_gpt3_token_len":7739,"char_repetition_ratio":0.15517905,"word_repetition_ratio":0.020596115,"special_character_ratio":0.20073153,"punctuation_ratio":0.09591744,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99927455,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-15T02:30:43Z\",\"WARC-Record-ID\":\"<urn:uuid:62df9a72-f395-4f5c-bf93-0978f7b18e10>\",\"Content-Length\":\"175120\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d7476bad-9519-4237-a108-a3e8578cef90>\",\"WARC-Concurrent-To\":\"<urn:uuid:ec38f490-4a78-41d5-9dcf-19372d5bd617>\",\"WARC-IP-Address\":\"54.244.10.145\",\"WARC-Target-URI\":\"https://wikivisually.com/wiki/Calorimeter_constant\",\"WARC-Payload-Digest\":\"sha1:DC5XVMTGH4TTJE5ZC7EWG4HDCQFZEXB3\",\"WARC-Block-Digest\":\"sha1:DYYH2WJGIZFP3QG2AG57TFUENZZYWS6C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668561.61_warc_CC-MAIN-20191115015509-20191115043509-00418.warc.gz\"}"}
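The two formulas above (Ccal = ΔH/ΔT for calibration, then ΔH = Ccal·ΔT for a reaction) can be sketched directly. The numbers here are made-up example data, not from the article.

```python
# Hedged sketch of the calorimeter-constant workflow described above.
# Calibration values (500 J, 2.0 K) and the reaction's 3.2 K rise are
# invented for illustration.

def calorimeter_constant(delta_h_joules, delta_t_kelvin):
    """C_cal = dH / dT, in J/K."""
    return delta_h_joules / delta_t_kelvin

def heat_from_temperature_change(c_cal, delta_t_kelvin):
    """dH = C_cal * dT, in J."""
    return c_cal * delta_t_kelvin

# Calibration: 500 J of applied heat raises the calorimeter by 2.0 K.
c_cal = calorimeter_constant(500.0, 2.0)        # 250.0 J/K
# A reaction that then raises the temperature by 3.2 K released about:
q = heat_from_temperature_change(c_cal, 3.2)    # about 800 J
assert c_cal == 250.0
assert abs(q - 800.0) < 1e-9
```

Note the sign convention is left out here, as in the article: a temperature rise means heat was released into the calorimeter.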
https://www.colorhexa.com/014d2a
[ "# #014d2a Color Information\n\nIn a RGB color space, hex #014d2a is composed of 0.4% red, 30.2% green and 16.5% blue. Whereas in a CMYK color space, it is composed of 98.7% cyan, 0% magenta, 45.5% yellow and 69.8% black. It has a hue angle of 152.4 degrees, a saturation of 97.4% and a lightness of 15.3%. #014d2a color hex could be obtained by blending #029a54 with #000000. Closest websafe color is: #006633.\n\n• R 0\n• G 30\n• B 16\nRGB color chart\n• C 99\n• M 0\n• Y 45\n• K 70\nCMYK color chart\n\n#014d2a color description : Very dark cyan - lime green.\n\n# #014d2a Color Conversion\n\nThe hexadecimal color #014d2a has RGB values of R:1, G:77, B:42 and CMYK values of C:0.99, M:0, Y:0.45, K:0.7. Its decimal value is 85290.\n\nHex triplet RGB Decimal 014d2a `#014d2a` 1, 77, 42 `rgb(1,77,42)` 0.4, 30.2, 16.5 `rgb(0.4%,30.2%,16.5%)` 99, 0, 45, 70 152.4°, 97.4, 15.3 `hsl(152.4,97.4%,15.3%)` 152.4°, 98.7, 30.2 006633 `#006633`\nCIE-LAB 28.063, -30.451, 14.995 3.084, 5.481, 3.086 0.265, 0.47, 5.481 28.063, 33.943, 153.784 28.063, -24.58, 19.465 23.412, -17.456, 8.573 00000001, 01001101, 00101010\n\n# Color Schemes with #014d2a\n\n• #014d2a\n``#014d2a` `rgb(1,77,42)``\n• #4d0124\n``#4d0124` `rgb(77,1,36)``\nComplementary Color\n• #014d04\n``#014d04` `rgb(1,77,4)``\n• #014d2a\n``#014d2a` `rgb(1,77,42)``\n• #014a4d\n``#014a4d` `rgb(1,74,77)``\nAnalogous Color\n• #4d0401\n``#4d0401` `rgb(77,4,1)``\n• #014d2a\n``#014d2a` `rgb(1,77,42)``\n• #4d014a\n``#4d014a` `rgb(77,1,74)``\nSplit Complementary Color\n• #4d2a01\n``#4d2a01` `rgb(77,42,1)``\n• #014d2a\n``#014d2a` `rgb(1,77,42)``\n• #2a014d\n``#2a014d` `rgb(42,1,77)``\n• #244d01\n``#244d01` `rgb(36,77,1)``\n• #014d2a\n``#014d2a` `rgb(1,77,42)``\n• #2a014d\n``#2a014d` `rgb(42,1,77)``\n• #4d0124\n``#4d0124` `rgb(77,1,36)``\n• #000101\n``#000101` `rgb(0,1,1)``\n• #001b0f\n``#001b0f` `rgb(0,27,15)``\n• #01341c\n``#01341c` `rgb(1,52,28)``\n• #014d2a\n``#014d2a` `rgb(1,77,42)``\n• #016638\n``#016638` `rgb(1,102,56)``\n• 
#027f45\n``#027f45` `rgb(2,127,69)``\n• #029953\n``#029953` `rgb(2,153,83)``\nMonochromatic Color\n\n# Alternatives to #014d2a\n\nBelow, you can see some colors close to #014d2a. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #014d17\n``#014d17` `rgb(1,77,23)``\n• #014d1d\n``#014d1d` `rgb(1,77,29)``\n• #014d24\n``#014d24` `rgb(1,77,36)``\n• #014d2a\n``#014d2a` `rgb(1,77,42)``\n• #014d30\n``#014d30` `rgb(1,77,48)``\n• #014d37\n``#014d37` `rgb(1,77,55)``\n• #014d3d\n``#014d3d` `rgb(1,77,61)``\nSimilar Colors\n\n# #014d2a Preview\n\nText with hexadecimal color #014d2a\n\nThis text has a font color of #014d2a.\n\n``<span style=\"color:#014d2a;\">Text here</span>``\n#014d2a background color\n\nThis paragraph has a background color of #014d2a.\n\n``<p style=\"background-color:#014d2a;\">Content here</p>``\n#014d2a border color\n\nThis element has a border color of #014d2a.\n\n``<div style=\"border:1px solid #014d2a;\">Content here</div>``\nCSS codes\n``.text {color:#014d2a;}``\n``.background {background-color:#014d2a;}``\n``.border {border:1px solid #014d2a;}``\n\n# Shades and Tints of #014d2a\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #00130a is the darkest color, while #ffffff is the lightest one.\n\n• #00130a\n``#00130a` `rgb(0,19,10)``\n• #002615\n``#002615` `rgb(0,38,21)``\n• #013a1f\n``#013a1f` `rgb(1,58,31)``\n• #014d2a\n``#014d2a` `rgb(1,77,42)``\n• #016035\n``#016035` `rgb(1,96,53)``\n• #02743f\n``#02743f` `rgb(2,116,63)``\n• #02874a\n``#02874a` `rgb(2,135,74)``\n• #029a54\n``#029a54` `rgb(2,154,84)``\n• #02ae5f\n``#02ae5f` `rgb(2,174,95)``\n• #03c169\n``#03c169` `rgb(3,193,105)``\n• #03d574\n``#03d574` `rgb(3,213,116)``\n• #03e87e\n``#03e87e` `rgb(3,232,126)``\n• #03fb89\n``#03fb89` `rgb(3,251,137)``\n• #16fc92\n``#16fc92` `rgb(22,252,146)``\n• #2afc9b\n``#2afc9b` `rgb(42,252,155)``\n• #3dfca4\n``#3dfca4` `rgb(61,252,164)``\n• #50fdad\n``#50fdad` `rgb(80,253,173)``\n• #64fdb6\n``#64fdb6` `rgb(100,253,182)``\n• #77fdbf\n``#77fdbf` `rgb(119,253,191)``\n• #8afdc8\n``#8afdc8` `rgb(138,253,200)``\n• #9efed2\n``#9efed2` `rgb(158,254,210)``\n• #b1fedb\n``#b1fedb` `rgb(177,254,219)``\n• #c4fee4\n``#c4fee4` `rgb(196,254,228)``\n• #d8feed\n``#d8feed` `rgb(216,254,237)``\n• #ebfff6\n``#ebfff6` `rgb(235,255,246)``\n• #ffffff\n``#ffffff` `rgb(255,255,255)``\nTint Color Variation\n\n# Tones of #014d2a\n\nA tone is produced by adding gray to any pure hue. 
In this case, #252927 is the less saturated color, while #014d2a is the most saturated one.\n\n• #252927\n``#252927` `rgb(37,41,39)``\n• #222c27\n``#222c27` `rgb(34,44,39)``\n• #1f2f28\n``#1f2f28` `rgb(31,47,40)``\n• #1c3228\n``#1c3228` `rgb(28,50,40)``\n• #193528\n``#193528` `rgb(25,53,40)``\n• #163828\n``#163828` `rgb(22,56,40)``\n• #133b29\n``#133b29` `rgb(19,59,41)``\n• #103e29\n``#103e29` `rgb(16,62,41)``\n• #0d4129\n``#0d4129` `rgb(13,65,41)``\n• #0a4429\n``#0a4429` `rgb(10,68,41)``\n• #07472a\n``#07472a` `rgb(7,71,42)``\n• #044a2a\n``#044a2a` `rgb(4,74,42)``\n• #014d2a\n``#014d2a` `rgb(1,77,42)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #014d2a is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5156435,"math_prob":0.66803396,"size":3669,"snap":"2019-35-2019-39","text_gpt3_token_len":1635,"char_repetition_ratio":0.13096862,"word_repetition_ratio":0.011029412,"special_character_ratio":0.5538294,"punctuation_ratio":0.23730685,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9885577,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-21T17:15:19Z\",\"WARC-Record-ID\":\"<urn:uuid:a091a791-ac8c-4434-8b58-4e2f900f3e08>\",\"Content-Length\":\"36186\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9e79a192-1e24-4b9a-9788-939e2e63d7a9>\",\"WARC-Concurrent-To\":\"<urn:uuid:ac7d06cc-fb0e-437c-8fc9-564664173e4f>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/014d2a\",\"WARC-Payload-Digest\":\"sha1:E727VO7FSBFU6DYA6RVR3J3MTUZVCZBJ\",\"WARC-Block-Digest\":\"sha1:LHW754QYGWJU5CSJ6XKDAF4YAGCHRQXH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514574588.96_warc_CC-MAIN-20190921170434-20190921192434-00090.warc.gz\"}"}
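The conversions this page reports for #014d2a (RGB triplet, decimal value, and channel percentages) follow from the hex digits directly. A small sketch of my own, not code from colorhexa:

```python
# Hex -> RGB -> percentage conversion for #014d2a, checked against the
# values quoted on the page.

def hex_to_rgb(hex_color):
    """'#014d2a' -> (1, 77, 42): each pair of hex digits is one channel."""
    h = hex_color.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

r, g, b = hex_to_rgb('#014d2a')
assert (r, g, b) == (1, 77, 42)

# Decimal value quoted on the page: 85290 = (r << 16) + (g << 8) + b.
assert (r << 16) + (g << 8) + b == 85290

# Percentages as on the page: 0.4% red, 30.2% green, 16.5% blue
# (each channel divided by 255, rounded to one decimal place).
assert tuple(round(100 * c / 255, 1) for c in (r, g, b)) == (0.4, 30.2, 16.5)
```

The same per-channel arithmetic underlies the page's shade/tint lists: shades scale the channels toward 0, tints interpolate them toward 255.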
https://metanumbers.com/2808
[ "## 2808\n\n2,808 (two thousand eight hundred eight) is an even four-digit composite number following 2807 and preceding 2809. In scientific notation, it is written as 2.808 × 10³. The sum of its digits is 18. It has a total of 7 prime factors and 32 positive divisors. There are 864 positive integers (up to 2808) that are relatively prime to 2808.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Even\n• Number length 4\n• Sum of Digits 18\n• Digital Root 9\n\n## Name\n\nShort name 2 thousand 808 two thousand eight hundred eight\n\n## Notation\n\nScientific notation 2.808 × 10³ 2.808 × 10³\n\n## Prime Factorization of 2808\n\nPrime Factorization 2³ × 3³ × 13\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 3 Total number of distinct prime factors Ω(n) 7 Total number of prime factors rad(n) 78 Product of the distinct prime numbers λ(n) -1 Returns the parity of Ω(n), such that λ(n) = (-1)^Ω(n) μ(n) 0 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power p^k of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 2,808 is 2³ × 3³ × 13. 
Since it has a total of 7 prime factors, 2,808 is a composite number.\n\n## Divisors of 2808\n\n1, 2, 3, 4, 6, 8, 9, 12, 13, 18, 24, 26, 27, 36, 39, 52, 54, 72, 78, 104, 108, 117, 156, 216, 234, 312, 351, 468, 702, 936, 1404, 2808\n\n32 divisors\n\nEven divisors 24 Odd divisors 8 4k+1 divisors 4 4k+3 divisors 4\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 32 Total number of the positive divisors of n σ(n) 8400 Sum of all the positive divisors of n s(n) 5592 Sum of the proper positive divisors of n A(n) 262.5 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 52.9906 Returns the nth root of the product of n divisors H(n) 10.6971 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 2,808 can be divided by 32 positive divisors (out of which 24 are even, and 8 are odd). The sum of these divisors (counting 2,808) is 8,400, and the average is 262.5.\n\n## Other Arithmetic Functions (n = 2808)\n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 864 Total number of positive integers not greater than n that are coprime to n λ(n) 36 Smallest positive number such that a^λ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 409 Total number of primes less than or equal to n r₂(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 864 positive integers (less than 2,808) that are coprime with 2,808. 
And there are approximately 409 prime numbers less than or equal to 2,808.\n\n## Divisibility of 2808\n\n m n mod m 2 3 4 5 6 7 8 9 0 0 0 3 0 1 0 0\n\nThe number 2,808 is divisible by 2, 3, 4, 6, 8 and 9.\n\n• Abundant\n\n• Polite\n• Practical\n\n## Base conversion (2808)\n\nBase System Value\n2 Binary 101011111000\n3 Ternary 10212000\n4 Quaternary 223320\n5 Quinary 42213\n6 Senary 21000\n8 Octal 5370\n10 Decimal 2808\n12 Duodecimal 1760\n20 Vigesimal 708\n36 Base36 260\n\n## Basic calculations (n = 2808)\n\n### Multiplication\n\nn×i\n n×2 5616 8424 11232 14040\n\n### Division\n\nni\n n⁄2 1404 936 702 561.6\n\n### Exponentiation\n\nni\n n2 7884864 22140698112 62171080298496 174576393478176768\n\n### Nth Root\n\ni√n\n 2√n 52.9906 14.108 7.27946 4.89417\n\n## 2808 as geometric shapes\n\n### Circle\n\n Diameter 5616 17643.2 2.4771e+07\n\n### Sphere\n\n Volume 9.27427e+10 9.90841e+07 17643.2\n\n### Square\n\nLength = n\n Perimeter 11232 7.88486e+06 3971.11\n\n### Cube\n\nLength = n\n Surface area 4.73092e+07 2.21407e+10 4863.6\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 8424 3.41425e+06 2431.8\n\n### Triangular Pyramid\n\nLength = n\n Surface area 1.3657e+07 2.60931e+09 2292.72\n\n## Cryptographic Hash Functions\n\nmd5 d0010a6f34908640a4a6da2389772a78 bb979a33849ad72fa65bb94815dd92e08da9f2f1 e0c7bfa5b8e4ae0cecfc1f14093a71e05c27abd93a3850fc35f008c3baf866b4 a915969d8c7af264b43013c59bc5a2232264920bf3fdfff05af0291c1985491e229c132c20abfca5554508ef64d4bfc67b927d8447d582a55c2be295598dfce9 98df873e9a964b4ee60acacf7a940909ace4eaa0" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6150769,"math_prob":0.9855277,"size":4368,"snap":"2020-24-2020-29","text_gpt3_token_len":1563,"char_repetition_ratio":0.11961503,"word_repetition_ratio":0.025335321,"special_character_ratio":0.44551283,"punctuation_ratio":0.08041505,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.99528766,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-11T23:15:04Z\",\"WARC-Record-ID\":\"<urn:uuid:7d151f5c-f122-4e73-ade3-f075169bc849>\",\"Content-Length\":\"48089\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:61eab0ff-c8e3-49b3-a4b7-e61c74fa0400>\",\"WARC-Concurrent-To\":\"<urn:uuid:0a022633-8796-407c-9e96-c3034650b561>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/2808\",\"WARC-Payload-Digest\":\"sha1:KJYB4MDGUU7W6OXET6GVQ2FIMBFXSBLV\",\"WARC-Block-Digest\":\"sha1:5GTI72K6A3L6RQNCHNQ3EQHMON5VDPUL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657129257.81_warc_CC-MAIN-20200711224142-20200712014142-00037.warc.gz\"}"}
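The arithmetic-function values quoted in the entry above (τ(n) = 32, σ(n) = 8400, s(n) = 5592, φ(n) = 864) can be checked by brute force. This is an illustrative sketch, not code from the page itself:

```python
from math import gcd

def divisors(n):
    """All positive divisors of n, by trial division."""
    return [d for d in range(1, n + 1) if n % d == 0]

def totient(n):
    """Euler's totient: how many of 1..n are coprime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

n = 2808
ds = divisors(n)
print(len(ds))      # τ(n) = 32
print(sum(ds))      # σ(n) = 8400
print(sum(ds) - n)  # aliquot sum s(n) = 5592
print(totient(n))   # φ(n) = 864
```

Since s(n) = 5592 > 2808, this also confirms the "Abundant" classification listed above.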
https://www.statology.org/how-to-use-z-table/
[ "# How to use the Z Table (With Examples)\n\nA z-table is a table that tells you what percentage of values fall below a certain z-score in a standard normal distribution.\n\nA z-score simply tells you how many standard deviations away an individual data value falls from the mean. It is calculated as:\n\nz-score = (x – μ) / σ\n\nwhere:\n\n• x: individual data value\n• μ: population mean\n• σ: population standard deviation\n\nThis tutorial shows several examples of how to use the z table.\n\n### Example 1\n\nThe scores on a certain college entrance exam are normally distributed with mean μ = 82 and standard deviation σ = 8. Approximately what percentage of students score less than 84 on the exam?\n\nStep 1: Find the z-score.\n\nFirst, we will find the z-score associated with an exam score of 84:\n\nz-score = (x – μ) /  σ = (84 – 82) / 8 = 2 / 8 = 0.25\n\nStep 2: Use the z-table to find the percentage that corresponds to the z-score.\n\nNext, we will look up the value 0.25 in the z-table:", null, "Approximately 59.87% of students score less than 84 on this exam.\n\n### Example 2\n\nThe height of plants in a certain garden are normally distributed with a mean of  μ = 26.5 inches and a standard deviation of σ = 2.5 inches. Approximately what percentage of plants are greater than 26 inches tall?\n\nStep 1: Find the z-score.\n\nFirst, we will find the z-score associated with a height of 26 inches.\n\nz-score = (x – μ) /  σ = (26 – 26.5) / 2.5 = -0.5 / 2.5 = -0.2\n\nStep 2: Use the z-table to find the percentage that corresponds to the z-score.\n\nNext, we will look up the value -0.2 in the z-table:", null, "We see that 42.07% of values fall below a z-score of -0.2. 
However, in this example we want to know what percentage of values are greater than -0.2, which we can find by using the formula 100% – 42.07% = 57.93%.\n\nThus, approximately 57.93% of the plants in this garden are greater than 26 inches tall.\n\n### Example 3\n\nThe weight of a certain species of dolphin is normally distributed with a mean of μ = 400 pounds and a standard deviation of σ = 25 pounds. Approximately what percentage of dolphins weigh between 410 and 425 pounds?\n\nStep 1: Find the z-scores.\n\nFirst, we will find the z-scores associated with 410 pounds and 425 pounds.\n\nz-score of 410 = (x – μ) / σ = (410 – 400) / 25 = 10 / 25 = 0.4\n\nz-score of 425 = (x – μ) / σ = (425 – 400) / 25 = 25 / 25 = 1\n\nStep 2: Use the z-table to find the percentages that correspond to each z-score.\n\nFirst, we will look up the value 0.4 in the z-table:", null, "Then, we will look up the value 1 in the z-table:", null, "Lastly, we will subtract the smaller value from the larger value: 0.8413 – 0.6554 = 0.1859.\n\nThus, approximately 18.59% of dolphins weigh between 410 and 425 pounds." ]
[ null, "https://www.statology.org/wp-content/uploads/2020/04/ztable1.png", null, "https://www.statology.org/wp-content/uploads/2020/04/ztable2.png", null, "https://www.statology.org/wp-content/uploads/2020/04/ztable3.png", null, "https://www.statology.org/wp-content/uploads/2020/04/ztable4.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8782441,"math_prob":0.9925402,"size":2717,"snap":"2023-40-2023-50","text_gpt3_token_len":759,"char_repetition_ratio":0.13822336,"word_repetition_ratio":0.21113244,"special_character_ratio":0.31910196,"punctuation_ratio":0.12436975,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99980086,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,4,null,9,null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-23T19:10:05Z\",\"WARC-Record-ID\":\"<urn:uuid:e1051e77-c7d4-4a1a-a621-2f50a7951696>\",\"Content-Length\":\"49824\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f8abf6fd-abf4-4209-802b-a4bbde075cd4>\",\"WARC-Concurrent-To\":\"<urn:uuid:b24f9cc1-27e5-4fe5-924d-668e7e024dac>\",\"WARC-IP-Address\":\"34.149.36.179\",\"WARC-Target-URI\":\"https://www.statology.org/how-to-use-z-table/\",\"WARC-Payload-Digest\":\"sha1:6IN53RGRFE2WLYZOBB4CW5WRVGITGLY5\",\"WARC-Block-Digest\":\"sha1:XAUTL3FZTPBVAFUDFBHX2XJFGM7CDTCM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506528.19_warc_CC-MAIN-20230923162848-20230923192848-00083.warc.gz\"}"}
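The table lookups in the article above can be reproduced without a printed z-table: the standard normal CDF is expressible through the error function in Python's standard library. A sketch covering the article's three examples:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Example 1: P(X < 84) with mu = 82, sigma = 8
print(round(phi((84 - 82) / 8), 4))          # 0.5987
# Example 2: P(X > 26) with mu = 26.5, sigma = 2.5
print(round(1 - phi((26 - 26.5) / 2.5), 4))  # 0.5793
# Example 3: P(410 < X < 425) with mu = 400, sigma = 25
print(round(phi(1) - phi(0.4), 4))           # 0.1859
```

The same values are available via `statistics.NormalDist(mu, sigma).cdf(x)` in Python 3.8+, which skips the manual z-score step entirely.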
https://www.jiskha.com/questions/1798511/let-be-a-bernoulli-random-variable-that-indicates-which-one-of-two-hypotheses-is-true
[ "# probability\n\nLet Θ be a Bernoulli random variable that indicates which one of two hypotheses is true, and let P(Θ=1)=p. Under the hypothesis Θ=0, the random variable X has a normal distribution with mean 0, and variance 1. Under the alternative hypothesis Θ=1, X has a normal distribution with mean 2 and variance 1.\n\nConsider the MAP rule for deciding between the two hypotheses, given that X=x.\n\nSuppose for this part of the problem that p=2/3. The MAP rule can choose in favour of the hypothesis Θ=1 if and only if x≥c1. Find the value of c1.\n\nc1=\n\nFor this part, assume again that p=2/3. Find the conditional probability of error for the MAP decision rule, given that the hypothesis Θ=0 is true.\n\nP(error|Θ=0)=\n\nFind the overall (unconditional) probability of error associated with the MAP rule for p=1/2.\n\nYou may want to consult the standard normal table.\n\n1. Has anyone solved this question?\n\n2. c1 = 2/3\n\n3. 0.653, 0.257, 0.159" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84227574,"math_prob":0.9959248,"size":3055,"snap":"2021-04-2021-17","text_gpt3_token_len":749,"char_repetition_ratio":0.19075713,"word_repetition_ratio":0.18181819,"special_character_ratio":0.23764321,"punctuation_ratio":0.13643926,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9999063,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-28T15:17:49Z\",\"WARC-Record-ID\":\"<urn:uuid:40f2ef53-1a76-41b4-a070-4cc04e7e3653>\",\"Content-Length\":\"22817\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7b79105a-f443-4bfd-9bc5-02525b588e82>\",\"WARC-Concurrent-To\":\"<urn:uuid:3d6fd8fd-ad5a-4a51-a047-4dff103444bc>\",\"WARC-IP-Address\":\"66.228.55.50\",\"WARC-Target-URI\":\"https://www.jiskha.com/questions/1798511/let-be-a-bernoulli-random-variable-that-indicates-which-one-of-two-hypotheses-is-true\",\"WARC-Payload-Digest\":\"sha1:3MPJAPISAL3KR5UUN5PYWV432CBLUCMS\",\"WARC-Block-Digest\":\"sha1:EPRFTR3KE5JZG4JMBKBHDVTNIDLHWLXZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704847953.98_warc_CC-MAIN-20210128134124-20210128164124-00441.warc.gz\"}"}
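The numeric answers posted in the thread above (0.653, 0.257, 0.159) can be reproduced analytically: for N(0,1) versus N(2,1), the MAP likelihood-ratio condition p·f1(x) ≥ (1−p)·f0(x) reduces to x ≥ 1 + ½·ln((1−p)/p). A sketch under that derivation:

```python
from math import erf, log, sqrt

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def map_threshold(p):
    """MAP decision threshold for N(0,1) vs N(2,1) with prior P(Theta=1)=p."""
    return 1 + 0.5 * log((1 - p) / p)

c1 = map_threshold(2 / 3)
print(round(c1, 3))              # 0.653
print(round(1 - phi(c1), 3))     # P(error | Theta=0) = P(X >= c1) = 0.257

c = map_threshold(1 / 2)         # threshold is exactly 1 when p = 1/2
err = 0.5 * (1 - phi(c)) + 0.5 * phi(c - 2)
print(round(err, 3))             # overall error = 0.159
```

Note that the reply "c1 = 2/3" in the thread is incorrect; the threshold is 1 − (ln 2)/2 ≈ 0.653, as the later reply states.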
https://chemistry.stackexchange.com/questions/105110/how-to-predict-molecular-geometry-of-acetylene-using-vsepr-model
[ "# How to predict molecular geometry of acetylene using VSEPR model?\n\nI need to determine the molecular geometry of acetylene. For this I performed the following procedure.\n\nFirst, I drew the correct Lewis structure which is a graphical representation that shows the pairs of electrons linking the atoms of a molecule and the pairs of solitary electrons which may exist.\n\nTo predict molecular geometry I used the VSEPR model, which is based on the number of electron pairs in the central atom to determine molecular geometry. In this case, there are two central atoms: $$\\ce{C}$$ and $$\\ce{C'}$$ (the prime symbol is used to distinguish between carbons).\n\n$$\\large\\ce{H-C#C'-H}$$\n\nThe first carbon $$\\ce{C}$$ has 4 pairs of electrons, all of which are bonding. Using the VSEPR notation, $$\\ce{AX4}.$$ For the second carbon $$\\ce{C'}$$ it's exactly the same. Therefore, molecular geometry is symmetrical and tetrahedral.\n\nFor a complete study, I applied the valence bond model based on the hybridization of the atomic orbitals. This is where I don't know how to see if the link is σ- or π-type.", null, "Also, I know that the molecule can be contained in a plane, but I don't know how to explain it using VSEPR or valence bond theory.\n\n• This is one of the examples where VSEPR theory breaks down; it simply cannot be used with ethyne. Dec 2, 2018 at 14:11\n• How do I know what's wrong with VSEPR theory a priori? For example, in an exam I am asked to determine this structure, how to do it so that it is always the right one. @Martin-マーチン Dec 2, 2018 at 15:09\n• I would disagree with Martin, the VSEPR theory works for things like benzene, ethylene and acetylene, but it is unable to estimate the number of σ-bonds and lone pairs. The chemist must do this, then the VSEPR can work out the arrangement of the σ-bonds / lone pairs. 
The problem is that unless you understand pi systems the use of VSEPR might suggest that the torsion angle $\\ce{H-C-C-H}$ in ethene is not zero or 180°. In real life the atoms of ethylene are in a single plane. Dec 2, 2018 at 18:45\n• I don't know how to see if there are pi links (multiple links) or sigma links (single link). Dec 2, 2018 at 19:10\n• That something doesn't apply is clear. How two atoms can be linked together in a tetrahedral geometry? Dec 5, 2018 at 16:57\n\nThe first carbon $$\\ce{C}$$ has 4 pairs of electrons, all of which are bonding. Using the VSEPR notation, $$\\ce{AX4}.$$ For the second carbon $$\\ce{C'}$$ it's exactly the same. Therefore, molecular geometry is symmetrical and tetrahedral.\n\nCarbon typically has 4 pairs of electrons which are bonding. However, these four pairs of electrons could be in single, double or triple bonds. Electron pairs in a double or triple bond connect the same two atoms, so they have to be grouped together in the VSEPR scheme. For ethylene, the notation would be $$\\ce{AX2}$$, which is expected to be linear.\n\nFor a complete study, I applied the valence bond model based on the hybridization of the atomic orbitals. This is where I don't know how to see if the link is σ- or π-type.\n\nThe Lewis structures are a way to illustrate the valence bond description as far as possible. So a triple bond in the Lewis structure is one sigma and two pi bonds in the valence bond formalism.\n\n[from the comment] How do I know what's wrong with VSEPR theory a priori? For example, in an exam I am asked to determine this structure, how to do it so that it is always the right one.\n\nYou would measure or deduce the bond lengths and angles with an appropriate experiment, and see whether your data fits your prediction. The predictions work pretty well for some compounds made of H, C, N, O atoms, less so for third row main group elements, and they mostly break down for transition metals. 
Also, the predicted angles mostly will be approximations (except in the case of high symmetry, e.g. methane or acetylene).\n\nIn an exam, you just have to hope that the examples given work well for the level of theory (or the rules of thumb) you are applying.\n\n[OP's comment in another answer] I know how to apply the VSEPR model: I determine the Lewis structure and observe how many electron pairs the central atom has. I also understand the concept of hybridization and overlapping of atomic orbitals (simply the algebraic combination of wave functions). But, if I am given a molecule (such as acetylene, ozone or ammonia) I don't know how to apply the [valence bond] Theory correctly, I get confused when it is a sigma link or pi link. So, my doubt was focused on that aspect.\n\nA single bond in a Lewis structure is considered a sigma bond. For multiple bonds, one is a sigma bond and the remainder are pi bonds.\n\n• single bond: one sigma bond\n• double bond: one sigma and one pi bond\n• triple bond: one sigma and two pi bonds\n\nThis assumes that the Lewis structure is a good representation of the molecule. If a Lewis structure shows an \"expanded\" octet (or for transition metal complexes), you are better off looking at a molecular orbital diagrams to determine what the bond order is and whether the bond in question is sigma or pi (whether there is a node along the bond axis).\n\nThe classic example of the Lewis structure breaking down is the dioxygen molecule, which is often written with a double bond as a Lewis structure but experimental evidence indicates that there are two unpaired electrons involved in bonding.\n\nFor more detailed info, check \"hybridized atomic orbitals\". You will need Molecular Orbital Theory.\n\nThis is", null, ". In excited state, one electron from 2 s goes to empty 2p, and you will have 1 electron on each 2-shell orbital.\n\nTriple bond is formed using sp-hybridized atoms (one s orbital is \"averaged\" with one p orbital). 
These \"averaged\" s and p are used for sigma-bonds (2 bonds, since there are 2 orbitals), and two p-orbitals are used for pi-bonds (also 2 bonds). So each carbon has 2 sigma (with C and H) and 2 pi (with C both) bonds. It's clear from your first picture.\n\nI've found a good picture for you, which is below. It explains, why acetylene is linear molecule (hence, it's molecular geometry).", null, "• The reasoning here is in reverse, and wrong with that. The molecular structure determines which type of hybridisation (if that model is applicable at all) is used. And hybridisation is not necessary at all to describe the molecular structure of ethyne, it simply is much more convenient to understand the bonding. Also please be more careful with the language you use; atoms cannot be hybridised, only orbitals can. As such, the term 'sp-hybridised atom' is a sloppy and incorrect colloquialism. It will give the wrong impression especially to new students. Dec 2, 2018 at 14:09\n• David Klein in his 3rd ed. of \"Organic Chemistry\" uses this term. I also believe that understanding of hybridization can help us imagine geometry of molecule, because it's easy to do with those orbitals, and since question is about geometry, I've started with hybridization and ended with geometry. If something can be explained in easy way - why not? But I agree that my answer is far from ideal, so would you be so kind as to give a proper explanation? Dec 2, 2018 at 14:25\n• Yes, unfortunately the term gained popularity with many organic chemistry text books. It still is incorrect. The main problem with the interpretation you are presenting is that it is coincidentally predicting the correct molecular structure, but for the wrong reasons. This understanding of hybridisation, is wrong, and will lead to plenty of problems for other molecules. Predicting the linearity of ethyne is nothing for the comment section as there is no easy model that gives this conclusion for the correct reasons. 
Dec 2, 2018 at 14:41\n• Usually tasks like \"predict molecular geometry of very small molecule\" are school|college tasks which can be easily solved with \"sp3 is for tetrahedral, sp2 is for triangular planar, sp - linear\". I even dare to say that some school|college courses don't even mention that bond angles can differ from 180-120-109.5, so I think that in this case hybridization theory can be applied. Tasks like this are often use to master good old \"sp3 is for tetrahedral, sp2 is for triangular planar, sp - linear\". Dec 2, 2018 at 14:51\n• I am well aware of this, and it makes me very, very sad. It is wrong, intrinsically wrong; and just because a plethora of schools refuse to update their curriculum to include newer evidence, we don't have to spread these wrong theories even further. If students don't start with understanding where the limitations of a model system lie, then why bother teaching them anything at all. We are dealing with those misconceptions on a regular basis here, we don't have to add to the pile promoting the falseness. Dec 2, 2018 at 14:59" ]
[ null, "https://i.stack.imgur.com/FnoGi.png", null, "https://i.stack.imgur.com/1dNtU.png", null, "https://i.stack.imgur.com/RLS1m.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91877353,"math_prob":0.91636723,"size":3064,"snap":"2022-27-2022-33","text_gpt3_token_len":683,"char_repetition_ratio":0.123202614,"word_repetition_ratio":0.014814815,"special_character_ratio":0.21507832,"punctuation_ratio":0.098846786,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9750961,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-01T16:59:02Z\",\"WARC-Record-ID\":\"<urn:uuid:c2a47031-0b9e-48d1-ae9f-de165755abb4>\",\"Content-Length\":\"251871\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:96cc3887-aa8e-4aa8-ae60-b2d870b35a02>\",\"WARC-Concurrent-To\":\"<urn:uuid:5e16959f-d160-41b0-bafd-55a5df6d3851>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://chemistry.stackexchange.com/questions/105110/how-to-predict-molecular-geometry-of-acetylene-using-vsepr-model\",\"WARC-Payload-Digest\":\"sha1:QMKCBCSRQS7AESMLU3YMB44JG7LHQZEO\",\"WARC-Block-Digest\":\"sha1:F6EA46LLZTSBZPJEIGTNZ7PLFCVGQRDR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103943339.53_warc_CC-MAIN-20220701155803-20220701185803-00335.warc.gz\"}"}
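The single/double/triple-bond rule stated in the accepted answer above (one sigma bond per Lewis bond, the remainder pi) is mechanical enough to encode. A small illustrative sketch, with the acetylene bond list as an assumed input:

```python
# One sigma bond per Lewis bond; any bond order beyond 1 contributes pi bonds.
def sigma_pi(bond_order):
    return 1, bond_order - 1

# Acetylene H-C#C-H: two C-H single bonds and one C-C triple bond.
bonds = [1, 3, 1]
sigma = sum(sigma_pi(b)[0] for b in bonds)
pi = sum(sigma_pi(b)[1] for b in bonds)
print(sigma, pi)  # 3 sigma bonds, 2 pi bonds
```

This matches the valence bond description in the answer: each carbon forms 2 sigma bonds (one to H, one to C) plus the two pi bonds of the triple bond.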
http://www.cds.caltech.edu/~murray/amwiki/Errata
[ "Errata\n\nThis page contains information about errata for Feedback Systems. The errata listed here include known and corrected errors since the first printing of the book; clicking on the link will take you to a page with additional details. Typos and other relatively minor corrections are listed here but do not have separate pages with additional details (they also do not appear on individual chapter pages).\n\nConventions:\n\n• Each line of text and each line of a displayed equation are counted as a line. Negative line numbers mean number of lines from the bottom of the page. First line on page = 1, last line on page = -1.\n• Entries in boldface indicate errors which might change significantly the interpretation of the text.\n• The version refers to the release number in which the error was found. Errors are corrected in all subsequent releases of the text.\n• The status of the errata uses the following terms:\n• Submitted - erratum has been submitted, but not yet processed\n• Open - erratum has been received, but not yet corrected\n• Pending - erratum has been corrected, but fix is not yet posted\n• Closed - erratum has been corrected and revision is posted\n• Rejected - erratum was rejected (see page for information on why)\n\nSecond Edition\n\nPreprint\n\nPageLineDescriptionVersionStatusDate\n1-2325In Section 1.8, breaking should be braking3.0hClosed28 September 2016\n1-237In Section 1.7, there is a typo/missing word near the end of the section3.0hClosed28 September 2016\n1-237In the air-fuel example in Section 1.8, missing reference to Figure 1.16b3.0hClosed28 September 2016\n1-258In Section 1.8 text on paper mills, \"production unit\" should be \"production units\"3.0hClosed28 September 2016\n1-258In the paper mill description in Section 1.8, \"production unit\" should be \"production units\"3.0hClosed28 September 2016\n10-15In Section 10.3, the caption for Figure 10.11 seems to have a typo3.0hPending13 November 2016\n10-18In Section 10.4, the Figure 
10.14 caption has a grammatical error3.0hPending13 November 2016\n10-18In Section 10.4, Figure 10.14(b) seems to missing the curve for reverse driving phase3.0hPending13 November 2016\n10-2-8In Section 10.1, there is a typo in the oscillation condition equation3.0hPending13 November 2016\n10-2-8In Equation 10.1, there's a typo 0 instead of ')'3.0hDuplicate14 November 2016\n10-3-7In Section 10.1, the word \"circuit\" is misspelled.3.0hPending13 November 2016\n10-7In Section 10.2, part of Figure 10.5 seems to be missing3.0hPending13 November 2016\n11-16In Section 11.4, the Figure 11.10 caption has a typo3.0hPending13 November 2016\n11-519In Section 11.1, the word \"frequencies\" is duplicated3.0hPending13 November 2016\n12-101In Section 12.2, there is a singular vs. plural error when talking about the control signal3.0hPending19 November 2016\n12-10-1In Section 12.2, there is a typo in the equation for P^+(s)3.0hPending19 November 2016\n12-112In Section 12.2, there is a typo in the equation for the approximate inverse of P(s)3.0hPending19 November 2016\n12-11-4In Section 12.2, the word \"inherently\" is misspelled3.0hPending19 November 2016\n12-11-4In Section 12.2, the sentence about sensitivity to model uncertainty has a grammatical issue3.0hPending19 November 2016\n12-14-5In Section 12.3, there is a missing space3.0hPending19 November 2016\n12-1611In Section 12.4, the word \"limitations\" is misspelled3.0hPending19 November 2016\n12-28In Section 12.6, there is a missing comma in the caption of Figure 12.173.0hPending22 November 2016\n12-3-14In Section 12.1, the additional transfer functions for the F != 1 case seem to be incorrect3.0hPending19 November 2016\n12-3018In Section 12.6, a sentence in Theorem 12.2 is grammatically incorrect3.0hPending22 November 2016\n12-354In Section 12.7, there's a typo in the discussion of inner/outer loop design3.0hPending22 November 2016\n12-3511In Section 12.7, there's a grammatical error where it talks about the closed loop system 
H i3.0hPending22 November 2016\n12-4-5In Section 12.1, the vector w corresponding to Fig. 12.1 seems to be incorrect3.0hPending19 November 2016\n12-5In Section 12.2, the feedforward command u fv is mislabeled in Figure 12.33.0hPending19 November 2016\n12-5-3In Section 12.2, the word \"desired\" is misspelled3.0hPending19 November 2016\n12-617In Section 12.2, the condition for Gyv being small seems to be misstated3.0hPending19 November 2016\n13-1510In Section 13.3, the word \"response\" is duplicated3.0hPending22 November 2016\n13-6-11In Section 13.2, the word \"when\" is incorrectly capitalized3.0hPending22 November 2016\n13-78In Section 13.2, there's a grammatical issue in the discussion of system stability3.0hPending22 November 2016\n13-79In Section 13.2, the word \"transfer\" is misspelled3.0hPending22 November 2016\n13-79In Section 13.2, the word \"point\" is duplicated3.0hPending22 November 2016\n2-1119In discussion on PI control in Section 2.2, incorrect formula for G yv when \\omega is small3.0hPending1 October 2016\n2-237In Section 2.1, there is a typo in the discussion of solving for an exponential input3.0hPending30 September 2016\n2-49In Section 2.1, referred is misspelled3.0hPending30 September 2016\n2-55In Example 6.8, signalsith should be signals with3.0hClosed12 October 2016\n2-7-12Equation (2.11) has the wrong formula for the transfer function3.0hPending1 October 2016\n2-79Equation 2.11 is not equal to the preceding expression from which it came3.0hPending4 October 2016\n2-812In Section 2.2, one too many parentheses in an expression for y(t)3.0hPending4 October 2016\n2-815Display equation above equation (2.12) has extra set of parentheses3.0hPending4 October 2016\n2-913In discussion of proportional control in Section 2.2, missing v 0 in displayed equation3.0hPending1 October 2016\n3-10-8In discussion on algebraic loops in Section 3.2, there is an extra \"be\"3.0hPending4 October 2016\n3-1033In Section 3.1, there is a typo for plurality3.0hPending4 
| Location | Erratum | Version | Status | Date |
|---|---|---|---|---|
| 3-17-2 | After Example 3.4, "difference equation" should be "difference equations" | 3.0h | Pending | 4 October 2016 |
| 3-2-3 | In Section 3.2, "stiction" should be "friction" | 3.0h | Rejected | 4 October 2016 |
| 3-20-15 | In Section 3.2, just before the Simulation and Analysis subsection, "different" should be "difference" | 3.0h | Pending | 4 October 2016 |
| 3-312 | In Section 3.2, missing space after a period in the description of phase portraits | 3.0h | Pending | 4 October 2016 |

First Edition

Third printing

(Entries marked [math] refer to formulas rendered as images in the original page; the expressions are not recoverable from the extracted text.)

| Page | Line | Erratum | Version |
|---|---|---|---|
| 10 | -1 | In the "Instrumentation" subsection, "axon of the giant squid" should be "giant axon of the squid" | 2.10c |
| 37 | 4 | In Example 2.1, the state space equations for the balance system are missing a divisor of J_t in one of the terms | 2.10c |
| 38 | 20, 21 | In equation (2.12), [math] should be x (twice) | 2.10e |
| 40 | 2 | In Figure 2.7, the coefficients are for an update period of one day | 2.10c |
| 42 | 2 | In Figure 2.9, "solid" and "dashed" should be swapped | 2.10e |
| 43 | 12 | In displayed equation for [math], leading minus sign is missing from rightmost expression | 2.10d |
| 47 | -6 | In Example 2.5, expression for the step response is missing some factors | 2.10d |
| 49 | 19 | In the normalized state space model for the spring-mass system, the derivative should be with respect to [math] | 2.11b |
| 49 | -10 | In Example 2.7, reference to the previous example should be Example 2.1, not Section 2.1 | 2.11b |
| 49 | -8 to -4 | In Example 2.7, q should be replaced by p (three places) and theta (one place) | 2.10d |
| 61 | -4 | In Exercise 2.3, "disrete" should be "discrete" | 2.10e |
| 62 | -10 | In Exercise 2.4, a = 0.25 should be a = 0.75 | 2.10e |
| 84 | 11 | In the online version of the text, there is a formatting error in equation (3.23) | 2.10c |
| 86 | -1 | In equation (3.26), q_0 is not defined | 2.10c |
| 100 | -6 | In Example 4.4, second simplifying assumption should be l/J_t = 1 (remove extra factor of m) | 2.10d |
| 105 | -3 | On right hand side of displayed equation, x should be x_j | 2.10c |
| 107 | -7 | At the end of Example 4.6, the roots should have negative real part instead of being positive | 2.10d |
| 110 | -5 | Near end of Example 4.11, denominator of f'(z_2e) term should be squared | 2.10d |
| 113 | -14 | In Example 4.10, \dot V = 0 should be treated as negative semidefinite, not positive | 2.10d |
| 116 | 3, 6 | In Example 4.11, Lyapunov equation entries for f'_1 and f'_2 are switched (solution is correct) | 2.10d |
| 121 | 10 | In Example 4.13, inequality should be for x_1 instead of x_2 | 2.10c |
| 127 | 4 | In Exercise 4.2, the q_in term should be positive | 2.10d |
| 128 | 6 | In Exercise 4.12, some additional assumptions are required for oscillation | 2.10e |
| 132 | 1 | Add a comma after "For other systems" for clarity | 2.10c |
| 135 | 10 | In displayed equation above equation (5.5), there is an extraneous infinity symbol after the summation | 2.10f |
| 138 | 14 | In Example 5.3, dot missing from q in middle term of displayed equation for dynamics with damping | 2.10d |
| 138 | -11 | In Example 5.3, omega_d should have terms in the square root reversed | 2.11b |
| 140 | 0 | In Figure 5.3, state labels are incorrect | 2.10e |
| 140 | -7 | In displayed equation after equation (5.11), exponential is missing parenthesis; should be [math] | 2.10d |
| 141 | -4, -8 | In Example 5.4, there is a sign error in one of the terms | 2.10c |
| 145 | 8-9 | At end of Example 5.5, eigenvalue approximations are incorrect | 2.10d |
| 145 | 8-9 | At end of Example 5.5, real part of eigenvalues has incorrect sign | 2.10e |
| 147 | 10-15 | Impulse response does not include the direct term in the proper manner | 2.11b |
| 148 | 2 | In the caption for Figure 5.7, "drive" should be "driven" | 2.10c |
| 157 | 7 | In equation (5.26), the u(k) term in the integral should be u(tau) | 2.10c |
| 158 | 13, 22 | In Example 5.11, the parameter [math] should be [math] | 2.11b |
| 158 | -8 | After equation (5.31), rolling friction term disappears if v = 0 | 2.10c |
| 158 | -7 | After equation (5.31), the equilibrium velocity should be [math] m/s | 2.11b |
| 158 | -5 | After equation (5.31), the parameter [math] listed in the text should be [math] | 2.11b |
| 161 | 3 | In Example 5.12, expression for the non-equilibrium condition has wrong symbols | 2.10d |
| 162 | 7 | In equation (5.37), the parameter [math] should be [math] | 2.11b |
| 163 | 10 | In equation (5.39), Coriolis term should be + C(q, \dot q) | 2.10d |
| 165 | -4 | In Exercise 5.8d, the left hand side of the first equation should be x(k+1) | 2.10c |
| 178 | -12 | In Example 6.4, the initial control signal is u(0) = k_r r = \omega_c^2 r | 2.11b |
| 179 | -12 | In equation (6.17), elements of the output matrix should be in the opposite order | 2.10d |
| 179 | -8 | In displayed equation after equation (6.17), a_l should be a_1 | 2.10d |
| 182 | 16-20, 25 | In Example 6.5, reference variable should be L_d instead of r | 2.10d |
| 182 | 3 | In Example 6.5, definitions of z1 and z2 are switched | 2.11a |
| 182 | 17 | In second line of displayed equation, missing minus sign in front of the gain matrix | 2.10d |
| 182 | 21 | In the last paragraph of Example 6.5, r_h should be u | 2.10d |
| 183 | 4 | In the caption for Figure 6.7, L_e=20 should be L_e=30 | 2.10d |
| 183 | -7 | In equation (6.22), [math] should be [math] | 2.11b |
| 184 | 17 | For underdamped second order system, eigenvalues are listed incorrectly | 2.10d |
| 185 | -3 | The formula given for the rise time of a second order system in Table 6.1 is only approximate | 2.11a |
| 186 | 5 | Before Example 6.6, "is Q-value" should be "its Q-value" | 2.10d |
| 186 | -3 | In Example 6.6, the output equation has 'x' for the state instead of 'c' | 2.10c |
| 188 | -2 | In Example 6.7, the A matrix has an error in the (3,4) entry | 2.11b |
| 189 | 5 | In Example 6.7, the open loop eigenvalues are incorrect | 2.11b |
| 189 | 15 | In Example 6.7, the imaginary part of the first pair of eigenvalues is missing factor of i | 2.10d |
| 191 | -5 | In Example 6.8, the state space dynamics have errors in the trigonometric terms | 2.10c |
| 192 | 4-14 | In Example 6.8, the state error should be \xi and the input should be F | 2.10d |
| 192 | -15 | Diagonal entries for Q_v should be rho | 2.10c |
| 196 | 19, 23 | In Example 6.10, the system input for the linearized model should be w instead of u | 2.11a |
| 196 | -1 | In Example 6.10, the reference gain should be k_r = a_1/b | 2.11b |
| 204 | 6 | In Example 7.1, the output equation has 'x' for the state instead of 'c' | 2.10c |
| 206 | 2 | In expression for W_0, (3, 1) entry should be [math] (remove factor of [math] in second term) | 2.10c |
| 207 | 13 | "going zero" should be "going to zero" | 2.10d |
| 213 | 8 | Estimator dynamics in Theorem 7.3 are missing B k_r r term | 2.10c |
| 217 | -6, -5 | In Example 7.5, descriptions for [math] (disturbance) and [math] are switched | 2.10c |
| 219 | -5 | Change gain matrix [math] to [math] for consistency | 2.10d |
| 220 | 17 | In expression for B(t), missing parenthesis at end of subscripted expression | 2.11a |
| 220 | -12 | Above displayed equation for the gain scheduled control law, [math] should be replaced by [math] | 2.10d |
| 221 | -15 | In Example 7.5, "where were" should be "which were" | 2.10d |
| 224 | 4 | Output in Kalman decomposition for Example 7.7 is missing state error term | 2.10c |
| 225 | -4 | "phenomena knows" should be "phenomena known" | 2.10c |
| 229 | 10 | Henrik Bode should be Hendrik Bode | 2.11a |
| 233 | -8 | In equation (8.6), matrix inverse is computed incorrectly | 2.10d |
| 235 | 14 | In displayed equation above equation (8.9), exponential term on right hand side should be [math] (no minus sign) | 2.10d |
| 236 | -9 | In Example 8.2, resister should be resistor | 2.11b |
| 238 | -9 | In equation (8.15), second order partial derivative is written incorrectly | 2.10c |
| 238 | -1 | In Example 8.4, boundary condition and coefficients for psi(x) are incorrect | 2.10c |
| 241 | -12 | Before Example 8.5, "related" should be "relate" | 2.10d |
| 242 | 7 | In Example 8.5, leading term in denominator for H_\theta F should be cubic in s | 2.10d |
| 247 | 2 | In Example 8.6, numerator gain term for G_ur should be k_r instead of k_1 | 2.10d |
| 247 | 11 | Last equation in Example 8.6 has sign error in the second term | 2.10c |
| 247 | 11 | Last equation in Example 8.6 has errors in numerator expressions | 2.10d |
| 249 | 6 | In Example 8.7, remove [math] from numerator of expression for [math] | 2.10d |
| 256 | -10, -9 | In Example 8.9, u should be v in the first displayed equation and the following line | 2.11a |
| 262 | -2 | In Exercise 8.2, text should read "transfer function from [math] to [math]" (instead of [math]) | 2.10d |
| 263 | -13 | In Exercise 8.5, k in the denominator of G(s) should be (k+1) | 2.10c |
| 264 | 9 | In Exercise 8.6, second term in expression for b_n should be [math] | 2.10d |
| 272 | 10, 11 | In the paragraph above Example 9.3, G(*) should be L(*) | 2.11a |
| 272 | -1 | In Example 9.3, intersection of negative real axis should be at [math] -0.5 | 2.10d |
| 273 | -5 | In Example 9.4, "that that" should be "that" | 2.10d |
| 274 | -2 | In right hand expression of last displayed equation of Example 9.4, [math] in the denominator should be [math] | 2.10d |
| 275 | 1 | In the caption for Figure 9.7, the transfer function should match equation (9.4) | 2.11a |
| 275 | 3 | In caption for Figure 9.7, "intersections" should be "intersects" | 2.10d |
| 275 | -5 | In Example 9.5, "it is seems" should be "it seems" | 2.10d |
| 282 | -11, -9 | In Example 9.9, 'gain margin' g_m should be 'stability margin' s_m | 2.10d |
| 282 | -7 | In Example 9.9, the gain margin for Figure 9.12 should be [math] | 2.10d |
| 283 | 12 | In equation (9.8), the right hand side should be evaluated at omega0 | 2.11a |
| 283 | -6 | Missing parenthesis in [math] | 2.10d |
| 284 | -13 | In Example 9.10, transfer function zero is at [math] | 2.10d |
| 289 | 7 | In equation (9.14), [math] should be [math] | 2.10d |
| 292* | 10 | In Exercise 9.7, [math] should be [math] | 2.11a |
| 296 | 1 | Caption for Figure 10.3b should be "Derivative action" | 2.10c |
| 297 | 11 | After equation (10.5), low frequency limit should be 1/T_d instead of T_d | 2.10c |
| 299 | -10 | Closed loop time constant after equation (10.6) has an extra factor of 2 | 2.11a |
| 300 | 4 | The b k_i term in the characteristic polynomial has an extra factor of s | 2.10c |
| 300 | 7 | In displayed equation above equation (10.7), the final term should be [math] (not squared) | 2.10d |
| 302 | 0 | In Figure 10.7b, [math] should be [math] | 2.10c |
| 304 | -8 | In Example 10.4, z should be zeta | 2.11a |
| 304 | -1 | In Example 10.4, proportional gain [math] should be [math] | 2.10d |
| 305 | -1 | In equation (10.12), [math] should be [math] | 2.11a |
| 306 | 1 | Right after equation (10.12), [math] should be [math] | 2.11a |
| 308 | -9 | In Section 10.5, PID derivative filter should have time constant [math] | 2.10d |
| 309 | 9 | In equation (10.4), upper limit of the integral should be t, not infinity | 2.11a |
| 310 | -4 | The expressions for the PID parameters for the op amp implementation are incorrect | 2.10d |
| 318 | -4 to -1 | Reference input not included in Figure 11.2 and dimensions of blocks are listed incorrectly | 2.10c |
| 319 | 0 | In Figure 11.3, the label 'y' at the input to P_2 should be removed | 2.10c |
| 326 | -1 | In equation (11.11), delete extraneous "[rad]" | 2.10d |
| 328 | -11 | In Example 11.5, "[math]" should be "[math]" | 2.10c |
| 328 | -6 | In Example 11.5, "dotted" line is actually "dash-dotted" line | 2.11a |
| 329 | 3, 5 | In the Figure 11.10 caption and Example 11.5 text, "dotted" line is actually "dash-dotted" line | 2.11a |
| 330 | 2 | In the caption for Figure 11.11, Fz should be F1 | 2.10c |
| 334 | 0 | In Figure 11.13a, curves should be labeled with z instead of b | 2.10d |
| 338 | -6 | At end of Example 11.11, "the can be achieved" should be "that can be achieved" | 2.10d |
| 339 | -9 | In expression for I_1, limits have an extra factor of i | 2.10c |
| 340 | 3-9 | Contour integral in derivation of Bode's integral formula is analyzed incorrectly | 2.10c |
| 341 | -8 | In Example 11.12, value for -mg should be -39.2 | 2.11a |
| 341 | -5 | In Example 11.12, transfer functions for inner loop and outer loop dynamics are incorrect | 2.11b |
| 342 | 5 | In Example 11.12, error should be in [math], not [math] | 2.11a |
| 342 | 13 | In Example 11.12, after displayed equation for C_o(s), frequency where phase lead flattens out should be b_o/10 | 2.10d |
| 342 | -16 | In Example 11.12, transfer functions for inner loop and outer loop dynamics are incorrect | 2.11b |
| 342 | -10 | In Example 11.12, gain term in [math] should be -2 instead of 0.8 | 2.11a |
| 344 | -4 | In Exercise 11.5, final expression for [math] should be [math] | 2.11a |
| 345 | -10 | In Exercise 11.10, controller gain at frequency [math] should be [math] | 2.10d |
| 357 | -6, -8 | For Youla parameterization, P = b(s)/a(s) = B(s)/A(s) and C0 = G0(s)/F0(s) | 2.10c |
| 359 | -6, -4 | In equation (12.13) and the displayed equation above it, T should be -T | 2.10d |
| 361 | 13 | Missing end parenthesis at end of sentence | 2.10c |
| 363 | 17 | In Example 12.8, repeated [math] in denominator of transfer function for [math] should be [math] | 2.10d |
| 365 | -13 | The reference to the maximum complementary sensitivity should be to equation (12.7) | 2.10c |
| 366 | -16 | In equation (12.16), a is not defined and a factor of a is missing in the numerator of the last term | 2.10c |
| 367 | -8 | In equation (12.18), a factor of [math] is missing from the numerator | 2.10c |
| 368 | 7 | In equation (12.21), final term in expression for [math] should be [math] | 2.10d |
| 369 | -12 | Replace "dashed" with "shaded" in the description of Figure 12.16a | 2.10c |
| 371 | -4 | Reference to Figure 12.17b should be to Figure 12.17 (right) | 2.10c |
| 376 | 2 | In Exercise 12.11, first term in numerator for [math] should be [math] | 2.10d |

Second printing

| Page | Line | Erratum | Version |
|---|---|---|---|
| 21 | 7 | "Stanford one the competition" should be "Stanford won the competition" | 2.10c |
| 29 | -12 | Missing space after sentence ending with "controlled differential equation" | 2.10b |
| 52 | 0 | In Figure 2.16, theta should be O | 2.10b |
| 85 | -7 | Equation (3.24) should have c in the numerator instead of c0 | 2.10b |
| 87 | 3 | In equation (3.27), the state used for the output should be c, not x | 2.10b |
| 97 | -11, -4 | Equation (4.4) is labeled incorrectly | 2.10b |
| 107 | 12-15 | In Example 4.6, u_d is not computed correctly | 2.10b |
| 115 | -5 | In Example 4.11, Taylor series term is missing factor of 1/2 | 2.10b |
| 118 | 17 | After equation (4.18), solution trajectory should be x(t; a) instead of x(t: a) | 2.10b |
| 121 | 5 to 6 | Global Lyapunov stability requires additional conditions | 2.10b |
| 152 | -5 | In the complex input u = exp(s t), s should be i omega, not i omega t | 2.10b |
| 157 | -15 | Missing factor of B in displayed equation after equation (5.28) | 2.10b |
| 171 | 8 | In equation (6.5), the 4, 3 entry of W_r should be g m^2 l^2 M_t/mu | 2.10b |
| 174 | -3 | In Example 6.3, z_2 should be x_1/omega instead of x_2/omega | 2.10b |
| 179 | -3 | Reference to equation (6.17) should be to equation (6.15) | 2.10b |
| 216 | -7 to -3 | F R_v F^T term dropped in proof of Theorem 7.4 | 2.10b |
| 221 | -12, -6 | In Example 7.6, the coefficient in front of thetadot should be v/b | 2.10b |
| 222 | 4, 5 | In equation (7.25), the last two endpoint equations are not correct | 2.10b |
| 228 | 13 | In Exercise 7.11, output to check is the feedforward signal | 2.10b |
| 228 | -12 | In Exercise 7.13, missing equals sign in covariance specification | 2.10b |
| 234 | -12 | In displayed equation above equation (8.7), U should be u in the expression for the output | 2.10b |
| 237 | 4 | In Figure 8.3, the gain falls off at omega = a R1 k / R2 | 2.10b |
| 237 | 5 | In Example 8.3, a = 10 rad/s | 2.10b |
| 238 | 8 | In Example 8.3, a = 10 rad/s | 2.10b |
| 240 | -12 | After equation (8.17), "looses" should be "loses" | 2.10b |
| 248 | 2 | Formula for Ger should have the ' on dp, not dc | 2.10b |
| 248 | 7 | Formula for Ged should not have the ' on dc and is missing a minus sign | 2.10b |
| 254 | -7 | Replace [math] with [math] | 2.10b |
| 255 | 8 | Replace [math] with [math] | 2.10b |
| 256 | 0, 5, 6 | In Figure 8.15 and accompanying text a should be omega0 | 2.10b |
| 264 | 16 | In Exercise 8.8, second transfer function should be G_yn instead of G_yd | 2.10b |
| 276 | -6, -2 | In Example 9.6, the critical gain is k = 0.5 instead of k = 1 | 2.10b |
| 277 | 4 | In Figure 9.8, the gain of the system is k = 1 | 2.10b |
| 278 | 2 | In proof of Theorem 9.3, "residues of the poles of is" should be "residue for the poles is" | 2.10b |
| 284 | -10 | In Example 9.10, there is an extra factor of a in the second term of the step response | 2.10b |
| 335 | 11, 13 | In Example 11.9, the bound on the gain crossover frequency should be 6.48 rad/sec | 2.10b |
| 351 | 8 | Riemann sphere has radius (not diameter) 1 | 2.10b |
| 351 | 10 | d(P_1, P_2) is the longest chordal distance, not shortest | 2.10b |
| 357 | 0 | In Figure 12.8, the signs of A and B are reversed | 2.10b |
| 363 | -18 | In Example 12.8, process zero is at s = -2 | 2.10b |
| 363 | -17 | In Example 12.8, controller zero is at s = 3.5 | 2.10b |
| 364 | -10 to -4 | In Example 12.9, poles are at -a, -p1, -p2 | 2.10b |
| 367 | -1 | Remove extra word "in" | 2.10b |
| 371 | -11 | After equation (12.23), generalized error is z, not w | 2.10b |
| 379 | 23 | Desborough is misspelt (p 380, line 1 in online version) | 2.10b |

First printing

| Page | Line | Erratum | Version |
|---|---|---|---|
| 15 | 6 | In Figure 1.11, caption refers to "numbers in circles" that are not present in figure | 2.9d |
| 19 | -11 | The Wright brothers made their first successful flight in 1903, not 1905 | 2.9d |
| 25 | -1 | In Exercise 1.1, "loosing" should be "losing" | 2.9d |
| 32 | 12 | In discussion on disturbance signals, "can" should be "cannot" | 2.9d |
| 62 | -6 | In Exercises 2.4 and 5.9, the (2,1) entry of the dynamics matrix should be ab-b | 2.9d |
| 63 | -8 | In Exercise 2.7, reference to inverted pendulum may be confusing | 2.9d |
| 80 | 5 | In Equation (3.19), the second term in the window dynamics should contain \rho c | 2.9d |
| 91 | -6 | In Exercise 3.2: use observable form, delta is the steering angle and "title" should be "tilt" | 2.9d |
| 92 | -9 | State space dynamics in Exercise 3.5 are not correct | 2.9d |
| 106 | 7 | Solution for x_2j in block diagonal form discussion has sign errors | 2.9d |
| 107 | 12-23 | Closed loop dynamics matrix is incorrect in Example 4.6 | 2.9d |
| 108 | 8 | Missing γ in equation (4.9) | 2.9d |
| 110 | 11, 13 | In Example 4.8, the linearized dynamics should contain 3 r_e, not 2 r_e | 2.9d |
| 112 | -1 | In Example 4.9, V(x) should be V(z) | 2.9d |
| 117 | 3 | Typo in the caption for Figure 4.15 | 2.9d |
| 126 | -5 | In Exercise 4.1, x_0 should not be subtracted from time-shifted solution | 2.9d |
| 126 | -5 | In Exercise 4.1, tau should be defined as t-t_0 | 2.9d |
| 127 | 5 | In Exercise 4.2, a_e should be a | 2.9d |
| 127 | 15 | Sign error in second Lyapunov function in Exercise 4.4 | 2.9d |
| 128 | 3 | In Exercise 4.6, Pm/J should be taken as 1 and factor of 1/2 is misplaced | 2.9d |
| 129 | -2 | In Exercise 4.14c, the transformation T and its inverse are swapped | 2.9d |
| 131 | -15 | Extraneous text "!linear" in Section 5.1 | 2.9d |
| 135 | 12 | In equation (5.5), the upper limit of the integral should be t | 2.9d |
| 144 | -7 | Example 5.5 includes a damping term not shown in Figure 5.4 | 2.9d |
| 144 | -4 | In Example 5.5 and 5.6, m1 and m2 should be m | 2.10a |
| 145 | 5 | In Examples 5.5 and 5.6, dynamics in modal form should use z, not x | 2.9d |
| 146 | -15 | Equation (5.15) is linear in the initial conditions and *input* | 2.10a |
| 148 | -1 | In Example 5.5 and 5.6, m1 and m2 should be m | 2.10a |
| 149 | 8 | In Examples 5.5 and 5.6, dynamics in modal form should use z, not x | 2.9d |
| 154 | -10 | In Example 5.8, the third nodal equation should be deleted | 2.10a |
| 155 | 0 | The labels for v2 and v3 are misplaced in Figure 5.12 | 2.10a |
| 156 | -10 | In AFM dynamics, c and k are missing subscript 2 | 2.9d |
| 157 | 7 | Missing factor of A in second term of equation (5.26) | 2.10a |
| 164 | -11 | Parameter values are missing in Exercise 5.3 | 2.9d |
| 164 | -3 | Remove part (a) in Exercise 5.6 | 2.9d |
| 165 | 11 | In Exercise 5.7, the 5% settling time should be 3 tau instead of 2 tau | 2.9d |
| 165 | 16 | In Exercise 5.8, the initial condition is x_0 | 2.9d |
| 166 | 3 | In Exercises 2.4 and 5.9, the (2,1) entry of the dynamics matrix should be ab-b | 2.9d |
| 168 | 12 | The control matrix in the equilibrium set should be 'B' instead of 'b' | 2.10a |
| 169 | -12 | In heuristic derivation of reachability test, α is missing subscript | 2.9d |
| 180 | -2 | Formula for k_r in equation (6.21) of Theorem 6.3 is incorrect | 2.9d |
| 181 | 9 | Missing 's' in expression for characteristic polynomial in discussion of eigenvalue assignment | 2.9d |
| 183 | -7 | In equation (6.22), k omega^2 should be k omega_0^2 | 2.9d |
| 184 | -10 | In equation (6.24), the sign of the sin(omega t) term is incorrect for zeta less than one | 2.9d |
| 185 | 4 | Values of zeta in the caption for Figure 6.8 don't match the figure | 2.9d |
| 188 | -2 | Dynamics matrix in Example 6.7 has errors in (3,4) and (4,4) entries | 2.9d |
| 189 | 22 | Numerical gains for Example 6.7 are incorrect | 2.9d |
| 197 | 3 | In Figure 6.15, reference velocity is v_r = 20 m/s | 2.10a |
| 198 | 14 | In Exercise 6.4, need to assume A is invertible and C A^-1 B is nonzero | 2.9d |
| 199 | -1, -2 | In AFM dynamics, c and k are missing subscript 2 | 2.9d |
| 200 | 9 | In Exercise 6.13, R should be removed and rho should be rho_1 | 2.9d |
| 206 | 10, 11 | In the last paragraph of Section 7.1, "reachable" should be replaced by "observable" | 2.9d |
| 209 | 2 | In Example 7.2, the observer gain is given by equation (7.11) | 2.9d |
| 214 | -13 | In Example 7.4, hats are missing in displayed equation for u | 2.10a |
| 220 | 12 | In computation of error dynamics, u_ff argument is missing | 2.9d |
| 220 | -13 | Minus sign is missing in gain scheduled feedback | 2.9d |
| 224 | -7 | References to first and second equations are switched in computer implementation discussion | 2.9d |
| 240 | -16, -13 | Sign error in equation (8.17) - (sI - A) should be (A - sI) | 2.9d |
| 240 | -11 | Explanation of the lack of zeros when B or C is full rank is confusing | 2.9d |
| 241 | -4, -6 | In Example 8.5, q should be replaced by theta | 2.9d |
| 247 | -1 | In section on pole/zero cancellations, pole and zero are at s = -a | 2.10a |
| 248 | 1 | In section on pole/zero cancellations, pole and zero are at s = -a | 2.10a |
| 249 | 3 | Location of the process pole is missing a zero in figure caption for cruise control example | 2.9d |
| 250 | -16 | Missing parenthesis in exponential for output signal | 2.10a |
| 253 | -1 | In description of Bode plot for second order transfer function, 'a' should be omega_0 | 2.10a |
| 254 | 12 | In description of Bode plot for second order transfer function, 'a' should be omega_0 | 2.10a |
| 254-5 | -10 to 9 | In Example 8.8, explanation of effects of poles and zeros is incorrect and confusing | 2.10a |
| 256 | 1 | Caption for Figure 8.15b is missing an s in the numerator | 2.9d |
| 257 | 5 | In Example 8.9, the expression for sigma is not correct | 2.10a |
| 272 | 8 | Before Example 9.3, range of theta should be -pi/2 to pi/2 | 2.10a |
| 277 | 13 | In Theorem 9.3, N should be Z | 2.9d |
| 278 | 3 | Missing factor of 2 pi in residue formula for proof of Theorem 9.3 | 2.10a |
| 288 | -4, -3 | Exponential signal is incorrect on the left-hand side of harmonic expansion equation | 2.9d |
| 292 | 18 | In Exercise 9.9, 10/a should be 1/tau | 2.9d |
| 307 | -9 | Toward end of Section 10.4, k_i should be k_t | 2.9d |
| 308 | 0 | Figure 10.11 is missing filter on derivative term | 2.9d |
| 310 | -4 | In Figure 10.13, the indices on the resistors and capacitors are incorrect | 2.9d |
| 311 | -3 | Missing ydot after equation (10.16) | 2.9d |
| 313 | 8, 9 | In Exercise 10.1, the second term in the denominator should be kp not kd | 2.9d |
| 314 | -5 | In Exercise 10.11, the dynamics for x_2 should test for e less than e_0, not e less than 1 | 2.9d |
| 315 | -8 | Extraneous 'LDH' and comment in the text | 2.9d |
| 319 | 0 | Arrow labeled u_fd in Figure 11.3 is in the wrong direction | 2.9d |
| 319 | 4, -2 | The signal u_ff should be u_fr in the caption for Figure 11.3 and associated text | 2.9d |
| 319 | -7 | Extraneous text "!design" in Chapter 12 | 2.9d |
| 323 | -9 | Maximum sensitivity occurs at frequency omega_ms not omega_sc | 2.9d |
| 325 | -9 | Proportional gain in Example 11.4 transfer function should be kp | 2.9d |
| 330 | 5 | Denominator of vectored thrust process model should be Js^2 | 2.9d |
| 336 | -8 | Extraneous text "!design" in Chapter 12 | 2.9d |
| 345 | 12 | Exercise 11.4 should be Example 11.4 in Exercise 11.9 | 2.9d |
| 345 | 14 | T_d should be T_f in Exercise 11.9 | 2.9d |
| 345 | -5 | Numerator and denominator are switched in Exercise 11.11 | 2.9d |
| 351 | 2, 3 | Caption for Figure 12.4 doesn't quite match figure | 2.9d |
| 356 | 6, 12 | In Example 12.6, n = -1 should be n = 1 | 2.9d |
| 363 | -11 | "Assigning a closed loop zero" should be "assigning a closed loop pole" | 2.9d |
| 369 | -1 | Extraneous text "!design" in Chapter 12 | 2.9d |
| 378 | 15, 16 | Bennett's books on the history of control were published in 1979 and 1993 | 2.10a |
Source: http://www.cds.caltech.edu/~murray/amwiki/Errata
https://www.geeksforgeeks.org/python-summation-of-unique-elements/
[ "", null, "# Python – Summation of Unique elements

Last Updated : 27 Mar, 2023

This article focuses on the operation of getting the unique elements from a list that may contain duplicates, and then computing their sum. This operation has many applications, so it is worth knowing.

Method 1 : Naive method + sum(). In the naive method, we simply traverse the list and append the first occurrence of each element to a new list, ignoring all other occurrences of that element. The summation is then performed using sum().

## Python3

```python
# Python 3 code to demonstrate
# Summation of Unique elements
# using naive method + sum()

# initializing list
test_list = [1, 3, 5, 6, 3, 5, 6, 1]
print(\"The original list is : \" + str(test_list))

# using naive method + sum()
# Summation of Unique elements from list
res = []
for i in test_list:
    if i not in res:
        res.append(i)
res = sum(res)

# printing the summation
print(\"The unique elements summation : \" + str(res))
```

Output

```
The original list is : [1, 3, 5, 6, 3, 5, 6, 1]
The unique elements summation : 15
```

Time Complexity: O(n^2), where n is the number of elements in the list \"test_list\" (each \"not in\" check on a list is itself O(n)).
Auxiliary Space: O(n), where n is the number of elements in the list \"test_list\".

Method 2 : Using set() + sum(). This is the most popular way of removing duplicates from a list. After that, the summation of the list can be performed using sum().

## Python3

```python
# Python 3 code to demonstrate
# Summation of Unique elements
# using set() + sum()

# initializing list
test_list = [1, 5, 3, 6, 3, 5, 6, 1]
print(\"The original list is : \" + str(test_list))

# using set() + sum()
# Summation of Unique elements from list
res = sum(set(test_list))

# printing the summation
print(\"The unique elements summation : \" + str(res))
```

Output

```
The original list is : [1, 5, 3, 6, 3, 5, 6, 1]
The unique elements summation : 15
```

Method #3 : Using Counter()

## Python3

```python
# Python 3 code to demonstrate
# Summation of Unique elements
from collections import Counter

# initializing list
test_list = [1, 5, 3, 6, 3, 5, 6, 1]
print(\"The original list is : \" + str(test_list))

freq = Counter(test_list)
res = sum(freq.keys())

# Summation of Unique elements
print(\"The unique elements summation : \" + str(res))
```

Output

```
The original list is : [1, 5, 3, 6, 3, 5, 6, 1]
The unique elements summation : 15
```

Time Complexity: O(N)
Auxiliary Space: O(N)

Method #4 : Using operator.countOf()

## Python3

```python
# Python 3 code to demonstrate
# Summation of Unique elements
import operator as op

# initializing list
test_list = [1, 3, 5, 6, 3, 5, 6, 1]
print(\"The original list is : \" + str(test_list))

# using operator.countOf() + sum()
# Summation of Unique elements from list
res = []
for i in test_list:
    if op.countOf(res, i) == 0:
        res.append(i)
res = sum(res)

# printing the summation
print(\"The unique elements summation : \" + str(res))
```

Output

```
The original list is : [1, 3, 5, 6, 3, 5, 6, 1]
The unique elements summation : 15
```

Time Complexity: O(N^2) (countOf scans the result list on every iteration)
Auxiliary Space: O(N)

Method #5 : Using numpy

## Python3

```python
# Python 3 code to demonstrate
# Summation of Unique elements
# using numpy
import numpy as np

# initializing list
test_list = [1, 3, 5, 6, 3, 5, 6, 1]
print(\"The original list is : \" + str(test_list))

# using numpy
# Summation of Unique elements from list
res = np.sum(np.unique(test_list))

# printing result
print(\"The unique elements summation : \" + str(res))
# This code is contributed by Edula Vinay Kumar Reddy
```

Output

```
The original list is : [1, 3, 5, 6, 3, 5, 6, 1]
The unique elements summation : 15
```

Time Complexity: O(N log N) (np.unique sorts its input)
Auxiliary Space: O(N)

Method #6 : Using list comprehension

## Python3

```python
test_list = [1, 3, 5, 6, 3, 5, 6, 1]
print(\"The original list is : \", test_list)
res = sum([i for i in set(test_list)])
print(\"The unique elements summation : \", res)
# This code is contributed by Jyothi pinjala.
```

Output

```
The original list is : [1, 3, 5, 6, 3, 5, 6, 1]
The unique elements summation : 15
```

Time Complexity: O(N)
Auxiliary Space: O(N)" ]
[ null, "https://media.geeksforgeeks.org/gfg-gg-logo.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6260322,"math_prob":0.91232514,"size":4081,"snap":"2023-14-2023-23","text_gpt3_token_len":1297,"char_repetition_ratio":0.1873927,"word_repetition_ratio":0.5970149,"special_character_ratio":0.33692724,"punctuation_ratio":0.1904762,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99270016,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-28T14:41:56Z\",\"WARC-Record-ID\":\"<urn:uuid:0cc2b67f-3ef8-4c55-996c-7a97976f434a>\",\"Content-Length\":\"197792\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:10981a8c-8127-4ca5-a1ed-c5480e5bcc04>\",\"WARC-Concurrent-To\":\"<urn:uuid:fc7628b3-56b1-4b7b-9c03-d1a1cf2f5f3e>\",\"WARC-IP-Address\":\"23.218.218.83\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/python-summation-of-unique-elements/\",\"WARC-Payload-Digest\":\"sha1:EVOYSIPDOER6276FNSECO3GYXZZS3D4G\",\"WARC-Block-Digest\":\"sha1:YMSB66FN7IP273QMVUAEIZYVDXXG772A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296948867.32_warc_CC-MAIN-20230328135732-20230328165732-00466.warc.gz\"}"}
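The methods in the article above can be cross-checked against one another. The sketch below is not part of the original page; it reruns the naive, set-based, and Counter-based variants on the same input and confirms they agree:

```python
# Cross-check of three of the approaches described above: they must all
# produce the same unique-element sum for the same input list.
from collections import Counter

def unique_sum_naive(lst):
    # first-occurrence scan, as in Method 1
    seen = []
    for x in lst:
        if x not in seen:
            seen.append(x)
    return sum(seen)

def unique_sum_set(lst):
    # set() removes the duplicates, as in Method 2
    return sum(set(lst))

def unique_sum_counter(lst):
    # Counter keys are exactly the distinct elements, as in Method 3
    return sum(Counter(lst).keys())

test_list = [1, 3, 5, 6, 3, 5, 6, 1]
results = [unique_sum_naive(test_list),
           unique_sum_set(test_list),
           unique_sum_counter(test_list)]
print(results)  # [15, 15, 15]
```

The function names here are my own; only the underlying techniques come from the article.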
https://thispointer.com/pandas-select-first-column-of-dataframe-in-python/
[ "# Pandas: Select first column of dataframe in python

In this article, we will discuss different ways to get or select the first column of a dataframe as a series or list object.

There are different ways to select the first column of a dataframe. Let's discuss them one by one.

## Use iloc[] to select first column of pandas dataframe

In Pandas, the Dataframe provides an attribute iloc[] to select a portion of the dataframe using position-based indexing. This selected portion can be a few columns or rows. We can use this attribute to select only the first column of the dataframe. For example,

```python
# Select first column of the dataframe as a series
first_column = df.iloc[:, 0]
```

We selected a portion of the dataframe as a series object that included all rows, but only the first column of the dataframe.

How did it work?

The syntax of dataframe.iloc[] is like,

```python
df.iloc[row_start:row_end, col_start:col_end]
```

Arguments:

- row_start: The row index/position from where it should start selection. Default is 0.
- row_end: The row index/position where it should end the selection, i.e. select till row_end-1. Default is till the last row of the dataframe.
- col_start: The column index/position from where it should start selection. Default is 0.
- col_end: The column index/position where it should end the selection, i.e. select till col_end-1. Default is till the last column of the dataframe.

It returns a portion of the dataframe that includes rows from row_start to row_end-1 and columns from col_start to col_end-1.

To select the first column of the dataframe, select from column index 0 till 1, i.e. (:1), and select all rows using the default values (:),

```python
# Select first column of the dataframe as a dataframe
first_column = df.iloc[:, :1]
```

We provided the range to select the columns from position 0 till 1 to select the first column, therefore it returned a dataframe. If you want to select the first column as a series object, then just pass 0 instead of a range. For example,

```python
# Select first column of the dataframe as a series
first_column = df.iloc[:, 0]
```

Checkout the complete example to select the first column of a dataframe using iloc,

```python
import pandas as pd

# List of Tuples
employees = [('Jack', 34, 'Sydney', 5),
             ('Riti', 31, 'Delhi', 7),
             ('Aadi', 16, 'London', 11),
             ('Mark', 41, 'Delhi', 12)]

# Create a DataFrame object
df = pd.DataFrame(employees,
                  columns=['Name', 'Age', 'City', 'Experience'])

print(\"Contents of the Dataframe : \")
print(df)

# Select first column of the dataframe as a dataframe object
first_column = df.iloc[:, :1]

print(\"First Column Of Dataframe: \")
print(first_column)
print(\"Type: \", type(first_column))

# Select first column of the dataframe as a series
first_column = df.iloc[:, 0]

print(\"First Column Of Dataframe: \")
print(first_column)
print(\"Type: \", type(first_column))
```

Output:

```
Contents of the Dataframe :
   Name  Age    City  Experience
0  Jack   34  Sydney           5
1  Riti   31   Delhi           7
2  Aadi   16  London          11
3  Mark   41   Delhi          12
First Column Of Dataframe:
   Name
0  Jack
1  Riti
2  Aadi
3  Mark
Type: <class 'pandas.core.frame.DataFrame'>
First Column Of Dataframe:
0    Jack
1    Riti
2    Aadi
3    Mark
Name: Name, dtype: object
Type: <class 'pandas.core.series.Series'>
```

We selected the first column of the dataframe.

## Select first column of pandas dataframe using []

We can fetch the column names of the dataframe as a sequence and then select the first column name. Then using that column name, we can select the first column of the dataframe as a series object using the subscript operator, i.e. []. For example,

```python
# Select first column of the dataframe
first_column = df[df.columns[0]]

print(\"First Column Of Dataframe: \")
print(first_column)
print(\"Type: \", type(first_column))
```

Output:

```
First Column Of Dataframe:
0    Jack
1    Riti
2    Aadi
3    Mark
Name: Name, dtype: object
Type: <class 'pandas.core.series.Series'>
```

## Use head() to select the first column of pandas dataframe

We can use the dataframe.T attribute to get a transposed view of the dataframe and then call the head(1) function on that view to select the first row, i.e. the first column of the original dataframe. Then transpose back that selection to have the column contents as a dataframe object. For example,

```python
# Select first column of the dataframe
first_column = df.T.head(1).T

print(\"First Column Of Dataframe: \")
print(first_column)
print(\"Type: \", type(first_column))
```

Output:

```
First Column Of Dataframe:
   Name
0  Jack
1  Riti
2  Aadi
3  Mark
Type: <class 'pandas.core.frame.DataFrame'>
```

It returned the first column of the dataframe as a dataframe object.

## Pandas: Get first column of dataframe as list

Select the first column of the dataframe as a series object using iloc[:, 0] and then call the tolist() function on the series object. It will return the first column of the dataframe as a list object. For example,

```python
# Select first Column
first_column = df.iloc[:, 0].tolist()

print(\"First Column Of Dataframe: \")
print(first_column)
print(\"Type: \", type(first_column))
```

Output:

```
First Column Of Dataframe:
['Jack', 'Riti', 'Aadi', 'Mark']
Type: <class 'list'>
```

It returned the first column of the dataframe as a list.

Summary

We learned different ways to get the first column of a dataframe as a series or list object in python." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.65286326,"math_prob":0.65635604,"size":7177,"snap":"2022-40-2023-06","text_gpt3_token_len":1689,"char_repetition_ratio":0.22640458,"word_repetition_ratio":0.26329115,"special_character_ratio":0.24912916,"punctuation_ratio":0.14042868,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9923425,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-26T13:10:07Z\",\"WARC-Record-ID\":\"<urn:uuid:36f9399d-fa0d-482f-896e-3b66f900fa18>\",\"Content-Length\":\"332368\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5383cf94-fbf2-4176-a887-ac847af2dc54>\",\"WARC-Concurrent-To\":\"<urn:uuid:54a21e65-aac1-4d16-a1a0-f8261917c0f8>\",\"WARC-IP-Address\":\"104.21.34.30\",\"WARC-Target-URI\":\"https://thispointer.com/pandas-select-first-column-of-dataframe-in-python/\",\"WARC-Payload-Digest\":\"sha1:IROBQBM4I3QJSB6OD7OCEAMVKEH3OP54\",\"WARC-Block-Digest\":\"sha1:4HWGG7D2FKQRHMX4MHK6XJB3N3VVR5UG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334871.54_warc_CC-MAIN-20220926113251-20220926143251-00347.warc.gz\"}"}
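The position-based, half-open slicing convention that `iloc` uses is the same one plain Python sequences follow. The dependency-free sketch below (illustrative, not from the original article; the helper name is my own) mimics `df.iloc[r0:r1, c0:c1]` on a nested list, so the row/column bounds can be seen without pandas installed:

```python
# Mimic df.iloc[r0:r1, c0:c1] on a plain list of rows: it keeps rows
# r0..r1-1 and columns c0..c1-1, half-open on both axes like list slicing.
rows = [
    ["Jack", 34, "Sydney", 5],
    ["Riti", 31, "Delhi", 7],
    ["Aadi", 16, "London", 11],
    ["Mark", 41, "Delhi", 12],
]

def iloc_like(data, r0, r1, c0, c1):
    # slice the rows, then slice each surviving row's columns
    return [row[c0:c1] for row in data[r0:r1]]

# the list analogue of df.iloc[:, :1] — every row, first column only
first_column = iloc_like(rows, 0, len(rows), 0, 1)
print(first_column)  # [['Jack'], ['Riti'], ['Aadi'], ['Mark']]
```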
https://joungkyun.gitbooks.io/annyung-3-user-guide/content/pkg-core-php56-geoip.html
[ "# php56-geoip

### Description:

PHP 5.6 geoip extension. Note that this is a different version from the API provided by MaxMind.

### Features:

Function-based sample code:

```php
<?php
$searches = array ('oops.org', 'kornet.net', 'yahoo.com');

try {
    $g = GeoIP_open (GEOIP_MEMORY_CACHE|GEOIP_CHECK_CACHE);
    if ( GeoIP_db_avail (GEOIP_CITY_EDITION_REV0) )
        $gc = GeoIP_open (GEOIP_CITY_EDITION_REV0, GEOIP_INDEX_CACHE|GEOIP_CHECK_CACHE);

    if ( GeoIP_db_avail (GEOIP_ISP_EDITION) )
        $gi = GeoIP_open (GEOIP_ISP_EDITION, GEOIP_INDEX_CACHE|GEOIP_CHECK_CACHE);

    if ( ! is_resource ($g) )
        exit;

    #echo \"TYPE: \" . geoip_database_info ($g) . \"\\n\";

    foreach ( $searches as $v ) {
        $r = geoip_id_by_name ($g, $v);
        print_r ($r);

        if ( is_resource ($gc) ) {
            $rc = GeoIP_record_by_name ($gc, $v);
            print_r ($rc);
        }

        if ( is_resource ($gi) ) {
            $ri = GeoIP_org_by_name ($gi, $v);
            echo \" $ri\\n\";
        }

        #echo \"### \" . geoip_country_code_by_name ($g, $v) . \"\\n\";
        #echo \"### \" . geoip_country_name_by_name ($g, $v) . \"\\n\";
    }

    if ( is_resource ($gc) ) GeoIP_close ($gc);
    if ( is_resource ($gi) ) GeoIP_close ($gi);
    GeoIP_close ($g);
} catch ( GeoIPException $e ) {
    fprintf (STDERR, \"%s\\n\", $e->getMessage ());
    $err = preg_split ('/\\r?\\n/', $e->getTraceAsString ());
    print_r ($err);
}
?>
```

OOP-based sample code:

```php
<?php
$searches = array ('www.example.com', 'oops.org', 'kornet.net', 'yahoo.com');

try {
    $g = new GeoIP (GEOIP_MEMORY_CACHE|GEOIP_CHECK_CACHE);
    if ( GeoIP_db_avail (GEOIP_CITY_EDITION_REV0) )
        $gc = new GeoIP (GEOIP_CITY_EDITION_REV0, GEOIP_INDEX_CACHE|GEOIP_CHECK_CACHE);
    if ( GeoIP_db_avail (GEOIP_ISP_EDITION) )
        $gi = new GeoIP (GEOIP_ISP_EDITION, GEOIP_INDEX_CACHE|GEOIP_CHECK_CACHE);

    #echo \"TYPE: \" . $g->database_info () . \"\\n\";

    foreach ( $searches as $v ) {
        $r = $g->id_by_name ($v);
        print_r ($r);

        if ( GeoIP_db_avail (GEOIP_CITY_EDITION_REV0) ) {
            $rc = $gc->record_by_name ($v);
            print_r ($rc);
        }

        if ( GeoIP_db_avail (GEOIP_ISP_EDITION) ) {
            $ri = $gi->org_by_name ($v);
            echo \"ISP NAME: $ri\\n\";
        }

        #echo \"### \" . $g->country_code_by_name ($v) . \"\\n\";
        #echo \"### \" . $g->country_name_by_name ($v) . \"\\n\";
    }
} catch ( GeoIPException $e ) {
    fprintf (STDERR, \"%s\\n\", $e->getMessage ());
    $err = preg_split ('/\\r?\\n/', $e->getTraceAsString ());
    print_r ($err);
}
?>
```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.542351,"math_prob":0.621237,"size":2345,"snap":"2019-13-2019-22","text_gpt3_token_len":806,"char_repetition_ratio":0.15036309,"word_repetition_ratio":0.29333332,"special_character_ratio":0.3803838,"punctuation_ratio":0.28412256,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9641396,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-20T21:59:50Z\",\"WARC-Record-ID\":\"<urn:uuid:2805e2bb-bc0f-4895-9edb-2a746ebfb4ea>\",\"Content-Length\":\"76666\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c9155c9c-9d11-43eb-9867-6cd581579257>\",\"WARC-Concurrent-To\":\"<urn:uuid:fe8975cb-34e5-4ff0-b80d-62d5e36ea9b7>\",\"WARC-IP-Address\":\"107.170.1.165\",\"WARC-Target-URI\":\"https://joungkyun.gitbooks.io/annyung-3-user-guide/content/pkg-core-php56-geoip.html\",\"WARC-Payload-Digest\":\"sha1:WWFAZPRTRHCU37RODXSASWIH3XR2C4FC\",\"WARC-Block-Digest\":\"sha1:3UP3ACNMFKCUPE73PJSKA73TOFDFQUJM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202471.4_warc_CC-MAIN-20190320210433-20190320232433-00313.warc.gz\"}"}
http://resultados-quiniela.info/one-and-two-step-equations-worksheet/elegant-one-and-two-step-equations-worksheet-for-fresh-algebra-worksheets-inspirational-math-one-step-equations-73-multi-step-equations-worksheet-with-fractions/
[ "# Elegant One And Two Step Equations Worksheet For Fresh Algebra Worksheets Inspirational Math One Step Equations 73 Multi Step Equations Worksheet With Fractions", null, "", null, "Elegant one and two step equations worksheet for fresh algebra worksheets inspirational math one step equations 73 multi step equations worksheet with fractions

### More images of the one and two step equations worksheet

You can download this image at 600 × 800 px or at full size by clicking the download link below.

Click the download link to open the image file directly, then right-click the image and select \"Save image as\". 600 × 800

#### See also the related images below

Thank you for visiting." ]
[ null, "http://resultados-quiniela.info/wp-content/uploads/2018/06/elegant-one-and-two-step-equations-worksheet-for-fresh-algebra-worksheets-inspirational-math-one-step-equations-73-multi-step-equations-worksheet-with-fractions.jpg", null, "http://resultados-quiniela.info/wp-content/uploads/2018/06/elegant-one-and-two-step-equations-worksheet-for-fresh-algebra-worksheets-inspirational-math-one-step-equations-73-multi-step-equations-worksheet-with-fractions.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7372764,"math_prob":0.95620286,"size":2261,"snap":"2019-43-2019-47","text_gpt3_token_len":415,"char_repetition_ratio":0.24014178,"word_repetition_ratio":0.2359882,"special_character_ratio":0.16983636,"punctuation_ratio":0.031161472,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99022007,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-21T13:46:30Z\",\"WARC-Record-ID\":\"<urn:uuid:acd1159a-b48f-4997-ab37-8e3617150049>\",\"Content-Length\":\"47277\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:240027e8-9ae0-4b53-b46b-bf3125553cf6>\",\"WARC-Concurrent-To\":\"<urn:uuid:66e954ea-d5c1-461d-b16e-4377fcf51f34>\",\"WARC-IP-Address\":\"104.27.152.71\",\"WARC-Target-URI\":\"http://resultados-quiniela.info/one-and-two-step-equations-worksheet/elegant-one-and-two-step-equations-worksheet-for-fresh-algebra-worksheets-inspirational-math-one-step-equations-73-multi-step-equations-worksheet-with-fractions/\",\"WARC-Payload-Digest\":\"sha1:RRPKLGJRRHTS4KEHPBMTVSKL5I5GAD2Y\",\"WARC-Block-Digest\":\"sha1:HQJYHLQZLTTXBN6V3YDRFIQFCLKIYKMN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670821.55_warc_CC-MAIN-20191121125509-20191121153509-00405.warc.gz\"}"}
https://proofwiki.org/wiki/Definition:Fibonacci_Nim
[ "# Definition:Fibonacci Nim\n\n## Definition\n\nFibonacci nim is a two-person game whose rules are as follows:\n\n$(1): \quad$ The game starts with one pile of $n$ counters.\n$(2): \quad$ The first player removes a number of counters $c_1$ such that $1 \le c_1 < n$.\n$(3): \quad$ Each player takes it in turns to remove $c_n$ counters such that $1 \le c_n \le 2 c_{n - 1}$ where $c_{n - 1}$ is the number of counters taken in the other player's previous move.\n$(4): \quad$ The person who removes the last counter (or counters) is the winner.\n\n## Examples\n\n### $11$ starting counters\n\nLet a game of Fibonacci nim between player $\text A$ and player $\text B$ have a starting pile of $11$ counters.\n\n$\text A$ removes $3$ counters, leaving $8$.\n\n$\text B$ may remove up to $6$ counters, and takes $1$, leaving $7$.\n\n$\text A$ may remove $1$ or $2$ counters, and takes $2$, leaving $5$.\n\n$\text B$ may remove up to $4$ counters, and takes $1$, leaving $4$.\n\n$\text A$ may remove $1$ or $2$ counters, and takes $1$, leaving $3$.\n\n$\text B$ must remove either $1$ or $2$ counters, leaving $\text A$ in a position to take all the counters next turn.\n\n$\text A$ wins.\n\n### $1000$ starting counters\n\nLet a game of Fibonacci nim between player $\text A$ and player $\text B$ have a starting pile of $1000$ counters.\n\nThe optimal strategy for player $\text A$ is to take $13$ counters.\n\n## Also see\n\n• Results about Fibonacci nim can be found here.\n\n## Source of Name\n\nThis entry was named for Leonardo Fibonacci.\n\n## Historical Note\n\nThe game of Fibonacci nim was reported by Michael J. Whinihan in $1963$ as being the invention of Robert E. Gaskell." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8552176,"math_prob":0.9995672,"size":1878,"snap":"2020-34-2020-40","text_gpt3_token_len":547,"char_repetition_ratio":0.178762,"word_repetition_ratio":0.16060606,"special_character_ratio":0.33013844,"punctuation_ratio":0.15503876,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999027,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-12T20:58:50Z\",\"WARC-Record-ID\":\"<urn:uuid:82ae2b8c-34d1-47a6-afc9-0d94d9a4cf8c>\",\"Content-Length\":\"36556\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d24e8ba8-a594-4384-bdf9-7222fc6eecb5>\",\"WARC-Concurrent-To\":\"<urn:uuid:a25162a2-934f-4aae-9847-c97f08c5cb8b>\",\"WARC-IP-Address\":\"104.27.168.113\",\"WARC-Target-URI\":\"https://proofwiki.org/wiki/Definition:Fibonacci_Nim\",\"WARC-Payload-Digest\":\"sha1:JG4A3ZZK5E7YCSR22MSPTJP3GHX7XIKH\",\"WARC-Block-Digest\":\"sha1:GPYIVTOQIS4XSJ2B3UJ3P5VATOB5WYAR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738944.95_warc_CC-MAIN-20200812200445-20200812230445-00268.warc.gz\"}"}
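The rules above can be turned into a small referee that checks each move for legality. The sketch below is illustrative (the function name and the concrete move list are my own); it replays the 11-counter example game from the definition, in which player A wins:

```python
# A minimal referee for Fibonacci nim: the first move must satisfy
# 1 <= c < n, and every later move 1 <= c <= 2 * (opponent's previous take).
def play(n, moves):
    prev = None
    for c in moves:
        limit = (n - 1) if prev is None else min(n, 2 * prev)
        if not 1 <= c <= limit:
            raise ValueError(f"illegal move: taking {c} with {n} counters left")
        n -= c
        prev = c
    return n  # counters remaining; 0 means the player who just moved won

# One playthrough of the 11-counter example game: A takes 3, 2, 1, 2 while
# B takes 1 on each of its turns; A removes the last counters and wins.
remaining = play(11, [3, 1, 2, 1, 1, 1, 2])
print(remaining)  # 0
```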
https://www.geeksforgeeks.org/difference-sums-odd-even-digits/
[ "# Difference between sums of odd and even digits

Given a long integer, we need to find whether the difference between the sum of its digits at odd positions and the sum of its digits at even positions is 0 or not. The indexes start from zero (index 0 is the leftmost digit).

Examples:

Input: 1212112
Output: Yes
Explanation:
the sum of the odd-position digits is 2+2+1=5
the sum of the even-position digits is 1+1+1+2=5
the difference is 5-5=0, equal to zero.
So print yes.

Input: 12345
Output: No
Explanation:
the sum of the odd-position digits is 2+4=6
the sum of the even-position digits is 1+3+5=9
the difference is 9-6=3, not equal to zero.
So print no.

Approach:

Traverse the digits one by one and accumulate the two sums. If the difference between the two sums is 0, print yes, else no. (The implementations below traverse from the rightmost digit; since only a zero difference matters, the direction of traversal does not affect the answer.)

Below is the implementation of the above approach:

## C++

```cpp
// C++ program for above approach
#include <bits/stdc++.h>
using namespace std;

bool isDiff0(int n){
    int first = 0;
    int second = 0;
    bool flag = true;
    while(n > 0){
        int digit = n % 10;
        if(flag) first += digit;
        else second += digit;
        flag = !flag;
        n = n/10;
    }
    if(first - second == 0) return true;
    return false;
}

int main(){
    int n = 1243;
    if(isDiff0(n)) cout<<\"Yes\";
    else cout<<\"No\";
    return 0;
}
// This code is contributed by Kirti Agarwal(kirtiagarwal23121999)
```

## Java

```java
// Java equivalent of above code
public class Main {
    public static boolean isDiff0(int n) {
        int first = 0;
        int second = 0;
        boolean flag = true;
        while (n > 0) {
            int digit = n % 10;
            if (flag) first += digit;
            else second += digit;
            flag = !flag;
            n = n / 10;
        }
        if (first - second == 0) return true;
        return false;
    }

    public static void main(String[] args) {
        int n = 1243;
        if (isDiff0(n)) System.out.println(\"Yes\");
        else System.out.println(\"No\");
    }
}
```

## Python

```python
# Python program for the above approach
def isDiff0(n):
    first = 0
    second = 0
    flag = True
    while(n > 0):
        digit = n % 10
        if(flag):
            first += digit
        else:
            second += digit
        flag = not flag
        n = int(n/10)
    if(first-second == 0):
        return True
    return False

# driver code
n = 1243
if(isDiff0(n)):
    print(\"Yes\")
else:
    print(\"No\")
```

## C#

```csharp
// C# Program for the above approach
using System;

public class BinaryTree{
    static bool isDiff0(int n){
        int first = 0;
        int second = 0;
        bool flag = true;
        while(n > 0){
            int digit = n % 10;
            if(flag) first += digit;
            else second += digit;
            flag = !flag;
            n = n/10;
        }
        if(first - second == 0) return true;
        return false;
    }

    public static void Main(){
        int n = 1243;
        if(isDiff0(n)) Console.Write(\"Yes\");
        else Console.Write(\"No\");
    }
}
```

## Javascript

```javascript
// JavaScript program for the above approach
function isDiff0(n){
    let first = 0;
    let second = 0;
    let flag = true;
    while(n > 0){
        let digit = n % 10;
        if(flag) first += digit;
        else second += digit;
        flag = !flag;
        n = parseInt(n/10);
    }
    if(first - second == 0) return true;
    return false;
}

let n = 1243;
if(isDiff0(n))
    console.log(\"Yes\");
else
    console.log(\"No\");

// THIS CODE IS CONTRIBUTED BY YASH AGARWAL(YASHAGAWRAL2852002)
```

Output

```
Yes
```

Time Complexity: O(log n)
Auxiliary space: O(1)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5349804,"math_prob":0.9690791,"size":3423,"snap":"2023-40-2023-50","text_gpt3_token_len":1103,"char_repetition_ratio":0.12898508,"word_repetition_ratio":0.25975975,"special_character_ratio":0.34326613,"punctuation_ratio":0.16879432,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99980336,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-08T01:53:26Z\",\"WARC-Record-ID\":\"<urn:uuid:67fb0c31-82ea-48b2-9a3d-4fb087828861>\",\"Content-Length\":\"375087\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d13b4a82-d1db-4393-bb0c-68a1323243ae>\",\"WARC-Concurrent-To\":\"<urn:uuid:b96bb597-91e6-47c5-a470-8cf337baf425>\",\"WARC-IP-Address\":\"108.138.64.6\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/difference-sums-odd-even-digits/\",\"WARC-Payload-Digest\":\"sha1:YKWFAM5FT2LSA6SXUXT7UBZGCLG5COKS\",\"WARC-Block-Digest\":\"sha1:XDOFFDEXWBBBAKPCUWWSGJKIBSCZHMGW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100710.22_warc_CC-MAIN-20231208013411-20231208043411-00052.warc.gz\"}"}
https://answers.everydaycalculation.com/add-fractions/3-35-plus-40-70
[ "Solutions by everydaycalculation.com

3/35 + 40/70 is 23/35.

1. Find the least common denominator or LCM of the two denominators:
LCM of 35 and 70 is 70
2. For the 1st fraction, since 35 × 2 = 70,
3/35 = (3 × 2)/(35 × 2) = 6/70
3. Likewise, for the 2nd fraction, since 70 × 1 = 70,
40/70 = (40 × 1)/(70 × 1) = 40/70
4. Add the two like fractions: 6/70 + 40/70 = 46/70
5. Reduce 46/70 to lowest terms: dividing the numerator and denominator by 2 gives 23/35", null, "" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8734973,"math_prob":0.9986957,"size":649,"snap":"2019-51-2020-05","text_gpt3_token_len":261,"char_repetition_ratio":0.16744186,"word_repetition_ratio":0.0,"special_character_ratio":0.49614793,"punctuation_ratio":0.08,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99656934,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-16T01:42:01Z\",\"WARC-Record-ID\":\"<urn:uuid:b031c5de-b333-43f6-83f2-9441dab4a0fe>\",\"Content-Length\":\"7459\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b392938f-f4f3-49b9-9c77-8a6e06ece3c5>\",\"WARC-Concurrent-To\":\"<urn:uuid:c11d31ee-39e8-4e68-9a0c-232333ae9e20>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/add-fractions/3-35-plus-40-70\",\"WARC-Payload-Digest\":\"sha1:LZXEISYPRCVMTEB7LNPVBJENCNBN2ZGJ\",\"WARC-Block-Digest\":\"sha1:TV5BCHOYVVWI6QPDLWYQZRPZKOQE55PI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541315293.87_warc_CC-MAIN-20191216013805-20191216041805-00417.warc.gz\"}"}
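The worked steps above can be verified with Python's exact rational arithmetic; `fractions.Fraction` reduces results automatically, and `math.lcm` gives the common denominator used in step 1. This sketch is not part of the original page:

```python
# Check the worked example: LCM(35, 70) = 70 and 3/35 + 40/70 = 23/35.
from fractions import Fraction
from math import lcm

common_denominator = lcm(35, 70)
print(common_denominator)  # 70

total = Fraction(3, 35) + Fraction(40, 70)
print(total)  # 23/35  (Fraction reduces 46/70 automatically)
```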
https://thedailywtf.com/articles/To_the_Hexth_Degree
I have a policy that I try to follow regarding duplicate concepts: I'll only post the concept again if the implementation somehow outdoes the previous. You may have guessed by the title, but today's example is from one of the more complex realms of mathematics and computer science: hexadecimal. Today's example is actually the sixth post of its kind. David H's former colleague now holds the \"hex\" prize for using no less than 5,000 lines to convert a byte array to hexadecimal, something which could normally be done with a single line of Java code ...

```java
public abstract class HexadecimalConstants
{
    /**
     * The number of hexadecimal characters per byte.
     */
    public static final int NUMBER_OF_HEXADECIMAL_CHARACTERS_PER_BYTE = 2;

    /**
     * The offset of the first bit within the four bits required to represent an
     */
    public static final int FIRST_BIT_OFFSET = 1;

    /**
     * The offset of the second bit within the four bits required to represent
     */
    public static final int SECOND_BIT_OFFSET = 2;

    /**
     * The offset of the third bit within the four bits required to represent an
     */
    public static final int THIRD_BIT_OFFSET = 3;

    /**
     * The offset of the fourth bit within the four bits required to represent
     */
    public static final int FOURTH_BIT_OFFSET = 4;

    /**
     * The '0' hexidecimal character.
     */
    public static final char ZERO = '0';

    /**
     * The bits that represent the '0' hexidecimal character.
     */
    public static final boolean[] ZERO_BITS = new boolean[]{false, false, false, false};

    /**
     * The '1' hexidecimal character.
     */
    public static final char ONE = '1';

    /**
     * The bits that represent the '1' hexidecimal character.
     */
    public static final boolean[] ONE_BITS = new boolean[]{false, false, false, true};

    /**
     * The '2' hexidecimal character.
     */
    public static final char TWO = '2';

    /**
     * The bits that represent the '2' hexidecimal character.
     */
    public static final boolean[] TWO_BITS = new boolean[]{false, false, true, false};

    /* ... snip 150 or so lines ... */

    /**
     * The 'F' hexidecimal character.
     */
    public static final char F = 'F';

    /**
     * The 'f' lower case alternative to the 'F' hexidecimal character.
     */
    public static final char F_LOWER = 'f';

    /**
     * The bits that represent the 'F' hexidecimal character.
     */
    public static final boolean[] F_BITS = new boolean[]{true, true, true, true};

}
```

And a quick peek inside the bowels of the main conversion class ...

```java
private static char convertBitsToHexadecimalCharacter(boolean bit1,
                                                      boolean bit2,
                                                      boolean bit3,
                                                      boolean bit4) {

    // if the first bit is true - the binary nibble is 1???
    if (bit1) {

        // if the second bit is true - the binary nibble is 11??
        if (bit2) {

            // if the third bit is true - the binary nibble is 111?
            if (bit3) {

                // if the fourth bit is true - the binary nibble is 1111
                if (bit4) {

                    // return the 'F' hexidecimal character
                    return HexadecimalConstants.F;

                // else the fourth bit is false - the binary nibble is 1110
                } else {

                    // return the 'E' hexidecimal character
                    return HexadecimalConstants.E;

                }

            // else the third bit is false - the binary nibble is 110?
            } else {

                // if the fourth bit is true - the binary nibble is 1101
                if (bit4) {

                    // return the 'D' hexidecimal character
                    return HexadecimalConstants.D;

                // else the fourth bit is false - the binary nibble is 1100
                } else {

                    // return the 'C' hexidecimal character
                    return HexadecimalConstants.C;

                }

            }

        /* ... Snipped 100+ lines ... */

        // else the third bit is false - the binary nibble is 000?
        } else {

            // if the fourth bit is true - the binary nibble is 0001
            if (bit4) {

                // return the '1' hexidecimal character
                return HexadecimalConstants.ONE;

            // else the fourth bit is false - the binary nibble is 0000
            } else {

                // return the '0' hexidecimal character
                return HexadecimalConstants.ZERO;

            }
        }
    }
}
```", null, "" ]
[ null, "https://thedailywtf.com/images/inedo/buildmaster-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.65098816,"math_prob":0.91438013,"size":3797,"snap":"2022-27-2022-33","text_gpt3_token_len":925,"char_repetition_ratio":0.18534142,"word_repetition_ratio":0.32169953,"special_character_ratio":0.29233605,"punctuation_ratio":0.13928013,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9913313,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-30T01:28:58Z\",\"WARC-Record-ID\":\"<urn:uuid:5e1e1bdf-955b-478a-9273-9d07d28f6b72>\",\"Content-Length\":\"25898\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:36a95a7d-e73a-4aa0-8c38-dccc5106a8c6>\",\"WARC-Concurrent-To\":\"<urn:uuid:d0322890-8a8f-4f63-a035-319f5e522d03>\",\"WARC-IP-Address\":\"172.67.203.168\",\"WARC-Target-URI\":\"https://thedailywtf.com/articles/To_the_Hexth_Degree\",\"WARC-Payload-Digest\":\"sha1:HB7SDAA43UQLGY3NPAZTJOLGDZFOQH2L\",\"WARC-Block-Digest\":\"sha1:3IP3YKT44HDY6J3PAL5774AXNYN2ZY2K\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103646990.40_warc_CC-MAIN-20220630001553-20220630031553-00743.warc.gz\"}"}
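The "single line" the article alludes to is Java, but the point is language-neutral: each nibble maps to a hex character arithmetically, not via a 16-way if tree. As an illustrative sketch (the `data` bytes are made up), Python's built-in `bytes.hex()` replaces the entire 5,000-line decision tree:

```python
# One-call byte-array-to-hex conversion, doing the work of the
# nibble-by-nibble if/else tree quoted in the article.
data = bytes([0x00, 0x1F, 0xC0, 0xFF])

hex_string = data.hex().upper()
print(hex_string)  # 001FC0FF
```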
https://stackoverflow.com/questions/16097453/how-to-compute-p-value-and-standard-error-from-correlation-analysis-of-rs-cor
[ "# How to compute P-value and standard error from correlation analysis of R's cor()\n\nI have data that contain 54 samples for each condition (x and y). I have computed the correlation the following way:\n\n``````> dat <- read.table(\"http://dpaste.com/1064360/plain/\",header=TRUE)\n> cor(dat\\$x,dat\\$y)\n 0.2870823\n``````\n\nIs there a native way to produce SE of correlation in R's cor() functions above and p-value from T-test?\n\nAs explained in this web (page 14.6)\n\n• Perhaps you're looking for `?cor.test` instead. – A5C1D2H2I1M1N2O1R2T1 Apr 19 '13 at 4:59\n\nI think that what you're looking for is simply the `cor.test()` function, which will return everything you're looking for except for the standard error of correlation. However, as you can see, the formula for that is very straightforward, and if you use `cor.test`, you have all the inputs required to calculate it.\n\nUsing the data from the example (so you can compare it yourself with the results on page 14.6):\n\n``````> cor.test(mydf\\$X, mydf\\$Y)\n\nPearson's product-moment correlation\n\ndata: mydf\\$X and mydf\\$Y\nt = -5.0867, df = 10, p-value = 0.0004731\nalternative hypothesis: true correlation is not equal to 0\n95 percent confidence interval:\n-0.9568189 -0.5371871\nsample estimates:\ncor\n-0.8492663\n``````\n\nIf you wanted to, you could also create a function like the following to include the standard error of the correlation coefficient.\n\nFor convenience, here's the equation:", null, "r = the correlation estimate and n - 2 = degrees of freedom, both of which are readily available in the output above. Thus, a simple function could be:\n\n``````cor.test.plus <- function(x) {\nlist(x,\nStandard.Error = unname(sqrt((1 - x\\$estimate^2)/x\\$parameter)))\n}\n``````\n\nAnd use it as follows:\n\n``````cor.test.plus(cor.test(mydf\\$X, mydf\\$Y))\n``````\n\nHere, \"mydf\" is defined as:\n\n``````mydf <- structure(list(Neighborhood = c(\"Fair Oaks\", \"Strandwood\", \"Walnut Acres\",\n\"Discov. 
Bay\", \"Belshaw\", \"Kennedy\", \"Cassell\", \"Miner\", \"Sedgewick\",\n\"Sakamoto\", \"Toyon\", \"Lietz\"), X = c(50L, 11L, 2L, 19L, 26L,\n73L, 81L, 51L, 11L, 2L, 19L, 25L), Y = c(22.1, 35.9, 57.9, 22.2,\n42.4, 5.8, 3.6, 21.4, 55.2, 33.3, 32.4, 38.4)), .Names = c(\"Neighborhood\",\n\"X\", \"Y\"), class = \"data.frame\", row.names = c(NA, -12L))\n``````\n\nCan't you simply take the test statistic from the return value? Of course the test statistic is the estimate/se so you can calc se from just dividing the estimate by the tstat:\n\nUsing `mydf` in the answer above:\n\n``````r = cor.test(mydf\\$X, mydf\\$Y)\ntstat = r\\$statistic\nestimate = r\\$estimate\nestimate; tstat\n\ncor\n-0.8492663\nt\n-5.086732\n``````" ]
[ null, "https://i.stack.imgur.com/l7Sqn.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7755605,"math_prob":0.9868865,"size":1674,"snap":"2020-34-2020-40","text_gpt3_token_len":560,"char_repetition_ratio":0.10299401,"word_repetition_ratio":0.0,"special_character_ratio":0.3548387,"punctuation_ratio":0.26262626,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9983921,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-08T01:44:23Z\",\"WARC-Record-ID\":\"<urn:uuid:c93d74a7-9620-403f-ab5a-f24ddf20f4f4>\",\"Content-Length\":\"153970\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a7d8f08c-13b5-4668-b0f9-a59cabcde34e>\",\"WARC-Concurrent-To\":\"<urn:uuid:a0d2fa66-22d7-47ce-96a2-84d20beb603f>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://stackoverflow.com/questions/16097453/how-to-compute-p-value-and-standard-error-from-correlation-analysis-of-rs-cor\",\"WARC-Payload-Digest\":\"sha1:TDVTLZ66KT7DLXLGMTU2GJ6COYTGLRL7\",\"WARC-Block-Digest\":\"sha1:IKFD4BDAAMVN27MYVYOWT76L6THQ2POX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439737233.51_warc_CC-MAIN-20200807231820-20200808021820-00320.warc.gz\"}"}
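The standard-error formula in the accepted answer, SE = sqrt((1 - r^2)/(n - 2)), is easy to check outside R. A minimal pure-Python sketch (the sample data are made up; in R you would keep using `cor.test`):

```python
import math

def pearson_with_se(x, y):
    """Pearson r, its standard error sqrt((1 - r^2)/(n - 2)), and the
    t statistic r/SE -- the same quantities cor.test.plus() reports."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / math.sqrt(sxx * syy)
    se = math.sqrt((1 - r ** 2) / (n - 2))
    return r, se, r / se

r, se, t = pearson_with_se([1, 2, 3, 4, 5], [2, 4, 5, 4, 5])
```

Note that dividing the estimate by the t statistic, as the second answer suggests, recovers the same SE, since t = r/SE.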
https://www.instasolv.com/question/a-dye-absorbs-a-photon-of-wavelength-2-and-re-emits-the-same-energy-into-bl600z
[ "A dye absorbs a photon of wavelength λ and re-emits the same energy into two photons\nQuestion", null, "", null, "# A dye absorbs a photon of wavelength λ and re-emits the same energy into two photons of wavelength λ1 and λ2 respectively. The wavelength λ is related to λ1 and λ2 as: (A) λ = λ1 + λ2 (B) λ = λ1λ2/(λ1 + λ2) (C) λ = λ1 - λ2 (D) λ = λ1λ2/(λ1 + λ2)²\n\nJEE/Engineering Exams\nChemistry\nSolution", null, "68", null, "4.0 (1 rating)", null, "", null, "A dye absorbs a photon of wavelength λ with energy ( E=\frac{h c}{\lambda} ) (Planck's equation). The dye re-emits the same energy as two photons of wavelengths ( \lambda_{1} ) and ( \lambda_{2} ), with ( E_{1}=\frac{h c}{\lambda_{1}} ) and ( E_{2}=\frac{h c}{\lambda_{2}} ). Since energy remains conserved, ( E=E_{1}+E_{2} ): [ \begin{array}{l} \frac{h c}{\lambda}=\frac{h c}{\lambda_{1}}+\frac{h c}{\lambda_{2}} \\ \frac{1}{\lambda}=\frac{1}{\lambda_{1}}+\frac{1}{\lambda_{2}} \\ \frac{1}{\lambda}=\frac{\lambda_{1}+\lambda_{2}}{\lambda_{1} \lambda_{2}} \\ \lambda=\frac{\lambda_{1} \lambda_{2}}{\lambda_{1}+\lambda_{2}} \end{array} ] Hence option ( (B) )" ]
[ null, "https://images.instasolv.com/QuestionBank/5cfd38197fbe001ed0ba0155/crop_image.png", null, "https://www.instasolv.com/images/fullscreen.svg", null, "https://www.instasolv.com/images/eye.svg", null, "data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iMTciIGhlaWdodD0iMTciIHZpZXdCb3g9IjAgMCAxNyAxNyIgZmlsbD0ibm9uZSIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KICA8cGF0aAogICAgZD0iTTguMTIyODEgMi4wNjY4MUM4LjM5NzYxIDEuNTA4MjYgOS4xOTM5NSAxLjUwODI2IDkuNDY4NzQgMi4wNjY4MUwxMC43NDU0IDQuNjYxN0MxMS4wNzI2IDUuMzI2NzUgMTEuNzA2MyA1Ljc4OCAxMi40Mzk3IDUuODk0OTFMMTUuMzAyMSA2LjMxMjE1QzE1LjkxNjYgNi40MDE3MSAxNi4xNjIzIDcuMTU2NDQgMTUuNzE4MyA3LjU5MDU1TDEzLjY0MTEgOS42MjE3MkMxMy4xMTI4IDEwLjEzODMgMTIuODcxOCAxMC44ODEzIDEyLjk5NjMgMTEuNjA5NkwxMy40ODU5IDE0LjQ3MjlDMTMuNTkwNyAxNS4wODU5IDEyLjk0NjggMTUuNTUyOCAxMi4zOTY3IDE1LjI2MjdMOS44NDUzNyAxMy45MTcyQzkuMTg4NTIgMTMuNTcwOCA4LjQwMzA0IDEzLjU3MDggNy43NDYxOSAxMy45MTcyTDUuMTk0ODMgMTUuMjYyN0M0LjY0NDc1IDE1LjU1MjggNC4wMDA4OSAxNS4wODU5IDQuMTA1NyAxNC40NzI5TDQuNTk1MjUgMTEuNjA5NkM0LjcxOTc3IDEwLjg4MTMgNC40Nzg3OCAxMC4xMzgzIDMuOTUwNDggOS42MjE3MkwxLjg3MzI0IDcuNTkwNTVDMS40MjkyOSA3LjE1NjQ0IDEuNjc0OTggNi40MDE3MSAyLjI4OTQxIDYuMzEyMTVMNS4xNTE4MiA1Ljg5NDkxQzUuODg1MjUgNS43ODggNi41MTg5OCA1LjMyNjc1IDYuODQ2MTcgNC42NjE3TDguMTIyODEgMi4wNjY4MVoiCiAgICBmaWxsPSIjRkZENjAwIiBzdHJva2U9IiNGRkQ2MDAiIHN0cm9rZS13aWR0aD0iMS41Ii8+Cjwvc3ZnPgo=", null, "https://instasolv1.s3.ap-south-1.amazonaws.com/QuestionBank/5cfd38197fbe001ed0ba0155/solution_5cfd39e5ada1da2ec4cf34ed.png", null, "https://www.instasolv.com/images/fullscreen.svg", null, "https://www.instasolv.com/images/download-app-2.svg", null, "https://www.instasolv.com/assets/home/images/homepage/download-app.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6640589,"math_prob":0.9924656,"size":780,"snap":"2021-04-2021-17","text_gpt3_token_len":292,"char_repetition_ratio":0.23582475,"word_repetition_ratio":0.0,"special_character_ratio":0.4051282,"punctuation_ratio":0.01923077,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997371,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,1,null,null,null,null,null,null,null,1,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-22T07:11:35Z\",\"WARC-Record-ID\":\"<urn:uuid:188e1c95-ff59-4546-a012-61d8c2af36dd>\",\"Content-Length\":\"53542\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9c1056c8-d56a-47b4-aaaf-ae08d7e89b47>\",\"WARC-Concurrent-To\":\"<urn:uuid:6ca2a843-a429-474d-9361-764d3149b779>\",\"WARC-IP-Address\":\"3.7.16.11\",\"WARC-Target-URI\":\"https://www.instasolv.com/question/a-dye-absorbs-a-photon-of-wavelength-2-and-re-emits-the-same-energy-into-bl600z\",\"WARC-Payload-Digest\":\"sha1:GVX3W3VF7JRO66SWOV5KKBMPRCJ4CZHU\",\"WARC-Block-Digest\":\"sha1:QDDUYHIXAMWL2VXYLT2XZZLS3EXDEAB5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703529128.47_warc_CC-MAIN-20210122051338-20210122081338-00529.warc.gz\"}"}
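The relation derived in the solution, 1/λ = 1/λ1 + 1/λ2, can be sanity-checked numerically. A small sketch (the wavelength values are illustrative, not from the problem):

```python
# Energy conservation E = E1 + E2 with E = hc/lam implies
# 1/lam = 1/lam1 + 1/lam2, i.e. lam = lam1*lam2/(lam1 + lam2).
lam1, lam2 = 600.0, 300.0          # emitted wavelengths (nm), made up
lam = lam1 * lam2 / (lam1 + lam2)  # absorbed wavelength

# hc cancels on both sides, so only the reciprocal sum matters
assert abs(1 / lam - (1 / lam1 + 1 / lam2)) < 1e-15
print(lam)  # 200.0
```

As expected, the absorbed wavelength is shorter than either emitted one, since the absorbed photon carries the combined energy.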
http://www.agrademath.com/mathterms/trivial.html
[ "# Trivial\n\nThe term trivial is frequently used for objects (for example, groups or topological spaces) that have a very simple structure or that are the simplest possible case.\n\nTrivial also refers to solutions of an equation that have a very simple structure but, for the sake of completeness, cannot be omitted. Such a solution is called the trivial solution.\n\nExample:\n\nConsider the differential equation y' = y\n\nwhere y = f(x) is a function whose derivative is y'.\n\nThe trivial solution is:\n\ny = 0, the zero function" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8988036,"math_prob":0.99242723,"size":501,"snap":"2020-24-2020-29","text_gpt3_token_len":110,"char_repetition_ratio":0.12676056,"word_repetition_ratio":0.024691358,"special_character_ratio":0.20558882,"punctuation_ratio":0.097826086,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99988294,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-04T21:07:23Z\",\"WARC-Record-ID\":\"<urn:uuid:f719f71e-f8e3-41ef-bf2a-7b0b7d5692d7>\",\"Content-Length\":\"6869\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8d35f2d7-0344-4607-9b5e-cb7dc0a88af0>\",\"WARC-Concurrent-To\":\"<urn:uuid:eff175e7-caef-45ff-934a-3ea0719625f5>\",\"WARC-IP-Address\":\"184.168.51.1\",\"WARC-Target-URI\":\"http://www.agrademath.com/mathterms/trivial.html\",\"WARC-Payload-Digest\":\"sha1:M7B23AGFM7HUMHUIBHRWVUGD67MUPGXA\",\"WARC-Block-Digest\":\"sha1:JF666KNKTCMLDP7GHKFMEJFH67MBD5KU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347458095.68_warc_CC-MAIN-20200604192256-20200604222256-00198.warc.gz\"}"}
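The zero function really does satisfy y' = y identically; a quick numeric sketch with a forward difference (the step size h and sample points are arbitrary):

```python
# Check that the trivial solution y(x) = 0 of y' = y satisfies the
# equation at sample points, using a forward-difference derivative.
def y(x):
    return 0.0

h = 1e-6
for x in (-1.0, 0.0, 2.5):
    derivative = (y(x + h) - y(x)) / h    # approximates y'(x)
    assert abs(derivative - y(x)) < 1e-9  # y' = y holds here
```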
https://codescracker.com/java/program/java-program-print-diamond-pattern.htm
[ "# Java Program to Print Diamond Pattern\n\n## Print Diamond Pattern\n\nTo print a diamond pattern in Java, you need six for loops. The first outer loop contains two inner loops: the first prints the spaces and the second prints the stars that form the upper pyramid. The second outer loop likewise contains two inner loops: the first prints the spaces and the second prints the stars that form the inverted pyramid. Together, the two pyramids make the diamond pattern shown in the following program.\n\n## Java Programming Code to Print Diamond Pattern\n\nThe following Java program asks the user to enter the number of rows (the diamond dimension) and prints the diamond pattern on the screen:\n\n```/* Java Program Example - Print Diamond Pattern */\n\nimport java.util.Scanner;\n\npublic class JavaProgram\n{\npublic static void main(String args[])\n{\n\nint n, c, k, space=1;\nScanner scan = new Scanner(System.in);\n\nSystem.out.print(\"Enter Number of Rows (for Diamond Dimension) : \");\nn = scan.nextInt();\n\nspace = n-1;\n\nfor(k=1; k<=n; k++)\n{\nfor(c=1; c<=space; c++)\n{\nSystem.out.print(\" \");\n}\nspace--;\nfor(c=1; c<=(2*k-1); c++)\n{\nSystem.out.print(\"*\");\n}\nSystem.out.println();\n}\n\nspace = 1;\n\nfor(k=1; k<=(n-1); k++)\n{\nfor(c=1; c<=space; c++)\n{\nSystem.out.print(\" \");\n}\nspace++;\nfor(c=1; c<=(2*(n-k)-1); c++)\n{\nSystem.out.print(\"*\");\n}\nSystem.out.println();\n}\n\n}\n}```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.72126526,"math_prob":0.85724384,"size":1544,"snap":"2021-43-2021-49","text_gpt3_token_len":382,"char_repetition_ratio":0.16233766,"word_repetition_ratio":0.16,"special_character_ratio":0.2992228,"punctuation_ratio":0.1875,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9819963,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-24T08:21:44Z\",\"WARC-Record-ID\":\"<urn:uuid:e7afda0d-86fb-4216-bbe7-062359969d5e>\",\"Content-Length\":\"24367\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e3667829-e75c-4d32-b388-640ea0163750>\",\"WARC-Concurrent-To\":\"<urn:uuid:a116a669-4d4a-4c02-8aa9-2f4b3accfd44>\",\"WARC-IP-Address\":\"148.72.215.147\",\"WARC-Target-URI\":\"https://codescracker.com/java/program/java-program-print-diamond-pattern.htm\",\"WARC-Payload-Digest\":\"sha1:HYKJ2Y4TW7RCYAHDKG6BOMOXU6BWPKVC\",\"WARC-Block-Digest\":\"sha1:AFLF2QSOEPPTRXTPHMFL7PWSRUWJWGIF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585916.29_warc_CC-MAIN-20211024081003-20211024111003-00179.warc.gz\"}"}
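The same space/star loop structure translates directly to other languages. A Python sketch of the algorithm (not from the original page): each upper-pyramid row k has n-k spaces and 2k-1 stars, and each inverted-pyramid row has k spaces and 2(n-k)-1 stars.

```python
def diamond_rows(n):
    """Rows of a star diamond: n rows of upper pyramid (spaces shrink,
    stars grow), then n-1 rows of inverted pyramid, as in the Java code."""
    upper = [" " * (n - k) + "*" * (2 * k - 1) for k in range(1, n + 1)]
    lower = [" " * k + "*" * (2 * (n - k) - 1) for k in range(1, n)]
    return upper + lower

print("\n".join(diamond_rows(3)))
#   *
#  ***
# *****
#  ***
#   *
```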
https://www.asiaa.sinica.edu.tw/gallery/show_c.php?i=d68a53ab110aadfe3a1fc1bc11a8abe8
[ "# Research Gallery\n\n## Theoretical and Observational Cosmology\n\n The GMRT Epoch of Reionization experiment: a new upper limit on the neutral hydrogen power spectrum at z ≈ 8.6 Image credit: Paciga and Chang et al.", null, "Average power spectrum in units of K² of all combinations of days, excluding December 11, as a function of the multipole moment l. Each point is shown with a 2σ upper limit derived from a bootstrap error analysis, which is in most cases smaller than the size of the point. The points are logarithmically spaced as described in the text, from left to right covering the ranges 377 < l < 578, 578 < l < 899 and 899 < l < 1414. Triangles are the power before subtracting foregrounds, diamonds are after 8 MHz mean subtraction, squares are after 2 MHz mean subtraction and circles are after 0.5 MHz subtraction. The curved solid line is the theoretical EoR signal from Jelić et al. (2008), and the dashed line is the theoretical EoR signal with a cold absorbing IGM as described in the text. For the epoch of reionization (EoR), we have been utilizing the Giant Metrewave Radio Telescope (GMRT) in India to measure the 21-cm power spectrum at redshifts 8 < z < 9, constraining the fluctuation of ionization. We have put a first upper limit on the amplitude of the power spectrum at a mean redshift of 8.5 (Paciga et al. 2011), as shown in the figure below. The upper limit is within one order of magnitude of the currently popular theoretical expectations, and already rules out models where the intergalactic medium was not heated before reionization." ]
[ null, "https://www.asiaa.sinica.edu.tw/gallery/_img/2011_cosmology_M_20120629120026.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84858507,"math_prob":0.9575777,"size":719,"snap":"2019-35-2019-39","text_gpt3_token_len":211,"char_repetition_ratio":0.12027972,"word_repetition_ratio":0.0,"special_character_ratio":0.22531293,"punctuation_ratio":0.096296296,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97471076,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-17T13:36:09Z\",\"WARC-Record-ID\":\"<urn:uuid:033faa8f-3e1b-41a4-bf3a-1259185e370d>\",\"Content-Length\":\"15407\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d27857ed-4f7d-4834-8c24-87e25d5b8fb9>\",\"WARC-Concurrent-To\":\"<urn:uuid:62ec6807-9af1-4396-a6b2-446206c884ae>\",\"WARC-IP-Address\":\"140.109.176.177\",\"WARC-Target-URI\":\"https://www.asiaa.sinica.edu.tw/gallery/show_c.php?i=d68a53ab110aadfe3a1fc1bc11a8abe8\",\"WARC-Payload-Digest\":\"sha1:BESN4YDIEL5Y4M2HAGJRYWYNPCFIISPM\",\"WARC-Block-Digest\":\"sha1:GFFNTXLQW4FZI6P7ACWIGMWYD4I6LDKJ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573071.65_warc_CC-MAIN-20190917121048-20190917143048-00130.warc.gz\"}"}
https://www.storyofmathematics.com/cross-multiplication
[ "", null, "# Cross Multiplication – Techniques & Examples\n\nBefore we can discuss the cross-multiplication process, let’s remind ourselves about the parts of a fraction. A fraction is normally a number written in the form a/b, where a and b are integers and b is non-zero.\n\nThe number at the top of a fraction is known as the numerator, while the number at the bottom is known as the denominator. The numerator and denominator are separated by a slash line or division bar.\n\nFor instance, 4/5, 2/7, 1/3, 1/4, etc., are all examples of fractions. It is also important to note that a rational expression similarly takes the form a/b, where a and b are algebraic expressions.\n\nExamples of rational expressions are (x + 5)/3, 2/(x – 8), 3x/5, etc.\n\n## What is Cross Multiplication?\n\nIn mathematics, cross multiplication occurs when a variable in an equation is determined by cross multiplying two fractions or expressions. Cross multiplication can also be applied to compare fractions, by multiplying the numerator of each fraction by the other’s denominator.\n\n## How to Cross Multiply?\n\nThe numerator of the first fraction is multiplied with the denominator of the second fraction to perform cross multiplication. 
Similarly, the denominator of the first fraction is multiplied by the numerator of the second fraction.\n\nThe two products are equated, and the value of the variable is determined.\n\nTo master how to do cross multiplication, let’s examine the following cross multiplication cases:\n\n### How to cross multiply with a variable?\n\nExample 1\n\nGiven, 9/x = 3/2\n\nSolution\n\nTo find the value of x, we apply the cross-multiplication process where;\n\n• Multiply the numerator of the first fraction by the denominator of the second fraction;\n\n9 * 2 = 18\n\n• Similarly, multiply the denominator of the first fraction by the numerator of the second fraction;\n\nx * 3 = 3x\n\n• Now equate the two products and divide both sides of the equation by 3;\n\n3x = 18\n\nx = 6\n\nExample 2\n\nSolve x/5 = 4/2\n\nSolution\n\nApply the same procedures for cross multiplication;\n\n• x * 2 = 2x\n• 5 * 4 = 20\n\nNow equate the two products;\n\n2x = 20\n\nx = 10\n\nCross Multiplying with two of the Same Variable\n\nExample 3\n\n(x + 3)/2 = (x + 1)/1\n\nSolution\n\nIn this case, the numerators of the first and second fractions are x + 3 and x + 1, respectively.\n\nNow, apply cross multiplication by multiplying the numerator of the first fraction by the denominator of the second fraction;\n\n• (x + 3) * 1 = x + 3\n\nMultiply the denominator of the 1st fraction by the numerator of the 2nd fraction;\n\n• 2 * (x + 1) = 2x + 2\n\nEquate the two products;\n\n• x + 3 = 2x + 2\n\nIsolate the variable x by adding -x to both sides of the equation;\n\n• x - x + 3 = 2x - x + 2\n\n= x + 2 = 3\n\nNow add -2 to both sides;\n\n• x + 2 - 2 = 3 - 2\n\nx = 1\n\nExample 4\n\nSolve 8/(x – 2) = 4/x\n\nSolution\n\nCross multiply;\n\n• 8 * x = 8x\n• (x – 2) * 4 = 4x – 8\n\nEquate the two products;\n\n8x = 4x – 8\n\nIsolate the variable x;\n\n• Add -4x to both sides of the equation;\n\n8x - 4x = -8\n\n4x = -8\n\nx = -2\n\nExample 5\n\nSolve for x 2x/3 + x/2 = 
5/6\n\nSolution\n\nIn this case, we multiply each term by the LCM. The LCM of 3, 2 and 6 is 6. Therefore, the equation becomes;\n\n• (2x/3)6 + (x/2)6 = (5/6)6\n\n4x + 3x = 5\n\nCombine the like terms and divide both sides by 7;\n\n7x = 5\n\nx = 5/7\n\nExample 6\n\nSolve for x 4/10 = x/15\n\nSolution\n\nCross multiply and equate the products;\n4 * 15 = 10 * x\n\nDivide both sides of the equation by 10;\n\nx = 60/10\n\n= 6\n\n### Practice Questions\n\n1. Solve the equation, (x + 5) = (2x + 10)/3.\n\n2. Solve the equation, -6x + 2 = 12x/3\n\n3. Solve the equation, -x/9 = -9/x.\n\n4. To prepare a lemonade, 3 liters of water are mixed with 4 liters of lemon juice. How many liters of water can be mixed with 8 liters of lemon juice?\n\n5. An 8-meter flag post casts a shadow of 15 meters on the ground. How tall is an electric post which casts a shadow of 30 meters in the same conditions?\n\n6. A fire engine has the capacity of holding 3000 gallons of water. If its nozzle can deliver 80 gallons of water per minute, how many gallons of water can be delivered in 10 minutes?\n\n7. A fire engine has the capacity of holding 3000 gallons of water. If its nozzle can deliver 80 gallons of water per minute, how long will it take for the tank to be emptied?\n\n8. 4 gallons of paint can cover 800 square feet of a floor. Calculate the quantity of paint needed to cover 200 square feet.\n\n9. When a number is divided by 2, the result is equal to 3 more than the same number divided by 5. What is the number?\n\n10. The reciprocal of a positive rational number is 4 times the number itself. Determine the number.\n\n11. The ratio of w to x is equal to the ratio of y to z. If x = 2w and y = 3w, express z in terms of w." ]
[ null, "https://www.storyofmathematics.com/wp-content/uploads/2022/02/som-header1.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87139815,"math_prob":0.99954605,"size":3384,"snap":"2022-27-2022-33","text_gpt3_token_len":1012,"char_repetition_ratio":0.18846154,"word_repetition_ratio":0.08308157,"special_character_ratio":0.30555555,"punctuation_ratio":0.09261939,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999591,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-01T14:07:31Z\",\"WARC-Record-ID\":\"<urn:uuid:c4e6a19e-d808-4949-80ed-5e7119bab240>\",\"Content-Length\":\"531150\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e11352d0-7d49-4642-a992-220a9d16e67c>\",\"WARC-Concurrent-To\":\"<urn:uuid:4242c637-a3da-4c54-a531-4ec9c942d398>\",\"WARC-IP-Address\":\"172.67.190.47\",\"WARC-Target-URI\":\"https://www.storyofmathematics.com/cross-multiplication\",\"WARC-Payload-Digest\":\"sha1:2HYF4KRO3EOIZUG3RELDKSG7Q7LAWTF3\",\"WARC-Block-Digest\":\"sha1:64QPV4GRV6ZG3V7IX3WPRNBY3LKCDKYD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103941562.52_warc_CC-MAIN-20220701125452-20220701155452-00158.warc.gz\"}"}
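The worked examples above all reduce to one of two cross-multiplication templates: unknown in a denominator, or unknown in a numerator. A short sketch (the function names are mine, not from the article):

```python
from fractions import Fraction

def solve_denominator(a, b, c):
    """Solve a/b = c/x by cross multiplication: a*x = b*c, so x = b*c/a."""
    return Fraction(b * c, a)

def solve_numerator(a, b, d):
    """Solve a/b = x/d by cross multiplication: a*d = b*x, so x = a*d/b."""
    return Fraction(a * d, b)

# Example 1: 9/x = 3/2, read as 3/2 = 9/x  ->  x = 6
print(solve_denominator(3, 2, 9))   # 6
# Example 6: 4/10 = x/15  ->  x = 6
print(solve_numerator(4, 10, 15))   # 6
```

The same helpers answer the proportion word problems, e.g. practice question 4 is 3/4 = x/8, giving 6 liters of water.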
https://db0nus869y26v.cloudfront.net/en/Composite_number
[ "A composite number is a positive integer that can be formed by multiplying two smaller positive integers. Equivalently, it is a positive integer that has at least one divisor other than 1 and itself. Every positive integer is composite, prime, or the unit 1, so the composite numbers are exactly the numbers that are not prime and not a unit.\n\nFor example, the integer 14 is a composite number because it is the product of the two smaller integers 2 × 7. Likewise, the integers 2 and 3 are not composite numbers because each of them can only be divided by one and itself.\n\nThe composite numbers up to 150 are:\n\n4, 6, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21, 22, 24, 25, 26, 27, 28, 30, 32, 33, 34, 35, 36, 38, 39, 40, 42, 44, 45, 46, 48, 49, 50, 51, 52, 54, 55, 56, 57, 58, 60, 62, 63, 64, 65, 66, 68, 69, 70, 72, 74, 75, 76, 77, 78, 80, 81, 82, 84, 85, 86, 87, 88, 90, 91, 92, 93, 94, 95, 96, 98, 99, 100, 102, 104, 105, 106, 108, 110, 111, 112, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 128, 129, 130, 132, 133, 134, 135, 136, 138, 140, 141, 142, 143, 144, 145, 146, 147, 148, 150. (sequence A002808 in the OEIS)\n\nEvery composite number can be written as the product of two or more (not necessarily distinct) primes. For example, the composite number 299 can be written as 13 × 23, and the composite number 360 can be written as 2³ × 3² × 5; furthermore, this representation is unique up to the order of the factors. This fact is called the fundamental theorem of arithmetic.\n\nThere are several known primality tests that can determine whether a number is prime or composite, without necessarily revealing the factorization of a composite input.\n\n## Types\n\nOne way to classify composite numbers is by counting the number of prime factors. A composite number with two prime factors is a semiprime or 2-almost prime (the factors need not be distinct, hence squares of primes are included). 
A composite number with three distinct prime factors is a sphenic number. In some applications, it is necessary to differentiate between composite numbers with an odd number of distinct prime factors and those with an even number of distinct prime factors. For the latter\n\n$\mu (n)=(-1)^{2x}=1$", null, "(where μ is the Möbius function and x is half the total of prime factors), while for the former\n\n$\mu (n)=(-1)^{2x+1}=-1.$", null, "However, for prime numbers, the function also returns −1 and $\mu (1)=1$", null, ". For a number n with one or more repeated prime factors,\n\n$\mu (n)=0$", null, ".\n\nIf all the prime factors of a number are repeated it is called a powerful number (All perfect powers are powerful numbers). If none of its prime factors are repeated, it is called squarefree. (All prime numbers and 1 are squarefree.)\n\nFor example, 72 = 2³ × 3², all the prime factors are repeated, so 72 is a powerful number. 42 = 2 × 3 × 7, none of the prime factors are repeated, so 42 is squarefree.", null, "Euler diagram of abundant, primitive abundant, highly abundant, superabundant, colossally abundant, highly composite, superior highly composite, weird and perfect numbers under 100 in relation to deficient and composite numbers\n\nAnother way to classify composite numbers is by counting the number of divisors. All composite numbers have at least three divisors. In the case of squares of primes, those divisors are $\{1,p,p^{2}\}$", null, ". A number n that has more divisors than any x < n is a highly composite number (though the first two such numbers are 1 and 2).\n\nComposite numbers have also been called \"rectangular numbers\", but that name can also refer to the pronic numbers, numbers that are the product of two consecutive integers.\n\nYet another way to classify composite numbers is to determine whether all prime factors are either all below or all above some fixed (prime) number. 
Such numbers are called smooth numbers and rough numbers, respectively." ]
[ null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/990f8742560510ae15e246859d943c549b4be3a1", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/6fff93908477604ddd2c5273fcc40b2d7dbb77e8", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/2626ff364b365162abcfda5b142fe1464a970015", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/d96dd239de31328b7f9c1a03dd7a8dd1a5bdf710", null, "https://upload.wikimedia.org/wikipedia/commons/thumb/9/9c/Euler_diagram_numbers_with_many_divisors.svg/220px-Euler_diagram_numbers_with_many_divisors.svg.png", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/66b90dbd9e0e9634ec771dd7a3463b89d203a07a", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9379855,"math_prob":0.9958543,"size":2994,"snap":"2023-40-2023-50","text_gpt3_token_len":656,"char_repetition_ratio":0.22274247,"word_repetition_ratio":0.042307694,"special_character_ratio":0.2264529,"punctuation_ratio":0.09375,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9927829,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,6,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-02T07:11:29Z\",\"WARC-Record-ID\":\"<urn:uuid:2917eaf4-48c1-4d2b-ad60-815b491375bf>\",\"Content-Length\":\"93014\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3ff4cc8a-8e92-40ed-bd2e-db6038dc77b5>\",\"WARC-Concurrent-To\":\"<urn:uuid:62425f51-67b0-4b9c-ada2-13c5bc9626f7>\",\"WARC-IP-Address\":\"99.84.109.143\",\"WARC-Target-URI\":\"https://db0nus869y26v.cloudfront.net/en/Composite_number\",\"WARC-Payload-Digest\":\"sha1:7WQ4F2CIN74WTD3M6HISYVOQSXVGJZL3\",\"WARC-Block-Digest\":\"sha1:CIXAN4BGUYMO5TVHBQZEAHYB52ZCFBEY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100327.70_warc_CC-MAIN-20231202042052-20231202072052-00799.warc.gz\"}"}
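The classifications in the Types section (semiprime, sphenic, squarefree, powerful) all follow from the multiset of prime factors. A sketch via trial division, adequate for small n:

```python
def prime_factors(n):
    """Prime factors of n with multiplicity, by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def classify(n):
    """Labels from the section above, derived from the factor multiset."""
    f = prime_factors(n)
    counts = [f.count(p) for p in set(f)]
    return {
        "composite": len(f) > 1,
        "semiprime": len(f) == 2,
        "sphenic": len(f) == 3 and len(set(f)) == 3,
        "squarefree": all(c == 1 for c in counts),
        "powerful": n > 1 and all(c >= 2 for c in counts),
    }

# 72 = 2^3 * 3^2 is powerful; 42 = 2 * 3 * 7 is sphenic and squarefree
```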
https://www.edmundoptics.com.tw/knowledge-center/application-notes/optics/understanding-optical-lens-geometries/
[ "# Understanding Optical Lens Geometries\n\nOptical lenses are the most important tools in optical design for controlling light. When optical designers talk about optical lenses, they are either referring to a single lens element or an assembly of lens elements (Figure 1). Examples of single elements are plano-convex (PCX) lenses, double-convex (DCX) lenses, aspheric lenses, etc; examples of assemblies of elements are telecentric imaging lenses, infinity-corrected objectives, beam expanders, etc. Each combination is comprised of a series of lens elements and each element has a specific lens geometry which controls light in its own way.", null, "## SNELL'S LAW OF REFRACTION\n\nBefore delving into each type of lens geometry, consider how optical lenses bend light using the property of refraction. Refraction is the means by which light is deviated by a certain amount when it enters or leaves a medium. This deviation is a function of the index of refraction of the medium and the angle the light makes with respect to the surface normal. This property is governed by Snell's Law of Refraction (Equation 1) where $\\small{n_1}$ is the index of the incident medium, $\\small{\\theta_1}$ is the angle of the incident ray, $\\small{n_2}$ is the index of the refracted medium, and $\\small{\\theta_2}$ is the angle of the refracted ray. Snell's Law describes the relationship between the angles of incidence and transmission when a ray travels between multiple media (Figure 2).\n\n(1)$$n_1 \\, \\sin{\\left( \\theta_1 \\right)} = n_2 \\, \\sin{\\left( \\theta_2 \\right)}$$", null, "## OPTICAL LENS TERMINOLOGY\n\nAll optical lenses obey Snell's Law of Refraction. Consequently, it is the optical lens geometry (i.e. the surface profile) that determines how light behaves as it propagates through the optical element. To understand the terminology used in optical lens specifications, consider 10 common terms (Table 1). 
For more detailed definitions and a list of additional terms, please view our Glossary.\n\nTable 1: Common Optical Lens Terminology\n\n| Abbreviation | Terminology – Definition |\n| --- | --- |\n| $\small{D}$, Dia. | Diameter – The physical size of a lens. |\n| $\small{R, R_1, R_2}$, etc. | Radius of Curvature – The directed distance from the vertex of a surface to the center of curvature. |\n| $\small{\text{EFL}}$ | Effective Focal Length – An optical measurement given as the distance from a principal plane of an optical lens to its imaging plane. |\n| $\small{\text{BFL}}$ | Back Focal Length – A mechanical measurement given as the distance between the last surface of an optical lens to its image plane. |\n| P, P\" | Principal Plane – A hypothetical plane where incident light rays can be considered to bend due to refraction. EFL is specified from a principal plane location. |\n| $\small{\text{CT, CT}_1, \text{CT}_2}$, etc. | Center Thickness – The distance from a primary principal plane location to the end of an element. |\n| $\small{\text{ET}}$ | Edge Thickness – A calculated value that depends on radii, diameter, and center thickness of a lens. |\n| $\small{d_b}$ | Entrance Beam Diameter – Diameter of collimated light entering an axicon. |\n| $\small{d_r}$ | Exit Beam Diameter – Diameter of ring of light exiting an axicon. |\n| $\small{L}$ | Length – The physical distance from end to end of a cylindrical element (e.g. cylinder lens) or the distance from apex to workpiece of an axicon. |\n\n## OPTICAL LENS GEOMETRIES\n\nUsing the common terminologies from Table 1, it is easy to understand technical figures for each type of single lens element. Table 2 shows 10 of the most commonly used optical lenses and their typical applications. As optical technology advances, additional single-lens geometries such as focus-tunable lenses and assemblies such as telecentric lenses are becoming valuable tools for optical design. 
To learn more about telecentric lenses, view The Advantages of Telecentricity.\n\n Common Optical Lens Geometries Plano-Convex (PCX) Lens | View Product", null, "Ideal for collimation and focusing applications utilizing monochromatic illumination. Note: Orient the curved surface of a PCX lens towards the source for optimal performance.", null, "Double-Convex (DCX) | View Product", null, "Ideal for image relay, and for imaging of objects at close conjugates. Note: Aberrations will increase as the conjugate ratios increase. Also known as biconvex lenses.", null, "Plano-Concave (PCV) Lens | View Product", null, "Comprised of one flat and one inward curved surface. Ideal for beam expansion, light projection, and expanding the focal length of an optical system.", null, "Double-Concave (DCV) Lens | View Product", null, "Comprised of two inward, equally curved surfaces. Ideal for beam expansion, light projection, and expanding the focal length of an optical system. Also known as biconcave lenses.", null, "Positive Achromatic Lens | View Product", null, "Performs a similar function as a PCX or DCX lens, but can provide smaller spot sizes and superior image quality. Achromatic lenses are useful for reducing spherical and chromatic aberration. Negative version for diverging light is also available. For additional information, view Why Use an Achromatic Lens?", null, "Aspheric Lens | View Product", null, "Ideal for laser focusing or for replacing multiple spherical lens elements in a system. Useful for eliminating spherical aberration and greatly reducing other aberrations. For additional information, view All About Aspheric Lenses.", null, "Positive Cylinder Lens | View Product", null, "Ideal for focusing incoming light to a line or to change the aspect ratio of an image. Negative version also available.", null, "Plano-Convex (PCX) Axicons | View Product", null, "Ideal for focusing laser light into a ring with a constant thickness. 
Note: A smaller apex angle generates a larger ring. For additional information, view An In-Depth Look at Axicons", null, "(Full) Ball Lens | View Product", null, "Ideal for fiber coupling, endoscopy, and barcode scanning applications. Half ball lens version also available. For additional information, view Understanding Ball Lenses", null, "Rod Lens | View Product", null, "Ideal for fiber coupling and endoscopy applications. 45° version also available.", null, "Optical lenses come in many shapes and sizes – from plano-convex (PCX) to aspheric. Knowing the advantages and disadvantages of each lens type is crucial when choosing between optics as each has its own purpose. Understanding optical lens geometries helps anyone, from novice to expert, choose the best optical lens in any optical design.", null, "From singlet, doublet, or triplet lens designs to achromatic, aspheric, cylinder, ball, or fresnel, we have thousands of choices for the UV, visible, or IR spectrum.\n\nThousands of spherical lenses ready for purchase, available in glass and crystalline materials with standard and custom coatings. Don’t see what you need? We can make it.\n\nHundreds of stock aspheric lenses ready for purchase, available in glass, plastic, or infrared crystalline material. Don’t see what you need? We can make it.\n\nEase integration of simple lenses into optical assemblies by providing lens radii, index of refraction, and center thickness." ]
[ null, "https://www.edmundoptics.com.tw/contentassets/1436343882264ca585af40a5c59e1c5b/fig-1-uol.jpg", null, "https://www.edmundoptics.com.tw/contentassets/1436343882264ca585af40a5c59e1c5b/fig-2-uolg.gif", null, "https://www.edmundoptics.com.tw/contentassets/1436343882264ca585af40a5c59e1c5b/3195.jpg", null, "https://www.edmundoptics.com.tw/contentassets/1436343882264ca585af40a5c59e1c5b/fig-3-uol.gif", null, "https://www.edmundoptics.com.tw/contentassets/1436343882264ca585af40a5c59e1c5b/1001204.jpg", null, "https://www.edmundoptics.com.tw/contentassets/1436343882264ca585af40a5c59e1c5b/fig-4-uol.gif", null, "https://www.edmundoptics.com.tw/contentassets/1436343882264ca585af40a5c59e1c5b/1006403.jpg", null, "https://www.edmundoptics.com.tw/contentassets/1436343882264ca585af40a5c59e1c5b/fig-5-uol.gif", null, "https://www.edmundoptics.com.tw/contentassets/1436343882264ca585af40a5c59e1c5b/1006460.jpg", null, "https://www.edmundoptics.com.tw/contentassets/1436343882264ca585af40a5c59e1c5b/fig-6-uol.jpg", null, "https://www.edmundoptics.com.tw/contentassets/1436343882264ca585af40a5c59e1c5b/1006528.jpg", null, "https://www.edmundoptics.com.tw/contentassets/1436343882264ca585af40a5c59e1c5b/fig-7-uol.gif", null, "https://www.edmundoptics.com.tw/contentassets/1436343882264ca585af40a5c59e1c5b/1006379.jpg", null, "https://www.edmundoptics.com.tw/contentassets/1436343882264ca585af40a5c59e1c5b/fig-8-uol.gif", null, "https://www.edmundoptics.com.tw/contentassets/1436343882264ca585af40a5c59e1c5b/1006503.jpg", null, "https://www.edmundoptics.com.tw/contentassets/1436343882264ca585af40a5c59e1c5b/fig-9-uol.gif", null, "https://www.edmundoptics.com.tw/contentassets/1436343882264ca585af40a5c59e1c5b/1006227.jpg", null, "https://www.edmundoptics.com.tw/contentassets/1436343882264ca585af40a5c59e1c5b/fig-10-uol.gif", null, "https://www.edmundoptics.com.tw/contentassets/1436343882264ca585af40a5c59e1c5b/2326.jpg", null, 
"https://www.edmundoptics.com.tw/contentassets/1436343882264ca585af40a5c59e1c5b/fig-11-uol.jpg", null, "https://www.edmundoptics.com.tw/contentassets/1436343882264ca585af40a5c59e1c5b/2989.jpg", null, "https://www.edmundoptics.com.tw/contentassets/1436343882264ca585af40a5c59e1c5b/fig-12-uol.jpg", null, "https://www.edmundoptics.com.tw/globalassets/right-hand-column/product-categories/optics/optical-lenses/optical-lenses.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8355277,"math_prob":0.9315943,"size":6735,"snap":"2022-05-2022-21","text_gpt3_token_len":1544,"char_repetition_ratio":0.12539,"word_repetition_ratio":0.06666667,"special_character_ratio":0.21321455,"punctuation_ratio":0.10929432,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.969817,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-18T16:25:24Z\",\"WARC-Record-ID\":\"<urn:uuid:343debb0-7738-4000-8936-20f04356612f>\",\"Content-Length\":\"117124\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5a6fbb2f-907e-407d-bd8b-4c4ed7644d04>\",\"WARC-Concurrent-To\":\"<urn:uuid:87dec06f-d589-4bc8-9c03-11bc3c7409ce>\",\"WARC-IP-Address\":\"104.18.3.2\",\"WARC-Target-URI\":\"https://www.edmundoptics.com.tw/knowledge-center/application-notes/optics/understanding-optical-lens-geometries/\",\"WARC-Payload-Digest\":\"sha1:YYRCDOYVBDP3WR36AIWKYXGKF5764ZIC\",\"WARC-Block-Digest\":\"sha1:6RG4XMXBSZFQDMOKVTVZA2MFXWM2OANW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662522284.20_warc_CC-MAIN-20220518151003-20220518181003-00252.warc.gz\"}"}
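Snell's law as quoted in the optics article above, $n_1 \sin(\theta_1) = n_2 \sin(\theta_2)$, is easy to check numerically. A minimal sketch (the index values 1.0 for air and 1.5 for glass are illustrative assumptions, not taken from the article):

```python
import math

def refraction_angle(n1, theta1_deg, n2):
    """Solve Snell's law n1*sin(theta1) = n2*sin(theta2) for theta2.

    Returns the refracted angle in degrees, or None when no refracted
    ray exists (total internal reflection, i.e. sin(theta2) > 1).
    """
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None  # total internal reflection
    return math.degrees(math.asin(s))

# Light entering glass (n ~ 1.5) from air (n ~ 1.0) bends toward the normal.
print(refraction_angle(1.0, 30.0, 1.5))   # ~19.47 degrees
# Going from glass to air at a steep angle, no refracted ray exists.
print(refraction_angle(1.5, 45.0, 1.0))   # None
```

Since the higher-index medium always has the smaller angle from the normal, this also illustrates why the article recommends orienting a PCX lens with its curved side toward the collimated source: it splits the bending between two surfaces.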
https://www.roadmapmoney.com/37-dollars-an-hour/
[ "# 37 Dollars An Hour – How Much A Year? (Example Budget Included)\n\n## How Much Is 37 Dollars An Hour Per Year?\n\nTo calculate how much you will get paid per year, let’s assume you work 52 weeks of the year (with 2 weeks paid time off). If you are working a full-time job, you will be working 40 hours per week on average.\n\n40 hours multiplied by 52 weeks is 2,080 working hours in a year.\n\n\\$37 per hour multiplied by 2,080 working hours per year is an annual income of \\$76,960 per year.\n\n## What If You Don’t Get Paid Time Off?\n\nIf you don’t get paid vacation, let’s assume you are working 50 weeks of the year (with 2 weeks unpaid time off). And we’ll assume you work an average of 40 hours per week.\n\n40 hours multiplied by 50 weeks is 2,000 working hours in a year.\n\nNow simply multiply your hourly rate by the number of working hours in the year.\n\n\\$37 per hour multiplied by 2,000 working hours per year is an annual income of \\$74,000 per year.\n\n## How Many Working Days Are In A Year (2020)?\n\nFor a more accurate calculation, you can calculate EXACTLY how many working days are in the year.\n\nFor the year 2020, it’s a leap year, so there are 366 days in the year. 
Here’s how the days break down:\n\n• 262 Weekdays\n• 104 Weekend Days (woohoo!)\n\nAssuming you are working each weekday, 8 hours per day, here is how many hours you will work:\n\n262 work days multiplied by 8 hours per day is 2,096 working hours in 2020\n\n\\$37 an hour multiplied by 2,096 working hours is \\$77,552 income per year\n\n*This does not include any overtime hours worked\n\n## How Much Is \\$37 An Hour Per Month?\n\nIf you want to see how much \\$37 an hour is a month, we need to know how many working hours there are in a month.\n\nIf we divide the total working hours in a year by 12 (months), we can see how many working hours are in a month.\n\n2,096 hours a year divided by 12 is about 175 working hours per month on average.\n\nSo to calculate your monthly income, see below:\n\n\\$37 an hour multiplied by 175 hours per month is \\$6,475 per month income on average.\n\n## How Much Is \\$37 An Hour Per Week?\n\nIf you want to break it out by week, let’s assume you’re working a normal 40-hour week.\n\nSo to calculate your weekly income, see below:\n\n\\$37 an hour multiplied by 40 hours per week is \\$1,480 per week income.\n\n## How Much Is \\$37 An Hour Per Day?\n\nLet’s see how much you can make per day on \\$37 an hour. Assuming you are working a typical 8-hour workday, here’s the answer:\n\n\\$37 an hour multiplied by 8 hours per day is \\$296 per day income.\n\n## Example Budget For \\$37 Per Hour\n\nNow that you know how much you can make per year, per month and weekly, let’s see how a typical budget can look making \\$37 per hour.\n\nRemember, for your budget, you need to calculate the estimated take-home pay.\n\nTaking the \\$37 an hour monthly income of \\$6,475, minus taxes, the estimated take-home pay is \\$4,872 (varies by state and paycheck deductions)\n\nSample Monthly Budget For \\$37 An Hour:\n\nAs you can see, this budget allows for saving a decent amount (over \\$450 per month), and would be great for a young family. 
I recommend putting the \$450+ per month into your work 401k account (up to the match) for best results. Then you should put any extra into a Roth IRA.\n\nIf you are looking at buying a house on \$37 an hour, make sure you can put 20% down and keep monthly housing costs under \$1,500 per month.\n\nDon’t like the budget? Grab your own budget template below and plug in your numbers for your \$37 an hour budget." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8966768,"math_prob":0.8306113,"size":4040,"snap":"2022-27-2022-33","text_gpt3_token_len":1103,"char_repetition_ratio":0.15510406,"word_repetition_ratio":0.06818182,"special_character_ratio":0.32029703,"punctuation_ratio":0.1,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99503654,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-05T00:19:33Z\",\"WARC-Record-ID\":\"<urn:uuid:15b8f635-26d3-4247-8d5b-553e9979c9e6>\",\"Content-Length\":\"203361\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:916939f8-8511-451e-9746-464c1dd1b0ca>\",\"WARC-Concurrent-To\":\"<urn:uuid:3a69aa09-3a15-4fdd-858d-83445abc60d9>\",\"WARC-IP-Address\":\"194.1.147.29\",\"WARC-Target-URI\":\"https://www.roadmapmoney.com/37-dollars-an-hour/\",\"WARC-Payload-Digest\":\"sha1:IBWESK6FB7I7XCJ5275WDACXAOOMSPUV\",\"WARC-Block-Digest\":\"sha1:QLKHMMV425U5NS24JYVXKM7KICUX7H3F\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104506762.79_warc_CC-MAIN-20220704232527-20220705022527-00569.warc.gz\"}"}
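All of the hourly-to-annual conversions in the record above follow one formula: rate × hours per week × weeks per year. A short sketch reproducing the article's numbers:

```python
def annual_income(hourly_rate, hours_per_week=40, weeks_per_year=52):
    """Gross annual income for a simple fixed hourly schedule."""
    return hourly_rate * hours_per_week * weeks_per_year

rate = 37
print(annual_income(rate))                     # 76960 (52 paid weeks)
print(annual_income(rate, weeks_per_year=50))  # 74000 (2 unpaid weeks off)
print(rate * 8 * 262)                          # 77552 (2020: 262 weekdays x 8 h)
print(rate * 40)                               # 1480 per week
print(rate * 8)                                # 296 per day
```

Note these are gross figures; as the article points out, a budget should be built from estimated take-home pay after taxes and deductions.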
https://it.mathworks.com/help/images/ref/imref2d.worldtosubscript.html
[ "# worldToSubscript\n\nConvert world coordinates to row and column subscripts\n\n## Syntax\n\n``[I, J] = worldToSubscript(R,xWorld,yWorld)``\n``[I, J, K] = worldToSubscript(R,xWorld,yWorld,zWorld)``\n\n## Description\n\nexample\n\n````[I, J] = worldToSubscript(R,xWorld,yWorld)` maps points from the 2-D world system (`xWorld`,`yWorld`) to subscript arrays `I` and `J` based on the relationship defined by 2-D spatial referencing object `R`.If the kth input coordinates (`xWorld`(k),`yWorld`(k)) fall outside the image bounds in the world coordinate system, `worldToSubscript` sets the corresponding subscripts `I`(k) and `J`(k) to `NaN`.```\n\nexample\n\n````[I, J, K] = worldToSubscript(R,xWorld,yWorld,zWorld)` maps points from the 3-D world system to subscript arrays `I`, `J`, and `K`, using 3-D spatial referencing object `R`.```\n\n## Examples\n\ncollapse all\n\nRead a 2-D grayscale image of a knee into the workspace.\n\n```m = dicominfo('knee1.dcm'); A = dicomread(m);```\n\nCreate an `imref2d` object, specifying the size and the resolution of the pixels. The DICOM file contains a metadata field `PixelSpacing` that specifies the image resolution in each dimension in millimeters per pixel.\n\n`RA = imref2d(size(A),m.PixelSpacing(2),m.PixelSpacing(1))`\n```RA = imref2d with properties: XWorldLimits: [0.1562 160.1562] YWorldLimits: [0.1562 160.1562] ImageSize: [512 512] PixelExtentInWorldX: 0.3125 PixelExtentInWorldY: 0.3125 ImageExtentInWorldX: 160 ImageExtentInWorldY: 160 XIntrinsicLimits: [0.5000 512.5000] YIntrinsicLimits: [0.5000 512.5000] ```\n\nDisplay the image, including the spatial referencing object. The axes coordinates reflect the world coordinates. Notice that the coordinate (0,0) is in the upper left corner.\n\n```figure imshow(A,RA,'DisplayRange',[0 512])```", null, "Select sample points, and store their world x- and y- coordinates in vectors. 
For example, the first point has world coordinates (38.44,68.75), the second point is 1 mm to the right of it, and the third point is 7 mm below it. The last point is outside the image boundary.\n\n```xW = [38.44 39.44 38.44 -0.2]; yW = [68.75 68.75 75.75 1];```\n\nConvert the world coordinates to row and column subscripts using `worldToSubscript`.\n\n`[rS, cS] = worldToSubscript(RA,xW,yW)`\n```rS = 1×4 220 220 242 NaN ```\n```cS = 1×4 123 126 123 NaN ```\n\nThe resulting vectors contain the row and column indices that are closest to the point. Note that the indices are discrete, and that points outside the image boundary have `NaN` for both row and column indices.\n\nAlso, the order of the input and output coordinates is reversed. The world x-coordinate vector, `xW`, corresponds to the second output vector, `cS`. The world y-coordinate vector, `yW`, corresponds to the first output vector, `rS`.\n\nRead a 3-D volume into the workspace. This image consists of 27 frames of 128-by-128 pixel images.\n\n```load mri; D = squeeze(D); D = ind2gray(D,map);```\n\nCreate an `imref3d` spatial referencing object associated with the volume. For illustrative purposes, provide a pixel resolution in each dimension. The resolution is in millimeters per pixel.\n\n`R = imref3d(size(D),2,2,4)`\n```R = imref3d with properties: XWorldLimits: [1 257] YWorldLimits: [1 257] ZWorldLimits: [2 110] ImageSize: [128 128 27] PixelExtentInWorldX: 2 PixelExtentInWorldY: 2 PixelExtentInWorldZ: 4 ImageExtentInWorldX: 256 ImageExtentInWorldY: 256 ImageExtentInWorldZ: 108 XIntrinsicLimits: [0.5000 128.5000] YIntrinsicLimits: [0.5000 128.5000] ZIntrinsicLimits: [0.5000 27.5000] ```\n\nSelect sample points, and store their world x-, y-, and z-coordinates in vectors. For example, the first point has world coordinates (108,92,52), the second point is 3 mm above it in the +z-direction, and the third point is 5.2 mm to the right of it in the +x-direction. 
The last point is outside the image boundary.\n\n```xW = [108 108 113.2 2]; yW = [92 92 92 -1]; zW = [52 55 52 0.33];```\n\nConvert the world coordinates to row, column, and plane subscripts using `worldToSubscript`.\n\n`[rS, cS, pS] = worldToSubscript(R,xW,yW,zW)`\n```rS = 1×4 46 46 46 NaN ```\n```cS = 1×4 54 54 57 NaN ```\n```pS = 1×4 13 14 13 NaN ```\n\nThe resulting vectors contain the row, column, and plane indices that are closest to the point. Note that the indices are discrete, and that points outside the image boundary have index values of `NaN`.\n\nAlso, the order of the input and output coordinates is reversed. The world x-coordinate vector, `xW`, corresponds to the second output vector, `cS`. The world y-coordinate vector, `yW`, corresponds to the first output vector, `rS`.\n\n## Input Arguments\n\ncollapse all\n\nSpatial referencing object, specified as an `imref2d` or `imref3d` object.\n\nCoordinates along the x-dimension in the world coordinate system, specified as a numeric scalar or vector.\n\nData Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`\n\nCoordinates along the y-dimension in the world coordinate system, specified as a numeric scalar or vector. `yWorld` is the same length as `xWorld`.\n\nData Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`\n\nCoordinates along the z-dimension in the world coordinate system, specified as a numeric scalar or vector. `zWorld` is the same length as `xWorld`.\n\nData Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`\n\n## Output Arguments\n\ncollapse all\n\nRow indices, returned as a positive integer scalar or vector. `I` is the same length as `yWorld`. For an m-by-n or m-by-n-by-p image, 1 ≤ `I` ≤ m.\n\nData Types: `double`\n\nColumn indices, returned as a positive integer scalar or vector. `J` is the same length as `xWorld`. 
For an m-by-n or m-by-n-by-p image, 1 ≤ `J` ≤ n.\n\nData Types: `double`\n\nPlane indices, returned as a positive integer scalar or vector. `K` is the same length as `zWorld`. For an m-by-n-by-p image, 1 ≤ `K` ≤ p.\n\nData Types: `double`" ]
[ null, "https://it.mathworks.com/help/examples/images/win64/Convert2DWorldCoordinatesToRowColumnSubscriptsExample_01.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7298133,"math_prob":0.9671978,"size":1681,"snap":"2021-31-2021-39","text_gpt3_token_len":501,"char_repetition_ratio":0.12880144,"word_repetition_ratio":0.0,"special_character_ratio":0.30160618,"punctuation_ratio":0.20857143,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9870643,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-04T18:01:26Z\",\"WARC-Record-ID\":\"<urn:uuid:73f53769-b7e1-4e91-8d37-066a1248e458>\",\"Content-Length\":\"98189\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:56646a8e-68a4-4526-8b5f-124480d05ee1>\",\"WARC-Concurrent-To\":\"<urn:uuid:8765607c-3285-4a4a-942c-047c62105a15>\",\"WARC-IP-Address\":\"23.223.252.57\",\"WARC-Target-URI\":\"https://it.mathworks.com/help/images/ref/imref2d.worldtosubscript.html\",\"WARC-Payload-Digest\":\"sha1:IZ2BJQIDFO2AC4G4JDQO3IK6VLTH2JKK\",\"WARC-Block-Digest\":\"sha1:DZOCLLWKMIBY44H4VET5JQNSE6IZPTNV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154897.82_warc_CC-MAIN-20210804174229-20210804204229-00244.warc.gz\"}"}
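The `worldToSubscript` behavior documented above (nearest pixel subscript, `NaN` for points outside the image bounds) can be mimicked for the 2-D case in plain Python. This is an illustrative re-implementation under the `imref2d` default layout, where pixel (1, 1) is centered at world (dx, dy); it is not MathWorks code, and exact tie-breaking at pixel edges may differ:

```python
import math

def world_to_subscript_2d(image_size, dx, dy, x_world, y_world):
    """Nearest (row, col) 1-based subscripts for a list of world points.

    image_size is (rows, cols); dx, dy are pixel extents in world units.
    Points outside [d/2, n*d + d/2] on either axis map to (nan, nan),
    mirroring worldToSubscript's out-of-bounds behavior.
    """
    rows, cols = image_size
    out = []
    for x, y in zip(x_world, y_world):
        inside = (dx / 2 <= x <= cols * dx + dx / 2 and
                  dy / 2 <= y <= rows * dy + dy / 2)
        if not inside:
            out.append((math.nan, math.nan))
        else:
            # Pixel centers sit at integer multiples of the pixel extent.
            out.append((round(y / dy), round(x / dx)))
    return out

# Reproduces the knee-image example: 512x512 pixels, 0.3125 mm per pixel.
pts = world_to_subscript_2d((512, 512), 0.3125, 0.3125,
                            [38.44, 39.44, 38.44, -0.2],
                            [68.75, 68.75, 75.75, 1])
print(pts)  # [(220, 123), (220, 126), (242, 123), (nan, nan)]
```

As in the MATLAB example, note the axis/subscript swap: world x maps to the column subscript and world y to the row subscript.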
https://math.stackexchange.com/questions/3956309/sine-vs-sine-understanding-the-differences/3956359
[ "# sine vs Sine: understanding the differences\n\nI was using the textbook A History of Mathematics by Victor J. Katz. I saw a theorem from Nasir al-Din al-Tusi. The way the theorem is written in the book is like this:\n\nIn any plane triangle, the ratio of the sides is equal to the ratio of the sines of the angles opposite to those sides. That is, in triangle ABC, we have AB:AC=sin(angle ACB):sin(angle ABC). [Note that since we are considering a ratio it is irrelevant whether we use Sines or sines.]\n\nThis theorem is about the law of sines. My question is about the last sentence in brackets.\n\nWhat is the difference between Sine and sine?\n\n• Check the text, but some use Sine for usual sine restricted to $[-\\pi,\\pi]$. – coffeemath Dec 20 '20 at 16:43\n• mathwords.com/i/inverse_trigonometry.htm – saulspatz Dec 20 '20 at 16:45\n• @coffeemath I think the interval is meant to be $[-\\pi/2,\\pi/2]$. – Joe Dec 20 '20 at 16:46\n• @Joe is right. I went too quickly. – coffeemath Dec 20 '20 at 16:47\n• @saulspatz but such a distinction would not match the quoted text which is about ratios. I.e., it would match better if we had Sine = 1000 times sin or the like. – Hagen von Eitzen Dec 20 '20 at 17:07\n\nSix pages earlier, at the beginning of 9.6.1, the first subsection on Islamic trigonometry, a parenthetical remark notes:\n\nThe Islamic sine of an arc, like that of the Hindus, was the length of a particular line in a circle of given radius $$R$$. We will keep to our notation of \"Sine\" to designate the Islamic sine function\n\nAdditionally, in 8.7.1 on Indian trigonometry, the book explains:\n\nIn what follows, we generally use the word \"Sine\" (with a capital S) to represent the length of the Indian half-chord, given that the half-chord is a line in a circle of radius $$R$$, where $$R$$ will always be stated. We reserve the word \"sine\" (with a small s) for the modern function (or, equivalently, when the radius of the circle is 1). 
Thus, $$\\mathrm{Sin}\\,\\theta=R\\sin\\theta$$.\n\nI have not seen this convention outside of this book. As noted in the comments, in modern mathematics, some might use a capital letter to denote a restriction of the domain of the sine function.\n\nEdit: it appears that the notation used in the book does not correspond to how I have seen $$\\sin$$ and $$\\mathrm{Sin}$$ being used elsewhere. However, I will keep this answer for the benefit of other readers.\n\n$$\\sin$$ is a function that (generally speaking) maps real numbers to real numbers. Here is how it is defined on Wolfram Mathworld:\n\nLet $$\\theta$$ be an angle measured counterclockwise from the $$x$$-axis along an arc of the unit circle. Then $$\\sin(\\theta)$$ is the vertical coordinate of the arc endpoint.", null, "There are other equivalent definitions of sine, but however you choose to define it, it is not a one-to-one function:", null, "Notice that $$\\sin(0)$$ equals $$0$$, but so does $$\\sin(\\pi)$$, $$\\sin(2\\pi)$$, etc. (I assume you are familiar with radians.) This means that the $$\\sin$$ function does not have an inverse. To get around this, we often restrict the domain of $$\\sin(\\theta)$$ by requiring that $$-\\frac{\\pi}{2}\\leq\\theta\\leq\\frac{\\pi}{2}$$:", null, "Notice that along this restricted domain, there are no two values of $$\\theta$$ that produce the same output. This means that the inverse exists. However, because a function is defined not just by its values but by its domain, when we restrict the domain of $$\\sin$$ it becomes a new function that we often denote $$\\DeclareMathOperator{\\Sin}{Sin} \\Sin$$. The inverse of $$\\Sin$$ is called $$\\Sin^{-1}$$. You will often see $$\\Sin^{-1}$$ being written simply as $$\\sin^{-1}$$, but this is really an abuse of notation. The function $$\\sin$$ does not have an inverse—only $$\\Sin$$ does. A more popular alternative to $$\\Sin^{-1}$$ is $$\\arcsin$$. 
This notation has the advantage of being more 'correct' than $$\\sin^{-1}$$, while also avoiding the $$\\sin$$/$$\\Sin$$ confusion.\n\n• Correction: $\\sin\\frac{\\pi}{2} \\ne 0$ – Tavish Dec 20 '20 at 19:19\n• @Tavish Thank you, I'll edit the post. – Joe Dec 20 '20 at 19:23" ]
[ null, "https://i.stack.imgur.com/zatxE.gif", null, "https://i.stack.imgur.com/RXsRJ.png", null, "https://i.stack.imgur.com/NetQW.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91506153,"math_prob":0.99882954,"size":1681,"snap":"2021-21-2021-25","text_gpt3_token_len":421,"char_repetition_ratio":0.12343471,"word_repetition_ratio":0.0,"special_character_ratio":0.27007735,"punctuation_ratio":0.09846154,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99987483,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-09T07:11:46Z\",\"WARC-Record-ID\":\"<urn:uuid:da8331d4-b71c-4dce-9887-fd897134396f>\",\"Content-Length\":\"186598\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:861ac507-dc5b-40d6-8ec0-3ef9a58e8a4a>\",\"WARC-Concurrent-To\":\"<urn:uuid:cbedc708-3e19-4ff4-92dc-f34b7c53f122>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/3956309/sine-vs-sine-understanding-the-differences/3956359\",\"WARC-Payload-Digest\":\"sha1:EPJOUH36D5GSC7D642EZJ5OXAH752EEN\",\"WARC-Block-Digest\":\"sha1:QOAI5HGREQKZE4CW2R4IKLLLTROJPMUG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988961.17_warc_CC-MAIN-20210509062621-20210509092621-00069.warc.gz\"}"}
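The book's convention quoted in the answer above, $\mathrm{Sin}\,\theta = R\sin\theta$, also explains the bracketed remark in the theorem: since the capital Sine is just a scaled copy of the modern sine, ratios of Sines equal ratios of sines. A quick numeric illustration (the radius R = 3438 below is only an example value, chosen because radii of a few thousand appear in historical Sine tables):

```python
import math

def capital_sine(theta, R):
    """The historical 'Sine': the half-chord in a circle of radius R,
    i.e. Sin(theta) = R * sin(theta)."""
    return R * math.sin(theta)

# With R = 1, the capital Sine reduces to the modern sine function.
print(capital_sine(math.pi / 6, 1))   # ~0.5

# With any other R, ratios of Sines match ratios of sines,
# which is why the law of sines holds in either notation.
R = 3438
ratio_big = capital_sine(math.pi / 6, R) / capital_sine(math.pi / 4, R)
ratio_mod = math.sin(math.pi / 6) / math.sin(math.pi / 4)
print(abs(ratio_big - ratio_mod) < 1e-12)   # True
```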
https://www.jbstatistics.com/introduction-to-the-geometric-distribution/
[ "# 1.14 An Introduction to the Geometric Distribution\n\nAn introduction to the geometric distribution. I discuss the underlying assumptions that result in a geometric distribution, the formula, and the mean and variance of the distribution. I work through an example of the calculations and then briefly discuss the cumulative distribution function.\n\n### 3 thoughts on “1.14 An Introduction to the Geometric Distribution”\n\n1.", null, "thanks for teaching", null ]
[ null, "https://secure.gravatar.com/avatar/1cc97efeea1e43e7f47fdf79b788eff2", null, "https://secure.gravatar.com/avatar/7139aa67b7208f7beadf314d51f2494b", null, "https://secure.gravatar.com/avatar/56203f59cba6989d38cd49218c718069", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9262394,"math_prob":0.96712875,"size":619,"snap":"2021-43-2021-49","text_gpt3_token_len":119,"char_repetition_ratio":0.16260162,"word_repetition_ratio":0.0,"special_character_ratio":0.18739903,"punctuation_ratio":0.116071425,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9965406,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-19T15:20:03Z\",\"WARC-Record-ID\":\"<urn:uuid:b133eb77-a360-4037-b8cd-c2e8dc678607>\",\"Content-Length\":\"42352\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c6efe600-ac40-4235-98c4-2d7ecdfbcdc3>\",\"WARC-Concurrent-To\":\"<urn:uuid:ee235ad5-69dc-4abf-a10d-b7fc5055a312>\",\"WARC-IP-Address\":\"35.209.113.79\",\"WARC-Target-URI\":\"https://www.jbstatistics.com/introduction-to-the-geometric-distribution/\",\"WARC-Payload-Digest\":\"sha1:BJRPPYXKSAT4USIGVCTMDGJJLDULRD27\",\"WARC-Block-Digest\":\"sha1:LQRVQ7EKEDA74DQE5EM4UT5JYKVKSERK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585270.40_warc_CC-MAIN-20211019140046-20211019170046-00650.warc.gz\"}"}
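The video description above mentions the formula, mean, and variance of the geometric distribution. Under the "number of trials until the first success" convention (an assumption here; the video's convention is not stated in the text), these are P(X = k) = (1 - p)^(k-1) p, E[X] = 1/p, and Var(X) = (1 - p)/p². A small sketch:

```python
def geometric_pmf(k, p):
    """P(X = k): probability the first success occurs on trial k (k = 1, 2, ...)."""
    return (1 - p) ** (k - 1) * p

p = 0.25
mean = 1 / p              # 4.0 trials expected on average
var = (1 - p) / p ** 2    # 12.0
# The pmf sums to 1 (checked numerically over a long but finite tail):
total = sum(geometric_pmf(k, p) for k in range(1, 200))
print(mean, var, round(total, 6))
```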
https://www.npmjs.com/package/math.imul
[ "math.imul\n\n1.0.1 • Public • Published\n\nMath.imul", null, "", null, "An ESnext spec-compliant Math.imul shim/polyfill/replacement that works as far down as ES3.\n\nThis package implements the es-shim API interface. It works in an ES3-supported environment and complies with the spec.\n\nGetting started\n\nnpm install --save math.imul\n\nUsage/Examples\n\nconsole.log(Math.imul(2, 4)); // 8\nconsole.log(Math.imul(-1, 8)); // -8\nconsole.log(Math.imul(-2, -2)); // 4\nconsole.log(Math.imul(0xffffffff, 5)); // -5\n\nTests\n\nSimply clone the repo, npm install, and run npm test\n\nInstall\n\nnpm i math.imul" ]
[ null, "https://camo.githubusercontent.com/8253828b1a82780018e0af38a1a694b5f4c964923472bf188ab4db08e2bc60ed/68747470733a2f2f76657273696f6e626164672e65732f65732d7368696d732f4d6174682e696d756c2e737667", null, "https://camo.githubusercontent.com/2904c9cbe75a9ee2627dffb3f7a1c04fe02179e925533c7aa9c7e6e228f0206c/68747470733a2f2f6e6f6465692e636f2f6e706d2f6d6174682e696d756c2e706e673f646f776e6c6f6164733d747275652673746172733d74727565", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.510706,"math_prob":0.49846774,"size":495,"snap":"2022-05-2022-21","text_gpt3_token_len":146,"char_repetition_ratio":0.18329939,"word_repetition_ratio":0.0,"special_character_ratio":0.28282827,"punctuation_ratio":0.21818182,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95493585,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-26T19:57:29Z\",\"WARC-Record-ID\":\"<urn:uuid:4d4add62-28db-4ae9-9c1a-15112e069705>\",\"Content-Length\":\"56426\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0e84275f-98dc-4f99-9b73-a8db096c7bb0>\",\"WARC-Concurrent-To\":\"<urn:uuid:84e52dd1-db41-4837-acff-dcdbbfb17870>\",\"WARC-IP-Address\":\"104.16.92.83\",\"WARC-Target-URI\":\"https://www.npmjs.com/package/math.imul\",\"WARC-Payload-Digest\":\"sha1:LGUJYPA4NDWC5BRBR7EX3XI3E644XZM6\",\"WARC-Block-Digest\":\"sha1:6LQ2ICHRKUO5BPVCBUXILWXQCLJ7WKJG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304961.89_warc_CC-MAIN-20220126192506-20220126222506-00169.warc.gz\"}"}
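The semantics shown in the usage examples above (C-style 32-bit signed multiplication) can be sketched in Python. This is only an illustration of what `Math.imul` computes, not the package's actual implementation:

```python
def imul(a, b):
    """Sketch of ECMAScript Math.imul semantics: multiply, keep the
    low 32 bits of the product, and reinterpret them as a signed
    32-bit integer (two's complement)."""
    low = (a * b) & 0xFFFFFFFF            # low 32 bits of the product
    return low - (1 << 32) if low & 0x80000000 else low

# Reproduces the package's usage examples:
print(imul(2, 4))           # 8
print(imul(-1, 8))          # -8
print(imul(-2, -2))         # 4
print(imul(0xFFFFFFFF, 5))  # -5
```

The sign step is what distinguishes this from plain modular reduction: any result with bit 31 set is shifted down by 2**32 to land in the signed range.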
https://www.studypug.com/algebra-help/adding-and-subtracting-vectors-in-component-form
[ "# Adding and subtracting vectors in component form - Vectors\n\n### Adding and subtracting vectors in component form\n\nIn this section, we will learn how to find the sum, as well as the difference, of two vectors algebraically and graphically. We will do so with two methods – the \"Tip To Tail\" method and the \"parallelogram\" method.\n\n#### Lessons\n\n#####", null, "", null, "" ]
[ null, "https://dcvp84mxptlac.cloudfront.net/diagrams2/MATH12-17-6-X-5.png", null, "https://dmn92m25mtw4z.cloudfront.net/img_set/pug-teacher/v1/pug-teacher-262w.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9037005,"math_prob":0.5651984,"size":479,"snap":"2021-43-2021-49","text_gpt3_token_len":118,"char_repetition_ratio":0.15368421,"word_repetition_ratio":0.16867469,"special_character_ratio":0.2672234,"punctuation_ratio":0.16091955,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96925634,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,8,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-23T02:54:57Z\",\"WARC-Record-ID\":\"<urn:uuid:1a068d93-b17f-4fd8-a8cd-94c57e58facf>\",\"Content-Length\":\"123117\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8dbafc9a-1b21-482a-972b-80543a2406d4>\",\"WARC-Concurrent-To\":\"<urn:uuid:0921bd4c-b8f1-4b8e-92da-7d51ae97f747>\",\"WARC-IP-Address\":\"3.238.135.136\",\"WARC-Target-URI\":\"https://www.studypug.com/algebra-help/adding-and-subtracting-vectors-in-component-form\",\"WARC-Payload-Digest\":\"sha1:GVTZLW4NA64PRCQAHU2B7QISVADYBWZM\",\"WARC-Block-Digest\":\"sha1:FVQBI775RUVG4OL5S6C3RSH2IP7XENAE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585537.28_warc_CC-MAIN-20211023002852-20211023032852-00171.warc.gz\"}"}
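The algebraic side of the lesson above (the component-form counterpart of the tip-to-tail picture) can be sketched as follows; the vectors used are made-up examples:

```python
def add(u, v):
    """Add two vectors given in component form, component by component."""
    return tuple(a + b for a, b in zip(u, v))

def sub(u, v):
    """Subtract v from u, component by component."""
    return tuple(a - b for a, b in zip(u, v))

# Tip to tail: placing the tail of v at the tip of u lands at u + v.
u, v = (3, 1), (-1, 4)
print(add(u, v))  # (2, 5)
print(sub(u, v))  # (4, -3)
```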
https://math.stackexchange.com/questions/3680101/leapfrogs-puzzle-least-number-of-moves-needed-to-interchange-the-pegs
[ "# Leapfrogs puzzle -- Least number of moves needed to interchange the pegs\n\nThis is a question from the book \"Thinking Mathematically\" by Burton and Mason.\n\nQuestion: Ten pegs of two colors are laid out in a line of 11 holes as shown below.", null, "I want to interchange the black and white pegs, but I am only allowed to move pegs into an adjacent empty hole or to jump over one peg into an empty hole. Find the minimum number of moves necessary to make the change and generalize it to a case where there are $$2n+1$$ holes.\n\nRelated thread: There's an existing thread here that already explains why it takes $$n^2+2n$$ moves to make the change, which is something I easily figured out on my own. Please note that in that thread, a peg can move over another peg only if it is of the opposite color, whereas the problem in the book does not stipulate that condition. I made the same assumption to arrive at the answer.\n\nThe part that I'm stuck with is to show that there can't be any other strategy where it takes less than $$n^2+2n$$ moves to make the change. Assume for instance that I represent a black peg by 'B', a white peg by 'W', and the hole by 'H'. I assumed that the number of moves involves interchanging 'B' and 'H', and then moving the pegs in only one direction towards the destination. However, how can I show that it is never possible to have a pattern like 'BBH' changing to 'HBB', so that the black peg moves by two holes towards its destination? This way, it can further reduce the number of steps needed.\n\nMy hunch is that with a pattern like 'HBB', any 'W' stuck to the right of the black pegs will remain there unless the black peg is made to move left, which effectively nullifies the advantage of having moved two steps to the right. However, this explanation is very loose.\n\n1. What if there are no white pegs to the left of the black peg in the 'HBB' part of the board?\n2. 
More importantly, how can I make it mathematically rigorous that there should never be a pattern like 'HBB' or 'WWH' during the process of transformation, assuming the 'B's and 'W's need to be moved further to reach the goal?\n\nMy failed approaches and current thought process: Such problems, from my past experience, have a slick solution where some invariant or a monovariant is used to prove the optimality of the algorithm. I tried many but I simply could not go anywhere with it.\n\nAnother strategy is the same as that used in the Tower of Hanoi solution, which is an inductive argument, where you show that there is an algorithm to make the optimal transformation in the $$2n+1$$ case by using the optimal transformation in the $$2n-1$$ case, and there is no way around it. This seems possible, as I was able to prove the number of moves needed to make the transformation is $$n^2+2n$$ using this strategy.\n\nHowever, it would be simply fantastic to figure out why we can never get a pattern like 'HBB' or 'WWH' assuming that both the pegs need to be further moved in order to reach the final goal. I would greatly appreciate it if you can provide the argument for me.\n\n• The answer in the other thread proves that it takes $n^2+2n$ moves. Isn't that what you want?\n– Ted\nMay 18, 2020 at 4:59\n• @Ted I was able to prove that we can accomplish the switching of pegs in $n^2+2n$ moves. However, neither the thread nor I could show that this is indeed the least number of moves needed. In the thread above, I have mentioned one way in which the number of moves can be less than $n^2+2n$, the one where a leap of one peg occurs on top of a peg of the same color (i.e. a transformation, for example, from 'BBH' to 'HBB'). However, we all have a hunch that such a move can never occur during a successful transformation with the least number of moves. May 18, 2020 at 5:27\n• The important fact is that there is only one empty spot. 
If you jump one peg forward over a peg of the same colour, and there are pegs of the other colour that still have to come the other way and pass them, then the only way is to make room by doing a backwards move or jump with either peg. This undoes any advantage the same-colour-jump gave. So the only situation where such a jump may help is when there are no pegs of the other colour in front. However, you can only get to that situation by backwards jumps or moves. In either case, there is no advantage to jumping same colour. May 18, 2020 at 11:21\n• The \"verbal argument\" is a summary of a formal proof. It just needs the details filled in. The only hand-wavy part is \"This undoes any advantage the same-colour-jump gave\". The rest of it easily translates into a formal proof. So if that one line can be justified, the rest of the proof is straight-forward. May 18, 2020 at 19:34\n• @MikeEarnest Thanks for pointing me to that thread. I was actually able to get the answer independently. I can't however explain as to why a jump never occurs between pegs of the same color because if that also occurs, then the answer is a number lesser than $n^2+2n$. While we can provide intuitive verbal explanation as to why that is so, it is not rigorous. I need a rigorous mathematical proof that pegs of the same colors never cross over each other as long as they both need to be further moved towards their final destination. An example of a rigorous argument is that of the tower of Hanoi. May 19, 2020 at 5:34" ]
[ null, "https://i.stack.imgur.com/lNKbx.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9666115,"math_prob":0.9402285,"size":2977,"snap":"2022-27-2022-33","text_gpt3_token_len":698,"char_repetition_ratio":0.119408004,"word_repetition_ratio":0.014492754,"special_character_ratio":0.22640242,"punctuation_ratio":0.072487645,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98772454,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-13T03:29:06Z\",\"WARC-Record-ID\":\"<urn:uuid:fa464588-f3ea-40fd-a293-60696b75db81>\",\"Content-Length\":\"224991\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2f1e3772-2a81-4eab-92bf-3c48637263ab>\",\"WARC-Concurrent-To\":\"<urn:uuid:0d5a917c-f564-40ff-a08e-661359eada40>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/3680101/leapfrogs-puzzle-least-number-of-moves-needed-to-interchange-the-pegs\",\"WARC-Payload-Digest\":\"sha1:W3JJGGFKC3Y333HX34IH6BTQDJL7NUMM\",\"WARC-Block-Digest\":\"sha1:2UUOXEASSCNG4BJIGNT6MLEBN463UIIR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571869.23_warc_CC-MAIN-20220813021048-20220813051048-00417.warc.gz\"}"}
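For what it's worth, minimality for small n can be checked by brute force. The sketch below (not from the thread, just an illustration) does a breadth-first search over all peg configurations, with jumps over either colour allowed as in the book's statement, and its output agrees with the n^2 + 2n formula for the cases it reaches:

```python
from collections import deque

def min_moves(n):
    """Minimum number of moves to swap n black and n white pegs in a
    row of 2n+1 holes, found by breadth-first search.  A peg may slide
    into the adjacent empty hole or jump over exactly one peg (of
    either colour) into the empty hole."""
    start = tuple('B' * n + 'H' + 'W' * n)
    goal = tuple('W' * n + 'H' + 'B' * n)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            return dist[state]
        h = state.index('H')                  # position of the hole
        for src in (h - 2, h - 1, h + 1, h + 2):
            if 0 <= src < len(state):         # slide (distance 1) or jump (distance 2)
                nxt = list(state)
                nxt[h], nxt[src] = nxt[src], 'H'
                nxt = tuple(nxt)
                if nxt not in dist:
                    dist[nxt] = dist[state] + 1
                    queue.append(nxt)

for n in range(1, 5):
    print(n, min_moves(n), n * n + 2 * n)     # expect the two counts to agree
```

Since the search allows same-colour jumps, a shorter solution exploiting a 'BBH' to 'HBB' move would be found if one existed; the BFS finding exactly n^2 + 2n for these small n is evidence (not a proof) for the general claim.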
https://www.geteasysolution.com/Prime-factorization-of-4366
[ "# Prime factorization of 4366\n\nBelow you can find the full step-by-step solution for your problem. We hope it will be helpful and will help you understand the solving process.\n\nIf it's not what you are looking for, type your own integer in the field below and you will get the solution.\n\nPrime factorization of 4366:\n\nTo find the prime factorization of 4366 we follow 5 simple steps:\n1. We write the number 4366 above a 2-column table\n2. We divide 4366 by the smallest possible prime factor\n3. We write the prime factor on the left side of the table and the next number to factorize on the right side\n4. We continue to factor in this fashion (we deal with odd numbers by trying small prime factors)\n5. We continue until we reach 1 on the right side of the table\n\n4366\nprime factors | number to factorize\n2 | 2183\n37 | 59\n59 | 1\n\nPrime factorization of 4366 = 2 × 37 × 59" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86996764,"math_prob":0.97605956,"size":1678,"snap":"2019-35-2019-39","text_gpt3_token_len":462,"char_repetition_ratio":0.35185185,"word_repetition_ratio":0.052469134,"special_character_ratio":0.37663886,"punctuation_ratio":0.036363635,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97307825,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-22T08:50:41Z\",\"WARC-Record-ID\":\"<urn:uuid:536f6d01-5eb7-4104-8756-8ac620862cf4>\",\"Content-Length\":\"21370\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e27e3b5d-c18a-4a9c-893c-d4a3711bc056>\",\"WARC-Concurrent-To\":\"<urn:uuid:8b06bce5-a83d-43c9-b6ba-77d154806fe5>\",\"WARC-IP-Address\":\"51.91.60.1\",\"WARC-Target-URI\":\"https://www.geteasysolution.com/Prime-factorization-of-4366\",\"WARC-Payload-Digest\":\"sha1:LLWWRIFM4CAR3VC7TNOPXX3CXUSUQTBX\",\"WARC-Block-Digest\":\"sha1:HMAP54CHFDQFDJJRLCFQ5FDTTMQ5QMUY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514575402.81_warc_CC-MAIN-20190922073800-20190922095800-00173.warc.gz\"}"}
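The five steps above are ordinary trial division; a minimal sketch:

```python
def prime_factors(n):
    """Repeatedly divide out the smallest possible prime factor
    (trial division), exactly as in the 2-column table above."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)   # left column: the prime factor
            n //= d             # right column: next number to factorize
        d += 1
    if n > 1:
        factors.append(n)       # whatever remains is itself prime
    return factors

print(prime_factors(4366))  # [2, 37, 59]
```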
http://num.bubble.ro/m/80/64/
[ "# Multiplication table for N = 80 * 63÷64\n\n80 * 63 = 5040 [+]\n80 * 63.01 = 5040.8 [+]\n80 * 63.02 = 5041.6 [+]\n80 * 63.03 = 5042.4 [+]\n80 * 63.04 = 5043.2 [+]\n80 * 63.05 = 5044 [+]\n80 * 63.06 = 5044.8 [+]\n80 * 63.07 = 5045.6 [+]\n80 * 63.08 = 5046.4 [+]\n80 * 63.09 = 5047.2 [+]\n80 * 63.1 = 5048 [+]\n80 * 63.11 = 5048.8 [+]\n80 * 63.12 = 5049.6 [+]\n80 * 63.13 = 5050.4 [+]\n80 * 63.14 = 5051.2 [+]\n80 * 63.15 = 5052 [+]\n80 * 63.16 = 5052.8 [+]\n80 * 63.17 = 5053.6 [+]\n80 * 63.18 = 5054.4 [+]\n80 * 63.19 = 5055.2 [+]\n80 * 63.2 = 5056 [+]\n80 * 63.21 = 5056.8 [+]\n80 * 63.22 = 5057.6 [+]\n80 * 63.23 = 5058.4 [+]\n80 * 63.24 = 5059.2 [+]\n80 * 63.25 = 5060 [+]\n80 * 63.26 = 5060.8 [+]\n80 * 63.27 = 5061.6 [+]\n80 * 63.28 = 5062.4 [+]\n80 * 63.29 = 5063.2 [+]\n80 * 63.3 = 5064 [+]\n80 * 63.31 = 5064.8 [+]\n80 * 63.32 = 5065.6 [+]\n80 * 63.33 = 5066.4 [+]\n80 * 63.34 = 5067.2 [+]\n80 * 63.35 = 5068 [+]\n80 * 63.36 = 5068.8 [+]\n80 * 63.37 = 5069.6 [+]\n80 * 63.38 = 5070.4 [+]\n80 * 63.39 = 5071.2 [+]\n80 * 63.4 = 5072 [+]\n80 * 63.41 = 5072.8 [+]\n80 * 63.42 = 5073.6 [+]\n80 * 63.43 = 5074.4 [+]\n80 * 63.44 = 5075.2 [+]\n80 * 63.45 = 5076 [+]\n80 * 63.46 = 5076.8 [+]\n80 * 63.47 = 5077.6 [+]\n80 * 63.48 = 5078.4 [+]\n80 * 63.49 = 5079.2 [+]\n80 * 63.5 = 5080 [+]\n80 * 63.51 = 5080.8 [+]\n80 * 63.52 = 5081.6 [+]\n80 * 63.53 = 5082.4 [+]\n80 * 63.54 = 5083.2 [+]\n80 * 63.55 = 5084 [+]\n80 * 63.56 = 5084.8 [+]\n80 * 63.57 = 5085.6 [+]\n80 * 63.58 = 5086.4 [+]\n80 * 63.59 = 5087.2 [+]\n80 * 63.6 = 5088 [+]\n80 * 63.61 = 5088.8 [+]\n80 * 63.62 = 5089.6 [+]\n80 * 63.63 = 5090.4 [+]\n80 * 63.64 = 5091.2 [+]\n80 * 63.65 = 5092 [+]\n80 * 63.66 = 5092.8 [+]\n80 * 63.67 = 5093.6 [+]\n80 * 63.68 = 5094.4 [+]\n80 * 63.69 = 5095.2 [+]\n80 * 63.7 = 5096 [+]\n80 * 63.71 = 5096.8 [+]\n80 * 63.72 = 5097.6 [+]\n80 * 63.73 = 5098.4 [+]\n80 * 63.74 = 5099.2 [+]\n80 * 63.75 = 5100 [+]\n80 * 63.76 = 5100.8 [+]\n80 * 63.77 = 5101.6 [+]\n80 * 63.78 = 5102.4 [+]\n80 * 63.79 = 
5103.2 [+]\n80 * 63.8 = 5104 [+]\n80 * 63.81 = 5104.8 [+]\n80 * 63.82 = 5105.6 [+]\n80 * 63.83 = 5106.4 [+]\n80 * 63.84 = 5107.2 [+]\n80 * 63.85 = 5108 [+]\n80 * 63.86 = 5108.8 [+]\n80 * 63.87 = 5109.6 [+]\n80 * 63.88 = 5110.4 [+]\n80 * 63.89 = 5111.2 [+]\n80 * 63.9 = 5112 [+]\n80 * 63.91 = 5112.8 [+]\n80 * 63.92 = 5113.6 [+]\n80 * 63.93 = 5114.4 [+]\n80 * 63.94 = 5115.2 [+]\n80 * 63.95 = 5116 [+]\n80 * 63.96 = 5116.8 [+]\n80 * 63.97 = 5117.6 [+]\n80 * 63.98 = 5118.4 [+]\n80 * 63.99 = 5119.2 [+]" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8554387,"math_prob":0.9999933,"size":9030,"snap":"2020-34-2020-40","text_gpt3_token_len":1652,"char_repetition_ratio":0.30135164,"word_repetition_ratio":0.36788777,"special_character_ratio":0.16079734,"punctuation_ratio":0.0,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99997246,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-14T17:43:48Z\",\"WARC-Record-ID\":\"<urn:uuid:87bf71d3-22c2-4bf3-907b-b7786e3e661d>\",\"Content-Length\":\"43409\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:761e0490-bee9-41d9-bf6b-60c623ad1db9>\",\"WARC-Concurrent-To\":\"<urn:uuid:3d15d63a-b150-4e4b-8550-199535ca5bb9>\",\"WARC-IP-Address\":\"104.24.97.16\",\"WARC-Target-URI\":\"http://num.bubble.ro/m/80/64/\",\"WARC-Payload-Digest\":\"sha1:2K4AZAXOEQZSOXRX62TCFRZGAVMBTU37\",\"WARC-Block-Digest\":\"sha1:NBWBGON34CQOUQP3T6R5HMMTU343NB7B\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439739347.81_warc_CC-MAIN-20200814160701-20200814190701-00069.warc.gz\"}"}
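Tables like the one above are mechanically generated; a sketch, stepping the second factor in integer hundredths so the loop variable accumulates no floating-point error:

```python
# Print the multiplication table for 80 * x, with x running from
# 63.00 to 63.99 in steps of 0.01.  k counts hundredths, so each
# value is computed fresh from integers rather than by repeated
# float addition.
for k in range(6300, 6400):
    print(f"80 * {k / 100:g} = {80 * k / 100:g}")
# first line: 80 * 63 = 5040
```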
https://www.natrajsarma.com/course/class-11-applied-mathematics/
[ "# Class 11 (Applied Mathematics 2023-24)\n\n###### Course Fees: 22 BHD (Inclusive VAT)\n21 STUDENTS ENROLLED\n•", null, "•", null, "•", null, "•", null, "•", null, "•", null, "### Course Curriculum\n\n Unit 1 Unit 1 binary numbers 00:20:00 How to add binary numbers 00:11:00 How to subtract binary numbers 00:09:00 Unit 1 what is logarithm 00:15:00 Unit 1 how to take log and anti log 00:00:00 Unit 1 problems on clocks 00:11:00 Indices ,logarithm and anti logarithm 00:20:00 Calendar problems for 11 applied mathematics 00:00:00 Time and work 00:00:00 Seating arrangements lecture 1 00:00:00 Seating arrangements lecture 2 00:00:00 Notes for binary numbers ,clocks and calendars 00:00:00 Unit 2 sets Set theory Lecture 1 00:06:00 Set theory Lecture 2 00:07:00 Set theory Lecture 3 00:14:00 Set theory Lecture 4( Venn diagrams) 00:12:00 Set theory Lecture 5 00:15:00 This lecture explains the whole concept of sets. 00:00:00 Quick review of set theory for midterm 00:00:00 Unit 2 relations functions Relations and functions Class 11 (lecture 1) 00:12:00 Relations and Functions Class 11(lecture2) 00:13:00 Relations and functions (lecture 2a) 00:14:00 Relations and functions Class 11 (lecture 3) 00:00:00 Relations and functions Class 11 (lecture 4) 00:12:00 Unit 2 permutations combinations Lecture 1 the fundamental counting principle 00:14:00 Permutations Combinations lecture 2( 16 to 22) 00:05:00 Permutations when certain objects are alike (lecture 3) 00:15:00 Combinations means just a selection . Order does not matter . 
00:15:00 using both the concepts of combination and permutation 00:17:00 Permutations and Combinations mixed review 00:12:00 Unit 2 sequences and series Introduction to sequences and series 00:06:00 Arithmetic progression( just a simple introduction) 00:03:00 Relationship between AM and GM 00:04:00 Inserting arithmetic and geometric means 00:04:00 Lecture 1 sequences series 00:18:00 Lecture 2 sequences series 00:08:00 Unit 3 mathematical reasoning Mathematical reasoning Lecture 1 00:14:00 Mathematical logic lecture 2 00:19:00 Unit 4 functions and calculus Finding range of functions. 00:14:00 How to take the derivative 00:00:00 Probability Probability lecture 1( class 11 concepts) 00:15:00 Probability lecture 2( class 11 concept) 00:12:00 Probability lecture 3 addition theorem(class 11 concept ) 00:11:00 Probability Lecture 4( at least 1 concept) 00:06:00 Probability lecture 5( De Morgan’s law) 00:11:00 Probability lecture 6 (Conditional probability) 00:05:00 Probability lecture 7 (Independent events) 00:08:00 Probability lecture 8 (Total probability rule) 00:12:00 Probability lecture 9 (Bayes’ theorem on probability) 00:06:00 Doubts in probability for class 11 ( two important questions ) 00:14:00 Extra material on Bayes theorem and total probability rule 00:00:00 Probability questions from class 11 text book 00:13:00 Sample space in probability 00:03:00 Bayes Theorem higher order question from Textbook 00:12:00 Unit 6 Descriptive statistics Descriptive Statistics (lecture 1) 00:14:00 Descriptive statistics (lecture 2) 00:11:00 Bar charts and histograms 00:14:00 Pie charts 00:10:00 Variance and standard deviation for third type data 00:12:00 Mean deviation about the mean ex 15.1 00:14:00 Mean deviation about the median 00:16:00 Variance for first type data 00:12:00 Variance and standard deviation for second type data 00:09:00 How to take square root of a number 00:09:00 Unit 7 basics of financial mathematics Notes for class 11 financial Mathematics 00:00:00 What is 
annuity 00:04:00 How to calculate annuity 00:14:00 What is taxation 00:04:00 What is GST? 00:08:00 How to evaluate a Bond 00:08:00 Unit 8 straight lines and circles Straight lines (points 1 to 4) 00:15:00 Straight lines (points 5 to 10) 00:11:00 Straight lines (problem solving) 00:16:00 Circles lecture 1 00:15:00 Circles lecture 2 00:15:00 Circles lecture 3 00:16:00 Speed Distance time 1 00:00:00 Exam stuff doubts and exam papers Class 11 Nms applied math weekly test 00:08:00 Study material for applied math Binary numbers ,calendars and clocks 00:00:00 Extra practice material with answers (straight lines and permutations combinations ) 00:00:00 A substitute text book for applied Mathematics course 00:05:00 Distance speed time questions with answers 00:00:00 Straight lines -the ultimate practice book 00:00:00 Extra practice for 11 sets relations functions 00:00:00 Unit 8.3 PARABOLA Parabola (lecture 1) 00:06:00 Parabola derivation of formula lecture 2 00:11:00 Parabola (the 4 types) lecture 3 00:12:00 Parabola lecture 4 00:14:00 Parabola application 1 the dish antenna (lecture 5 ) 00:07:00 Parabola application 2 (lecture 6) 00:09:00 Parabola lecture 7 the suspension bridge problem 00:11:00 Parabola lecture 8 ( the girder problem ) 00:00:00\n\n#### Instructor Details", null, "I graduated from IIT Kanpur and was totally inspired by my professors there. 
I developed a passion for teaching Mathematics and made it my career and profession.\n\n#### Popular Courses\n\n• ###### The beautiful world of science and mathematics\n4.5( 0 REVIEWS )\n335 STUDENTS\n• ###### Mathematics\n0( 0 REVIEWS )\n170 STUDENTS\n• ###### 12 Old Batch 2020-2021 ISB\n0( 0 REVIEWS )\n65 STUDENTS\n\n#### Most Rated\n\n• ###### Class 12 2021-2022 NIS\n0( 0 REVIEWS )\n0 STUDENTS\n• ###### Class 9\n0( 0 REVIEWS )\n0 STUDENTS\n• ###### Class 10\n0( 0 REVIEWS )\n0 STUDENTS" ]
[ null, "data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==", null, "data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==", null, "data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==", null, "data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==", null, "data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==", null, "data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==", null, "data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7866117,"math_prob":0.947031,"size":5029,"snap":"2023-40-2023-50","text_gpt3_token_len":1684,"char_repetition_ratio":0.21731344,"word_repetition_ratio":0.011379801,"special_character_ratio":0.36746868,"punctuation_ratio":0.18064517,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9646528,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-04T17:13:20Z\",\"WARC-Record-ID\":\"<urn:uuid:0ab4f2a3-80de-4dce-b3b4-b6732cae3071>\",\"Content-Length\":\"188147\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:079dd8dc-7a0e-4fef-9b1a-b24ee5c531aa>\",\"WARC-Concurrent-To\":\"<urn:uuid:0047b5a7-9081-49e7-88a8-9be247ad332f>\",\"WARC-IP-Address\":\"172.105.51.129\",\"WARC-Target-URI\":\"https://www.natrajsarma.com/course/class-11-applied-mathematics/\",\"WARC-Payload-Digest\":\"sha1:3ERVES7BNSKGIYA6UOPXAQIWHENG5ZZ3\",\"WARC-Block-Digest\":\"sha1:L3KXGO4EO3TOBN4FOC3UMZSTLFVNWYH7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511386.54_warc_CC-MAIN-20231004152134-20231004182134-00806.warc.gz\"}"}
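The Unit 1 topics listed above ("How to add binary numbers", "How to subtract binary numbers") can be illustrated with a short sketch of the by-hand column method; the example values are made up:

```python
def add_binary(a, b):
    """Add two binary numbers (given as strings) column by column from
    the right, carrying whenever a column sums to 2 or 3, exactly as
    done by hand."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    digits, carry = [], 0
    for x, y in zip(reversed(a), reversed(b)):
        total = int(x) + int(y) + carry
        digits.append(str(total % 2))
        carry = total // 2
    if carry:
        digits.append('1')
    return ''.join(reversed(digits))

print(add_binary('1011', '110'))    # 10001  (11 + 6 = 17)
print(format(0b1011 - 0b110, 'b'))  # 101    (11 - 6 = 5)
```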
https://notaboutapples.wordpress.com/2009/10/
[ "## S is for Symmetry\n\n29 October 2009\n\nIf your basic education was like mine,  you learned about symmetry in elementary school, and it was pretty much limited to telling which shapes were symmetric and which ones weren’t.  Of course symmetry isn’t just a matter of yes-or-no, and some objects are more symmetric than others.  A square is more symmetric than a rectangle, say, and a circle is more symmetric than either.  (This can be made precise, of course; in this case, even the crudest way is adequate: a square has 8 symmetries but a non-square rectangle only has 4, and a circle has infinitely many.)\n\nAnd as the previous two posts show, symmetries are not just geometric in nature.  Structures of all sorts, in wildly varying contexts, have interesting symmetries; moreover, these symmetries can explain some strange phenomena and put them into their proper perspective.\n\nA good working definition of symmetry is a transformation of an object which preserves its important features (the shape of a geometric object, the algebra of a number system, etc.)\n\nYou may not like my count of 4 symmetries for a rectangle.  For the record, these are the four.\n\n1. flip it over horizontally\n2. flip it over vertically\n3. do both (in other words, half-turn)\n4. do nothing\n\nDoing nothing certainly is a symmetry by this definition, and it turns out to just be better to include it in our lists. For one thing, the numbers 4 and 8 are more suggestive of the notion that a square is “twice as symmetric” as a non-square rectangle than 3 and 7 would be.  There’s a better motivation, which we’ll see next time.  For now, let’s just agree that every object has at least the trivial symmetry, and that to be “symmetric” means to have at least 2 symmetries (at least one that is interesting).\n\nSo if you have only one symmetry, it’s the trivial one, and you’re asymmetric. 
Just like if only one person shows up to your birthday party, it’s you, and you’re lonely.\n\nSo we can classify objects based on how many symmetries they have.  Loosely speaking, the more symmetric something is, the more symmetries it has.  Actually there’s a lot to learn even from this low level of sophistication.\n\nLots of things in life have exactly two symmetries.  The human face, the grid of a traditional crossword puzzle, the shape of a piece of bread, etc.  And this particular sort of symmetry, which seems to be intrinsically aesthetically pleasing to most people, means that these objects all have something deep in common.\n\nBut we can say more.  For example: how might you compare and contrast the following shapes?", null, "Well, both have four symmetries, assuming we’re counting colors. Here are the lists (left first).\n\n1. quarter-turn\n2. half-turn\n3. three-quarter-turn\n4. nothing (or a full turn)\n\nAnd for the right square:\n\n1. flip it over horizontally\n2. flip it over vertically\n3. do both (in other words, half-turn)\n4. do nothing\n\nBut these collections of symmetries have a deeper difference.   If you just look at the shapes a minute, move them around in your head, you’ll probably notice they “feel” different.  It’s hard to nail down why exactly, but we can.\n\nIn the purple/cyan case, there was really only one kind of symmetry—rotation.  All the symmetries just come by repeating the basic quarter turn.  But there’s no one symmetry of the blue-green square that gives rise to all the others.  Also, every symmetry of the blue-green square has the property that if you do it twice, you’re back where you started, and the purple-cyan square has other kinds of symmetry.\n\nIf this seems too confusing or somehow too “high-level” mathematically, then as my six-year-old daughter always says, be brave in your heart.  
I firmly believe that, like the mathematical abstraction that is counting, the mathematical abstraction that is symmetry is something that is instinctive for humans.  The deficit is not in human faculties, rather it is in human language.  Standard English provides a pretty poor language to talk about these issues.  (The analogy with our sense of smell, I think, is apt; experiment suggests that our latent sense of smell is as good as any dog’s, but we think about what we talk about, and our words to differentiate smells are pretty limited and clumsy, so our sense of smell is highly impaired in practice, but not by lack of latent ability.)  But it turns out that mathematics, like it or not, is an ideal setting to verbalize gut notions of symmetry.  Like that smell you can’t quite identify and don’t know how to describe to your friend, like why you enjoyed that movie so much, like the difference between how normal coffee and decaf taste, like so many things in life, symmetry seems ineffable.  But as I always say, eff the ineffable. Mathematics provides a comprehensive language for formulating and expressing ideas about symmetry, the language of group theory.  Stay tuned for the next post, where I’ll talk about this extensive language mathematicians have developed to discuss and understand symmetries.\n\nSo I’ll concede that I was forced into awkward language in that paragraph distinguishing the two shapes, but I think it’s fair to say that we understand the difference between the two shapes better than we did before.  The mind’s eye knows they’re differently symmetric, and now the mind’s mouth can make that precise.\n\nSymmetries are one of those things that, once you know to look for them, you see them everywhere.  There are lots of important examples of symmetry issues in physics, but let’s look at a very simple one: the so-called arrow of time.  That is, does time have an intrinsic forward and backward?  Do you see how this is like the
There is the direction that we think of as forward, but is there an intrinsic difference?  Of course at the level of human experience, they are different.  I remember the year 2000 but not the year 2020.  But at the level of physical laws, it’s much less clear.  If I showed you a movie of interactions of electrons, would you be able to tell whether you were watching it playing forward or backward?  Do you see how this is like the", null, "$i$ vs.", null, "$-i$ question from a recent post?  The question is this: is there a symmetry of space-time which interchanges past and future, but preserves physical laws?  If there isn’t, then that means the arrow of time, our perception of the direction it flows, is intrinsic to the universe.  If there is, then it’s perfectly plausible to imagine, say, some other creatures which perceive themselves as moving through time the other way.  What’s funny here is that on a small enough level, say at the level of subatomic particles, most theories have past-future symmetry (an electron can gain a photon, or it can lose a photon, which is like gaining a photon backwards).  But on the larger scale of time and space, we do not see past-future symmetry.  Thermodynamics, for example, is not symmetric.  Entropy increases over time.  If I made a video of a breaking egg, you’d know which way was future.  Eggs break, eggshells don’t assemble.  The correct reconciliation of these ideas is, I think, an important open issue in physics.\n\nThere are also examples that are far less serious.  Ever play rock-paper-scissors?  Against a computer that just picks its move at random with equal probabilities? (You can do just that at eyezmaze, a site with outstanding games, if you don’t count this one.)  If you have, then you probably lost interest pretty fast, because you realized that your decisions don’t matter.  And why don’t they matter?  
Because rock-paper-scissors has symmetries: three symmetries which preserve the rules about which throws beat which and which also preserve the computer’s “strategy”, just enough to interchange all the possible throws and guarantee that rock, paper, and scissors are always exactly equally good throws.  This is different from RPS against a person, which is interesting, because your opponent’s psychology doesn’t have symmetry.\n\nOkay, let’s stop there, because if you’re me it doesn’t get any better than explaining in precise mathematical terms why one game is more fun than another.\n\nP.S. (at least a bit heavier than what precedes)\n\nSymmetries are at the heart of the so-called Erlangen Program for geometry developed by Felix Klein, the sound-bite version of which is “If you want to understand a geometry, understand the symmetries that preserve it.”  In the case of ordinary plane geometry (the kind you learned in Mrs. Gunderson’s algebra class in high school), this means understanding the transformations of the plane which preserve lengths and angles.  There are some obvious types of transformations that work, such as the following.\n\n• rotations around a point\n• reflections across a line\n• translations\n\nIt turns out that all the symmetries of the usual geometric plane are given by rotations or reflections, possibly followed by a translation.  All the fundamentals of Euclidean geometry can be recovered by really understanding these families of symmetries.  You may have heard of something called hyperbolic geometry, where parallel lines behave differently.  How might someone get a concrete handle on how the hyperbolic plane works?  We can characterize its symmetries, and see that this plane has different kinds of symmetries than the ordinary Euclidean plane.  And when I say compare them, I don’t mean that in a fuzzy, hand-wavy way.  All these symmetries can be expressed in concrete numerical ways (using matrices).  
The power of the method leads to an increased importance of understanding various matrix groups (whatever that means) in geometry, and a closer relationship between algebra and geometry.  But this is a subject for another day.\n\n## Thought Experiment: Talking to the Other Aliens\n\n22 October 2009\n\nThis is a direct continuation of the previous post, so read that one first if you haven’t yet.  In some sense this post is simpler than the previous one, in that it uses simpler concepts and doesn’t involve understanding of the real number system.  But it may be harder for many readers, because I’m asking you to imagine an alien race which does not understand certain things, things you probably can’t remember ever not understanding.  And it’s hard to imagine what it would be like not to know what we know.\n\nIt’s interesting, isn’t it, how people are much better at temporarily adding an unfamiliar concept to their working context than they are at temporarily subtracting a familiar one?\n\n## Thought Experiment: Talking Math with the Aliens\n\n20 October 2009\n\nThough the connection may not at first be apparent, this is part of my promised (threatened?) attempt to put the fundamentals of Galois theory in terms suitable for readers of this blog.  It will be a slow build, because there are a lot of pieces to put into play.\n\nToday, a thought experiment.  Imagine you have made contact with another form of intelligent life.  Communication is still at a primitive stage, but you’ve devised a way of sending each other signals, and you and the alien are in the process of building up your shared vocabulary in this new language.  (I’m imagining some sort of IM window; your imagination may vary.)\n\nWell, you’ve heard that the universal language is mathematics, and you want to establish a shared vocabulary for basic math.  
With some effort, you establish an agreement on the concepts of “addition” and “multiplication” (think about how you might do this, how you might distinguish these two operations from one another).  You figure out what name they have for what you call “zero” and “one” easily enough.  (For example, you could ask what number plus itself equals itself to nail down zero, then ask what number multiplied by itself equals itself, other than zero, to nail down one — think about it.)  Once you have zero and one, addition and multiplication, you can get 2, 3, 4, etc., then the negative integers, and then fractions.\n\nIt would take some time, but suppose you eventually get sufficient communication to have shared language for the real number line (maybe you explain Dedekind cuts, whatever, I don’t care). (Actually this isn’t essential, and it’s just as interesting to suppose you don’t establish shared vocabulary for the real numbers; we’ll explore that elsewhen.)\n\nSo now you’re feeling ambitious, and you want to know how the alien talks about imaginary numbers.  What does the alien call your", null, "$i$?  You assume (reasonably) that such a developed race would also have some corresponding concept, so you ask for a number which multiplies by itself to give negative one, and the alien says “blarg: blarg times blarg plus one is zero”.  Victory!\n\nBut then doubt sets in.  Are you really sure his blarg is your", null, "$i$?  After all,", null, "$(-i)^2=-1$ too.  Maybe blarg is negative", null, "$i$?  How would you know?  Think about it as long as you like, but the answer is, you wouldn’t.  There are no questions you could ask that would say for sure whether blarg was", null, "$i$ or", null, "$-i$.\n\n(You might try to say something about “the one on the upper half of the complex numbers”, but that’s no good.  You have no reason to believe that they visualize complex numbers anything like how you do, and anyway that distinction is happening only in your mind, not in the math.  
It’s no more constructive than defining “three” as “the number that looks like half an eight”.  That’s not math, not even arithmetic.  It’s trivia about our way of writing numbers.)\n\nWe could rephrase this whole thing without aliens (but why would you ever prefer not to include aliens?).  Suppose that I had misunderstood my teacher the day she defined the complex plane; suppose I had thought that", null, "$i$ was one unit below the origin, the opposite of the convention you’re probably used to.  What would happen when I try to talk math with the people like you who learned it the usual way?  Nothing interesting!  You and I believe all the same statements about numbers!  We both think", null, "$(3+2i)+(4-i)=7+i$ and we both think", null, "$(3+2i)(4-i)= 14+5i$.  If we visualize these facts geometrically, then the picture in my head doesn’t match the picture in yours (it’s upside down).  As long as we stick to the numbers and equations, as long as nobody explicitly mentions the pictures we are thinking about, we’ll be in perfect agreement about complex numbers.\n\nYou may have learned in high school that, if you have a polynomial with real coefficients and", null, "$a+bi$ is a root, then so is", null, "$a-bi$.  Now we see the reason that underlies this truth: no algebraic statement in terms of real numbers can distinguish", null, "$a\\pm bi$ from one another.  The point in your mind you call", null, "$a+bi$, I call", null, "$a-bi$, and vice versa.\n\nIn fancier talk: the complex numbers have a symmetry, usually called complex conjugation, which preserves all the real numbers and which preserves any facts and relationships which can be expressed in terms of basic algebra.  The numbers", null, "$a+bi$ and", null, "$a-bi$ are interchangeable because they have to be, because they are bound by the symmetry.  
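This symmetry can be checked mechanically.  Here is a small Python sketch (my addition, not from the post) verifying that swapping $i$ for $-i$, i.e. complex conjugation, preserves both addition and multiplication:

```python
# Sketch (not from the post): conjugation preserves + and *.
a, b = 3 + 2j, 4 - 1j

# The two facts from the text:
assert a + b == 7 + 1j
assert a * b == 14 + 5j

# Swapping i for -i everywhere (conjugating) preserves them:
assert a.conjugate() + b.conjugate() == (a + b).conjugate()
assert a.conjugate() * b.conjugate() == (a * b).conjugate()
```

If every input is conjugated, every output is conjugated too, which is exactly why no purely algebraic question can tell $i$ from $-i$.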
Symmetries are magical things.\n\nAs we shall see, symmetries are powerful tools for understanding many kinds of situations, and the language of mathematics is the right language for getting at symmetries.\n\nBut there is more to the story.  We’ll talk to the aliens a little more next time.\n\n## Now there’s completeness and then there’s completeness…\n\n15 October 2009\n\nThis post achieves a fortuitous segue from the last post into my series of articles on the beauty of Galois theory.\n\nIn the previous post I introduced Dedekind cuts as a means of constructing the real number line, and I said that this perspective is responsible for the completeness of the real numbers", null, "$\\mathbb{R}$.\n\nNow, that was completeness in the topological sense.  There is another, very different notion of algebraic completeness.\n\nA number system is called algebraically complete if every nonconstant polynomial equation in one variable with coefficients from that number system can be solved in that number system.\n\n## Dedekind cuts\n\n15 October 2009\n\nI am currently teaching a course in geometry for teachers (Euclidean, non-Euclidean, projective, the whole ball of wax), and we were recently discussing the need for the Dedekind axiom for plane geometry, which guarantees in effect that the points on a geometric line behave the same way as real numbers on a real number line.  What was interesting to me was that, even after all we’d said about all the ways that geometry might behave in unexpected ways if we don’t make certain assumptions, somehow the idea that geometric lines were real number lines was more deeply ingrained.  
The idea that there might be a world where there were no line segments with length", null, "$\\pi$ was harder to imagine than the idea of a world where there are multiple lines through a point parallel to another line.\n\nIt got me thinking, why is that?\n\n10 October 2009\n\nIn the last post, we showed that the following simple diagram provides all the information needed to prove that", null, "$\\sqrt{2}$ is irrational.", null, "But as is true so often in mathematics, there is much more to see beyond the surface-level observations, and this time I cannot resist going back to this picture to say more.\n\nOur main points last time were:\n\n1. the big square has the same area as the two light squares together if and only if the dark square has the same area as the two white squares together\n2. a square of side", null, "$m$ has the same area as two squares of side", null, "$n$ if and only if", null, "$m/n = \\sqrt 2$.\n\nWe know we can’t get equality in either case though, which motivates the following approximate version.\n\n1. the big square has almost the same area as the two light squares together if and only if the dark square has almost the same area as the two white squares together\n2. a square of side", null, "$m$ has about the same area as two squares of side", null, "$n$ if and only if", null, "$m/n \\approx \\sqrt 2$.\n\nIn other words, if the ratio of the sides of the large and light squares is “about”", null, "$\\sqrt 2$, then the ratio of the sides of the dark and white squares is also “about”", null, "$\\sqrt 2$.  Which one is a better approximation?  The one involving the larger squares.  The absolute discrepancy in area between the squares is the same, but the relative discrepancy will be smaller if the areas are larger.  (The same reason I was so much more dramatically older than my sister when I was 6 and she was 2 than I will be when I’m 82 and she’s 78.)  Let’s add some letters to simplify the statements.  
If", null, "$m$ is the side of the dark square and", null, "$n$ is the side of the white square, then", null, "$m+2n$ is the side of the big square, and", null, "$m+n$ is the side of the light square.  (Make sure you can see this in the picture.)  Then our claim is that if", null, "$m/n$ is a reasonable approximation to", null, "$\\sqrt 2$, then", null, "$(m+2n)/(m+n)$ will be a better one.\n\nIt’s too much to hope for", null, "$m^2=2n^2$, but if we take", null, "$m=1,n=1$, then they’re only off by 1.  So we can take", null, "$1/1$ as a starting point.  Then we expect", null, "$3/2=1.5$ to be a better approximation.  But why stop here?  Taking", null, "$m=3,n=2$,", null, "$7/5=1.4$ is a better estimate.  We can keep this up forever, giving the following sequence of increasingly good rational approximations to $\\sqrt 2$:", null, "$1/1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, 577/408, 1393/985, \\cdots$\nThese approximations are getting very close very fast (the last one is right to six decimal places, enough for any practical application I can think of), and we’re not working very hard to get them!\n\nActually, more still is true.  If we start with any fraction", null, "$m/n$, even one which is nowhere near", null, "$\\sqrt 2$, repeatedly applying the rule", null, "$m/n \\mapsto (m+2n)/(m+n)$ will give us a sequence of numbers that, in the long run, will converge to $\\sqrt 2$.  Since the absolute area discrepancy doesn’t change, but the squares get larger and larger, the approximation is eventually as close as we might like.  The sequence from the previous paragraph is still the best one, though, because there the area discrepancy is 1, which is the best we can hope for since we proved last time that 0 is impossible.\n\nActually, it can be proven, without anything fancier than stuff we’ve already said, that all the solutions in positive integers of the equation", null, "$m^2-2n^2=\\pm 1$ come from the sequence two paragraphs back. 
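As a sanity check on these claims, here is a short Python sketch (my addition, not the post's) that iterates the rule $m/n \mapsto (m+2n)/(m+n)$ starting from $1/1$:

```python
# Sketch (not from the post): iterate m/n -> (m+2n)/(m+n) from 1/1.
m, n = 1, 1
seq = []
for _ in range(9):
    seq.append((m, n))
    m, n = m + 2 * n, m + n

assert seq[:4] == [(1, 1), (3, 2), (7, 5), (17, 12)]
# The area discrepancy m^2 - 2n^2 stays +-1 (it flips sign each step),
# and the ninth fraction, 1393/985, is within 10^-6 of sqrt(2):
assert all(a * a - 2 * b * b in (1, -1) for a, b in seq)
assert abs(seq[-1][0] / seq[-1][1] - 2 ** 0.5) < 1e-6
```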
Can you see how?\n\nIt can also be proven that the approximations will be alternately overestimates and underestimates.  Can you see why?\n\nThere is a rich theory of  rationally approximating irrational numbers, including a method based on continued fractions for finding optimal approximating fractions to any real number.  What is amazing is that in this case we can get exactly the same answers predicted by the general theory without knowing anything sophisticated.  We don’t need continued fractions or even a precise definition of “good rational approximation”.  All we need is the picture.\n\n(In case  you either don’t like pictures or really like algebra, then the corresponding algebra fact is", null, "$(m+2n)^2 - 2(m+n)^2 = -(m^2-2n^2)$, but that’s so much less colorful…)\n\nP.S.\n\nI am aware that the triangle diagram in the previous post somehow got removed from my WordPress uploads.  I can’t fix this until I get back in my office on Monday, but I will do it at that time.\n\n## Back from Maine with Squares and Triangles\n\n6 October 2009\n\nAh, how I’ve missed my little blog.  Sorry about the hiatus, but now I’m back from the number theory conference with a head full of new ideas.  Unfortunately, most of the topics of the conference have far too many prerequisites to fit in this blog.  Let’s just say I saw many beautiful things and was reminded (in case I had forgotten) why I am a number  theorist.\n\nThere was one historical talk, in which David Cox lectured on Galois theory according to Galois.  If you aren’t a math major, don’t worry, you’ve probably never heard of Evariste Galois.  So inspired was I by this talk, and by the beauty of  the ideas at play in what Galois brought to light, that I want to share the heart of Galois theory with all of you. This will take quite a few posts to realize, working our way there one vignette, one thought experiment at a time. Fasten your seatbelts, ladies and gentlemen . . . 
the next few weeks will be interesting.\n\nI did learn one extremely clever thing which is suitable for this audience “right out of the box”.  The inimitable Steve Miller showed me the following purely graphical proof that", null, "$\\sqrt{2}$ is irrational.\n\nWhat would it mean for", null, "$\\sqrt{2}$ to be rational?  It would mean that", null, "$\\sqrt{2}=m/n$ for some integers m and n, which we can choose to be in lowest terms.  In other words, there is a square of integer side length (m) whose area is the same as two squares of another integer side length (n), and furthermore we couldn’t find smaller integer squares with this relationship.  Place the two smaller squares in opposite corners of the larger square as shown in the picture.", null, "By our setup, the two light purple squares together have the same area as the large square.  This means that the uncovered area (the two white squares) must account for the same area as the doubly-covered area (the darker purple square).  If the original squares have whole-number sides, then so do these.  And the new squares are obviously smaller than the old ones, since they’re physically inside them.  But we had supposedly chosen the smallest possible integer squares with this property.  Contradiction.\n\nNor is this trick limited to", null, "$\\sqrt{2}$.  The following picture can be seen as a demonstration of the irrationality of the square root of 3, if you look at it right.  I leave that to you.", null, "If you want a further challenge, try to find proofs in the same spirit that", null, "$\\sqrt{6}$ and", null, "$\\sqrt{10}$ are irrational." ]
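The descent in the picture can also be checked numerically.  In this Python sketch (my own addition, not from the post), the darker square has side 2n − m and the white squares have side m − n, and the area discrepancy m² − 2n² only flips sign at each step:

```python
# Sketch (not from the post): the picture's descent step in numbers.
def descend(m, n):
    # sides of the doubly-covered and uncovered squares
    return 2 * n - m, m - n

m, n = 17, 12                         # 17**2 - 2 * 12**2 == 1
for _ in range(3):
    d = m * m - 2 * n * n
    m, n = descend(m, n)
    assert m * m - 2 * n * n == -d    # discrepancy just flips sign
assert (m, n) == (1, 1)               # 17/12 -> 7/5 -> 3/2 -> 1/1
```

An exact solution (discrepancy 0) would descend to strictly smaller positive solutions forever, which is the contradiction in the proof.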
[ "https://notaboutapples.files.wordpress.com/2009/10/two-colored-squares.gif", "https://notaboutapples.files.wordpress.com/2009/10/squares1.jpg", "https://notaboutapples.files.wordpress.com/2009/10/triangles2.jpg" ]
https://physicsmuseum.uq.edu.au/monromatic-calculator
[ "ID:\n416\nMaker's Name:\nMonroe\nUSA\nDimensions:\n35 × 29 × 20 cm\nTour Audio:", null, "In 1912, Jay R. Monroe, founder of Monroe Calculators, together with calculator designer and partner Frank S. Baldwin, introduced the first of many Monroe calculators. These calculators, similar to Contex calculators, used predominantly decimal scales and complex gearing to calculate results.\n\nThe user would input on a keyboard the first number to be used in the calculation and then rotate the operational handle to add. The second number would then be input and the handle rotated again, according to whether addition or subtraction was wanted. The operations relied on stepped cylinders and complex stepped gearing. Monroe calculators became such a success that advances such as electric motors and automatic addition, subtraction, multiplication, division and square roots were introduced. The electric motors reduced calculation time and human error significantly. It was not until the early 1970s that Monroe introduced printer rolls and programmability functions to their machines.\n\nEvident in all Monroes is their distinct shape, due to a sliding upper carriage similar to a typewriter's. This carriage allowed the machine, and user, to quickly calculate numbers of higher orders, as it gave way to multiplying by different decades. Monroe soon had fully automated calculators such as the IQ-213, which could compute regular operations as well as square roots and cubes. By the late 1970s Monroe calculators were becoming so developed that the company began introducing 'pocket' calculators.\n\nThis electrically powered model was used in the UQ Chemistry Department.\n\nThis item is part of the UQ Physics Museum Good Old days of Calculation Tour." ]
[ null, "https://physicsmuseum.uq.edu.au/system/storage/serve/31993/monro1.JPG", null, "https://physicsmuseum.uq.edu.au/system/storage/serve/31998/monro1.JPG", null, "https://physicsmuseum.uq.edu.au/system/storage/serve/32003/monro2.JPG", null, "https://physicsmuseum.uq.edu.au/system/storage/serve/32008/monro3.JPG", null, "https://physicsmuseum.uq.edu.au/system/storage/serve/32503/IMG_5256.JPG", null, "https://physicsmuseum.uq.edu.au/system/storage/serve/32508/IMG_5254.JPG", null, "https://physicsmuseum.uq.edu.au/system/storage/serve/32513/IMG_5255.JPG", null, "https://physicsmuseum.uq.edu.au/system/storage/serve/32518/IMG_5257.JPG", null, "https://physicsmuseum.uq.edu.au/system/storage/serve/32523/IMG_5258.JPG", null, "https://physicsmuseum.uq.edu.au/system/storage/serve/32528/IMG_5259.JPG", null ]
https://jp.mathworks.com/matlabcentral/answers/21533-recursive-function-help
[ "# Recursive function Help\n\n3 views (last 30 days)\nTaylor, 17 Nov 2011\nI have code here with a maze that uses a recursive function to solve the maze and then prints out all the coordinates from start to exit.\n% clear workspace\nclearvars;\nclear global MAZE ROWLIMIT COLLIMIT ROWEND COLEND\n% maze, its size, and endpoint must be global\nglobal MAZE ROWLIMIT COLLIMIT ROWEND COLEND\n% maze\nMAZE(1,:) = '############';\nMAZE(2,:) = '# # #';\nMAZE(3,:) = ' # # #### #';\nMAZE(4,:) = '### # # #';\nMAZE(5,:) = '# ### # ';\nMAZE(6,:) = '#### # # # #';\nMAZE(7,:) = '# # # # # #';\nMAZE(8,:) = '## # # # # #';\nMAZE(9,:) = '# # #';\nMAZE(10,:) = '###### ### #';\nMAZE(11,:) = '# # #';\nMAZE(12,:) = '############';\n% constants & initial values\nROWLIMIT = 12;\nCOLLIMIT = 12;\nrowbegin = 3;\ncolbegin = 1;\nROWEND = 5;\nCOLEND = 12;\n% solve the maze -- (0,0) is a dummy \"previous\" value\nisPathToExit(0, 0, rowbegin, colbegin);\nThis is the code I have written so far for the recursive function 'isPathToExit':\nfunction success = isPathToExit(previous_row, previous_col, current_row, current_col)\n% Finds a path from entrance to exit through the maze\n% base case\nglobal MAZE ROWLIMIT COLLIMIT ROWEND COLEND\nif current_row == ROWEND && current_col == COLEND\ndisp(sprintf('(%i, %i)', current_row, current_col));\nsuccess = true;\nreturn;\nelse\nsuccess = false;\nend\n% recursion\nif current_row+1 ~= previous_row && MAZE(current_row+1, current_col) ~= '#' && current_row+1 ~= ROWLIMIT && current_row ~= 0\nsuccess = isPathToExit(current_row, current_col, current_row+1, current_col);\nif success\ndisp(sprintf('(%i, %i)', current_row+1, current_col));\nreturn;\nend\nend\nif current_row-1 ~= previous_row && MAZE(current_row-1, current_col) ~= '#' && current_row-1 ~= ROWLIMIT && current_row ~= 0\nsuccess = isPathToExit(current_row, current_col, current_row-1, current_col);\nif success\ndisp(sprintf('(%i, %i)', current_row-1, current_col));\nreturn;\nend\nend\nif current_col+1 ~= previous_col && MAZE(current_row, current_col+1) ~= '#' && current_col+1 ~= COLLIMIT && current_col ~= 0\nsuccess = isPathToExit(current_row, current_col, current_row, current_col+1);\nif success\ndisp(sprintf('(%i, %i)', current_row, current_col+1));\nreturn;\nend\nend\nif current_col-1 ~= previous_col && MAZE(current_row, current_col-1) ~= '#' && current_col-1 ~= COLLIMIT && current_col ~= 0\nsuccess = isPathToExit(current_row, current_col, current_row, current_col+1);\nif success\ndisp(sprintf('(%i, %i)', current_row, current_col-1));\nreturn;\nend\nend\nWhen I run it, it displays the points up to a wall of the maze and stops. I need it to display the points from start to exit.\n\n### Accepted Answer\n\nAlex, 17 Nov 2011\nA few comments:\n1. Using global variables, then setting and clearing them, is a bad habit to get into.\n2. disp(Maze(i,j)); will not show the coordinates; it'll only show the element of the Maze at the current position, which should all be \" \" white space.\nIf you want the indices, use:\ndisp(sprintf('The current maze index is %i and %i', current_row, current_col));\n3. Your if statements are very hard to read. Also, you're putting the recursive statements within the if statements themselves; I would recommend separating those. Additionally, you don't need to have all those conditions based on row_limit, col_limit, and such. 
Since the entire maze is surrounded by '#', all you need to do is check in the three remaining directions for a solution.\nex:\nfunction success = Recursive_Fcn(prev_row, prev_col, next_row, next_col)\nif (at maze exit)\nsuccess = true\nreturn\nelse\nsuccess = false;\nend\nif(prev pt was not from up && up point is clear && ~success )\nsuccess = recursive statement(up direction)\nif(success)\nprint pt\nend\nend\nif(prev pt was not from down && down point is clear && ~success )\nsuccess = recursive statement(down direction)\nif(success)\nprint pt\nend\nend\nif(prev pt was not from left && left point is clear && ~success )\nsuccess = recursive statement(left direction)\nif(success)\nprint pt\nend\nend\nif(prev pt not from right && right pt is clear && ~success)\nsuccess = recursive statement(right direction)\nif(success)\nprint pt\nend\nend\nreturn\nI made these changes to your code and got it to work.\n##### 31 comments (30 older comments hidden)\nAlex, 21 Nov 2011" ]
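For readers outside MATLAB, the same "try the remaining directions, never step straight back" recursion can be sketched in Python on a tiny made-up maze (my addition, not from the thread); as with the disp calls above, the path is recorded exit-to-entrance on the way back up the recursion:

```python
# Sketch (not from the thread): recursive maze walk, Python version.
MAZE = ["#####",
        "#   #",
        "# ###",
        "    #",   # entrance at (3, 0)
        "#####"]
END = (1, 3)

def path_to_exit(prev, cur, out):
    if cur == END:                      # base case: at the exit
        out.append(cur)
        return True
    r, c = cur
    for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
        nr, nc = nxt
        # don't step straight back, stay in bounds, avoid walls
        if nxt != prev and 0 <= nr < 5 and 0 <= nc < 5 and MAZE[nr][nc] != '#':
            if path_to_exit(cur, nxt, out):
                out.append(cur)         # record the point while unwinding
                return True
    return False

out = []
assert path_to_exit(None, (3, 0), out)
# out == [(1, 3), (1, 2), (1, 1), (2, 1), (3, 1), (3, 0)]
```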
https://www.geeksforgeeks.org/aggregate-functions-in-sql/?ref=lbp
[ "Aggregate functions in SQL\n• Difficulty Level : Easy\n• Last Updated : 20 Aug, 2019\n\nIn database management, an aggregate function is a function in which the values of multiple rows are grouped together as input, on certain criteria, to form a single value of more significant meaning.\n\nVarious Aggregate Functions\n\n```1) Count()\n2) Sum()\n3) Avg()\n4) Min()\n5) Max()```\n\nNow let us understand each aggregate function with an example:\n\n```Id Name Salary\n-----------------------\n1 A 80\n2 B 40\n3 C 60\n4 D 70\n5 E 60\n6 F Null\n```\n\nCount():\n\nCount(*): Returns the total number of records, i.e., 6.\nCount(salary): Returns the number of non-NULL values in the salary column, i.e., 5.\nCount(Distinct salary): Returns the number of distinct non-NULL values in the salary column, i.e., 4.\n\nSum():\n\nSum(salary): Sums all non-NULL values of the salary column, i.e., 310.\nSum(Distinct salary): Sums all distinct non-NULL values, i.e., 250.\n\nAvg():\n\nAvg(salary) = Sum(salary) / Count(salary) = 310/5 = 62\nAvg(Distinct salary) = Sum(Distinct salary) / Count(Distinct salary) = 250/4 = 62.5\n\nMin():\n\nMin(salary): Minimum value in the salary column, excluding NULL, i.e., 40.\n\nMax():\n\nMax(salary): Maximum value in the salary column, i.e., 80." ]
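The figures above can be reproduced directly; here is a sketch (not part of the original article) using Python's standard sqlite3 module on the same six-row table:

```python
# Sketch (not from the article): reproducing the aggregate results.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (id INTEGER, name TEXT, salary INTEGER)")
con.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                [(1, "A", 80), (2, "B", 40), (3, "C", 60),
                 (4, "D", 70), (5, "E", 60), (6, "F", None)])

result = con.execute(
    "SELECT COUNT(*), COUNT(salary), COUNT(DISTINCT salary), "
    "SUM(salary), SUM(DISTINCT salary), AVG(salary), "
    "MIN(salary), MAX(salary) FROM emp").fetchone()

# NULL rows are skipped by every aggregate except COUNT(*):
assert result == (6, 5, 4, 310, 250, 62.0, 40, 80)
```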
{"ft_lang_label":"__label__en","ft_lang_prob":0.67402107,"math_prob":0.987387,"size":1174,"snap":"2021-21-2021-25","text_gpt3_token_len":310,"char_repetition_ratio":0.17521368,"word_repetition_ratio":0.021052632,"special_character_ratio":0.30153322,"punctuation_ratio":0.15040651,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9928026,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-09T20:45:15Z\",\"WARC-Record-ID\":\"<urn:uuid:9caea4c7-c618-463f-9d3b-e931dc3ee146>\",\"Content-Length\":\"82244\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d7d4cdf0-de00-49a8-9827-5566d3512c62>\",\"WARC-Concurrent-To\":\"<urn:uuid:c6f06031-dfff-4a43-852a-f4df8853f4be>\",\"WARC-IP-Address\":\"104.97.85.34\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/aggregate-functions-in-sql/?ref=lbp\",\"WARC-Payload-Digest\":\"sha1:IXXXII45WKOH5H4LQKAP5VCXXRMMX6EG\",\"WARC-Block-Digest\":\"sha1:ATPJGZMGANKCXER52BVFJUXQCAIRAXFH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989012.26_warc_CC-MAIN-20210509183309-20210509213309-00527.warc.gz\"}"}
https://grindskills.com/why-do-we-use-pca-to-speed-up-learning-algorithms-when-we-could-just-reduce-the-number-of-features/
[ "# Why do we use PCA to speed up learning algorithms when we could just reduce the number of features?\n\nIn a machine learning course, I learned that one common use of PCA (Principal Component Analysis) is to speed up other machine learning algorithms. For example, imagine you are training a logistic regression model. If you have a training set $(x^{(i)},y^{(i)})$ for i from 1 to n and it turns out the dimension of your vector x is very large (let’s say a dimensions), you can use PCA to get a smaller dimension (let’s say k dimensions) feature vector z. Then you can train your logistic regression model on the training set $(z^{(i)},y^{(i)})$ for i from 1 to n. Training this model will be faster because your feature vector has less dimensions.\n\nHowever, I don’t understand why you can’t just reduce the dimension of your feature vector to k dimensions by just choosing k of your features at random and eliminating the rest.\n\nThe z vectors are linear combinations of your a feature vectors. Since the z vectors are confined to a k-dimensional surface, you can write the a-k eliminated feature values as a linear function of the k remaining feature values, and thus all the z’s can be formed by linear combinations of your k features. So shouldn’t a model trained on an training set with eliminated features have the same power as a model trained on a training set whose dimension was reduced by PCA? Does it just depend on the type of model and whether it relies on some sort of linear combination?\n\nLet’s say you initially have $p$ features but this is too many so you want to actually fit your model on $d < p$ features. You could choose $d$ of your features and drop the rest. If $X$ is our feature matrix, this corresponds to using $XD$ where $D \\in \\{0,1\\}^{p \\times d}$ picks out exactly the columns of $X$ that we want to include. 
But this ignores all information in the other columns, so why not consider a more general dimension reduction $XV$ where $V \\in \\mathbb R^{p \\times d}$? This is exactly what PCA does: we find the matrix $V$ such that $XV$ contains as much of the information in $X$ as possible. Not all linear combinations are created equally. Unless our $X$ matrix is so low rank that a random set of $d$ columns can (with high probability) span the column space of all $p$ columns we will certainly not be able to do just as well as with all $p$ features. Some information will be lost, and so it behooves us to lose as little information as possible. With PCA, the \"information\" that we're trying to avoid losing is the variation in the data.\nAs for why we restrict ourselves to linear transformations of the predictors, the whole point in this use-case is computation time. If we could do fancy non-linear dimension reduction on $X$ we could probably just fit the model on all of $X$ too. So PCA sits perfectly at the intersection of fast-to-compute and effective." ]
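The contrast between $XD$ (dropping columns) and $XV$ (PCA projection) can be made concrete. Below is a minimal numpy sketch, not from the original answer: the data sizes and random seed are illustrative assumptions. It builds a correlated feature matrix, then compares the variance retained by $k$ random columns against the variance retained by the top-$k$ principal components (computed via SVD, so $V$ is the first $k$ right singular vectors).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 500, 20, 3  # illustrative sizes: n samples, p features, keep k

# Correlated features: most variance lives in only k latent directions,
# plus a little noise. This mimics the common "X is nearly low rank" case.
latent = rng.normal(size=(n, k))
X = latent @ rng.normal(size=(k, p)) + 0.1 * rng.normal(size=(n, p))
X = X - X.mean(axis=0)  # PCA assumes centered data

total_var = X.var(axis=0).sum()

# Option 1: keep k random columns -- the selection matrix D in the answer.
keep = rng.choice(p, size=k, replace=False)
var_random = X[:, keep].var(axis=0).sum()

# Option 2: project onto the top-k principal components -- the matrix V.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Z = X @ Vt[:k].T  # n x k matrix of principal-component scores
var_pca = Z.var(axis=0).sum()

# PCA maximizes retained variance over all orthonormal projections,
# and column selection is one such projection, so this always holds:
print(var_pca >= var_random)  # True
```

Because random column selection is itself a (very restricted) orthonormal projection, the PCA projection can never retain less variance than it, which is the sense in which "not all linear combinations are created equal."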
{"ft_lang_label":"__label__en","ft_lang_prob":0.9392006,"math_prob":0.99629986,"size":2781,"snap":"2022-40-2023-06","text_gpt3_token_len":598,"char_repetition_ratio":0.13719842,"word_repetition_ratio":0.004016064,"special_character_ratio":0.21395181,"punctuation_ratio":0.07433628,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99683756,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-31T06:49:14Z\",\"WARC-Record-ID\":\"<urn:uuid:934f2dd0-2bd2-42b2-b1b0-a187c579fb78>\",\"Content-Length\":\"62807\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:19b9ad46-fe17-41ab-9726-0abd0db40e92>\",\"WARC-Concurrent-To\":\"<urn:uuid:10efe780-8040-414c-9fa6-4dd627bd91d2>\",\"WARC-IP-Address\":\"137.184.148.160\",\"WARC-Target-URI\":\"https://grindskills.com/why-do-we-use-pca-to-speed-up-learning-algorithms-when-we-could-just-reduce-the-number-of-features/\",\"WARC-Payload-Digest\":\"sha1:J7R3BZJGUTCDOSPVSCBVW3YGTJ3VBXXW\",\"WARC-Block-Digest\":\"sha1:I2UGX4T4LIYUFQ665DHT34TZPUVT5PZ3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499845.10_warc_CC-MAIN-20230131055533-20230131085533-00101.warc.gz\"}"}