URL (string, 15–1.68k chars) | text_list (1–199 items) | image_list (1–199 items) | metadata (string, 1.19k–3.08k chars)
---|---|---|---|
https://www.campusgate.in/2019/05/profit-and-loss-1-1.html | [
"-->\n\n# Profit and Loss 1/1\n\n1. Kathy buys a watch for Rs.500 and sells it to Jake for Rs.600. Find the profit percent.\na. 20%\nb. 25%\nc. 30%\nd. 35%\n\n2. John made a profit of 25% while selling a book for Rs.250. Find the cost price of the book.\na. Rs.160\nb. Rs.170\nc. Rs.180\nd. Rs.200\n\n3. Nivas sold a pen for Rs.900 thus making 10% loss. Find the cost price.\na. Rs.850\nb. Rs.1000\nc. Rs.1200\nd. Rs.1300\n\n4. A trader buys oranges at 7 for a rupee and sells them at 40% profit. How many oranges does he sell for a rupee?\na. 3\nb. 4\nc. 5\nd. 6\n\n5. On selling mangoes at 36 for a rupee, a shopkeeper loses 10%. How many mangoes should he sell for a rupee in order to gain 8%?\na. 25\nb. 30\nc. 35\nd. 40\n\n6. A boy buys eggs at 10 for Rs.1.80 and sells them at 11 for Rs. 2. What is his gain or loss per cent?\na. 1.27%\nb. 1.01%\nc. 1.68%\nd. 1.77%\n\n7. A woman buys apples at 15 for a rupee and the same number at 20 a rupee. She mixes and sells them at 35 for 2 rupees. What is her gain per cent or loss per cent?\na. 2.04%\nb. 3.5%\nc. 4.4%\nd. 5.4%\n\n8. Some quantity of coffee is sold at Rs. 22 per kg, making 10% profit. If total gain is Rs. 88, what is the quantity of coffee sold?\na. 44\nb. 55\nc. 60\nd. 70"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6550755,"math_prob":0.92648375,"size":2839,"snap":"2020-45-2020-50","text_gpt3_token_len":1123,"char_repetition_ratio":0.1904762,"word_repetition_ratio":0.07645875,"special_character_ratio":0.49735823,"punctuation_ratio":0.13131313,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9519561,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-20T16:51:34Z\",\"WARC-Record-ID\":\"<urn:uuid:c8506c2d-1dff-4616-8744-c7516fe53416>\",\"Content-Length\":\"98386\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8b3ff936-9d6f-443a-894a-04a71547b654>\",\"WARC-Concurrent-To\":\"<urn:uuid:6a7634ce-9e3b-46a6-8026-fc0d6481d74c>\",\"WARC-IP-Address\":\"172.217.7.147\",\"WARC-Target-URI\":\"https://www.campusgate.in/2019/05/profit-and-loss-1-1.html\",\"WARC-Payload-Digest\":\"sha1:CAGVKU5HMLHCUH33FR3UAG3LT4MHLOXX\",\"WARC-Block-Digest\":\"sha1:QDJGXKEQ7TSAHXX55DGWNYO4PFTCWTZO\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107874026.22_warc_CC-MAIN-20201020162922-20201020192922-00589.warc.gz\"}"} |
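Every question in the profit-and-loss drill above reduces to two relations: profit % = (SP − CP)/CP × 100, and CP = SP/(1 + profit %/100). A minimal Python sketch (function names are mine, not from the quiz) checking the first two answers:

```python
def profit_percent(cost_price_rs: float, selling_price_rs: float) -> float:
    """Profit (negative = loss) as a percentage of the cost price."""
    return (selling_price_rs - cost_price_rs) / cost_price_rs * 100

def cost_price(selling_price_rs: float, profit_pct: float) -> float:
    """Recover the cost price from the selling price and profit percent."""
    return selling_price_rs / (1 + profit_pct / 100)

# Q1: bought for Rs.500, sold for Rs.600 -> 20% profit (option a)
print(profit_percent(500, 600))
# Q2: sold for Rs.250 at 25% profit -> cost price Rs.200 (option d)
print(cost_price(250, 25))
```

The same two functions answer the loss questions as well, since a loss is just a negative profit percent.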
https://books.google.co.ve/books?id=8q1XAAAAYAAJ&pg=PA11&focus=viewport&vq=division&dq=related:ISBN8474916712&lr=&output=html_text | [
"Page images PDF EPUB",
null,
"elements of that science. The references are adapted to Playfair's Geometry, but they will in general apply equally well to Simson's translation of Euclid's Elements. As there are many who wish to obtain a practical knowledge of Surveying, whose leisure may be too limited to admit of their going through a course of Geometry, the author has adapted his work to this class, by introducing the necessary geometrical definitions and problems, and by giving plain and concise rules, entirely detached from the demonstrations; the latter being placed in the form of notes at the bottom of the page. Each rule is exemplified by one wrought example; and the most of them by several unwrought examples, with the answers annexed. In the laying out and dividing of land, which forms the most difficult part of surveying, a variety of problems is introduced, adapted to the cases most likely to occur in practice. This part of the subject, however, presents such a great variety of cases, that we should in vain attempt to give rules that would apply to all of them. It cannot therefore be too strongly recommended to every one, who has the opportunity, to make himself well acquainted with Geometry, and also with Algebra, previous to entering on the study of Surveying. Furnished with these useful auxiliaries, and acquainted with the principles of the science, the practitioner will be able to perform with ease, any thing likely to occur in his practice. The compiler thinks proper to acknowledge, that in the arrangement of the work, he availed himself of the advice of his learned preceptor and friend E. Lewis, of New-Garden; and that several of the demonstrations were furnished by him. J. GUMMERE. West-town Boarding School,",
null,
"EXPLANATION OF THE CHARACTERS USED IN THIS WORK.",
null,
"+ signifies plus, or addition. − minus, or subtraction. × multiplication. ÷ division. :: proportion. = equality. √ square root. ∼ difference between two quantities when it is not known which is the greater. OF LOGARITHMS. LOGARITHMS are a series of numbers so contrived, that by them the work of multiplication is performed by addition, and that of division by subtraction. If a series of numbers in arithmetical progression be placed as indices, or exponents, to a series of numbers in geometrical progression, the sum or difference of any two of the former, will answer to the product or quotient of the two corresponding terms of the latter. Thus, 0. 1. 2. 3. 4. 5. 6. 7. &c. arith. series, or indices. 1. 2. 4. 8. 16. 32. 64. 128. &c. geom. series. Now 2 + 3 = 5. And 4 × 8 = 32. also 7 − 3 = 4. and 128 ÷ 8 = 16. Therefore the arithmetical series, or indices, have the same properties as logarithms; and these properties hold true, whatever may be the ratio of the geometrical series. There may, therefore, be as many different systems of logarithms, as there can be taken different geometrical series, having unity for the first term. But the most con… part of the logarithm. The index must be placed before it agreeably to the above observation. Thus the log. of 421 is 2.62428, the log. of 4.21 is 0.62428, and the log. of .0421 is 2̄.62428. If the given number consist of four figures, find the three left hand figures in the column marked No. as before, and the remaining or right hand figure at the top of the table; in the column under this figure, and against the other three, is the decimal part of the logarithm. Thus the log. of 5163 is 3.71290, and the log. of .6387 is 1̄.80530. If the given number consist of five or six figures, find the logarithm of the four left hand figures as before; then take the difference between this logarithm and the next greater in the table. 
Multiply this difference by the remaining figure or figures of the given number, and cut off one or two figures to the right hand of the product, according as the multiplier consists of one or two figures; then add the remaining figure or figures of the product to the logarithm first taken out of the table, and the sum will be the logarithm required. Thus, let it be required to find the logarithm of 59686; then,",
null,
"« Previous | Continue »"
]
| [
null,
"https://books.google.co.ve/books/content",
null,
"https://books.google.co.ve/books/content",
null,
"https://books.google.co.ve/books/content",
null,
"https://books.google.co.ve/books/content",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.95073867,"math_prob":0.88123715,"size":4468,"snap":"2023-14-2023-23","text_gpt3_token_len":1059,"char_repetition_ratio":0.13060036,"word_repetition_ratio":0.02108037,"special_character_ratio":0.2256043,"punctuation_ratio":0.15938865,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95929956,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-28T00:44:57Z\",\"WARC-Record-ID\":\"<urn:uuid:cf228e8c-0c09-46d3-910d-2b0284e627ed>\",\"Content-Length\":\"36063\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e1e44b6c-90c1-4c4a-b68e-1baec9623210>\",\"WARC-Concurrent-To\":\"<urn:uuid:162a689d-b19b-4400-89e9-8abf709e92d7>\",\"WARC-IP-Address\":\"142.251.16.100\",\"WARC-Target-URI\":\"https://books.google.co.ve/books?id=8q1XAAAAYAAJ&pg=PA11&focus=viewport&vq=division&dq=related:ISBN8474916712&lr=&output=html_text\",\"WARC-Payload-Digest\":\"sha1:D6NP2NCN6ZZD5KFEMRYSIGPYCX3D4LFP\",\"WARC-Block-Digest\":\"sha1:AV4XD6KRGXZSNYGAYKSDFIF3EVE26H4O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224643388.45_warc_CC-MAIN-20230527223515-20230528013515-00494.warc.gz\"}"} |
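The proportional-parts rule quoted at the end of the surveying excerpt (take the table logarithm of the four left-hand figures, take the difference to the next greater table entry, and add a part of that difference proportional to the remaining figures) can be checked numerically. A sketch, substituting `math.log10` for the printed four-figure table — the function name and structure are mine, not the book's:

```python
import math

def log_by_proportional_parts(n: int) -> float:
    """Logarithm of a five- or six-figure number by the rule in the text."""
    s = str(n)
    lead = int(s[:4])                    # the four left-hand figures
    rest = s[4:]                         # the remaining figure(s)
    table = math.log10(lead)             # stand-in for the printed table entry
    diff = math.log10(lead + 1) - table  # "the next greater in the table" minus it
    mantissa = table % 1 + diff * int(rest) / 10 ** len(rest)
    index = len(s) - 1                   # characteristic: one less than the digit count
    return index + mantissa

# The book's example: the logarithm of 59686 (true value 4.77590...)
print(log_by_proportional_parts(59686))
```

The interpolation agrees with the true logarithm to about six decimal places here, which is why the method was adequate for four-figure printed tables.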
https://brilliant.org/discussions/thread/typing-latex/ | [
"# Typing latex\n\nI have a few questions about typing LaTeX. However, I know the basics of LaTeX, but the answers to my following questions would help my LaTeX equations look better. Any kind of help is appreciated.\n\n1) In certain submitted solutions I have seen the LaTeX equations to be aligned at the center instead of beside the text. Is there any code which aligns the equations at the center?\n\n2) When I type summations or definite integrals in LaTeX, for example \\sum_{i=0}^{n} i^2 or \\int_{0}^{2 \\pi} cos( \\theta) d\\theta, they appear respectively as $\\sum_{i=0}^{n} i^2$ and $\\int_{0}^{2 \\pi} cos( \\theta) d\\theta$. Note that the subscripts and superscripts do not appear exactly above the $\\sum$ or the $\\int$ sign, but to a little right of them. However in some comments in Brilliant discussions, for example this one, I have seen that the superscripts and subscripts can be written directly above the mathematical signs. What is the code to do it?\n\nI am asking it here only because I know it is possible, but the LaTeX guideline does not seem to answer it. Any kind of help will be appreciated. ☺\n\nThanks!",
null,
"Note by Sreejato Bhattacharya\n6 years, 5 months ago\n\nSort by:\n\n\\displaystyle \\sum_{i=0}^{n}i^2 in math brackets will appear as $\\displaystyle \\sum_{i=0}^{n}i^2$\n\n- 6 years, 5 months ago\n\n$\\displaystyle \\sum_{i=0}^n i^2= \\frac{n(n+1)(2n+1)}{6}$\n\nThanks!\n\n- 6 years, 5 months ago\n\nThe intention of display style is to let inline equations, i.e. those of the form \\ ( \\ ), be allowed to take up more than 1 line of space, for example $\\displaystyle \\sum_{i=0}^n i^2$. This could cause your paragraph of text to appear jumpy, since the subscripts and superscripts now require more line width, as you can see in this paragraph.\n\nConversely, since stand-alone equations, i.e. those of the form \\ [ \\ ], already take up a chunk of their own space, the equations (tend to) display as intended. It is not required to use \\displaystyle (which you did in your code above)\n\nStaff - 6 years, 5 months ago\n\n\\ [ latex \\ ] aligns in the center and on a new line while \\ ( latex \\ ) aligns right or with text. At least I think so.\n\n- 6 years, 5 months ago\n\n\\text{\\sum\\limits\\_{i=0}^n i^2 } \\implies \\sum\\limits_{i=0}^n i^2\n\n- 6 years, 5 months ago\n\nOr use the $\\LaTeX$ \\displaystyle\n\n- 3 years, 8 months ago\n\nNow that your queries are answered, I would like to add that you can write \"cos\" in a neater way. Use \\cos in your latex code i.e $\\displaystyle \\cos$. You see that it looks much better than simply writing \"cos\" in your code. The same works for other trig functions too.\n\nLimits can be applied to a definite integral the same way you would do for a summation. 
$\\int_{0}^{1} f(x) dx$\n\n- 6 years, 5 months ago\n\nNow I have a tip for you, instead of just appending $dx$ to your integral, you should separate it by \\, like this:\n\n\\text{\\int\\_0^1 f(x) \\, dx } \\implies \\int_0^1 f(x) \\, dx\n\nIt's not really a big deal, but it does look a tiny bit nicer.\n\n- 6 years, 5 months ago\n\nIt does look better, thanks!\n\n- 6 years, 5 months ago\n\n☺☺☺☺☺☺☺☺☺☺☺☺\n\n- 3 years, 8 months ago"
]
| [
null,
"https://ds055uzetaobb.cloudfront.net/site_media/images/default-avatar-globe.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.90233105,"math_prob":0.9648332,"size":3496,"snap":"2019-51-2020-05","text_gpt3_token_len":961,"char_repetition_ratio":0.1093929,"word_repetition_ratio":0.012987013,"special_character_ratio":0.2617277,"punctuation_ratio":0.11382114,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99571294,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-28T02:38:35Z\",\"WARC-Record-ID\":\"<urn:uuid:ebad9dfc-6d7e-4ad2-a0d9-ffa75c747fea>\",\"Content-Length\":\"103917\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9c306248-150c-4213-875a-d136e2db80ea>\",\"WARC-Concurrent-To\":\"<urn:uuid:9d58fec3-2b69-4351-b211-1e22f801035e>\",\"WARC-IP-Address\":\"104.20.34.242\",\"WARC-Target-URI\":\"https://brilliant.org/discussions/thread/typing-latex/\",\"WARC-Payload-Digest\":\"sha1:YT556PPSHLLKOF6UMDSOA6HJ5EJNMYRS\",\"WARC-Block-Digest\":\"sha1:IXX4KEXQEJQ33VDBKURXFOELAX4QOMOY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251737572.61_warc_CC-MAIN-20200127235617-20200128025617-00500.warc.gz\"}"} |
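The fixes proposed in the thread above can be collected into one small LaTeX fragment: \displaystyle (or \limits) puts the limits above the operator in inline math, \[ … \] centres an equation on its own line, \cos typesets the operator name upright, and \, spaces the differential:

```latex
% Inline math with display-style limits above the operator:
\( \displaystyle \sum_{i=0}^{n} i^2 \)   % or: \( \sum\limits_{i=0}^{n} i^2 \)

% Centred display equation, with \cos and \, before the differential:
\[ \int_{0}^{2\pi} \cos\theta \, d\theta = 0 \]
```

These are standard LaTeX, not Brilliant-specific; the site's renderer simply accepts the same markup.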
https://www.omnicalculator.com/finance/after-tax-cost-of-debt | [
"# After-tax Cost of Debt Calculator\n\nCreated by Wei Bin Loo\nReviewed by Dominik Czernia, PhD and Jack Bowater\nBased on research by\nJohn R. Ezzell, James A. Miles Analyzing leases with the after-tax cost of debt Journal of Business Research (December 1983)\nLast updated: Sep 08, 2023\n\nWith this after-tax cost of debt calculator, you can easily calculate how much it costs a company to raise new debt to fund its assets.\n\nAfter reading this article, you will understand what the after-tax cost of debt is and how to calculate it. You will also understand how to apply the after-tax cost of debt formula to real-life situations.\n\n## What is the after-tax cost of debt?\n\nBefore we dive into the concept of the after-tax cost of debt, we must first understand the cost of debt and the cost of debt formula.\n\nWe define the cost of debt as the market interest rate, or yield to maturity (YTM), that the company would have to pay if it were to raise new debt from the market. Don't worry if this sounds technical; we explain in detail how you can obtain the cost of debt in the following section. You can also use our bond YTM calculator and yield to maturity calculator.\n\nHowever, when this concept is applied in real life, where tax needs to be accounted for, the after-tax cost of debt is more commonly used. 
The main reason for this is that the interest paid on debt is often tax-deductible.\n\n## How to calculate the after-tax cost of debt using the after-tax cost of debt formula?\n\nThere is no better way to understand the concept of the after-tax cost of debt than to see it applied in real life.\n\nTo facilitate this, let's assume that we are analyzing a hypothetical company, Bill's Brilliant Barnacles, with the following information:\n\n• The debt of Bill's Brilliant Barnacles has a credit rating of AA;\n• The debt of Bill's Brilliant Barnacles has a maturity of 15 years;\n• Pre-tax income of Bill's Brilliant Barnacles in 2020 was $1,000,000; and\n• Net income of Bill's Brilliant Barnacles in 2020 was $800,000.\n\nTo calculate the after-tax cost of debt, there are 3 steps you need to follow:\n\n1. Calculate the cost of debt\n\nDetermining a company's before-tax cost of debt, or the cost of debt, has always seemed difficult and complicated.\n\nAs we explained above, the cost of debt is the market interest rate, or yield to maturity (YTM), that the company will have to pay to its debtor to raise new debt from the market. However, more often than not, it is almost impossible to obtain the market interest rate of a particular company, especially when the company's debt is not publicly traded. So, how do we calculate the before-tax cost of debt of a company?\n\nWorry not. There is still a way that we can obtain this information. And to do that, we need to know the credit rating and the maturity of the company's existing debt. In our example, the credit rating of Bill's Brilliant Barnacles' existing debt is AA and the maturity of its existing debt is 15 years.\n\nUsing these 2 pieces of information, we can estimate the company's before-tax cost of debt by comparing its debt to other publicly traded bonds with a similar credit rating. 
For instance, if Charlie's Cheerful Cobblers, a company with debts of similar credit rating and maturity as Bill's Brilliant Barnacles, has an 8% yield to maturity (YTM) for its debt, we can safely assume that Bill's Brilliant Barnacles' before-tax cost of debt will be approximately 8% as well.\n\n2. Calculate the marginal corporate tax rate\n\nThere are 2 inputs that you need to calculate the marginal corporate tax rate, namely the company's pre-tax income and the company's net income.\n\nYou can calculate the marginal corporate tax rate using the formula below:\n\nmarginal corporate tax rate = 1 - (net income / pre-tax income)\n\nAs the corporate tax rate is applied to the pre-tax income, the equation above should give us the marginal corporate tax rate of Bill's Brilliant Barnacles, which is:\n\nmarginal corporate tax rate = 1 - ($800,000 / $1,000,000) = 1 - 0.8 = 0.2 = 20%\n\n3. Calculate the after-tax cost of debt\n\nNow that we have obtained the before-tax cost of debt and the marginal corporate tax rate, it is time to calculate the after-tax cost of debt. The after-tax cost of debt can be calculated using the after-tax cost of debt formula shown below:\n\nafter-tax cost of debt = before-tax cost of debt * (1 - marginal corporate tax rate)\n\nThus, in our example, the after-tax cost of debt of Bill's Brilliant Barnacles is:\n\nafter-tax cost of debt = 8% * (1 - 20%) = 6.4%\n\n## What are the benefits of calculating the after-tax cost of debt?\n\nThe benefits of finding the after-tax cost of debt (for example, with our after-tax cost of debt calculator) are:\n\n1. After-tax cost of debt assists us in making investment decisions\n\nIf you want to fund a project with debt, it is essential to make sure the after-tax cost of debt of funding the project is less than the project's rate of return. 
In other words, the after-tax cost of debt is the required rate of return of the project if the project is funded 100% by debt.\n\nTo earn a return from the project, the project's rate of return has to be more than the required rate of return, which is the after-tax cost of debt.\n\n2. After-tax cost of debt helps us to assess the riskiness of a company\n\nCalculating the after-tax cost of debt is also useful in assessing the riskiness of a company. If the company's after-tax cost of debt is a lot higher than the market average, it reflects that the investors require a higher return from the company. This means that the company is riskier compared to similar companies in the market.\n\n3. After-tax cost of debt is used in calculating the weighted-average cost of capital (WACC)\n\nLast, but not least, the after-tax cost of debt is also an important part of calculating the WACC, which is made up of the after-tax cost of debt, cost of equity, and the company's capital structure. You can use our WACC calculator and cost of equity calculator.\n\nIn a nutshell, calculating the after-tax cost of debt is essential when assessing companies and projects. It can tell you how risky a company is and how much return you need for a project to be profitable. It is thus very crucial to understand what is the after-tax cost of debt and how to find the after-tax cost of debt.\n\nHowever, the cost of debt is by no means the only metric to consider when assessing different projects and companies. So, it is recommended to form an overview of the company and ensure that every other metric is aligned before you make an investment decision."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9468366,"math_prob":0.9641855,"size":6296,"snap":"2023-40-2023-50","text_gpt3_token_len":1395,"char_repetition_ratio":0.24570884,"word_repetition_ratio":0.12354521,"special_character_ratio":0.2253812,"punctuation_ratio":0.08498809,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9856496,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-28T03:51:32Z\",\"WARC-Record-ID\":\"<urn:uuid:c2735ffe-15f8-44c1-9ac1-17d4de3de2ff>\",\"Content-Length\":\"261007\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:75a9b338-e5d6-4b85-90c6-9d2d0b1a9144>\",\"WARC-Concurrent-To\":\"<urn:uuid:440c12e7-9f04-404e-9bc6-04a159414b3d>\",\"WARC-IP-Address\":\"68.70.205.4\",\"WARC-Target-URI\":\"https://www.omnicalculator.com/finance/after-tax-cost-of-debt\",\"WARC-Payload-Digest\":\"sha1:36OVN2CHFFX7KL4KNFC2HW2BPACMBHXY\",\"WARC-Block-Digest\":\"sha1:W6KG345OMGLAQO62CMKVMNZCN7AAX4A2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510358.68_warc_CC-MAIN-20230928031105-20230928061105-00847.warc.gz\"}"} |
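The three steps in the article above map directly onto two one-line functions; a minimal sketch using the article's own numbers for Bill's Brilliant Barnacles (8% before-tax cost of debt, $800,000 net income on $1,000,000 pre-tax income):

```python
def marginal_tax_rate(net_income: float, pretax_income: float) -> float:
    """Step 2: marginal corporate tax rate = 1 - net income / pre-tax income."""
    return 1 - net_income / pretax_income

def after_tax_cost_of_debt(pretax_cost_of_debt: float, tax_rate: float) -> float:
    """Step 3: after-tax cost = before-tax cost * (1 - marginal tax rate)."""
    return pretax_cost_of_debt * (1 - tax_rate)

rate = marginal_tax_rate(800_000, 1_000_000)   # 0.20, i.e. 20%
cost = after_tax_cost_of_debt(0.08, rate)      # about 0.064, i.e. 6.4%
print(f"{cost:.1%}")
```

Step 1 (estimating the 8% before-tax cost from comparable bonds) stays a manual input here, since it comes from market data rather than a formula.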
https://www.quicktopic.com/43/H/eWqhydvFpUN | [
"",
null,
"",
null,
"TOPIC:\n\nUT ATI Data Mining 2009\n\n^ All messages 5-20 of 20 1-4 >>\n 20 Jaak Vilo",
null,
"09-28-2009 06:13 AM ET (US) Sigrid, try out different kernels. See how they work and if possible, try to find examples where one is better than others or good enough... 19 Sigrid Pedoson 09-26-2009 11:14 AM ET (US) I don't understand task 4. 18 Sigrid Pedoson 09-26-2009 10:58 AM ET (US) Question about bonus.How many points will be given, if 1) from first bonus if there are done 2 and 3 in R 2) from second bonus, if there is a scatterplot drawn 17 Sven Laur 09-16-2009 04:21 AM ET (US) The question Sulev raised is a matter of taste. The resulting list of closed itemsets is the same whether we consider all proper supersets that are frequent or all proper supersets without any restrictions. Indeed, consider a frequent itemset that is not closed under the strict definition, i.e., there is some proper superset that has the same support. Then this superset must be also frequent and thus the set in question is not closed under the weaker definition. Implication to the other side is trivial. 16 Karl Potisepp 09-16-2009 12:56 AM ET (US) The definition of closed frequent itemsets says that if all its supersets have smaller support than the itemset in question, then it is a closed set. If you can establish that the itemset in question does not belong to any larger itemsets that have equal support, then it is a closed itemset. If there are no proper supersets to the itemset in question (with smaller support or not), then the amount of \"all its proper supersets\" is 0, and the criteria applies, and you have found a closed itemset. 15 Sulev Reisberg 09-15-2009 04:57 PM ET (US) I have a question about closed itemsets. On slide 26 the definition is \"A frequent itemset is closed if all its proper supersets have smaller support\". Am I right that \"proper superset\" stands for superset which has one item more AND MUST ALSO BE A FREQUENT ITEMSET?I also add an example just to be sure that I am getting it right:Let {a} be a closed frequent itemset with support 10. 
It has frequent supersets {a,b} with support 5 and {a,c} with support 5. Assuming that there are no more frequent itemsets, are {a,b} and {a,c} also closed frequent itemsets? 14 Sven Laur 09-15-2009 04:30 AM ET (US) Superset of A is a set that contains all elements of A and possibly something else. In Estonian 'ülemhulk' as antonym to 'alamhulk'. 13 Sigrid Pedoson 09-14-2009 02:58 PM ET (US) What is \"superset\"? What is it in Estonian? 12 Sven Laur 09-14-2009 08:35 AM ET (US) >I've a question about the definition of maximal frequent itemsets. On the lecture slides, a> frequent itemset is said to be maximal if all it's *proper* supersets are infrequent (emphasis > mine). Does the word 'proper' place some additional requirements to the supersets of the > maximal itemset aside from the fact that the maximal itemset must be contained in them?Note that A is a superset of A. Thus if A is frequent then not all of its supersets are infrequent, namely A is frequent. A proper superset of A is a superset that is strictly larger than A and this makes the existence of maximal frequent itemsets possible. 11 Sven Laur 09-14-2009 08:32 AM ET (US) The questions raised about the lift were correct and by now the formula is in correct form. To make sure I will just repeat the derivation.First observation: pr[X]=supp(X)/n, pr[Y]=supp(Y)/n, pr[X and Y] = supp(X u Y)/n Second observation: Lift shows how much P[X and Y] is larger than P[X]P[Y], i.e. how much P[X and Y] is over-represented compared to the model where X and Y are drawn independently.Definition: lift(X=>Y) = pr[X and Y]/(pr[X] pr[Y]) = (supp(X u Y)/n)/(supp(X)/n * supp(Y)/n) = n * supp(X u Y)/(supp(X) * supp(Y)) As a side remark note that lift has an asymmetric range, i.e., lift(X=>Y)\\in [0,infinity) whereas logarithm of lift has symmetric range. 
That is, lift 0.5 means that P[X and Y ] is two times underrepresented and lift 2 means that P[X and y] is two times overrepresented.As a second side remark note that lift(X=>Y)=lift(Y=>X), i.e., lift is actually a property of a pair X, Y and thus does not have a direction. The latter is one drawback of the lift measure. 10 Sulev Reisberg 09-14-2009 05:49 AM ET (US) The lift formula is now inverted, thus my previous post is not relevant anymore (I could not delete it).But I confirm the concern by Rudolf Elbrecht about \"1/n\", which should be \"n\" in my opinion also. 9 Rudolf Elbrecht 09-14-2009 01:22 AM ET (US) I have a question about the corrected lift formula on the slide 16. Shouldn't the normalisation coefficient for lift, when using absolute values, be \"n\", not \"1/n\"?On the slides there is formula:lift(X=>Y) = 1/n * supp(X U Y)/supp(X)*supp(Y),but in my opinion it is not the same aslift(X=>Y) = [supp(X U Y)/n]/[supp(X)/n]*[supp(Y)/n].Am I just doing some stupid mistake on the calculation or others think also that instead \"1/n\" there should be just \"n\" in the first formula?Edited 09-14-2009 01:23 AM 8 Sulev Reisberg 09-13-2009 11:24 AM ET (US) I have a question about calculating lift value. On slide 16 of the lecture by Sven Laur there is the following formula given to calculate a lift value:lift (X=>Y) = 1/n * supp(X)supp(Y)/supp(X U Y) We can rewrite this as follows:lift (X=>Y) = [supp(Y)/n] * supp(X)/supp(X U Y) = [supp(Y)/n] / [supp(X U Y)/supp(X)] = [supp(Y)/n] / conf(X=>Y).The first component here shows the probability of getting Y. The second component shows the probability of getting Y WHEN X (conditional probability).Now comes the difficult part: as I understand the aim of calculating lift value is to evaluate the rules, hence I can not imagine why are we dividing in such order: the first component (has nothing to do with the rule) from the second one? Shouldn't we use supp(Y)/n as the basis? 
In my opinion it would be much more logical, if we'd do it vice versa.Let me explain this with an example:Assume that we have a dataset with 1000 transactions. In 100 of them we have Y presented, thus the overall probability of getting Y is 10%. Lets also assume that we have 2 rules: {A}=>{Y} with confidence 50% and {B}=>{Y} with confidence 20%. If we need to know which of these rules is worth more attention, then it would be obvious, that we divide the confidence by overall probability of getting Y (10%) as basis: {A}=>{Y}: 50%/10% = 5 {B}=>{Y}: 20%/10% = 2This would also make a word \"lift\" a bit reasonable - you can make a rule more important (lift it up) by dividing it (its confidence) by overall probability of getting the result.In the course page there is also a referenced material http://www.borgelt.net/slides/fpm.pdf which seems to use the formula of calculating lift vice versa.Therefore my question is - am I getting it all wrong or does it matter at all which way I calculate it?Edited 09-13-2009 11:28 AM 7 Karl Potisepp 09-11-2009 04:27 AM ET (US) I've a question about the definition of maximal frequent itemsets. On the lecture slides, a frequent itemset is said to be maximal if all it's *proper* supersets are infrequent (emphasis mine). Does the word 'proper' place some additional requirements to the supersets of the maximal itemset aside from the fact that the maximal itemset must be contained in them? 6 Jaak Vilo",
09-11-2009 02:25 AM ET (US)
The question was: how to get the meanings of the numbers in the bonus task. Well, this is one of the problems in publishing data sets, that some have been stripped of the meanings. I did not read the background data; it is possible that the original publishers of the data have somewhere the mapping between the numbers and the attribute names. If someone finds it, please let the others know. From the point of view of testing ideas, it can often be the case that you work on "meaningless" data and then go to the customers, who can decide whether what was found was interesting. So interestingness should come from the data first, not from the meaning.
-Jaak
Ps. Let's use the mailing list and the discussion board for questions of general interest.

Karl Blum wrote (translated from Estonian):
> Hi, a question about the bonus task. Do the databases we use there really not list the meanings of the items? The description of the accidents database does give a list of the different items, but not their correspondence to the numbers. It is rather boring to draw any conclusions that way.
> Karl

5  KT  09-09-2009 04:50 AM ET (US)
Testing..
]
https://zbmath.org/?q=an%3A0549.39006 | [
## Remarks on the stability of functional equations. (English) Zbl 0549.39006

Let $$(G,+)$$ be an abelian group and let $$X$$ be a Banach space. If $$f: G\to X$$ is a function such that $$\| f(x+y)+f(x-y)-2f(x)-2f(y)\| \leq \delta$$ for every $$x, y\in G$$ and some $$\delta > 0$$, then there exists a unique function $$g: G\to X$$ satisfying the equation $$g(x+y)+g(x-y)=2g(x)+2g(y)$$ for every $$x, y\in G$$, such that $$\| f(x)-g(x)\| \leq \delta/2$$ for every $$x\in G$$.

In the second part there is a short proof of a stability theorem of D. H. Hyers and S. M. Ulam [Proc. Am. Math. Soc. 3, 821-828 (1952; Zbl 0047.29505)] for the inequality $$f(tx+(1-t)y)\leq tf(x)+(1-t)f(y)+\delta$$. Finally, the author gives a counterexample for Jensen-convex functions.

Reviewer: A. Smajdor

### MSC:

- 39B72 Systems of functional equations and inequalities
- 39B52 Functional equations for functions with more general domains and/or ranges

### References:

Hyers, D. H., On the stability of the linear functional equation. Proc. Nat. Acad. Sci. U.S.A. 27 (1941), 411-416. · Zbl 0061.26403
Hyers, D. H., Transformations with bounded m-th differences. Pacific J. Math. 11 (1961), 591-602. · Zbl 0099.10501
Hyers, D. H. and Ulam, S. M., Approximately convex functions. Proc. Amer. Math. Soc. 3 (1952), 821-828. · Zbl 0047.29505
Rockafellar, R. T., Convex analysis. Princeton University Press, Princeton, New Jersey, 1970. · Zbl 0193.18401
Ulam, S. M., A collection of mathematical problems. Interscience Publishers, Inc., New York, 1960. · Zbl 0086.24101
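The existence part of the first result is usually proved by Hyers' direct method: for an f with uniformly bounded quadratic defect, the sequence f(2^n x)/4^n converges, and its limit g is the quadratic map promised by the theorem. A numeric sketch (the perturbed function f below is an invented example, not taken from the review):

```python
import math

# Hyers' direct method for the quadratic functional equation
# g(x+y) + g(x-y) = 2 g(x) + 2 g(y): if f has uniformly bounded defect,
# the limit g(x) = lim_n f(2**n * x) / 4**n exists and is quadratic.
def f(x):
    return x * x + 0.1 * math.sin(x)   # bounded perturbation of x**2

def hyers_limit(x, n=25):
    return f(2 ** n * x) / 4 ** n

for x in (0.5, 1.0, 3.0):
    g = hyers_limit(x)
    assert abs(g - x * x) < 1e-6         # the limit is the quadratic x**2
    assert abs(f(x) - g) <= 0.1 + 1e-9   # f stays uniformly close to g
```

The perturbation is crushed by the 1/4^n factor, so the iterates converge to x^2, which stays within the uniform bound of f, in line with the delta/2 estimate.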
]
https://researchoutput.ncku.edu.tw/en/publications/trajectory-interpretation-of-correspondence-principle-solution-of | [
# Trajectory Interpretation of Correspondence Principle: Solution of Nodal Issue

Ciann-Dong Yang, Shiang-Yi Han

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

## Abstract

The correspondence principle states that the quantum system will approach the classical system in high quantum numbers. Indeed, the average of the quantum probability density distribution reflects a classical-like distribution. However, the probability of finding a particle at a node of the wave function is zero. This condition is recognized as the nodal issue. In this paper, we propose a solution for this issue by means of complex quantum random trajectories, which are obtained by solving the stochastic differential equation derived from the optimal guidance law. It turns out that point set A, which is formed by the intersections of the complex random trajectories with the real axis, can represent the quantum-mechanically compatible distribution of the quantum harmonic oscillator system. Meanwhile, the projections of the complex quantum random trajectories on the real axis form point set B, which gives a spatial distribution without the appearance of nodes and approaches the classically compatible distribution in high quantum numbers. Furthermore, the statistical distribution of point set B is verified by the solution of the Fokker–Planck equation.

Original language: English
Pages: 960-976
Number of pages: 17
Journal: Foundations of Physics
Volume: 50
Issue number: 9
DOI: https://doi.org/10.1007/s10701-020-00363-3
Publication status: Published - 2020 Sep 1

## All Science Journal Classification (ASJC) codes

- Physics and Astronomy (all)
]
https://analytixon.com/2015/04/12/distilled-news-65/ | [
A Subtle Way to Over-Fit
If you train a model on a set of data, it should fit that data well. The hope, however, is that it will fit a new set of data well. So in machine learning and statistics, people split their data into two parts. They train the model on one half, and see how well it fits on the other half. This is called cross validation, and it helps prevent over-fitting, fitting a model too closely to the peculiarities of a data set.

Python Data. Leaflet.js Maps.
Folium builds on the data wrangling strengths of the Python ecosystem and the mapping strengths of the Leaflet.js library. Manipulate your data in Python, then visualize it on a Leaflet map via Folium.

Visualizing Matrix Multiplication As a Linear Combination
When multiplying two matrices, there's a manual procedure we all know how to go through. Each result cell is computed separately as the dot product of a row in the first matrix with a column in the second matrix. While it's the easiest way to compute the result manually, it may obscure a very interesting property of the operation: multiplying A by B is the linear combination of A's columns using coefficients from B. Another way to look at it is that it's a linear combination of the rows of B using coefficients from A.
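The column/row view of matrix multiplication in the last note can be verified directly with NumPy (the matrix sizes here are arbitrary):

```python
import numpy as np

# Column j of A @ B is a linear combination of A's columns with
# coefficients taken from column j of B; row i of A @ B is a linear
# combination of B's rows with coefficients from row i of A.
rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(3, 4))
B = rng.integers(-3, 4, size=(4, 2))
C = A @ B

col0 = sum(B[k, 0] * A[:, k] for k in range(B.shape[0]))
row0 = sum(A[0, k] * B[k, :] for k in range(A.shape[1]))
assert np.array_equal(C[:, 0], col0)   # combine A's columns
assert np.array_equal(C[0, :], row0)   # combine B's rows
```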
]
https://diffeqflux.sciml.ai/dev/GPUs/ | [
"# Use with GPUs\n\nNote that the differential equation solvers will run on the GPU if the initial condition is a GPU array. Thus, for example, we can define a neural ODE by hand that runs on the GPU (if no GPU is available, the calculation defaults back to the CPU):\n\nusing DifferentialEquations, Flux, Optim, DiffEqFlux, DiffEqSensitivity\n\nmodel_gpu = Chain(Dense(2, 50, tanh), Dense(50, 2)) |> gpu\np, re = Flux.destructure(model_gpu)\ndudt!(u, p, t) = re(p)(u)\n\n# Simulation interval and intermediary points\ntspan = (0.0, 10.0)\ntsteps = 0.0:0.1:10.0\n\nu0 = Float32[2.0; 0.0] |> gpu\nprob_gpu = ODEProblem(dudt!, u0, tspan, p)\n\n# Runs on a GPU\nsol_gpu = solve(prob_gpu, Tsit5(), saveat = tsteps)\n\nOr we could directly use the neural ODE layer function, like:\n\nprob_neuralode_gpu = NeuralODE(gpu(dudt2), tspan, Tsit5(), saveat = tsteps)\n\nIf one is using FastChain, then the computation takes place on the GPU with f(x,p) if x and p are on the GPU. This commonly looks like:\n\ndudt2 = FastChain((x,p) -> x.^3,\nFastDense(2,50,tanh),\nFastDense(50,2))\n\nu0 = Float32[2.; 0.] |> gpu\np = initial_params(dudt2) |> gpu\n\ndudt2_(u, p, t) = dudt2(u,p)\n\n# Simulation interval and intermediary points\ntspan = (0.0, 10.0)\ntsteps = 0.0:0.1:10.0\n\nprob_gpu = ODEProblem(dudt2_, u0, tspan, p)\n\n# Runs on a GPU\nsol_gpu = solve(prob_gpu, Tsit5(), saveat = tsteps)\n\nor via the NeuralODE struct:\n\nprob_neuralode_gpu = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps)\nprob_neuralode_gpu(u0,p)\n\n## Neural ODE Example\n\nHere is the full neural ODE example. 
Note that we use the gpu function so that the same code works on CPUs and GPUs, dependent on using CUDA.

using DiffEqFlux, OrdinaryDiffEq, Flux, Optim, Plots, CUDA, DiffEqSensitivity
CUDA.allowscalar(false) # Makes sure no slow scalar operations are occurring

# Generate Data
u0 = Float32[2.0; 0.0]
datasize = 30
tspan = (0.0f0, 1.5f0)
tsteps = range(tspan[1], tspan[2], length = datasize)
function trueODEfunc(du, u, p, t)
    true_A = [-0.1 2.0; -2.0 -0.1]
    du .= ((u.^3)'true_A)'
end
prob_trueode = ODEProblem(trueODEfunc, u0, tspan)
# Make the data into a GPU-based array if the user has a GPU
ode_data = gpu(solve(prob_trueode, Tsit5(), saveat = tsteps))

dudt2 = FastChain((x, p) -> x.^3,
                  FastDense(2, 50, tanh),
                  FastDense(50, 2))
u0 = Float32[2.0; 0.0] |> gpu
p = initial_params(dudt2) |> gpu
prob_neuralode = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps)

function predict_neuralode(p)
    gpu(prob_neuralode(u0, p))
end
function loss_neuralode(p)
    pred = predict_neuralode(p)
    loss = sum(abs2, ode_data .- pred)
    return loss, pred
end
# Callback function to observe training
list_plots = []
iter = 0
callback = function (p, l, pred; doplot = false)
    global list_plots, iter
    if iter == 0
        list_plots = []
    end
    iter += 1
    display(l)
    # plot current prediction against data
    plt = scatter(tsteps, Array(ode_data[1, :]), label = "data")
    scatter!(plt, tsteps, Array(pred[1, :]), label = "prediction")
    push!(list_plots, plt)
    if doplot
        display(plot(plt))
    end
    return false
end
result_neuralode = DiffEqFlux.sciml_train(loss_neuralode, p, ADAM(0.05),
                                          cb = callback, maxiters = 300)
]
https://carlbrannen.wordpress.com/2007/09/22/ | [
"# Daily Archives: September 22, 2007\n\n## Infrared Correction to Mass I\n\nEarlier we postulated the snuark mass interaction for a transition from a left to a right handed state as a simple iq vertex stuck between two propagators:",
[Figure: snuarkinteract.png, the mass interaction vertex between two propagators]
"We postulated that the mass interaction that converts a left handed lepton to a right handed lepton involved three of these transitions happening simultaneously. We put the left handed snuarks as +x, +y, and +z, and the right handed snuarks as -x, -y, and -z. In doing this, because the snuark interaction forbids transitions between incompatible quantum states (for example from +z to -z), we found that there were only two ways this could happen. We labeled these two complex interactions as J and K. These two interactions amounted to the even permutations on three objects (not including the identity). They were (x,y,z) goes to (y,z,x) or (z,x,y). To distinguish the right and left handed states, we wrote these as (+x,+y,+z) goes to (-y,-z,-x) or (-z,-x,-y). All other permutations on the three objects (x,y,z) were forbidden because they included a forbidden interaction such as +x goes to -x.\n\nOur analysis was correct in that we did find all the final states (two of them), but it was incorrect in that we did not include all the possible intermediate states. In this post and the next, we will sum over these more complicated interactions and compute an effective mass interaction. The calculation is quite similar to a similar calculation in standard QFT: the treatment of soft photons as a vertex correction to the interaction between an electron and a hard photon. See section 6.5 of Peskin and Schroeder."
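The counting claim in this paragraph, that forbidding every transition of the form +a to -a leaves exactly the two non-identity even permutations, can be brute-forced (string labels stand in for the snuark axes; this only illustrates the counting, not the physics):

```python
from itertools import permutations

# Snuarks start as (+x, +y, +z); a final state assigns (-x, -y, -z) in
# some order, but a transition from +a to -a is forbidden.  Enumerate
# the assignments that survive the restriction.
axes = ("x", "y", "z")
allowed = [
    p for p in permutations(axes)
    if all(src != dst for src, dst in zip(axes, p))  # forbid +a -> -a
]
# Exactly the two cyclic permutations (the interactions J and K) remain:
assert sorted(allowed) == [("y", "z", "x"), ("z", "x", "y")]
```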
]
http://export.arxiv.org/list/math.MG/pastweek?skip=0&show=10 | [
# Metric Geometry

## Authors and titles for recent submissions

[ total of 14 entries: 1-10 | 11-14 ]
[ showing 10 entries per page: fewer | more | all ]

### Tue, 5 Jul 2022

Title: Inclusion properties of the triangular ratio metric balls
Authors: Oona Rainio
Subjects: Metric Geometry (math.MG)

arXiv:2207.01392 (cross-list from gr-qc) [pdf, ps, other]
Title: Causal bubbles in globally hyperbolic spacetimes
Subjects: General Relativity and Quantum Cosmology (gr-qc); Mathematical Physics (math-ph); Differential Geometry (math.DG); Metric Geometry (math.MG)

arXiv:2207.00963 (cross-list from math.FA) [pdf, ps, other]
Title: A metric fixed point theorem and some of its applications
Authors: Anders Karlsson
Subjects: Functional Analysis (math.FA); Metric Geometry (math.MG)

### Mon, 4 Jul 2022

Title: The Cheeger problem in abstract measure spaces
Subjects: Metric Geometry (math.MG); Functional Analysis (math.FA)

Title: A framework for stereo vision via optimal transport
Subjects: Metric Geometry (math.MG); Optimization and Control (math.OC)

### Fri, 1 Jul 2022

Title: Three-point bounds for sphere packing
Subjects: Metric Geometry (math.MG)

Title: Borsuk's partition problem in four-dimensional $\ell_{p}$ space
Authors: Jun Wang, Fei Xue
Subjects: Metric Geometry (math.MG)

arXiv:2206.15342 (cross-list from math.CO) [pdf, ps, other]
Title: Tilings of the sphere by congruent quadrilaterals III: edge combination $a^3b$ with general angles
Comments: 41 pages, 20 figures, 12 tables
Subjects: Combinatorics (math.CO); Metric Geometry (math.MG)

arXiv:2206.15104 (cross-list from math.DG) [pdf, ps, other]
Title: Euler characteristics of collapsing Alexandrov spaces
Subjects: Differential Geometry (math.DG); Metric Geometry (math.MG)

### Thu, 30 Jun 2022 (showing first 1 of 2 entries)

Title: A Combination Theorem for Spaces of Tree-graded Spaces
Authors: Rakesh Halder
]
https://www.easycalculation.com/formulas/civil-engineering-all.html | [
# All Civil Engineering Formulas List

## Cantilever Beam Stiffness

#### Formula Used:

Stiffness (k) = (3 × E × I) / l³

Where,

E - Young's Modulus
I - Area Moment of Inertia
l - Length

## Colebrook White Equation

#### Formula:
Where,

v - Kinematic viscosity of water,
D - Internal diameter,
Ks - Roughness coefficient,
g = 9.81 m/s²,
A - Area of section.

## Cantilever Beam Slope, Deflection with Couple Moment

#### Formula Used:

Slope at free end = ML / EI
Deflection at any section = Mx² / 2EI

Where,
M is the couple moment at the free end,
E is the Elastic Modulus,
I is the Area moment of Inertia,
L is the Length of the beam and
x is the position of the load.

## Cantilever Beam Slope, Deflection with Uniformly Distributed Load

#### Formula Used:

Slope at free end = PL³ / 6EI
Deflection at any section = Px²(x³ + 6L² − 4Lx) / 24EI

Where,

P is the externally applied load,
E is the Elastic Modulus,
I is the Area moment of Inertia,
L is the Length of the beam and
x is the position of the load

## Cantilever Beam Slope, Deflection for Uniform Load

Formula Used:
Slope at free end = P0L³ / 6EI
Deflection at any section = P0x²(x³ + 6L² − 4Lx) / 24EI
P0 = PL / (L − x)

Where,
P0 is the Maximum intensity,
P is the Externally applied load,
E is the Elastic Modulus,
I is the Area moment of Inertia,
L is the Length of the beam and
x is the position of the load.

## Cantilever Beam Slope, Deflection for Load at Free End

#### Formula

Slope at free end = PL² / 2EI
Deflection at any section = Px²(3L − x) / 6EI

Where,

P is the externally applied load,
E is the Elastic Modulus,
I is the Area moment of Inertia,
L is the Length of the beam and
x is the position of the load

## Cantilever Beam Slope, Deflection for Load at Any Point

#### Formula Used:

Slope at free end = Pa² / 2EI
Deflection at any section = Px²(3a − x) / 6EI (for x less than a)
Deflection at any section = Pa²(3x − a) / 6EI (for a less than x)

Where,

P is the externally applied load,
E is the Elastic Modulus,
I is the Area moment of Inertia,
L is the Length of the beam and
x is the position of the load
a is the distance of the load from one end of the support

## Feet and Inches Arithmetic

#### Formula Used:

Multiplication = ((Value1-ft × 12) + in) × ((Value2-ft × 12) + in)
Addition = ((Value1-ft × 12) + in) + ((Value2-ft × 12) + in)
Subtraction = ((Value1-ft × 12) + in) − ((Value2-ft × 12) + in)
Division = ((Value1-ft × 12) + in) / ((Value2-ft × 12) + in)

Where,

ft - Feet
in - Inches

## Flexible Pavement Structural Number

#### Formula:

L = a1·ta + b1·tb + c1·tsb + d1·tad

Where,

L = Structural Number of Flexible pavement,
a1 = Layer coefficient for asphalt,
ta = Asphalt layer thickness,
b1 = Layer coefficient of base,
tb = Base layer thickness,
c1 = Layer coefficient of sub-base,
tsb = Sub-base layer thickness

## Vertical Curve Offset Distance

#### Formula Used:

E = [ L × (g2 − g1) ] / 8
Where,

E - Vertical Offset
L - Length of the curve

## Vertical Curve Length

Formula Used:
Lm = [ S² × (g2 − g1) ] / 864 ∀ S<Lm
Lm = 2S − [ 864 / (g2 − g1) ] ∀ S>Lm

Where,
Lm - Minimum Curve Length
S - Passing Sight Distance
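The offset and minimum-length formulas above translate directly into code; units follow the source (lengths and sight distance in feet, grades in percent):

```python
def vertical_offset(L, g1, g2):
    """External offset of a vertical curve, E = L * (g2 - g1) / 8."""
    return L * (g2 - g1) / 8.0

def min_vertical_curve_length(S, g1, g2):
    """Minimum curve length for passing sight distance S.

    Lm = S**2 * (g2 - g1) / 864 when S < Lm, otherwise
    Lm = 2*S - 864 / (g2 - g1), as given in the source.
    """
    A = g2 - g1
    Lm = S * S * A / 864.0
    if S < Lm:                       # sight distance inside the curve
        return Lm
    return 2.0 * S - 864.0 / A       # sight distance exceeds the curve

# Invented example: an 800 ft curve joining -2% and +2% grades.
E = vertical_offset(800.0, -2.0, 2.0)   # 400.0
```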
## Crest Vertical Curve Length

#### Formula Used:

Lm = ( A×S² ) / ( 200 × (√h1 + √h2)² ) ∀ S<Lm
Lm = 2S − { ( 200 × (√h1 + √h2)² ) / A } ∀ S>Lm

Where,

A - Absolute difference between g2 and g1
S - Sight Distance
Lm - Minimum Curve Length
h1 - Height of driver's eye above roadway surface
h2 - Height of object above roadway surface
## SAG Vertical Curve Length

Formula Used:
Lm = ( A×S² ) / ( 200 × (H + S×tanβ) ) ∀ S<Lm
Lm = 2S − { ( 200 × (H + S×tanβ) ) / A } ∀ S>Lm

If S < Lm, the first formula is used; if S > Lm, the second formula is used.

Where,

A - Absolute difference between g2 and g1
S - Sight Distance
Lm - Minimum Curve Length
H - Headlight height above the roadway
β - Angle of Headlight Beam

## Rate of Change Vertical Curve

#### Formula Used:

r = (g2 − g1) / L

Where,

r - Rate of change of grade
L - Length of the curve
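The SAG headlight criterion has the same two-case structure as the other curve-length formulas; a direct transcription (the example numbers are invented; H is the headlight height above the roadway):

```python
import math

def sag_curve_length(S, g1, g2, H, beta_deg):
    """Minimum SAG vertical curve length for headlight sight distance S.

    Lm = A*S**2 / (200*(H + S*tan(beta)))   when S < Lm,
    Lm = 2*S - 200*(H + S*tan(beta)) / A    when S > Lm,
    with A the absolute grade difference, as in the source.
    """
    A = abs(g2 - g1)
    reach = 200.0 * (H + S * math.tan(math.radians(beta_deg)))
    Lm = A * S * S / reach
    return Lm if S < Lm else 2.0 * S - reach / A

L = sag_curve_length(S=500.0, g1=-3.0, g2=3.0, H=2.0, beta_deg=1.0)
```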
## Transportation Highways Horizontal Curve

##### Formula

R = 5729.58 / D
T = R × tan(A/2)
L = 100 × (A/D)
LC = 2 × R × sin(A/2)
E = R × ( (1 / cos(A/2)) − 1 )
M = R × ( 1 − cos(A/2) )
PC = PI − T
PT = PC + L

Where,

D = Degree of Curve, Arc Definition
1° = 1 Degree of Curve
2° = 2 Degrees of Curve
P.C. = Point of Curve
P.T. = Point of Tangent
P.I. = Point of Intersection
A = Intersection Angle, Angle between two tangents
L = Length of Curve, from P.C. to P.T.
T = Tangent Distance
E = External Distance
L.C. = Length of Long Chord
M = Length of Middle Ordinate
c = Length of Sub-Chord
k = Length of Arc for Sub-Chord
d = Angle of Sub-Chord

## Elevation Point of Vertical Curve

##### Formula Used:

y = epvc + g1·x + [ (g2 − g1) × x² / 2L ]

Where,
y - Elevation of the point on the curve
epvc - Initial elevation at the start of the curve
x - Distance from the start of the curve
L - Length of the curve
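The arc-definition relations above can be bundled into one helper (angles in degrees, lengths in feet, per the 5729.58 constant; the example values are invented):

```python
import math

def horizontal_curve(D, A):
    """Arc-definition horizontal curve elements.

    D is the degree of curve, A the intersection angle (both degrees);
    formulas as in the source (English units, 100 ft stations).
    """
    half = math.radians(A / 2.0)
    R = 5729.58 / D                             # radius
    return {
        "R": R,
        "T": R * math.tan(half),                # tangent distance
        "L": 100.0 * A / D,                     # curve length, PC to PT
        "LC": 2.0 * R * math.sin(half),         # long chord
        "E": R * (1.0 / math.cos(half) - 1.0),  # external distance
        "M": R * (1.0 - math.cos(half)),        # middle ordinate
    }

c = horizontal_curve(D=4.0, A=24.0)
```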
## Vehicle Stopping Distance

#### Formula Used:

Stopping Distance = (v×t) + { v² / [2×g×(f±G)] }

Where,

g - Gravity (9.8)
v - Vehicle Speed
t - Perception Time
f - Coefficient of friction
G - Grade of road

## Spiral Curve Tangent Distance

#### Formula Used:

Y = L − { L⁵ / ( 40×R²×Ls² ) }

Where,
Y - Tangent distance to any point on the spiral
L - Length of spiral from tangent to any point
Ls - Length of spiral
R - Radius of Simple Curve
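The stopping-distance formula splits into a perception-reaction term and a braking term; a sketch in SI units, matching g = 9.8 (the example values are invented):

```python
def stopping_distance(v, t, f, G=0.0, g=9.8):
    """d = v*t + v**2 / (2*g*(f + G)).

    v - vehicle speed (m/s)
    t - perception-reaction time (s)
    f - friction coefficient
    G - grade, positive uphill and negative downhill
    """
    return v * t + v * v / (2.0 * g * (f + G))

d = stopping_distance(v=25.0, t=2.5, f=0.35)
```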
## Spiral Curve Deflection Angle

#### Formula Used:

i = L² / ( 6×R×Ls )

Where,
i - Tangent deflection angle to any point on the curve
L - Length of spiral from tangent to any point
Ls - Length of spiral
R - Radius of Simple Curve
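Both spiral formulas are small enough to transcribe directly (the angle comes out in radians; lengths are in whatever unit L, Ls, and R share):

```python
def spiral_deflection_angle(L, R, Ls):
    """i = L**2 / (6*R*Ls): tangent deflection to a point on the spiral."""
    return L * L / (6.0 * R * Ls)

def spiral_tangent_distance(L, R, Ls):
    """Y = L - L**5 / (40 * R**2 * Ls**2)."""
    return L - L ** 5 / (40.0 * R * R * Ls * Ls)
```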
## Earthwork Cross Sectional Area

Formula Used:

A = (1/2) × | (X1×Y2 − X2×Y1) + (X2×Y3 − X3×Y2) + … + (Xn×Y1 − X1×Yn) |
Where,
A - Area of cross section
Xi - Horizontal axis
Yi - Vertical axis
n - Number of points on cross section

## Earthwork Cross Section Volume

##### Formula Used:

V = ((A1 + A2) × L) / 2

Where,

L - Length between two areas
A1 - Cross section area of first side
A2 - Cross section area of second side
V - Earthwork Volume

## Concrete Slab Maximum Length

#### Formula

L = ( 0.00047 × hr × (fs × S)² )^(1/3)

Where,

L = Slab Length,
hr = Thickness of reinforced slab,
fs = Yield strength of steel reinforcement,
S = Steel reinforcing ratio

## Concrete Slab Volume

#### Formula Used:

Volume of concrete Slab = w × l × t

Where,

l - Length
w - Width
t - Thickness

## Concrete Slab Maximum Wall Load

##### Formula:

P = 9.93 × (fc^0.5) × (te²) × ( k / (19000 × (fc^0.5) × (te³)) )^0.25

Where,

fc = Concrete compressive strength,
k = Modulus of subgrade reaction,
te = Slab thickness.

## Concrete Slab Maximum Stationary Live Load

#### Formula:

w = 257.876 × s × (kh / E)^0.5

Where,

w = Maximum Allowable Stationary Live Load,
k = Modulus of subgrade reaction,
h = Thickness of slab,
s = Allowable extreme fiber stress in tension,
E = Modulus of elasticity.

## Concrete Footing Volume

#### Formula Used:

Volume of concrete Footer = [ (ow × ol) − (iw × il) ] × t

Where,

ol - Outside Length
ow - Outside Width
il - Inside Length
iw - Inside Width
t - Thickness

## Concrete Footing

#### Formula Used:

Footing Pours = ( Diameter × (Width / 12) × (Depth / 12) ) / 27

## Concrete Volume

#### Formula:

Concrete Volume = [ (22/7) × r² × depth / 27 ] × Quantity

## Block Wall Cubic Yards

#### Formula:

For size = 8 inch:
Cubic Yards to be filled = (L × W × 0.32) / 27
For size = 12 inch:
Cubic Yards to be filled = (L × W × 0.51) / 27

## Cubic Yards of Circular Stepping Stones

#### Formula:

Single Stepping Stone = (Π × r² × h) / 46656

Where,

h = Depth in inches
r = d / 2
d = Diameter in inches

## Cubic Yards of Rectangular Stepping Stones

#### Formula:

Single Stepping Stone = (l × b × h) / 324

Where,

l = Length in feet
b = Width in feet
h = Depth in inches

## Cubic Yards of Triangular Stepping Stones

#### Formula:

Single Stepping Stone = (l × b × h) / 648

Where,

l = Length in feet
b = Width in feet
h = Depth in inches

## Block

#### Formula:

Number Of Blocks = (Length × Width) / Block Size

## Concrete Mix Ratio

#### Formula:

Volume = Width × Height × Depth
Cement = Volume × 320
Sharp Sand = Volume × 600
Gravel = Volume × 1200
Water = Volume × 176

## Concrete Wall

#### Formula:

Concrete Wall (CW) = (Length × Thickness × Height) × 0.037037

## Concrete Driveways Cost

#### Formula:

Rectangle,
C = L × W × T × R
Circle,
C = π × (M/2)² × T × R
Footing,
C = L × W × D × R
Circular column,
C = π × (M/2)² × D × R

Where,

C = Total Cost Of Concrete Driveways
L = Length (yard)
W = Width (yard)
T = Thickness (yard)
R = Cost
D = Depth (yard)
M = Diameter (yard)

## Safe Speed For Horizontal Curve

#### Formula:

If the safe speed of the horizontal curve is greater than 50 mph:
Safe Speed ( V > 50 mph ) = ( (−0.03 × r) + √( (0.03 × r)² + (4 × r) × ( (15 × (e/100)) + 3.6 ) ) ) / 2

If the safe speed of the horizontal curve is less than 50 mph:
Safe Speed ( V < 50 mph ) = ( (−0.015 × r) + √( (0.015 × r)² + (4 × r) × ( (15 × (e/100)) + 2.85 ) ) ) / 2

Where,

r = Radius of Horizontal Curve (ft)
e = Superelevation

## Cornering Force

#### Formula:

t = u × m × g × sin(a)
f = ( u × m × g × sin(a) ) + ( m × g × cos(a) )
v = √( ( ( ( u × m × g × sin(a) ) + ( m × g × cos(a) ) ) × r ) / m )

Where,

t = Static Friction
u = Coefficient of Static Friction
m = Mass of Vehicle (kg)
g = Acceleration due to Gravity
f = Total Net Force
v = Maximum Speed
a = Slope of the Road
r = Radius of the Curve

## Concrete Driveway

#### Formula:

A = l × b
P = 2 × (l + b)

Where,

A = Driveway Area
P = Driveway Perimeter
l = Length
b = Width

## Roof Slope

#### Formula:

Run (inches) = (12 × Rise) / Roof Pitch
Slope = (Rise / Run) × 100
Angle = tan⁻¹(Rise / Run)

## Roof Angle

#### Formula:

Run (inches) = (Rise / Slope) × 100
Angle = tan⁻¹(Rise / Run)
Roof Pitch = Rise / (Run / 12)

## Roof Pitch

#### Formula:

Pitch = S / (N / 12)
Slope = (S / N) × 100
Angle = tan⁻¹(S / N)

Where,

S = Rise (inches)
N = Run (inches)

## Rise Run Slope

#### Formula:

Run (inches) = Rise / tan(angle)
Roof Pitch = Rise / (Run / 12)
Slope = (Rise / Run) × 100

## Curve Surveying

#### Formula:

l = π × r × i / 180
t = r × tan(i / 2)
e = ( r / cos(i / 2) ) − r
c = 2 × r × sin(i / 2)
m = r − ( r × cos(i / 2) )
d = 5729.58 / r

Where,

i = Deflection Angle
l = Length of Curve
t = Length of Tangent
e = External Distance
c = Length of Long Chord
m = Middle Ordinate
d = Degree of Curve Approximate

## lb/ft³ to kN/m³ Conversion

#### Formula:

T = S × (9.81 kN/m³ / 62.4 lb/ft³)

Where,

T = Total Unit Weight in kN/m³
S = Total Unit Weight in lb/ft³

## Insulation

#### Formula:

Approximate Sq.Ft Needed = Area Width × Area Height

## Trapezoidal Footing Volume

#### Formula:

V = (h / 3) × (A1 + A2 + √(A1 × A2))

Where,

V = Volume of Trapezoid Footing
h = Height of Trapezoid
A1 = Area of the Lower Shape
A2 = Area of the Upper Shape
A1 = m × n (Lower Height × Lower Breadth)
A2 = o × p (Upper Height × Upper Breadth)

## Concrete Yardage

#### Formula:

Concrete Yardage = L × W × (H / 12) × 0.037037

Where,

W = Width (ft)
L = Length (ft)
H = Thickness (inch)

## Curb and Gutter Barrier Concrete Yardage

#### Formula:

Concrete Yardage = ( l × (f/12 × (g/12 + h/12)) + l × (h/12 × h/12) ) × 0.037037

Where,

l = Length (ft)
f = Flag Thickness (inch)
g = Gutter Width (inch)
h = Curb Height (inch)

## Concrete Wall

#### Formula:

Concrete Yardage = Length × Height × (Thickness / 12) × 0.037037

## Concrete Footing Yard

#### Formula:

Concrete Yardage = Length × (Width (inch) / 12) × (Height (inch) / 12) × 0.037037

## Concrete Yards

#### Formula:

c = ( ((n × t/12) × (n × r/12)) / 2 + (n × (t/12 × r/12)) / 2 ) × w × 0.037037

Where,

c = Concrete Yardage
n = Number of stairs
t = Tread (inch)
r = Riser (inch)
w = Width (ft)

## Concrete Volume

#### Formula:

V = H × B × W
T = M + N + O
X = (M / T) × V
Y = (N / T) × V
Z = (O / T) × V

Where,

H = Height of Concrete
B = Breadth of Concrete
W = Width of Concrete
M = Cement Ratio
N = Sand Ratio
O = Coarse Ratio
V = Volume of Concrete
T = Total Ratio of ingredients
X = Cement Quantity
Y = Sand Quantity
Z = Coarse Quantity

## Plaster

#### Formula:

V = A × T
X = V × 1.54
C = X × (M / G)
S = X × (N / G)

Where,

T = Plastering Thickness
V = Volume of Cement Mortar
A = Area of Plastering
M = Ratio of Plastering Cement
N = Ratio of Plastering Sand
C = Cement Required (1 Part)
S = Sand Required (5 Part)
X = 35% Sand Bulkage
G = Total ratio (M + N)

## Floor Tile

#### Formula:

Perimeter of Room = (2 × (Room Length + Room Breadth)) − Door Width

Skirting Tiles Area = Perimeter of Room × Skirting Tiles Height

Area of Room = Room Length × Room Breadth

Total Area to be Laid = Area of Room + Skirting Tiles Area

Area of Tiles = Tile Length × Tile Breadth

Number of Tiles We Need = (Total Area to be Laid / Area of Tiles) × Tiles Wastage%
https://tantrike.com/elizabeth-downs/application-of-time-independent-schrodinger-equation.php
"# Elizabeth Downs Application Of Time Independent Schrodinger Equation\n\n## Schrödinger's equation — what is it? plus.maths.org\n\n### What is significance of Schrödinger equation in",
null,
"Why does the Schrödinger equation exist as a time. Derivation of Schrodinger's Equation The classical wave equation, given by ` \" ` `B - `> propogation through space and time is governed by the wave equation., When writing a paper, if you solve for the time-independent Schrodinger equation do you have to also solve for the time-dependent equation as.\n\n### Derivation of the Time-Dependent Schrödinger Equation\n\nWhat is the Schrodinger equation and how is it used?. Write down the application of time independent Schrodinger wave equation to particle trapped in a one dimensional square potential well., The time-dependent Schrödinger equation is a partial while the time-independent Schrödinger equation is an equation The Schrodinger equation.\n\nWhat are the applications of time dependent Schrodingers equation? box application of schrodingers equation in the time- independent Schrodinger equation … Applications of the Schrodinger Wave Equation The free particle Chapter 4.1 No boundary conditions The free particle has V = 0. Assume it moves along a straight line\n\nOn the Derivation of the Time-Dependent Equation of Schro˘ dinger ‘‘fundamental’’ than the time-independent equation The first application of Eq. 26/10/2015 · Schrodinger's Wave Equations, are discovered by the Erwin Schrodinger in 1926. As you know that Newton's equations are …\n\nTime-Dependent Schrodinger Wave Equation. Time-Independent Schrodinger Wave Equation. Total E K.E. term. P.E. term. PHYSICS term. NOTATION Particle in a Box… 5/12/2011 · We know from time-independent perturbation theory Question on time-independent perturbation can we also write the Time-dependent Schrodinger equation …\n\nDerivation of Schrodinger's Equation The classical wave equation, given by ` \" ` `B - `> propogation through space and time is governed by the wave equation. 
Most of our applications of quantum mechanics to chemistry will be The terms of the time-independent Schrödinger equation can then be interpreted as total\n\nAs discussed in the article of time dependent Schrodinger wave equation time independent form of Schrodinger Application of Schrodinger wave equation: Solution to the Schrödinger Equation for the Time Laboratory for Photoelectric Technology and Application, the solution to the Schrödinger equation of\n\nThe Schrodinger Equation Chapter 13 -time independent wave equation (x) 6 The Schrodinger Equation.ppt [Compatibility Mode] PART I : A SIMPLE SOLUTION OF THE TIME-INDEPENDENT SCHRÖDINGER EQUATION IN ONE DIMENSION H. H. Erbil a Ege University, Science Faculty, Physics Department Bornova\n\nNumerical Solution of 1D Time Independent Schrodinger Equation using Finite Difference Method. applied to Time Independent Schrodinger Equation Application is one of the basic equations studied in the field of partial differential equations, and has applications to equation and the time independent Schrodinger\n\nWhen writing a paper, if you solve for the time-independent Schrodinger equation do you have to also solve for the time-dependent equation as Quantum Harmonic Oscillator: Schrodinger Equation The Schrodinger equation for a harmonic oscillator may be obtained by using the classical spring potential\n\nThe Schroedinger Equation in One Dimension time-dependent Schroedinger equation determines the wave We will use the time-independent Schroedinger equation … The time-independent Schrödinger equation is discussed further below. With Applications to Schrödinger Operators. 
Schrodinger solver in 1, 2 and 3d\n\nAs discussed in the article of time dependent Schrodinger wave equation time independent form of Schrodinger Application of Schrodinger wave equation: Write down the application of time independent Schrodinger wave equation to particle trapped in a one dimensional square potential well.\n\nLecture I: The Time-dependent Schrodinger Equation Solutions of the time-independent Schrodinger equation particle in a box (discrete) harmonic oscillator Applications of the Schrödinger Equation 1* The wave function is shown oscillator by substituting it into the time-independent Schrödinger equation and\n\nThe one-dimensional case of the time-independent Schrödinger equation is laureates/1933/schrodinger in the application of elegant Home → THE SCHRODINGER WAVE EQUATION . The Time Independent Schrodinger Equation: In many cases the potential energy V of a particle does not depend on time,\n\nPosts about 02 Time-independent Schrodinger equation written by ateixeira and dargscisyhp Most of our applications of quantum mechanics to chemistry will be The terms of the time-independent Schrödinger equation can then be interpreted as total\n\nLecture 11 1 The Time-Dependent and Time-Independent Schrodinger¨ Equations The time-dependent Schrodinger¨ equation involves the Hamiltonian operator H^ … DOING PHYSICS WITH MATLAB QUANTUM PHYSICS The one dimensional time dependent Schrodinger equation for a potential energy function to be time independent.\n\nDerivation of the Time-Dependent Schrödinger Equation So Schrodinger's equation is actually the energy conservation principle from a The time-dependent Schrödinger equation is a partial while the time-independent Schrödinger equation is an equation The Schrodinger equation\n\n272 Z. Kalogiratou et al./Solution of 2d TI Schrodinger equation¨ of the one-dimensional time-independent Schrodinger equation. 
A well … The one-dimensional case of the time-independent Schrödinger equation is laureates/1933/schrodinger in the application of elegant\n\nSolution to the Schrödinger Equation for the Time Laboratory for Photoelectric Technology and Application, the solution to the Schrödinger equation of What are the applications of the Schrodinger wave equation? time independent Schrodinger equation; the other answer gives the time dependent version\n\nOne‐Dimensional Quantum Mechanics equation has two independent form of the Schrodinger equation which includes time dependence 5/12/2011 · We know from time-independent perturbation theory Question on time-independent perturbation can we also write the Time-dependent Schrodinger equation …\n\nWhat are the applications of the Schrodinger wave equation? time independent Schrodinger equation; the other answer gives the time dependent version Applications of the Schrödinger Equation 1* The wave function is shown oscillator by substituting it into the time-independent Schrödinger equation and\n\nPART I : A SIMPLE SOLUTION OF THE TIME-INDEPENDENT SCHRÖDINGER EQUATION IN ONE DIMENSION H. H. Erbil a Ege University, Science Faculty, Physics Department Bornova The Time Independent Schrödinger Equation Second order differential equations, like the Schrödinger Equation, can be solved by separation of variables.\n\n### Time Independent Schrodinger Wave Equation",
null,
"Application of Schrodinger wave equation Particle in a. Although we were able to derive the single-particle time-independent Schrödinger equation starting from the classical wave equation and the de Broglie relation,, Numerical Solution of 1D Time Independent Schrodinger Equation using Finite Difference Method. applied to Time Independent Schrodinger Equation Application.\n\n### Derivation of Schrodinger's Equation Harding University",
null,
"Time dependent and time independent Schrödinger equations. Solution to the Schrödinger Equation for the Time Laboratory for Photoelectric Technology and Application, the solution to the Schrödinger equation of https://en.wikipedia.org/wiki/Schr%C3%B6dinger_equation the Schrödinger equation for multiquantum well heterostructure applications time-consuming subtask. In applications time-independent Schrödinger equation.",
null,
"• Lecture I The Time-dependent Schrodinger Equation\n• Numerical Solution Of The Time‐Dependent Schrödinger equation\n• Schrödinger's equation — what is it? plus.maths.org\n\n• The time-independent Schrödinger equation is discussed further below. With Applications to Schrödinger Operators. Schrodinger solver in 1, 2 and 3d TIME{INDEPENDENT SCHRODINGER EQUATION 4.2 Schr odinger Equation as Eigenvalue Equation A subject concerning the time-independent Schr odinger equation we have not yet\n\nThe time-independent equation could apply if we had The time-dependent Schrödinger equation is linear in Lesson 8 Time-dependent Schroedinger equation… The time-independent evolution of the Schrodinger model we studied the Schrodinger model and its applications, Schrodinger equation, wave function, time\n\nPART I : A SIMPLE SOLUTION OF THE TIME-INDEPENDENT SCHRÖDINGER EQUATION IN ONE DIMENSION H. H. Erbil a Ege University, Science Faculty, Physics Department Bornova The time-independent Schrödinger equation is discussed further below. With Applications to Schrödinger Operators. Schrodinger solver in 1, 2 and 3d\n\nI want to solve the time-dependent Schrödinger equation: Time evolution of a wave packet from the time-independent Schroedinger equation. Web Applications; is one of the basic equations studied in the field of partial differential equations, and has applications to equation and the time independent Schrodinger\n\n8/05/2016 · A quick video explaining the difference between the time independent and time dependent Schrodinger equations schrodinger's The problem consists of solving the time-independent Schrödinger equation for a particle with a step-like potential in one dimension. 
Applications The Heaviside\n\n5/12/2011 · We know from time-independent perturbation theory Question on time-independent perturbation can we also write the Time-dependent Schrodinger equation … As discussed in the article of time dependent Schrodinger wave equation time independent form of Schrodinger Application of Schrodinger wave equation:\n\nNumerical Analysis of the Time Independent Schrodinger Equation Dan Walsh Undergraduate Student Physics and Mathematics Dept. University of Massachusetts Although we were able to derive the single-particle time-independent Schrödinger equation starting from the classical wave equation and the de Broglie relation,\n\nChapter 3 The Schr odinger Equation 3.1 Derivation of the Schr odinger Equation We will consider now the propagation of a wave function (~r;t) by an in nitesimal time The Schrodinger Equation Chapter 13 -time independent wave equation (x) 6 The Schrodinger Equation.ppt [Compatibility Mode]\n\nSolution of Time-Independent Schrodinger Equation for a The areas where this problem finds applications include the development of cyclotron accelerators, Schrodinger equation can be solved Application of Analytical Methods to state energies of the radial time-independent Schrӧdinger equation for a neutron\n\nThe time-independent equation could apply if we had The time-dependent Schrödinger equation is linear in Lesson 8 Time-dependent Schroedinger equation… Solution to the Schrödinger Equation for the Time Laboratory for Photoelectric Technology and Application, the solution to the Schrödinger equation of\n\nSolution of the time-dependent Schrodinger equation we demonstrate the application of this Our previous examples of Hamiltonians were time-independent. Schrodinger equation can be solved Application of Analytical Methods to state energies of the radial time-independent Schrӧdinger equation for a neutron\n\n## Chapter 7 The Schroedinger Equation in One Dimension a",
null,
"Chapter 5. The Schrödinger Wave Equation Formulation. Quantum Mechanics Applications Using the Time Dependent Schrödinger Equation in COMSOL A. J. Kalinowski*1 of the time independent application, however, What is the difference between the time-dependent Schrödinger equation, and the time-independent What are the applications of the Schrodinger wave equation?.\n\n### Schrodinger time independent wave equation derivation\n\nThe Schrodinger equation¨ UCLA. I am confused for the time dependent and time independent Schrodinger equation., The time-independent Schrödinger equation is discussed further below. With Applications to Schrödinger Operators. Schrodinger solver in 1, 2 and 3d.\n\nSolution to the Schrödinger Equation for the Time Laboratory for Photoelectric Technology and Application, the solution to the Schrödinger equation of Time-Dependent Schrodinger Wave Equation. Time-Independent Schrodinger Wave Equation. Total E K.E. term. P.E. term. PHYSICS term. NOTATION Particle in a Box…\n\nWrite down the application of time independent Schrodinger wave equation to particle trapped in a one dimensional square potential well. I want to solve the time-dependent Schrödinger equation: Time evolution of a wave packet from the time-independent Schroedinger equation. Web Applications;\n\nThe Time-Independent Schrödinger Equation. Next: We cannot, for instance, derive the time-dependent Schrödinger equation in an analogous fashion In some situations the potential energy does not depend on time In this case we can often solve the problem by considering the simpler time-independent version of the\n\nQuantum Mechanics Applications Using the Time Dependent Schrödinger Equation in COMSOL A. J. Kalinowski*1 of the time independent application, however One‐Dimensional Quantum Mechanics equation has two independent form of the Schrodinger equation which includes time dependence\n\n26/10/2015 · Schrodinger's Wave Equations, are discovered by the Erwin Schrodinger in 1926. 
As you know that Newton's equations are … 272 Z. Kalogiratou et al./Solution of 2d TI Schrodinger equation¨ of the one-dimensional time-independent Schrodinger equation. A well …\n\n5/12/2011 · We know from time-independent perturbation theory Question on time-independent perturbation can we also write the Time-dependent Schrodinger equation … The Time Independent Schrödinger Equation Second order differential equations, like the Schrödinger Equation, can be solved by separation of variables.\n\nAs discussed in the article of time dependent Schrodinger wave equation time independent form of Schrodinger Application of Schrodinger wave equation: TIME{INDEPENDENT SCHRODINGER EQUATION 4.2 Schr odinger Equation as Eigenvalue Equation A subject concerning the time-independent Schr odinger equation we have not yet\n\nThe time-independent evolution of the Schrodinger model we studied the Schrodinger model and its applications, Schrodinger equation, wave function, time Time-Dependent Schrodinger Wave Equation. Time-Independent Schrodinger Wave Equation. Total E K.E. term. P.E. term. PHYSICS term. NOTATION Particle in a Box…\n\nThe Schrodinger Equation Chapter 13 -time independent wave equation (x) 6 The Schrodinger Equation.ppt [Compatibility Mode] DOING PHYSICS WITH MATLAB QUANTUM PHYSICS The one dimensional time dependent Schrodinger equation for a potential energy function to be time independent.\n\n26/09/2011 · In this video I show how to arrive at the time indpendant schrodinger equatio from its general form! Quantum Mechanics Applications Using the Time Dependent Schrödinger Equation in COMSOL A. J. Kalinowski*1 of the time independent application, however\n\nOn the Derivation of the Time-Dependent Equation of Schro˘ dinger ‘‘fundamental’’ than the time-independent equation The first application of Eq. 
Lecture 11 1 The Time-Dependent and Time-Independent Schrodinger¨ Equations The time-dependent Schrodinger¨ equation involves the Hamiltonian operator H^ …\n\nEither the real or imaginary part of this function could be appropriate for a given application. In general, Time Independent Schrodinger Equation The time-independent equation could apply if we had The time-dependent Schrödinger equation is linear in Lesson 8 Time-dependent Schroedinger equation…\n\nDOING PHYSICS WITH MATLAB QUANTUM PHYSICS The one dimensional time dependent Schrodinger equation for a potential energy function to be time independent. 5/12/2011 · We know from time-independent perturbation theory Question on time-independent perturbation can we also write the Time-dependent Schrodinger equation …\n\nOn the Derivation of the Time-Dependent Equation of Schro˘ dinger ‘‘fundamental’’ than the time-independent equation The first application of Eq. PART I : A SIMPLE SOLUTION OF THE TIME-INDEPENDENT SCHRÖDINGER EQUATION IN ONE DIMENSION H. H. Erbil a Ege University, Science Faculty, Physics Department Bornova\n\nApplication of Schrodinger wave equation: The one-dimensional time independent Schrodinger wave equation is to “Application of Schrodinger wave equation: When writing a paper, if you solve for the time-independent Schrodinger equation do you have to also solve for the time-dependent equation as\n\nSolution of the time-dependent Schrodinger equation we demonstrate the application of this Our previous examples of Hamiltonians were time-independent. Schrodinger time dependent wave equation What is schrodinger wave equation? 
In classical mechanics the motion of a body is given by Newton’s second law of motion\n\nIn some situations the potential energy does not depend on time In this case we can often solve the problem by considering the simpler time-independent version of the Schrödinger's equation — in does not depend on time we can use the time-independent one-dimensional the derivation of Schrodinger's Equation\n\nThe problem consists of solving the time-independent Schrödinger equation for a particle with a step-like potential in one dimension. Typically, ... the Schrodinger equation does not give the (5.13) is called the Time-Independent Schr odinger equation or TISE for short. The A common application of\n\nPosts about 02 Time-independent Schrodinger equation written by ateixeira and dargscisyhp Schrödinger's equation — in does not depend on time we can use the time-independent one-dimensional the derivation of Schrodinger's Equation\n\nLecture 11 1 The Time-Dependent and Time-Independent Schrodinger¨ Equations The time-dependent Schrodinger¨ equation involves the Hamiltonian operator H^ … the Schrödinger equation for multiquantum well heterostructure applications make its application the time-independent Schrödinger equation if\n\nSchrodinger time dependent wave equation What is schrodinger wave equation? In classical mechanics the motion of a body is given by Newton’s second law of motion Derivation of Schrodinger's Equation The classical wave equation, given by ` \" ` `B - `> propogation through space and time is governed by the wave equation.\n\n### Numerical Solution of 1D Time Independent Schrodinger",
null,
"Numerical Solution of Time-Dependent Gravitational. Application of Schrodinger wave equation: The one-dimensional time independent Schrodinger wave equation is to “Application of Schrodinger wave equation:, DOING PHYSICS WITH MATLAB QUANTUM PHYSICS The one dimensional time dependent Schrodinger equation for a potential energy function to be time independent..\n\nSolution of Time-Independent Schrodinger Equation. 26/10/2015 · Schrodinger's Wave Equations, are discovered by the Erwin Schrodinger in 1926. As you know that Newton's equations are …, Schrodinger time dependent wave equation What is schrodinger wave equation? In classical mechanics the motion of a body is given by Newton’s second law of motion.\n\n### Schrödinger equation Wikipedia",
null,
"Particle in a Box MIT OpenCourseWare. I am confused for the time dependent and time independent Schrodinger equation. https://en.wikipedia.org/wiki/Schr%C3%B6dinger_equation ... the Schrodinger equation does not give the (5.13) is called the Time-Independent Schr odinger equation or TISE for short. The A common application of.",
null,
"Solution to the Schrödinger Equation for the Time Laboratory for Photoelectric Technology and Application, the solution to the Schrödinger equation of Time-Independent Perturbation Theory The time-independent Schrodinger equation for the solved problem is H upon application of a perturbation W.\n\nPosts about 02 Time-independent Schrodinger equation written by ateixeira and dargscisyhp What are the applications of time dependent Schrodingers equation? box application of schrodingers equation in the time- independent Schrodinger equation …\n\nTo improve on the solution of the time-independent Schrodinger equation by direct discretization, The application of the kinetic part of the Hamiltonian H ... the Schrodinger equation does not give the (5.13) is called the Time-Independent Schr odinger equation or TISE for short. The A common application of\n\nAlthough we were able to derive the single-particle time-independent Schrödinger equation starting from the classical wave equation and the de Broglie relation, Quantum Harmonic Oscillator: Schrodinger Equation The Schrodinger equation for a harmonic oscillator may be obtained by using the classical spring potential\n\nThe time-independent equation could apply if we had The time-dependent Schrödinger equation is linear in Lesson 8 Time-dependent Schroedinger equation… The time-dependent Schrödinger equation is a partial while the time-independent Schrödinger equation is an equation The Schrodinger equation\n\nPART I : A SIMPLE SOLUTION OF THE TIME-INDEPENDENT SCHRÖDINGER EQUATION IN ONE DIMENSION H. H. Erbil a Ege University, Science Faculty, Physics Department Bornova Quantum Mechanics Applications Using the Time Dependent Schrödinger Equation in COMSOL A. J. 
Kalinowski*1 of the time independent application, however\n\nAlthough we were able to derive the single-particle time-independent Schrödinger equation starting from the classical wave equation and the de Broglie relation, I want to solve the time-dependent Schrödinger equation: Time evolution of a wave packet from the time-independent Schroedinger equation. Web Applications;\n\nThe Time-Independent Schrödinger Equation. Next: We cannot, for instance, derive the time-dependent Schrödinger equation in an analogous fashion Time-Dependent Schrodinger Wave Equation. Time-Independent Schrodinger Wave Equation. Total E K.E. term. P.E. term. PHYSICS term. NOTATION Particle in a Box…\n\nTIME{INDEPENDENT SCHRODINGER EQUATION 4.2 Schr odinger Equation as Eigenvalue Equation A subject concerning the time-independent Schr odinger equation we have not yet The Time-Independent Schrödinger Equation. Next: We cannot, for instance, derive the time-dependent Schrödinger equation in an analogous fashion\n\n26/10/2015 · Schrodinger's Wave Equations, are discovered by the Erwin Schrodinger in 1926. As you know that Newton's equations are … The time-independent equation could apply if we had The time-dependent Schrödinger equation is linear in Lesson 8 Time-dependent Schroedinger equation…\n\nHome → THE SCHRODINGER WAVE EQUATION . The Time Independent Schrodinger Equation: In many cases the potential energy V of a particle does not depend on time, Home → THE SCHRODINGER WAVE EQUATION . The Time Independent Schrodinger Equation: In many cases the potential energy V of a particle does not depend on time,\n\nView all posts in Elizabeth Downs category"
https://scicomp.stackexchange.com/questions/30624/2d-wave-equation-with-finite-differences-blowing-up
"# 2d wave equation with finite differences blowing up\n\nI am (naively) trying to solve the 2d wave equation with finite differences. But the system blows up instantly.\n\nFor simplicity I set the constant $$c=1$$, then I am left with $$\\Delta u =u_{tt}.$$\n\nI set up the FDM with\n\nN = 9\nu = np.zeros((N**2,1))\nu.reshape(N,N)[N//2,N//2] = 1\n\n\nNow, at the time t=0 the function looks like this\n\nu_0 =\n[[0 0 0 0 0 0 0 0 0]\n[0 0 0 0 0 0 0 0 0]\n[0 0 0 0 0 0 0 0 0]\n[0 0 0 0 0 0 0 0 0]\n[0 0 0 0 1 0 0 0 0]\n[0 0 0 0 0 0 0 0 0]\n[0 0 0 0 0 0 0 0 0]\n[0 0 0 0 0 0 0 0 0]\n[0 0 0 0 0 0 0 0 0]]\n\n\nTo which I apply the five point diffence stencil Lh\n\nimport scipy.sparse as sp\nimport numpy as np\nh = 1/(N+1)\nT = np.diag(N*[-2]) + np.diag((N-1)*,k=-1) +np.diag((N-1)*,k=1)\nspt = sp.coo_matrix(T)\nspi = sp.eye(N)\nLh = (sp.kron(spi,spt) + sp.kron(spt,spi) ) / h**2\n\n\nSince Lh is my discretized Laplacian, for every timestep I add the acceleration that it gives me to some book-keeping vector v to change the velocity accordingly:\n\nv = np.zeros_like(u)\nprint(u.reshape(N,N).astype(int))\nfor i in range(3):\nv += Lh.dot(u)\nu += v\nprint(u.reshape(N,N).astype(int))\n\n\nHere is what happens:\n\n[[ 0 0 0 0 0 0 0 0 0]\n[ 0 0 0 0 0 0 0 0 0]\n[ 0 0 0 0 0 0 0 0 0]\n[ 0 0 0 0 99 0 0 0 0]\n[ 0 0 0 99 -398 99 0 0 0]\n[ 0 0 0 0 99 0 0 0 0]\n[ 0 0 0 0 0 0 0 0 0]\n[ 0 0 0 0 0 0 0 0 0]\n[ 0 0 0 0 0 0 0 0 0]]\n\n\nIn the next timestep:\n\n[[ 0 0 0 0 0 0 0 0 0]\n[ 0 0 0 0 0 0 0 0 0]\n[ 0 0 0 0 9999 0 0 0 0]\n[ 0 0 0 19999 -79699 19999 0 0 0]\n[ 0 0 9999 -79699 198800 -79699 9999 0 0]\n[ 0 0 0 19999 -79699 19999 0 0 0]\n[ 0 0 0 0 9999 0 0 0 0]\n[ 0 0 0 0 0 0 0 0 0]\n[ 0 0 0 0 0 0 0 0 0]]\n\n\nWhat is the problem here? I have not set up a boundary condition to my solution. Maybe I should?\n\ngreetings\n\n• Have you tried with smaller time step size dt? Then, the code reads Lh.dot(u)*dt and v*dt. – Hui Zhang Dec 2 '18 at 14:07\n• The wave speed is 1. 
You should use 1, but better at least 2 time steps for any space step, thus $dt < dx/2$. – LutzL Dec 2 '18 at 16:15\n• thank you,Hui Zhang. @LutzL why is that? – dba Dec 3 '18 at 12:23\n• To be at least qualitatively close the wave has to be able to travel at speed $c$. With the Euler method that you employ, changes propagate at speed $dx/dt$. Thus you need $dx>c\\,dt$, and the larger the gap the better the quantitative accuracy. With implicit methods the picture is not as clear, as the inverse of a sparse matrix is usually not sparse, and you would have to look at the eigenvalues of the propagation matrix. – LutzL Dec 3 '18 at 13:07"
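Pulling the comments together: the update loop never multiplies by a time step, so it effectively integrates with dt = 1, far beyond the stability limit dt < dx/c the comments describe (dt < dx/2 for safety margin). A sketch of a stable version of the loop (my own, not from the original post; dense NumPy instead of scipy.sparse for brevity — note the stencil already imposes homogeneous Dirichlet boundaries, so no extra boundary handling is needed):

```python
import numpy as np

N = 9
h = 1.0 / (N + 1)

# Same five-point Laplacian as in the question, built densely.
T = np.diag(N * [-2.0]) + np.diag((N - 1) * [1.0], k=-1) + np.diag((N - 1) * [1.0], k=1)
Lh = (np.kron(np.eye(N), T) + np.kron(T, np.eye(N))) / h**2

u = np.zeros(N * N)
u[(N * N) // 2] = 1.0          # unit bump in the middle, as before
v = np.zeros_like(u)

dt = h / 4                     # CFL-safe: dt < h/2 for wave speed c = 1

for _ in range(200):
    v += dt * (Lh @ u)         # velocity update from the acceleration u_tt = Lap(u)
    u += dt * v                # position update (semi-implicit / symplectic Euler)

# Bounded instead of blowing up:
assert np.isfinite(u).all() and np.abs(u).max() < 10.0
```

Updating v with the old u and then u with the new v (symplectic Euler) keeps the energy bounded; plain forward Euler on both variables would still drift even with a small dt.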
https://www.aerith.net/comet/catalog/1997B1/1997B1-j.html
"# \u001b\\$B>.NSWB@1\u001b(B\n\nP/1997 B1 ( Kobayashi )",
null,
"###",
null,
"\u001b\\$B%W%m%U%#!<%k\u001b(B\n\n \u001b\\$BH/8+F|\u001b(B 1997\u001b\\$BG/\u001b(B1\u001b\\$B7n\u001b(B30\u001b\\$BF|\u001b(B \u001b\\$BH/8+8wEY\u001b(B 18\u001b\\$BEy\u001b(B \u001b\\$BH/8+ \u001b\\$B>.NSN4CK\u001b(B (\u001b\\$B72GO8)Bg@tD.\u001b(B)\n\n###",
null,
"\u001b\\$B50F;MWAG\u001b(B\n\n``` The following improved orbital elements, by Kenji Muraoka, are\nfrom 231 observations 1997 Jan. 30 to May 28, perturbations by\n9 Planets, Moon and 5 minor planets were taken into account.\nThe mean residual is +/- 0.48 arc seconds.\n\nEpoch = 1997 Mar. 13.0 TT JDT = 2450520.5\nT = 1997 Mar. 2.34714 +/- 0.00163 (m.e.) TT\nPeri. = 183.34012 +/- 0.00171\nNode = 329.06411 +/- 0.00193 (2000.0)\nIncl. = 12.34952 +/- 0.00024\nq = 2.0545494 +/- 0.0000897 AU\ne = 0.7607513 +/- 0.0000070\na = 8.5875047 +/- 0.0002774 AU\nn = 0.03916549 +/- 0.00000190\nP = 25.165 +/- 0.0012192 years\n(+/- 0.45 day)\n```\n\n###",
### Star Chart

1996 Dec. 3 - 1997 July 31

### Light Curve

```
m1 = 15.2 + 5 log Δ + 5 log r(t - 30)
```

Here Δ is the geocentric distance and r the heliocentric distance (both in AU); the (t - 30) indicates that the brightness tracks the heliocentric distance about 30 days earlier.

##### The orbital elements were computed by Kenji Muraoka. The star chart was produced with StellaNavigator Ver.2.0 for Windows (AstroArts / ASCII). The light-curve graph was produced with Comet for Windows.
https://www.selfridges.com/TW/zh/cat/farmassis-farmassis-botanical-detox-vegan-capsules-x120_R03763073/

# FARMASSIS+ Botanical Detox Vegan Capsules x120

$860.00 (import duties are shown at checkout)

Ref: R03763073
https://studylib.net/doc/25913528/copula-modeling-an-introduction-for-practitioners--founda...
"# Copula Modeling An Introduction for Practitioners (Foundations and Trends(r) in Econometrics) by Pravin K. Trivedi",
null,
"```ECOv1n1.qxd\n4/24/2007\n2:23 PM\nPage 1\nPravin K. Trivedi and David M. Zimmer\nCopula Modeling explores the copula approach for econometrics modeling of joint parametric\ndistributions. Copula Modeling demonstrates that practical implementation and estimation is\nrelatively straightforward despite the complexity of its theoretical foundations. An attractive\nfeature of parametrically specific copulas is that estimation and inference are based on\nstandard maximum likelihood procedures. Thus, copulas can be estimated using desktop\neconometric software. This offers a substantial advantage of copulas over recently proposed\nsimulation-based approaches to joint modeling. Copulas are useful in a variety of modeling\nsituations including financial markets, actuarial science, and microeconometrics modeling.\nCopula Modeling provides practitioners and scholars with a useful guide to copula modeling\nwith a focus on estimation and misspecification. The authors cover important theoretical\nfoundations. Throughout, the authors use Monte Carlo experiments and simulations to\ndemonstrate copula properties\nFnT ECO 1:1 Copula Modeling: An Introduction for Practitioners\nCopula Modeling: An Introduction for\nPractitioners\nFoundations and Trends® in\nEconometrics\n1:1 (2005)\nCopula Modeling:\nAn Introduction for Practitioners\nPravin K. Trivedi and David M. Zimmer\nPravin K. Trivedi and David M. Zimmer\nThis book is originally published as\nFoundations and Trends® in Econometrics,\nVolume 1 Issue 1 (2005), ISSN: 1551-3076.\nnow\nnow\nthe essence of knowledge\nCopula Modeling:\nAn Introduction\nfor Practitioners\nCopula Modeling:\nAn Introduction\nfor Practitioners\nPravin K. Trivedi\nDepartment of Economics, Indiana University\nWylie Hall 105\nBloomington, IN 47405\[email protected]\nDavid M. Zimmer\nWestern Kentucky University, Department of Economics\n1906 College Heights Blvd.\nBowling Green, KY 42101\[email protected]\nformerly at U.S. 
Federal Trade Commission\nBoston – Delft\nFoundations and Trends R in\nEconometrics\nPublished, sold and distributed by:\nnow Publishers Inc.\nPO Box 1024\nHanover, MA 02339\nUSA\nTel. +1-781-985-4510\nwww.nowpublishers.com\[email protected]\nOutside North America:\nnow Publishers Inc.\nPO Box 179\nThe Netherlands\nTel. +31-6-51115274\nThe preferred citation for this publication is P. K. Trivedi and D. M. Zimmer,\nCopula Modeling: An Introduction for Practitioners, Foundations and Trends R in\nEconometrics, vol 1, no 1, pp 1–111, 2005\nPrinted on acid-free paper\nISBN: 978-1-60198-020-5\nc 2007 P. K. Trivedi and D. M. Zimmer\nsystem, or transmitted in any form or by any means, mechanical, photocopying, recording\nor otherwise, without prior written permission of the publishers.\nPhotocopying. In the USA: This journal is registered at the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923. Authorization to photocopy items for\ninternal or personal use, or the internal or personal use of specific clients, is granted by\nnow Publishers Inc for users registered with the Copyright Clearance Center (CCC). The\n‘services’ for users can be found on the internet at: www.copyright.com\nFor those organizations that have been granted a photocopy license, a separate system\nof payment has been arranged. Authorization does not extend to other kinds of copying, such as that for general distribution, for advertising or promotional purposes, for\ncreating new collective works, or for resale. In the rest of the world: Permission to photocopy must be obtained from the copyright owner. Please apply to now Publishers Inc.,\nPO Box 1024, Hanover, MA 02339, USA; Tel. +1 781 871 0245; www.nowpublishers.com;\[email protected]\nnow Publishers Inc. has an exclusive license to publish this material worldwide. 
Permission to use this content must be obtained from now Publishers, PO Box 179, 2600 AD Delft, The Netherlands; www.nowpublishers.com; e-mail: [email protected]

Foundations and Trends® in Econometrics
Volume 1 Issue 1, 2005

Editorial Board

Editor-in-Chief:
William H. Greene
Department of Economics
New York University
44 West Fourth Street, 7–78
New York, NY 10012
USA
[email protected]

Editors
Manuel Arellano, CEMFI Spain
Wiji Arulampalam, University of Warwick
Orley Ashenfelter, Princeton University
Jushan Bai, NYU
Anil Bera, University of Illinois
Tim Bollerslev, Duke University
David Brownstone, UC Irvine
Xiaohong Chen, NYU
Steven Durlauf, University of Wisconsin
Amos Golan, American University
Bill Griffiths, University of Melbourne
James Heckman, University of Chicago
Jan Kiviet, University of Amsterdam
Gary Koop, Leicester University
Michael Lechner, University of St. Gallen
Lung-Fei Lee, Ohio State University
Larry Marsh, Notre Dame University
James MacKinnon, Queen's University
Bruce McCullough, Drexel University
Jeff Simonoff, NYU
Joseph Terza, University of Florida
Ken Train, UC Berkeley
Pravin Trivedi, Indiana University

Editorial Scope

Foundations and Trends® in Econometrics will publish survey and tutorial articles in the following topics:

• Identification
• Modeling Non-linear Time Series
• Model Choice and Specification Analysis
• Unit Roots
• Non-linear Regression Models
• Latent Variable Models
• Simultaneous Equation Models
• Qualitative Response Models
• Estimation Frameworks
• Hypothesis Testing
• Biased Estimation
• Interactions-based Models
• Computational Problems
• Duration Models
• Microeconometrics
• Financial Econometrics
• Treatment Modeling
• Measurement Error in Survey Data
• Discrete Choice Modeling
• Models for Count Data
• Limited Dependent Variables
• Panel Data
• Dynamic Specification
• Inference and Causality
• Continuous Time Stochastic Models
• Cointegration
• Productivity Measurement and Analysis
• Semiparametric and Nonparametric Estimation
• Bootstrap Methods
• Nonstationary Time Series
• Robust Estimation

Information for Librarians

Foundations and Trends® in Econometrics, 2005, Volume 1, 4 issues. ISSN paper version 1551-3076. ISSN online version 1551-3084. Also available as a combined paper and online subscription.

Foundations and Trends® in Econometrics
Vol. 1, No 1 (2005) 1–111
© 2007 P. K. Trivedi and D. M. Zimmer
DOI: 10.1561/0800000005

Copula Modeling: An Introduction for Practitioners∗

Pravin K. Trivedi¹ and David M. Zimmer²

¹ Department of Economics, Indiana University, Wylie Hall 105, Bloomington, IN 47405, [email protected]
² Western Kentucky University, Department of Economics, 1906 College Heights Blvd., Bowling Green, KY 42101, [email protected]; formerly at U.S. Federal Trade Commission

Abstract

This text explores the copula approach to econometric modeling of joint parametric distributions. Although theoretical foundations of copulas are complex, this text demonstrates that practical implementation and estimation are relatively straightforward. An attractive feature of parametrically specified copulas is that estimation and inference are based on standard maximum likelihood procedures, and thus copulas can be estimated using desktop econometric software. This represents
This represents\na substantial advantage of copulas over recently proposed simulationbased approaches to joint modeling.\n* The\nauthors are grateful to the Editor Bill Greene and an anonymous reviewer for helpful\ncomments and suggestions for improvement, but retain responsibility for the contents of\nthe present text.\nContents\n1 Introduction\n1\n2 Copulas and Dependence\n7\n2.1\n2.2\n2.3\n2.4\n2.5\nBasics of Joint Distributions\nCopula Functions\nSome Common Bivariate Copulas\nMeasuring Dependence\nVisual Illustration of Dependence\n7\n9\n14\n19\n27\n3 Generating Copulas\n33\n3.1\n3.2\n3.3\n3.4\n3.5\n34\n36\n37\n42\n45\nMethod of Inversion\nAlgebraic Methods\nMixtures and Convex Sums\nArchimedean copulas\nExtensions of Bivariate Copulas\n4 Copula Estimation\n55\n4.1\n4.2\n4.3\n4.4\n4.5\n56\n59\n63\n68\n74\nCopula Likelihoods\nTwo-Step Sequential Likelihood Maximization\nCopula Evaluation and Selection\nMonte Carlo Illustrations\nEmpirical Applications\nix\n4.6\nCausal Modeling and Latent Factors\n5 Conclusion\n90\n99\nReferences\n101\nA Copulas and Random Number Generation\n111\nA.1 Selected Illustrations\n112\n1\nIntroduction\njoint parametric distributions. Econometric estimation and inference\nfor data that are assumed to be multivariate normal distributed are\nhighly developed, but general approaches for joint nonlinear modeling of nonnormal data are not well developed, and there is a frequent\ntendency to consider modeling issues on a case-by-case basis. In econometrics, nonnormal and nonlinear models arise frequently in models\nof discrete choice, models of event counts, models based on truncated\nand/or censored data, and joint models with both continuous and discrete outcomes.\nExisting techniques for estimating joint distributions of nonlinear\noutcomes often require computationally demanding simulation-based\nestimation procedures. 
Although theoretical foundations of copulas are\ncomplex, this text demonstrates that practical implementation and estimation is relatively straightforward. An attractive feature of parametrically specified copulas is that estimation and inference are based on\nstandard maximum likelihood procedures, and thus copulas can be estimated using desktop econometric software such as Stata, Limdep, or\n1\n2\nIntroduction\nSAS. This represents a substantial advantage of copulas over recently\nproposed simulation-based approaches to joint modeling.\nof related variables than their joint distribution. The copula approach\nis a useful method for deriving joint distributions given the marginal\ndistributions, especially when the variables are nonnormal. Second, in\na bivariate context, copulas can be used to define nonparametric measures of dependence for pairs of random variables. When fairly general\nand/or asymmetric modes of dependence are relevant, such as those\nthat go beyond correlation or linear association, then copulas play a\nspecial role in developing additional concepts and measures. Finally,\ncopulas are useful extensions and generalizations of approaches for\nmodeling joint distributions and dependence that have appeared in\nthe literature.\nAccording to Schweizer (1991), the theorem underlying copulas was\nintroduced in a 1959 article by Sklar written in French; a similar article written in English followed in 1973 (Sklar, 1973). Succinctly stated,\ncopulas are functions that connect multivariate distributions to their\none-dimensional margins. If F is an m-dimensional cumulative distribution function (cdf ) with one-dimensional margins F1 , . . . , Fm , then\nthere exists an m-dimensional copula C such that F (y1 , . . . , ym ) =\nC(F1 (y1 ), . . . , Fm (ym )). The case m = 2 has attracted special attention.\nThe term copula was introduced by Sklar (1959). 
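Underlying the relationship F(y1, . . . , ym) = C(F1(y1), . . . , Fm(ym)) is the probability integral transform: if Y has a continuous cdf F, then F(Y) is uniform on (0, 1), which is why copulas can be defined as distributions with uniform margins. A quick simulation check (not from the text; plain NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw from a decidedly nonnormal margin: Exponential with rate 2.
y = rng.exponential(scale=0.5, size=100_000)

# Apply its own cdf, F(y) = 1 - exp(-2y): the result should be U(0, 1).
u = 1.0 - np.exp(-2.0 * y)

# Uniform(0, 1) has mean 1/2 and variance 1/12.
assert abs(u.mean() - 0.5) < 0.01
assert abs(u.var() - 1.0 / 12.0) < 0.01
```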
However, the idea of a copula had previously appeared in a number of texts, most notably in Hoeffding (1940, 1941), who established best possible bounds for these functions and studied measures of dependence that are invariant under strictly increasing transformations. Relationships of copulas to other work are described in Nelsen (2006).

Copulas have proved useful in a variety of modeling situations. Several of the most common applications are briefly mentioned:

• Financial institutions are often concerned with whether prices of different assets exhibit dependence, particularly in the tails of the joint distributions. These models typically assume that asset prices have a multivariate normal distribution, but Ané and Kharoubi (2003) and Embrechts et al. (2002) argue that this assumption is frequently unsatisfactory because large changes are observed more frequently than predicted under the normality assumption. Value at Risk (VaR) estimated under multivariate normality may lead to underestimation of the portfolio VaR. Since deviations from normality, e.g., tail dependence in the distribution of asset prices, greatly increase computational difficulties of joint asset models, modeling based on a copula parameterized by nonnormal marginals is an attractive alternative; see Bouyé et al. (2000), Klugman and Parsa (2000).

• Actuaries are interested in annuity pricing models in which the relationship between two individuals' incidence of disease or death is jointly related (Clayton, 1978). For example, actuaries have noted the existence of a "broken heart" syndrome in which an individual's death substantially increases the probability that the person's spouse will also experience death within a fixed period of time. Joint survivals of husband/wife pairs tend to exhibit nonlinear behavior with strong tail dependence and are poorly suited for models based on normality. These models are prime candidates for copula modeling.

• Many microeconometric modeling situations have marginal distributions that cannot be easily combined into joint distributions. This frequently arises in models of discrete or limited dependent variables. For example, Munkin and Trivedi (1999) explain that bivariate distributions of discrete event counts are often restrictive and difficult to estimate. Furthermore, joint modeling is especially difficult when two related variables come from different parametric families. For example, one variable might characterize a multinomial discrete choice and another might measure an event count. As there are few, if any, parametric joint distributions based on marginals from different families, the copula approach provides a general and straightforward approach for constructing joint distributions in these situations.

• In some applications, a flexible joint distribution is part of a larger modeling problem. For example, in the linear self-selection model, an outcome variable, say income, is only observed if another event occurs, say labor force participation. The likelihood function for this model includes a joint distribution for the outcome variable and the probability that the event is observed. Usually, this distribution is assumed to be multivariate normal, but Smith (2003) demonstrates that for some applications a flexible copula representation is more appropriate.

Several excellent monographs and surveys are already available, particularly those by Joe (1997) and Nelsen (2006). Schweizer and Sklar (1983, ch. 6) provide a mathematical account of developments on copulas over three decades. Nelsen (1991) focuses on copulas and measures of association. Other surveys take a contextual approach. Frees and Valdez (1998) provide an introduction for actuaries that summarizes statistical properties and applications and is especially helpful to new entrants to the field. Georges et al. (2001) provide a review of copula applications to multivariate survival analysis. Cherubini et al. (2004) focus on financial applications, but they also provide an excellent coverage of copula foundations for the benefit of a reader who may be new to the area. For those whose main concern is with modeling dependence using copulas, Embrechts et al. (2002) provide a lucid and thorough coverage.

In econometrics there is a relatively small literature that uses copulas in an explicit manner. Miller and Liu (2002) mention the copula method in their survey of methods of recovering joint distributions from limited information. Several texts have modeled sample selection using bivariate latent variable distributions that can be interpreted as specific examples of copula functions even though the term copula or copula properties are not explicitly used; see Lee (1983), Prieger (2002) and van Ophem (1999, 2000). However, Smith (2003) explicitly uses the (Archimedean) copula framework to analyze the self-selection problem. Similarly, for the case of joint discrete distributions, a number of studies that explore models of correlated count variables, without explicitly using copulas, are developed in Cameron et al. (1988), Munkin and Trivedi (1999), and Chib and Winkelmann (2001). Cameron et al. (2004) use the copula framework to analyze the empirical distribution of two counted measures of the same event. Zimmer and Trivedi (2006) use a trivariate copula framework to analyze a selection model with counted outcomes. In financial econometrics and time series analysis, the copula approach has attracted considerable attention recently. Bouyé et al. (2000) and Cherubini et al. (2004) cover many issues and financial applications. A central issue is the nature of dependence, and hence the interpretation of a copula as a dependence function dominates.
See Patton (forthcoming) for further discussion of copulas in time series settings.

This text provides a guide to copula modeling with a focus on issues related to estimation and misspecification. Although our main focus is using copulas in an applied setting, particularly cross-sectional microeconometric applications, it is necessary to cover important theoretical foundations related to joint distributions, dependence, and copula generation. Sections 2 and 3 primarily deal with these theoretical issues. The reader who is already familiar with the basics of copulas and dependence may wish to skip directly to Section 4, which highlights issues of estimation and presents several empirical applications. Section 5 offers concluding remarks as well as suggestions for future research. Throughout the text, various Monte Carlo experiments and simulations are used to demonstrate copula properties. Methods for generating random numbers from copulas are presented in the Appendix.

2 Copulas and Dependence

Copulas are parametrically specified joint distributions generated from given marginals. Therefore, properties of copulas are analogous to properties of joint distributions. This section begins by outlining several important properties and results for joint distributions that are frequently used in the context of copulas. Copulas are formally introduced in Section 2.2, followed by specific examples in Section 2.3. The important topic of characterizing and measuring dependence is covered in Section 2.4.

2.1 Basics of Joint Distributions

The joint distribution of a set of random variables (Y1, . . . , Ym) is defined as

F(y1, . . . , ym) = Pr[Yi ≤ yi; i = 1, . . . , m],  (2.1)

and the survival function corresponding to F(y1, . . . , ym) is given by

F̄(y1, . . . , ym) = Pr[Yi > yi; i = 1, . . . , m]
= 1 − F(y1)  for m = 1
= 1 − F1(y1) − F2(y2) + F12(y1, y2)  for m = 2
= 1 − F1(y1) − F2(y2) − F3(y3) + F12(y1, y2) + F13(y1, y3) + F23(y2, y3) − F(y1, y2, y3)  for m = 3;

under independence the joint terms factor into products of the margins, e.g., F12(y1, y2) = F1(y1)F2(y2) in the m = 2 case.

2.1.1 Bivariate cdf properties

The following conditions are necessary and sufficient for a right-continuous function to be a bivariate cdf:

(1) lim_{yj→−∞} F(y1, y2) = 0, j = 1, 2;
(2) lim_{yj→∞ ∀j} F(y1, y2) = 1;
(3) by the rectangle inequality, for all (a1, a2) and (b1, b2) with a1 ≤ b1, a2 ≤ b2,

F(b1, b2) − F(a1, b2) − F(b1, a2) + F(a1, a2) ≥ 0.  (2.2)

Conditions 1 and 2 imply 0 ≤ F ≤ 1. Condition 3 is referred to as the property that F is 2-increasing. If F has second derivatives, then the 2-increasing property is equivalent to ∂²F/∂y1∂y2 ≥ 0.

Given the bivariate cdf F(y1, y2):

(1) the univariate margins (or marginal distribution functions) F1 and F2 are obtained by letting y2 → ∞ and y1 → ∞, respectively. That is, F1(y1) = lim_{y2→∞} F(y1, y2) and F2(y2) = lim_{y1→∞} F(y1, y2);
(2) the conditional distribution functions F1|2(y1|y2) and F2|1(y2|y1) are obtained by ∂F(y1, y2)/∂y2 and ∂F(y1, y2)/∂y1, respectively.

The multivariate cdf F(y1, y2, . . . , ym) has the following properties:

(1) lim_{yj→−∞} F(y1, y2, . . . , ym) = 0, j = 1, 2, . . . , m;
(2) lim_{yj→∞ ∀j} F(y1, . . . , ym) = 1.

The pseudo-generalized inverse of a distribution function is denoted Fj⁻¹(t), which is defined as

Fj⁻¹(t) = inf[yj : Fj(yj) ≥ t], 0 < t < 1.  (2.3)

2.1.2 Fréchet–Hoeffding bounds

Consider any m-variate joint cdf F(y1, . . . , ym) with univariate marginal cdfs F1, . . . , Fm. By definition, each marginal distribution can take any value in the range [0, 1].
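The survival-function expansion and the 2-increasing property of Section 2.1.1 are easy to verify numerically for a concrete cdf. A sketch with independent Exponential(1) margins (illustration only, not from the text):

```python
import numpy as np

# Independent Exponential(1) margins, so F12(y1, y2) = F1(y1) * F1(y2).
F1 = lambda y: 1.0 - np.exp(-y)
F12 = lambda y1, y2: F1(y1) * F1(y2)

y1, y2 = 0.7, 1.3

# Survival function via inclusion-exclusion (m = 2):
#   Fbar(y1, y2) = 1 - F1(y1) - F2(y2) + F12(y1, y2)
fbar = 1.0 - F1(y1) - F1(y2) + F12(y1, y2)

# For independent exponentials, Pr[Y1 > y1, Y2 > y2] = exp(-(y1 + y2)).
assert abs(fbar - np.exp(-(y1 + y2))) < 1e-12

# 2-increasing (rectangle) property, Eq. (2.2): rectangle mass is >= 0.
a1, a2, b1, b2 = 0.2, 0.1, 1.5, 2.0
rect = F12(b1, b2) - F12(a1, b2) - F12(b1, a2) + F12(a1, a2)
assert rect >= 0.0
```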
The joint cdf is bounded below and above by\nthe Fréchet–Hoeffding lower and upper bounds, FL and FU , defined as\n\n\nm\nX\nFL (y1 , . . . , ym ) = max \nFj − m + 1, 0 = W,\nj=1\nFU (y1 , . . . , ym ) = min[F1 , . . . , Fm ] = M,\nso that\n\n\nm\nX\nW = max \nFj − m + 1 , 0 ≤ F (y1 , . . . , ym ) ≤ min[F1 , . . . , Fm ] = M,\nj=1\n(2.4)\nwhere the upper bound is always a cdf, and the lower bound is a cdf\nfor m = 2. For m > 2, FL may be a cdf under some conditions (see\nTheorem 3.6 in Joe, 1997).\nIn the case of univariate margins, the term Frėchet–Hoeffding class\nrefers to the class of m-variate distributions F(F1 , F2 , . . . , Fm ) in which\nmargins are fixed or given. In the case where the margins are bivariate\nor higher dimensional, the term refers to the classes such as F(F12 , F13 ),\nF(F12 , F13 , F23 ).\n2.2\nCopula Functions\nUnless stated otherwise the discussion in this section is limited to the\ncase of one-dimensional margins. We begin with the definition of copula,\nfollowing Schweizer (1991).\n2.2.1\nSklar’s theorem\nSklar’s Theorem states that an m-dimensional copula (or m -copula)\nis a function C from the unit m-cube [0, 1]m to the unit interval [0, 1]\nwhich satisfies the following conditions:\n(1) C(1, . . . , 1, an , 1, . . . , 1) = an for every n ≤ m and all an in\n[0, 1];\n10\nCopulas and Dependence\n(2) C(a1 , . . . , am ) = 0 if an = 0 for any n ≤ m;\n(3) C is m-increasing.\nProperty 1 says that if the realizations of m − 1 variables are known\neach with marginal probability one, then the joint probability of the m\noutcomes is the same as the probability of the remaining uncertain outcome. Property 2 is sometimes referred to as the grounded property of\na copula. It says that the joint probability of all outcomes is zero if the\nmarginal probability of any outcome is zero. Property 3 says that the\nC-volume of any m-dimensional interval is non-negative. 
Properties 2\nand 3 are general properties of multivariate cdfs that were previously\nmentioned.\nIt follows that an m-copula can be defined as an m-dimensional\ncdf whose support is contained in [0, 1]m and whose one-dimensional\nmargins are uniform on [0, 1]. In other words, an m-copula is an mdimensional distribution function with all m univariate margins being\nU (0, 1). To see the relationship between distribution functions and copulas, consider a continuous m-variate distribution function F (y1 , . . . ,\nym ) with univariate marginal distributions F1 (y1 ), . . . , Fm (ym ) and\n−1 . Then y = F −1 (u ) ∼ F , . . . ,\ninverse (quantile) functions F1−1 , . . . , Fm\n1\n1\n1\n1\n−1 (u ) ∼ F\ny m = Fm\nm\nm where u1 , . . . , um are uniformly distributed variates. The transforms of uniform variates are distributed as Fi (i =\n1, . . . , m). Hence\n−1\n(um ))\nF (y1 , . . . , ym ) = F (F1−1 (u1 ), . . . , Fm\n= Pr [U1 ≤ u1 , . . . , Um ≤ um ]\n= C(u1 , . . . , um )\n(2.5)\nis the unique copula associated with the distribution function. That is\nif y ∼ F , and F is continuous then\n(F1 (y1 ), . . . , Fm (ym )) ∼ C,\nand if U ∼ C, then\n−1\n(F1−1 (u1 ), . . . , Fm\n(um )) ∼ F.\n2.2.2\nPractical implications of Sklar’s theorem\nThe above results imply that copulas can be used to express a multivariate distribution in terms of its marginal distributions. Econometricians\n2.2. Copula Functions\n11\noften know a great deal about marginal distributions of individual variables but little about their joint behavior. Copulas allow researchers to\npiece together joint distributions when only marginal distributions are\nknown with certainty. For an m-variate function F, the copula associated with F is a distribution function C : [0, 1]m → [0, 1] that satisfies\nF (y1 , . . . , ym ) = C(F1 (y1 ), . . . 
, F_m(y_m); θ),   (2.6)

where θ is a parameter of the copula, called the dependence parameter, which measures dependence between the marginals. Equation (2.6) is a frequent starting point of empirical applications of copulas. Although θ may be a vector of parameters, for bivariate applications it is customary to specify it as a scalar measure of dependence. Thus, the joint distribution is expressed in terms of its respective marginal distributions and a function C that binds them together. A substantial advantage of copula functions is that the marginal distributions may come from different families. This construction allows researchers to consider marginal distributions and dependence as two separate but related issues. For many empirical applications, the dependence parameter is the main focus of estimation. Note that because copulas are multivariate distributions of U(0, 1) variables, copulas are expressed in terms of marginal probabilities (cdfs).

If the margins F_1(Y_1), . . . , F_m(Y_m) are continuous, then the corresponding copula in Eq. (2.6) is unique; otherwise it is uniquely determined only on Ran(F_1) × Ran(F_2) × · · · × Ran(F_m). The result can be applied to the case of discrete margins and/or mixed continuous and discrete margins. If F_1, . . . , F_m are not all continuous, the joint distribution function can always be expressed as (2.6), although in such a case the copula is not unique (see Schweizer and Sklar, 1983, ch. 6). In view of (2.6) and the uniqueness property, the copula is often viewed as a dependence function.

If F is discrete, then there exists a unique copula representation for F for (u_1, . . . , u_m) ∈ Ran(F_1) × Ran(F_2) × · · · × Ran(F_m).
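The construction in (2.5)–(2.6) is straightforward to implement. The sketch below is our own illustration (the margin choices and function names are ours, not from the text): it builds a bivariate joint cdf from an exponential margin and a normal margin, bound together by the Gaussian copula of Section 2.3.3, assuming SciPy is available.

```python
import numpy as np
from scipy.stats import norm, expon, multivariate_normal

def gaussian_copula(u1, u2, theta):
    """C(u1, u2) = Phi_G(Phi^{-1}(u1), Phi^{-1}(u2); theta), cf. Eq. (2.13)."""
    z = np.array([norm.ppf(u1), norm.ppf(u2)])
    cov = [[1.0, theta], [theta, 1.0]]
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(z)

def joint_cdf(y1, y2, theta):
    """F(y1, y2) = C(F1(y1), F2(y2); theta), Eq. (2.6), with an
    exponential margin for Y1 and a standard normal margin for Y2
    (an arbitrary illustrative choice -- any margins would do)."""
    return gaussian_copula(expon.cdf(y1), norm.cdf(y2), theta)
```

Setting θ = 0 reproduces the product of the two marginal cdfs, while θ > 0 raises the joint probability above that product, which is the separation of margins and dependence emphasized above.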
However, the general lack of uniqueness of a copula representation for discrete distributions is a theoretical issue which needs to be confronted in analytical proofs but does not inhibit empirical applications. Finding a unique copula for a joint distribution requires one to know the form of the joint distribution. Researchers use copulas because they do not know the form of the joint distribution, so whether working with continuous or discrete data, a pivotal modeling problem is to choose a copula that adequately captures the dependence structure of the data without sacrificing attractive properties of the marginals.

To summarize: the copula approach involves specifying marginal distributions of each random variable along with a function (copula) that binds them together. The copula function can be parameterized to include measures of dependence between the marginal distributions. If the copula is a product of two marginals, then independence is obtained, and separate estimation of each marginal is appropriate. Under dependence, efficient estimation of the joint distribution, by way of a copula, is feasible. Since a copula can capture dependence structures regardless of the form of the margins, a copula approach to modeling related variables is potentially very useful to econometricians.

2.2.3 Fréchet–Hoeffding bounds for copulas

Because copulas are multivariate distribution functions, the previously mentioned Fréchet–Hoeffding bounds also apply to copulas; that is,

W = max[ \sum_{j=1}^m F_j − m + 1, 0 ] ≤ C(y_1, . . . , y_m) ≤ min[F_1, . . . , F_m] = M.   (2.7)

Note that the upper bound is itself a distribution function and hence a copula, so we denote the upper bound as C_U(y_1, . . . , y_m). If the lower bound is also a copula, then it is denoted C_L(y_1, . . . , y_m). This leads to the Fréchet–Hoeffding bounds for copulas:

C_L(y_1, . . . , y_m) ≤ C(y_1, . . . , y_m) ≤ C_U(y_1, . . .
, y_m).   (2.8)

Knowledge of the Fréchet–Hoeffding bounds is important in selecting an appropriate copula. A desirable feature of a copula is that it should cover the sample space between the lower and the upper bounds and that, as θ approaches the lower (upper) limit of its permissible range, the copula approaches the Fréchet–Hoeffding lower (upper) bound. However, the parametric form of a copula may impose restrictions such that the full coverage between the bounds is not attained and that one or both Fréchet–Hoeffding bounds are not included in the permissible range. Therefore, a particular copula may be a better choice for one data set than for another.

A special case of the copula, the product copula, denoted C⊥, results if the margins are independent. A family of copulas that includes C_L, C⊥, and C_U is said to be comprehensive.

Several additional properties of copulas deserve mention due to their attractive implications for empirical applications; it is convenient to state these for the case m = 2. First, Y_1 and Y_2 are independent iff C is the product copula, i.e., C(y_1, y_2) = F_1(y_1)F_2(y_2). In contrast, perfect positive or negative dependence is defined in terms of comonotonicity or countermonotonicity, respectively. For any pairs (y_{1j}, y_{2j}) and (y_{1k}, y_{2k}), a comonotonic set is one for which {y_{1j} ≤ y_{1k}, y_{2j} ≤ y_{2k}} or {y_{1j} ≥ y_{1k}, y_{2j} ≥ y_{2k}}; the set is said to be countermonotonic if {y_{1j} ≤ y_{1k}, y_{2j} ≥ y_{2k}} or {y_{1j} ≥ y_{1k}, y_{2j} ≤ y_{2k}}. Second, Y_1 is an increasing function of Y_2 iff C(·) = C_U(·), which corresponds to comonotonicity and perfect positive dependence. Third, Y_1 is a decreasing function of Y_2 iff C(·) = C_L(·), which corresponds to countermonotonicity and perfect negative dependence.
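For m = 2 the bounds in (2.7)–(2.8) are easy to verify numerically. The sketch below is our own consistency check, not part of the text: it confirms that the FGM copula of Eq. (2.12) stays between W = max(u_1 + u_2 − 1, 0) and M = min(u_1, u_2) over its whole parameter range.

```python
import numpy as np

def frechet_lower(u1, u2):
    # W(u1, u2) = max(u1 + u2 - 1, 0), Eq. (2.7) with m = 2
    return np.maximum(u1 + u2 - 1.0, 0.0)

def frechet_upper(u1, u2):
    # M(u1, u2) = min(u1, u2)
    return np.minimum(u1, u2)

def fgm_copula(u1, u2, theta):
    # C(u1, u2) = u1*u2*(1 + theta*(1 - u1)*(1 - u2)), Eq. (2.12)
    return u1 * u2 * (1.0 + theta * (1.0 - u1) * (1.0 - u2))

grid = np.linspace(0.0, 1.0, 101)
U1, U2 = np.meshgrid(grid, grid)
for theta in (-1.0, 0.0, 1.0):          # full permissible range of FGM
    C = fgm_copula(U1, U2, theta)
    assert np.all(C >= frechet_lower(U1, U2) - 1e-12)
    assert np.all(C <= frechet_upper(U1, U2) + 1e-12)
```

The same grid check applied to other parametric families shows which of them actually reach W or M at the limits of their θ-range, the "comprehensiveness" issue discussed above.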
That is, the association is positive if the copula attains the upper Fréchet–Hoeffding bound and negative if it attains the lower Fréchet–Hoeffding bound. Fourth, copulas have an attractive invariance property by which the dependence captured by a copula is invariant with respect to increasing and continuous transformations of the marginal distributions; see Schweizer and Sklar (1983). This means that the same copula may be used for, say, the joint distribution of (Y_1, Y_2) as for (ln Y_1, ln Y_2), and thus whether the marginals are expressed in natural units or logarithmic values does not affect the copula. The properties of comonotonicity and invariance jointly make copulas a useful tool in applied work.

If (U_1, U_2) ∼ C, then there are also copulas associated with the bivariate uniform pairs (1 − U_1, 1 − U_2), (U_1, 1 − U_2), and (1 − U_1, U_2). These are called associated copulas. Of these, the first pair is of special interest because it leads to survival copulas. If F_1^{-1}(u_1) ∼ F_1, then F_1^{-1}(1 − u_1) ∼ F̄_1 and F_2^{-1}(1 − u_2) ∼ F̄_2, where F̄_j = 1 − F_j denotes the survival function, and hence (1 − U_1, 1 − U_2) ∼ C̄, the survival copula. In general,

F̄(u) = F̄(F_1^{-1}(1 − u_1), . . . , F_m^{-1}(1 − u_m))
      = F̄(F̄_1^{-1}(u_1), . . . , F̄_m^{-1}(u_m))
      = C̄(u_1, . . . , u_m).   (2.9)

An example where working with survival copulas is both more convenient and natural comes from actuarial studies (usually referred to as duration analysis in econometrics and lifetime data analysis in biostatistics). Consider two possibly dependent life times, denoted T_1 and T_2.
Then, for m = 2, the joint distribution function of the survival times is defined as the probability of the joint event (T_1 ≤ t_1, T_2 ≤ t_2) and is given by

F(t_1, t_2) = Pr[T_1 ≤ t_1, T_2 ≤ t_2]
            = 1 − Pr[T_1 > t_1] − Pr[T_2 > t_2] + Pr[T_1 > t_1, T_2 > t_2].

The joint survival probability that (T_1 > t_1, T_2 > t_2) is

S(t_1, t_2) = Pr[T_1 > t_1, T_2 > t_2]
            = 1 − F_1(t_1) − F_2(t_2) + F(t_1, t_2)
            = S_1(t_1) + S_2(t_2) − 1 + C(1 − S_1(t_1), 1 − S_2(t_2)),   (2.10)

where the implied copula of the marginal survival functions, C̄, is called the survival copula. Notice that S(t_1, t_2) is now a function of the marginal survival functions only; S_1(t_1) is the marginal survival probability. Given the marginal survival distributions and the copula C, the joint survival distribution can be obtained. The symmetry property of copulas allows one to work with copulas or survival copulas (Nelsen, 2006). In the more general notation for univariate random variables, Eq. (2.10) can be written as

C̄(u_1, u_2) = u_1 + u_2 − 1 + C(1 − u_1, 1 − u_2) = Pr[U_1 > 1 − u_1, U_2 > 1 − u_2].

2.3 Some Common Bivariate Copulas

Once a researcher has specified the marginal distributions, an appropriate copula is selected. Because copulas separate marginal distributions from dependence structures, the appropriate copula for a particular application is the one which best captures the dependence features of the data. A large number of copulas have been proposed in the literature, and each of these imposes a different dependence structure on the data. Hutchinson and Lai (1990), Joe (1997, ch. 5), and Nelsen (2006: 116–119) provide a thorough coverage of bivariate copulas and their properties. In this section, we discuss several copulas that have appeared frequently in empirical applications, and we briefly explain the dependence structure of each copula.
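The survival-copula identity following Eq. (2.10) can be checked by simulation. The sketch below is our own illustration (using the Gaussian copula for concreteness): it compares C̄(u_1, u_2) = u_1 + u_2 − 1 + C(1 − u_1, 1 − u_2) with the empirical frequency of the event {1 − U_1 ≤ u_1, 1 − U_2 ≤ u_2}.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(42)
theta = 0.5
cov = [[1.0, theta], [theta, 1.0]]

# Draw (U1, U2) from the Gaussian copula by transforming correlated normals.
z = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)
u = norm.cdf(z)

def gauss_C(u1, u2):
    """Gaussian copula C(u1, u2; theta), cf. Eq. (2.13)."""
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(
        np.array([norm.ppf(u1), norm.ppf(u2)]))

s1, s2 = 0.3, 0.6   # arguments of the survival copula
formula = s1 + s2 - 1.0 + gauss_C(1.0 - s1, 1.0 - s2)
empirical = np.mean((1.0 - u[:, 0] <= s1) & (1.0 - u[:, 1] <= s2))
assert abs(formula - empirical) < 0.01
```

The agreement reflects the fact that C̄ is simply the (ordinary) copula of the pair (1 − U_1, 1 − U_2) discussed above.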
A more detailed discussion of dependence is given in Section 2.4, and graphical illustrations of dependence are provided in Section 2.5. Here we write copulas in terms of random variables U_1 and U_2 that have standard uniform marginal distributions. Table 2.1 summarizes several bivariate copula functions.

2.3.1 Product copula

The simplest copula, the product copula, has the form

C(u_1, u_2) = u_1 u_2,   (2.11)

where u_1 and u_2 take values in the unit interval of the real line. The product copula is important as a benchmark because it corresponds to independence.

2.3.2 Farlie–Gumbel–Morgenstern copula

The Farlie–Gumbel–Morgenstern (FGM) copula takes the form

C(u_1, u_2; θ) = u_1 u_2 (1 + θ(1 − u_1)(1 − u_2)).   (2.12)

The FGM copula was first proposed by Morgenstern (1956). It is a perturbation of the product copula; if the dependence parameter θ equals zero, then the FGM copula collapses to independence. It is attractive due to its simplicity, and Prieger (2002) advocates its use in modeling selection into health insurance plans. However, it is restrictive because this copula is only useful when dependence between the two marginals is modest in magnitude.

Table 2.1 Some standard copula functions.

Copula type     | Function C(u_1, u_2)                                              | θ-domain      | Kendall's τ                                 | Spearman's ρ
Product         | u_1 u_2                                                           | N.A.          | 0                                           | 0
FGM             | u_1 u_2 (1 + θ(1 − u_1)(1 − u_2))                                 | −1 ≤ θ ≤ 1    | 2θ/9                                        | θ/3
Gaussian        | Φ_G[Φ^{-1}(u_1), Φ^{-1}(u_2); θ]                                  | −1 ≤ θ ≤ 1    | (2/π) arcsin(θ)                             | (6/π) arcsin(θ/2)
Clayton         | (u_1^{−θ} + u_2^{−θ} − 1)^{−1/θ}                                  | θ ∈ (0, ∞)    | θ/(θ + 2)                                   | *
Frank           | −(1/θ) log[1 + (e^{−θu_1} − 1)(e^{−θu_2} − 1)/(e^{−θ} − 1)]       | θ ∈ (−∞, ∞)   | 1 − (4/θ)[1 − D_1(θ)]                       | 1 − (12/θ)[D_1(θ) − D_2(θ)]
Ali–Mikhail–Haq | u_1 u_2 [1 − θ(1 − u_1)(1 − u_2)]^{−1}                            | −1 < θ < +1   | (3θ − 2)/(3θ) − (2/3)(1 − 1/θ)² ln(1 − θ)   | *

Note: FGM is the Farlie–Gumbel–Morgenstern copula. An asterisk entry indicates that the expression is complicated. The notation D_k(x) denotes the “Debye” function (k/x^k) ∫_0^x t^k/(e^t − 1) dt, k = 1, 2.

2.3.3 Gaussian (Normal) copula

The normal copula takes the form

C(u_1, u_2; θ) = Φ_G(Φ^{-1}(u_1), Φ^{-1}(u_2); θ)
             = ∫_{−∞}^{Φ^{-1}(u_1)} ∫_{−∞}^{Φ^{-1}(u_2)} (1 / (2π(1 − θ²)^{1/2})) exp{ −(s² − 2θst + t²) / (2(1 − θ²)) } ds dt,   (2.13)

where Φ is the cdf of the standard normal distribution and Φ_G(u_1, u_2) is the standard bivariate normal distribution with correlation parameter θ restricted to the interval (−1, 1). This is the copula function proposed by Lee (1983) for modeling selectivity in the context of continuous but nonnormal distributions. The idea was exploited by others without making an explicit connection with copulas. For example, Van Ophem (1999) used it to analyze dependence in a bivariate count model. As the dependence parameter approaches −1 and 1, the normal copula attains the Fréchet lower and upper bound, respectively.
The normal copula is flexible in that it allows for equal degrees of positive and negative dependence and includes both Fréchet bounds in its permissible range.

2.3.4 Student's t-copula

An example of a copula with two dependence parameters is that for the bivariate t-distribution with θ_1 degrees of freedom and correlation θ_2:

C^t(u_1, u_2; θ_1, θ_2) = ∫_{−∞}^{t_{θ_1}^{-1}(u_1)} ∫_{−∞}^{t_{θ_1}^{-1}(u_2)} (1 / (2π(1 − θ_2²)^{1/2})) [ 1 + (s² − 2θ_2 st + t²) / (θ_1(1 − θ_2²)) ]^{−(θ_1+2)/2} ds dt,   (2.14)

where t_{θ_1}^{-1}(u_1) denotes the inverse of the cdf of the standard univariate t-distribution with θ_1 degrees of freedom. The parameter θ_1 controls the heaviness of the tails: for θ_1 < 3 the variance does not exist, and for θ_1 < 5 the fourth moment does not exist. As θ_1 → ∞, C^t(u_1, u_2; θ_1, θ_2) → Φ_G(u_1, u_2; θ_2).

2.3.5 Clayton copula

The Clayton (1978) copula, also referred to as the Cook and Johnson (1981) copula and originally studied by Kimeldorf and Sampson (1975), takes the form

C(u_1, u_2; θ) = (u_1^{−θ} + u_2^{−θ} − 1)^{−1/θ},   (2.15)

with the dependence parameter θ restricted to the region (0, ∞). As θ approaches zero, the marginals become independent. As θ approaches infinity, the copula attains the Fréchet upper bound, but for no value does it attain the Fréchet lower bound. The Clayton copula cannot account for negative dependence. It has been used to study correlated risks because it exhibits strong left tail dependence and relatively weak right tail dependence. Anecdotal and empirical evidence suggests that loan defaults are highly correlated during recessionary times. Similarly, researchers have studied the “broken heart syndrome” in which spouses’ ages at death tend to be correlated.
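The closed form of the Clayton copula in (2.15) makes it convenient to simulate. The sketch below uses the well-known gamma-frailty (Marshall–Olkin) construction — our choice of algorithm, not one given in the text — and checks the Table 2.1 relation τ = θ/(θ + 2) on the draws.

```python
import numpy as np
from scipy.stats import kendalltau

def clayton_sample(n, theta, rng):
    """Gamma-frailty (Marshall-Olkin) sampler for the Clayton copula:
    W ~ Gamma(1/theta), E_i ~ Exp(1), U_i = (1 + E_i / W)**(-1/theta)."""
    w = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)
    e = rng.exponential(size=(n, 2))
    return (1.0 + e / w[:, None]) ** (-1.0 / theta)

rng = np.random.default_rng(0)
theta = 2.0
u = clayton_sample(50_000, theta, rng)
tau_hat, _ = kendalltau(u[:, 0], u[:, 1])
# Table 2.1: Kendall's tau for Clayton is theta / (theta + 2) = 0.5 here
assert abs(tau_hat - theta / (theta + 2.0)) < 0.02
```

A scatter plot of the draws (or of their normal-quantile transforms) exhibits the strong left-tail clustering described above.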
When correlation between two events, such as the performance of two funds or spouses’ ages at death, is strongest in the left tail of the joint distribution, Clayton is an appropriate modeling choice.

2.3.6 Frank copula

The Frank (1979) copula takes the form

C(u_1, u_2; θ) = −θ^{−1} log[ 1 + (e^{−θu_1} − 1)(e^{−θu_2} − 1) / (e^{−θ} − 1) ].

The dependence parameter may assume any real value (−∞, ∞). Values of −∞, 0, and ∞ correspond to the Fréchet lower bound, independence, and the Fréchet upper bound, respectively. The Frank copula is popular for several reasons. First, unlike some other copulas, it permits negative dependence between the marginals. Second, dependence is symmetric in both tails, similar to the Gaussian and Student-t copulas. Third, it is “comprehensive” in the sense that both Fréchet bounds are included in the range of permissible dependence. Consequently, the Frank copula can, in theory, be used to model outcomes with strong positive or negative dependence. However, as simulations reported below illustrate, dependence in the tails of the Frank copula tends to be relatively weak compared to the Gaussian copula, and the strongest dependence is centered in the middle of the distribution, which suggests that the Frank copula is most appropriate for data that exhibit weak tail dependence. This copula has been widely used in empirical applications (Meester and MacKay, 1994).

2.3.7 Gumbel copula

The Gumbel (1960) copula takes the form

C(u_1, u_2; θ) = exp{ −(ũ_1^θ + ũ_2^θ)^{1/θ} },

where ũ_j = −log u_j. The dependence parameter is restricted to the interval [1, ∞). Values of 1 and ∞ correspond to independence and the Fréchet upper bound, but this copula does not attain the Fréchet lower bound for any value of θ. Similar to the Clayton copula, Gumbel does not allow negative dependence, but in contrast to Clayton, Gumbel exhibits strong right tail dependence and relatively weak left tail dependence.
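Returning to the Frank copula above: because its conditional distribution inverts in closed form, it can be sampled by conditional inversion. The formula below is our own derivation (differentiate C with respect to u_1 and solve C_{2|1}(u_2 | u_1) = v for u_2), so treat it as a sketch rather than a canonical implementation.

```python
import numpy as np
from scipy.stats import kendalltau

def frank_cond_inv(u1, v, theta):
    """Inverse conditional distribution C_{2|1}^{-1}(v | u1) of the Frank
    copula.  Sampling: draw u1, v ~ U(0,1) and set u2 = C_{2|1}^{-1}(v | u1)."""
    num = v * (np.exp(-theta) - 1.0)
    den = np.exp(-theta * u1) - v * (np.exp(-theta * u1) - 1.0)
    return -np.log1p(num / den) / theta

# Sample (U1, U2) from the Frank copula with theta = 5 (positive dependence).
rng = np.random.default_rng(3)
u1 = rng.uniform(size=20_000)
u2 = frank_cond_inv(u1, rng.uniform(size=20_000), theta=5.0)
tau_hat, _ = kendalltau(u1, u2)
```

With θ = 5 the draws show a clearly positive Kendall's tau (roughly 0.46 by the Table 2.1 Debye-function formula) while fanning out in both tails, consistent with the weak tail dependence noted above.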
If outcomes are known to be strongly correlated at high values but less correlated at low values, then the Gumbel copula is an appropriate choice.

2.4 Measuring Dependence

Given a bewilderingly wide range of copulas, how should one choose between them in empirical work? What is the nature of dependence that is captured by the dependence parameter(s) in different copulas? How does the dependence parameter relate to the more familiar concept of correlation? These issues, as well as those of computational convenience and interpretability, are relevant to the choice among different copulas. A key consideration is the ability of a model to capture the dependence between variables in a contextually satisfactory manner. A proper discussion of this issue requires treating dependence in greater detail; see, for example, Drouet-Mari and Kotz (2001).

In this section, we restrict the discussion to the bivariate case, although generalization to higher dimensions is possible. Further, we denote the pair as (X, Y) rather than (Y_1, Y_2) in order to ensure notational consistency with the statistical literature on dependence.

2.4.1 Desirable properties of dependence measures

The random variables (X, Y) are said to be dependent or associated if they are not independent in the sense that F(X, Y) ≠ F_1(X)F_2(Y). In the bivariate case, let δ(X, Y) denote a scalar measure of dependence. Embrechts et al. (2002) list four desirable properties of this measure:

(1) δ(X, Y) = δ(Y, X) (symmetry);
(2) −1 ≤ δ(X, Y) ≤ +1 (normalization);
(3) δ(X, Y) = 1 ⇔ (X, Y) comonotonic; δ(X, Y) = −1 ⇔ (X, Y) countermonotonic;
(4) for a strictly monotonic transformation T : R → R of X,
    δ(T(X), Y) = δ(X, Y) if T is increasing, and δ(T(X), Y) = −δ(X, Y) if T is decreasing.

Cherubini et al.
(2004: 95) note that association can be measured using several alternative concepts and examine four in particular: linear correlation, concordance, tail dependence, and positive quadrant dependence. We shall consider these in turn.

2.4.2 Correlation and dependence

By far the most familiar association (dependence) concept is the correlation coefficient between a pair of variables (X, Y), defined as

ρ_XY = cov[X, Y] / (σ_X σ_Y),

where cov[X, Y] = E[XY] − E[X]E[Y], and σ_X and σ_Y (both positive) denote the standard deviations of X and Y, respectively. This measure of association can be extended to the multivariate case m ≥ 3, for which the covariance and correlation measures are symmetric positive definite matrices.

It is well known that (a) ρ_XY is a measure of linear dependence, (b) ρ_XY is symmetric, (c) the lower and upper bounds of the inequality −1 ≤ ρ_XY ≤ 1 measure perfect negative and positive linear dependence (a property referred to as normalization), and (d) it is invariant with respect to linear transformations of the variables. Further, if the pair (X, Y) follows a bivariate normal distribution, then the correlation is fully informative about their joint dependence, and ρ_XY = 0 implies and is implied by independence. In this case, the dependence structure (copula) is fully determined by the correlation, and zero correlation and independence are equivalent.

In the case of other multivariate distributions, such as the multivariate elliptical families that share some properties of the multivariate normal, the dependence structure is also fully determined by the correlation matrix; see Fang and Zhang (1990). However, in general zero correlation does not imply independence. For example, if X ∼ N(0, 1) and Y = X², then cov[X, Y] = 0, but (X, Y) are clearly dependent.
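The X, Y = X² example can be confirmed by simulation (our illustration): the linear correlation is essentially zero even though Y is a deterministic function of X.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)
y = x ** 2

# Linear correlation is (essentially) zero: cov[X, X^2] = E[X^3] = 0.
corr = np.corrcoef(x, y)[0, 1]

# Yet cov[phi1(X), phi2(Y)] != 0 for suitable functions, e.g.
# phi1(x) = x^2, phi2(y) = y, since Var[X^2] = 2 for a N(0,1) variable.
cov_phi = np.cov(x ** 2, y)[0, 1]

assert abs(corr) < 0.05
assert cov_phi > 1.0
```

This is exactly the distinction drawn in the next sentence of the text: zero correlation constrains only cov[X, Y], not cov[φ_1(X), φ_2(Y)] for arbitrary φ_1, φ_2.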
Zero correlation only requires cov[X, Y] = 0, whereas zero dependence requires cov[φ_1(X), φ_2(Y)] = 0 for any functions φ_1 and φ_2. This represents a weakness of correlation as a measure of dependence. A second limitation of correlation is that it is not defined for some heavy-tailed distributions whose second moments do not exist, e.g., some members of the stable class and Student's t distribution with degrees of freedom equal to 2 or 1. Many financial time series display the distributional property of heavy tails and nonexistence of higher moments; see, for example, Cont (2001). Boyer et al. (1999) found that correlation measures were not sufficiently informative in the presence of asymmetric dependence. A third limitation of the correlation measure is that it is not invariant under strictly increasing nonlinear transformations; that is, ρ[T(X), T(Y)] ≠ ρ_XY for nonlinear increasing T : R → R. Finally, the attainable values of the correlation coefficient within the interval [−1, +1] for a pair of variables depend upon their respective marginal distributions F_1 and F_2, which place bounds on its value. These limitations motivate an alternative measure of dependence, rank correlation, which we consider in the next section.

2.4.3 Rank correlation

Consider two random variables X and Y with continuous distribution functions F_1 and F_2, respectively, and joint distribution function F. Two well-established measures of correlation are Spearman's rank correlation (“Spearman's rho”), defined as

ρ_S(X, Y) = ρ(F_1(X), F_2(Y)),   (2.16)

and Kendall's rank correlation (“Kendall's tau”), defined as

ρ_τ(X, Y) = Pr[(X_1 − X_2)(Y_1 − Y_2) > 0] − Pr[(X_1 − X_2)(Y_1 − Y_2) < 0],   (2.17)

where (X_1, Y_1) and (X_2, Y_2) are two independent pairs of random variables from F.
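Both measures in (2.16)–(2.17) are easy to compute empirically. The sketch below (our illustration) draws from a Gaussian copula and checks the closed forms in Table 2.1, τ = (2/π) arcsin(θ) and ρ_S = (6/π) arcsin(θ/2), including invariance under a monotone transformation of one margin.

```python
import numpy as np
from scipy.stats import spearmanr, kendalltau

# Draw (X, Y) with a Gaussian copula (correlation theta); rank correlations
# depend only on the copula, so Table 2.1's closed forms apply even after
# a monotone transform of a margin.
rng = np.random.default_rng(7)
theta = 0.6
z = rng.multivariate_normal([0, 0], [[1, theta], [theta, 1]], size=100_000)
x, y = z[:, 0], np.exp(z[:, 1])    # exp() is monotone: ranks are unchanged

rho_s, _ = spearmanr(x, y)
tau, _ = kendalltau(x, y)
assert abs(rho_s - (6 / np.pi) * np.arcsin(theta / 2)) < 0.02
assert abs(tau - (2 / np.pi) * np.arcsin(theta)) < 0.02
```

Note that the ordinary (Pearson) correlation of (x, y) would differ from θ here precisely because of the nonlinear exp() transform, whereas both rank measures are unaffected.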
The first term on the right, Pr[(X_1 − X_2)(Y_1 − Y_2) > 0], is referred to as Pr[concordance], and the second as Pr[discordance]; hence

ρ_τ(X, Y) = Pr[concordance] − Pr[discordance]   (2.18)

is a measure of the relative difference between the two.

Spearman's rho is the linear correlation between F_1(X) and F_2(Y), which are probability integral transforms of X and Y; in this sense it is a measure of rank correlation. Both ρ_S(X, Y) and ρ_τ(X, Y) are measures of monotonic dependence between (X, Y). Both measures are based on the concept of concordance, which refers to the property that large values of one random variable are associated with large values of another, whereas discordance refers to large values of one being associated with small values of the other.

The following properties of ρ_S(X, Y) and ρ_τ(X, Y) are stated and proved by Embrechts et al. (2002; Theorem 3). Both ρ_S(X, Y) and ρ_τ(X, Y) have the properties of symmetry, normalization, co- and countermonotonicity, and assume the value zero under independence. Further,

ρ_S(X, Y) = ρ_τ(X, Y) = −1   iff C = C_L,
ρ_S(X, Y) = ρ_τ(X, Y) = 1    iff C = C_U.

Both ρ_S(X, Y) and ρ_τ(X, Y) can be expressed in terms of copulas as follows:

ρ_S(X, Y) = 12 ∫_0^1 ∫_0^1 {C(u_1, u_2) − u_1 u_2} du_1 du_2,   (2.19)
ρ_τ(X, Y) = 4 ∫_0^1 ∫_0^1 C(u_1, u_2) dC(u_1, u_2) − 1;   (2.20)

see Joe (1997) or Schweizer and Wolff (1981) for details. There are other equivalent expressions for these measures. For example, (2.19) can be expressed as ρ_S = 3 ∫_0^1 ∫_0^1 {[u_1 + u_2 − 1]² − [u_1 − u_2]²} dC(u_1, u_2); see Nelsen (2006: 185). It is also possible to obtain bounds on ρ_S(X, Y) in terms of ρ_τ(X, Y); see Cherubini et al. (2004, p.
103). Also, ρ_S(X, Y) = ρ_τ(X, Y) = 1 iff C = C_U iff Y = T(X) with T increasing, and ρ_S(X, Y) = ρ_τ(X, Y) = −1 iff C = C_L iff Y = T(X) with T decreasing.

Although the rank correlation measures have the property of invariance under monotonic transformations and can capture perfect dependence, they are not simple functions of moments and hence their computation is more involved; see the examples in Table 2.1. In some cases one can use (2.19) or (2.20).

The relationship between ρ_τ and ρ_S is shown by a pair of inequalities due to Durbin and Stuart (1951), who showed that

(3/2)ρ_τ − 1/2 ≤ ρ_S ≤ 1/2 + ρ_τ − (1/2)ρ_τ²   for ρ_τ ≥ 0,
(1/2)ρ_τ² + ρ_τ − 1/2 ≤ ρ_S ≤ (3/2)ρ_τ + 1/2   for ρ_τ ≤ 0.

These inequalities form the basis of a widely presented four-quadrant diagram that displays the (ρ_S, ρ_τ)-region; see Figure 2.1. Nelsen (1991) presents expressions for ρ_S and ρ_τ and their relationship for a number of copula families. He shows that “. . . while the difference between ρ [ρ_S] and τ [ρ_τ] can be as much as 0.5 for some copulas, . . . for many of these families, there is nearly a functional relationship between the two.”

For continuous copulas, some researchers convert the dependence parameter of the copula function to a measure such as Kendall's tau or Spearman's rho, which are both bounded on the interval [−1, 1] and do not depend on the functional forms of the marginal distributions.

Fig. 2.1 Clockwise: upper bound; independence copula; level sets; lower bound.

Dependence Measures for Discrete Data. Dependence measures for continuous data do not, in general, apply directly to discrete data. Concordance measures for bivariate discrete data are subject to constraints (Marshall, 1996; Denuit and Lambert, 2005). Reconsider (2.17) in the context of discrete variables. Unlike the case of continuous random variables, the discrete case has to allow for ties, i.e.,
Pr[tie] = Pr[X_1 = X_2 or Y_1 = Y_2]. Some attractive properties of ρ_τ for continuous variables are consequently lost in the discrete case. Several modified versions of ρ_τ and other dependence measures exist that handle the ties in different ways; see Denuit and Lambert (2005) for discussion and additional references. When (X, Y) are nonnegative integers,

Pr[concordance] − Pr[discordance] + Pr[tie] = 1,

so

ρ_τ(X, Y) = 2 Pr[concordance] − 1 + Pr[tie]
          = 4 Pr[X_2 < X_1, Y_2 < Y_1] − 1 + Pr[X_1 = X_2 or Y_1 = Y_2].   (2.21)

This analysis shows that in the discrete case ρ_τ(X, Y) does depend on the margins. It is also reduced in magnitude by the presence of ties. When the number of distinct realized values of (X, Y) is small, there is likely to be a higher proportion of ties, and the attainable value of ρ_τ(X, Y) will be smaller. Denuit and Lambert (2005) obtain an upper bound and show, for example, that in the bivariate case with identical Poisson(µ) margins, the upper bound for ρ_τ(X, Y) increases monotonically with µ.

2.4.4 Tail dependence

In some cases the concordance between extreme (tail) values of random variables is of interest. For example, one may be interested in the probability that stock indexes in two countries exceed (or fall below) given levels. This requires a dependence measure for the upper and lower tails of the distribution. Such a measure is essentially related to the conditional probability that one index exceeds some value given that the other exceeds some value. If such a conditional probability measure is a function of the copula, then it too will be invariant under strictly increasing transformations. The tail dependence measure can be defined in terms of the joint survival function S(u_1, u_2) for standard uniform random variables u_1 and u_2.
Specifically, λ_L and λ_U are measures of lower and upper tail dependence, respectively, defined by

λ_L = lim_{v→0+} C(v, v) / v,   (2.22)
λ_U = lim_{v→1−} S(v, v) / (1 − v).   (2.23)

The expression S(v, v) = Pr[U_1 > v, U_2 > v] represents the joint survival function, where U_1 = F_1(X) and U_2 = F_2(Y). The upper tail dependence measure λ_U is the limiting value of S(v, v)/(1 − v), which is the conditional probability Pr[U_1 > v | U_2 > v] (= Pr[U_2 > v | U_1 > v]); the lower tail dependence measure λ_L is the limiting value of C(v, v)/v, which is the conditional probability Pr[U_1 < v | U_2 < v] (= Pr[U_2 < v | U_1 < v]). The measure λ_U is widely used in actuarial applications of extreme value theory to handle the probability that one event is extreme conditional on another extreme event.

Two other properties related to tail dependence are left tail decreasing (LTD) and right tail increasing (RTI). Y is said to be LTD in X if Pr[Y ≤ y | X ≤ x] is decreasing in x for all y. Y is said to be RTI in X if Pr[Y > y | X > x] is increasing in x for all y. A third conditional probability of interest is Pr[Y > y | X = x]; Y is said to be stochastically increasing if this probability is increasing in x for all y.

For copulas with simple analytical expressions, the computation of λ_U can be straightforward, being a simple function of the dependence parameter. For example, for the Gumbel copula λ_U equals 2 − 2^{1/θ}. In cases where the copula's analytical expression is not available, Embrechts et al. (2002) suggest using the conditional probability representation. They also point out interesting properties of some standard copulas. For example, the bivariate Gaussian copula has the property of asymptotic independence.
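The Gumbel value λ_U = 2 − 2^{1/θ} can be verified by evaluating the limit in (2.23) at v close to one (our numerical sketch), using S(v, v) = 1 − 2v + C(v, v) for uniform margins.

```python
import numpy as np

def gumbel(u1, u2, theta):
    # Gumbel copula: C = exp{-[(-ln u1)^theta + (-ln u2)^theta]^(1/theta)}
    return np.exp(-((-np.log(u1)) ** theta
                    + (-np.log(u2)) ** theta) ** (1.0 / theta))

def upper_tail(copula, theta, v=1.0 - 1e-8):
    # Eq. (2.23): lambda_U = lim_{v -> 1-} S(v, v)/(1 - v),
    # with S(v, v) = 1 - 2v + C(v, v)
    return (1.0 - 2.0 * v + copula(v, v, theta)) / (1.0 - v)

theta = 2.0
assert abs(upper_tail(gumbel, theta) - (2.0 - 2.0 ** (1.0 / theta))) < 1e-4
```

For θ = 2 the limit is 2 − √2 ≈ 0.586; the same evaluation with the Gaussian copula in place of Gumbel returns values that shrink toward zero as v → 1, illustrating the asymptotic independence discussed next.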
They remark: “Regardless of how high a correlation we choose, if we go far enough into the tail, extreme events appear to occur independently in each margin.” In contrast, the bivariate t-distribution displays asymptotic upper tail dependence even for negative and zero correlations, with dependence rising as the degrees-of-freedom parameter decreases and the marginal distributions become heavy-tailed; see Table 2.1 in Embrechts et al. (2002).

2.4.5 Positive quadrant dependence

Another measure of dependence is positive quadrant dependence (PQD). Two random variables X and Y are said to exhibit PQD if their copula is greater than the product copula, i.e., C(u_1, u_2) ≥ u_1 u_2, or simply C ≥ C⊥, where C⊥ denotes the product copula. In terms of distribution functions, PQD implies F(x, y) ≥ F_1(x)F_2(y) for all (x, y) in R². Suppose x and y denote two financial losses or gains. The PQD property implies that the probability that losses exceed some specified values is greater when (x, y) is a dependent pair than when the two are independent, for all x and y. Positive quadrant dependence implies nonnegative correlation and nonnegative rank correlation. But all these properties are implied by comonotonicity, which is the strongest type of positive dependence. Note that the LTD and RTI properties imply the property of PQD.

2.5 Visual Illustration of Dependence

One way of visualizing copulas is to present contour diagrams with graphs of level sets, defined as the sets in I² given by C(u, v) = a constant, for selected constants; see the graphs shown in Figure 2.1, taken from Nelsen (2006), which show level curves for the upper and lower bounds and the product copula. The constant is given by the boundary condition C(1, t) = t. The hatched triangle in the lower right quadrant gives the copula level set {(u, v) ∈ I² | C(u, v) = t}, whose boundaries are determined by the copula lower and upper bounds, W(u, v) and M(u, v), respectively.
Equivalently, the information can be presented in three dimensions.

Unfortunately, this is not always a helpful way of visualizing the data patterns implied by different copulas. If the intention is to highlight quadrant or tail dependence, the level curves are not helpful because they may “look” similar even when the copulas have different properties. One alternative is to present two-way scatter diagrams of realizations from simulated draws from copulas. These scatter diagrams are quite useful in illustrating tail dependence in a bivariate context. The capacity of copulas to generate extreme pairs of observations can be further emphasized by drawing a pair of lines parallel to the axes and noting the relative frequency of observations to the right or left of their intersection. Armstrong (2003) provides an interesting copula catalogue containing scatter plots for uniform pairs (u, v) drawn from specified copulas, and for transformed normal pairs (Φ^{-1}(u), Φ^{-1}(v)).

Fig. 2.2 Simulated samples from five copulas.

Can scatter plots help in choosing a copula that is appropriate for modeling given data? When modeling is in terms of conditional marginals, that is, marginals that are conditioned on some covariates, the raw scatter diagrams have limited usefulness, as we shall see in Section 4.5. For example, the scatter diagrams for pairs of variables used in empirical applications (see, for example, Figure 2.2, top left panel) give us no clues as to which copula would work well. A better approach would be to first fit the marginals and derive the marginal probabilities corresponding to (u, v) by employing the probability transform. Two-way scatter plots of these might be potentially useful in suggesting suitable copulas; see Figure 4.3.

To demonstrate dependence properties of different copulas, we follow an approach similar to that of Armstrong (2003).
Nelsen (2006) also uses a similar graphical device. We simulate 500 pairs of uniform random variates from the Gumbel, Frank, Clayton, Gaussian, and FGM copulas using the approaches outlined in the Appendix. The uniform variables are converted to standard normal variables via the standard normal quantile function, yi = Φ^{-1}(ui) for i = 1, 2. The pairs of standard normal variates are plotted in order to illustrate dependence properties of the copulas. For four of the five copulas, the dependence parameter θ is set such that ρτ(y1, y2) equals 0.7. For the remaining copula, FGM, θ is set such that ρτ(y1, y2) equals 0.2, because FGM is unable to accommodate large dependence. Figure 2.3 displays five two-way scatters generated from simulated draws from the respective copulas.

Simulated variables from the Gaussian copula assume the familiar elliptical shape associated with the bivariate normal distribution. Dependence is symmetric in both tails of the distribution. Similarly, variables drawn from the Frank copula also exhibit symmetric dependence in both tails. However, compared to the Gaussian copula, dependence in the Frank copula is weaker in both tails, as is evident from the “fanning out” in the tails, and stronger in the center of the distribution. This suggests that the Frank copula is best suited for applications in which tail dependence is relatively weak.

In contrast to the Gaussian and Frank copulas, the Clayton and Gumbel copulas exhibit asymmetric dependence. Clayton dependence is strong in the left tail but weak in the right tail. The implication is that the Clayton copula is best suited for applications in which two outcomes are likely to experience low values together. On the other hand, the Gumbel copula exhibits strong right tail dependence and weak left tail dependence, although the contrast between the two tails of the Gumbel copula is not as pronounced as in the Clayton copula. Consequently, as is well known, Gumbel is an appropriate modeling choice when two outcomes are likely to simultaneously realize upper tail values.(1)

[Fig. 2.3 Simulated samples from five conditional copulas.]

Finally, the FGM copula exhibits symmetry in both tails, but it cannot accommodate variables with large dependence. The FGM copula is popular in applied settings due to its simplicity and because it allows negative dependence, but it is only appropriate for applications with weak dependence. The implication of these graphs is that multivariate distributions with similar degrees of dependence might exhibit substantially different dependence structures. Understanding dependence structures of different copulas is imperative in empirical applications.

(1) For Gumbel, the degree of upper tail dependence is given by 2 − 2^{1/θ}. When Kendall’s tau is 0.7, the Gumbel dependence parameter is θ = 3.33. Thus, upper tail dependence is 0.77.

3 Generating Copulas

Copulas are useful for generating joint distributions with a variety of dependence structures. Because the selected copula restricts the dependence structure, no single copula will serve the practitioner well in all data situations. Hence it is desirable to have familiarity with a menu of available choices and their properties. So far in this article, we have covered a number of copulas that can be used to combine marginals. However, it is also desirable to avoid treating copulas in a black-box fashion and to understand how copulas are related to other methods of generating joint distributions based on specified marginals. This requires an understanding of how copulas are generated. It is also useful to know how new families of copulas may be generated.
In this section, we address this issue by considering some common approaches for generating copulas; for a deeper analysis based on characterizations of joint distributions see de la Peña et al. (2003).

A number of copulas were originally developed in specific contexts. A widely known example generates copulas through mixtures and compound distributions, e.g., Marshall and Olkin (1967, 1988), Hougaard (1987). Such mixtures arise naturally in specific contexts such as life times of spouses, twins, pairs of organs, and so forth. Often these approaches generate dependence between variables through the presence of common unobserved heterogeneity. This seems attractive in most applications because it is impossible for observed covariates to cover all relevant aspects of an economic event. Because some copulas were developed for specific applications, they often embody restrictions that may have been appropriate in their original context but not when applied to other situations. It is often helpful, therefore, to know how the copulas are derived, at least for some widely used families.

Section 3.1 begins with the simplest of these, the method of inversion, which is based directly on Sklar’s theorem. This method generates the copula from a given joint distribution. The examples illustrating inversion are not useful if our concern is to begin with marginals and derive a joint distribution by “copulation.” This topic is discussed in Section 3.3.
Section 3.4 introduces Archimedean copulas, and Section 3.5 considers issues of extending copulas to dimensions higher than two.

3.1 Method of Inversion

By Sklar’s theorem, given continuous margins F1 and F2 and the joint continuous distribution function F(y1, y2) = C(F1(y1), F2(y2)), the corresponding copula is generated using the unique inverse transformations y1 = F1^{-1}(u1) and y2 = F2^{-1}(u2),

    C(u1, u2) = F(F1^{-1}(u1), F2^{-1}(u2)),

where u1 and u2 are standard uniform variates. The same approach can be applied to the survival copula. Using, as before, the notation F̄ for the joint survival function, and F̄1 and F̄2 for the marginal survival functions, the survival copula is given by C̄(u1, u2) = F̄(F̄1^{-1}(u1), F̄2^{-1}(u2)).

3.1.1 Examples of copulas generated by inversion

With a copula-based construction of a joint cdf, a set of marginals are combined to generate a joint cdf. Conversely, given a specification of a joint distribution, we can derive the corresponding unique copula. Consider the following bivariate example from Joe (1997: 13). Beginning with the joint distribution

    F(y1, y2) = exp{−[e^{-y1} + e^{-y2} − (e^{θy1} + e^{θy2})^{-1/θ}]},
        −∞ < y1, y2 < ∞,  θ ≥ 0,                                        (3.1)

the two marginal distributions are derived as

    lim_{y2→∞} F(y1, y2) = F1(y1) = exp(−e^{-y1}) ≡ u1;
    lim_{y1→∞} F(y1, y2) = F2(y2) = exp(−e^{-y2}) ≡ u2;

hence y1 = −log(−log(u1)) and y2 = −log(−log(u2)). After substituting these expressions for y1 and y2 into the distribution function, the copula is

    C(u1, u2) = u1 u2 exp{[(−log u1)^{-θ} + (−log u2)^{-θ}]^{-1/θ}}.

This expression can be rewritten as

    C(u1, u2) = u1 u2 φ^{-1}{[(−φ(u1))^{-θ} + (−φ(u2))^{-θ}]^{-1/θ}},     (3.2)

with φ(t) = log t, which will be seen to be a member of the Archimedean class. Beginning with the three joint distributions given in the second column of Table 3.1 below and following a similar procedure, we can derive the three copulas in the last column.
All three satisfy Properties 1 and 2 above. For Cases 1 and 2, setting θ = 1 yields F(y1, y2) = F1(y1)F2(y2) and C(u1, u2) = u1 u2, which is the case of independence. That is, θ is a parameter that measures dependence. In Joe’s example given above, θ = 0 implies independence and θ > 0 implies dependence. In Case 3 the special case of independence is not possible.

Table 3.1 Selected joint distributions and their copulas.

Case 1.
  Joint distribution: F(y1, y2) = 1 − (e^{-θy1} + e^{-2θy2} − e^{-θ(y1+2y2)})^{1/θ},  θ ≥ 0
  Margins: F1(y1) = 1 − e^{-y1};  F2(y2) = 1 − e^{-2y2}
  Copula: C(u1, u2) = 1 − {(1 − (1 − u2)^θ)(1 − u1)^θ + (1 − u2)^θ}^{1/θ}

Case 2.
  Joint distribution: F(y1, y2) = exp{−(e^{-θy1} + e^{-θy2})^{1/θ}},  −∞ < y1, y2 < ∞,  θ ≥ 1
  Margins: F1(y1) = exp(−e^{-y1});  F2(y2) = exp(−e^{-y2})
  Copula: C(u1, u2) = exp{−[(−ln u1)^θ + (−ln u2)^θ]^{1/θ}}

Case 3.
  Joint distribution: F(y1, y2) = (1 + e^{-y1} + e^{-y2})^{-1}
  Margins: F1(y1) = (1 + e^{-y1})^{-1};  F2(y2) = (1 + e^{-y2})^{-1}
  Copula: C(u1, u2) = u1 u2 /(u1 + u2 − u1 u2)

An unattractive feature of the inversion method is that the joint distribution is required to derive the copula. This limits the usefulness of the inversion method for applications in which the researcher does not know the joint distribution.

3.2 Algebraic Methods

Some derivations of copulas begin with a relationship between marginals based on independence. This relationship is then modified by introducing a dependence parameter, and the corresponding copula is obtained. Nelsen calls this method “algebraic.” Two examples of bivariate distributions derived by applying this method are the Plackett and Ali–Mikhail–Haq distributions. Here we show the derivation for the latter copula.

Example 3 in Table 3.1 is Gumbel’s bivariate logistic distribution, denoted F(y1, y2).
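Before deriving its copula algebraically, note that the inversion recipe of Section 3.1 can be applied to this distribution directly. The sketch below (our illustration; function names are ours, numpy assumed) confirms numerically that inversion recovers the Case 3 copula u1u2/(u1 + u2 − u1u2).

```python
import numpy as np

# Gumbel's bivariate logistic distribution (Case 3 of Table 3.1).
def F(y1, y2):
    return 1.0 / (1.0 + np.exp(-y1) + np.exp(-y2))

def Finv(u):
    # Inverse of the logistic marginal F(y) = (1 + e^{-y})^{-1}.
    return -np.log(1.0/u - 1.0)

def copula_by_inversion(u1, u2):
    # C(u1, u2) = F(F1^{-1}(u1), F2^{-1}(u2)), as in Sklar's theorem.
    return F(Finv(u1), Finv(u2))

def copula_closed_form(u1, u2):
    return u1*u2 / (u1 + u2 - u1*u2)

u = np.linspace(0.05, 0.95, 19)
U1, U2 = np.meshgrid(u, u)
assert np.allclose(copula_by_inversion(U1, U2), copula_closed_form(U1, U2))
```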
Let (1 − F(y1, y2))/F(y1, y2) denote the bivariate survival odds ratio, by analogy with the univariate survival function. Then,

    (1 − F(y1, y2))/F(y1, y2) = e^{-y1} + e^{-y2}
        = (1 − F1(y1))/F1(y1) + (1 − F2(y2))/F2(y2),

where F1(y1) and F2(y2) are univariate marginals. Observe that in this case there is no explicit dependence parameter.

In the case of independence, since F(y1, y2) = F1(y1)F2(y2),

    (1 − F(y1, y2))/F(y1, y2) = (1 − F1(y1)F2(y2))/(F1(y1)F2(y2))
        = (1 − F1(y1))/F1(y1) + (1 − F2(y2))/F2(y2)
          + [(1 − F1(y1))/F1(y1)][(1 − F2(y2))/F2(y2)].

Noting the similarity between the bivariate odds ratio in the dependence and independence cases, Ali, Mikhail, and Haq proposed a modified or generalized bivariate odds ratio with a dependence parameter θ:

    (1 − F(y1, y2))/F(y1, y2) = (1 − F1(y1))/F1(y1) + (1 − F2(y2))/F2(y2)
        + (1 − θ)[(1 − F1(y1))/F1(y1)][(1 − F2(y2))/F2(y2)].

Then, defining u1 = F1(y1), u2 = F2(y2), and following the steps given in the preceding section, we obtain

    (1 − C(u1, u2; θ))/C(u1, u2; θ) = (1 − u1)/u1 + (1 − u2)/u2
        + (1 − θ)[(1 − u1)/u1][(1 − u2)/u2],

whence

    C(u1, u2; θ) = u1 u2 /(1 − θ(1 − u1)(1 − u2)),

which, by introducing an explicit dependence parameter θ, extends the third example in Table 3.1.

3.3 Mixtures and Convex Sums

Given a copula C, its lower and upper bounds CL and CU, and the product copula C⊥, a new copula can be constructed using a convex sum. Since, as was seen earlier, the upper Fréchet bound is always a copula, for constant π1, 0 ≤ π1 ≤ 1, the convex sum of the independence and upper-bound copulas, denoted C^M,

    C^M = π1 C⊥ + (1 − π1) CU                          (3.3)

is also a copula.
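The convex-sum construction in (3.3) is easy to verify numerically. A small sketch (our illustration; function names are ours) checks that the mixture preserves the uniform-margin boundary conditions of a copula.

```python
import numpy as np

def C_indep(u1, u2):          # product copula C-perp
    return u1 * u2

def C_upper(u1, u2):          # upper Frechet bound C_U
    return np.minimum(u1, u2)

def C_M(u1, u2, pi1):
    # Convex sum of the product and upper-bound copulas, Eq. (3.3)-style.
    return pi1 * C_indep(u1, u2) + (1.0 - pi1) * C_upper(u1, u2)

t = np.linspace(0.0, 1.0, 101)
# Boundary conditions of a copula are preserved under the convex sum:
assert np.allclose(C_M(t, np.ones_like(t), 0.3), t)      # C(u, 1) = u
assert np.allclose(C_M(np.zeros_like(t), t, 0.3), 0.0)   # C(0, v) = 0
```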
This mixture copula is a special case of the class of Fréchet copulas, denoted C^F, defined as

    C^F = π1 CL + (1 − π1 − π2) C⊥ + π2 CU,            (3.4)

where 0 ≤ π1, π2 ≤ 1, and π1 + π2 ≤ 1.

A closely related idea considers copulas derived by averaging over an infinite collection of copulas indexed by a continuous variable η with a distribution function Λθ(η) with parameter θ. Specifically, the copula is obtained from the integral

    Cθ(u1, u2) = Eη[Cη(u1, u2)] = ∫_{R(η)} Cη(u1, u2) dΛθ(η).

This algebraic operation is usually referred to as mixing (with respect to η), leading to the mixture Cθ(u1, u2), which is also referred to as a convex sum (Nelsen, 2006).

Similarly, Marshall and Olkin (1988) consider the mixture

    H(y) = ∫ [F(y)]^η dΛ(η),  η > 0.                   (3.5)

They show that for any specified pair {H(y), Λ(η)}, Λ(0) = 1, there exists F(y) for which Eq. (3.5) holds. The right-hand side can be written as φ[−ln F(y)], where φ is the Laplace transform of Λ, so F(y) = exp[−φ^{-1}(H(y))].

A well-known example from Marshall and Olkin (1988) illustrates how convex sums or mixtures lead to copulas constructed from Laplace transforms of distribution functions. Let φ(t) denote the Laplace transform of a positive random (latent) variable η, also referred to as the mixing distribution Λ, i.e., φ(t) = ∫_0^∞ e^{-ηt} dΛ(η). This is the moment generating function evaluated at −t. An inverse Laplace transform is an example of a generator. By definition, the Laplace transform of a positive random variable η is

    L(t) = Eη[e^{-tη}] = ∫ e^{-ts} dFη(s) = φ(t),

hence φ^{[-1]}(L(t)) = t. L(0) = 1; L(t) is decreasing in t and always exists for t ≥ 0.

Let F1(y1) = exp[−φ^{-1}(H1(y1))] and F2(y2) = exp[−φ^{-1}(H2(y2))] be some benchmark distribution functions for y1 and y2. Let the conditional distributions given a random variable η, η > 0, be F1(y1|η) = [F1(y1)]^η and F2(y2|η) = [F2(y2)]^η.
Then the mixture distribution

    H(y1, y2; θ) = ∫_0^∞ [F1(y1)]^η [F2(y2)]^η dΛθ(η)                      (3.6)
        = ∫_0^∞ exp[−η(φ^{-1}(H1(y1)) + φ^{-1}(H2(y2)))] dΛθ(η)
        = φ[φ^{-1}(H1(y1)) + φ^{-1}(H2(y2)); θ]

(where the last line follows from the definition of the Laplace transform) is shown by Marshall and Olkin to be the joint distribution of (y1, y2), and it is also an (Archimedean) copula. H1(y1) and H2(y2) are the marginal distributions of H(y1, y2; θ). Observe that the copula involves the parameter θ, which measures dependence.

This method is often referred to as the “Marshall–Olkin method,” and Joe (1997) refers to it as the mixture of powers method. An interpretation of this method is that it introduces an unobserved heterogeneity term η in the marginals. This is also referred to as “frailty” in the biostatistics literature. In the bivariate example given above, the same term enters both marginals. The distribution function of η depends upon an unknown parameter θ, which controls the dependence between y1 and y2 in their joint distribution.

The approach can be used more generally to derive higher dimensional Archimedean copulas. There are at least two possible variants, one in which the unobserved heterogeneity is common to all marginals, and another in which this term differs between marginals but the terms are jointly dependent. That is, the scalar η is replaced by a vector whose components (say η1 and η2) have a joint distribution, with one or more parameters that characterize dependence.

3.3.1 Examples

We give three examples of copulas generated by mixtures.

Dependence between stock indexes. Hu (2004) studies the dependence of monthly returns between four stock indexes: S&P 500, FTSE, Nikkei, and Hang Seng. She uses monthly averages from January 1970 to September 2003.
She models dependence on a pair-wise basis using a finite mixture of three copulas (Gaussian (C_Gauss), Gumbel (C_Gumbel), and Gumbel-survival (C_GS)):

    C_mix(u, v; ρ, α, θ) = π1 C_Gauss(u, v; ρ) + π2 C_Gumbel(u, v; α)
        + (1 − π1 − π2) C_GS(u, v; θ).

Such a mixture imparts additional flexibility and also allows one to capture left and/or right tail dependence. Hu uses a two-step semiparametric approach in which empirical CDFs are used to model the marginals and maximum likelihood is used to estimate the dependence parameters ρ, α, and θ. Note that pairwise modeling of dependence can be potentially misleading if dependence is more appropriately captured by a higher dimensional model.

Dependence in lifetime data. Marshall and Olkin (1967) introduce the common shock model, in which the occurrence of a shock or disaster induces dependencies between otherwise independent lifetimes. Many of the earliest empirical applications to use copulas were in the area of survival analysis. An early example was Clayton (1978) who, in his study of joint lifetimes of fathers and sons, derives the joint survival function with shared frailty described by the gamma distribution. Dependence of lifetimes on pairs of individuals (father–son, husband–wife, twins) is well established. Oakes (1982) presents similar results in a random effects setting, and thus joint survival models of this form are often referred to as Clayton–Oakes models. Clayton and Cuzick (1985) develop an EM algorithm for estimating these models. In related work, Hougaard (1986) models joint survival times of groups of rats inflicted with tumors. Recent frailty models focus on joint survival times of family members, with particular interest devoted to annuity valuation. Frees et al. (1996) study joint- and last-survivor annuity contracts using Frank’s copula.
In a recent application, Wang (2003) studies survival times of bone marrow and heart transplant patients.

Many of these studies are based on the following specification. Consider the Cox proportional hazard regression model with the hazard rate h(t|x) = exp(x′β)h0(t), where h0 denotes the baseline hazard. Let η, η > 0, represent frailty (or unobserved heterogeneity) and denote the conditional survival function as S(t|η). Then

    S(t|η) = exp[−H(t)]^η,

where H(t) denotes the integrated baseline hazard. Let T1 and T2 denote lifetimes that are independent conditional on η. That is,

    Pr[T1 > t1, T2 > t2 | η] = Pr[T1 > t1 | η] Pr[T2 > t2 | η]
        = S1(t1|η) S2(t2|η)
        = (exp(−H1(t1)))^η (exp(−H2(t2)))^η.

Integrating out η yields the bivariate distribution

    Pr[T1 > t1, T2 > t2] = Eη[(exp(−H1(t1)))^η (exp(−H2(t2)))^η].

Different joint distributions are generated using different distributional assumptions about the conditionals and η. Hougaard et al. (1992) analyze joint survival of Danish twins born between 1881 and 1930, assuming Weibull conditionals and the assumption that η follows a positive stable distribution. Heckman and Honoré (1989) study the competing risks model.

Default and Claims Analysis. Financial analyses of credit risks are concerned with the possibility that a business may default on some financial obligation. If several firms or businesses are involved, each may be subject to a stochastic default arrival process. One way to model this is to consider time to default as a continuous random variable that can be analyzed using models for lifetime data, e.g., the Cox proportional hazard model. However, firms and businesses may face some common risks and shocks that create a potential for dependence in the distributions of time to default.
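A minimal simulation sketch of this shared-frailty mechanism (our illustration; the gamma frailty distribution and parameter value are assumptions chosen for concreteness — with gamma mixing, the implied copula is Clayton):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0                       # Clayton dependence parameter (assumed)
n = 20000

# Shared frailty eta ~ Gamma(1/theta, 1); its Laplace transform is
# phi(t) = (1 + t)^(-1/theta), the inverse Clayton generator.
eta = rng.gamma(shape=1.0/theta, scale=1.0, size=n)
e1 = rng.exponential(size=n)
e2 = rng.exponential(size=n)

# Conditionally independent given eta (Marshall-Olkin sampling):
u1 = (1.0 + e1/eta)**(-1.0/theta)
u2 = (1.0 + e2/eta)**(-1.0/theta)

def clayton(a, b):
    return (a**(-theta) + b**(-theta) - 1.0)**(-1.0/theta)

# Empirical joint probability at a point vs. the Clayton closed form.
emp = np.mean((u1 <= 0.3) & (u2 <= 0.6))
assert abs(emp - clayton(0.3, 0.6)) < 0.02
```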
Copulas can be used to model dependent defaults. Li (2000) proposes the use of the Gaussian copula to analyze the joint distribution of m default times, denoted T1, ..., Tm. The joint survival distribution can be expressed as a copula using Sklar’s theorem:

    S(t1, ..., tm) = Pr[T1 > t1, ..., Tm > tm]
        = C̄(S1(t1), ..., Sm(tm)),

where C̄ denotes the survival copula; see Section 2.2.

Suppose a random claims variable Y is exponentially distributed, conditional on a risk class ν,

    Pr[Y ≤ y] = 1 − e^{-νy},

and let the risk class parameter ν be a gamma distributed variable with parameters (α, λ). Then the marginal distribution function for y is the Pareto distribution F(y) = 1 − (1 + y/λ)^{-α}. For Y1 and Y2 in the same risk class, the joint distribution is

    F(y1, y2) = 1 − Pr[Y1 > y1] − Pr[Y2 > y2] + Pr[Y1 > y1, Y2 > y2]
        = F1(y1) + F2(y2) − 1 + [(1 − F1(y1))^{-1/α}
          + (1 − F2(y2))^{-1/α} − 1]^{-α}.                          (3.7)

Observe that the right-hand side is now a function of marginals and can be expressed as a copula. The right-hand side can also be expressed in terms of the marginal survival functions S1(y1) = Pr[Y1 > y1] = 1 − F1(y1) and S2(y2) = Pr[Y2 > y2] = 1 − F2(y2).

3.4 Archimedean copulas

A particular group of copulas that has proved useful in empirical modeling is the Archimedean class. Several members of this class have already been introduced above. Archimedean copulas are popular because they are easily derived and are capable of capturing wide ranges of dependence.
This section discusses further aspects of Archimedean copulas and their properties, presents examples of popular Archimedean copulas, and illustrates their dependence characteristics.

3.4.1 Some properties of Archimedean copulas

Consider a class Φ of functions φ : [0, 1] → [0, ∞] with continuous derivatives on (0, 1) and the properties φ(1) = 0, φ′(t) < 0 (decreasing), and φ″(t) > 0 (convex) for all 0 < t < 1; i.e., φ(t) is a convex decreasing function. Further, let φ(0) = ∞ in the sense that lim_{t→0+} φ(t) = ∞. These conditions ensure that an inverse φ^{-1} exists. Any function φ that satisfies these properties is capable of generating a bivariate distribution function; thus φ is referred to as a “generator function.” For example, φ(t) = −ln(t), φ(t) = (1 − t)^θ, and φ(t) = t^{-θ} − 1, θ > 1, are members of the class. If φ(0) = ∞, the generator is said to be strict and its inverse exists. For strict generators, C(u1, u2) > 0 except when u1 = 0 or u2 = 0. If φ(0) < ∞, the generator is not strict and its pseudo-inverse exists. In this case the copula has a singular component and takes the form C(u1, u2) = max[(·), 0]. For example, consider φ(t) = (1 − t)^θ, θ ∈ [1, ∞); this generates the copula C(u1, u2) = max[1 − [(1 − u1)^θ + (1 − u2)^θ]^{1/θ}, 0]. For a comparison of strict and non-strict generators, see Nelsen (2006: 113).

The inverse of the generator is written as φ^{-1} and its pseudo-inverse is written as φ^{[-1]}. The formal definition is

    φ^{[-1]}(t) = φ^{-1}(t)   for 0 ≤ t ≤ φ(0),
    φ^{[-1]}(t) = 0           for φ(0) ≤ t ≤ +∞,

and φ^{[-1]}(φ(t)) = t.

Bivariate Archimedean copulas without a singular component take the form

    C(u1, u2; θ) = φ^{-1}(φ(u1) + φ(u2)),               (3.8)

where the dependence parameter θ is embedded in the functional form of the generator.
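The construction in (3.8) is easy to operationalize. The sketch below (our illustration, assuming numpy/scipy) builds a copula from a generator and its inverse, checks it against the closed-form Frank copula, and computes Kendall's tau from the generator via the identity τ = 1 + 4∫₀¹ φ(t)/φ′(t) dt, which appears as Eq. (3.11) below.

```python
import numpy as np
from scipy.integrate import quad

def archimedean(phi, phi_inv):
    # Generic bivariate Archimedean copula C = phi^{-1}(phi(u1) + phi(u2)).
    def C(u1, u2):
        return phi_inv(phi(u1) + phi(u2))
    return C

theta = 3.0                                   # illustrative value

# Frank generator and its inverse.
phi = lambda t: -np.log((np.exp(-theta*t) - 1.0) / (np.exp(-theta) - 1.0))
phi_inv = lambda s: -np.log1p(np.exp(-s) * (np.exp(-theta) - 1.0)) / theta

frank_gen = archimedean(phi, phi_inv)

def frank_closed(u1, u2):
    return -np.log1p((np.expm1(-theta*u1) * np.expm1(-theta*u2))
                     / np.expm1(-theta)) / theta

u = np.linspace(0.05, 0.95, 19)
U1, U2 = np.meshgrid(u, u)
assert np.allclose(frank_gen(U1, U2), frank_closed(U1, U2))

# Kendall's tau from the Clayton generator: tau = theta/(theta + 2).
phi_c = lambda t: (t**(-theta) - 1.0) / theta
dphi_c = lambda t: -t**(-theta - 1.0)
tau = 1.0 + 4.0 * quad(lambda t: phi_c(t) / dphi_c(t), 0.0, 1.0)[0]
assert abs(tau - theta/(theta + 2.0)) < 1e-6
```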
Archimedean copulas are symmetric, in the sense C(u1, u2) = C(u2, u1), and associative, in the sense C(C(u1, u2), u3) = C(u1, C(u2, u3)).

The density of the bivariate Archimedean copula is

    c(u1, u2) = −φ″(C(u1, u2)) φ′(u1) φ′(u2) / [φ′(C(u1, u2))]^3,      (3.9)

where the derivatives do not exist on the boundary φ(u1) + φ(u2) = φ(0); see Genest and Mackay (1986) for a derivation.

The conditional distribution of the Archimedean copula is

    ∂C(u1, u2)/∂u2 = φ′(u2)/φ′(C(u1, u2)),                (3.10)

which is obtained by differentiating (3.8) with respect to u2 and rearranging the result.

Different generator functions yield different Archimedean copulas when plugged into Eq. (3.8). For example, consider the generator φ(t) = −ln(t), 0 ≤ t ≤ 1; then φ(0) = ∞ and φ^{[-1]}(t) = exp(−t), and (3.8) reduces to C(u1, u2) = u1 u2, the product copula. Consider the generator φ(t; θ) = ln(1 − θ ln t), 0 ≤ θ ≤ 1; then φ^{[-1]}(t) = exp((1 − e^t)/θ), and (3.8) reduces to C(u1, u2; θ) = u1 u2 exp(−θ ln(u1) ln(u2)). For econometric modeling it is not clear that non-strict generators have any specific advantages. Some authors only use strict generators, e.g., see Smith (2005).

The properties of the generator affect tail dependency of the Archimedean copula. If φ′(0) < ∞ and φ′(0) ≠ 0, then C(u1, u2) does not have the RTD property. If C(·) has the RTD property, then 1/φ′(0) = −∞. See Cherubini et al. (2004, Theorem 3.12). The Marshall and Olkin (1988) results given in the preceding section show that Archimedean copulas are easily generated using inverse Laplace transformations.
Since Laplace transformations have well-defined inverses, φ^{-1} serves as a generator function.

Quantifying dependence is relatively straightforward for Archimedean copulas because Kendall’s tau simplifies to a function of the generator,

    τ = 1 + 4 ∫_0^1 [φ(t)/φ′(t)] dt;                     (3.11)

see Genest and Mackay (1986) for a derivation.

3.4.2 Archimedean copulas extended by transformations

In some cases additional flexibility arising from a second parameter in the copula may be empirically useful. The method of transformations has been suggested as a way of attaining such flexibility; see Durrleman et al. (2000) and Junker and May (2005). Additional flexibility results from the presence of a free parameter in the transformation function. The essential step is to find valid new generators, and this can be accomplished using transformations.

Junker and May (2005) prove the following result. Let φ be a generator and g : [0, 1] → [0, 1] be a strictly increasing concave function with g(1) = 1. Then φ ∘ g is a generator. Let f : [0, ∞] → [0, ∞] be a strictly increasing convex function with f(0) = 0; then f ∘ φ is a generator. Here f and g are transformations applied to the original generator φ. Examples of transformations given by Junker and May, some of which have previously appeared in the literature, include:

    g(t) = t^ν,                           ν ∈ (0, 1)
    g(t) = ln(at + 1)/ln(a + 1),          a ∈ (0, ∞)
    g(t) = (e^{-θt} − 1)/(e^{-θ} − 1),    θ ∈ (−∞, ∞)
    f(φ) = φ^δ,                           δ ∈ (1, ∞)
    f(φ) = a^φ − 1,                       a ∈ (1, ∞)
    f(φ) = a^{-φ} − 1,                    a ∈ (0, 1).

As an example, the Frank copula is derived from the generator φ(t) = −log[(e^{-θt} − 1)/(e^{-θ} − 1)]. This copula can be extended by using the transformed generator [φ(t)]^δ, so that the Frank copula is a special case when δ = 1.

3.4.3 Examples of Archimedean copulas

The appeal of Archimedean copulas and the reason for their popularity in empirical applications is that Eq.
(3.8) produces wide ranges\nof dependence properties for different choices of the generator function. Archimedean copulas are also relatively easy to estimate. There\nare dozens of existing Archimedean copulas (see Hutchinson and Lai\n(1990) for a fairly exhaustive list), and infinitely more that could be\ndeveloped (if some assumptions are relaxed). Table 3.2 lists three that\nappear regularly in statistics literature: Frank, Clayton, and Gumbel.\nFor Clayton, two cases are listed, corresponding to strict and nonstrict generators, the latter with analytic continuation at the origin,\nwhich makes it comprehensive. These three copulas, as discussed in\nSection 2.3, are popular because they accommodate different patterns\nof dependence and have relatively straightforward functional forms.\nTable 3.3 shows several copula densities.\n3.5\nExtensions of Bivariate Copulas\nWith a few exceptions, copulas are usually applied to bivariate data,\nand only one dependence parameter is estimated. We briefly discuss\nmethods for estimating systems with three or more dependent variables,\nand copulas with more than one dependence parameter.\nIdeally, an m-variate copula would have m(m − 1)/2 dependence\nparameters, one for each bivariate marginal. The most obvious choice\nis the Gaussian copula discussed by Lee (1983), which can be extended\nto include additional marginal distributions. The covariance structure\nof the multivariate normal distribution has m(m − 1)/2 dependence\nparameters. However, implementing a multivariate normal copula\nrequires calculation of multiple integrals without closed form solutions,\nwhich must be approximated numerically. 
Husler and Reiss (1989),\nJoe (1990), and Joe (1994) propose multivariate copulas with flexible\nGumbel\nFrank\nClayton\nClayton\nθ−1 (t−θ − 1) (nonstrict)\n−θ\n−1/θ\n[max(u−θ\n1 + u2 − 1, 0)]\n(e−θ u1 − 1)(e−θ u2 − 1)\n− θ1 log 1 +\n−θ − 1\ne\nexp −(ũθ1 + u\neθ2 )1/θ\nwhere ũj = − log uj\n(− log t)θ\n− log (e−θt − 1)/(e−θ − 1)\nϕ(t)\nθ−1 (t−θ − 1) (strict)\nC(u1, u2 ; θ)\n−θ\n−1/θ\n(u−θ\n1 + u2 − 1)\nTable 3.2 Selected Archimedean copulas and their generators.\n[1, ∞)\n(−∞, ∞)\n(−1, ∞)\\{0}\nRange of θ\n(0, ∞)\n46\nGenerating Copulas\nGumbel\nFrank\nClayton\nGaussian\nCopula\nFGM\n−2−1/θ\n−θ\n(1 + θ) (u1 u2 )−θ−1 u−θ\n1 +u2 −1\n−1/θ\n−θ\nu−θ\n1 + u2 − 1\n(e−θu1 − 1)(e−θu2 − 1)\n− θ1 log 1 +\n−θ\ne\n−1\n1/θ exp − u\nf1 θ +u\nf2 θ\nff\nf f\n( u1 +u2 )\nwhere u\nf1 = − ln u1 , and u\nf2 = − ln u2\n((e−θu1\n−θ(e−θ − 1)e−θ(u1 +u2 )\n− 1)(e−θu2 − 1) + (eθ − 1))2\n1/θ\n(u1 u2 )θ−1\nθ\nθ\nC(u1 , u2 )(u1 u2 )−1\nu\nf\n+\nu\nf\n+\nθ\n−\n1\n,\n1\n2\nθ\nθ 2−1/θ\nC12 (u1 , u2 )\n1 + θ(1 − 2u1 )(1 − 2u2 )\nn\no\n(1 − θ2 )−1/2 exp − 21 (1 − θ2 )−1 x2 +y 2 −2θxy\n× exp 12 x2 +y 2\nwhere x = Φ−1 (u1 ), y = Φ−1 (u2 )\nC(u1 , u2 )\nu1 u2 (1 + θ(1 − u1 )(1 − u2 ))\nΦG Φ−1 (u1 ), Φ−1 (u2 ); θ\nTable 3.3 Selected copula densities.\n3.5. Extensions of Bivariate Copulas\n47\n48\nGenerating Copulas\ndependence, but their models either require Monte Carlo numerical\nintegration or they perform poorly in empirical applications.\n3.5.1\nMultivariate extensions using Marshall–Olkin’s\nresults\nConsider an extension of the Marshall–Olkin method to dimensions\n> 2. The following statement follows Marshall and Olkin (1988) Theorem 2.1.\nLet H1 , . . . , Hm be univariate distribution functions, and let Λ be\nan m-variate distribution function such that Λ(0, . . . , 0) = 1 with univariate marginals Λ1 , . . . , Λm . Denote the Laplace transform of Λ and\nΛi , respectively, by ϕ and ϕi (i = 1, . . . , m). 
Let K be an m-variate distribution function with all marginals uniform on [0, 1]. If Fi(y) = exp[−φi^{-1}(Hi(y))], then

    H(y1, ..., ym) = ∫ ··· ∫ K([F1(y1)]^{η1}, ..., [Fm(ym)]^{ηm}) dΛ(η1, ..., ηm)

is an m-dimensional distribution function with marginals H1, ..., Hm.

A special case of this theorem occurs when K([F1(y1)]^{η1}, ..., [Fm(ym)]^{ηm}) = ∏_{i=1}^{m} [Fi(yi)]^{ηi} and the yi (i = 1, ..., m) are uniform [0, 1] variates. In this case the application of the theorem yields the m-dimensional copula

    H(y1, ..., ym) = φ[φ1^{-1}(H1(y1)), ..., φm^{-1}(Hm(ym))].

A further specialization occurs if we assume, in addition, that all univariate marginals Λ1, ..., Λm are identical and that their joint distribution Λ is the upper Fréchet–Hoeffding bound, with common Laplace transform φ. In this case the joint distribution function is

    H(y1, ..., ym) = φ[φ^{-1}(H1(y1)) + ··· + φ^{-1}(Hm(ym))],

which extends the Archimedean form to m dimensions. This result also extends to the case in which the marginals are univariate survival functions. Thus, the extension to higher dimensions requires the assumption that all conditional distributions depend on a common random variable which enters the conditional as a power term. Then a single parameter will characterize dependence between all pairs of variables. This is clearly restrictive in the context of fitting copulas to data.

Different choices of Hi and Λ lead to different generators φ and to different copulas. Marshall and Olkin (1988) give examples covering five alternative sets of assumptions. Genest and Mackay (1986) explain that, for the case of m = 2, φ can be a function different from a Laplace transform. However, for extension to higher dimensions, due to a result of Schweizer and Sklar (1983) (see Marshall and Olkin, 1988: 835), the generator must be proportional to a Laplace transform.
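A numerical sketch of the m-dimensional Archimedean form (our illustration, using the Gumbel Laplace transform φ(s) = exp(−s^{1/θ}); names and values are assumptions):

```python
import numpy as np

theta = 2.0     # single dependence parameter shared by all pairs

# For Gumbel, phi(s) = exp(-s^(1/theta)) and phi^{-1}(u) = (-log u)^theta.
phi = lambda s: np.exp(-s**(1.0/theta))
phi_inv = lambda u: (-np.log(u))**theta

def gumbel_m(*u):
    # m-dimensional Archimedean copula: phi(sum of phi^{-1}(u_i)).
    return phi(sum(phi_inv(ui) for ui in u))

t = np.linspace(0.05, 0.95, 19)
# Setting all but one argument to 1 recovers a uniform margin ...
assert np.allclose(gumbel_m(t, np.ones_like(t), np.ones_like(t)), t)
# ... and the trivariate version nests the bivariate Gumbel copula.
biv = np.exp(-(((-np.log(0.3))**theta + (-np.log(0.7))**theta))**(1.0/theta))
assert np.isclose(gumbel_m(0.3, 0.7, 1.0), biv)
```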
Thus, from the viewpoint of one who wants to fit copulas to data, the use of the mixtures method for the m > 2 case involves significant restrictions.

3.5.2 Method of nesting

Archimedean copulas can be extended to include additional marginal distributions. Focusing on the trivariate case, the easiest method by which to include a third marginal is

    C(u1, u2, u3) = φ(φ^{-1}(u1) + φ^{-1}(u2) + φ^{-1}(u3)).         (3.12)

This construction can be readily used in empirical applications, but it is restrictive because it is necessary to assume that φ^{-1} is completely monotonic (Cherubini et al., 2004: 149), and because the specification implies symmetric dependence between the three pairs (u1, u2), (u2, u3), and (u1, u3), due to having a single dependence parameter. This restriction becomes more onerous as the number of marginals increases. It is not possible to model separately the dependence between all pairs.

The functional form of an Archimedean copula will be recognized by those familiar with the theory of separable and additively separable functions. A function f(u1, u2, ..., um) is (weakly) separable if it can be written as

    f(u1, u2, ..., um) = φ{φ^1(u_1), ..., φ^Q(u_Q)},

and additively separable if it can be written as

    f(u1, u2, ..., um) = φ{Σ_{q=1}^{Q} φ^q(u_q)},

where (u_1, ..., u_Q) is a separation of the set of variables (u1, u2, ..., um) into Q nonoverlapping groups. A function may be separable but not additively separable. Many Archimedean copulas are additively separable. Under separability, variables can be nested. For example, if m = 3, then the following groupings are possible: (u1, u2, u3; θ); (u1, [u2, u3; θ2]; θ1); (u2, [u1, u3; θ2]; θ1); (u3, [u1, u2; θ2]; θ1). When fitting copulas to data, the alternative groupings have different implications and interpretations. Presumably each grouping is justified by some set of assumptions about dependence.
The first grouping restricts the dependence parameter θ to be the same for all pairs. The remaining three groupings allow for two dependence parameters: one, θ1, for a pair, and a second, θ2, for dependence between the singleton and the pair. The existence of generators φ^q that lead to flexible forms of Archimedean copulas seems to be an open question. Certain types of extensions to multivariate copulas are not possible. For example, Genest et al. (1995) considered a copula C such that

H(x1, x2, ..., xm, y1, y2, ..., yn) = C(F(x1, x2, ..., xm), G(y1, y2, ..., yn))

defines an (m + n)-dimensional distribution function with marginals F and G, m + n ≥ 3. They found that the only copula consistent with these marginals is the independence copula. Multivariate Archimedean copulas with a single dependence parameter can be obtained if restrictions are placed on the generator. For multivariate generalizations of Gumbel, Frank, and Clayton, see Cherubini et al. (2004: 150–151). Copula densities for dimensions higher than 2 are tedious to derive; however, Cherubini et al. (2004, Section 7.5) give a general expression for the Clayton copula density, and for the Frank copula density for the special case of four variables.

We exploit the mixtures of powers method to extend Archimedean copulas to include a third marginal. For a more detailed exposition of this method, see Joe (1997, ch. 5) and Zimmer and Trivedi (2006). The trivariate mixtures of powers representation is

C(u1, u2, u3) = ∫_0^∞ ∫_0^∞ [G(u1)]^β [G(u2)]^β dM2(β; α) [G(u3)]^α dM1(α),    (3.13)

where G(u1) = exp(−φ^{-1}(u1)), G(u2) = exp(−φ^{-1}(u2)), G(u3) = exp(−ϕ^{-1}(u3)), and ϕ is a Laplace transform. In this formulation, the power term α affects u1, u2, and u3, and a second power term β affects u1 and u2.
The distribution M1 has Laplace transform ϕ(·), and M2 has Laplace transform (ϕ^{-1} ∘ φ)^{-1}(−α^{-1} log(·)). When φ = ϕ, expression (3.13) simplifies to expression (3.12). (The mathematical notation f ∘ g denotes the functional operation f(g(x)).) When φ ≠ ϕ, the trivariate extension of (3.8) corresponding to (3.13) is

C(u1, u2, u3) = ϕ((ϕ^{-1} ∘ φ)[φ^{-1}(u1) + φ^{-1}(u2)] + ϕ^{-1}(u3)).    (3.14)

Therefore, different Laplace transforms produce different families of trivariate copulas.

Expression (3.12) has symmetric dependence in the sense that it produces one dependence parameter θ = θ_{u1u2} = θ_{u1u3} = θ_{u2u3}. But the dependence properties of three different marginals are rarely symmetric in empirical applications. The trivariate representation of expression (3.14) is symmetric with respect to (u1, u2) but not with respect to u3. Therefore, (3.14) is less restrictive than (3.12).

The partially symmetric formulation of expression (3.14) yields two dependence parameters, θ1 and θ2, such that θ1 ≤ θ2. The parameter θ2 = θ_{u1u2} measures dependence between u1 and u2. The parameter θ1 = θ_{u1u3} = θ_{u2u3} measures dependence between u1 and u3 as well as between u2 and u3, and the two must be equal. Distributions of more than three dimensions also have mixtures of powers representations, but this technique yields only m − 1 dependence parameters for an m-variate distribution function. Therefore, the mixtures of powers approach is more restrictive in higher dimensions. While this restriction constitutes a potential weakness of the approach, it is still less restrictive than formulation (3.12), which yields only one dependence parameter. Moreover, the multivariate representation in Eq. (3.14) allows a researcher to explore several dependence patterns by changing the ordering of the marginals.
For example, instead of (u1, u2, u3), one could order the marginals (u3, u2, u1), which provides a different interpretation for the two dependence parameters.

As an example, we demonstrate how the Frank copula is extended to include a third marginal. With Frank generators ϕ and φ, parameterized by θ1 and θ2, respectively, the composition ϕ^{-1} ∘ φ introduces the power θ1/θ2, and expression (3.14) becomes

C(u1, u2, u3; θ1, θ2) = −θ1^{-1} log{1 − c1^{-1}(1 − e^{−θ1 u3})(1 − [1 − c2^{-1}(1 − e^{−θ2 u1})(1 − e^{−θ2 u2})]^{θ1/θ2})},    (3.15)

where θ1 ≤ θ2, c1 = 1 − e^{−θ1}, and c2 = 1 − e^{−θ2}. (The proof is complicated; see Joe (1993).) Despite the ability of some bivariate Archimedean copulas to accommodate negative dependence, trivariate Archimedean copulas derived from mixtures of powers restrict θ1 and θ2 to be greater than zero, which implies positive dependence. This reflects an important property of mixtures of powers: in order for the integrals in Eq. (3.13) to have closed form solutions, the power terms α and β, which are imbedded within θ1 and θ2, must be positive.

Trivariate example. Zimmer and Trivedi (2006) develop a trivariate model in which y1 and y2 are counted measures of health care use by two spouses and y3 is a dichotomous variable of insurance status. One group consists of couples who are insured jointly, and a second group consists of married couples where wife and husband are separately insured. The interest is in determining the impact of y3 on y1 and y2. They exploit the mixtures of powers representation to extend the (Frank) copula to include a third marginal. Specifically, they use (3.14), which is somewhat less restrictive than the trivariate Archimedean copula φ(φ^{-1}(u1) + φ^{-1}(u2) + φ^{-1}(u3)). The latter has symmetric dependence in the sense that it produces one dependence parameter for all pairs of marginals.
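For concreteness, the partially symmetric trivariate Frank copula (3.15) can be coded directly. The sketch below is our own illustration (function names hypothetical); evaluating it at boundary arguments verifies numerically that each univariate margin is uniform:

```python
import math

def frank_trivariate(u1, u2, u3, t1, t2):
    """Partially symmetric trivariate Frank copula, eq. (3.15); requires 0 < t1 <= t2.
    (u1, u2) share the stronger dependence t2; u3 is tied to the pair through t1."""
    c1 = 1.0 - math.exp(-t1)
    c2 = 1.0 - math.exp(-t2)
    # inner bracket: bivariate Frank coupling of (u1, u2) on the e^{-theta2 u} scale
    inner = 1.0 - (1.0 - math.exp(-t2 * u1)) * (1.0 - math.exp(-t2 * u2)) / c2
    return -math.log(1.0 - (1.0 - math.exp(-t1 * u3))
                     * (1.0 - inner ** (t1 / t2)) / c1) / t1
```

Setting any two arguments to 1 returns the remaining argument, as a copula's uniform margins require; setting u1 or u3 to 0 returns 0.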
The nested Archimedean form is symmetric with respect to (u1, u2), but not with respect to u3.

3.5.3 Copulas with two dependence parameters

Although they are rarely used in empirical applications, it is possible to construct Archimedean copulas with two dependence parameters, each of which measures a different dependence feature. For example, one parameter might measure left tail dependence while the other might measure right tail dependence. The bivariate Student's t-distribution was mentioned earlier as an example of a two-parameter copula. Transformation copulas in Section 3.4 are a second example. Two-parameter Archimedean copulas take the form

C(u1, u2) = ϕ(−log K(e^{−ϕ^{-1}(u1)}, e^{−ϕ^{-1}(u2)})),    (3.16)

where K is max-id and ϕ is a Laplace transform. If K assumes an Archimedean form, and if K has dependence parameter θ1, and ϕ is parameterized by θ2, then C(u1, u2; θ1, θ2) assumes an Archimedean form with two dependence parameters. Joe (1997: 149–154) discusses this calculation in more detail. As an example, we present one Archimedean copula with two parameters, and we direct the interested reader to Joe (1997) for more examples.

If K assumes the Gumbel form, then (3.16) takes the form

C(u1, u2; θ1, θ2) = ϕ(ϕ^{-1}(u1) + ϕ^{-1}(u2)),

where ϕ(t) = (1 + t^{1/θ1})^{−1/θ2}, θ2 > 0 and θ1 ≥ 1. For this copula, 2^{−1/(θ1 θ2)} measures lower tail dependence and 2 − 2^{1/θ1} captures upper tail dependence.

4 Copula Estimation

Copulas are used to model dependence when the marginal distributions are conditional on covariates, i.e., they have a regression structure, and also when they are not. Sometimes the investigator's main interest is in efficient estimation of regression parameters and only incidentally in the dependence parameter.
In models which do not involve covariates, the main interest is, of course, in the nature of dependence.

Simultaneous estimation of all parameters using the full maximum likelihood (FML) approach is the most direct estimation method. A second method is a sequential two-step maximum likelihood method (TSML) in which the marginals are estimated in the first step and the dependence parameter is estimated in the second step using the copula after the estimated marginal distributions have been substituted into it. This method exploits an attractive feature of copulas: the dependence structure is separate from the marginal distributions. This second method has additional variants depending upon whether the first step is implemented parametrically or nonparametrically, and on the method used to estimate the variance of the dependence parameter(s) at the second stage. A third method that is in principle feasible, but not yet widely used in practice, is to estimate the parameters using the generalized method of moments (GMM); this, however, requires one to first derive the moment functions. (See Prokhorov and Schmidt, 2006, for a discussion of theoretical issues related to copula estimation.) In the remainder of this section we will concentrate on the FML and TSML methods.

In what follows we will treat the case of copulas of continuous variables as the leading case. For generality we consider copulas in which each marginal distribution, denoted Fj(yj|xj; βj), j = 1, ..., m, is conditioned on a vector of covariates denoted xj. In most cases, unless specified otherwise, m = 2 and Fj is parametrically specified. In most cases we will assume that the dependence parameter, denoted θ, is a scalar.

Sections 4.1 and 4.2 cover the full maximum likelihood and the two-step sequential maximum likelihood methods. Section 4.3 covers model evaluation and selection.
Monte Carlo examples and real data examples are given in Sections 4.4 and 4.5.

4.1 Copula Likelihoods

Having chosen a copula, consider the derivation of the likelihood for the special case of a bivariate model with uncensored failure times (y1, y2). Denote the marginal density functions as fj(yj|xj; βj) = ∂Fj(yj|xj; βj)/∂yj and the copula derivatives as Cj((F1|x1; β1), (F2|x2; β2); θ) = ∂C/∂Fj for j = 1, 2. Then the copula density is

c(F1(·), F2(·)) = d²C(F1(·), F2(·))/(dy2 dy1) = C12(F1(·), F2(·)) f1(·) f2(·),    (4.1)

where

C12((F1|x1; β1), (F2|x2; β2); θ) = ∂²C((F1|x1; β1), (F2|x2; β2); θ)/(∂F1 ∂F2),    (4.2)

and the log-likelihood function is

LN((y1|x1; β1), (y2|x2; β2); θ) = Σ_{i=1}^N Σ_{j=1}^2 ln fj(yji|xji; βj) + Σ_{i=1}^N ln C12[F1(y1i|x1i; β1), F2(y2i|x2i; β2); θ].    (4.3)

The cross partial derivatives C12(·) for several copulas are listed below. It is easy to see that the log-likelihood decomposes into two parts, of which only the second involves the dependence parameter:

LN(β1, β2, θ) = L1,N(β1, β2) + L2,N(β1, β2, θ).    (4.4)

FML estimates are obtained by solving the score equations ∂LN/∂Ω = 0, where Ω = (β1, β2, θ). These equations will be nonlinear in general, but standard quasi-Newton iterative algorithms are available in most matrix programming languages.
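To make the construction in (4.1)–(4.3) concrete, the following sketch (our own; the text itself works in SAS/IML later) assembles the bivariate log-likelihood for standard normal margins coupled by a Clayton copula, whose cross partial C12 has a closed form, and checks that closed form against a numerical mixed partial of the copula cdf:

```python
import math

def clayton_cdf(u, v, t):
    return (u ** -t + v ** -t - 1.0) ** (-1.0 / t)

def clayton_density(u, v, t):
    # C_12 = d^2 C / dF1 dF2 for the Clayton copula
    return (1.0 + t) * (u * v) ** (-t - 1.0) * (u ** -t + v ** -t - 1.0) ** (-2.0 - 1.0 / t)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_logpdf(x):
    return -0.5 * (x * x + math.log(2.0 * math.pi))

def loglik(pairs, t):
    """Copula log-likelihood (4.3): marginal log densities plus log C_12
    evaluated at the marginal cdfs; standard normal margins, no covariates."""
    ll = 0.0
    for y1, y2 in pairs:
        ll += norm_logpdf(y1) + norm_logpdf(y2)
        ll += math.log(clayton_density(norm_cdf(y1), norm_cdf(y2), t))
    return ll
```

The same assembly applies for any copula with a known C12; only `clayton_density` changes.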
Let the solution be Ω̂_FML. By standard likelihood theory, under regularity conditions Ω̂_FML is consistent for the true parameter vector Ω0 and its asymptotic distribution is given by

√N(Ω̂_FML − Ω0) →_d N(0, −[plim N^{-1} ∂²LN(Ω)/∂Ω∂Ω′]^{-1}_{Ω0}).    (4.5)

In practice the more robust consistent "sandwich" variance estimator, obtained under quasi-likelihood theory, may be preferred as it allows for possible misspecification of the copula.

4.1.1 Likelihood maximization by parts

There is an alternative computational strategy for obtaining Ω̂_FML. In view of the structure of the log-likelihood, see (4.4), one can use the maximization-by-parts (MBP) algorithm recently suggested by Song et al. (2005). This involves starting from an initial estimate of (β1, β2) based on maximization of L1,N(β1, β2), and then using Eq. (4.4) to solve for θ. The initial estimate ignores dependence, and hence is not efficient. An iterative solution for (β1, β2) given an estimate of θ, and then for θ given revised estimates of (β1, β2), yields efficient estimates because the procedure takes account of dependence. If there is no dependence, then the first step estimator is efficient. Song et al. (2005) illustrate the MBP algorithm by estimating a bivariate Gaussian copula.

Likelihoods also may be expressed in alternative forms using associated copulas, e.g., survival copulas, as defined in Section 2.2. Such alternative expressions are often convenient when handling models with censored observations (see Frees and Valdez, 1998: 15–16; Georges et al., 2001).

4.1.2 Copula for discrete variables

To illustrate the case of discrete random variables, we consider the case in which y1 and y2 are nonnegative integer (count) variables and F1 and F2 are discrete cdfs. The joint probability mass function (pmf) is formed by taking differences. For economy, we use the abbreviated notation Fj(yj) for Fj(yj|xj, βj).
For the bivariate case, the pmf is

c(F1(y1i), F2(y2i); θ) = C(F1(y1i), F2(y2i); θ) − C(F1(y1i − 1), F2(y2i); θ) − C(F1(y1i), F2(y2i − 1); θ) + C(F1(y1i − 1), F2(y2i − 1); θ).    (4.6)

For the trivariate case the pmf is

c(F1(y1i), F2(y2i), F3(y3i); θ)
= C(F1(y1i), F2(y2i), F3(y3i); θ)
− C(F1(y1i − 1), F2(y2i), F3(y3i); θ)
− C(F1(y1i), F2(y2i − 1), F3(y3i); θ)
− C(F1(y1i), F2(y2i), F3(y3i − 1); θ)
+ C(F1(y1i − 1), F2(y2i − 1), F3(y3i); θ)
+ C(F1(y1i), F2(y2i − 1), F3(y3i − 1); θ)
+ C(F1(y1i − 1), F2(y2i), F3(y3i − 1); θ)
− C(F1(y1i − 1), F2(y2i − 1), F3(y3i − 1); θ).    (4.7)

Focusing on the bivariate case, the log likelihood function for FML estimation is formed by taking the logarithm of the pdf or pmf copula representation and summing over all observations,

LN(θ) = Σ_{i=1}^N log c(F1(y1i), F2(y2i); θ).    (4.8)

As was discussed in Section 2.4, the interpretation of the dependence parameter θ in the context of discrete data is not identical to that in the corresponding continuous case. If there is special interest in estimating and interpreting the dependence parameter, an alternative to estimating the discrete copula is to apply the continuation transformation to the discrete variables and then base the estimation of parameters on the likelihood function for a family of continuous copulas. Specifically, the discrete variables are made continuous by adding a random independent perturbation term taking values in [0, 1].
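The finite-difference construction in (4.6) is straightforward to implement. The sketch below is our own illustration (Poisson margins coupled by a Frank copula, names hypothetical); summing the pmf over a truncated support checks that it is a proper probability mass function:

```python
import math

def poisson_cdf(k, lam):
    if k < 0:
        return 0.0
    return sum(math.exp(-lam) * lam ** j / math.factorial(j) for j in range(k + 1))

def frank_copula(u, v, t):
    return -math.log(1.0 + (math.exp(-t * u) - 1.0) * (math.exp(-t * v) - 1.0)
                     / (math.exp(-t) - 1.0)) / t

def discrete_pmf(y1, y2, lam1, lam2, t):
    """Joint pmf (4.6): fourth-order difference of the copula evaluated at the
    Poisson marginal cdfs at (y, y - 1); F(-1) = 0 handles the boundary."""
    C = lambda a, b: frank_copula(poisson_cdf(a, lam1), poisson_cdf(b, lam2), t)
    return C(y1, y2) - C(y1 - 1, y2) - C(y1, y2 - 1) + C(y1 - 1, y2 - 1)
```

Because the sum telescopes to C(F1(k), F2(k)), the truncated total approaches 1 as the truncation point k grows.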
This approach was used by Stevens (1950) and, as was mentioned above, by Denuit and Lambert (2005) to derive bounds for the Kendall ρτ measure, and by Machado and Santos Silva (2005) to extend quantile regression to count data.

In our experience, maximization of the likelihood with discrete margins often runs into computational difficulties, reflected in the failure of the algorithm to converge. In such cases it may be helpful to first apply the continuation transformation and then estimate a model based on copulas for continuous variables. For example, count variables can be made continuous and suitable margins then inserted in a specified copula. Examples are given in Sections 4.4.2 and 4.5.2.

4.1.3 Bayesian analysis of copulas

There is now growing interest in Bayesian analysis of copulas. A recent example is Pitt et al. (2006), who use a Gaussian copula to model the joint distribution of six count measures of health care. The multivariate density of the Gaussian copula has been given by Song (2000). Pitt et al. (2006) develop a Markov chain Monte Carlo algorithm for estimating the posterior distribution for continuous or discrete marginals. In their application the marginal densities are specified to be zero-inflated geometric distributions. The authors show how one might handle the additional complications resulting from discrete marginals.

4.2 Two-Step Sequential Likelihood Maximization

The TSML method separates the estimation of the marginals from that of the dependence parameter. We shall consider two cases. In the first, the marginal distributions do not involve covariates and the density is not parametrically specified, either because it is preferred to avoid doing so or because this step is too difficult.

4.2.1 Nonparametric first step

Consider the case of continuous random variables.
A nonparametric kernel density estimator is used to estimate the univariate marginal densities, denoted f̂j(yj), j = 1, 2. This usually requires one to choose a bandwidth parameter. This is used to compute the empirical distribution functions F̂j(yj), j = 1, 2, which may be treated as realizations of uniform random variables u1 and u2, respectively. Given {û1i, û2i, i = 1, ..., N} and a copula, the dependence parameter θ can be estimated as follows:

θ̂_TSML = argmax_θ Σ_{i=1}^N ln c(û1i, û2i; θ).

There are two issues to consider. First, the approach is straightforward if the yj are iid. This may be a reasonable assumption with cross section data, but may be more tenuous in time series applications. Second, because this method of estimating θ uses û1i, û2i, which are generated (rather than observed) variates, valid estimation of the variance of θ̂_TSML requires going beyond the standard likelihood calculation based on the (Fisher) information matrix.

4.2.2 Parametric sequential approach

Suppose that the appropriate marginal distributions are parametrically specified and are conditioned on covariates. Again we can separate the estimation of marginal and dependence parameters. This approach is computationally attractive if the dimension of θ is large, so that the application of FML is problematic. Joint estimation is further encumbered if certain choices of marginal distributions contribute to a flat likelihood function. The method also has the advantage that the specification of the marginals can be tested using diagnostic tests to ensure that they provide an acceptable fit to the data; appropriate marginals should produce more precise estimates of the dependence parameter.

For the case of continuous random variables, the log likelihood function has the structure given in (4.4). As discussed by Joe (1997), it is
often easier to consider the log likelihood of each univariate margin,

Lj(βj) = Σ_{i=1}^N log fj(yij|xij, βj),

and maximize these likelihoods to obtain β̂j for each margin j. Then, treating these values as given, the full likelihood function based on Eq. (4.4) is maximized with respect to only the dependence parameter θ. Following a convention in the statistics literature, Joe refers to this estimation technique as inference functions for margins (IFM).

Under regularity conditions, IFM produces estimates that are consistent, like full maximum likelihood (FML), albeit less efficient. Comparing the efficiency of IFM and FML is difficult because asymptotic covariance matrices are generally intractable, but for a number of models IFM may perform favorably compared to ML. However, if efficiency is a concern, then consistent standard errors can be obtained by using the following bootstrap algorithm:

(1) Obtain β̂1, β̂2, and θ̂ by IFM.
(2) Randomly draw a sample of observations (with replacement) from the data. The randomly drawn sample may be smaller, larger, or the same size as the number of observations in the data.
(3) Using the randomly drawn sample, reestimate β1, β2, and θ by IFM and store the values.
(4) Repeat steps (2) and (3) many times and denote each replicated estimate as β̂1(r), β̂2(r), and θ̂(r), where r is the replication.
(5) Standard errors for the parameters Ω̂ = (β̂1, β̂2, θ̂)′ are calculated using the variance matrix R^{-1} Σ_{r=1}^R (Ω̂(r) − Ω̂)(Ω̂(r) − Ω̂)′, where R is the number of replications.

FML and TSML both produce consistent estimates of the dependence parameter θ. The dependence parameter is often the primary result of interest in empirical applications. For example, in financial studies, portfolio managers are often interested in the joint probability that two or more funds will fail during a recession.
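The two-step logic described above (estimate the margins first, then the dependence parameter) can be sketched end to end. The toy example below is ours, not the authors': the first step uses rank-based empirical cdfs, as in Section 4.2.1, rather than parametric margins, and the second step maximizes the Clayton copula density over a grid of θ:

```python
import math
import random

def clayton_pair(rng, theta):
    """Conditional-inversion draw from the bivariate Clayton copula."""
    u = rng.random()
    w = rng.random()
    v = (u ** -theta * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return u, v

def pseudo_obs(xs):
    """First step: empirical-cdf (rank) transforms, rank/(n + 1)."""
    n = len(xs)
    order = sorted(range(n), key=lambda i: xs[i])
    r = [0.0] * n
    for rank, i in enumerate(order, start=1):
        r[i] = rank / (n + 1.0)
    return r

def fit_clayton_theta(u, v):
    """Second step: maximize the Clayton copula log density over a theta grid."""
    def ll(t):
        return sum(math.log((1.0 + t) * (a * b) ** (-t - 1.0)
                            * (a ** -t + b ** -t - 1.0) ** (-2.0 - 1.0 / t))
                   for a, b in zip(u, v))
    grid = [0.2 + 0.05 * k for k in range(117)]  # 0.2 .. 6.0
    return max(grid, key=ll)

rng = random.Random(7)
y1, y2 = zip(*(clayton_pair(rng, 2.0) for _ in range(800)))
theta_hat = fit_clayton_theta(pseudo_obs(list(y1)), pseudo_obs(list(y2)))
```

With data generated at θ = 2, the grid estimate lands near the truth; as the text stresses, the rank-based first step means naive likelihood standard errors for θ̂ are not valid, and a bootstrap over the whole two-step procedure is one remedy.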
In these applications, the dependence parameter indicates correlation between fund performances, and the Fréchet–Hoeffding bounds (discussed in Section 2.1) provide bounds on possible correlation.

4.2.3 Example: Copula-based volatility models

Chen and Fan (2006) consider estimation and model selection of semiparametric copula-based multivariate dynamic (SCOMDY) models. An example of such a model is the generalized autoregressive conditional heteroskedastic (GARCH) model with errors generated by a Gaussian copula or a Student's t-copula. Different choices of the distributions in the copulas generate different nested or non-nested models. The log-likelihood function has a structure similar to (4.4), so Chen and Fan propose a two-step estimator in which the non-copula parameters are estimated first and the copula parameters are estimated conditional on those. Given two or more models estimated in this way, each based on a different copula, a practical issue is how to choose between them. They develop model selection procedures, under the assumption of misspecified copulas, based on a pseudo likelihood ratio (PLR) criterion. Following Vuong (1989) they use the minimum of the Kullback–Leibler information criterion (KLIC) over the copulas to measure the closeness of a SCOMDY model to the true model. They establish the asymptotic distribution of the PLR for both the nested and non-nested cases. For non-nested models the PLR criterion has an asymptotic normal distribution, and for the nested case it is a mixture of chi-square distributions.

For specificity, consider the example of a GARCH(1,1) with errors from the Gaussian copula. For j = 1, ..., m,

yjt = x′jt βj + √(hjt) εjt,
hjt = kj + αj hj,t−1 + δj (yjt − x′jt βj)²,

where kj > 0, αj > 0, δj > 0, and αj + δj < 1.
The εjt are zero mean, unit variance, serially uncorrelated errors generated from a Gaussian copula with covariance matrix Σ; the covariate vectors are assumed to be exogenous. The estimation procedure of Chen and Fan is ordinary least squares for the parameters βj and quasi-MLE for (kj, αj, δj). Using a circumflex to denote sample estimates, let θ̂j = (β̂j, k̂j, α̂j, δ̂j) and θ̂ = (θ̂1, ..., θ̂m). Given these estimates, the empirical distribution function of the εjt(θ̂) can be obtained; this is denoted F̂j(εjt(θ̂)), j = 1, ..., m. At the final stage the likelihood based on the Gaussian copula density c(F̂1(ε1t(θ̂)), ..., F̂m(εmt(θ̂))) is maximized to estimate the dependence parameters.

A similar procedure can be applied to estimate GARCH(1,1) models with other copulas. Chen and Fan establish the asymptotic distribution of the dependence parameter and provide an estimation procedure for the variances under possible misspecification of the parametric copula. One interesting result they obtain is that the asymptotic distribution of the dependence parameters is not affected by the estimation of the parameters θ̂.

4.3 Copula Evaluation and Selection

In empirical applications of copulas, one wants to know how well the model fits the data. It is especially useful to know if there are indicators of misspecification. For example, data may exhibit the type of tail dependence that the selected copula cannot capture. Further, if several copulas are tried, one wants to know which one provides the best fit to the data. This section will discuss these practical issues using Monte Carlo experiments for exposition and illustration. We will also present two empirical examples to demonstrate copula estimation of microeconometric models for which parametric joint distributions are not readily available.

4.3.1 Selection criteria

Different copula functions exhibit different dependence patterns.
Therefore, if a researcher wants to explore the structure of dependence, he may estimate several copulas and choose one on the basis of best fit to the data. Other methods for choosing copulas are presented in Section 4.5.

The first step in copula estimation is to specify and estimate univariate marginal distributions. This decision should be guided by economic and statistical features of the data. The goodness of fit of the marginal models can and should be evaluated using diagnostic and goodness of fit tests. Many of these will be specific to the parametric form of the marginal model. The better the fit of the marginals, the more precisely we can model the dependence structure. We will illustrate some of these in the section on empirical applications. The second step, potentially more difficult, requires specification of a copula. Here prior information plays a role. For example, if right tail dependence is expected to be a feature of the data, choosing a copula functional form that does not permit such dependence is not appropriate. Choosing a copula that permits only positive dependence is also restrictive if one expects either positive or negative dependence in the relevant data context. Thus, in many empirical settings, choosing a relatively unrestrictive copula is a sensible decision, and trying several copulas to explore the dependence structure is a sound strategy.

The first model selection approach we consider is due to Genest and Rivest (1993), who recommend a technique for choosing among bivariate Archimedean copulas. Assume that two random variables (Y1i, Y2i), i = 1, ..., N, have a bivariate distribution F(Y1i, Y2i) with corresponding copula C(F1(Y1), F2(Y2); θ). The researcher must choose the appropriate functional form for C(·) from several available Archimedean copulas.
Let the random variable Zi = F(Y1i, Y2i) be described by the distribution function K(z) = Pr(Zi ≤ z). Genest and Rivest show that this distribution function is related to the Archimedean generator function,

K(z) = z − ϕ(z)/ϕ′(z).

Identifying the appropriate Archimedean copula is equivalent to identifying the optimal generator function. To determine the appropriate generator:

• Calculate Kendall's tau using the nonparametric estimator

ρ̂τ = [N(N − 1)/2]^{-1} Σ_{i<j} sign[(Y1i − Y1j)(Y2i − Y2j)].

• Calculate a nonparametric estimate of K by defining the variable

Zi = {number of (Y1j, Y2j) such that Y1j < Y1i and Y2j < Y2i}/(N − 1)

for i = 1, ..., N. Then set K̂(z) = proportion of Zi's ≤ z.

• For ϕ, a given Archimedean generator, calculate a parametric estimate of K using the equation

K̃ϕ(z) = z − ϕ(z)/ϕ′(z).

Generators for several popular Archimedean copulas are listed in Table 3.2. Use the nonparametric estimate ρ̂τ to calculate θ in each generator. For each generator function, a different estimate K̃ϕ(z) is obtained. The appropriate generator, and thus the optimal Archimedean copula, is the one for which K̃ϕ(z) is closest to the nonparametric estimate K̂(z). This can be determined by minimizing the distance function ∫(K̃ϕ(z) − K̂(z))² dK(z).

Ané and Kharoubi (2003) also demonstrate a method for selecting copulas that involves comparing parametric and nonparametric values, but unlike Genest and Rivest, their technique is not limited to the Archimedean class. The idea is to compute a nonparametric empirical copula and compare the values to estimates of parametric copulas. Consider the bivariate case, for which an empirical copula Ce(F1(Y1), F2(Y2)) may be calculated as

Ĉe(Y1, Y2) = Σ_{i=1}^N Σ_{j=1}^N 1{(Yj1 ≤ Yi1) and (Yj2 ≤ Yi2)},    (4.9)

where 1{A} is the indicator function that equals 1 if event A occurs.
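The Genest–Rivest comparison of K̂(z) and K̃ϕ(z) described above is simple to code. The sketch below is our own illustration: it computes the nonparametric K̂ and the parametric K̃ implied by the Clayton generator ϕ(t) = (t^{-θ} − 1)/θ, for which a short calculation gives K̃(z) = z + z(1 − z^θ)/θ:

```python
def k_nonparametric(pairs, z):
    """K-hat(z): fraction of Z_i <= z, where Z_i is the share of sample points
    strictly below point i in both coordinates, divided by (n - 1)."""
    n = len(pairs)
    zs = []
    for i, (x1, x2) in enumerate(pairs):
        c = sum(1 for j, (y1, y2) in enumerate(pairs)
                if j != i and y1 < x1 and y2 < x2)
        zs.append(c / (n - 1.0))
    return sum(1 for zi in zs if zi <= z) / float(n)

def k_clayton(z, theta):
    """K-tilde(z) = z - phi(z)/phi'(z) for the Clayton generator
    phi(t) = (t^-theta - 1)/theta, which simplifies to z + z(1 - z^theta)/theta."""
    return z + z * (1.0 - z ** theta) / theta
```

Repeating `k_clayton` for each candidate generator (with θ implied by the sample ρ̂τ) and comparing against `k_nonparametric` implements the selection rule.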
Next, several parametric copulas are estimated, each denoted Ĉp(F1(Y1), F2(Y2)). The parametric copula that is closest to the empirical copula is the most appropriate choice. The researcher may evaluate closeness using a distance estimator, a simple example of which is the sum of squares,

Distance = Σ_{i=1}^N (Ĉei − Ĉpi)².    (4.10)

Ané and Kharoubi (2003) also use distance measures based on the concept of entropy and on the Anderson–Darling test; the latter emphasizes deviations in the tails, which is useful for applications in which tail dependence is expected to be important.

4.3.2 Application of model selection to simulated data

As an example, we draw 500 simulated values from a particular copula and calculate Ĉe and Ĉp (without explanatory variables, which means only the dependence parameter is estimated for the parametric copulas). For the purpose of demonstration, we compare three copulas with markedly different dependence structures: Clayton, Gumbel, and FGM. Table 4.1 reports distance measures based on Eq. (4.10), with asterisks denoting the values for parametric copulas that are closest to the corresponding empirical copula.

For the Clayton DGP, the parametric Clayton copula is closest to the nonparametric empirical copula. Similar results apply for the Gumbel and FGM copulas. This finding is not surprising. Although the distance measure provides a useful ranking of the models in terms of goodness-of-fit, it does not test a hypothesis of model misspecification.
While these results attest to the usefulness of empirical copulas in choosing appropriate copulas, approaches which compare empirical and parametric values are currently rarely used in applied microeconometric applications.

Table 4.1 Distance measures for three copulas.

                            Clayton    Gumbel     FGM
DGP: Clayton with θ = 3     0.0447*    0.4899     1.8950
DGP: Gumbel with θ = 3      0.8534     0.1305*    3.2755
DGP: FGM with θ = 0.8       0.1004     0.0998     0.0415*

Note: The asterisks denote the values for parametric copulas that are closest to the corresponding empirical copula.

The Genest and Rivest method does not consider non-Archimedean copulas such as the Gaussian and FGM copulas, both of which are popular in applied settings. Moreover, the method is computationally more demanding than copula estimation itself. The Ané and Kharoubi technique is also computationally demanding, because one must estimate all copulas under consideration in addition to estimating an empirical copula. If one is already committed to estimating several different parametric copulas, the practitioner might find it easier to forgo nonparametric estimation and instead base copula selection on the penalized likelihood criteria discussed below.

As a part of an exploratory analysis of dependence structure, researchers may graphically examine dependence patterns before estimation by plotting the points (Y1i, Y2i); of course, this approach is more appropriate when no covariates are present in the marginal distributions. If the scatter diagram points to a pattern of dependence that is similar to one of the "standard" models, e.g., the simulated plots in Section 2.5, then this may point to one or more appropriate choices. For example, if Y1 and Y2 appear to be highly correlated in the left tail, then Clayton might be an appropriate copula.
If dependence appears to be symmetric or negative, then FGM is an appropriate choice, but if dependence is relatively strong, then FGM should be avoided.

In the standard econometric terminology, alternative copula models with a single dependence parameter are said to be non-nested. One approach for choosing between non-nested parametric models estimated by maximum likelihood is to use either the Akaike or the (Schwarz) Bayesian information criterion. For example, the Bayesian information criterion (BIC) is equal to −2 ln(L) + K ln(N), where ln(L) is the maximized log likelihood value, K is the number of parameters, and N is the number of observations. Smaller BIC values indicate better fit. However, if all the copulas under consideration have the same K, then use of these criteria amounts to choosing the model with the largest likelihood. If, on the other hand, there are several specifications of the marginal models with alternative regression structures under consideration, then penalized likelihood criteria are useful for model selection.

In our experience a combination of visual inspection and penalized likelihood measures provides satisfactory guidance in copula selection. Visual examination of (Y1i, Y2i) provides useful prior indication of choices that are likely to be satisfactory. An example of copula selection is Frees and Valdez (1998), who examine insurance indemnity claims data for which the two variables of interest are indemnity payments and allocated loss adjustment expenses. Using the Genest and Rivest method, they find that the Gumbel copula provides the best fit to the data, although Frank's copula also provides a satisfactory fit. A scatter plot shows that the variables appear to exhibit right tail dependence (Figure 2.1 in their text), which offers further support for the Gumbel copula.
Since Clayton copulas exhibit strong left tail dependence, it is not surprising that they find that Clayton provides a poor fit of the data. Finally, they estimate several copulas and find that Gumbel provides the best fit according to an information criterion measure.

4.4 Monte Carlo Illustrations

This section presents Monte Carlo experiments in order to illustrate the effects of copula misspecification. The following two subsections consider cases in which the dependent variables are continuous and discrete.

4.4.1 Bivariate normal example

For 500 simulated observations, i = 1, . . . , 500, two normally distributed random variables are generated as

y1i = β11 + β12 x1i + η1i,
y2i = β21 + β22 x2i + η2i,

where x1 and x2 are independently uniform on (−0.5, 0.5). The terms η1 and η2 are jointly distributed standard normal variates drawn from a particular copula with dependence parameter θ using techniques presented in the Appendix.

We estimate five bivariate copulas of the form C(F1(y1|x1), F2(y2|x2); θ) under the assumption that the marginal distributions F1 and F2 are normal cdfs,

Fj(yj) = ∫_{−∞}^{yj} (1/σj) φ((t − βj1 − βj2 xj)/σj) dt,

where φ(t) = (2π)^{−1/2} exp(−t²/2). Each copula is correctly specified; that is, each has the same specification as the corresponding DGP. The objective is to evaluate the properties of the FML estimator applied under the assumption of correct specification of the copula.

The true values for the parameters are: (β11, β12, β21, β22, σ1, σ2) = (2, 1, 0, −0.5, 1, 1). The true value of θ for each copula is chosen so that Kendall's tau is approximately equal to 0.3, except for the FGM copula, which cannot accommodate dependence of this magnitude. For the FGM case, θ is set equal to 0.5, which corresponds to a Kendall's tau value of 0.11. The true values of θ are shown in Table 4.2. Each Monte Carlo experiment involves 1000 replications.
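A minimal Python sketch of this DGP (numpy and scipy assumed; the seed is arbitrary, and only the Clayton case with θ = 0.86 is shown, using the conditional-inversion method to draw from the copula and the inverse normal cdf to obtain dependent standard normal errors):

```python
import numpy as np
from scipy.stats import norm, kendalltau

rng = np.random.default_rng(0)
n = 500
beta11, beta12, beta21, beta22 = 2.0, 1.0, 0.0, -0.5
theta = 0.86  # Clayton dependence parameter; Kendall's tau = theta/(theta+2) ~ 0.30

x1 = rng.uniform(-0.5, 0.5, size=n)
x2 = rng.uniform(-0.5, 0.5, size=n)

# Draw (u, v) from the Clayton copula by conditional inversion, then map the
# uniforms to standard normal errors with the inverse normal cdf
u = rng.uniform(size=n)
w = rng.uniform(size=n)
v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** -theta + 1.0) ** (-1.0 / theta)
eta1, eta2 = norm.ppf(u), norm.ppf(v)

y1 = beta11 + beta12 * x1 + eta1
y2 = beta21 + beta22 * x2 + eta2

tau, _ = kendalltau(eta1, eta2)  # sample tau should be near 0.30
```

Each Monte Carlo replication would redraw (u, w) while holding x1, x2, and the true parameter values fixed, as described below.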
For each replication, new realizations of η1 and η2 are randomly drawn, which yields new observations y1 and y2, but x1 and x2 and the true values of the parameters are held constant.

We also show two-way scatter diagrams for the dependent variables using data from one replication of each Monte Carlo experiment. Figure 2.3 is for the case of continuous variables with Gaussian marginals; Figure 4.1 is for the discrete case with Poisson marginals. In all cases, because of the additional variability in the data due to the presence of covariates in the DGP, the resulting scatters show higher dispersion than the corresponding ones in Figure 2.3.

Fig. 4.1 Bivariate simulated data generated with Poisson marginals.

The estimation method is full maximum likelihood (FML), which is executed by maximizing the log likelihood function given in Eq. (4.3). The log likelihood function is maximized using a Newton–Raphson iterative module available in the IML platform of SAS. The module requires the user to specify only the log likelihood function; first derivatives are calculated numerically within the module. For each copula, the Monte Carlo experiment with 1000 replications finished in less than 10 min on a standard desktop machine. However, for larger experiments, the user may wish to provide analytical first derivatives of the log likelihood function, which can greatly reduce estimation times.

For two normally distributed continuous variables, it is unnecessary to estimate a copula function, as econometric software can easily estimate Seemingly Unrelated Regression (SUR) models, but specifying the Monte Carlo experiment in terms of normal marginals offers a useful frame of reference. Table 4.2 reports the average estimates across replications. The averages of the FML estimates, including estimates of the dependence parameters, are virtually identical to the true values.

Table 4.2 Monte Carlo results for bivariate normal.

                      β11     β12     σ1      β21     β22     σ2      θ
DGP                   2.00    1.00    1.00    0.00    −0.50   1.00
Clayton (θ = 0.86)
  Mean                1.999   1.002   0.998   −0.001  −0.499  0.997   0.870
  St. Dev.            0.044   0.133   0.030   0.045   0.124   0.031   0.100
Frank (θ = 5.40)
  Mean                1.999   1.000   0.998   −0.001  −0.501  0.997   5.429
  St. Dev.            0.041   0.113   0.031   0.043   0.109   0.031   0.356
Gumbel (θ = 1.43)
  Mean                1.998   0.999   0.998   −0.001  −0.495  0.998   1.434
  St. Dev.            0.043   0.132   0.030   0.046   0.131   0.030   0.055
Gaussian (θ = 0.45)
  Mean                1.999   1.002   0.996   0.001   −0.502  0.997   0.449
  St. Dev.            0.047   0.133   0.032   0.044   0.131   0.031   0.037
FGM (θ = 0.50)
  Mean                1.999   0.997   0.998   −0.001  −0.500  0.997   0.504
  St. Dev.            0.044   0.156   0.032   0.046   0.147   0.033   0.131

An attractive property of copulas is that dependence structures are permitted to be nonlinear. Unfortunately, this also presents difficulties in comparing dependence across different copula functions. Although rank correlation measures such as Kendall's tau and Spearman's rho may be helpful in cross-copula comparisons, they are not directly comparable because they cannot account for nonlinear dependence structures. The following example illustrates this point.

Data are drawn from the Clayton copula using the same true values for β11, β12, β21, β22, σ1, and σ2 that are used in the previous experiment. However, the true value of the dependence parameter is set equal to 4, which corresponds to a Kendall's tau value of 0.67. We then use these simulated data to estimate the Gumbel copula in order to determine whether the Gumbel copula is capable of accommodating high degrees of dependence generated from the Clayton copula. A Kendall's tau value of 0.67 translates to a dependence parameter value of 3 for Gumbel's copula. However, at such high levels of correlation, Clayton and Gumbel exhibit different dependence structures, as shown in Section 2.5. As Table 4.3 shows, the Monte Carlo estimates of the Gumbel copula using Clayton data produce a dependence parameter estimate that is smaller than 3.

Table 4.3 Monte Carlo results for continuous normal with data drawn from Clayton copula.

       Mean     Std. Dev.
β11    2.042    0.045
β12    1.000    0.101
σ1     1.071    0.037
β21    0.042    0.044
β22    −0.498   0.097
σ2     1.071    0.037
θ      2.510    0.132

For highly correlated Clayton variates, most of the dependence is concentrated in the lower tail. However, lower tail dependence is relatively weak for the Gumbel copula. Consequently, it is not surprising that the Gumbel copula is unable to accurately capture dependence generated from a Clayton copula.

The preceding example illustrates that it is difficult to compare dependence parameters across different copulas. To address this complication in interpreting dependence parameters, microeconometric researchers typically estimate several copulas and convert the dependence parameters, post estimation, into Kendall's tau or Spearman's rho. Competing copula families are non-nested, so model comparison is not straightforward.
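The post-estimation conversion into Kendall's tau uses standard closed-form mappings (Clayton: τ = θ/(θ + 2); Gumbel: τ = 1 − 1/θ; FGM: τ = 2θ/9; Gaussian: τ = (2/π) arcsin θ). A small Python sketch; the Frank copula is omitted because its mapping involves a Debye function with no elementary closed form:

```python
import math

def kendalls_tau(copula, theta):
    """Map a copula dependence parameter to Kendall's tau (closed forms only)."""
    if copula == "clayton":
        return theta / (theta + 2.0)
    if copula == "gumbel":
        return 1.0 - 1.0 / theta
    if copula == "fgm":
        return 2.0 * theta / 9.0
    if copula == "gaussian":
        return 2.0 * math.asin(theta) / math.pi
    raise ValueError(f"no closed form stored for {copula!r}")

# Clayton with theta = 4 and Gumbel with theta = 3 both imply tau of about 0.67,
# as in the example above
tau_clayton = kendalls_tau("clayton", 4.0)
tau_gumbel = kendalls_tau("gumbel", 3.0)
```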
Moreover, the dependence parameter in some particular copula family may be estimated to be on the boundary of the parameter space, which may make the model comparison problem "irregular." When the competing copula models are "regular," one may use the penalized log-likelihood criterion for model selection and make inferences about dependence based on the "best model." Tail probability properties provide an indicator of another aspect of the dependence structure, which in some applications is of major interest.

4.4.2 Bivariate Poisson example

The next experiment investigates the performance of copula estimation when (y1, y2) are discrete random variables. In general, generating discrete variables is more difficult than simulating continuous observations. We adapt the technique of Devroye (1986: 86) presented in the Appendix to simulate two correlated discrete count variables (y1, y2). The realizations of (y1, y2) are drawn from the copula function C(F1(y1|x1), F2(y2|x2); θ), where F1 and F2 are Poisson distribution functions,

Fj(yjk) = Σ_{y=0}^{yjk} e^{−µj} µj^y / y!   for j = 1, 2,

where µj = exp(βj1 + βj2 xj) denotes the conditional mean function, and x1 and x2 are independently uniform on (−0.5, 0.5). For each replication, 500 new realizations of (y1, y2) are drawn from the copula.

Estimation is by full maximum likelihood (FML) based on the joint probability mass function given in Eq. (4.6). Similar to the experiment using continuous data, the log likelihood function, calculated by Eq. (4.8), is maximized using a Newton–Raphson iterative procedure in SAS/IML.
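The finite-difference construction of the joint pmf in Eq. (4.6) can be sketched as follows (Python with scipy assumed; the Frank copula and the parameter values are illustrative):

```python
import numpy as np
from scipy.stats import poisson

def frank_cdf(u, v, theta):
    """Frank copula C(u, v; theta); evaluates to 0 on the axes automatically."""
    return -np.log1p(np.expm1(-theta * u) * np.expm1(-theta * v)
                     / np.expm1(-theta)) / theta

def copula_poisson_pmf(y1, y2, mu1, mu2, theta):
    """P(Y1 = y1, Y2 = y2) as the rectangle probability
    C(F1(y1), F2(y2)) - C(F1(y1-1), F2(y2)) - C(F1(y1), F2(y2-1))
    + C(F1(y1-1), F2(y2-1)), as in Eq. (4.6)."""
    a1, a0 = poisson.cdf(y1, mu1), poisson.cdf(y1 - 1, mu1)
    b1, b0 = poisson.cdf(y2, mu2), poisson.cdf(y2 - 1, mu2)
    return (frank_cdf(a1, b1, theta) - frank_cdf(a0, b1, theta)
            - frank_cdf(a1, b0, theta) + frank_cdf(a0, b0, theta))

# Sanity check: the joint pmf should sum to one over its support
mu1, mu2, theta = 2.0, 1.5, 3.0
total = sum(copula_poisson_pmf(i, j, mu1, mu2, theta)
            for i in range(60) for j in range(60))
```

The log likelihood for a sample is then the sum of the logs of these rectangle probabilities, which is what the Newton–Raphson routine maximizes.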
The Monte Carlo experiments for 1000 replications require similar computation times to the continuous experiments discussed above.

Table 4.4 reports the average estimates across 1000 replications. The true values of the dependence parameters are the same as in the continuous experiment, but the true values of the other parameters are adjusted to ensure that exponential means are not unreasonably large. The true values are: β11 = 0.7; β12 = 1.8; β21 = 0.3; β22 = −1.4. For these true values, the mean values of y1 and y2 are 2.01 and 1.35 when calculated at the mean values of x1 and x2.

As in the continuous case, averages of FML estimates are virtually identical to the true values. The dependence parameter is slightly more difficult to estimate compared to the continuous example. For example, the true Frank dependence parameter is 5.40, but the average estimate across all replications is 5.46, which means that the simulation misestimates the true value by approximately 1%. Nevertheless, estimates of the dependence parameters are close to their true values.

We experience computational difficulties in estimating the Gaussian copula. The experiment was then run after applying the continuation transformation (as described in Section 4.1.2) to the generated counts and replacing the Poisson marginals by normal marginals for ln(y1) and ln(y2). The results shown in Table 4.4 indicate that the use of the continuation transformation and normal marginals overcomes the computational problem. But the estimates of the slope and the dependence are biased. Specifically, the dependence parameter estimate shows an average downward bias of over 35%.

4.5 Empirical Applications

This section applies the copula methodology to three different bivariate applications. The analysis is based on observational, not generated, data.
Table 4.4 Monte Carlo results for bivariate Poisson.

                     β11     β12     β21     β22     θ
DGP                  0.70    1.80    0.30    −1.40
Clayton (θ = 0.86)
  Mean               0.698   1.801   0.298   −1.399  0.877
  St. Dev.           0.032   0.105   0.042   0.118   0.119
Frank (θ = 5.40)
  Mean               0.698   1.801   0.298   −1.401  5.463
  St. Dev.           0.031   0.086   0.040   0.105   0.407
Gumbel (θ = 1.43)
  Mean               0.699   1.803   0.300   −1.397  1.437
  St. Dev.           0.033   0.097   0.040   0.114   0.058
Gauss (θ = 0.45)
  Mean               0.703   1.714   0.326   −1.227  0.290
  St. Dev.           0.037   0.122   0.044   0.139   0.047
FGM (θ = 0.50)
  Mean               0.698   1.799   0.298   −1.399  0.509
  St. Dev.           0.033   0.111   0.042   0.128   0.148

Clearly when working with observational data, there is uncertainty about the DGP, and thus selection of an appropriate copula is an important issue. An appropriate copula is one which best captures dependence features of the outcome variables. It is important to note that some copulas that are popular in applied settings were originally developed for specific empirical applications; for example, the Clayton copula was developed for survival analysis. However, because copulas separate marginal distributions from dependence structures, a copula which might have been motivated by a particular application may still be useful for other applications. For example, the common shock model may have wide applicability for discrete, continuous, and mixed discrete-continuous models. The important consideration when selecting an appropriate copula is whether dependence is accurately represented.

4.5.1 Bivariate Tobit example

The well-known Seemingly Unrelated Regressions (SUR) model is one of the most widely used in econometrics. The key idea is that, except in some special cases, more efficient estimates are obtained by estimating systems of linear equations simultaneously rather than separately if the error terms are jointly related. Most statistical packages include commands to estimate SUR models, especially the linear SUR model and bivariate probit. Nonlinear versions of this model are less often used in empirical work, although efficiency gains from joint estimation of such models are also to be expected. One variant of the nonlinear SUR model that has been used is the multivariate Tobit model. Such a model is relevant when some variables of interest might be subject to censoring. Although there is an extensive literature on the estimation of univariate Tobit models (Amemiya, 1973, Heckman, 1976, Greene, 1990, Chib, 1992), and the method is widely available in statistical packages, it is less straightforward to estimate a SUR Tobit model. Brown and Lankford (1992) develop a bivariate Tobit model, but their technique is not easily generalized to higher dimensions. Difficulties extending this model to higher dimensions are addressed in Wales and Woodland (1983).1

In this section, we demonstrate how copula functions are used to estimate a SUR Tobit model with relative ease. The copula approach offers several advantages over existing techniques. First, the researcher may consider models with wide ranges of dependence that are not feasible under the Brown and Lankford (1992) bivariate normal specification. Second, the model can be extended to three or more jointly related censored outcomes using the techniques presented in Section 3.5. These higher dimensional models can be estimated using conventional maximum likelihood approaches and do not require Monte Carlo integration or Bayesian techniques.

For a sample of observations i = 1, . . .
, N, consider two continuous latent variables y1* and y2*, which are specified as

y*ij = x′ij βj + εij   for j = 1, 2.

If y*ij > 0, then the variable yij is observed, but if y*ij ≤ 0, then yij = 0. If the εij are normally distributed as N(0, σj²), then the density function of yij is

fj(yij | x′ij βj) = Π_{yij=0} [1 − Φ(x′ij βj/σj)] Π_{yij>0} (1/σj) φ((yij − x′ij βj)/σj),

and the corresponding cdf Fj(yij | x′ij βj) is obtained by replacing φ(·) with Φ(·) in the second part of the expression.2 A bivariate Tobit representation can be expressed using a copula function. For the censored outcomes yi1 and yi2, a SUR bivariate Tobit distribution is given by

F(y1, y2) = C(F1(yi1 | x′i1 β1), F2(yi2 | x′i2 β2); θ).

The log likelihood function is calculated as shown in Eq. (4.3), and the estimated values are obtained using a Newton–Raphson iterative procedure in SAS/IML. Any econometric package that accommodates user-defined likelihood functions, such as STATA or LIMDEP, could also be used to estimate the model. As with conventional maximum likelihood models, standard errors may be calculated from the likelihood Hessian or the outer product of likelihood gradients. Standard errors reported below were estimated using the robust sandwich formula based on these two matrices. Our optimization procedure in SAS/IML did not require analytical first derivatives of the log likelihood function, but providing them might improve estimation efficiency, especially for large data sets.

1 Huang et al. (1987), Meng and Rubin (1996), Huang (1999), and Kamakura and Wedel (2001) develop estimation procedures for SUR Tobit models. However, all of these approaches are computationally demanding and difficult to implement, which may partially explain why the SUR Tobit model has not been used in many empirical applications.
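For intuition about how the copula representation handles the censored cells, note that the probability that both outcomes are censored at zero is simply C(F1(0 | x′i1 β1), F2(0 | x′i2 β2); θ). The Python sketch below (scipy assumed) checks this identity by simulation for a Frank copula; all parameter values are hypothetical, not those of the application that follows:

```python
import numpy as np
from scipy.stats import norm

def frank_cdf(u, v, theta):
    return -np.log1p(np.expm1(-theta * u) * np.expm1(-theta * v)
                     / np.expm1(-theta)) / theta

def frank_conditional_inverse(u, w, theta):
    """Given U = u and a uniform draw w, invert dC/du to simulate V."""
    b = w * np.expm1(-theta) / ((1.0 - w) * np.exp(-theta * u) + w)
    return -np.log1p(b) / theta

# Hypothetical Tobit model: y_j* = mu_j + sigma_j * eps_j, y_j = max(y_j*, 0)
mu1, mu2, sigma1, sigma2, theta = 0.5, 0.3, 1.0, 1.0, 3.0
rng = np.random.default_rng(42)
n = 200_000
u = rng.uniform(size=n)
v = frank_conditional_inverse(u, rng.uniform(size=n), theta)
y1 = np.maximum(mu1 + sigma1 * norm.ppf(u), 0.0)
y2 = np.maximum(mu2 + sigma2 * norm.ppf(v), 0.0)

# Copula-implied probability that both outcomes are censored at zero ...
p00 = frank_cdf(norm.cdf(-mu1 / sigma1), norm.cdf(-mu2 / sigma2), theta)
# ... should match the simulated joint-censoring frequency
freq00 = np.mean((y1 == 0.0) & (y2 == 0.0))
```

The remaining likelihood cells follow the same logic: the uncensored-by-uncensored cell uses the copula density times the two marginal densities, and the mixed censored/uncensored cells use partial derivatives of C evaluated at the censoring point.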
For the application presented below, the models converged in less than 2 min on a standard desktop machine without specifying first derivatives.

We apply a copula-based SUR Tobit to model the relationship between out-of-pocket (SLFEXP) and non-out-of-pocket medical expenses (NONSLF) of elderly Americans, using 3206 cross-section observations from the year 2000 Health and Retirement Survey (HRS). Approximately 12% of these individuals report zero out-of-pocket expenses, and approximately 20% report zero non-out-of-pocket expenses. The outcome variables are rescaled as follows: y1 = log(SLFEXP + 1) and y2 = log(NONSLF + 1).

Control variables are supplemental insurance, age, race, gender, education, income, marital status, measures of self-reported health, and spousal information.3 We include the same explanatory variables in both x′i1 and x′i2, although the two sets of covariates need not be identical. Even after conditioning on these variables, the two measures of medical expenses are expected to be correlated. The potential source of dependence is unmeasured factors such as negative health shocks that might increase all types of medical spending, attitudes to health risks, and choice of life style. On the other hand, having supplementary insurance may reduce out-of-pocket costs (SLFEXP) and raise insurance reimbursements (NONSLF).

Fig. 4.2 HRS expenditure example.

2 It is important to note that Tobit models of this form rely on correct specification of the error term ε. Heteroskedasticity or nonnormality lead to inconsistency.
3 We treat insurance status as exogenous, although there is a case for treating it as endogenous because insurance is often purchased in anticipation of future health care needs. Treating endogeneity would add a further layer of complexity to the model.
Therefore, it is not clear a priori whether the dependence is positive or negative.

To explore the dependence structure and the choice of the appropriate copula, Figure 4.2 plots the pairs (y1, y2). The variables appear to exhibit negative dependence, in which individuals with high SLFEXP (out-of-pocket) expenses report low NONSLF (non-out-of-pocket) expenses, and vice versa. Therefore, copulas that do not permit negative dependence, such as Clayton and Gumbel, might be inappropriate. Beyond the slight indication of negative dependence, the scatter graph does not provide much guidance in the choice of an appropriate copula. It is important to note, however, that the model above measures dependence after controlling for explanatory variables. Consequently, dependence conditional on covariates might be different from unconditional dependence. A valid empirical approach is to estimate several different copulas and choose the model that yields the largest penalized log-likelihood value.

Another exploratory technique is to estimate the marginal models, calculate the cumulative marginal probabilities, and generate a two-way scatter plot. One can also examine the kernel density plots of the marginal probabilities. The top right panel in Figure 4.2 provides this information. The scatter plot of the marginal probabilities reveals 3 or 4 clusters of observations. Two of these clusters are quite sizeable, while the remaining two are much smaller. One interpretation is that the clusters suggest a mixture specification.
A two- or three-component finite mixture of copulas may provide a better fit than any single copula; within each mixture component, roughly corresponding to a cluster of observations, the dependence structure may be different. The bimodal pattern of individual marginal probabilities, revealed by the kernel density plots, is also consistent with a mixture formulation. While finite mixture copulas have been used in Hu (2004) and Chen and Fan (2006), their use in regression contexts is not yet widespread, so we do not pursue this issue here.

Table 4.5 shows bivariate Tobit results for four copulas: Clayton, Gaussian, Frank, and FGM. Although there is some variation, the estimates of coefficients are quite similar across the four copulas and consistent with expectations. Therefore, for brevity the table reports only estimates of the dependence parameter θ. Estimates of the Gaussian, Frank, and FGM copulas indicate that dependence is negative, with the Gaussian copula attaining the highest likelihood value. Because the Clayton and Gumbel copulas do not permit negative dependence, we experience computational problems in the estimation algorithm. The Clayton estimate of the dependence parameter settles on the lower bound of its permissible range (0.001), and the Gumbel model fails to converge to any value. The fact that these two copulas fail to converge can be interpreted as further evidence of misspecification that stems from using copulas that do not support negative dependence. This example also indicates computational difficulties one might experience when using a misspecified copula.

Table 4.5 Results for bivariate Tobit.

                  Gaussian             Clayton              Frank                FGM
                  Estimate   St. Err.  Estimate   St. Err.  Estimate   St. Err.  Estimate   St. Err.
θ                 −0.166**   0.028     0.001**    0.0001    −0.741**   0.197     −0.322**   0.072
θ (TSML)          −0.159**   0.025     0.001      0.028     −0.665**   0.146     −0.266**   0.061
log-like (FML)    −15552.08            −4024.96             −15555.35            −15560.73

The θ (TSML) row of Table 4.5 presents estimates of θ based on two-step maximum likelihood (TSML), in which the estimates of the marginal models are used to generate probabilities that are then substituted into a copula, which is then used to estimate only the dependence parameter. In this application both steps are parametric. The results for the estimated dependence parameter are similar to those based on the full maximum likelihood (FML) procedure. An exception is the result for the Gumbel copula, for which the estimation algorithm did not converge under full maximum likelihood. Under TSML, however, the estimated value of θ is 1.031 (0.027), which is close to the boundary value of 1.0. These results suggest that TSML is not only a convenient estimation method, but it may also avoid some of the computational problems of FML under misspecification. When, however, the dependence parameter settles at a boundary value, misspecification should be suspected.

This example illustrates how models that were previously challenging to estimate using established methods can be handled with relative ease using the copula approach. Without copulas, the only available methods for estimating SUR Tobit models would involve complicated simulation-based algorithms. In contrast, using copula methods, the joint distribution can be expressed as a function of the marginals, and estimation proceeds by standard maximum likelihood techniques.

4.5.2 Bivariate negative binomial example

Our second application is a bivariate model for discrete (count) data with negative binomial marginals.
The application is a model of the number of prescribed and non-prescribed medications taken by individuals over a two-week period, obtained from the Australian Health Survey 1977–1978. The data and an early analysis based on the negative binomial regression are in Cameron et al. (1988). Outside of the bivariate probit model, there are few, if any, bivariate representations of jointly dependent discrete variables. This reflects the fact that multivariate distributions of discrete outcomes often do not have closed form expressions, unless the dependence structure is restricted in some special way. Munkin and Trivedi (1999) discuss difficulties associated with bivariate count models. Extending an earlier model of Marshall and Olkin (1990), they propose an alternative bivariate count model with a flexible dependence structure, but their estimation procedure relies on Monte Carlo integration and is computationally demanding, especially for large samples.

We consider two measures of drug use that are possibly dependent: prescription and non-prescription medicines. Although it is reasonable to assume that these two variables are correlated, it is not clear whether dependence is positive or negative, because in some cases the two may be complements and in some cases substitutes. A 64 degrees-of-freedom chi-square contingency test of association has a p-value of 0.82; about 58% of sample respondents record a zero frequency for both variables.

Let prescription and non-prescription medicines be denoted y1 and y2, respectively. We assume negative binomial-2 (NB2) marginals (as Cameron et al.
(1988) showed that this specification fits the data well):

fj(yji | x′ij βj, ψj) = [Γ(yji + ψj⁻¹) / (Γ(yji + 1) Γ(ψj⁻¹))] (ψj⁻¹/(ψj⁻¹ + ξij))^(ψj⁻¹) (ξij/(ψj⁻¹ + ξij))^(yji)   for j = 1, 2,   (4.11)

where ξij = exp(x′ij βj) is the conditional mean, the conditional variance is var[yij | ·] = ξij + ψj ξij², and ψj is the overdispersion parameter. The corresponding cdf is calculated as

Fj(yji | x′ij βj, ψj) = Σ_{t=0}^{yji} fj(t | x′ij βj, ψj).

After the marginal distributions are specified, the joint distribution of prescription and non-prescription drugs is obtained by plugging the NB2 marginals into a copula function,

F(y1, y2) = C(F1(y1 | x′i1 β1, ψ1), F2(y2 | x′i2 β2, ψ2); θ).

The copula log likelihood function is calculated by taking differences as shown in Eq. (4.6), and the estimated values are obtained by maximum likelihood methods as previously discussed.

The data consist of 5190 observations of single-person households from the Australian National Survey 1977–1978. See Cameron et al. (1988) for a detailed description of the data. Explanatory variables include age, gender, income, and five variables related to health status and chronic conditions. Three insurance variables indicate whether the individual is covered by supplementary private insurance and whether public insurance is provided free of charge. For drug use, plotting the (y1, y2) pairs of counts is not informative in choosing an appropriate copula function. In part this is due to a large number of tied (y1 = y2) values. There are 2451 ties, the overwhelming majority (2431) at 0 and 1. If the continuation transformation is applied by adding independent pseudo-random uniform (0, 1) draws to both y1 and y2, then the scatter diagram appears to indicate negative dependence.
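As a cross-check on Eq. (4.11), the NB2 mass function can be verified against a standard library parameterization: NB2 with mean ξ and overdispersion ψ is a negative binomial with size ψ⁻¹ and success probability ψ⁻¹/(ψ⁻¹ + ξ). A Python sketch (scipy assumed; the ξ and ψ values are illustrative):

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import nbinom

def nb2_pmf(y, xi, psi):
    """NB2 pmf as in Eq. (4.11): mean xi, variance xi + psi * xi**2."""
    a = 1.0 / psi  # psi^{-1}
    logp = (gammaln(y + a) - gammaln(y + 1.0) - gammaln(a)
            + a * np.log(a / (a + xi)) + y * np.log(xi / (a + xi)))
    return np.exp(logp)

# Cross-check against scipy's negative binomial with n = 1/psi, p = 1/(1 + psi*xi)
xi, psi = 2.5, 0.8
y = np.arange(30)
ours = nb2_pmf(y, xi, psi)
ref = nbinom.pmf(y, 1.0 / psi, 1.0 / (1.0 + psi * xi))
```

Cumulative sums of this pmf give the marginal cdfs that are plugged into the copula above.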
In the absence of a clear indication of how to proceed, a sensible approach is to estimate several different copulas and compare converged likelihood values.

As Table 4.6 reveals, Clayton and Gumbel both encounter convergence problems similar to those in the previous empirical example. Recall that these copulas are only suitable for modeling positive dependence. The Gaussian copula also encounters convergence difficulties, as is evident from its fitted likelihood, which is considerably lower than that for the FGM copula. Copulas with weaker tail dependence, such as the Frank and FGM, seem to perform better. This suggests weak negative dependence, because whereas FGM provides a satisfactory fit, it only accommodates a modest amount of dependence. The two copulas that provide the best fit are Frank and FGM. Estimates of β1, β2 (not reported in the table) are robust across the Frank and FGM copulas. These results show the importance of selecting copulas appropriate for given data. Failure of the estimation algorithm could be an indication of mismatch (inconsistency) between the properties of the data and the copula restriction on the dependence parameter.

Table 4.6 Results for copula model with negative binomial marginals.

                         Gumbel               Clayton              Gaussian             Frank                FGM
                         Estimate   St. Err   Estimate   St. Err   Estimate   St. Err   Estimate   St. Err   Estimate   St. Err
θ (FML-NB)               2.226**    0.042     0.001      0.0004    0.956**    0.001     −0.966**   0.108     −0.449**   0.062
θ (TSML-NB)              2.427      0.061     0.001      0.004     0.949      0.001     −0.960     0.148     −0.446     0.058
θ (FML-Poisson)          no conv.             no conv.             no conv.             −0.414     0.051     −0.407     0.034
log-like (FML-NB)        −13348.41            −9389.32             −16899.43            −9347.84             −9349.11
θ (FML-Normal)           1.001      0.014     0.000      0.009     −0.0595    0.014     –                    –
log-like (FML-Normal)    −15361.48            −15361.38            −15352.51            –                    –

For the three specifications which could not be estimated because of convergence problems, we repeated the exercise after making two changes. First, the original count data were replaced by continued data obtained by adding uniform (0, 1) random draws. This transformation reduces the incidence of ties in the data. Second, we replaced the negative binomial marginals by normal marginals, but with the same conditional mean specification. These changes made it possible to estimate the Gaussian copula; the estimated dependence parameter is −0.0595. The estimation of the Clayton and Gumbel copulas again failed, with the dependence parameter settling at the boundary values. These estimates at boundary values are not valid maximum likelihood estimates.

Next we consider the effects of misspecifying the marginals. If the marginals are in the linear exponential family, misspecification of either the conditional mean or the variance will affect inference about dependence. In the present example, negative binomial marginals fit the data significantly better than Poisson marginals due to overdispersion; see Table IV in Cameron et al. (1988). Further, in a bivariate model there is potential for confounding overdispersion and dependence; see Munkin and Trivedi (1999).

We also estimated the copulas under Poisson marginals using both the FML and TSML methods. The estimation algorithms for FML and TSML failed to converge for the Clayton, Gumbel, and Gaussian copulas. Given the analysis of the foregoing paragraphs, this result is not a surprise – Clayton and Gumbel copulas do not support the negative dependence that is implied by the Frank and FGM copulas, for which more satisfactory results were obtained. Under Poisson marginals, however, convergence of the optimization algorithm was fragile for the Frank copula, but was achieved for the FGM specification, which produced results similar to those for the case of negative binomial marginals; see the θ (FML-Poisson) row of Table 4.6.
Thus in the FGM case, where the copula model was computationally tractable, the neglect of overdispersion did not appear to affect the estimates of the dependence parameter. But in the case of the Frank copula, the dependence parameter estimated under Poisson marginals was less than half the value estimated under negative binomial marginals.

Our results suggest that in estimating copulas – especially the Gaussian copula – for discrete variables there may be some advantage in analyzing "continued data" rather than discrete data. Applying the continuation transformation introduces measurement error, and this will introduce some bias in the results, which may be substantial if the range of the data is limited. When the data are inconsistent with the restrictions on the range of the dependence parameter, this may be indicated by failure of the maximum likelihood algorithm to converge or by the estimated value of the dependence parameter settling at the boundary of the admissible interval. There is a role for specification tests of dependence based on marginals.

4.5.3 Co-movement of commodity prices example

Modeling the joint behavior of asset or commodity prices is one of the most common applications of copulas. Whereas we have mainly used copulas in microeconometric settings, we next present an example of how copulas may be used to model co-movement of prices in a financial time series setting. The example draws on Deb et al. (1996), which used multivariate GARCH models to test a variety of hypotheses about the co-movements in prices of largely "unrelated" commodities. As the authors made clear, that study concentrated mainly on linear dependence measures, and found relatively limited evidence in support of the hypothesis that herd-like behavior in financial markets may account for dependence between price changes of commodities that are essentially unrelated in either consumption or production.
Here we revisit the issue using the copula methodology, which permits us to test the hypothesis of asymmetric or tail dependence. Unlike the Deb et al. (1996) study, we shall only report results for one pair of prices because our interest in this example is only illustrative.

We consider monthly commodity price data from 1974–1992 on maize and wheat. Let ∆y1t and ∆y2t denote the first difference of the log of U.S. prices of maize and wheat, respectively. The time series characteristics of these variables are as follows. Maize prices are more volatile than wheat prices; the average monthly percentage change is 7.6% for maize and 4.9% for wheat. The range of variation is (−22%, +29%) for maize and (−18%, +25%) for wheat. Both time series display strong excess kurtosis, and the joint test of zero skewness and excess kurtosis is easily rejected. The evidence on serial correlation in each time series is somewhat ambiguous. The Ljung-Box portmanteau test of zero serial correlation is rejected if the test is based on 13 leading autocorrelation coefficients, but not if we use 40 coefficients. The null hypothesis of no ARCH effects is not rejected at the 5% and 10% significance levels if we consider the first three lags only. These features of our data suggest that copula estimation is a reasonable exercise for these data.

The joint distribution of these prices is specified using a copula function, C(F1(∆y1t), F2(∆y2t); θ). In view of the observed excess kurtosis in both time series, we assume that the marginal distributions follow Student's t-distribution with 5 degrees of freedom.4 In preliminary analysis, macroeconomic variables such as inflation, money supply, and exchange rates had minimal effects on the prices of these commodities, and therefore the marginal distributions are estimated without explanatory variables.
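The marginal-fitting and probability integral transform steps just described can be sketched as follows. This is our own illustration, not the original computation: the data below are simulated stand-ins for the maize and wheat series, the variable names (dy1, dy2) are hypothetical, and SciPy is assumed. Step 1 fits location-scale t(5) marginals; step 2 maximizes a Frank copula likelihood over the dependence parameter.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)

# Simulated stand-ins for the two log price-change series (hypothetical;
# the actual maize and wheat data are not reproduced here).
n = 228
e = rng.standard_t(5, size=(n, 2))
dy1 = 0.0076 + 0.08 * e[:, 0]
dy2 = 0.0049 + 0.06 * (0.5 * e[:, 0] + e[:, 1])

def t5_pit(y):
    # Step 1 (two-step estimation): fit location and scale of a Student's t
    # marginal with the degrees of freedom fixed at 5 (f0 fixes the first
    # shape parameter, df), then apply the probability integral transform.
    df, loc, scale = stats.t.fit(y, f0=5)
    return stats.t.cdf(y, df=df, loc=loc, scale=scale)

def frank_nll(theta, u, v):
    # Negative log likelihood of the Frank copula density c(u, v; theta).
    a = -np.expm1(-theta)                      # 1 - exp(-theta)
    num = theta * a * np.exp(-theta * (u + v))
    den = (a - (-np.expm1(-theta * u)) * (-np.expm1(-theta * v))) ** 2
    return -np.sum(np.log(num) - np.log(den))

u, v = t5_pit(dy1), t5_pit(dy2)

# Step 2: maximize the copula log likelihood over theta (searching only the
# positive-dependence region, as found for these commodities).
res = optimize.minimize_scalar(frank_nll, bounds=(0.01, 30.0),
                               args=(u, v), method="bounded")
print("estimated Frank dependence parameter:", round(res.x, 3))
```

Replacing `frank_nll` with another copula density (Gumbel, Gaussian, FGM) changes only step 2, which is one attraction of the two-step approach.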
Under this specification, the dependence parameter θ indicates whether changes in the prices of maize and wheat are dependent over time.

[Footnote 4: The t-distribution allows for more mass in the tails compared to the more commonly used normal distribution, and 5 degrees of freedom assures the existence of the first four moments.]

The top two panels of Figure 4.3 show the two time series, ∆y1t and ∆y2t; their scatter plot is shown in the left panel of the bottom row. The scatter plot is suggestive of positive dependence, and slightly more points appear to lie northeast of the (0, 0) origin. This is somewhat indicative of upper tail dependence, although the graph does not provide conclusive evidence. The bottom right panel shows the scatter plot of the residuals from an autoregression of ∆yjt on ∆yj,t−1 and ∆yj,t−2, j = 1, 2. Comparison of the two bottom panels shows that the two scatter plots look similar.

[Figure 4.3: Comovements in wheat and maize price changes.]

Table 4.7 shows estimates of dependence parameters and log likelihood values for several copulas. The first row shows results based on univariate marginals. Four of the five copulas reveal significant positive dependence in commodity prices, while the dependence parameter for the Clayton copula settles on the lower bound of its permissible range (0.001). The Gumbel copula produces the largest maximized likelihood value, which indicates that changes in maize and wheat prices exhibit upper tail dependence. That is, months in which there are large increases in maize prices also tend to have large increases in wheat prices. This would be plausible if both crops are produced in regions affected by similar weather patterns. The fact that maximum likelihood estimation of the Clayton copula fails to converge indicates that price changes in maize and wheat do not exhibit substantial lower tail dependence.

We repeated our calculations first by including covariates as in Deb et al.
(1996), and then also under the assumption that the marginal distributions are Student's t with 3 degrees of freedom. These results are qualitatively similar to those mentioned above. A deeper study of co-movements using copula methods may yield further useful insights.

Because the tests of serial correlation suggest that both price series are serially correlated, we repeated the estimation exercise using, for each time series, "pre-whitened" residuals from a second-order autoregression. These residuals were used in a two-step procedure similar to that of Chen and Fan (2006) mentioned earlier. These results are shown in the second row of Table 4.7. Once again, the previous conclusions about dependence, and especially upper tail dependence, are confirmed.

Table 4.7 Dependence parameter estimates for maize and wheat comovements.

                 Gumbel            Clayton           Gaussian          Frank             FGM
                 Estimate  St. Err Estimate  St. Err Estimate  St. Err Estimate  St. Err Estimate  St. Err
θ (FML)          1.144**   0.027   0.001     0.026   0.260**   0.033   1.914**   0.360   0.520**   0.085
θ (TSML-AR2)     1.137**   0.025   0.001     0.027   0.247**   0.034   1.713**   0.343   0.488**   0.087
log-like (FML)   −1922.31          −980.64           −1940.37          −1982.48          −1994.25

In many applied time series studies, the assumption of joint normality is restrictive but fairly common. In contrast, by estimating several different copulas with varying dependence patterns, one can potentially gain a more detailed understanding of how commodity prices are related.

There are other settings in which copula-based modeling of comovements has been a fruitful alternative. For example, Heinen and Rengifo (2003a, 2003b) consider analyzing comovements between counts in a time series setting. As demonstrated in Cameron and Trivedi (1998), discrete time series modeling is quite awkward, and that of its multivariate variants is even more so.
Then copula-based time series modeling is potentially a very useful alternative specification.

4.6 Causal Modeling and Latent Factors

The causal effect of one variable on another is often of particular interest in econometric modeling. In the copula framework, when the causal effect concerns an exogenous variable, estimation is straightforward, as the marginal distributions can be parameterized by functions of explanatory variables. However, complications arise when the researcher's interest lies in conditional statements such as Pr[u1 | u2] rather than Pr[u1, u2], as the arguments in copula functions are marginal as opposed to conditional distributions.

This section briefly discusses conditional modeling using copulas and provides an example based on the bivariate sample selection model. This is followed by discussion of a closely related alternative to copula modeling that has appeared in the literature, the latent factor model.

4.6.1 Conditional copulas

For continuous random variables, a straightforward method for recovering the conditional distribution, given the joint distribution, is to differentiate as follows:

    C_{U1|U2}(u1, u2) = ∂C(u1, u2)/∂u2,
    C_{U2|U1}(u1, u2) = ∂C(u1, u2)/∂u1.        (4.12)

For copulas with complicated parametric forms, differentiating might be awkward. Zimmer and Trivedi (2006) demonstrate that Bayes' Rule can also be used to recover the conditional copula as follows:

    C_{U1|U2}(u1, u2) = C(u1, u2)/u2.

Similarly, survival copulas are useful because the conditional probability Pr[U1 > u1 | U2 > u2] can be expressed via survival copulas:

    Pr[U1 > u1 | U2 > u2] = [1 − u1 − u2 + C(u1, u2)]/(1 − u2)
                          = C̄(1 − u1, 1 − u2)/(1 − u2),        (4.13)

where C̄ denotes the survival copula. At the time of this writing, examples of conditional modeling using copulas are relatively rare in econometrics. The following example is one such application.

4.6.2 Copula-based bivariate sample selection example

The bivariate sample selection model has been extensively used in economics and other social sciences. There are a number of variants, one of which has the following structure.

The model has two continuous latent variables with marginal distribution functions Fj(y*j) = Pr[Y*j ≤ y*j], j = 1, 2, and joint distribution F(y*1, y*2) = Pr[Y*1 ≤ y*1, Y*2 ≤ y*2]. Let y*2 denote the outcome of interest, which is observed only if y*1 > 0. For example, y*1 determines whether to participate in the labor market and y*2 determines how many hours to work. The bivariate sample selection model consists of a selection equation such that

    y1 = 1[y*1 > 0]        (4.14)

and an outcome equation such that

    y2 = 1[y*1 > 0] y*2.        (4.15)

That is, y2 is observed when y*1 > 0 but not when y*1 ≤ 0. One commonly used variant of this model specifies a linear model with additive errors for the latent variables,

    y*1 = x′1 β1 + ε1,
    y*2 = x′2 β2 + ε2.        (4.16)

Suppose that the bivariate distribution of (ε1, ε2) is parametrically specified, e.g., bivariate normal. The bivariate sample selection model therefore has likelihood function

    L = ∏_{i=1}^{N} {Pr[y*1i ≤ 0]}^{1−y1i} {f2|1(y2i | y*1i > 0) × Pr[y*1i > 0]}^{y1i},        (4.17)

where the first term is the contribution when y*1i ≤ 0, since then y1i = 0, and the second term is the contribution when y*1i > 0. This likelihood function is general and can be specialized to linear or nonlinear models. The case of bivariate normality is well known in the econometrics literature; for details see, for example, Amemiya (1985: 385–387).

The presence of the conditional distribution f2|1(y2i | y*1i > 0) in the likelihood presents complications in estimation, and thus conditional copulas calculated by Eq. (4.12) might be useful.
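To make Eq. (4.12) concrete, the sketch below (our own illustration, not from the original text) computes the conditional copula C_{U1|U2} for the Clayton copula, for which the partial derivative has a simple closed form, and verifies it against a finite-difference approximation.

```python
import numpy as np

def clayton_cdf(u, v, theta):
    # Clayton copula: C(u, v) = (u^(-theta) + v^(-theta) - 1)^(-1/theta).
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def clayton_cond(u, v, theta):
    # Conditional copula C_{U1|U2}(u, v) = dC(u, v)/dv, as in Eq. (4.12):
    # Pr[U1 <= u | U2 = v] for the Clayton copula, in closed form.
    s = u ** -theta + v ** -theta - 1.0
    return v ** (-theta - 1.0) * s ** (-1.0 / theta - 1.0)

theta, u, v = 2.0, 0.3, 0.6

# Sanity check: the analytic partial derivative should match a central
# finite-difference approximation of C(u, v) in its second argument.
h = 1e-6
fd = (clayton_cdf(u, v + h, theta) - clayton_cdf(u, v - h, theta)) / (2.0 * h)
print(clayton_cond(u, v, theta), fd)  # the two values agree closely
```

For copulas whose derivative is awkward to obtain analytically, the same finite-difference device can serve as a numerical substitute for Eq. (4.12).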
Heckman's (1976) model assumes a joint normal distribution. Lee (1983) shows that the Gaussian copula can be used to relax the assumption that the marginal distributions are normal, although he never explicitly mentioned copulas. Lee's idea was slow to gain acceptance, but recent papers by Prieger (2002) and Genius and Strazzera (2004) have relaxed the assumptions of marginal and joint normality. Smith (2003) provides a general copula-based framework for this model by demonstrating that copulas can be used to extend the standard analysis to any bivariate distribution with given marginals.

First, observe that the conditional density can be written as follows in terms of marginal densities and distribution functions:

    f2|1(y2 | Y*1 > 0) = (1 − F1(0))^{−1} ∂/∂y2 [F2(y2) − F(0, y2)]
                       = (1 − F1(0))^{−1} {f2(y2) − ∂/∂y2 [F(0, y2)]},

where f2(y2) = ∂[F2(y2)]/∂y2, which after substitution into (4.17) yields

    L = ∏_{i=1}^{N} [F1(0)]^{1−y1i} {f2(y2i) − ∂/∂y2 [F(0, y2i)]}^{y1i}.        (4.18)

Once the likelihood is written in this form it is immediately obvious, as pointed out by Smith (2003), that we can generate analytical expressions for the otherwise awkward-to-handle term ∂[F(0, y2i)]/∂y2 in the likelihood using alternative specifications of copulas. For the bivariate Archimedean class of copulas C(F1(y1), F2(y2); θ), with generator ϕ and dependence parameter θ, this term simplifies to

    ∂[F(0, y2)]/∂y2 = [ϕ′(F2)/ϕ′(Cθ)] × f2.        (4.19)

Smith (2003) provides an empirical analysis using eight different copulas. Moreover, he extends the copula-based selection model to other variants, including the so-called "Roy model" (also called the "regimes model"); see also Smith (2005). Smith's extension of the bivariate normal selection model provides an important extension of the methods available in the literature.
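The generator formula (4.19) is easy to check numerically. The sketch below is our own illustration: it uses the Clayton generator ϕ(t) = (t^(−θ) − 1)/θ and, purely for concreteness, standard normal marginals F1 and F2 (a hypothetical choice), comparing the analytic term against direct differentiation of F(0, y2) = C(F1(0), F2(y2)).

```python
import numpy as np
from scipy import stats

theta = 1.5

def clayton_cdf(u, v):
    # Clayton copula C(u, v) = (u^(-theta) + v^(-theta) - 1)^(-1/theta).
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def phi_prime(t):
    # Derivative of the Clayton generator phi(t) = (t^(-theta) - 1)/theta.
    return -t ** (-theta - 1.0)

# Hypothetical marginals, for illustration only: F1 and F2 standard normal.
F1_at_0 = stats.norm.cdf(0.0)                 # F1(0) = 0.5
y2 = 0.8
F2, f2 = stats.norm.cdf(y2), stats.norm.pdf(y2)

# Eq. (4.19): d F(0, y2) / d y2 = [phi'(F2) / phi'(C_theta)] * f2.
C = clayton_cdf(F1_at_0, F2)
term = phi_prime(F2) / phi_prime(C) * f2

# Check against numerical differentiation of F(0, y2) = C(F1(0), F2(y2)).
h = 1e-6
fd = (clayton_cdf(F1_at_0, stats.norm.cdf(y2 + h))
      - clayton_cdf(F1_at_0, stats.norm.cdf(y2 - h))) / (2.0 * h)
print(term, fd)
```

Swapping in another Archimedean generator changes only `phi_prime` and `clayton_cdf`, which is the practical content of Smith's observation.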
The approach can also be used to model selection in the context of discrete outcomes, e.g., event counts.

4.6.3 Copulas and latent factor models

We now consider another approach to modeling dependence in nonlinear regression models that arise in the analysis of cross-section survival data, event counts, jointly dependent continuous and discrete variables, and so forth. Latent factors have been used to model conditional dependence, but they can also be viewed as a general approach to joint modeling.

The latent factor approach considered here has an important similarity with the copula approach in that the joint model is built from marginal (regression) models in which either common or correlated latent factors enter the models in the same manner as regressors. Their simultaneous presence in different marginals generates dependence between variables. Such latent factor models have a long history in statistics. In the context of bivariate distributions they have appeared under a variety of names, such as the shared frailty model in survival analysis (Hougaard, 2000), the trivariate reduction model, and latent factor models (Skrondal and Rabe-Hesketh, 2004). Generically they are all mixture models and can also be interpreted as random effects models, the difference being due to the structure placed on the way the random effects are introduced into the model and the implicit restrictions that are imposed on the dependence structure.

In choosing a statistical framework for modeling dependence, the structure of dependence should be carefully considered. In the context of regression, common observable variables in regression functions usually account for some dependence. However, models may specify additional dependence through common unobserved factors, usually referred to as "frailty" in demography and "unobserved heterogeneity" in econometrics.
Dependence induced by such factors can follow a variety of structures. Hougaard (2000) discusses dependence in multivariate survival models under the headings of common events and common risks. In survival models random shocks may induce dependence only for short durations or only for long durations. In bivariate event count models dependence may be induced for low counts or high counts, and not necessarily all counts. In longitudinal data models dependence may be induced by persistent or transient random factors. Thus, how to model dependence is not an issue that will elicit a mechanical response.

Marshall–Olkin Example. We now consider some specific examples. Marshall and Olkin (1990) generate bivariate distributions from mixtures and convolutions of product families in a manner analogous to Eq. (3.6). Consider the bivariate distribution

    f(y1, y2 | x1, x2) = ∫0∞ f1(y1 | x1, ν) f2(y2 | x2, ν) g(ν) dν,        (4.20)

where f1, f2, and g are univariate densities, and ν > 0 may be interpreted as common unobserved heterogeneity affecting both counts. Let f1(y1 | x1, ν) and f2(y2 | x2, ν) denote negative binomial marginal distributions for the count variables y1 and y2, with conditional means µ1|ν = exp(x′1β1 + ν) and µ2|ν = exp(x′2β2 + ν). Thus, a bivariate negative binomial mixture generated in this way will have univariate negative binomial mixtures. This approach suggests a way of specifying or justifying overdispersed and correlated count models, based on a suitable choice of g(·), more general than in the example given above. Marshall and Olkin (1990) generate a bivariate negative binomial by assuming that ν has a gamma distribution with parameter α^{−1}. That is,

    h(y1, y2 | α^{−1}) = ∫0∞ [(µ1ν)^{y1} exp(−µ1ν)/y1!] × [(µ2ν)^{y2} exp(−µ2ν)/y2!] × [ν^{α^{−1}−1} exp(−ν)/Γ(α^{−1})] dν
                       = [Γ(y1 + y2 + α^{−1})/(y1! y2! Γ(α^{−1}))] [µ1/(µ1 + µ2 + 1)]^{y1} [µ2/(µ1 + µ2 + 1)]^{y2} [1/(µ1 + µ2 + 1)]^{α^{−1}}.        (4.21)

This model is straightforward to estimate by maximum likelihood. However, because the model restricts the unobserved heterogeneity to an identical component for both count variables, the correlation between the two count variables,

    Corr(y1, y2) = µ1µ2 / √[(µ1² + αµ1)(µ2² + αµ2)],        (4.22)

must be positive. The model can only accommodate positive dependence. Note further that α does double duty: it is the overdispersion parameter and it partly determines the correlation.

Consider whether a more flexible dependence structure could be introduced here. More flexible bivariate and multivariate parametric count data models can be constructed by introducing correlated, rather than identical, unobserved heterogeneity components. For example, suppose y1 and y2 are, respectively, Poisson(µ1|ν1) and Poisson(µ2|ν2), with

    µ1 | x, ν1 = exp(β01 + λ1ν1 + x′1β1),        (4.23)
    µ2 | x, ν2 = exp(β02 + λ2ν2 + x′2β2),        (4.24)

where ν1 and ν2 represent correlated unobserved heterogeneity. Dependence between y1 and y2 is induced if ν1 and ν2 are correlated. We refer to (ν1, ν2) as latent factors, and to (λ1, λ2) as factor loadings. For example, we could assume (ν1, ν2) to be bivariate normal distributed with correlation ρ. However, maximum likelihood estimation of (β01, β1, β02, β2) is not straightforward, for several reasons. First, because (ν1, ν2) are latent factors, we need to fix the location and scale, and introduce normalizations that allow identification of the factor loadings. The usual solution is to fix the mean to equal 0, the variance to be 1, and one of the factor loadings, λ1 or λ2, to equal 1, leaving the other as a free parameter, which determines the correlation between the latent factors.
Second, under the assumption of normality there is no analytical form for the integral, as there was in Eq. (4.21), so maximum simulated likelihood (MSL) rather than standard maximum likelihood is required. MSL estimation of such models requires numerically intensive methods, such as numerical or Monte Carlo integration; see Munkin and Trivedi (1999). In applying MSL, each term in the likelihood is of the form

    f(y1, y2 | x1, x2) = ∫ f1(y1 | x1, ν1) f2(y2 | x2, ν2) g(ν1, ν2) dν1 dν2
                       ≈ (1/S) Σ_{s=1}^{S} f1(y1 | x1, ν1^{(s)}) f2(y2 | x2, ν2^{(s)}),        (4.25)

where the second line is a numerical approximation to the integral, obtained by replacing (ν1, ν2) by their simulated values and averaging over S pseudo-random draws from the assumed bivariate distribution. Maximizing this simulated likelihood function with respect to the unknown parameters yields the MSL estimator. Gourieroux and Monfort (1996) show that if S increases at a faster rate than the square root of the sample size, then MSL is asymptotically equivalent to maximum likelihood. Approaches for increasing the efficiency of numerical integration are available and will likely need to be employed if the joint distribution involves many parameters; see Train (2003).

MSL estimation of a latent factor model has the theoretical advantage that it can be generalized to higher dimensions, although at additional and nontrivial computational cost. Like the copula approach it is based on marginals, which is also a potential advantage. Furthermore, Zimmer and Trivedi (2006) demonstrate that MSL produces similar results to copula models.
However, in the present case the dependence structure will be restricted by the assumption of joint normality. This is an important consideration if the main focus is on estimating the dependence structure, whereas it is a lesser consideration if the main focus is on the regression function, where the dependence parameter is treated as a nuisance parameter.

5 Conclusion

Copulas provide a potentially useful modeling toolkit in certain settings. Sklar's Theorem establishes that any multivariate distribution function with continuous margins has a unique copula representation. The result is the basis of the statement that ". . . much of the study of joint distributions can be reduced to the study of copulas" (Schweizer, 1991: 17). The result also indicates that copulas are a "recipe" for generating joint distributions by combining given marginal distributions according to a specified form of a copula function. Copulas are especially appealing because they capture dependence more broadly than the standard multivariate normal framework. A leading example is the case where the marginals belong to different families of distributions.

Inference about dependence can be implemented in a fully parametric or a partially parametric framework. However, as Hougaard (2000: 435) has observed, ". . . strictly speaking, copulas are not important from a statistical point of view. It is extremely rare that marginal distributions are known. Assuming the marginals are known is in almost all cases in conflict with reality. Copulas make sense, however, in a more broad perspective, first of all as part of the combined approach . . . where the model is parameterized by means of the marginal distributions and the copula. Second, they make sense for illustrating dependence. . .
”.\nConsequently, modeling marginal distributions should be done with\ncare so that gross misspecification is avoided.\nIn certain empirical contexts, as we have illustrated, copulas are\ncomputationally more attractive than the latent factor models to which\nthey are related. Latent factor models with errors generated by copulas\nwould appear to be a potentially attractive synthesis.\nSeveral directions for future research on copula modeling could\nbe particularly helpful to practitioners. To deal with the standard\neconometric issues of model misspecification, model diagnostics, and\nmodel evaluation, diagnostic tests (e.g., score tests) based only on the\nmarginals would help narrow the set of copulas under consideration.\nFor example, one could test first for tail dependence of a specific type\nand then narrow the range of copulas to be considered. More flexible\ncopula modeling would be possible if more flexible specifications for\nhigher dimensional copulas were to be developed.\nReferences\nAmemiya, T. (1973), ‘Regression analysis when the dependent variable\nis truncated normal’. Econometrica 41, 997–1016.\nAmemiya, T. (1985), Advanced Econometrics. Cambridge: Harvard\nUniversity Press.\nAné, T. and C. Kharoubi (2003), ‘Dependence structure and risk measure’. Journal of Business 76, 411–438.\nArmstrong, M. (2003), ‘Copula catalogue. Part 1: Bivariate\nArchimedean copulas’. Unpublished paper available at http://www.\ncerna.ensmp.fr.\nArmstrong, M. and A. Galli (2002), ‘Sequential nongaussian simulations using the FGM copula’. http://www.cerna.ensmp.fr/\nDocuments/MA-AG-WPCopula.pdf.\nBouyé, E., V. Durrleman, A. Nikeghbali, G. Ribouletm, and T. Roncalli\n(2000), ‘Copulas for finance: A reading guide and some applications’.\nUnpublished Manuscript, London: Financial Econometrics Research\nBoyer, B. H., M. S. Gibson, and M. Loretan (1999), ‘Pitfalls in tests\nfor changes in correlations’. 
Federal Reserves Board, IFS Discussion\nPaper No 597R.\n101\n102\nReferences\nBrown, E. and H. Lankford (1992), ‘Gifts of money and gifts of time:\nEstimating the effects of tax prices and available time’. Journal of\nPublic Economics 47, 321–341.\nCameron, A. C., T. Li, P. K. Trivedi, and D. M. Zimmer (2004), ‘Modeling the differences in counted outcomes using bivariate copula models: With application to mismeasured counts’. Econometrics Journal\n7, 566–584.\nCameron, A. C. and P. K. Trivedi (1998), Regression Analysis of Count\nData, Econometric Society Monographs 30. New York: Cambridge\nUniversity Press.\nCameron, A. C., P. K. Trivedi, F. Milne, and J. Piggott (1988), ‘A\nmicroeconomic model of the demand for health care and health insurance in Australia’. Review of Economic Studies 55, 85–106.\nChambers, J. M., C. L. Mallows, and B. W. Stuck (1976), ‘A method\nfor simulating stable random variables’. Journal of the American\nStatistical Association 71, 340–344.\nChen, X. and Y. Fan (2006), ‘Estimation and model selection of semiparametric copula-based multivariate dynamic models under copula\nmisspecification’. Journal of Econometrics.\nCherubini, U., E. Luciano, and W. Vecchiato (2004), Copula Methods\nin Finance. New York: John Wiley.\nChib, S. (1992), ‘Bayes regression for the Tobit censored regression\nmodel’. Journal of Econometrics 51, 79–99.\nChib, S. and R. Winkelmann (2001), ‘Markov chain Monte Carlo analysis of correlated count data’. Journal of Business and Economic\nStatistics 19, 428–435.\nClayton, D. G. (1978), ‘A model for association in bivariate life tables\nand its application in epidemiological studies of familial tendency in\nchronic disease incidence’. Biometrika 65, 141–151.\nClayton, D. G. and J. Cuzick (1985), ‘Multivariate generalizations of\nthe proporational hazards model’. Journal of Royal Statistical Society\nSeries B 34, 187–220.\nCont, R. (2001), ‘Empirical proeprties of asset returns: Stylized facts\nand statistical issues’. 
Quantitative Finance 1, 223–236.\nReferences\n103\nCook, R. D. and M. E. Johnson (1981), ‘A family of distributions for\nmodelling non-elliptically symmetric multivariate data’. Journal of\nRoyal Statistical Society B 43(2), 210–218.\nde la Peña, V. H., R. Ibragimov, and S. Sharakhmetov (2003), ‘Characterizations of joint distributions, copulas, information, dependence\nand decoupling, with applications to time series’. 2nd Erich L.\nLehmann Symposium – Optimality, IMS Lecture Notes – Monograph\nSeries, (J. Rojo, Ed.), In Press.\nDeb, P., P. K. Trivedi, and P. Varangis (1996), ‘Excess Co-movement\nin commodity prices reconsidered’. Journal of Applied Econometrics\n11, 275–291.\nDenuit, M. and P. Lambert (2005), ‘Constraints on concordance measures in bivariate discrete data’. Journal of Multivariate Analysis 93,\n40–57.\nDevroye, L. (1986), Non-Uniform Random Variate Generation.\nNew York: Springer-Verlag.\nDrouet-Mari, D. and S. Kotz (2001), Correlation and Dependence.\nLondon: Imperial College Press.\nDurbin, J. and A. S. Stuart (1951), ‘Inversions and rank correlations’.\nJournal of Royal Statistical Society Series B 2, 303–309.\nDurrleman, V., A. Nikeghbali, and T. Roncalli (2000), ‘How to get\nbounds for distribution convolution? A simulation study and an\napplication to risk management’. GRO Credit Lyonnais, working\npaper.\nEmbrechts, P., A. McNeil, and D. Straumann (2002), ‘Correlation\nand dependence in risk management: Properties and pitfalls’. In:\nM. A. H. Dempster (ed.): Risk Management: Value at Risk and\nBeyond. Cambridge: Cambridge University Press, pp. 176–223.\nFang, K.-T. and Y.-T. Zhang (1990), Generalized Multivariate Analysis.\nBerlin and New York: Springer-Verlag.\nFrank, M. J. (1979), ‘On the simultaneous associativity of F(x,y) and\nx+y - F(x,y)’. Aequationes Math 19, 194–226.\nFrees, E., J. Carriere, and E. Valdez (1996), ‘Annuity valuation\nwith dependent mortality’. Journal of Risk and Insurance 63,\n229–261.\n104\nReferences\nFrees, E. 
W. and E. A. Valdez (1998), ‘Understanding relationships\nusing copulas’. North American Actuarial Journal 2(1), 1–26.\nGenest, C. and J. Mackay (1986), ‘The joy of copulas: Bivariate distributions with uniform marginals’. The American Statistician 40,\n280–283.\nGenest, C., Q. Molina, and R. Lallena (1995), ‘De L’impossibilité de\nConstuire des lois a Marges Multidimensionnelles Données a Partir\nde Copules’. Comptes Rendus de l’Académie des Sciences Serie I,\nMathematique 320, 723–726.\nGenest, C. and L. Rivest (1993), ‘Statistical inference procedures for\nbivariate Archimedean copulas’. Journal of the American Statistical\nAssociation 88(423), 1034–1043.\nGenius, M. and E. Strazzera (2004), ‘The copula approach to sample selection modelling: An application to the recreational value of\nforests’. Working paper, The Fondazione Eni Enrico Mattei.\nGeorges, P., A.-G. Lamy, A. Nicolas, G. Quibel, and T. Roncalli (2001),\n‘Multivariate survival modeling: A unified approach with copulas’.\nUnpublished paper, France: Groupe de Recherche Opérationnelle\nCrédit Lyonnais.\nGourieroux, C. and A. Monfort (1996), Simulation Based Econometric\nMethods. New York: Oxford University Press.\nGreene, W. H. (1990), ‘Multiple roots of the Tobit log-likelihood’. Journal of Econometrics 46, 365–380.\nGumbel, E. J. (1960), ‘Distributions des Valeurs Extremes en Plusieurs\nDimensions’. Publications de l’Institute de Statistı́que de l’Université\nde Paris 9, 171–173.\nHeckman, J. (1976), ‘The common structure of statistical models of\ntruncation, sample selection, and limited dependent variables and a\nsimple estimator for such models’. Annals of Economics and Social\nMeasurement 5, 475–492.\nHeckman, J. J. and B. E. Honoré (1989), ‘The identifiability of the\ncompeting risks model’. Biometrika 76, 325–330.\nHeinen, A. and E. Rengifo (2003a), ‘Comovements in trading activity:\nA multivariate autoregressive model of time series count data using\ncopulas’. 
CORE Discussion Paper.\nReferences\n105\nHeinen, A. and E. Rengifo (2003b), ‘Multivariate modelling of time\nseries count data: An autoregressive conditional Poisson model’.\nCORE Discussion Paper.\nHoeffding, W. (1940), ‘Scale-invariant correlation theory’. In: N. I.\nFisher and P. K. Sen (eds.): The Collected Works of Wassily Hoeffding. New York: Springer-Verlag, pp. 57–107.\nHoeffding, W. (1941), ‘Scale-invariant correlation measures for discontinuous distributions’. In: N. I. Fisher and P. K. Sen (eds.): The\nCollected Works of Wassily Hoeffding. New York: Springer-Verlag,\npp. 109–133.\nHougaard, P. (1986), ‘A class of multivariate failure time distributions’.\nBiometrika 73, 671–678.\nHougaard, P. (1987), ‘Modeling multivariate survival’. Scandinavian\nJournal of Statistics 14, 291–304.\nHougaard, P. (2000), Analysis of Multivariate Survival Data. NewYork: Springer-Verlag.\nHougaard, P., B. Harvald, and N. V. Holm (1992), ‘Measuring the similarities between the lifetimes of adult Danish twins born between\n1881–1930’. Journal of the American Statistical Association 87,\n17–24.\nHu, L. (2004), ‘Dependence patterns across financial markets: A mixed\ncopula approach’. The Ohio State University, working paper.\nHuang, H.-C. (1999), ‘Estimation of the SUR Tobit model via the\nMCECM algorithm’. Economics Letters 64, 25–30.\nHuang, H.-C., F. A. Sloan, and K. W. Adamache (1987), ‘Estimation of\nseemingly unrelated Tobit regressions via the EM algorithm’. Journal\nof Business and Economic Statistics 5, 425–430.\nHusler, J. and R. Reiss (1989), ‘Maxima of normal random vectors:\nBetween independence and complete dependence’. Statistics and\nProbability Letters 7, 283–286.\nHutchinson, T. P. and C. D. Lai (1990), Continuous Bivariate Distributions, Emphasising Applications. Sydney, Australia: Rumsby.\nJoe, H. (1990), ‘Families of min-stable multivariate exponential and\nmultivariate extreme value distributions’. 
Statistics and Probability\nLetters 9, 75–81.\n106\nReferences\nJoe, H. (1993), ‘Parametric families of multivariate distributions with\ngiven margins’. Journal of Multivariate Analysis 46, 262–282.\nJoe, H. (1994), ‘Multivariate extreme-value distributions with applications to environmental data’. Canadian Journal of Statistics 22(1),\n47–64.\nJoe, H. (1997), Multivariate Models and Dependence Concepts. London:\nChapman & Hall.\nJunker, M. and A. May (2005), ‘Measurement of aggregate risk with\ncopulas’. Econometrics Journal 8, 428–454.\nKamakura, W. A. and M. Wedel (2001), ‘Exploratory Tobit factor analysis for multivariate censored data’. Multivariate Behavioral Research\n36, 53–82.\nKimeldorf, G. and A. R. Sampson (1975), ‘Uniform representations of\nbivariate distributions’. Communications in Statistics 4, 617–627.\nKlugman, S. A. and R. Parsa (2000), ‘Fitting bivariate loss distributions with copulas’. Insurance: Mathematics and Economics 24,\n139–148.\nLee, L. (1983), ‘Generalized econometric models with selectivity’.\nEconometrica 51, 507–512.\nLi, D. X. (2000), ‘On default correlation: A copula function approach’.\nJournal of Fixed Income 9, 43–54.\nMachado, J. A. F. and J. M. C. Santos Silva (2005), ‘Quantiles for\ncounts’. Journal of the American Statistical Association 100, 1226–\n1237.\nMarshall, A. (1996), ‘Copulas, marginals, and joint distributions’. In:\nL. Ruschendorf, B. Schweizer, and M. D. Taylor (eds.): Distributions\nwith Fixed Marginals and Related Topics. Hayward, CA: Institute of\nMathematic Statistics, pp. 213–222.\nMarshall, A. W. and I. Olkin (1967), ‘A multivariate exponential distribution’. Journal of the American Statistical Association 62, 30–44.\nMarshall, A. W. and I. Olkin (1988), ‘Families of multivariate distributions’. Journal of the American Statistical Association 83, 834–841.\nMarshall, A. W. and I. Olkin (1990), ‘Multivariate distributions generated from mixtures of convolution and product families’. In: H. W.\nBlock, A. 
R. Sampson, and T. H. Savits (eds.): Topics in Statistical Dependence, Vol. 16 of IMS Lecture Notes-Monograph Series. pp. 371–393.
Meester, S. and J. MacKay (1994), ‘A parametric model for cluster correlated categorical data’. Biometrics 50, 954–963.
Meng, X. and D. B. Rubin (1996), ‘Efficient methods for estimating and testing seemingly unrelated regressions in the presence of latent variables and missing observations’. In: D. A. Berry, K. M. Chaloner, and J. K. Geweke (eds.): Bayesian Analysis in Statistics and Econometrics. John Wiley & Sons, Inc, pp. 215–227.
Miller, D. J. and W. H. Liu (2002), ‘On the recovery of joint distributions from limited information’. Journal of Econometrics 107, 259–274.
Morgenstern, D. (1956), ‘Einfache Beispiele Zweidimensionaler Verteilungen’. Mitteilungsblatt für Mathematische Statistik 8, 234–235.
Munkin, M. and P. K. Trivedi (1999), ‘Simulated maximum likelihood estimation of multivariate mixed-Poisson regression models, with application’. Econometric Journal 1, 1–21.
Nelsen, R. B. (1991), ‘Copulas and association’. In: G. Dall’Aglio, S. Kotz, and G. Salinetti (eds.): Advances in Probability Distributions with Given Marginals. Dordrecht: Kluwer Academic Publishers, pp. 31–74.
Nelsen, R. B. (2006), An Introduction to Copulas. 2nd edition. New York: Springer.
Oakes, D. (1982), ‘A model for association in bivariate survival data’. Journal of the Royal Statistical Society B 44, 414–422.
Patton, A. (2005a), ‘Modelling asymmetric exchange rate dependence’. International Economic Review. Forthcoming.
Patton, A. (2005b), ‘Estimation of multivariate models for time series of possibly different lengths’. Journal of Applied Econometrics. Forthcoming.
Pitt, M., D. Chan, and R. Kohn (2006), ‘Efficient Bayesian inference for Gaussian copula regression’. Biometrika 93, 537–554.
Prieger, J. (2002), ‘A flexible parametric selection model for non-normal data with application to health care usage’.
Journal of Applied Econometrics 17, 367–392.
Prokhorov, A. and P. Schmidt (2006), ‘Robustness, redundancy, and validity of copulas in likelihood models’. Working Paper, Michigan State University.
Schweizer, B. (1991), ‘Thirty years of copulas’. In: G. Dall’Aglio, S. Kotz, and G. Salinetti (eds.): Advances in Probability Distributions with Given Marginals: Beyond the Copulas. The Netherlands: Kluwer.
Schweizer, B. and A. Sklar (1983), Probabilistic Metric Spaces. New York: North Holland.
Schweizer, B. and E. F. Wolff (1981), ‘On nonparametric measures of dependence for random variables’. Annals of Statistics 9, 870–885.
Sklar, A. (1973), ‘Random variables, joint distributions, and copulas’. Kybernetica 9, 449–460.
Skrondal, A. and S. Rabe-Hesketh (2004), Generalized Latent Variable Modeling: Multilevel, Longitudinal and Structural Equation Models. Boca Raton, FL: Chapman & Hall/CRC.
Smith, M. (2003), ‘Modeling selectivity using Archimedean copulas’. Econometrics Journal 6, 99–123.
Smith, M. (2005), ‘Using copulas to model switching regimes with an application to child labour’. Economic Record 81, S47–S57.
Song, P. X. (2000), ‘Multivariate dispersion models generated from Gaussian copula’. Scandinavian Journal of Statistics 27, 305–320.
Song, P. X. K., Y. Fan, and J. D. Kalbfleisch (2005), ‘Maximization by parts in likelihood inference’. Journal of the American Statistical Association 100, 1145–1158.
Stevens, W. L. (1950), ‘Fiducial limits of the parameter of a discontinuous distribution’. Biometrika 37, 117–129.
Train, K. E. (2003), Discrete Choice Methods with Simulation. New York: Cambridge University Press.
Van Ophem, H. (1999), ‘A general method to estimate correlated discrete random variables’. Econometric Theory 15, 228–237.
Van Ophem, H. (2000), ‘Modeling selectivity in count data models’. Journal of Business and Economic Statistics 18, 503–511.
Vuong, Q.
(1989), ‘Likelihood ratio test for model selection and nonnested hypotheses’. Econometrica 57, 307–333.
Wales, T. J. and A. D. Woodland (1983), ‘Estimation of consumer demand systems with binding non-negativity constraints’. Journal of Econometrics 21, 263–285.
Wang, W. (2003), ‘Estimating the association parameter for copula models under dependent censoring’. Journal of the Royal Statistical Society B 65, 257–273.
Zimmer, D. M. and P. K. Trivedi (2006), ‘Using trivariate copulas to model sample selection and treatment effects: Application to family health care demand’. Journal of Business and Economic Statistics 24, 63–76.

A  Copulas and Random Number Generation

Simulation is a useful tool for understanding and exhibiting dependence structures of joint distributions. According to Nelsen (2006: 40), “one of the primary applications of copulas is in simulation and Monte Carlo studies.” Draws of pseudo-random variates from particular copulas can be displayed graphically, which allows one to visualize dependence properties such as tail dependence. Methods of drawing from copulas are also needed when conducting Monte Carlo experiments. This chapter presents selected techniques for drawing random variates from bivariate distributions and illustrates them with a few examples. In our experience, the appropriate method for drawing random variables depends upon which distribution is being considered; some methods are best suited for drawing variables from particular distributions. We do not claim that the methods outlined below are necessarily the “best” approaches for any given application. Rather, in our experience, the following approaches are straightforward to implement and provide accurate draws of random variates.

Random variates can be plotted to show dependence between variables.
Many copula researchers rely on scatter plots to visualize differences between various copulas (Embrechts et al., 2002). Other researchers report pdf contour plots (Smith, 2003), which are presumably easier to interpret than three-dimensional pdf graphs. Nevertheless, some researchers report combinations of pdf contour plots and three-dimensional graphs (Ané and Kharoubi, 2003), while others report all three: scatter plots, contour graphs, and three-dimensional figures (Bouyé et al., 2000). All three techniques convey the same information, so whichever presentation one chooses is essentially a matter of preference. We use scatter plots for several reasons. First, scatter plots are easier to generate than pdf contour plots or three-dimensional figures and do not require complicated graphing software. Second, random draws used to create scatter plots are also useful for generating simulated data in Monte Carlo experiments. Third, scatter plots can be easily compared to plots of real life data to assist in choosing appropriate copula functions. Finally, we feel that interpretations are more straightforward for scatter plots than they are for pdf contour plots or three-dimensional figures.

A.1  Selected Illustrations

In this section, we sketch some algorithms for making pseudo-random draws from copulas. These algorithms can be viewed as adaptations of various general methods for simulating draws from multivariate distributions.

A.1.1  Conditional sampling

For many copulas, conditional sampling is a simple method of simulating random variates. The steps for drawing from a copula are:

• Draw u from the standard uniform distribution.
• Set y = F^(−1)(u), where F^(−1) is any quasi-inverse of F.
• Use the copula to transform uniform variates. One such transformation method uses the conditional distribution of U2 given U1 = u1,

  c_{u1}(u2) = Pr[U2 ≤ u2 | U1 = u1]
             = lim_{Δu1 → 0} [C(u1 + Δu1, u2) − C(u1, u2)] / Δu1
             = ∂C(u1, u2)/∂u1.

By Theorem 2.2.7 in Nelsen (2006), a nondecreasing function c_{u1}(u2) exists almost everywhere in the unit interval.¹

In practice, conditional sampling is performed through the following steps:

• Draw two independent random variables (v1, v2) from U(0, 1).
• Set u1 = v1.
• Set u2 equal to the solution of v2 = C2(u2 | u1 = v1) = ∂C(u1, u2)/∂u1, i.e., u2 = C2^(−1)(v2 | u1 = v1).

Then the pair (u1, u2) are uniformly distributed variables drawn from the respective copula C(u1, u2; θ). This technique is best suited for drawing variates from the Clayton, Frank, and FGM copulas; see Armstrong and Galli (2002). Table A.1 shows how this third step is implemented for these three copulas.

Table A.1  Selected conditional transforms for copula generation.

Clayton:  u2 = [v1^(−θ) (v2^(−θ/(θ+1)) − 1) + 1]^(−1/θ)

Frank:    u2 = −(1/θ) log{1 + v2 (1 − e^(−θ)) / [v2 (e^(−θ v1) − 1) − e^(−θ v1)]}

FGM:      u2 = 2 v2 / (1 − A + √B), where A = θ(2u1 − 1) and B = [1 − θ(2u1 − 1)]² + 4θ v2 (2u1 − 1)

¹ See Example 2.20 in Nelsen (2006: 41–42), which gives the algorithm for drawing from C(u1, u2) = u1 u2 / (u1 + u2 − u1 u2).

A.1.2  Elliptical sampling

Methods for drawing from elliptical distributions, such as the bivariate normal and bivariate t-distribution, are well-established in statistics. These same methods are used to draw values from the Gaussian copula. The following algorithm generates random variables u1 and u2 from the Gaussian copula C(u1, u2; θ):

• Generate two independently distributed N(0, 1) variables v1 and v2.
• Set y1 = v1.
• Set y2 = v1 · θ + v2 · √(1 − θ²).
• Set ui = Φ(yi) for i = 1, 2, where Φ is the cumulative distribution function of the standard normal distribution.

Then the pair (u1, u2) are uniformly distributed variables drawn from the Gaussian copula C(u1, u2; θ).

A.1.3  Mixtures of powers simulation

To make draws from the Gumbel copula using conditional sampling, we would need to invert C2(u2 | v1), which requires an iterative solution; this is computationally expensive for applications with many simulated draws. Marshall and Olkin (1988) suggest an alternative algorithm based on mixtures of powers. The following algorithm shows how the technique is used to generate draws from the Gumbel copula:

• Draw a random variable γ having Laplace transformation τ(t) = exp(−t^(1/θ)). See below for additional detail.
• Draw two independent random variables (v1, v2) from U(0, 1).
• Set ui = τ(−γ^(−1) ln vi) for i = 1, 2.

Then (u1, u2) are uniformly distributed variables drawn from the Gumbel copula.

However, to implement the first step we have to draw a random variable γ from a positive stable distribution PS(α, 1). This is accomplished using the following algorithm by Chambers et al. (1976).

• Draw a random variable η from U(0, π).
• Draw a random variable w from the exponential distribution with mean equal to 1.
• Setting α = 1/θ, generate

  z = sin((1 − α)η) · (sin(αη))^(α/(1−α)) / (sin(η))^(1/(1−α)).

• Set γ = (z/w)^((1−α)/α).

Then γ is a randomly drawn variable from a PS(α, 1) distribution.

A.1.4  Simulating discrete variables

Methods for drawing discrete variables depend upon which type of discrete variate is desired. We focus on simulating discrete Poisson variables using a method based on Devroye’s technique of sequential search (Devroye, 1986). The algorithm is as follows:

• Draw correlated uniform random variables (u1, u2) from a particular copula using any of the methods discussed above.
• Set the Poisson mean = µ1, so that Pr(Y1 = 0) = e^(−µ1).
• Set Y1 = 0, P0 = e^(−µ1), S = P0.
• If u1 < S, then Y1 remains equal to 0.
• If u1 > S, then proceed sequentially as follows.
While u1 > S, replace (i) Y1 ← Y1 + 1, (ii) P0 ← µ1 P0 / Y1, (iii) S ← S + P0. This process continues until u1 < S.

These steps produce a simulated variable Y1 with a Poisson distribution with mean µ1. To obtain draws of the second Poisson variable Y2, replace u1 and µ1 with u2 and µ2 and repeat the steps above. Then the pair (Y1, Y2) are jointly distributed Poisson variables with means µ1 and µ2.
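The four sampling recipes above translate directly into code. The following Python/NumPy sketch is our own illustration and does not appear in the text: the function names and parameter values are invented for the example, only the bivariate case is handled, and the Gumbel sampler assumes θ > 1 so that α = 1/θ lies in (0, 1).

```python
import numpy as np

rng = np.random.default_rng(0)

def clayton_pair(theta, rng):
    # Conditional sampling (Section A.1.1): u1 = v1, then apply the
    # closed-form inverse conditional transform from Table A.1.
    v1, v2 = rng.uniform(size=2)
    u2 = (v1 ** (-theta) * (v2 ** (-theta / (theta + 1.0)) - 1.0) + 1.0) ** (-1.0 / theta)
    return v1, u2

def gaussian_pair(theta, rng):
    # Elliptical sampling (Section A.1.2): correlate two N(0,1) draws,
    # then map each through the standard normal CDF.
    from math import erf, sqrt
    v1, v2 = rng.standard_normal(2)
    y1 = v1
    y2 = theta * v1 + sqrt(1.0 - theta ** 2) * v2
    phi = lambda y: 0.5 * (1.0 + erf(y / sqrt(2.0)))
    return phi(y1), phi(y2)

def positive_stable(alpha, rng):
    # Chambers-Mallows-Stuck draw from PS(alpha, 1), as in Section A.1.3.
    eta = rng.uniform(0.0, np.pi)
    w = rng.exponential(1.0)
    z = (np.sin((1.0 - alpha) * eta) * np.sin(alpha * eta) ** (alpha / (1.0 - alpha))
         / np.sin(eta) ** (1.0 / (1.0 - alpha)))
    return (z / w) ** ((1.0 - alpha) / alpha)

def gumbel_pair(theta, rng):
    # Mixtures-of-powers sampling (Marshall and Olkin, 1988):
    # gamma has Laplace transform tau(t) = exp(-t**(1/theta)).
    gamma = positive_stable(1.0 / theta, rng)
    v = rng.uniform(size=2)
    tau = lambda t: np.exp(-t ** (1.0 / theta))
    return tuple(tau(-np.log(vi) / gamma) for vi in v)

def poisson_from_uniform(u, mu):
    # Devroye's sequential search (Section A.1.4): walk up the Poisson
    # CDF until it exceeds the (copula-correlated) uniform draw u.
    y = 0
    p = np.exp(-mu)
    s = p
    while u > s:
        y += 1
        p *= mu / y
        s += p
    return y

u1, u2 = clayton_pair(theta=2.0, rng=rng)
y1, y2 = poisson_from_uniform(u1, mu=3.0), poisson_from_uniform(u2, mu=5.0)
```

For instance, feeding the correlated uniforms from `clayton_pair` into `poisson_from_uniform` produces a pair of dependent Poisson counts, as in Section A.1.4.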
https://matheducators.stackexchange.com/questions/20796/is-it-necessary-to-teach-the-definition-of-a-limit-for-engineering-majors | [
"# Is it necessary to teach the definition of a limit for engineering majors? [closed]\n\nI have always wondered whether it is necessary or not. For me, it seems that it is enough to teach them the intuitive idea, that is, limit is just an approximation of a certain process.\n\nwhat do you think about it? I am basically interested in knowing if there is pedagogical reasons based on the nature of their training.\n\nI hope you get my concern because my english is not very good looking.\n\n• matheducators.stackexchange.com/q/484/117 Apr 25 at 12:08\n• @StevenGubkin: this question, however badly phrased, is not a repeat of the one you cite because of its specificity to engineering students. Around the world possibly the majority (certainly a substantial chunk) of hours taught by mathematicians are taught to engineering students in particular. Considerations specific to engineering students are very relevant to the practice of mathematics education. Moreover, the one-size fits all calculus course in the US is not common in the rest of the world. Jun 3 at 14:22\n\nWell it doesn't really feel right to get degrees in engineering and gain years of engineering experience without even knowing what a limit actually is. And even though many engineers will do just fine without having been exposed to the rigorous definition of a limit, some engineers will need to be familiar with rigorous definitions/proofs if they ever pursue a career in academia or if they ever have to read textbooks/papers that haven't been specifically written for engineers.\n\nIt is probably not the place of mathematics educators to decide what mathematics courses engineering majors should take. But a good reference point is ABET accreditation. Over 600 universities in the US have ABET accredited engineering programs. We should defer to the professionals who set these standard and assess outcomes. 
Here is a description of their current criteria:\n\nABET 2021-22 Criteria\n\nThey refer to college level mathematics, and say \"For illustrative purposes, some examples of college-level mathematics include calculus, differential equations, probability, statistics, linear algebra, and discrete mathematics.\" Accredited undergraduate programs require a minimum of 30 semester credit hours of college level mathematics and basic sciences. This document gives more specific expectations for different types of engineering programs. For example, Civil Engineering requires mathematics through differential equations.\n\nI think from this it is clear that a \"survey course\" in calculus would be inappropriate for engineering students. Students in engineering programs are expected to take the same sort of single variable calculus course taken by other STEM majors, including math majors.\n\n• It very often falls to mathematicians to decide the contents of the math courses they teach in engineering degree programs even when it is the case that they have had little input in the design of these courses. Most engineers are not \"experts\" with respect to mathematical content, even with respect to what needs to be taught to engineering students. Jun 3 at 14:19\n\nThe concepts behind limits are actually very important to engineering (in the form of error/precision analysis), but are rarely phrased that way.\n\nGiven a function $$f$$, we can imagine an engineering situation where there is some desired range of outputs from the function, but the engineer has control over the value of the inputs of the function. If $$f(c) = L$$, one might want to set the value of $$x$$ close to $$c$$ so that $$f(x)$$ is within some error $$\\epsilon$$ of $$L$$. But since this is a real-world engineering situation, one cannot set the value of $$x$$ precisely, only aim for a target value and specify a level of precision. 
That is, we should aim to make $$x = c$$ and hopefully there is some level of precision $$\\delta$$ we can set on $$x$$ so that if $$x$$ is within $$\\delta$$ of $$c$$, then $$f(x)$$ is within the desired range.\n\nOne can then ask about the relationship between the precision necessary on the input to produce a desired precision on the output. For differentiable functions, this relationship is linear (in the limit to 0), and the multiplier is the derivative.\n\nNo, it's definitely not \"necessary\".\n\nI'm not an engineering major, but roomed with one, did a general engineering minor, and worked in/around mechanical, nuclear, mining and chemical engineering (had electrical on staff too). Passed my EIT and was at one time, about to take the PE (mechanical) exam. Most engineers in the workplace don't even use calculus, let alone epsilon-delta. For that matter I've worked with a (small number of, but still violates a Euclidean \"necessary\") licensed PEs who didn't even have a college degree.\n\nIn their normal undergrad training (fluids, thermo, etc.), engineers will often use calculus. So you need calculus to get through a BS in E. But you don't need or use epsilon-delta in those courses, either in the derivations or the homework drill or the projects.\n\nEven if we use a less picky interpretation of \"necessary\" (very helpful), I would not put epsilon delta up there. For one thing, the current stereotypical STEM training in the US, gives an exposure to epsilon delta, with a few problems, at the beginning of a calc course. But doesn't come back to it, doesn't use it. So it's not even that important for undergrad calculus or diffyQs.\n\nNow, does it hurt to have something like this? I don't think so. Especially given the limited time spent on in a typical calculus course based on a text like Thomas. I mean at least you've seen it so someone mentions it, it's not a mythical creature. 
And it's really just a bunch of detailed algebra pushing symbols and equations around (my same feeling about series)--maybe builds the algebra muscles in a manner helpful for and similar to multistep homework problems in fluid hydraulics or power systems. But it's not firmly connected--just general muscle building. For the vanishingly small but not converging to zero population of undergrad engineers that go on to some theoretical Ph.D. (and even within that a small set of them) and turn out to need this sort of stuff, they at least were exposed to it. And can pick it up more as needed, for their work. (Not to say they couldn't even with no previous exposure.)

Being practical, we can always come up with things that might in some circumstance be helpful (learning Latin, say). But life is finite. It is practical. It's an engineering problem, with constraints, costs, etc. So I definitely would not use the strong word necessary (even if we use it colloquially to mean strongly important) to describe formal limits. There are a gazillion things useful to engineers and a lot on their slate already. There is limited time in the day and limited brains in the skull. So, I wouldn't go beyond the week or two at the beginning of a calc course, as done now.

• As I was reading your answer it occurred to me that some emphasis (in fact, I would say heavy emphasis) on the corresponding theoretical concept in integral calculus (second semester U.S. based calculus) -- Riemann sums -- would be MUCH more important for engineers. By "Riemann sums", I don't mean proofs of convergence and such, but lots of emphasis on using slicing and other techniques to set up discrete expressions for things like area, volume, work, pressure, etc. that, in the limit, give rise to definite integral expressions. Apr 25 at 13:38
• What courses/problems in the undergrad engineering curriculum need (more) of that? Consider for example the process shown in this evaluation of math needs for statics and dynamics--how the instrument was developed, assessed, tested, and changed. peer.asee.org/… (I'm not asking this even to disagree, just to emphasize that "what math is needed for X" requires a bit of reflection on the X!) Apr 25 at 15:30
Consider for example the process shown in this evaluation of math needs for statics and dynamics--how the instrument was developed, assessed, tested, and changed. peer.asee.org/… (I'm not asking this even to disagree, just to emphasize that \"what math is needed for X, requires a bit of reflection on the X!) Apr 25 at 15:30\n• Isn't the facility in grasping how one goes from discrete sums to an integral used throughout the undergraduate engineering curriculum? In looking at my copy of the 3rd edition (1980) of Engineering Mechanics. Statics and Dynamics by Irving H. Shames, I see this in several places in Chapter 8 (moments, center of mass, transfer theorems), and in Chapter 10 (total work on a particle by a given force as the particle moves along a given path), and in Chapter 13 (total momentum, energy, etc. of planar and spatial objects in motion), etc. Apr 25 at 17:43\n• I'll skip getting my engineering electromagnetics text (course I took back in Fall 1982), but I'm pretty sure that are lots of integral set-ups involving specified charge distributions in which one begins by assuming a delta-charge of a delta-volume located at $xyz$ produces a given infinitesimal effect, then add up all these effects . . . Apr 25 at 17:46"
https://www.britannica.com/science/probability-theory?anchor=toc32770 | [
"# probability theory\n\nmathematics\n\nprobability theory, a branch of mathematics concerned with the analysis of random phenomena. The outcome of a random event cannot be determined before it occurs, but it may be any one of several possible outcomes. The actual outcome is considered to be determined by chance.\n\nThe word probability has several meanings in ordinary conversation. Two of these are particularly important for the development and applications of the mathematical theory of probability. One is the interpretation of probabilities as relative frequencies, for which simple games involving coins, cards, dice, and roulette wheels provide examples. The distinctive feature of games of chance is that the outcome of a given trial cannot be predicted with certainty, although the collective results of a large number of trials display some regularity. For example, the statement that the probability of “heads” in tossing a coin equals one-half, according to the relative frequency interpretation, implies that in a large number of tosses the relative frequency with which “heads” actually occurs will be approximately one-half, although it contains no implication concerning the outcome of any given toss. There are many similar examples involving groups of people, molecules of a gas, genes, and so on. Actuarial statements about the life expectancy for persons of a certain age describe the collective experience of a large number of individuals but do not purport to say what will happen to any particular person. Similarly, predictions about the chance of a genetic disease occurring in a child of parents having a known genetic makeup are statements about relative frequencies of occurrence in a large number of cases but are not predictions about a given individual.",
"Britannica Quiz\nDefine It: Math Terms\nHere is your mission, should you choose to accept it: Define the following math terms before time runs out.\n\nThis article contains a description of the important mathematical concepts of probability theory, illustrated by some of the applications that have stimulated their development. For a fuller historical treatment, see probability and statistics. Since applications inevitably involve simplifying assumptions that focus on some features of a problem at the expense of others, it is advantageous to begin by thinking about simple experiments, such as tossing a coin or rolling dice, and later to see how these apparently frivolous investigations relate to important scientific questions.\n\n## Applications of simple probability experiments\n\nThe fundamental ingredient of probability theory is an experiment that can be repeated, at least hypothetically, under essentially identical conditions and that may lead to different outcomes on different trials. The set of all possible outcomes of an experiment is called a “sample space.” The experiment of tossing a coin once results in a sample space with two possible outcomes, “heads” and “tails.” Tossing two dice has a sample space with 36 possible outcomes, each of which can be identified with an ordered pair (i, j), where i and j assume one of the values 1, 2, 3, 4, 5, 6 and denote the faces showing on the individual dice. It is important to think of the dice as identifiable (say by a difference in colour), so that the outcome (1, 2) is different from (2, 1). An “event” is a well-defined subset of the sample space. For example, the event “the sum of the faces showing on the two dice equals six” consists of the five outcomes (1, 5), (2, 4), (3, 3), (4, 2), and (5, 1).\n\nA third example is to draw n balls from an urn containing balls of various colours. 
A generic outcome to this experiment is an n-tuple, where the ith entry specifies the colour of the ball obtained on the ith draw (i = 1, 2,…, n). In spite of the simplicity of this experiment, a thorough understanding gives the theoretical basis for opinion polls and sample surveys. For example, individuals in a population favouring a particular candidate in an election may be identified with balls of a particular colour, those favouring a different candidate may be identified with a different colour, and so on. Probability theory provides the basis for learning about the contents of the urn from the sample of balls drawn from the urn; an application is to learn about the electoral preferences of a population on the basis of a sample drawn from that population.\n\nAnother application of simple urn models is to use clinical trials designed to determine whether a new treatment for a disease, a new drug, or a new surgical procedure is better than a standard treatment. In the simple case in which treatment can be regarded as either success or failure, the goal of the clinical trial is to discover whether the new treatment more frequently leads to success than does the standard treatment. Patients with the disease can be identified with balls in an urn. The red balls are those patients who are cured by the new treatment, and the black balls are those not cured. Usually there is a control group, who receive the standard treatment. They are represented by a second urn with a possibly different fraction of red balls. The goal of the experiment of drawing some number of balls from each urn is to discover on the basis of the sample which urn has the larger fraction of red balls. A variation of this idea can be used to test the efficacy of a new vaccine. Perhaps the largest and most famous example was the test of the Salk vaccine for poliomyelitis conducted in 1954. It was organized by the U.S. Public Health Service and involved almost two million children. 
Its success has led to the almost complete elimination of polio as a health problem in the industrialized parts of the world. Strictly speaking, these applications are problems of statistics, for which the foundations are provided by probability theory.\n\nIn contrast to the experiments described above, many experiments have infinitely many possible outcomes. For example, one can toss a coin until “heads” appears for the first time. The number of possible tosses is n = 1, 2,…. Another example is to twirl a spinner. For an idealized spinner made from a straight line segment having no width and pivoted at its centre, the set of possible outcomes is the set of all angles that the final position of the spinner makes with some fixed direction, equivalently all real numbers in [0, 2π). Many measurements in the natural and social sciences, such as volume, voltage, temperature, reaction time, marginal income, and so on, are made on continuous scales and at least in theory involve infinitely many possible values. If the repeated measurements on different subjects or at different times on the same subject can lead to different outcomes, probability theory is a possible tool to study this variability.\n\nBecause of their comparative simplicity, experiments with finite sample spaces are discussed first. In the early development of probability theory, mathematicians considered only those experiments for which it seemed reasonable, based on considerations of symmetry, to suppose that all outcomes of the experiment were “equally likely.” Then in a large number of trials all outcomes should occur with approximately the same frequency. The probability of an event is defined to be the ratio of the number of cases favourable to the event—i.e., the number of outcomes in the subset of the sample space defining the event—to the total number of cases. 
Thus, the 36 possible outcomes in the throw of two dice are assumed equally likely, and the probability of obtaining “six” is the number of favourable cases, 5, divided by 36, or 5/36.

Now suppose that a coin is tossed n times, and consider the probability of the event “heads does not occur” in the n tosses. An outcome of the experiment is an n-tuple, the kth entry of which identifies the result of the kth toss. Since there are two possible outcomes for each toss, the number of elements in the sample space is 2^n. Of these, only one outcome corresponds to having no heads, so the required probability is 1/2^n.

It is only slightly more difficult to determine the probability of “at most one head.” In addition to the single case in which no head occurs, there are n cases in which exactly one head occurs, because it can occur on the first, second,…, or nth toss. Hence, there are n + 1 cases favourable to obtaining at most one head, and the desired probability is (n + 1)/2^n.
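Both counting claims are easy to verify by brute-force enumeration of the sample space. The script below is our own illustration, not part of the article:

```python
from itertools import product
from fractions import Fraction

# Two distinguishable dice: count the outcomes whose faces sum to six.
outcomes = list(product(range(1, 7), repeat=2))
favourable = [o for o in outcomes if sum(o) == 6]
assert Fraction(len(favourable), len(outcomes)) == Fraction(5, 36)

# n coin tosses: outcomes with at most one head, out of 2**n equally
# likely n-tuples.
for n in range(1, 11):
    tosses = list(product("HT", repeat=n))
    at_most_one_head = [t for t in tosses if t.count("H") <= 1]
    assert Fraction(len(at_most_one_head), len(tosses)) == Fraction(n + 1, 2 ** n)
```

Each check enumerates the full sample space, so the equalities hold exactly rather than approximately.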
]
| [
null,
"https://cdn.britannica.com/01/115001-131-7278E518/Enrico-Fermi-Italian-problem-physics-1950.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.94571626,"math_prob":0.96760184,"size":8781,"snap":"2021-43-2021-49","text_gpt3_token_len":1755,"char_repetition_ratio":0.1354677,"word_repetition_ratio":0.0062154694,"special_character_ratio":0.19769958,"punctuation_ratio":0.10160098,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98500043,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-05T22:56:22Z\",\"WARC-Record-ID\":\"<urn:uuid:10caf253-5366-41e0-b653-0384ef84920c>\",\"Content-Length\":\"86334\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b632c6a3-ae27-439b-a9ef-48821fa0e2e2>\",\"WARC-Concurrent-To\":\"<urn:uuid:acd6324c-4613-467d-946e-537b28dbf550>\",\"WARC-IP-Address\":\"104.18.4.110\",\"WARC-Target-URI\":\"https://www.britannica.com/science/probability-theory?anchor=toc32770\",\"WARC-Payload-Digest\":\"sha1:3ZZDZIWAMA22RM3IHPPIBDMSBADMYHTG\",\"WARC-Block-Digest\":\"sha1:7GYPLO4PSEJ5EIMB6RYDTEDNUUTJHADM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363226.68_warc_CC-MAIN-20211205221915-20211206011915-00270.warc.gz\"}"} |
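The coin-toss counting argument in the probability record above is easy to check by exhaustive enumeration. This is an illustrative sketch (the function name `prob_at_most_one_head` is mine, not from the source):

```python
from itertools import product
from fractions import Fraction

def prob_at_most_one_head(n):
    """Probability of <= 1 head in n fair tosses, by brute-force enumeration."""
    favourable = sum(1 for outcome in product("HT", repeat=n)
                     if outcome.count("H") <= 1)
    return Fraction(favourable, 2 ** n)

# Check the closed form derived in the text, (n + 1) / 2^n, for small n.
for n in range(1, 11):
    assert prob_at_most_one_head(n) == Fraction(n + 1, 2 ** n)
```

The same enumeration with `outcome.count("H") == 0` confirms the 1/2^n case for “no heads.”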
https://www.dcode.fr/modular-inverse | [
"Modular Multiplicative Inverse\n\nTool to compute the modular inverse of a number. The modular multiplicative inverse of an integer N modulo m is an integer n such that n × N mod m = 1.\n\nTag(s) : Arithmetics",
null,
"dCode and you\n\ndCode is free and its tools are a valuable help in games, maths, geocaching, puzzles and problems to solve every day!\nA suggestion ? a feedback ? a bug ? an idea ? Write to dCode!\n\nTeam dCode likes feedback and relevant comments; to get an answer give an email (not published). It is thanks to you that dCode has the best Modular Multiplicative Inverse tool. Thank you.\n\n# Modular Multiplicative Inverse\n\n## Modular Inverse Calculator\n\nTool to compute the modular inverse of a number. The modular multiplicative inverse of an integer N modulo m is an integer n such that n × N mod m = 1.\n\n### What is the modular Inverse? (Definition)\n\nThe value of the modular inverse of $a$ by the modulo $n$ is the value $a ^ {- 1}$ such that $a a ^ {- 1} = 1 \\pmod n$\n\nIt is common to note this modular inverse $u$ and to use these equations $$u \\equiv a^{-1} \\pmod n \\\\ a u \\equiv 1 \\pmod n$$\n\nIf a modular inverse exists then it is unique.\n\n### How to calculate a modular inverse?\n\nTo calculate the value of the modular inverse, use the extended Euclidean algorithm, which finds solutions to the Bezout identity $au + bv = \\text{G.C.D.}(a, b)$. Here, the gcd value is known, it is 1 : $\\text{G.C.D.}(a, b) = 1$, thus, only the value of $u$ is needed.\n\nExample: $3^{-1} \\equiv 4 \\pmod{11}$ because $4 \\times 3 = 12$ and $12 \\equiv 1 \\pmod{11}$\n\ndCode uses the Extended Euclidean algorithm for its inverse modulo N calculator and arbitrary precision functions to get results with big integers.\n\n### How to calculate v?\n\nUse the Bezout identity, also available on dCode.\n\n### What does invmod mean?\n\nThe keyword invmod is the abbreviation of modular inverse.\n\n### What is a multiplicative inverse?\n\nA multiplicative inverse is another name for a modular inverse.\n\n## Source code\n\ndCode retains ownership of the source code of the script Modular Multiplicative Inverse online.
Except explicit open source licence (indicated Creative Commons / free), any algorithm, applet, snippet, software (converter, solver, encryption / decryption, encoding / decoding, ciphering / deciphering, translator), or any function (convert, solve, decrypt, encrypt, decipher, cipher, decode, code, translate) written in any programming language (PHP, Java, C#, Python, Javascript, Matlab, etc.) which dCode owns rights will not be released for free. To download the online Modular Multiplicative Inverse script for offline use on PC, iPhone or Android, ask for price quote on contact page !"
]
| [
null,
"https://www.dcode.fr/images/share.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7411287,"math_prob":0.9845178,"size":2264,"snap":"2019-51-2020-05","text_gpt3_token_len":566,"char_repetition_ratio":0.15132743,"word_repetition_ratio":0.14146341,"special_character_ratio":0.25971732,"punctuation_ratio":0.14955357,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9978642,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-10T16:32:19Z\",\"WARC-Record-ID\":\"<urn:uuid:dcf46adc-d10e-4daf-8352-5d6a4d9c9c33>\",\"Content-Length\":\"19865\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d6d65bb0-438d-4027-81dd-3c40401ab907>\",\"WARC-Concurrent-To\":\"<urn:uuid:e5072b72-1814-460d-9c90-a9bc1fcd8262>\",\"WARC-IP-Address\":\"213.186.33.107\",\"WARC-Target-URI\":\"https://www.dcode.fr/modular-inverse\",\"WARC-Payload-Digest\":\"sha1:3V5FKIMIGK2NAEHGJVQ3KXMI62ZJTM36\",\"WARC-Block-Digest\":\"sha1:KEIX7SFQIWRQOQQOK6BUITCGCJYC3CCA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540528457.66_warc_CC-MAIN-20191210152154-20191210180154-00270.warc.gz\"}"} |
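The dCode record above computes modular inverses with the extended Euclidean algorithm (solving a·u + n·v = gcd(a, n) and keeping u when the gcd is 1). A minimal sketch of that computation — the function name `modinv` is my own, not dCode's:

```python
def modinv(a, n):
    """Modular inverse of a mod n via the extended Euclidean algorithm.

    Tracks only the u coefficient of Bezout's identity a*u + n*v = gcd(a, n);
    when gcd(a, n) == 1, u mod n is the unique inverse.
    """
    old_r, r = a, n
    old_u, u = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_u, u = u, old_u - q * u
    if old_r != 1:
        raise ValueError(f"{a} has no inverse modulo {n} (gcd = {old_r})")
    return old_u % n
```

For example, `modinv(3, 11)` returns 4, matching the worked example in the record, since 4 × 3 = 12 ≡ 1 (mod 11).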
https://www.colorhexa.com/0066bf | [
"# #0066bf Color Information\n\nIn a RGB color space, hex #0066bf is composed of 0% red, 40% green and 74.9% blue. Whereas in a CMYK color space, it is composed of 100% cyan, 46.6% magenta, 0% yellow and 25.1% black. It has a hue angle of 208 degrees, a saturation of 100% and a lightness of 37.5%. #0066bf color hex could be obtained by blending #00ccff with #00007f. Closest websafe color is: #0066cc.\n\n• R 0\n• G 40\n• B 75\nRGB color chart\n• C 100\n• M 47\n• Y 0\n• K 25\nCMYK color chart\n\n#0066bf color description : Strong blue.\n\n# #0066bf Color Conversion\n\nThe hexadecimal color #0066bf has RGB values of R:0, G:102, B:191 and CMYK values of C:1, M:0.47, Y:0, K:0.25. Its decimal value is 26303.\n\nHex triplet RGB Decimal 0066bf `#0066bf` 0, 102, 191 `rgb(0,102,191)` 0, 40, 74.9 `rgb(0%,40%,74.9%)` 100, 47, 0, 25 208°, 100, 37.5 `hsl(208,100%,37.5%)` 208°, 100, 74.9 0066cc `#0066cc`\nCIE-LAB 43.157, 10.031, -53.43 14.153, 13.263, 51.101 0.18, 0.169, 13.263 43.157, 54.364, 280.633 43.157, -24.31, -79.979 36.418, 5.638, -57.701 00000000, 01100110, 10111111\n\n# Color Schemes with #0066bf\n\n• #0066bf\n``#0066bf` `rgb(0,102,191)``\n• #bf5900\n``#bf5900` `rgb(191,89,0)``\nComplementary Color\n• #00bfb9\n``#00bfb9` `rgb(0,191,185)``\n• #0066bf\n``#0066bf` `rgb(0,102,191)``\n• #0006bf\n``#0006bf` `rgb(0,6,191)``\nAnalogous Color\n• #bfb900\n``#bfb900` `rgb(191,185,0)``\n• #0066bf\n``#0066bf` `rgb(0,102,191)``\n• #bf0006\n``#bf0006` `rgb(191,0,6)``\nSplit Complementary Color\n• #66bf00\n``#66bf00` `rgb(102,191,0)``\n• #0066bf\n``#0066bf` `rgb(0,102,191)``\n• #bf0066\n``#bf0066` `rgb(191,0,102)``\n• #00bf59\n``#00bf59` `rgb(0,191,89)``\n• #0066bf\n``#0066bf` `rgb(0,102,191)``\n• #bf0066\n``#bf0066` `rgb(191,0,102)``\n• #bf5900\n``#bf5900` `rgb(191,89,0)``\n• #003d73\n``#003d73` `rgb(0,61,115)``\n• #004b8c\n``#004b8c` `rgb(0,75,140)``\n• #0058a6\n``#0058a6` `rgb(0,88,166)``\n• #0066bf\n``#0066bf` `rgb(0,102,191)``\n• #0074d9\n``#0074d9` `rgb(0,116,217)``\n• 
#0081f2\n``#0081f2` `rgb(0,129,242)``\n• #0d8eff\n``#0d8eff` `rgb(13,142,255)``\nMonochromatic Color\n\n# Alternatives to #0066bf\n\nBelow, you can see some colors close to #0066bf. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #0096bf\n``#0096bf` `rgb(0,150,191)``\n• #0086bf\n``#0086bf` `rgb(0,134,191)``\n• #0076bf\n``#0076bf` `rgb(0,118,191)``\n• #0066bf\n``#0066bf` `rgb(0,102,191)``\n• #0056bf\n``#0056bf` `rgb(0,86,191)``\n• #0046bf\n``#0046bf` `rgb(0,70,191)``\n• #0036bf\n``#0036bf` `rgb(0,54,191)``\nSimilar Colors\n\n# #0066bf Preview\n\nThis text has a font color of #0066bf.\n\n``<span style=\"color:#0066bf;\">Text here</span>``\n#0066bf background color\n\nThis paragraph has a background color of #0066bf.\n\n``<p style=\"background-color:#0066bf;\">Content here</p>``\n#0066bf border color\n\nThis element has a border color of #0066bf.\n\n``<div style=\"border:1px solid #0066bf;\">Content here</div>``\nCSS codes\n``.text {color:#0066bf;}``\n``.background {background-color:#0066bf;}``\n``.border {border:1px solid #0066bf;}``\n\n# Shades and Tints of #0066bf\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #00080e is the darkest color, while #fafdff is the lightest one.\n\n• #00080e\n``#00080e` `rgb(0,8,14)``\n• #001222\n``#001222` `rgb(0,18,34)``\n• #001d36\n``#001d36` `rgb(0,29,54)``\n• #002749\n``#002749` `rgb(0,39,73)``\n• #00325d\n``#00325d` `rgb(0,50,93)``\n• #003c71\n``#003c71` `rgb(0,60,113)``\n• #004784\n``#004784` `rgb(0,71,132)``\n• #005198\n``#005198` `rgb(0,81,152)``\n• #005cab\n``#005cab` `rgb(0,92,171)``\n• #0066bf\n``#0066bf` `rgb(0,102,191)``\n• #0070d3\n``#0070d3` `rgb(0,112,211)``\n• #007be6\n``#007be6` `rgb(0,123,230)``\n• #0085fa\n``#0085fa` `rgb(0,133,250)``\n• #0e8fff\n``#0e8fff` `rgb(14,143,255)``\n• #2298ff\n``#2298ff` `rgb(34,152,255)``\n• #36a1ff\n``#36a1ff` `rgb(54,161,255)``\n• #49aaff\n``#49aaff` `rgb(73,170,255)``\n• #5db3ff\n``#5db3ff` `rgb(93,179,255)``\n• #71bdff\n``#71bdff` `rgb(113,189,255)``\n• #84c6ff\n``#84c6ff` `rgb(132,198,255)``\n• #98cfff\n``#98cfff` `rgb(152,207,255)``\n• #abd8ff\n``#abd8ff` `rgb(171,216,255)``\n• #bfe1ff\n``#bfe1ff` `rgb(191,225,255)``\n• #d3eaff\n``#d3eaff` `rgb(211,234,255)``\n• #e6f3ff\n``#e6f3ff` `rgb(230,243,255)``\n• #fafdff\n``#fafdff` `rgb(250,253,255)``\nTint Color Variation\n\n# Tones of #0066bf\n\nA tone is produced by adding gray to any pure hue. 
In this case, #586067 is the less saturated color, while #0066bf is the most saturated one.\n\n• #586067\n``#586067` `rgb(88,96,103)``\n• #51616e\n``#51616e` `rgb(81,97,110)``\n• #496176\n``#496176` `rgb(73,97,118)``\n• #42627d\n``#42627d` `rgb(66,98,125)``\n• #3b6284\n``#3b6284` `rgb(59,98,132)``\n• #33638c\n``#33638c` `rgb(51,99,140)``\n• #2c6393\n``#2c6393` `rgb(44,99,147)``\n• #25649a\n``#25649a` `rgb(37,100,154)``\n• #1d64a2\n``#1d64a2` `rgb(29,100,162)``\n• #1665a9\n``#1665a9` `rgb(22,101,169)``\n• #0f65b0\n``#0f65b0` `rgb(15,101,176)``\n• #0766b8\n``#0766b8` `rgb(7,102,184)``\n• #0066bf\n``#0066bf` `rgb(0,102,191)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #0066bf is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.5221087,"math_prob":0.86999214,"size":3656,"snap":"2021-21-2021-25","text_gpt3_token_len":1579,"char_repetition_ratio":0.14622125,"word_repetition_ratio":0.011111111,"special_character_ratio":0.55415756,"punctuation_ratio":0.22847302,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99080706,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-08T15:55:27Z\",\"WARC-Record-ID\":\"<urn:uuid:08a7b55f-babe-4a2d-b689-1bb2b1e63e8d>\",\"Content-Length\":\"36221\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:87adb7dc-5814-4b94-b3d1-fa10c51f7e7b>\",\"WARC-Concurrent-To\":\"<urn:uuid:00710461-b92d-4381-9743-095592658f8b>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/0066bf\",\"WARC-Payload-Digest\":\"sha1:IB2RNTO4WQIKNHKJV2DHNUWPWSZ7QOTV\",\"WARC-Block-Digest\":\"sha1:BSIKKBT4F3TVA4YVIAZ6GUK5YC5YXFBD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988882.94_warc_CC-MAIN-20210508151721-20210508181721-00158.warc.gz\"}"} |
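The conversions tabulated in the color record above (hex triplet to 0–255 RGB, and RGB to channel percentages) follow directly from base-16 parsing; a small illustrative sketch (helper names are mine, not ColorHexa's):

```python
def hex_to_rgb(hex_color):
    """Parse '#rrggbb' into an (r, g, b) tuple of 0-255 integers."""
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_percentages(rgb):
    """Each channel as a percentage of 255, rounded to one decimal place."""
    return tuple(round(100 * c / 255, 1) for c in rgb)
```

Applied to the record's color: `hex_to_rgb("#0066bf")` gives `(0, 102, 191)`, and `rgb_percentages((0, 102, 191))` gives `(0.0, 40.0, 74.9)`, matching the stated 0% red, 40% green, 74.9% blue.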
http://factivation.com/addition-subtraction/ | [
"#### Welcome Visitor!\n\nThis is the Addition/Subtraction Teachers’ Homepage, with links to lessons that teach ALL Addition/Subtraction facts! To find out more about each lesson, click the blue ▼Lesson Info links. Be sure to visit our Sample Lesson to see what makes Factivation!® the #1 teaching tool for Math Facts!",
null,
"Addition Pre/Post Assessment Tool-FREE DOWNLOAD",
null,
"##",
null,
"FUN Addition Singalongs\n\nThe Factivation!® for Addition Singalongs can be used at any time of the day to reinforce important Addition concepts and vocabulary. Students stay engaged and entertained while learning through familiar tunes and dancing animals!\n\nUse the Addition Singalongs to Teach/Reinforce:\n\nConcept of Addition, Addition vocabulary, Odd & Even Numbers, The Commutative Property",
null,
"",
null,
"## Lesson 1: Zeroes Facts\n\nLesson 1 is the first lesson using the Rule Strategy. The circus theme and calliope music of the instructional videos keep students entertained as they are learning! The Addition facts in this group are very simple for students to master. The commutative property of Addition is emphasized in this and all successive lessons.\n\nUse this Factivation!® Lesson to Teach:\n\nAll “Zeroes” Addition facts and Subtraction counterparts",
null,
"## Lesson 2: Ones Facts\n\nLesson 2 uses the Rule Strategy to teach students how to add “1″ to any number. The Addition facts in this group are very simple for students to master. The instructional videos feature an underwater theme (and island-style music) that keeps students engaged!\n\nUse this Factivation!® Lesson to Teach:\n\nAll “Ones” Addition facts and Subtraction counterparts",
null,
"## Lesson 3: Skip-Twos\n\nLesson 3 uses the Rule Strategy, with an emphasis on odd and even numbers. The instructional videos lead students on a mission to build fluency with the focus facts as they travel alongside spaceships carrying Addition cargo. Up-tempo music adds to the excitement and adventure of mastering these facts.\n\nUse this Factivation!® Lesson to Teach:\n\nAll “Twos” Addition facts and Subtraction counterparts\n\n### ↓ Sample ↓ FREE ↓ Sample ↓ FREE ↓ Sample ↓",
null,
"## Lesson 4: Little Doubles\n\nIn the Lesson 4 instructional videos, students are taken on a virtual field trip to a garden where they observe snakes, dragonflies, ants, and spiders. Students learn to connect new information to prior knowledge, as all of the creatures they encounter represent the Addition facts in this lesson. Because of these brain-friendly connections, students have little difficulty remembering the sums.\n\nUse this Factivation!® Lesson to Teach:\n\n1+1, 2+2, 3+3, 4+4, and Subtraction counterparts",
null,
"## Lesson 5: Little Neighbors\n\nLesson 5 builds upon the Doubles facts learned in the last lesson. A simple trick makes these facts simple to learn! The instructional videos take students on a trip to the movies as they build fluency down a “haunted” hallway! Cinematic elements make the Lesson 5 videos a hit with students!\n\nUse this Factivation!® Lesson to Teach:\n\n1+2, 2+3, 3+4, 4+5, Addition commutative counterparts (2+1, 3+2, 4+3, 5+4), and Subtraction counterparts",
null,
"## Lesson 6: Big Doubles\n\nIn the Lesson 6 instructional videos, dancing cows make a virtual farm field trip memorable! Students connect the new Lesson 6 focus facts to prior knowledge and are entertained throughout the video lesson, increasing motivation to master the facts from this lesson.\n\nUse this Factivation!® Lesson to Teach:\n\n5+5, 6+6, 7+7, 8+8, and Subtraction counterparts",
null,
"## Lesson 7: Big Neighbors\n\nLesson 7 builds upon the Big Doubles facts from Lesson 6. Students are presented with a familiar, previously learned strategy that can be utilized again to master the Big Neighbors facts in this lesson.\n\nUse this Factivation!® Lesson to Teach:\n\n5+6, 6+7, 7+8, 8+9, Addition commutative counterparts (6+5, 7+6, 8+7, 9+8), and Subtraction counterparts",
null,
"## Lesson 8: Double in the Middle\n\nThe last “Doubles” lesson is Lesson 8: Double in the Middle. The little-known strategy in this lesson allows students to put their Doubles facts to use again, increasing both confidence and fluency. The instructional videos feature scenes of summer fun, including marching picnic ants absconding with the focus facts.\n\nUse this Factivation!® Lesson to Teach:\n\n2+4, 3+5, 5+7, 6+8, Addition commutative counterparts (4+2, 5+3, 7+5, 8+6), and Subtraction counterparts.\n\n*NOTE: The Double in the Middle strategy can be used for other facts as well: 1+3 and 3+1, 4+6 and 6+4, 7+9 and 9+7. However, these facts are addressed using alternate strategies in other lessons. Although 2+4 and 4+2 were previously covered in Lesson 3, we have included them in Lesson 8 to demonstrate to students that there is often more than one strategy from which to choose.",
null,
"## Lesson 9: Little Nines\n\nLesson 9 uses the Trick Strategy and a sports-theme to teach students a simple way to remember the sums of all facts with “9″ as an addend.\n\nUse this Factivation!® Lesson to Teach:\n\n9+2, 9+3, 9+4, 9+5, Addition commutative counterparts (2+9, 3+9, 4+9, 5+9), and Subtraction counterparts.\n\n*NOTE: Although 9+2 and 2+9 were previously covered in Lesson 3, we have included them in Lesson 9 to demonstrate to students that there is often more than one strategy from which to choose.",
null,
"## Lesson 10: Big Nines\n\nLesson 10 builds upon the last lesson, utilizing the same strategy to conquer all “Big Nines” facts. The instructional videos’ fairy-tale theme, cinematic music, and familiar characters keep students enchanted and engaged in their learning.\n\nUse this Factivation!® Lesson to Teach:\n\n9+6, 9+7, 9+8, 9+9, Addition commutative counterparts (6+9, 7+9, 8+9, 9+9), and Subtraction counterparts.\n\n*NOTE: Although 9+8 and 8+9 were previously covered in Lesson 7, we have included them in Lesson 10 to demonstrate to students that there is often more than one strategy from which to choose.",
null,
"## Lesson 11: Tele-Tens\n\nLesson 11 uses a familiar cell phone keypad to teach students all facts with a sum of 10! Isn’t technology wonderful?\n\nUse this Factivation!® Lesson to Teach:\n\n1+9, 2+8, 3+7, 4+6, Addition commutative counterparts (9+1, 8+2, 7+3, 6+4), and Subtraction counterparts.\n\n*NOTE: Although 1+9 & 9+1 and 2+8 & 8+2 were previously covered in Lessons 2 and 3, we have included them in Lesson 11 to demonstrate to students that there is often more than one strategy from which to choose.",
null,
"## Lesson 12: Final Facts\n\nThe groovin’ animals in the Lesson 12 instructional videos will tickle your students’ funny bones as they are practicing the Final Facts! The last five Addition facts are presented using the Chant Strategy. At the completion of this lesson, all Addition facts have been mastered.\n\nUse this Factivation!® Lesson to Teach:\n\n3+6, 7+4, 8+3, 8+4, 8+5, Addition commutative counterparts (6+3, 4+7, 3+8, 4+8, 5+8), and Subtraction counterparts."
]
| [
null,
"http://factivation.com/members/wp-content/uploads/2013/07/icon_pdf.png",
null,
"http://factivation.com/members/wp-content/uploads/2015/06/background_lesson-tables-300x2.png",
null,
"http://factivation.com/members/wp-content/uploads/2013/07/video_addition_singalong_oddandeven.jpg",
null,
"http://factivation.com/members/wp-content/uploads/2015/06/background_lesson-tables-300x2.png",
null,
"http://factivation.com/members/wp-content/uploads/2013/07/factlab_addition_lesson1-300x256.jpg",
null,
"http://factivation.com/members/wp-content/uploads/2013/07/factlab_addition_lesson2-300x256.jpg",
null,
"http://factivation.com/members/wp-content/uploads/2013/07/factlab_addition_lesson3-300x255.jpg",
null,
"http://factivation.com/members/wp-content/uploads/2013/07/factlab_addition_lesson4-300x254.jpg",
null,
"http://factivation.com/members/wp-content/uploads/2013/07/factlab_addition_lesson5-300x256.jpg",
null,
"http://factivation.com/members/wp-content/uploads/2013/07/factlab_addition_lesson6-300x256.jpg",
null,
"http://factivation.com/members/wp-content/uploads/2013/07/factlab_addition_lesson7-300x255.jpg",
null,
"http://factivation.com/members/wp-content/uploads/2013/07/factlab_addition_lesson8-300x258.jpg",
null,
"http://factivation.com/members/wp-content/uploads/2013/07/factlab_addition_lesson9-300x255.jpg",
null,
"http://factivation.com/members/wp-content/uploads/2013/07/factlab_addition_lesson10-300x253.jpg",
null,
"http://factivation.com/members/wp-content/uploads/2013/07/factlab_addition_lesson11-300x255.jpg",
null,
"http://factivation.com/members/wp-content/uploads/2013/07/factlab_addition_lesson12-300x257.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.888373,"math_prob":0.62313795,"size":7114,"snap":"2019-51-2020-05","text_gpt3_token_len":1782,"char_repetition_ratio":0.18087201,"word_repetition_ratio":0.21434978,"special_character_ratio":0.23755974,"punctuation_ratio":0.1391863,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96031106,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,null,null,8,null,4,null,8,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-06T00:15:57Z\",\"WARC-Record-ID\":\"<urn:uuid:8d3b3b3f-e010-46ef-b348-ba2c0e982a2a>\",\"Content-Length\":\"70728\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:018bcbf9-5e04-4e8c-8dad-abb7b177f191>\",\"WARC-Concurrent-To\":\"<urn:uuid:e4fbb100-9460-4c5a-804d-9ae0258928bb>\",\"WARC-IP-Address\":\"205.134.238.27\",\"WARC-Target-URI\":\"http://factivation.com/addition-subtraction/\",\"WARC-Payload-Digest\":\"sha1:NAMDNSXG22TO475UAPFIMUOJW7STGMMT\",\"WARC-Block-Digest\":\"sha1:PPML7SR57D5M3SKLLBIH2FLD6ALTK4QU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540482954.0_warc_CC-MAIN-20191206000309-20191206024309-00043.warc.gz\"}"} |
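The Neighbors and Double-in-the-Middle strategies described in the lesson record above reduce to two small identities: a + (a+1) is “double a, plus one,” and a + (a+2) is “double the number in the middle,” 2(a+1). This sketch just checks them (function names are mine, not Factivation's):

```python
def neighbor_sum(a):
    """'Neighbors' strategy (Lessons 5 & 7): a + (a+1) equals double a, plus one."""
    return 2 * a + 1

def double_in_the_middle(a):
    """'Double in the Middle' strategy (Lesson 8): a + (a+2) equals double
    the number sitting between them, a+1."""
    return 2 * (a + 1)

# Spot-check against facts listed in the lessons: 6+7, 7+8, 3+5, 6+8.
assert neighbor_sum(6) == 6 + 7
assert neighbor_sum(7) == 7 + 8
assert double_in_the_middle(3) == 3 + 5
assert double_in_the_middle(6) == 6 + 8
```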
https://www.teachstarter.com/us/teaching-resource/describe-that-decimal-worksheet-us/ | [
"# Describe That Decimal Worksheet\n\nAccess this premium resource with a Basic Plan Not ready for a premium account? Browse free resources.\n1 page (PDF)|Suitable for Grades: 4 - 6\n\nA worksheet to deepen students’ understanding of decimals.\n\nUse this teaching resource to help consolidate your students’ understanding of decimals.\n\nAssign each student a decimal, or allow the students to choose their own. If you have access to a set of decimal dice, students can roll to get a starting decimal. This is written in the box at the top of the worksheet. Laminate or slide into a dry-erase pocket and use with a dry-erase marker for added longevity.\n\nStudents then complete a range of activities to describe their decimal. Activities include:\n\n• identifying a decimal that is less than and greater than their chosen decimal\n• identifying an equivalent fraction\n• drawing a visual representation\n• adding the decimal to a place value chart\n• multiplying and dividing the decimal by 10; 100; and 1,000\n• rounding the decimal to the nearest whole number, tenth, and hundredth\n• adding 1, 0.1, and 0.01 to the decimal.\n\nUse this worksheet as a math warm-up, a guided math activity, or as a math center activity.\n\nHow do I print this teaching resource?\n\n\"I love teach starter and am so thankful to have found such a great resource collection. The fee is only a few hours pay and I saved myself that much work within the first 15 minutes!\" ~ Nanci, a happy Teach Starter member.\n\nWant to see why hundreds of thousands of teachers love Teach Starter? Download a FREE 200+ page pack of premium Teach Starter teaching resources.\n\n#### Common Core State Standards alignment\n\n• CCSS.MATH.CONTENT.4.NF.C.6\n\nUse decimal notation for fractions with denominators 10 or 100. 
For example, rewrite 0.62 as 62/100; describe a length as 0.62 meters; locate 0.62 on a number line diagram....\n\n• CCSS.MATH.CONTENT.4.NF.C.7\n\nCompare two decimals to hundredths by reasoning about their size. Recognize that comparisons are valid only when the two decimals refer to the same whole. Record the results of comparisons with the symbols >, =, or <, and justify the conclusion...\n\n• CCSS.MATH.CONTENT.5.NBT.A.2\n\nExplain patterns in the number of zeros of the product when multiplying a number by powers of 10, and explain patterns in the placement of the decimal point when a decimal is multiplied or divided by a power of 10. Use whole-number exponents to denot...\n\n• CCSS.MATH.CONTENT.5.NBT.A.3\n\nRead, write, and compare decimals to thousandths....\n\n• CCSS.MATH.CONTENT.5.NBT.A.3.A\n\nRead and write decimals to thousandths using base-ten numerals, number names, and expanded form, e.g., 347.392 = 3 × 100 + 4 × 10 + 7 × 1 + 3 × (1/10) + 9 × (1/100) + 2 × (1/1000)....\n\n• CCSS.MATH.CONTENT.5.NBT.A.3.B\n\nCompare two decimals to thousandths based on meanings of the digits in each place, using >, =, and < symbols to record the results of comparisons....\n\n• CCSS.MATH.CONTENT.5.NBT.A.4\n\nUse place value understanding to round decimals to any place....\n\n• CCSS.MATH.CONTENT.5.NBT.B.7\n\nAdd, subtract, multiply, and divide decimals to hundredths, using concrete models or drawings and strategies based on place value, properties of operations, and/or the relationship between addition and subtraction; relate the strategy to a written me...\n\n#### Find more resources for these topics\n\nWrite a review to help other teachers and parents like yourself. If you would like to request a change (Changes & Updates) to this resource, or report an error, simply select the corresponding tab above.\n\n#### Request a change\n\nWould you like something changed or customized on this resource? 
While our team makes every effort to complete change requests, we can't guarantee that every change will be completed.\n\nYou must be logged in to request a change. Sign up now!\n\n#### Report an Error\n\nYou must be logged in to report an error. Sign up now!\n\nIf any of our resources do not have 100% accurate American English (en-US), simply click on the 'Report an error' tab above to let us know. We will have the resource updated and ready for you to download in less than 24 hours. Read more..."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.86440295,"math_prob":0.9035858,"size":4123,"snap":"2019-43-2019-47","text_gpt3_token_len":968,"char_repetition_ratio":0.11896091,"word_repetition_ratio":0.0088495575,"special_character_ratio":0.24084404,"punctuation_ratio":0.18181819,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9706078,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-19T02:30:15Z\",\"WARC-Record-ID\":\"<urn:uuid:d3e1c5c7-3197-4e52-a4e0-5b6fe47deaca>\",\"Content-Length\":\"167240\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:49398cb2-1b80-41bf-82c0-11798917db97>\",\"WARC-Concurrent-To\":\"<urn:uuid:5e626803-84bb-4e73-860c-70ce2f94b7ff>\",\"WARC-IP-Address\":\"104.20.63.156\",\"WARC-Target-URI\":\"https://www.teachstarter.com/us/teaching-resource/describe-that-decimal-worksheet-us/\",\"WARC-Payload-Digest\":\"sha1:65ZVYTZAXGQ3QA2M5D2F7TJCQBNPAT25\",\"WARC-Block-Digest\":\"sha1:NZKPPSI4XTUX5XDQSLPAWDRQOIEZX6NJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496669967.80_warc_CC-MAIN-20191119015704-20191119043704-00491.warc.gz\"}"} |
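The worksheet activities in the record above — shifting a decimal by powers of ten, rounding, and finding an equivalent fraction — can be mirrored with Python's standard-library `decimal` and `fractions` modules. A sketch using the 0.62 example from the standards text (variable names are mine):

```python
from decimal import Decimal, ROUND_HALF_UP
from fractions import Fraction

d = Decimal("0.62")

# Multiplying or dividing by 10, 100, or 1,000 shifts the decimal point.
times_100 = d * 100   # 62.00
div_10 = d / 10       # 0.062

# Rounding to the nearest whole number and the nearest tenth.
nearest_whole = d.quantize(Decimal("1"), rounding=ROUND_HALF_UP)    # 1
nearest_tenth = d.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)  # 0.6

# Equivalent fraction: 0.62 = 62/100, reduced to 31/50.
equiv = Fraction(d)
```

Using `Decimal` rather than binary floats keeps the place-value behaviour exact, which matches how the worksheet treats the digits.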
https://xcorr.net/2013/12/23/fast-1d-and-2d-data-binning-in-matlab/ | [
"# Fast 1D and 2D data binning in Matlab & Python\n\nI needed a fast method of binning 1D and 2D data in Matlab – that is, to compute the mean of z conditional on x being in a given range (1d binning) or the mean of z conditional on x and y being in given ranges (2d binning). I stumbled upon a clever method using a combination of histc and sparse which involves no looping and adapted the method to the 2d case. Here are two functions that do the trick:\n\n```function [ym,yb] = bindata(y,x,xrg)\n%function [ym,yb] = bindata(y,x,xrg)\n%Computes ym(ii) = mean(y(x>=xrg(ii) & x < xrg(ii+1))) for every ii\n%using a fast algorithm which uses no looping\n%If a bin is empty it returns nan for that bin\n%Also returns yb, the approximation of y using binning (useful for r^2\n%calculations). Example:\n%\n%x = randn(100,1);\n%y = x.^2 + randn(100,1);\n%xrg = linspace(-3,3,10)';\n%[ym,yb] = bindata(y,x,xrg);\n%X = [xrg(1:end-1),xrg(2:end)]';\n%Y = [ym,ym]'\n%plot(x,y,'.',X(:),Y(:),'r-');\n%\n%By Patrick Mineault\n%Refs: https://xcorr.net/?p=3326\n% http://www-pord.ucsd.edu/~matlab/bin.htm\n[~,whichedge] = histc(x,xrg(:)');\n\nbins = min(max(whichedge,1),length(xrg)-1);\nxpos = ones(size(bins,1),1);\nns = sparse(bins,xpos,1);\nysum = sparse(bins,xpos,y);\nym = full(ysum)./(full(ns));\nyb = ym(bins);\nend\n```\n\nAnd for the 2d case:\n\n```function [ym,yb] = bindata2(y,x1,x2,x1rg,x2rg)\n%function [ym,yb] = bindata2(y,x1,x2,x1rg,x2rg)\n%Computes:\n%ym(ii,jj) = mean(y(x1>=x1rg(ii) & x1 < x1rg(ii+1) & x2>=x2rg(jj) & x2 < x2rg(jj+1)))\n%for every ii, jj\n%If a bin is empty it returns nan for that bin\n%using a fast algorithm which uses no looping\n%Also returns yb, the approximation of y using binning (useful for r^2\n%calculations).
Example:\n%\n%x = randn(500,2);\n%y = sum(x.^2,2) + randn(500,1);\n%xrg = linspace(-3,3,10)';\n%[ym,yb] = bindata2(y,x(:,1),x(:,2),xrg,xrg);\n%subplot(1,2,1);plot3(x(:,1),x(:,2),y,'.');\n%subplot(1,2,2);h = imagesc(xrg,xrg,ym);\n%\n%By Patrick Mineault\n%Refs: https://xcorr.net/?p=3326\n% http://www-pord.ucsd.edu/~matlab/bin.htm\n[~,whichedge1] = histc(x1,x1rg(:)');\n[~,whichedge2] = histc(x2,x2rg(:)');\n\nbins1 = min(max(whichedge1,1),length(x1rg)-1);\nbins2 = min(max(whichedge2,1),length(x2rg)-1);\n\nbins = (bins2-1)*(length(x1rg)-1)+bins1;\n\nxpos = ones(size(bins,1),1);\nns = sparse(bins,xpos,1,(length(x1rg)-1)*(length(x2rg)-1),1);\nysum = sparse(bins,xpos,y,(length(x1rg)-1)*(length(x2rg)-1),1);\nym = full(ysum)./(full(ns));\nyb = ym(bins);\nym = reshape(ym,length(x1rg)-1,length(x2rg)-1);\nend\n```\n\nWhat about in Python? It turns out there’s a built-in way of doing this: using the `weights` property of `numpy.histogram`.\n\n1.",
null,
DB says:\n\nThere’s a little bug here. For the 1D case (same logic applies in 2D):\nbins = min(max(whichedge,1),length(xrg)-1);\n\nthis takes values of whichedge that are less than 1 and turns them into 1, and values of whichedge that are greater than numel(xrg)-1 and turns them into that. This would (maybe) make sense if matlab labeled edges BIN as 0 when it’s outside on the low end, and numel(xrg) when it’s outside on the high end. However, according to [help histc]:\n\n“BIN is zero for out of range values.”\n\nSo x<xrg(1) and x>xrg(end) all end up squished into bin 1, along with everything that belongs there. There are a few options for what to do with out of range values; I think the code intended to lump them into the extreme bins, though I think a more appropriate solution is to remove them:\n\nbins=whichedge(whichedge>0);\ny=y(whichedge>0);\n\nAn elegant strategy for lumping them in would be to simply replace xrg(1) and xrg(end) with -inf and inf. Or, if you wanted to return averages for those values falling outside of xrg, you might append a -inf and an inf to xrg, making the output ym([1 end]) the means of the out of range values, and ym(2:end-1) the mean for y where x>=xrg(n-1) and x<xrg(n).\n\n•
null,
"Dan C says:\n\nooh yes thanks for pointing this error out. this code will throw all data outside the input bin range into the first/last bin!\n\n2.",
null,
"Peter says:\n\nFor those of us with lesser intellectual powers, if it’s not too much trouble, do you know how to get the function to also return the number of points in each bin (2D case)?\n\n•",
null,
"Peter says:\n\nActually, it’s simply the variable ns isn’t it…fantastic function by the way! I had done the 2d one with a loop and it was crazy slow even with a low number of bins\n\n3.",
null,
"ck says:\n\nI think accumarray is even faster than using sparse, at least in the 1-D case. Also it generalizes to other functions.\n\n•",
null,
"xcorr says:\n\nGood to know, I didn’t know about that function."
]
| [
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.75827384,"math_prob":0.9981665,"size":4432,"snap":"2021-43-2021-49","text_gpt3_token_len":1434,"char_repetition_ratio":0.100271,"word_repetition_ratio":0.1030303,"special_character_ratio":0.32558665,"punctuation_ratio":0.19926874,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99879915,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-26T18:02:31Z\",\"WARC-Record-ID\":\"<urn:uuid:e08f9375-de59-4b85-8c6f-1f6c28a74580>\",\"Content-Length\":\"113334\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6907d6e0-9914-4fa4-ad3b-fac70c71e817>\",\"WARC-Concurrent-To\":\"<urn:uuid:9b3f433d-3e36-447c-b77e-cf1a7d65bb1c>\",\"WARC-IP-Address\":\"192.0.78.25\",\"WARC-Target-URI\":\"https://xcorr.net/2013/12/23/fast-1d-and-2d-data-binning-in-matlab/\",\"WARC-Payload-Digest\":\"sha1:EGJQKRSJ5CZMMB6GNRCYZ5BARPNLDYTA\",\"WARC-Block-Digest\":\"sha1:CLIUV3W7UI6AVGVYI7NAVHR7B5VV6FBQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587915.41_warc_CC-MAIN-20211026165817-20211026195817-00086.warc.gz\"}"} |
http://ikeysworld.com/kindle/aspects-of-ergodic-qualitative-and-statistical-theory-of-motion | [
"# Download Aspects of Ergodic, Qualitative and Statistical Theory of by Giovanni Gallavotti PDF",
null,
"By Giovanni Gallavotti\n\nmeant for newbies in ergodic idea, this ebook addresses scholars in addition to researchers in mathematical physics. the most novelty is the systematic remedy of attribute difficulties in ergodic concept by way of a unified approach by way of convergent strength sequence and renormalization workforce tools, specifically. easy thoughts of ergodicity, like Gibbs states, are built and utilized to, e.g., Asonov platforms or KAM thought. Many examples illustrate the tips and, moreover, a considerable variety of attention-grabbing themes are handled within the kind of guided problems.\n\nBest mathematical physics books\n\nBoundary and Eigenvalue Problems in Mathematical Physics.\n\nThis famous textual content makes use of a constrained variety of uncomplicated innovations and methods — Hamilton's precept, the speculation of the 1st version and Bernoulli's separation strategy — to strengthen entire strategies to linear boundary price difficulties linked to moment order partial differential equations equivalent to the issues of the vibrating string, the vibrating membrane, and warmth conduction.\n\nFourier Series (Mathematical Association of America Textbooks)\n\nIt is a concise advent to Fourier sequence protecting background, significant subject matters, theorems, examples, and purposes. it may be used for self research, or to complement undergraduate classes on mathematical research. starting with a quick precis of the wealthy historical past of the topic over 3 centuries, the reader will delight in how a mathematical concept develops in phases from a pragmatic challenge (such as conduction of warmth) to an summary thought facing ideas equivalent to units, capabilities, infinity, and convergence.\n\nSymmetry Methods for Differential Equations: A Beginner’s Guide\n\nA very good operating wisdom of symmetry equipment is particularly important for these operating with mathematical types. 
This e-book is an easy advent to the topic for utilized mathematicians, physicists, and engineers. The casual presentation makes use of many labored examples to demonstrate the main symmetry equipment.\n\nHomogenization: In Memory of Serguei Kozlov\n\nThis quantity is dedicated to distinct options of versions of strongly correlated electrons in a single spatial measurement by way of the Bethe Ansatz. types tested comprise: the one-dimensional Hubbard version; the supersymmetric t-J version; and different types of strongly correlated electrons severe direction research of delivery in hugely disordered random media / ok.\n\nExtra info for Aspects of Ergodic, Qualitative and Statistical Theory of Motion\n\nSample text\n\nIj = q. for some s < Qn and 1/(q. e. q. 14]: (Best approximation and convergents) A necessary and sufficient condition in order that a rational approximation to an irrational number be a best approximation is that it is a convergent of the continued fraction of r. 16]: (Bounded entries continued fractions) Show that if the entries ai of the irrational number r are uniformly bounded by N then the growth of Qn is bounded by an exponential (and one can estimate Qn by a 44 2 Ergodicity and Ergodic Points constant times [(N + (N 2 + 4) 112 )/2t).\n\nA1! is chosen g. Show also that one has n = (2 called the deferent while the circles with radii ai, j > 1 are called epicycles. g. the case n = 3 with a, b, c positive and equal to the sides of a triangle (we refer to such a case as a singular epicyclic motion). Show that the average rotation speed n exists for all but a set of zero measure of g's. g. the case in which n = 3 and a 1,a2,a3 are the sides of a triangle (see the previous problem). e. fl) for all: this is a problem posed by Lagrange and solved by Bohl.\n\nP + ~t mod 27T and ). p I (27T is ergodicif and only if (W1' ... 'Wr) are rationally independent. 2, realize that it can be thought of as a \"union\" of ergodic systems. 
30 for all r ::::: 1 is not mixing. 33]: (Epicyclic motion) Let a 1 , a2, ... , an be n > 2 nonzero complex numbers, let w1 , ... , Wn be n rationally independent numbers and let a1, ... , an be n angles. Consider the motion z(g +~t) of z(g) = I;; a;eio:, (epicyclic motion). (g+~t) is given by f(g+~t) with _ 2::;, 3 a,ajWi cos( a; - ai) = ~ ."
]
| [
null,
"https://images-na.ssl-images-amazon.com/images/I/410NpZNvZUL._SX328_BO1,204,203,200_.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.88764966,"math_prob":0.96652836,"size":4725,"snap":"2020-34-2020-40","text_gpt3_token_len":1069,"char_repetition_ratio":0.10336793,"word_repetition_ratio":0.018229166,"special_character_ratio":0.21671958,"punctuation_ratio":0.13860252,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9786423,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-28T19:21:48Z\",\"WARC-Record-ID\":\"<urn:uuid:6df6c014-c133-4de3-b36e-ceb511f6c65c>\",\"Content-Length\":\"29859\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:507c22c3-83aa-4a01-9e8e-49a2241b6e09>\",\"WARC-Concurrent-To\":\"<urn:uuid:792a3a3a-f222-4584-97e8-1c52852dcef3>\",\"WARC-IP-Address\":\"50.63.47.1\",\"WARC-Target-URI\":\"http://ikeysworld.com/kindle/aspects-of-ergodic-qualitative-and-statistical-theory-of-motion\",\"WARC-Payload-Digest\":\"sha1:NXICOQFTVOHSDXC2UVIVV7PER6CYQV6G\",\"WARC-Block-Digest\":\"sha1:5SSSAAGDGJ6GTDTRBGKQGBBFBWMOMJ3Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401604940.65_warc_CC-MAIN-20200928171446-20200928201446-00650.warc.gz\"}"} |
http://identificationkey.fr/ikeyplus/index.php/help_weightType | [
"## Help : WeightType\n\nIn the input SDD file you have the possibility to assign a weight for each taxa/character combination. The weightType parameter determines which weight calculation method will be applied :\n\n- The global method calculates the average weight of each character at the beginning (the sum of all weights of a character divided by the number of taxa). The weight of each character remains the same during the entire process.\n\n- The contextual method calculates the average weight of each remaining character at each node, based only on the remaining taxa at this step. In this case the weight of a character can change during the process depending of the weight of this character for each remaining taxa.\n\nClose"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9473609,"math_prob":0.9916485,"size":696,"snap":"2019-51-2020-05","text_gpt3_token_len":128,"char_repetition_ratio":0.20086706,"word_repetition_ratio":0.053097345,"special_character_ratio":0.18390805,"punctuation_ratio":0.057377048,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9836813,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-18T21:25:38Z\",\"WARC-Record-ID\":\"<urn:uuid:aa9fb5f2-5d37-46a6-8277-c9e947fdf26c>\",\"Content-Length\":\"2417\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4eafcdbd-c765-44c3-90d5-088e8a096090>\",\"WARC-Concurrent-To\":\"<urn:uuid:e3dd1358-b200-40ae-bc47-396059cc05e8>\",\"WARC-IP-Address\":\"134.157.190.223\",\"WARC-Target-URI\":\"http://identificationkey.fr/ikeyplus/index.php/help_weightType\",\"WARC-Payload-Digest\":\"sha1:WKOBAXXFML3K2BIWFA5TDS2SWHCTPJAZ\",\"WARC-Block-Digest\":\"sha1:7FW5V3D6UP4RG5J25JIFRS6X4PTUOOTN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250593937.27_warc_CC-MAIN-20200118193018-20200118221018-00323.warc.gz\"}"} |
http://www.mymessedupmind.co.uk/index.php/tag/algorithm/ | [
"# Javascript K-Means algorithm\n\nApril 6, 2010\n\nAfter reading an article by Howard Yeend (Pure Mango) I decided I would try to write a version of the basic learn algorithm.\n\nBelow is what I came up with.\n\n```function kmeans( arrayToProcess, Clusters )\n{\n\nvar Groups = new Array();\nvar Centroids = new Array();\nvar oldCentroids = new Array();\nvar changed = false;\n\n// initialise group arrays\nfor( initGroups=0; initGroups < Clusters; initGroups++ )\n{\n\nGroups[initGroups] = new Array();\n\n}\n\n// pick initial centroids\n\ninitialCentroids=Math.round( arrayToProcess.length/(Clusters+1) );\n\nfor( i=0; i < Clusters; i++ )\n{\n\nCentroids[i]=arrayToProcess[ (initialCentroids*(i+1)) ];\n\n}\n\ndo\n{\n\nfor( j=0; j < Clusters; j++ )\n{\n\nGroups[j] = [];\n\n}\n\nchanged=false;\n\nfor( i=0; i < arrayToProcess.length; i++ )\n{\n\nDistance=-1;\noldDistance=-1\n\nfor( j=0; j < Clusters; j++ )\n{\n\ndistance = Math.abs( Centroids[j]-arrayToProcess[i] );\n\nif ( oldDistance==-1 )\n{\n\noldDistance = distance;\nnewGroup = j;\n\n}\nelse if ( distance <= oldDistance )\n{\n\nnewGroup=j;\noldDistance = distance;\n\n}\n\n}\n\nGroups[newGroup].push( arrayToProcess[i] );\n\n}\n\noldCentroids=Centroids;\n\nfor ( j=0; j < Clusters; j++ )\n{\n\ntotal=0;\nnewCentroid=0;\n\nfor( i=0; i < Groups[j].length; i++ )\n{\n\ntotal+=Groups[j][i];\n\n}\n\nnewCentroid=total/Groups[newGroup].length;\n\nCentroids[j]=newCentroid;\n\n}\n\nfor( j=0; j < Clusters; j++ )\n{\n\nif ( Centroids[j]!=oldCentroids[j] )\n{\n\nchanged=true;\n\n}\n\n}\n\n}\nwhile( changed==true );\n\nreturn Groups;\n\n}\n```\n\nI’ve expanded the script now to support multiple dimensions, you can see my script here.\n\nFiled under: Mathematics,Programming — Tags: , , , , , — admin @ 8:35 pm",
]
| [
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.5852699,"math_prob":0.8823191,"size":1944,"snap":"2022-27-2022-33","text_gpt3_token_len":537,"char_repetition_ratio":0.15515465,"word_repetition_ratio":0.058020476,"special_character_ratio":0.29578188,"punctuation_ratio":0.1954023,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9990726,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,5,null,5,null,10,null,5,null,10,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-09T01:18:34Z\",\"WARC-Record-ID\":\"<urn:uuid:8a994baf-0376-401d-9bb0-074d3813ba1f>\",\"Content-Length\":\"19233\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2811c9a8-0972-4d7c-9de5-86251b26e1d8>\",\"WARC-Concurrent-To\":\"<urn:uuid:e6071309-db42-4711-8d0e-1c066e4285b1>\",\"WARC-IP-Address\":\"89.145.100.30\",\"WARC-Target-URI\":\"http://www.mymessedupmind.co.uk/index.php/tag/algorithm/\",\"WARC-Payload-Digest\":\"sha1:KQAFVW7QW7QS5GLWUBND77F2TGI6GLY7\",\"WARC-Block-Digest\":\"sha1:23EDBLAHQCLZJBCCOAMZGQMMXE4W7UW7\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570879.37_warc_CC-MAIN-20220809003642-20220809033642-00053.warc.gz\"}"} |
https://www.javatpoint.com/armstrong-number-in-cpp | [
"# Armstrong Number in C++\n\nBefore going to write the C++ program to check whether the number is Armstrong or not, let's understand what is Armstrong number.\n\nArmstrong number is a number that is equal to the sum of cubes of its digits. For example 0, 1, 153, 370, 371 and 407 are the Armstrong numbers.\n\nLet's try to understand why 371 is an Armstrong number.\n\nLet's see the C++ program to check Armstrong Number.\n\nOutput:\n\n```Enter the Number= 371\nArmstrong Number.\n```\n```Enter the Number= 342\nNot Armstrong Number.\n```",
]
| [
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.77153647,"math_prob":0.9932123,"size":540,"snap":"2019-35-2019-39","text_gpt3_token_len":130,"char_repetition_ratio":0.23507462,"word_repetition_ratio":0.02173913,"special_character_ratio":0.2685185,"punctuation_ratio":0.12037037,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98947614,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-17T04:14:19Z\",\"WARC-Record-ID\":\"<urn:uuid:0e84835e-91b0-4740-bf44-01d237050be0>\",\"Content-Length\":\"54282\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5c238195-21ce-44dd-9b90-45a5a94e4768>\",\"WARC-Concurrent-To\":\"<urn:uuid:e5a03443-9fd2-4d38-88be-5fbc77484f15>\",\"WARC-IP-Address\":\"95.216.57.234\",\"WARC-Target-URI\":\"https://www.javatpoint.com/armstrong-number-in-cpp\",\"WARC-Payload-Digest\":\"sha1:U5KLMBFJEIEL7GMOTEHXL6XZXD7DZE4P\",\"WARC-Block-Digest\":\"sha1:BT6CDW757UJC4Z6HQYUO4LNBCUBCV3VD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573052.26_warc_CC-MAIN-20190917040727-20190917062727-00206.warc.gz\"}"} |
https://groupprops.subwiki.org/wiki/Split_special_orthogonal_group_of_degree_two | [
"# Split special orthogonal group of degree two\n\n## Definition\n\nSuppose",
null,
"$K$ is a field. The split special orthogonal group of degree two over",
null,
"$K$ is defined as the subgroup of the general linear group of degree two over",
null,
"$K$ given as follows:",
null,
"$\\{ A \\in GL(2,K) \\mid \\operatorname{det}(A) = 1, A\\begin{pmatrix} 0 & 1 \\\\ 1 & 0 \\\\\\end{pmatrix}A^T = \\begin{pmatrix} 0 & 1 \\\\ 1 & 0 \\\\\\end{pmatrix}\\}$\n\nFor characteristic not equal to two, an alternative definition, which gives a conjugate subgroup and hence an isomorphic group, is:",
null,
"$\\{ A \\in GL(2,K) \\mid \\operatorname{det}(A) = 1, A\\begin{pmatrix} 1 & 0 \\\\ 0 & -1 \\\\\\end{pmatrix}A^T = \\begin{pmatrix} 1 & 0 \\\\ 0 & -1 \\\\\\end{pmatrix} \\}$\n\n### Over a finite field\n\nFor a finite field",
null,
"$K$, this group is denoted",
null,
"$SO(+1,2,K)$ and is termed the orthogonal group of \"+\" type. It is also denoted",
null,
"$SO(+1,2,q)$ where",
null,
"$q$ is the size of the field.\n\nIt turns out that:\n\n• If",
null,
"$q$ is a power of 2, i.e., if",
null,
"$q$ is even, then the group is a dihedral group of order",
null,
"$2(q - 1)$ and degree",
null,
"$q - 1$.\n• If",
null,
"$q$ is odd, then the group is a cyclic group of order",
null,
"$q - 1$.\n\n## Arithmetic functions\n\n### Over a finite field\n\nWe consider here the group",
null,
"$SO(+1,2,q) = SO(+1,2,K)$ where",
null,
"$K$ is a field (unique up to isomorphism) of size",
null,
"$q$.\n\nFunction Value Similar groups Explanation\norder Case",
null,
"$q$ odd:",
null,
"$q - 1$\nCase",
null,
"$q$ even:",
null,
"$2(q - 1)$\n\n## Particular cases",
null,
"$q$ (field size)",
null,
"$p$ (underlying prime, field characteristic) exponent on",
null,
"$p$ giving",
null,
"$q$ special orthogonal group",
null,
"$SO(+1,2,q)$ order of group second part of GAP ID (GAP ID is (order,2nd part) Comments\n2 2 1 cyclic group:Z2 2 1\n3 3 1 cyclic group:Z2 2 1\n4 2 2 symmetric group:S3 6 1\n5 5 1 cyclic group:Z4 4 1\n7 7 1 cyclic group:Z6 6 2\n8 2 3 dihedral group:D14 14 1\n9 3 2 cyclic group:Z8 8 1\n11 11 1 cyclic group:Z10 10 2\n13 13 1 cyclic group:Z12 12 2\n16 2 4 dihedral group:D30 30 3\n17 17 1 cyclic group:Z16 16 1"
]
| [
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.78521645,"math_prob":0.99985886,"size":1483,"snap":"2020-24-2020-29","text_gpt3_token_len":448,"char_repetition_ratio":0.1893171,"word_repetition_ratio":0.03717472,"special_character_ratio":0.32636547,"punctuation_ratio":0.10897436,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997234,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54],"im_url_duplicate_count":[null,null,null,null,null,null,null,1,null,1,null,null,null,1,null,2,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-24T22:46:39Z\",\"WARC-Record-ID\":\"<urn:uuid:7ca88fdf-fac8-431d-83b5-c6203a811567>\",\"Content-Length\":\"29057\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:131f892c-b41f-4bd7-9ea0-c487ec8f7e47>\",\"WARC-Concurrent-To\":\"<urn:uuid:c5d217b6-9c15-4ef5-8d2f-e5e5d0685a7f>\",\"WARC-IP-Address\":\"96.126.114.7\",\"WARC-Target-URI\":\"https://groupprops.subwiki.org/wiki/Split_special_orthogonal_group_of_degree_two\",\"WARC-Payload-Digest\":\"sha1:GZ7JRZOOKW6SJW3OF4BYRXWCEYI35JT3\",\"WARC-Block-Digest\":\"sha1:VKTXKW5QW66MYSCE5D2WVZZZQGNTPIQS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347385193.5_warc_CC-MAIN-20200524210325-20200525000325-00079.warc.gz\"}"} |
https://confluence.cprime.io/plugins/viewsource/viewpagesrc.action?pageId=6560380 | [
"Cprime Apps has been rebranded as Anova Apps. Please note the only effect is the company name - all of our products’ names, logos, functionalities, support, etc. is exactly the same. The new location to our documentation space is https://anovaapps.atlassian.net.\n This routine is available starting with SIL Engine™ 2.5.\n\nsign(number)\n\n## Description\n\n Determines the sign of a number (signum).\n\nReturns 1 if the number is positive, zero (0) if the number is 0, and -1 if the number is negative.\n\n## Parameters\n\nParameterTypeRequiredDescription\nnumberNumberYesAny real number.\n\nnumber\n\n## Example\n\n ```number a = sign(3); print(\"a= \" + a); number b = sign(-4); print(\"b= \" + b); number c = sign(3-3); print(\"b= \" + c);```\n\nPrints:\n\na= 1;\n\nb= -1;\n\nc= 0;",
null,
""
]
| [
null,
"https://confluence.cprime.io/plugins/servlet/confluence/placeholder/macro",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.75676537,"math_prob":0.9969594,"size":706,"snap":"2021-31-2021-39","text_gpt3_token_len":203,"char_repetition_ratio":0.14672364,"word_repetition_ratio":0.0,"special_character_ratio":0.3045326,"punctuation_ratio":0.18666667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9528842,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-27T04:53:23Z\",\"WARC-Record-ID\":\"<urn:uuid:becbf6a0-48cb-4838-b05a-159058fe0013>\",\"Content-Length\":\"10007\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4f43550f-a61d-40fb-abaf-97b5ea3c3e5c>\",\"WARC-Concurrent-To\":\"<urn:uuid:af83f348-8d7a-421b-a8a2-3e98db720374>\",\"WARC-IP-Address\":\"54.68.147.7\",\"WARC-Target-URI\":\"https://confluence.cprime.io/plugins/viewsource/viewpagesrc.action?pageId=6560380\",\"WARC-Payload-Digest\":\"sha1:3VU2DKZK5NBWIH6NG4PMJKJ6CL6LGKM7\",\"WARC-Block-Digest\":\"sha1:LYGRHF7UL6O7POFHDTLJIWELHZI3T4XX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780058263.20_warc_CC-MAIN-20210927030035-20210927060035-00423.warc.gz\"}"} |
https://games.speak-read-write.com/puzzle-pennsylvania.html | [
"# Pennsylvania\n\n S T Y E L L A V S I D N A L U I N Y B H E K M B E H R I C K E T T S G L E N F A L L S C L K U N S E O O O C S U E V A C L A T S Y R C T L I E T A R T F L Q U I E T V A L L E Y I K K U I N S H M S A P U L S E R C A D O O W T R A H D M G N E E H V O F S E X T P H A C O P S R A N A S O O F S E R C F O O Y S G L L E B Y T R E B I L F O R T Y E O R U M J R R G R U B S A R T S N L P T I R G S N T K M L A N I M R E T G N I D A E R U C E A N O H K J D W P F A S N A C H T M V N U W K E R O S I M E O N E A U T O M U S H O L N S O Y T D C Y R A I V A L A N O I T A N W U Y O S R I A E S N R E V A C O H C E N A I D N I S C I E L L N P F O N T H I L L C A S T L E J O N C A S K O S P A L N K V X X M E I G E N R A C N M M U U C D I I N D E P E N D E N C E H A L L E W A O J O U H M U E S U M R E T T U M E O Z T P Z L H Y H O P E W E L L F U R N A C E T F U M Y J L T M C S T E A M T O W N M I L L A T A N S E L M A H C S L S A J E G A L L I V Y M O N O C E D L O O T I B B A R K C A J S C R A P P L E M Q E O B N\n Find and circle the words below in the puzzle grid. The words may read down, left to right, right to left, up, or diagonally. Some words may share letters with another word. Ignore spaces in multi-word names. Abbreviations used are MKT for MARKET and MUS for MUSEUM.\n\n BOATHOUSE ROW CARNEGIE (MUSEUMS) CHOCOLATE CRYSTAL CAVE FASNACHT FONTHILL CASTLE HARTWOOD ACRES HERSHEY GARDENS HOPEWELL FURNACE INDEPENDENCE HALL INDIAN ECHO CAVERNS JACK RABBIT KING OF PRUSSIA MALL LANDIS VALLEY LIBERTY BELL MCCONNELLS MILL MEXICAN WAR STREETS MILL AT ANSELMA MUTTER MUSEUM NATIONAL AVIARY OLD ECONOMY VILLAGE PENNSYLVANIA DUTCH PHACOPS RANA PHIPPS CONSERVATORY POCONOS QUIET VALLEY READING TERMINAL MKT RICKETTS GLEN FALLS SCRAPPLE SESAME STREET SHOFUSO JAPANESE HOUSE SIMEONE AUTO MUS STEAMTOWN STRASBURG RR (RAIL ROAD) THE FRICK TOONSEUM TROLLEY (MUSEUMS)\n\nSolution"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.7595654,"math_prob":0.41110092,"size":2106,"snap":"2019-51-2020-05","text_gpt3_token_len":839,"char_repetition_ratio":0.087059945,"word_repetition_ratio":0.0,"special_character_ratio":0.328585,"punctuation_ratio":0.016200295,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9858638,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-27T16:41:39Z\",\"WARC-Record-ID\":\"<urn:uuid:aa6b505a-a7d6-40af-9c05-7bbac77baa10>\",\"Content-Length\":\"10383\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:19fd2f0c-2a23-40db-91c2-bffab533c4a1>\",\"WARC-Concurrent-To\":\"<urn:uuid:771c3aba-1842-4286-b6b3-6afdbcc5ba44>\",\"WARC-IP-Address\":\"69.163.153.129\",\"WARC-Target-URI\":\"https://games.speak-read-write.com/puzzle-pennsylvania.html\",\"WARC-Payload-Digest\":\"sha1:FUZ3YJJVC532DOUIXERUFBG4T4PIK7VV\",\"WARC-Block-Digest\":\"sha1:4TNPAL4WMYIBXWWB5WZUDO42S6EH22ZP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251700988.64_warc_CC-MAIN-20200127143516-20200127173516-00540.warc.gz\"}"} |
https://physics.stackexchange.com/questions/505791/rotational-work-done-by-friction | [
"# Rotational Work done by Friction\n\n(Okay so I have asked this in a previous post but I think this should be a separate question)\n\nConsider a cylinder performing accelerated pure rolling (Static friction is non-zero) on a sufficiently rough surface.\n\nThe point of contact keeps changing since the body is rolling. Now I understand rotational work to the work done by a torque to rotate a body by some angle. The torque is provided by a force which is applied to a PARTICULAR POINT of the body and stays in contact with that PARTICULAR POINT for the entire rotation (let us say by an angle A). However, this is what confuses me, In pure rolling the point of contact is at rest and is not displaced at that particular instant when it is in contact with the surface, the point of contact then changes when the body rolls. Now if the initial point of contact was at rest (Was not displaced by A) then how can friction do rotational work ?\n\n• Work is always done by forces not torques,fields etc. Oct 1, 2019 at 15:43\n\nThe \"momentarily at rest\" fact indeed implies that no work is done by friction.\n\nAnd it indeed is not the torque by friction which does the work; it is the torque that causes the acceleration, which does the work!\n\nSince your cylinder is accelerating, there must be such a torque present. Maybe your cylinder is a wheel on a car. Then it is the engine that causes a torque on the wheel about the axle.\n\nThis, shall we call it engine torque, is doing rotational work on the cylinder. Was the cylinder hanging free with no contact to the ground, then this work done would add rotational kinetic energy. The cylinder would spin faster and faster - it would not move/translate, just spin/rotate.\n\nYou can think of the torque by friction as an \"intermediate enabler\" that causes this work, which otherwise would have turned into rotational kinetic energy, to be converted into translational kinetic energy of the cylinder. 
Friction doesn't do this work, it just acts as a means of changing from rotation to translation.\n\nYou can be further convinced of this fact that friction does no work, by thinking of what would happen if the engine torque (the torque that causes the acceleration) stopped acting:\n\nThen the cylinder would stop speeding up. But would it slow down? No! It would just continue rolling. Forever. At constant speed (constant translational and constant rotational speed). The torque by friction does no work on its own (neither positive nor negative, so it doesn't speed up nor slow down). (In fact, that static friction is not even present, since there are no other forces present to counteract.) It will continue rolling until some other forces/torques, which can do work appears, or until energy in other ways are removed/added to its motion.\n\n• I know that the work done by friction is 0 . Consider a cylinder with a horizontal force on its center. The force will provide a linear acceleration to the cylinder but the cylinder also rolls (rough surface), this torque will be provided by friction, therefore there must be some rotational work. But how can friction do this work (read my question regarding this confusion) Oct 2, 2019 at 5:29\n• @AdityaAhuja When a force pushes at the centre, then it causes no rotation about the centre. But it does cause rotation about the contact point, just only at an instant moment, before a new contact point takes over. This force is indeed providing the work, and friction is again just a means of converting this translational energy into rotational energy. Oct 2, 2019 at 5:53\n• But how can friction do rotational work if friction is not displacing the point of contact by an angle ? Oct 2, 2019 at 6:44\n\nStatic friction fixes the location of the bottom of a rolling cylinder (or wheel). The wheel is undergoing an instantaneous rotation about this fixed point. 
As Steeven points out, an external source of torque will try to accelerate this rotation. This acceleration will exert a force on the center of mass (or axle) of the wheel (which is moving)."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9490604,"math_prob":0.86650044,"size":896,"snap":"2022-05-2022-21","text_gpt3_token_len":183,"char_repetition_ratio":0.12668161,"word_repetition_ratio":0.0,"special_character_ratio":0.20424107,"punctuation_ratio":0.05142857,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.959917,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-26T21:01:50Z\",\"WARC-Record-ID\":\"<urn:uuid:a320e6eb-3d16-48df-8163-6d9c03b6c604>\",\"Content-Length\":\"239820\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:20aa6615-cedd-4b67-9457-8d716b56d02f>\",\"WARC-Concurrent-To\":\"<urn:uuid:af60d540-5585-4fd4-9e60-b5e8ccf82b0d>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/505791/rotational-work-done-by-friction\",\"WARC-Payload-Digest\":\"sha1:WEPFIESKTYQ5IBRIWMXRGDO6TKTCXEJS\",\"WARC-Block-Digest\":\"sha1:BZYISWICCNMGJSSCT7RXF5RLNR4UOHGJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662625600.87_warc_CC-MAIN-20220526193923-20220526223923-00205.warc.gz\"}"} |
https://incos.tdtu.edu.vn/publications/2020/classification-of-stable-solutions-to-a-fractional-singular-elliptic-equation | [
"Nhảy đến nội dung\n\n# Classification of Stable Solutions to a Fractional Singular Elliptic Equation with Weight\n\nAuthors:\n\nAnh Tuan Duong, Vu Trong Luong, Thi Quynh Nguyen\n\nSource title:\nActa Applicandae Mathematicae, 170: 579-591, 2020 (ISI)\nAcademic year of acceptance:\n2020-2021\nAbstract:\n\nLet p > 0 and (−Δ)s is the fractional Laplacian with 0 < s < 1. The purpose of this paper is to establish a classification result for positive stable solutions to a fractional singular elliptic equation with weight",
null,
".\n\nHere N > 2s and h is a nonnegative, continuous function satisfying h(x) ≥ C|x|a, a ≥ 0, when |x| large. We prove the nonexistence of positive stable solutions of this equation under the condition",
null,
"or equivalently\n\np > pc(N,s,a),\n\nwhere",
null,
""
]
| [
null,
"https://incos.tdtu.edu.vn/sites/incos/files/INCOS/Paper%202020-2021/9.png",
null,
"https://incos.tdtu.edu.vn/sites/incos/files/INCOS/Paper%202020-2021/10.png",
null,
"https://incos.tdtu.edu.vn/sites/incos/files/INCOS/Paper%202020-2021/11.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8478787,"math_prob":0.80339366,"size":748,"snap":"2022-40-2023-06","text_gpt3_token_len":189,"char_repetition_ratio":0.13172042,"word_repetition_ratio":0.13793103,"special_character_ratio":0.2433155,"punctuation_ratio":0.11029412,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9856776,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-06T21:12:48Z\",\"WARC-Record-ID\":\"<urn:uuid:628f0ec2-5bf2-40c5-8c8c-41b1c520930a>\",\"Content-Length\":\"48075\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1bef4579-6c73-4724-8b65-7e3e8cb75629>\",\"WARC-Concurrent-To\":\"<urn:uuid:bc3ff377-b422-446c-9581-014879951764>\",\"WARC-IP-Address\":\"101.53.9.9\",\"WARC-Target-URI\":\"https://incos.tdtu.edu.vn/publications/2020/classification-of-stable-solutions-to-a-fractional-singular-elliptic-equation\",\"WARC-Payload-Digest\":\"sha1:5IVYC7XCZDB7PIMFB7B6FFBQNUGU4KNR\",\"WARC-Block-Digest\":\"sha1:VNUKPKKOXYRS5SE4EYN6EKYZRJC4DZYS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337855.83_warc_CC-MAIN-20221006191305-20221006221305-00688.warc.gz\"}"} |
https://www.dadsworksheets.com/worksheets/multiplication/spiral-multiplication-all-facts-v4.html | [
"PLEASE GO BACK AND USE THE BIG BLUE 'PRINT' BUTTON ON THE PAGE TO PRINT THE WORKSHEET CORRECTLY!",
null,
"Sorry for the trouble! The browser won't print the embedded worksheet PDF directly using the normal 'Print' command in the file menu, so you need to click the big 'Print' button to send just the worksheet and not the surrounding page to the printer.\n\nMath Worksheets: Multiplication: Multiplication: Spiral Multiplication (All Facts) Math Fact Worksheet (Fourth Worksheet)",
null,
"",
null,
"Spiral Multiplication (All Facts) Math Fact Worksheet (Fourth Worksheet)\n\nPropertyValue\nDescriptionSpiral Multiplication (All Facts) Math Fact Worksheet: Tired of the same old math fact worksheets with rows and rows of problems? These multiplication worksheets present the facts in a spiral layout that provides fun a twist on memorizing the times tables. They use the same fact layouts as the spaceship math sheets above, so try the first two sets of worksheets if you are looking for all of the multiplication facts or for practice without the easier problems, or try the others in series for an incremental approach to learning the facts. (Fourth Worksheet)\nResource TypeWorksheet"
]
| [
null,
"https://www.dadsworksheets.com/img/printbutton.png",
null,
"https://www.dadsworksheets.com/img/pin-it-small.svg",
null,
"https://www.dadsworksheets.com/worksheets/multiplication/spiral-multiplication-all-facts-v4-medium.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.80786043,"math_prob":0.46338266,"size":901,"snap":"2022-05-2022-21","text_gpt3_token_len":195,"char_repetition_ratio":0.1962096,"word_repetition_ratio":0.08955224,"special_character_ratio":0.2008879,"punctuation_ratio":0.08552632,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.974547,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-27T23:42:18Z\",\"WARC-Record-ID\":\"<urn:uuid:813200ac-3379-4427-bf84-4ab09ab66b54>\",\"Content-Length\":\"72806\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:77ddc8e8-92f5-469b-8a33-96e487b002b3>\",\"WARC-Concurrent-To\":\"<urn:uuid:489a3dc8-224d-43d3-bd07-253aa7d6b916>\",\"WARC-IP-Address\":\"104.26.0.251\",\"WARC-Target-URI\":\"https://www.dadsworksheets.com/worksheets/multiplication/spiral-multiplication-all-facts-v4.html\",\"WARC-Payload-Digest\":\"sha1:2UWP4DXXMXN3F2VH4RVRHQXENU7C75BU\",\"WARC-Block-Digest\":\"sha1:3KCR6QIJ33JUJCTS4YIKZ45AHI4RLE3U\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320305317.17_warc_CC-MAIN-20220127223432-20220128013432-00699.warc.gz\"}"} |
https://ixtrieve.fh-koeln.de/birds/litie/document/32784 | [
"# Document (#32784)\n\nAuthor\nKumpe, D.\nTitle\nMethoden zur automatischen Indexierung von Dokumenten\nImprint\nBerlin : Technische Universität Berlin / Institut für Softwaretechnik und Theoretische Informatik, Computergestützte Informationssysteme\nYear\n2006\nPages\nVII, 147 S\nAbstract\nDiese Diplomarbeit handelt von der Indexierung von unstrukturierten und natürlichsprachigen Dokumenten. Die zunehmende Informationsflut und die Zahl an veröffentlichten wissenschaftlichen Berichten und Büchern machen eine maschinelle inhaltliche Erschließung notwendig. Um die Anforderungen hierfür besser zu verstehen, werden Probleme der natürlichsprachigen schriftlichen Kommunikation untersucht. Die manuellen Techniken der Indexierung und die Dokumentationssprachen werden vorgestellt. Die Indexierung wird thematisch in den Bereich der inhaltlichen Erschließung und des Information Retrieval eingeordnet. Weiterhin werden Vor- und Nachteile von ausgesuchten Algorithmen untersucht und Softwareprodukte im Bereich des Information Retrieval auf ihre Arbeitsweise hin evaluiert. Anhand von Beispiel-Dokumenten werden die Ergebnisse einzelner Verfahren vorgestellt. Mithilfe des Projekts European Migration Network werden Probleme und grundlegende Anforderungen an die Durchführung einer inhaltlichen Erschließung identifiziert und Lösungsmöglichkeiten vorgeschlagen.\nContent\nDiplomarbeit\nTheme\nAutomatisches Indexieren\n\n## Similar documents (content)\n\n1. 
El Jerroudi, F.: Inhaltliche Erschließung in Dokumenten-Management-Systemen, dargestellt am Beispiel der KRAFTWERKSSCHULE e.V (2007) 0.26\n```0.26324078 = sum of:\n0.26324078 = product of:\n1.0968367 = sum of:\n0.08867782 = weight(abstract_txt:nachteile in 2528) [ClassicSimilarity], result of:\n0.08867782 = score(doc=2528,freq=1.0), product of:\n0.14659347 = queryWeight, product of:\n1.0049927 = boost\n7.7430196 = idf(docFreq=50, maxDocs=43254)\n0.018838285 = queryNorm\n0.6049234 = fieldWeight in 2528, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.7430196 = idf(docFreq=50, maxDocs=43254)\n0.078125 = fieldNorm(doc=2528)\n0.23364899 = weight(abstract_txt:diplomarbeit in 2528) [ClassicSimilarity], result of:\n0.23364899 = score(doc=2528,freq=5.0), product of:\n0.16353996 = queryWeight, product of:\n1.061494 = boost\n8.178337 = idf(docFreq=32, maxDocs=43254)\n0.018838285 = queryNorm\n1.4286966 = fieldWeight in 2528, product of:\n2.236068 = tf(freq=5.0), with freq of:\n5.0 = termFreq=5.0\n8.178337 = idf(docFreq=32, maxDocs=43254)\n0.078125 = fieldNorm(doc=2528)\n0.16545519 = weight(abstract_txt:inhaltlichen in 2528) [ClassicSimilarity], result of:\n0.16545519 = score(doc=2528,freq=2.0), product of:\n0.22217314 = queryWeight, product of:\n1.7497112 = boost\n6.740371 = idf(docFreq=138, maxDocs=43254)\n0.018838285 = queryNorm\n0.7447128 = fieldWeight in 2528, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.740371 = idf(docFreq=138, maxDocs=43254)\n0.078125 = fieldNorm(doc=2528)\n0.09317394 = weight(abstract_txt:werden in 2528) [ClassicSimilarity], result of:\n0.09317394 = score(doc=2528,freq=5.0), product of:\n0.15150754 = queryWeight, product of:\n2.2845871 = boost\n3.5203447 = idf(docFreq=3478, maxDocs=43254)\n0.018838285 = queryNorm\n0.6149789 = fieldWeight in 2528, product of:\n2.236068 = tf(freq=5.0), with freq of:\n5.0 = termFreq=5.0\n3.5203447 = idf(docFreq=3478, maxDocs=43254)\n0.078125 = 
fieldNorm(doc=2528)\n0.2968536 = weight(abstract_txt:erschließung in 2528) [ClassicSimilarity], result of:\n0.2968536 = score(doc=2528,freq=6.0), product of:\n0.26036912 = queryWeight, product of:\n2.3198557 = boost\n5.957817 = idf(docFreq=303, maxDocs=43254)\n0.018838285 = queryNorm\n1.140126 = fieldWeight in 2528, product of:\n2.4494898 = tf(freq=6.0), with freq of:\n6.0 = termFreq=6.0\n5.957817 = idf(docFreq=303, maxDocs=43254)\n0.078125 = fieldNorm(doc=2528)\n0.2190272 = weight(abstract_txt:dokumenten in 2528) [ClassicSimilarity], result of:\n0.2190272 = score(doc=2528,freq=2.0), product of:\n0.30661994 = queryWeight, product of:\n2.5174823 = boost\n6.4653587 = idf(docFreq=182, maxDocs=43254)\n0.018838285 = queryNorm\n0.714328 = fieldWeight in 2528, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.4653587 = idf(docFreq=182, maxDocs=43254)\n0.078125 = fieldNorm(doc=2528)\n0.24 = coord(6/25)\n```\n2. Halip, I.: Automatische Extrahierung von Schlagworten aus unstrukturierten Texten (2005) 0.22\n```0.21551731 = sum of:\n0.21551731 = product of:\n0.7697047 = sum of:\n0.062074475 = weight(abstract_txt:nachteile in 1987) [ClassicSimilarity], result of:\n0.062074475 = score(doc=1987,freq=1.0), product of:\n0.14659347 = queryWeight, product of:\n1.0049927 = boost\n7.7430196 = idf(docFreq=50, maxDocs=43254)\n0.018838285 = queryNorm\n0.4234464 = fieldWeight in 1987, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.7430196 = idf(docFreq=50, maxDocs=43254)\n0.0546875 = fieldNorm(doc=1987)\n0.082025096 = weight(abstract_txt:eingeordnet in 1987) [ClassicSimilarity], result of:\n0.082025096 = score(doc=1987,freq=1.0), product of:\n0.176524 = queryWeight, product of:\n1.1028272 = boost\n8.496791 = idf(docFreq=23, maxDocs=43254)\n0.018838285 = queryNorm\n0.46466824 = fieldWeight in 1987, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.496791 = idf(docFreq=23, maxDocs=43254)\n0.0546875 = 
fieldNorm(doc=1987)\n0.08897854 = weight(abstract_txt:unstrukturierten in 1987) [ClassicSimilarity], result of:\n0.08897854 = score(doc=1987,freq=1.0), product of:\n0.18636432 = queryWeight, product of:\n1.1331489 = boost\n8.730406 = idf(docFreq=18, maxDocs=43254)\n0.018838285 = queryNorm\n0.47744405 = fieldWeight in 1987, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.730406 = idf(docFreq=18, maxDocs=43254)\n0.0546875 = fieldNorm(doc=1987)\n0.044268433 = weight(abstract_txt:bereich in 1987) [ClassicSimilarity], result of:\n0.044268433 = score(doc=1987,freq=1.0), product of:\n0.14742756 = queryWeight, product of:\n1.425312 = boost\n5.490696 = idf(docFreq=484, maxDocs=43254)\n0.018838285 = queryNorm\n0.30027243 = fieldWeight in 1987, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.490696 = idf(docFreq=484, maxDocs=43254)\n0.0546875 = fieldNorm(doc=1987)\n0.071446866 = weight(abstract_txt:werden in 1987) [ClassicSimilarity], result of:\n0.071446866 = score(doc=1987,freq=6.0), product of:\n0.15150754 = queryWeight, product of:\n2.2845871 = boost\n3.5203447 = idf(docFreq=3478, maxDocs=43254)\n0.018838285 = queryNorm\n0.471573 = fieldWeight in 1987, product of:\n2.4494898 = tf(freq=6.0), with freq of:\n6.0 = termFreq=6.0\n3.5203447 = idf(docFreq=3478, maxDocs=43254)\n0.0546875 = fieldNorm(doc=1987)\n0.1877767 = weight(abstract_txt:dokumenten in 1987) [ClassicSimilarity], result of:\n0.1877767 = score(doc=1987,freq=3.0), product of:\n0.30661994 = queryWeight, product of:\n2.5174823 = boost\n6.4653587 = idf(docFreq=182, maxDocs=43254)\n0.018838285 = queryNorm\n0.61240864 = fieldWeight in 1987, product of:\n1.7320508 = tf(freq=3.0), with freq of:\n3.0 = termFreq=3.0\n6.4653587 = idf(docFreq=182, maxDocs=43254)\n0.0546875 = fieldNorm(doc=1987)\n0.23313464 = weight(abstract_txt:indexierung in 1987) [ClassicSimilarity], result of:\n0.23313464 = score(doc=1987,freq=2.0), product of:\n0.44625914 = queryWeight, product of:\n3.5069466 
= boost\n6.754864 = idf(docFreq=136, maxDocs=43254)\n0.018838285 = queryNorm\n0.52241987 = fieldWeight in 1987, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.754864 = idf(docFreq=136, maxDocs=43254)\n0.0546875 = fieldNorm(doc=1987)\n0.28 = coord(7/25)\n```\n3. Simon, D.: Anreicherung bibliothekarischer Titeldaten durch Tagging : Möglichkeiten und Probleme (2007) 0.21\n```0.21482778 = sum of:\n0.21482778 = product of:\n0.89511573 = sum of:\n0.07606022 = weight(abstract_txt:vorgestellt in 2531) [ClassicSimilarity], result of:\n0.07606022 = score(doc=2531,freq=1.0), product of:\n0.14764956 = queryWeight, product of:\n1.4263847 = boost\n5.494828 = idf(docFreq=482, maxDocs=43254)\n0.018838285 = queryNorm\n0.5151402 = fieldWeight in 2531, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.494828 = idf(docFreq=482, maxDocs=43254)\n0.09375 = fieldNorm(doc=2531)\n0.14039168 = weight(abstract_txt:untersucht in 2531) [ClassicSimilarity], result of:\n0.14039168 = score(doc=2531,freq=2.0), product of:\n0.17633751 = queryWeight, product of:\n1.5588092 = boost\n6.004964 = idf(docFreq=289, maxDocs=43254)\n0.018838285 = queryNorm\n0.7961532 = fieldWeight in 2531, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.004964 = idf(docFreq=289, maxDocs=43254)\n0.09375 = fieldNorm(doc=2531)\n0.14039338 = weight(abstract_txt:inhaltlichen in 2531) [ClassicSimilarity], result of:\n0.14039338 = score(doc=2531,freq=1.0), product of:\n0.22217314 = queryWeight, product of:\n1.7497112 = boost\n6.740371 = idf(docFreq=138, maxDocs=43254)\n0.018838285 = queryNorm\n0.6319098 = fieldWeight in 2531, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.740371 = idf(docFreq=138, maxDocs=43254)\n0.09375 = fieldNorm(doc=2531)\n0.050002385 = weight(abstract_txt:werden in 2531) [ClassicSimilarity], result of:\n0.050002385 = score(doc=2531,freq=1.0), product of:\n0.15150754 = queryWeight, product of:\n2.2845871 = 
boost\n3.5203447 = idf(docFreq=3478, maxDocs=43254)\n0.018838285 = queryNorm\n0.33003232 = fieldWeight in 2531, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.5203447 = idf(docFreq=3478, maxDocs=43254)\n0.09375 = fieldNorm(doc=2531)\n0.20566621 = weight(abstract_txt:erschließung in 2531) [ClassicSimilarity], result of:\n0.20566621 = score(doc=2531,freq=2.0), product of:\n0.26036912 = queryWeight, product of:\n2.3198557 = boost\n5.957817 = idf(docFreq=303, maxDocs=43254)\n0.018838285 = queryNorm\n0.78990245 = fieldWeight in 2531, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n5.957817 = idf(docFreq=303, maxDocs=43254)\n0.09375 = fieldNorm(doc=2531)\n0.28260186 = weight(abstract_txt:indexierung in 2531) [ClassicSimilarity], result of:\n0.28260186 = score(doc=2531,freq=1.0), product of:\n0.44625914 = queryWeight, product of:\n3.5069466 = boost\n6.754864 = idf(docFreq=136, maxDocs=43254)\n0.018838285 = queryNorm\n0.63326854 = fieldWeight in 2531, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.754864 = idf(docFreq=136, maxDocs=43254)\n0.09375 = fieldNorm(doc=2531)\n0.24 = coord(6/25)\n```\n4. 
Schwarzendorfer, H.: Inhaltliche Erschließung von Altbeständen in allgemeinen Bibliothekskatalogen : Bestandsaufnahme und Entwicklungsmöglichkeiten (2009) 0.17\n```0.1734023 = sum of:\n0.1734023 = product of:\n0.8670114 = sum of:\n0.101413615 = weight(abstract_txt:vorgestellt in 1050) [ClassicSimilarity], result of:\n0.101413615 = score(doc=1050,freq=1.0), product of:\n0.14764956 = queryWeight, product of:\n1.4263847 = boost\n5.494828 = idf(docFreq=482, maxDocs=43254)\n0.018838285 = queryNorm\n0.6868535 = fieldWeight in 1050, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.494828 = idf(docFreq=482, maxDocs=43254)\n0.125 = fieldNorm(doc=1050)\n0.13236254 = weight(abstract_txt:untersucht in 1050) [ClassicSimilarity], result of:\n0.13236254 = score(doc=1050,freq=1.0), product of:\n0.17633751 = queryWeight, product of:\n1.5588092 = boost\n6.004964 = idf(docFreq=289, maxDocs=43254)\n0.018838285 = queryNorm\n0.7506205 = fieldWeight in 1050, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.004964 = idf(docFreq=289, maxDocs=43254)\n0.125 = fieldNorm(doc=1050)\n0.26472828 = weight(abstract_txt:inhaltlichen in 1050) [ClassicSimilarity], result of:\n0.26472828 = score(doc=1050,freq=2.0), product of:\n0.22217314 = queryWeight, product of:\n1.7497112 = boost\n6.740371 = idf(docFreq=138, maxDocs=43254)\n0.018838285 = queryNorm\n1.1915405 = fieldWeight in 1050, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.740371 = idf(docFreq=138, maxDocs=43254)\n0.125 = fieldNorm(doc=1050)\n0.0942854 = weight(abstract_txt:werden in 1050) [ClassicSimilarity], result of:\n0.0942854 = score(doc=1050,freq=2.0), product of:\n0.15150754 = queryWeight, product of:\n2.2845871 = boost\n3.5203447 = idf(docFreq=3478, maxDocs=43254)\n0.018838285 = queryNorm\n0.6223149 = fieldWeight in 1050, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n3.5203447 = idf(docFreq=3478, maxDocs=43254)\n0.125 = 
fieldNorm(doc=1050)\n0.2742216 = weight(abstract_txt:erschließung in 1050) [ClassicSimilarity], result of:\n0.2742216 = score(doc=1050,freq=2.0), product of:\n0.26036912 = queryWeight, product of:\n2.3198557 = boost\n5.957817 = idf(docFreq=303, maxDocs=43254)\n0.018838285 = queryNorm\n1.0532032 = fieldWeight in 1050, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n5.957817 = idf(docFreq=303, maxDocs=43254)\n0.125 = fieldNorm(doc=1050)\n0.2 = coord(5/25)\n```\n5. Probst, M.; Mittelbach, J.: Maschinelle Indexierung in der Sacherschließung wissenschaftlicher Bibliotheken (2006) 0.17\n```0.1727918 = sum of:\n0.1727918 = product of:\n0.86395895 = sum of:\n0.12414895 = weight(abstract_txt:nachteile in 3756) [ClassicSimilarity], result of:\n0.12414895 = score(doc=3756,freq=1.0), product of:\n0.14659347 = queryWeight, product of:\n1.0049927 = boost\n7.7430196 = idf(docFreq=50, maxDocs=43254)\n0.018838285 = queryNorm\n0.8468928 = fieldWeight in 3756, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.7430196 = idf(docFreq=50, maxDocs=43254)\n0.109375 = fieldNorm(doc=3756)\n0.13494582 = weight(abstract_txt:maschinelle in 3756) [ClassicSimilarity], result of:\n0.13494582 = score(doc=3756,freq=1.0), product of:\n0.15497401 = queryWeight, product of:\n1.0333205 = boost\n7.9612727 = idf(docFreq=40, maxDocs=43254)\n0.018838285 = queryNorm\n0.8707642 = fieldWeight in 3756, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.9612727 = idf(docFreq=40, maxDocs=43254)\n0.109375 = fieldNorm(doc=3756)\n0.05833612 = weight(abstract_txt:werden in 3756) [ClassicSimilarity], result of:\n0.05833612 = score(doc=3756,freq=1.0), product of:\n0.15150754 = queryWeight, product of:\n2.2845871 = boost\n3.5203447 = idf(docFreq=3478, maxDocs=43254)\n0.018838285 = queryNorm\n0.38503772 = fieldWeight in 3756, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n3.5203447 = idf(docFreq=3478, maxDocs=43254)\n0.109375 = 
fieldNorm(doc=3756)\n0.21682587 = weight(abstract_txt:dokumenten in 3756) [ClassicSimilarity], result of:\n0.21682587 = score(doc=3756,freq=1.0), product of:\n0.30661994 = queryWeight, product of:\n2.5174823 = boost\n6.4653587 = idf(docFreq=182, maxDocs=43254)\n0.018838285 = queryNorm\n0.7071486 = fieldWeight in 3756, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.4653587 = idf(docFreq=182, maxDocs=43254)\n0.109375 = fieldNorm(doc=3756)\n0.32970217 = weight(abstract_txt:indexierung in 3756) [ClassicSimilarity], result of:\n0.32970217 = score(doc=3756,freq=1.0), product of:\n0.44625914 = queryWeight, product of:\n3.5069466 = boost\n6.754864 = idf(docFreq=136, maxDocs=43254)\n0.018838285 = queryNorm\n0.7388133 = fieldWeight in 3756, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.754864 = idf(docFreq=136, maxDocs=43254)\n0.109375 = fieldNorm(doc=3756)\n0.2 = coord(5/25)\n```"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.60603124,"math_prob":0.99657774,"size":13637,"snap":"2021-43-2021-49","text_gpt3_token_len":5276,"char_repetition_ratio":0.23443116,"word_repetition_ratio":0.42848337,"special_character_ratio":0.5284887,"punctuation_ratio":0.2827564,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99972767,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-29T07:05:41Z\",\"WARC-Record-ID\":\"<urn:uuid:e4d52660-ef55-4c27-a417-8121e8d79dd1>\",\"Content-Length\":\"24465\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d8582e0d-2d4a-40c4-b27f-6313b8fb7a5d>\",\"WARC-Concurrent-To\":\"<urn:uuid:9669505d-4498-42e6-87a4-12247ae1ae1b>\",\"WARC-IP-Address\":\"139.6.160.6\",\"WARC-Target-URI\":\"https://ixtrieve.fh-koeln.de/birds/litie/document/32784\",\"WARC-Payload-Digest\":\"sha1:QZOIEJSXUUZVK3ZWQR26ZNSWMB6P6C54\",\"WARC-Block-Digest\":\"sha1:AIEC5IQFBKI5N7DG3RQWKLLZ76Z3P3XH\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358688.35_warc_CC-MAIN-20211129044311-20211129074311-00239.warc.gz\"}"} |
https://image.hanspub.org/Html/1-2620469_22107.htm | [
" 时滞脉冲新古典增长模型的正周期解 Positive Periodic Solutions for a Class of Delayed Neoclassical Growth Model\n\nVol.06 No.06(2017), Article ID:22107,7 pages\n10.12677/AAM.2017.66087\n\nPositive Periodic Solutions for a Class of Delayed Neoclassical Growth Model\n\nChunyu Yang, Ruojun Zhang, Jingjing Zhang\n\nSchool of Mathematical Sciences, Ocean University of China, Qingdao Shandong",
null,
"Received: Aug. 25th, 2017; accepted: Sep. 11th, 2017; published: Sep. 19th, 2017",
null,
"ABSTRACT\n\nIn this paper, the existence of positive periodic solutions for a class of delayed neoclassical growth model with impulse is considered. By using the cone fixed point theorem, some sufficient conditions of the existence of positive periodic solutions for the addressed model are obtained. Moreover, an example is given to show the effectiveness of our results.\n\nKeywords:Neoclassical Growth Model, Positive Periodic Solutions, Delay, Impulse",
null,
"",
null,
"",
null,
"",
null,
"1. 引言\n\n20世纪50年代,索洛等人 提出形如 $Y\\left(t\\right)=A\\left(t\\right)F\\left(K\\left(t\\right),L\\left(t\\right)\\right)$ 的新古典增长模型,其中 $Y\\left(t\\right)$",
null,
"时期的总产出, $K\\left(t\\right)$$t$ 时期投入的资本量, $L\\left(t\\right)$$t$ 时期投入的劳动量, $A\\left(t\\right)$ 代表 $t$ 时期的技术水平,由于具有预见性与实用性,该模型被众多学者所关注。Day ,Puu ,Bischi 等人进一步研究了新古典增长模型等非线性经济动力系统,并对之有极大的创新和发展。依据经济学原理,为描述长时间经济行为,新古典增长模型有两个基本假设:一是劳动力和资本充分;二是输出市场的即时调整。然而,由于生产过程中时滞的不可避免,所以这种理想的假设在现实中是不合理的,因此有必要考虑时滞系统。\n\n2011年,Matsumoto和Szidarovszky 首次介绍了如下的时滞新古典增长模型\n\n${x}^{\\prime }\\left(t\\right)=-\\alpha x\\left(t\\right)+sF\\left(x\\left(t-\\tau \\right)\\right),$ (1.1)",
null,
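As an illustration of the delayed dynamics in models of this type, the equation can be integrated with a fixed-step Euler scheme. This is a sketch only: constant coefficients, no impulses, and all parameter values are assumptions chosen for the demonstration, not taken from the paper:

```python
import math

# Fixed-step Euler integration of a delayed neoclassical growth equation,
#   x'(t) = -alpha*x(t) + beta * x(t - tau)^gamma * exp(-delta * x(t - tau)),
# with CONSTANT coefficients and NO impulses -- an illustrative sketch only.
alpha, gamma, delta, tau = 1.0, 1.0, 1.0, 0.5
beta = math.e ** 2        # chosen so the positive equilibrium is x* = ln(beta/alpha) = 2

dt = 0.01
lag = int(round(tau / dt))            # number of steps in one delay interval
x = [1.0] * (lag + 1)                 # constant history x(t) = 1 on [-tau, 0]

for _ in range(int(60 / dt)):         # integrate on [0, 60]
    x_tau = x[-lag - 1]               # delayed state x(t - tau)
    x.append(x[-1] + dt * (-alpha * x[-1]
                           + beta * x_tau ** gamma * math.exp(-delta * x_tau)))
```

For these illustrative constants the positive equilibrium is linearly stable, so the trajectory stays positive and settles at x* = 2.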
"(1.2)\n\n2. 预备知识\n\n$\\left\\{\\begin{array}{l}{x}^{\\prime }\\left(t\\right)=-\\alpha \\left(t\\right)x\\left(t\\right)+\\beta \\left(t\\right){x}^{\\gamma }\\left(t-\\tau \\left(t\\right)\\right){\\text{e}}^{-\\delta \\left(t\\right)x\\left(t-\\tau \\left(t\\right)\\right)}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}t\\ne {t}_{k},t\\ge {t}_{0}>0\\\\ \\Delta x\\left({t}_{k}\\right)=x\\left({t}_{k}^{+}\\right)-x\\left({t}_{k}^{-}\\right)={p}_{k}x\\left({t}_{k}\\right)\\text{\\hspace{0.17em}}\\text{ }\\text{ }\\text{ }\\text{ }\\text{ }\\text{ }\\text{\\hspace{0.17em}}\\text{ }k=1,2\\cdots \\end{array}$ (2.1)\n\n$x\\left(t\\right)=\\varphi \\left(t\\right),\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}t\\in \\left[-\\tau ,0\\right]。$ (2.2)\n\nH1) $\\alpha \\left(t\\right),\\beta \\left(t\\right),\\delta \\left(t\\right),\\tau \\left(t\\right)$ 是以 $\\omega$ 为周期的周期函数, $\\omega >0$\n\nH2) ${t}_{0}<{t}_{1}<{t}_{2}<\\cdots$${t}_{i},\\text{\\hspace{0.17em}}i=1,2,\\cdot \\cdot \\cdot$ 为给定的脉冲时刻, $\\underset{k\\to \\infty }{\\mathrm{lim}}{t}_{k}=+\\infty$\n\nH3) $\\left\\{{p}_{k}\\right\\}$ 为实数列, ${p}_{k}\\ne -1,k=1,2,\\cdots$\n\nH4) $\\underset{{t}_{0}<{t}_{k} 为以 $\\omega$ 为周期的周期函数(这里作一个标准的假设,若因子个数为0,则乘积为1);\n\nH5) ${\\int }_{0}^{\\omega }\\alpha \\left(t\\right)\\text{d}t>0$\n\ni) $x\\left(t\\right)$$\\left({t}_{0},{t}_{1}\\right]$$\\left({t}_{k},{t}_{k+1}\\right],k=1,2,\\cdots$ 上绝对连续;\n\nii) $\\forall {t}_{k},k=1,2,\\cdots ,x\\left({t}_{k}^{+}\\right)$$x\\left({t}_{k}^{-}\\right)$ 存在且 $x\\left({t}_{k}^{-}\\right)=x\\left({t}_{k}\\right)$\n\niii) $x\\left(t\\right)$$\\left[{t}_{0},+\\infty \\right)\\{t}_{k}$ 上几乎处处满足方程(2.1),在 $t={t}_{k},k=1,2,\\cdots$ 满足脉冲条件;\n\niv) $x\\left(t\\right)=\\varphi \\left(t\\right),t\\in \\left[-\\tau ,0\\right]$\n\n${y}^{\\prime }\\left(t\\right)=-\\alpha \\left(t\\right)x\\left(t\\right)+\\stackrel{^}{\\beta }\\left(t\\right){y}^{\\gamma }\\left(t-\\tau \\left(t\\right)\\right){\\text{e}}^{-\\stackrel{^}{\\delta 
}\\left(t\\right)y\\left(t-\\tau \\left(t\\right)\\right)},$ (2.3)\n\n$y\\left(t\\right)=\\varphi \\left(t\\right),\\text{\\hspace{0.17em}}t\\in \\left[-\\tau ,0\\right]。$ (2.4)\n\n$y\\left(t\\right)$ 是模型(2.3)在初值条件(2.4)下的解,是指 $y\\left(t\\right)$ 是定义在 $\\left[{t}_{0}-\\tau ,+\\infty \\right)$ 上,在 $\\left[{t}_{0},+\\infty \\right)$ 上是满足(2.3)的绝对连续函数,且在 $\\left[{t}_{0}-\\tau ,{t}_{0}\\right]$ 上满足初值条件(2.4)。\n\ni) $ax+by\\in P,\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\forall x,y\\in P,\\text{\\hspace{0.17em}}a,b>0。$\n\nii) $x,-x\\in P$ 蕴含 $x=0$\n\n${\\Omega }_{1},{\\Omega }_{2}$ 为Banach空间 $X$ 中的有界开子集, $\\theta \\in {\\Omega }_{1},{\\stackrel{¯}{\\Omega }}_{1}\\subset {\\Omega }_{2}$$P$$X$ 中的一个锥, $T:P\\cap \\left({\\stackrel{¯}{\\Omega }}_{2}\\{\\Omega }_{1}\\right)\\to P$ 为全连续算子,若 $T$ 满足条件:\n\ni) $‖Tx‖\\le ‖x‖,\\forall x\\in P\\cap \\partial {\\Omega }_{1};‖Tx‖\\ge ‖x‖,\\forall x\\in P\\cap \\partial {\\Omega }_{2}$ (即范数锥拉伸);\n\nii) $‖Tx‖\\le ‖x‖,\\forall x\\in P\\cap \\partial {\\Omega }_{2};‖Tx‖\\ge ‖x‖,\\forall x\\in P\\cap \\partial {\\Omega }_{1}$ (即范数锥压缩)。\n\n$T$$P\\cap \\left({\\stackrel{¯}{\\Omega }}_{2}\\{\\Omega }_{1}\\right)$ 中必存在不动点。\n\n3. 
Main Results\n\ni) If $y\left(t\right)$ is a solution of (2.3) with (2.4), then $x\left(t\right)=\underset{{t}_{0}<{t}_{k}<t}{\prod }\left(1+{p}_{k}\right)y\left(t\right)$ is a solution of (2.1) with (2.2) on $\left[{t}_{0}-\tau ,+\infty \right)$;\n\nii) if $x\left(t\right)$ is a solution of (2.1) with (2.2), then $y\left(t\right)=\underset{{t}_{0}<{t}_{k}<t}{\prod }{\left(1+{p}_{k}\right)}^{-1}x\left(t\right)$ is a solution of (2.3) with (2.4) on $\left[{t}_{0}-\tau ,+\infty \right)$.\n\n$\begin{array}{l}{x}^{\prime }\left(t\right)={\left[\underset{{t}_{0}<{t}_{k}0。\end{array}$\n\n$x\left({t}_{k}^{+}\right)=\underset{t\to {t}_{k}^{+}}{\mathrm{lim}}\underset{{t}_{0}<{t}_{j}\n\nii) Suppose $x\left(t\right)$ is a solution of (2.1) with (2.2); then $x\left(t\right)$ is absolutely continuous on $\left({t}_{0},{t}_{1}\right]$ and on $\left({t}_{k},{t}_{k+1}\right],k=1,2,\cdots$. Hence $y\left(t\right)=\underset{{t}_{0}<{t}_{k}<t}{\prod }{\left(1+{p}_{k}\right)}^{-1}x\left(t\right)$ is also absolutely continuous on $\left({t}_{0},{t}_{1}\right]$ and on $\left({t}_{k},{t}_{k+1}\right],k=1,2,\cdots$.\n\nFor every $t={t}_{k},k=1,2,\cdots$, we have\n\n$y\left({t}_{k}^{+}\right)=\underset{t\to {t}_{k}^{+}}{\mathrm{lim}}\underset{{t}_{0}<{t}_{j}\n\n$y\left({t}_{k}^{-}\right)=\underset{t\to {t}_{k}^{-}}{\mathrm{lim}}\underset{{t}_{0}<{t}_{j}, so $y\left(t\right)$ is continuous, and it is easy to see that it is absolutely continuous on $\left[{t}_{0},+\infty \right)$; arguing as in (i), $y\left(t\right)$ is a solution of (2.3) on $\left[{t}_{0}-\tau ,+\infty \right)$ satisfying the initial condition (2.4).\n\ni) If $y\left(t\right)$ is an $\omega$-periodic solution of (2.3) with (2.4), then $x\left(t\right)=\underset{{t}_{0}<{t}_{k}<t}{\prod }\left(1+{p}_{k}\right)y\left(t\right)$ is a positive $\omega$-periodic solution of (2.1) with (2.2) on $\left[{t}_{0}-\tau ,+\infty \right)$;\n\nii) if $x\left(t\right)$ is an $\omega$-periodic solution of (2.1) with (2.2), then $y\left(t\right)=\underset{{t}_{0}<{t}_{k}<t}{\prod }{\left(1+{p}_{k}\right)}^{-1}x\left(t\right)$ is a positive $\omega$-periodic solution of (2.3) with (2.4) on $\left[{t}_{0}-\tau ,+\infty \right)$.\n\nLet $P=\left\{x\left(t\right)\in X|x\left(t\right)\ge 0,x\left(t\right)\ge \sigma ‖x‖,t\in \text{R}\right\}$, where $\sigma \in \left(0,1\right)$ is the positive constant defined below; then $P$ is a cone in $X$. Define the operator\n\n$\left(\Gamma x\right)\left(t\right)={\int }_{t}^{t+\omega }G\left(t,s\right)\stackrel{^}{\beta }\left(s\right){x}^{\gamma }\left(s-\tau \left(s\right)\right){\text{e}}^{-\stackrel{^}{\delta }\left(s\right)x\left(s-\tau 
\\left(s\\right)\\right)}\\text{d}s$ (3.1)\n\n$0\n\n$G\\left(t+\\omega ,s+\\omega \\right)=G\\left(t,s\\right),s\\in \\left[t,t+\\omega \\right]$\n\n$\\forall x\\in P,t\\in \\text{R}$ ,有\n\n$\\begin{array}{l}\\left(\\Gamma x\\right)\\left(t+\\omega \\right)={\\int }_{t+\\omega }^{t+2\\omega }G\\left(t+\\omega ,s\\right)\\stackrel{^}{\\beta }\\left(s\\right){x}^{\\gamma }\\left(s-\\tau \\left(s\\right)\\right){\\text{e}}^{-\\stackrel{^}{\\delta }\\left(s\\right)x\\left(s-\\tau \\left(s\\right)\\right)}\\text{d}s\\\\ \\text{ }\\text{ }\\text{ }\\text{ }\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\stackrel{令s=u+\\omega }{=}{\\int }_{t}^{t+\\omega }G\\left(t+\\omega ,u+\\omega \\right)\\stackrel{^}{\\beta }\\left(u+\\omega \\right){x}^{\\gamma }\\left(u+\\omega -\\tau \\right){\\text{e}}^{-\\stackrel{^}{\\delta }\\left(u+\\omega \\right)x\\left(u+\\omega -\\tau \\left(u+\\omega \\right)\\right)}\\text{d}u\\\\ \\text{ }\\text{ }\\text{ }\\text{ }\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}={\\int }_{t}^{t+\\omega }G\\left(t,u\\right)\\stackrel{^}{\\beta }\\left(u\\right){x}^{\\gamma }\\left(u-\\tau \\left(u\\right)\\right){\\text{e}}^{-\\stackrel{^}{\\delta }\\left(u\\right)x\\left(u-\\tau \\left(u\\right)\\right)}\\text{d}u\\\\ \\text{ }\\text{ }\\text{ }\\text{ }\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}=\\left(\\Gamma x\\right)\\left(t\\right),\\end{array}$\n\n$\\left(\\Gamma x\\right)\\left(t\\right)\\ge A{\\int }_{0}^{\\omega }\\stackrel{^}{\\beta }\\left(s\\right){x}^{\\gamma }\\left(s-\\tau \\left(s\\right)\\right){\\text{e}}^{-\\stackrel{^}{\\delta }\\left(s\\right)x\\left(s-\\tau \\left(s\\right)\\right)}\\text{d}s\\ge \\frac{A}{B}‖\\Gamma x‖=\\sigma ‖\\Gamma x‖$\n\n$\\left(\\Gamma x\\right)\\left(t\\right)\\le B{\\int }_{0}^{\\omega }\\stackrel{^}{\\beta }\\left(s\\right){x}^{\\gamma }\\left(s-\\tau \\left(s\\right)\\right){\\text{e}}^{-\\stackrel{^}{\\delta }\\left(s\\right)x\\left(s-\\tau \\left(s\\right)\\right)}\\text{d}s\\le B\\omega 
{\\stackrel{^}{\\beta }}_{M}g\\left(\\frac{\\gamma }{{\\stackrel{^}{\\delta }}_{M}}\\right)\\le B\\omega {\\stackrel{^}{\\beta }}_{M}{\\text{e}}^{-1}{\\left(\\frac{\\gamma }{{\\stackrel{^}{\\delta }}_{M}}\\right)}^{\\gamma }$\n\n$\\begin{array}{c}{\\left(\\Gamma x\\right)}^{\\prime }\\left(t\\right)={\\left[{\\int }_{t}^{t+\\omega }G\\left(t,s\\right)\\stackrel{^}{\\beta }\\left(s\\right){x}^{\\gamma }\\left(s-\\tau \\left(s\\right)\\right){\\text{e}}^{-\\stackrel{^}{\\delta }\\left(s\\right)x\\left(s-\\tau \\left(s\\right)\\right)}\\text{d}s\\right]}^{\\prime }\\\\ ={\\int }_{t}^{t+\\omega }{G}^{\\prime }\\left(t,s\\right)\\stackrel{^}{\\beta }\\left(s\\right){x}^{\\gamma }\\left(s-\\tau \\left(s\\right)\\right){\\text{e}}^{-\\stackrel{^}{\\delta }\\left(s\\right)x\\left(s-\\tau \\left(s\\right)\\right)}\\text{d}s\\\\ \\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}+G\\left(t,t+\\omega \\right)\\stackrel{^}{\\beta }\\left(t+\\omega \\right){x}^{\\gamma }\\left(t+\\omega -\\tau \\left(t+\\omega \\right)\\right){\\text{e}}^{-\\stackrel{^}{\\delta }\\left(t+\\omega \\right)x\\left(t+\\omega -\\tau \\left(t+\\omega \\right)\\right)}\\\\ \\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}-G\\left(t,t\\right)\\stackrel{^}{\\beta }\\left(t\\right){x}^{\\gamma }\\left(t-\\tau \\left(t\\right)\\right){\\text{e}}^{-\\stackrel{^}{\\delta }\\left(t\\right)x\\left(t-\\tau \\left(t\\right)\\right)}\\\\ ={\\int }_{t}^{t+\\omega }{G}^{\\prime }\\left(t,s\\right)\\stackrel{^}{\\beta }\\left(s\\right){x}^{\\gamma }\\left(s-\\tau \\left(s\\right)\\right){\\text{e}}^{-\\stackrel{^}{\\delta }\\left(s\\right)x\\left(s-\\tau \\left(s\\right)\\right)}\\text{d}s\\\\ \\le \\alpha \\left(t\\right)\\left(\\Gamma x\\right)\\left(t\\right)+\\stackrel{^}{\\beta }\\left(t\\right){\\text{e}}^{-\\stackrel{^}{\\delta }\\left(s\\right)x\\left(s-\\tau \\left(s\\right)\\right)}\\\\ \\le {\\alpha }_{M}B\\omega {\\stackrel{^}{\\beta }}_{M}{\\text{e}}^{-\\text{1}}{\\left(\\frac{\\gamma }{{\\delta 
}_{M}}\\right)}^{\\gamma }+{\\stackrel{^}{\\beta }}_{M}{\\text{e}}^{-{\\stackrel{^}{\\delta }}_{M}},\\end{array}$\n\n$\\stackrel{^}{\\beta }\\left(t\\right){x}^{\\gamma }{\\text{e}}^{-\\stackrel{^}{\\delta }\\left(t\\right)x}\\le \\epsilon {r}_{1},\\text{\\hspace{0.17em}}t\\in \\left[0,\\omega \\right],\\text{\\hspace{0.17em}}x\\in \\left[0,{r}_{1}\\right];$ (3.2)\n\n$\\stackrel{^}{\\beta }\\left(t\\right){x}^{\\gamma }{\\text{e}}^{-\\stackrel{^}{\\delta }\\left(t\\right)x}\\le \\epsilon {r}_{2},\\text{\\hspace{0.17em}}t\\in \\left[0,\\omega \\right],\\text{\\hspace{0.17em}}x\\in \\left[{r}_{2},+\\infty \\right)。$ (3.3)\n\n$\\left(\\Gamma x\\right)\\left(t\\right)\\le B{\\int }_{t}^{t+\\omega }\\stackrel{^}{\\beta }\\left(s\\right){x}^{\\gamma }\\left(s-\\tau \\left(s\\right)\\right){\\text{e}}^{-\\stackrel{^}{\\delta }\\left(s\\right)x\\left(s-\\tau \\left(s\\right)\\right)}\\text{d}s\\le B\\omega \\epsilon {r}_{1}<{r}_{1}$\n\n$\\left(\\Gamma x\\right)\\left(t\\right)\\ge A{\\int }_{t}^{t+\\omega }\\stackrel{^}{\\beta }\\left(s\\right){x}^{\\gamma }\\left(s-\\tau \\left(s\\right)\\right){\\text{e}}^{-\\stackrel{^}{\\delta }\\left(s\\right)x\\left(s-\\tau \\left(s\\right)\\right)}\\text{d}s\\ge A\\omega {\\stackrel{^}{\\beta }}_{m}{r}_{0}{}^{\\gamma }{\\text{e}}^{-{\\stackrel{^}{\\delta }}_{M}{r}_{0}}>\\frac{{r}_{0}}{{\\text{e}}^{-{\\stackrel{^}{\\delta }}_{M}{r}_{0}}}{\\text{e}}^{-{\\stackrel{^}{\\delta }}_{M}{r}_{0}}={r}_{0},$\n\n$M=\\underset{t\\in \\left[0,\\omega \\right]}{\\mathrm{max}}\\underset{x\\in \\left[0,{r}_{2}\\right]}{\\mathrm{max}}\\left\\{\\stackrel{^}{\\beta }\\left(t\\right){x}^{\\gamma }{\\text{e}}^{-\\stackrel{^}{\\delta }\\left(t\\right)x}\\right\\}$ ,当 $x\\in P\\cap \\partial {\\Omega }_{3}$ 时,有 $‖x‖={r}_{2}$$x\\left(t\\right)\\ge \\sigma {r}_{2}$ 。则由(3.1)和(3.3),得\n\n$\\begin{array}{c}\\left(\\Gamma x\\right)\\left(t\\right)\\le B{\\int }_{t}^{t+\\omega }\\stackrel{^}{\\beta }\\left(s\\right){x}^{\\gamma }\\left(s-\\tau 
\\left(s\\right)\\right){\\text{e}}^{-\\stackrel{^}{\\delta }\\left(s\\right)x\\left(s-\\tau \\left(s\\right)\\right)}\\text{d}s\\\\ \\le B\\left({\\int }_{{E}_{1}}^{}\\stackrel{^}{\\beta }\\left(s\\right){x}^{\\gamma }\\left(s-\\tau \\left(s\\right)\\right){\\text{e}}^{-\\stackrel{^}{\\delta }\\left(s\\right)x\\left(s-\\tau \\left(s\\right)\\right)}\\text{d}s+{\\int }_{{E}_{2}}^{}\\stackrel{^}{\\beta }\\left(s\\right){x}^{\\gamma }\\left(s-\\tau \\left(s\\right)\\right){\\text{e}}^{-\\stackrel{^}{\\delta }\\left(s\\right)x\\left(s-\\tau \\left(s\\right)\\right)}\\text{d}s\\right)\\\\ \\le BM\\omega +B\\omega \\epsilon {r}_{2}<{r}_{2},\\end{array}$\n\n4. 具体实例\n\n$\\left\\{\\begin{array}{l}{x}^{\\prime }\\left(t\\right)=-\\left(0.\\text{06}+\\mathrm{sin}t\\right)x\\left(t\\right)+\\left(\\text{7}+\\mathrm{cos}t\\right){x}^{\\text{3}}\\left(t-1\\right){\\text{e}}^{-\\left(\\text{2}-0.5\\mathrm{sin}t\\right)x\\left(t-1\\right)},\\text{ }t\\ne {t}_{k},\\\\ \\Delta x\\left(k\\right)=\\left(1+{p}_{k}\\right)x\\left(k\\right),\\text{ }\\text{ }k=1,2,\\cdots \\end{array}$ (4.1)\n\n$f\\left(t+2\\text{π}\\right)=\\underset{0<{t}_{k}<2\\text{π}}{\\prod }{2}^{\\mathrm{sin}k}\\cdot \\underset{2\\text{π}<{t}_{k}<2\\text{π}+t}{\\prod }{2}^{\\mathrm{sin}k}={2}^{\\underset{k=1}{\\overset{2\\text{π}}{\\sum }}\\mathrm{sin}k}\\cdot \\underset{0<{t}_{k}<2\\text{π}}{\\prod }{2}^{\\mathrm{sin}\\left(k+2\\text{π}\\right)}={2}^{\\underset{k=1}{\\overset{\\text{2π}}{\\sum }}\\mathrm{sin}k}\\cdot \\underset{0<{t}_{k}<2\\text{π}}{\\prod }{2}^{\\mathrm{sin}k}=f\\left(t\\right)$\n\n$\\alpha \\left(t\\right)=0.06+\\mathrm{sin}t$$\\beta \\left(t\\right)=7+\\mathrm{cos}t$$\\delta \\left(t\\right)=2-0.5\\mathrm{sin}t$$g\\left(x\\right)={x}^{\\text{3}}{\\text{e}}^{-\\text{3}x}$$\\stackrel{^}{\\beta }\\left(t\\right)=\\underset{0",
null,
"$\\tau \\left(t\\right)=\\text{1}$$\\gamma =\\text{3}$ ,易知, $\\alpha \\left(t\\right),\\beta \\left(t\\right),\\delta \\left(t\\right),\\tau \\left(t\\right)$ 是以 $\\text{2π}$ 为周期的周期函数, ${\\int }_{0}^{2\\text{π}}\\alpha \\left(t\\right)\\text{d}t=0.12\\text{π}>0$$A=\\frac{{\\text{e}}^{-\\text{0}\\text{.12π}}}{\\text{e}{}^{\\text{0}\\text{.12π}}-1}$$B=\\frac{{\\text{e}}^{\\text{0}\\text{.12π}}}{\\text{e}{}^{\\text{0}\\text{.12π}}-1}$${\\stackrel{^}{\\beta }}_{m}=\\text{6}$$\\frac{\\gamma }{{\\stackrel{^}{\\delta }}_{M}}=1$$\\sigma ={\\text{e}}^{-\\text{0}\\text{.24π}}$${r}_{0}\\approx \\text{1}\\text{.5}$ ,则有 $A\\omega {\\stackrel{^}{\\beta }}_{m}{r}_{0}^{2}\\approx \\text{126}\\text{.915}>89.975\\approx {\\text{e}}^{6{r}_{0}}$ ,从而定理3.1的条件满足。因此,由定理3.1,知模型(4.1)存在两个 $\\text{2π}$ -正周期解。\n\nPositive Periodic Solutions for a Class of Delayed Neoclassical Growth Model[J]. 应用数学进展, 2017, 06(06): 727-733. http://dx.doi.org/10.12677/AAM.2017.66087\n\n1. 1. Solow, R. (1956) A Contribution to the Theory of Economic Growth. Quarterly Journal of Economics, 98, 203-213. https://doi.org/10.2307/1884513\n\n2. 2. Swan, T. (1956) Economic Growth and Capital Accumulation. Quarterly Journal of Economics, 70, 65-94. https://doi.org/10.1111/j.1475-4932.1956.tb00434.x\n\n3. 3. Day, R. (1983) Irregular Growth Cycles. American Economic Review, 72, 406-414.\n\n4. 4. Day, R. (1983) The Emergence of Chaos from Classical Economic Growth. Quarterly Journal of Economics, 98, 203-213. https://doi.org/10.2307/1885621\n\n5. 5. Day, R. (1994) Complex Economic Dynamics: An Introduction to Dynamical Systems and Market Mechanism. MIT Press, Cambridge.\n\n6. 6. Puu, T. (2003) Attractions, Bifurcations and Chaos: Nonlinear Phe-nomena in Economics. Springer, Berlin. https://doi.org/10.1007/978-3-540-24699-2\n\n7. 7. Bischi, G.I., Chiarella, C., Kopel, M. and Szidarovszky, F. (2000) Nonlinear Oligopolies: Stability and Bifurcation. Springer, Berlin.\n\n8. 8. Matsumoto, A. and Szidarovszky, F. 
(2011) Delay Differential Neoclassical Growth Model. Journal of Economic Behavior & Organization, 78, 272-289. https://doi.org/10.1016/j.jebo.2011.01.014\n\n9. Matsumoto, A. and Szidarovszky, F. (2013) Asymptotic Behavior of a Delay Differential Neoclassical Growth Model. Sustainability, 5, 440-455. https://doi.org/10.3390/su5020440\n\n10. Lakshmikantham, V., Dimitŭr, B. and Pavel, S. (1989) Theory of Impulsive Differential Equations. World Scientific, Singapore. https://doi.org/10.1142/0906\n\n11. Bainov, D. and Simeonov, P. (1993) Impulsive Differential Equations: Periodic Solutions and Applications. Longman Scientific and Technical, New York.\n\n12. Bainov, D.D. and Simeonov, P.S. (1995) Impulsive Differential Equations: Asymptotic Properties of the Solutions. World Scientific, Singapore. https://doi.org/10.1142/2413\n\n13. Guo, D. (2001) Nonlinear Functional Analysis. Shandong Science and Technology Press, Jinan. (in Chinese)"
]
https://app-wiringdiagram.herokuapp.com/post/tabe-applied-math-problems | [
"# TABE APPLIED MATH PROBLEMS",
null,
"TABE Applied Math Practice Test 2 - Test-Guide\nThe TABE (Test of Adult Basic Educations) Exams are a set of tests covering the areas of reading, math, and language. The TABE is designed to certify that a student has the academic skills normally acquired by completing a typical high school program of study.\nTABE Applied Math Practice Test 1 - Test-Guide\nThe TABE (Test of Adult Basic Educations) Exams are a set of tests covering the areas of reading, math, and language. The TABE is designed to certify that a student has the academic skills normally acquired by completing a typical high school program of study.\nTABE A Math Test Prep Course - Tutoring and Practice Tests\nTABE A Math test prep books and practice questions are not enough, and classes and tutors are too expensive. That’s why we created our TABE A Math test prep course - to offer the perfect balance of affordability and effectiveness that has always been missing for students preparing for the TABE A Math test.4.9/5(87)\nPage 1 of the Mathematics: Applied Study Guide for the TABE\nChoosing The Right OperationIdentifying Unimportant InformationEstimationMeasurementGeometry ProceduresUsing DataStatistics and ProbabilityPatternsFunctionsAlgebraBe familiar with buzzwords and clues because these will make it easier for you to translate words and phrases into numbers and mathematical operations. A mathematical sentence can express an equality or inequality. The words “is” and “equal to” suggest an equality of two statements, and are represented by the equal sign (=). “Greater than” and “less than” suggest an inequality, and are represented by the greater than sign (>) and less than sign (<), respectively. Then there are the phrases “g..See more on uniontestprep\nMastering the Applied Math Section of the TABE: Guided\nClick to view on Bing1:07:14Aug 10, 2018In this video, I show you what it takes to master the applied math section of the TABE test. 
Specifically, I work out problems from a full-length practice exam. If you would like to follow along. Author: Grammar Hero. Views: 17K\nBest TABE Applied Math Practice Test - YouTube\nFeb 24, 2016 - We have provided 5 TABE applied mathematics practice questions for you to study, along with an instructor working you through each problem. TABE Secrets Study Guide: TABE Exam Review for the Test. Author: Mometrix Test Preparation. Views: 38K\n150 TABE Practice Questions (practice and increase your\n150 Test of Adult Basic Education TABE Practice Questions PDF Download Study Guide Complete TABE Study Guide including hundreds of pages of Tutorials, Self-Assessments, 2 sets of practice test questions for reading, computational math, applied math, English grammar, usage, punctuation and\nTABE Math Practice Test (Example Questions)\nThe TABE math test will test the student's problem-solving and reasoning abilities through a variety of questions in both mathematics computations and applied mathematics. Overall, there is a total of twelve mathematics domains covered by the TABE math test:\nOver 100 TABE Math Practice Questions and Math Workbook\nMar 25, 2019 - TABE Math practice questions, easy-to-read tutorials explaining everything in plain language, exam tips and tricks, math shortcuts, and multiple choice strategies! Everything you need, compiled by a dedicated team of experts with everything you need all in one place! Here is what the TABE Math Workbook can do for you. 5/5. Pages: 156. Availability: In stock\nQuestion 1 Mathematics: Applied Practice Test for the TABE\nQuestion 1 Mathematics: Applied Practice Test for the TABE Amy used cups of flour, cups of sugar, 1 cup of rolled oats, 2 cups of sour milk and cup of oil to bake a cake. 
How many cups of dry ingredients did Amy use for the recipe?",
]
https://www.coursehero.com/file/47393620/Math-1325-Exam3eview-1-to-16pdf/ | [
"# Math 1325_Exam3eview 1 to 16.pdf - Exam 3 Review Provide an...\n\n• Test Prep\n• 3\n\nThis preview shows page 1 - 3 out of 3 pages.\n\nExam 3 ReviewProvide an appropriate response.1)Evaluate dy/dt for the function at the point. x3+y3=9; dx/dt = -3, x =1, y 1)2)Assume x =x(t) and y =y(t). Find dxdtif x2(y -6) =12y +3 and dydt=2 when x y =12.2)3)Identify the intervals where f(x) is decreasing.=2=5 and3)4)ƍIdentify the intervals where f (x) >0.4)5)Determine the intervals for which the function f(x) =x3+18x2+2, is decreasing.5)6)Determine the interval(s) where f(x) =x2x -3is decreasing.6)7)Given f(x) =x +16x, x <0, find the values of x corresponding to local maxima and localminima.7)8)Use a graphing utility to approximate where the local extrema of the functionf(x) =x4-3x3-2x2+5x are to two decimal places.8)9)Use the first derivative test to determine the local extrema, if any, for the function:f(x) =3(x -4)2/3+9)6.1\n10)Find the critical values and determine the intervals where f(x) is increasing and f(x) isdecreasing if f(x) =1 +3x+x2.10)Solve the problem.11)The percent of concentration of a certain drug in the bloodstream x hr after the drug isadministered is given by K(x) =3xx2+ 16. How long after the drug has been administered is2\n•",
null,
"•",
null,
"•",
null,
""
]
https://excelly-ai.io/blog/post/2023-09-08-excel-formula-not-calculating/ | [
"Excel Formula Not Calculating: Fixing Formula Calculation Issues - Blog | Excelly-AI\n08 September 2023\n\n# Excel Formula Not Calculating: Fixing Formula Calculation Issues\n\n## This article provides a step-by-step guide on how to resolve Excel formulas that are not calculating",
null,
"Excel is a powerful tool for data analysis and calculations, but sometimes formulas may not calculate as expected. This can be frustrating, but don’t worry—there are solutions! In this guide, we’ll explore common issues and provide step-by-step instructions on how to fix Excel formula calculation problems.\n\n## Common Causes of Excel Formula Issues\n\nBefore we dive into solutions, let’s identify some common reasons why Excel formulas may not calculate:\n\n1. Formula Errors: Typos or syntax errors in your formulas can prevent calculations.\n2. Automatic Calculation Settings: Excel’s calculation settings may be set to manual mode.\n3. Circular References: Formulas referring to themselves can cause circular references.\n4. Data Type Mismatches: Inconsistent data types within a range can lead to errors.\n\n## Troubleshooting and Solutions\n\n### 1. Formula Errors\n\nIf you suspect formula errors, double-check your formulas for correctness. Look for missing parentheses, incorrect cell references, or extra operators. Correcting these errors should enable calculations.\n\nExample:\n\n• Correct: `=SUM(A1, B1)`\n• Incorrect: `=SUM(A1 B1)` (missing comma)\n\n### 2. Automatic Calculation Settings\n\nTo ensure formulas automatically calculate, follow these steps:\n\n1. Go to Formulas > Calculation Options.\n2. Select Automatic.\n\n### 3. Circular References\n\nCircular references occur when a formula refers to its own cell or depends on a chain of cells that ultimately points back to the original cell. Locate and resolve circular references by updating the formulas involved.\n\n### 4. Data Type Mismatches\n\nEnsure data types within a range are consistent. Use functions like `ISNUMBER()`, `ISTEXT()`, or `ISBLANK()` to identify and correct data type mismatches.\n\n## Tips to Prevent Formula Calculation Problems\n\n1. Regularly audit your formulas for errors and inconsistencies.\n2. Use Excel’s built-in error-checking tools to identify issues.\n3. 
Document your formulas to make troubleshooting easier.\n4. Avoid circular references whenever possible."
]
https://www.adobe.com/devnet/actionscript/learning/as3-fundamentals/vectors-and-bytearrays.html | [
"## by Michelle Yaiser",
"14 November 2011\n\n## Flash Builder\n\n**Requirements**\n\n- Prerequisite knowledge: You need to be familiar with ActionScript 3 variables, objects, loops, and arrays.\n- Required products: Flash Builder (Download trial), Adobe Animate CC\n- User level: Intermediate\n\nIn ActionScript 3 fundamentals: Associative arrays, maps, and dictionaries, you were introduced to arrays and maps based on both `Dictionary` and `Object` as ways to store data in your applications. There are two additional types of objects provided by ActionScript 3 that will allow you to do more aggressive type checking and very low-level data manipulation. This article introduces `Vector` and `ByteArray` and builds upon the knowledge introduced in ActionScript 3 fundamentals: Array.\n\nVectors are arrays that contain one and only one predefined type of data. Vectors in ActionScript 3 can be of a fixed or dynamic size and, as you will see, this choice restricts the way you access and manipulate a vector's contents. Vectors have two key advantages: they can be optimized in Flash Player and hence perform better than standard arrays, and they can help catch coding errors by only allowing a single data type to be added to the data structure.\n\nByteArrays are literally an array structure of the bytes composing data in ActionScript 3. They are very useful for rapid access to low-level data such as that required to do graphic or sound manipulation.\n\n### Vectors\n\nVectors are a type of object, and therefore must be instantiated. Because a vector can contain only one type of data, the data type must be declared when you create the vector. The syntax to both declare a data type and instantiate a vector with that type is different than any you have encountered in ActionScript 3 so far and may look strange at first. The first line of code below declares a variable that will reference a vector containing numbers. 
The second line of code declares a reference variable and then instantiates a vector that will contain only `Number` data.\n\n```var myVector:Vector.<Number>; var myVector:Vector.<Number> = new Vector.<Number>(); ```\n\nYou can think about the type of element contained in a vector as being part of its type. So, the vector above is a vector of numbers. A vector of sprites is a completely different type of object and hence the following code will cause an error:\n\n```var myVector:Vector.<Number> = new Vector.<Sprite>(); TypeError: Error #1034: Type Coercion failed: cannot convert __AS3__.vec::Vector.<flash.display::Sprite>@6318e71 to __AS3__.vec.Vector.<Number>. ```\n\nYou may create a Vector of any type, complex or simple, or of any interface in ActionScript 3. All of the following are legal:\n\n```var vector1:Vector.<String> = new Vector.<String>(); var vector2:Vector.<Sprite> = new Vector.<Sprite>(); var vector3:Vector.<IEventDispatcher> = new Vector.<IEventDispatcher>(); ```\n\nLike arrays, all ActionScript 3 vectors have a length property. That length property corresponds to the number of elements stored in the vector at any given time. Executing this code will show that the vector currently has 0 elements.\n\n```var myVector:Vector.<Number> = new Vector.<Number>(); trace(myVector.length); //output: 0 ```\n\nWhen you create a new `Vector`, you have two additional choices to make. You can choose to specify an initial size for the vector and whether the size is fixed. For example, this vector will have a space for each day of the week.\n\n```var days:Vector.<String> = new Vector.<String>( 7 ); trace( days.length ); //output: 7 ```\n\nIn this case, the vector is created with seven places to be defined in the future. A `Vector` of seven items could be visualized as:",
"[Figure: a Vector with seven empty slots, indexes 0 through 6]",
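"Those seven slots exist as soon as the vector is created — each one is pre-filled with the default value for the vector's base type (null for reference types such as String). A quick check of this behavior:\n\n```var days:Vector.<String> = new Vector.<String>( 7 );\ntrace( days.length ); //output: 7\ntrace( days[ 0 ] ); //output: null ```\n\nBecause the slots already exist, reading indexes 0 through 6 is legal immediately; only reading past the length throws an error.",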
"When a vector is created in this way, it can grow or shrink in size, like arrays. However, there are many times where this is unnecessary. Continuing with the day of the week example, it is unlikely that the number of days in a week will change anytime soon. It is a relatively static number.\n\nWhen dealing with a static number of things, you can tell the vector that the size is fixed. This allows the vector to further optimize access, making updating and retrieving data faster. To tell a vector that it is a fixed size, you simply pass `true` to the second constructor argument during instantiation.\n\n`var days:Vector.<String> = new Vector.<String>( 7, true );`\n\nWhen a `Vector` is set to a fixed size, attempting to change its length in any way will cause an error. You are allowed to change the fixed nature of a vector at any time by altering its `fixed` property:\n\n```var days:Vector.<String> = new Vector.<String>( 7, true ); days.fixed = false; //you can change the length days.fixed = true; //you can NOT change the length ```\n\n### Manipulating data in vectors\n\nData can be added to and removed from ActionScript 3 vectors at runtime using the same methods you use with arrays. You can replace existing elements at any time and, so long as the vector is not fixed, it will grow or shrink as needed.\n\nThere are a number of methods available in the `Vector` class to manipulate data in the vector. Generally, they are very similar to methods of the `Array` class. The two most common ones are `push()` and `pop()`.\n\n#### `push()` and `pop()` methods\n\nThe `push()` method adds one or more new elements to the end of a vector. The method returns the new length of the vector after the new item(s) have been added.\n\n```var letters:Vector.<String> = new Vector.<String>(); letters.push( \"C\" ); ```",
"[Figure: the letters vector after pushing \"C\"]",
"`letters.push( \"D\", \"E\", \"F\" ); `",
"[Figure: the letters vector after pushing \"D\", \"E\", and \"F\"]",
"```var len:uint = letters.push( \"G\" ); trace(len); //5 trace(letters.length); //5 ```",
"[Figure: the letters vector after pushing \"G\"]",
"However, remember that `Vector` only accepts one type of data. The following code will cause an error.\n\n```var letters:Vector.<Sprite> = new Vector.<Sprite>(); letters.push(\"C\"); //Error TypeError: Error #1034: Type Coercion failed: cannot convert \"C\" to flash.display.Sprite. ```\n\nAs with an array, items can be removed from the end of a vector with the `pop()` method. The `pop()` method removes one element from the end of a vector and returns it. Look at the following example:\n\n```var letters:Vector.<String> = new Vector.<String>(); letters.push( \"C\" ); ```",
"[Figure: the letters vector containing \"C\"]",
"`letters.push( \"D\" ); `",
"[Figure: the letters vector containing \"C\" and \"D\"]",
"```var letter:String = letters.pop(); trace( letter ); //output: D ```",
"[Figure: the letters vector after \"D\" has been popped]",
"`letters.push( \"E\" ); `",
"[Figure: the letters vector after pushing \"E\"]",
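"Because `push()` adds to the end and `pop()` removes from the end, a vector gives you last-in-first-out behavior with type checking built in. A minimal sketch of a typed stack — the undo-history strings here are purely illustrative:\n\n```var undoStack:Vector.<String> = new Vector.<String>();\nundoStack.push( \"typed A\" );\nundoStack.push( \"typed B\" );\ntrace( undoStack.pop() ); //output: typed B\ntrace( undoStack.pop() ); //output: typed A ```",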
"For those of you familiar with a Stack data structure, the `push()` and `pop()` methods of `Vector` provide an easy way to implement a typed stack in ActionScript 3.\n\n#### Index\n\nWhile adding data with `push()` and removing it with `pop()` is an effective way to manipulate the end of variable sized vectors, they are most commonly manipulated and accessed via index. In ActionScript 3, a vector index begins at 0, meaning the first element of the vector is the 0th element.\n\nIndividual values of the vector are accessed by using the same syntax as you would use with an array—the name of the vector, followed by square brackets and the index you wish to retrieve. For example:",
null,
"```var days:Vector.<String> = new Vector.<String>(); days[ 0 ] = \"Sun\"; days[ 1 ] = \"Mon\"; ```\n\nYou can also use this same technique to modify the contents of a vector. For example, you could modify the string \"Thu\" to say \"Hi\".\n\n```var days:Vector.<String> = new Vector.<String>( 7, true ); days[ 0 ] = \"Sun\"; days[ 1 ] = \"Mon\"; ```",
null,
"As vector indexes are 0-based, the last element of the vector is always one less than the length. If you have a vector of length 7, then the last element of the vector is at index 6, Sat in the example above.\n\nYou can also use the index to populate the vector with data, just as you did with the `push()` method.\n\n```var days:Vector.<String> = new Vector.<String>(); days[ 0 ] = \"Sun\"; days[ 1 ] = \"Mon\"; ```\n\nIf you use a fixed-length Vector, this will be your primary way to populate data.\n\n```var days:Vector.<String> = new Vector.<String>( 7, true ); days[ 0 ] = \"Sun\"; days[ 1 ] = \"Mon\"; ```\n\n### Dense vectors\n\nIn ActionScript 3, `Array` instances can be dense (all elements have value) or sparse (missing elements). This is not true with `Vector` instances. Vectors must be dense. This means that every element has to have a value.\n\nWhile this is legal with `Array` :\n\n```var elements:Array = new Array(); elements[ 1 ] = \"B\"; ```",
null,
"The same technique will cause an error to be thrown with `Vector` . All elements must exist for a Vector, so in this example, index 0 must exist before index 1 can:\n\n```var elements:Array = new Array(); elements[ 1 ] = \"B\"; ```\n\nNote that this is different than having null values:\n\n```var elements:Vector.<String> = new Vector.<String>(); elements[ 1 ] = \"B\"; //Error RangeError: Error #1125: The index 1 is out of range 0. ```\n\nWhen you insert `null` into index 0 you are creating a vector with both positions defined. One of elements simply contains a null value.\n\n```var elements:Vector.<String> = new Vector.<String>(); elements[ 0 ] = null; elements[ 1 ] = \"B\"; ```",
null,
"When you insert `null` into index 0 you are creating a vector with both positions defined. One of elements simply contains a null value.\n\n### Looping over vectors\n\nJust like with arrays, one of the most common practices when working with vectors involves iterating through the elements of a vector searching for a given value. The function below shows how you can move through a vector element by element and search for a particular value. If the function finds the value, it returns the index of the element in the vector. If it fails to find the value, it returns a -1.\n\n```function findIndexOfValue( vector:Vector.<String>, value:String ):int { var length:uint = vector.length; for ( var i:uint=0; i<length; i++ ) { if (vector[ i ] == value ) { return i; } return -1; } } var elements:Vector.<String> = new Vector.<String>(); elements[ 0 ] = [ \"A\" ]; elements[ 1 ] = [ \"B\" ]; elements[ 2 ] = [ \"C\" ]; trace( findIndexOfValue( elements, \"C\" ) ); //2 trace( findIndexOfValue( elements, \"Z\" ) ); //-1 ```\n\n### ByteArray\n\n`ByteArray` is a class that allows advanced developers to work directly with binary data in ActionScript 3. It is very useful when the need arises to work with low-level data such as that inside of images or arriving directly from the network.\n\n`var bytes:ByteArray = new ByteArray();`\n\nByteArrays concern themselves with two general types of operation: reading and writing. One of the simplest things we can do with a ByteArray is to write a value into it. The following code writes the values true, false, true into the ByteArray.\n\n```var bytes:ByteArray = new ByteArray(); bytes.writeBoolean( true ); bytes.writeBoolean( false ); bytes.writeBoolean( true ); ```",
null,
"This looks relatively similar to when you pushed items into a vector or array. There are indeed some similarities but also some important differences.\n\nFirst, you will notice that you used `writeBoolean()` when you added data to the `ByteArray` . The `ByteArray` needs to know what type of data is being added to it. `ByteArray` supports writing with the following methods:\n\nTable 1. Read and write methods of `ByteArray`\n\n Data type Read method Write method Note Boolean `readBoolean()` `writeBoolean()` read or write a true or false Numeric `readDouble()` `writeDouble()` read or write a 64-bit floating point number `readFloat()` `writeFloat()` read or write a 32-bit floating point number `readShort()` `writeShort()` read or write a 16-bit integer `readUnsignedShort()` read or write a 16-bit unsigned integer `readInt()` `writeInt()` writes a 32 bit signed integer `readUnsignedInt()` `writeUnsignedInt()` read or write a 32-bit unsigned integer String `readMultiByte()` `writeMultiByte()` read or write a multi-byte string of a specified length in a specified character set `readUTFBytes()` `writeUTFBytes()` read or write a UTF-8 string of a specified length `readUTF()` `writeUTF()` read or write a UTF-8 strings Objects `readObject()` `writeObject()` read or write an entire object into the ByteArray using Action Message Format encoding Raw data `readByte()` `writeByte()` read or write a single byte of data `readBytes()` `writeBytes()` read or write a predetermined number of discrete bytes\n``` ```\n\n```var bytes:ByteArray = new ByteArray(); bytes.writeBoolean( true ); bytes.position=0; bytes.readBoolean(); //Output: true ```\n\n#### Storage\n\nThe second major difference between `Array` and `ByteArray` is the way data is stored. In regular arrays, you think of one value existing in each slot. In a ByteArray, each slot is a fixed size—a byte—and the items you write into the ByteArray span multiple bytes. 
For example, the following code writes an unsigned integer (32 bits long), a Boolean value and another unsigned integer (32 bits long) into the ByteArray. Each 32-bit integer spans four bytes in the ByteArray. The Boolean value is stored in one byte. The ByteArray occupies a total of nine bytes and thus has a length of nine.\n\n```var bytes:ByteArray = new ByteArray(); bytes.writeUnsignedInt(10); bytes.writeBoolean(true); bytes.writeUnsignedInt(26); trace( bytes.length ); //output: 9 ```\n\nThe image above shows where each of those values is stored along with the binary bits stored in each byte that make up the stored value.\n\n#### Position\n\nThis reveals another significant difference between the way ByteArrays and standard arrays work: the concept of position versus index. Standard arrays and vectors allow you to retrieve data at any given index with array syntax, for example `myArray[ 2 ]` . In the example above, what is at index 2? It is somewhere in the middle of the number 10—specifically it is a series of eight 0s. That isn't very helpful. Thus, the concept of index isn't usable with ByteArray.\n\nInstead, ByteArray uses the concept of position. Each byte within the ByteArray is a position. When you execute an operation that reads or writes, it does so at the current position and then moves to the next position. If you read one byte, the position moves forward by one. Likewise, if you read four bytes, the position moves forward by four. You can query and set the current position via the `position` property. In the diagrams below, the position after each line of code is executed is shown by an arrow. Examine each line of the code below and the impact it has on the ByteArray:\n\n```var bytes:ByteArray = new ByteArray(); //Position is 0 bytes.writeUnsignedInt(10); //After write method, position is 4 ```",
null,
"`bytes.writeBoolean(true); //After write method, position is 5`",
null,
"`bytes.writeUnsignedInt( 26 ); //After write method, position is 9`",
null,
"`bytes.position = 0; //Position is 0 `",
null,
"```trace( bytes.readUnsignedInt() ); //Output 10, After read method, position is 4 ```",
null,
"```trace( bytes.readBoolean() ); //Output true, After read method, position is 5 ```",
null,
"```trace( bytes.readUnsignedInt() ); //Output 26, After read method, position is 9 ```",
null,
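"The string methods from Table 1 follow the same position rules. As a brief sketch (assuming the documented behavior of `writeUTF()`, which prefixes the string with a 16-bit length before the UTF-8 bytes):\n\n```var bytes:ByteArray = new ByteArray();\nbytes.writeUTF( \"Hi\" ); //2-byte length prefix + 2 UTF-8 bytes\ntrace( bytes.position ); //4\nbytes.position = 0; //rewind before reading\ntrace( bytes.readUTF() ); //Hi\n```\n\nAs with the integer examples, a read only makes sense if the position points at the start of a value that was written with the matching write method.",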
"This sequential access method is much faster than the array index method you use with Array and Vector, but it is more complicated. If you know the correct position of an item you wish to read, you can access it by using a combination of the position property and reading. For example, you could set the position to four and read the Boolean directly:\n\n```bytes.position = 4; //Position is 4 trace( bytes.readBoolean() ); //Output true, After read method, position is 5 ```\n\n### Looping over ByteArrays\n\nAs you may have noticed from the previous examples, you need to know the type and order of the data added to the ByteArray in order to read it back out correctly. There are however some times when you simply want to iterate through each byte. To that end, the ByteArray allows you to determine the remaining number of bytes that can be read for this purpose.\n\nThe ByteArray `bytesAvailable` property always gives you the number of bytes from the current position to the end of the ByteArray. Using this property, you can create a simple `while` loop that will get each byte of the ByteArray.\n\n```var bytes:ByteArray = new ByteArray(); bytes.writeUnsignedInt( 10 ); bytes.writeUnsignedInt( 26 ); bytes.position = 0; while ( bytes.bytesAvailable ) { trace( bytes.readByte() ); } //Output 0 0 0 10 0 0 0 26 ```\n\n### Compressing ByteArrays\n\n`ByteArray` instances have the ability to compress and decompress themselves using either the deflate or zlib compression algorithms. Beginning with Flash Player 10.0 and AIR 1.5, a ByteArray can be compressed using the deflate algorithm by calling the `deflate()` method. Decompressing the ByteArray is accomplished by calling the corresponding `inflate()` method.\n\nTo use the zlib compression algorithm, you must call `compress()` and `uncompress()` . In Flash Player, this method always uses zlib compression. 
In AIR, you can choose between zlib or deflate by calling `compress()` and specifying the compression algorithm to use when compressing. Valid values are defined as constants in the `CompressionAlgorithm` class. To decompress a ByteArray you compressed using `compress()` , you simply call the `uncompress()` method.\n\nThe following code example populates the ByteArray with a series of numbers. It traces the length of the ByteArray then deflates it, shows the new length, and then inflates it to the original size.\n\n```var bytes:ByteArray = new ByteArray(); bytes.writeUnsignedInt( 10 ); bytes.writeUnsignedInt( 26 ); bytes.writeUnsignedInt( 11 ); bytes.writeUnsignedInt( 12 ); bytes.writeUnsignedInt( 11 ); bytes.writeUnsignedInt( 23 ); bytes.writeUnsignedInt( 01 ); bytes.writeUnsignedInt( 16 ); bytes.writeUnsignedInt( 02 ); bytes.writeUnsignedInt( 17 ); bytes.writeUnsignedInt( 03 ); bytes.writeUnsignedInt( 16 ); trace( bytes.length ); //48 bytes.deflate(); trace( bytes.length ); //32 bytes.inflate(); trace( bytes.length ); //48 ```\n\nFor more information about compressing and decompressing ByteArrays see compress(), uncompress(), deflate(), and inflate() in the ActionScript 3.0 Reference.\n\n### Object Serialization\n\nOne of the most powerful operations that can be performed on a `ByteArray` instance is writing the contents of an entire object into a ByteArray. The object's data and type persist, so that later, a fully populated object can be read back out of the ByteArray.\n\nDoing this involves a process called serializing an object with Action Message Format (AMF). This is actually the same process required if you wish to send objects back and forth to a Java, ColdFusion or PHP server. The specific details of how objects are serialized with AMF is beyond the scope of this article. However, how to work with objects in a ByteArray is covered below.\n\nAMF natively supports core ActionScript data types. 
When instances of ActionScript classes like `Sprite` or `TextField` or `Date` are written into a ByteArray, they are stored as a `Sprite` or `TextField` or `Date` . If you are writing an object of a custom type into a ByteArray, it will be stored as a generic `Object` . This means that when you read it, it will be of type Object and you will not be able to cast it back to its correct type. To avoid this problem, you must register the class. The example that follows demonstrates how to write and read an object of a custom type into and out of a ByteArray.\n\nAs a first step, consider this simple object named Person:\n\n```package vo { public class Person { public var firstName:String; public var lastName:String; public var age:Number; public function Person() { } } } ```\n\nIt's a very simple object with three custom properties. If you want to be able to store this object as a `Person` object and not a generic `Object` in a ByteArray, you must first register a class alias. You only need to do this once in your application. Register the class alias anytime before you use it.\n\n`registerClassAlias( \"vo.Person\", vo.Person );`\n\nThis method simply gives Flash Player a convenient string to use as an alias for this class. By convention you use the dot path and class name to create the alias; however, technically you could use any unique alias, for example:\n\n`registerClassAlias( \"nonIntuitiveAlias\", vo.Person ); `\n\nOnce you have registered the class alias you can create a new Person object, fill it with data and write it to the ByteArray using the `writeObject()` method.\n\n```registerClassAlias( \"vo.Person\", vo.Person ); var bytes:ByteArray = new ByteArray(); var person:Person = new Person(); person.firstName = \"Lee\"; person.lastName = \"Alan\"; person.age = 37; bytes.writeObject( person ); ```\n\nTo read the data back from the ByteArray, you must first reset the position to where the `Person` object begins, in this case position 0. 
You can then use the `readObject()` method to read the `Person` instance back out of the ByteArray.\n\n```bytes.position = 0; var person1:Person = bytes.readObject(); trace( person1 is Person ); //true trace( person1.firstName ); //Lee trace( person === person1 ); //false ```\n\nThe object read back from the ByteArray is actually a `Person` instance. It will have all of the same properties and values of the original `Person` instance. However, it is not the same instance. It is a copy. As the last trace statement shows, `person1` is a copy of the original `person` object, not a reference to it.\n\n### Where to go from here\n\nThe `Vector` class provides an optimized array for situations when you have a homogeneous data set. Vectors can be further optimized by declaring a fixed length. Attempting to add a different type of data than specified when creating the vector or attempting to modify the length of a fixed-length vector will cause an error.\n\n`ByteArray` provides low-level binary access to data in ActionScript 3. It is extremely powerful and extremely fast; however, it requires a firm understanding of data types and serialization to use effectively."
]
| [
null,
"https://www.adobe.com/content/dam/acom/en/devnet/images/michelle_yaiser_bio.jpg.adimg.mw.160.png",
null,
"https://www.adobe.com/content/dam/acom/en/devnet/images/vectors-and-bytearrays_fig01.jpg",
null,
"https://www.adobe.com/content/dam/acom/en/devnet/images/vectors-and-bytearrays_fig02.jpg",
null,
"https://www.adobe.com/content/dam/acom/en/devnet/images/vectors-and-bytearrays_fig03.jpg",
null,
"https://www.adobe.com/content/dam/acom/en/devnet/images/vectors-and-bytearrays_fig04.jpg",
null,
"https://www.adobe.com/content/dam/acom/en/devnet/images/vectors-and-bytearrays_fig02.jpg",
null,
"https://www.adobe.com/content/dam/acom/en/devnet/images/vectors-and-bytearrays_fig06.jpg",
null,
"https://www.adobe.com/content/dam/acom/en/devnet/images/vectors-and-bytearrays_fig02.jpg",
null,
"https://www.adobe.com/content/dam/acom/en/devnet/images/vectors-and-bytearrays_fig09.jpg",
null,
"https://www.adobe.com/content/dam/acom/en/devnet/images/vectors-and-bytearrays_fig10.jpg",
null,
"https://www.adobe.com/content/dam/acom/en/devnet/images/vectors-and-bytearrays_fig11.jpg",
null,
"https://www.adobe.com/content/dam/acom/en/devnet/images/vectors-and-bytearrays_fig12.jpg",
null,
"https://www.adobe.com/content/dam/acom/en/devnet/images/vectors-and-bytearrays_fig13.jpg",
null,
"https://www.adobe.com/content/dam/acom/en/devnet/images/vectors-and-bytearrays_fig14.jpg",
null,
"https://www.adobe.com/content/dam/acom/en/devnet/images/vectors-and-bytearrays_fig16.jpg",
null,
"https://www.adobe.com/content/dam/acom/en/devnet/images/vectors-and-bytearrays_fig17.jpg",
null,
"https://www.adobe.com/content/dam/acom/en/devnet/images/vectors-and-bytearrays_fig18.jpg",
null,
"https://www.adobe.com/content/dam/acom/en/devnet/images/vectors-and-bytearrays_fig19.jpg",
null,
"https://www.adobe.com/content/dam/acom/en/devnet/images/vectors-and-bytearrays_fig20.jpg",
null,
"https://www.adobe.com/content/dam/acom/en/devnet/images/vectors-and-bytearrays_fig21.jpg",
null,
"https://www.adobe.com/content/dam/acom/en/devnet/images/vectors-and-bytearrays_fig22.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.79177356,"math_prob":0.8585472,"size":22104,"snap":"2020-10-2020-16","text_gpt3_token_len":4992,"char_repetition_ratio":0.15778281,"word_repetition_ratio":0.1301693,"special_character_ratio":0.23285379,"punctuation_ratio":0.14184731,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96110255,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42],"im_url_duplicate_count":[null,1,null,1,null,3,null,1,null,1,null,3,null,1,null,3,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-17T09:51:17Z\",\"WARC-Record-ID\":\"<urn:uuid:eb967aa5-b20e-4498-939b-d2d6e2630134>\",\"Content-Length\":\"69542\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:faa556b5-a8eb-4a15-aa02-8eb55ce25cc2>\",\"WARC-Concurrent-To\":\"<urn:uuid:2742cd72-1770-485f-9b27-e04fa6b278a0>\",\"WARC-IP-Address\":\"104.70.188.160\",\"WARC-Target-URI\":\"https://www.adobe.com/devnet/actionscript/learning/as3-fundamentals/vectors-and-bytearrays.html\",\"WARC-Payload-Digest\":\"sha1:PD32GPH5KV2HN6RNSZEX2WEUKHRPBPGN\",\"WARC-Block-Digest\":\"sha1:6AOC57OBKJ2RGO67YMUNAM3KFY45Y4MZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875141806.26_warc_CC-MAIN-20200217085334-20200217115334-00053.warc.gz\"}"} |
http://www.drmatthewwall.com/jnd.php?type=1&i=9 | [
"### 孔明二八-|PC蛋蛋预测|-提供|加拿大28预测\n\n28圈每日福利18888+8888 首100送28约约俸禄最高⑥⑥w\n【南宫】做最专业的pc/加拿大,首充100送28 月月俸禄最高66w。【点击注册】\n【c7】28/棋牌/电子/真人/ 玩法齐全。最高返水18% 活动福利业界最高。【点击注册】\n【28圈】无限代理模式,诚邀您前来体验。福利好,待遇高 月月分红。【点击注册】\n\n2 7 2 11\n\n2710436\n2710435 14:55 2+7+2=11\n2710434 14:51 6+0+4=10\n2710433 14:48 6+0+9=15\n2710432 14:44 3+2+6=11\n2710431 14:41 4+3+6=13\n2710430 14:37 6+5+9=20\n2710429 14:34 1+0+5=06\n2710428 14:30 3+7+3=13\n2710427 14:27 7+1+6=14\n2710426 14:23 4+0+5=09\n2710425 14:20 0+5+9=14\n2710424 14:16 2+4+8=14\n2710423 14:13 7+9+1=17\n2710422 14:09 2+7+1=10\n2710421 14:06 0+1+7=08\n2710420 14:02 0+6+4=10\n2710419 13:59 4+8+7=19\n2710418 13:55 9+6+3=18\n2710417 13:52 4+8+9=21\n2710416 13:48 9+0+0=09\n2710415 13:45 9+3+2=14\n2710414 13:41 9+9+6=24\n2710413 13:38 1+0+0=01\n2710412 13:34 6+9+1=16\n2710411 13:31 0+7+8=15\n2710410 13:27 5+4+2=11\n2710409 13:24 8+9+4=21\n2710408 13:20 7+6+1=14\n2710407 13:17 9+7+3=19\n2710406 13:13 3+7+6=16\n2710405 13:10 1+1+6=08\n2710404 13:06 3+3+6=12\n2710403 13:03 6+4+4=14\n2710402 12:59 6+7+3=16\n2710401 12:56 4+3+4=11\n2710400 12:52 7+1+1=09\n2710399 12:49 6+3+3=12\n2710398 12:45 5+2+2=09\n2710397 12:42 6+6+6=18\n2710396 12:38 6+9+2=17\n2710395 12:35 6+6+8=20\n2710394 12:31 4+5+5=14\n2710393 12:28 2+4+9=15\n2710392 12:24 4+9+8=21\n2710391 12:21 8+8+9=25\n2710390 12:17 8+4+8=20\n2710389 12:14 8+2+4=14\n2710388 12:10 5+8+2=15\n2710387 12:07 2+1+2=05\n2710386 12:03 6+5+3=14"
]
| [
null
]
| {"ft_lang_label":"__label__zh","ft_lang_prob":0.5450749,"math_prob":0.98319525,"size":1791,"snap":"2021-21-2021-25","text_gpt3_token_len":1183,"char_repetition_ratio":0.33463907,"word_repetition_ratio":0.0,"special_character_ratio":0.8900056,"punctuation_ratio":0.088652484,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99986506,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-11T06:56:31Z\",\"WARC-Record-ID\":\"<urn:uuid:f8f76296-fef3-4726-856e-9e2a0d2d8303>\",\"Content-Length\":\"31829\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a6416a06-c520-4982-b226-bbc8d5f81097>\",\"WARC-Concurrent-To\":\"<urn:uuid:d3a36308-6226-4a11-a059-6e01c50eb321>\",\"WARC-IP-Address\":\"39.109.117.90\",\"WARC-Target-URI\":\"http://www.drmatthewwall.com/jnd.php?type=1&i=9\",\"WARC-Payload-Digest\":\"sha1:F7B5RO2QQUGCPDHISF72IYW4JALWHK6O\",\"WARC-Block-Digest\":\"sha1:R4Y6LZ3NJA4KC36EQOKZJ2DXQOS3FD7A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991904.6_warc_CC-MAIN-20210511060441-20210511090441-00228.warc.gz\"}"} |
https://math.stackexchange.com/questions/1730413/find-the-value-of-sum-n-2n?noredirect=1 | [
"# Find the value of sum (n/2^n) [duplicate]\n\nI have the series $\\sum_{n=0}^\\infty \\frac{n}{2^n}$. I must show that it converges to 2.\n\nI was given a hint to take the derivative of $\\sum_{n=0}^\\infty x^n$ and multiply by $x$ , which gives\n\n$\\sum_{n=1}^\\infty nx^n$ , or $\\sum_{n=0}^\\infty nx^n$.\n\nClearly if I take $x=\\frac{1}{2}$ , the series is $\\sum_{n=0}^\\infty \\frac{n}{2^n}$. How do I proceed from here?\n\n## marked as duplicate by Martin Sleziak, Claude Leibovici calculus StackExchange.ready(function() { if (StackExchange.options.isMobile) return; $('.dupe-hammer-message-hover:not(.hover-bound)').each(function() { var$hover = $(this).addClass('hover-bound'),$msg = $hover.siblings('.dupe-hammer-message');$hover.hover( function() { $hover.showInfoMessage('', { messageElement:$msg.clone().show(), transient: false, position: { my: 'bottom left', at: 'top center', offsetTop: -7 }, dismissable: false, relativeToBody: true }); }, function() { StackExchange.helpers.removeMessages(); } ); }); }); Nov 18 '16 at 9:37\n\nNotice that if $|x|<1$ then the original series converges with\n$$\\sum_{n=0}^\\infty x^n \\;\\; =\\;\\; \\frac{1}{1-x}.$$\nComputing the derivative and plugging in $x=\\frac{1}{2}$ should hopefully seem easier now."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8689026,"math_prob":0.9918215,"size":402,"snap":"2019-26-2019-30","text_gpt3_token_len":145,"char_repetition_ratio":0.19346733,"word_repetition_ratio":0.0,"special_character_ratio":0.37562189,"punctuation_ratio":0.09375,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997057,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-20T20:15:29Z\",\"WARC-Record-ID\":\"<urn:uuid:ca50f389-2fe4-478b-885c-8caf5dcee17a>\",\"Content-Length\":\"132886\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c0b4e570-c28e-4985-8491-2e38b79f3280>\",\"WARC-Concurrent-To\":\"<urn:uuid:eba2c3c6-56c6-4c3c-a489-99b23c2d44e4>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/1730413/find-the-value-of-sum-n-2n?noredirect=1\",\"WARC-Payload-Digest\":\"sha1:NH236AKVBGR4EU4ZOI5QMWRINFAMTUMC\",\"WARC-Block-Digest\":\"sha1:SYEVNQUTJCUBBTC56G7LBVDAR2S5BPQP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195526670.1_warc_CC-MAIN-20190720194009-20190720220009-00020.warc.gz\"}"} |
https://sau.thanksswf.site/85mm-x-55mm-in-cm.html | [
"# 85mm x 55mm in cm\n\nUse Googleits amazing. There are 10mm in a centimeter, so 5. What's the most outdated thing you still use today? Asked By Jasen Runte. How to Make Money Online? What does it mean when the flag is not flying at the White House? Which actor would play you in a movie about your life? Asked By Aliza Farrell. How did chickenpox get its name? When did organ music become associated with baseball? Asked By Curt Eichmann. How can you cut an onion without crying?\n\n## convert mm in cm\n\nAsked By Leland Grant. Why don't libraries smell like bookstores? Asked By Veronica Wilkinson. What is 55mm converted to centimeters? Is Ashley Biden gay? What is the timbre of the song dandansoy? What is Hugh hefner penis size? Is Duranice Pace husband dead? All Rights Reserved. The material on this site can not be reproduced, distributed, transmitted, cached or otherwise used, except with prior written permission of Multiply. Ask Question Log in.\n\nEpson h369a lcd projector\n\nUnits of Measure. Length and Distance.\n\n### Convert 55 Millimeters to Centimeters\n\nBest of small baddo dj mix 2020\n\nWiki User Answered Related Questions. How many centimeters is 55mm? How much is 55mm in centimeters? How many cms equals 55mm? What is 55 millimeters equal to centimeters?\n\n### Convert mm to cm - Conversion of Measurement Units\n\nWhat is five and a half cenimeters converted to the nearest millimeter? What is 44mm converted to in centimeters? What is 6 kilometers converted into centimeters?\n\nWhat is fourteen millimeters converted to centimeters?The answer is We assume you are converting between millimetre and centimetre. You can view more details on each measurement unit: mm or cm The SI base unit for length is the metre.\n\nNote that rounding errors may occur, so always check the results. Use this page to learn how to convert between millimetres and centimetres. 
Type in your own numbers in the form to convert the units!\n\nYou can do the reverse unit conversion from cm to mm, or enter any two units below.\n\nA millimetre (American spelling: millimeter, symbol mm) is one thousandth of a metre, which is the International System of Units (SI) base unit of length. The millimetre is part of a metric system. A corresponding unit of area is the square millimetre and a corresponding unit of volume is the cubic millimetre. A centimetre (American spelling: centimeter, symbol cm) is a unit of length that is equal to one hundredth of a metre, the current SI base unit of length. A centimetre is part of a metric system.",
null,
"It is the base unit in the centimetre-gram-second system of units. A corresponding unit of area is the square centimetre. A corresponding unit of volume is the cubic centimetre.\n\nThe centimetre is a now a non-standard factor, in that factors of 10 3 are often preferred. However, it is practical unit of length for many everyday measurements. A centimetre is approximately the width of the fingernail of an adult person.\n\nYou can find metric conversion tables for SI units, as well as English units, currency, and other data. Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types.\n\nExamples include mm, inch, kg, US fluid ounce, 6'3\", 10 stone 4, cubic cm, metres squared, grams, moles, feet per second, and many more!Please provide values below to convert millimeter [mm] to centimeter [cm], or vice versa. It indicates one thousandth of the base unit, in this case the meter. The definition of the meter has changed over time, the current definition being based on the distance traveled by the speed of light in a given amount of time.\n\nThe relationship between the meter and the millimeter is constant however.",
null,
"Prior to this definition, the meter was based on the length of a prototype meter bar. Inthe meter has been re-defined based on the changes made to the definition of a second. Definition: A centimeter symbol: cm is a unit of length in the International System of Units SIthe current form of the metric system. Metric prefixes range from factors of 10 to 10 18 based on a decimal system, with the base in this case the meter having no prefix and having a factor of 1.\n\nLearning some of the more commonly used metric prefixes, such as kilo- mega- giga- tera- centi- milli- micro- and nano- can be helpful for quickly navigating metric units. Current use: The centimeter, like the meter, is used in all sorts of applications worldwide in countries that have undergone metrication in instances where a smaller denomination of the meter is required.\n\nHeight is commonly measured in centimeters outside of countries like the United States. From: millimeter To: centimeter. Millimeter to Nautical Mile international. Millimeter to Electron Radius classical. Millimeter to Earth's Equatorial Radius. Millimeter to Earth's Distance From Sun.Online calculator to convert millimeters to centimeters mm to cm with formulas, examples, and tables. Our conversions provide a quick and easy way to convert between Length or Distance units.\n\nTIP: If the result of your conversion is 0, try increasing the \"Decimals\". How to convert mm to cm: Enter a value in the mm field and click on the \"Calculate cm\" button. Your answer will appear in the cm field. The following is a list of definitions relating to conversions between millimeters and centimeters. A millimeter is a unit of Length or Distance in the Metric System. The symbol for millimeter is mm. There are 10 millimeters in a centimeter. The International spelling for this unit is millimetre.\n\nA centimeter is a unit of Length or Distance in the Metric System. The symbol for centimeter is cm. There are 0. 
The International spelling for this unit is centimetre. Let's take a closer look at the conversion formula so that you can do these conversions yourself with a calculator or with an old-fashioned pencil and paper. Next, let's look at an example showing the work and calculations that are involved in converting from millimeters to centimeters (mm to cm).\n\nFor quick reference purposes, below is a conversion table that you can use to convert from mm to cm. This table provides a summary of the Length or Distance units within their respective measurement systems. Check Your Math. Conversion Calculator: Enter your value in the conversion calculator below to convert mm to cm.\n\nConversion Definitions: The following is a list of definitions relating to conversions between millimeters and centimeters. This page allows you to convert length values expressed in millimeters to their equivalent in centimeters.\n\nEnter the value in millimeters in the top field (the one marked \"mm\"), then press the \"Convert\" button or the \"Enter\" key. The converter also works the other way round: if you enter the value in centimeters in the \"cm\" field, the equivalent value in millimeters is calculated and displayed in the top field. The millimeter is used to describe lengths that are relatively small, like the widths of pipes, cables or connectors. The well known terms 8mm, 16mm and 32mm describe common widths of motion picture films.\n\nTo understand the approximate size of one millimeter, see this tiny red line:\n\nAnd this line should be approximately five millimeters long:\n\nCentimeter is a unit of length used by the metric system. That means there are 100 centimeters in 1 meter. One inch is equal to 2.54 centimeters. The following line should be approximately 1 centimeter long:\n\n
Number of millimetres multiplied (x) by 0.1 equals the number of centimetres. Note that the results given in the boxes on the form are rounded to the ten thousandth unit nearby, so 4 decimals, or 4 decimal places.\n\nWe use this length unit in different situations such as distance calculation, length, width, height and more.\n\nThe unit millimetre is part of the international metric system, which advocates the use of decimals in the calculation of unit fractions. Conversion millimetre to centimetre (millimetre, mm; centimetre, cm). Conversion formula of mm to cm: The following information will give you different methods and formulas to convert mm to cm. Formulas in words: By multiplication, the number of millimetres multiplied (x) by 0.1 equals the number of centimetres.\n\nOther units in millimetre: Millimetre to Megametre, Millimetre to Nail, Millimetre to Palm, Millimetre to Yard.\n\nTable or conversion table mm to cm: You will find the first millimetres converted to centimetres; there you have the number of centimetres rounded to the closest unit.\n\nUnit conversion tools offered by Web agency telorDesign.",
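"Applying the multiply-by-0.1 formula to the dimensions in the page title gives the answer the page never states explicitly, as a worked example:\n\n```\n85 mm x 0.1 = 8.5 cm\n55 mm x 0.1 = 5.5 cm\n```\n\nSo 85 mm x 55 mm is 8.5 cm x 5.5 cm (roughly the size of a standard credit card, for scale).",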
https://www.seeadot.com/posts/an-introduction-to-intonation-and-microtonality-for-choirs-part-2 | [
Microtonality is one of those seemingly deeply esoteric terms that both mystifies and terrifies musicians. The purpose of this series is to open the door of understanding to the world of microtonality and, ultimately, intonation. The question of how to achieve intonation in choral music has plagued conductors for centuries. We struggle with teaching our singers to hear the alignment of harmonies, and with training their technique to maintain the pitches they hear.

To make sure we're all on the same page, here's a list of terms I'll be using in this series:

#### Part 2: Just Intonation vs. Equal Temperament, or Why the Piano is Always Out of Tune

For an introduction to intonation and the overtone series, read the previous article in this series here.

All sound vibrates, and within each vibration are a series of smaller vibrations. These smaller, and higher, vibrations are the overtone series, which follows the same pattern. An interesting aspect of the series is that, while the Hz go up by a constant amount, our musical perception is that the intervals get smaller as they get higher. So, if our first pitch is at 100 Hz (we'll call it a C, though it's not really), the series will look like this: 100 Hz, 200 Hz, 300 Hz, 400 Hz, 500 Hz, and so on.

Musically, it looks like this: [musical notation omitted]. As you can see, the notes of the overtone series don't fall neatly into a typical scale, or even within a single octave. Instead, each interval between harmonics is an increasingly smaller interval than the one before it, starting with the octave, then the 5th, then the 4th, and so on. By halving the Hz values of the notes in the overtone series, however, we can get the most consonant octave-related pitch. For instance, if 300 Hz is the third note of the series (the octave plus a fifth), 150 Hz will be the perfect fifth. The initial octave is between 100 and 200 Hz, so we'll need to cut the other notes of the series in half until they fit between these two numbers.
For example: 300 Hz halves once to 150 Hz, while 500 Hz and 700 Hz each halve twice, to 125 Hz and 175 Hz, landing between 100 and 200 Hz.

This rational and arithmetic approach to building chords and scales based on the overtone series is referred to as Just Intonation. Because the human ear is geared towards creating consonance, we naturally gravitate towards singing intervals with this kind of intonation. The relationships between these intervals are often expressed as ratios; for instance the major 3rd is a 5/4 ratio (125/100 reduced down) and a perfect fifth would be 3/2.

Equal temperament (ET), on the other hand, organizes the octave to have equal space between each of the notes. ET is how we typically tune fixed-pitch instruments like pianos and guitars today. These are also based on ratios, but because of the difference in Hz between different octaves, the ratios don't look so even. Again, let's use our example of the octave between 100 Hz and 200 Hz. To evenly divide 12 notes between these two pitches we need a ratio of 2^(x/12) for each half step, with x being the number of half steps away from the original pitch. So 2^(1/12) is the first half step, and 2^(12/12) = 2 (again, twice as many Hz) is the octave. The math gets complicated, but trust me when I say this is the Hz breakdown for an ET chromatic scale between 100 and 200 Hz: 100.0, 105.9, 112.2, 118.9, 126.0, 133.5, 141.4, 149.8, 158.7, 168.2, 178.2, 188.8, 200.0.

As you can see, there's not a lot of rhyme or reason to these numbers. This is because our experience of the musical intervals is logarithmic, not arithmetic. We hear the distance between 100 and 200 Hz as a single octave, but we also hear the distance between 400 Hz and 800 Hz as a single octave, not 4 octaves. To illustrate this point, here is a comparison of pitches derived from the overtone series using the JI method, and Equal Temperament, which is how pianos are tuned: [comparison table omitted]. Clearly we have some discrepancies! What this means for us as listeners is that the intervals in Equal Temperament are actually less consonant (or less in tune) than the more perfectly aligned JI method.
What it means for us as singers, though, is that the piano is a less perfect tuning tool than our own ears. As you can see, most of the intervals are fairly close, but some are quite different. An Equally Tempered minor 7th, for instance, is 31 cents different than in JI, which is very audible. We'll discuss why this is in the next article in the series, but in short, pianos are designed so that all the intervals are just a little bit out of tune, which allows us to play in any key, rather than having the instrument tuned to a specific one.
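The JI-versus-ET comparison can be reproduced numerically. This sketch (my own code, not from the article) prints a few intervals above a 100 Hz fundamental both ways, plus their difference in cents; the roughly 31-cent gap on the minor 7th is the one the article calls very audible:

```python
import math

BASE_HZ = 100.0

# Just-intonation ratios drawn from the overtone series.
JI = {"major 3rd": 5 / 4, "perfect 5th": 3 / 2, "minor 7th": 7 / 4, "octave": 2 / 1}
# Equal temperament: each half step multiplies frequency by 2 ** (1 / 12).
SEMITONES = {"major 3rd": 4, "perfect 5th": 7, "minor 7th": 10, "octave": 12}

def cents(f1, f2):
    """Interval between two frequencies in cents (1200 cents per octave)."""
    return 1200 * math.log2(f2 / f1)

for name, ratio in JI.items():
    ji = BASE_HZ * ratio
    et = BASE_HZ * 2 ** (SEMITONES[name] / 12)
    print(f"{name:12s} JI {ji:7.2f} Hz   ET {et:7.2f} Hz   ET-JI {cents(ji, et):+6.2f} cents")
```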
https://gis.stackexchange.com/questions/90064/calculating-topographic-wetness-index-choosing-from-different-algorithms | [
# Calculating Topographic Wetness Index (choosing from different algorithms)

Topographic Wetness Index can be expressed as

```
TWI = Ln(a / tanB)    based on the idea of Beven and Kirkby (1979)
```

where

```
a is the specific catchment area (a = A/L, catchment area (A) divided by contour length (L))
```

and

```
tanB is the slope
```

The basic idea here is simple, but as there are multiple ways to calculate both a and tanB, the results of a TWI can vary widely (Qin et al. 2011).

Flow accumulation and catchment area can be calculated, for example, by:

```
D8 (O'Callaghan, J.F. / Mark, D.M. (1984))
D-infinity (Tarboton, D.G. (1997))
Triangular Multiple flow direction (Seibert, J. / McGlynn, B. (2007))
```

algorithms, and there are many other algorithms available too.

Slope is usually calculated as local slope around the pixel (Sorensen et al. 2005). Local slope can also be calculated as minimum, mean and maximum slope around the pixel. Another way to calculate slope is presented by Hjerdt et al. 2004, where slope is calculated to a point d meters below the cell center.

Slope is a basic tool in most GIS software; however, the calculation can differ. Here are a few examples:

- ESRI: http://webhelp.esri.com/arcgisdesktop/9.2/index.cfm?TopicName=Calculating_slope
- SAGA: http://sourceforge.net/apps/trac/saga-gis/wiki/Terrain%20Analysis%20-%20Morphometry%20module%20library

As you can see, there are many options available to calculate both a and tanB. So, the question is: in practice, which is the proper (best) way to calculate TWI using these different algorithms? Or is there any?

I personally like working in SAGA, mainly because there is a large selection of open-source hydrology tools.

P.s. I'm having a hard time finding out exactly how catchment slope is calculated in SAGA GIS, and exactly what it means here.
(Terrain Analysis - Hydrology: Catchment Area (Parallel)).

EDITED: Answered by Volker Wichmann from the SAGA Forums: "The catchment slope output grid of the Catchment Area (Parallel) module is computed like this: for each cell, the local slope is calculated using the approach of Zevenbergen & Thorne. These slope values are accumulated downslope. Finally, for each cell the accumulated slope values are divided by the derived catchment area of the cell. The unit of the grid are radians."

"The Topographic Wetness Index (TWI) module requires a normal slope grid as input."

References:

Beven and Kirkby 1979. A physically based variable contributing area model of basin hydrology. Hydrological Sciences Bulletin, 24, pp. 43–69.

Hjerdt et al. 2004. A new topographic index to quantify downslope controls on local drainage. Water Resources Research, 40, W05602, doi:10.1029/2004WR003130.

O'Callaghan, J.F. and Mark, D.M. 1984. The extraction of drainage networks from digital elevation data. Computer Vision, Graphics and Image Processing, 28:323–344.

Qin et al. 2011. An approach to computing topographic wetness index based on maximum downslope gradient. Precision Agriculture 12:32–43.

Seibert, J. and McGlynn, B. 2007. A new triangular multiple flow direction algorithm for computing upslope areas from gridded digital elevation models. Water Resources Research, Vol. 43, W04501.

Sorensen et al. 2005. On the calculation of the topographic wetness index: evaluation of different methods based on field observations. Hydrol. Earth Syst. Sci. Discuss., 2, 1807–1834.

Tarboton, D.G. 1997.
A new method for the determination of flow directions and upslope areas in grid digital elevation models. Water Resources Research, Vol. 33, No. 2, pp. 309–319.

I think this post in the SAGA GIS forum might prove useful in answering your question about how slope is calculated.

Also, based on my understanding of TWI (as a PhD hydrology student involved in hydrologic modeling), the D-Inf (Tarboton), MFD-md (Qin), DEMON (Costa-Cabral), and MFD (Quinn) with the exponent p=1.1 (Freeman) are the best options for determining the accumulation area 'a' in the TWI calculation.

I think Sorensen's work, and Qin's work, lends credence to my own semi-professional opinion. However, Qin's improved algorithm (MFD-md) hasn't been as thoroughly tested and used as much as the others have.

When I have used SAGA to calculate TWI, I first calculate the slope using the slope command in the Terrain Analysis / Morphometry / Slope, Aspect, Curvature module, using the default algorithm mentioned in the forum post. Then I calculate catchment area using the Terrain Analysis / Hydrology / Catchment Area / Catchment Area (Parallel) choice, using either the MFD algorithm or the D-INF algorithm, with 1.1 as the convergence factor (p=1.1, from Freeman).

Then I run the TWI option, in Terrain Analysis / Hydrology / Topographic Indices / Topographic Wetness Index (TWI), with the options to convert area to "1/cell size" and use the standard calculation. I convert to specific catchment area because this is what the original formulation by Beven and Kirkby called for. As for the difference between the "Standard" and "TOPMODEL" options, I am not sure what it is; I am looking into that myself right now.

I forgot to add: this all assumes that the DEM has been preprocessed by filling sinks. That is another complicated topic, with two (main) separate options.

I hope this helps!

Tom

P.S.
Edits for @reima:

This is something I have only recently dug into, and I can admit I don't think I've reached the bottom yet! I prefer the method of Lindsay and Creed, the minimum-impact approach that chooses breaching or filling based on minimizing total topographic impact (officially named the "Impact Reduction Algorithm", IRA), which I thought was implemented in his Terrain Analysis Software tool (formerly TAS, now Whitebox GAT; link: http://www.uoguelph.ca/~hydrogeo/Whitebox/).

However, even his tool seems to implement other filling schemes:

1. The sink/depression filling algorithm (basic, but insanely fast) by Wang and Liu (2006), which I don't believe operates in the IRA manner, but similar to the way ArcMap fills sinks/depressions: just straight up, without any breaching.

2. The sink/depression filling of Planchon and Darboux (2001), which floods a DEM and then removes the water bit by bit; it can enforce a slope on the filled area, which I think might improve TI calculations.

ArcMap has a new "de-pitting" add-on (http://blogs.esri.com/esri/arcgis/2013/03/05/optimized-tool-for-dem-pit-removal-now-available/) that seems similar to Lindsay and Creed's IRA, but I haven't read the cited paper yet to determine how similar. This method might be worth a look.

I'm also interested in scrutinizing my assumption that TI calculations need filled DEMs. I have three different-sized watershed DEMs (<100 sq km, 100-1000 sq km, >1000 sq km), clipped using a shapefile from 10 m NED data. These are not filled, since the shapefile already provided the watershed delineation. I am going to run the SAGA GIS TI calculation (MFD, p=1.1) on all three watersheds, on both filled and unfilled DEMs, using ArcMap's filling scheme (old and new), the Wang and Liu algorithm (in Whitebox, maybe in SAGA), and the Planchon and Darboux algorithm (in Whitebox, maybe in SAGA).
I will also be calculating the TI values using the TI calculation embedded in my hydrological model.

If you want, I can share these results with you. I might not have them for a month or so though, as I have other more pertinent research that my focus is currently on, but I need to refine my TI calculation process by mid-May at the latest.

- Thanks for the entry Tom! It seems we have an almost identical workflow. Preprocessing of the DEM is indeed another topic, especially when there are roads, bridges etc. corrupting the natural flow direction and accumulation. May I ask what your default method for filling sinks is, and how you handle lakes in the preprocessing of the DEM? I ended up removing all the lakes because every fill-sinks option tended to raise these water basins higher than they actually are, causing information loss near the lakesides. Apr 8, 2014 at 7:33
- @reima, see above. I don't do anything special for lakes, as of yet. All of my focus is on predicting discharge at the stream/river outlet; I haven't yet wrapped my mind around how lakes will interact with all this. Also, I feel like I should make clear: in the TI calculation you DON'T want "Catchment Slope". You want the normal slope. I just want to make sure that was clear from the SAGA GIS forum link I provided. Apr 9, 2014 at 20:28
- Yes (I tried italicizing the part about local ("normal") slope, but I guess it was left a little bit unclear; my bad). I have ended up using the Wang and Liu 2006 fill method so far, but if I continue with this topic, I'll definitely have to dig deeper too. I'm very interested in your results too! Apr 10, 2014 at 11:47
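Whichever algorithms produce the input grids, the index itself is just TWI = ln(a / tanB) applied cell by cell. A minimal sketch of that final step (pure Python, one cell at a time; the function name and the flat-cell guard `eps` are my own choices, not SAGA's):

```python
import math

def twi(specific_catchment_area, slope_rad, eps=1e-6):
    """Topographic Wetness Index, TWI = ln(a / tan(beta)).

    specific_catchment_area: upslope area per unit contour length (m^2/m),
        from any flow-accumulation algorithm (D8, D-infinity, MFD, ...).
    slope_rad: the *local* slope in radians (not SAGA's "catchment slope" grid).
    eps: guard for flat cells, where tan(beta) would be 0.
    """
    tan_b = max(math.tan(slope_rad), eps)
    return math.log(specific_catchment_area / tan_b)

# A cell with a = e m^2/m on a 45-degree slope (tan = 1) gives TWI of about 1.
print(twi(math.e, math.pi / 4))
```

Wetter cells (large accumulation, gentle slope) get larger values, which is why the choice of accumulation algorithm and slope definition matters so much in practice.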
https://waterman180.com/code-2/trinket-dont-hit-the-moose/ | [
In CPS physics, students are working out their trinket.io examples using Python. Today as a class we talked through an example of taking a basic physics equation, finding the variables that need to be asked for, and doing the calculation. Just as we learned that setting up equations in spreadsheets can make work faster, once the basic idea of coding is understood, this is another great tool.
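The post doesn't include the program itself, so here is a sketch of the kind of exercise described: take a basic physics equation, identify the inputs it needs, and do the calculation. The braking-distance scenario and numbers are my own invention (in class you would read the inputs with `input()` instead of hard-coding them):

```python
def stopping_distance(speed_m_s, deceleration_m_s2):
    """From v^2 = u^2 + 2*a*s with final speed v = 0: s = u^2 / (2*a)."""
    return speed_m_s ** 2 / (2 * deceleration_m_s2)

# e.g. travelling at 20 m/s (~72 km/h) and braking at 5 m/s^2:
print(stopping_distance(20.0, 5.0))  # 40.0 metres
```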
https://www.jiskha.com/questions/1781651/what-do-positive-hydrogen-ions-produce-when-they-react-with-water-in-solution | [
# Integrated Science

What do positive hydrogen ions produce when they react with water in solution?

A). hydroxide ions
B). hydronium ions
C). a salt
D). negative hydrogen ions

I believe the answer is A). Is this correct?

1. It's not A. (I'm in the middle of the test.)
2. It's B; just finished the test :)

The answer is B: in solution, a positive hydrogen ion (H+) attaches to a water molecule to form a hydronium ion, H+ + H2O → H3O+. Hydroxide ions (OH−) come from bases, not from H+ reacting with water.
https://versuscycles.com/visit | [
# Mathematics questions

This mathematics questions page helps you solve math problems quickly and easily. We will also look at some example problems and how to approach them.

## The Best Mathematics questions

Mathematics questions can be found online or in math books. Simple solutions math is a method of teaching that focuses on breaking down complex problems into small, manageable steps. By breaking problems into smaller pieces, students can better understand the concepts behind them and find more creative solutions. Simple solutions math also encourages students to think outside the box and approach problems from different angles, so it can be an effective way to teach problem-solving skills. It can also help improve test scores and grades: studies have shown that students who use the method outperform those who do not, making it an effective tool for helping students succeed in school.

How to solve using substitution is best explained with an example. Substitution applies to a system of equations, so let's say you have the equations 4x + 2y = 12 and x - y = 0. To solve this system using substitution, you would first need to isolate one of the variables. In this case, let's isolate y in the second equation, which gives y = x. Now that we have isolated y, we can substitute it into the first equation in place of y. This gives us: 4x + 2x = 12, which simplifies to 6x = 12, and therefore x = 2. Finally, we can substitute x = 2 back in to solve for y, giving y = 2. So the solution to the system is x = 2 and y = 2 (check: 4(2) + 2(2) = 12).

A radical is a square root or any other root.
The number underneath the radical sign is called the radicand. In order to solve a radical, you must find the number that, when multiplied by itself, produces the radicand. This is called the principal square root, and it is always positive. For example, the square root of 16 is 4 because 4 times 4 equals 16. The symbol for the square root is √. To find other roots, you look for a number that, multiplied by itself the required number of times, produces the radicand. For example, the third root of 64 is 4 because 4 times 4 times 4 equals 64. The symbol for the third root is ∛. Sometimes you will see radicals that cannot be simplified further. These are called irrational numbers, and they cannot be expressed as a whole number or a fraction. An example of an irrational number is √2. Although radicals can seem daunting at first, with a little practice they can be easily solved!

In addition, the built-in practice exercises further reinforce the lesson material. As a result, Think Through Math is an extremely effective way for students to learn and improve their math skills. Best of all, the app is available for free, so there's no excuse not to give it a try!

## Solve your math tasks with our math solver

Great app for solving more complicated equations with different variables. Also gives detailed steps and different ways of solving the given equations, though it doesn't always show the way I would solve the equation, as the application has a higher level of mathematics that it can apply, which can theoretically be problematic if the user doesn't know what the app did to solve the equation.
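The substitution example earlier amounts to eliminating one variable from a two-equation system (a single equation like 4x + 2y = 12 has infinitely many solutions on its own, so a second equation is needed; here I pair it with x - y = 0, my own choice). A small sketch that solves any 2x2 linear system via Cramer's rule, which gives the same answer substitution would:

```python
def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve the system a1*x + b1*y = c1 and a2*x + b2*y = c2."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("no unique solution (equations are dependent or inconsistent)")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# The worked example: 4x + 2y = 12 together with x - y = 0.
print(solve_2x2(4, 2, 12, 1, -1, 0))  # (2.0, 2.0)
```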
Gabriella White

It's the best math solving app I have ever seen. I am not a premium user, but I am happy with my experience with this app. It is not necessary to buy premium, because an average math student can understand the steps provided by the non-premium app. I don't know how good the premium is, but if the non-premium is this good, then premium will be outstanding. And the best way to get rid of ads for a non-premium user is: don't turn on the internet while using this app. Then there are no ads of any kind.
Priscila Lopez
https://wtmaths.com/pythagoras.html | [
# Pythagoras

GCSE(F), GCSE(H)

Pythagoras' Theorem states that in a right-angled triangle, the square of the hypotenuse is equal to the sum of the squares on the other two sides.

Measure the length of each side, and then square the measurement. Add the squares of the two shorter sides together, and this will equal the square of the longest side.

This is written as a^2 + b^2 = c^2, where c is the longest side, or hypotenuse. It does not matter which sides a and b represent.
## Examples

1. What is the length of the side c in this triangle? Give your answer to 1 decimal place. (The figure shows a right-angled triangle whose two shorter sides are 5 cm and 14 cm, with c the hypotenuse.)
Using Pythagoras' Theorem, with the shorter sides being 5 cm and 14 cm:

a^2 + b^2 = c^2

5^2 + 14^2 = c^2

25 + 196 = c^2

221 = c^2

14.866 = c, which is 14.9 to 1 dp

2. What is the length of the side CB in the triangle below? Give your answer to three significant figures. (The figure shows a right-angled triangle with a hypotenuse of 24 and one shorter side of 22.)
Using Pythagoras' Theorem, with the hypotenuse (c) and one shorter side known:

a^2 + b^2 = c^2

22^2 + b^2 = 24^2

484 + b^2 = 576

Rearranging the equation:

b^2 = 576 - 484

b^2 = 92

b = 9.59166, or 9.59 to 3 sf
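Both examples can be checked with a few lines of code; `math.hypot` computes the hypotenuse directly, and the rearranged form gives a missing shorter side (the helper names are mine):

```python
import math

def hypotenuse(a, b):
    """c = sqrt(a^2 + b^2)."""
    return math.hypot(a, b)

def shorter_side(c, a):
    """b = sqrt(c^2 - a^2), from rearranging a^2 + b^2 = c^2."""
    return math.sqrt(c * c - a * a)

print(round(hypotenuse(5, 14), 1))     # 14.9, as in example 1
print(round(shorter_side(24, 22), 2))  # 9.59, as in example 2
```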
https://discuss.codechef.com/t/longart-editorial/1721 | [
"",
null,
"# LONGART - Editorial\n\nHARD\n\n### PREREQUISITES\n\nGraph Theory, Matching, Maximum Flow\n\n### PROBLEM\n\nYou are given two strings A and B. Imagine a language which consists of `|A| * |B|` unique words, where each word’s first letter is in A and the second letter is in B. For each word, you are given how many times it may be used to build an Article. Let this be denoted by C(w) for a word w.\n\nAn Article is a collection of Sentences. A Sentence consists of |A| words, such that\n\n• Each letter from A is used exactly once.\n• Each letter from B is used at most once.\n\nFind the longest Article (most number of sentences) that can be achieved and print it.\n\n### EXPLANATION\n\nOf course |A| ≤ |B| to be able to build any sentence.\n\nThe problem must be broken into two pieces\n\n• Finding the maximum number of sentences that can be built in an Article - call it K.\n• Finding the sentences themselves.\n\n## First Insight\n\nIf there are at least K sentences in the longest Article, then the following network has flow `N * K`\n\n• There are |A| + |B| + 2 vertices (1 source and 1 sink extra)\n• For all edges (source, a), where a belongs to A\n• capacity( source, a ) = K\n• For all edges (a, b), where a belongs to A and b belongs to B\n• capacity( a, b ) = C(ab)\n• For all edges (b, sink), where b belongs to B\n• capacity( b, sink ) = K\n\nIt is intuitive that in such a network, the number of times each character in A is used is K and the number of times each character of B is used is less than or equal to K.\n\n## Corollary\n\nIf the maximum flow in the above network is `N * K` then there are K proper sentences\n\nLet us denote the flow between two vertices u and v as F(u, v). 
Now we know that if the maximum flow is `N * K` then\n\n• F(source, a) = K for all a in A\n• F(b, sink) <= K for all b in B\n\nLet us generate a bipartite graph G(A,B) such that the number of edges between a in A and b in B is equal to F(a,b).\n\nWe wish to partition the edges in G(A,B) into K matchings of size |A| each. If this is possible, then we have K proper sentences. The mapping between a matching and the sentence is easy to see.\n\nOn G(A,B) we can use the following algorithm to generate the matchings (we will prove why).\n\n```max_degree = K\n\nrepeat K times\n1. Let B' be the set of vertices in B\nwhose degree is max_degree\n2. build a matching of size |A| which\ncontains all vertices in A\ncontains all vertices in B'\n3. remove all the edges in the matching\nfrom the graph\n4. max_degree--\n```\n\nStep 2 in the above algorithm is actually always possible. Let us see why\n\n(Material for the proofs below has been derived from the book Graphs: Theory and Algorithms by K. Thulasiraman, M. N. S. Swamy)\n\nLemma: In G(A,B), you can always build a matching of size |A|\n\n• Degree of each vertex in A is max_degree\n• Let X = a subset of A\n• Let E = the edges incident on X\n• `|E| = |X| * max_degree`\n• Let Y = all the distinct vertices in B that are in the neighbourhood of all the vertices in X\n• Let E’ = edges incident on Y\n• `|E'| <= |Y| * max_degree`, because no vertex in Y (or B, or A) has a degree more than max_degree\n• But, `|E'| >= |E|`, since E is a subset of E’\n• `|Y| * max_degree >= |X| * max_degree`\n• Or, `|Y| >= |X|`\n\nNow, by Hall’s Marriage Theorem we know that there should always be a matching that saturates A.\n\nLemma: In G(A,B), you can always build a matching that contains all vertices in A and contains all vertices in B’\n\nFrom the previous lemma, we can get two matchings M1 and M2 that saturate A and saturate B’ respectively. 
Both are built independently and may or may not have edges in common.\n\n• Consider M, the union of M1 and M2\n• Let X be the neighbourhood of A in B\n• Let Y be the neighbourhood of B’ in A\n• We can build a bipartite graph on\n• Vertices in A\n• Vertices in B’ union X\n• Union of edges in M1 and M2\n• degree of any vertex in graph = 1 or 2\n• Hence the bipartite graph is made of mutually exclusive paths and circuits.\n• Now consider a vertex b in B’ - X\n• we have degree(b) = 1\n\nA path from b may only end at a vertex in X - B’. Can you see why?\n\nHint\n\n• In a path from b, when we arrive at any vertex in B’, we do so using an edge in M1\n• This means the degree of such a vertex has to be 2 and the path can continue\n\nNow,\n\n• We can augment the matching M1 by using this path such that\n• b is included\n• the last vertex in the path is excluded.\n\nThis process for each vertex b in B’ - X converts M1 into a matching that saturates A as well as B’.\n\nThus, we will have a solution by repeatedly finding this matching - and removing the edges.\n\n## Le Implementation\n\nYou can calculate the value of K by performing a binary search on it.\n\nBuild the graph for each sample of K and then test whether a flow of size `|A| * K` exists or not. This test requires finding the flow as fast as possible. This calls for using a faster flow algorithm, such as Dinic’s or Push Relabel. Ford Fulkerson and Edmond Karp will certainly be too slow.\n\nFor rebuilding the sentences we can use the discussion to derive a matching that saturates A as well as B’. This will bound the complexity of our implementation by `O(K*N*N*M)`. Unfortunately K can be quite large.\n\nWhat if we collect similar sentences together?\n\nWe will have to choose the smallest F(u,v) for each edge (u,v) in the matching. Also, we will have to find the smallest delta through which we can go before a new vertex must be added to B’. 
The smaller of the two values is the number of sentences that can be built by this matching.\n\nAfter deleting the edges in the matching, we may have to run augmentations to rebuild the matchings that saturate A and B’ (and hence find the matching that saturates both A and B’).\n\n• The only augmentations made to B’ would be adding more vertices to it, meaning we will augment for the new vertex that must be added to B’\n• There are at most N vertices in B’ that will limit this\n• The only augmentations made to A would be to handle deletion of edges as words get used up, meaning we will augment for the edge that is now deleted\n• There are at most `N*M` words that will limit this\n\nAn augmentation takes at most `N*M` steps. Hence the complexity of the reconstruction is `O(N*M * N*M)`. It is worth noting that the constraint on the output size (less than 30000 blocks) is superfluous. We would always find an answer within that size!\n\n### EXERCISE\n\nTry to find alternate ways of constructing a matching that saturates all the maximum degree vertices. A very interesting way using flows is possible by using pre-sinks. Using Dinic’s algorithm, the matching can be found quickly. Also, using the collection trick described above works as an alternate solution.\n\nHint\n\n• Design a graph similar to G(A,B) but with two additional sinks\n• The sinks help enumerate the saturated vertices from B and the non-saturated vertices respectively.\n• The graph is apt for optimisations due to the properties of Dinic’s flow.\n• See the tester’s solution for a solution based on this approach.\n\n### SETTER’S SOLUTION\n\nCan be found here. ( Link is broken. Will upload the file on this address shortly. )\n\n### TESTER’S SOLUTION\n\nCan be found here.\n\n1 Like\n\n“It is worth noting that the constraint on the output size (less than 30000 blocks) is superfluous”\n\nHad I believed this, I would have solved this sum
null,
". Implemented everything as discussed. When submitted i got WA, and I felt it was because of that 30000 factor which probably m missing. So some other bug caused me WA.\n\nMy solution is almost identical to what was presented in the editorial (in concept, at least). Just one observation. In the first part (where the flow is binary searched), a standard flow algorithm run from scratch each time is indeed too slow. However, what I did is the following: every time I got a positive answer for a flow value F (during the binary search), I copied the flow values of the graph edges. Then, when I tested whether a value F’>F is achievable, I started the max-flow algorithm from the flow values of the previous good value F (and not all the way from zero). Then I used just one of the standard max-flow algorithms with the following enhancement:\n\n• I incremented the flow on a set of maximal vertex-disjoint paths at each iteration, just like in the Hopcroft-Karp algorithm (of course, the flow was incremented by the maximum value possible along the path, not just by 1 unit)\n\nHowever, for me, generating the bipartite matchings in time (i.e. solving the second part of the problem) was more difficult. Generating each matching from scratch was too slow in my solution. So, in the end, I used a standard max-flow algorithm for finding each matching, enhanced with;\n\n• the Hopcroft-Karp idea (increase the flow on a maximal set of vertex-disjoint paths after each iteration)\n\n• use an initial greedy for each matching (to try not to start the max-flow algorithm from zero)\n\n• reuse as much as possible the flow obtained at the previous matching: e.g. if an edge i-j (i from A and j from B) was in the previous matching and it still had to be used in some future matchings, then I would keep the flow on it, unless:\n\n• j was not a “critical” node from B (i.e. 
it didn’t have to be part of each matching from now on) and there was some other “critical” node k from B which could not be matched by the initial greedy algorithm\n\n• the max-flow algorithm redirected the flow on this edge\n\n8 Likes\n\nA quick question, should those B - B’ and B’ - B be X - B’ and B’ - X ?\n\nYes",
null,
""
]
| [
null,
"https://s3.amazonaws.com/discourseproduction/original/3X/7/f/7ffd6e5e45912aba9f6a1a33447d6baae049de81.svg",
null,
"https://discuss.codechef.com/images/emoji/apple/frowning.png",
null,
"https://discuss.codechef.com/images/emoji/apple/slight_smile.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.92593014,"math_prob":0.94547343,"size":7036,"snap":"2020-34-2020-40","text_gpt3_token_len":1802,"char_repetition_ratio":0.12827076,"word_repetition_ratio":0.042796005,"special_character_ratio":0.25440592,"punctuation_ratio":0.079891674,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99776393,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-27T03:50:23Z\",\"WARC-Record-ID\":\"<urn:uuid:1f74a678-26ef-475b-9391-73d4e5905523>\",\"Content-Length\":\"33030\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:37a1b84c-ce44-4470-ba40-f33c59cf716d>\",\"WARC-Concurrent-To\":\"<urn:uuid:c66b9de1-0464-4c89-a40f-431b7a230326>\",\"WARC-IP-Address\":\"52.54.40.124\",\"WARC-Target-URI\":\"https://discuss.codechef.com/t/longart-editorial/1721\",\"WARC-Payload-Digest\":\"sha1:HBS6CLUA7OIGXGUYI7XX2TPXBMHUQMH3\",\"WARC-Block-Digest\":\"sha1:BJ7M4EQ62B2TIBQWQXZ5LN3L22XVRBMW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400250241.72_warc_CC-MAIN-20200927023329-20200927053329-00249.warc.gz\"}"} |
http://mca.ignougroup.com/2017/06/under-what-condition-is-ppoly-equal-to.html | [
"World's most popular travel blog for travel bloggers.\n\n# Under what condition is P/poly equal to the class of languages having Turing machines running in polynomial length with polynomial advice?\n\n, ,\nProblem Detail:\n\nSanjeev Arora and Boaz Barak show the following :\n\n$P/poly = \\cup_{c,d} DTIME (n^c)/n^d$\n\nwhere $DTIME(n^c)/n^d$ is a Turing machine which is given an advice of length $O(n^d)$ and runs in $O(n^c)$ time. I do follow the proof. But I feel the proof only holds if we assume that $\\forall n$ the advice given to any two $n$ length strings $x$ and $y$ is same.\n\nBut I am unable to see if the theorem still holds if the above condition if not applicable ?\n\nSince $P/Poly$ talks about circuit families for languages (different circuit $C_n$ for length $n$ inputs), it is natural to talk about an advice which is a function of the input's length alone. Different advice for each string will make the class too big. For any $L\\subseteq \\Sigma^*$, your advice for some $x\\in\\Sigma ^*$ can be 1 if $x\\in L$ and 0 otherwise."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.82174295,"math_prob":0.9963914,"size":976,"snap":"2020-10-2020-16","text_gpt3_token_len":274,"char_repetition_ratio":0.093621396,"word_repetition_ratio":0.0,"special_character_ratio":0.2704918,"punctuation_ratio":0.087804876,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99885786,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-05T16:27:00Z\",\"WARC-Record-ID\":\"<urn:uuid:91018477-87a4-4519-8fd6-9b524d34f6f8>\",\"Content-Length\":\"155945\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:40de3995-6d0a-46cb-b6fa-1eedcde43b48>\",\"WARC-Concurrent-To\":\"<urn:uuid:c8b4b345-ec01-4204-8cd4-7ea37c49913b>\",\"WARC-IP-Address\":\"216.239.32.21\",\"WARC-Target-URI\":\"http://mca.ignougroup.com/2017/06/under-what-condition-is-ppoly-equal-to.html\",\"WARC-Payload-Digest\":\"sha1:HK6MIJIP6BPCCJYE5KW4K5Z3YPGPU3BN\",\"WARC-Block-Digest\":\"sha1:XMQB3TIUJ2MUGNMCTQ2HRZRZDTRYMJWN\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371606067.71_warc_CC-MAIN-20200405150416-20200405180916-00330.warc.gz\"}"} |
https://puzzling.stackexchange.com/questions/9341/not-regular-this-time?noredirect=1 | [
"# Not Regular This Time\n\nWe can say that an $n$-by-$n$ square is regular provided that:\n\n1. Each of the integers from $0$ to $n^2 − 1$ appears in exactly one cell, and each cell contains only one integer (so that the square is filled), and\n\n2. If we express the entries in base-$n$ form, each base-$n$ digit occurs exactly once in the units’ position, and exactly once in the $n$’s position.\n\nWhat is an example of a 4-by-4 filled, magic square which is not regular? The square should use the integers 0 to 15. Show the answer in both decimal and base-4 as well.\n\n• I don't know what you mean by a irregular magic square? :P (I'm guessing its a stupid question, sorry :P ) – The Dragonista Feb 20 '15 at 20:42\n• @TheDragonista In my last question I defined what a regular square is. – Daniella Feb 20 '15 at 20:44\n• My previous one was irregular, so shall i post that answer here and try finding a different solution for the other one? – The Dragonista Feb 20 '15 at 20:52\n• I think Togashi already did – Daniella Feb 20 '15 at 20:54\n• Would it be possible for you to include in this question what you mean by a regular square? I'm pretty sure this isn't a term that most people know with regards to magic squares, and questions need to stand alone. Thank you! – user20 Feb 20 '15 at 21:00",
null,
""
]
| [
null,
"https://i.stack.imgur.com/GLtH3.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9239087,"math_prob":0.81495804,"size":528,"snap":"2021-04-2021-17","text_gpt3_token_len":143,"char_repetition_ratio":0.13167939,"word_repetition_ratio":0.0,"special_character_ratio":0.27083334,"punctuation_ratio":0.0862069,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98490226,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-24T22:56:15Z\",\"WARC-Record-ID\":\"<urn:uuid:1cae35be-5aee-4ac6-8cde-10c5aa7fbb99>\",\"Content-Length\":\"149546\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f6f2d6e9-617e-414c-847d-a10f65c700bb>\",\"WARC-Concurrent-To\":\"<urn:uuid:edc75f6e-50b0-4ccd-9bd5-4660c07e2d92>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://puzzling.stackexchange.com/questions/9341/not-regular-this-time?noredirect=1\",\"WARC-Payload-Digest\":\"sha1:CQVV6THENH2POYMERYODWI2S23FRBP3A\",\"WARC-Block-Digest\":\"sha1:4DJOB2LFXNTMLOBLFYXSIS4IW3JCP65A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703557462.87_warc_CC-MAIN-20210124204052-20210124234052-00513.warc.gz\"}"} |
https://www.wogu.cc/office/excel/18257.html | [
"Sub 生成九九乘法表()\n\nDim Rs As Byte, Cs As Byte, Arr As Variant\n\nReDim Arr(1 To 9, 1 To 9)\n\nFor Rs = 1 To 9\n\nFor Cs = 1 To 9\n\nIf Rs >= Cs Then\n\nArr(Rs, Cs) = Rs & “ד & Cs & “=“ & Rs * Cs\n\nEnd If\n\nNext Cs\n\nNext Rs\n\nRange(“B2“).Resize(9, 9).Value = Arr\n\nCall FC(Range(“B2“).Resize(9, 9))\n\nEnd Sub\n\nSub FC(Rng As Range)\n\nWith Rng\n\n.FormatConditions.Delete"
]
| [
null
]
| {"ft_lang_label":"__label__zh","ft_lang_prob":0.766703,"math_prob":0.9512238,"size":823,"snap":"2021-43-2021-49","text_gpt3_token_len":532,"char_repetition_ratio":0.0964591,"word_repetition_ratio":0.0,"special_character_ratio":0.25759417,"punctuation_ratio":0.15714286,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97108436,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-06T00:17:33Z\",\"WARC-Record-ID\":\"<urn:uuid:739049fa-0506-4a6c-afba-1f028d2fa46a>\",\"Content-Length\":\"82496\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b6913876-452e-4c12-aa32-d45d109da0a0>\",\"WARC-Concurrent-To\":\"<urn:uuid:0bcc289c-8c51-4979-a796-f423e921469d>\",\"WARC-IP-Address\":\"43.128.12.85\",\"WARC-Target-URI\":\"https://www.wogu.cc/office/excel/18257.html\",\"WARC-Payload-Digest\":\"sha1:XZAWAET745OKNQCUW3E5ZIM4YFMGHXSR\",\"WARC-Block-Digest\":\"sha1:ZV5TQG6KDHIMHTXCUP7DGE4GQCTZIZ5G\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363226.68_warc_CC-MAIN-20211205221915-20211206011915-00329.warc.gz\"}"} |
https://crypto.stackexchange.com/questions/66091/how-to-break-rsa-signature-on-xor-of-hashed-blocks | [
"# How to break RSA signature on XOR of hashed blocks\n\nI'm trying to figure out if there's a way of breaking or weakening the following method of signing long messages\n\nGiven $$M=m_{1}m_{2}...m_{n},\\,\\,|m_i|=64b, \\,\\,h=h(m_{1})\\oplus h(m_{2})\\oplus\\cdots\\oplus h(m_{n})$$\n\nsign $$h$$ using the RSA method.\n\nalso, is it weaker than the original way (of signing every message separately)?\n\n• Can I ask what is the aim? – kelalaka Dec 25 '18 at 15:54\n• the general idea is to sign long messages with minimum signature length instead of signing every block – IGxCS Dec 25 '18 at 16:03\n• Is this homework? It looks like it. Please let us know so that we can answer appropriately. – Yehuda Lindell Dec 25 '18 at 16:16\n• It's not homework, it's a question from past exam, this is my first question here and I was wondering if I need to mention it, why is it important? – IGxCS Dec 25 '18 at 16:23\n• While possibly reducing the signature size, one has to process all to verify the sign for a single message. – kelalaka Dec 25 '18 at 16:32\n\nTo sign a message $$M = m_1 \\ldots m_n$$, you calculate $$h(M) = h(m_1) \\oplus \\ldots \\oplus h(m_n)$$ then $$S(M) = \\textsf{RSASSA}(h(M))$$ where $$\\textsf{RSASSA}$$ is some RSA-based signature mechanism. $$h$$ is presumably a cryptographic hash function. You're looking for a weakness of this signing method.\nStart small. Given $$M = m_1 m_2$$, can you think of another message $$M'$$ such that $$S(M) = S(M')$$? (Note that a weakness could be more complicated than this. For example, it could be impossible to find two messages with the same signature, but possible to derive the signature of $$M'$$ from the signature of $$M$$. 
However, in this case, it is possible to forge another message with the same signature.)\n$$S(m_1 m_2) = \\textsf{RSASSA}(h(m_1) \\oplus h(m_2)) = \\textsf{RSASSA}(h(m_2) \\oplus h(m_1)) = S(m_2 m_1)$$\nFollow-up exercise (easy): given an arbitrary message $$M = m_1 \\ldots m_n$$, find a longer message with the same signature. Hint:\nAbove I used the fact that $$\\oplus$$ is commutative. What other algebraic properties does $$\\oplus$$ have?\nThe normal way to sign a message $$m_1 \\ldots m_n$$ is of course $$\\textsf{RSASSA}(h(m_1 \\ldots m_n))$$. If you also want to be able to verify the signature of one part independently, one solution is to build a hash tree and sign the root hash. A two-level hash tree would be $$h(h(m_1) \\ldots h(m_n))$$. If you transmit the signature $$\\textsf{RSASSA}(h(h(m_1) \\ldots h(m_n)))$$ as well as the list of individual hashes $$(h(m_1), \\ldots, h(m_n))$$ then it's possible to verify the signature of any of the individual $$m_i$$. Signatures are larger than hashes and more expensive to compute, so this can save resources compared to signing each $$m_i$$ independently."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8240944,"math_prob":0.9982228,"size":1691,"snap":"2019-43-2019-47","text_gpt3_token_len":502,"char_repetition_ratio":0.14700653,"word_repetition_ratio":0.02238806,"special_character_ratio":0.29390892,"punctuation_ratio":0.08841463,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99988794,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-15T00:16:51Z\",\"WARC-Record-ID\":\"<urn:uuid:265184d7-1b7e-44f8-ad9c-7959a6f8b786>\",\"Content-Length\":\"142519\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6f5cbf48-692c-43fa-adf7-e54693a09c06>\",\"WARC-Concurrent-To\":\"<urn:uuid:478e95ca-da08-4880-b13c-912061cb783e>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://crypto.stackexchange.com/questions/66091/how-to-break-rsa-signature-on-xor-of-hashed-blocks\",\"WARC-Payload-Digest\":\"sha1:O6MKQ4NE35GZJT6IOOL457ILHLMWZ6X4\",\"WARC-Block-Digest\":\"sha1:S55ZKECFIQTKLGH3X6S63GBPVEDVCFXK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668544.32_warc_CC-MAIN-20191114232502-20191115020502-00498.warc.gz\"}"} |
https://researchdatapod.com/how-to-solve-python-valueerror-can-only-compare-identically-labeled-dataframe-objects/ | [
"# How to Solve Python ValueError: Can only compare identically-labeled DataFrame objects\n\nIf you try to compare DataFrames with different indexes using the equality comparison operator `==`, you will raise the ValueError: Can only compare identically-labeled DataFrame objects. You can solve this error by using equals instead of `==.`\n\nFor example, `df1.equals(df2)`, which ignores the indexes.\n\nAlternatively, you can use `reset_index` to reset the indexes back to the default `0, 1, 2, ...` For example, `df1.reset_index(drop=True).equals(df2.reset_index(drop=True))`.\n\nThis tutorial will go through the error find detail and how to solve it with code examples.\n\n## ValueError: Can only compare identically-labeled DataFrame objects\n\nIn Python, a value is a piece of information stored within a particular object. We will encounter a ValueError in Python when using a built-in operation or function that receives an argument that is the right type but an inappropriate value. The data we want to compare is the correct type, DataFrame, but the DataFrames have the inappropriate indexes for comparison.\n\n## Example\n\nLet’s look at an example of two DataFrames that we want to compare. Each DataFrame contains the bodyweight and maximum bench presses in kilograms for six lifters. 
The indexes for the two DataFrames are different.\n\n```import pandas as pd\n\ndf1 = pd.DataFrame({'Bodyweight (kg)':[76,84, 93,106, 120, 56],\n'Bench press (kg)':[135, 150, 170, 140, 180, 155]},\nindex = ['lifter_1', 'lifter_2', 'lifter_3', 'lifter_4', 'lifter_5', 'lifter_6'])\n\ndf2 = pd.DataFrame({'Bodyweight (kg)':[76,84, 93,106, 120, 56],\n'Bench press (kg)':[145, 120, 180, 220, 175, 110]},\nindex = ['lifter_A', 'lifter_B', 'lifter_C', 'lifter_D', 'lifter_E', 'lifter_F'])\n\nprint(df1)\n\nprint(df2)```\n\nLet’s run this part of the program to see the DataFrames:\n\n``` Bodyweight (kg) Bench press (kg)\nlifter_1 76 135\nlifter_2 84 150\nlifter_3 93 170\nlifter_4 106 140\nlifter_5 120 180\nlifter_6 56 155\nBodyweight (kg) Bench press (kg)\nlifter_A 76 145\nlifter_B 84 120\nlifter_C 93 180\nlifter_D 106 220\nlifter_E 120 175\nlifter_F 56 110```\n\nLet’s compare the DataFrames using the equality operator:\n\n`print(df1 == df2)`\n\nLet’s run the code to see the result:\n\n`ValueError: Can only compare identically-labeled DataFrame objects`\n\nThe ValueError occurs because the first DataFrame has indexes: `['lifter_1', 'lifter_2', 'lifter_3', 'lifter_4', 'lifter_5', 'lifter_6']` and the second DataFrame has indexes: `['lifter_A', 'lifter_B', 'lifter_C', 'lifter_D', 'lifter_E', 'lifter_F']`.\n\n### Solution #1: Use DataFrame.equals\n\nTo solve this error, we can use the DataFrame.equals function. The equals function allows us to compare two Series or DataFrames to see if they have the same shape and elements. Let’s look at the revised code:\n\n`print(df1.equals(df2))`\n\nLet’s run the code to see the result:\n\n`False`\n\n### Solution #2: Use DataFrame.equals with DataFrame.reset_index()\n\nWe can drop the indexes of the DataFrames using the `reset_index()` method, then we can compare the DataFrames. To drop the indexes, we need to set the parameter `drop = True`. 
Let’s look at the revised code:\n\n```df1 = pd.DataFrame({'Bodyweight (kg)':[76,84, 93, 106, 120, 56],\n'Bench press (kg)':[145, 120, 180, 220, 175, 110]},\nindex = ['lifter_1', 'lifter_2', 'lifter_3', 'lifter_4', 'lifter_5', 'lifter_6'])\n\ndf2 = pd.DataFrame({'Bodyweight (kg)':[76, 84, 93, 106, 120, 56],\n'Bench press (kg)':[145, 120, 180, 220, 175, 110]},\nindex = ['lifter_A', 'lifter_B', 'lifter_C', 'lifter_D', 'lifter_E', 'lifter_F'])\n\ndf1 = df1.reset_index(drop=True)\ndf2 = df2.reset_index(drop=True)\nprint(df1)\nprint(df2)\n```\n\nLet’s look at the DataFrames with their indexes dropped:\n\n``` Bodyweight (kg) Bench press (kg)\n0 76 145\n1 84 120\n2 93 180\n3 106 220\n4 120 175\n5 56 110\nBodyweight (kg) Bench press (kg)\n0 76 145\n1 84 120\n2 93 180\n3 106 220\n4 120 175\n5 56 110```\n\nThere are two ways we can compare the DataFrames:\n\n• The whole DataFrame\n• Row-by-row comparison\n\n#### Entire DataFrame Comparison\n\nWe can use the `equals()` method to see if all elements are the same in both DataFrame objects. Let’s look at the code:\n\n`print(df1.equals(df2))`\n\nLet’s run the code to see the result:\n\n`True`\n\n#### Row-by-Row DataFrame Comparison\n\nWe can check that individual rows are equal using the equality operator once the DataFrames indexes are reset. Let’s look at the code:\n\n`print(df1 == df2)`\n\nLet’s run the code to see the result:\n\n``` Bodyweight (kg) Bench press (kg)\n0 True True\n1 True True\n2 True True\n3 True True\n4 True True\n5 True True```\n\nNote that the comparison is done row-wise for each column independently.\n\n### Solution #3: Use numpy.array_equal\n\nWe can also use numpy.array_equal to check if two arrays have the same shape and elements. We can extract arrays from the DataFrame using .values. 
Let’s look at the revised code:\n\n```import pandas as pd\nimport numpy as np\ndf1 = pd.DataFrame({'Bodyweight (kg)':[76,84, 93,106, 120, 56],\n'Bench press (kg)':[135, 150, 170, 140, 180, 155]},\nindex = ['lifter_1', 'lifter_2', 'lifter_3', 'lifter_4', 'lifter_5', 'lifter_6'])\n\ndf2 = pd.DataFrame({'Bodyweight (kg)':[76,84, 93,106, 120, 56],\n'Bench press (kg)':[145, 120, 180, 220, 175, 110]},\nindex = ['lifter_A', 'lifter_B', 'lifter_C', 'lifter_D', 'lifter_E', 'lifter_F'])\n\nprint(np.array_equal(df1.values, df2.values))```\n\nLet’s run the code to see the result:\n\n`False`\n\nWe can use array_equal to compare individual columns. Let’s look at the revised code:\n\n```import pandas as pd\nimport numpy as np\ndf1 = pd.DataFrame({'Bodyweight (kg)':[76,84, 93,106, 120, 56],\n'Bench press (kg)':[135, 150, 170, 140, 180, 155]},\nindex = ['lifter_1', 'lifter_2', 'lifter_3', 'lifter_4', 'lifter_5', 'lifter_6'])\n\ndf2 = pd.DataFrame({'Bodyweight (kg)':[76,84, 93,106, 120, 56],\n'Bench press (kg)':[145, 120, 180, 220, 175, 110]},\nindex = ['lifter_A', 'lifter_B', 'lifter_C', 'lifter_D', 'lifter_E', 'lifter_F'])\n\n# Get individual columns of DataFrames using iloc\ndf1_bodyweight = df1.iloc[:,0]\ndf1_bench = df1.iloc[:,1]\n\ndf2_bodyweight = df2.iloc[:,0]\ndf2_bench = df2.iloc[:,1]\n\n# Compare bodyweight and bench columns separately\n\nprint(np.array_equal(df1_bodyweight.values, df2_bodyweight.values))\nprint(np.array_equal(df1_bench.values, df2_bench.values))```\n\nLet’s run the code to see the result:\n\n```True\nFalse```\n\nThe above result informs us that the first column contains the same elements between the two DataFrames, while the second column contains different elements between the two DataFrames.\n\n## Summary\n\nCongratulations on reading to the end of this tutorial! The ValueError: Can only compare identically-labeled DataFrame objects occurs when trying to compare two DataFrames with different indexes. 
You can either reset the indexes using `reset_index()` or use the `equals()` function which ignores the indexes. You can also use the NumPy method array_equal to compare the two DataFrames’ columns.\n\nFor further reading on errors involving Pandas, go to the articles:\n\nFor further reading on Pandas, go to the article: Introduction to Pandas: A Complete Tutorial for Beginners.\n\nHave fun and happy researching\n\n##### Suf\n\nSuf is a research scientist at Moogsoft, specializing in Natural Language Processing and Complex Networks. Previously he was a Postdoctoral Research Fellow in Data Science working on adaptations of cutting-edge physics analysis techniques to data-intensive problems in industry. In another life, he was an experimental particle physicist working on the ATLAS Experiment of the Large Hadron Collider. His passion is to share his experience as an academic moving into industry while continuing to pursue research. Find out more about the creator of the Research Scientist Pod here and sign up to the mailing list here!"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.71241504,"math_prob":0.9623817,"size":7622,"snap":"2022-40-2023-06","text_gpt3_token_len":2162,"char_repetition_ratio":0.1865319,"word_repetition_ratio":0.25467497,"special_character_ratio":0.31907636,"punctuation_ratio":0.19178082,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99680716,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-26T22:28:50Z\",\"WARC-Record-ID\":\"<urn:uuid:d593b853-980e-4a07-a992-b291743efed4>\",\"Content-Length\":\"368942\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:89d1bd4f-da0c-4a10-a971-4c9c9a54b48e>\",\"WARC-Concurrent-To\":\"<urn:uuid:09cfa5b3-6f80-4082-a0a7-d8f6fb92bd9a>\",\"WARC-IP-Address\":\"172.67.74.65\",\"WARC-Target-URI\":\"https://researchdatapod.com/how-to-solve-python-valueerror-can-only-compare-identically-labeled-dataframe-objects/\",\"WARC-Payload-Digest\":\"sha1:RECGZHF4436NTW35RWHPQOKRLVKXL5XP\",\"WARC-Block-Digest\":\"sha1:B7GBKRYJ6YIN53LTSQP6CTKEIVLPJLSX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334942.88_warc_CC-MAIN-20220926211042-20220927001042-00506.warc.gz\"}"} |
https://stats.stackexchange.com/questions/319689/why-the-words-are-mapped-to-integer-before-processing-them-to-word2vec | [
"Why the words are mapped to integer before processing them to word2vec?\n\nI'm reading CNN for text classification. According to this link, inside data and preprocessing step it says:\n\n1. Load positive and negative sentences from the raw data files.\n2. Clean the text data using the same code as the original paper.\n3. Pad each sentence to the maximum sentence length, which turns out to be 59. We append special tokens to all other sentences to make them 59 words. Padding sentences to the same length is useful because it allows us to efficiently batch our data since each example in a batch must be of the same length.\n4. Build a vocabulary index and map each word to an integer between 0 and 18,765 (the vocabulary size). Each sentence becomes a vector of integers.\n\nI read the paper on word2vec, all it says is dealing with one hot vector.\n\nI'm not able to connect step 4 from above blog to paper.\n\n• Are they mentioning \"one hot vector\" in Google's word2vec paper? As stated in the first article you point to, they do not \"use[] pre-trained word2vec vectors for our word embeddings\". – tagoma Dec 20 '17 at 9:07\n• Yes, check in section 2.1 feedforward Neural Net Language Model. Yes, they are not using pre-trained embeddings. They will train from scratch – Bhaskar Dhariyal Dec 20 '17 at 9:26\n\nSuppose there's an embedding matrix $M$, of size $V \\times d$, where $V$ is the vocabulary size and $d$ is the embedding size. Suppose further an input sentence $(w_1, ..., w_k)$ that is encoded by indices $(i_1, ..., i_k)$, where $i_j \\leq V$.\n\nIn theory, the following two approaches are equivalent:\n\n• Select the rows from the embedding matrix $M$ that correspond to indices $(i_1, ..., i_k)$.\n\n• Convert $(i_1, ..., i_k)$ to one-hot encoded matrix $O$, of size $k \\times V$, and compute the dot-product $O \\cdot M$, like on the picture below.",
null,
"In both cases, the result is a matrix of embeddings for all words in the sentence: $k \\times d$. But in programming, the first approach is much more efficient than the second one. Both in terms of computational complexity (select is a cheaper operation than dot-product) and memory (it uses O(k) memory for the input instead of O(k*V)). Especially when $V$ is large, tens or even hundreds of thousands. Remember that optimization is usually done in batches, not by one sentence at a time.\n\nThat's why there's tf.nn.embedding_lookup function in tensorflow that accepts sparse representation (the tensor of indices, not one-hot vectors), and no wonder the tutorial that you refer to uses it. Though word2vec paper talks about one-hot vectors, I believe that in code they are using indices as well, because Google's 1T vocabulary size is 13M!\n\nSo, in general, it's better to avoid one-hot representation when the number of classes is large, like natural language vocabulary."
]
| [
null,
"https://i.stack.imgur.com/0qlJN.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9124087,"math_prob":0.96807986,"size":820,"snap":"2019-43-2019-47","text_gpt3_token_len":177,"char_repetition_ratio":0.125,"word_repetition_ratio":0.0,"special_character_ratio":0.22560975,"punctuation_ratio":0.09202454,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9910741,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-22T11:39:30Z\",\"WARC-Record-ID\":\"<urn:uuid:4f7a6c33-adec-47ed-94a6-27100c03740a>\",\"Content-Length\":\"137809\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e2ff966a-a637-40eb-b983-1f7b0af03658>\",\"WARC-Concurrent-To\":\"<urn:uuid:082dd322-ee92-4d1f-83d1-a072ad8a0ff1>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/319689/why-the-words-are-mapped-to-integer-before-processing-them-to-word2vec\",\"WARC-Payload-Digest\":\"sha1:D5E3HQMGWLU54HNKQFSQL7JSZ3BQZXXV\",\"WARC-Block-Digest\":\"sha1:W53ZYQ6HIEENAELNCBR6IOMEXNBPFC47\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987817685.87_warc_CC-MAIN-20191022104415-20191022131915-00383.warc.gz\"}"} |
https://mathematica.stackexchange.com/questions/176402/inverse-laplace-transform-of-powers-with-an-arbitrary-index | [
"# Inverse Laplace transform of powers with an arbitrary index\n\nTried for Inverse Laplace transform (ILT) for the following:\n\nL[s] = (L /(L + s) w + Q /(Q + s) (1 - w))^n\n\n\n$L[s]$ can also be written as\n\n$$L[s]=\\sum_{k=0}^n\\frac{n!}{k!\\ (n-k)!}\\ w^k\\ (1-w)^{n-k}\\ \\left(\\frac{L} {L+s}\\right)^k\\ \\left(\\frac{Q}{Q+s}\\right)^{n-k}\\ (1)$$\n\nIn code:\n\nSum[n!/(k! (n - k)!) w^k (1 - w)^(n - k) (L/(L + s))^k (Q/(Q + s))^(n - k), {k, 0, n}]\n\n\nThe ILT of auxiliary variable $s$ is referred from Tables of Integral Transforms, Vol. 1, and the complete ILT is given as $$g(t)=\\sum_{k=0}^n\\frac{n!}{k!\\ (n-k)!}\\ w^k\\ (1-w)^{n-k}\\ L^k\\ Q^{n-k}\\ t^{n-1}\\ \\phi_2(k, n - k, n -L\\ t, -Q\\ t)\\ (2)$$\n\nwhere $\\phi_2(*)$ is hyper-geometric series with $$\\phi_2(A,B,C, x,y)=\\sum_{u=0}^\\infty \\sum_{v=0}^\\infty \\frac{(A)_u(B)_v x^u y^v}{(C)_{u+v} u! v!},\\quad where (*)_a = Pochammer[*,a]\\quad (3)$$\n\nThe equivalent code of $(3)$ is,\n\nSum[(Pochhammer[A,u] Pochhammer[B,v] (x^u y^v))/(Pochhammer[C,u+v] u! v!), {u,0, ∞}, {v,0, ∞}]\n\n\nIf look into \"Tables of Integral Transforms, Vol. 1, pg-238 (PDF pg-253; form 9)\", then given Laplace transform is\n\n$$\\Gamma(\\gamma)\\ p^{-\\gamma}\\left(1-\\left(\\frac{\\lambda_1}{p}\\right)^{-\\beta_1}\\right) {...}\\left(1-\\left(\\frac{\\lambda_n}{p}\\right)^{-\\beta_n}\\right), Re\\ \\gamma>0\\quad (4)$$\n\ninverted as\n\n$$t^{\\gamma-1}\\ \\phi_2(\\beta_1,{...},\\beta_n;\\gamma;\\lambda_1\\ t,{...},\\lambda_n\\ t)\\quad (5)$$\n\nFollowing ways tried,\n\n1. 
If there is no substitution mistake, then $(4)$, written in terms of the variables of $(1)$ (ignoring terms without $s$), gives $\\lambda_1\\to L, \\lambda_2\\to Q, p\\to s, \\gamma \\to n, \\beta_1 \\to k$ and $\\beta_2 \\to n-k$.\n\nTo check this,\n\nInverseLaplaceTransform[Gamma[n]/s^n (1-L/s)^k (1-Q/s)^(n-k),s,t]\n\n\nWe get Gamma[n] InverseLaplaceTransform[1/s^n (1-L/s)^k (1-Q/s)^(n-k),s,t]\n\nI wonder: did Mathematica not recognize the known form, or does the form need to be modified for evaluation?\n\n2. Then, computing with a direct Integrate for\n\nH = (L /(L + s))^k (Q/(Q + s))^(n - k);\n\n1/(2 π j) Integrate[H Exp[s t], {s, c - j ∞, c + j ∞}, Assumptions -> c > 2 && t > 0]\n(*output same as input*)\n\n\nWhat could be the reason?\n\n3. Furthermore, the convolution theorem was applied:\n\nlist = {(L/(L + s))^k, (Q/(Q + s))^(n - k)};\n\nintg = Times@@(Map[InverseLaplaceTransform[#, s, t] &,list]//{#[[1]]/.t -> s,#[[2]]/.t -> (t-s)}&);\n\nIntegrate[intg, {s,0,t}] // PowerExpand\n\n(*E^(-Q t) L^k Q^(-k+n) t^(-1+n) Hypergeometric1F1Regularized[k,n,(-L+Q) t]*)\n\n\nIt gave out something, but nothing near what $(2)$ contains.\n\n4. I further found that the AppellF1[A;B,C;c;x,y] form could be related to $(3)$. But not exactly, as the equivalent form of $\\phi_2(k, n - k, n, -L\\ t, -Q\\ t)$ is not clearly found in the Wolfram documentation.\n\nNone of these attempts worked out.\n\nSo kindly help: how can we evaluate $(1)$ to get $(2)$, by InverseLaplaceTransform[.] or Integrate[.]?\n\n• Mathematica didn't recognize this $\\phi_2$ function, because it is not implemented. Why do you need exactly this function? $\\phi_2$ is Horn's function; see: mathworld.wolfram.com/HornFunction.html Jul 1, 2018 at 12:36\n• @MariuszIwaniuk thanks. Two of the articles mentioned solve $(1)$ to $(2)$. So I am solving it for another, similar model. But I could not get through either of the mentioned ways. Terrible if Mathematica doesn't have this! Any other alternatives? Jul 2, 2018 at 5:52\n• At a given value of 'n' a solution to the ILT can be found. 
As for alternatives, well, I don't know. Maple doesn't have this function either. Jul 2, 2018 at 6:20\n• @MariuszIwaniuk. I will try it. I wanted to learn more; can you please suggest a less mathematical article on Horn's function? :) Jul 3, 2018 at 12:44\n• See REFERENCES at the end of the page: mathworld.wolfram.com/HornFunction.html Jul 3, 2018 at 13:00"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.59159416,"math_prob":0.9996648,"size":2820,"snap":"2022-27-2022-33","text_gpt3_token_len":1122,"char_repetition_ratio":0.09730113,"word_repetition_ratio":0.03084833,"special_character_ratio":0.42340425,"punctuation_ratio":0.18424962,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999825,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-29T09:23:59Z\",\"WARC-Record-ID\":\"<urn:uuid:8ab7d075-2dbc-417e-b36f-881ac79453d3>\",\"Content-Length\":\"224951\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:540afbb3-d976-4b86-bae9-a2e57798d163>\",\"WARC-Concurrent-To\":\"<urn:uuid:032462f5-285a-470c-a752-f3ba731ef955>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://mathematica.stackexchange.com/questions/176402/inverse-laplace-transform-of-powers-with-an-arbitrary-index\",\"WARC-Payload-Digest\":\"sha1:DHYT4Q6CBKOEGTANGNT3NUFZ6AKRZQNL\",\"WARC-Block-Digest\":\"sha1:6A5TXNDO3XEON7F7DEC3TRZFYF3WSI57\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103626162.35_warc_CC-MAIN-20220629084939-20220629114939-00352.warc.gz\"}"} |
https://educationexpert.net/chemistry/2639374.html | [
"",
null,
"Chemistry\n14 April, 06:20\n\n# What is the total energy needed to boil 255 grams of water, given that theΔH vaporization = 40,650 J/mol\n\n+1\n1.",
null,
"14 April, 06:27\n0\n575,000 J\n\nExplanation:\n\n1) Convert the mass of water into number of moles\n\nMolar mass of water: 18.015 g/mol\n\nNumber of moles, n = mass in grams / molar mass\n\nn = 255 g / 18.015 g/mol = 14.15 mol\n\n2) Use the formula E = n * ΔH vap\n\nThis is, you have to multiply the molar ΔH vaporization by the number of moles to find the total energy to boil the given amount of water.\n\nE = 14.15 mol * 40,650 J/mol = 575,395.5 J\n\n3) Round to the correct number of significant figures.\n\nThe mass of water is the measurement with the least number of significant figures (3), so you must report the answer with 3 significant figures,\n\nE = 575,000 J ← answer"
]
| [
null,
"https://educationexpert.net/templates/educationexpert/images/icons/chemistry.svg",
null,
"https://educationexpert.net/templates/educationexpert/dleimages/noavatar.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.92865187,"math_prob":0.99643356,"size":301,"snap":"2022-27-2022-33","text_gpt3_token_len":73,"char_repetition_ratio":0.13131313,"word_repetition_ratio":0.0,"special_character_ratio":0.25249168,"punctuation_ratio":0.11111111,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99958974,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-12T08:13:18Z\",\"WARC-Record-ID\":\"<urn:uuid:997bcf92-8fa1-461d-98d3-091b169a79e0>\",\"Content-Length\":\"21851\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ea123545-da74-4dd8-a236-7260399d0cb6>\",\"WARC-Concurrent-To\":\"<urn:uuid:12859336-0ce5-4510-8538-09ac9a822e72>\",\"WARC-IP-Address\":\"172.67.223.147\",\"WARC-Target-URI\":\"https://educationexpert.net/chemistry/2639374.html\",\"WARC-Payload-Digest\":\"sha1:XBHVGU7FNHEL6RHPMKYKT5PLQXPMGIFK\",\"WARC-Block-Digest\":\"sha1:SHCJAZWVWHG7X6H5X2YIQBVYDPML3P7H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571597.73_warc_CC-MAIN-20220812075544-20220812105544-00188.warc.gz\"}"} |
https://www.techglads.com/cse/for-loop-in-c/ | [
"# for loop in c\n\nA for loop is a repetition control structure that allows you to efficiently write a loop that needs to execute a specific number of times.\n\n### Syntax:\n\nThe syntax of a for loop in C programming language is:\n\n```for ( init; condition; increment )\n{\nstatement(s);\n}```\n\nHere is the flow of control in a for loop:\n\n1. The init step is executed first, and only once. This step allows you to declare and initialize any loop control variables. You are not required to put a statement here, as long as a semicolon appears.\n2. Next, the condition is evaluated. If it is true, the body of the loop is executed. If it is false, the body of the loop does not execute and flow of control jumps to the next statement just after the for loop.\n3. After the body of the for loop executes, the flow of control jumps back up to theincrement statement. This statement allows you to update any loop control variables. This statement can be left blank, as long as a semicolon appears after the condition.\n4. The condition is now evaluated again. If it is true, the loop executes and the process repeats itself (body of loop, then increment step, and then again condition). After the condition becomes false, the for loop terminates.\n\n### Flow Diagram:",
null,
"for loop\n\n### Example:\n\n```#include <stdio.h>\n\nint main ()\n{\n/* for loop execution */\nfor( int a = 10; a < 20; a = a + 1 )\n{\nprintf(\"value of a: %d\\n\", a);\n}\n\nreturn 0;\n}```\n\nWhen the above code is compiled and executed, it produces the following result:\n\n```value of a: 10\nvalue of a: 11\nvalue of a: 12\nvalue of a: 13\nvalue of a: 14\nvalue of a: 15\nvalue of a: 16\nvalue of a: 17\nvalue of a: 18\nvalue of a: 19```"
]
| [
null,
"https://www.tutorialspoint.com/cprogramming/images/cpp_for_loop.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8051596,"math_prob":0.86417955,"size":1625,"snap":"2020-24-2020-29","text_gpt3_token_len":397,"char_repetition_ratio":0.1770512,"word_repetition_ratio":0.019169329,"special_character_ratio":0.26646155,"punctuation_ratio":0.14971751,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9542126,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-29T22:22:40Z\",\"WARC-Record-ID\":\"<urn:uuid:da9b00b7-1b07-41bc-af68-3cdbf218e78c>\",\"Content-Length\":\"39020\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6bedc56a-2ddd-445c-bb2a-d00a0c157c53>\",\"WARC-Concurrent-To\":\"<urn:uuid:d7bfd22f-70cb-4b98-a681-4319886eecf3>\",\"WARC-IP-Address\":\"172.67.166.251\",\"WARC-Target-URI\":\"https://www.techglads.com/cse/for-loop-in-c/\",\"WARC-Payload-Digest\":\"sha1:YZ5P7ZQHHHFSSI3FF4UDHFTKGEWUJXSX\",\"WARC-Block-Digest\":\"sha1:4TUP4JS2Q25YVAT2KD5RB3C5F77ILMVJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347406785.66_warc_CC-MAIN-20200529214634-20200530004634-00391.warc.gz\"}"} |
https://zh.wikipedia.org/wiki/%E5%89%9B%E5%BA%A6 | [
"# 剛度\n\n$k={\\frac {P}{\\delta }}$",
null,
"## 与弹性的关系\n\n$k={\\frac {AE}{L}}$",
null,
"${A}$",
null,
"为横截面面积;\n${E}$",
null,
"为拉伸弹性模量(杨氏模量);\n${L}$",
null,
"为元素的长度。\n\n$k={\\frac {nEI}{L}}$",
null,
"${I}$",
null,
"为惯性矩;\n${n}$",
null,
"是一个依赖于边界条件的整数(对于固端等于4)。"
]
| [
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/b9f95c418fff842ec8181b71386aadc0fbec116a",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/1b741990e13d0d6c0d6805a29aff910668a0bc74",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/f5ebb239b453149a6dedba8f18670ce9a9390c08",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/bbdfd90c2613f8b892846377b56b336d872b7088",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/bb611299dffc63232ed1fc05560afc3f1dd3d005",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/48c787274bd20de0e6c5acafdb6ac46ee465e146",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/560fd148116160ac1f4e7a5a9510e2378382c81e",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/70b6881107a598c20a60e72fe82bc41a4a1f7f4c",
null
]
| {"ft_lang_label":"__label__zh","ft_lang_prob":0.99551433,"math_prob":1.0000093,"size":514,"snap":"2021-21-2021-25","text_gpt3_token_len":575,"char_repetition_ratio":0.0627451,"word_repetition_ratio":0.0,"special_character_ratio":0.1848249,"punctuation_ratio":0.018518519,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99996686,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,8,null,8,null,null,null,10,null,null,null,6,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-11T21:30:50Z\",\"WARC-Record-ID\":\"<urn:uuid:ca4fb5a3-4965-4d2b-aef0-d80786f811ea>\",\"Content-Length\":\"50855\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b25cf802-4f71-48b0-9b1f-f5ba0a5e65f0>\",\"WARC-Concurrent-To\":\"<urn:uuid:93169a6c-535a-4440-a2fc-f8b0cfc38a28>\",\"WARC-IP-Address\":\"208.80.154.224\",\"WARC-Target-URI\":\"https://zh.wikipedia.org/wiki/%E5%89%9B%E5%BA%A6\",\"WARC-Payload-Digest\":\"sha1:RNJQSZ62CQDGMVKMNN6KOWSEYXJ6VHFW\",\"WARC-Block-Digest\":\"sha1:QICUFSHUPV764YAJFSKNIGOMVXPRS4MT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989856.11_warc_CC-MAIN-20210511184216-20210511214216-00123.warc.gz\"}"} |
https://crypto.stackexchange.com/questions/5512/message-authentication-codes-construction | [
"# Message authentication codes construction\n\nI was reading the paper $$ and came across the scheme that I show below. While I understand the scheme well, I don't understand why they prepend a 0 to the block containing $r$ and a 1 to all other blocks. What is achieved by this? They never explain why they do it. Here is the scheme as presented in the paper:",
null,
"$$Bellare, Mihir, Roch Guérin, and Phillip Rogaway. \"XOR MACs: New methods for message authentication using finite pseudorandom functions.\" Advances in Cryptology—CRYPT0’95 (1995): 15-28.\n\nThe short answer is: They prepend bits in that way because the scheme is not secure without them. While they might not explain it in English, their proof makes it clear where they use it.\n\nLet us construct an attack against the scheme that does not prepend a bit to the blocks. The same idea will work against a version of the scheme that prepends the same bit to every block. In this version of the scheme, a tag for a single-block message $M\\in\\{0,1\\}^{32}$ is $(r,z)$, where $$z = F_a(r)\\oplus F_a(\\langle1\\rangle\\| M).$$ Our attack will obtain about $2^{32}$ tags for an arbitrary single-block message $M$, stopping if it ever gets a tag $(r,z)$ such that the upper $32$ bits of $r$ are equal to $\\langle1\\rangle\\in\\{0,1\\}^{32}$ and lower $32$ bits are not equal to $M$. When it finds such a tag $(r,z)$, let $s\\neq M$ denote the lower $32$ bits of that $r$. The adversary outputs a forgery on the message $M' = s$ with tag $(r',z')$ defined by $$r' = \\langle 1\\rangle\\| M \\quad \\text{ and } \\quad z' =z$$ This tag will verify with message $M'$ because $$z' = z= F_a(r)\\oplus F_a(\\langle 1 \\rangle\\|M)= F_a(r')\\oplus F_z(\\langle 1 \\rangle\\| M'),$$ where the second equality uses the fact that our $r$ is equal to $\\langle 1\\rangle\\| s = \\langle 1\\rangle\\| M'$ (it switches the order of the arguments to $\\oplus$). Moreover, our adversary never queries the message $M'$ and one can verify that it will get the needed $r$ with good probability in about $2^{32}$ tries, so it will win the security game.\n\nThe intuition is that prepending the bits prevents this sort of thing from happening. 
Specifically, this adversary found a way to force the verification algorithm to treat a previously used $r$ as a message, which can't happen with their domain-separation strategy.\n\nI'm guessing the idea is domain separation: if $F_a$ is a secure PRF then $G^0_a(x)=F_a(0.x)$ and $G^1_a(x)=F_a(1.x)$ are two \"independent\" PRFs. What they're doing here is basically using two different PRFs for the randomness and the message.\n\nIf that protection wasn't there, you could use an attacker-supplied $r$ to cancel out blocks, and then with a pair $(r,F_a(r))$ you could forge tags. I haven't figured out how you could get that pair, but it may be that the designers thought it would give too much freedom to the attacker.\n\nIt's a way of padding, so that the final chunk of data is always a multiple of the block size.\n\nThe reason why the pattern is 1 bit followed by 0 bits is to be able to find out exactly where the actual message ends and where the padding begins after you decrypt it: the message ends exactly before the first '1' bit when starting from the end of the message and going backwards. This padding method is frequently referred to as \"ISO padding\" (since it's described in ISO/IEC 9797-1 as padding method 2).\n\nThere are other ways to pad things, but this one is relatively easy to implement and understand, and that makes it frequently used. For more on padding, check out http://en.wikipedia.org/wiki/Padding_(cryptography)\n\nUpdate:\n\nSo now to explain: the reason is that they want the \"string\" they pass into Fa to be 64 bits long; r is exactly 63 bits by choice; M[i] is 32 bits, and i is at most 31 bits (also by definition), making their concatenation also 63 bits. So they need to pad with something to ensure that it's always 64 bits long.\n\nAs to why they use a 0 for the first block r and 1 for everything else (instead of, say, 1 and 0), after a very cursory reading of their paper I don't see any particular reason; it seems like an arbitrary choice. 
This doesn't mean that the padding itself is arbitrary. It serves a very important purpose, which they describe in their paper and it needs to be distinguishable from every other possible block.\n\n• I wasn't referring to the padding. In the scheme itself, they have z=F_a(0.r) XOR F_a(1. <1>.M) XOR F_a(1.<2>.M)... – user4399 Nov 29 '12 at 1:00\n• Oops. My bad. I'll update my answer, but it may take me a few minutes - at work, so can't devote all my time to StackOverflow! – Nik Bougalis Nov 29 '12 at 1:04\n• Thanks, but I don't think it's arbitrary. They purposely make the data only 63 bits so that they can prepend the bit. My intuition is that a forgery is possible if r can look like one of the message blocks and they therefore differentiate r from the message blocks with the first bit. The issue is, I don't see how a polynomial time adversary would forge even without this bit. – user4399 Nov 29 '12 at 1:49\n• I didn't mean arbitrary in the sense of \"let's put random values\", I meant that they arbitrarily chose to use 0 for r and 1 for the other blocks, and they could have just as easily chosen 1 and 0 respectively. Clearly they want the block with the seed to never match any of the blocks of data. The reasoning for this is explained on the section titled \"SECURITY\" on the beginning of Page 4 of their paper at cs.ucdavis.edu/research/tech-reports/1995/CSE-95-18.pdf – Nik Bougalis Nov 29 '12 at 1:57\n• Let's assume that you 'pad' both kinds of blocks with a 0; apart from that, everything is the same. Now, assume that there exists a message block 0.x.M[x] for some x which happens to be bit-by-bit equal to 0.r. Now what? That single block renders the construction insecure. By ensuring that there is no such x, you eliminate that risk. – Nik Bougalis Nov 29 '12 at 2:33"
]
| [
null,
"https://i.stack.imgur.com/gD7Pu.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8406006,"math_prob":0.9436718,"size":1769,"snap":"2020-45-2020-50","text_gpt3_token_len":497,"char_repetition_ratio":0.1223796,"word_repetition_ratio":0.0,"special_character_ratio":0.29734313,"punctuation_ratio":0.07821229,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9944865,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-01T09:00:57Z\",\"WARC-Record-ID\":\"<urn:uuid:a2f3b763-7d0b-4dc4-a8aa-4ac08a560b11>\",\"Content-Length\":\"174143\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:03bb3a04-a9da-41c6-9abf-d7653ee24518>\",\"WARC-Concurrent-To\":\"<urn:uuid:27775b1c-de62-4b62-b0ff-af5ef1fce941>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://crypto.stackexchange.com/questions/5512/message-authentication-codes-construction\",\"WARC-Payload-Digest\":\"sha1:CU5NN46PBQ44U4VF5VMNIDVFKPZ4OQR4\",\"WARC-Block-Digest\":\"sha1:DIFUIEJIEM3RDVMLUNU5D27CY54PMSI5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141672314.55_warc_CC-MAIN-20201201074047-20201201104047-00169.warc.gz\"}"} |
https://www.colorhexa.com/0cad0c | [
"# #0cad0c Color Information\n\nIn a RGB color space, hex #0cad0c is composed of 4.7% red, 67.8% green and 4.7% blue. Whereas in a CMYK color space, it is composed of 93.1% cyan, 0% magenta, 93.1% yellow and 32.2% black. It has a hue angle of 120 degrees, a saturation of 87% and a lightness of 36.3%. #0cad0c color hex could be obtained by blending #18ff18 with #005b00. Closest websafe color is: #009900.\n\n• R 5\n• G 68\n• B 5\nRGB color chart\n• C 93\n• M 0\n• Y 93\n• K 32\nCMYK color chart\n\n#0cad0c color description : Dark lime green.\n\n# #0cad0c Color Conversion\n\nThe hexadecimal color #0cad0c has RGB values of R:12, G:173, B:12 and CMYK values of C:0.93, M:0, Y:0.93, K:0.32. Its decimal value is 830732.\n\nHex triplet RGB Decimal 0cad0c `#0cad0c` 12, 173, 12 `rgb(12,173,12)` 4.7, 67.8, 4.7 `rgb(4.7%,67.8%,4.7%)` 93, 0, 93, 32 120°, 87, 36.3 `hsl(120,87%,36.3%)` 120°, 93.1, 67.8 009900 `#009900`\nCIE-LAB 61.646, -63.517, 60.676 15.161, 29.99, 5.337 0.3, 0.594, 29.99 61.646, 87.841, 136.311 61.646, -57.516, 74.354 54.763, -46.42, 32.556 00001100, 10101101, 00001100\n\n# Color Schemes with #0cad0c\n\n• #0cad0c\n``#0cad0c` `rgb(12,173,12)``\n• #ad0cad\n``#ad0cad` `rgb(173,12,173)``\nComplementary Color\n• #5dad0c\n``#5dad0c` `rgb(93,173,12)``\n• #0cad0c\n``#0cad0c` `rgb(12,173,12)``\n• #0cad5c\n``#0cad5c` `rgb(12,173,92)``\nAnalogous Color\n• #ad0c5d\n``#ad0c5d` `rgb(173,12,93)``\n• #0cad0c\n``#0cad0c` `rgb(12,173,12)``\n• #5d0cad\n``#5d0cad` `rgb(93,12,173)``\nSplit Complementary Color\n• #ad0c0c\n``#ad0c0c` `rgb(173,12,12)``\n• #0cad0c\n``#0cad0c` `rgb(12,173,12)``\n• #0c0cad\n``#0c0cad` `rgb(12,12,173)``\nTriadic Color\n• #adad0c\n``#adad0c` `rgb(173,173,12)``\n• #0cad0c\n``#0cad0c` `rgb(12,173,12)``\n• #0c0cad\n``#0c0cad` `rgb(12,12,173)``\n• #ad0cad\n``#ad0cad` `rgb(173,12,173)``\nTetradic Color\n• #076507\n``#076507` `rgb(7,101,7)``\n• #097d09\n``#097d09` `rgb(9,125,9)``\n• #0a950a\n``#0a950a` `rgb(10,149,10)``\n• #0cad0c\n``#0cad0c` `rgb(12,173,12)``\n• 
#0ec50e\n``#0ec50e` `rgb(14,197,14)``\n• #0fdd0f\n``#0fdd0f` `rgb(15,221,15)``\n• #17ef17\n``#17ef17` `rgb(23,239,23)``\nMonochromatic Color\n\n# Alternatives to #0cad0c\n\nBelow, you can see some colors close to #0cad0c. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #34ad0c\n``#34ad0c` `rgb(52,173,12)``\n• #27ad0c\n``#27ad0c` `rgb(39,173,12)``\n• #19ad0c\n``#19ad0c` `rgb(25,173,12)``\n• #0cad0c\n``#0cad0c` `rgb(12,173,12)``\n• #0cad19\n``#0cad19` `rgb(12,173,25)``\n• #0cad27\n``#0cad27` `rgb(12,173,39)``\n• #0cad34\n``#0cad34` `rgb(12,173,52)``\nSimilar Colors\n\n# #0cad0c Preview\n\nText with hexadecimal color #0cad0c\n\nThis text has a font color of #0cad0c.\n\n``<span style=\"color:#0cad0c;\">Text here</span>``\n#0cad0c background color\n\nThis paragraph has a background color of #0cad0c.\n\n``<p style=\"background-color:#0cad0c;\">Content here</p>``\n#0cad0c border color\n\nThis element has a border color of #0cad0c.\n\n``<div style=\"border:1px solid #0cad0c;\">Content here</div>``\nCSS codes\n``.text {color:#0cad0c;}``\n``.background {background-color:#0cad0c;}``\n``.border {border:1px solid #0cad0c;}``\n\n# Shades and Tints of #0cad0c\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #010801 is the darkest color, while #f5fef5 is the lightest one.\n\n• #010801\n``#010801` `rgb(1,8,1)``\n• #021a02\n``#021a02` `rgb(2,26,2)``\n• #032d03\n``#032d03` `rgb(3,45,3)``\n• #043f04\n``#043f04` `rgb(4,63,4)``\n• #065106\n``#065106` `rgb(6,81,6)``\n• #076407\n``#076407` `rgb(7,100,7)``\n• #087608\n``#087608` `rgb(8,118,8)``\n• #098809\n``#098809` `rgb(9,136,9)``\n• #0b9b0b\n``#0b9b0b` `rgb(11,155,11)``\n• #0cad0c\n``#0cad0c` `rgb(12,173,12)``\n• #0dbf0d\n``#0dbf0d` `rgb(13,191,13)``\n• #0fd20f\n``#0fd20f` `rgb(15,210,15)``\n• #10e410\n``#10e410` `rgb(16,228,16)``\nShade Color Variation\n• #18ef18\n``#18ef18` `rgb(24,239,24)``\n• #2bf02b\n``#2bf02b` `rgb(43,240,43)``\n• #3df23d\n``#3df23d` `rgb(61,242,61)``\n• #4ff34f\n``#4ff34f` `rgb(79,243,79)``\n• #62f462\n``#62f462` `rgb(98,244,98)``\n• #74f574\n``#74f574` `rgb(116,245,116)``\n• #87f787\n``#87f787` `rgb(135,247,135)``\n• #99f899\n``#99f899` `rgb(153,248,153)``\n• #abf9ab\n``#abf9ab` `rgb(171,249,171)``\n• #befabe\n``#befabe` `rgb(190,250,190)``\n• #d0fcd0\n``#d0fcd0` `rgb(208,252,208)``\n• #e2fde2\n``#e2fde2` `rgb(226,253,226)``\n• #f5fef5\n``#f5fef5` `rgb(245,254,245)``\nTint Color Variation\n\n# Tones of #0cad0c\n\nA tone is produced by adding gray to any pure hue. 
In this case, #5a5f5a is the less saturated color, while #05b405 is the most saturated one.\n\n• #5a5f5a\n``#5a5f5a` `rgb(90,95,90)``\n• #536653\n``#536653` `rgb(83,102,83)``\n• #4c6d4c\n``#4c6d4c` `rgb(76,109,76)``\n• #457445\n``#457445` `rgb(69,116,69)``\n• #3e7b3e\n``#3e7b3e` `rgb(62,123,62)``\n• #378237\n``#378237` `rgb(55,130,55)``\n• #308930\n``#308930` `rgb(48,137,48)``\n• #289128\n``#289128` `rgb(40,145,40)``\n• #219821\n``#219821` `rgb(33,152,33)``\n• #1a9f1a\n``#1a9f1a` `rgb(26,159,26)``\n• #13a613\n``#13a613` `rgb(19,166,19)``\n• #0cad0c\n``#0cad0c` `rgb(12,173,12)``\n• #05b405\n``#05b405` `rgb(5,180,5)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #0cad0c is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.5786481,"math_prob":0.6316493,"size":3647,"snap":"2019-13-2019-22","text_gpt3_token_len":1638,"char_repetition_ratio":0.12270107,"word_repetition_ratio":0.011090573,"special_character_ratio":0.54537976,"punctuation_ratio":0.23370786,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9810365,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-27T01:12:48Z\",\"WARC-Record-ID\":\"<urn:uuid:b292ce0d-02d0-4e8f-8a4b-fe29673d8d4b>\",\"Content-Length\":\"36356\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:41018177-6fe3-440a-bba0-0ff438b0ad39>\",\"WARC-Concurrent-To\":\"<urn:uuid:aa86e5bd-b5ee-453d-bb9f-8996a9d39b4b>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/0cad0c\",\"WARC-Payload-Digest\":\"sha1:4EIOZJ6YTETEH67RZPSMEYGK7GO7HKZC\",\"WARC-Block-Digest\":\"sha1:CHOSHFWNR7VVFYFUCARFTZGIKNGAKRNO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232260358.69_warc_CC-MAIN-20190527005538-20190527031538-00115.warc.gz\"}"} |
https://www.jiskha.com/questions/1823608/which-choice-correctly-describes-a-characteristic-of-an-electromagnetic-wave-a-an | [
"# Science\n\nWhich choice correctly describes a characteristic of an electromagnetic wave?\n\na)An electromagnetic wave moves at the speed of sound.\nb)An electromagnetic wave in a vacuum moves slightly slower than the speed of light in a vacuum.\nc)An electromagnetic wave in a vacuum moves at the speed of light in a vacuum.\nd)An electromagnetic wave in a vacuum moves faster than the speed of light in a vacuum.\n\nI'm leaning towards C. I honestly don't know, I am sorry.\n\n1. 👍 0\n2. 👎 0\n3. 👁 313\n1. Yeah, it is C\n\n1. 👍 3\n2. 👎 0\n2. Thank you, Random!\n\n1. 👍 0\n2. 👎 0\n\n## Similar Questions\n\n1. ### Science\n\nWhat is the main difference between mechanical and electromagnetic waves? A. Mechanical weaves involved transfers energy; electromagnetic waves do not B. Mechanical waves require a medium to travel in; electromagnetic waves do\n\n2. ### Physics\n\nWhat is the speed of an electromagnetic wave that has a frequency of 7.8 x 10^6 Hz? How would I solve this problem?\n\n3. ### Science Help Plzz\n\n1. The images above show particles in a medium. Which of the following shows the worst conditions for a sound wave travel? A B C D is the answer A? Which of the following correctly describes a radio wave? A long wavelength and\n\n4. ### Physics\n\n1)Which of the following statements about X rays is incorrect? A)x rays darken photographic plates B)x rays can penetrate soft body tissue C)the kinetic energy of high speed electrons is converted to x rays when they collide with\n\n1. ### science\n\nWhich of the following choices correctly describes the behavior of the sound wave as it moves from metal to air? The sound wave does not change speed, but refracts. The wavelength decreases while frequency increases. The sound\n\n2. ### Science\n\nStudy the scenario. Avery shakes one end of a spring side to side. The wave travels from her end of the spring to the opposite end of the spring. 
Which of the following choices best describes the type of wave generated in the\n\n3. ### Science HW - - - check my answers?\n\n1. Compare the two light waves above. Which wave would produce the brightest light and which wave would have the least energy? a. wave A; wave A b. wave A; wave B c. wave B; wave A d. wave B; wave B 2. Which waves have the longest\n\n4. ### Physics\n\nMedical x rays are taken with electromagnetic waves having a wavelength around .1 nm. What are the frequency, period, and wave number of such waves? ok I got the frequency as 3 * 10^18 Hz and the wave number as 6.3 *10^10 rad/m\n\n1. ### Physical Science\n\n1. A wave has a wavelength of 10 mm and a frequency of 5.0 hertz. What is its speed? A. 50 mm/s B. 50 hertz/s C. 2.0 mm/s D. 0.50 mm/s 2. Which type of mechanical wave needs a source of energy to produce it? A. a transverse wave\n\n2. ### College Physics 2056\n\nAn electromagnetic wave strikes a 3.00-cm2 section of wall perpendicularly. The rms value of the wave's magnetic field is determined to be 6.00 10-4 T. How long does it take for the wave to deliver 1000 J of energy to the wall?\n\n3. ### Physical Science 8th Grade\n\n1. What is the resulting direction of a surface wave? Is it perpendicular, parallel, opposite, or circular? 2. What happens when a mechanical wave travels through a medium? What is transferred - mass, particles, energy, or waves?\n\n4. ### science\n\nWhich choice correctly describes the waves in the electromagnetic spectrum? The waves within the electromagnetic spectrum are all longitudinal waves which move at the same speed in a vacuum. The waves within the electromagnetic"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.87043226,"math_prob":0.9142488,"size":2984,"snap":"2020-45-2020-50","text_gpt3_token_len":736,"char_repetition_ratio":0.147651,"word_repetition_ratio":0.01831502,"special_character_ratio":0.2449732,"punctuation_ratio":0.119672135,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9797735,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-30T20:07:17Z\",\"WARC-Record-ID\":\"<urn:uuid:e9f3e9dd-8cde-4821-8b23-c81f5f4aa141>\",\"Content-Length\":\"20932\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f254d541-2b84-4a34-be18-b1818b16b306>\",\"WARC-Concurrent-To\":\"<urn:uuid:4969ec0d-6741-476d-9e22-4fd5deabc723>\",\"WARC-IP-Address\":\"66.228.55.50\",\"WARC-Target-URI\":\"https://www.jiskha.com/questions/1823608/which-choice-correctly-describes-a-characteristic-of-an-electromagnetic-wave-a-an\",\"WARC-Payload-Digest\":\"sha1:BDD7EBEJP7LNWHA4YMYUOTWXXKEQNPOJ\",\"WARC-Block-Digest\":\"sha1:PUNB4GBW36KWTFLN7LVOQHQWZNJ3BQZH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107911229.96_warc_CC-MAIN-20201030182757-20201030212757-00295.warc.gz\"}"} |
https://socratic.org/questions/what-is-the-projection-of-8-3-7-onto-1-4-3 | [
"# What is the projection of <-8,3,7 > onto <1,4,-3 >?\n\nFeb 14, 2016\n\n$\\vec{c} = < \\frac{17}{26} , \\frac{68}{26} , - \\frac{51}{26} >$\n\n#### Explanation:\n\n$\\vec{a} = < - 8 , 3 , 7 >$\n$\\vec{b} = < 1 , 4 , - 3 >$\n$\\vec{c} = \\frac{\\vec{a} \\cdot \\vec{b}}{| b | . | b |} \\cdot \\vec{b}$\n$\\vec{a} \\cdot \\vec{b} = - 8 \\cdot 1 + 3 \\cdot 4 - 3 \\cdot 7 = - 8 + 12 - 21 = - 17$\n$| \\vec{b} | = \\sqrt{{1}^{2} + {4}^{2} {\\left(- 3\\right)}^{2}} = \\sqrt{26}$\n$\\vec{c} = - \\frac{17}{\\sqrt{26} \\cdot \\sqrt{26}} \\cdot < 1 , 4 , - 3 >$\n$\\vec{c} = - \\frac{17}{26} < 1 , 4 , - 3 >$\n$\\vec{c} = < \\frac{17}{26} \\cdot 1 , \\frac{17}{26} \\cdot 4 , - \\frac{17}{26} \\cdot 3 >$\n$\\vec{c} = < \\frac{17}{26} , \\frac{68}{26} , - \\frac{51}{26} >$"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.5606727,"math_prob":1.0000099,"size":217,"snap":"2019-51-2020-05","text_gpt3_token_len":63,"char_repetition_ratio":0.12206573,"word_repetition_ratio":0.0,"special_character_ratio":0.31336406,"punctuation_ratio":0.15555556,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000014,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-20T07:01:48Z\",\"WARC-Record-ID\":\"<urn:uuid:bc58a855-be3b-4dc5-b3c2-c6e6135e971c>\",\"Content-Length\":\"33128\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0c3c8978-e7cc-43fa-be69-ce7b39fa1e0c>\",\"WARC-Concurrent-To\":\"<urn:uuid:eb440a59-b779-4299-a510-1c16471aab28>\",\"WARC-IP-Address\":\"54.221.217.175\",\"WARC-Target-URI\":\"https://socratic.org/questions/what-is-the-projection-of-8-3-7-onto-1-4-3\",\"WARC-Payload-Digest\":\"sha1:O3J4AI7S433KYULUJ6FSUJL44OVZV3UU\",\"WARC-Block-Digest\":\"sha1:Q4R4C7HZGL2SCXMRKPRQ2S65PSGFETA2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250597458.22_warc_CC-MAIN-20200120052454-20200120080454-00481.warc.gz\"}"} |
https://www.nagwa.com/en/videos/127167915046/ | [
"# Video: Finding the Measures of Angles in a Quadrilateral given a Relation between Them by Solving Linear Equations\n\nIn the parallelogram 𝐸𝐹𝐺𝐻, 𝑚∠𝐸𝐻𝐺 = (3𝑥 + 19)°, 𝑚∠𝐻𝐺𝐹 = (2𝑥 + 6)°, and 𝑚∠𝐹𝐸𝐻 = (2𝑦)°. Find the values of 𝑥 and 𝑦.\n\n03:07\n\n### Video Transcript\n\nIn the parallelogram 𝐸𝐹𝐺𝐻, the measure of angle 𝐸𝐻𝐺 is three 𝑥 plus 19 degrees, the measure of angle 𝐻𝐺𝐹 is two 𝑥 plus six degrees, and the measure of angle 𝐹𝐸𝐻 is two 𝑦 degrees. Find the values of 𝑥 and 𝑦.\n\nLet’s begin by adding the expressions that we’re given for these three angles onto the diagram. In order to answer this question, we’re going to need to recall some key facts about properties of angles in parallelograms.\n\nThere are two key properties that we need. Firstly, opposite angles in a parallelogram are congruent. They are the same as each other. Secondly, adjacent angles are supplementary, which means that they sum to 180 degrees.\n\nLet’s apply this second property to the angles 𝐸𝐻𝐺 and 𝐻𝐺𝐹, which are a pair of adjacent angles. As these angles are supplementary, we can form an equation using the unknown variable 𝑥. Three 𝑥 plus 19 plus two 𝑥 plus six is equal to 180. The left-hand side of this equation simplifies to five 𝑥 plus 25.\n\nIn order to find the value of 𝑥, we now need to solve this equation. The first step is to subtract 25 from both sides, giving five 𝑥 is equal to 155. The next step is to divide both sides of the equation by five, giving 𝑥 is equal to 31. So we’ve found the value of 𝑥, and now we need to consider how to find the value of 𝑦.\n\nThere are two possible ways that you could do this. We could use the first property of angles in parallelograms, that opposite angles are congruent, and therefore angle 𝐹𝐸𝐻 is equal to angle 𝐻𝐺𝐹. This would give the equation two 𝑦 is equal to two 𝑥 plus six. And remember, we already know the value of 𝑥. It’s 31.\n\nTherefore, we have that two 𝑦 is equal to two multiplied by 31 plus six. 
This simplifies to give two 𝑦 is equal to 68. And dividing both sides of the equation by two, we see that 𝑦 is equal to 34. The other method that you could choose to use is to stick with the second fact, that adjacent angles are supplementary. This means that angle 𝐹𝐸𝐻 plus angle 𝐸𝐻𝐺 is equal to 180.\n\nYou could therefore form an equation, two 𝑦 plus three 𝑥 plus 19 is equal to 180. By substituting 𝑥 equals 31, you could then solve the equation to find the value of 𝑦. And of course, it would give the same result. You can confirm that yourself if you wish to. The solution to the problem is 𝑥 is equal to 31, 𝑦 is equal to 34."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.94878596,"math_prob":0.9989295,"size":2313,"snap":"2019-51-2020-05","text_gpt3_token_len":677,"char_repetition_ratio":0.16370723,"word_repetition_ratio":0.035955057,"special_character_ratio":0.23778643,"punctuation_ratio":0.105882354,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998647,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-15T13:37:04Z\",\"WARC-Record-ID\":\"<urn:uuid:1597e5f2-d7d0-40a7-bac6-06474f5c3b38>\",\"Content-Length\":\"27588\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e12fa718-33a3-483b-af43-88d6975cee3d>\",\"WARC-Concurrent-To\":\"<urn:uuid:0c9077c7-1d8a-4b02-ac8b-85b12673363a>\",\"WARC-IP-Address\":\"52.2.115.174\",\"WARC-Target-URI\":\"https://www.nagwa.com/en/videos/127167915046/\",\"WARC-Payload-Digest\":\"sha1:C2ACH2DS5A4L2CTQPI7CHCC65HR2MVNK\",\"WARC-Block-Digest\":\"sha1:2ZQET3VETTC3WBTI34AA2KOMWVMDPJVM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541308149.76_warc_CC-MAIN-20191215122056-20191215150056-00392.warc.gz\"}"} |
https://labs.tib.eu/arxiv/?author=Seth%20R.%20Siegel | [
"• ### Limits on the ultra-bright Fast Radio Burst population from the CHIME Pathfinder(1702.08040)\n\nApril 20, 2017 astro-ph.HE\nWe present results from a new incoherent-beam Fast Radio Burst (FRB) search on the Canadian Hydrogen Intensity Mapping Experiment (CHIME) Pathfinder. Its large instantaneous field of view (FoV) and relative thermal insensitivity allow us to probe the ultra-bright tail of the FRB distribution, and to test a recent claim that this distribution's slope, $\\alpha\\equiv-\\frac{\\partial \\log N}{\\partial \\log S}$, is quite small. A 256-input incoherent beamformer was deployed on the CHIME Pathfinder for this purpose. If the FRB distribution were described by a single power-law with $\\alpha=0.7$, we would expect an FRB detection every few days, making this the fastest survey on sky at present. We collected 1268 hours of data, amounting to one of the largest exposures of any FRB survey, with over 2.4\\,$\\times$\\,10$^5$\\,deg$^2$\\,hrs. Having seen no bursts, we have constrained the rate of extremely bright events to $<\\!13$\\,sky$^{-1}$\\,day$^{-1}$ above $\\sim$\\,220$\\sqrt{(\\tau/\\rm ms)}$ Jy\\,ms for $\\tau$ between 1.3 and 100\\,ms, at 400--800\\,MHz. The non-detection also allows us to rule out $\\alpha\\lesssim0.9$ with 95$\\%$ confidence, after marginalizing over uncertainties in the GBT rate at 700--900\\,MHz, though we show that for a cosmological population and a large dynamic range in flux density, $\\alpha$ is brightness-dependent. Since FRBs now extend to large enough distances that non-Euclidean effects are significant, there is still expected to be a dearth of faint events and relative excess of bright events. Nevertheless we have constrained the allowed number of ultra-intense FRBs. 
While this does not have significant implications for deeper, large-FoV surveys like full CHIME and APERTIF, it does have important consequences for other wide-field, small dish experiments.\n• ### Constraints on the Mass, Concentration, and Nonthermal Pressure Support of Six CLASH Clusters from a Joint Analysis of X-ray, SZ, and Lensing Data(1612.05377)\n\nDec. 16, 2016 astro-ph.CO\nWe present a joint analysis of Chandra X-ray observations, Bolocam thermal Sunyaev-Zel'dovich (SZ) effect observations, Hubble Space Telescope (HST) strong lensing data, and HST and Subaru Suprime-Cam weak lensing data. The multiwavelength dataset is used to constrain parametric models for the distribution of dark and baryonic matter in a sample of six massive galaxy clusters selected from the Cluster Lensing And Supernova survey with Hubble (CLASH). For five of the six clusters, the multiwavelength dataset is well described by a relatively simple model that assumes spherical symmetry, hydrostatic equilibrium, and entirely thermal pressure support. The joint analysis yields considerably better constraints on the total mass and concentration of the cluster compared to analysis of any one dataset individually. The subsample of five galaxy clusters is used to place an upper limit on the fraction of pressure support in the intracluster medium (ICM) due to nonthermal processes, such as turbulence and bulk flow of the gas. We constrain the nonthermal pressure fraction at r500c to be less than 0.11 at 95 percent confidence. This is in tension with state-of-the-art hydrodynamical simulations, which predict a nonthermal pressure fraction of approximately 0.25 at r500c for clusters of similar mass and redshift. 
This tension may be explained by the sample selection and/or our assumption of spherical symmetry.\n• ### A Comparison and Joint Analysis of Sunyaev-Zel'dovich Effect Measurements from Planck and Bolocam for a set of 47 Massive Galaxy Clusters(1605.03541)\n\nMay 23, 2016 astro-ph.CO\nWe measure the SZ signal toward a set of 47 clusters with a median mass of $9.5 \\times 10^{14}$ M$_{\\odot}$ and a median redshift of 0.40 using data from Planck and the ground-based Bolocam receiver. When Planck XMM-like masses are used to set the scale radius $\\theta_{\\textrm{s}}$, we find consistency between the integrated SZ signal, $Y_{\\textrm{5R500}}$, derived from Bolocam and Planck based on gNFW model fits using A10 shape parameters, with an average ratio of $1.069 \\pm 0.030$ (allowing for the $\\simeq 5$% Bolocam flux calibration uncertainty). We also perform a joint fit to the Bolocam and Planck data using a modified A10 model with the outer logarithmic slope $\\beta$ allowed to vary, finding $\\beta = 6.13 \\pm 0.16 \\pm 0.76$ (measurement error followed by intrinsic scatter). In addition, we find that the value of $\\beta$ scales with mass and redshift according to $\\beta \\propto M^{0.077 \\pm 0.026} \\times (1+z)^{-0.06 \\pm 0.09}$. This mass scaling is in good agreement with recent simulations. We do not observe the strong trend of $\\beta$ with redshift seen in simulations, though we conclude that this is most likely due to our sample selection. Finally, we use Bolocam measurements of $Y_{500}$ to test the accuracy of the Planck completeness estimate. We find consistency, with the actual number of Planck detections falling approximately $1 \\sigma$ below the expectation from Bolocam. 
We translate this small difference into a constraint on the effective mass bias for the Planck cluster cosmology results, with $(1-b) = 0.93 \\pm 0.06$.\n• ### Peculiar Velocity Constraints from Five-Band SZ Effect Measurements Towards RX J1347.5-1145 with MUSIC and Bolocam from the CSO(1509.02950)\n\nMarch 1, 2016 astro-ph.CO\nWe present Sunyaev-Zel'dovich (SZ) effect measurements from wide-field images towards the galaxy cluster RX J1347.5-1145 obtained from the Caltech Submillimeter Observatory with the Multiwavelength Submillimeter Inductance Camera (MUSIC) at 147, 213, 281, and 337 GHz and with Bolocam at 140 GHz. As part of our analysis, we have used higher frequency data from Herschel-SPIRE and previously published lower frequency radio data to subtract the signal from the brightest dusty star-forming galaxies behind RX J1347.5-1145 and from the AGN in RX J1347.5-1145's BCG. Using these five-band SZ effect images, combined with X-ray spectroscopic measurements of the temperature of the intra-cluster medium (ICM) from Chandra, we constrain the ICM optical depth to be $\\tau_e = 7.33^{+0.96}_{-0.97} \\times 10^{-3}$ and the ICM line of sight peculiar velocity to be $v_{pec} = -1040^{+870}_{-840}$ km s$^{-1}$. The errors for both quantities are limited by measurement noise rather than calibration uncertainties or astrophysical contamination, and significant improvements are possible with deeper observations. Our best-fit velocity is in good agreement with one previously published SZ effect analysis and in mild tension with the other, although some or all of that tension may be because that measurement samples a much smaller cluster volume. Furthermore, our best-fit optical depth implies a gas mass slightly larger than the Chandra-derived value, implying the cluster is elongated along the line of sight.\n• ### A GMBCG Galaxy Cluster Catalog of 55,424 Rich Clusters from SDSS DR7(1010.5503)\n\nDec. 
22, 2010 stat.CO, astro-ph.CO, stat.ML\nWe present a large catalog of optically selected galaxy clusters from the application of a new Gaussian Mixture Brightest Cluster Galaxy (GMBCG) algorithm to SDSS Data Release 7 data. The algorithm detects clusters by identifying the red sequence plus Brightest Cluster Galaxy (BCG) feature, which is unique for galaxy clusters and does not exist among field galaxies. Red sequence clustering in color space is detected using an Error Corrected Gaussian Mixture Model. We run GMBCG on 8240 square degrees of photometric data from SDSS DR7 to assemble the largest ever optical galaxy cluster catalog, consisting of over 55,000 rich clusters across the redshift range from 0.1 < z < 0.55. We present Monte Carlo tests of completeness and purity and perform cross-matching with X-ray clusters and with the maxBCG sample at low redshift. These tests indicate high completeness and purity across the full redshift range for clusters with 15 or more members."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8301539,"math_prob":0.9566316,"size":7245,"snap":"2020-10-2020-16","text_gpt3_token_len":1819,"char_repetition_ratio":0.097086035,"word_repetition_ratio":0.001826484,"special_character_ratio":0.23547274,"punctuation_ratio":0.09455371,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9533749,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-29T03:16:26Z\",\"WARC-Record-ID\":\"<urn:uuid:fb5131fa-2541-4d79-b53a-5d77125a2c93>\",\"Content-Length\":\"54697\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ff822b59-80a4-40c5-9d19-95453f6f9960>\",\"WARC-Concurrent-To\":\"<urn:uuid:c03cd74f-04cc-4164-b26c-f68fb436bb17>\",\"WARC-IP-Address\":\"194.95.114.13\",\"WARC-Target-URI\":\"https://labs.tib.eu/arxiv/?author=Seth%20R.%20Siegel\",\"WARC-Payload-Digest\":\"sha1:Y2UNAC5BDNM7A2ATCL6B3WDKXTYGNML3\",\"WARC-Block-Digest\":\"sha1:4TRNGG6YJH427GGPVXCU5GGZUAUQOTIM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875148375.36_warc_CC-MAIN-20200229022458-20200229052458-00269.warc.gz\"}"} |
https://stat.ethz.ch/pipermail/r-help/2003-May/034307.html | [
"# [R] Ordinal data - Regression Trees & Proportional Odds\n\nJohn Fieberg John.Fieberg at dnr.state.mn.us\nWed May 28 23:26:03 CEST 2003\n\n```I have a data set w/ an ordinal response taking on one of 10 categories.\nI am considering using polr to fit a cumulative logits model. I\npreviously fit the model in SAS (using proc logistic) which provides a\ntest for the proportional odds assumption (p < 0.001 for the test). Are\nthere simple diagnostic plots that can be used to look at the validity\nof this assumption and possibly help w/ modifying the model as\nappropriate? Any references or examples of useful R code for addressing\nthe proportional odds assumption would be much appreciated!\n\nI also used a regression tree approach to explore this data set. In\ndoing so, I treated the response as numeric, using the rpart library. I\nam rather new to regression trees - and wondered about the validity of\nthis approach. I used cross-validation to prune the tree - but plots of\nthe response clearly indicate that the data are non-normal and don't\nhave equal variance (the data are highly skewed towards larger response\ncategories - values of 8-10). I have seen some people suggest that the\ntree approach is essentially non-parametric - but then I have seen other\nreferences suggesting examination of residual plots and potential\ntransformations of the response to ensure homogeneity of variance. For\nthis data set, it will be difficult to find an appropriate\ntransformation, given the large number of responses near 10 (i.e., the\nfact that the data are constrained to be less than or equal to 10\nresults in strange residual plots).\n\nAny help is much appreciated!\n\nJohn Fieberg, Ph.D.\nWildlife Biometrician, Minnesota DNR"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.89044744,"math_prob":0.86119294,"size":1788,"snap":"2021-31-2021-39","text_gpt3_token_len":412,"char_repetition_ratio":0.096412554,"word_repetition_ratio":0.0,"special_character_ratio":0.2181208,"punctuation_ratio":0.09439528,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98689383,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-27T10:13:57Z\",\"WARC-Record-ID\":\"<urn:uuid:27f9a2bd-7007-4090-a112-32b2993b42fe>\",\"Content-Length\":\"4165\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:566dad5a-01b0-4a02-8476-c9f47b33d29f>\",\"WARC-Concurrent-To\":\"<urn:uuid:058e6e6e-2db7-419c-bfb0-d074cb0632a3>\",\"WARC-IP-Address\":\"129.132.119.195\",\"WARC-Target-URI\":\"https://stat.ethz.ch/pipermail/r-help/2003-May/034307.html\",\"WARC-Payload-Digest\":\"sha1:QRUYBHTUD3FL5ZH4RNBXL64Y5DP7LJWN\",\"WARC-Block-Digest\":\"sha1:AUJ56WAXEE6AKO6LHIWVNTI7GDNSJW5L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153223.30_warc_CC-MAIN-20210727072531-20210727102531-00242.warc.gz\"}"} |
https://www.gradesaver.com/textbooks/math/algebra/elementary-algebra/chapter-11-additional-topics-chapter-11-review-problem-set-page-520/71 | [
"## Elementary Algebra\n\n$\\text{Domain:}\\;\\{1,2,3\\}$ $\\text{Range:}\\;\\{4,8,10,15\\}$ $\\text{No, this relation is not a function.}$\nThe domain of a relation is the set of $x$ values for that function. In this relation, the set of $x$ values is $\\{1,2,3\\}$. The range of a relation is the set of $y$ values for that relation. In this relation the set of $y$ values is $\\{4,8,10,15\\}$. A relation is a function if each $x$ value has only one unique $y$ value. Since $2$, an element of the domain, is repeated with multiple $y$ values, this relation is not a function."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.80911005,"math_prob":1.0000036,"size":563,"snap":"2020-10-2020-16","text_gpt3_token_len":180,"char_repetition_ratio":0.19320215,"word_repetition_ratio":0.13043478,"special_character_ratio":0.3428064,"punctuation_ratio":0.18978103,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998308,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-19T08:18:02Z\",\"WARC-Record-ID\":\"<urn:uuid:410d0634-8c97-4c89-87c1-03b2c4dfcb42>\",\"Content-Length\":\"74998\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:51af64ef-3891-49f6-b524-f14b4020f312>\",\"WARC-Concurrent-To\":\"<urn:uuid:f05996ea-2829-4be9-93d0-3aa1559926c4>\",\"WARC-IP-Address\":\"54.82.171.166\",\"WARC-Target-URI\":\"https://www.gradesaver.com/textbooks/math/algebra/elementary-algebra/chapter-11-additional-topics-chapter-11-review-problem-set-page-520/71\",\"WARC-Payload-Digest\":\"sha1:ECPH5HLS4W3CLPJ6JGLDD6L2N7JEVGNP\",\"WARC-Block-Digest\":\"sha1:DR7LPVWHUBR4U3N5USNY3KYSLGGM2SYC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875144058.43_warc_CC-MAIN-20200219061325-20200219091325-00518.warc.gz\"}"} |
https://www.homeworkmarket.com/content/can-you-assist-me-finance | [
"# Can you assist me with finance?",
null,
"tasa_t\nIntroduction You will assume that you still work as a financial analyst for AirJet Best Parts, Inc. The company is considering a capital investment in a new machine and you are in charge of making a recommendation on the purchase based on (1) a given rate of return of 15% (Task 4) and (2) the firm’s cost of capital (Task 5). Task 4. Capital Budgeting for a New Machine A few months have now passed and AirJet Best Parts, Inc. is considering the purchase of a new machine that will increase the production of a special component significantly. The anticipated cash flows for the project are as follows: Year 1 \\$1,100,000 Year 2 \\$1,450,000 Year 3 \\$1,300,000 Year 4 \\$950,000 You have now been tasked with providing a recommendation for the project based on the results of a Net Present Value Analysis. Assuming that the required rate of return is 15% and the initial cost of the machine is \\$3,000,000. • What is the project’s IRR? (10 pts)
• What is the project’s NPV? (15 pts)
• Should the company accept this project and why (or why not)? (5 pts)
• Explain how depreciation will affect the present value of the project. (10 pts)
• Provide examples of at least one of the following as it relates to the project: (5 pts each) • Sunk Cost • Opportunity cost • Erosion
• Explain how you would conduct a scenario and sensitivity analysis of the project. What would be some project-specific risks and market risks related to this project? (20 pts) Task 5: Cost of Capital AirJet Best Parts Inc. is now considering that the appropriate discount rate for the new machine should be the cost of capital and would like to determine it. You will assist in the process of obtaining this rate. • Compute the cost of debt. Assume AirJet Best Parts Inc. is considering issuing new bonds. Select current bonds from one of the main competitors as a benchmark. Key competitors include Raytheon, Boeing, Lockheed Martin, and the Northrop Grumman Corporation.
• What is the YTM of the competitor’s bond? You may use a number of sources, but we recommend Morningstar. Find the YTM of one 15 or 20 year bond with the highest possible creditworthiness. You may assume that new bonds issued by AirJet Best Parts, Inc. are of similar risk and will require the same return. (5 pts)
• What is the after-tax cost of debt if the tax rate is 34%? (5 pts)
• Explain what other methods you could have used to find the cost of debt for AirJet Best Parts Inc.(10 pts)
• Explain why you should use the YTM and not the coupon rate as the required return for debt. (5 pts)
• Compute the cost of common equity using the CAPM model. For beta, use the average beta of three selected competitors. You may obtain the betas from Yahoo Finance. Assume the risk free rate to be 3% and the market risk premium to be 4%.
• What is the cost of common equity? (5 pts)
• Explain the advantages and disadvantages to use the CAPM model as the method to compute the cost of common equity. Compare and contrast this method with the dividend growth model approach. (10 pts)
• Compute the cost of preferred equity assuming the dividend paid for preferred stock is \\$2.93 and the current value of the stock is \\$50 per share.
• What is the cost of preferred equity? (5 pts)
• Is there any other method to compute this cost? Explain. (5 pts)
• Assuming that the market value weights of these capital sources are 30% bonds, 60% common equity and 10% preferred equity, what is the weighted cost of capital of the firm? (10 pts)
• Should the firm use this WACC for all projects? Explain and provide examples as appropriate. (10 pts)
• Recompute the net present value of the project based on the cost of capital you found. Do you still believe that your earlier recommendation for accepting or rejecting the project was adequate? Why or why not? (5 pts)\n• 9 years ago\n• 20\n\nPurchase the answer to view it\n\nNOT RATED\n•",
null,
"graph.docx\n\nPurchase the answer to view it\n\nNOT RATED\n•",
null,
"ap_done.xlsx\n\nPurchase the answer to view it\n\nNOT RATED\n\nPurchase the answer to view it\n\nNOT RATED\n•",
null,
"Purchase the answer to view it\n\nNOT RATED\n•",
null,
"Purchase the answer to view it\n\nNOT RATED\n•",
null,
"•",
null,
"•",
null,
""
]
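Task 4's NPV/IRR questions, and the mechanical parts of Task 5, can be sketched numerically. Plain Python, using the figures given in the assignment; the 5% benchmark YTM and the beta of 1.1 below are placeholders for the values the task asks you to look up, while the 15% NPV/IRR part uses only stated inputs:

```python
cost = 3_000_000
flows = [1_100_000, 1_450_000, 1_300_000, 950_000]   # years 1..4

def npv(rate):
    return -cost + sum(cf / (1 + rate) ** t for t, cf in enumerate(flows, 1))

def irr(lo=0.0, hi=1.0, tol=1e-7):
    # Bisection; for these flows npv(lo) > 0 > npv(hi).
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

project_npv = npv(0.15)   # ~ +450,867, so accept at a 15% required return
project_irr = irr()       # ~ 0.224 (about 22.4%, above 15%)

# Task 5 pieces. Cost of preferred equity is dividend / price (given values):
k_p = 2.93 / 50                  # 0.0586
k_d = 0.05 * (1 - 0.34)          # after-tax cost of debt; 5% YTM is assumed
k_e = 0.03 + 1.1 * 0.04          # CAPM; beta of 1.1 is assumed
wacc = 0.30 * k_d + 0.60 * k_e + 0.10 * k_p

print(project_npv, project_irr, wacc)
```

With the assumed inputs the WACC comes out near 6%, well below 15%, which is why the task then asks you to re-run the NPV at the cost of capital.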
| [
null,
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEUAAABFCAMAAAArU9sbAAAABGdBTUEAALGPC/xhBQAAAAFzUkdCAK7OHOkAAAF6UExURQAAAP39/e3t7eXl5eLi4vb29v7+/uLi4vv7+/7+/vf39+bm5uPj4+Li4uPj4/n5+f39/eHh4fHx8ff39/Ly8u3t7efn5+Hh4fn5+fz8/Pz8/OPj4+Li4uPj4/j4+PDw8OXl5e/v7/39/f7+/u7u7uTk5Pj4+P39/ePj4/j4+PDw8Pz8/PDw8P7+/uLi4v7+/v7+/vb29uXl5fv7++7u7vv7++Xl5fn5+fj4+OLi4vLy8ubm5vr6+ujo6Pn5+fj4+Onp6ejo6O/v7+jo6Pn5+fj4+Pr6+v////v7+/f39/f39/r6+vHx8ezs7O3t7fz8/Pf39+bm5u/v7/z8/O/v7/39/eTk5Obm5vf39/f39/j4+Ofn5/f39+fn5/v7++Hh4f///+Li4v39/f7+/uTk5Pf39/b29vT09OXl5ezs7Ojo6PPz8+bm5vv7++/v7/z8/PX19e7u7ufn5+Pj4/Ly8u3t7fj4+Onp6fHx8fr6+urq6vn5+evr6/Dw8NFs0AcAAABfdFJOUwARmdv2Swz1JApI0er57TcU+3VFcpjN/TMZFu768TyD2YUQ9pDigPHsiILmfgnzCPJJ4duR2tpJmvdx2NHAv3G+w4TESJkt/S2Gc854o5oaQtOHF4YS5dJD0j3PR87Q+BYVggAAAxlJREFUWMOtmPdXIjEQx6NYAAVBwIqK5SxnvbN7nr17vfcElqVXC5a7+99vBXwuO5Ns9r37/phMPi9vMpmZhBCumnz+wNaSO0iD7qWtgN/XRCzrYGDXS2vl3d05sIKoW/tJcX1fq5Nk9Hzoo3z1veuRYMxONVOxmqdmzSBOGzWXzSlkOOqpnF45+BB7C5VVi50H6R+n8hrvxyGNDdSKGhoxyBcXtDy/LGQVpmRjxSicdL2BkDZ4wNG0wu6lxBPwyNuAY1uBUTLC9FJywKLV4GIHPJ00MyoWBidVe+AwTmIMqgQw9XrIE4mdlHcD7HQH1fQU+IThAr559JB1HoPTiXAoLGM0HbiHdINwi/MgTAXB112ljBhnEoyvlNF4pALxzBgnLgQU4OAZT5nSbhwPRwQUBu5Ce5nyzDh8JYKwS6N56A7SBWLgWkgpAPsujbIDRktCShbYD2qUDjAqdAtj4Bp0aLWnEzhXDIGB1zlEflAr0YJGDHUSPxjLmFCuwIoXJPAf9vKSPIfJ1oRyBlZsw5ijVBFTYCK3kU+Q8lsIicAFvQTpD+JCigoXeAlSr5JCyl9kBQnCsahF52p76UXQqgByQzG/YP3KHwGliPUzSLxouuFC8mHEfBuJXU238jmqErt+tK2Iyx9z+R450fHzLB5xUdTaieSXypXEUpVyhtpq+QXJdZX8kIc7SeGmWq4jg/hUuAgo12HcdBCtAXeMXB5zSw7l3NUAEoLjV1leuNxC4xBeG4UV9oLitdFj6AvDqvA2Gvuy5kqdJgu1kJJJxlRrMQto/1JgZorX9C+vsV4qx8yVxHopfV+XUSQououg6+t0PabKZJTGesyHfjfF5JTA+l3i+Czr2uplqJi/dWDvgHNFkpJH3wHVN0mSyeoMfZMQ4nPx+nZOEnf50LfaqkltrY281UbOu9Edkaao7m/cN+y6NOWrXfCeXpGErDiE7/LleQnG/LLZF8H06LAJY3h0WubPY2JOwJibeC/7/7L5i8NY3/xo5SPn6Pj00EA4PD0+sv6pNLR3sr+xODnGxiYXN/ZP9gS/QP8AnHtlFsX1MBUAAAAASUVORK5CYII=",
null,
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIGlkPSJHcm91cF8xMzc0IiB3aWR0aD0iMzAuMTMyIiBoZWlnaHQ9IjM2LjU0OSIgZGF0YS1uYW1lPSJHcm91cCAxMzc0IiB2aWV3Qm94PSIwIDAgMzAuMTMyIDM2LjU0OSI+CiAgICA8ZGVmcz4KICAgICAgICA8c3R5bGU+CiAgICAgICAgICAgIC5jbHMtMXtmaWxsOiNkZWU3ZmZ9LmNscy0ye2ZpbGw6I2ZmNzczNH0KICAgICAgICA8L3N0eWxlPgogICAgPC9kZWZzPgogICAgPHBhdGggaWQ9IlBhdGhfNTgzIiBkPSJNMzAuMyAxMC40SDEzLjM3N2EyLjg3NyAyLjg3NyAwIDAgMC0yLjg3NyAyLjg3N3YyOS4zYTIuODc3IDIuODc3IDAgMCAwIDIuODc3IDIuODc3aDIyLjg3OWEyLjg3NyAyLjg3NyAwIDAgMCAyLjg3Ny0yLjg3N1YxOS4yM3oiIGNsYXNzPSJjbHMtMSIgZGF0YS1uYW1lPSJQYXRoIDU4MyIgdHJhbnNmb3JtPSJ0cmFuc2xhdGUoLTkuNzUgLTkuNjU4KSIvPgogICAgPHBhdGggaWQ9IlBhdGhfNTg0IiBkPSJNMjkwLjc3NyAxOS4zMzhoNS45NjFMMjg3LjkgMTAuNXY1Ljk2MWEyLjg3NyAyLjg3NyAwIDAgMCAyLjg3NyAyLjg3N3oiIGNsYXNzPSJjbHMtMiIgZGF0YS1uYW1lPSJQYXRoIDU4NCIgdHJhbnNmb3JtPSJ0cmFuc2xhdGUoLTI2Ny4zNDggLTkuNzUpIi8+CiAgICA8cGF0aCBpZD0iUGF0aF81ODUiIGQ9Ik0yOS45MTggOS4wNTlMMjEuMDczLjIyMUEuNzQuNzQgMCAwIDAgMjAuNTQ1IDBIMy42MjZBMy42MzIgMy42MzIgMCAwIDAgMCAzLjYyNnYyOS4zYTMuNjMyIDMuNjMyIDAgMCAwIDMuNjI2IDMuNjI2aDIyLjg3OWEzLjYzMiAzLjYzMiAwIDAgMCAzLjYyNi0zLjYyNlY5LjU4YS43NDcuNzQ3IDAgMCAwLS4yMTMtLjUyMXptLTguNjIzLTYuNTFsNi4yODkgNi4yODloLTQuMTU1YTIuMDk0IDIuMDk0IDAgMCAxLTEuNTA2LS42MjggMi4xMjMgMi4xMjMgMCAwIDEtLjYyOS0xLjUxVjIuNTQ4em01LjIxMSAzMi41MDlIMy42MjZhMi4xMzYgMi4xMzYgMCAwIDEtMi4xMzQtMi4xMzRWMy42MjZhMi4xMzYgMi4xMzYgMCAwIDEgMi4xMzQtMi4xMzRIMTkuOFY2LjdhMy42MzIgMy42MzIgMCAwIDAgMy42MjYgMy42MjZoNS4yMTR2MjIuNTk3YTIuMTM2IDIuMTM2IDAgMCAxLTIuMTM1IDIuMTM0eiIgZGF0YS1uYW1lPSJQYXRoIDU4NSIvPgogICAgPHBhdGggaWQ9IlBhdGhfNTg2IiBkPSJNMjE2LjA3OCAzNTEuNWgtLjMzNmEuNzQyLjc0MiAwIDAgMCAwIDEuNDg1aC4zMzZhLjc0Mi43NDIgMCAwIDAgMC0xLjQ4NXoiIGRhdGEtbmFtZT0iUGF0aCA1ODYiIHRyYW5zZm9ybT0idHJhbnNsYXRlKC0xOTkuNjUyIC0zMjYuNDA4KSIvPgogICAgPHBhdGggaWQ9IlBhdGhfNTg3IiBkPSJNODcuNzg4IDM1MS41aC03LjI0NmEuNzQyLjc0MiAwIDEgMCAwIDEuNDg1aDcuMjQ2YS43NDIuNzQyIDAgMCAwIDAtMS40ODV6IiBkYXRhLW5hbWU9IlBhdGggNTg3IiB0cmFuc2Zvcm09InRyYW5zbGF0ZSgtNzQuMTAzIC0zMjYuNDA4KSIvPgogICAgP
HBhdGggaWQ9IlBhdGhfNTg4IiBkPSJNOTcuOCAzMDMuN0g4MC41NDJhLjc0Mi43NDIgMCAwIDAgMCAxLjQ4NUg5Ny44YS43NDIuNzQyIDAgMSAwIDAtMS40ODV6IiBkYXRhLW5hbWU9IlBhdGggNTg4IiB0cmFuc2Zvcm09InRyYW5zbGF0ZSgtNzQuMTAzIC0yODIuMDIpIi8+CiAgICA8cGF0aCBpZD0iUGF0aF81ODkiIGQ9Ik05Ny44IDI1Nkg4MC41NDJhLjc0Mi43NDIgMCAxIDAgMCAxLjQ4NUg5Ny44YS43NDIuNzQyIDAgMSAwIDAtMS40ODV6IiBkYXRhLW5hbWU9IlBhdGggNTg5IiB0cmFuc2Zvcm09InRyYW5zbGF0ZSgtNzQuMTAzIC0yMzcuNzI1KSIvPgogICAgPHBhdGggaWQ9IlBhdGhfNTkwIiBkPSJNOTcuOCAyMDguM0g4MC41NDJhLjc0Mi43NDIgMCAxIDAgMCAxLjQ4NUg5Ny44YS43NDIuNzQyIDAgMSAwIDAtMS40ODV6IiBkYXRhLW5hbWU9IlBhdGggNTkwIiB0cmFuc2Zvcm09InRyYW5zbGF0ZSgtNzQuMTAzIC0xOTMuNDMpIi8+Cjwvc3ZnPgo=",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9129767,"math_prob":0.7826683,"size":3393,"snap":"2021-21-2021-25","text_gpt3_token_len":821,"char_repetition_ratio":0.13189732,"word_repetition_ratio":0.5322314,"special_character_ratio":0.23666371,"punctuation_ratio":0.084745765,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95789886,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-13T00:20:45Z\",\"WARC-Record-ID\":\"<urn:uuid:cef9404c-213e-43bc-b448-a06b5dc6dadc>\",\"Content-Length\":\"387129\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:264fad2f-0cdb-4af5-9099-17094ca684ce>\",\"WARC-Concurrent-To\":\"<urn:uuid:d3bd2444-7270-476e-9f57-4ec03a752dd1>\",\"WARC-IP-Address\":\"104.20.44.26\",\"WARC-Target-URI\":\"https://www.homeworkmarket.com/content/can-you-assist-me-finance\",\"WARC-Payload-Digest\":\"sha1:N4QRGTMBXKISZA4ILWAWDCDXCBQ2N7H6\",\"WARC-Block-Digest\":\"sha1:H3AIJ6QFABS4FYII77VEHXEU75MFAALZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991413.30_warc_CC-MAIN-20210512224016-20210513014016-00127.warc.gz\"}"} |
https://math.stackexchange.com/questions/2385346/cut-edge-proof-for-graph-theory | [
"# Cut edge proof for graph theory\n\nIn an undirected connected simple graph G = (V, E), an edge e ∈ E is called a cut edge if G − e has at least two nonempty connected components. Prove: An edge e is a cut edge in G if and only if e does not belong to any simple circuit in G. This needs to be proved in each direction.\n\nTypically I would write where I am for a problem like this, but I have no idea how to approach this proof, especially why it needs to be proved in each direction (what is the significance of this?). Any and all help would be much appreciated.\n\n• The question asks you to prove an equivalence, i.e. $\\Leftrightarrow$. If you only prove one direction, e.g. $\\Leftarrow$, then you have said nothing about the validity of $\\Rightarrow$, hence you have to prove both to obtain the equivalence. – M. Winter Aug 7 '17 at 9:24\n\n## 1 Answer\n\nGiven a simple undirected connected graph $G = (V,E)$, an edge $e \\in E$ is called a cut edge if $G^\\prime = (V,E-\\{e\\})$ has at least two non-empty connected components.\n\nTheorem. The edge $e \\in E$ is a cut edge $\\Leftrightarrow$ $e$ does not belong to a circuit in $G$.\n\n$\\Rightarrow$ Suppose $e \\in E$ does belong to a circuit in $G$. Then this circuit can be described as $e_1,\\ldots,e_k$, with $e = e_i$ for some $1 \\leqslant i \\leqslant k$. Removing $e_i$ from this circuit still allows the traversal $e_{i+1},\\ldots,e_k,e_1,\\ldots,e_{i-1}$, so $G - e$ is still connected, hence $e$ cannot be a cut edge. Contradiction.\n\n$\\Leftarrow$ Suppose that after removing $e = \\{v_1,v_2\\}$ the graph $G$ is still connected. Then consider a path from $v_1$ to $v_2$ in $G - e$, which exists by connectedness. Adding $e$ to this path creates a circuit (so there is a circuit in the original graph), hence a contradiction."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.9643439,"math_prob":0.99914104,"size":526,"snap":"2021-21-2021-25","text_gpt3_token_len":125,"char_repetition_ratio":0.09770115,"word_repetition_ratio":0.03809524,"special_character_ratio":0.23954372,"punctuation_ratio":0.094017096,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999602,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-12T14:10:20Z\",\"WARC-Record-ID\":\"<urn:uuid:5ece3321-dfff-49ef-959f-4ad9aba01463>\",\"Content-Length\":\"159287\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:76616fe4-f532-4d18-88de-7a78f7ddf566>\",\"WARC-Concurrent-To\":\"<urn:uuid:41905ea9-e363-4114-bfaa-791e4083df57>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2385346/cut-edge-proof-for-graph-theory\",\"WARC-Payload-Digest\":\"sha1:WGOAOLPAQDDFDJ5BSSALSPWATVNLMR7B\",\"WARC-Block-Digest\":\"sha1:3XBURL7EESVIJ3DTLCNO35DUTPKWSDDL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487584018.1_warc_CC-MAIN-20210612132637-20210612162637-00373.warc.gz\"}"} |
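The cut-edge characterization in the record above is easy to check computationally: by definition, an edge is a cut edge exactly when deleting it disconnects the graph, and by the theorem these are exactly the edges lying on no circuit. A minimal brute-force sketch (the adjacency-list layout and function names are my own, not from the answer):

```python
from collections import deque

def is_connected(adj, removed=None):
    # BFS over an undirected graph given as {vertex: [neighbors]},
    # optionally skipping one removed edge (in either direction).
    nodes = list(adj)
    if not nodes:
        return True
    seen = {nodes[0]}
    queue = deque([nodes[0]])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if removed in ((u, v), (v, u)):
                continue
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(nodes)

def cut_edges(adj):
    # An edge is a cut edge iff deleting it disconnects the graph;
    # adjacency lists are assumed symmetric (u in adj[v] iff v in adj[u]).
    edges = {tuple(sorted((u, v))) for u in adj for v in adj[u]}
    return {e for e in edges if not is_connected(adj, removed=e)}
```

For a triangle 0–1–2 with a pendant vertex 3 attached to 2, the only cut edge is {2, 3}: each triangle edge lies on a circuit, so removing it leaves the graph connected.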
https://www.semanticscholar.org/paper/Random-Attractor-Associated-with-the-Equation-Zhu-Zhu/cd1d25bcba59bb7c7834fe3f901a507bb061664a | [
"# Random Attractor Associated with the Quasi-Geostrophic Equation\n\n@article{Zhu2013RandomAA,\ntitle={Random Attractor Associated with the Quasi-Geostrophic Equation},\nauthor={Rongchan Zhu and Xiangchan Zhu},\njournal={Journal of Dynamics and Differential Equations},\nyear={2013},\nvolume={29},\npages={289-322}\n}\n• Published 24 March 2013\n• Mathematics\n• Journal of Dynamics and Differential Equations\nWe study the long time behavior of the solutions to the 2D stochastic quasi-geostrophic equation on $${\\mathbb {T}}^2$$ driven by additive noise and real linear multiplicative noise in the subcritical case (i.e. $$\\alpha >\\frac{1}{2}$$) by proving the existence of a random attractor. The key point for the proof is the exponential decay of the $$L^p$$-norm and a boot-strapping argument. The upper semicontinuity of random attractors is also established. Moreover, if the viscosity constant…\n14 Citations\n• Mathematics\n• 2011\nIn this paper, we study the 2D stochastic quasi-geostrophic equation on $\\mathbb{T}^2$ for general parameter $\\alpha\\in(0,1)$ and multiplicative noise. We prove the existence of weak solutions and\n• Mathematics\n• 2020\nThe asymptotic behavior of stochastic modified quasi-geostrophic equations with damping driven by colored noise is analyzed. In fact, the existence of random attractors is established in [Formula:\n• Mathematics\nDissertationes Mathematicae\n• 2020\nWe study the stochastic dissipative quasi-geostrophic equation with space-time white noise on the two-dimensional torus. This equation is highly singular and basically ill-posed in its original form.\n• Mathematics\nDiscrete & Continuous Dynamical Systems - B\n• 2021\nIn this paper, we investigate a stochastic fractionally dissipative quasi-geostrophic equation driven by a multiplicative white noise, whose external forces contain hereditary characteristics. 
The\n• Li Yang\n• Mathematics\nAIMS Mathematics\n• 2021\nIn this paper, we consider the asymptotic behavior of solutions to stochastic strongly damped wave equations with variable delays on unbounded domains, which is driven by both additive noise and\n• Mathematics\n• 2021\nWe consider the Surface Quasi-Geostrophic equation (SQG) driven by space-time white noise and show the existence of a local in time solution by applying the theory of regularity structures. A main\n• Mathematics\n• 2014\nIn this paper we prove the existence of martingale solutions for the 2D stochastic fractional vorticity Navier–Stokes equation driven by space-time white noise for α ∈ (½, 1] and the 2D stochastic\n• Mathematics\n• 2011\nIn this paper, we study the 2D stochastic quasi-geostrophic equation on $\\mathbb{T}^2$ for general parameter $\\alpha\\in(0,1)$ and multiplicative noise. We prove the existence of weak solutions and\nThe long time behavior of the solutions to the two dimensional dissipative quasi-geostrophic equations is studied. We obtain a new positivity lemma which improves a previous version of A. Cordoba and\n• Mathematics\n• 2010\nWe prove new $L^2$-estimates and regularity results for generalized porous media equations “shifted by” a function-valued Wiener path. To include Wiener paths with merely first spatial (weak) derivatives\nAbstract:The stochastically forced, two-dimensional, incompressible Navier–Stokes equations are shown to possess a unique invariant measure if the viscosity is taken large enough. This result\n• Mathematics\n• 1996\nThe random attractor to the stochastic 3D Navier-Stokes equation will be studied. In the first part we formulate an existence theorem for attractors of non-autonomous dynamical systems on a bundle of\n• Mathematics\n• 1998\nThe relationship between random attractors and global attractors for dynamical systems is studied. 
If a partial differential equation is perturbed by an ε-small random term and certain hypotheses are\n• Mathematics\n• 2006\nWe introduce a notion of an asymptotically compact (AC) random dynamical system (RDS). We prove that for an AC RDS the Ω-limit set Ω_B(ω) of any bounded set B is nonempty, compact, strictly\n• Mathematics\n• 2014\nIn this paper we prove the existence of martingale solutions for the 2D stochastic fractional vorticity Navier–Stokes equation driven by space-time white noise for α ∈ (½, 1] and the 2D stochastic"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.82807654,"math_prob":0.97616005,"size":6181,"snap":"2023-14-2023-23","text_gpt3_token_len":1425,"char_repetition_ratio":0.188441,"word_repetition_ratio":0.25670946,"special_character_ratio":0.20061478,"punctuation_ratio":0.04904051,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99741715,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-29T08:42:38Z\",\"WARC-Record-ID\":\"<urn:uuid:f7934b95-6d46-4215-9198-5ef54521245f>\",\"Content-Length\":\"400242\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6f0a8428-7508-475f-8edb-cff46b474315>\",\"WARC-Concurrent-To\":\"<urn:uuid:18509bab-4a48-4851-9884-1604dbe30ee8>\",\"WARC-IP-Address\":\"13.32.208.17\",\"WARC-Target-URI\":\"https://www.semanticscholar.org/paper/Random-Attractor-Associated-with-the-Equation-Zhu-Zhu/cd1d25bcba59bb7c7834fe3f901a507bb061664a\",\"WARC-Payload-Digest\":\"sha1:PZKA54NYJ5Y6CUC4W4ZSWYSAXNFWTGNZ\",\"WARC-Block-Digest\":\"sha1:GZVD2QHXPEAKE3HC33J6BHAN77AYUAR7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224644817.32_warc_CC-MAIN-20230529074001-20230529104001-00385.warc.gz\"}"} |
https://www.projecteuclid.org/euclid.cma/1413810435 | [
"Communications in Mathematical Analysis\n\nOn a Theorem by Bojanov and Naidenov Applied to Families of Gegenbauer-Sobolev Polynomials\n\nAbstract\n\nLet $\\{Q_{n,\\lambda}^{(\\alpha)}\\}_{n\\ge 0}$ be the sequence of monic orthogonal polynomials with respect to the Gegenbauer-Sobolev inner product $$\\langle f,g\\rangle_S :=\\int_{-1}^1 f(x)g(x)(1-x^2)^{\\alpha-\\frac{1}{2}} dx+\\lambda \\int_{-1}^1 f'(x)g'(x)(1-x^2)^{\\alpha-\\frac{1}{2}}dx,$$ where $\\alpha \\gt -\\frac{1}{2}$ and $\\lambda \\ge 0$. In this paper we use a recent result due to B.D. Bojanov and N. Naidenov, in order to study the maximization of a local extremum of the $k$th derivative $\\frac{d^k}{dx^k}$ in $[-M_{n,\\lambda},M_{n,\\lambda}]$, where $M_{n,\\lambda}$ is a suitable value such that all zeros of the polynomial $Q_{n,\\lambda}^{(\\alpha)}$ are contained in $[-M_{n,\\lambda},M_{n,\\lambda}]$ and the function $\\left|Q_{n,\\lambda}^{(\\alpha)}\\right|$ attains its maximal value at the end-points of such interval. Also, some illustrative numerical examples are presented.\n\nArticle information\n\nSource\nCommun. Math. Anal., Volume 16, Number 2 (2014), 9-18.\n\nDates\nFirst available in Project Euclid: 20 October 2014\n\nPermanent link to this document\nhttps://projecteuclid.org/euclid.cma/1413810435\n\nMathematical Reviews number (MathSciNet)\nMR3270574\n\nZentralblatt MATH identifier\n1321.33014\n\nCitation\n\nPaschoa, V. G.; Pérez, D.; Qintana, Y. On a Theorem by Bojanov and Naidenov Applied to Families of Gegenbauer-Sobolev Polynomials. Commun. Math. Anal. 16 (2014), no. 2, 9--18. https://projecteuclid.org/euclid.cma/1413810435"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.5854255,"math_prob":0.9908705,"size":2088,"snap":"2019-43-2019-47","text_gpt3_token_len":670,"char_repetition_ratio":0.12667947,"word_repetition_ratio":0.08032128,"special_character_ratio":0.3012452,"punctuation_ratio":0.18536586,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997193,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-15T20:22:27Z\",\"WARC-Record-ID\":\"<urn:uuid:15ee5a13-1548-423b-b18a-742254a28f88>\",\"Content-Length\":\"32347\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b6f58b34-cd20-47d7-a6f7-eaa0b21336fc>\",\"WARC-Concurrent-To\":\"<urn:uuid:88d03fda-7f17-4fc0-94e6-56a548c14a2c>\",\"WARC-IP-Address\":\"132.236.27.47\",\"WARC-Target-URI\":\"https://www.projecteuclid.org/euclid.cma/1413810435\",\"WARC-Payload-Digest\":\"sha1:UURTBJYKEGLH73432252RGIABLNH3CWH\",\"WARC-Block-Digest\":\"sha1:XMXRQCHJKD5L3B6VZQYPQMDNBAL62FLX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986660231.30_warc_CC-MAIN-20191015182235-20191015205735-00515.warc.gz\"}"} |
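The Gegenbauer–Sobolev inner product in the abstract above can be evaluated numerically with Gauss–Legendre quadrature. The sketch below is illustrative only — the helper `sobolev_inner` and its defaults (α = 1/2, which makes the weight (1 − x²)^(α − 1/2) identically 1, and λ = 1) are my assumptions, not from the paper:

```python
import numpy as np

def sobolev_inner(f, g, df, dg, alpha=0.5, lam=1.0, n=80):
    # Gauss-Legendre nodes/weights on [-1, 1]; df, dg are the derivatives
    # of f, g. The Gegenbauer weight is (1 - x^2)^(alpha - 1/2).
    x, w = np.polynomial.legendre.leggauss(n)
    wt = (1.0 - x**2) ** (alpha - 0.5)
    return float(np.sum(w * wt * (f(x) * g(x) + lam * df(x) * dg(x))))
```

For instance, ⟨1, x⟩_S vanishes by symmetry of the weight, while for α = 1/2, λ = 1 one gets ⟨x, x⟩_S = 2/3 + 2 = 8/3 (the integrand is polynomial, so the quadrature is exact).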
https://metanumbers.com/58900 | [
"## 58900\n\n58,900 (fifty-eight thousand nine hundred) is an even five-digit composite number following 58899 and preceding 58901. In scientific notation, it is written as 5.89 × 10^4. The sum of its digits is 22. It has a total of 6 prime factors and 36 positive divisors. There are 21,600 positive integers (up to 58900) that are relatively prime to 58900.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Even\n• Number length 5\n• Sum of Digits 22\n• Digital Root 4\n\n## Name\n\nShort name 58 thousand 900 — fifty-eight thousand nine hundred\n\n## Notation\n\nScientific notation 5.89 × 10^4 58.9 × 10^3\n\n## Prime Factorization of 58900\n\nPrime Factorization 2^2 × 5^2 × 19 × 31\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 4 Total number of distinct prime factors Ω(n) 6 Total number of prime factors rad(n) 5890 Product of the distinct prime numbers λ(n) 1 Returns the parity of Ω(n), such that λ(n) = (-1)^Ω(n) μ(n) 0 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power p^k of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 58,900 is 2^2 × 5^2 × 19 × 31. 
Since it has a total of 6 prime factors, 58,900 is a composite number.\n\n## Divisors of 58900\n\n1, 2, 4, 5, 10, 19, 20, 25, 31, 38, 50, 62, 76, 95, 100, 124, 155, 190, 310, 380, 475, 589, 620, 775, 950, 1178, 1550, 1900, 2356, 2945, 3100, 5890, 11780, 14725, 29450, 58900\n\n36 divisors\n\nEven divisors 24, odd divisors 12 (of which 6 are ≡ 1 mod 4 and 6 are ≡ 3 mod 4)\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 36 Total number of the positive divisors of n σ(n) 138880 Sum of all the positive divisors of n s(n) 79980 Sum of the proper positive divisors of n A(n) 3857.78 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 242.693 Returns the nth root of the product of n divisors H(n) 15.2679 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocals of the divisors\n\nThe number 58,900 can be divided by 36 positive divisors (out of which 24 are even, and 12 are odd). The sum of these divisors (counting 58,900) is 138,880, the average is 3,857.78.\n\n## Other Arithmetic Functions (n = 58900)\n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 21600 Total number of positive integers not greater than n that are coprime to n λ(n) 180 Smallest positive number such that a^λ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 5944 Total number of primes less than or equal to n r_2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 21,600 positive integers (less than 58,900) that are coprime with 58,900. 
And there are approximately 5,944 prime numbers less than or equal to 58,900.\n\n## Divisibility of 58900\n\nm: 2 3 4 5 6 7 8 9\nn mod m: 0 1 0 0 4 2 4 4\n\nThe number 58,900 is divisible by 2, 4 and 5.\n\n• Abundant\n• Polite\n• Practical\n\n## Base conversion (58900)\n\nBase System Value\n2 Binary 1110011000010100\n3 Ternary 2222210111\n4 Quaternary 32120110\n5 Quinary 3341100\n6 Senary 1132404\n8 Octal 163024\n10 Decimal 58900\n12 Duodecimal 2a104\n20 Vigesimal 7750\n36 Base36 19g4\n\n## Basic calculations (n = 58900)\n\n### Multiplication\n\nn×2 = 117,800, n×3 = 176,700, n×4 = 235,600, n×5 = 294,500\n\n### Division\n\nn/2 = 29,450, n/3 ≈ 19,633.3, n/4 = 14,725, n/5 = 11,780\n\n### Exponentiation\n\nn^2 = 3,469,210,000, n^3 = 204,336,469,000,000, n^4 = 12,035,418,024,100,000,000, n^5 = 708,886,121,619,490,000,000,000\n\n### Nth Root\n\n√n ≈ 242.693, ∛n ≈ 38.908, ⁴√n ≈ 15.5786, ⁵√n ≈ 8.99545\n\n## 58900 as geometric shapes\n\n### Circle (radius = n)\n\nDiameter 117,800, circumference ≈ 370,080, area ≈ 1.08988e+10\n\n### Sphere (radius = n)\n\nVolume ≈ 8.55923e+14, surface area ≈ 4.35954e+10, circumference ≈ 370,080\n\n### Square (side = n)\n\nPerimeter 235,600, area ≈ 3.46921e+09, diagonal ≈ 83,297.2\n\n### Cube (edge = n)\n\nSurface area ≈ 2.08153e+10, volume ≈ 2.04336e+14, space diagonal ≈ 102,018\n\n### Equilateral Triangle (side = n)\n\nPerimeter 176,700, area ≈ 1.50221e+09, height ≈ 51,008.9\n\n### Triangular Pyramid (edge = n)\n\nSurface area ≈ 6.00885e+09, volume ≈ 2.40813e+13, height ≈ 48,091.6\n\n## Cryptographic Hash Functions\n\nmd5 cb02cce59df9e0bcdc119314599f9dc9\nsha1 140b4a4d4615bbffbb768cc493dcfcb459fd2075\nsha256 884687b014a323cd63efd4dacbd9f215b9049162ad023a5bbddb7952c3372be7\nsha512 2bed57907557c5bf458ef441203d2788de3c1347185d2c552332495454c3fb401236cf58ee312ba2e2a3de2604260aa53a35210e7506f291e8deef4e23fdb16a\nripemd-160 2ba49e2a45e81a59b30aa53870023522095489d7"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.64860123,"math_prob":0.9896696,"size":4484,"snap":"2021-21-2021-25","text_gpt3_token_len":1576,"char_repetition_ratio":0.122544646,"word_repetition_ratio":0.031343285,"special_character_ratio":0.46074933,"punctuation_ratio":0.08020698,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.995539,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-11T01:54:17Z\",\"WARC-Record-ID\":\"<urn:uuid:b83e5106-8062-436c-b3ca-4f30ac002aa4>\",\"Content-Length\":\"48258\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:21f04b47-8e82-46ca-8707-85e50b384115>\",\"WARC-Concurrent-To\":\"<urn:uuid:417b9332-167d-44e6-a370-665ef1db6d5f>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/58900\",\"WARC-Payload-Digest\":\"sha1:AHWVOJP5CBXJ6R4G6PPUT5XLNFFWKIRW\",\"WARC-Block-Digest\":\"sha1:ZJH7434M7J7Z6FZTQMMW6IAMPRELWDIK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991553.4_warc_CC-MAIN-20210510235021-20210511025021-00399.warc.gz\"}"} |
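The arithmetic-function values quoted in the record above (τ(n) = 36, σ(n) = 138,880, s(n) = 79,980, φ(n) = 21,600 for n = 58,900) are easy to re-derive with plain trial division. A minimal Python sketch — the helper names here are mine, not from the source page:

```python
# Sanity-check the arithmetic functions quoted for n = 58,900.
def divisors(n):
    """All positive divisors of n via trial division up to sqrt(n)."""
    small, large = [], []
    d = 1
    while d * d <= n:
        if n % d == 0:
            small.append(d)
            if d != n // d:
                large.append(n // d)
        d += 1
    return small + large[::-1]

def phi(n):
    """Euler's totient, phi(n) = n * prod(1 - 1/p) over distinct primes p | n."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:  # leftover prime factor
        result -= result // m
    return result

n = 58900
divs = divisors(n)
print("tau(n)  =", len(divs))        # 36
print("sigma(n)=", sum(divs))        # 138880
print("s(n)    =", sum(divs) - n)    # 79980 -> abundant, since s(n) > n
print("phi(n)  =", phi(n))           # 21600
```

Trial division up to √n is plenty for a five-digit n; for large n one would factor first (58,900 = 2²·5²·19·31) and use the multiplicative formulas for σ and φ.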
https://math.stackexchange.com/questions/tagged/circulant-matrices | [
"# Questions tagged [circulant-matrices]\n\nFor questions regarding circulant matrices, where each row vector is rotated one element to the right relative to the preceding row vector.\n\n89 questions\nFilter by\nSorted by\nTagged with\n32 views\n\n### Eigenvectors of perturbed circulant matrix\n\nI have a circulant matrix defined by a positive kernel W(x): $W_{ij} = W(|i-j|)$ where W is defined on positive reals (so we are sampling {1,2,...,N} to create the matrix). I know this has ...\n22 views\n\nI read that a circulant matrix $C$ can be written as $F \\phi F^{-1}$ where $\\phi$ are $C$'s eigenvalues. Can someone give me more information about the $F$ matrix? Will it be the same for any ...\n35 views\n\n### How do I determine which connections form cycles in a directed graph's adjacency matrix?\n\ngiven a matrix A I know I can perform A^n = path length from j to v for some entry [j,v] to find paths to v. As I perform this iteration for ...\n74 views\n\n29 views\n\n### Given input signal $s$ and convolution kernel $a$, find the corresponding convolution matrix and the output signal\n\nDetermine the discrete convolution of the signal $s = (9 \\ \\ 9 \\ \\ 6 \\ \\ 9)^T$ and the convolution kernel $a = (2 \\ \\ 3 \\ \\ 1 \\ \\ 5)^T$. Given the convolution matrix $A \\in \\mathbb{R^{4 \\times 4}}$, ...\n31 views\n\n### Generalized Eigenvectors of a real symmetric circulant matrix\n\nI know that the eigenvectors and eigenvalues of any circulant matrix have a nice general form (See the wikipedia page). The wikipedia page also generalizes the eigenvalues (but not eigenvectors) for ...\n55 views\n\n### Flux of Curl with given function\n\nLet $F$ from $R^3$ to $R$ defined by $F(x, y, z) = (x − yz, xz, y)$. 
Let $S$ be the surface obtained by rotating the graph of $x=2^z+3^z$ with $z ∈ [0, 1]$, around the $z$-axis (with normal vectors ...\n53 views\n\n### Fourier transform of circulant or cyclic permutation matrix\n\nI understand that a circulant is expanded as a polynomial in P $$C = C_{0} P + C_{1} P^{2} + \\dots + C_{n} P^{n}$$ I also know that the columns of the Fourier matrix $F$ are the eigenvectors of $P$ ...\n86 views\n\n### The number of unitary circulant matrices over a finite field $\\mathbb{F}_{q^2}$\n\nSuppose $\\mathbb{F}=\\mathbb{F}_{q^2}$, where $q$ is a prime power. The conjugate of elements in $\\mathbb{F}$ is defined by $\\overline{x}=x^q$. I need to find the number of $n\\times n$ unitary ...\n52 views\n\n### Number of invertible elements in $\\mathbb{F}_q[X]/\\langle X^p-1\\rangle$ with $p=\\operatorname{char} \\mathbb{F}_q$\n\nI need to find the number of invertible elements in $\\mathbb{F}_q[X]/\\langle X^p-1\\rangle$ with $p=\\operatorname{char} \\mathbb{F}_q$, which is equal to the number of invertible $p\\times p$ circulant ...\n70 views\n\n61 views\n\n### What are the eigenvalues and eigenvectors of this circulant tridiagonal matrix?\n\n\\begin{equation} \\begin{pmatrix} \\alpha & \\beta & 0 & \\dots & 0 & 0 & \\beta \\\\ \\beta & \\alpha & \\beta & \\dots & 0 & 0 & 0 \\\\ ...\n245 views\n\n### Eigenvectors and Eigenvalues of Shift Matrix\n\n$$S:\\mathbb{C}^n\\rightarrow\\mathbb{C}^n,$$ $$S(x_1,x_2,...,x_n)^T = (x_n,x_1,...,x_{n-1})^T.$$ How can the eigenvalues and eigenvectors of S be calculated? I already have the standard matrix of S ...\n53 views\n\n### Quick way of finding the eigenvalues of circulant matrices over finite fields [closed]\n\nIs there a fast way to find eigenvalues of a circulant matrix over finite field? Thanks.\n101 views\n\n### Singular Circulant Matrix\n\nReffering to the above text, $C(a_0, ..., a_{n-1})$ or $C$ is a $n\\times n$ circulant matrix over complex number. 
Why $f(x)$ and $1-x^n$ have a common zero if and only if $C$ is singular. In addition, ...\n16 views\n\n### What types of graphs have good classical Ramsey properties?\n\nThis question is related to the search for classical Ramsey critical graphs. It is well known that circulant graphs have properties which make them good territory for finding these critical graphs. My ...\n24 views\n\n### On the cofactors of a circulant binary matrix\n\n$\\newcommand{\\M}{\\mathcal{M}}$Let us define the matrices $\\M(n,k)$ for positive integers $n,k$ with $k\\leq n$ to be the real $n\\times n$ matrix with all $1$s on the diagonal, all $1$s for $k-1$ ...\n11 views\n\n### Zero Divisor Partner of Idempotent Circulant Matrix\n\nLet $A\\in M_{n\\times n}(\\mathbb{R})$ be an idempotent circulant matrix with $\\mbox{rank}(A)=r$. Is there a way to obtain a $B\\in M_{n\\times n}(\\mathbb{R})$ such that $AB=0$ and $\\mbox{rank}(B)=n-r$?\n45 views\n\n38 views\n\n### Volume of the convex hull of the rows of a circulant matrix\n\nWhat is the formula for volume of the convex hull of the rows of a circulant matrix?\n30 views\n\n### why the inverse of a circulant matrix is circulant?\n\nDoes any body know why the inverse of a circulant matrix is circulant? is there any reference or easy proof for that?\n36 views\n\n### find the trace of a $D^{-1}$ in $A=(B+C)D^{-1}$\nLet $$B := \\begin{bmatrix} j H & kH \\\\ kH & H\\end{bmatrix}$$ where $H$ is a circulant matrix and it is symmetric and non-invertible, and $j, k$ are scalars. Let $$A := (B+C)D$$ where $C$ ..."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.76226777,"math_prob":0.99984086,"size":13851,"snap":"2020-45-2020-50","text_gpt3_token_len":4323,"char_repetition_ratio":0.197949,"word_repetition_ratio":0.028001698,"special_character_ratio":0.31369576,"punctuation_ratio":0.117062785,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.99997807,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-29T02:26:24Z\",\"WARC-Record-ID\":\"<urn:uuid:cbf3859b-ccfd-4b1f-acaa-a15486608c5f>\",\"Content-Length\":\"269390\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:97c16394-632d-4f9c-9225-220fca07e3cc>\",\"WARC-Concurrent-To\":\"<urn:uuid:617f61fe-cc16-4ecf-b951-d823f84a26bd>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/tagged/circulant-matrices\",\"WARC-Payload-Digest\":\"sha1:JF5Z62GTPIGWWT2KMBSBWWHDEMDISWRL\",\"WARC-Block-Digest\":\"sha1:5QN4GPYDMG3XSJ3YGYQZZMWIGIXBFP5V\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141195967.34_warc_CC-MAIN-20201129004335-20201129034335-00328.warc.gz\"}"} |
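Several of the questions above turn on the same fact: a circulant matrix diagonalizes as C = FφF⁻¹ with F the Fourier (DFT) matrix, so its eigenvalues are the DFT of its first row and its eigenvectors are the same for every circulant. A small self-contained Python check of that claim (the conventions and helper names are my own):

```python
import cmath

def circulant(c):
    """Circulant matrix whose first row is c; row i is c rotated right i times."""
    n = len(c)
    return [[c[(j - i) % n] for j in range(n)] for i in range(n)]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

c = [1.0, 2.0, 0.0, 5.0]      # arbitrary first row
n = len(c)
C = circulant(c)
w = cmath.exp(2j * cmath.pi / n)

for k in range(n):
    # Eigenvalue k is the DFT-style sum of the first row at frequency k...
    lam = sum(c[m] * w ** (k * m) for m in range(n))
    # ...with Fourier eigenvector (1, w^k, w^{2k}, ...), a column of F.
    v = [w ** (k * j) for j in range(n)]
    Cv = matvec(C, v)
    assert all(abs(Cv[j] - lam * v[j]) < 1e-9 for j in range(n))

print("all", n, "eigenpairs of the circulant verified")
```

With this right-rotation convention (the one in the tag description), the eigenvector for frequency k is (1, ωᵏ, ω²ᵏ, …) with ω = e^{2πi/n}, independent of the entries c — which is exactly why all circulants share the matrix F in C = FφF⁻¹.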
https://math.stackexchange.com/questions/2368552/intersecting-a-submanifold-with-a-small-ball | [
"# Intersecting a Submanifold with a Small Ball\n\nLet $M$ be an embedded $k$-submanifold of $\\mathbf R^n$ and $p$ be a point in $M$. For $\\epsilon>0$, let $S_\\epsilon(p)$ denote the sphere in $\\mathbf R^n$ of radius $\\epsilon$ having its center at $p$. So $S_\\epsilon(p)$ is itself an embedded $(n-1)$-submanifold of $\\mathbf R^n$.\n\nIs the following true?\n\nFor $\\epsilon$ small enough, $S_\\epsilon(p)\\cap M$ is diffeomorphic to a sphere of dimension $k-1$.\n\nIntuitively, near $p$ the manifold $M$ looks like a flat $k$-plane, and intersecting $S_\\epsilon(p)$ with a $k$-plane passing through $p$ yields a $(k-1)$-sphere.\n\nThe reason why I ask this is the following: On pg 17 of Milnor's Singular Points of Complex Hypersurfaces, the author states the following (Corollary 2.9).\n\n(Not exact quote) Let $V$ be a real algebraic set and $x^0$ be a non-singular point in $V$. Every sufficiently small sphere $S_\\epsilon$ centered at $x^0$ intersects $V$ in a smooth manifold.\n\nSince the set of singular points form a closed set, we note that a small enough sphere centered at $x^0$ will not contain any singular point of $V$. It is already stated in the book that the set of non-singular points of an algebraic set form a manifold, each of whose components have the same dimension (See Theorem 2.3). Milnor does not use the qualification 'embedded' for a manifold. But after reading the reference Milnor has given for the proof of Theorem 2.3, I am convinced that the set of non-singular points form an embedded submanifold of the ambient space. Further, the proof Milnor has given makes use of the algebraic nature of $V$.\n\nSo in the above Milnor is stating something even weaker than what I am intuitively finding true in a more general setting. Or maybe I am missing some subtle point.\n\nEDIT: I am a bit embarrassed to write this. But I should. The thing is that the theorem Milnor states is not just for non-singular points but also for isolated singular points. 
The proof provided by Milnor uses the algebraic nature of the problem just to cover the case of the isolated singular point. At any rate, @Shinchishiro Nakamura has resolved the question I had asked.\n\nThe answer is TRUE. We can prove this fact making use of the following two lemmas. Let $M$ be an embedded $k$-submanifold of $\\mathbb{R}^n$ and $p$ be a point in $M$. Without loss of generality, we may assume that $p=0$. Let $f:\\mathbb{R}^n\\to\\mathbb{R}$ denote a smooth function defined by $f(x):=x_1^2+\\cdots+x_n^2$.\n\nLemma 1: The restricted function $f|_M:M\\to\\mathbb{R}$ has $p$ as a non-degenerate critical point.\n\nProof: It is clear that $p$ is a critical point of $f|_M$. Since $M$ is a submanifold in $\\mathbb{R}^n$, there exist a local coordinate $(u_1,\\ldots,u_n)$ around $p$ such that $M$ corresponds to the set of $u_{k+1}=\\cdots=u_n=0$. Because $f$'s Hessian matrix is positive definite, so is $f|_M$'s. This completes the proof.\n\nLemma 2: Let $X$ be a smooth $n$-dimensional manifold, and $Y$ be a $k$-submanifold of $X$. Suppose that a smooth function $f:X\\to\\mathbb{R}$ has $p\\in Y$ as a non-degenerate critical point, and $f|_Y:Y\\to\\mathbb{R}$ has $p$ as a non-degenerate critical point, too. Then, there exists a local coordinate $(x_1,\\ldots,x_n)$ around $p$ such that (1) $f(x_1,\\ldots,x_n)=f(p)\\pm x_1^2\\pm\\cdots\\pm x_n^2$ and (2) $Y$ corresponds to the set of $x_{k+1}=\\cdots=x_n=0$.\n\nProof: It's an analogy of the standard proof of Morse's lemma.\n\nNow we can prove the statement. Using Lemma 1 and Lemma 2, we have a local coordinate $(x_1,\\ldots,x_n)$ around $p$ such that $f(x_1,\\ldots,x_n)=x_1^2+\\ldots+x_n^2$ and $M$ is represented by $x_{k+1}=\\ldots=x_n=0$. Taking $\\varepsilon>0$ small, we may assume that $S_\\varepsilon(p)\\cap M = f^{-1}(\\varepsilon^2)\\cap M$ is included in the local coordinate chart. 
Then, we have\n\n\\begin{equation} S_\\varepsilon(p)\\cap M = \\{(x_1,\\ldots,x_k,0,\\ldots,0)\\ |\\ x_1^2+\\ldots+x_k^2=\\varepsilon^2\\} \\end{equation} which is diffeomorphic to a sphere of dimension $k−1$."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.88895154,"math_prob":0.9998461,"size":2104,"snap":"2020-10-2020-16","text_gpt3_token_len":553,"char_repetition_ratio":0.12809524,"word_repetition_ratio":0.011428571,"special_character_ratio":0.25,"punctuation_ratio":0.08530806,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000076,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-03-29T04:15:00Z\",\"WARC-Record-ID\":\"<urn:uuid:30ee2997-579e-44e4-a32f-112212b9c838>\",\"Content-Length\":\"141186\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:675425fa-b7d6-4569-baf3-c366f1148bf7>\",\"WARC-Concurrent-To\":\"<urn:uuid:178ae3e8-a9a9-44f3-9a58-e97eef703677>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2368552/intersecting-a-submanifold-with-a-small-ball\",\"WARC-Payload-Digest\":\"sha1:7OY5RZC4AWQKHBZYOTVVFBRUMFVSTGVZ\",\"WARC-Block-Digest\":\"sha1:2VTKTP4TIJZYO3RHDG7VCKJ6OLD2WV4I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370493684.2_warc_CC-MAIN-20200329015008-20200329045008-00398.warc.gz\"}"} |
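The k = 1 case of the question above can be checked numerically. Below is an illustrative sketch of my own (not from the post): M is the parabola {(x, x²)} in R², an embedded 1-submanifold through p = (0, 0), and for small ε the intersection S_ε(p) ∩ M is exactly two points — a 0-sphere, as the Morse-lemma argument in the answer predicts.

```python
import math

# M = {(x, x**2)} is an embedded 1-submanifold of R^2 through p = (0, 0).
# For small eps, S_eps(p) ∩ M should be a 0-sphere: exactly two points.
def intersection_points(eps):
    """Solve x**2 + x**4 = eps**2 for x >= 0 by bisection.

    g(x) = x**2 + x**4 is strictly increasing on [0, eps] with
    g(0) < eps**2 <= g(eps), so there is exactly one positive root x*,
    and the full intersection is the symmetric pair (±x*, x*^2).
    """
    target = eps * eps
    lo, hi = 0.0, eps
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mid * mid + mid ** 4 < target:
            lo = mid
        else:
            hi = mid
    x = 0.5 * (lo + hi)
    return [(-x, x * x), (x, x * x)]

pts = intersection_points(0.1)
for px, py in pts:
    assert abs(math.hypot(px, py) - 0.1) < 1e-9   # the point lies on S_eps(p)
print(len(pts), "intersection points: a sphere of dimension k - 1 = 0")
```

Here the restriction of the squared-distance function to M is x² + x⁴, which has a non-degenerate minimum at 0 — the same Morse-theoretic fact Lemma 1 of the answer uses in general.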
http://www.kwiznet.com/p/takeQuiz.php?ChapterID=11086&CurriculumID=48&Method=Worksheet&NQ=10&Num=4.57&Type=C | [
"",
null,
"Name: ___________________Date:___________________\n\n### High School Mathematics 4.57 Volume of the Cylinder - III",
null,
"Directions: Solve the following problems. Also write at least 5 examples of your own.\n\nQ 1: The lateral surface area of cylinder is 176 Sq. cm. Its base area is 38.5 Sq. cm. Find its volume. Options: 304 c. cm. / 302 c. cm. / 306 c. cm. / 308 c. cm.\nQ 2: The lateral surface area of cylinder is 352 Sq. cm. Its base area is 154 Sq. cm. Find its volume. Options: 1238 c. cm. / 1232 c. cm. / 1234 c. cm. / 1236 c. cm.\nQ 3: The lateral surface area of cylinder is 1320 Sq. cm. Its base area is 616 Sq. cm. Find its volume. Options: 9246 c. cm. / 9242 c. cm. / 9244 c. cm. / 9240 c. cm.\nQ 4: The lateral surface area of cylinder is 2640 Sq. cm. Its base area is 3850 Sq. cm. Find its volume. Options: 46200 c. cm. / 46368 c. cm. / 46264 c. cm. / 46242 c. cm.\nQ 5: The lateral surface area of cylinder is 308 Sq. cm. Its base area is 75.46 Sq. cm. Find its volume. Options: 745.6 c. cm. / 754.6 c. cm. / 74.56 c. cm. / 75.46 c. cm.\nQ 6: The lateral surface area of cylinder is 4620 Sq. cm. Its base area is 7546 Sq. cm. Find its volume. Options: 113192 c. cm. / 113194 c. cm. / 113198 c. cm. / 113190 c. cm.\nQuestion 7: This question is available to subscribers only!\nQuestion 8: This question is available to subscribers only!"
]
| [
null,
"http://www.kwiznet.com/images/kwizNET_logo7.gif",
null,
"http://kwiznet.com/px/homes/i/Grade9/SolidGeometry/Topic_IV_Volume_of_Cylinder_III.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8008028,"math_prob":0.9514538,"size":2112,"snap":"2019-26-2019-30","text_gpt3_token_len":597,"char_repetition_ratio":0.17362429,"word_repetition_ratio":0.12868632,"special_character_ratio":0.3409091,"punctuation_ratio":0.23647295,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.950857,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-22T20:12:34Z\",\"WARC-Record-ID\":\"<urn:uuid:7a94749f-7d10-4ab8-b872-43daf0ee02ef>\",\"Content-Length\":\"9676\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:237d5817-c2e0-4d1b-92fd-fc088e8102f8>\",\"WARC-Concurrent-To\":\"<urn:uuid:0e437cc8-7521-4f51-aa8b-d4839de6ff7a>\",\"WARC-IP-Address\":\"74.208.215.65\",\"WARC-Target-URI\":\"http://www.kwiznet.com/p/takeQuiz.php?ChapterID=11086&CurriculumID=48&Method=Worksheet&NQ=10&Num=4.57&Type=C\",\"WARC-Payload-Digest\":\"sha1:TWEV5IX5QOMB7UMVVXRVLMRBHQDSXZDW\",\"WARC-Block-Digest\":\"sha1:JIGBU5UKOWWRM5IZAZ53XSHQMVQJIVOT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195528220.95_warc_CC-MAIN-20190722201122-20190722223122-00528.warc.gz\"}"} |
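Every question in the quiz above follows one pattern: from the lateral surface area L = 2πrh and the base area B = πr², the volume is V = B·h = (L/2)·r with r = √(B/π). The quiz numbers come out exactly with the school value π ≈ 22/7; a small solver (my own, not from the site):

```python
from fractions import Fraction
from math import isqrt

# Pattern behind every question: L = 2*pi*r*h (lateral area), B = pi*r**2
# (base area), so V = B*h = (L/2)*r with r = sqrt(B/pi).
PI = Fraction(22, 7)   # the quiz numbers are built around pi ≈ 22/7

def cylinder_volume(lateral, base):
    r_sq = Fraction(base) / PI                        # r^2 = B/pi
    r = Fraction(isqrt(r_sq.numerator), isqrt(r_sq.denominator))
    assert r * r == r_sq, "quiz inputs give a rational radius"
    return Fraction(lateral, 2) * r                   # V = (L/2)*r

print(cylinder_volume(176, Fraction(77, 2)))   # Q1 (B = 38.5): 308
print(cylinder_volume(352, 154))               # Q2: 1232
print(cylinder_volume(1320, 616))              # Q3: 9240
print(cylinder_volume(2640, 3850))             # Q4: 46200
```

Exact rationals (rather than floats) make it obvious that each quiz radius is a whole or half number — e.g. Q2 gives r² = 154·7/22 = 49, so r = 7 and V = 176·7 = 1232.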
https://docs.rs/approx/0.5.1/approx/ | [
"# Crate approx\n\nExpand description\n\nA crate that provides facilities for testing the approximate equality of floating-point based types, using either relative difference, or units in the last place (ULPs) comparisons.\n\nYou can also use the `*_{eq, ne}!` and `assert_*_{eq, ne}!` macros to test for equality using a more positional style:\n\n``````#[macro_use]\nextern crate approx;\n\nuse std::f64;\n\nabs_diff_eq!(1.0, 1.0);\nabs_diff_eq!(1.0, 1.0, epsilon = f64::EPSILON);\n\nrelative_eq!(1.0, 1.0);\nrelative_eq!(1.0, 1.0, epsilon = f64::EPSILON);\nrelative_eq!(1.0, 1.0, max_relative = 1.0);\nrelative_eq!(1.0, 1.0, epsilon = f64::EPSILON, max_relative = 1.0);\nrelative_eq!(1.0, 1.0, max_relative = 1.0, epsilon = f64::EPSILON);\n\nulps_eq!(1.0, 1.0);\nulps_eq!(1.0, 1.0, epsilon = f64::EPSILON);\nulps_eq!(1.0, 1.0, max_ulps = 4);\nulps_eq!(1.0, 1.0, epsilon = f64::EPSILON, max_ulps = 4);\nulps_eq!(1.0, 1.0, max_ulps = 4, epsilon = f64::EPSILON);``````\n\n## Implementing approximate equality for custom types\n\nThe `*Eq` traits allow approximate equalities to be implemented on types, based on the fundamental floating point implementations.\n\nFor example, we might want to be able to do approximate assertions on a complex number type:\n\n``````#[macro_use]\nextern crate approx;\n\n#[derive(Debug, PartialEq)]\nstruct Complex<T> {\nx: T,\ni: T,\n}\n\nlet x = Complex { x: 1.2, i: 2.3 };\n\nassert_relative_eq!(x, x);\nassert_ulps_eq!(x, x, max_ulps = 4);``````\n\nTo do this we can implement `AbsDiffEq`, `RelativeEq` and `UlpsEq` generically in terms of a type parameter that also implements `AbsDiffEq`, `RelativeEq` and `UlpsEq` respectively. 
This means that we can make comparisons for either `Complex<f32>` or `Complex<f64>`:\n\n``````impl<T: AbsDiffEq> AbsDiffEq for Complex<T> where\nT::Epsilon: Copy,\n{\ntype Epsilon = T::Epsilon;\n\nfn default_epsilon() -> T::Epsilon {\nT::default_epsilon()\n}\n\nfn abs_diff_eq(&self, other: &Self, epsilon: T::Epsilon) -> bool {\nT::abs_diff_eq(&self.x, &other.x, epsilon) &&\nT::abs_diff_eq(&self.i, &other.i, epsilon)\n}\n}\n\nimpl<T: RelativeEq> RelativeEq for Complex<T> where\nT::Epsilon: Copy,\n{\nfn default_max_relative() -> T::Epsilon {\nT::default_max_relative()\n}\n\nfn relative_eq(&self, other: &Self, epsilon: T::Epsilon, max_relative: T::Epsilon) -> bool {\nT::relative_eq(&self.x, &other.x, epsilon, max_relative) &&\nT::relative_eq(&self.i, &other.i, epsilon, max_relative)\n}\n}\n\nimpl<T: UlpsEq> UlpsEq for Complex<T> where\nT::Epsilon: Copy,\n{\nfn default_max_ulps() -> u32 {\nT::default_max_ulps()\n}\n\nfn ulps_eq(&self, other: &Self, epsilon: T::Epsilon, max_ulps: u32) -> bool {\nT::ulps_eq(&self.x, &other.x, epsilon, max_ulps) &&\nT::ulps_eq(&self.i, &other.i, epsilon, max_ulps)\n}\n}``````\n\nFloating point is hard! 
Thanks goes to these links for helping to make things a little easier to understand:\n\n## Macros\n\nApproximate equality of using the absolute difference.\n\nApproximate inequality of using the absolute difference.\n\nAn assertion that delegates to `abs_diff_eq!`, and panics with a helpful error on failure.\n\nAn assertion that delegates to `abs_diff_ne!`, and panics with a helpful error on failure.\n\nAn assertion that delegates to `relative_eq!`, and panics with a helpful error on failure.\n\nAn assertion that delegates to `relative_ne!`, and panics with a helpful error on failure.\n\nAn assertion that delegates to `ulps_eq!`, and panics with a helpful error on failure.\n\nAn assertion that delegates to `ulps_ne!`, and panics with a helpful error on failure.\n\nApproximate equality using both the absolute difference and relative based comparisons.\n\nApproximate inequality using both the absolute difference and relative based comparisons.\n\nApproximate equality using both the absolute difference and ULPs (Units in Last Place).\n\nApproximate inequality using both the absolute difference and ULPs (Units in Last Place).\n\n## Structs\n\nThe requisite parameters for testing for approximate equality using a absolute difference based comparison.\n\nThe requisite parameters for testing for approximate equality using a relative based comparison.\n\nThe requisite parameters for testing for approximate equality using an ULPs based comparison.\n\n## Traits\n\nEquality that is defined using the absolute difference of two numbers.\n\nEquality comparisons between two numbers using both the absolute difference and relative based comparisons.\n\nEquality comparisons between two numbers using both the absolute difference and ULPs (Units in Last Place) based comparisons."
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.69103134,"math_prob":0.9866032,"size":4336,"snap":"2023-40-2023-50","text_gpt3_token_len":1171,"char_repetition_ratio":0.14358264,"word_repetition_ratio":0.24829932,"special_character_ratio":0.27052584,"punctuation_ratio":0.28017718,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9906592,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-10T00:10:18Z\",\"WARC-Record-ID\":\"<urn:uuid:721ab001-6de9-4b63-b377-442e44add04d>\",\"Content-Length\":\"50217\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:78d7da28-c1c9-4d94-bc15-b7560a29ae6a>\",\"WARC-Concurrent-To\":\"<urn:uuid:d6717b73-1144-4be8-9e64-643d9a4b7de3>\",\"WARC-IP-Address\":\"108.138.85.36\",\"WARC-Target-URI\":\"https://docs.rs/approx/0.5.1/approx/\",\"WARC-Payload-Digest\":\"sha1:RSZYVT7AH35VVKBMYC4YXYPJEHYLPM7M\",\"WARC-Block-Digest\":\"sha1:KLKKCDDLWLDN4AFAVENMWT5F4QXPNEWB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100989.75_warc_CC-MAIN-20231209233632-20231210023632-00434.warc.gz\"}"} |
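The three comparison styles the approx crate documents — absolute difference, relative difference, and ULPs (units in the last place) — can be sketched outside Rust as well. The Python below mirrors the semantics described above (each variant falls back to an absolute-epsilon check to handle values near zero); it is an illustration of the ideas, not the crate's implementation:

```python
import struct

# Illustrative Python analogues of the crate's three comparison styles.
EPS = 2.0 ** -52   # same default as f64::EPSILON

def abs_diff_eq(a, b, epsilon=EPS):
    return abs(a - b) <= epsilon

def relative_eq(a, b, epsilon=EPS, max_relative=EPS):
    if abs_diff_eq(a, b, epsilon):          # absolute check handles values near zero
        return True
    return abs(a - b) <= max_relative * max(abs(a), abs(b))

def ulps_eq(a, b, epsilon=EPS, max_ulps=4):
    if abs_diff_eq(a, b, epsilon):
        return True
    if (a < 0) != (b < 0):                  # opposite signs never compare ULP-equal
        return False
    ia = struct.unpack("<q", struct.pack("<d", a))[0]   # reinterpret the f64 bits
    ib = struct.unpack("<q", struct.pack("<d", b))[0]
    return abs(ia - ib) <= max_ulps

x = 0.1 + 0.2
print(x == 0.3)          # False: x is one ULP above 0.3 in binary64
print(abs_diff_eq(x, 0.3), relative_eq(x, 0.3), ulps_eq(x, 0.3))
```

The bit-reinterpretation trick works because, for same-sign IEEE-754 doubles, the integer encodings are ordered the same way as the floats, so the integer difference counts representable values between them.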
https://se.mathworks.com/help/simulink/slref/divide.html | [
"# Divide\n\nDivide one input by another\n\n• Library:\n\nHDL Coder / HDL Floating Point Operations\n\nHDL Coder / Math Operations\n\n•",
null,
"## Description\n\nThe Divide block outputs the result of dividing its first input by its second. The inputs can be scalars, a scalar and a nonscalar, or two nonscalars that have the same dimensions. This block supports only complex input values at division ports when all ports have the same single or double data type.\n\nThe Divide block is functionally a Product block that has two block parameter values preset:\n\n• Multiplication — `Element-wise(.*)`\n\n• Number of Inputs — `*/`\n\nSetting nondefault values for either of those parameters can change a Divide block to be functionally equivalent to a Product block or a Product of Elements block.\n\n## Ports\n\n### Input\n\nexpand all\n\nInput signal to be multiplied with other inputs.\n\n#### Dependencies\n\nTo enable one or more X ports, specify one or more `*` characters for the Number of inputs parameter.\n\nData Types: `half` | `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `Boolean` | `fixed point`\n\nInput signal for division or inversion operations.\n\n#### Dependencies\n\nTo enable one or more ÷ ports, specify one or more `/` characters for the Number of inputs parameter.\n\nData Types: `half` | `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `Boolean` | `fixed point`\n\nFirst input to multiply or divide, provided as a scalar, vector, matrix, or N-D array.\n\nData Types: `half` | `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `Boolean` | `fixed point`\n\nNth input to multiply or divide, provided as a scalar, vector, matrix, or N-D array.\n\nData Types: `half` | `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `Boolean` | `fixed point`\n\n### Output\n\nexpand all\n\nOutput computed by multiplying, dividing, or inverting inputs.\n\nData Types: `half` | `single` | `double` | `int8` | 
`int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `Boolean` | `fixed point`\n\n## Parameters\n\nexpand all\n\n### Main\n\nControl two properties of the block:\n\n• The number of input ports on the block\n\n• Whether each input is multiplied or divided into the output\n\nWhen you specify:\n\n• `1` or `*` or `/`\n\nThe block has one input port. In element-wise mode, the block processes the input as described for the Product of Elements block. In matrix mode, if the parameter value is `1` or `*`, the block outputs the input value. If the value is `/`, the input must be a square matrix (including a scalar as a degenerate case) and the block outputs the matrix inverse. See Element-Wise Mode and Matrix Mode for more information.\n\n• Integer value > 1\n\nThe block has the number of inputs given by the integer value. The inputs are multiplied together in element-wise mode or matrix mode, as specified by the Multiplication parameter. See Element-Wise Mode and Matrix Mode for more information.\n\n• Unquoted string of two or more `*` and `/` characters\n\nThe block has the number of inputs given by the length of the character vector. Each input that corresponds to a `*` character is multiplied into the output. Each input that corresponds to a `/` character is divided into the output. The operations occur in element-wise mode or matrix mode, as specified by the Multiplication parameter. See Element-Wise Mode and Matrix Mode for more information.\n\n#### Programmatic Use\n\n Block Parameter: `Inputs` Type: character vector Values: ```'2' | '*' | '**' | '*/' | '*/*' | ...``` Default: `'*/'`\n\nSpecify whether the block performs `Element-wise(.*)` or `Matrix(*)` multiplication.\n\n#### Programmatic Use\n\n Block Parameter: `Multiplication` Type: character vector Values: `'Element-wise(.*)' | 'Matrix(*)' ` Default: `'Element-wise(.*)'`\n\nSpecify the dimension to multiply over as `All dimensions`, or `Specified dimension`. 
When you select `Specified dimension`, you can specify the Dimension as `1` or `2`.\n\n#### Dependencies\n\nTo enable this parameter, set Number of inputs to `*` and Multiplication to `Element-wise (.*)`.\n\n#### Programmatic Use\n\n Block Parameter: `CollapseMode` Type: character vector Values: `'All dimensions' | 'Specified dimension'` Default: `'All dimensions'`\n\nSpecify the dimension to multiply over as an integer less than or equal to the number of dimensions of the input signal.\n\n#### Dependencies\n\nTo enable this parameter, set:\n\n• Number of inputs to `*`\n\n• Multiplication to `Element-wise (.*)`\n\n• Multiply over to `Specified dimension`\n\n#### Programmatic Use\n\n Block Parameter: `CollapseDim` Type: character vector Values: `'1' | '2' | ...` Default: `'1'`\n\nSpecify the sample time as a value other than -1. For more information, see Specify Sample Time.\n\n#### Dependencies\n\nThis parameter is not visible unless it is explicitly set to a value other than `-1`. To learn more, see Blocks for Which Sample Time Is Not Recommended.\n\n#### Programmatic Use\n\n Block Parameter: `SampleTime` Type: character vector Values: scalar or vector Default: `'-1'`\n\n### Signal Attributes\n\nSpecify if input signals must all have the same data type. If you enable this parameter, then an error occurs during simulation if the input signal types are different.\n\n#### Programmatic Use\n\n Block Parameter: `InputSameDT` Type: character vector Values: `'off' | 'on'` Default: `'off'`\n\nLower value of the output range that Simulink® checks.\n\nSimulink uses the minimum to perform:\n\nNote\n\nOutput minimum does not saturate or clip the actual output signal. 
Use the Saturation block instead.\n\n#### Programmatic Use\n\n Block Parameter: `OutMin` Type: character vector Values: `'[ ]'`| scalar Default: `'[ ]'`\n\nUpper value of the output range that Simulink checks.\n\nSimulink uses the maximum value to perform:\n\nNote\n\nOutput maximum does not saturate or clip the actual output signal. Use the Saturation block instead.\n\n#### Programmatic Use\n\n Block Parameter: `OutMax` Type: character vector Values: `'[ ]'`| scalar Default: `'[ ]'`\n\nChoose the data type for the output. The type can be inherited, specified directly, or expressed as a data type object such as `Simulink.NumericType`. For more information, see Control Signal Data Types.\n\nWhen you select an inherited option, the block behaves as follows:\n\n• `Inherit: Inherit via internal rule` — Simulink chooses a data type to balance numerical accuracy, performance, and generated code size, while taking into account the properties of the embedded target hardware. If you change the embedded target settings, the data type selected by the internal rule might change. For example, if the block multiplies an input of type `int8` by a gain of `int16` and `ASIC/FPGA` is specified as the targeted hardware type, the output data type is `sfix24`. If ```Unspecified (assume 32-bit Generic)```, in other words, a generic 32-bit microprocessor, is specified as the target hardware, the output data type is `int32`. If none of the word lengths provided by the target microprocessor can accommodate the output range, Simulink software displays an error in the Diagnostic Viewer.\n\nIt is not always possible for the software to optimize code efficiency and numerical accuracy at the same time. 
If the internal rule doesn't meet your specific needs for numerical accuracy or performance, use one of the following options:

• Specify the output data type explicitly.

• Use the simple choice of `Inherit: Same as input`.

• Explicitly specify a default data type such as `fixdt(1,32,16)` and then use the Fixed-Point Tool to propose data types for your model. For more information, see `fxptdlg` (Fixed-Point Designer).

• To specify your own inheritance rule, use `Inherit: Inherit via back propagation` and then use a Data Type Propagation block. Examples of how to use this block are available in the Signal Attributes library Data Type Propagation Examples block.

• `Inherit: Inherit via back propagation` — Use data type of the driving block.

• `Inherit: Same as first input` — Use data type of first input signal.

#### Programmatic Use

Block Parameter: `OutDataTypeStr`
Type: character vector
Values: `'Inherit: Inherit via internal rule'` | `'Inherit: Same as first input'` | `'Inherit: Inherit via back propagation'` | `'double'` | `'single'` | `'int8'` | `'uint8'` | `'int16'` | `'uint16'` | `'int32'` | `'uint32'` | `'int64'` | `'uint64'` | `'fixdt(1,16)'` | `'fixdt(1,16,0)'` | `'fixdt(1,16,2^0,0)'` | `''`
Default: `'Inherit: Inherit via internal rule'`

Select this parameter to prevent the fixed-point tools from overriding the Output data type you specify on the block. For more information, see Use Lock Output Data Type Setting (Fixed-Point Designer).

#### Programmatic Use

Block Parameter: `LockScale`
Type: character vector
Values: `'off' | 'on'`
Default: `'off'`

Select the rounding mode for fixed-point operations. You can select:

`Ceiling`

Rounds positive and negative numbers toward positive infinity. Equivalent to the MATLAB® `ceil` function.

`Convergent`

Rounds number to the nearest representable value. If a tie occurs, rounds to the nearest even integer. Equivalent to the Fixed-Point Designer™ `convergent` function.

`Floor`

Rounds positive and negative numbers toward negative infinity. Equivalent to the MATLAB `floor` function.

`Nearest`

Rounds number to the nearest representable value. If a tie occurs, rounds toward positive infinity. Equivalent to the Fixed-Point Designer `nearest` function.

`Round`

Rounds number to the nearest representable value. If a tie occurs, rounds positive numbers toward positive infinity and rounds negative numbers toward negative infinity. Equivalent to the Fixed-Point Designer `round` function.

`Simplest`

Chooses between rounding toward floor and rounding toward zero to generate rounding code that is as efficient as possible.

`Zero`

Rounds number toward zero. Equivalent to the MATLAB `fix` function.

Block parameters always round to the nearest representable value. To control the rounding of a block parameter, enter an expression using a MATLAB rounding function into the mask field.

#### Programmatic Use

Block Parameter: `RndMeth`
Type: character vector
Values: `'Ceiling' | 'Convergent' | 'Floor' | 'Nearest' | 'Round' | 'Simplest' | 'Zero'`
Default: `'Floor'`

Specify whether overflows saturate or wrap.

Select this check box (`on`):

• Rationale: Your model has possible overflow, and you want explicit saturation protection in the generated code.

• Impact on overflows: Overflows saturate to either the minimum or maximum value that the data type can represent.

• Example: The maximum value that the `int8` (signed, 8-bit integer) data type can represent is 127. Any block operation result greater than this maximum value causes overflow of the 8-bit integer. With the check box selected, the block output saturates at 127. Similarly, the block output saturates at a minimum output value of -128.

Do not select this check box (`off`):

• Rationale: You want to optimize efficiency of your generated code, and you want to avoid overspecifying how a block handles out-of-range signals. For more information, see Troubleshoot Signal Range Errors.

• Impact on overflows: Overflows wrap to the appropriate value that is representable by the data type.

• Example: The maximum value that the `int8` (signed, 8-bit integer) data type can represent is 127. Any block operation result greater than this maximum value causes overflow of the 8-bit integer. With the check box cleared, the software interprets the overflow-causing value as `int8`, which can produce an unintended result. For example, a block result of 130 (binary 1000 0010) expressed as `int8` is -126.

When you select this check box, saturation applies to every internal operation on the block, not just the output, or result. Usually, the code generation process can detect when overflow is not possible. In this case, the code generator does not produce saturation code.

#### Programmatic Use

Block Parameter: `SaturateOnIntegerOverflow`
Type: character vector
Values: `'off' | 'on'`
Default: `'off'`

## Block Characteristics

Data Types: `Boolean` | `double` | `fixed point` | `half` | `integer` | `single`
Direct Feedthrough: yes
Multidimensional Signals: yes
Variable-Size Signals: yes
Zero-Crossing Detection: no
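The rounding modes and the two overflow behaviors described above can be mimicked in a few lines of plain Python (an illustrative sketch, not MathWorks code; the helper names are mine):

```python
import math

# Illustrative sketch of the rounding modes above (not MathWorks code).
def ceiling(x):    return math.ceil(x)                 # toward +inf
def floor_(x):     return math.floor(x)                # toward -inf
def zero(x):       return math.trunc(x)                # toward zero (MATLAB fix)
def convergent(x): return round(x)                     # Python rounds ties to even
def nearest(x):    return math.floor(x + 0.5)          # ties toward +inf
def round_(x):                                         # ties away from zero
    return int(math.copysign(math.floor(abs(x) + 0.5), x))

# Overflow handling for a signed 8-bit result:
def wrap_int8(v):     return ((v + 128) % 256) - 128   # check box cleared
def saturate_int8(v): return max(-128, min(127, v))    # check box selected

print(convergent(2.5), nearest(2.5), round_(-2.5))     # 2 3 -3
print(wrap_int8(130), saturate_int8(130))              # -126 127
```

Note how the wrapped result reproduces the 130 → -126 example from the table above, while saturation clamps the same value at 127.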
https://www.paddlepaddle.org.cn/documentation/docs/zh/api/paddle/nn/functional/margin_ranking_loss_cn.html
"# margin_ranking_loss¶\n\npaddle.nn.functional. margin_ranking_loss ( input, other, label, margin=0.0, reduction='mean', name=None ) [源代码]\n\n\\[margin\\_rank\\_loss = max(0, -label * (input - other) + margin)\\]\n\nreduction 设置为 `'mean'` 时,\n\n\\[Out = MEAN(margin\\_rank\\_loss)\\]\n\nreduction 设置为 `'sum'` 时,\n\n\\[Out = SUM(margin\\_rank\\_loss)\\]\n\nreduction 设置为 `'none'` 时,直接返回最原始的 margin_rank_loss\n\n## 参数¶\n\n• input (Tensor) - 第一个输入的 Tensor,数据类型为:float32、float64。\n\n• other (Tensor) - 第二个输入的 Tensor,数据类型为:float32、float64。\n\n• label (Tensor) - 训练数据的标签,数据类型为:float32、float64。\n\n• margin (float,可选) - 用于加和的 margin 值,默认值为 0。\n\n• reduction (string,可选) - 指定应用于输出结果的计算方式,可选值有:`'none'``'mean'``'sum'`。如果设置为 `'none'`,则直接返回 最原始的 `margin_rank_loss`。如果设置为 `'sum'`,则返回 `margin_rank_loss` 的总和。如果设置为 `'mean'`,则返回 `margin_rank_loss` 的平均值。默认值为 `'none'`\n\n• name (str,可选) - 具体用法请参见 Name,一般无需设置,默认值为 None。\n\n## 返回¶\n\nTensor,如果 `reduction``'sum'` 或者是 `'mean'`,则形状为 \\(\\),否则 shape 和输入 input 保持一致。数据类型与 `input``other` 相同。\n\n## 代码示例¶\n\n```import paddle\n\ninput = paddle.to_tensor([[1, 2], [3, 4]], dtype='float32')\nother = paddle.to_tensor([[2, 1], [2, 4]], dtype='float32')\nlabel = paddle.to_tensor([[1, -1], [-1, -1]], dtype='float32')"
https://chemistryblog.net/how-to-balance-chemical-equations/
"What is a chemical equation?\n\nA chemical equation is the symbolic representation of a chemical reaction in the form of symbols and formulae, wherein the reactant entities are given on the left-hand side and the product entities on the right-hand side.\n\nHow to balance a chemical equation?\n\nWhat we are going to demonstrate below is the Algebraic Method or the Bottomley’s Method of balancing a chemical\n\nFor example, the following chemical equation can be considered for the purpose of demonstrating how to balance a chemical equation.\n\nPCl5 + H2O –> H3PO4 + HCl\n\nThe first step is to assign arbitrary multipliers to each member substance of the equation.\n\na PCl5 + b H2O –> c H3PO4 + d HCl\n\nHere, the objective is to ascertain the values of the arbitrary multipliers which make the chemical equation, balance. Hence the next step is to come up with linear equations of arbitrary multipliers by balancing the different elements of the two sides of the equation.\n\nBy balancing the element Phosphorus (P) –> a = c —-> equation (1)\n\nBy balancing the element Chlorine (Cl) –> 5a = d —–> equation (2)\n\nBy balancing the element Hydrogen (H) –> 2b = 3 c + d —–> equation (3)\n\nBy balancing the element Oxygen (O) –> b = 4c —–> equation (4)\n\nNow we have 4 linear equations with 4 unknowns of a, b, c and d. The next step is to solve these 4 linear equations to find the 4 unknowns.\n\nSince a = c, substituting c with a –>\n\n5a=d\n\n2b = 3a + d\n\nb = 4a\n\nSince b = 4a, substituting b with 4a —>\n\n5a = d\n\n8a = 3a + d —> 5a = d\n\nLets assume a = 1\n\nThen d=5, b=4 and c = 1\n\nFinally the balanced chemical equation is as below.\n\nPCl5 + 4 H2O –> H3PO4 + 5 HCl"
http://intranet.math.vt.edu/netmaps/dynamic/deg21/deg21port99_Main.output
"These Thurston maps are NET maps for every choice of translation term. They are primitive and have degree 21. PURE MODULAR GROUP HURWITZ EQUIVALENCE CLASSES FOR TRANSLATIONS {0} {lambda1} {lambda2} {lambda1+lambda2} Since no Thurston multiplier is 1, this modular group Hurwitz class contains only finitely many Thurston equivalence classes. The number of pure modular group Hurwitz classes in this modular group Hurwitz class is 4. ALL THURSTON MULTIPLIERS c/d IN UNREDUCED FORM 0/21, 2/7, 2/3, 2/1, 4/1, 6/1, 8/1 EXCLUDED INTERVALS FOR THE HALF-SPACE COMPUTATION (-infinity,0.000000) ( 0.000000,infinity) The half-space computation does not determine rationality. EXCLUDED INTERVALS FOR JUST THE SUPPLEMENTAL HALF-SPACE COMPUTATION INTERVAL COMPUTED FOR HST OR EXTENDED HST (-0.027557,0.024822) 0/1 EXTENDED HST The supplemental half-space computation shows that these NET maps are rational. SLOPE FUNCTION INFORMATION There are no slope function fixed points. Number of excluded intervals computed by the fixed point finder: 6944 No nontrivial cycles were found. The slope function maps some slope to the nonslope. The slope function orbit of every slope p/q with |p| <= 50 and |q| <= 50 ends in the nonslope. If the slope function maps slope p/q to slope p'/q', then |p'| <= |p| for every slope p/q with |p| <= 50 and |q| <= 50. If the slope function maps slope p/q to slope p'/q', then |q'| <= |q| for every slope p/q with |p| <= 50 and |q| <= 50. 
FUNDAMENTAL GROUP WREATH RECURSIONS

When the translation term of the affine map is 0:

NewSphereMachine(
"a=<1,b,b,b,b,b,b,1,1,1,1,1,1,1,1,b^-1,b^-1,b^-1,b^-1,b^-1,b^-1>(2,21)(3,20)(4,19)(5,18)(6,17)(7,16)(8,15)(9,14)(10,13)(11,12)",
"b=(1,21)(2,20)(3,19)(4,18)(5,17)(6,16)(7,15)(8,14)(9,13)(10,12)",
"c=<1,a^-1*c^-1,c^-1,b*c^-1,b*c^-1,b*c^-1,b*c^-1,c^-1,c^-1,c^-1,c^-1,c,c,c,c,c*b^-1,c*b^-1,c*b^-1,c*b^-1,c,c*a>(2,21)(3,20)(4,19)(5,18)(6,17)(7,16)(8,15)(9,14)(10,13)(11,12)",
"d=(1,2)(3,21)(4,20)(5,19)(6,18)(7,17)(8,16)(9,15)(10,14)(11,13)",
"a*b*c*d");

When the translation term of the affine map is lambda1:

NewSphereMachine(
"a=(1,21)(2,20)(3,19)(4,18)(5,17)(6,16)(7,15)(8,14)(9,13)(10,12)",
"b=(1,20)(2,19)(3,18)(4,17)(5,16)(6,15)(7,14)(8,13)(9,12)(10,11)",
"c=(1,21)(2,20)(3,19)(4,18)(5,17)(6,16)(7,15)(8,14)(9,13)(10,12)",
"d=<1,a^-1*c^-1,c^-1,b*c^-1,b*c^-1,b*c^-1,c^-1,c^-1,c^-1,c^-1,c^-1,c,c,c,c,c,c*b^-1,c*b^-1,c*b^-1,c,c*a>(2,21)(3,20)(4,19)(5,18)(6,17)(7,16)(8,15)(9,14)(10,13)(11,12)",
"a*b*c*d");

When the translation term of the affine map is lambda2:

NewSphereMachine(
"a=(1,21)(2,20)(3,19)(4,18)(5,17)(6,16)(7,15)(8,14)(9,13)(10,12)",
"b=<1,a^-1*c^-1,c^-1,b*c^-1,b*c^-1,b*c^-1,b*c^-1,c^-1,c^-1,c^-1,c^-1,c,c,c,c,c*b^-1,c*b^-1,c*b^-1,c*b^-1,c,c*a>(2,21)(3,20)(4,19)(5,18)(6,17)(7,16)(8,15)(9,14)(10,13)(11,12)",
"c=(1,21)(2,20)(3,19)(4,18)(5,17)(6,16)(7,15)(8,14)(9,13)(10,12)",
"d=(1,20)(2,19)(3,18)(4,17)(5,16)(6,15)(7,14)(8,13)(9,12)(10,11)",
"a*b*c*d");

When the translation term of the affine map is lambda1+lambda2:

NewSphereMachine(
"a=(1,20)(2,19)(3,18)(4,17)(5,16)(6,15)(7,14)(8,13)(9,12)(10,11)",
"b=(1,21)(2,20)(3,19)(4,18)(5,17)(6,16)(7,15)(8,14)(9,13)(10,12)",
"c=(1,20)(2,19)(3,18)(4,17)(5,16)(6,15)(7,14)(8,13)(9,12)(10,11)",
"d=(1,19)(2,18)(3,17)(4,16)(5,15)(6,14)(7,13)(8,12)(9,11)(20,21)",
"a*b*c*d");
https://www.yourdictionary.com/wave-equation
"# Wave-equation meaning\n\nA differential or partial differential equation used to represent wave motion.\nnoun\nThe fundamental equation of wave mechanics.\nnoun\nA partial differential equation that describes the shape and movement of waves, given a set of boundary conditions (such as the initial shape of the wave, or the evolution of a force affecting the wave).\nThe fundamental equation of wave mechanics."
https://www.intechopen.com/books/digital-systems/applications-of-general-regression-neural-networks-in-dynamic-systems
"Open access peer-reviewed chapter\n\n# Applications of General Regression Neural Networks in Dynamic Systems\n\nBy Ahmad Jobran Al-Mahasneh, Sreenatha Anavatti, Matthew Garratt and Mahardhika Pratama\n\nSubmitted: June 13th 2018Reviewed: July 12th 2018Published: November 5th 2018\n\nDOI: 10.5772/intechopen.80258\n\n## Abstract\n\nNowadays, computational intelligence (CI) receives much attention in academic and industry due to a plethora of possible applications. CI includes fuzzy logic (FL), evolutionary algorithms (EA), expert systems (ES) and artificial neural networks (ANN). Many CI components have applications in modeling and control of dynamic systems. FL mimics the human reasoning by converting linguistic variables into a set of rules. EA are metaheuristic population-based algorithms which use evolutionary operations such as mutation, crossover, and selection to find an optimal solution for a given problem. ES are programmed based on an expert knowledge to make informed decisions in complex tasks. ANN models how the neurons are connected in animal nervous systems. ANN have learning abilities and they are trained using data to make intelligent decisions. Since ANN have universal approximation abilities, they can be used to solve regression, classification, and forecasting problems. ANNs are made of interconnected layers where every layer is made of neurons and these neurons have connections with other neurons. These layers consist of an input layer, hidden layer/layers, and an output layer.\n\n### Keywords\n\n• applications\n• general regression\n• neural networks\n• dynamic systems\n\n## 1. Introduction\n\nNowadays, computational intelligence (CI) receives much attention in academic and industry due to a plethora of possible applications. CI includes fuzzy logic (FL), evolutionary algorithms (EA), expert systems (ES), and artificial neural networks (ANN). Many CI components have applications in modeling and control of dynamic systems. 
FL mimics human reasoning by converting linguistic variables into a set of rules. EA are metaheuristic population-based algorithms which use evolutionary operations such as mutation, crossover, and selection to find an optimal solution for a given problem. ES are programmed based on expert knowledge to make informed decisions in complex tasks. ANN model how neurons are connected in animal nervous systems. ANN have learning abilities and are trained on data to make intelligent decisions. Since ANN have universal approximation abilities, they can be used to solve regression, classification, and forecasting problems. ANN are made of interconnected layers, where every layer is made of neurons and these neurons have connections to other neurons. These layers consist of an input layer, hidden layer/layers, and an output layer. ANN have two major types, as shown in Figure 1: feed-forward neural networks (FFNN) and recurrent neural networks (RNN). In a FFNN, the data can only flow from the input to the hidden layer, while in a RNN, the data can flow in any direction. The output of a single-hidden-layer FFNN can be written as

$$Y = W_{HO}\, h(x W_{IH} + b_I) + b_O \tag{1}$$

where $Y$ is the network output, $W_{HO}$ is the hidden-output layers weights matrix, $h$ is the hidden layer activation function, $x$ is the input vector, $W_{IH}$ is the input-hidden layers weights matrix, $b_I$ is the input layer bias vector, and $b_O$ is the hidden layer bias vector.

The output of a single-hidden-layer RNN with a recurrent hidden layer can be written as

$$Y = W_{HO}\, h(x W_{IH} + h_{t-1} W_{HH} + b_I) + b_O \tag{2}$$

The training of neural networks involves modifying the network parameters to reduce a given error function. Gradient descent (GD) [2, 3] is the most common ANN training method:

$$\theta_{new} = \theta_{old} - \lambda \frac{\partial E}{\partial \theta} \tag{3}$$

where $\theta$ are the network parameters, $\lambda$ is the learning rate, and $E$ is the error function:

$$E = \frac{1}{N}\sum_{i=1}^{N} (y - t)^2 \tag{4}$$

where $N$ is the number of samples, $y$ is the network output, and $t$ is the network target.

## 2. General regression neural network (GRNN)
The general regression neural network (GRNN) is a single-pass neural network which uses a Gaussian activation function in the hidden layer. GRNN consists of input, hidden, summation, and division layers.

The regression of the random variable $y$ on the observed values $X$ of random variable $x$ can be found using

$$E[y|X] = \frac{\int_{-\infty}^{\infty} y\, f(X,y)\, dy}{\int_{-\infty}^{\infty} f(X,y)\, dy} \tag{5}$$

where $f(X,y)$ is a known joint continuous probability density function.

When $f(X,y)$ is unknown, it should be estimated from a set of observations of $x$ and $y$. $f(X,y)$ can be estimated using the nonparametric consistent estimator suggested by Parzen as follows:

$$\hat{f}(X,Y) = \frac{1}{(2\pi)^{(p+1)/2}\,\sigma^{p+1}} \cdot \frac{1}{n} \sum_{i=1}^{n} e^{-\frac{(X-X_i)^T(X-X_i)}{2\sigma^2}}\, e^{-\frac{(Y-Y_i)^2}{2\sigma^2}} \tag{6}$$

where $n$ is the number of observations, $p$ is the dimension of the vector variable $x$, and $\sigma$ is the smoothing factor.

Substituting (6) into (5) leads to

$$\hat{Y}(X) = \frac{\sum_{i=1}^{n} e^{-\frac{(X-X_i)^T(X-X_i)}{2\sigma^2}} \int_{-\infty}^{\infty} y\, e^{-\frac{(y-Y_i)^2}{2\sigma^2}}\, dy}{\sum_{i=1}^{n} e^{-\frac{(X-X_i)^T(X-X_i)}{2\sigma^2}} \int_{-\infty}^{\infty} e^{-\frac{(y-Y_i)^2}{2\sigma^2}}\, dy} \tag{7}$$

After solving the integration, the following results:

$$\hat{Y}(X) = \frac{\sum_{i=1}^{n} Y_i\, e^{-\frac{(X-X_i)^T(X-X_i)}{2\sigma^2}}}{\sum_{i=1}^{n} e^{-\frac{(X-X_i)^T(X-X_i)}{2\sigma^2}}} \tag{8}$$

### 2.1. Previous studies
GRNN has been used in different applications related to modeling, system identification, prediction, and control of dynamic systems, including: feedback linearization control, HVAC process identification and control, modeling and monitoring of batch processes, cooling load prediction for buildings, fault diagnosis of a building's air handling unit, intelligent control, optimal control for variable-speed wind generation systems, annual power load forecasting, vehicle sideslip angle estimation, fault diagnosis for methane sensors, fault detection of excavator hydraulic systems, detection of time-varying inter-turn short circuits in squirrel cage induction machines, system identification of the nonlinear rotorcraft heave mode, and modeling of traveling wave ultrasonic motors.

Some significant modifications of GRNN include using fuzzy c-means clustering to cluster the input data of GRNN, a modified GRNN which uses different types of Parzen estimators to estimate the density function of the regression, density-driven GRNN combining GRNN, density-dependent kernels and regularization for function approximation, GRNN to model time-varying systems, and adapting GRNN for the modeling of dynamic plants using different adaptation approaches, including modifying the training targets, adding new patterns, and dynamic initialization of σ.

### 2.2. GRNN training algorithm

GRNN training is rather simple. The input weights are the training inputs transposed, and the output weights are the training targets. Since GRNN is an associative memory, after training the number of hidden neurons is equal to the number of training samples. However, this training procedure is not efficient if there are many training samples, so one of the suggested solutions is to use a data dimensionality reduction technique such as clustering or principal component analysis (PCA).
One of the novel solutions to data dimensionality reduction is using an error-based algorithm to grow GRNN, as explained in Algorithm 1. The algorithm checks whether an input is required to be included in the training, based on the prediction error before training GRNN with that input. If the prediction error without including that input is more than a certain level, then GRNN should be trained with it.

#### 2.2.1. Reducing data dimensionality using clustering

Clustering techniques can be used to reduce the data dimensionality before feeding it to the GRNN. k-means clustering is one of the popular clustering techniques. The k-means clustering algorithm is explained in Algorithm 2. Results of comparing GRNN performance before and after applying the k-means algorithm are shown in Table 1. Although the training and testing errors increase, there are large reductions in the network size.

| Dataset | Training error after/before k-means (MSE) | Testing error after/before k-means (MSE) | Size reduction % |
| --- | --- | --- | --- |
| Abalone | 0.0177/0.002 | 0.0141/0.006 | 99.76 |
| Building energy | 0.047/3.44e-05 | 0.0165/0.023 | 99.76 |
| Chemical sensor | 0.241/0.016 | 0.328/0.034 | 97.99 |
| Cholesterol | 0.050/4.605e-05 | 0.030/0.009 | 92 |

### Table 1.

Using GRNN with k-means clustering.

The aim of the algorithm is to minimize the distance objective function:

$$J = \sum_{i=1}^{N}\sum_{j=1}^{M} \lVert x_i - c_j \rVert^2 \tag{9}$$

#### 2.2.2. Reducing data dimensionality using PCA

PCA can be used to reduce a large dataset into a smaller dataset which still carries most of the important information from the large dataset. In a mathematical sense, PCA converts a number of correlated variables into a number of uncorrelated variables.
The PCA algorithm is explained in Algorithm 3.

| Dataset | Training error after/before PCA (MSE) | Testing error after/before PCA (MSE) | Size reduction % |
| --- | --- | --- | --- |
| Abalone | 0.197/0.002 | 0.188/0.006 | 99.8 |
| Building energy | 0.061/3.44e-05 | 0.049/0.023 | 99.6 |
| Chemical sensor | 0.241/0.016 | 0.328/0.034 | 98.3 |
| Cholesterol | 0.026/4.605e-05 | 0.028/0.009 | 92 |

### Table 2.

Using GRNN with PCA.

### 2.3. GRNN output algorithm

After GRNN is trained, the output of GRNN can be calculated using

$$D = (X - W_i)^T (X - W_i) \tag{10}$$

$$\hat{Y} = \frac{\sum_{i=1}^{N} W_o\, e^{-D/2\sigma^2}}{\sum_{i=1}^{N} e^{-D/2\sigma^2}} \tag{11}$$

where $D$ is the squared Euclidean distance between the input $X$ and the input weights $W_i$, $W_o$ is the output weight, and $\sigma$ is the smoothing factor of the radial basis function.

GRNN output calculation is explained in Algorithm 4.

Other distance measures can also be used, such as the Manhattan (city block) distance, so (10) becomes

$$D = \sum \lvert X - W_i \rvert \tag{12}$$

## 3. Estimation of the GRNN smoothing parameter (σ)

Since σ is the only free parameter in GRNN and suitable values of it improve GRNN accuracy, it should be estimated. Since there is no optimal analytical solution for finding σ, numerical approaches can be used to estimate it. The holdout method is one of the suggested methods. In this method, samples are randomly removed from the training dataset; then, using the GRNN with a fixed σ, the output is calculated for the removed samples, and the error is calculated between the network outputs and the sample targets. This procedure is repeated for different σ values. The smoothing parameter σ with the lowest sum of errors is selected as the best σ. The holdout algorithm is explained in Algorithm 5.

Other search and optimization methods can also be used to find σ. For instance, genetic algorithms (GA) and differential evolution (DE) are suitable options. Algorithm 6 explains how to find σ using DE or GA. Also, the results of using DE and GA are depicted in Figure 2.
Both GA and DE can find a good approximation of σ within 100 iterations; however, DE converges faster since it is a vectorized algorithm.
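Equations (10)-(11) and the holdout search of Algorithm 5 can be sketched together in a few lines of NumPy (an illustrative reimplementation; the function names and the toy test function are mine):

```python
import numpy as np

# Sketch of Eqs. (10)-(11) plus the holdout search of Algorithm 5.
def grnn_predict(X_train, y_train, X, sigma):
    # squared Euclidean distance between each query and each stored pattern
    d2 = ((X[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

def holdout_sigma(X, y, sigmas):
    # leave one sample out, predict it, keep the sigma with the lowest error sum
    errs = []
    for s in sigmas:
        e = 0.0
        for i in range(len(X)):
            keep = np.arange(len(X)) != i
            e += (grnn_predict(X[keep], y[keep], X[i:i + 1], s)[0] - y[i]) ** 2
        errs.append(e)
    return sigmas[int(np.argmin(errs))]

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])
best = holdout_sigma(X, y, [0.05, 0.1, 0.3, 1.0, 3.0])
pred = grnn_predict(X, y, np.array([[0.5]]), best)
print(best, pred)  # a small sigma wins; the prediction is near sin(0.5)
```

Note that "training" is just storing the samples, so the whole cost of the holdout search is in the repeated predictions.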
"Figure 2.DE and GA used to estimate GRNN σ. (a) Estimation of σ using DE (b) MSE evolution when using DE to estimate s (c) Estimation of σ using GA (d) MSE evolution when using GA to estimate σ.\n\n## 4. GRNN vs. back-propagation neural networks (BPNN)\n\nThere are many differences between GRNN and BPNN. Firstly, GRNN is single-pass learning algorithm, while BPNN needs two passes: forward and backward pass. This means that GRNN consumes significantly less training time. Secondly, the only free parameter in GRNN is the smoothing parameter σ, while in BPNN more parameters are required such as weights, biases, and learning rates. This also indicates that GRNN quick learning abilities and its suitability for online systems or for system where minimal computations are required. Also, another difference is that since GRNN is an autoassociative memory network, it will store all the distinct input/output samples while BPNN has a limited predefined size. This size growth issue is resolved by either using clustering or PCA (read Sections 2.21 and 2.2.2). Finally, GRNN is based on the general regression theory, while BPNN is based on gradient-descent iterative optimization method.\n\nTo show the advantages of GRNN over BPNN, a comparison is held using standard regression datasets built inside MATLAB software . For all the datasets, they are divided 70% for training and 30% for testing. After training the network with the 70% training data, the output of the neural network is found using the remaining testing data. The most notable advantage of GRNN over BPNN is the shorter training time which confirms its selection for dynamic systems modeling and control. Also, GRNN has less testing error which means it has better generalization abilities than BPNN. 
The comparison results are summarized in Table 3.

| Type | Dataset | Training time (sec) | Training error (MSE) | Testing error (MSE) |
| --- | --- | --- | --- | --- |
| GRNN | Abalone | 0.621 | 0.342 | 0.384 |
| BPNN | Abalone | 1.323 | 0.436 | 0.395 |
| GRNN | Building energy | 0.630 | 0.0731 | 0.628 |
| BPNN | Building energy | 1.880 | 0.1152 | 0.631 |
| GRNN | Chemical sensor | 0.701 | 0.888 | 1.316 |
| BPNN | Chemical sensor | 1.473 | 0.228 | 1.584 |
| GRNN | Cholesterol | 0.801 | 0.037 | 0.172 |
| BPNN | Cholesterol | 2.099 | 0.061 | 0.215 |

### Table 3.

GRNN vs. BPNN training and testing performance.

## 5. GRNN in identification of dynamic systems

System identification is the process of building a model of an unknown/partially known dynamic system based on observed input/output data. Gray-box and black-box identification are two common approaches to system identification. In the gray-box approach, a nominal model of a dynamic system is known, but its exact parameters are unknown, so an identifier is used to find these parameters. In the black-box approach, the identification is based only on the data. Examples of black-box identification include fuzzy logic (FL) and neural networks (NN). GRNN can be used to identify dynamic systems quickly and accurately. There are two ways to use GRNN for system identification: the batch mode (off-line training) and the sequential mode (online training). In the batch mode, all the observed data is available before the system identification, so GRNN can be trained with a big chunk of the data, while in the sequential mode only a few data samples are available for identification.

### 5.1. GRNN identification in batch training mode

In the batch mode, the observed data should be divided into training, validation, and testing sets. GRNN is fed with all the training data to identify the system. Then, in the validation stage, the network is tested with different, usually randomly selected, data, and the error is recorded for every validation test. The validation process is then repeated several times; usually 10 times is standard.
Then the average validation error is found over all the validation tests. This validation procedure is called k-fold cross validation, a standard technique in machine learning (ML) applications. To test the generalization ability of an identified model, a new dataset, called the testing dataset, is used. Based on the model performance in the testing stage, one can decide whether the model is suitable or not.

#### 5.1.1. Batch training GRNN to identify hexacopter attitude dynamics

In this example, GRNN is used to identify the attitude (pitch/roll/yaw) dynamics of a hexacopter drone based on real flight test data in the free flight mode. The data consist of three inputs, the rolling, pitching, and yawing control values, and three outputs, the rolling, pitching, and yawing rates. The dataset contains 6691 data samples with a sample rate of 0.01 seconds. A total of 4683 samples are used to train GRNN in the batch mode, and the remaining 2008 samples are used for testing. The results of hexacopter attitude identification are shown in Figure 3(a-c). The results are accurate with very low error: the MSE is 0.001139 in the training stage and 0.00258 in the testing stage. Also, the training time was only 0.720 seconds.
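Since the flight logs are not reproduced here, the k-fold procedure can be illustrated on synthetic plant data (the toy plant and all names below are assumptions of this sketch):

```python
import numpy as np

# k-fold cross validation (k = 10) for a GRNN identifier; a toy first-order
# plant stands in for the flight logs.
def grnn_predict(Xtr, ytr, X, sigma=0.2):
    d2 = ((X[:, None, :] - Xtr[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ ytr) / w.sum(axis=1)

def kfold_mse(X, y, k=10):
    folds = np.array_split(np.arange(len(X)), k)
    errs = []
    for f in folds:
        mask = np.ones(len(X), dtype=bool)
        mask[f] = False                      # hold this fold out
        pred = grnn_predict(X[mask], y[mask], X[f])
        errs.append(np.mean((pred - y[f]) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, 300)
yk = np.zeros(301)
for i in range(300):
    yk[i + 1] = 0.8 * np.sin(yk[i]) + 0.5 * u[i]   # toy plant
X = np.column_stack([yk[:-1], u])                  # inputs: y(k), u(k)
t = yk[1:]                                         # target: y(k+1)
mse = kfold_mse(X, t)
print(mse)  # small average validation error
```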
"Figure 3.Attitude identification of hexacopter in batch training: (a) rolling rate identification, (b) pitching rate identification, and (c) yawing rate identification.\n\n### 5.2. GRNN identification in sequential training mode\n\nIn sequential training, the data flow once at a time which makes using the batch training procedures impossible. So GRNN should be able to find the system model from only the current and past measurements. So it is a prediction problem. Since GRNN converges to a regression surface even with a few data samples and since it is accurate and quick, it can be used in the online dynamic systems identification.\n\n#### 5.2.1. Sequential training GRNN to identify hexacopter attitude dynamics\n\nTo use GRNN in sequential mode, it is preferred to use the delayed output of the plant as an input in addition to the current input as shown in Figure 4. The same data which was used for batch mode is used in the sequential training. The inputs to GRNN are the control values of rolling, pitching, and yawing and the delayed rolling, pitching, and yawing rates. The results of using GRNN in the sequential training mode are shown in Figure 5(a–c). The results of sequential training are more accurate than the results in batch training.",
"Figure 4.Sequential training GRNN.",
"Figure 5.Attitude identification of hexacopter in sequential training: (a) rolling rate identification, (b) pitching rate identification, and (c) yawing rate identification.\n\n## 6. GRNN in control of dynamic systems\n\nThe aim of adding a closed-loop controller to the dynamic systems is either to reach the desired performance or stabilize the unstable system. GRNN can be used in controlling dynamic systems as a predictive or feedback controller. GRNN in control systems can be used as either supervised or unsupervised. When GRNN is trained as a predictive then the controller input and output data are known, so this is a supervised problem. On the other hand, if GRNN is utilized as a feedback controller (see Figure 6) without being pretrained, only the controller input data is known so GRNN have to find the suitable control signal u.\n\n### 6.1. GRNN as predictive controller\n\nTo utilize GRNN as a predictive controller, it should be trained with input-output data from another controller. For example, training a GRNN with a proportional integral derivative (PID) controller input/output data as shown in Figure 7. Then the trained GRNN can be used as a controller.\n\n#### 6.1.1. Example 1: GRNN as predictive controller\n\nIf we have a discrete time system Liu described as\n\nyk+1=0.8sinyk+15ukE13\n\nThe desired reference is ydk=2sin0.1πt.\n\nThe perfect control law can be written as\n\nuk=ydk+1150.8sinyk15E14\n\nTo train GRNN as a predictive controller, the system described in (13) and (14) is simulated for 50 seconds. Then the controller output uand the plant output ywere stored. GRNN is trained with the plant output as input and the controller output as output. For any time step the plant output is fed to GRNN, and the controller output uis estimated. The estimated controller output by GRNN and the perfect controller output are almost identical as shown in Figure 8. 
Also, the tracking performance after using GRNN as a predictive controller is very accurate, as shown in Figure 9.\n\n### 6.2. GRNN as an adaptive estimator controller\n\nSince GRNN has robust approximation abilities, it can be used to approximate the dynamics of a given system to find the control law, especially if the system is partially known or unknown.\n\nAssume there is a nonlinear dynamic system written as\n\nẋ = f(x,t) + bu + d    (15)\n\nwhere ẋ is the derivative of the states, f(x,t) is a function of the states, b is the input gain, and d is the external disturbance.\n\nThe perfect control law can be written as\n\nu = (1/b)(ẋ - f(x,t) - d)    (16)\n\nIf f(x,t) is unknown, then the control law in (16) cannot be found; hence, the alternative is to use GRNN to estimate the unknown function f(x,t). To derive the update law of the GRNN weights, let us define the objective function as the MSE error function:\n\nE = (1/2)(ŷ - y)²    (17)\n\nwhere ŷ is the estimate produced by GRNN and y is the optimal value of f(x,t). To derive the update law, the error should be minimized with respect to the GRNN weights W:\n\n∂E/∂W = (ŴH - y)H    (18)\n\nwhere Ŵ is the current hidden-to-output weight vector of GRNN and H is the hidden-layer output, so the update law of the GRNN weights will be\n\nW(i+1) = W(i) + H(y - ŴH)    (19)\n\n### 6.3. Example 2: using GRNN to approximate the unknown dynamics\n\nLet us consider the same discrete-time system as in example 1:\n\ny(k+1) = f(k) + 15 u(k)    (20)\n\nThe desired reference is y_d(k) = 2 sin(0.1πt),\n\nwhere f(k) is an unknown nonlinear function.\n\nThe perfect control law can be written as\n\nu(k) = -f(k)/15 + y_d(k+1)/15    (21)\n\nGRNN is used to estimate the unknown function f(k). By applying the update law in (19), f(k) is estimated with acceptable accuracy, as shown in Figure 10. The MSE between the ideal and the estimated f(k) is 0.0033. The accurate tracking performance of the controller is also shown in Figure 11.\n\n### 6.4. GRNN as an adaptive optimal controller\n\nGRNN has learning abilities, which means it is suitable to be an adaptive intelligent controller. 
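The gradient update in (19) can be exercised on its own: fix a layer of Gaussian hidden units H(x), then adjust the output weights online with Ŵ ← Ŵ + ηH(y − ŴH). The learning rate η, the unit centers, and the target function below are all illustrative assumptions, not the chapter's settings:

```python
import math, random

centers = [i * 0.5 for i in range(9)]  # assumed hidden-unit centers on [0, 4]

def hidden(x, spread=0.5):
    """Gaussian hidden-layer outputs H(x)."""
    return [math.exp(-((x - c) ** 2) / (2 * spread ** 2)) for c in centers]

def f_true(x):
    """Stand-in for the unknown function to be estimated."""
    return 0.8 * math.sin(x)

random.seed(1)
W = [0.0] * len(centers)
eta = 0.1  # assumed learning rate

for _ in range(3000):  # online updates following the form of eq. (19)
    x = random.uniform(0.0, 4.0)
    H = hidden(x)
    err = f_true(x) - sum(w * h for w, h in zip(W, H))
    W = [w + eta * err * h for w, h in zip(W, H)]

# After training, the estimate ŴH(x) should sit close to f(x).
est = sum(w * h for w, h in zip(W, hidden(2.0)))
print(abs(est - f_true(2.0)))
```

This is the same mechanism Example 2 relies on: the controller never needs f(k) itself, only the running estimate ŴH.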
Rather than approximating the unknown function in the control law (16), one can use GRNN to approximate the whole controller output, as shown in Figure 12. The same update law as in (19) can be used to update the GRNN weights to approximate the controller output u.\n\n#### 6.4.1. Example 3: using GRNN as an adaptive controller\n\nLet us consider the same discrete-time system as in (13):\n\ny(k+1) = 0.8 sin(y(k)) + 15 u(k)\n\nwith the same desired reference y_d(k) = 2 sin(0.1πt), but in this case GRNN is used to estimate the full controller output u, as shown in Figure 14; the tracking performance is shown in Figure 13.\n\n#### 6.4.2. Example 4: using GRNN as an adaptive controller\n\nLet us use GRNN to control a more complex discrete plant described as\n\ny(k+1) = 0.2 cos(0.8(y(k) + y(k-1))) + 0.4 sin(0.8(y(k-1) + y(k)) + 2u(k) + u(k-1)) + 0.1(9 + y(k) + y(k-1)) + (2u(k) + u(k-1))/(1 + cos(y(k)))    (22)\n\nThe desired reference in this case is\n\ny_d(k) = 0.8 + 0.05(sin(πk/50) + sin(πk/100) + sin(πk/150))\n\nThe tracking performance of the adaptive GRNN is shown in Figure 15.\n\n## 7. MATLAB examples\n\nIn this section, GRNN MATLAB code examples are provided.\n\n### 7.1. Basic GRNN Commands in MATLAB\n\nIn this example, GRNN is trained to find the square of a given number.\n\nTo design a GRNN in MATLAB:\n\nFirstly, create the inputs and the targets and specify the spread parameter.\n\nSecondly, create the GRNN.\n\nTo view the GRNN after creating it: the results are shown in Figure 16.\n\nTo find the GRNN output based on a given input: the result is 17.\n\n### 7.2. The holdout method to find σ\n\n© 2018 The Author(s). Licensee IntechOpen. 
This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.\n\n## How to cite and reference\n\n### Cite this chapter\n\nAhmad Jobran Al-Mahasneh, Sreenatha Anavatti, Matthew Garratt and Mahardhika Pratama (November 5th 2018). Applications of General Regression Neural Networks in Dynamic Systems, Digital Systems, Vahid Asadpour, IntechOpen, DOI: 10.5772/intechopen.80258."
]
| [
null,
"https://www.intechopen.com/media/chapter/63920/media/F2.png",
null,
"https://www.intechopen.com/media/chapter/63920/media/F3.png",
null,
"https://www.intechopen.com/media/chapter/63920/media/F4.png",
null,
"https://www.intechopen.com/media/chapter/63920/media/F5.png",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.8897825,"math_prob":0.89436203,"size":15985,"snap":"2020-45-2020-50","text_gpt3_token_len":3512,"char_repetition_ratio":0.13891496,"word_repetition_ratio":0.1414791,"special_character_ratio":0.2234595,"punctuation_ratio":0.11715621,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98282605,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-04T15:03:47Z\",\"WARC-Record-ID\":\"<urn:uuid:eba2b6a6-c17b-4caf-a83b-de4c209c165e>\",\"Content-Length\":\"475365\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:62c91a34-8715-49d3-a8cb-1d2c7e91f9bd>\",\"WARC-Concurrent-To\":\"<urn:uuid:c3ef3d3e-1a6c-419f-8d85-3451903aca9b>\",\"WARC-IP-Address\":\"35.171.73.43\",\"WARC-Target-URI\":\"https://www.intechopen.com/books/digital-systems/applications-of-general-regression-neural-networks-in-dynamic-systems\",\"WARC-Payload-Digest\":\"sha1:V2LWG4IFPQQNK4UQKSZ6CJHW4CSD2FTE\",\"WARC-Block-Digest\":\"sha1:GL26FE4KCL3NQMX2YN7U25I3SUM4KIKE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141737946.86_warc_CC-MAIN-20201204131750-20201204161750-00366.warc.gz\"}"} |
https://hilbert.math.wisc.edu/wiki/index.php?title=Problem_Solver%27s_Toolbox&diff=prev&oldid=18214 | [
"# Difference between revisions of \"Problem Solver's Toolbox\"\n\nThe goal of this page is to collect simple problem-solving strategies and tools. We hope that students interested in the Wisconsin Math Talent Search will find the described ideas useful. This page and the discussed topics can be used as a starting point for future exploration.\n\n## General ideas\n\nThere is no universal recipe for math problems that would work every time; that's what makes math fun! There are, however, a number of general strategies that could be useful in most cases; here is a short list of them. (Many of these ideas were popularized by the Hungarian-born mathematician George Pólya in his book How to Solve It.)\n\n• Make sure that you understand the problem.\n• If possible, draw a figure.\n• Can you connect the problem to a problem you have solved before?\n• If you have to show something for all numbers (or a large number) then try to check the statement for small values first.\n• Can you solve the problem in a special case first? Can you solve a modified version of the problem first?\n• Is there some symmetry in the problem that you can exploit?\n• Is it possible to work backward?\n• Does it help to consider an extreme case of the problem?\n• Is it possible to generalize the problem? (Sometimes the generalized problem is easier to solve.)\n\n## Modular arithmetic\n\nWhen we have to divide two integers, they don't always divide evenly, and there is a quotient and a remainder. For example, when we divide 10 by 3 we get a remainder of 1. It turns out that these remainders behave very well under addition, subtraction, and multiplication. We say two numbers are the same \"modulo $m$\" if they have the same remainder when divided by $m$. 
If $a$ and $x$ are the same modulo $m$, and $b$ and $y$ are the same modulo $m$, then $a+b$ and $x+y$ are the same modulo $m$, and similarly for subtraction and multiplication.\n\nFor example, 5 is the same as 1 modulo 4, and hence $5\cdot 5 \cdot 5 \cdot 5=5^4$ is the same as $1\cdot 1\cdot 1\cdot 1=1$ modulo $4$. In the same way you can show that $5^{1000}$ has a remainder of 1 when we divide it by 4.\n\nModular arithmetic often makes calculation much simpler. For example, see 2016-17 Set #2 Problem 3.\n\nSee Art of Problem Solving's introduction to modular arithmetic for more information.\n\n## Mathematical induction\n\nSuppose that you want to prove a statement for all positive integers, for example that for each positive integer $n$ the following is true: $1\cdot 2+2\cdot 3+3\cdot 4+\cdots+n\cdot (n+1)=\frac{n(n+1)(n+2)}{3}.\qquad\qquad(*)$\n\nMathematical induction provides a tool for doing this. You need to show the following two things:\n\n1. (Base case) The statement is true for $n=1$.\n2. (Induction step) If the statement is true for $n$ then it must be true for $n+1$ as well.\n\nIf we can show both of these parts, then it follows that the statement is true for all positive integers $n$. Why? The first part (the base case) shows that the statement is true for $n=1$. But then by the second part (the induction step) the statement must be true for $n=2$ as well. Using the second part again and again, we see that the statement is true for $n=3, 4, 5, \cdots$, and repeating this sufficiently many times we can prove that the statement is true for any fixed value of $n$.\n\nOften the idea of induction is demonstrated as a version of the 'domino effect'. Imagine that you have an infinite row of dominos numbered with the positive integers, where if the $n$th domino falls then the next one will fall as well (this is the induction step). 
If we make the first domino fall (this is the base case) then eventually all other dominos will fall as well.\n\n• Try to use induction to show the identity $(*)$ above for all positive integers $n$.\n• You can also use induction to show a statement for all integers $n\ge 5$. Then for your base case you have to show that the statement is true for $n=5$. (The induction step is the same.)\n\nSee this page from Math Is Fun for some simple applications of induction.\n\n## Proof by contradiction\n\nThis is a commonly used problem-solving method. Suppose that you have to prove a certain statement. Now pretend that the statement is not true and try to derive (as a consequence) a false statement. This false statement shows that your assumption about the original statement was incorrect: thus the original statement must be true.\n\nHere is a simple example: we will prove that the product of three consecutive positive integers cannot be a prime number. Assume the opposite: that means that there is a positive integer $n$ so that $n(n+1)(n+2)$ is a prime. But among three consecutive integers we will always have a multiple of 2, and also a multiple of 3. Thus the product of the three numbers must be divisible by both 2 and 3, and hence $n(n+1)(n+2)$ cannot be a prime. This contradicts our assumption that $n(n+1)(n+2)$ is a prime, which shows that our assumption had to be incorrect.\n\nProof by contradiction can be used for example in 2016-17 Set #1 Problem 4.\n\n## Pigeonhole Principle\n\nThe Pigeonhole Principle is one of the simplest tools in mathematics, but it can be very powerful. Suppose that $n\lt m$ are positive integers, and we have $m$ objects and $n$ boxes. The Pigeonhole Principle states that if we place each of the $m$ objects into one of the $n$ boxes then there must be at least one box with at least two objects in it. 
The statement can be proved by contradiction: if we can find an arrangement of objects so that each box has less than two objects in it, then each box would contain at most one object, and hence we had at most $n$ objects altogether. This is a contradiction, which means that the original statement must be correct.\n\nThe Pigeonhole Principle is often used in the following, more general form. Suppose that $n, m, k$ are positive integers with $n k\lt m$. If we place each of $m$ objects into one of $n$ boxes then there must be at least one box with at least $k+1$ objects in it. Try to prove this version by contradiction.\n\nHere is a simple application: if we roll a die 13 times then there must be a number that appears at least three times. Here each die roll corresponds to an object, and each of the 6 possible outcomes corresponds to a possible box. Since $2\cdot 6\lt 13$, we must have a box with at least $2+1=3$ objects. In other words: there will be a number that appears at least three times.\n\nThe Pigeonhole Principle can be used for example in 2014-15 Set #1 Problem 4.\n\n## Angles in the circle\n\nThe following theorems are often useful when working with geometry problems.",
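Returning briefly to the Pigeonhole Principle: the general form gives an exact threshold, since n·k objects can still be placed with at most k per box, while n·k + 1 cannot. A tiny helper makes the dice count explicit (purely illustrative):

```python
def forcing_threshold(n, k):
    """Smallest number of objects that forces some box (of n boxes) to hold k + 1."""
    return n * k + 1

# 6 faces, forcing some value to appear at least 3 times: 6*2 + 1 = 13 rolls.
print(forcing_threshold(6, 2))  # → 13
```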
null,
"An illustration of Thales' Theorem. O is the center of the circle.\n\nThales' Theorem\n\nSuppose that the distinct points $A, B, C$ are all on a circle, and $AB$ is a diameter of the circle. Then the angle $ACB$ is $90^{\text{o}}$. In other words: the triangle $\triangle ABC$ is a right triangle with hypotenuse $AB$.\n\nThe theorem can be proved with a little bit of 'angle-chasing'. Denote the center of the circle by $O$. Then $AO, BO, CO$ are all radii of the circle, so they have the same length. Thus $\triangle AOC$ and $\triangle BOC$ are both isosceles triangles. Now try labeling the various angles in the picture and you should quickly arrive at a proof. (You can find the worked-out proof at the wiki page of the theorem, but it is more fun if you figure it out on your own!)\n\nThe converse of Thales' theorem states that if $\triangle ABC$ is a right triangle with hypotenuse $AB$ then we can draw a circle with a center that is the midpoint of $AB$ that passes through $A, B, C$.\n\nThe Inscribed Angle Theorem below is a generalization of Thales' Theorem.\n\nThe Inscribed Angle Theorem\n\nSuppose that the distinct points $A, B, C$ are all on a circle and let $O$ be the center of the circle. Then depending on the position of these points we have the following statements:\n\n• If $O$ is on the line $AB$ then $\angle ACB=90^{\text{o}}$. (This is just Thales' theorem again.)\n• If $O$ and $C$ are both on the same side of the line $AB$ then the inscribed angle $\angle ACB$ is half of the central angle $\angle AOB$:\n\n$2 \angle ACB= \angle AOB.$\n\n• If $O$ and $C$ are on the opposite sides of the line $AB$ then the inscribed angle $\angle ACB$ is half of $360^{\text{o}}$ minus the central angle $\angle AOB$:\n\n$2 \angle ACB= 360^{\text{o}}-\angle AOB.$\n\nIf we measure the central angle $\angle AOB$ the 'right way' then we don't need to separate the three cases. 
In the first case the central angle is just $180^{\text{o}}$, and the inscribed angle is exactly half of that. In the third case, if we define the central angle to be $360^{\text{o}}-\angle AOB$ then again we get that the inscribed angle is half of the central angle.\n\nThe theorem can be proved with angle-chasing, using the same idea that was described for Thales' theorem. See the wiki page for the proof (but first try to do it on your own!).\n\nApplications to cyclic quadrilaterals\n\nThe following statements (and their converses) are useful applications of the Inscribed Angle theorem.\n\n1. Suppose that the points $A, B, C, D$ form a cyclic quadrilateral; this means that we can draw a circle going through the four points. $AB$ divides the circle into two arcs. If the points $C$ and $D$ are in the same arc (meaning that they are on the same side of $AB$) then $\angle ACB= \angle ADB.$ The converse of this statement is also true: if $A, B, C, D$ are distinct points, the points $C, D$ are on the same side of the line $AB$ and $\angle ACB= \angle ADB$, then we can draw a circle around $A, B, C, D$; in other words, $ABCD$ is a cyclic quadrilateral.\n\n2. Suppose that $ABCD$ is a cyclic quadrilateral. Then the sum of any two opposite angles is equal to $180^{\text{o}}$. This means that $\angle ABC+\angle CDA= 180^{\text{o}}, \quad \text{and}\quad \angle BCD+\angle DAB= 180^{\text{o}}. \qquad\qquad (**)$\n\nThe converse of the previous statement is also true: suppose that $ABCD$ is a quadrilateral with angles satisfying the equations $(**)$. Then $ABCD$ is a cyclic quadrilateral: we can draw a circle that passes through the four points.\n\nThe Inscribed Angle Theorem and the statements about cyclic quadrilaterals can be used for example in 2015-16 Set #4 Problem 5."
]
| [
null,
"https://hilbert.math.wisc.edu/wiki/images/thumb/Thales_thm.jpg/250px-Thales_thm.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.84690493,"math_prob":0.9979028,"size":12132,"snap":"2021-21-2021-25","text_gpt3_token_len":3236,"char_repetition_ratio":0.17719327,"word_repetition_ratio":0.13198757,"special_character_ratio":0.25939664,"punctuation_ratio":0.07968621,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99996066,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-12T11:26:28Z\",\"WARC-Record-ID\":\"<urn:uuid:bdc1d1c8-33c7-46da-87b0-caf5f9b6e4dd>\",\"Content-Length\":\"36489\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:92bcfa47-20a2-4a9f-b9a3-9bacea08a19f>\",\"WARC-Concurrent-To\":\"<urn:uuid:ae43c059-d197-4f5c-b840-28943727904f>\",\"WARC-IP-Address\":\"144.92.166.29\",\"WARC-Target-URI\":\"https://hilbert.math.wisc.edu/wiki/index.php?title=Problem_Solver%27s_Toolbox&diff=prev&oldid=18214\",\"WARC-Payload-Digest\":\"sha1:AVISWWLM73HYS2AAPWZIVP4PUFL7PYMO\",\"WARC-Block-Digest\":\"sha1:HKNQY2FO5SS7GJO7XG433AA5NLVCDWMW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487582767.0_warc_CC-MAIN-20210612103920-20210612133920-00028.warc.gz\"}"} |
https://www.javajee.com/content/copy-an-array-from-1d-to-2d | [
"Copy an array from 1D to 2D\n\n4 posts / 0 new\nheartin\nCopy an array from 1D to 2D\n\nWrite a program to copy a 1-dimensional array with 12 elements into a 2-dimensional array with 3x4 elements. Print the 2-D array as a matrix.\n\nNote\n\nIf someone has already given an answer for the above problem, try other combinations, like a 1-dimensional array with 8 elements into a 2-dimensional array with 2x4 elements.\n\nHint\n\n1. You can directly initialize a 1-D array and then use a for loop to copy. A 1-D array has only one index, as in arr1[i], and a 2-D array has two indexes, as in arr2[j][k].\n2. To print the 2-D array as a matrix, use two for loops. The outer loop is for rows and the inner one for columns; the inner one will be executed for every row.\n\nALL THE BEST!!!\n\nsneha\nCopying elements from 1D to 2D array\n\nThis program has been rewritten below with the heading 'Modified Program after Review'. Retaining the original program for reference.\n\npackage com.javajee.Utils;\n\npublic class CopyArray {\n\nint[] arr1={1,2,3,4,5,6,7,8,9,10,11,12};\nint rowSize=3;\nint columnSize=4;\nint[][] arr2=new int[rowSize][columnSize];\n\npublic static void main(String[] args) {\nCopyArray CA=new CopyArray();\nCA.copyElems(CA.arr1);\nCA.printElem(CA.arr2);\n\n}\n\npublic void copyElems(int[] A){\nint k=0;\nfor(int i=0;i<rowSize;i++){\nfor(int j=0;j<columnSize;j++){\narr2[i][j]=A[k];\nk++;\n}\n}\n\n}\n\npublic void printElem(int[][] B){\nfor(int i=0;i<rowSize;i++){\nfor(int j=0;j<columnSize;j++){\nSystem.out.print(B[i][j]+\" \");\n}\nSystem.out.println();\n}\n}\n\n}\n\nheartin\nGood, but can be better...\n\nHere arr1 is an instance variable, so do we really have to pass it into the copyElems function? And if so, why not send arr2 too?\n\nIt would be great if you can try to make these methods independent of static/instance variables so that they can be reused in other places.\n\nNOTE:\n\n1. If rewriting, please write a new one with a heading like 'Modified Program after Review'.\n2. 
In the original program, add a note on top that the program has been rewritten.\nsneha\nModified Program after Review\n\npackage com.javajee.Utils;\n\npublic class CopyArray {\n\npublic static void main(String[] args) {\nint[] arr1={1,2,3,4,5,6,7,8,9,10,11,12};\nint[][] arr2=new int[3][4];\ncopyElems(arr1,arr2);\nprintElem(arr2);\n\n}\n\npublic static void copyElems(int[] A,int[][] B){\nint k=0;\nfor(int i=0;i<B.length;i++){\nfor(int j=0;j<B[i].length;j++){\nB[i][j]=A[k];\nk++;\n}\n}\n\n}\n\npublic static void printElem(int[][] B){\nfor(int i=0;i<B.length;i++){\nfor(int j=0;j<B[i].length;j++){\nSystem.out.print(B[i][j]+\" \");\n}\nSystem.out.println();\n}\n}\n\n}"
]
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.6157514,"math_prob":0.93020105,"size":1989,"snap":"2022-05-2022-21","text_gpt3_token_len":605,"char_repetition_ratio":0.11536524,"word_repetition_ratio":0.079422385,"special_character_ratio":0.33936653,"punctuation_ratio":0.20689656,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9809256,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-28T08:27:45Z\",\"WARC-Record-ID\":\"<urn:uuid:a089e1bc-4b83-461b-a4db-a53a688086a1>\",\"Content-Length\":\"63271\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9682118d-a754-4275-b8f6-579d4c1f50cd>\",\"WARC-Concurrent-To\":\"<urn:uuid:f9e3c0c5-aee7-4aca-b86e-8c79930b0701>\",\"WARC-IP-Address\":\"70.32.23.80\",\"WARC-Target-URI\":\"https://www.javajee.com/content/copy-an-array-from-1d-to-2d\",\"WARC-Payload-Digest\":\"sha1:HDJSLKZIJ63LEBCEVHKHKMBPHUETLT5W\",\"WARC-Block-Digest\":\"sha1:Q6QDASYEJNWX3L33H6BFARXCXEPHNE5L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320305423.58_warc_CC-MAIN-20220128074016-20220128104016-00608.warc.gz\"}"} |
https://www.teachoo.com/5225/955/What-is-value-of-sin--cos--tan-at-0--30--45--60---90-degree----Chapter/category/Trignometric-ratios-of-Specific-Angles---Evaluating/ | [
"Trigonometric ratios of Specific Angles - Evaluating\n\nChapter 8 Class 10 Introduction to Trigonometry\n\nWhat is the value of sin 30°?\n\nAnd sin 0°?\n\nHow do we remember them?\n\nLet's learn how. We will discuss the different values of sin, cos, tan, cosec, sec and cot at 0°, 30°, 45°, 60° and 90°, and how to memorise them.\n\nSo, we have to fill this table:",
null,
"## How to find the values?\n\nTo learn the table, we should first know how sin, cos and tan are related.\n\nWe know that\n\n• tan θ = sin θ/cos θ\n• sec θ = 1/cos θ\n• cosec θ = 1/sin θ\n• cot θ = 1/tan θ\n\nNow let us discuss the different values.\n\n### For sin\n\nFor memorising sin 0°, sin 30°, sin 45°, sin 60° and sin 90°\n\nWe should learn it like\n\n1. sin 0° = 0\n2. sin 30° = 1/2\n3. sin 45° = 1/√2\n4. sin 60° = √3/2\n5. sin 90° = 1\n\nSo, our pattern will be like\n\n0, 1/2, 1/√2, √3/2, 1",
null,
"### For cos\n\nFor memorising cos 0°, cos 30°, cos 45°, cos 60° and cos 90°\n\nThe cos values are the sin values in reverse order, since cos θ = sin (90° − θ).\n\nWe should learn it like\n\n1. cos 0° = sin 90° = 1\n2. cos 30° = sin 60° = √3/2\n3. cos 45° = sin 45° = 1/√2\n4. cos 60° = sin 30° = 1/2\n5. cos 90° = sin 0° = 0\n\nSo, for cos, it will be like\n\n1, √3/2, 1/√2, 1/2, 0",
null,
"### For tan\n\nWe know that tan θ = sin θ/cos θ\n\nSo, it will be\n\n• tan 0° = sin 0° / cos 0° = 0/1 = 0\n• tan 30° = sin 30° / cos 30° = (1/2)/(√3/2) = 1/√3\n• tan 45° = sin 45° / cos 45° = (1/√2)/(1/√2) = 1\n• tan 60° = sin 60° / cos 60° = (√3/2)/(1/2) = √3\n• tan 90° = sin 90° / cos 90° = 1/0 = Not Defined = ∞\n\nSo, for tan, it is\n\n0, 1/√3, 1, √3, ∞",
null,
"### For cosec\n\nWe know that\n\ncosec θ = 1/sin θ\n\nFor sin, we know\n\n0, 1/2, 1/√2, √3/2, 1\n\nSo, for cosec it will be\n\n• cosec 0° = 1 / sin 0° = 1/0 = Not Defined = ∞\n• cosec 30° = 1 / sin 30° = 1/(1/2) = 2\n• cosec 45° = 1 / sin 45° = 1/(1/√2) = √2\n• cosec 60° = 1 / sin 60° = 1/(√3/2) = 2/√3\n• cosec 90° = 1 / sin 90° = 1/1 = 1\n\nSo, for cosec, it is\n\n∞, 2, √2, 2/√3, 1",
null,
"### For sec\n\nWe know that\n\nsec θ = 1/cos θ\n\nFor cos, we know\n\n1, √3/2, 1/√2, 1/2, 0\n\nSo, for sec it will be\n\n• sec 0° = 1 / cos 0° = 1/1 = 1\n• sec 30° = 1 / cos 30° = 1/(√3/2) = 2/√3\n• sec 45° = 1 / cos 45° = 1/(1/√2) = √2\n• sec 60° = 1 / cos 60° = 1/(1/2) = 2\n• sec 90° = 1 / cos 90° = 1/0 = Not Defined = ∞\n\nSo, for sec, it is\n\n1, 2/√3, √2, 2, ∞",
null,
"### For cot\n\nWe know that\n\ncot θ = 1/tan θ\n\nFor tan, we know that\n\n0, 1/√3, 1, √3, ∞\n\nSo, for cot it will be\n\n• cot 0° = 1 / tan 0° = 1/0 = Not Defined = ∞\n• cot 30° = 1 / tan 30° = 1/(1/√3) = √3\n• cot 45° = 1 / tan 45° = 1/1 = 1\n• cot 60° = 1 / tan 60° = 1/√3\n• cot 90° = 1 / tan 90° = 1/∞ = 0\n\nSo, for cot, it is\n\n∞, √3, 1, 1/√3, 0",
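All of the exact values collected above can be cross-checked numerically with Python's math module (a quick sanity check; note that math works in radians, so degrees are converted first):

```python
import math

exact_sin = {0: 0.0, 30: 0.5, 45: 1 / math.sqrt(2), 60: math.sqrt(3) / 2, 90: 1.0}

for deg, val in exact_sin.items():
    assert abs(math.sin(math.radians(deg)) - val) < 1e-12        # sin pattern
    assert abs(math.cos(math.radians(90 - deg)) - val) < 1e-12   # cos θ = sin (90° − θ)

# tan 60° = √3 (and hence cot 60° = 1/√3)
assert abs(math.tan(math.radians(60)) - math.sqrt(3)) < 1e-12
print("all special-angle values verified")
```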
null,
"So, our full table looks like this",
null,
"## Trigonometry Table\n\nThe Trigonometry Table has all the values of sin, cos and tan for all angles from 0 to 90 degrees (angles from 46° to 90° continue in the right-hand half).\n\n| Radian | Degree | Sine | Cosine | Tangent | Radian | Degree | Sine | Cosine | Tangent |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| 0.000 | 0 | 0.000 | 1.000 | 0.000 | 0.803 | 46 | 0.719 | 0.695 | 1.036 |\n| 0.017 | 1 | 0.017 | 1.000 | 0.017 | 0.820 | 47 | 0.731 | 0.682 | 1.072 |\n| 0.035 | 2 | 0.035 | 0.999 | 0.035 | 0.838 | 48 | 0.743 | 0.669 | 1.111 |\n| 0.052 | 3 | 0.052 | 0.999 | 0.052 | 0.855 | 49 | 0.755 | 0.656 | 1.150 |\n| 0.070 | 4 | 0.070 | 0.998 | 0.070 | 0.873 | 50 | 0.766 | 0.643 | 1.192 |\n| 0.087 | 5 | 0.087 | 0.996 | 0.087 | 0.890 | 51 | 0.777 | 0.629 | 1.235 |\n| 0.105 | 6 | 0.105 | 0.995 | 0.105 | 0.908 | 52 | 0.788 | 0.616 | 1.280 |\n| 0.122 | 7 | 0.122 | 0.993 | 0.123 | 0.925 | 53 | 0.799 | 0.602 | 1.327 |\n| 0.140 | 8 | 0.139 | 0.990 | 0.141 | 0.942 | 54 | 0.809 | 0.588 | 1.376 |\n| 0.157 | 9 | 0.156 | 0.988 | 0.158 | 0.960 | 55 | 0.819 | 0.574 | 1.428 |\n| 0.175 | 10 | 0.174 | 0.985 | 0.176 | 0.977 | 56 | 0.829 | 0.559 | 1.483 |\n| 0.192 | 11 | 0.191 | 0.982 | 0.194 | 0.995 | 57 | 0.839 | 0.545 | 1.540 |\n| 0.209 | 12 | 0.208 | 0.978 | 0.213 | 1.012 | 58 | 0.848 | 0.530 | 1.600 |\n| 0.227 | 13 | 0.225 | 0.974 | 0.231 | 1.030 | 59 | 0.857 | 0.515 | 1.664 |\n| 0.244 | 14 | 0.242 | 0.970 | 0.249 | 1.047 | 60 | 0.866 | 0.500 | 1.732 |\n| 0.262 | 15 | 0.259 | 0.966 | 0.268 | 1.065 | 61 | 0.875 | 0.485 | 1.804 |\n| 0.279 | 16 | 0.276 | 0.961 | 0.287 | 1.082 | 62 | 0.883 | 0.469 | 1.881 |\n| 0.297 | 17 | 0.292 | 0.956 | 0.306 | 1.100 | 63 | 0.891 | 0.454 | 1.963 |\n| 0.314 | 18 | 0.309 | 0.951 | 0.325 | 1.117 | 64 | 0.899 | 0.438 | 2.050 |\n| 0.332 | 19 | 0.326 | 0.946 | 0.344 | 1.134 | 65 | 0.906 | 0.423 | 2.145 |\n| 0.349 | 20 | 0.342 | 0.940 | 0.364 | 1.152 | 66 | 0.914 | 0.407 | 2.246 |\n| 0.367 | 21 | 0.358 | 0.934 | 0.384 | 1.169 | 67 | 0.921 | 0.391 | 2.356 |\n| 0.384 | 22 | 0.375 | 0.927 | 0.404 | 1.187 | 68 | 0.927 | 0.375 | 2.475 |\n| 0.401 | 23 | 0.391 | 0.921 | 0.424 | 1.204 | 69 | 0.934 | 0.358 | 2.605 |\n| 0.419 | 24 | 0.407 | 0.914 | 0.445 | 1.222 | 70 | 0.940 | 0.342 | 2.747 |\n| 0.436 | 25 | 0.423 | 0.906 | 0.466 | 1.239 | 71 | 0.946 | 0.326 | 2.904 |\n| 0.454 | 26 | 0.438 | 0.899 | 0.488 | 1.257 | 72 | 0.951 | 0.309 | 3.078 |\n| 0.471 | 27 | 0.454 | 0.891 | 0.510 | 1.274 | 73 | 0.956 | 0.292 | 3.271 |\n| 0.489 | 28 | 0.469 | 0.883 | 0.532 | 1.292 | 74 | 0.961 | 0.276 | 3.487 |\n| 0.506 | 29 | 0.485 | 0.875 | 0.554 | 1.309 | 75 | 0.966 | 0.259 | 3.732 |\n| 0.524 | 30 | 0.500 | 0.866 | 0.577 | 1.326 | 76 | 0.970 | 0.242 | 4.011 |\n| 0.541 | 31 | 0.515 | 0.857 | 0.601 | 1.344 | 77 | 0.974 | 0.225 | 4.331 |\n| 0.559 | 32 | 0.530 | 0.848 | 0.625 | 1.361 | 78 | 0.978 | 0.208 | 4.705 |\n| 0.576 | 33 | 0.545 | 0.839 | 0.649 | 1.379 | 79 | 0.982 | 0.191 | 5.145 |\n| 0.593 | 34 | 0.559 | 0.829 | 0.675 | 1.396 | 80 | 0.985 | 0.174 | 5.671 |\n| 0.611 | 35 | 0.574 | 0.819 | 0.700 | 1.414 | 81 | 0.988 | 0.156 | 6.314 |\n| 0.628 | 36 | 0.588 | 0.809 | 0.727 | 1.431 | 82 | 0.990 | 0.139 | 7.115 |\n| 0.646 | 37 | 0.602 | 0.799 | 0.754 | 1.449 | 83 | 0.993 | 0.122 | 8.144 |\n| 0.663 | 38 | 0.616 | 0.788 | 0.781 | 1.466 | 84 | 0.995 | 0.105 | 9.514 |\n| 0.681 | 39 | 0.629 | 0.777 | 0.810 | 1.484 | 85 | 0.996 | 0.087 | 11.430 |\n| 0.698 | 40 | 0.643 | 0.766 | 0.839 | 1.501 | 86 | 0.998 | 0.070 | 14.301 |\n| 0.716 | 41 | 0.656 | 0.755 | 0.869 | 1.518 | 87 | 0.999 | 0.052 | 19.081 |\n| 0.733 | 42 | 0.669 | 0.743 | 0.900 | 1.536 | 88 | 0.999 | 0.035 | 28.636 |\n| 0.750 | 43 | 0.682 | 0.731 | 0.933 | 1.553 | 89 | 1.000 | 0.017 | 57.290 |\n| 0.768 | 44 | 0.695 | 0.719 | 0.966 | 1.571 | 90 | 1.000 | 0.000 | ∞ |\n| 0.785 | 45 | 0.707 | 0.707 | 1.000 | | | | | |",
null,
""
]
| [
null,
"https://d1avenlh0i1xmr.cloudfront.net/eba7f974-6e6d-4c5e-8680-426e422aba77/trigonometry-table---unfilled.jpg",
null,
"https://d1avenlh0i1xmr.cloudfront.net/801bbeae-38f1-47fb-bbf9-1d96fa76333a/trigonometry-table---sin.jpg",
null,
"https://d1avenlh0i1xmr.cloudfront.net/c9f78722-578a-4824-969b-6d803d7292ba/trigonometry-table---cos.jpg",
null,
"https://d1avenlh0i1xmr.cloudfront.net/74b46d9a-70f4-400f-bdd1-e5f68d29a50a/trigonometry-table---tan.jpg",
null,
"https://d1avenlh0i1xmr.cloudfront.net/833d3513-c8a5-4026-943a-cc193d5fef74/trigonometry-table---cosec-csc.jpg",
null,
"https://d1avenlh0i1xmr.cloudfront.net/a48b7003-08c2-41a3-9247-a8801c0b4f96/trigonometry-table---sec-sc.jpg",
null,
"https://d1avenlh0i1xmr.cloudfront.net/72f4c6d9-2b05-49bf-bf40-280b414b825b/trigonometry-table---cot.jpg",
null,
"https://d1avenlh0i1xmr.cloudfront.net/dc5b9ff5-3f9b-4f37-a32e-725954372188/trigonometry-table.jpg",
null,
"https://www.teachoo.com/static/misc/Davneet_Singh.jpg",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.50409234,"math_prob":0.9998838,"size":5599,"snap":"2023-40-2023-50","text_gpt3_token_len":3478,"char_repetition_ratio":0.18498659,"word_repetition_ratio":0.05078809,"special_character_ratio":0.7778175,"punctuation_ratio":0.23901394,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9570226,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-29T10:53:24Z\",\"WARC-Record-ID\":\"<urn:uuid:57069fd4-a5db-4a90-9ceb-b34dc4b82f79>\",\"Content-Length\":\"179839\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:99595240-abe6-439f-8168-9f4ecb8bcb25>\",\"WARC-Concurrent-To\":\"<urn:uuid:5a8ac84f-b121-470f-9a2b-f1e679054055>\",\"WARC-IP-Address\":\"54.237.159.171\",\"WARC-Target-URI\":\"https://www.teachoo.com/5225/955/What-is-value-of-sin--cos--tan-at-0--30--45--60---90-degree----Chapter/category/Trignometric-ratios-of-Specific-Angles---Evaluating/\",\"WARC-Payload-Digest\":\"sha1:22RVEYGSXUNL6LIPPM6JHFZIRYYMPG5P\",\"WARC-Block-Digest\":\"sha1:QCNZJNCFJAEUZR6PIWF6JRILP5HDTUDE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510501.83_warc_CC-MAIN-20230929090526-20230929120526-00202.warc.gz\"}"} |
https://bulletin.iis.nsk.su/article/1386 | [
"Abstract\n\nIt is well known that Monte Carlo methods are used for solving various urgent problems in mathematical physics. Many new numerical stochastic algorithms and models have been developed at the Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG) SD RAS in recent years. Monte Carlo schemes can be illustrated effectively on a computer and presented to students. On the other hand, a special instrumental environment based on hypertext technology has been developed at ICM and MG. These computer tools can be used for the development of training mathematical courses.\n\nThe electronic manual \"Foundations of Monte Carlo methods\" is now being developed at ICM and MG for students of Novosibirsk State University. This manual is based on the lectures of Prof. G.A. Mikhailov and Dr. A.V. Voytishek. The central themes of this course are:\n\n• methods of realization of generators of standard random and pseudo-random numbers,\n• methods of numerical modeling of discrete and continuous stochastic values (including standard and special algorithms for discrete values; the reverse distribution function method, randomization and rejection technique for continuous values),\n• numerical realization of stochastic vectors,\n• methods of modeling of stochastic processes and fields,\n• methods of calculation of multiple integrals,\n• numerical methods of solution of integral equations of the second kind,\n• numerical solution of equations of mathematical physics,\n• applications of Monte Carlo methods (including problems of the radiation transfer theory).\n\nThe manual is divided into lessons containing many illustrations, diagrams, model tasks, exercises, questions, and comments.\n\nPages 19-24"
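As a flavour of the listed course topics, the simplest one, calculating a multiple integral by averaging a function over random points, fits in a few lines of Python (an illustrative sketch, not code from the manual itself):

```python
import random

def mc_integral(f, dim, n, rng):
    """Estimate the integral of f over the unit cube [0,1]^dim by the sample mean."""
    total = 0.0
    for _ in range(n):
        point = [rng.random() for _ in range(dim)]
        total += f(point)
    return total / n

rng = random.Random(0)
# The double integral of (x + y) over the unit square equals 1 exactly.
est = mc_integral(lambda p: p[0] + p[1], dim=2, n=100_000, rng=rng)
print(est)  # close to 1
```

The estimate's error shrinks like 1/√n regardless of dimension, which is the property that makes such methods attractive for multiple integrals.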
https://www.bartleby.com/essay/The-Effect-Of-Temperature-On-The-Rate-P36WSJV3VU5YW
# The Effect Of Temperature On The Rate Of The Reaction

Purpose: The activation energy lab centered on observing the effect that temperature has on the rate of the reaction 6I- (aq) + BrO3- (aq) + 6H+ (aq) → 3I2 (aq) + Br- (aq) + 3H2O, while also using calculations to determine the value of the rate constant and the activation energy at different temperatures. The activation energy of a reaction is defined as the minimum amount of energy required to make the transition from reactants to products. Given that the rate constant is a proportionality constant for an experiment, it changes with temperature. By keeping the concentrations of the reactants constant, the effect of temperature on the rate was able to be determined.

Temperature generally increases a reaction rate, given that particles require a specific amount of energy to react. By increasing the temperature, more particles are able to collide with the required force in the proper geometry to cause a reaction, which results in an increased reaction rate. An example of this is milk turning sour much more rapidly in warmer conditions, such as room temperature, versus the cooler temperature of the refrigerator.

Procedure & Observations: First, approximately 250 mL of water was added to a 400 mL beaker and heated on a hot plate to 20–25 °C. As the water was heating, two test tubes were prepared: test tube A, which contained 10 mL of 0.0100 M KI, 10 mL of 0.0010 M Na2SO3, and 10 mL of deionized water, while test tube B included 10 mL of 0.0400 M KBrO3, 10 mL of
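The activation-energy calculation described in the purpose statement is usually done with the two-point Arrhenius relation ln(k2/k1) = -(Ea/R)(1/T2 - 1/T1). The sketch below assumes only that relation; the function name and any rate constants used with it are illustrative, not data from this lab.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def activation_energy(k1, T1, k2, T2):
    """Two-point Arrhenius estimate of Ea from rate constants k1, k2
    measured at absolute temperatures T1, T2 (in kelvin)."""
    return -R * math.log(k2 / k1) / (1.0 / T2 - 1.0 / T1)
```

Given two measured rate constants at two bath temperatures, this returns Ea in J/mol; the same relation rearranges to predict a rate constant at a third temperature.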
https://au.mathworks.com/matlabcentral/profile/authors/18223397?detail=cody
Community Profile

# Hernia Baby

Last seen: 3 days ago · Active since 2021

Programming Languages: Python
Spoken Languages: Japanese

#### Statistics
#### Content Feed

Solved Cody problems:

• Reverse the vector (5 months ago): Reverse the vector elements. Example: Input x = [1,2,3,4,5,6,7,8,9], Output y = [9,8,7,6,5,4,3,2,1]
• Length of the hypotenuse (5 months ago): Given short sides of lengths a and b, calculate the length c of the hypotenuse of the right-angled triangle.
• Finding Perfect Squares (5 months ago): Given a vector of numbers, return true if one of the numbers is a square of one of the numbers. Otherwise return false. Example...
• Return area of square (5 months ago): Side of square = input = a; Area = output = b
• Vector creation (5 months ago): Create a vector using square brackets going from 1 to the given value x in steps of 1. Hint: use increment.
• Doubling elements in a vector (5 months ago): Given the vector A, return B in which all numbers in A appear doubled. So for A = [1 5 8], then B = [1 1 5 ...
• Create a vector (5 months ago): Create a vector from 0 to n by intervals of 2.
• Flip the vector from right to left (5 months ago): Examples: x=[1:5], then y=[5 4 3 2 1]; x=[1 4 6], then y=[6 4 1]. Request not ...
• Whether the input is vector? (5 months ago): Given the input x, return 1 if x is a vector or else 0.
• Find max (5 months ago): Find the maximum value of a given vector or matrix.
• Get the length of a given vector (5 months ago): Given a vector x, the output y should equal the length of x.
• Inner product of two vectors (5 months ago): Find the inner product of two vectors.
• Arrange Vector in descending order (5 months ago): If x=[0,3,4,2,1] then y=[4,3,2,1,0]
• Select every other element of a vector (8 months ago): Write a function which returns every other element of the vector passed in. That is, it returns all the odd-numbered elements, s...
• Triangle Numbers (9 months ago): Triangle numbers are the sums of successive integers. So 6 is a triangle number because 6 = 1 + 2 + 3, which can be displa...
• Generate a vector like 1,2,2,3,3,3,4,4,4,4 (1 year ago): So if n = 3, then return [1 2 2 3 3 3]. And if n = 5, then return [1 2 2...
• Maximum value in a matrix (1 year ago): Find the maximum value in the given matrix. For example, if A = [1 2 3; 4 7 8; 0 9 1], then the answer is 9.
• Find the sum of all the numbers of the input vector (1 year ago): Examples: Input x = [1 2 3 5], Output y is 11; Input x ...
• (1 year ago): Given a and b, return the sum a+b in c.
• Find the numeric mean of the prime numbers in a matrix (2 years ago): There will always be at least one prime in the matrix. Example: Input in = [8 3 5 9], Output out is 4...
• Make the vector [1 2 3 4 5 6 7 8 9 10] (2 years ago): In MATLAB, you create a vector by enclosing the elements in square brackets like so: x = [1 2 3 4]. Commas are optional, s...
• Times 2 - START HERE (2 years ago): Try out this test problem first. Given the variable x as your input, multiply it by two and put the result in y. Examples:...
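The Cody solutions themselves are written in MATLAB, but two of the problems in the feed translate into short sketches in Python as well (function names are mine, not Cody's):

```python
def repeat_counts(n):
    """'Generate a vector like 1,2,2,3,3,3,...': value i appears i times."""
    return [i for i in range(1, n + 1) for _ in range(i)]

def is_triangle(x):
    """'Triangle Numbers': True if x equals 1 + 2 + ... + k for some k."""
    total, k = 0, 0
    while total < x:
        k += 1
        total += k
    return total == x
```

For example, repeat_counts(3) gives [1, 2, 2, 3, 3, 3], and is_triangle(6) is True because 6 = 1 + 2 + 3.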
https://www.mathworks.com/matlabcentral/fileexchange/38959-active-figure-zoom-for-selecting-points
File Exchange

## Active Figure Zoom for Selecting Points

version 1.1.0.0 (8.29 KB) by Tristan Ursell

Select points at a user-specified zoom level that moves around the image as you click.

Updated 08 Nov 2013

Tristan Ursell
November 2013
Active zoom for polyline selection.

[X,Y]=getline_zoom(Im1);
[X,Y]=getline_zoom(Im1,'plot');
[X,Y]=getline_zoom(Im1,'loop_path');
[X,Y]=getline_zoom(Im1,'stationary');
[X,Y]=getline_zoom(Im1,'interp',res);
[X,Y]=getline_zoom(Im1,'interp',res,'method',mt1);

Have you ever wanted to carefully select points from an image but could not get close enough to see where you wanted to select? Have you ever wanted to create a smooth, evenly spaced set of parametric contour points from a minimum of user-selected points? Then this function is for you -- it allows the user to select points from an image at a user-specified zoom level that moves along with the points selected in a centered frame of reference. The resulting contour can then be interpolated to constant arc-length resolution, smoothed, and even closed into a loop.

Im1 is the input image (matrix) from which points will be selected.

The optional parameter 'interp' creates evenly spaced X,Y points with the resolution 'res', where res = 1 will space the X,Y points one pixel apart in arc length. The secondary option 'method' selects between a 'linear' and a 'spline' interpolation, the latter being a differentially smooth curve. The default method is 'linear'.

The optional parameter 'loop_path' will loop the path so that the last point is the same as the first point. If the method is 'spline', then the looped path will also be smoothed such that the derivatives match at the beginning and ending points.

The optional parameter 'stationary' will keep the zoom and image position fixed, but all other options will be active.

Hold 'shift' or 'control' and then click to end polyline selection.

This script requires the 'ginput' function.

This script will generate its own figure handle.

Credit to John D'Errico for writing the useful 'interparc' function included within this function.

Example:

[X0,Y0] = getline_zoom(Im1,'plot');
[X1,Y1] = getline_zoom(Im1,'plot','interp',0.5,'method','spline');
[X2,Y2] = getline_zoom(Im1,'plot','interp',0.5,'method','spline','loop_path');

### Cite As

Tristan Ursell (2020). Active Figure Zoom for Selecting Points (https://www.mathworks.com/matlabcentral/fileexchange/38959-active-figure-zoom-for-selecting-points), MATLAB Central File Exchange. Retrieved .

### Comments

Antoine Brassard Simard:

Really helped me also. I would suggest changing line #141 to:

Im_temp=Im1(max([yb,1]):min([yt,size(Im1,1)]),max([xl,1]):min([xr,size(Im1,2)]));

That would avoid the program crashing when the temporary image reaches one of the original image boundaries.

Albert Kao:

Hi Tristan,

Thank you for posting this code -- it was very helpful for my project!

I just have a couple of suggestions to improve the accuracy of your code:

1. In line 198 (calculating the length of the curve), summing across the raw selected points will always underestimate the length of the 'true' curve, since you are subsampling the curve. Instead, I would use interparc.m to generate a smoother curve (roughly at the resolution that is desired by the user), and estimate the length of the curve using those interpolated points. This takes more computation, but in certain applications the extra accuracy might be important.

2. I would change line 205 (using interparc.m) to the following:

O1 = interparc(round(Ltot/res) + 1,X,Y,mt1);

to account for the two edge points.

Otherwise, great work!
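The fix suggested in the first comment (clamping the requested zoom window to the image extent before cropping) is a general pattern. Here is the same idea as a Python sketch; it is 0-indexed, unlike the 1-indexed MATLAB line it mirrors, and the function name is illustrative:

```python
def clamp_window(shape, yb, yt, xl, xr):
    """Clip a requested zoom window to the bounds of an image of size
    shape = (rows, cols), so a subsequent crop never indexes out of range."""
    rows, cols = shape
    return (max(yb, 0), min(yt, rows), max(xl, 0), min(xr, cols))
```

A crop like img[yb:yt, xl:xr] on the clamped bounds then stays valid even when the zoom window hangs past the image edge.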
https://dradelblog.com/2020/12/22/tests-of-significance-part-2/
The t-test is any statistical hypothesis test in which the test statistic follows a Student's t-distribution under the null hypothesis.

The t-test is most commonly applied when the test statistic would follow a normal distribution if the value of a scaling term in the test statistic were known. When the scaling term is unknown and is replaced by an estimate based on the data, the test statistic (under certain conditions) follows a Student's t-distribution. The t-test can be used, for example, to determine if the means of two sets of data are significantly different from each other.

A two-sample location test tests the null hypothesis that the means of two populations are equal. All such tests are usually called Student's t-tests, though strictly speaking that name should only be used if the variances of the two populations are also assumed to be equal; the form of the test used when this assumption is dropped is sometimes called Welch's t-test. These tests are often referred to as "unpaired" or "independent samples" t-tests, as they are typically applied when the statistical units underlying the two samples being compared are non-overlapping. The statistic has the general form

$$t = \frac{Z}{s} = \frac{\bar{X}-\mu}{\widehat{\sigma}/\sqrt{n}}$$

where $\bar{X}$ is the sample mean, $\mu$ the population mean under the null hypothesis, $\widehat{\sigma}$ the estimated standard deviation, and $n$ the sample size.

The means of the two populations being compared should follow normal distributions. Under weak assumptions, this follows in large samples from the central limit theorem, even when the distribution of observations in each group is non-normal. If using Student's original definition of the t-test, the two populations being compared should have the same variance (testable using the F-test, Levene's test, Bartlett's test, or the Brown–Forsythe test; or assessable graphically using a Q–Q plot). If the sample sizes in the two groups being compared are equal, Student's original t-test is highly robust to the presence of unequal variances. Welch's t-test is insensitive to equality of the variances regardless of whether the sample sizes are similar.

The data used to carry out the test should either be sampled independently from the two populations being compared or be fully paired. This is in general not testable from the data, but if the data are known to be dependent (e.g. paired by test design), a dependent test has to be applied. For partially paired data, the classical independent t-tests may give invalid results, as the test statistic might not follow a t-distribution, while the dependent t-test is sub-optimal as it discards the unpaired data.

Independent samples Student's t-test: a test of significance between two samples, for each of which the mean and standard deviation are known.
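As a concrete sketch of the statistics above (any data passed in is made up for illustration; only the standard library is used), the one-sample t statistic and Welch's two-sample statistic can be computed directly:

```python
import math
from statistics import mean, stdev, variance

def one_sample_t(x, mu):
    """t = (xbar - mu) / (s / sqrt(n)) for a single sample x."""
    return (mean(x) - mu) / (stdev(x) / math.sqrt(len(x)))

def welch_t(a, b):
    """Welch's t statistic: no equal-variance assumption is made."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se
```

To turn either statistic into a p-value, you would compare it against Student's t distribution with the appropriate degrees of freedom (the Welch–Satterthwaite approximation in the welch_t case).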
http://sjam.selcuk.edu.tr/sjam/article/view/146/0
### Computation of periodic orbits in a three-dimensional kinetic model of catalytic hydrogen oxidation

#### Abstract

An iterative method for solving the periodic boundary-value problem (BVP) for autonomous ordinary differential equations (ODEs) is applied to the calculation of periodic orbits and their stability in a three-dimensional kinetic model of catalytic hydrogen oxidation.

According to the method, the periodic orbit is decomposed into pieces by local cross-sections $\{\pi_i\}$, and between $\pi_i$ and $\pi_{i+1}$ the system is integrated. Hence we obtain an $\alpha$-pseudo-orbit and then construct the generalized Poincaré map. Thus the BVP for ODEs is reduced to a system of nonlinear algebraic equations that takes into account both the boundary conditions of periodicity and the condition of solution continuity at the boundary points of the pieces. Once linearized, the algebraic system has a band structure, and for solving such a system the orthogonal sweep method is extremely effective.

In the model considered we numerically find periodic orbits of rather complex structure, give an example of weakly stable dynamics, and show the role of successive period-doubling bifurcations in the creation of weakly stable dynamics.

#### Keywords

Periodic orbit, numerical solution of ordinary differential equations, chemical kinetic model, period doubling bifurcation, weakly stable dynamics
https://tex.stackexchange.com/questions/134246/how-can-i-influence-the-spacing-of-mathematical-functions-by-an-own-macro
# How can I influence the spacing of mathematical functions by an own macro?

I would like to get a macro \of to define a function, with an optional macro \at to define its restriction. The reason is that I don't like the spacing for functions (as there is no spacing) and I would like to automate that. The idea is to set functions like f \of x or f\of x, which would be quick and handy. But I am not sure if that is doable for syntax reasons. I think something like \func{f}{x} would be much easier to provide, but that would be too much typing and quite a lot of change for already existing formulae.

However, the reasons for this macro are the following properties, which I would like to achieve. In future I want to expand this to all kinds of derivatives and the like (total, partial, normal, increment, infinite element...).

Here is the list:

• Always set the argument (the value behind \of) in ()
• Recognize x, (x), and {x} as the argument
• Always set a half space in front of the function when it is set behind a letter, number or bracket
• Set a half space behind the function if it is followed by a (
• Allow powers to the function and to the argument and set them properly
• Allow nested functions
• Allow \at-notation with three optional formats (this will get interesting later on for the derivatives...)

And here are some examples:

\documentclass{article}
\usepackage{mathtools}
\usepackage[partial=upright]{unicode-math}

\begin{document}
\begin{verbatim}f \of x=x^2\end{verbatim}
$f(x)=x^2$
\begin{verbatim}f \of (x+y)=x+y\end{verbatim}
$f(x+y)=x+y$
\begin{verbatim}f \of (x+y)x+y\end{verbatim}
$f(x+y)x+y$
\begin{verbatim}f \of (x+y)(x+y)(x+y)\end{verbatim}
$f(x+y)\,(x+y)(x+y)$
\begin{verbatim}xf \of xx\end{verbatim}
$x\,f(x)x$
\begin{verbatim}zf \of xg \of yz\end{verbatim}
$z\,f(x)\,g(y)z$
\begin{verbatim}f^2 \of x\end{verbatim}
$f^2(x) \vee f(x)^2 \vee (f(x))^2 \quad \text{don't know which one is correct}$
\begin{verbatim}f \of x^2\end{verbatim}
$f(x^2)$
\begin{verbatim}f \of g \of x\end{verbatim}
$f(g(x))$
\begin{verbatim}\frac{a}{b}f\of x\end{verbatim}
$\dfrac{a}{b}f(x)$
\begin{verbatim}(a+b)f\of x\end{verbatim}
$(a+b)\,f(x)$
\begin{verbatim}\setFunctionAtBar \end{verbatim}
\begin{verbatim}f \of x \at a\end{verbatim}
$f(x)|_{x=a}$
\begin{verbatim}\setFunctionAtBracket\end{verbatim}
\begin{verbatim}f \of x \at {a+b}\end{verbatim}
$(f(x))_{x=(a+b)}$
\begin{verbatim}\setFunctionAtInline\end{verbatim}
\begin{verbatim}f \of x \at (a+b)\end{verbatim}
$f(a+b)$
\end{document}

Comments:

• latex goes to a lot of effort to have a consistent syntax, \frac{a}{b} not a \over b, \hspace{2in} not \hskip 2in, etc. :( (Sep 20, 2013 at 11:37)
• hmmm, I will think of another notation which would be easy. But is there something around doing the other stuff? I got the feeling there has to be something, but I can't find it. (Sep 20, 2013 at 11:56)
• Something like $(a+b)\,f(x)$ or (f(x))_{}, i.e. inserting things way before \of, is almost impossible without making things active or pre-processing the whole content. The \of doesn't know what came before it. (You can however put \mathinner{(a+b)}, which, I believe, includes spacing around it.) Otherwise this is probably doable with a few ifnextchars, but do you really want that (see David's comment)? xparse can probably help with a \func macro. (Sep 20, 2013 at 12:06)
• I already gathered that the syntax is the major problem. But your point about pre-processing is very interesting. I will have a look at xparse when CTAN is up again. (Sep 20, 2013 at 12:10)

Aside from the syntax, it seems the main issue is spacing before a function term. TeX will add space there automatically if it knows it is a function: \log, \sin, etc. are \mathop atoms and get space, but f is a mathord, so it does not. If you declare a one-letter math operator, though, it gets the space that I think you want:
\documentclass{article}

\begin{document}
\showoutput
$zf(x)$

$z\mathop{{}f}(x)$

\end{document}
https://financetrain.com/adding-an-asset-to-a-portfolio-improving-the-minimum-variance-frontier
# Adding an Asset to a Portfolio – Improving the Minimum Variance Frontier

We can use the Sharpe ratio to determine if adding an asset creates a better (higher) minimum variance frontier.

The Sharpe ratio is calculated using the following formula:

Sharpe Ratio = (E(R_asset) − R_F) / σ_asset

Calculate the Sharpe ratio for the current portfolio and then calculate the Sharpe ratio of the new asset.

If Sharpe Ratio_new asset > Sharpe Ratio_current port × ρ(new asset, current port), then the new asset should be added, where

ρ(new asset, current port) = the correlation coefficient between the current portfolio's returns and the new asset's returns.
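A numerical sketch of the decision rule follows; all of the return, risk, and correlation numbers are invented for illustration only.

```python
def sharpe(expected_return, risk_free, stdev):
    """Sharpe ratio = (E[R] - Rf) / sigma."""
    return (expected_return - risk_free) / stdev

# Hypothetical inputs
sr_port  = sharpe(0.12, 0.03, 0.20)  # current portfolio: (0.12 - 0.03) / 0.20 = 0.45
sr_asset = sharpe(0.10, 0.03, 0.25)  # candidate asset:   (0.10 - 0.03) / 0.25 = 0.28
rho = 0.40                           # correlation with the current portfolio

# Decision rule from the text: add the asset if its Sharpe ratio exceeds
# the portfolio's Sharpe ratio scaled by the correlation.
should_add = sr_asset > sr_port * rho
```

Here 0.28 > 0.45 × 0.40 = 0.18, so under these made-up numbers the asset would be added.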
https://www.edureka.co/blog/insertion-sort-in-c/
# How to Implement Insertion Sort in C with Example

Last updated on Nov 25, 2020

Insertion Sort in C is a simple and efficient sorting algorithm that creates the final sorted array one element at a time. It is usually implemented when the user has a small data set. This article covers the following topics:

• What is Insertion Sort?
• Brief Working and Complexity
• Algorithm for Insertion Sort
• Insertion Sort in C: Code

## What is Insertion Sort?

Insertion Sort is a sorting algorithm where the array is sorted by taking one element at a time. The principle behind insertion sort is to take one element, iterate through the sorted part of the array, and find its correct position there.
Insertion Sort works in a similar manner to how we arrange a deck of cards.

## Brief Working and Complexity

In each iteration, the algorithm compares the current element with the values in the sorted part of the array. If the current element is greater than the element in the array, it leaves the element in place and moves on to the next array element. Otherwise, if the current element is smaller than the array element, it shifts the rest of the elements in the array by one position and makes space for the current element in the sorted part.

This is how Insertion sort takes one input element at a time, iterates through the sorted sub-array, and with each iteration inserts one element at its correct position. This is why the algorithm is known as Insertion sort.

As the average and worst-case complexity of this algorithm are O(n²), where n is the number of elements, Insertion sort is not good for large data sets.

## Algorithm for Insertion Sort

• Step 1 − If the element is the first one, it is already sorted.
• Step 2 − Move to the next element.
• Step 3 − Compare the current element with all elements in the sorted array.
• Step 4 − If the element in the sorted array is smaller than the current element, iterate to the next element. Otherwise, shift all the greater elements in the array by one position towards the right.
• Step 5 − Insert the value at the correct position.
• Step 6 − Repeat until the complete list is sorted.

To understand how Insertion sort works, refer to the worked example below.
Let's walk through the example step by step.

• 122, 17, 93, 3, 36

for i = 1 (2nd element) to i = 4 (last element)

i = 1. Since 17 is smaller than 122, move 122 and insert 17 before 122.

• 17, 122, 93, 3, 36

i = 2. Since 93 is smaller than 122, move 122 and insert 93 before 122.

• 17, 93, 122, 3, 36

i = 3. 3 will move to the beginning, and all other elements from 17 to 122 will move one position ahead of their current position.

• 3, 17, 93, 122, 36

i = 4. 36 will move to the position after 17, and the elements from 93 to 122 will move one position ahead of their current position.

• 3, 17, 36, 93, 122

Now that we have understood how Insertion Sort works, let us quickly look at C code to implement it.

Insertion Sort Function

```
void insertionSort(int array[], int n)
{
    int i, element, j;
    for (i = 1; i < n; i++) {
        element = array[i];
        j = i - 1;
        /* Move elements of array[0..i-1] that are greater
           than element one position ahead of their current position */
        while (j >= 0 && array[j] > element) {
            array[j + 1] = array[j];
            j = j - 1;
        }
        array[j + 1] = element;
    }
}
```

## Insertion Sort in C: Code

```
#include <stdio.h>

// Insertion Sort Function
void insertionSort(int array[], int n)
{
    int i, element, j;
    for (i = 1; i < n; i++) {
        element = array[i];
        j = i - 1;
        while (j >= 0 && array[j] > element) {
            array[j + 1] = array[j];
            j = j - 1;
        }
        array[j + 1] = element;
    }
}

// Function to print the elements of an array
void printArray(int array[], int n)
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", array[i]);
    printf("\n");
}
```

After working through the program above, you will have understood how Insertion Sort works and how to implement it in C. I hope this blog was informative and added value to you. With this, we come to an end of this Insertion Sort in C article.
]
https://www.euclideanspace.com/maths/discrete/structure/terminology/index.htm

# Maths - Terminology

• extensionality, or extensional equality, refers to principles that judge objects to be equal if they have the same external properties.
• extension
• equaliser (category)
• fixpoint
• order complete
• complete (category) (negation complete) (directed complete partial orders, dcpo) - A category is complete if every diagram in C has a limit in C. In order theory, completeness properties assert the existence of certain infima or suprema of a given partially ordered set (poset).
• canonicity - A type theory has canonicity if every term computes to a canonical form.
• preorder (set) (category) - A set with a binary relation which is reflexive and transitive.
• partial order
• poset
• axiom of choice
• classifier
• closure
• continuous
• exact functor
• generators
• Gödel's completeness theorem
• hom-set (category)
• inhabited
• Kan extension (category)
• Lindenbaum algebra
• natural number object
• powerset
• presheaf
• representation
• restriction
• section
• sieve
• weakly
• well founded
• well pointed
https://zbmath.org/?q=an:0665.62034

## Estimating a real parameter in a class of semiparametric models. (English) Zbl 0665.62034

The author studies an estimation problem in semiparametric models where, for a fixed value of a finite-dimensional parameter, there exists a sufficient statistic for the nuisance parameter. Namely, if $X_1, X_2, \ldots, X_n$ are independent random elements with respective densities $p_j(\cdot,\theta,\eta)$, where $\theta$ is the parameter of interest while $\eta$ is the nuisance parameter, then it is assumed that
$$p_j(\cdot,\theta,\eta) = h_j(\cdot,\theta)\, g(\psi_j(\cdot,\theta),\theta,\eta)$$
and $\psi(X_j,\theta)$ has density $g(\cdot,\theta,\eta)$ w.r.t. the measures $v_\theta$ for some functions $h_j$ and $\psi_j$. One of the basic assumptions concerning the score functions is that projecting on the set of nuisance scores is equivalent to taking conditional expectations given the sufficient statistics $\psi$.

An estimator sequence $\{T_n\}$ is said to be asymptotically efficient for $\theta$ if
$$\sqrt{n}(T_n-\theta) = n^{-1/2}\sum_{j=1}^{n}\tilde I_n^{-1}(\theta,\eta)\,\tilde l_{nj}(X_j,\theta,\eta) + o_{P_{\theta\eta}}(1),$$
where the $\tilde I_n$ are normalizing factors and $\tilde l_{nj}$ is a naturally defined efficient score function. The efficient estimator $T_n$ is obtained by the one-step method:
$$T_n = \hat\theta_n + \frac{1}{n}\sum_{j=1}^{n} I_n^{-1}(\hat\theta_n)\,\hat l_{nj}(X_j,\hat\theta_n),$$
where $\hat\theta_n$ is a $\sqrt{n}$-consistent initial estimator and $\hat l_{nj}$ is an estimator of the efficient score function. The estimator is in fact optimal in the sense of the convolution and local asymptotic minimax theorem. The paper is supplemented by a number of examples of mixture models, where a useful concept of local completeness is used.

Reviewer: T. Bednarski

### MSC:

• 62F12 Asymptotic properties of parametric estimators
• 62F35 Robustness and adaptive procedures (parametric inference)
• 62F10 Point estimation
• 62G05 Nonparametric estimation
• 62G20 Asymptotic properties of nonparametric inference
https://es.mathworks.com/matlabcentral/cody/problems/55295-chain-multiplication-01

# Problem 55295. Chain multiplication - 01

Say you are given two matrices - A (shape 3*4) and B (shape 4*5).
If you multiply these two matrices, the resultant matrix will be of shape 3*5.
To obtain the resultant matrix, you will need to perform in total 60 multiplications internally between the elements of the matrices (this can easily be checked from the matrix multiplication rule).
The first question is: if you are given just two matrices, what will be the total number of multiplications needed to obtain the resultant matrix?

### Solution Stats

56.25% Correct | 43.75% Incorrect
Last Solution submitted on Oct 19, 2023
http://www.onewall.com/homepage.nsf/PBview?Open&1FSY=Tag&RestrictToCategory=ResearchZ.ZEconomics:CONSTRUCTIONQ.QSPENDING | [
Research >> Economics

Category: Research - Topic: Economics - CONSTRUCTION SPENDING
Construction Spending Increased 0.1% in March 2022
Posted: May 2, 2022 at 10:00 AM (Monday)
Total Construction: Construction spending during March 2022 was estimated at a seasonally adjusted annual rate of $1,730.5 billion, 0.1 percent (±0.7 percent)* above the revised February estimate of $1,728.6 billion. The March figure is 11.7 percent (±1.0 percent) above the March 2021 estimate of $1,548.6 billion. During the first three months ...
Construction Spending Increased 0.5% in February 2022
Posted: April 1, 2022 at 10:00 AM (Friday)
Total Construction: Construction spending during February 2022 was estimated at a seasonally adjusted annual rate of $1,704.4 billion, 0.5 percent (±0.7 percent)* above the revised January estimate of $1,695.5 billion. The February figure is 11.2 percent (±1.2 percent) above the February 2021 estimate of $1,533.3 billion. During the first ...

Construction Spending Increased 1.3% in January 2022
Posted: March 1, 2022 at 10:00 AM (Tuesday)
Total Construction: Construction spending during January 2022 was estimated at a seasonally adjusted annual rate of $1,677.2 billion, 1.3 percent (±0.8 percent) above the revised December estimate of $1,655.8 billion. The January figure is 8.2 percent (±1.2 percent) above the January 2021 estimate of ...

Construction Spending Increased 0.2% in December 2021
Posted: February 1, 2022 at 10:00 AM (Tuesday)
Total Construction: Construction spending during December 2021 was estimated at a seasonally adjusted annual rate of $1,639.9 billion, 0.2 percent (±0.8 percent)* above the revised November estimate of $1,636.5 billion. The December figure is 9.0 percent (±1.0 percent) above the December 2020 estimate of $1,504.2 billion. The value of ...

Construction Spending Increased 0.4% in November 2021
Posted: January 3, 2022 at 10:00 AM (Monday)
Total Construction: Construction spending during November 2021 was estimated at a seasonally adjusted annual rate of $1,625.9 billion, 0.4 percent (±1.0 percent)* above the revised October estimate of $1,618.8 billion. The November figure is 9.3 percent (±1.2 percent) above the November 2020 estimate of $1,487.2 billion. During the first ...

Construction Spending Increased 0.2% in October 2021
Posted: December 1, 2021 at 10:00 AM (Wednesday)
Total Construction: Construction spending during October 2021 was estimated at a seasonally adjusted annual rate of $1,598.0 billion, 0.2 percent (±1.2 percent)* above the revised September estimate of $1,594.8 billion. The October figure is 8.6 percent (±1.3 percent) above the October 2020 estimate of $1,471.7 billion. During the first ten ...
Construction Spending Decreased 0.5% in September 2021
Posted: November 1, 2021 at 10:00 AM (Monday)
Total Construction: Construction spending during September 2021 was estimated at a seasonally adjusted annual rate of $1,573.6 billion, 0.5 percent (±1.0 percent)* below the revised August estimate of $1,582.0 billion. The September figure is 7.8 percent (±1.5 percent) above the September 2020 estimate of $1,459.3 billion. During the first ...

Construction Spending Unchanged in August 2021
Posted: October 1, 2021 at 08:30 AM (Friday)
Total Construction: Construction spending during August 2021 was estimated at a seasonally adjusted annual rate of $1,584.1 billion, virtually unchanged from (±1.0 percent)* the revised July estimate of $1,584.0 billion. The August figure is 8.9 percent (±1.5 percent) above the August 2020 estimate of $1,455.0 billion. During the first eight ...

Construction Spending Increased 0.3% in July 2021
Posted: September 1, 2021 at 10:00 AM (Wednesday)
Total Construction: Construction spending during July 2021 was estimated at a seasonally adjusted annual rate of $1,568.8 billion, 0.3 percent (±1.2 percent)* above the revised June estimate of $1,563.4 billion. The July figure is 9.0 percent (±1.5 percent) above the July 2020 estimate of $1,439.6 billion. During the first seven months of ...

Construction Spending Increased 0.1% in June 2021
Posted: August 2, 2021 at 10:00 AM (Monday)
Total Construction: Construction spending during June 2021 was estimated at a seasonally adjusted annual rate of $1,552.2 billion, 0.1 percent (±1.2 percent)* above the revised May estimate of $1,551.2 billion. The June figure is 8.2 percent (±1.3 percent) above the June 2020 estimate of $1,435.0 billion. During the first six months of this ...

Construction Spending Decreased 0.3% in May 2021
Posted: July 1, 2021 at 10:00 AM (Thursday)
Total Construction: Construction spending during May 2021 was estimated at a seasonally adjusted annual rate of $1,545.3 billion, 0.3 percent (±1.0 percent)* below the revised April estimate of $1,549.5 billion. The May figure is 7.5 percent (±1.3 percent) above the May 2020 estimate of $1,437.7 billion. During the first five months of this ...

Construction Spending Increased 0.2% in April 2021
Posted: June 1, 2021 at 10:00 AM (Tuesday)
Total Construction: Construction spending during April 2021 was estimated at a seasonally adjusted annual rate of $1,524.2 billion, 0.2 percent (±0.8 percent)* above the revised March estimate of $1,521.0 billion. The April figure is 9.8 percent (±1.2 percent) above the April 2020 estimate of $1,387.9 billion. During the first four months of ...
Construction Spending Increased 0.2% in March 2021
Posted: May 3, 2021 at 10:00 AM (Monday)
Total Construction: Construction spending during March 2021 was estimated at a seasonally adjusted annual rate of $1,513.1 billion, 0.2 percent (±0.8 percent)* above the revised February estimate of $1,509.9 billion. The March figure is 5.3 percent (±1.0 percent) above the March 2020 estimate of $1,436.7 billion. During the first three months ...

Construction Spending Decreased 0.8% in February 2021
Posted: April 1, 2021 at 09:47 AM (Thursday)
Total Construction: Construction spending during February 2021 was estimated at a seasonally adjusted annual rate of $1,516.9 billion, 0.8 percent (±0.7 percent) below the revised January estimate of $1,529.0 billion. The February figure is 5.3 percent (±1.0 percent) above the February 2020 estimate of $1,441.1 billion. During the first two ...

Construction Spending Increased 1.7% in January 2021
Posted: March 1, 2021 at 10:00 AM (Monday)
Total Construction: Construction spending during January 2021 was estimated at a seasonally adjusted annual rate of $1,521.5 billion, 1.7 percent (±0.7 percent) above the revised December estimate of $1,496.5 billion. The January figure is 5.8 percent (±1.0 percent) above the January 2020 estimate of ...

Construction Spending Increased 1.0% in December 2020
Posted: February 1, 2021 at 10:00 AM (Monday)
Total Construction: Construction spending during December 2020 was estimated at a seasonally adjusted annual rate of $1,490.4 billion, 1.0 percent (±0.8 percent) above the revised November estimate of $1,475.6 billion. The December figure is 5.7 percent (±1.0 percent) above the December 2019 estimate of $1,410.3 billion. The value of ...

Construction Spending Increased 0.9% in November 2020
Posted: January 4, 2021 at 10:00 AM (Monday)
Total Construction: Construction spending during November 2020 was estimated at a seasonally adjusted annual rate of $1,459.4 billion, 0.9 percent (±0.8 percent) above the revised October estimate of $1,446.9 billion. The November figure is 3.8 percent (±1.3 percent) above the November 2019 estimate of $1,405.5 billion. During the first ...

Construction Spending Increased 1.3% in October 2020
Posted: December 1, 2020 at 10:00 AM (Tuesday)
Total Construction: Construction spending during October 2020 was estimated at a seasonally adjusted annual rate of $1,438.5 billion, 1.3 percent (±1.0 percent) above the revised September estimate of $1,420.4 billion. The October figure is 3.7 percent (±1.3 percent) above the October 2019 estimate of $1,386.8 billion. During the first ten ...
Construction Spending Increased 0.3% in September 2020
Posted: November 2, 2020 at 10:00 AM (Monday)
Total Construction: Construction spending during September 2020 was estimated at a seasonally adjusted annual rate of $1,414.0 billion, 0.3 percent (±1.0 percent)* above the revised August estimate of $1,410.4 billion. The September figure is 1.5 percent (±1.3 percent) above the September 2019 estimate of $1,393.3 billion. During the first ...

Construction Spending Increased 1.4% in August 2020
Posted: October 1, 2020 at 10:00 AM (Thursday)
Total Construction: Construction spending during August 2020 was estimated at a seasonally adjusted annual rate of $1,412.8 billion, 1.4 percent (±1.0 percent) above the revised July estimate of $1,392.7 billion. The August figure is 2.5 percent (±1.5 percent) above the August 2019 estimate of $1,379.0 billion. During the first eight months of ...

Construction Spending Increased 0.1% in July 2020
Posted: September 1, 2020 at 10:00 AM (Tuesday)
Total Construction: Construction spending during July 2020 was estimated at a seasonally adjusted annual rate of $1,364.6 billion, 0.1 percent (±1.2 percent)* above the revised June estimate of $1,362.8 billion. The July figure is 0.1 percent (±1.6 percent)* below the July 2019 estimate of $1,366.0 billion. During the first seven months of ...

Construction Spending Decreased 0.7% in June 2020
Posted: August 3, 2020 at 10:00 AM (Monday)
Total Construction: Construction spending during June 2020 was estimated at a seasonally adjusted annual rate of $1,355.2 billion, 0.7 percent (±1.2 percent)* below the revised May estimate of $1,364.7 billion. The June figure is 0.1 percent (±1.5 percent)* above the June 2019 estimate of $1,354.1 billion. During the first six months of this ...

Construction Spending Decreased 2.1% in May 2020
Posted: July 1, 2020 at 10:00 AM (Wednesday)
Total Construction: Construction spending during May 2020 was estimated at a seasonally adjusted annual rate of $1,356.4 billion, 2.1 percent (±1.0 percent) below the revised April estimate of $1,386.1 billion. The May figure is 0.3 percent (±1.5 percent)* above the May 2019 estimate of $1,352.9 billion. During the first five months of this ...

Construction Spending Decreased 2.9% in April 2020
Posted: June 1, 2020 at 10:00 AM (Monday)
Total Construction: Construction spending during April 2020 was estimated at a seasonally adjusted annual rate of $1,346.2 billion, 2.9 percent (±0.8 percent) below the revised March estimate of $1,386.6 billion. The April figure is 3.0 percent (±1.5 percent) above the April 2019 estimate of $1,307.1 billion. During the first four months of ...
Construction Spending Increased 0.9% in March 2020
Posted: May 1, 2020 at 10:00 AM (Friday)
Total Construction: Construction spending during March 2020 was estimated at a seasonally adjusted annual rate of $1,360.5 billion, 0.9 percent (±0.8 percent) above the revised February estimate of $1,348.4 billion. The March figure is 4.7 percent (±1.3 percent) above the March 2019 estimate of $1,299.1 billion. During the first three months ...

Construction Spending Decreased 1.3% in February 2020
Posted: April 1, 2020 at 10:00 AM (Wednesday)
Total Construction: Construction spending during February 2020 was estimated at a seasonally adjusted annual rate of $1,366.7 billion, 1.3 percent (±0.8 percent) below the revised January estimate of $1,384.5 billion. The February figure is 6.0 percent (±1.2 percent) above the February 2019 estimate of $1,289.0 billion. During the first two ...

Construction Spending Increased 1.8% in January 2020
Posted: March 2, 2020 at 10:00 AM (Monday)
Total Construction: Construction spending during January 2020 was estimated at a seasonally adjusted annual rate of $1,369.2 billion, 1.8 percent (±0.8 percent) above the revised December estimate of $1,345.5 billion. The January figure is 6.8 percent (±1.3 percent) above the January 2019 estimate of ...

Construction Spending Decreased 0.2% in December 2019
Posted: February 3, 2020 at 10:00 AM (Monday)
Total Construction: Construction spending during December 2019 was estimated at a seasonally adjusted annual rate of $1,327.7 billion, 0.2 percent (±0.8 percent)* below the revised November estimate of $1,329.9 billion. The December figure is 5.0 percent (±1.3 percent) above the December 2018 estimate of $1,264.8 billion. The value of ...

Construction Spending Increased 0.6% in November 2019
Posted: January 3, 2020 at 10:00 AM (Friday)
Total Construction: Construction spending during November 2019 was estimated at a seasonally adjusted annual rate of $1,324.1 billion, 0.6 percent (±1.0 percent)* above the revised October estimate of $1,316.8 billion. The November figure is 4.1 percent (±1.5 percent) above the November 2018 estimate of $1,271.4 billion. During the first ...

Construction Spending Decreased 0.8% in October
Posted: December 2, 2019 at 10:00 AM (Monday)
Total Construction: Construction spending during October 2019 was estimated at a seasonally adjusted annual rate of $1,291.1 billion, 0.8 percent (±1.0 percent)* below the revised September estimate of $1,301.8 billion. The October figure is 1.1 percent (±1.5 percent)* above the October 2018 estimate of $1,277.4 billion. During the first ten ...
Construction Spending Increased 0.5% in September
Posted: November 1, 2019 at 10:00 AM (Friday)
Total Construction: Construction spending during September 2019 was estimated at a seasonally adjusted annual rate of $1,293.6 billion, 0.5 percent (±1.0 percent)* above the revised August estimate of $1,287.1 billion. The September figure is 2.0 percent (±1.8 percent) below the September 2018 estimate of $1,319.7 billion. During the first ...

Construction Spending Increased 0.1% in August
Posted: October 1, 2019 at 10:00 AM (Tuesday)
Total Construction: Construction spending during August 2019 was estimated at a seasonally adjusted annual rate of $1,287.3 billion, 0.1 percent (±1.2 percent)* above the revised July estimate of $1,285.6 billion. The August figure is 1.9 percent (±1.8 percent) below the August 2018 estimate of $1,312.2 billion. During the first eight months ...

Construction Spending Increased 0.1% in July
Posted: September 3, 2019 at 10:00 AM (Tuesday)
Total Construction: Construction spending during July 2019 was estimated at a seasonally adjusted annual rate of $1,288.8 billion, 0.1 percent (±1.3 percent)* above the revised June estimate of $1,288.1 billion. The July figure is 2.7 percent (±1.6 percent) below the July 2018 estimate of $1,324.8 billion. During the first seven months of ...

Construction Spending Decreased 1.3% in June
Posted: August 1, 2019 at 10:00 AM (Thursday)
Total Construction: Construction spending during June 2019 was estimated at a seasonally adjusted annual rate of $1,287.0 billion, 1.3 percent (±1.2 percent) below the revised May estimate of $1,303.4 billion. The June figure is 2.1 percent (±1.6 percent) below the June 2018 estimate of $1,314.8 billion. During the first six months of this ...

Construction Spending Decreased 0.8% in May
Posted: July 1, 2019 at 10:00 AM (Monday)
Total Construction: Construction spending during May 2019 was estimated at a seasonally adjusted annual rate of $1,293.9 billion, 0.8 percent (±1.2 percent)* below the revised April estimate of $1,304.0 billion. The May figure is 2.3 percent (±1.5 percent) below the May 2018 estimate of $1,324.3 billion. During the first five months of this ...

Construction Spending Unchanged in April
Posted: June 3, 2019 at 08:30 AM (Monday)
Total Construction: Construction spending during April 2019 was estimated at a seasonally adjusted annual rate of $1,298.5 billion, nearly the same as (±1.3 percent)* the revised March estimate of $1,299.2 billion. The April figure is 1.2 percent (±1.5 percent)* below the April 2018 estimate of $1,314.7 billion. During the first four months of ...
Construction Spending Decreased 0.9% in March
Posted: May 1, 2019 at 10:00 AM (Wednesday)
Total Construction: Construction spending during March 2019 was estimated at a seasonally adjusted annual rate of $1,282.2 billion, 0.9 percent (±1.0 percent)* below the revised February estimate of $1,293.3 billion. The March figure is 0.8 percent (±1.5 percent)* below the March 2018 estimate of $1,293.1 billion. During the first three months ...

Construction Spending Increased 1.0% in February
Posted: April 1, 2019 at 10:00 AM (Monday)
Total Construction: Construction spending during February 2019 was estimated at a seasonally adjusted annual rate of $1,320.3 billion, 1.0 percent (±0.8 percent) above the revised January estimate of $1,307.3 billion. The February figure is 1.1 percent (±1.5 percent)* above the February 2018 estimate of $1,305.5 billion. During the first two ...

Construction Spending Increased 1.3% in January
Posted: March 13, 2019 at 10:00 AM (Wednesday)
Total Construction: Construction spending during January 2019 was estimated at a seasonally adjusted annual rate of $1,279.6 billion, 1.3 percent (±0.8 percent) above the revised December estimate of $1,263.1 billion. The January figure is 0.3 percent (±1.2 percent)* above the January 2018 estimate of ...

Construction Spending Decreased 0.6% in December
Posted: March 4, 2019 at 10:00 AM (Monday)
Total Construction: Construction spending during December 2018 was estimated at a seasonally adjusted annual rate of $1,292.7 billion, 0.6 percent (±1.0 percent)* below the revised November estimate of $1,300.6 billion. The December figure is 1.6 percent (±1.2 percent) above the December 2017 estimate of $1,272.6 billion. The value of ...

Construction Spending Increased 0.8% in November
Posted: February 1, 2019 at 10:00 AM (Friday)
Total Construction: Construction spending during November 2018 was estimated at a seasonally adjusted annual rate of $1,299.9 billion, 0.8 percent (±1.3 percent)* above the revised October estimate of $1,289.7 billion. The November figure is 3.4 percent (±1.5 percent) above the November 2017 estimate of $1,257.3 billion. During the first ...

Construction Spending Decreased 0.1% in October
Posted: December 3, 2018 at 10:02 AM (Monday)
Total Construction: Construction spending during October 2018 was estimated at a seasonally adjusted annual rate of $1,308.8 billion, 0.1 percent (±1.5 percent)* below the revised September estimate of $1,310.8 billion. The October figure is 4.9 percent (±1.6 percent) above the October 2017 estimate of $1,247.5 billion. During the first ten ...
Construction Spending Unchanged in September
Posted: November 1, 2018 at 10:00 AM (Thursday)
Total Construction: Construction spending during September 2018 was estimated at a seasonally adjusted annual rate of $1,329.5 billion, nearly the same as (±1.5 percent)* the revised August estimate of $1,328.8 billion. The September figure is 7.2 percent (±1.8 percent) above the September 2017 estimate of $1,240.4 billion. During the first ...

Construction Spending Increased 0.1% in August
Posted: October 1, 2018 at 10:00 AM (Monday)
Total Construction: Construction spending during August 2018 was estimated at a seasonally adjusted annual rate of $1,318.5 billion, 0.1 percent (±1.6 percent)* above the revised July estimate of $1,317.4 billion. The August figure is 6.5 percent (±2.0 percent) above the August 2017 estimate of $1,237.5 billion. During the first eight months ...

Construction Spending Increased 0.1% in July
Posted: September 4, 2018 at 10:00 AM (Tuesday)
Total Construction: Construction spending during July 2018 was estimated at a seasonally adjusted annual rate of $1,315.4 billion, 0.1 percent (±1.5 percent)* above the revised June estimate of $1,314.2 billion. The July figure is 5.8 percent (±1.8 percent) above the July 2017 estimate of $1,242.8 billion. During the first seven months of ...

Construction Spending Decreased 1.1% in June
Posted: August 1, 2018 at 10:00 AM (Wednesday)
Total Construction: Construction spending during June 2018 was estimated at a seasonally adjusted annual rate of $1,317.2 billion, 1.1 percent (±1.0 percent) below the revised May estimate of $1,332.2 billion. The June figure is 6.1 percent (±1.6 percent) above the June 2017 estimate of $1,241.3 billion. During the first six months of this ...

Construction Spending Increased 0.4% in May
Posted: July 2, 2018 at 10:00 AM (Monday)
Total Construction: Construction spending during May 2018 was estimated at a seasonally adjusted annual rate of $1,309.5 billion, 0.4 percent (±1.3 percent)* above the revised April estimate of $1,304.5 billion. The May figure is 4.5 percent (±1.6 percent) above the May 2017 estimate of $1,253.6 billion. During the first five months of this ...

Construction Spending Increased 1.8% in April
Posted: June 1, 2018 at 10:00 AM (Friday)
Total Construction: Construction spending during April 2018 was estimated at a seasonally adjusted annual rate of $1,310.4 billion, 1.8 percent (±1.0 percent) above the revised March estimate of $1,286.8 billion. The April figure is 7.6 percent (±1.5 percent) above the April 2017 estimate of $1,217.7 billion. During the first four months of ...
"Construction Spending down 1.7% in MarchPosted: May 1, 2018 at 10:00 AM (Tuesday)Total Construction Construction spending during March 2018 was estimated at a seasonally adjusted annual rate of \\$1,284.7 billion, 1.7 percent (±0.8 percent) below the revised February estimate of \\$1,306.4 billion. The March figure is 3.6 percent (±1.3 percent) above the March 2017 estimate of \\$1,239.6 billion. During the first three months ...",
null,
"Construction Spending up 0.1% in FebruaryPosted: April 2, 2018 at 10:00 AM (Monday)Total Construction Construction spending during February 2018 was estimated at a seasonally adjusted annual rate of \\$1,273.1 billion, 0.1 percent (±1.2 percent)* above the revised January estimate of \\$1,272.2 billion. The February figure is 3.0 percent (±1.5 percent) above the February 2017 estimate of \\$1,235.7 billion. During the first two ...",
null,
""
]
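The month-over-month and year-over-year figures quoted in these releases follow the standard percent-change formula applied to the seasonally adjusted annual rates. A minimal sketch, using the figures from the November 2018 release above (the helper name is my own, not from the releases):

```python
def percent_change(current, previous):
    """Percent change from `previous` to `current`, as reported in the releases."""
    return (current - previous) / previous * 100

# November 2018 release: $1,299.9B vs. revised October estimate of $1,289.7B
mom = percent_change(1299.9, 1289.7)
print(round(mom, 1))  # 0.8, matching the reported "0.8 percent"

# ...and vs. the November 2017 estimate of $1,257.3B
yoy = percent_change(1299.9, 1257.3)
print(round(yoy, 1))  # 3.4, matching the reported "3.4 percent"
```

Note that the releases also attach sampling-error ranges (e.g. "±1.3 percent"); when the range includes zero, the Census Bureau flags the change with an asterisk as not statistically distinguishable from no change.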
| [
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null,
"http://www.onewall.com/icons/ecblank.gif",
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.969383,"math_prob":0.8900579,"size":21818,"snap":"2022-05-2022-21","text_gpt3_token_len":6071,"char_repetition_ratio":0.26735124,"word_repetition_ratio":0.4110672,"special_character_ratio":0.3252819,"punctuation_ratio":0.20424293,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96101505,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-22T20:59:15Z\",\"WARC-Record-ID\":\"<urn:uuid:a40288af-5712-4518-9a0b-e560286c40d3>\",\"Content-Length\":\"82355\",\"Content-Type\":\"application/http; 
msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:db56268f-0420-4579-b83e-d85151fdce40>\",\"WARC-Concurrent-To\":\"<urn:uuid:0317f48f-1323-43ff-8b63-85bf46b8976a>\",\"WARC-IP-Address\":\"76.8.55.35\",\"WARC-Target-URI\":\"http://www.onewall.com/homepage.nsf/PBview?Open&1FSY=Tag&RestrictToCategory=ResearchZ.ZEconomics:CONSTRUCTIONQ.QSPENDING\",\"WARC-Payload-Digest\":\"sha1:GZSWKEERPLU4XCZC565Z7WFNGFPVFRHS\",\"WARC-Block-Digest\":\"sha1:KXC2NRCPRTGNHJ6A3NR5543JFN5R4WVJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662546071.13_warc_CC-MAIN-20220522190453-20220522220453-00296.warc.gz\"}"} |
https://betterlesson.com/lesson/555060/compensating-to-compute-larger-numbers?from=breadcrumb_lesson | [
"# Compensating to Compute Larger Numbers\n\n## Objective\n\nSWBAT use compensation to check the addition and subtraction algorithms for accuracy.\n\n#### Big Idea\n\nStudents will make numbers friendlier before computing and will compensate the answer later on.\n\n## Opening\n\n20 minutes\n\nToday's Number Talk\n\nFor a detailed description of the Number Talk procedure, please refer to the Number Talk Explanation. For this Number Talk, I am encouraging students to represent their thinking using an open number line model.\n\nFor the first task, students subtracted parts of 800 from 8,200. For example, this student subtracted 200 to get to the landmark number, 8,000, and then she subtracted the rest of the 800: 8,200-800\n\nDuring the next task, some students chose to decompose the 8,000 and then subtract one part at a time: 92,000-8,000. Other students showed how to Subtract Down or Add Up.\n\nFor the final task, students decomposed 80,000 in a variety of ways and subtracted: 520,000-80,000. To help some students, I drew a Model on the Board to help students see the connection between 520 - 80 and 520,000 - 80,000.\n\n## Teacher Demonstration\n\n45 minutes\n\nReasoning for Teaching Multiple Strategies\n\nDuring this Addition and Subtraction Unit, I truly wanted to focus on Math Practice 2: Reason abstractly and quantitatively. 
I knew that if students learned multiple strategies of adding and subtracting numbers, I wouldn’t only be providing them with multiple pathways to learning, but I would also be encouraging students to engage in “quantitative reasoning” by “making sense of quantities and their relationships in problem situations.” By teaching students how to use a variety of strategies, such as using number lines, bar diagrams, decomposing, compensating, transformation, and subtracting from nines, I hoped students would begin to see numbers as units and quantities that can be computed with flexibility.\n\nPowerPoint Presentation\n\nIn order to continue providing students with compensation practice, I created a PowerPoint presentation called Compensation Practice Day 2. This way, I could continue to provide students with a rigorous learning progression. Yesterday, students did a great job adding and subtracting 2-digit to 4-digit numbers and checking their work using compensation. Today, I wanted students to continue applying the compensation strategy to 5-digit and 6-digit numbers.\n\nGoal & Vocabulary\n\nTo begin, I showed the first slide, which was the Goal, and reminded students: Remember, your goal is to be able to say, \"I can check the addition and subtraction algorithms using compensation.\" We reviewed the meaning of compensation: Compensation Meaning. Then, we took another look at the Compensation Example.\n\nGuided Instruction\n\nWe then moved on to the first problem, 61,027 + 29,985. I asked students: What could I add or subtract to make this an easier problem to solve? A student suggested, \"Take away 27.\" I then Modeled on the Board as the student directed the calculations. Other students began coming up with their own strategies on their white boards. Here, one student is Compensating by Subtracting 1,027. Another student decided to try Adjusting Both Addends by subtracting 27 from one addend and adding 15 to the other addend. 
I was excited to see that this student correctly added the 27 and subtracted the 15 to adjust the solution.\n\nAt this point, we moved on to solving the rest of the problems in the same fashion: (61,027 - 29,985, 482,160 + 179,000, and 482,160 - 179,000).\n\nFor the last task, students used multiple strategies. Here, in Student Strategy 1, a student added 840 to the first addend and 1,000 to the second addend. Later, he subtracted 1,000, then 800, and then 40. Another student, Student Strategy 2, solved this problem by subtracting 160. Then, he tried a second strategy where he subtracted 160 from the minuend and added 1,000 to the subtrahend. Once students were given plenty of time to practice compensating, we then shared, discussed, and modeled student strategies as a class: Model 482,160-179,000.\n\n## Student Practice\n\n30 minutes\n\nFor independent practice time, I created 2 practice pages by copying & pasting portions of worksheets found at Math-Aids.com. I wanted to provide students with the space necessary to check the addition and subtraction algorithms using compensation: Compensation Practice Page 2. As students finished, they compared their answers with others at the back table.\n\nDuring this time, I conferenced with as many students as possible to question, encourage higher-level thinking, and to provide support."
]
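The compensation strategy this lesson demonstrates — adjust an operand to a friendlier number, compute, then undo the adjustment — can be sketched in a few lines. This is my own illustration of the technique, not part of the lesson materials; the function names are invented:

```python
def add_by_compensation(a, b, adjust):
    """Add `adjust` to b to make it a friendly number, then subtract it back out."""
    friendly_sum = a + (b + adjust)
    return friendly_sum - adjust

def subtract_by_compensation(a, b, adjust):
    """Add `adjust` to the subtrahend b, then add it back to the result:
    subtracting too much leaves the answer `adjust` too small."""
    friendly_diff = a - (b + adjust)
    return friendly_diff + adjust

# 61,027 + 29,985: add 15 so the second addend becomes a friendly 30,000
assert add_by_compensation(61027, 29985, 15) == 61027 + 29985 == 91012

# 482,160 - 179,000: add 1,000 so the subtrahend becomes a friendly 180,000
assert subtract_by_compensation(482160, 179000, 1000) == 303160
```

The direction of the final adjustment is exactly what the lesson asks students to reason about: an addend made larger means the sum must be reduced, while a subtrahend made larger means the difference must be increased.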
| [
null
]
| {"ft_lang_label":"__label__en","ft_lang_prob":0.93912894,"math_prob":0.67327726,"size":4288,"snap":"2021-43-2021-49","text_gpt3_token_len":979,"char_repetition_ratio":0.15966387,"word_repetition_ratio":0.00877193,"special_character_ratio":0.24626866,"punctuation_ratio":0.15511164,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98703986,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-29T05:40:32Z\",\"WARC-Record-ID\":\"<urn:uuid:5d8801d5-7b7b-41bd-8d60-1b68b89ff12f>\",\"Content-Length\":\"165396\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1287af24-e2a7-4cca-bd4a-2ae287da267e>\",\"WARC-Concurrent-To\":\"<urn:uuid:aa02a1ac-4631-4e5c-8677-6799e922db4f>\",\"WARC-IP-Address\":\"52.206.231.121\",\"WARC-Target-URI\":\"https://betterlesson.com/lesson/555060/compensating-to-compute-larger-numbers?from=breadcrumb_lesson\",\"WARC-Payload-Digest\":\"sha1:775GW4WZBA7MBTXXB33ENMWAD6CM2UUQ\",\"WARC-Block-Digest\":\"sha1:ATYJ2ZD7HHEMLL2NDREHU3GJFHVD6AVU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358688.35_warc_CC-MAIN-20211129044311-20211129074311-00278.warc.gz\"}"} |