URL — stringlengths (15 to 1.68k)
text_list — sequencelengths (1 to 199)
image_list — sequencelengths (1 to 199)
metadata — stringlengths (1.19k to 3.08k)
https://www.colorhexa.com/01430b
[ "# #01430b Color Information\n\nIn a RGB color space, hex #01430b is composed of 0.4% red, 26.3% green and 4.3% blue. Whereas in a CMYK color space, it is composed of 98.5% cyan, 0% magenta, 83.6% yellow and 73.7% black. It has a hue angle of 129.1 degrees, a saturation of 97.1% and a lightness of 13.3%. #01430b color hex could be obtained by blending #028616 with #000000. Closest websafe color is: #003300.\n\n• R 0\n• G 26\n• B 4\nRGB color chart\n• C 99\n• M 0\n• Y 84\n• K 74\nCMYK color chart\n\n#01430b color description : Very dark lime green.\n\n# #01430b Color Conversion\n\nThe hexadecimal color #01430b has RGB values of R:1, G:67, B:11 and CMYK values of C:0.99, M:0, Y:0.84, K:0.74. Its decimal value is 82699.\n\nHex triplet RGB Decimal 01430b `#01430b` 1, 67, 11 `rgb(1,67,11)` 0.4, 26.3, 4.3 `rgb(0.4%,26.3%,4.3%)` 99, 0, 84, 74 129.1°, 97.1, 13.3 `hsl(129.1,97.1%,13.3%)` 129.1°, 98.5, 26.3 003300 `#003300`\nCIE-LAB 23.819, -31.778, 26.942 2.08, 4.045, 0.988 0.292, 0.569, 4.045 23.819, 41.662, 139.708 23.819, -22.057, 26.511 20.111, -16.734, 11.166 00000001, 01000011, 00001011\n\n# Color Schemes with #01430b\n\n• #01430b\n``#01430b` `rgb(1,67,11)``\n• #430139\n``#430139` `rgb(67,1,57)``\nComplementary Color\n• #184301\n``#184301` `rgb(24,67,1)``\n• #01430b\n``#01430b` `rgb(1,67,11)``\n• #01432c\n``#01432c` `rgb(1,67,44)``\nAnalogous Color\n• #430118\n``#430118` `rgb(67,1,24)``\n• #01430b\n``#01430b` `rgb(1,67,11)``\n• #2c0143\n``#2c0143` `rgb(44,1,67)``\nSplit Complementary Color\n• #430b01\n``#430b01` `rgb(67,11,1)``\n• #01430b\n``#01430b` `rgb(1,67,11)``\n• #0b0143\n``#0b0143` `rgb(11,1,67)``\n• #394301\n``#394301` `rgb(57,67,1)``\n• #01430b\n``#01430b` `rgb(1,67,11)``\n• #0b0143\n``#0b0143` `rgb(11,1,67)``\n• #430139\n``#430139` `rgb(67,1,57)``\n• #000000\n``#000000` `rgb(0,0,0)``\n• #001103\n``#001103` `rgb(0,17,3)``\n• #012a07\n``#012a07` `rgb(1,42,7)``\n• #01430b\n``#01430b` `rgb(1,67,11)``\n• #015c0f\n``#015c0f` `rgb(1,92,15)``\n• #027513\n``#027513` `rgb(2,117,19)``\n• #028e17\n``#028e17` `rgb(2,142,23)``\nMonochromatic Color\n\n# Alternatives to #01430b\n\nBelow, you can see some colors close to #01430b. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #084301\n``#084301` `rgb(8,67,1)``\n• #024301\n``#024301` `rgb(2,67,1)``\n• #014306\n``#014306` `rgb(1,67,6)``\n• #01430b\n``#01430b` `rgb(1,67,11)``\n• #014311\n``#014311` `rgb(1,67,17)``\n• #014316\n``#014316` `rgb(1,67,22)``\n• #01431c\n``#01431c` `rgb(1,67,28)``\nSimilar Colors\n\n# #01430b Preview\n\nThis text has a font color of #01430b.\n\n``<span style=\"color:#01430b;\">Text here</span>``\n#01430b background color\n\nThis paragraph has a background color of #01430b.\n\n``<p style=\"background-color:#01430b;\">Content here</p>``\n#01430b border color\n\nThis element has a border color of #01430b.\n\n``<div style=\"border:1px solid #01430b;\">Content here</div>``\nCSS codes\n``.text {color:#01430b;}``\n``.background {background-color:#01430b;}``\n``.border {border:1px solid #01430b;}``\n\n# Shades and Tints of #01430b\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #000901 is the darkest color, while #f5fff6 is the lightest one.\n\n• #000901\n``#000901` `rgb(0,9,1)``\n• #001c05\n``#001c05` `rgb(0,28,5)``\n• #013008\n``#013008` `rgb(1,48,8)``\n• #01430b\n``#01430b` `rgb(1,67,11)``\n• #01560e\n``#01560e` `rgb(1,86,14)``\n• #026a11\n``#026a11` `rgb(2,106,17)``\n• #027d15\n``#027d15` `rgb(2,125,21)``\n• #029018\n``#029018` `rgb(2,144,24)``\n• #02a41b\n``#02a41b` `rgb(2,164,27)``\n• #03b71e\n``#03b71e` `rgb(3,183,30)``\n• #03ca21\n``#03ca21` `rgb(3,202,33)``\n• #03de24\n``#03de24` `rgb(3,222,36)``\n• #04f128\n``#04f128` `rgb(4,241,40)``\n• #0dfb31\n``#0dfb31` `rgb(13,251,49)``\n• #20fc41\n``#20fc41` `rgb(32,252,65)``\n• #33fc52\n``#33fc52` `rgb(51,252,82)``\n• #47fc62\n``#47fc62` `rgb(71,252,98)``\n• #5afd73\n``#5afd73` `rgb(90,253,115)``\n• #6dfd83\n``#6dfd83` `rgb(109,253,131)``\n• #81fd94\n``#81fd94` `rgb(129,253,148)``\n• #94fda4\n``#94fda4` `rgb(148,253,164)``\n• #a7feb4\n``#a7feb4` `rgb(167,254,180)``\n• #bbfec5\n``#bbfec5` `rgb(187,254,197)``\n• #cefed5\n``#cefed5` `rgb(206,254,213)``\n• #e1ffe6\n``#e1ffe6` `rgb(225,255,230)``\n• #f5fff6\n``#f5fff6` `rgb(245,255,246)``\nTint Color Variation\n\n# Tones of #01430b\n\nA tone is produced by adding gray to any pure hue. In this case, #202421 is the less saturated color, while #01430b is the most saturated one.\n\n• #202421\n``#202421` `rgb(32,36,33)``\n• #1e261f\n``#1e261f` `rgb(30,38,31)``\n• #1b291d\n``#1b291d` `rgb(27,41,29)``\n• #192b1b\n``#192b1b` `rgb(25,43,27)``\n• #162e1a\n``#162e1a` `rgb(22,46,26)``\n• #133118\n``#133118` `rgb(19,49,24)``\n• #113316\n``#113316` `rgb(17,51,22)``\n• #0e3614\n``#0e3614` `rgb(14,54,20)``\n• #0b3912\n``#0b3912` `rgb(11,57,18)``\n• #093b10\n``#093b10` `rgb(9,59,16)``\n• #063e0f\n``#063e0f` `rgb(6,62,15)``\n• #04400d\n``#04400d` `rgb(4,64,13)``\n• #01430b\n``#01430b` `rgb(1,67,11)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #01430b is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5385951,"math_prob":0.7263284,"size":3647,"snap":"2023-40-2023-50","text_gpt3_token_len":1632,"char_repetition_ratio":0.12928905,"word_repetition_ratio":0.007380074,"special_character_ratio":0.5673156,"punctuation_ratio":0.23756906,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99479276,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-06T11:19:08Z\",\"WARC-Record-ID\":\"<urn:uuid:e8c1dc38-d310-4f21-9820-5e4961162a17>\",\"Content-Length\":\"36102\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6ae9bc86-793e-4729-b2c3-90dab6e5daf2>\",\"WARC-Concurrent-To\":\"<urn:uuid:0016a93c-aaff-4d03-98f8-1200ff2e1496>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/01430b\",\"WARC-Payload-Digest\":\"sha1:SR7APUAK6UKRYV5W7LUIVP25YMIX736P\",\"WARC-Block-Digest\":\"sha1:V6N376D53FHNFL5UR5BTBWQVRVJPNQVT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100593.71_warc_CC-MAIN-20231206095331-20231206125331-00340.warc.gz\"}"}
https://ch.mathworks.com/help/matlab/math/factorizations.html
[ "## Factorizations\n\n### Introduction\n\nAll three of the matrix factorizations discussed in this section make use of triangular matrices, where all the elements either above or below the diagonal are zero. Systems of linear equations involving triangular matrices are easily and quickly solved using either forward or back substitution.\n\n### Cholesky Factorization\n\nThe Cholesky factorization expresses a symmetric matrix as the product of a triangular matrix and its transpose\n\nA = RR,\n\nwhere R is an upper triangular matrix.\n\nNot all symmetric matrices can be factored in this way; the matrices that have such a factorization are said to be positive definite. This implies that all the diagonal elements of A are positive and that the off-diagonal elements are “not too big.” The Pascal matrices provide an interesting example. Throughout this chapter, the example matrix `A` has been the 3-by-3 Pascal matrix. Temporarily switch to the 6-by-6:\n\n```A = pascal(6) A = 1 1 1 1 1 1 1 2 3 4 5 6 1 3 6 10 15 21 1 4 10 20 35 56 1 5 15 35 70 126 1 6 21 56 126 252```\n\nThe elements of `A` are binomial coefficients. Each element is the sum of its north and west neighbors. The Cholesky factorization is\n\n```R = chol(A) R = 1 1 1 1 1 1 0 1 2 3 4 5 0 0 1 3 6 10 0 0 0 1 4 10 0 0 0 0 1 5 0 0 0 0 0 1```\n\nThe elements are again binomial coefficients. The fact that `R'*R` is equal to `A` demonstrates an identity involving sums of products of binomial coefficients.\n\nNote\n\nThe Cholesky factorization also applies to complex matrices. Any complex matrix that has a Cholesky factorization satisfies\n\nA′ = A\n\nand is said to be Hermitian positive definite.\n\nThe Cholesky factorization allows the linear system\n\nAx = b\n\nto be replaced by\n\nRRx = b.\n\nBecause the backslash operator recognizes triangular systems, this can be solved in the MATLAB® environment quickly with\n\n`x = R\\(R'\\b)`\n\nIf `A` is n-by-n, the computational complexity of `chol(A)` is O(n3), but the complexity of the subsequent backslash solutions is only O(n2).\n\n### LU Factorization\n\nLU factorization, or Gaussian elimination, expresses any square matrix A as the product of a permutation of a lower triangular matrix and an upper triangular matrix\n\nA = LU,\n\nwhere L is a permutation of a lower triangular matrix with ones on its diagonal and U is an upper triangular matrix.\n\nThe permutations are necessary for both theoretical and computational reasons. The matrix\n\n`$\\left[\\begin{array}{cc}0& 1\\\\ 1& 0\\end{array}\\right]$`\n\ncannot be expressed as the product of triangular matrices without interchanging its two rows. Although the matrix\n\n`$\\left[\\begin{array}{cc}\\epsilon & 1\\\\ 1& 0\\end{array}\\right]$`\n\ncan be expressed as the product of triangular matrices, when ε is small, the elements in the factors are large and magnify errors, so even though the permutations are not strictly necessary, they are desirable. 
Partial pivoting ensures that the elements of L are bounded by one in magnitude and that the elements of U are not much larger than those of A.\n\nFor example:\n\n```[L,U] = lu(B)\nL =\n 1.0000 0 0\n 0.3750 0.5441 1.0000\n 0.5000 1.0000 0\nU =\n 8.0000 1.0000 6.0000\n 0 8.5000 -1.0000\n 0 0 5.2941```\n\nThe LU factorization of `A` allows the linear system\n\n`A*x = b`\n\nto be solved quickly with\n\n`x = U\\(L\\b)`\n\nDeterminants and inverses are computed from the LU factorization using\n\n`det(A) = det(L)*det(U)`\n\nand\n\n`inv(A) = inv(U)*inv(L)`\n\nYou can also compute the determinants using `det(A) = prod(diag(U))`, though the signs of the determinants might be reversed.\n\n### QR Factorization\n\nAn orthogonal matrix, or a matrix with orthonormal columns, is a real matrix whose columns all have unit length and are perpendicular to each other. If Q is orthogonal, then\n\nQᵀQ = I,\n\nwhere I is the identity matrix.\n\nThe simplest orthogonal matrices are two-dimensional coordinate rotations:\n\n`$\\left[\\begin{array}{cc}\\mathrm{cos}\\left(\\theta \\right)& \\mathrm{sin}\\left(\\theta \\right)\\\\ -\\mathrm{sin}\\left(\\theta \\right)& \\mathrm{cos}\\left(\\theta \\right)\\end{array}\\right].$`\n\nFor complex matrices, the corresponding term is unitary. Orthogonal and unitary matrices are desirable for numerical computation because they preserve length, preserve angles, and do not magnify errors.\n\nThe orthogonal, or QR, factorization expresses any rectangular matrix as the product of an orthogonal or unitary matrix and an upper triangular matrix. A column permutation might also be involved:\n\nA = QR\n\nor\n\nAP = QR,\n\nwhere Q is orthogonal or unitary, R is upper triangular, and P is a permutation.\n\nThere are four variants of the QR factorization—full or economy size, and with or without column permutation.\n\nOverdetermined linear systems involve a rectangular matrix with more rows than columns, that is, m-by-n with m > n. The full-size QR factorization produces a square, m-by-m orthogonal Q and a rectangular m-by-n upper triangular R:\n\n```C=gallery('uniformdata',[5 4], 0);\n[Q,R] = qr(C)\nQ =\n 0.6191 0.1406 -0.1899 -0.5058 0.5522\n 0.1506 0.4084 0.5034 0.5974 0.4475\n 0.3954 -0.5564 0.6869 -0.1478 -0.2008\n 0.3167 0.6676 0.1351 -0.1729 -0.6370\n 0.5808 -0.2410 -0.4695 0.5792 -0.2207\nR =\n 1.5346 1.0663 1.2010 1.4036\n 0 0.7245 0.3474 -0.0126\n 0 0 0.9320 0.6596\n 0 0 0 0.6648\n 0 0 0 0```\n\nIn many cases, the last m – n columns of Q are not needed because they are multiplied by the zeros in the bottom portion of R. So the economy-size QR factorization produces a rectangular, m-by-n Q with orthonormal columns and a square n-by-n upper triangular R. For the 5-by-4 example, this is not much of a saving, but for larger, highly rectangular matrices, the savings in both time and memory can be quite important:\n\n```[Q,R] = qr(C,0)\nQ =\n 0.6191 0.1406 -0.1899 -0.5058\n 0.1506 0.4084 0.5034 0.5974\n 0.3954 -0.5564 0.6869 -0.1478\n 0.3167 0.6676 0.1351 -0.1729\n 0.5808 -0.2410 -0.4695 0.5792\nR =\n 1.5346 1.0663 1.2010 1.4036\n 0 0.7245 0.3474 -0.0126\n 0 0 0.9320 0.6596\n 0 0 0 0.6648```\n\nIn contrast to the LU factorization, the QR factorization does not require any pivoting or permutations. But an optional column permutation, triggered by the presence of a third output argument, is useful for detecting singularity or rank deficiency. At each step of the factorization, the column of the remaining unfactored matrix with largest norm is used as the basis for that step. 
This ensures that the diagonal elements of R occur in decreasing order and that any linear dependence among the columns is almost certainly revealed by examining these elements. For the small example given here, the second column of `C` has a larger norm than the first, so the two columns are exchanged:\n\n```[Q,R,P] = qr(C)\nQ =\n -0.3522 0.8398 -0.4131\n -0.7044 -0.5285 -0.4739\n -0.6163 0.1241 0.7777\nR =\n -11.3578 -8.2762\n 0 7.2460\n 0 0\nP =\n 0 1\n 1 0```\n\nWhen the economy-size factorization and the column permutation are combined, the third output argument is a permutation vector, rather than a permutation matrix:\n\n```[Q,R,p] = qr(C,0)\nQ =\n -0.3522 0.8398\n -0.7044 -0.5285\n -0.6163 0.1241\nR =\n -11.3578 -8.2762\n 0 7.2460\np =\n 2 1```\n\nThe QR factorization transforms an overdetermined linear system into an equivalent triangular system. The expression\n\n`norm(A*x - b)`\n\nequals\n\n`norm(Q*R*x - b)`\n\nMultiplication by orthogonal matrices preserves the Euclidean norm, so this expression is also equal to\n\n`norm(R*x - y)`\n\nwhere `y = Q'*b`. Since the last m-n rows of R are zero, this expression breaks into two pieces:\n\n`norm(R(1:n,1:n)*x - y(1:n))`\n\nand\n\n`norm(y(n+1:m))`\n\nWhen `A` has full rank, it is possible to solve for `x` so that the first of these expressions is zero. Then the second expression gives the norm of the residual. When `A` does not have full rank, the triangular structure of `R` makes it possible to find a basic solution to the least-squares problem.\n\n### Using Multithreaded Computation for Factorization\n\nMATLAB software supports multithreaded computation for a number of linear algebra and element-wise numerical functions. These functions automatically execute on multiple threads. For a function or expression to execute faster on multiple CPUs, a number of conditions must be true:\n\n1. The function performs operations that easily partition into sections that execute concurrently. These sections must be able to execute with little communication between processes. They should require few sequential operations.\n\n2. The data size is large enough so that any advantages of concurrent execution outweigh the time required to partition the data and manage separate execution threads. For example, most functions speed up only when the array contains several thousand elements or more.\n\n3. The operation is not memory-bound; processing time is not dominated by memory access time. As a general rule, complicated functions speed up more than simple functions.\n\n`lu` and `qr` show a significant increase in speed on large double-precision arrays (on the order of 10,000 elements)." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85138535,"math_prob":0.99511373,"size":8289,"snap":"2022-40-2023-06","text_gpt3_token_len":2301,"char_repetition_ratio":0.14797828,"word_repetition_ratio":0.048028674,"special_character_ratio":0.2979853,"punctuation_ratio":0.12803158,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9981797,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-29T11:33:44Z\",\"WARC-Record-ID\":\"<urn:uuid:6f57c3b2-730d-492b-9ea2-e62f05d00b4d>\",\"Content-Length\":\"88026\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eac5b46e-f49f-40f0-bc9a-4b8629ed2ac0>\",\"WARC-Concurrent-To\":\"<urn:uuid:c3a9d15e-404b-413b-a573-db0d0197d815>\",\"WARC-IP-Address\":\"104.68.243.15\",\"WARC-Target-URI\":\"https://ch.mathworks.com/help/matlab/math/factorizations.html\",\"WARC-Payload-Digest\":\"sha1:FA2IHIMZWRIBGARUFJU5QVII54LLWGYO\",\"WARC-Block-Digest\":\"sha1:ILXCUDWN4BI2SHWUQXQOOGUAT5TETKMF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499713.50_warc_CC-MAIN-20230129112153-20230129142153-00435.warc.gz\"}"}
https://m.scirp.org/papers/65674
[ "A Schistosomiasis Model with Diffusion Effects\nAbstract\nIn this paper, we propose a schistosomiasis model in which two human groups share the water contaminated by schistosomiasis and migrate each other. The dynamical behavior of the model is studied. By calculation, the threshold value is given, which determines whether the disease will be extinct or not. The existence and global stability of the parasite-free equilibrium and the locally stability of the endemic equilibrium are discussed. Numerical simulations indicate that the diffusion from the mild endemic village to severe endemic village is benefit to control schistosomiasis transmission; otherwise it is bad for the disease control.\n\nReceived 30 January 2016; accepted 17 April 2016; published 20 April 2016", null, "1. Introduction\n\nSchistosomiasis is frequently a serious health problem, which was first described by Theodor Bilharz in 1851, after whom the disease was initially named bilharzia . The WHO has recently identified schistosomiasis as the second most important human parasitic disease in the world, after malaria . The infection is endemic in approximately 70 countries with about 200 million people affected worldwide , and resulting in about 200,000 deaths annually . Despite major advances in its control that have lead to substantial decreases in morbidity and mortality, schistosomiasis continues to spread to new geographic areas . Although significant progress has been made in chemotherapy with safer and more effective drugs, these cannot prevent the high reinfection rates of schistosomes, and there have been dramatic recurrences in both its prevalence and associated morbidity .\n\nDuring their complex developmental cycle, schistosomes alternate between a mammalian host and a snail host through the medium of fresh water. Mammals are infected by free-swimming larval forms of the parasite called cercariae. These larvae enter through the skin, and mature through different larval stages while circulating through the blood to the lungs before entering the hepatic portal system as mature males and females. They release thousands of eggs daily, which are discharged in the faeces after a damaging passage through the intestinal wall. Once into the fresh water, the eggs hatch and produce free-swimming miracidia, which infect amphibious snails from the genus Oncomelania. The miracidia reproduce asexually through sporocyst stages within these intermediate hosts, resulting in the production of many free-swimming cercariae - .\n\nMacDonald (1965) was the first to use simple mathematical models to study the transmission dynamics of schistosomiasis . The earliest models of schistosomiasis described the population sizes of both humans and snails to be constant . In , authors considered that models were based on describing the dynamics of transmission between man and snails. Previous several models focused on the interactions between one group of human hosts and schistosomes in a contaminated water resource(for example ). However, in realistic situations, the contaminated water might be shared by several human groups. In , Feng et al. proposed a model that described the disease dynamics involved two migrated human groups. They also analyzed the mathematical properties of the systems. 
Meanwhile, they established models with multiple human groups and found some structural similarities between the models involving two human groups and those involving n groups.\n\nThe incidence rate plays an important role in the modeling of epidemic dynamics. In many epidemic models, the bilinear incidence rate", null, "and the standard incidence rate", null, "are frequently used. The saturated incidence rate", null, ", where", null, "denotes the infection force of schistosomiasis and", null, "with", null, "describes the psychological or inhibition effect from the behavioral change of the susceptible individuals as the number of infective individuals increases. It seems more reasonable than the bilinear incidence rate", null, ": it is a good approximation if the number of available contacts is large enough that nobody can make more contacts than is practically feasible, it includes the behavioral change and crowding effect of the infective individuals, and it keeps the contact rate bounded. In this paper, we develop a new mathematical model with a saturated incidence function and a diffusion effect. The diffusion effect has been studied in many previous works. Numerical simulations demonstrate that the diffusion effect is an important parameter for epidemic transmission or species survival.\n\nIn order to keep the model manageable, Feng et al. assumed that the disease-induced death rate of snails", null, "in. Previous studies suggested that the disease-induced death rate of snails", null, "was an important parameter in the study of population dynamics. In this paper, we first investigate a schistosomiasis model with saturated incidence and a diffusion effect, in which the disease-induced death rate of snails", null, "is taken into consideration. Further, by the spectral radius theory, we get the threshold value", null, ", below which the parasites die out, and above which the disease persists. When the threshold", null, ", the model may undergo a bifurcation, and we study the exchange of stability between the disease-free and endemic equilibria at the bifurcation point.\n\nThis paper is organized as follows. In Section 2, we introduce the model formulation. In Section 3, we analyze the equilibrium states of the model. The basic reproduction number of the model is determined and the stability of the equilibria is studied. Numerical simulations and control strategies are presented in Section 4. Finally, we summarize and discuss the results in Section 5.\n\n2. Model Formulation\n\nFeng et al. proposed a schistosomiasis model with age dependence:", null, "(1)\n\nwhere N, P, S, I, C denote the numbers of human hosts living in the village, adult parasites hosted by human hosts in the village, uninfected snails, infected snails and free-living cercariae, respectively.", null, "is the infection-age, and", null, "is the infection-age density of snails at time t. k is the clumping parameter which determines the degree of over-dispersion in the negative binomial distribution. 
The following parameters are used in system (1), all of them positive:\n\nis the recruitment rate of human hosts;\n\nis the recruitment rate of snails;\n\nis the per capita natural death rate of human hosts;\n\nis the per capita death rate of adult parasites;\n\nis the disease-induced death rate of humans per parasite;\n\nis the effective treatment rate of human hosts;\n\nis the per capita natural death rate of snails;\n\nis the disease-induced death rate of snails;\n\nis the per capita (successful) rate of infection of snails by miracidia produced by one pair of adult parasites;\n\nis the per capita (successful) rate of infection of humans by one cercaria;\n\nis the releasing rate of cercariae, when the infection age is.\n\nFeng et al. considered two neighboring villages sharing the same contaminated water resource, with migration between these two villages, and proposed the following model based on system (1).\n\n(2)\n\nwhere is the recruitment rate of human hosts of village i and is the immigration rate of human hosts from village i to village j,. For system (2), Feng et al. made the following assumptions:\n\n1) the snails do not move;\n\n2) the parasites are overdispersed;\n\n3) they have negative binomial distributions among human hosts with clumping parameters;\n\n4) the releasing rate of cercariae is infection-age independent, i.e.,. Thus,.\n\nIn system (2), the authors introduced the bilinear incidence rate. However, the number of uninfected snails that the adult parasites can contact within a certain time is limited, so the saturated incidence may be more suitable for the realistic situation. The following new model with the saturated incidence function is derived:\n\n(3)\n\nwhere is the limitation on the growth velocity of infection of snails. In a contaminated water resource, many people are infected, which develops into chronic disease if not treated. Current control programs primarily focus on chemotherapy with Praziquantel, a very effective drug that kills almost all of the adult parasites residing within the patient. Thus, the disease-induced death rate of human hosts is very small. For analysing the properties of the model, we let. Then the first two equations become:\n\n(4)\n\nThe equilibrium points are obtained by setting the right-hand side of system (4) to zero; we solve the following system of equations:\n\n(5)\n\nThe unique solution of system (5) is, which is globally asymptotically stable, where\n\nwith and.\n\nTherefore, we have the following four-dimensional limit system of system (3), which summarizes the above result.\n\n(6)\n\nThe existence and the uniqueness of solutions of system (6) can be proved by using standard methods (see, for example, ).\n\n3. Equilibrium States\n\nIn this section, the equilibrium states of system (6) are discussed. System (6) admits two steady states. We establish sufficient conditions for the global asymptotic stability of the infection-free solution and for the permanence of system (6).\n\n3.1. Boundedness\n\nThe model (6) describes the dynamics of adult parasites and snails. It is important to prove that these populations are positive and bounded for any positive initial data. So we have the following results.\n\nTheorem 1. If is any solution of system (6), and, , and, then, , , for all.\n\nProof. From the first equation of system (6), we have\n\nAfter integrating, we obtain\n\nSimilarly,\n\nand\n\nHence, we conclude that the solution of system (6) is always positive for all.\n\nTheorem 2. 
For any nonnegative initial data, the solution of system (6) is bounded for all time.\n\nProof. From the last two equations in system (6), we have\n\nConsider the comparison system\n\nIt is easy to see that as. Thus\n\n(7)\n\nIt follows from the first and second equations of (6) and (7) that\n\nSimilarly as above, and is an ultimate upper bound of and, respectively. The proof is completed.\n\nThe equilibrium states of the basic model are obtained by setting the right-hand side of system (6) to zero. System (6) has two steady states: the disease-free equilibrium and the endemic equilibrium.\n\n3.2. The Disease-Free Equilibrium\n\nAt the disease-free state, there are no adult parasites or infected snails, and hence no infection in the host or the intermediate host. Thus, system (6) has a disease-free equilibrium\n\nwhere\n\nIn many epidemic models, the basic reproductive number is a key parameter. It refers to the expected number of secondary infections during the entire period of infectiousness in a completely susceptible population. Following the idea in , we give the basic reproductive number for system (6). Rewrite system (6) in the following form:\n\nwhere. S denotes the number of uninfected snails, while the components of Y represent the numbers of adult parasites hosted by human hosts in the village and infected snails, respectively. Following the notation in , we compute the matrices A, M and D as\n\n(8)\n\nwhere. Obviously, and is a diagonal matrix. The basic reproductive number is the spectral radius (dominant eigenvalue) of the matrix, that is,\n\nThus, in this case\n\n(9)\n\nwhere\n\nand\n\nWe know that represents the schistosomiasis transmission coefficient in village 1, and represents the schistosomiasis transmission coefficient in village 2.\n\nFrom the above discussion, we have the following result.\n\nTheorem 3. The disease-free equilibrium point is locally asymptotically stable if and unstable if.\n\nNext, we give two conditions which guarantee the global asymptotic stability of the disease-free state.\n\n(H1) For, is globally asymptotically stable.\n\n(H2), for, where, is an M-matrix.\n\nFor system (6), we have\n\nand A is given in (8). It is clear that for all. It is easy to see that the conditions (H1) and (H2) hold. According to the result in the literature , we have the following result.\n\nTheorem 4. The disease-free equilibrium is globally asymptotically stable provided that and the assumptions (H1) and (H2) are satisfied.\n\n3.3. The Endemic Equilibrium\n\nFirst, we show the existence of the unique endemic equilibrium when. Expressing in terms of, we can derive from system (6) as follows.\n\nSubstituting the expressions for, and into the fourth equation of system (6) we get\n\n(10)\n\nwhere\n\nBy solving (10) for we get one of the solutions as which corresponds to the disease-free equilibrium. For implies that. Since, then the endemic equilibrium exists. The results on the existence of the endemic equilibrium of system (6) can be summarized in the following lemma.\n\nLemma 5. The system (6) always has a disease-free equilibrium and a unique endemic equilibrium when.\n\nCenter Manifold Theory has been used to determine the local stability of a nonhyperbolic equilibrium; we now employ it to establish the local asymptotic stability of the endemic equilibrium. In order to apply the Center Manifold Theory, we make the following change of variables. Let, , ,. Now we use the vector notation. 
Then the system (6) is written in following form\n\nsuch that\n\n(11)\n\nEvaluating the Jacobian matrix of system (11) at the disease-free equilibrium, it can be shown that the reproduction number is\n\nTake as the bifurcation parameter. Considering the case and solving for, we get\n\nWe notice that the linearized system (11) of the transformed equation with, has a simple zero eigenvalue. Hence, Center Manifold Theory can be used to analyze the dynamics of (13) near. By Theorem 4.1 in Castillo-Chavez and Song , it can be shown that the Jacobian matrix at has a right eigenvector of associated with the zero eigenvalue given by, where\n\nThe left eigenvector of associated with the zero eigenvalue at is given by, where\n\nWe now use the following lemma whose proof is found in .\n\nLemma 6. Consider the following general system of ordinary differential equations with a parameter,\n\n(12)\n\nwhere 0 is an equilibrium of the system, that is for all and assume\n\nA1: is the linearization of system (12) around the equilibrium 0 with\n\nevaluated at 0. Zero is a simple eigenvalue of A and other eigenvalues of A have negative real parts;\n\nA2: Matrix A has a right eigenvector u and a left eigenvector v corresponding to the zero eigenvalue. Let be the kth component of f and\n\n(13)\n\nThe local dynamics of (12) around 0 are totally governed by a and b.\n\n1). when with, 0 is locally asymptotically stable, and there exists a positive unstable equilibrium; when, 0 is unstable and there exists a negative and locally asymptotically stable equilibrium;\n\n2). when with, 0 is unstable; when, 0 is locally asymptotically stable, and there exists a positive unstable equilibrium;\n\n3). when with, 0 is unstable, and there exists a locally asymptotically stable negative equilibrium; when, 0 is stable and a positive unstable equilibrium appears;\n\n4). When changes from negative to positive, 0 changes its stability from stable to unstable. Correspondingly a negative unstable equilibrium becomes positive and locally asymptotically stable.\n\nWe now compute a and b, for system (11), the associated non-zero partial derivatives of at the disease free equilibrium are given by\n\nSubstituting the above expressions into (13), we get\n\nFor the sign of b, it is associated with the following non-vanishing partial derivatives of,\n\nIt follows from the above expression that\n\nThus, and. According to Lemma 6, item (iv), we can yield the following result which only holds for, but close to 1.\n\nTheorem 7. The unique endemic equilibrium is locally asymptotically stable for near 1.\n\nIn summary, model (6) has a disease-free equilibrium which is globally asymptotically stable when, and a unique endemic equilibrium point when. The unique endemic equilibrium is locally asymptotically stable at least near. We use numerical simulations to show the existence and stability of endemic equilibrium.\n\n4. Numerical Simulations and Control Strategies\n\nIn this section, in order to understand our results more intuitively, some numerical simulations of system (6) that support and extend the conclusions of previous sections are carried out. We use year as unit of time, and choose the parameters, , , , , , , , , , , , , , ,.\n\nIn Figure 1, we show the relationship between the threshold and adult parasites for the mathematical model (6). It is easy to see that is a bifurcation point, and the adult parasites are stable eventually, when the threshold increases. Otherwise, the adult parasites are extinct. 
This implies that when the threshold is greater than unity, schistosomiasis will be endemic. Figure 1 and Figure 2 show that if the threshold is less than unity, schistosomiasis will go extinct.\n\nFigure 1. The relationship between the threshold and for system (6).\n\nFigure 2. Time series of solutions for system (6). The disease will be extinct eventually.\n\nTo see the relative effect of migration in each village, we plot the curved surface of the relationship between, and. From Figure 3 and Figure 4, we can observe that decreases dramatically when increases and is fixed at a small value, and increases sharply when increases and is fixed at a small value. This implies that migration from the severely endemic village to the mildly endemic village is bad for disease control.\n\nIn Figure 5, we consider the infection rates, and as the control factors. We plot the curved surface of the threshold as a function of and. We observe that the threshold decreases dramatically when and decrease. Decreasing the infection rates thus helps prevent schistosomiasis transmission.\n\n5. Conclusion and Discussion\n\nAs a tropical disease, schistosomiasis continues to be a significant public health threat in the world.\n\nFigure 3. Sensitivity plot: the relationship between the threshold and the migration rates,.\n\nFigure 4. The relationship between the threshold and the migration rates,.\n\nFigure 5. Sensitivity plot: the relationship between the threshold and the migration rates,.\n\nFollowing the pioneering work of Feng et al. on modeling schistosomiasis, we established and analyzed a schistosomiasis model with a diffusion effect and a saturated incidence function, in which two groups of humans share the water contaminated by schistosomiasis and migrate between their villages. We derived the basic reproduction number and proved that the disease-free equilibrium is globally asymptotically stable when, and the unique endemic equilibrium is locally asymptotically stable when is larger than 1 and near 1. Our results indicate that the diffusion rates and the infection rates play an important role in the determination of the permanence and extinction of schistosomiasis. Diffusion from the mildly endemic village to the severely endemic village is beneficial for controlling schistosomiasis transmission.\n\nIn realistic situations, there might be several human groups sharing the contaminated water resource. Considering only the model with two human groups is insufficient; we expect similar results to hold in higher-dimensional systems with n human groups and migration. We conjecture that the model with n human groups has mathematical properties similar to those of the two-group model.\n\nAcknowledgements\n\nThe research has been supported by The Natural Science Foundation of China (11561004, 11261004), The Supporting the Development for Local Colleges and Universities Foundation of China-Applied Mathematics Innovative Team Building, the 12th Five-year Education Scientific Planning Project of Jiangxi Province (15ZD3LYB031), The Natural Science Foundation of Jiangxi Province (20151BAB201016) and the Social Science Planning Projects of Jiangxi Province (14XW08).\n\nCite this paper\nLiu, Y., Lv, H. and Gao, S. (2016) A Schistosomiasis Model with Diffusion Effects. Applied Mathematics, 7, 587-598. doi: 10.4236/am.2016.77054.\nReferences\n\n   Ross, A.G.P., Bartley, P.B., Sleigh A.C., et al. (2002) Schistosomiasis. 
The New England Journal of Medicine, 346, 1212-1220.\nhttp://dx.doi.org/10.1056/NEJMra012396\n\n   Croft, S.L., Vivas, L. and Brooker, S. (2003) Recent Advances in Research and Control of Malaria, Leishmaniasis, Trypanosomiasis and Schistosomiasis. Eastern Mediterranean Health Journal, 9, 518-533.\n\n   Huang, Y.-X. and Manderson, L. (2005) The Social and Economic Context and Determinants of Schistosomiasis Japonica. Acta Tropica, 96, 223-231.\nhttp://dx.doi.org/10.1016/j.actatropica.2005.07.015\n\n   Thétiot-Laurent, S.A.L., Boissier, J., Robert, A. and Meunier, B. (2013) Schistosomiasis Chemotherapy. Angewandte Chemie International Edition, 52, 7936-7956.\nhttp://dx.doi.org/10.1002/anie.201208390\n\n   Patz, J.A., Graczyk, T.K., Geller, N., et al. (2000) Effects of Environmental Change on Emerging Parasitic Diseases. International Journal for Parasitology, 30, 1395-1405.\nhttp://dx.doi.org/10.1016/S0020-7519(00)00141-7\n\n   Zhang, S.-M., Lv, Z.-Y., Zhou, H.-J., et al. (2008) Characterization of a Profilin-Like Protein from Schistosoma Japonicum, a Potential New Vaccine Candidate. Parasitology Research, 102, 1367-1374.\nhttp://dx.doi.org/10.1007/s00436-008-0919-2\n\n   McManus, D.P., Gray, D.J., Li, Y., et al. (2010) Schistosomiasis in the People’s Republic of China: The Era of the Three Gorges Dam. Clinical Microbiology Reviews, 23, 442-466.\nhttp://dx.doi.org/10.1128/CMR.00044-09\n\n   Ross, A.G., Sleigh, A.C., Li, Y., et al. (2001) Schistosomiasis in the People’s Republic of China: Prospects and Challenges for the 21st Century. Clinical Microbiology Reviews, 14, 270-295.\nhttp://dx.doi.org/10.1128/CMR.14.2.270-295.2001\n\n   Luo, R., Zhou, C., Lin, J., et al. (2012) Identification of in Vivo Protein Phosphorylation Sites in Human Pathogen Schistosoma Japonicum by a Phosphoproteomic Approach. Journal of Proteomics, 75, 868-877.\nhttp://dx.doi.org/10.1016/j.jprot.2011.10.003\n\n   The Schistosoma japonicum Genome Sequencing and Functional Analysis Consortium (2009) The Schistosoma Japonicum Genome Reveals Features of Host-Parasite Interplay. Nature, 460, 345-351.\nhttp://dx.doi.org/10.1038/nature08140\n\n   Macdonald, G. (1965) The Dynamics of Helminth Infections, with Special Reference to Schistosomes. Transactions of the Royal Society of Tropical Medicine & Hygiene, 59, 489-506.\nhttp://dx.doi.org/10.1016/0035-9203(65)90152-5\n\n   N?sell, I. and Hirsch, W.M. (1973) The Transmission Dynamics of Schistosomiasis. Communications on Pure and Applied Mathematics, 26, 395-453.\nhttp://dx.doi.org/10.1002/cpa.3160260402\n\n   Woolhouse, M.E. (1992) On the Application of Mathematical Models of Schistosome Transmission Dynamics 2. Control. Acta Tropica, 50, 189-204.\nhttp://dx.doi.org/10.1016/0001-706X(92)90076-A\n\n   Liang, S., Maszle, D. and Spear, R.C. (2002) A Quantitative Framework for a Multi-Group Model of Schistosomiasis Japonicum Transmission Dynamics and Control in Sichuan, China. Acta Tropica, 82, 263-277.\nhttp://dx.doi.org/10.1016/S0001-706X(02)00018-9\n\n   Feng, Z., Li, C.C. and Milner, F.A. (2005) Schistosomiasis Models with Two Migrating Human Groups. Mathematical and Computer Modelling, 41, 1213-1230.\nhttp://dx.doi.org/10.1016/j.mcm.2004.10.023\n\n   Feng, Z., Li, C.C. and Milner, F.A. (2002) Schistosomiasis Models with Density Dependence and Age of Infection in Snail Dynamics. Mathematical Biosciences, 177-178, 271-286.\nhttp://dx.doi.org/10.1016/S0025-5564(01)00115-8\n\n   Bhattacharyya, R. and Mukhopadhyay, B. 
(2010) Analysis of Periodic Solutions in an Eco-Epidemiological Model with Saturation Incidence and Latency Delay. Nonlinear Analysis: Hybrid Systems, 4, 176-188.\nhttp://dx.doi.org/10.1016/j.nahs.2009.09.007\n\n   Xu, R. and Du, Y. (2009) A Delayed Sir Epidemic Model with Saturation Incidence and a Constant Infectious Period. Journal of Applied Mathematics and Computing, 35, 229-250.\nhttp://dx.doi.org/10.1007/s12190-009-0353-3\n\n   Carr, J. (1981) Applications of Centre Manifold Theory. Applied Mathematical Sciences, 35, 1-36.\nhttp://dx.doi.org/10.1007/978-1-4612-5929-9\n\n   Yu, X., Wu, C. and Weng, P. (2012) Traveling Waves for a Sirs Model with Nonlocal Diffusion. International Journal of Biomathematics, 05, 1250036.\nhttp://dx.doi.org/10.1142/S1793524511001787\n\n   Liu, J., Zhou, H. and Zhang, L. (2012) Cross-Diffusion Induced Turing Patterns in a Sex-Structured Predator-Prey Model. International Journal of Biomathematics, 5, 1250016.\nhttp://dx.doi.org/10.1142/S179352451100157X\n\n   Chen, S., Shi, J. and Wei, J. (2012) Time Delay-Induced Instabilities and Hopf Bifurcations in General Reaction-Diffusion Systems. Journal of Nonlinear Science, 23, 1-38.\nhttp://dx.doi.org/10.1007/s00332-012-9138-1\n\n   Chiyaka, E.T. and Garira, W. (2009) Mathematical Analysis of the Transmission Dynamics of Schistosomiasis in the Human-Snail Hosts. Journal of Biological Systems, 17, 397-423.\nhttp://dx.doi.org/10.1142/S0218339009002910\n\n   Hale, J.K. (1969) Ordinary Differential Equations. Applied Mathematical Sciences, John Wiley & Sons Inc, New Jersey, 12-20.\n\n   Anderson, R.M. and May, R.M. (1991) Infectious Diseases of Humans: Dynamics and Control. Oxford University Press, Oxford.\n\n   Chavez, C.C., Feng, Z. and Huang, W. (2002) On the Computation of R0 and Its Role in Global Stability. In: Mathematical Approaches for Emerging and Re-Emerging Infection Diseases: An Introduction. The IMA Volumes in Mathematics and Its Applications, Vol. 125, Springer, New York, 31-65.\n\n   Castillo-Chavez, C. and Song, B. (2004) Dynamical Models of Tuberculosis and Their Applications. Mathematical Biosciences and Engineering: MBE, 1, 361-404.\nhttp://dx.doi.org/10.3934/mbe.2004.1.361\n\nTop" ]
[ null, "https://html.scirp.org/file/2-7403079x5.png", null, "https://html.scirp.org/file/2-7403079x6.png", null, "https://html.scirp.org/file/2-7403079x7.png", null, "https://html.scirp.org/file/2-7403079x8.png", null, "https://html.scirp.org/file/2-7403079x9.png", null, "https://html.scirp.org/file/2-7403079x10.png", null, "https://html.scirp.org/file/2-7403079x11.png", null, "https://html.scirp.org/file/2-7403079x12.png", null, "https://html.scirp.org/file/2-7403079x13.png", null, "https://html.scirp.org/file/2-7403079x14.png", null, "https://html.scirp.org/file/2-7403079x15.png", null, "https://html.scirp.org/file/2-7403079x16.png", null, "https://html.scirp.org/file/2-7403079x17.png", null, "https://html.scirp.org/file/2-7403079x18.png", null, "https://html.scirp.org/file/2-7403079x19.png", null, "https://html.scirp.org/file/2-7403079x20.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89981097,"math_prob":0.8455129,"size":21572,"snap":"2020-10-2020-16","text_gpt3_token_len":4973,"char_repetition_ratio":0.14345327,"word_repetition_ratio":0.06406015,"special_character_ratio":0.22913963,"punctuation_ratio":0.15153752,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.963174,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-03-31T14:26:09Z\",\"WARC-Record-ID\":\"<urn:uuid:8c7264a7-4cb5-44ff-b7f9-dfa0af418c97>\",\"Content-Length\":\"79262\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c1f349d0-8b23-49bb-a382-11e4a93e9d7a>\",\"WARC-Concurrent-To\":\"<urn:uuid:5cdac6aa-8e1c-4c14-bc1e-e886d75cca09>\",\"WARC-IP-Address\":\"104.149.186.66\",\"WARC-Target-URI\":\"https://m.scirp.org/papers/65674\",\"WARC-Payload-Digest\":\"sha1:DPOWNOF7LL3RATW5MA5SQA4WMU3FTYO3\",\"WARC-Block-Digest\":\"sha1:PF3IKHBOCEXD3XTGJ3CIWVT7NRJRVXWV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370500482.27_warc_CC-MAIN-20200331115844-20200331145844-00079.warc.gz\"}"}
https://www.wired.com/2017/05/physics-of-a-fidget-spinner/
[ "# Let’s Explore the Physics of Rotational Motion With a Fidget Spinner\n\nAmerica's latest fad provides yet another opportunity to explain physics.\n\nYou may find fidget spinners, or at least the attention they're getting, annoying. I find them kind of cool, because they make handy tools for explaining physics. I've already shown how you can estimate the spin time of a fidget spinner. Now I will measure the moment of inertia.\n\nWhoa. I know. Moment of inertia. If that term makes you freak out a little, I completely understand. It can be complicated, but it's not so bad. Let me explain.\n\nMoment of Inertia\n\nImagine a small boat and a large boat floating alongside a dock. Push either of them with your foot and it moves. You can easily move, and stop, the small boat. But once you get the big boat moving, you have a heck of a time stopping it. Why? Mass. You can think of mass as the property that makes it difficult to change the motion of an object.\n\nTechnically, this is all part of the momentum principle, which you can represent with these equations:", null, "Now imagine you have a rotating object and you want to change its rotational motion from, say, counterclockwise to clockwise. You must consider two things: the mass of the object, and the distribution of that mass about the axis of rotation. Physicists call this quantity the moment of inertia---or, as I like to call it, the rotational mass.\n\nIf you'd like to see how two objects with the same mass can exhibit a different moment of inertia, try this simple demo. Tape two identical masses to a stick. I used juice boxes. If you place the two masses at the end of the stick, changing the rotation motion is difficult. This is a high moment of inertia. Place them near the center, however, and changing the rotation is easier. That's a low moment of inertia. I made a video to explain this. Yes, it's a bit old, but it still checks out.\n\nJust like there is the momentum principle for linear motion, there is also the angular momentum principle. You can express it with three equations:", null, "The first equation is the angular momentum principle. It states that a torque (which is like a rotational force) changes the angular momentum. The second equation is the definition of angular momentum where I represents the moment of inertia and ω represents the angular velocity (all of these are written as scalars for rotation about a fixed axis). Finally, the third equation shows one way of calculating the moment of inertia. For an object comprised of many smaller objects---say, for example, a collection of bowling balls or some beads connected by toothpicks---simply multiply the mass of each piece and the distance from the axis squared.\n\nThat's all you need to know about the moment of inertia for now.\n\nPhysical Pendulum\n\nNow, I could in theory measure the mass and size of a fidget spinner to calculate the moment of inertia using that third formula. But it would be tough, because the spinner doesn't have a nice mathematical shape and it doesn't have a uniform density. Instead, I will determine the moment of inertia with a physical pendulum.\n\nYou probably are familiar with a small mass swinging from a string. That's a basic pendulum. It has a period of oscillation of:", null, "No, I won't derive this expression because that is more complicated than you think. But in this expression, L represents the length of the string and g represents the local gravitational field (9.8 N/kg). 
Also, I like to use T for the period since P is too obvious (and already taken).\n\nNow, if you replace the string with something rigid like, say, a stick, you have a physical pendulum. In this case, you determine the period of oscillation for small amplitudes with:", null, "In this case, I represents the moment of inertia about the axis of rotation (the pivot point). L represents the distance from the pivot to the center of mass and m represents the mass of the object. So you could imagine swinging an object and finding the moment of inertia from the period of oscillation. Yes, but what if you want the moment of inertia through some other axis? In that case, you use the parallel axis theorem. It states that if you know the moment of inertia for an object about an axis that runs through its center of mass, then the moment of inertia about some other axis (but parallel to the first, hence the name), is:", null, "Here m represents the mass of the object and d the distance from the center of mass axis to the new axis.\n\nMeasuring the Moment of Inertia\n\nNow for an experiment. I will secure a fidget spinner to a stick with a mass of just 1 gram. I'll attach the stick to a rotation sensor so I can measure the angle of the stick and the period of oscillation.", null, "If I want to find the moment of inertia about the center of the spinner, I can use the period of a physical pendulum and the parallel axis theorem to determine this relationship (I skipped some algebraic stuff):", null, "Here is the key part---something I try to get my introductory physics students to understand. I won't determine just one period and use it to find I for the spinner. Instead, I will measure the period with the spinner at some distance (L). Then I will change the distance and find the new period. With this data, I can plot T multiplied by L vs. L² (actually I will plot all of that stuff on the left side of the equation). Yes, I know that looks weird, but it should be a straight line, and the y-intercept will be the moment of inertia of the spinner. Boom.\n\nOh, what about the mass of the stick? And the mass of the rubber band securing the spinner? Yes, technically those matter---but I will proceed anyway and you can't stop me.\n\nThe y-intercept of this plot is 5.4 x 10⁻⁵ kg*m². That is the moment of inertia of the fidget spinner. But wait! Let me get a rough approximation to see if this is in the right ballpark. If the spinner were a solid disk of uniform density, it would have a moment of inertia of:", null, "I know the spinner has a mass of 0.0519 kg and a radius of around 3.7 cm. Putting this into the approximation, I get a calculated value of 7.1 x 10⁻⁵ kg*m²---close enough. Also, it should be clear that the fidget spinner is neither a disk nor of uniform density.\n\nNow for a bonus: The same experiment with a metal ring.", null, "Placing the ring at different distances, I get a similar plot:\n\nFrom the y-intercept, I get a moment of inertia with a value of 1.27 x 10⁻⁴ kg*m². The ring has a radius of 0.0375 meters and mass of 0.0919 kg. Since the moment of inertia for a ring is just MR², I can calculate the theoretical value for this object. The calculated moment of inertia is 1.29 x 10⁻⁴ kg*m². Dang. That actually worked. Mostly." ]
[ null, "https://www.wired.com/wp-content/uploads/2017/05/la_te_xi_t_18.jpg", null, "https://www.wired.com/wp-content/uploads/2017/05/la_te_xi_t_19.jpg", null, "https://www.wired.com/wp-content/uploads/2017/05/la_te_xi_t_110.jpg", null, "https://www.wired.com/wp-content/uploads/2017/05/la_te_xi_t_111.jpg", null, "https://www.wired.com/wp-content/uploads/2017/05/la_te_xi_t_112.jpg", null, "https://www.wired.com/wp-content/uploads/2017/05/photo_google_photos1.jpg", null, "https://www.wired.com/wp-content/uploads/2017/05/la_te_xi_t_113.jpg", null, "https://www.wired.com/wp-content/uploads/2017/05/la_te_xi_t_114.jpg", null, "https://www.wired.com/wp-content/uploads/2017/05/photo_google_photos2.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95083463,"math_prob":0.92429864,"size":1873,"snap":"2022-40-2023-06","text_gpt3_token_len":413,"char_repetition_ratio":0.12734082,"word_repetition_ratio":0.0,"special_character_ratio":0.21516284,"punctuation_ratio":0.1388889,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9947225,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-05T18:14:21Z\",\"WARC-Record-ID\":\"<urn:uuid:2573fa34-9c64-4590-8fc2-5af94d62b23a>\",\"Content-Length\":\"929408\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:990f2d07-3de4-4b09-94df-68d025b7e293>\",\"WARC-Concurrent-To\":\"<urn:uuid:51ed7c28-8c50-46ba-a347-65db40017528>\",\"WARC-IP-Address\":\"146.75.34.194\",\"WARC-Target-URI\":\"https://www.wired.com/2017/05/physics-of-a-fidget-spinner/\",\"WARC-Payload-Digest\":\"sha1:5GJIBXGKIHXGMFHXWXOT2Q2L2EOYPPCS\",\"WARC-Block-Digest\":\"sha1:QNMMOMBWZGTCSQAXLOZ6JGWTTT5HGKL7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500273.30_warc_CC-MAIN-20230205161658-20230205191658-00293.warc.gz\"}"}
https://askfilo.com/user-question-answers-mathematics/14-the-length-of-the-common-chord-of-two-intersecting-34333837333136
[ "", null, "World's only instant tutoring platform", null, "Question", null, "", null, "", null, "# 14. The length of the common chord of two intersecting circles is . If the radii of the two circles are and , find the distance between their centres.15. The line joining mid-points of two chords of a circle passes through its centre. Prove that the chords are parallel.16. If a diameter of a circle is perpendicular to one of two parallel chords of the circle, prove that it is perpendicular to the other and bisects it.17. In an equilateral triangle, prove that the centroid and the circumcentre of the triangle coincide.Hint.Prove that medians are perpendicular bisectors of the sides of triangle.18 (a) In the figure (i) given below, is perpendicular to the chord of a circle whose centre is . If is a diameter, show that .", null, "## Filo tutor solutions (1)\n\nLearn from their 1-to-1 discussion with Filo tutors.\n\n11 mins", null, "Connect instantly with this tutor\n\nConnect now\n\nTaught by", null, "Dushyant Kumar\n\nTotal classes on Filo by this tutor - 3,582\n\nTeaches : Mathematics, Science, English\n\nConnect instantly with this tutor\n\nNotes from this class (1 pages)", null, "", null, "", null, "", null, "" ]
[ null, "https://askfilo.com/images/logo.svg", null, "https://askfilo.com/images/icons/navbar.svg", null, "https://askfilo.com/images/icons/rotate.svg", null, "https://askfilo.com/images/icons/expand.svg", null, "https://static-images.findfilo.com/classroom/1677214662618_rzzzduxm_3735277.jpg", null, "https://misc-images.cdn.askfilo.com/fsoesyedqh_video-solution-title-icon.webp", null, "https://askfilo.com/images/logo-inverted.svg", null, "https://lh3.googleusercontent.com/OvT3Bml4JPBRbKAOs1bfF9Y9c5wo5kBip3EXMIJ8a3Gbhdyg8m5JdjjON7RUg2cdOnC0xT1dYCbV4QM3XtJypihC0J-7ekI3QfWy8Q=rw-w96-h96-p", null, "https://askfilo.com/images/icons/dropdown-arrow.svg", null, "https://storage.googleapis.com/filo-classroom-notes/thumb_classroom_27756410_OAN0G.jpeg", null, "https://misc-images.cdn.askfilo.com/rgciveuboo_connect-tutor-mb.webp", null, "https://misc-images.cdn.askfilo.com/jkqnpudesz_connect-tutor.webp", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78951925,"math_prob":0.8935187,"size":4585,"snap":"2023-40-2023-50","text_gpt3_token_len":1407,"char_repetition_ratio":0.12813796,"word_repetition_ratio":0.4479042,"special_character_ratio":0.27393675,"punctuation_ratio":0.11608961,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98451895,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,1,null,null,null,null,null,1,null,null,null,1,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-29T07:21:46Z\",\"WARC-Record-ID\":\"<urn:uuid:eb5aa644-428c-4adf-9dca-c2102a969b1a>\",\"Content-Length\":\"171031\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0ea3e3e7-404f-43f3-8e52-4e7fc167cae4>\",\"WARC-Concurrent-To\":\"<urn:uuid:d4b9b434-f35b-4ed6-9591-d2929a9174c0>\",\"WARC-IP-Address\":\"151.101.1.55\",\"WARC-Target-URI\":\"https://askfilo.com/user-question-answers-mathematics/14-the-length-of-the-common-chord-of-two-intersecting-34333837333136\",\"WARC-Payload-Digest\":\"sha1:ORE3OUBTP2YXOSXMIFNZ4MASSOB4OPI2\",\"WARC-Block-Digest\":\"sha1:GP32S3ESPOFQ4A744JC6E4NFKFHFGZQD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510498.88_warc_CC-MAIN-20230929054611-20230929084611-00383.warc.gz\"}"}
https://artofproblemsolving.com/wiki/index.php?title=2006_AIME_II_Problems/Problem_10&diff=prev&oldid=23997
[ "# Difference between revisions of \"2006 AIME II Problems/Problem 10\"\n\n## Problem\n\nSeven teams play a soccer tournament in which each team plays every other team exactly once. No ties occur, each team has a", null, "$50\\%$ chance of winning each game it plays, and the outcomes of the games are independent. In each game, the winner is awarded a point and the loser gets 0 points. The total points are accumilated to decide the ranks of the teams. In the first game of the tournament, team", null, "$A$ beats team", null, "$B.$ The probability that team", null, "$A$ finishes with more points than team", null, "$B$ is", null, "$m/n,$ where", null, "$m$ and", null, "$n$ are relatively prime positive integers. Find", null, "$m+n.$\n\n## Solution\n\nYou can break this into cases based on how many rounds A wins out of the remaining 5 games.\n\nIf A wins 0 games, then B must win 0 games and the probability of this is", null, "$\\frac{{5 \\choose 0}}{2^5} \\frac{{5 \\choose 0}}{2^5} = \\frac{1}{1024}$.\n\nIf A wins 1 games, then B must win 1 or less games and the probability of this is", null, "$\\frac{{5 \\choose 1}}{2^5} \\frac{{5 \\choose 0}+{5 \\choose 1}}{2^5} = \\frac{30}{1024}$.\n\nIf A wins 2 games, then B must win 2 or less games and the probability of this is", null, "$\\frac{{5 \\choose 2}}{2^5} \\frac{{5 \\choose 0}+{5 \\choose 1}+{5 \\choose 2}}{2^5} = \\frac{160}{1024}$.\n\nIf A wins 3 games, then B must win 3 or less games and the probability of this is", null, "$\\frac{{5 \\choose 3}}{2^5} \\frac{{5 \\choose 0}+{5 \\choose 1}+{5 \\choose 2}+{5 \\choose 3}}{2^5} = \\frac{260}{1024}$.\n\nIf A wins 4 games, then B must win 4 or less games and the probability of this is", null, "$\\frac{{5 \\choose 4}}{2^5} \\frac{{5 \\choose 0}+{5 \\choose 1}+{5 \\choose 2}+{5 \\choose 3}+{5 \\choose 4}}{2^5} = \\frac{155}{1024}$.\n\nIf A wins 5 games, then B must win 5 or less games and the probability of this is", null, "$\\frac{{5 \\choose 5}}{2^5} \\frac{{5 \\choose 0}+{5 \\choose 1}+{5 \\choose 2}+{5 \\choose 3}+{5 \\choose 4}+{5 \\choose 5}}{2^5} = \\frac{32}{1024}$.\n\nSumming these 6 cases, we get", null, "$\\frac{638}{1024}$, which simplifies to", null, "$\\frac{319}{512}$, so our answer is", null, "$319 + 512 = 831$." ]
[ null, "https://latex.artofproblemsolving.com/9/f/7/9f7f4de02215a8c5319886c13b2e291811346e1c.png ", null, "https://latex.artofproblemsolving.com/0/1/9/019e9892786e493964e145e7c5cf7b700314e53b.png ", null, "https://latex.artofproblemsolving.com/b/9/5/b95fbfb49b675ba11392ba7c2603d0c87f89be76.png ", null, "https://latex.artofproblemsolving.com/0/1/9/019e9892786e493964e145e7c5cf7b700314e53b.png ", null, "https://latex.artofproblemsolving.com/f/f/5/ff5fb3d775862e2123b007eb4373ff6cc1a34d4e.png ", null, "https://latex.artofproblemsolving.com/2/7/1/271cdd78c4c10cde94410584ce7282874413cb29.png ", null, "https://latex.artofproblemsolving.com/f/5/0/f5047d1e0cbb50ec208923a22cd517c55100fa7b.png ", null, "https://latex.artofproblemsolving.com/1/7/4/174fadd07fd54c9afe288e96558c92e0c1da733a.png ", null, "https://latex.artofproblemsolving.com/5/d/b/5db26f75f6218a382840f0ed34f5d36bcff61d33.png ", null, "https://latex.artofproblemsolving.com/f/d/4/fd4b24e0e6259492c9f710addc05982ccd5e7413.png ", null, "https://latex.artofproblemsolving.com/4/a/3/4a3186136adbabf988e16d4af9b9c610c870dd41.png ", null, "https://latex.artofproblemsolving.com/4/0/d/40df27c711317b1a7b7a1316e6b81ee2e08dee20.png ", null, "https://latex.artofproblemsolving.com/d/c/c/dcc3c10d39056c854c71612caa4de7f1f1829ee1.png ", null, "https://latex.artofproblemsolving.com/9/f/0/9f02b49009ea3cbb3df79dbc35a5d67d681a2402.png ", null, "https://latex.artofproblemsolving.com/1/a/4/1a4571094b0f69dc932dd67f5b5ac42f8613ee4e.png ", null, "https://latex.artofproblemsolving.com/4/8/0/480f7f96dc98dd741fc76bb9a5e3809060710514.png ", null, "https://latex.artofproblemsolving.com/a/3/6/a36b36a967bba271d050f2df96adda23c0d3975d.png ", null, "https://latex.artofproblemsolving.com/f/e/5/fe5ede7f6103a9c419e4a70809a87883df288857.png ", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8840218,"math_prob":0.9998379,"size":2655,"snap":"2021-04-2021-17","text_gpt3_token_len":863,"char_repetition_ratio":0.19954734,"word_repetition_ratio":0.46484375,"special_character_ratio":0.3581921,"punctuation_ratio":0.07304348,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9989516,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,9,null,8,null,9,null,9,null,9,null,9,null,null,null,null,null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-19T18:35:31Z\",\"WARC-Record-ID\":\"<urn:uuid:2a547d1f-cd4e-4901-94da-54e934903372>\",\"Content-Length\":\"47451\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:018189e8-e3f5-4b8e-b39d-1ca2de3eb905>\",\"WARC-Concurrent-To\":\"<urn:uuid:baa7ccd2-c58e-4c54-a172-c247cec5be35>\",\"WARC-IP-Address\":\"104.26.11.229\",\"WARC-Target-URI\":\"https://artofproblemsolving.com/wiki/index.php?title=2006_AIME_II_Problems/Problem_10&diff=prev&oldid=23997\",\"WARC-Payload-Digest\":\"sha1:M2QHOZSD2JFLAOB765SLIBFMO3VQHDZM\",\"WARC-Block-Digest\":\"sha1:2OJYCXFX74CDBUBJJFG3FBRU3MUZJPHQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703519600.31_warc_CC-MAIN-20210119170058-20210119200058-00363.warc.gz\"}"}
https://engineering.fandom.com/wiki/Momentum
[ "In physics, momentum is the product of the mass and velocity of an object.\n\n## Momentum in Classical mechanics\n\nIf an object is moving in any reference frame, then it has momentum in that frame. The amount of momentum that an object has depends on two variables: the mass and the velocity of the moving object in the frame of reference. This can be written as:\n\nmomentum = mass × velocity\n\nIn physics, the symbol for momentum is a small p, so the above equation can be rewritten as:", null, "where m is the mass and v the velocity. The SI unit of momentum is kilogram metres per second (kg m/s).\n\nThe velocity of an object is given by its speed and its direction. Because momentum depends on velocity, it too has a magnitude and a direction: it is a vector quantity. For example the momentum of a 5-kg bowling ball would have to be described by the statement that it was moving westward at 2 m/s. It is insufficient to say that the ball has 10 kg m/s of momentum; the momentum of the ball is not fully described until information about its direction is given.\n\n### Impulse\n\nA step change in an object's momentum is known as an impulse:\n\nThe impulse (mass × change in velocity) = force applied × the time over which the force was applied.", null, "## Conservation of momentum\n\nBecause of the way it is defined, momentum always appears to be conserved. In the absence of external forces, a system will have constant total momentum: a property that is implied by Newton's law of inertia, his first law of motion. Also, Newton's third law of motion, the law of reciprocal actions, dictates that the forces acting between systems are equal, which is equivalent to a statement of the conservation of momentum.\n\n### Conservation of momentum and collisions\n\nMomentum has the special property that it is always conserved, even in collisions. Kinetic energy, on the other hand, is not conserved in collisions if they are inelastic. Since momentum is conserved it can be used to calculate unknown velocities following a collision.\n\nA common problem in physics that requires the use of this fact is the collision gt equal the sum of the momentum after the collision:gwapo ko", null, "where the subscript i signifies initial, before the collision, and f signifies final, after the collision.\n\nUsually, we either only know the velocities before or after a collision and like to also find out the opposite. Correctly solving this problem means you have to know what kind of collision took place. There are two basic kinds of collisions, both of which conserve momentum:\n\n• Elastic collisions conserve kinetic energy\n• Inelastic collisions don't conserve kinetic energy\n\n#### Elastic collisions\n\nA collision between two pool or snooker balls is a good example of an almost totally elastic collision. In addition to momentum being conserved when the two balls collide, the sum of kinetic energy before a collision must equal the sum of kinetic energy after:", null, "Since the 1/2 factor is common to all the terms, it can be taken out right away.\n\nIn the case of two objects colliding head on we find that the final velocity", null, "", null, "#### Inelastic collisions\n\nA common example of a perfectly inelastic collision is when two objects collide and then stick together afterwards. This equation describes the conservation of momentum:", null, "## Changes in momentum\n\nAlthough momentum is conserved within a closed system, individual parts of a system can undergo changes in momentum. 
In classical mechanics, an impulse changes the momentum of a body, and has the same units and dimensions as momentum. The SI unit of impulse is the same as for momentum (kg m/s). An impulse is calculated as the integral of force with respect to time.", null, "where\n\nI is the impulse, measured in kilogram metres per second\nF is the force, measured in newtons\nt is the time duration, measured in seconds\n\nIn the presence of a constant force, impulse is often written using the formula", null, "where", null, "is the time interval over which the force (F) is applied.\n\nUsing the definition of force yields:", null, "", null, "", null, "It is therefore common to define impulse as a change in momentum.\n\n## Momentum in relativistic mechanics\n\nMomentum is more accurately defined in its relativistic form. For objects moving near the speed of light, classical momentum fails to preserve the law of conservation of momentum. The more accurate relativistic momentum is defined by:", null, "where", null, "", null, ".\n\nRelativistic four-momentum as proposed by Albert Einstein arises from invariance of four-vectors under Lorentzian translation. These four-vectors appear spontaneously in the Green's function from quantum field theory. The four-momentum is defined as:\n\n[E/c p]\n\nwhere E is the total energy of the system:", null, "Setting velocity to zero, one derives that the rest mass and the energy of an object are related by E=mc².\n\nThe \"length\" of the vector that remains constant is defined thus:", null, "Massless objects such as photons also carry momentum; the formula is p=E/c, where E is the energy the photon carries and c is the speed of light.\n\nMomentum is the Noether charge of translational invariance. As such, even fields as well as other things can have momentum, not just particles. However, in curved space-time which is not asymptotically Minkowski, momentum isn't defined at all.\n\n## Momentum in quantum mechanics\n\nIn quantum mechanics momentum is defined The Heisenberg uncertainty principle defines limits on how accurately the momentum and position of a single observable system can be known at once. In quantum mechanics, position and momentum are conjugate variables.\n\nFor a single particle with no electric charge and no spin, the momentum operator can be written in the position basis as", null, "where", null, "is the gradient operator. This is a commonly encountered form of the momentum operator, though not the most general one.\n\n## Momentum in Electromagnetism\n\nBecause Electric fields and magnetic fields can produce forces, they also contain momenta. Light (visible, UV, radio) is made up of electromagnetic waves and all though these waves carry no mass (ie", null, "), they still carry momentum. Also momentum is conserved in a electrodynamic system but may change from momentum in the fields to mechanical momentum through the movement of the parts (ie a circular ring around a changing magnetic field may begin to rotate and appear to violate the conservation of momentum).\n\n## Figurative use\n\nA process may be said to gain momentum. The terminology implies that it requires effort to start such a process, but that it is relatively easy to keep it going. Alternatively, the expression can be seen to reflect that the process is adding adherents, or general acceptance, and thus has more mass at the same velocity; hence, it gained momentum." ]
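A small numerical sketch (an addition, not part of the original article) checking that the head-on elastic-collision formulas above conserve both momentum and kinetic energy; the masses and initial velocities are arbitrary illustrative values:

```python
# Head-on elastic collision: compute final velocities and verify
# conservation of momentum and kinetic energy.
m1, m2 = 2.0, 3.0      # kg (illustrative values)
u1, u2 = 5.0, -1.0     # m/s, initial velocities

v1 = (m1 - m2) / (m1 + m2) * u1 + 2 * m2 / (m1 + m2) * u2
v2 = 2 * m1 / (m1 + m2) * u1 + (m2 - m1) / (m1 + m2) * u2

p_before = m1 * u1 + m2 * u2
p_after = m1 * v1 + m2 * v2
ke_before = 0.5 * m1 * u1**2 + 0.5 * m2 * u2**2
ke_after = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2

assert abs(p_before - p_after) < 1e-9    # momentum conserved
assert abs(ke_before - ke_after) < 1e-9  # kinetic energy conserved
print(v1, v2)  # -2.2 3.8
```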
[ null, "https://wikimedia.org/api/rest_v1/media/math/render/png/a271a96e7b925fd39686375167c76d406e87c813", null, "https://wikimedia.org/api/rest_v1/media/math/render/png/72699c40109beb03e05de6ac33eff3bcf807e6b1", null, "https://wikimedia.org/api/rest_v1/media/math/render/png/a6fb72cad0efdcfe89b8c8816a56c1df32777f54", null, "https://wikimedia.org/api/rest_v1/media/math/render/png/5ed4aa425e8dc8aa8f544ac4f03373da4186cee0", null, "https://wikimedia.org/api/rest_v1/media/math/render/png/65707c65a0c3a4c756b74602535f18003f9d2d23", null, "https://wikimedia.org/api/rest_v1/media/math/render/png/68e35a7db096b2ed2358fd2306381c38ca8cdf79", null, "https://wikimedia.org/api/rest_v1/media/math/render/png/a43869b609c7e43251cd5ae38ce0a336e47c2e20", null, "https://wikimedia.org/api/rest_v1/media/math/render/png/5b4532c554c18dc5655c1f37f5d94a50731be4ef", null, "https://wikimedia.org/api/rest_v1/media/math/render/png/7e1512a9cc6f2a3cf812015802cf95dae4d5eec7", null, "https://wikimedia.org/api/rest_v1/media/math/render/png/8c28867ecd34e2caed12cf38feadf6a81a7ee542", null, "https://wikimedia.org/api/rest_v1/media/math/render/png/ee836c933f97c640ff57d6841a70a19da71206dd", null, "https://wikimedia.org/api/rest_v1/media/math/render/png/50f00d4bc4cb682142dd59ff62db7afafe9f958d", null, "https://wikimedia.org/api/rest_v1/media/math/render/png/04110020d808e3e028e0c04111293d75c9705c1c", null, "https://wikimedia.org/api/rest_v1/media/math/render/png/6ea98b7ffc4ea6fcf02cc0ab23a6a8ca014d1719", null, "https://wikimedia.org/api/rest_v1/media/math/render/png/8b130ec5e5e9586833b7888f7cbe2433f1e295e3", null, "https://wikimedia.org/api/rest_v1/media/math/render/png/eaf2cb3f6061344571d2b90b9128e50df6e6a9ed", null, "https://wikimedia.org/api/rest_v1/media/math/render/png/b5d503b5cdeafc60d08617014bc23fabdf9ec152", null, "https://wikimedia.org/api/rest_v1/media/math/render/png/a6f322452e2879efbf0d54913392b3ab2af6e9c0", null, "https://wikimedia.org/api/rest_v1/media/math/render/png/038620308dd0cb73d2880bba9a68535f36a9cf0c", null, "https://wikimedia.org/api/rest_v1/media/math/render/png/a3d0e93b78c50237f9ea83d027e4ebbdaef354b2", null, "https://wikimedia.org/api/rest_v1/media/math/render/png/e57f21007575fd03e3be0da20af34d25829cc9a7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90576965,"math_prob":0.9616478,"size":7552,"snap":"2021-31-2021-39","text_gpt3_token_len":1601,"char_repetition_ratio":0.15858506,"word_repetition_ratio":0.0015910899,"special_character_ratio":0.20908369,"punctuation_ratio":0.10195228,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99730927,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,null,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,6,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-06T00:47:49Z\",\"WARC-Record-ID\":\"<urn:uuid:e9252333-a71b-4864-ba66-7403d7bdd653>\",\"Content-Length\":\"330782\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:00e25a6b-5f3f-4005-a824-103e4f62b56b>\",\"WARC-Concurrent-To\":\"<urn:uuid:4bae9219-f825-4c2b-a83c-81e33afcb4b1>\",\"WARC-IP-Address\":\"151.101.64.194\",\"WARC-Target-URI\":\"https://engineering.fandom.com/wiki/Momentum\",\"WARC-Payload-Digest\":\"sha1:W6G6Y6PALFX7UXHVZDI2SJWE2NS7SHZZ\",\"WARC-Block-Digest\":\"sha1:IYUBNOQ4Q7GYPJXZ62I6VUSSZUDKAYKD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046152085.13_warc_CC-MAIN-20210805224801-20210806014801-00661.warc.gz\"}"}
https://pwntestprep.com/2015/10/hello-mike-im-kinda-stuck-on-17-pg-273-on-the-patterns-section/
[ "Hello Mike!\nI’m kinda stuck on #17, pg 273 on the Patterns section. The question is the first term in the sequence above is 3, and every term after the first is -3 times the preceding term. How many terms in the sequence are less than 1000? The answer is more than 9. But then if you do multiply the preceding numbers by -3 you get -81, 243,-729 (these 3+before 3)=6. You get 2187 if you multiply -3 by -729 so it doesn’t count right?\nThank you so much for reading!\n\nEvery negative term (of which there are an infinite amount) is less than 1000. 🙂 So the next term after 2187 will be less than 1000, and then 2 terms after that will be less than 1000, etc." ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9360839,"math_prob":0.9747562,"size":655,"snap":"2020-45-2020-50","text_gpt3_token_len":180,"char_repetition_ratio":0.13517666,"word_repetition_ratio":0.016,"special_character_ratio":0.31755725,"punctuation_ratio":0.10738255,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99268794,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-24T03:40:56Z\",\"WARC-Record-ID\":\"<urn:uuid:243aeb70-c664-490a-a522-909ee951ff88>\",\"Content-Length\":\"46109\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:37bf248d-f1a2-41c6-9ee3-8ba0ab64f4e4>\",\"WARC-Concurrent-To\":\"<urn:uuid:4203646d-dfc9-4df1-b7dc-ce436a45c3e6>\",\"WARC-IP-Address\":\"162.214.30.103\",\"WARC-Target-URI\":\"https://pwntestprep.com/2015/10/hello-mike-im-kinda-stuck-on-17-pg-273-on-the-patterns-section/\",\"WARC-Payload-Digest\":\"sha1:ILB4EI6O2NK632HE6KKCPM7O26IQXMK7\",\"WARC-Block-Digest\":\"sha1:KUR4MUD6KPPJH75AFY76RBS5DBD7DVHS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141171077.4_warc_CC-MAIN-20201124025131-20201124055131-00559.warc.gz\"}"}
https://www.statsmodels.org/devel/generated/statsmodels.discrete.discrete_model.NegativeBinomialP.html
[ "# statsmodels.discrete.discrete_model.NegativeBinomialP¶\n\nclass statsmodels.discrete.discrete_model.NegativeBinomialP(endog, exog, p=2, offset=None, exposure=None, missing='none', check_rank=True, **kwargs)[source]\n\nGeneralized Negative Binomial (NB-P) Model\n\nParameters:\nendogarray_like\n\nA 1-d endogenous response variable. The dependent variable.\n\nexogarray_like\n\nA nobs x k array where nobs is the number of observations and k is the number of regressors. An intercept is not included by default and should be added by the user. See `statsmodels.tools.add_constant`.\n\npscalar\n\nP denotes parameterizations for NB regression. p=1 for NB-1 and p=2 for NB-2. Default is p=2.\n\noffsetarray_like\n\nOffset is added to the linear prediction with coefficient equal to 1.\n\nexposurearray_like\n\nLog(exposure) is added to the linear prediction with coefficient equal to 1. missing : str Available options are ‘none’, ‘drop’, and ‘raise’. If ‘none’, no nan checking is done. If ‘drop’, any observations with nans are dropped. If ‘raise’, an error is raised. Default is ‘none’.\n\ncheck_rankbool\n\nCheck exog rank to determine model degrees of freedom. Default is True. Setting to False reduces model initialization time when exog.shape is large.\n\nAttributes:\nendog`ndarray`\n\nA reference to the endogenous response variable\n\nexog`ndarray`\n\nA reference to the exogenous design.\n\npscalar\n\nP denotes parameterizations for NB-P regression. p=1 for NB-1 and p=2 for NB-2. Default is p=1.\n\nMethods\n\n The cumulative distribution function of the model. `convert_params`(params, mu) `cov_params_func_l1`(likelihood_model, xopt, ...) Computes cov_params on a reduced parameter space corresponding to the nonzero parameters resulting from the l1 regularized fit. `fit`([start_params, method, maxiter, ...]) use_transparams : bool `fit_regularized`([start_params, method, ...]) Fit the model using a regularized maximum likelihood. `from_formula`(formula, data[, subset, drop_cols]) Create a Model from a formula and dataframe. `get_distribution`(params[, exog, exposure, ...]) get frozen instance of distribution Get frozen instance of distribution based on predicted parameters. `hessian`(params) Generalized Negative Binomial (NB-P) model hessian maxtrix of the log-likelihood `hessian_factor`(params) Generalized Negative Binomial (NB-P) model hessian maxtrix of the log-likelihood `information`(params) Fisher information matrix of model. Initialize is called by statsmodels.model.LikelihoodModel.__init__ and should contain any preprocessing that needs to be done for a model. `loglike`(params) Loglikelihood of Generalized Negative Binomial (NB-P) model `loglikeobs`(params) Loglikelihood for observations of Generalized Negative Binomial (NB-P) model The probability density (mass) function of the model. `predict`(params[, exog, exposure, offset, ...]) Predict response variable of a model given exogenous variables. `score`(params) Generalized Negative Binomial (NB-P) model score (gradient) vector of the log-likelihood `score_factor`(params[, endog]) Generalized Negative Binomial (NB-P) model score (gradient) vector of the log-likelihood for each observations. `score_obs`(params) Generalized Negative Binomial (NB-P) model score (gradient) vector of the log-likelihood for each observations.\n\nProperties\n\n `endog_names` Names of endogenous variables. `exog_names` Names of exogenous variables." ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6375174,"math_prob":0.92147094,"size":3103,"snap":"2022-27-2022-33","text_gpt3_token_len":712,"char_repetition_ratio":0.13488223,"word_repetition_ratio":0.14766839,"special_character_ratio":0.2094747,"punctuation_ratio":0.17782027,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99621505,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-06T14:05:21Z\",\"WARC-Record-ID\":\"<urn:uuid:d5ba6174-a30b-48f2-ad44-68e3ed486b8d>\",\"Content-Length\":\"34604\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bd9fb858-c773-4796-8f58-dd70c1910de6>\",\"WARC-Concurrent-To\":\"<urn:uuid:54ee1d73-a31a-4bbf-9e19-06b5ad3e5ad6>\",\"WARC-IP-Address\":\"185.199.111.153\",\"WARC-Target-URI\":\"https://www.statsmodels.org/devel/generated/statsmodels.discrete.discrete_model.NegativeBinomialP.html\",\"WARC-Payload-Digest\":\"sha1:YXLJ4PRWBTO6FDMIC7FCFKZ4LEKFUMM6\",\"WARC-Block-Digest\":\"sha1:EETXN4FF3I67DG4HQLSYQAGJR7AAAIA4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104672585.89_warc_CC-MAIN-20220706121103-20220706151103-00498.warc.gz\"}"}
https://folia.unifr.ch/unifr/documents/303647
[ "# The linear barycentric rational quadrature method for Volterra integral equations\n\n01.01.2014\nPublished in:\n• SIAM Journal on Scientific Computing. - 2014, vol. 36, no. 1, p. A105–A123\nEnglish We introduce two direct quadrature methods based on linear rational interpolation for solving general Volterra integral equations of the second kind. The first, deduced by a direct application of linear barycentric rational quadrature given in former work, is shown to converge at the same rate as the rational quadrature rule but is costly on long integration intervals. The second, based on a composite version of this quadrature rule, loses one order of convergence but is much cheaper. Both require only a sample of the involved functions at equispaced nodes and yield an infinitely smooth solution of most classical examples with machine precision.\nFaculty\nFaculté des sciences et de médecine\nDepartment\nDépartement de Mathématiques\nLanguage\n• English\nClassification\nMathematics" ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78591114,"math_prob":0.93442565,"size":1389,"snap":"2023-40-2023-50","text_gpt3_token_len":321,"char_repetition_ratio":0.0967509,"word_repetition_ratio":0.0,"special_character_ratio":0.20302376,"punctuation_ratio":0.14222223,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9857703,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-24T00:24:59Z\",\"WARC-Record-ID\":\"<urn:uuid:c37c29ac-e2b5-4bab-b41b-b59d4c1d99c0>\",\"Content-Length\":\"21411\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7d0de01c-56d2-40d5-8816-249756bdf043>\",\"WARC-Concurrent-To\":\"<urn:uuid:5d1c1600-effd-4c65-a2a8-20caff7ab409>\",\"WARC-IP-Address\":\"153.109.157.125\",\"WARC-Target-URI\":\"https://folia.unifr.ch/unifr/documents/303647\",\"WARC-Payload-Digest\":\"sha1:TJ4RGR2RQLH74XWFOTIHZL7HZLYNFJCL\",\"WARC-Block-Digest\":\"sha1:W4UTFACPI3ROPBYMMNDWORGGQJBCBCWJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506539.13_warc_CC-MAIN-20230923231031-20230924021031-00458.warc.gz\"}"}
https://socratic.org/questions/how-do-you-evaluate-12c6
[ "# How do you evaluate \"\"^12C_6?\n\nJan 27, 2017\n\n\"\"^12C_6 = 924\n\n#### Explanation:\n\nThe general formula for combinations is:\n\n\"\"^nC_r = (n!)/(r!(n-r)!)\n\nSo in our example:\n\n\"\"^12C_6 = (12!)/(6!6!)\n\ncolor(white)(\"\"^12C_6) = (12xx11xx10xx9xx8xx7)/(6xx5xx4xx3xx2xx1)\n\ncolor(white)(\"\"^12C_6) = (2^6xx3^3xx5xx7xx11)/(2^4xx3^2xx5)\n\ncolor(white)(\"\"^12C_6) = 2^2xx3xx7xx11\n\ncolor(white)(\"\"^12C_6) = 924\n\nOr you can write out Pascal's triangle to the $13$th row and pick the middle term in that row...", null, "" ]
[ null, "https://d2jmvrsizmvf4x.cloudfront.net/il90kbCCTy2IJxQVxQlo_pascal.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5855384,"math_prob":0.99803907,"size":564,"snap":"2019-51-2020-05","text_gpt3_token_len":230,"char_repetition_ratio":0.15714286,"word_repetition_ratio":0.0,"special_character_ratio":0.43085107,"punctuation_ratio":0.12631579,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9842227,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-08T05:09:23Z\",\"WARC-Record-ID\":\"<urn:uuid:4eb9cfda-3a4a-4b42-a1f8-08f5c476d73c>\",\"Content-Length\":\"33675\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d919584c-9847-4312-9318-10f1dadeacff>\",\"WARC-Concurrent-To\":\"<urn:uuid:92b2049d-c23f-4efc-9b72-320e48e2f809>\",\"WARC-IP-Address\":\"54.221.217.175\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-do-you-evaluate-12c6\",\"WARC-Payload-Digest\":\"sha1:YPQBXNI57ISG2TBV4M3XXGNKJBGDPNHN\",\"WARC-Block-Digest\":\"sha1:S66LCOES2ZLF7LH6SVMJESK3I4LYSVUY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540506459.47_warc_CC-MAIN-20191208044407-20191208072407-00520.warc.gz\"}"}
http://slideplayer.com/slide/8060325/
[ "", null, "# Warm - up 6.3a Simplify: 1. 4x 3 2. 5x 2 4x x 2 x 2 5 x + 7 Find the missing factor: 3. x 2 – 2x – 63 = (x-9)(?) 4. 2x 2 + 13x + 15 = (x+5)(?) 2x + 3.\n\n## Presentation on theme: \"Warm - up 6.3a Simplify: 1. 4x 3 2. 5x 2 4x x 2 x 2 5 x + 7 Find the missing factor: 3. x 2 – 2x – 63 = (x-9)(?) 4. 2x 2 + 13x + 15 = (x+5)(?) 2x + 3.\"— Presentation transcript:\n\nWarm - up 6.3a Simplify: 1. 4x 3 2. 5x 2 4x x 2 x 2 5 x + 7 Find the missing factor: 3. x 2 – 2x – 63 = (x-9)(?) 4. 2x 2 + 13x + 15 = (x+5)(?) 2x + 3\n\n6.3a Dividing Polynomials by Jason L. Bradbury CA State Standard - 3.0 Students are adept at operations on polynomials, including long division. Objective – To be able divide polynomials by long division and synthetic division.\n\n3 Long Division: 2 2 7 5 2 1 1 7 7 1 5 2 5 2 4 1 6.3a Dividing Polynomials Example 1: Divide 2275 by 3 58 758 + 1 3 758, R 1 22 divided by 3 is 7 SUBTRACT Multiply 7 times 3 17 divided by 3 is 5 Multiply 5 times 3 SUBTRACT 25 divided by 3 is 8 Multiply 8 times 3 SUBTRACT Remainder is 1 out of 3 Summary:\n\n4 Long Division: 3 7 1 1 3 6 1 1 9 8 3 1 2 8 3 Example 2: Divide 3711 by 4 27 9 2 7 + 3 4\n\nx – 5 Polynomial Long Division: x 2 + 2x – 30 x2x2x2x2 7x – 30 7x – 35 5 Example 3: Divide x 2 + 2x – 30 by x – 5 x+ 7 – 5x x + 7 + 5 x – 5 Multiply x times (x – 5) *SUBTRACT: change signs Multiply 7 times (x – 5) *SUBTRACT: change signs Remainder 5 out of x - 5 Divide\n\nx + 2 Polynomial Long Division: x 2 + 10x + 16 x2x2x2x2 8x + 16 0 Example 4: Divide x 2 + 10x + 16 by x + 2 x+ 8 + 2x x + 8\n\nx 2 – x + 1 Polynomial Long Division: x 4 + 0x 3 + 2x 2 – x + 5 x4x4x4x4 x 3 + x 2 – x x2x2 x 3 – x 2 + x 2x 2 – 2x + 5 2x 2 – 2x + 2 3 Example 4: Divide x 4 + 2x 2 –x + 5 by x 2 – x + 1 + x+ 2 – x 3 + x 2 x 2 + x + 2 + 3 x 2 – x + 1\n\n6.3a Homework Page 330 – 331 1 – 7 odd\n\nDownload ppt \"Warm - up 6.3a Simplify: 1. 4x 3 2. 5x 2 4x x 2 x 2 5 x + 7 Find the missing factor: 3. x 2 – 2x – 63 = (x-9)(?) 4. 2x 2 + 13x + 15 = (x+5)(?) 2x + 3.\"\n\nSimilar presentations" ]
[ null, "http://slideplayer.com/static/blue_design/img/slide-loader4.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7124473,"math_prob":0.9999987,"size":1641,"snap":"2020-10-2020-16","text_gpt3_token_len":750,"char_repetition_ratio":0.13927917,"word_repetition_ratio":0.06603774,"special_character_ratio":0.48019502,"punctuation_ratio":0.08866995,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99834496,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-24T11:31:32Z\",\"WARC-Record-ID\":\"<urn:uuid:d409ee30-258e-4fc3-87a3-826a779db325>\",\"Content-Length\":\"148304\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b6b8cf05-4a74-42a6-9e19-dffdf5f53f5e>\",\"WARC-Concurrent-To\":\"<urn:uuid:00ec54e3-3b67-4c82-a5fb-492d5d19dbe2>\",\"WARC-IP-Address\":\"138.201.54.25\",\"WARC-Target-URI\":\"http://slideplayer.com/slide/8060325/\",\"WARC-Payload-Digest\":\"sha1:DYPRIU3D6UDRV5FP23VRPSN6MQC6BGZD\",\"WARC-Block-Digest\":\"sha1:T6K76JALYIMSQW7MQTFK4522J6YBYAC7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145941.55_warc_CC-MAIN-20200224102135-20200224132135-00308.warc.gz\"}"}
https://fr.scribd.com/document/19390777/R-for-Bioinfo
[ "Vous êtes sur la page 1sur 272\n\n# Applied Statistics for Bioinformatics using R\n\nWim P. Krijnen\n\n## May 14, 2009\n\nii\n\nPreface\nThe purpose of this book is to give an introduction into statistics in order\nto solve some problems of bioinformatics. Statistics provides procedures to\nexplore and visualize data as well as to test biological hypotheses. The book\nintends to be introductory in explaining and programming elementary sta-\ntistical concepts, thereby bridging the gap between high school levels and\nthe specialized statistical literature. After studying this book readers have\na sufficient background for Bioconductor Case Studies (Hahne et al., 2008)\nand Bioinformatics and Computational Biology Solutions Using R and Bio-\nconductor (Genteman et al., 2005). The book does not aim to give a deep\ntechnical discussion of the mathematical foundation, but, rather, to provide\na set of practical ideas and tools to analyze data. Where deemed useful the\nreader is referred to the literature as well as to publicly available information.\nThe theory is kept minimal and is always illustrated by several examples with\ndata from research in bioinformatics. Prerequisites to follow the stream of\nreasoning is limited to basic high-school knowledge about functions. It may,\nhowever, help to have some knowledge of gene expressions values (Pevsner,\n2003) or statistics (Bain & Engelhardt, 1992; Ewens & Grant, 2005; Rosner,\n2000; Samuels & Witmer, 2003), and elementary programming. To support\nself-study a sufficient amount of challenging exercises are given together with\nThe programming language R is becoming increasingly important because\nit is not only very flexible in reading, manipulating, and writing data, but\nall its outcomes from statistical analysis are directly available as objects for\nfurther programming. R is a rapidly growing language making basic as well as\nadvanced statistical programming easy. From an educational point of view,\nR provides the possibility to combine the learning of statistical concepts by\nmathematics, programming, and visualization. Integrating statistics with\nR gives many possibilities for the student to investigate basic ideas by e.g.\nsimulation. The plots and tables produced by R can readily be used in\ntypewriting systems such as Emacs, LATEX, or Word.\nChapter 1 gives a brief introduction into basic functionalities of R. Chap-\nter 2 starts with univariate data visualization and the most important de-\nscriptive statistics. Chapter 3 gives commonly used discrete and continuous\ndistributions to model events and the probability by which these occur. These\ndistributions are applied in Chapter 4 to statistically test hypotheses from\nbioinformatics. For each test the statistics involved are briefly explained and\niii\n\n## its application is illustrated by examples. In Chapter 5 linear models are ex-\n\nplained and applied to testing for differences between groups. It gives a basic\napproach. In Chapter 6 the three phases of analysis of microarray data (pre-\nprocessing, analysis, post processing) are briefly introduced and illustrated\nby many examples bringing ideas together with R scrips and interpretation of\nresults. Chapter 7 starts with an intuitive approach into Euclidian distance\nand explains how it can be used in two well-known types of cluster analysis to\nfind groups of genes. 
It also explains how principal components analysis can\nbe used to explore a large data matrix for the direction of largest variation.\nChapter 8 shows how gene expressions can be used to predict the diagnosis\nof patients. Three such prediction methods are illustrated and compared.\ngives various examples of computing important quantities such as alignment\nscores. Chapter 10 introduces the concept of a probability transition matrix\nwhich is applied to the estimation of phylogenetic trees and (Hidden) Markov\nModels.\nTo save space sometimes not all of the original output from R is printed.\nR commands come after its prompt >, except when commands are part of the\nongoing text. Input and output of R will be given in verbatim typewriting\nstyle. The end of an example is indicated by the box . In its Portable\nDocument Format (PDF) the book1 contains many links to the Index, Table\nof Contents, Equations, Tables, and Figures. Readers are encouraged to copy\nand paste scripts from the PDF into the R system to study their outcome.\nApart from using the book to study application of statistics in bioinformatics,\nit can also be useful with statistical programming.\nI would like to thank my colleges Joop Bouman, Sven Warris and Jan\nPeter Nap for their useful remarks on parts of an earlier draft. Many thanks\nalso go to my students for asking questions that gave hints to improve clar-\nity. I am grateful to the creators of LATEX (http://www.latex-project.\norg/), MikTEX(http://www.miktex.org), WinEdt (http://www.winedt.\ncom/), and R (http://www.R-project.org), without which it would have\nbeen impossible to write this book in its current form.\nCurrently, I certainly do not consider the text to be final. Some parts\nneed to be clarified, others may be skipped, while possibly certain subjects\nneed to be added. I would like to emphasize that remarks to improve the\n1 c\n°This document falls under the GNU Free Document Licence and may be used freely\nfor educational purposes.\niv\n\n## Wim P. Krijnen Groningen\n\nHanze University May 2009\nInstitute for Life Science and Technology\nZernikeplein 11\n9747 AS Groningen\nThe Netherlands\[email protected]\nContents\n\nPreface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv\n\n## 1 Brief Introduction into Using R 1\n\n1.1 Getting R Started on your PC . . . . . . . . . . . . . . . . . . 1\n1.2 Getting help . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3\n1.3 Calculating with R . . . . . . . . . . . . . . . . . . . . . . . . 4\n1.4 Generating a sequence and a factor . . . . . . . . . . . . . . . 5\n1.5 Computing on a data vector . . . . . . . . . . . . . . . . . . . 5\n1.6 Constructing a data matrix . . . . . . . . . . . . . . . . . . . 6\n1.7 Computing on a data matrix . . . . . . . . . . . . . . . . . . . 8\n1.8 Application to the Golub (1999) data . . . . . . . . . . . . . . 10\n1.9 Running scripts . . . . . . . . . . . . . . . . . . . . . . . . . . 13\n1.10 Overview and concluding remarks . . . . . . . . . . . . . . . . 14\n1.11 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14\n\n## 2 Descriptive Statistics and Data Display 17\n\n2.1 Univariate data display . . . . . . . . . . . . . . . . . . . . . . 17\n2.1.1 Frequency table . . . . . . . . . . . . . . . . . . . . . . 17\n2.1.2 Plotting data . . . . . . . . . . . . . . . . . . . . . . . 19\n2.1.3 Histogram . . . . . . . . . . . . . . . . . . . . . . . . . 19\n2.1.4 Boxplot . . . . . . . . . . . . . . . . . . . . . . . . . . 
    2.1.5 Quantile-Quantile plot
  2.2 Descriptive statistics
    2.2.1 Measures of central tendency
    2.2.2 Measures of spread
  2.3 Overview and concluding remarks
  2.4 Exercises

3 Important Distributions
  3.1 Discrete distributions
    3.1.1 Binomial distribution
  3.2 Continuous distributions
    3.2.1 Normal distribution
    3.2.2 Chi-squared distribution
    3.2.3 T-Distribution
    3.2.4 F-Distribution
    3.2.5 Plotting a density function
  3.3 Overview and concluding remarks
  3.4 Exercises

4 Estimation and Inference
  4.1 Statistical hypothesis testing
    4.1.1 The Z-test
    4.1.2 One Sample t-Test
    4.1.3 Two-sample t-test with unequal variances
    4.1.4 Two sample t-test with equal variances
    4.1.5 F-test on equal variances
    4.1.6 Binomial test
    4.1.7 Chi-squared test
    4.1.8 Normality tests
    4.1.9 Outliers test
    4.1.10 Wilcoxon rank test
  4.2 Application of tests to a whole set of gene expression data
  4.3 Overview and concluding remarks
  4.4 Exercises

5 Linear Models
  5.1 Definition of linear models
  5.2 One-way analysis of variance
  5.3 Checking assumptions
  5.4 Robust tests
  5.5 Overview and concluding remarks
  5.6 Exercises

6 Micro Array Analysis
  6.1 Probe data
  6.2 Preprocessing methods
  6.3 Gene filtering
  6.4 Applications of linear models
  6.5 Searching an annotation package
  6.6 Using annotation to search literature
  6.7 Searching GO numbers and evidence
  6.8 GO parents and children
  6.9 Gene filtering by a biological term
  6.10 Significance per chromosome
  6.11 Overview and concluding remarks
  6.12 Exercises

7 Cluster Analysis and Trees
  7.1 Distance
  7.2 Two types of Cluster Analysis
    7.2.1 Single Linkage
    7.2.2 k-means
  7.3 The correlation coefficient
  7.4 Principal Components Analysis
  7.5 Overview and concluding remarks
  7.6 Exercises

8 Classification Methods
  8.1 Classification of microRNA
  8.2 ROC types of curves
  8.3 Classification trees
  8.4 Support Vector Machine
  8.5 Neural Networks
  8.6 Overview and concluding remarks
  8.7 Exercises

9 Analyzing Sequences
  9.1 Using a query language
  9.2 Getting information on downloaded sequences
  9.3 Computations on sequences
  9.4 Matching patterns
  9.5 Pairwise alignments
  9.6 Overview and concluding remarks
  9.7 Exercises

10 Markov Models
  10.1 Random sampling
  10.2 Probability transition matrix
  10.3 Properties of the transition matrix
  10.4 Stationary distribution
  10.5 Phylogenetic distance
  10.6 Hidden Markov Models
  10.7 Appendix
  10.8 Overview and concluding remarks
  10.9 Exercises

B References

## List of Figures

2.1 Plot of gene expression values of CCND3 Cyclin D3.
2.2 Stripchart of gene expression values of CCND3 Cyclin D3 for ALL and AML patients.
2.3 Histogram of ALL expression values of gene CCND3 Cyclin D3.
2.4 Boxplot of ALL and AML expression values of gene CCND3 Cyclin D3.
2.5 Q-Q plot of ALL gene expression values of CCND3 Cyclin D3.
2.6 Boxplot with arrows and explaining text.
3.1 Binomial probabilities with n = 22 and p = 0.7.
3.2 Binomial cumulative probabilities with n = 22 and p = 0.7.
3.3 Graph of normal density with mean 1.9 and standard deviation 0.5.
3.4 Graph of normal distribution with mean 1.9 and standard deviation 0.5.
3.5 χ²₅-density.
3.6 χ²₅ distribution.
3.7 Density of T₁₀ distribution.
3.8 Distribution function of T₁₀.
3.9 Density of F₂₆,₁₀.
3.10 Distribution of F₂₆,₁₀.
4.1 Acceptance and rejection regions of the Z-test.
4.2 Acceptance and rejection regions of the T₅-test.
4.3 Rejection region of χ²₃-test.
5.1 Plot of 1866 gat data.
5.2 Plot of 1242 at values from ALL data.
6.1 Mat plot of intensity values for a probe of MLL.B.
6.2 Density of MLL.B data.
6.3 Boxplot of the ALL1/AF4 patients.
6.4 Boxplot of the ALL1/AF4 patients after median subtraction and MAD division.
6.5 Venn diagram of selected ALL genes.
6.6 Boxplot of the ALL1/AF4 patients after median subtraction and MAD division.
7.1 Plot of five points to be clustered.
7.2 Tree of single linkage cluster analysis.
7.3 Example of three without clusters.
7.4 Three clusters with different standard deviations.
7.5 Plot of gene "CCND3 Cyclin D3" and "Zyxin" expressions for ALL and AML patients.
7.6 Single linkage cluster diagram from gene "CCND3 Cyclin D3" and "Zyxin" expression values.
7.7 K-means cluster analysis.
7.8 Tree of single linkage cluster analysis.
7.9 Plot of kmeans (stars) cluster analysis on CCND3 Cyclin D3 and Zyxin discriminating between ALL (red) and AML (black) patients.
7.10 Vectors of linear combinations.
7.11 First principal component with projections of data.
7.12 Scatter plot of selected genes with row labels on the first two principal components.
7.13 Single linkage cluster diagram of selected gene expression values.
7.14 Biplot of selected genes from the golub data.
8.1 ROC plot for expression values of CCND3 Cyclin D3.
8.2 ROC plot for expression values of gene Gdf5.
8.3 Boxplot of expression values of gene a for each leukemia class.
8.4 Classification tree for gene for three classes of leukemia.
8.5 Boxplot of expression values of gene a for each leukemia class.
8.6 Classification tree of expression values from gene A, B, and C for the classification of ALL1, ALL2, and AML patients.
8.7 Boxplot of expression values from gene CCND3 Cyclin D3 for ALL and AML patients.
8.8 Classification tree of expression values from gene CCND3 Cyclin D3 for classification of ALL and AML patients.
8.9 rpart on ALL B-cell 123 data.
8.10 Variable importance plot on ALL B-cell 123 data.
9.1 G + C fraction of sequence "AF517525.CCND3" along a window of length 50 nt.
9.2 Frequency plot of amino acids from accession number AF517525.CCND3.
9.3 Frequency plot of amino acids from accession number AL160163.CCND3.
10.1 Graph of probability transition matrix.
10.2 Evaluation of models by AIC.
10.3 Tree according to GTR model.

## List of Tables

3.1 Discrete density and distribution function values of S₃, with p = 0.6.
3.2 Built-in-functions for random variables used in this chapter.
3.3 Density, mean, and variance of distributions used in this chapter.
8.1 Frequencies of empirical p-values lower than or equal to 0.01.
8.2 Ordered expression values of gene CCND3 Cyclin D3, index 2 indicates ALL, 1 indicates AML, cutoff points, number of false positives, false positive rate, number of true positives, true positive rate.
9.1 BLOSUM50 matrix.

# Chapter 1: Brief Introduction into Using R

To get started a gentle introduction to the statistical programming language R will be given (R Development Core Team, 2008), specific for our purposes. This will solve the practical issues needed to follow the stream of reasoning. In particular, it is briefly explained how to install R and Bioconductor, how to obtain help, and how to perform simple calculations. For several purposes it is essential to be able to generate a sequence of numbers. In particular, a highly useful type of sequence is that of a factor to indicate the experimental group of a patient.

Since many computations are essentially performed on data vectors, several basic illustrations of this are given. With respect to gene expressions the data vectors are placed one beneath the other to form a data matrix with the genes as rows and the patients as columns. The idea of a data matrix is extensively explained and illustrated by several examples. A larger example consists of the classical Golub et al. (1999) data, which will be analyzed frequently to illustrate statistical procedures.

## 1.1 Getting R Started on your PC

Download R from http://cran.r-project.org for your favorite operating system (Windows, Linux or MacOS) and simply follow the instructions. After a little patience you should be able to start R (Ihaka & Gentleman, 1996) after which a screen is opened with the prompt >. The input and output of R will be displayed in verbatim typewriting style.

All useful functions of R are contained in libraries which are called "packages". The standard installation of R makes a few basic packages available such as base and stats. From the button Packages at cran.r-project.org it can be seen that R has a huge number of packages available for a wide range of statistical procedures.
To download a specific package you can use the following.

> install.packages(c("TeachingDemos"),repo="http://cran.r-project.org",
+ dep=TRUE)

This installs the package TeachingDemos developed by Greg Snow from the repository http://cran.r-project.org. By setting the option dep to TRUE the packages on which TeachingDemos depends are also installed. This is strongly recommended! Alternatively, in the Windows application of R you can simply click on the Packages button at the top of your screen and follow the instructions. After installing you have to load the package in order to use its functions. For instance, to produce a nice plot of the outcome of throwing twelve times with a die, you can use the following.

> library(TeachingDemos)
> plot(dice(12,1))

In the sequel we shall often use packages from Bioconductor, a very useful open source software project for the analysis and comprehension of genomic data. To follow the book it is essential to install Bioconductor on your PC or network. Bioconductor is primarily based on R and can be installed, as follows.

> source("http://www.bioconductor.org/biocLite.R")
> biocLite()

To install the ALL package, and to make the ALL data (Chiaretti et al., 2004) available for usage, you can use the following.

> biocLite("ALL")
> library(ALL)
> data(ALL)

These data will be analyzed extensively later on in Chapters 5 and 6. General help on loaded Bioconductor packages becomes available by openVignette(). For further information the reader is referred to www.bioconductor.org or to several other URLs [1].

In this and the following chapters we will illustrate many statistical ideas by the Golub et al. (1999) data, see also Section 1.8. The golub data become available by the following [2].

> library(multtest)
> data(golub)

R is object-oriented in the sense that everything consists of objects belonging to certain classes. Type class(golub) to obtain the class of the object golub and str(golub) to obtain its structure or content. Type objects() or ls() to view the currently loaded objects, a list probably growing soon to be large. To prevent conflicting definitions, it is wise to remove them all at the end of a session by rm(list=ls()). To quit a session, type q(), or simply click on the cross in the upper right corner of your screen.

## 1.2 Getting help

All functionalities of R are well-organized in so-called packages. Use the function library() to see which packages are currently installed on your operating system. The packages stats and base are automatically installed, because these contain many basic functionalities. To obtain an overview of the content of a package use ls(package:stats) or rather library(help="stats"). Help on the purpose of specific functions can be obtained from the (package) manual by typing a question mark in front of a function. For instance, ?sum gives extensive details on summation. In case you are seeking help on a function which uses if, simply type apropos("if"). When you are starting with a new concept such as "boxplot", it is convenient to have an example showing output (a plot) and programming code. Such is given by example(boxplot).
The function history can be useful for collecting previously given commands.

[1] http://mccammon.ucsd.edu/~bgrant/bio3d/user_guide/user_guide.html, http://rafalab.jhsph.edu/software.html, http://dir.gmane.org/gmane.science.biology.informatics.conductor
[2] See the "R Data Import/Export" manual.

Type help.start() to launch an HTML page linking to several well-written R manuals such as: "An Introduction to R", "The R Language Definition", "R Installation and Administration", and "R Data Import/Export". Further help can be obtained from http://cran.r-project.org. Its "contributed" page contains well-written freely available on-line books [3] and useful reference charts [4]. At http://www.r-project.org you can use R site search, Rseek, or other useful search engines. There are a number of useful URLs with information on R [5].

## 1.3 Calculating with R

R can be used as a simple calculator. For instance, to add 2 and 3 we simply insert the following.

> 2+3
[1] 5

In many calculations the natural base e = 2.718282 of exponential functions is used. Such type of functions can be called as follows.

> exp(1)
[1] 2.718282

To compute e² = e · e we use exp(2) [6]. So, indeed, we have eˣ = exp(x), for any value of x.

The sum 1 + 2 + 3 + 4 + 5 can be computed by

> sum(1:5)
[1] 15

and the product 5! = 5 · 4 · 3 · 2 · 1 by

> prod(1:5)
[1] 120

[3] "R for Beginners" by Emmanuel Paradis or "The R Guide" by Jason Owen
[4] "R reference card" by Tom Short or by Jonathan Baron
[5] We mention in particular: http://faculty.ucr.edu/~tgirke/Documents/R_BioCond/R_BioCondManual.html
[6] The argument of functions is always placed between parentheses ().

## 1.4 Generating a sequence and a factor

In order to compute so-called quantiles of distributions (see e.g. Section 2.1.4) or plots of functions, we need to generate sequences of numbers. The easiest way to construct a sequence of numbers is by

> 1:5
[1] 1 2 3 4 5

This sequence can also be produced by the function seq, which allows for various sizes of steps to be chosen. For instance, in order to compute percentiles of a distribution we may want to generate numbers between zero and one with step size equal to 0.1.

> seq(0,1,0.1)
 [1] 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0

For plotting and testing of hypotheses we need to generate yet another type of sequence, called a "factor". It is designed to indicate an experimental condition of a measurement or the group to which a patient belongs [7]. When, for instance, for each of three experimental conditions there are measurements from five patients, the corresponding factor can be generated as follows.

> factor <- gl(3,5)
> factor
 [1] 1 1 1 1 1 2 2 2 2 2 3 3 3 3 3
Levels: 1 2 3

The three conditions are often called "levels" of a factor. Each of these levels has five repeats corresponding to the number of observations (patients) within each level (type of disease). We shall use the idea of a factor soon because it is very useful for purposes of visualization.

[7] See e.g. Samuels & Witmer (2003, Chap. 8) for a full explanation of experiments and statistical principles of design.

## 1.5 Computing on a data vector

A data vector is simply a collection of numbers obtained as outcomes from measurements. This can be illustrated by a simple example on expression values of a gene.
## 1.5 Computing on a data vector

A data vector is simply a collection of numbers obtained as outcomes from measurements. This can be illustrated by a simple example on expression values of a gene. Suppose that the gene expression values 1.00, 1.50, and 1.25 from the persons "Eric", "Peter", and "Anna" are available. To store these in a vector we use the concatenate command c(), as follows.

> gene1 <- c(1.00,1.50,1.25)
> gene1
 1.00 1.50 1.25

Now we have created the object gene1 containing three gene expression values. To compute the sum, mean, and standard deviation of these values we use the corresponding built-in functions.

> sum(gene1)
 3.75
> mean(gene1)
 1.25
> sum(gene1)/3
 1.25
> sd(gene1)
 0.25
> sqrt(sum((gene1-mean(gene1))^2)/2)
 0.25

By defining $x_1 = 1.00$, $x_2 = 1.50$, and $x_3 = 1.25$, the sum of the values can be expressed as $\sum_{i=1}^{n} x_i = 3.75$. The mathematical summation symbol $\sum$ is in the R language simply sum. The mean is denoted by $\bar{x} = \sum_{i=1}^{3} x_i / 3 = 1.25$ and the sample standard deviation as

$$s = \sqrt{\sum_{i=1}^{3} (x_i - \bar{x})^2 / (3-1)} = 0.25.$$

Such verifications are essential for understanding the one-to-one correspondence between mathematical definitions and statistical computation.

## 1.6 Constructing a data matrix

In various types of spreadsheets it is customary to store data values in the form of a matrix consisting of rows and columns. In bioinformatics, gene expression values (from several groups of patients) are stored in such a manner that each row contains the expression values of a particular gene for all patients, and each column contains all gene expression values for a particular person. To illustrate this by a small example, suppose that we have the following expression values on three further genes from Eric, Peter, and Anna.[8]

> gene2 <- c(1.35,1.55,1.30)
> gene3 <- c(-1.10,-1.50,-1.25)
> gene4 <- c(-1.20,-1.30,-1.00)

Before constructing the matrix it is, for clarity of communication, convenient to add the names of the rows and the columns. To do so we construct the following list.

> rowcolnames <- list(c("gene1","gene2","gene3","gene4"),
+ c("Eric","Peter","Anna"))

After the last comma in the first line we give a carriage return, and R comes up with a new line starting with + so that the command can be completed. Now we can construct a matrix containing the expression values of our four genes, as follows.

> gendat <- matrix(c(gene1,gene2,gene3,gene4), nrow=4, ncol=3,
+ byrow=TRUE, dimnames = rowcolnames)

Here, nrow indicates the number of rows and ncol the number of columns. The gene vectors are placed in the matrix as rows. The names of the rows and columns are attached by the dimnames parameter. To see the content of the just created object gendat, we print it to the screen.

> gendat
      Eric Peter  Anna
gene1  1.00  1.50  1.25
gene2  1.35  1.55  1.30
gene3 -1.10 -1.50 -1.25
gene4 -1.20 -1.30 -1.00

A matrix such as gendat has two indices [i,j], the first of which refers to rows and the second to columns.[9] Thus, if you want to print the second element of the first row to the screen, then type gendat[1,2]. If you want to print the first row, then use gendat[1,]. For the second column, use gendat[,2].

[8] By the function data.entry you can open and edit a screen with the values of a matrix.
[9] Indices referring to rows, columns, or elements are always placed between square brackets [].
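Indices can also be given by name and combined into submatrices; a short sketch on gendat, with the values as printed above.

> gendat[1,2]                        # 1.5, the value for gene1 and Peter
> gendat["gene1","Peter"]            # the same value, now selected by name
> gendat[1:2, c("Eric","Anna")]      # a 2 x 2 submatrix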
It may be desirable to write the data to a file, either to use them at a later stage or to send them to a colleague. Consider the following script.[10]

> write.table(gendat,file="D:/data/gendat.Rdata")

The stored data then have the following layout.

Eric Peter Anna
gene1 1.00 1.50 1.25
gene2 1.35 1.55 1.30
gene3 -1.10 -1.50 -1.25
gene4 -1.20 -1.30 -1.00

[10] For more, see the "R Data Import/Export" manual, Chapter 3 of the book "R for Beginners", or search the internet with the key "r wiki matrix".

## 1.7 Computing on a data matrix

Means or standard deviations of rows or columns are often important for drawing biologically relevant conclusions. Such computations on a data matrix can be accomplished by "for loops". However, it is much more convenient to use the apply functionality on a matrix. To do so we specify the name of the matrix, indicate rows or columns (1 for rows and 2 for columns), and the name of the function. To illustrate this we compute the mean of each person (column).

> apply(gendat,2,mean)
  Eric  Peter   Anna
0.0125 0.0625 0.0750

Similarly, the mean of each gene (row) can be computed.

> apply(gendat,1,mean)
    gene1     gene2     gene3     gene4
 1.250000  1.400000 -1.283333 -1.166667

It frequently happens that we want to re-order the rows of a matrix according to a certain criterion or, more specifically, according to the values in a certain column vector. For instance, to re-order the matrix gendat according to the row means, it is convenient to store these in a vector and to use the function order.

> meanexprsval <- apply(gendat,1,mean)
> o <- order(meanexprsval,decreasing=TRUE)
> o
 2 1 4 3

Thus gene2 appears first because it has the largest mean 1.40, then gene1 with 1.25, followed by gene4 with -1.17 and, finally, gene3 with -1.28. Now that we have collected the order numbers in the vector o, we can re-order the whole matrix by specifying o as the row index.[11]

> gendat[o,]
      Eric Peter  Anna
gene2  1.35  1.55  1.30
gene1  1.00  1.50  1.25
gene4 -1.20 -1.30 -1.00
gene3 -1.10 -1.50 -1.25

[11] You can also use functions like sort or rank.

Another frequently occurring problem is that of selecting genes with a certain property. For instance, suppose that we want to select the genes with positive mean expression values. A first way to select these is to observe that the first two rows have positive means and to use c(1,2) as a row index.

> gendat[c(1,2),]
      Eric Peter Anna
gene1 1.00  1.50 1.25
gene2 1.35  1.55 1.30

A second way is to use the row names as an index.

> gendat[c("gene1","gene2"),]
      Eric Peter Anna
gene1 1.00  1.50 1.25
gene2 1.35  1.55 1.30

A third and more advanced way is to use an evaluation of the elements of a vector in terms of the logical values TRUE or FALSE. For instance, we may evaluate whether the row mean is positive.

> meanexprsval > 0
gene1 gene2 gene3 gene4
 TRUE  TRUE FALSE FALSE

Now we can use the evaluation of meanexprsval > 0 in terms of the values TRUE or FALSE as a row index.

> gendat[meanexprsval > 0,]
      Eric Peter Anna
gene1 1.00  1.50 1.25
gene2 1.35  1.55 1.30

Observe that this selects the genes for which the evaluation equals TRUE. This illustrates that genes can be selected by their row index, by their row name, or by their value on a logical variable.
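The three selection mechanisms can also be combined. For instance, to show only the genes with a positive mean, ordered from largest to smallest mean, one might use the following sketch.

> sel <- meanexprsval > 0
> gendat[sel,][order(meanexprsval[sel], decreasing=TRUE),]
      Eric Peter  Anna
gene2 1.35  1.55  1.30
gene1 1.00  1.50  1.25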
## 1.8 Application to the Golub (1999) data

The gene expression data collected by Golub et al. (1999) are among the classics in bioinformatics. A selection of the set is called golub and is contained in the multtest package, which is part of Bioconductor. The data consist of gene expression values of 3051 genes (rows) from 38 leukemia patients.[12] Twenty-seven patients are diagnosed with acute lymphoblastic leukemia (ALL) and eleven with acute myeloid leukemia (AML). The tumor class is given by the numeric vector golub.cl, where ALL is indicated by 0 and AML by 1. The gene names are collected in the matrix golub.gnames, of which the columns correspond to the gene index, ID, and Name, respectively. We shall first concentrate on the expression values of a gene with manufacturer name "M92287_at", which is known in biology as "CCND3 Cyclin D3". The expression values of this gene are collected in row 1042 of golub. To load the data and to obtain the relevant information from row 1042 of golub.gnames, use the following.

[12] The data are pre-processed by procedures described in Dudoit et al. (2002).

> library(multtest); data(golub)
> golub.gnames[1042,]
 "2354" "CCND3 Cyclin D3" "M92287_at"

The data are stored in a matrix called golub. The number of rows and columns can be obtained by the functions nrow and ncol, respectively.

> nrow(golub)
 3051
> ncol(golub)
 38

An alternative is to use dim(golub). Each data element has a row and a column index. Recall that the first index refers to rows and the second to columns. Hence, the second value of row 1042 can be printed to the screen as follows.

> golub[1042,2]
 1.52405

So 1.52405 is the expression value of gene CCND3 Cyclin D3 for patient number 2. The values of the first column can be printed to the screen by the following.

> golub[,1]

To save space the output is not shown. We may now print the expression values of gene CCND3 Cyclin D3 (row 1042) to the screen.

> golub[1042,]
 2.10892 1.52405 1.96403 2.33597 1.85111 1.99391 2.06597 1.81649
 2.17622 1.80861 2.44562 1.90496 2.76610 1.32551 2.59385 1.92776
 1.10546 1.27645 1.83051 1.78352 0.45827 2.18119 2.31428 1.99927
 1.36844 2.37351 1.83485 0.88941 1.45014 0.42904 0.82667 0.63637
 1.02250 0.12758 -0.74333 0.73784 0.49470 1.12058

To print the expression values of gene CCND3 Cyclin D3 to the screen only for the ALL patients, we have to refer to the first twenty-seven elements of row 1042. One possibility to do so is the following.

> golub[1042,1:27]

However, for the work ahead it is much more convenient to construct a factor indicating the tumor class of the patients. This will turn out to be useful, e.g. for separating the tumor groups in various visualization procedures. The factor will be called gol.fac and is constructed from the vector golub.cl, as follows.

> gol.fac <- factor(golub.cl, levels=0:1, labels = c("ALL","AML"))

In the sequel this factor will be used frequently. Obviously, the labels correspond to the two tumor classes. The evaluation of gol.fac=="ALL" returns TRUE for the first twenty-seven values and FALSE for the remaining eleven, which is useful as a column index for selecting the expression values of the ALL patients. The expression values of gene CCND3 Cyclin D3 for the ALL patients can now be printed to the screen, as follows.

> golub[1042,gol.fac=="ALL"]
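Before computing per group it never hurts to check that the factor encodes the design as intended; a small sketch.

> table(gol.fac)
gol.fac
ALL AML
 27  11
> golub[1042, gol.fac=="AML"]   # likewise, the AML expression values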
For all types of computations it is very useful to combine a factor with the apply functionality. For instance, to compute the mean gene expression over the ALL patients for each of the genes, we may use the following.

> meanALL <- apply(golub[,gol.fac=="ALL"], 1, mean)

The specification golub[,gol.fac=="ALL"] selects the matrix with gene expressions corresponding to the ALL patients. The 3051 mean gene expression values are assigned to the vector meanALL.

After reading the classical article by Golub et al. (1999), which is strongly recommended, one easily becomes interested in the properties of certain genes. For instance, gene CD33 plays an important role in distinguishing lymphoid from myeloid lineage cells. To perform computations on the expressions of this gene we need to know its row index. This can be obtained by the grep function.[13]

> grep("CD33",golub.gnames[,2])
 808

Hence, the expression values of antigen CD33 are available at golub[808,] and further information on it via golub.gnames[808,].

[13] Indeed, several functions of R are inspired by the Linux operating system.

## 1.9 Running scripts

It is very convenient to use a plain text editor like Notepad, Kate, Emacs, or WinEdt to formulate several consecutive R commands as separate lines (scripts). Such command lines can be executed by simply copying and pasting them into the command line editor of R. Another possibility is to execute a script from a file. To illustrate the latter consider the following.

> library(multtest); data(golub)
> gol.fac <- factor(golub.cl,levels=0:1, labels= c("ALL","AML"))
> mall <- apply(golub[,gol.fac=="ALL"], 1, mean)
> maml <- apply(golub[,gol.fac=="AML"], 1, mean)
> o <- order(abs(mall-maml), decreasing=TRUE)
> print(golub.gnames[o[1:5],2])
 "CST3 Cystatin C (amyloid angiopathy and cerebral hemorrhage)"
 "INTERLEUKIN-8 PRECURSOR"
 "Interleukin 8 (IL8) gene"
 "DF D component of complement (adipsin)"
 "MPO Myeloperoxidase"

The row means of the expression values per patient group are computed and stored in the objects mall and maml, respectively. The absolute values of the differences in means are computed and their order numbers (from large to small) are stored in the vector o. Next, the names of the five genes with the largest differences in mean are printed to the screen.

After saving the script under the name meandif.R in the directory D:\\Rscripts, it can be executed by source("D:\\Rscripts\\meandif.R"). Once the script is available in a text editor it is easy to adapt it and to re-run it. Readers are strongly encouraged to experiment with writing programming scripts. To run these it is very convenient to have your favorite text editor available and to use, for instance, the copy-and-paste functionality.
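On systems other than Windows the path convention differs; a sketch, assuming the script has been saved as meandif.R in the current working directory.

> getwd()                          # shows the current working directory
> source("meandif.R", echo=TRUE)   # runs the script, echoing every command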
## 1.10 Overview and concluding remarks

It is easy to install R and Bioconductor. R has many convenient built-in functions for statistical programming. Help and illustrations on many topics are available from various sources. With the reference charts, R manuals, (on-line) books, and the R Wiki at hand, you have various sources of information available. Although there are several GUIs available, we shall concentrate on the command line editor because its range of possibilities is much larger.

The above introduction is of course very brief. A more extensive introduction to R, assuming some background in biomedical statistics, is given by Dalgaard (2002). There are book-length treatments combining R with statistics (Venables & Ripley, 2002; Everitt & Hothorn, 2006). Other treatments go much deeper into programming aspects (Becker, Chambers, & Wilks, 1988; Venables & Ripley, 2000; Gentleman, 2008). For the sake of illustration we shall frequently work with data kindly provided by Golub et al. (1999) and Chiaretti et al. (2004). The corresponding scientific articles are freely available from the web, and it is helpful to have these at hand.

## 1.11 Exercises

1. Some questions to orient yourself.

(a) Use the function class to find the class to which the following objects belong: golub, golub[1,1], golub.cl, golub.gnames, apply, exp, gol.fac, plot, ALL.
(b) What is the meaning of the following abbreviations: rm, sum, prod, seq, sd, nrow, …?
(c) For what purpose are the following functions useful: grep, apply, gl, library, source, setwd, history, str?

2. The gendat data. Consider the data in the matrix gendat, constructed in Section 1.6. Its small size has the advantage that you can check your computations even with a pocket calculator.[14]

[14] Obtaining some routine with the apply functionality is quite helpful for what follows.

(a) Use apply to compute the standard deviation of the persons.
(b) Use apply to compute the standard deviation of the genes.
(c) Order the matrix according to the gene standard deviations.
(d) Which gene has the largest standard deviation?

3. Computations on the gene means of the Golub data.

(a) Use apply to compute the mean gene expression values.
(b) Order the data matrix according to the gene means.
(c) Give the names of the three genes with the largest mean expression value.
(d) Give the biological names of these genes.

4. Computations on the gene standard deviations of the Golub data.

(a) Use apply to compute the standard deviation per gene.
(b) Select the expression values of the genes with standard deviation larger than two.
(c) How many genes have this property?

5. Oncogenes in the Golub data.

(a) How many oncogenes are there in the dataset? Hint: Use grep.
(b) Find the biological names of the three oncogenes with the largest mean expression value for the ALL patients.
(c) Do the same for the AML patients.
(d) Write the gene probe ID and the gene names of the ten genes with the largest mean gene expression value to a csv file.

6. Constructing a factor. Construct a factor that corresponds to each of the following settings.

(a) An experiment with two conditions, each with four measurements.
(b) Five conditions, each with three measurements.
(c) Three conditions, each with five measurements.

7. Gene means for B1 patients. Load the ALL data from the ALL library and use str and openVignette() for a further orientation.

(a) Use exprs(ALL[,ALL$BT=="B1"]) to extract the gene expressions of the patients in disease stage B1. Compute the mean gene expressions over these patients.
(b) Give the gene identifiers of the three genes with the largest mean.
Chapter 2

## Descriptive Statistics and Data Display

A few essential manners to display and visualize data are given. These quickly answer questions like: How are my data distributed? How can the frequencies of nucleotides from a gene be visualized? Are there outliers in my data? Does the distribution of my data resemble that of a bell-shaped curve? Are there differences between gene expression values taken from two groups of patients? The most important measures of central tendency (mean, median) are defined and illustrated, together with the most important measures of spread (standard deviation, variance, interquartile range, and median absolute deviation).

## 2.1 Univariate data display

To observe the distribution of a data vector, various visualization methods are made available. These are frequently used by practitioners as well as by experts.

## 2.1.1 Frequency table

Discrete data occur when the values naturally fall into categories. A frequency table simply gives the number of occurrences within each category.

Example 1. A gene consists of a sequence of the nucleotides {A, C, G, T}. The number of each nucleotide can be displayed in a frequency table. This will be illustrated by the Zyxin gene, which plays an important role in cell adhesion (Golub et al., 1999). The accession number (X94991.1) of one of its variants can be found via an NCBI UniGene search. The code below illustrates how to install the package ape, to load it, to read gene "X94991.1" of the species Homo sapiens from GenBank, and to make a frequency table of the four nucleotides.

install.packages(c("ape"),repo="http://cran.r-project.org",dep=TRUE)
library(ape)
table(read.GenBank(c("X94991.1"),as.character=TRUE))
pie(table(read.GenBank(c("X94991.1"))))

From the resulting frequencies in Table 2.1 it seems that the nucleotides are not equally likely. A nice way to visualize a frequency table is by plotting a pie chart.

Table 2.1: A frequency table of the Zyxin gene, visualized by the pie chart produced above.

   A     C     G     T
 410   789   573   394
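Dividing the counts by the total yields the relative frequencies, which often summarize such a table more directly; a sketch using the counts of Table 2.1.

> counts <- c(A=410, C=789, G=573, T=394)
> round(counts/sum(counts), 3)
    A     C     G     T
0.189 0.364 0.265 0.182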
## 2.1.2 Plotting data

An elementary method to visualize data is the so-called stripchart, in which the data values are represented as, e.g., small boxes. Often it is useful in combination with a factor that distinguishes members of different experimental conditions or patient groups.

Example 1. Many visualization methods will be illustrated by the Golub et al. (1999) data. We shall concentrate on the expression values of gene "CCND3 Cyclin D3", which are collected in row 1042 of the data matrix golub. To plot the data values one can simply use plot(golub[1042,]). In the resulting plot in Figure 2.1 the vertical axis gives the size of the expression values and the horizontal axis the index of the patients. It can be observed that the values for patients 28 to 38 are somewhat lower, but, indeed, the picture is not very clear because the groups are not separated. To produce two adjacent stripcharts, one for the ALL and one for the AML patients, we use the factor called gol.fac from the previous chapter.

data(golub, package = "multtest")
gol.fac <- factor(golub.cl,levels=0:1, labels= c("ALL","AML"))
stripchart(golub[1042,] ~ gol.fac, method="jitter")

From the resulting Figure 2.2 it can be observed that the CCND3 Cyclin D3 expression values of the ALL patients tend to be larger than those of the AML patients.

[Figure 2.1: Plot of gene expression values of CCND3 Cyclin D3. Figure 2.2: Stripchart of gene expression values of CCND3 Cyclin D3 for the ALL and AML patients.]

## 2.1.3 Histogram

Another method to visualize data is to divide the range of data values into a number of intervals and to plot the frequency per interval as a bar. Such a plot is called a histogram.

Example 1. A histogram of the expression values of gene "CCND3 Cyclin D3" of the acute lymphoblastic leukemia patients can be produced as follows.

> hist(golub[1042, gol.fac=="ALL"])

The function hist divides the data into 5 intervals of width 0.5, see Figure 2.3. Observe from the latter that one value is small and the others are more or less symmetrically distributed around the mean.

In the previous example we trusted the default method to compute the appropriate number of bars or breaks. If the data are more or less distributed according to a bell-shaped curve, then this is often a good strategy. The number of bars can be chosen by the breaks option of the function hist. Optimal choices for the number of breaks are discussed by e.g. Venables and Ripley (2002).

## 2.1.4 Boxplot

It is always possible to sort n data values into increasing order $x_1 \le x_2 \le \cdots \le x_n$, where $x_1$ is the smallest, $x_2$ the second smallest, etc. Let $x_{0.25}$ be a number such that 25% of the data values $x_1, \cdots, x_n$ are smaller. That is, 25% of the data values lie to the left of the number $x_{0.25}$, which is why it is called the first quartile or the 25th percentile. The second quartile is the value $x_{0.50}$ such that 50% of the data values are smaller. Similarly, the third quartile or 75th percentile is the value $x_{0.75}$ such that 75% of the data are smaller. A popular method to display data is by drawing a box around the first and the third quartile (with a bold line segment for the median), and smaller line segments (whiskers) for the smallest and the largest data values. Such a data display is known as a box-and-whisker plot.

Example 1. A vector with gene expression values can be put into increasing order by the function sort. We shall illustrate this by the ALL expression values of gene "CCND3 Cyclin D3" in row 1042 of golub.

> x <- sort(golub[1042, gol.fac=="ALL"], decreasing = FALSE)
> x[1:5]
 0.458 1.105 1.276 1.326 1.368

The second command prints the first five sorted data values to the screen, so that we have $x_1 = 0.458$, $x_2 = 1.105$, etc. Note that the mathematical notation $x_i$ corresponds exactly to the R notation x[i].

[Figure 2.3: Histogram of the ALL expression values of gene CCND3 Cyclin D3. Figure 2.4: Boxplot of the ALL and AML expression values of gene CCND3 Cyclin D3.]

Example 2. A view on the distribution of the expression values of the ALL and the AML patients on gene CCND3 Cyclin D3 can be obtained by constructing two separate boxplots adjacent to one another. To produce such a plot the factor gol.fac is again very useful.

> boxplot(golub[1042,] ~ gol.fac)

From the position of the boxes in Figure 2.4 it can be observed that the gene expression values for ALL are larger than those for AML. Furthermore, since the two sub-boxes around the median are more or less equally wide, the data are quite symmetrically distributed around the median.
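A compact numerical counterpart of the boxplot is given by the function summary, which reports the minimum, the three quartiles, the mean, and the maximum in one call; its quartiles agree with the quantile computation that follows.

> summary(golub[1042, gol.fac=="ALL"])   # Min., 1st Qu., Median, Mean, 3rd Qu., Max.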
To compute exact values for the quartiles we need a sequence running from 0.00 to 1.00 with steps equal to 0.25. To construct such a sequence the function seq is quite useful.

> pvec <- seq(0,1,0.25)
> quantile(golub[1042, gol.fac=="ALL"],pvec)
    0%    25%    50%    75%   100%
 0.458  1.796  1.928  2.179  2.766

The first quartile is $x_{0.25} = 1.796$, the second $x_{0.50} = 1.928$, and the third $x_{0.75} = 2.179$. The smallest observed expression value equals $x_{0.00} = 0.458$ and the largest $x_{1.00} = 2.766$. These extremes can also be obtained by the functions min(golub[1042, gol.fac=="ALL"]) and max(golub[1042, gol.fac=="ALL"]), or more briefly by range(golub[1042, gol.fac=="ALL"]).

Outliers are data values lying far apart from the pattern set by the majority of the data values. The implementation of the (modified) boxplot in R draws such outlier points separately as small circles. A data point x is defined as an outlier point if

$$x < x_{0.25} - 1.5 \cdot (x_{0.75} - x_{0.25}) \quad \text{or} \quad x > x_{0.75} + 1.5 \cdot (x_{0.75} - x_{0.25}).$$

From Figure 2.4 it can be observed that there are outliers among the gene expression values of the ALL patients. These are the smaller values 0.45827 and 1.10546, and the largest value 2.76610. The AML expression values have one outlier with value -0.74333. To define extreme outliers, the factor 1.5 is replaced by 3.0. Note that this is a descriptive way of defining outliers, as opposed to statistically testing for the existence of an outlier.

## 2.1.5 Quantile-Quantile plot

A method to visualize the distribution of gene expression values is the so-called quantile-quantile (Q-Q) plot. In such a plot the quantiles of the gene expression values are displayed against the corresponding quantiles of the normal (bell-shaped) distribution. A straight line is added, representing the points that correspond exactly to the quantiles of the normal distribution. By observing the extent to which the points lie on the line, it can be evaluated to what degree the data are normally distributed. That is, the closer the gene expression values are to the line, the more likely it is that the data are normally distributed.

[Figure 2.5: Q-Q plot of the ALL gene expression values of CCND3 Cyclin D3.]

Example 1. To produce a Q-Q plot of the ALL gene expression values of CCND3 Cyclin D3 one may use the following.

qqnorm(golub[1042, gol.fac=="ALL"])
qqline(golub[1042, gol.fac=="ALL"])

From the resulting Figure 2.5 it can be observed that most of the data points are on or near the straight line, while a few have a larger distance to the line. The above example illustrates a case where the degree of non-normality is moderate, so that a clear conclusion cannot be drawn. By making the exercises below, the reader will gather more experience with the degree to which gene expression values are normally distributed.
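To calibrate your eye, it helps to compare with data that are normal by construction; a sketch drawing a random sample of the same size as the number of ALL patients.

> y <- rnorm(27, mean=1.9, sd=0.5)   # a truly normal sample of size 27
> qqnorm(y); qqline(y)               # the points scatter closely around the line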
## 2.2 Descriptive statistics

There exist various ways to describe the central tendency as well as the spread of data. In particular, the central tendency can be described by the mean or the median, and the spread by the variance, standard deviation, interquartile range, or median absolute deviation. These will be defined and illustrated.

## 2.2.1 Measures of central tendency

The most important descriptive statistics for central tendency are the mean and the median. The sample mean of the data values $x_1, \cdots, x_n$ is defined as

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i = \frac{1}{n}(x_1 + \cdots + x_n).$$

Thus the sample mean is simply the average of the n data values. Since it is the sum of all data values divided by the sample size, a few extreme data values may largely influence its size. In other words, the mean is not robust against outliers. The median is defined as the second quartile or the 50th percentile and is denoted by $x_{0.50}$. When the data are symmetrically distributed around the mean, then the mean and the median are equal. Since extreme data values do not influence the size of the median, it is very robust against outliers. Robustness is important in bioinformatics because data are frequently contaminated by extreme or otherwise influential data values.

Example 1. To compute the mean and median of the ALL expression values of gene CCND3 Cyclin D3, consider the following.

> mean(golub[1042, gol.fac=="ALL"])
 1.89
> median(golub[1042, gol.fac=="ALL"])
 1.93

Note that the mean and the median do not differ much, so that the distribution seems quite symmetric.

## 2.2.2 Measures of spread

The most important measures of spread are the standard deviation, the interquartile range, and the median absolute deviation. The standard deviation is the square root of the sample variance, which is defined as

$$s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2 = \frac{1}{n-1}\left((x_1 - \bar{x})^2 + \cdots + (x_n - \bar{x})^2\right).$$

Hence, it is essentially the average of the squared differences between the data values and the sample mean (up to the division by n - 1 rather than n). The sample standard deviation s is the square root of the sample variance and may be interpreted as the typical distance of the data values to the mean. The variance and the standard deviation are not robust against outliers.

The interquartile range is defined as the difference between the third and the first quartile, that is, $x_{0.75} - x_{0.25}$. It can be computed by the function IQR(x). More specifically, the value IQR(x)/1.349 is a robust estimator of the standard deviation. The median absolute deviation (MAD) is defined as a constant times the median of the absolute deviations of the data from the median (e.g. Jurečková & Picek, 2006, p. 63). In R it is computed by the function mad, defined as the median of the sequence $|x_1 - x_{0.50}|, \cdots, |x_n - x_{0.50}|$, multiplied by the constant 1.4826. It equals the standard deviation in case the data come from a bell-shaped (normal) distribution (see Section 3.2.1). Because the interquartile range and the median absolute deviation are based on quantiles, they are robust against outliers.

Example 1. These measures of spread for the ALL expression values of gene CCND3 Cyclin D3 can be computed as follows.

> sd(golub[1042, gol.fac=="ALL"])
 0.491
> IQR(golub[1042, gol.fac=="ALL"]) / 1.349
 0.284
> mad(golub[1042, gol.fac=="ALL"])
 0.368

Due to the three outliers (cf. Figure 2.4) the standard deviation is larger than the interquartile range estimate and the median absolute deviation. That is, the absolute differences with respect to the median are somewhat smaller than the root of the squared differences with respect to the mean.

## 2.3 Overview and concluding remarks

Data can be stored as a vector or a data matrix, on which various useful functions are defined. In particular, it is easy to produce a pie chart, histogram, boxplot, or Q-Q plot of a vector of data. These plots give a useful first impression of the degree of (non-)normality of gene expression values.
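Before turning to the exercises, the robustness contrast of Section 2.2 is easy to demonstrate on a small artificial vector; a sketch, with numbers chosen arbitrarily.

> x <- c(2, 3, 4, 5, 6)
> c(mean(x), median(x))
 4 4
> x[5] <- 60                 # turn the largest value into an outlier
> c(mean(x), median(x))
 14.8  4.0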
## 2.4 Exercises

Since the majority of the exercises are based on the Golub et al. (1999) data, it is essential to make these available and to learn to work with them. To stimulate self-study, the answers are given at the end of the book.

1. Illustration of mean and standard deviation.

(a) Compute the mean and the standard deviation for 1, 1.5, 2, 2.5, 3.
(b) Compute the mean and the standard deviation for 1, 1.5, 2, 2.5, 30.
(c) Comment on the differences.

2. Comparing normality for two genes. Consider the gene expression values in rows 790 and 66 of the Golub et al. (1999) data.

(a) Produce a boxplot for the expression values of the ALL patients and comment on the differences. Are there outliers?
(b) Produce a Q-Q plot and formulate a hypothesis about the normality of the genes.
(c) Compute the mean and the median for the expression values of the ALL patients and compare these. Do this for both genes.

3. Effect size. An important statistic to measure the effect size is, for a sample, defined as $\bar{x}/s$. It measures the mean relative to the standard deviation, so that its value is large when the mean is large and the standard deviation small.

(a) Determine the five genes with the largest effect size of the ALL patients from the Golub et al. (1999) data. Comment on their size.
(b) Invent a robust variant of the effect size and use it to answer the previous question.

4. Plotting the gene expressions of "CCND3 Cyclin D3". Use the gene expressions of "CCND3 Cyclin D3" from Golub et al. (1999), collected in row 1042 of the object golub from the multtest library. After using the function plot you obtain an object on which you can program.

(a) Produce a so-called stripchart for the gene expressions, separately for the ALL as well as for the AML patients. Hint: Use a factor for appropriate separation.
(b) Rotate the plot to a vertical position and keep it that way for the questions to come.
(c) Color the ALL expressions red and the AML expressions blue. Hint: Use the col parameter.
(d) Add a title to the plot. Hint: Use title.
(e) Change the boxes into stars. Hint: Use the pch parameter.
Hint: Store the final script you like the most in your text editor, in order to be able to use it efficiently later on.

5. Box-and-whiskers plot of "CCND3 Cyclin D3". Use the gene expressions of "CCND3 Cyclin D3" from Golub et al. (1999), row 1042 of the object golub of the multtest library.

(a) Construct the boxplot in Figure 2.6.
(b) Add text to the plot to explain the meaning of the upper and lower part of the box.
(c) Do the same for the whiskers.
(d) Export your plot to eps format.
Hint 1: Use locator() to find the coordinates of positions in the plot.
Hint 2: Use xlim to make the plot somewhat wider.
Hint 3: Use arrows to add an arrow.
Hint 4: Use text to add information at a certain position.

6. Box-and-whiskers plot of the persons in the Golub et al. (1999) data.

(a) Use boxplot(data.frame(golub)) to produce a box-and-whiskers plot for each column (person). Make a screenshot to save it in a word processor. Describe what you see. Are the medians of similar size? Is the interquartile range more or less equal? Are there outliers?
(b) Compute the means and the medians of the persons. What do you observe?
(c) Compute the range (minimal and maximum value) of the standard deviations, the IQR, and the MAD of the persons. Comment on what you observe.

7. Oncogenes in the Golub et al. (1999) data.

(a) Select the oncogenes with the grep facility and produce a box-and-whiskers plot of the gene expressions of the ALL patients.
(b) Do the same for the AML patients and use par(mfrow=c(2,1)) to combine the two plots such that the second is beneath the first. Are there genes with clear differences between the groups?
[Figure 2.6: Boxplot with arrows and explanatory text, indicating the median and an outlier.]

8. Descriptive statistics for the ALL gene expression values of the Golub et al. (1999) data.

(a) Compute the mean and the median for the gene expression values of the ALL patients, report their range, and comment on it.
(b) Compute the SD, IQR, and MAD for the gene expression values of the ALL patients, report their range, and comment on it.

Chapter 3

## Important Distributions

Questions that concern us in this chapter are: What is the probability of finding fourteen purines in a microRNA of length twenty-two? If the expressions from the ALL patients of gene CCND3 Cyclin D3 are normally distributed with mean 1.90 and standard deviation 0.5, what is the probability of observing expression values larger than 2.4? To answer such questions we need to know more about statistical distributions, as given in applied books on statistics (e.g. Samuels & Witmer, 2003). In this chapter several important distributions will be defined, explained, and illustrated. In particular, the discrete binomial distribution and the continuous normal, T, F, and chi-squared distributions will be elaborated. These distributions have a wealth of applications to statistically testing biological hypotheses. Only when deemed relevant are the density function, the distribution function, the mean µ (mu), and the standard deviation σ (sigma) explicitly defined.

## 3.1 Discrete distributions

The binomial distribution is fundamental and has many applications in medicine and bioinformatics.

## 3.1.1 Binomial distribution

The binomial distribution applies to repeated trials, each with a dichotomous outcome such as success-failure, healthy-diseased, heads-tails, purine-pyrimidine, etc. When there are n trials, the number of ways to obtain k successes out of n is given by the binomial coefficient

$$\frac{n!}{k!\,(n-k)!},$$

where $n! = n \cdot (n-1) \cdots 1$ and $0! = 1$. The binomial probability of k successes out of n is the product of the binomial coefficient with the probability of k successes and the probability of n - k failures. Let p be the probability of success in a single trial and X the random variable denoting the number of successes. Then the probability of the event that k successes occur out of n trials equals

$$P(X = k) = \frac{n!}{k!\,(n-k)!}\, p^k (1-p)^{n-k}, \qquad k = 0, \cdots, n. \tag{3.1}$$

The collection of these probabilities is called the probability density function. For completeness we mention that the mean of a binomially distributed variable is np and its variance is np(1 - p). The standard deviation is the square root of the variance, that is, $\sqrt{np(1-p)}$.

Example 1. To visualize the binomial distribution, load the TeachingDemos package and use the command vis.binom(). Click on "Show Normal Approximation" and observe that the approximation improves as n increases, taking p for instance near 0.5.

Example 2. If two carriers of the gene for albinism marry, then each of their children has probability 1/4 of being albino. What is the probability that exactly one child out of three is albino? To answer this question we insert n = 3, k = 1, and p = 0.25 into Equation (3.1):

$$P(X = 1) = \frac{3!}{1!\,(3-1)!}\, 0.25^1\, 0.75^2 = 3 \cdot 0.140625 = 0.421875.$$

An elementary manner to compute this in R is by

> choose(3,1)* 0.25^1* 0.75^2

where choose(3,1) computes the binomial coefficient.
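That choose agrees with the factorial definition of the binomial coefficient is easily verified; a one-line check.

> choose(3,1)
 3
> factorial(3)/(factorial(1)*factorial(2))
 3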
A more efficient manner of computation is by the built-in density function dbinom(k,n,p). This can, for instance, be useful for printing the values of the probabilities.

> for (k in 0:3) print(dbinom(k,3,0.25))

Changing d into p yields the so-called distribution function with the cumulative probabilities. That is, the probability of at most two albino children, P(X ≤ 2), is computed by pbinom(2,3,0.25). The values of the density and distribution function are summarized in Table 3.1. From the table we read that the probability of no albino child is 0.4218 and the probability that all three children are albino equals 0.0156.

Table 3.1: Density and distribution function values of X, the number of successes out of n = 3 trials with p = 0.25.

                          k = 0    k = 1    k = 2    k = 3
density      P(X = k)    0.4218   0.4218   0.1406   0.0156
distribution P(X ≤ k)    0.4218   0.8437   0.9843   1

Example 3. RNA consists of a sequence of the nucleotides A, G, U, and C, where the first two are purines and the last two are pyrimidines. Suppose, for the purpose of illustration, that the length of a certain microRNA is 22, that the probability of a purine equals 0.7, and that the placement of purines and pyrimidines is binomially distributed. The event that our microRNA contains 14 purines can be represented by X = 14. The probability of this event can be computed by

$$P(X = 14) = \frac{22!}{14!\,(22-14)!}\, 0.7^{14}\, 0.3^{8} = \texttt{dbinom(14, 22, 0.7)} = 0.1423.$$

This is the value of the density function at 14. The probability of less than or equal to 13 purines equals the value of the distribution function at 13, that is,

$$P(X \le 13) = \texttt{pbinom(13, 22, 0.7)} = 0.1865.$$

The probability of strictly more than 10 purines is

$$P(X \ge 11) = \sum_{k=11}^{22} P(X = k) = \texttt{sum(dbinom(11:22, 22, 0.7))} = 0.9860.$$

The expected number of purines equals $22 \times 0.7 = 15.4$ and the standard deviation equals $\sqrt{22 \times 0.7 \times 0.3} = 2.1494$. The binomial density function can be plotted as follows.

> x <- 0:22
> plot(x,dbinom(x,size=22,prob=.7),type="h")

By the first line the sequence of integers {0, 1, · · · , 22} is constructed and by the second the density function is plotted, where the argument type="h" draws vertical pins. From Figure 3.1 it can be observed that the largest probabilities occur near the expectation 15.4. The graph in Figure 3.2 illustrates that the distribution function is an increasing step function, with x on the horizontal axis and P(X ≤ x) on the vertical axis.

[Figure 3.1: Binomial probabilities with n = 22 and p = 0.7. Figure 3.2: Binomial cumulative probabilities with n = 22 and p = 0.7.]

A random sample of size 1000 from the binomial distribution with n = 22 and p = 0.7 can be drawn by the command rbinom(1000,22,0.7). This simulates the number of purines in 1000 microRNAs, each with purine probability equal to 0.7 and length 22.
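Such a random sample also offers a quick empirical check of the computations of Example 3; a sketch, where 10000 draws is an arbitrary but comfortable number.

> s <- rbinom(10000, 22, 0.7)   # 10000 simulated purine counts
> mean(s); sd(s)                # close to the theoretical 15.4 and 2.1494
> mean(s >= 11)                 # proportion with at least 11 purines;
                                # close to P(X >= 11) = 0.9860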
## 3.2 Continuous distributions

The continuous distributions normal, T, F, and chi-squared will be defined, explained, and illustrated.

## 3.2.1 Normal distribution

The normal distribution is of key importance because many (preprocessed) gene expression values have a normal distribution or are assumed to have one. That is, the data values $x_1, \cdots, x_n$ are seen as realizations of a random variable X having a normal distribution. Equivalently, one says that the data values are members of a normally distributed population with mean µ (mu) and variance σ² (sigma squared). It is good custom to use Greek letters for population properties and to write N(µ, σ²) for the normal distribution. The value of the distribution function is given by P(X ≤ x), the probability of the population to have values smaller than or equal to x. Various properties of the normal distribution are illustrated by the examples below.

Example 1. To view various members of the normal distribution, load the TeachingDemos package and give the command vis.normal() to launch an interactive display of bell-shaped curves. These bell-shaped curves are also called normal densities. The curves are symmetric around µ and attain a unique maximum at x = µ. As x moves further away from the mean µ, the curve tends to zero, so that extreme values occur with small probability. Move the Mean and the Standard Deviation from the left to the right to explore their effect on the shape of the normal distribution. In particular, when the mean µ increases, the distribution moves to the right. If σ is small/large, then the distribution is steep/flat.

Example 2. Suppose that the expression values of gene CCND3 Cyclin D3 can be represented by X, which is distributed as N(1.90, 0.5²). From the graph of its density function in Figure 3.3, it can be observed that it is symmetric and bell-shaped around µ = 1.90. A density function may very well be seen as a histogram with arbitrarily small bars (intervals). The probability that the expression values are less than 1.4 is

P(X < 1.4) = pnorm(1.4, 1.9, 0.5) = 0.1586.

Figure 3.4 illustrates the value 0.16 of the distribution function at x = 1.4. It corresponds to the area of the blue colored surface below the graph of the density function in Figure 3.3. The probability that the expression values are larger than 2.4 is

P(X ≥ 2.4) = 1 - pnorm(2.4, 1.9, 0.5) = 0.1586.

The probability that X is between 1.4 and 2.4 equals

P(1.4 ≤ X ≤ 2.4) = pnorm(2.4, 1.9, 0.5) - pnorm(1.4, 1.9, 0.5) = 0.6827.

[Figure 3.3: Graph of the normal density with mean 1.9 and standard deviation 0.5. Figure 3.4: Graph of the normal distribution function with mean 1.9 and standard deviation 0.5.]

The graph of the distribution function in Figure 3.4 illustrates that it is strictly increasing. The exact value of the quantile $x_{0.025}$ can be computed by

> qnorm(0.025,1.9,0.5)
 0.920018

That is, the quantile $x_{0.025} = 0.920018$. Hence, the probability of values smaller than 0.920018 equals 0.025, that is, P(X ≤ 0.920018) = 0.025. This can be checked by the command pnorm(0.920018, 1.9, 0.5). When X is distributed as N(1.90, 0.5²), then the population mean is 1.9 and the population standard deviation is 0.5. To verify this we draw a random sample of size 1000 from this population by

> x <- rnorm(1000,1.9,0.5)

The estimates mean(x)=1.8862 and sd(x)=0.5071 are close to their population values µ = 1.9 and σ = 0.5.[1]

[1] Use the function round to print the mean in a desired number of decimal places.
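Note that P(X < 1.4) and P(X ≥ 2.4) came out equal; this is no accident, as 1.4 and 2.4 lie symmetrically around the mean 1.9. A quick check:

> pnorm(1.4, 1.9, 0.5)
 0.1586553
> 1 - pnorm(2.4, 1.9, 0.5)
 0.1586553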
For X distributed as N(µ, σ²), it holds that Z = (X - µ)/σ is distributed as N(0, 1). Thus, by subtracting µ and dividing the result by σ, any normally distributed variable can be standardized into a standard normally distributed Z having mean zero and standard deviation one.

## 3.2.2 Chi-squared distribution

The chi-squared distribution plays an important role in testing hypotheses about frequencies, see Chapter 4. To define it, let $Z_1, \cdots, Z_m$ be independent and standard normally distributed random variables. Then the sum of squares

$$\chi^2_m = Z_1^2 + \cdots + Z_m^2 = \sum_{i=1}^{m} Z_i^2$$

is the so-called chi-squared distributed (random) variable with m degrees of freedom.

Example 1. To view various members of the χ² distribution, load the TeachingDemos package. Use the command vis.gamma() to open an interactive display of various distributions. Click on "Visualizing the gamma", "Visualizing the Chi-squared", and adapt "Xmax". Move the "Shape" button to the right to increase the degrees of freedom. Observe that the graphs of the chi-squared densities change from heavily skewed to the right into a more bell-shaped, normal-like form as the degrees of freedom increase.

Example 2. Let's consider the chi-squared variable with 5 degrees of freedom, $\chi^2_5 = Z_1^2 + \cdots + Z_5^2$. To compute the probability of values smaller than eight we use the function pchisq, as follows.

$$P(\chi^2_5 \le 8) = \texttt{pchisq(8, 5)} = 0.8437644.$$

This yields the value of the distribution function at x = 8 in Figure 3.6, which corresponds to the area of the blue colored surface below the graph of the density function in Figure 3.5. Often we are interested in the value of the quantile $x_{0.025}$, for which $P(\chi^2_5 \le x_{0.025}) = 0.025$.[2] It can be computed as follows.

> qchisq(0.025, 5, lower.tail=TRUE)
 0.8312

[2] If the distribution function is strictly increasing, then there exists an exact and unique solution for the quantiles.

[Figure 3.5: The χ²₅ density. Figure 3.6: The χ²₅ distribution function.]

Example 3. The chi-squared distribution is frequently used as a so-called goodness-of-fit measure. With respect to the Golub et al. (1999) data, we may hypothesize that the expression values of gene CCND3 Cyclin D3 for the ALL patients are distributed as N(1.90, 0.50²). If this indeed holds, then the sum of squared standardized values is small and the probability of larger values is large. In particular, let $x_1, \cdots, x_{27}$ be the gene expression values. Then the standardized values are $z_i = (x_i - 1.90)/0.50$ and their sum of squares is $\sum_{i=1}^{27} z_i^2 = 25.03312$. The probability of larger values is $P(\chi^2_{27} \ge 25.03312) = 0.5726$, which indicates that the normal distribution fits the data well. Hence, it is likely that the specified normal distribution is indeed correct. Using R the computations are as follows.

> library(multtest); data(golub)
> gol.fac <- factor(golub.cl,levels=0:1, labels= c("ALL","AML"))
> x <- golub[1042,gol.fac=="ALL"]
> z <- (x-1.90)/0.50
> sum(z^2)
 25.03312
> 1 - pchisq(sum(z^2),27)
 0.5726059

## 3.2.3 T-distribution

The T-distribution has many useful applications for testing hypotheses about means of gene expression values, in particular when the sample size is below thirty. If the data are normally distributed, then the values of $\sqrt{n}(\bar{x} - \mu)/s$ follow a T-distribution with n - 1 degrees of freedom. The T-distribution is approximately equal to the normal distribution when the degrees of freedom equal thirty.
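This approximation can also be checked numerically by comparing quantiles; a sketch.

> qt(0.975, df=c(5, 10, 30, 100))
 2.570582 2.228139 2.042272 1.983972
> qnorm(0.975)
 1.959964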
[Figure 3.7: Density of the T₁₀ distribution. Figure 3.8: Distribution function of the T₁₀ distribution.]

Example 1. Load the TeachingDemos package and give vis.t() to explore a visualization of the T-distribution. Click on "Show Normal Distribution" and increase the number of degrees of freedom to verify that df equal to thirty is sufficient for the normal approximation to be quite precise. For this reason these distributions are considered to be equal for df greater than or equal to thirty.

Example 2. A quick NCBI scan makes it reasonable to assume that the gene Gdf5 has no direct relation to leukemia. For this reason we take µ = 0. The expression values of this gene are collected in row 2058 of the golub data. To compute the sample t-value $\sqrt{n}(\bar{x} - \mu)/s$, use the following.

n <- 11
x <- golub[2058, gol.fac=="AML"]
t.value <- sqrt(n)*(mean(x)-0)/sd(x)
t.value
 1.236324

From the above we know that this value comes from a T₁₀ distribution. The probability that T₁₀ is greater than 1.236324 can be computed as follows:

$$P(T_{10} \ge 1.236324) = 1 - P(T_{10} \le 1.236324) = 1 - \texttt{pt(1.236324, 10)} = 0.1222945.$$

This probability corresponds to the area of the blue colored surface below the graph of the density function in Figure 3.7. The T distribution function with ten degrees of freedom is illustrated in Figure 3.8. The probability that the random variable T₁₀ is between -2 and 2 equals

$$P(-2 \le T_{10} \le 2) = \texttt{pt(2, 10)} - \texttt{pt(-2, 10)} = 0.926612.$$

The 2.5% quantile can be computed by qt(0.025,n-1) = -2.228139.

## 3.2.4 F-distribution

The F-distribution is important for testing the equality of two variances. It can be shown that the ratio of variances from two independent sets of normally distributed random variables follows an F-distribution. More specifically, if the two population variances are equal (σ₁² = σ₂²), then $s_1^2/s_2^2$ follows an F-distribution with $n_1 - 1$, $n_2 - 1$ degrees of freedom, where $s_1^2$ is the variance of the first set, $s_2^2$ that of the second, $n_1$ is the number of observations in the first set, and $n_2$ in the second.

Example 1. For equal population variances the probability is large that the ratio of sample variances is near one. With respect to the Golub et al. (1999) data it is easy to compute the ratio of the variances of the expression values of gene CCND3 Cyclin D3 for the ALL patients and the AML patients.

> var(golub[1042,gol.fac=="ALL"])/var(golub[1042,gol.fac=="AML"])
 0.7116441

Since n₁ = 27 and n₂ = 11, this ratio is a realization of the F₂₆,₁₀ distribution. The probability that the ratio attains values smaller than 0.7116441 is

$$P(X \le 0.7116441) = \texttt{pf(0.7116441, 26, 10)} = 0.2326147.$$

[Figure 3.9: Density of the F₂₆,₁₀ distribution. Figure 3.10: Distribution function of the F₂₆,₁₀ distribution.]

Figure 3.9 illustrates that this value corresponds to the area of the blue colored surface below the graph of the density function. Figure 3.10 gives the distribution function. To find the quantile $x_{0.025}$ one simply uses qf(.025,26,10) = 0.3861673. This subject is taken further in Section 4.1.5.
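That the variance ratio indeed behaves like an F₂₆,₁₀ variable can be illustrated by simulation; a sketch with the sample sizes of the example and equal population variances.

> r <- replicate(10000, var(rnorm(27))/var(rnorm(11)))
> mean(r <= 0.7116441)   # close to pf(0.7116441, 26, 10) = 0.2326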
## 3.2.5 Plotting a density function[3]

A convenient manner to plot a density function is by using the corresponding built-in function. For instance, to plot the bell-shaped density of the normally distributed variable, use the function dnorm, as follows.

> f<-function(x){dnorm(x,1.9,0.5)}
> plot(f,0,4,xlab="x-axis",ylab="density f(x)")

This produces the graph of the density function in Figure 3.3. The specification 0,4 defines the interval on the horizontal axis over which f is plotted. The vertical axis is adapted automatically. We can give the surface under f, running x from 0 to 1.4, a nice blue color by using the following.

plot(f,0,4,xlab="x-axis",ylab="density f(x)")
x<-seq(0,1.4,0.01)
polygon(c(0,x,1.4), c(0,f(x),0), col="lightblue")

The basic idea of plotting is to start with a plot and then to add colors, text, arrows, etc. In particular, the command polygon is used to give the surface below the graph the color "lightblue". The polygon (a surface enclosed by many angles) is defined by the sequence of points given by x and f(x).

[3] This subsection can be skipped without loss of continuity.

## 3.3 Overview and concluding remarks

For practical computations R has built-in functions for the binomial, normal, T, F, and χ² distributions, where d stands for density, p for the (cumulative) probability distribution, q for quantiles, and r for drawing random samples; see Table 3.2. The density, expectation, and variance of most of the distributions in this chapter are summarized in Table 3.3.

Table 3.2: Built-in functions for the random variables used in this chapter.

Distribution  parameters  density         distribution    quantiles      random sampling
Binomial      n, p        dbinom(x,n,p)   pbinom(x,n,p)   qbinom(α,n,p)  rbinom(10,n,p)
Normal        µ, σ        dnorm(x,µ,σ)    pnorm(x,µ,σ)    qnorm(α,µ,σ)   rnorm(10,µ,σ)
Chi-squared   m           dchisq(x,m)     pchisq(x,m)     qchisq(α,m)    rchisq(10,m)
T             m           dt(x,m)         pt(x,m)         qt(α,m)        rt(10,m)
F             m, n        df(x,m,n)       pf(x,m,n)       qf(α,m,n)      rf(10,m,n)

Although for a first introduction the above distributions are without doubt among the most important, there are several additional distributions available, such as the Poisson, gamma, beta, or Dirichlet. Obviously, these can also be programmed by yourself. The free encyclopedia Wikipedia often gives a first, though technical, orientation. Note that a distribution acts as a population from which a sample can be drawn. Hence, distributions can be seen as models of data-generating procedures.

Table 3.3: Density, mean, and variance of the distributions used in this chapter.

Distribution  parameters  density                                                   expectation  variance
Binomial      n, p        $\frac{n!}{k!(n-k)!}\, p^k (1-p)^{n-k}$                   np           np(1 - p)
Normal        µ, σ        $\frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\tfrac{1}{2}\left(\tfrac{x-\mu}{\sigma}\right)^2\right)$  µ  σ²
Chi-squared   df = m      —                                                         m            2m

For a more thorough treatment of distributions we refer the reader to Bain & Engelhardt (1992), Johnson et al. (1992), and Miller & Miller (1999).
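The d/p/q/r naming scheme of Table 3.2 is easy to remember once you have seen that q is the inverse of p; a sketch for the normal family of Section 3.2.1.

> dnorm(1.4, 1.9, 0.5)          # density at x = 1.4
> pnorm(1.4, 1.9, 0.5)          # P(X <= 1.4) = 0.1586553
> qnorm(0.1586553, 1.9, 0.5)    # recovers (approximately) the value 1.4
> rnorm(3, 1.9, 0.5)            # three random draws from N(1.9, 0.5^2)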
## 3.4 Exercises

It is important to obtain some routine in the computation of probabilities and quantiles.

1. Binomial. Let X be binomially distributed with n = 60 and p = 0.4. Compute the following.

(a) P(X = 24), P(X ≤ 24), and P(X ≥ 30).
(b) P(20 ≤ X ≤ 30) and P(20 ≤ X).
(c) P(20 ≤ X or X ≥ 40), and P(20 ≤ X and X ≥ 10).
(d) The mean and standard deviation of X.
(e) The quantiles $x_{0.025}$, $x_{0.5}$, and $x_{0.975}$.

2. Standard normal. Compute the following probabilities and quantiles.

(a) P(1.6 < Z < 2.3).
(b) P(Z < 1.64).
(c) P(-1.64 < Z < -1.02).
(d) P(0 < Z < 1.96).
(e) P(-1.96 < Z < 1.96).
(f) The quantiles $z_{0.025}$, $z_{0.05}$, $z_{0.5}$, $z_{0.95}$, and $z_{0.975}$.

3. Normal. Compute the following probabilities and quantiles for X distributed as N(10, 2).

(a) P(X < 12).
(b) P(X > 8).
(c) P(9 < X < 10.5).
(d) The quantiles $x_{0.025}$, $x_{0.5}$, and $x_{0.975}$.

4. T-distribution. Compute the following for the T₆ distribution.

(a) P(T₆ < 1).
(b) P(T₆ > 2).
(c) P(-1 < T₆ < 1).
(d) P(-2 < T₆ < 2).
(e) The quantiles $t_{0.025}$, $t_{0.5}$, and $t_{0.975}$.

5. F-distribution. Compute the following probabilities and quantiles for the F₈,₅ distribution.

(a) P(F₈,₅ < 3).
(b) P(F₈,₅ > 4).
(c) P(1 < F₈,₅ < 6).
(d) The quantiles $f_{0.025}$, $f_{0.5}$, and $f_{0.975}$.

6. Chi-squared distribution. Compute the following for the chi-squared distribution with 10 degrees of freedom.

(a) P(χ²₁₀ < 3).
(b) P(χ²₁₀ > 4).
(c) P(1 < χ²₁₀ < 6).
(d) The quantiles $g_{0.025}$, $g_{0.5}$, and $g_{0.975}$.

7. MicroRNA. Suppose that for a certain microRNA of length 20 the number of purines is binomially distributed with probability 0.7.

(a) What is the probability of 14 purines?
(b) What is the probability of less than or equal to 14 purines?
(c) What is the probability of strictly more than 10 purines?
(d) What is the probability that the number of purines is between 10 and 15?
(e) How many purines do you expect? In other words: What is the mean of the distribution?
(f) What is the standard deviation of the distribution?

8. Zyxin. The expression values of the ALL patients on the Zyxin gene are distributed according to N(1.6, 0.4²).

(a) Compute the probability that the expression values are smaller than 1.2.
(b) What is the probability that the expression values are between 1.2 and 2.0?
(c) What is the probability that the expression values are between 0.8 and 2.4?
(d) Compute the exact values of the quantiles $x_{0.025}$ and $x_{0.975}$.
(e) Use rnorm to draw a sample of size 1000 from the population and compare the sample mean and standard deviation with those of the population.

9. Some computations on the Golub et al. (1999) data.

(a) Take µ = 0 and compute the t-values for the ALL gene expression values. Find the three genes with the largest absolute t-values.
(b) Compute per gene the ratio of the variances for the ALL and the AML patients. How many are between 0.5 and 1.5?

10. Extreme value investigation. This (difficult!) question aims to teach the essence of an extreme value distribution. An interesting extreme value distribution is given by Pevsner (2003, p. 103). Take the maximum of a sample of size 1000 from the standard normal distribution and repeat this 1000 times, so that you have sampled 1000 maxima. Next, subtract an from these maxima and divide by bn, where

an <- sqrt(2*log(n)) - 0.5*(log(log(n))+log(4*pi))*(2*log(n))^(-1/2)
bn <- (2*log(n))^(-1/2)

Now plot the density of the normalized maxima, add the extreme value density function f(x) from Pevsner's book, and add the normal density (dnorm). What do you observe?

Chapter 4

## Estimation and Inference

Questions that we deal with in this chapter are related to statistically testing biological hypotheses. Does the mean gene expression over ALL patients differ from that over AML patients? That is, does the mean gene expression level differ between experimental conditions? Is the mean gene expression different from zero? To what extent are gene expression values normally distributed? Are there outliers among a sample of gene expression values?
Chapter 4

Estimation and Inference

Questions that we deal with in this chapter are related to statistically testing biological hypotheses. Does the mean gene expression over ALL patients differ from that over AML patients? That is, does the mean gene expression level differ between experimental conditions? Is the mean gene expression different from zero? To what extent are gene expression values normally distributed? Are there outliers among a sample of gene expression values? How can an experimental effect be defined? How can genes be selected with respect to an experimental effect? Other important questions are: How can it be tested whether the frequencies of the nucleotide sequences of two genes are different? How can it be tested whether outliers are present in the data? What is the probability of a certain microRNA to have more than a certain number of purines?

In the foregoing chapters many population parameters were used to define families of theoretical distributions. In any empirical setting the specific values of such parameters are unknown, so that these must be estimated. Once estimates are available, it becomes possible to statistically test biologically important hypotheses. The current chapter gives several basic examples of statistical testing and some of its background. Robust types of testing are briefly introduced, as well as an outlier test.

4.1 Statistical hypothesis testing

Let µ0 be a number representing the population mean hypothesized by a researcher on the basis of experience and knowledge from the field. With respect to the population mean, the null hypothesis can be formulated as H0 : µ = µ0 and the alternative hypothesis as H1 : µ ≠ µ0. These are two statements of which the latter is the opposite of the first: either H0 or H1 is true. The alternative hypothesis H1 : µ ≠ µ0 is true if either µ < µ0 or µ > µ0 holds, which is the reason this type of alternative hypothesis is called two-sided. In case H1 : µ > µ0 (or H1 : µ < µ0), it is called one-sided. Such a null hypothesis is statistically tested against the alternative using a suitable distribution of a statistic (e.g. the standardized mean). After conducting the experiment, the value of the statistic can be computed from the data. By comparing the value of the statistic with its distribution, the researcher draws a conclusion with respect to the null hypothesis: H0 is rejected or it is not. The probability to reject H0, given the truth of H0, is called the significance level, which is generally denoted by α. We shall follow the habit in statistics to use α = 0.05, but it will be completely clear how to adapt the procedure in case other significance levels are desired.

4.1.1 The Z-test

The Z-test applies to the situation where we want to test H0 : µ = µ0 against H1 : µ ≠ µ0 and the standard deviation σ is known. Assuming that the gene expression values (x1, · · · , xn) are from a normal distribution, we compute the standardized value z = √n(x̄ − µ0)/σ. Next we define the so-called p-value as the standard normal probability of Z attaining values more extreme than |z|, that is, occurring to the left of −|z| or to the right of |z|.¹ Accordingly, the p-value equals

P(Z ≤ −|z|) + P(Z ≥ |z|) = 2 · P(Z ≤ −|z|).

The conclusion from the test is now as follows: if the p-value is larger than the significance level α, then H0 is not rejected, and if it is smaller than the significance level, then H0 is rejected.

¹ Recall from a calculus course that |−2| = 2 and |2| = 2.
Example 1. To illustrate the Z-test we shall concentrate on the Gdf5 gene from the Golub et al. (1999) data.² The corresponding expression values are contained in row 2058. A quick search through the NCBI site makes it likely that this gene is not directly related to leukemia. Hence, we may hypothesize that the population mean of the ALL expression values does not differ from zero. Accordingly, we test H0 : µ = 0 against H1 : µ ≠ 0. For the sake of illustration we shall pretend that the standard deviation σ is known to be equal to 0.25. To compute the z-value one can use the following.

> data(golub, package = "multtest")
> gol.fac <- factor(golub.cl,levels=0:1, labels= c("ALL","AML"))
> sigma <- 0.25; n <- 27; mu0 <- 0
> x <- golub[2058,gol.fac=="ALL"]
> z.value <- sqrt(n)*(mean(x) - mu0)/sigma
> z.value
0.001116211

The p-value can now be computed as follows.

> 2*pnorm(-abs(z.value),0,1)
0.9991094

Since it is clearly larger than 0.05, we conclude that the null hypothesis of mean equal to zero is not rejected.

Note that the above procedure implies rejection of the null hypothesis when z is highly negative or highly positive. More precisely, if z falls in the region (−∞, z0.025] or [z0.975, ∞), then H0 is rejected. For this reason these intervals are called "rejection regions". If z falls in the interval (z0.025, z0.975), then H0 is not rejected, and consequently this region is called the "acceptance region". The situation is illustrated in Figure 4.1. The interval (z0.025, z0.975) is often named "confidence interval", because if the null hypothesis is true, then we are 95% confident that the observed z-value falls in it. It is custom to rework the confidence interval into an interval with respect to µ (Samuels & Witmer, 2003, p. 186). In particular, the 95% confidence interval for the population mean µ is

\[ \left( \bar{x} + z_{0.025}\,\frac{\sigma}{\sqrt{n}},\ \ \bar{x} + z_{0.975}\,\frac{\sigma}{\sqrt{n}} \right). \tag{4.1} \]

That is, we are 95% certain³ that the true mean falls in the confidence interval. Such an interval is standard output of software implementations of statistical tests.

[Figure 4.1: Acceptance and rejection regions of the Z-test.]

Example 2. Using the data from Example 1, the 95% confidence interval given by Equation (4.1) can be computed as follows.⁴

> mean(x)+qnorm(c(0.025),0,1)*sigma/sqrt(n)
-0.0942451
> mean(x)+qnorm(c(0.975),0,1)*sigma/sqrt(n)
0.09435251

Hence, the rounded estimated 95% confidence interval is (−0.094, 0.094). Since µ0 = 0 falls within this interval, H0 is not rejected. It is instructive and convenient to run the Z-test from the TeachingDemos package, as follows.

> library(TeachingDemos)
> z.test(x,mu=0,sd=0.25)

        One Sample z-test

data:  x
z = 0.0011, n = 27.000, Std. Dev. = 0.250, Std. Dev. of the sample mean = 0.048, p-value = 0.9991
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval:
 -0.09424511  0.09435251
sample estimates:
mean of x
5.37037e-05

From the z-value, the p-value, and the confidence interval, the conclusion is not to reject the null hypothesis of mean equal to zero. This illustrates that testing by either of these procedures yields equivalent conclusions.

² We will work with golub throughout this chapter, so it is essential to load these data and to define the factor gol.fac.
³ That is, if we would repeat the procedure sufficiently often.
⁴ These computations only work together with those of Example 1, especially the definition of x.
Example 3. To develop intuition with respect to confidence intervals, load the package TeachingDemos and give the following command.

> ci.examp(mean.sim = 0, sd = 1, n = 25, reps = 100,
+          method = "z", lower.conf = 0.025, upper.conf = 0.975)

Then 100 samples of size 25 from the N(0, 1) distribution are drawn and for each of these the confidence interval for the population mean is computed and represented as a line segment. Apart from sampling fluctuations, the confidence level corresponds to the percentage of intervals containing the true mean (colored in black) and the significance level corresponds to the percentage of intervals not containing it (colored in red or blue).

4.1.2 One-sample t-test

In almost all research situations with respect to gene expression values, the population standard deviation σ is unknown, so that the above test is not applicable. In such cases t-tests are very useful for testing H0 : µ = µ0 against H1 : µ ≠ µ0. The test is based on the t-value defined by t = √n(x̄ − µ0)/s. The corresponding p-value is defined by 2 · P(Tn−1 ≤ −|t|). Similar to the above, H0 is not rejected if the p-value is larger than the significance level, and H0 is rejected if the p-value is smaller than the significance level. Equivalently, if t falls in the acceptance region (t0.025,n−1, t0.975,n−1), then H0 is not rejected and otherwise it is. For n = 6 the acceptance and rejection regions are illustrated in Figure 4.2. Observe that the acceptance interval is somewhat larger than that of the Z-test (compare with Figure 4.1), because the population standard deviation is not assumed to be known. The 95% confidence interval for the population mean is given by

\[ \left( \bar{x} + t_{0.025}\,\frac{s}{\sqrt{n}},\ \ \bar{x} + t_{0.975}\,\frac{s}{\sqrt{n}} \right), \]

where the expression s/√n gives the so-called "standard error of the mean".

Example 1. Let's test H0 : µ = 0 against H1 : µ ≠ 0 for the ALL population mean of the Gdf5 gene expressions. The latter are collected in row 2058 of the golub data. The t-value is computed as follows.

> x <- golub[2058,gol.fac=="ALL"]; mu0 <- 0; n <- 27
> t.value <- sqrt(n)*(mean(x) - mu0)/sd(x)
> t.value
0.001076867

The corresponding p-value can be computed by

2 · P(T26 ≤ −0.0010) = 2*pt(-0.0010,26) = 0.9991 > α,

so that the conclusion is not to reject the null hypothesis of mean equal to zero.

To see whether the observed t-value belongs to the 95% acceptance region, we compute

(t0.025,26, t0.975,26) = (qt(0.025, n−1), qt(0.975, n−1)) = (−2.055, 2.055).

Since this interval does contain the t-value, the conclusion is not to reject H0 : µ = 0. The left boundary of the 95% confidence interval for the population mean can be computed as follows.

> mean(x)+qt(0.025,26)*sd(x)/sqrt(n)
-0.1024562

[Figure 4.2: Acceptance and rejection regions of the T5-test.]

The 95% confidence interval equals (−0.1025, 0.1026). Since µ0 = 0 is inside this interval, the conclusion is again not to reject H0. In daily practice it is much more convenient to use the built-in function t.test. We illustrate it with the current testing problem.
> t.test(x, mu=0)

        One Sample t-test

data:  x
t = 0.0011, df = 26, p-value = 0.9991
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval:
 -0.1024562  0.1025636
sample estimates:
mean of x
5.37037e-05

This yields by one command line the observed t-value, the p-value, and the 95% confidence interval for µ, as these were computed before by more elementary means.

In the previous example the test is two-sided, because H1 holds true if µ < µ0 or µ > µ0. If, however, the researcher desires to test H0 : µ = µ0 against H1 : µ > µ0, then the alternative hypothesis is one-sided, which makes the procedure slightly different: H0 is accepted if P(Tn−1 ≥ t) > α and it is rejected if P(Tn−1 ≥ t) < α. We shall illustrate this by a variant of the previous example.

Example 2. In Chapter 2 a box-and-whiskers plot revealed that the ALL gene expression values of CCND3 Cyclin D3 are positive. Hence, we test H0 : µ = 0 against H1 : µ > 0 by the built-in function t.test. Recall that the corresponding gene expression values are collected in row 1042 of the golub data matrix (load it if necessary).

> t.test(golub[1042,gol.fac=="ALL"], mu=0, alternative = c("greater"))

        One Sample t-test

data:  golub[1042, gol.fac == "ALL"]
t = 20.0599, df = 26, p-value < 2.2e-16
alternative hypothesis: true mean is greater than 0
95 percent confidence interval:
 1.732853      Inf
sample estimates:
mean of x
1.893883

The large t-value indicates that, relative to its standard error, the mean differs largely from zero. Accordingly, the p-value is very close to zero, so that the conclusion is to reject H0.

4.1.3 Two-sample t-test with unequal variances

Suppose that gene expression data from two groups of patients (experimental conditions) are available and that the hypothesis is about the difference between the population means µ1 and µ2. In particular, H0 : µ1 = µ2 is to be tested against H1 : µ1 ≠ µ2. These hypotheses can also be formulated as H0 : µ1 − µ2 = 0 and H1 : µ1 − µ2 ≠ 0. Suppose that the gene expression data from the first group are given by x1, · · · , xn and those of the second by y1, · · · , ym. Let x̄ be the mean of the first group and ȳ that of the second, and s1² the variance of the first group and s2² that of the second. Then the t-statistic can be formulated as

\[ t = \frac{(\bar{x} - \bar{y}) - (\mu_1 - \mu_2)}{\sqrt{s_1^2/n + s_2^2/m}}. \tag{4.2} \]

The decision procedure with respect to the null hypothesis is now completely similar to that of the above t-test. Note that the t-value is large if the difference between x̄ and ȳ is large and the standard deviations s1 and s2 are small. This test is known as the Welch two-sample t-test (Lehmann, 1999).

Example 1. Golub et al. (1999) argue that the gene CCND3 Cyclin D3 plays an important role with respect to discriminating ALL from AML patients. The boxplot in Figure 2.4 suggests that the ALL population mean differs from that of AML. The null hypothesis of equal means can be tested by using the appropriate factor and the specification var.equal=FALSE.

> t.test(golub[1042,] ~ gol.fac, var.equal=FALSE)

        Welch Two Sample t-test

data:  golub[1042, ] by gol.fac
t = 6.3186, df = 16.118, p-value = 9.87e-06
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 0.8363826 1.6802008
sample estimates:
mean in group ALL mean in group AML
        1.8938826         0.6355909

The t-value is quite large, indicating that the two means x̄ and ȳ differ largely relative to the corresponding standard error (the denominator in Equation (4.2)). Since the p-value is extremely small, the conclusion is to reject the null hypothesis of equal means. The data provide strong evidence that the population means do differ.
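To see how the t.test output relates to Equation (4.2), the following minimal sketch recomputes the Welch t-value by hand. The degrees of freedom formula used here (the Welch-Satterthwaite approximation) is stated as an assumption, since it is not given in the text.

> x <- golub[1042, gol.fac=="ALL"]; y <- golub[1042, gol.fac=="AML"]
> n <- length(x); m <- length(y)
> se2 <- var(x)/n + var(y)/m                 # squared standard error of the difference
> t.value <- (mean(x) - mean(y))/sqrt(se2)   # Equation (4.2) with mu1 - mu2 = 0
> df <- se2^2/((var(x)/n)^2/(n-1) + (var(y)/m)^2/(m-1))  # Welch-Satterthwaite (assumption)
> t.value; df                                # agree with the t.test output above
> 2*pt(-abs(t.value), df)                    # two-sided p-value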
When the first group is an experimental group and the second a control group, then µ1 − µ2 is the experimental effect in the population and x̄ − ȳ is that in the sample. The t-value is the experimental effect in the sample relative to the standard error. The size of the effect is reflected by the p-value in the sense that, other things being equal, the p-value is smaller the larger the effect. If the two population variances are equal, then the testing procedure simplifies considerably. This is the subject of the next paragraph.

4.1.4 Two-sample t-test with equal variances

Suppose exactly the same setting as in the previous paragraph, but now the variances σ1² and σ2² of the two groups are known to be equal. To test H0 : µ1 = µ2 against H1 : µ1 ≠ µ2, there is a t-test which is based on the so-called pooled sample variance sp². The latter is defined by the following weighted sum of the sample variances s1² and s2², namely

\[ s_p^2 = \frac{(n-1)s_1^2 + (m-1)s_2^2}{n+m-2}. \]

Then the t-value can be formulated as

\[ t = \frac{\bar{x} - \bar{y} - (\mu_1 - \mu_2)}{s_p\sqrt{\frac{1}{n} + \frac{1}{m}}}. \]

Example 1. The null hypothesis for the gene CCND3 Cyclin D3 that the mean of the ALL patients equals that of the AML patients can be tested by the two-sample t-test using the specification var.equal=TRUE.

> t.test(golub[1042,] ~ gol.fac, var.equal = TRUE)

        Two Sample t-test

data:  golub[1042, ] by gol.fac
t = 6.7983, df = 36, p-value = 6.046e-08
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 0.8829143 1.6336690
sample estimates:
mean in group ALL mean in group AML
        1.8938826         0.6355909

From the p-value 6.046 · 10⁻⁸, the conclusion is to reject the null hypothesis of equal population means. Note that this p-value is slightly smaller than that of the previous test.

In case of any uncertainty about the validity of the assumption of equal population variances, one may want to test this.

4.1.5 F-test on equal variances

The assumption of the above t-test is that the two population variances are equal. Such an assumption can serve as a null hypothesis. That is, we desire to test H0 : σ1² = σ2² against H1 : σ1² ≠ σ2². This can be accomplished by the so-called F-test, as follows. From the sample variances s1² and s2², the f-value f = s1²/s2² can be computed, which follows the F distribution with n1 − 1 and n2 − 1 degrees of freedom. If P(Fn1−1,n2−1 < f) ≥ α/2 for f < 1, or P(Fn1−1,n2−1 > f) ≥ α/2 for f > 1, then H0 is not rejected, and otherwise it is rejected.

Example 1. The null hypothesis for the gene CCND3 Cyclin D3 that the variance of the ALL patients equals that of the AML patients can be tested by the built-in function var.test, as follows.

> var.test(golub[1042,] ~ gol.fac)

        F test to compare two variances

data:  golub[1042, ] by gol.fac
F = 0.7116, num df = 26, denom df = 10, p-value = 0.4652
alternative hypothesis: true ratio of variances is not equal to 1
95 percent confidence interval:
 0.2127735 1.8428387
sample estimates:
ratio of variances
         0.7116441

From the p-value 0.4652, the null hypothesis of equal variances is not rejected.
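The pooled t-value and the f-value can also be verified by hand from the formulas above; a minimal sketch, assuming x and y still contain the ALL and AML expression values of row 1042:

> x <- golub[1042, gol.fac=="ALL"]; y <- golub[1042, gol.fac=="AML"]
> n <- length(x); m <- length(y)
> sp2 <- ((n-1)*var(x) + (m-1)*var(y))/(n+m-2)   # pooled sample variance
> t.value <- (mean(x) - mean(y))/(sqrt(sp2)*sqrt(1/n + 1/m))
> t.value                                        # agrees with t = 6.7983 above
> 2*pt(-abs(t.value), n+m-2)                     # two-sided p-value with df = 36
> var(x)/var(y)                                  # the f-value of var.test: 0.7116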
4.1.6 Binomial test

Suppose that for a certain microRNA a researcher wants to test the hypothesis that the probability of a purine equals a certain value p0. However, another researcher has reason to believe that this probability is larger. In such a setting we want to test the null hypothesis H0 : p = p0 against the one-sided alternative hypothesis H1 : p > p0. Investigation of the microRNA results in k purines out of a total of n. Assuming that the binomial distribution holds, the null hypothesis can be tested by computing the p-value P(X ≥ k). If it is larger than the significance level α = 0.05, then H0 is not rejected, and otherwise it is.

Example 1. A microRNA of length 22 contains 18 purines. The null hypothesis H0 : p = 0.7 is to be tested against the one-sided H1 : p > 0.7. From

P(X ≥ 18) = 1 − pbinom(17, 22, 0.7) = 0.1645 ≥ 0.05 = α,

the conclusion follows not to reject the null hypothesis. This test can also be conducted by the function binom.test, as follows.

> binom.test(18, 22, p = 0.7, alternative = c("greater"),
+            conf.level = 0.95)

        Exact binomial test

data:  18 and 22
number of successes = 18, number of trials = 22, p-value = 0.1645
alternative hypothesis: true probability of success is greater than 0.7
95 percent confidence interval:
 0.6309089 1.0000000
sample estimates:
probability of success
             0.8181818

The p-value 0.1645 is larger than the significance level 0.05, so that the null hypothesis is not rejected.

4.1.7 Chi-squared test

It often happens that we want to test a hypothesis with respect to more than one probability. That is, we test H0 : (π1, · · · , πm) = (p1, · · · , pm) against H1 : (π1, · · · , πm) ≠ (p1, · · · , pm), where p1 to pm are given numbers corresponding to the hypothesis of a researcher. By multiplying the probabilities by the total number of observations we obtain the expected numbers of observations (ei = n · pi). Now we can compute the statistic

\[ q = \sum_{i=1}^{m} \frac{(o_i - e_i)^2}{e_i}, \]

where oi is the i-th observed and ei the i-th expected frequency. This statistic is chi-squared distributed with m − 1 degrees of freedom. The p-value of the chi-squared test is defined as P(χ²m−1 ≥ q). If it is larger than the significance level, then the null hypothesis is not rejected, and otherwise it is.

[Figure 4.3: Rejection region of the χ²₃-test.]

Example 1. Suppose we want to test the hypothesis that the nucleotides of Zyxin occur with equal probability. Let the probabilities of {A, C, G, T} to occur in the Zyxin sequence be given by (π1, π2, π3, π4). Then the null hypothesis to be tested is (π1, π2, π3, π4) = (1/4, 1/4, 1/4, 1/4). In particular, for the sequence "X94991.1" from Table 1.1 the total number of nucleotides is n = 2166, so that the expected frequencies ei are all equal to 2166/4 = 541.5. The q-value then equals

\[ q = \sum_{i=1}^{4} \frac{(o_i - e_i)^2}{e_i} = \frac{(410-541.5)^2}{541.5} + \frac{(789-541.5)^2}{541.5} + \frac{(573-541.5)^2}{541.5} + \frac{(394-541.5)^2}{541.5} = 187.0674. \]

Since P(χ²₃ ≥ 187.0674) ≈ 0 < α, the null hypothesis is clearly rejected. The nucleotides of Zyxin do not occur with equal probability. A more direct manner to perform the test is by using the built-in function chisq.test, as follows.

> library(ape)
> zyxinfreq <- table(read.GenBank(c("X94991.1"),as.character=TRUE))
> chisq.test(zyxinfreq)

        Chi-squared test for given probabilities

data:  zyxinfreq
X-squared = 187.0674, df = 3, p-value < 2.2e-16

The package ape is loaded, the Zyxin sequence "X94991.1" is downloaded, and the frequency table is constructed.
The observed frequencies are used as input to chisq.test. The q-value equals X-squared and the degrees of freedom df = 3. From the corresponding p-value, the conclusion is to reject the null hypothesis of equal probabilities. The testing situation is illustrated in Figure 4.3, where the red colored surface corresponds to the rejection region (7.814728, ∞). Remember from the previous chapter that the left bound of this rejection interval can be found by qchisq(0.95, 3). The observed q = 187.0674 obviously falls far into the right-hand side of the rejection region, so that the corresponding p-value is very close to zero.

Example 2. In a large number of experiments Mendel observed in the year 1866 various frequencies of characteristics of different kinds of seed and their offspring. For the seed shape of ornamental sweet peas he obtained the frequencies 5474 and 1850. A crossing of B and b yields offspring BB, Bb, and bb with probabilities 0.25, 0.50, 0.25. Since Mendel could not distinguish Bb from BB, his observations occur with probability 0.75 (BB and Bb) and 0.25 (bb). To test the null hypothesis H0 : (π1, π2) = (0.75, 0.25) against H1 : (π1, π2) ≠ (0.75, 0.25), we use the chi-squared test⁵, as follows.

> pi <- c(0.75,0.25)
> x <- c(5474, 1850)
> chisq.test(x, p=pi)

        Chi-squared test for given probabilities

data:  x
X-squared = 0.2629, df = 1, p-value = 0.6081

From the p-value 0.6081, which is larger than the significance level, the conclusion is not to reject the null hypothesis.

⁵ For the sake of clarity the code is somewhat inelegant in using the symbol pi, the constant representing the ratio of a circle's circumference to its diameter.

To further illustrate the great flexibility of the chi-squared test another example is given.

Example 3. With respect to gene expression values of e.g. the Golub et al. (1999) data we may define a certain cutoff value and classify smaller values as "ALL" and larger values as "AML". In such a manner cutoff values can serve as a diagnostic instrument for different types of diseases. The classification yields true positives (correctly predicted disease), false positives (incorrectly predicted disease), true negatives (correctly predicted healthy) and false negatives (incorrectly predicted healthy). For the sake of illustration suppose that among twenty patients there are 5 true positives, 5 false positives, 5 true negatives, and 5 false negatives. These frequencies can be put in a two-by-two table giving the frequencies of two random variables: the true state of the persons and the predicted state of the persons (by the cutoff value). When these random variables are independent, the cutoff value does not make any contribution to the prediction of the true state. The null hypothesis of independence can be tested by a chi-squared test, as follows.

> dat <- matrix(c(5,5,5,5),2,byrow=TRUE)
> chisq.test(dat)

        Pearson's Chi-squared test with Yates' continuity correction

data:  dat
X-squared = 0.2, df = 1, p-value = 0.6547

Since the p-value is larger than the significance level, the null hypothesis of independence is accepted.
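Such a two-by-two table can also be constructed directly from the data. The following minimal sketch classifies the CCND3 Cyclin D3 expression values of row 1042 by a cutoff; since this gene is expressed higher in the ALL patients, large values are predicted to be "ALL" here. The cutoff value 1.27 is an arbitrary assumption for illustration, not a value from the text.

> cutoff <- 1.27                                        # hypothetical cutoff value
> pred <- ifelse(golub[1042,] > cutoff, "predALL", "predAML")
> dat <- table(pred, gol.fac)    # two-by-two table of predicted versus true state
> dat
> chisq.test(dat)                # tests independence of prediction and truth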
Suppose that for another cutoff value we obtain 8 true positives, 2 false positives, 8 true negatives, and 2 false negatives. Then testing independence yields the following.

> dat <- matrix(c(8,2,2,8),2,byrow=TRUE)
> chisq.test(dat)

        Pearson's Chi-squared test with Yates' continuity correction

data:  dat
X-squared = 5, df = 1, p-value = 0.02535

Since the p-value is smaller than the significance level, the null hypothesis of independence is rejected.

Example 4. A frequently used and related test is the Fisher exact test, which is based on the so-called odds ratio f11 f22/(f12 f21). Suppose the frequencies of significant and non-significant oncogenes for Chromosome 1 and for the genome are as follows.

              significant genes   non-significant genes
Chromosome 1  100                 2000
genome        300                 6000

Then the odds ratio equals 100 · 6000/(2000 · 300) = 1 and the number of significant oncogenes on Chromosome 1 is exactly proportional to that in the genome. The null hypothesis of the Fisher test is that the odds ratio equals 1 and the alternative hypothesis is that it differs from 1. Suppose now that the frequencies of significant oncogenes are f11 = 300 with f12 = 500 non-significant for Chromosome 1, and f21 = 3000 with f22 = 6000 non-significant for the genome. The hypothesis that the odds ratio equals one can be tested as follows.

> dat <- matrix(c(300,500,3000,6000),2,byrow=TRUE)
> fisher.test(dat)

        Fisher's Exact Test for Count Data

data:  dat
p-value = 0.01912
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
 1.029519 1.396922
sample estimates:
odds ratio
  1.199960

Since the p-value is smaller than the significance level, the null hypothesis of odds ratio equal to one is rejected. There are relatively more significant oncogenes on Chromosome 1 than in the genome. Other examples of the Fisher test are given in Chapter 6.

4.1.8 Normality tests

Various procedures are available to test the hypothesis that a data set is normally distributed. The Shapiro-Wilk test is based on the degree of linearity in a Q-Q plot (Lehmann, 1999, p.347) and the Anderson-Darling test is based on the empirical distribution function of the data (Stephens, 1986, p.372).

Example 1. To test the hypothesis that the ALL gene expression values of CCND3 Cyclin D3 from Golub et al. (1999) are normally distributed, the Shapiro-Wilk test can be used as follows.

> shapiro.test(golub[1042, gol.fac=="ALL"])

        Shapiro-Wilk normality test

data:  golub[1042, gol.fac == "ALL"]
W = 0.947, p-value = 0.1774

Since the p-value is greater than 0.05, the conclusion is not to reject the null hypothesis that the CCND3 Cyclin D3 expression values follow a normal distribution. The Anderson-Darling test is part of the nortest package, which probably needs to be installed and loaded first. Running the test on our CCND3 Cyclin D3 gene expression values comes down to the following.

> library(nortest)
> ad.test(golub[1042,gol.fac=="ALL"])

        Anderson-Darling normality test

data:  scale(golub[1042, gol.fac == "ALL"])
A = 0.5215, p-value = 0.1683

Hence, the same conclusion is drawn as from the Shapiro-Wilk test. Note that the p-values from both tests are somewhat small, reflecting the deviations in the left tail of the distribution observed in the Q-Q plot in Section 2.1.5. From the normality tests the conclusion is that these differences in the left tail are not large enough to reject the null hypothesis that the CCND3 Cyclin D3 expression values are normally distributed.
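The connection between the Shapiro-Wilk test and the degree of linearity in a Q-Q plot can be visualized directly; a brief sketch:

> x <- golub[1042, gol.fac=="ALL"]
> qqnorm(x)                 # sample quantiles against theoretical normal quantiles
> qqline(x)                 # points close to the line indicate approximate normality
> shapiro.test(x)$p.value   # the same information summarized as a p-value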
4.1.9 Outliers test

When gene expression values are not normally distributed, outliers may appear with large probability. The appearance of outliers in gene expression data may influence the value of a statistic to a large extent. For this reason it is useful to be able to test whether a certain set of gene expression values is contaminated by an outlier or not. Accordingly, the null hypothesis to be tested is that a set of gene expression values does not contain an outlier, and the alternative is that it is contaminated with at least one outlier. Under the assumption that the data are realizations of one and the same distribution, such a hypothesis can be tested by the Grubbs (1950) test. This test is based on the statistic g = |suspect value − x̄|/s, where the suspect value is included in the computation of the mean x̄ and the standard deviation s.

Example 1. From Figure 2.4 we have observed that the expression values of the gene CCND3 Cyclin D3 may contain outliers with respect to the left tail. This can actually be tested by the function grubbs.test of the outliers package, as follows.

> library(outliers)
> grubbs.test(golub[1042, gol.fac=="ALL"])

        Grubbs test for one outlier

data:  golub[1042, gol.fac == "ALL"]
G = 2.9264, U = 0.6580, p-value = 0.0183
alternative hypothesis: lowest value 0.45827 is an outlier

Since the p-value is smaller than 0.05, the conclusion is to reject the null hypothesis of no outliers.
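The value G reported by grubbs.test can be reproduced from the definition of the statistic; a minimal sketch, with the smallest observation taken as the suspect value:

> x <- golub[1042, gol.fac=="ALL"]
> g <- abs(min(x) - mean(x))/sd(x)   # g = |suspect value - mean|/s
> g                                  # equals G = 2.9264 from the output above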
In case the data are normally distributed, the probability of outliers is small. Hence, extreme outliers indicate that the data are non-normally distributed with large probability. Outliers may lead to such an increase of the standard error that a true experimental effect remains uncovered (false negatives). In such cases a robust test based on ranks may be preferred as a useful alternative.

4.1.10 Wilcoxon rank test

In case the data are normally distributed with equal variance, the t-test is an optimal test for testing H0 : µ1 = µ2 against H1 : µ1 ≠ µ2 (Lehmann, 1999). If, however, the data are not normally distributed due to skewness or otherwise heavy tails, then this optimality does not hold anymore and there is no guarantee that the significance level of the test equals the intended level α (Lehmann, 1999). For this reason rank-type tests have been developed, for which no specific distributional assumptions need to be made beforehand. Below we shall concentrate on the two-sample Wilcoxon test because of its relevance to bioinformatics. We suffice with a brief description of the basic idea and refer the interested reader to Lehmann (2006) for the mathematical details.

To broaden our view we switch from hypotheses about means to those about distributions. An alternative hypothesis may then be formulated as: the distribution of the first group lies to the left of that of the second. To set the scene, let the gene expression values of the first group (x1 to xm) have distribution F and those of the second group (y1 to yn) distribution G. The null hypothesis is H0 : F = G and the alternative, for example, that the x's are smaller (or larger) than the y's. For the two-sample Wilcoxon test the data x1, · · · , xm, y1, · · · , yn are ranked and the rank numbers of the x's are summed to form the statistic W, after a certain correction (Lehmann, 2006). The idea is that if the ranks of the x's are smaller than those of the y's, then the sum is small. The distribution of the sum of ranks is known, so that a p-value can be computed, on the basis of which the null hypothesis is rejected if it is smaller than the significance level α.

Example 1. The null hypothesis that the expression values of the gene CCND3 Cyclin D3 are equally distributed for the ALL patients and the AML patients can be tested by the built-in function wilcox.test, as follows.

> wilcox.test(golub[1042,] ~ gol.fac)

        Wilcoxon rank sum test

data:  golub[1042, ] by gol.fac
W = 284, p-value = 6.15e-07
alternative hypothesis: true location shift is not equal to 0

Since the p-value is much smaller than 0.05, the conclusion is to reject the null hypothesis of equal distributions. Note that although confidence intervals can be computed from the Wilcoxon test, these are more difficult to interpret than those from the t-test.

4.2 Application of tests to a whole set of gene expression data

The various tests above were applied to a single vector of gene expressions. In daily practice, however, we want to analyze a complete set of thousands of (row) vectors with gene expression values, which are collected in a matrix. This can conveniently be accomplished by taking advantage of the fact that R stores the output of a test as an object, in such a manner that we can extract, for instance, p-values from it. These can be collected in a vector in order to select genes with large differences between patient groups. This will be illustrated together with testing for normality.

Example 1. Having a data matrix with gene expression values, a question one might ask is: what percentage of genes passes a normality test? This can be computed as follows.

> data(golub,package="multtest")
> gol.fac <- factor(golub.cl,levels=0:1, labels= c("ALL","AML"))
> sh <- apply(golub[,gol.fac=="ALL"], 1, function(x) shapiro.test(x)$p.value)
> sum(sh > 0.05)/nrow(golub) * 100
 58.27598

Hence, according to the Shapiro-Wilk test, for 58.28% of the genes the ALL expression values are normally distributed (in the sense of non-rejection). For the AML expression values this is 60.73%. It can be concluded that about 40% of the genes do not pass the normality test.

Example 2. In case the gene expression data are non-normally distributed, the t-test may indicate conclusions different from those of the Wilcoxon test. Differences between these can be investigated by collecting the p-values from both the t-test and Wilcoxon's test and searching for the largest differences, as follows.

> data(golub, package = "multtest")
> gol.fac <- factor(golub.cl,levels=0:1, labels= c("ALL","AML"))
> pt <- apply(golub, 1, function(x) t.test(x ~ gol.fac)$p.value)
> pw <- apply(golub, 1, function(x) wilcox.test(x ~ gol.fac)$p.value)
> resul <- data.frame(cbind(pw,pt))
> resul[pw<0.05 & abs(pt-pw)>0.2,]
             pw        pt
456  0.04480288 0.2636088
1509 0.03215830 0.4427477

The p-values are extracted from the output of the t.test and wilcox.test functions and stored in the vectors pt and pw. The logical operator & is used to select genes for which the Wilcoxon p-value is smaller than 0.05 and the absolute difference with the p-value from the t-test is larger than 0.2. Since there are only two such genes, we can draw the reassuring conclusion that the two tests give similar results.
4.3 Overview and concluding remarks

Statistical hypothesis testing consists of hypotheses, distributional assumptions, and decisions (conclusions). The hypotheses pertain to the outcome of a biological experiment and are always formulated in terms of population values of parameters. Statistically, the outcomes of experiments are seen as realizations of random variables. The latter are assumed to have a suitable distribution, which is seen as a statistical model for the outcomes of an experiment. Then a statistic is formulated (e.g. a t-value), which is treated both as a function of the random variables and as a function of the data values. By comparing the distribution of the statistic with the value of the statistic, the p-value is computed and compared to the level of significance. A large p-value indicates that the model fits the data well and that the assumptions as well as the null hypothesis are correct with large probability. A low p-value, however, indicates, under the validity of the distributional assumptions, that the outcome of the experiment is so unlikely that this causes a sufficient amount of doubt for the researcher to reject the null hypothesis.

The quality of a test is often expressed in terms of efficiency, which is usually directly related to the (asymptotic) variance of an estimator. The relative efficiency is the ratio of the asymptotic variances. For Wilcoxon's test versus the t-test this equals 0.955, which means that in the optimal situation where the (gene expression) data are normally distributed, Wilcoxon's test is only a little worse than the t-test. In case of a few outliers or a slightly heavier tail, however, the Wilcoxon test can be far more efficient than the t-test (Lehmann, 1999, p.176). Efficiency is directly related to power: the probability to reject a false hypothesis. The probability of drawing correct conclusions can always be improved by increasing the sample size.

These considerations set the scene for making some recommendations, which obviously should not be followed blindly. If gene expression data pass a normality test, then the Welch type of t-test provides a general test with good power properties (Ramsey, 1980; Wang, 1971). In case normality does not hold and the sample size per group is at least four, the Wilcoxon test is recommended.

Because the Wilcoxon p-values are based on ranks, many of these are equal for different genes, so that the test is less suitable for ordering genes in case of small sample sizes. On the other hand, it is obviously questionable whether extremely small differences in p-values produced by the t-test contribute to biologically relevant gene discrimination. That is, extremely small differences should not be over-interpreted.

4.4 Exercises

1. Gene CD33. Use grep to find the index of the important gene CD33 among the list of characters golub.gnames. For each test below formulate the null hypothesis, report the p-value, and state your conclusion.
(a) Test the normality of the ALL and AML expression values.
(b) Test for the equality of variances.
(c) Test for the equality of the means by an appropriate t-test.
(d) Is the experimental effect strong?
2. Gene MYBL2. Gene "MYBL2 V-myb avian myeloblastosis viral oncogene homolog-like 2" has its expression values in row 1788.
(a) Use a boxplot to construct a hypothesis about the experimental effect.
(b) Test for the equality of means by an appropriate t-test.

3. HOXA9. Gene "HOXA9 Homeo box A9", with expression values in row 1391, can cause leukemia (Golub et al., 1999).
(a) Test the normality of the expression values of the ALL patients.
(b) Test for the equality of means by an appropriate t-test.

4. Zyxin.
(a) Find the accession number of the cDNA clone with IMAGE:3504464.
(b) Test whether the frequencies of the nucleotides are equal for each nucleic acid.
(c) Test whether the frequencies of "X94991.1" can be predicted by the probabilities of the cDNA sequence "BC002323.2".

5. Gene selection. Select the genes from the golub data with the smallest two-sample t-test values for which the ALL mean is greater than the AML mean. Report the names of the best ten. Scan the Golub et al. (1999) article for genes among the ten you found and discuss their biological function briefly.

6. Antigens. Antigens play an important role in the development of cancer. Order the antigens according to their p-values from the Welch two-sample t-test with respect to the gene expression values from the ALL and AML patients of the Golub et al. (1999) data.

7. Genetic model. A certain genetic model predicts that four phenotypes occur in the ratio 9:3:3:1. In a certain experiment the offspring is observed with frequencies 930, 330, 290, 90. Do the data confirm the model?

8. Comparing two genes. Consider the gene expression values in rows 790 and 66 of the Golub et al. (1999) data.
(a) Produce a boxplot for the ALL expression values and comment on the differences. Are there outliers?
(b) Compute the mean and the median of the ALL gene expression values for both genes. Do you observe a difference between the genes?
(c) Compute three measures of spread for the ALL expression values for both genes. Do you observe differences between the genes?
(d) Test the normality of the ALL gene expression values for both genes by the Shapiro-Wilk and Anderson-Darling tests.

9. Normality tests for gene expression values of the Golub et al. (1999) data. Perform the Shapiro-Wilk normality test separately for the ALL and AML gene expression values. What percentage passes the normality test for the ALL and for the AML gene expression values? What percentage passes both tests?

10. Two-sample tests on gene expression values of the Golub et al. (1999) data.
(a) Perform the two-sample Welch t-test and report the names of the ten genes with the smallest p-values.
(b) Perform the Wilcoxon rank test and report the names of the ten genes with the smallest p-values.

11. Biological hypotheses. Suppose that the probability to reject a biological hypothesis by the results of a certain experiment is 0.05 and that the experiment is repeated 1000 times.
(a) How many rejections do you expect?
(b) What is the probability of less than 10 rejections?
(c) What is the probability of more than 5 rejections?
(d) What is the probability that the number of rejections is between two and eight?
12. Programming some tests.
(a) Program the two-sample t-test with equal variances and illustrate it with the expression values of row 1024 of the Golub et al. (1999) data.
(b) The value of W in the two-sample Wilcoxon test equals the sum of the ranks of Group 1 minus n(n + 1)/2, where n is the number of gene expression values in Group 1. Program this and illustrate it with the expression values of row 1024 of the Golub et al. (1999) data.
(c) The value of W in the two-sample Wilcoxon test equals the number of pairs with xi > yj, where xi and yj are values from Group 1 and Group 2, respectively. Program this and illustrate it with the expression values of row 1024 of the Golub et al. (1999) data.

Chapter 5

Linear Models

We have seen that the t-test can be used to discover genes with different means in the population with respect to two groups of patients. In case there are three groups of patients, however, the question arises how genes can be selected having the largest differential expression between groups (experimental effect). A technique making this possible is an application of the linear model and is called analysis of variance. It is frequently applied in bioinformatics.

The validity of the technique is based on the assumption that the gene expression values are normally distributed and have equal variances between groups of patients. It is of importance to investigate these assumptions, because they either reassure our confidence in the conclusions from a statistical test or indicate that alternative tests should be used.

In this chapter the linear model will briefly be explained. The main focus is on applying the linear model for testing the hypothesis that more than two group means are equal. Several illustrations of analyzing gene expression data will be given. It will be explained how the assumptions about normality and equal variances can be investigated, and what alternatives can be used in case either of these does not hold. The somewhat technical concepts of "model matrix" and "contrast matrix" are explained, because these are useful for several applications in the next chapter.

5.1 Definition of linear models

Given a gene expression Yi, a basic form of the linear model is

Yi = xi β + εi,   for i = 1, · · · , n,

where Yi is an observable variable, xi a fixed number, β an unknown weight, and εi an unobservable error variable. The fixed number xi follows from a statistical "design", as we shall see. The xi value is part of the predictor, Yi is the criterion, and εi the error of the model. The systematic part of the model, xi β, equals the mean of the gene expression Yi. The model is called "linear" because the degree of the coefficient β is one. For a linear model to be a statistical model there must be some assumption with respect to the distribution of the error variables. Frequently, it is assumed that the error variables ε1, · · · , εn are independent and normally distributed with zero mean, that is, according to N(0, σ²). Then the mean of Yi equals xi β and its variance σ².

Example 1. A common manner to introduce the linear model is by writing

Yi = β1 + xi β2 + εi,   for i = 1, · · · , n,

so that the model part represents a straight line with intercept β1 and slope β2. Given data points y1, · · · , yn and x1, · · · , xn, a best fitting line through the data can easily be computed by least squares estimation of the intercept and slope. A nice way to explore this is by the function put.points.demo() from the TeachingDemos package. It allows points to be added to and deleted from a plot, which interactively computes estimates for the slope and the intercept given the data. By choosing the points more or less on a horizontal line, the slope will be near zero. By choosing the points nearly vertical, the slope will be large. By including a few gross errors in the data it can be observed that the estimates are not robust against outliers.
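The least squares estimation of Example 1 can also be tried out non-interactively; a minimal sketch with simulated data, where the intercept 2 and slope 0.5 are arbitrary illustration values:

> x <- 1:20
> y <- 2 + 0.5*x + rnorm(20, 0, 1)   # a straight line plus normal error
> fit <- lm(y ~ x)
> coef(fit)                          # estimates close to beta1 = 2 and beta2 = 0.5
> plot(x, y); abline(fit)            # the least squares line through the points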
It allows points to\nbe added and deleted to a plot which interactively computes estimates for\nthe slope and the intercept given the data. By choosing the points more or\nless on a horizontal line, the slope will be near zero. By choosing the points\nnearly vertical, the slope will be large. By choosing a few gross errors in the\ndata it can be observed that the estimates are not robust against outliers.\n\nIn order to handle gene expression data for three or more groups of pa-\ntients we need to extend the model. The idea simply is to increase the\nnumber of weights to the number of groups k, so that, we obtain the weights\nβ1 , · · · , βk and the corresponding design values xi1 , · · · , xik . The systematic\npart of the model consists of a sum of these design values weighted by the co-\nefficients β1 , · · · , βk . Such a weighted sum can be written as xi1 β1 +· · ·+xik βk .\n5.1. DEFINITION OF LINEAR MODELS 75\n\n## By adding measurement error to this systematic part we obtain the linear\n\nmodel\nX k\nYi = xij βj + εi .\nj=1\n\nThe design values xij for Patient i in Group j are collected in the so-called\n”design” matrix denoted by X. In particular, the design value xij is chosen\nto be equal to 1 if Patient i belongs to Group j and zero if (s)he does not.\nBy this choice it becomes possible to use linear model estimation for testing\nhypotheses about group means. This will be illustrated by an example.\n\n## Example 2. Suppose we have the following artificial gene expressing values\n\n2,3,1,2, of Group 1, 8,7,9,8 of Group 2, and 11,12,13,12 of Group 3. We may\nassign these to a vector y, as follows.\n> y <- c(2,3,1,2, 8,7,9,8, 11,12,13,12)\nNext, we construct a factor indicating to which group each expression value\nbelongs. In particular, the first four belong to Group 1, the second four to\nGroup 2, and the third four to Group 3. We may now use the function gl to\ndefine the corresponding factor.\n> a <- gl(3,4)\n> a\n 1 1 1 1 2 2 2 2 3 3 3 3\nLevels: 1 2 3\nThe design matrix X is also called “model matrix”. It is illuminating to\nprint it to the screen.\n> model.matrix(y ~ a - 1)\na1 a2 a3\n1 1 0 0\n2 1 0 0\n3 1 0 0\n4 1 0 0\n5 0 1 0\n6 0 1 0\n7 0 1 0\n8 0 1 0\n76 CHAPTER 5. LINEAR MODELS\n\n9 0 0 1\n10 0 0 1\n11 0 0 1\n12 0 0 1\nThe notation y~a-1 represents a model equation, where -1 means to skip\nthe intercept or general constant.1 In this situation, the weights (β1 , β2 , β3 )\nof the model specialize to the population means (µ1 , µ2 , µ3 ). The model for\nthe first gene expression value of Group 1 is Y1 = µ1 + ε1 , for the second\nexpression value of Group 1 it is Y2 = µ1 + ε2 , for the first member of Group\n2 it is Y5 = µ2 + ε5 , and for the first member of Group 3 it is Y9 = µ3 + ε9 .\nRecall that population means are generally estimated by sample means.\nSimilarly, in the current setting, estimation of the linear model comes down\nto estimation of group means for which there are one-sample t-type of tests\navailable (see e.g. Rao & Toutenburg, 1995; Samuels & Witmer, 2003). To\nillustrate this we employ the estimation function lm and ask for a summary.\n> summary(lm(y ~ a - 1))\n\nCoefficients:\nEstimate Std. Error t value Pr(>|t|)\na1 2.0000 0.4082 4.899 0.000849 ***\na2 8.0000 0.4082 19.596 1.09e-08 ***\na3 12.0000 0.4082 29.394 2.98e-10 ***\nThe output in the first column gives the estimated mean per group. 
The second column gives the standard error of each mean, the third the t-value (the estimate divided by the standard error), and the last the corresponding p-values. From the p-values the conclusion follows to reject the null hypotheses H0 : µj = 0 for each group index j running from 1 to 3.

Using the above design matrix, the model for the gene expression values from the different groups can be written as

Yij = µj + εij,   where εij is distributed as N(0, σ²),

and Yij is the expression of Person i in Group j, µj the mean of Group j, and εij the error of Person i in Group j. The error is assumed to be normally distributed with zero mean and a variance that is equal for different persons. Note that the model is assumed separately for every gene, so that for each gene the values may differ.

The above illustrates that the linear model is useful for testing hypotheses about group means. In bioinformatics the linear model is applied to many sets of gene expressions, so that it is of great importance to have an overall test for the equality of means.

5.2 One-way analysis of variance

A frequent problem is that of testing the null hypothesis that three or more population means are equal. By comparing two types of variances, this is made possible by a technique called analysis of variance (ANOVA). To set the scene, let three groups of patients be available, with measurements in the form of gene expression values. The null hypothesis to be tested is H0 : µ1 = µ2 = µ3. In statistical language such groups are called levels of a factor. Let the data of Group 1 be represented by y11, y21, · · · , yn1, those of Group 2 by y12, y22, · · · , yn2, and those of Group 3 by y13, y23, · · · , yn3, where n is the number of expression values in each group. The three sample means per patient group are

\[ \bar{y}_1 = \frac{1}{n}\sum_{i=1}^{n} y_{i1}, \quad \bar{y}_2 = \frac{1}{n}\sum_{i=1}^{n} y_{i2}, \quad \bar{y}_3 = \frac{1}{n}\sum_{i=1}^{n} y_{i3}. \]

The total number of measurements is N = 3n, so that the overall mean ȳ equals

\[ \bar{y} = \frac{1}{N}\left( \sum_{i=1}^{n} y_{i1} + \sum_{i=1}^{n} y_{i2} + \sum_{i=1}^{n} y_{i3} \right). \]

For the definition of the overall test on the equality of means there are two sums of squares of importance. The sum of squares within (SSW) is the sum of the squared deviations of the measurements from their group mean, that is,

\[ SSW = \sum_{j=1}^{g} \sum_{i=1}^{n} (y_{ij} - \bar{y}_j)^2, \]

where g is the number of groups. The sum of squares between (SSB) is the sum of squares of the deviations of the group means from the total mean, that is,

\[ SSB = \sum_{j=1}^{g} \sum_{i=1}^{n} (\bar{y}_j - \bar{y})^2 = n \sum_{j=1}^{g} (\bar{y}_j - \bar{y})^2. \]

Now the f-value is defined by

\[ f = \frac{SSB/(g-1)}{SSW/(N-g)}. \]

If the data are normally distributed, then this f-value follows the Fg−1,N−g distribution, where g − 1 and N − g are the degrees of freedom (Rao, 1973, p.245). If P(Fg−1,N−g > f) ≥ α, then H0 : µ1 = µ2 = µ3 is not rejected, and otherwise it is. The idea behind the test is that, under the null hypothesis of equal group means, the value of SSB will tend to be small, so that the observed f-value will be small and H0 is accepted.

Example 1. Let's continue with the data from the previous example. Recall that the data of Group 1 are 2, 3, 1, 2, those of Group 2 are 8, 7, 9, 8, and those of Group 3 are 11, 12, 13, 12.
The number of expression values per group is n = 4, the total number of data values N = 12, and the number of groups g = 3. To load the data, to construct the corresponding factor, and to compute the group means, one may use the following.

> y <- c(2,3,1,2, 8,7,9,8, 11,12,13,12)
> a <- gl(3,4)
> gm <- as.numeric(tapply(y, a, mean))
> gm
 2 8 12

Thus we find that ȳ1 = 2, ȳ2 = 8, and ȳ3 = 12. These group means are collected in the vector gm. The grand mean ȳ can be computed by mean(y) = 7.333333. An elementary manner to compute the sum of squares between, SSB, is the following.

> g <- 3; n <- 4; N <- 12; ssb <- 0
> for (j in 1:g) {ssb <- ssb + (gm[j] - mean(y))^2}
> SSB <- n*ssb

This results in SSB = 202.6667. In a similar manner the sum of squares within, SSW, and the f-value can be computed, as follows.

> SSW <- 0
> for (j in 1:g) {SSW <- SSW + sum((y[a==j] - gm[j])^2)}
> f <- (SSB/(g-1))/(SSW/(N-g))

This results in SSW = 6 and an observed f-value equal to 152. Hence, the overall p-value is

P(F2,9 > 152) = 1 − P(F2,9 < 152) = 1 − pf(152, 2, 9) = 1.159156 · 10⁻⁷.

Since this is smaller than the significance level 0.05, the conclusion is to reject the null hypothesis of equal means.

The built-in function anova can be used to extract the analysis of variance table from an lm object.

> anova(lm(y ~ a))
Analysis of Variance Table

Response: y
          Df  Sum Sq Mean Sq F value    Pr(>F)
a          2 202.667 101.333     152 1.159e-07 ***
Residuals  9   6.000   0.667

Example 2. By the previous analysis of variance it is concluded that there are differences in population means. It is, however, not clear which of the means differ. A way to clarify this is by estimating the mean of Group 1 (Level 1) and then computing the difference between Group 2 and Group 1, and the difference between Group 3 and Group 1. This corresponds to the following contrast matrix

\[ C = \begin{pmatrix} 1 & 1 & 1 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix}. \]

This contrast matrix is by default implemented by the model specification y ~ a, as follows.

> summary(lm(y ~ a))

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   2.0000     0.4082   4.899 0.000849 ***
a2            6.0000     0.5774  10.392 2.60e-06 ***
a3           10.0000     0.5774  17.321 3.22e-08 ***

Residual standard error: 0.8165 on 9 degrees of freedom
Multiple R-squared: 0.9712, Adjusted R-squared: 0.9649
F-statistic: 152 on 2 and 9 DF,  p-value: 1.159e-07

Here, the estimate of the intercept is the mean of Group 1 (Level 1). The coefficient a2 is the difference in means between Group 2 (Level 2) and Group 1, and a3 is the difference in means between Group 3 and Group 1. By t-tests the null hypotheses are tested that the mean of Group 1 is zero, that the difference in means between Group 2 and Group 1 is zero, and that the difference in means between Group 3 and Group 1 is zero. That is, we have the null hypotheses H0 : µ1 = 0, H0 : µ2 − µ1 = 0, and H0 : µ3 − µ1 = 0. Since the p-values corresponding to the t-values of these null hypotheses are smaller than α = 0.05, each of these is rejected. The last line of the output gives the f-value, the degrees of freedom, and the corresponding overall p-value. The latter equals that of ANOVA.
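The coding behind the specification y ~ a can be inspected in the same way as before; a brief sketch showing one row of the model matrix from each group, which explains why the coefficients estimate µ1, µ2 − µ1, and µ3 − µ1:

> model.matrix(y ~ a)[c(1,5,9),]   # one row from each group
  (Intercept) a2 a3
1           1  0  0
5           1  1  0
9           1  0  1
> # Group 1 mean = intercept; Group 2 mean = intercept + a2; Group 3 mean = intercept + a3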
The concept of a contrast matrix will also play a role in the next chapter. Before we analyze real gene expression data it seems well to give an example where the means do not differ.

Example 3. Let's sample data from the normal distribution with mean 1.9 and standard deviation 0.5 for three groups of patients that do not have any particular differences between them.

> y <- rnorm(12,1.9,0.5)
> round(y,2)
 1.75 1.82 1.35 1.61 2.08 1.27 2.50 2.40 2.13 0.71 2.80 2.00
> a <- gl(3,4)
> anova(lm(y ~ a))$Pr
 0.6154917

Note that the $Pr operator extracts the p-value from the list generated by the anova function. The p-value implies the conclusion not to reject the null hypothesis of equal means, which is in line with the data generating process.

[Figure 5.1: Plot of the 1866_g_at data. Figure 5.2: Plot of the 1242_at values from the ALL data.]

Example 4. B-cell ALL: 1866_g_at. To illustrate analysis of variance with real data we shall use the ALL data from the ALL package; see Section 1.1. Specifically, the expression levels of B-cell ALL patients in stages B1, B2, and B3 are selected for the row name 1866_g_at, which refers to an SKI-like oncogene related to oncoproteins. From the plot of the data in Figure 5.1 it can be observed that the expression levels differ between the disease stages. The hypothesis is tested that the expression means in each stage are equal, or in other words that there are no experimental effects. It is briefly indicated how the data are constructed.

> library(ALL); data(ALL)
> ALLB123 <- ALL[,ALL$BT %in% c("B1","B2","B3")]
> y <- as.numeric(exprs(ALLB123)[row.names(exprs(ALLB123))=="1866_g_at",])
> summary(lm(y ~ ALLB123$BT))

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)    4.58222    0.08506  53.873  < 2e-16 ***
ALLB123$BTB2  -0.43689    0.10513  -4.156 8.52e-05 ***
ALLB123$BTB3  -0.72193    0.11494  -6.281 2.00e-08 ***

Residual standard error: 0.3707 on 75 degrees of freedom
Multiple R-squared: 0.3461, Adjusted R-squared: 0.3287
F-statistic: 19.85 on 2 and 75 DF,  p-value: 1.207e-07

From the overall p-value 1.207 · 10⁻⁷ of the F-test the conclusion follows to reject the hypothesis of equal means. From the t-tests we conclude that the mean of B1 differs from zero, and that the differences between B2 and B1 as well as between B3 and B1 are unequal to zero. That is, the population means of Groups B1, B2, and B3 do differ.

Example 5. B-cell ALL: 1242_at. To illustrate a case where the means do not differ, we selected the expression values of probe 1242_at for the B-cell ALL patients in stages B1, B2, and B3 from the ALL data. This probe corresponds to the Ets2 repressor factor, which plays a role in telomerase regulation in human cancer cells. From the plot of the data in Figure 5.2, however, it can be observed that the expression values hardly differ between the disease stages. The data are extracted from the ALL object and collected in the vector y. The corresponding factor is given by ALLB123$BT.
> library(ALL); data(ALL)
> ALLB123 <- ALL[,ALL$BT %in% c("B1","B2","B3")]
> y <- as.numeric(exprs(ALLB123)[row.names(exprs(ALLB123))=="1242_at",])
> summary(lm(y ~ ALLB123$BT))
ALLB123$BTB3 -0.04675    0.07665  -0.610    0.544

Residual standard error: 0.2473 on 75 degrees of freedom
Multiple R-squared: 0.01925, Adjusted R-squared: -0.006898
F-statistic: 0.7362 on 2 and 75 DF, p-value: 0.4823

From the overall p-value 0.4823, the conclusion is not to reject the null hypothesis of equal means. More specifically, the null hypothesis H0 : µ1 = 0 is rejected, but from the p-value 0.636 the hypothesis H0 : µ2 − µ1 = 0 is not rejected, and from the p-value 0.544 the hypothesis H0 : µ3 − µ1 = 0 is not rejected either.

Example 6. An interesting question is for how many genes of the ALL data the hypothesis of equal means is rejected by the overall ANOVA p-value. This can be answered by collecting the p-values in a vector.

> pano <- apply(exprs(ALLB123), 1, function(x) anova(lm(x ~ ALLB123$BT))$Pr[1])
> sum(pano < 0.05)
 2526

Thus the hypothesis of equal means is rejected for 2526 out of a total of 12625 genes (probes).

## 5.3 Checking assumptions

When the linear model is applied for analysis of variance, there are in fact two assumptions made. First, the errors are assumed to be independent and normally distributed, and, second, the error variances are assumed to be equal for each level (patient group). The latter is generally known as the homoscedasticity assumption. The normality assumption can be tested as a null hypothesis by applying the Shapiro-Wilk test to the residuals. The homoscedasticity assumption can be tested as a hypothesis by the Breusch and Pagan (1979) test on the residuals. This latter test may very well be seen as a generalization of the F-test for equal variances.

Example 1. Testing normality of the residuals. From Figure 5.1 it can be observed that there are outliers far from the bulk of the other expression values. This raises the question whether the normality assumption holds. The normality of the residuals from the estimated linear model on the B-cell ALL data from 1866_g_at can be tested as follows.

> data(ALL, package="ALL"); library(ALL)
> ALLB123 <- ALL[,ALL$BT %in% c("B1","B2","B3")]
> y <- as.numeric(exprs(ALLB123)[row.names(exprs(ALLB123))=="1866_g_at",])
> shapiro.test(residuals(lm(y ~ ALLB123$BT)))

Shapiro-Wilk normality test

data: residuals(lm(y ~ ALLB123$BT))
W = 0.9346, p-value = 0.0005989

From the p-value 0.0005989, the conclusion is to reject the null hypothesis of normally distributed residuals.

Example 2. Testing homoscedasticity of the residuals. From Figure 5.1 it can be observed that the spread of the expression values around their mean differs between groups of patients. In order to test the homoscedasticity assumption, we use the function bptest from the lmtest package.

> library(ALL); data(ALL); library(lmtest)
> ALLB123 <- ALL[,ALL$BT %in% c("B1","B2","B3")]
> y <- as.numeric(exprs(ALLB123)[row.names(exprs(ALLB123))=="1866_g_at",])
> bptest(lm(y ~ ALLB123$BT), studentize = FALSE)

Breusch-Pagan test

data: lm(y ~ ALLB123$BT)
BP = 8.7311, df = 2, p-value = 0.01271

From the p-value 0.01271, the conclusion follows to reject the null hypothesis of equal variances (homoscedasticity).
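Both assumption checks can be run for every probe at once. A small sketch (assuming ALLB123 as constructed above; apply is used as in Example 6 of the previous section):

# Sketch: assumption-test p-values for all probes of ALLB123.
library(lmtest)
pshapiro <- apply(exprs(ALLB123), 1, function(x)
    shapiro.test(residuals(lm(x ~ ALLB123$BT)))$p.value)
pbp <- apply(exprs(ALLB123), 1, function(x)
    bptest(lm(x ~ ALLB123$BT), studentize = FALSE)$p.value)
sum(pshapiro > 0.05 & pbp > 0.05)   # probes for which neither assumption is rejected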
## 5.4 Robust tests

In case departures from normality or homoscedasticity are large enough to cause concern with respect to the actual significance level or the power of the test, an alternative testing procedure is called for. In case only homoscedasticity is violated, we are in a situation quite similar to that of t-testing with unequal variances. That is, the null hypothesis H0 : µ1 = µ2 = µ3 of equal means can be tested without assuming equal variances by a test proposed by Welch (1951).

Example 1. In Example 2 of the previous section the hypothesis of equal variances was rejected. To apply analysis of variance without assuming equal variances (homoscedasticity) one may use the function oneway.test, as follows.

> data(ALL, package="ALL"); library(ALL)
> ALLB123 <- ALL[,ALL$BT %in% c("B1","B2","B3")]
> y <- as.numeric(exprs(ALLB123)[row.names(exprs(ALLB123))=="1866_g_at",])
> oneway.test(y ~ ALLB123$BT)

One-way analysis of means (not assuming equal variances)

data: y and ALLB123$BT
F = 14.1573, num df = 2.000, denom df = 36.998, p-value = 2.717e-05

From the p-value 2.717 · 10^-5, the conclusion follows to reject the hypothesis of equal means.

In case normality is violated, a rank type of test is more appropriate. In particular, to test the null hypothesis of equal distributions of groups of gene expression values, the Kruskal-Wallis rank sum test is recommended. This test can very well be seen as a generalization of the Wilcoxon test for testing the equality of two distributions. Because it is based on ranking the data, it is highly robust against non-normality; it does not, however, estimate the size of experimental effects.

Example 2. In Example 1 of the previous section we rejected the hypothesis of normally distributed residuals. We use the function kruskal.test to perform a non-parametric test.

> data(ALL, package="ALL"); library(ALL)
> ALLB123 <- ALL[,ALL$BT %in% c("B1","B2","B3")]
> y <- as.numeric(exprs(ALLB123)[row.names(exprs(ALLB123))=="1866_g_at",])
> kruskal.test(y ~ ALLB123$BT)

data: y by ALLB123$BT
Kruskal-Wallis chi-squared = 30.6666, df = 2, p-value = 2.192e-07

From the p-value 2.192 · 10^-7, the null hypothesis of equal distributions of expression values from patient groups is rejected.

By the apply functionality the p-values can easily be computed for all 12625 gene expression values of the ALL data.

## 5.5 Overview and concluding remarks

By applying the above normality and homogeneity tests to complete sets of gene expression values it can quickly be seen to what extent the assumptions for the classical analysis of variance test are violated. Based on this it can be decided whether to add rank type of testing in order to reduce the number of false positives and false negatives. Here, false positives are significant p-values for equal population means and false negatives are non-significant p-values for unequal population means.

In the next chapter it will briefly be indicated how to combine two factors into a single analysis of variance. For instance, one may want to combine B-cell stage with age groups of persons. The interested reader is referred to Faraway (2004) and Venables & Ripley (2002) for more information on using linear models in R, and for a general treatment of linear models to Rao & Toutenburg (1995).
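To make the apply approach mentioned at the end of the previous section concrete, a minimal sketch (assuming ALLB123 as constructed in the examples above) that collects the Kruskal-Wallis p-values for all probes and orders the probes by them:

# Sketch: Kruskal-Wallis p-values for all probes of ALLB123.
pkw <- apply(exprs(ALLB123), 1, function(x)
    kruskal.test(x ~ ALLB123$BT)$p.value)
sum(pkw < 0.05)    # number of probes with a significant group effect
head(sort(pkw), 5) # the five smallest p-values, i.e. the strongest effects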
The p-values from overall tests of equality of means or distributions are important tools to order genes according to their experimental effect with respect to different patient groups. More examples are given in the next chapter, where several functionalities of Bioconductor will be used for the analysis of microarray data.

## 5.6 Exercises

1. Analysis of gene expressions of B-cell ALL patients.
(a) Construct a data frame containing the expression values for the B-cell ALL patients in stage B, B1, B2, B3, B4 from the ALL data.
(b) How many patients are in each group?
(c) Test the normality of the residuals from the linear model used for analysis of variance for all gene expression values. Collect the p-values in a vector.
(d) Do the same for the homoscedasticity assumption.
(e) How many gene expressions are normally distributed and how many homoscedastic? For how many do both hold?

2. Further analysis of gene expressions of B-cell ALL patients. Continue with the previous data frame containing the expression values for the B-cell ALL patients in stage B, B1, B2, B3, B4 from the ALL data.
(a) Collect the overall p-values from ANOVA in a vector.
(b) Use featureNames() to report the Affymetrix IDs of the genes with p-values smaller than 0.000001.
(c) Collect the overall p-values from the Kruskal-Wallis test in a vector.
(d) Use featureNames() to report the Affymetrix IDs of the genes with p-values smaller than 0.000001.
(e) Briefly comment on the differences you observe. That is, how many genes have p-values smaller than 0.001 from both ANOVA and Kruskal-Wallis? How many from only one type of test? Hint: Collect TRUE/FALSE values in logical vectors and use table.

3. Finding the ten best genes among gene expressions of B-cell ALL patients. Continue with the previous data frame containing the expression values for the B-cell ALL patients in stage B, B1, B2, B3, B4 from the ALL data.
(a) Print the p-values and the corresponding (Affymetrix) gene identifiers of the ten best from ANOVA.
(b) Do the same for the p-values from the Kruskal-Wallis test.
(c) Use the function intersect to find identifiers in both sets.

4. A simulation study on gene expression values.
(a) Construct a data matrix with 10000 rows and 9 columns with data from the normal distribution with mean zero and variance equal to one. Such a matrix simulates gene expressions without differences between groups (sometimes called negatives).
(b) Construct a factor for three groups, each with three values.
(c) How many p-values are smaller than the significance level α = 0.05?
(d) If the p-value is smaller than the significance level, then the conclusion is that there is an experimental effect (a positive). How many false positives do you expect and how many did you observe?
(e) Construct a matrix with 10000 rows and 9 columns with normally distributed data with means zero, one, and two and variance equal to one. Assume again that there are three groups, each with three data values. This data matrix simulates gene expressions with differences between groups (sometimes called positives). Use ANOVA and Kruskal-Wallis to find the number of significant genes (true positives). Report the number of true positives and the number of false negatives.

# Chapter 6: Micro Array Analysis

The analysis of gene expression values is of key importance in bioinformatics. The technique makes it possible to give an initial answer to many important genetic questions.
In this chapter you learn how to preprocess probe data, to filter genes, to program various visualizations, to use gene ontology identifiers, to load publicly available gene expression data, and to summarize results in HTML output.¹

## 6.1 Probe data

The microarray technique takes advantage of the hybridization properties of nucleic acids. That is, to give a rough idea, complementary molecules are attached and labeled on a solid surface in order for a specialized scanner to measure the intensity of target molecules. Per gene about twenty such measures are obtained, and these come in pairs. The intensity of the perfect match (PM) intends to measure the amount of transcripts from the gene. The intensity of the mismatch (MM) is related to non-specific binding and is often seen as a background type of noise. The raw data from the Affymetrix scanner are stored in so-called DAT files, which are processed to so-called CEL files, with which we will work. The package affy has facilities to read data from a vector specifying several CEL files produced by the Affymetrix scanner.

¹It may be convenient to explore the possibilities of limmaGUI. Our approach, however, will be to concentrate on the programming aspects using the command line.

Example 1. We will start with a built-in data set called MLL.B from the ALLMLL package. To load it and to retrieve basic information use

> library(affy)
> data(MLL.B, package = "ALLMLL")
> MLL.B

It is very useful to print the structure of the object by str(MLL.B) and its slot names.

> slotNames(MLL.B)
"cdfName" "nrow" "ncol" "assayData" "phenoData" "featureData"
"experimentData" "annotation" ".__classVersion__"

Additional information becomes available from str(MLL.B). The raw probe intensities are available from exprs(MLL.B), which extracts the probe intensities from the MLL.B object. The numbers of rows and columns of the expression values of MLL.B can be obtained by the dim function.

> dim(exprs(MLL.B))
 506944 20

The annotation can be extracted as follows.

> annotation(MLL.B)
 "hgu133b"

To print the first 10 names of the probes use

> probeNames(MLL.B)[1:10]
"200000_s_at" "200000_s_at" "200000_s_at" "200000_s_at" "200000_s_at"
"200000_s_at" "200000_s_at" "200000_s_at" "200000_s_at" "200000_s_at"

Note that the probe names are the same as those obtained by geneNames. The PM and MM values are collected by the functions pm and mm. To print the PM values of the first four out of the sixteen rows of the probe with identifier 200000_s_at we may use the following.

> pm(MLL.B,"200000_s_at")[1:4,1:3]
             JD-ALD009-v5-U133B.CEL JD-ALD051-v5-U133B.CEL JD-ALD052-v5-U133B.CEL
200000_s_at1                  661.5                  321.5                  312.5
200000_s_at2                  838.8                  409.3                  395.3
200000_s_at3                  865.3                  275.5                  341.3
200000_s_at4                  425.8                  253.5                  196.8

By the function matplot a quick view of the variability of the data within and between probes can be obtained.

> matplot(pm(MLL.B,"200000_s_at"), type="l", xlab="Probe No.",
+ ylab="PM Probe intensity")

From the resulting plot in Figure 6.1 it can be observed that the variability is substantial. Density plots of the log of the probe values can be obtained by hist(MLL.B). From the density plot of the log of the intensity data in Figure 6.2 it can be seen that these are quite skewed to the right.
Figure 6.1: matplot of the PM intensity values for a probe of MLL.B.
Figure 6.2: Density of the MLL.B data (log intensity).

The script to program such plots is quite brief.

> MAplot(MLL.B, pairs=TRUE, plot.method="smoothScatter")
> image(MLL.B)

## 6.2 Preprocessing methods

From various visualization methods it is clear that preprocessing of probe intensities is necessary for drawing biologically relevant conclusions. Bioconductor gives facilities for various preprocessing methods. Here we will only sketch what the main methods are and how they can be implemented. It should be noted that optimal preprocessing is currently a field of intense research (probably for the coming years), so that definitive recommendations cannot be given. Preprocessing consists of three major steps: background correction, normalization, and summarization. To obtain the available background and PM correction methods use the following.

> bgcorrect.methods
 "mas" "none" "rma" "rma2"

The mas background correction is part of the MAS Affymetrix software and is based on the 2% lowest probe values. RMA uses only the PM values, neglects the MM values totally, and is based on conditional expectation and the normality assumption of probe values. There are also a number of correction methods available for the PM values:

> pmcorrect.methods
 "mas" "pmonly" "subtractmm"

The following normalization methods are available:

> normalize.methods(MLL.B)
 "constant" "contrasts" "invariantset" "loess" "qspline" "quantiles" "quantiles.robust"

Constant is a scaling method equivalent to linear regression on a reference array, although without an intercept term. More general are the non-linear normalization methods such as loess, qspline, quantiles, and robust quantiles. Loess is a non-linear method based on local regression of MA plots. The method of contrasts is based on loess regression. Quantile normalization is an inverse transformation of the empirical distribution with respect to an averaged sample quantile, in order to impose one and the same distribution on each array. The method qspline uses quantiles from each array and a target array to fit a system of cubic splines. The target should be the (geometric) mean or the median of each probe, but could also be the name of a particular group.

The final step of preprocessing is to aggregate multiple probe intensities into a gene expression value. The available methods are:

> express.summary.stat.methods
 "avgdiff" "liwong" "mas" "medianpolish" "playerout"

The first is the simplest, as it is based on averaging. There is no single best method for all preprocessing problems. It seems wise, however, to use methods robust against outliers together with non-linear normalization methods.

Example 1. The three preprocessing steps can be employed one after the other by the function expresso. To combine the background correction RMA with constant normalization and to use average differences for the computation of gene expression values, we may use the following.

eset <- expresso(MLL.B, bgcorrect.method="rma",
    normalize.method="constant", pmcorrect.method="pmonly",
    summary.method="avgdiff")

Example 2. Another frequently applied preprocessing method is RMA. It combines convolution background correction, quantile normalization, and summarization based on a multi-array model fitted in a robust manner by a so-called median polish algorithm.
> library(affy)
> data(MLL.B, package = "ALLMLL")
> eset3 <- rma(MLL.B)
Background correcting
Normalizing
Calculating Expression
> boxplot(data.frame(exprs(eset3)))

The three stages of preprocessing by rma are part of the output. Before a box-and-whiskers plot can be constructed, the expression values need to be extracted from the object eset3.

After the foregoing it is often desirable to further preprocess the data in order to remove patient specific means or medians. When the patient median is zero, for instance, testing whether a gene has mean expression value different from zero becomes meaningful.

Example 3. In the sequel we shall frequently work with the ALL data from the ALL package of Bioconductor. Here the data set is briefly introduced (see also Section 1.1) and further processing steps are illustrated. The raw data have been jointly normalized by RMA and are available in the form of an exprSet object. 12625 gene expression values are available from microarrays of 128 different persons suffering from acute lymphoblastic leukemia (ALL). A number of interesting phenotypical covariates are available. For instance, the ALL$mol variable has TRUE/FALSE values for each of the 128 patients depending on whether a reciprocal translocation occurred between the long arms of Chromosomes 9 and 22. This is causally related to chronic and acute leukemia. One can also ask for table(ALL$BT) to obtain an overview of the numbers of patients that are in certain phases of the disease. See also the general help ?ALL for further information on the data, or the article by Chiaretti et al. (2004).

> data(ALL, package = "ALL")
> slotNames(ALL)
"assayData" "phenoData" "featureData" "experimentData" "annotation" ".__classVersion__"
> row.names(exprs(ALL))[1:10]
"1000_at" "1001_at" "1002_f_at" "1003_s_at" "1004_at"
"1005_at" "1006_at" "1007_s_at" "1008_f_at" "1009_at"

By feno <- pData(ALL) the phenotypical information on the patients is stored in a data frame, which is useful for further analysis. In case the gene expression values over the patients are non-normally distributed, one may want to subtract the median and divide by the MAD. An efficient manner to do so is to use an apply function to compute the column median and MAD, and sweep to subtract the median from each column entry and, next, to divide each column entry by the MAD.

ALL1pp <- ALL1 <- ALL[,ALL$mol == "ALL1/AF4"]
meds <- apply(exprs(ALL1), 2, median)
mads <- apply(exprs(ALL1), 2, mad)    # the column MADs, used in the second sweep below
dat <- sweep(exprs(ALL1), 2, meds)
exprs(ALL1pp) <- sweep(dat, 2, mads, FUN="/")

By this script the patients with assigned molecular biology equal to ALL1/AF4 are selected. Then ALL1 is copied, in order to overwrite the expression values at a later stage. The median and the MAD are computed per column by the specification 2 (the column index) in the apply function. Then the first sweep function subtracts the medians from the expression values, and the second divides these by the corresponding MAD. By comparing the box plots in Figures 6.3 and 6.4, the effect of preprocessing can be observed. The medians of the preprocessed data are equal to zero and the variation is smaller due to the division by the MAD.
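A quick numerical check of this claim, a small sketch assuming ALL1pp from the script above: after the two sweep steps every column should have median 0 and MAD 1.

# Check the effect of the preprocessing (assumes ALL1pp from above).
round(apply(exprs(ALL1pp), 2, median), 10)   # all column medians are now 0
round(apply(exprs(ALL1pp), 2, mad), 10)      # all column MADs are now 1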
Note that by box plotting a data frame a fast overview of the distributions of the columns in a data frame is obtained.

Figure 6.3: Boxplot of the ALL1/AF4 patients.
Figure 6.4: Boxplot of the ALL1/AF4 patients after median subtraction and division by the MAD.

## 6.3 Gene filtering

A few important manners to filter genes are illustrated here. It is wise to keep in mind that there are statistical as well as biological criteria for filtering genes, and that a combination of these often gives the most satisfactory results. The examples stress the importance of careful thinking.

Example 1. Filtering by the coefficient of variation. A manner to filter genes is by the coefficient of variation, which is defined as the standard deviation divided by the absolute value of the mean: cv = σ/|µ|. If cv = 1, then the standard deviation equals the mean, so that the experimental effect is small relative to the precision of measurement. If, however, cv < 0.2, then the mean is five times larger than the standard deviation, so that both the experimental effect and the measurement precision are large. Let's compute the coefficient of variation per gene for the ALL1pp data of the previous section.

cvval <- apply(exprs(ALL1pp), 1, function(x) sd(x)/abs(mean(x)))

Now using sum(cvval < 0.2) yields 4751 genes with a coefficient of variation smaller than 0.2. These genes can be selected by ALL1pp[cvval<0.2,].

Example 2. Combining several filters. It is often desired to combine several filters. Of course it is possible to program filters completely on your own; however, we may conveniently use the function filterfun to combine several filters. The script in this example is useful when several functions are to be applied to a single data set.

library("genefilter")
f1 <- function(x) (IQR(x) > 0.5)
f2 <- pOverA(.25, log2(100))
f3 <- function(x) (median(2^x) > 300)
f4 <- function(x) (shapiro.test(x)$p.value > 0.05)
f5 <- function(x) (sd(x)/abs(mean(x)) < 0.1)
f6 <- function(x) (sqrt(10) * abs(mean(x))/sd(x) > qt(0.975,9))
ff <- filterfun(f1,f2,f3,f4,f5,f6)
library("ALL"); data(ALL)
selected <- genefilter(exprs(ALL[,ALL$BT=="B"]), ff)

After running this script and using sum(selected), one obtains 317 genes that pass the combined filter. The first function returns TRUE if the interquartile range is larger than 0.5, the second if 25% of the gene expression values are larger than log2(100) = 6.643856, the third if the median of the expression values taken as powers to the base two is larger than 300, the fourth if it passes the Shapiro-Wilk normality test, the fifth if the coefficient of variation is smaller than 0.1, and the sixth if the one-sample t-value is significant. The filter functions are combined by filterfun, and the function genefilter returns a logical vector indicating whether the gene passed all filters or failed at least one of them. In order to use these filter steps properly it is well to think them through, because several filters focus on similar properties. In particular, since the IQR divided by 1.349 is a robust estimator of the standard deviation, the first filter selects genes with a certain minimal standard deviation.
With respect to the third filter, note that median(2^x) > 300 is equivalent to median(x) > log2(300) ≈ 8.228819, which is highly similar to the second filter. Furthermore, s/|x̄| < 0.1 is equivalent to √10 · |x̄|/s > 10√10 ≈ 31.6, which is far larger than qt(0.975,9) ≈ 2.26, so that the last two filters are highly similar (a gene passing the fifth filter also passes the sixth).

Example 3. Filtering by t-test and normality. One may also want to select genes with respect to the p-values of a two-sample t-test over B-cell ALL versus T-cell ALL. This can be combined with a normality test, in the sense that only those genes are filtered which pass the Shapiro-Wilk normality test. The latter will be applied separately for the B-cell ALL patients and for the T-cell ALL patients. For this we write a function that will be used twice. First, however, we create a logical factor patientB indicating patients with B-cell ALL (TRUE) and with T-cell ALL (FALSE). The filter defined selects genes that have a p-value from the Welch two-sample t-test smaller than the significance level 0.05. A logical variable named selected is defined which attains TRUE only if sel1, sel2, as well as sel3 have the value TRUE.

library("genefilter"); library("ALL"); data(ALL)
patientB <- factor(ALL$BT %in% c("B","B1","B2","B3","B4"))
f1 <- function(x) (shapiro.test(x)$p.value > 0.05)
f2 <- function(x) (t.test(x ~ patientB)$p.value < 0.05)
sel1 <- genefilter(exprs(ALL[,patientB==TRUE]), filterfun(f1))
sel2 <- genefilter(exprs(ALL[,patientB==FALSE]), filterfun(f1))
sel3 <- genefilter(exprs(ALL), filterfun(f2))
selected <- (sel1 & sel2 & sel3)

## 6.4 Applications of linear models

Example 1. Analysis of variance by limma. The analysis of variance for the B-cell stages can also be performed with the linear model functions of the limma package. Let allB contain the expression values of the B-cell ALL patients in stage B, B1, and B2; a design matrix without intercept term is then constructed from the factor allB$BT.

library("limma"); library("ALL"); data(ALL)
allB <- ALL[,which(ALL$BT %in% c("B","B1","B2"))]
design.ma <- model.matrix(~ 0 + factor(allB$BT))
colnames(design.ma) <- c("B","B1","B2")
fit <- lmFit(allB, design.ma)
fit <- eBayes(fit)

> topTable(fit, coef=2, 5, adjust.method="fdr")
                   ID    logFC  AveExpr        t      P.Value    adj.P.Val        B
12586 AFFX-hum_alu_at 13.41648 13.50011 325.9683 3.165249e-99 3.996127e-95 207.4539
2488         32466_at 12.68419 12.70396 306.2708 1.332700e-97 8.412671e-94 204.8468
2773         32748_at 12.07511 12.10862 296.2687 9.771209e-97 3.615767e-93 203.4172
5328         35278_at 12.43678 12.45362 295.4843 1.145590e-96 3.615767e-93 203.3018
4636       34593_g_at 12.63516 12.58035 278.0195 4.431155e-95 1.118867e-91 200.6038

By topTable the five genes with the smallest p-values, adjusted for the false discovery rate, are selected. Let's call the mean of the B patients µ, that of B1 µ1, and that of B2 µ2. In the current case we are not so much interested in the hypothesis H0 : µ − µ2 = 0, because this is the difference between the first stage (B) and the last (B2). Rather, we are interested in the hypotheses H0 : µ − µ1 = 0 and H0 : µ1 − µ2 = 0. Such specific hypotheses can be tested by using a contrast matrix, which can be specified as follows.

> facB123 <- factor(allB$BT)
> cont.ma <- makeContrasts(B-B1, B1-B2, levels=facB123)
> cont.ma
      Contrasts
Levels B - B1 B1 - B2
    B       1       0
    B1     -1       1
    B2      0      -1

To obtain the appropriate levels we made the factor facB123 of allB$BT. Observe that the contrast matrix specifies the difference between the levels B and B1 as well as between B1 and B2.
It can be implemented as follows.

fit1 <- contrasts.fit(fit, cont.ma)
fit1 <- eBayes(fit1)

           ID      logFC  AveExpr         t      P.Value    adj.P.Val        B
3389 33358_at  1.4890066 5.260142  7.373638 5.736549e-10 7.242393e-06 12.27178
419   1389_at -1.7851913 9.262104 -7.080732 1.815838e-09 9.743963e-06 11.21950
1016  1914_at  2.0976468 4.939252  7.018884 2.315397e-09 9.743963e-06 10.99731
6939 36873_at  1.8645678 4.303218  6.425504 2.360990e-08 7.451875e-05  8.87100
7542 37471_at  0.8701475 6.551419  6.105622 8.161452e-08 2.060767e-04  7.73333

Here we have applied the method called "false discovery rate" (fdr), which increases the p-values somewhat in order to reduce the number of false positives. The number of genes requested equals 5.

A very convenient manner to summarize, collect, and communicate various types of results is in the form of an HTML file.

Example 2. Summarizing output in HTML format. It is often desired to combine the typical output from a function like topTable with an HTML output page containing various types of information. To illustrate this we proceed with the object fit of the previous example.

library("annaffy")
tab <- topTable(fit, coef=2, number=20, adjust.method="fdr")
anntable <- aafTableAnn(as.character(tab$ID), "hgu95av2", aaf.handler())
saveHTML(anntable, "ALLB123.html", title = "B-cell 012 ALL")

The output of topTable is saved in the object tab. By the function aafTableAnn various types of information are gathered from the topTable output of the estimated linear model, the annotation package, and the aaf.handler functionality. The information collected contains the following: Probe, Symbol, Description, Function, Chromosome, Chromosome Location, GenBank, LocusLink, Cytoband, UniGene, PubMed, Gene Ontology, and Pathway. The resulting anntable is saved in HTML format in the working directory or on the desktop. It contains a wealth of information on e.g. chromosome location, KEGG mappings, summaries from PubMed articles, etc.

Example 3. A complete script. A frequently occurring problem is to select genes by t-testing the difference in means and to generate output in HTML format. A method to solve this is illustrated by the following script.

library("multtest"); library("annaffy"); library("hgu95av2.db")
data(ALL)
selSamplesB <- ALL$BT %in% c("B","B1","B2","B3","B4")
factorBT <- as.integer(selSamplesB)
teststat <- mt.teststat(exprs(ALL), factorBT, test="t")
index <- order(abs(teststat), decreasing = TRUE)
probeids <- featureNames(ALL)[index]
anncols <- aaf.handler()[c(1:3,8:9,11:13)]
anntable <- aafTableAnn(probeids[1:20], "hgu95av2.db", anncols)
testtable <- aafTable("t-statistic" = teststat[index[1:20]], signed = TRUE)
table <- merge(anntable, testtable)
saveHTML(table, "ALLt-test.html", title = "T test on B-cell versus T-cell ALL")

The value of selSamplesB is TRUE if ALL$BT is of the B-cell type and FALSE otherwise. Accordingly, the factor equals one or zero, which is used to compute the Welch two-sample t-values for each row of exprs(ALL). The indices corresponding to the order of the absolute t-values are determined and used to select the 20 best genes. The hgu95av2 package is a metadata annotation package collecting the requested information by the call to aaf.handler(). The meaning of the columns can be obtained by the same command. The resulting table is saved as an HTML file in the working directory (getwd()) or on the desktop.
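As a brief aside on the "fdr" adjustment used throughout: it is the Benjamini-Hochberg step-up procedure, which is also available through the base R function p.adjust. A minimal sketch that reproduces the adjusted p-values of the contrast fit above (the raw p-values are copied from that output; n is the total number of probes tested):

# The "fdr" adjustment is Benjamini-Hochberg; p.adjust reproduces it exactly.
p <- c(5.736549e-10, 1.815838e-09, 2.315397e-09, 2.360990e-08, 8.161452e-08)
p.adjust(p, method = "fdr", n = 12625)
# 7.242393e-06 9.743963e-06 9.743963e-06 7.451875e-05 2.060767e-04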
Example 4. Using basic R functions. It is also possible to summarize results in an HTML table on the basis of p-values from e.g. analysis of variance (ANOVA). That is, the selected genes can directly be used as input for aafTableAnn.

library("multtest"); library("annaffy"); library("hgu95av2.db")
library("ALL"); data(ALL, package = "ALL")
ALLB <- ALL[,which(ALL$BT %in% c("B","B1","B2"))]
panova <- apply(exprs(ALLB), 1, function(x) anova(lm(x ~ ALLB$BT))$Pr[1])
genenames <- featureNames(ALLB)[panova < 0.000001]
atab <- aafTableAnn(genenames, "hgu95av2", aaf.handler())
saveHTML(atab, file="ANOVAonB-cellGroups.html")

In a similar manner the p-values from the Kruskal-Wallis test can be used to select genes.

Having some experience with analyzing the ALL data, the question may arise whether the model for means of groups can be extended from one factor to more factors. This is indeed possible. The model would then be equal to

Yijk = αi + βj + εijk,

where αi is the mean of Group i indicated by the first factor, βj the mean of Group j indicated by the second factor, and εijk the error, which is distributed according to N(0, σ²). If the means of the i groups differ, then there is a main effect of the first factor, which is expressed in a p-value smaller than 0.05. Similarly, in case the means of the j groups differ, there is a main effect of the second factor, expressed in a p-value smaller than 0.05.

Example 5. A two-way approach. In case of the ALL data from Chiaretti et al. (2004) we may select genes which have a main effect on differences in disease state B1, B2, or B3 and a main effect of the sex of the patient. This can be computed as follows.

library("ALL"); data(ALL)
ALLB <- ALL[,which(ALL$BT %in% c("B1","B2","B3"))]
peffBT <- apply(exprs(ALLB), 1, function(x)
    anova(lm(x ~ factor(ALLB$BT) * factor(ALLB$sex)))$Pr[1])
peffsex <- apply(exprs(ALLB), 1, function(x)
    anova(lm(x ~ factor(ALLB$BT) * factor(ALLB$sex)))$Pr[2])
> sum(peffBT < 0.05 & peffsex < 0.05)
 215

The p-values for the main effects are assigned to the vectors peffBT and peffsex. Using the logical AND (&) operator and summing the TRUE values yields 215 genes with significant main effects on disease state as well as on sex.

Bioconductor has a useful facility to download publicly available microarray data sets from NCBI.

Example 6. Analyzing publicly available data. The GDS1365 data contain the primed macrophage response to IFN-gamma restimulation after different time periods. The purpose is to gain insight into the influence of IFN-gamma priming on IFN-gamma induced transcriptional responses. Among the phenotypical covariates of the data there is a factor time with values 0, 3, and 24 hours, which we shall use. It can be extracted by the function pData.

library(GEOquery); library(limma); library(hgu95av2.db); library(annaffy)
gds <- getGEO("GDS1365")
eset <- GDS2eSet(gds, do.log2=TRUE)
design.ma <- model.matrix(~ 0 + pData(eset)$time)
fit <- lmFit(exprs(eset), design.ma)
fit <- eBayes(fit)
tab <- topTable(fit, coef=2, number=20, adjust.method="fdr")

## 6.5 Searching an annotation package

The annotation package hgu95av2 provides the manufacturer's identifiers of the genes together with, for example, the corresponding chromosome. Asking for information by ?hgu95av2CHR reveals that it is an environment (hash table) which provides mappings between identifiers and chromosomes. From such environments we obtain various types of information on the basis of the manufacturer's identifier, such as "1389_at".
Below we obtain, respectively, the GenBank accession number, the Entrez Gene identifier, the gene abbreviation, the gene name, brief summaries of the functions of the gene products, and the UniGene identifier. For this we use the get function in order to search an environment for a name.

> get("1389_at", env = hgu95av2ACCNUM)
 "J03779"
> get("1389_at", env = hgu95av2ENTREZID)
 4311
> get("1389_at", env = hgu95av2SYMBOL)
 "MME"
> get("1389_at", env = hgu95av2GENENAME)
 "membrane metallo-endopeptidase (neutral endopeptidase, enkephalinase, CALLA, CD10)"
> get("1389_at", env = hgu95av2SUMFUNC)
 NA
> get("1389_at", env = hgu95av2UNIGENE)
 "Hs.307734"

Let's use the GenBank accession number to search the nucleotide database.

> library(annotate)
> genbank("J03779", disp="browser")

From this we obtain the corresponding GI:179833 number, which can be used to obtain a complete XML document.

> genbank(1430782, disp="data", type="uid")

Obviously, probes correspond to genes, and frequently we are interested in their chromosome location and, specifically, in their starting position(s).

> get("1389_at", env = hgu95av2CHRLOC)
        3         3         3
156280152 156280327 156280748

Its cytoband location can also be obtained.

> get("1389_at", env = hgu95av2MAP)
 "3q25.1-q25.2"

Hence, we see that the gene is on Chromosome 3 at q arm band 25, sub-bands 1 and 2. In case we have a LocusLink ID available, e.g. 4121, the corresponding GO terms can be obtained and stored in a list.

ll1 <- GOENTREZID2GO[["4121"]]

## 6.6 Using annotation to search literature

Given the manufacturer's probe identifier it is possible to search the literature by collecting PubMed IDs and using these to collect relevant articles.

> library(hgu95av2); library(annotate); library(ALL); data(ALL)
> pmid <- get("1389_at", env=hgu95av2PMID)
> pubmed(pmid, disp="browser")

Another possibility is to collect a list containing PubMed ID, authors, abstract text, title, journal, and publication date.

> absts <- pm.getabst("1389_at", "hgu95av2")
> pm.titles(absts)
Another possibility is to construct an HTML table with the titles.

> pmAbst2HTML(absts[[1]], filename="pmon1389_at.html")

## 6.7 Searching GO numbers and evidence

By the phrase "ontology" we mean a structured language about some conceptual domain. The Gene Ontology Consortium defines three ontologies: A Molecular Function (MF) describes a phenomenon at the biochemical level such as "enzyme", "transporter", or "ligand". A Biological Process (BP) may coordinate various related molecular functions, such as "DNA replication" or "signal transduction". A Cellular Component (CC) is a unit within a part of the cell, such as "chromosome", "nucleus", or "ribosome".

Each term is identified by a unique GO number. To find GO numbers and their dependencies we use get to extract a list from annotation files such as hgu95av2GO. From the latter we extract a list and use an apply type of function to extract another list containing GO identification numbers.

> go1389 <- get("1389_at", env = hgu95av2GO)
> idl <- lapply(go1389, function(x) x$GOID)
> idl[[1]]
 "GO:0006508"

The list idl contains 8 members, of which only the first is printed to the screen. By changing GOID into Ontology, more specific information pertaining to the ontology is extracted. From the annotate package we may now select the GO numbers which are related to a biological process.

> library(annotate)
> getOntology(go1389, "BP")
 "GO:0006508" "GO:0007267"

There are various types of evidence, such as: inferred from genetic interaction (IGI), inferred from electronic annotation (IEA), traceable author statement (TAS), etc. Per GO identifier the type of evidence can be obtained.

> getEvidence(go1389)
GO:0004245 GO:0005886 GO:0005887 GO:0006508 GO:0007267 GO:0008237 GO:0008270
     "IEA"      "TAS"      "TAS"      "TAS"      "TAS"      "TAS"      "IEA"
GO:0046872
     "IEA"

When we want to select the GO numbers with evidence of a traceable author statement, we can use the subset function to create a list.

go1389TAS <- subset(go1389, getEvidence(go1389)=="TAS")

A manner to extract information from this list is by using an apply type of function.

> sapply(go1389TAS, function(x) x$GOID)
> sapply(go1389TAS, function(x) x$Evidence)
> sapply(go1389TAS, function(x) x$Ontology)

## 6.8 GO parents and children

The term "transmembrane receptor protein-tyrosine kinase" is more specific and therefore a 'child' of the more general parent term "transmembrane receptor" (Gentleman et al., 2005).

Example 1. Collecting GO information. There are functions to obtain the parents and children of a GO identifier.

> GOMFPARENTS$"GO:0003700"
        isa         isa
"GO:0003677" "GO:0030528"
> GOMFCHILDREN$"GO:0003700"
        isa
"GO:0003705"

In case of a list of GO identifiers you may want to collect the ontology, parents, and children identifiers in a vector.

go1389 <- get("1389_at", env = hgu95av2GO)
gonr <- getOntology(go1389, "BP")
gP <- getGOParents(gonr)
gC <- getGOChildren(gonr)
gPC <- c(gonr, gP, gC)
pa <- sapply(gP, function(x) x$Parents)
ch <- sapply(gC, function(x) x$Children)
gonrc <- c(gonr, unlist(pa), unlist(ch))

Example 2. Probe selection by GO. A research strategy may be to start with a probe number, to find the GO identifiers of the biological process, to obtain their parents, and next to transform these back to probes.

library(GO); library(annotate); library("ALL"); data(ALL)
go1389 <- get("1389_at", env = hgu95av2GO)
gonr <- getOntology(go1389, "BP")
gP <- getGOParents(gonr)
pa <- sapply(gP, function(x) x$Parents)
probes <- mget(pa, hgu95av2GO2ALLPROBES)
probeNames <- unlist(probes)
ALLpr <- ALL[probeNames,]
> dim(exprs(ALLpr))
 7745 128

Indeed, you may end up with many genes, useful for further analysis.

## 6.9 Gene filtering by a biological term

An application of working with GO numbers is to filter for genes which are related to a biological term.

Example 1. Filtering genes by a term. From a biological point of view it is most interesting to select genes which are related to a certain biological process, specified by a term such as "transcriptional repression". We combine this with the previous filter. For this we need the annotation package used in the stage of data collection, which can be obtained by annotation(ALL). First we define a function (Gentleman et al., 2005, p. 123) to collect appropriate GO numbers from the environment GOTERM.
library("GO"); library("annotate"); library("hgu95av2.db")
GOTerm2Tag <- function(term) {
    GTL <- eapply(GOTERM, function(x) {grep(term, x@Term, value=TRUE)})
    Gl <- sapply(GTL, length)
    names(GTL[Gl>0])
}
> GOTerm2Tag("transcriptional repressor")
 "GO:0016564" "GO:0016565" "GO:0016566" "GO:0017053"

The functions eapply and sapply search an environment like GOTERM by grep for matches of the specified term. A precaution is taken to select only those names which are not empty. This gives the GO terms, which can now be translated to probes of the ALLs data.

tran1 <- hgu95av2GO2ALLPROBES$"GO:0016564"
tran2 <- hgu95av2GO2ALLPROBES$"GO:0016566"
tran3 <- hgu95av2GO2ALLPROBES$"GO:0017053"
tran <- c(tran1, tran2, tran3)
inboth <- tran %in% row.names(exprs(ALLs))
ALLtran <- ALLs[tran[inboth],]

The GO translated probe names are intersected with the row names of the data, giving the logical variable inboth. The variable tran[inboth] gives the IDs by which genes can be selected. Next, the gene IDs for which inboth equals TRUE are selected and the corresponding data are collected in the data frame ALLtran.

## 6.10 Significance per chromosome

One may also ask whether the significant genes are over-represented on a particular chromosome. Below, rawp collects the raw p-values of a two-sample t-test over a factor fac separating two patient groups (for instance, B-cell versus T-cell ALL), and Fisher's exact test compares the number of significant genes on Chromosome 1 with that among all genes.

rawp <- apply(exprs(ALL), 1, function(x) t.test(x ~ fac)$p.value)
xx <- as.list(hgu95av2CHR)
AffimIDChr1 <- names(xx[xx=="1"])
names(rawp) <- featureNames(ALL)
f <- matrix(NA,2,2)
f[1,1] <- sum(rawp[AffimIDChr1]<0.05); f[1,2] <- length(AffimIDChr1)
f[2,1] <- sum(rawp<0.05);              f[2,2] <- length(rawp)
> fisher.test(f)

Fisher's Exact Test for Count Data

data: f
p-value = 0.7924
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
 0.8746595 1.1047267
sample estimates:
odds ratio
 0.9836513

Thus for Chromosome 1 the null hypothesis of an odds ratio equal to one is not rejected. The number of significant genes on Chromosome 1 is proportional to that of the total.

## 6.11 Overview and concluding remarks

Many examples have been given of using analysis of variance or t-tests for selecting genes with large experimental effects between different patient groups. The above statistical methods seem to cover the majority of problems occurring in practice.

## 6.12 Exercises

1. Gene filtering on normality per group of B-cell ALL patients.
(a) Use genefilter to program the Shapiro normality test separately for each gene of the groups "B1", "B2", "B3", "B4".
(b) How many pass the filter?
(c) Compute a Venn diagram for the groups "B2", "B3", and "B4", plot it, and give a correct interpretation of each number.

2. Analysis of gene expressions of B-cell ALL patients using Limma.
(a) Construct a data frame containing the expression values for the B-cell ALL patients in stage B, B1, B2, B3, B4 from the ALL data.
(b) Construct the design matrix and an appropriate contrast matrix.
(c) Compute the twenty best genes by topTable.
(d) Collect information on the twenty best genes in an HTML page.

3. Finding a row number. Use grep to find the row number of gene 1389_at. Hint: Use row.names or featureNames.

4. Remission (recovery) from acute lymphocytic leukemia (ALL). With respect to the ALL data from the ALL library there is a phenotypical variable called remission indicating complete remission (CR) or refractory (REF), meaning improvement from the disease and little or no improvement, respectively.
(a) How many persons are classified as CR and REF, respectively? Hint: Use pData to extract a data frame with the phenotypical data.
(b) Program the two-sample t-test not assuming equal variances to select genes with p-values smaller than 0.001.
Hint: You may have to select the persons with values on remission, excluding non-available data.
(c) Collect and give the manufacturer's probe names of the genes with p-values smaller than 0.001.
(d) Use the probe names to find the corresponding gene names. Give the code.
(e) Is the famous protein p53 among them?
(f) How many unique gene names are there?

5. Remission achieved. For the ALL data from its ALL library the patients are checked for achieving remission. The variable ALL$CR has values CR (became healthy) and REF (did not respond to therapy; remained ill).
(a) Construct a separate data frame consisting of only those gene expression values from patients that have values CR or REF.
(b) How many genes have a p-value smaller than 0.0001 from the two-sample t-test not assuming equal variances? Hint: Use the apply functionality to program the test.
(c) Give the Affymetrix names (symbols) of the genes that pass the selection criterion of a p-value smaller than 0.0001.
(d) Use the latter to find the biological names.
(e) How many oncogenes are there in total?
(f) Do the Fisher test on the number of oncogenes out of the total versus the number of significant oncogenes out of the selected.

6. Gene filtering of ALL data. The data are in the library called ALL. The persons with T-cell leukemia which are in stage T2 and T3 can be selected by the variable ALL$BT. You may use the function table to find the frequencies of the patient types and leukemia stages. To answer the questions below, functions from the library genefilter are helpful.
(a) Program a gene filter step separately for the T2 and T3 patients such that only those genes pass which are normally distributed.
(b) Program a second filter step which passes only those genes with a significant p-value from the two-sample t-test.
(c) How many genes pass all filter steps?
(d) How many genes pass normality?

7. Stages of B-cell ALL in the ALL data. Use the limma package to answer the questions below.
(a) Select the persons with B-cell leukemia which are in stage B1, B2, B3, and B4.
(b) What type of contrast matrix would you suggest in this situation? Give its code.
(c) Perform analysis of variance to test the hypothesis of equal population means. Use the Benjamini & Hochberg (1995) ("BH") adjustment method for the false discovery rate and topTable to report the five best genes.
(d) For how many genes is the null hypothesis to be rejected?

8. Analysis of public microarray data on rheumatoid arthritis.
(a) Download GDS486 and transform it into eset form. Here we meet a missing data problem. A manner to solve it is as follows. Use the function function(x) sum(is.na(x)) in apply on the rows to count the number of missing values per row. Select the rows without missing values to perform a two-sample t-test with the groups in cell.line. Overwrite the vector with the number of missing values with the p-values in a suitable manner.
(b) Download GDS711 and repeat the above using ANOVA p-values with the covariate disease.state to indicate the groups.
(c) Download GDS2126 and repeat the above using ANOVA p-values with the covariate disease.state to indicate the groups.
(d) Compute the symbols of the twenty best genes in the sense of having the smallest summed p-values.
(e) Summarize the information on the twenty best genes in an HTML table. Does p53 play a role in the pathway of the best gene?

9. Analysis of genes from a GO search.
(a) Select the patients on the covariate mol.biol with values ALL1/AF4, BCR/ABL, and NEG.
(b) Collect the ANOVA p-values with contrasts between NEG and ALL1/AF4, and between NEG and BCR/ABL. Report the number of significant affy IDs and the total. Hint: Re-order the columns into "NEG", "ALL1/AF4", and "BCR/ABL".
(c) Find the GO IDs referring to the term "protein-tyrosine kinase", since it mediates many steps due to the BCR/ABL translocation.
(d) Select the affy IDs corresponding to the GO IDs and report their number and the number of significant genes.
(e) Perform the Fisher exact test of the hypothesis that the odds ratio equals one.

# Chapter 7: Cluster Analysis and Trees

Given the expression values of several genes, a problem which often arises is to find genes which are similar or close. Genes with expressions at small distance may have similar functions and may be potentially interesting for further research. In order to discover groups of genes, several methods have been developed, collectively called cluster analysis. These methods are based on a distance function and an algorithm to join data points into clusters. The so-called single linkage cluster analysis is intuitively appealing and often applied in bioinformatics. By this method several clusters of genes can be discovered without specifying the number of clusters beforehand. The latter is necessary for another method, called k-means cluster analysis. Each analysis produces a tree which represents similar genes as close leaves and dissimilar ones on different edges.

Another measure to investigate the similarity or dependency of pairs of gene expressions is the correlation coefficient. Various examples of applications will be given. This prepares the way for searching a data set for directions of large variance. That is, since gene expression data sets tend to be large, it is of importance to have a method available which discovers important "directions" in the data. A frequently used method to find such directions is principal components analysis. Its basic properties will be explained, as well as how it can be applied in combination with cluster analysis.

In applications where it is difficult to formulate distributional assumptions about the statistic it may still be of importance to construct a confidence interval. It will be illustrated by several examples how the bootstrap can be applied to construct 95% confidence intervals. Many examples are given to clarify the application of cluster analysis and principal components analysis. In this chapter you learn about distance measures and the frequently employed correlation coefficient. Examples are given of analyzing data by single linkage cluster analysis, k-means cluster analysis, and principal components analysis.

## 7.1 Distance

The concept of distance plays a crucial role in all types of cluster analysis. For real numbers a and b, a distance function d is defined as the absolute value of their difference

d(a, b) = |a − b| = √((a − b)²).

The properties of a distance function should be in line with our intuition. That is, if a = b, then d(a, a) = 0, and if a ≠ b, then d(a, b) > 0. Hence, the distance measure should be definite in the sense that d(a, b) = 0 if and only if a = b. Since the square is symmetric, it follows that

d(a, b) = |a − b| = √((a − b)²) = √((b − a)²) = |b − a| = d(b, a).

In other words, d(a, b) = d(b, a): the distance between a and b equals that between b and a.
Furthermore, it holds for all points c between a and b that d(a, b) = d(a, c) + d(c, b). For all points c not between a and b, it follows that d(a, b) < d(a, c) + d(c, b). The latter two notions can be summarized by the so-called triangle inequality. That is, for all real c it holds that

d(a, b) ≤ d(a, c) + d(c, b).

Going directly from a to b is shorter than via c. Finally, the distance between two points a and b should increase as these move further apart.

Example 1. Let a = 1 and b = 3. Then, obviously, the distance d(1, 3) = 2. The number c = 2 is between a and b, so that d(1, 3) = 2 = 1 + 1 = d(1, 2) + d(2, 3), and the triangle inequality becomes an equality.

For the situation where gene expression values for several patients are available, it is of importance to define a distance for vectors of gene expressions such as a = (a1, ..., an) and b = (b1, ..., bn). We shall concentrate mainly on the Euclidean distance, which is defined as the root of the sum of the squared differences

d(a, b) = √( (a1 − b1)² + ... + (an − bn)² ) = √( Σ i=1..n (ai − bi)² ).

The distance measure satisfies the above properties of definiteness, symmetry, and triangle inequality. Although many other, often highly similar, distance functions are available, we shall concentrate mainly on the Euclidean distance because it is applied most frequently in bioinformatics.

Example 2. Suppose that a = (a1, a2) = (1, 1) and b = (b1, b2) = (4, 5). Then

d(a, b) = √((a1 − b1)² + (a2 − b2)²) = √((1 − 4)² + (1 − 5)²) = √(9 + 16) = 5.

Since the differences are squared, it is immediate that d(a, b) = d(b, a): the distance from a to b equals that from b to a. For c = (c1, c2) = (2, 2) we have that d(a, c) = √2 and d(b, c) = √(2² + 3²) = √13. Hence,

d(a, b) = 5 < √2 + √13 = d(a, c) + d(b, c),

so that the triangle inequality is strict. This is in line with our intuitive idea that the road directly from a to b is shorter than from a to b via c.

Example 3. To compute the Euclidean distance between two vectors one may use the following.

> a <- c(1,1); b <- c(4,5)
> sqrt(sum((a-b)^2))
 5

Example 4. Distances between Cyclin gene expressions. By the built-in function dist the Euclidean distance between two vectors of gene expression values can be computed. To select genes related to the biological term "Cyclin" and to compute the Euclidean distance between their gene expression values in the Golub et al. (1999) data, we may use the following.
> library(multtest); data(golub)
> index <- grep("Cyclin", golub.gnames[,2])
> golub.gnames[index,2]
"CCND2 Cyclin D2"
"CDK2 Cyclin-dependent kinase 2"
"CCND3 Cyclin D3"
"CDKN1A Cyclin-dependent kinase inhibitor 1A (p21, Cip1)"
"CCNH Cyclin H"
"Cyclin-dependent kinase 4 (CDK4) gene"
"Cyclin G2 mRNA"
"Cyclin A1 mRNA"
"Cyclin-selective ubiquitin carrier protein mRNA"
"CDK6 Cyclin-dependent kinase 6"
"Cyclin G1 mRNA"
"CCNF Cyclin F"
> dist.cyclin <- dist(golub[index,], method="euclidean")
> diam <- as.matrix(dist.cyclin)
> rownames(diam) <- colnames(diam) <- golub.gnames[index,3]
> diam[1:5,1:5]
          D13639_at M68520_at M92287_at U09579_at U11791_at
D13639_at  0.000000  8.821806  11.55349 10.056814  8.669112
M68520_at  8.821806  0.000000  11.70156  5.931260  2.934802
M92287_at 11.553494 11.701562   0.00000 11.991333 11.900558
U09579_at 10.056814  5.931260  11.99133  0.000000  5.698232
U11791_at  8.669112  2.934802  11.90056  5.698232  0.000000

By the grep function the row numbers of the genes with the phrase "Cyclin" in their names are assigned to the vector called index. The Euclidean distances are assigned to the matrix called diam. Its diagonal contains the distances between identical genes, which are, of course, zero. The distance between the first gene (CCND2 Cyclin D2) and the third (CCND3 Cyclin D3) is relatively small, which is in line with the fact that these genes have related functions. Note, however, that there are genes at an even smaller distance.

Example 5. Finding the ten closest genes to a given one. After selecting certain genes it often happens that one wants to find genes which are close to the selected ones. This can be done with the genefinder functionality by specifying either an index or a name (consistent with the geneNames of the exprSet). To find the genes from the ALL data (Chiaretti et al., 2004) closest to the MME expression values of the probe with identifier 1389_at, we need to specify row number 419.

> library("genefilter"); library("ALL"); data(ALL)
> grep("1389_at", featureNames(ALL))
 419 1400
> closeto1389_at <- genefinder(ALL, 419, 10, method = "euc",
+ scale = "none")
> str(closeto1389_at)
List of 1
 $ 1389_at:List of 2
  ..$ indices: num [1:10] 2653 1096 6634 9255 6639 ...
  ..$ dists  : num [1:10] 12.6 12.8 12.8 12.8 13.0 ...
> featureNames(ALL)[closeto1389_at[[1]]$indices]
"32629_f_at" "1988_at" "36571_at" "39168_at" "36576_at"
"41295_at" "39756_g_at" "32254_at" "38438_at" "40635_at"

The function genefinder produces a list from which the selected row numbers as well as the probe names can be extracted.¹ If desired, these can be used for further analysis. From the list it can be observed that the gene expressions of row 2653, with probe identifier 32629_f_at, have the smallest distance (12.6) to those of 1389_at.

¹For information on lists, see Chapter 6 of the manual "An Introduction to R".

## 7.2 Two types of Cluster Analysis

Some important types of cluster analysis are defined and illustrated here.

### 7.2.1 Single linkage

A cluster I is simply a set of data points I = {xi}, where xi is the i-th vector with gene expressions. In single linkage cluster analysis the distance between clusters I and J is defined as the smallest distance over all pairs of points of the two clusters:

d(I, J) = min over i,j of { d(xi, xj) : xi in I and xj in J }.

Hence, the distance between the two clusters is the same as that between the nearest neighbors.
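As a small illustration (a sketch; the function name single.linkage.dist is ours, not from a package), this between-cluster distance can be computed directly from the pairwise distance matrix:

# Sketch: single linkage distance between two clusters, each a matrix
# with one data point per row.
single.linkage.dist <- function(I, J) {
  d <- as.matrix(dist(rbind(I, J)))         # all pairwise distances
  min(d[1:nrow(I), nrow(I) + (1:nrow(J))])  # smallest between-cluster distance
}
I <- rbind(c(1,1), c(1,1.1))
J <- rbind(c(3,2), c(3,2.3))
single.linkage.dist(I, J)   # 2.193171, the d(I, J) found in Example 1 below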
The algorithm of single linkage cluster analysis starts with creating as many clusters as there are data points. Next, the nearest two are determined and merged into one cluster. Then the next two nearest clusters are determined and merged into one cluster. This process continues until all points belong to one cluster.

[Figure 7.1: Plot of five points to be clustered.]
[Figure 7.2: Tree of single linkage cluster analysis.]

Example 1. An explanatory example. To illustrate single linkage cluster analysis let's consider the following five points: x1 = (1, 1), x2 = (1, 1.1), x3 = (3, 2), x4 = (3, 2.3), and x5 = (5, 5), see Figure 7.1. By the script below these are defined and stored in a data frame with the corresponding names, their distances are computed and printed, and a single linkage cluster analysis is performed.

> sl.clus.dat <- data.frame(matrix(c(1,1,1,1.1,3,2,3,2.3,5,5),
+                           ncol = 2, byrow = TRUE))
> colnames(sl.clus.dat) <- c("a1","a2")
> rownames(sl.clus.dat) <- c("x1","x2","x3","x4","x5")
> plot(sl.clus.dat, type="n")
> text(sl.clus.dat$a1, sl.clus.dat$a2, labels=row.names(sl.clus.dat))
> print(dist(sl.clus.dat,method="euclidean"),digits=3)
     x1   x2   x3   x4
x2 0.10
x3 2.24 2.19
x4 2.39 2.33 0.30
x5 5.66 5.59 3.61 3.36
> sl.out <- hclust(dist(sl.clus.dat,method="euclidean"),method="single")
> plot(sl.out)

At the start each data point is seen as a separate cluster. The nearest two points according to the Euclidean distance matrix are x1 and x2, having d(x1, x2) = 0.10. These two data points are merged into one cluster, say I = {x1, x2}. In Figure 7.2 this is illustrated by the horizontal line at height 0.10 in the tree. The other three data points x3, x4, x5 are seen as three different clusters. Next, the minimal distance between clusters can be read from the Euclidean distance matrix. Since the smallest remaining distance is d(x3, x4) = 0.30, the next cluster is J = {x3, x4}, corresponding to the horizontal line at height 0.30. Now there are three clusters: I, J, and K = {x5}. From the Euclidean distance matrix it can be observed that the single linkage distance between clusters I and J is d(x2, x3) = 2.19; see the corresponding horizontal line at this height. Hence, the clusters I and J are merged into one. Finally, the distance between the cluster {x1, x2, x3, x4} and the data point x5 equals d(x4, x5) = 3.36; see the corresponding horizontal line at this height.

Example 2. Relating data generation processes to cluster trees. It is of importance to have some experience with data that does and does not contain clusters. If the data are sampled from a standard normal N(0, 1) population, then there is no underlying process producing separate clusters. To illustrate this we perform single linkage cluster analysis on twenty data points from the standard normal population.

sl.out <- hclust(dist(rnorm(20,0,1),method="euclidean"),method="single")
plot(sl.out)

From the resulting tree in Figure 7.3 one might get the impression that there are five separate clusters in the data.
Note, however, that there is no underlying data generation process which produces separate clusters from different populations.

[Figure 7.3: Example of a tree without clusters.]
[Figure 7.4: Three clusters with different standard deviations.]

If, however, the data are generated by different normal distributions, then there are different processes producing separate clusters. To illustrate this, ten data points were sampled from the N(0, 0.1) population, ten from N(3, 0.5), and ten from N(10, 1.0).

x <- c(rnorm(10,0,0.1),rnorm(10,3,0.5),rnorm(10,10,1.0))
plot(hclust(dist(x,method="euclidean"),method="single"))

From the tree in Figure 7.4 it can be observed that there clearly exist three clusters.

These examples illustrate that results from cluster analysis may very well reveal population properties, but that some caution is indeed in order.

Example 3. Application to the Golub (1999) data. Recall that the first twenty-seven patients belong to ALL and the remaining eleven to AML, and that we found earlier that the expression values of the genes "CCND3 Cyclin D3" and "Zyxin" differ between the patient groups ALL and AML. Figure 7.5 illustrates that the patient groups differ with respect to these gene expression values. How to produce this plot and a single linkage cluster analysis is shown by the script below.

data(golub, package="multtest")
clusdata <- data.frame(golub[1042,],golub[2124,])
colnames(clusdata) <- c("CCND3 Cyclin D3","Zyxin")
gol.fac <- factor(golub.cl,levels=0:1, labels= c("ALL","AML"))
plot(clusdata, pch=as.numeric(gol.fac))
legend("topright",legend=c("ALL","AML"),pch=1:2)
plot(hclust(dist(clusdata,method="euclidean"),method="single"))

[Figure 7.5: Plot of gene "CCND3 Cyclin D3" and "Zyxin" expressions for ALL and AML patients.]
[Figure 7.6: Single linkage cluster diagram from gene "CCND3 Cyclin D3" and "Zyxin" expression values.]

Figure 7.6 gives the tree from single linkage cluster analysis. Apart from three expressions the tree shows two clusters corresponding to the two patient groups.

7.2.2 k-means

K-means cluster analysis is a popular method in bioinformatics. It is defined by minimizing the within-cluster sum of squares over K clusters. That is, given the data points x1, ..., xn the method seeks to minimize the function

$$ \sum_{k=1}^{K} \sum_{i \in I_k} d^2(x_i, a_k) $$

over all possible cluster centers a1, ..., aK.
This is accomplished by an algorithm (Hartigan & Wong, 1979) which starts by partitioning the data points into K initial clusters, either at random or using some heuristic device. It then alternates between two steps: computing the cluster means (step 1) and constructing a new partition by associating each point with the closest cluster mean (step 2). These two steps are repeated until convergence, which occurs when the data points no longer change clusters. The iterative algorithm is fast in the sense that it often converges in fewer iterations than the number of points n, but it need not attain the global minimum. For the optimal points a1, ..., aK it holds that each equals the corresponding cluster mean, that is, a_k = x̄_k for each cluster k. When the data points are independent and identically distributed, the cluster means converge in probability to the corresponding population means (Pollard, 1981).

Example 1. Relating a data generation process to k-means cluster analysis. To illustrate k-means cluster analysis we shall simulate gene expressions from two different normal populations. That is, we randomly take fifty gene expressions for two persons from the N(0, 0.5) population and fifty expressions for two persons from the N(2, 0.5) population. The data points are collected in two matrices of order fifty by two which are placed one above the other. On the total of one hundred data points a (k =) 2-means cluster analysis is performed.

> data <- rbind(matrix(rnorm(100,0,0.5), ncol = 2),
+               matrix(rnorm(100,2,0.5), ncol = 2))
> cl <- kmeans(data, 2)
> cl
K-means clustering with 2 clusters of sizes 50, 50

Cluster means:
        [,1]       [,2]
1 1.87304978 2.01940342
2 0.01720177 0.07320413

Clustering vector:
  [1] 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 [38] 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 [75] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

Within cluster sum of squares by cluster:
[1] 22.60733 20.54411

Available components:
[1] "cluster"  "centers"  "withinss" "size"

[Figure 7.7: K-means cluster analysis.]
[Figure 7.8: Tree of single linkage cluster analysis.]

The output of k-means cluster analysis is assigned to a list called cl. Observe that the cluster means are fairly close to the population means (0, 0) and (2, 2). The Clustering vector indicates to which cluster each data point (gene) belongs, and these correspond exactly to the two populations from which the data are sampled. The variable cl$cluster contains the cluster memberships and can be used to specify the color of each data point in a plot, as follows.

> plot(data, col = cl$cluster)
> points(cl$centers, col = 1:2, pch = 8, cex=2)

The data points are plotted as red and black circles and the cluster means by a star, see Figure 7.7. The sum of the within-cluster sums of squares equals the minimal function value obtained by the algorithm.
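To verify this, the criterion can be recomputed directly from the output; a small sketch, reusing the objects data and cl from the example above:

wss <- sapply(1:2, function(k) {
  dev <- scale(data[cl$cluster == k, ], center = cl$centers[k, ],
               scale = FALSE)      # deviations from the cluster mean
  sum(dev^2)                       # within cluster sum of squares
})
wss          # should reproduce cl$withinss
sum(wss)     # the minimized criterion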
Before performing a k-means cluster analysis, a plot from a single linkage cluster analysis may reveal the number of clusters. If the number of clusters is not at all clear, then it becomes questionable whether k-means is appropriate. When the number of clusters is only moderately clear, the algorithm is more likely to get stuck in a solution which is only locally optimal. Such solutions are of limited scientific value. To cope with the danger of suboptimal solutions one may simply run the algorithm repeatedly by using the nstart option. Another possibility is to use rational initial starting values for the cluster means. In particular, the sample means of potential clusters or the hypothesized population means can be used.

> initial <- matrix(c(0,0,2,2), nrow = 2, ncol=2, byrow=TRUE)
> cl <- kmeans(data, initial, nstart = 10)

The so-called bootstrap (Efron, 1979) can be used to estimate 95% confidence intervals around the cluster means. The idea is to re-sample one thousand times with replacement from the given sample and to compute quantiles for the corresponding confidence intervals.

n <- 100; nboot <- 1000
boot.cl <- matrix(0,nrow=nboot,ncol = 4)
for (i in 1:nboot){
  dat.star <- data[sample(1:n,replace=TRUE),]
  cl <- kmeans(dat.star, initial, nstart = 10)
  boot.cl[i,] <- c(cl$centers[1,],cl$centers[2,])
}
> quantile(boot.cl[,1],c(0.025,0.975))
      2.5%      97.5%
-0.1098886  0.1627979
> quantile(boot.cl[,2],c(0.025,0.975))
       2.5%       97.5%
-0.04830563  0.19721732
> quantile(boot.cl[,3],c(0.025,0.975))
    2.5%    97.5%
1.730495 2.009014
> quantile(boot.cl[,4],c(0.025,0.975))
    2.5%    97.5%
1.898407 2.162019

From the bootstrap confidence intervals the null hypotheses that the cluster population means are equal to (0, 0) and (2, 2) are both accepted.

Example 2. Application to the Golub (1999) data. In the above we found that the expression values of the genes "CCND3 Cyclin D3" and "Zyxin" are closely related to the distinction between ALL and AML. Hence, a 2-means cluster analysis of these gene expression values is appropriate here.

> data <- data.frame(golub[1042,],golub[2124,])
> colnames(data) <- c("CCND3 Cyclin D3","Zyxin")
> cl <- kmeans(data, 2, nstart = 10)
> cl
K-means clustering with 2 clusters of sizes 11, 27

Cluster means:
  CCND3 Cyclin D3      Zyxin
1       0.6355909  1.5866682
2       1.8938826 -0.2947926

Clustering vector:
 1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
 2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2
27 28 29 30 31 32 33 34 35 36 37 38
 2  1  1  1  1  1  1  1  1  1  1  1

Within cluster sum of squares by cluster:
[1]  4.733248 19.842225

The two clusters discriminate exactly the ALL patients from the AML patients. This can also be seen from Figure 7.9, where the expression values of CCND3 Cyclin D3 are depicted on the horizontal axis and those of Zyxin on the vertical, and the ALL patients are in red and the AML patients in black. By the bootstrap the cluster means and their confidence intervals can be estimated.

[Figure 7.9: Plot of k-means (stars) cluster analysis on CCND3 Cyclin D3 and Zyxin discriminating between ALL (red) and AML (black) patients.]
> colMeans(data.frame(boot.cl))
        X1         X2         X3         X4
 0.6381860  1.5707477  1.8945878 -0.2989426
> quantile(boot.cl[,1],c(0.025,0.975))
     2.5%     97.5%
0.2548907 0.9835898
> quantile(boot.cl[,2],c(0.025,0.975))
    2.5%    97.5%
1.259608 1.800581
> quantile(boot.cl[,3],c(0.025,0.975))
    2.5%    97.5%
1.692813 2.092361
> quantile(boot.cl[,4],c(0.025,0.975))
       2.5%       97.5%
-0.60802142 -0.02420802

The difference between the bootstrap means and the k-means from the original data gives an estimate of the estimation bias. It can be observed that the bias is small. The estimation is quite precise because the 95% bootstrap confidence intervals are fairly narrow.

7.3 The correlation coefficient

A frequently used coefficient to express the degree of linear relationship between two sets of gene expression values is the correlation coefficient ρ. For two sequences of gene expressions such as x = (x1, ..., xn) and y = (y1, ..., yn), the correlation coefficient ρ is estimated by

$$ \hat{\rho} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2 \sum_{i=1}^{n} (y_i - \bar{y})^2}}. $$

The value of the correlation coefficient is always between minus one and plus one. If the value is close to either of these, then the variables are linearly related in the sense that the first is a linear transformation of the second; that is, there are constants a and b such that ax_i + b = y_i for all i. By the function cor.test, the null hypothesis H0: ρ = 0 can be tested against the alternative H1: ρ ≠ 0.

Example 1. Teaching demonstration. To develop intuition with respect to the correlation coefficient the function run.cor.examp(1000) of the TeachingDemos package, developed by Greg Snow, is quite useful. It launches an interactive plot with 1000 data points on two random variables X and Y. When the correlation is near zero, the data points are more or less equally distributed within a circle. By moving the slider slowly from the left to the right it can be observed that the points move toward a straight line. Note that if the sign of the correlation coefficient is positive, then small/large values of X tend to go together with small/large values of Y.

Example 2. Another teaching demonstration. By the function put.points.demo() it is possible to add and delete points in a plot which interactively re-computes the value of the correlation coefficient. By first creating a few points that lie together on a circle, the corresponding correlation coefficient will be near zero. By next adding one outlier, it can be observed that the correlation coefficient changes to nearly ±1. It can be concluded that the correlation coefficient is not robust against outliers.

Example 3. Application to the Golub (1999) data. We shall illustrate the correlation coefficient by two sets of expression values of the MCM3 gene of the Golub et al. (1999) data. This gene encodes highly conserved mini-chromosome maintenance proteins (MCM) which are involved in the initiation of eukaryotic genome replication. Here, we find its row numbers, collect the gene expression values in the vectors x and y, and compute the value of the correlation coefficient by the function cor(x,y).

> library(multtest); data(golub)
> x <- golub[2289,]; y <- golub[2430,]
> cor(x,y)
[1] 0.6376217

The value is positive, which means that larger values of x occur together with larger values of y and vice versa. This can also be observed by plot(x,y).
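The estimator defined above can also be verified by direct computation; a brief sketch, reusing the vectors x and y:

rho.hat <- sum((x - mean(x)) * (y - mean(y))) /
           sqrt(sum((x - mean(x))^2) * sum((y - mean(y))^2))
rho.hat      # equals cor(x,y), that is 0.6376217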
By the function cor.test, the null hypothesis H0: ρ = 0 can be tested against the alternative H1: ρ ≠ 0. It also estimates a 95% confidence interval for ρ.

> cor.test(x,y)

        Pearson's product-moment correlation

data:  x and y
t = 4.9662, df = 36, p-value = 1.666e-05
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 0.3993383 0.7952115
sample estimates:
      cor
0.6376217

The test is based on the normality assumption and therefore prints a t-value. Since the corresponding p-value is very small, we reject the null hypothesis of zero correlation. The left bound of the confidence interval falls far to the right of zero.

Example 4. Confidence interval by the bootstrap. Another manner to construct a 95% confidence interval is by the bootstrap. The idea (Efron, 1979) is to obtain a thousand samples from the original sample with replacement and to compute the correlation coefficient for each of these. This yields a thousand coefficients from which the quantiles for the 95% confidence interval can be computed.

> nboot <- 1000; boot.cor <- matrix(0,nrow=nboot,ncol = 1)
> data <- matrix(c(x,y),ncol=2,byrow=FALSE)
> for (i in 1:nboot){
+   dat.star <- data[sample(1:nrow(data),replace=TRUE),]
+   boot.cor[i,] <- cor(dat.star)[2,1]}
> mean(boot.cor)
[1] 0.6534167
> quantile(boot.cor[,1],c(0.025,0.975))
     2.5%     97.5%
0.2207915 0.9204865

Observe that the 95% confidence interval is larger than that found by cor.test. This indicates that the assumption of normality may not be completely valid here. Since the confidence interval does not contain zero, the conclusion is to reject the null hypothesis of zero correlation.

Example 5. Application to the Golub (1999) data. The ALL and AML patients of the Golub et al. (1999) data are indicated by the zeros and ones of the binary vector golub.cl. A manner to select genes is by the correlation of the expression values with this binary vector. Such correlations can be computed by using the apply functionality.

> library(multtest); data(golub)
> corgol <- apply(golub, 1, function(x) cor(x,golub.cl))
> o <- order(corgol)

By golub.gnames[o[3041:3051],2] it can be seen that various of these genes indeed seem to have the important cell functions referred to by Golub et al. (1999). In particular, Interleukin 8 has recently been related to inflammatory cytokine production in myeloid cells (Tessarz et al., 2007).

7.4 Principal Components Analysis

To make the basic ideas behind principal components analysis explicit, it is wise to start with a small artificial example. Suppose that for six genes the standardized expression values on two patients (the variables) are available, as given in Table 7.1. The data are collected in a 6 by 2 data matrix Z, where e.g. element z21 is the expression value -0.40 of the second gene for the first patient.

Table 7.1: Data set for principal components analysis.

           Var 1   Var 2
gene 1      1.63    1.22
gene 2     -0.40    0.79
gene 3      0.93    0.97
gene 4     -1.38   -1.08
gene 5     -0.17   -0.96
gene 6     -0.61   -0.93

The whole idea of principal components analysis is to find new directions in the data along which there is maximal variation. A direction is defined as a linear combination Zk of the data Z by a vector k of weights, where the i-th element of the linear combination is the weighted sum $\sum_{j=1}^{2} z_{ij} k_j$. The direction of maximal variation is defined as the linear combination with maximal variance.
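To make this concrete, a brief numerical sketch (entering Z from Table 7.1, as is also done in Example 1 below) compares the variance of the linear combination along two unit-length directions:

Z <- matrix(c( 1.63, 1.22, -0.40, 0.79,  0.93, 0.97,
              -1.38,-1.08, -0.17,-0.96, -0.61,-0.93), nrow=6, byrow=TRUE)
var(Z %*% c(1, 1)/sqrt(2))    # variance along direction (1,1): about 2.16
var(Z %*% c(1,-1)/sqrt(2))    # variance along direction (1,-1): about 0.24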
To find this direction the correlation matrix plays an important role. It contains the correlations between each pair of patients (variables). In our case the correlations between the columns (patients) of Table 7.1 can be placed in a matrix R, which has ones on the diagonal and the value 0.8 elsewhere. To illustrate a direction let's try the linear combination k = (2, 1)² with the sample correlation matrix R. This gives

$$ Rk = \begin{bmatrix} 1 & 0.8 \\ 0.8 & 1 \end{bmatrix} \begin{bmatrix} 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 2.8 \\ 2.6 \end{bmatrix}. $$

Both vectors k and Rk can be plotted in the xy-plane. The vector (2, 1) is plotted by drawing an arrow from (0, 0) to the point with x = 2 and y = 1. This is done completely similarly for (2.8, 2.6) in Figure 7.10. It can be observed that the two vectors (arrows) do not fall on the same line and therefore have different directions. The crux of principal components analysis is that a linear combination with the same direction as the weights represents the direction of maximum variation. Such is the case if Rk differs from k only by a constant of multiplication, that is, if there exists a constant d such that Rk = dk. We shall determine such a constant by finding the weights vector first. To do so, observe from our correlation matrix that the sum of both rows equals 1.8. Hence, if we take k = (1, 1), then the row sums are computed by a linear combination as follows:

$$ Rk = \begin{bmatrix} 1 & 0.8 \\ 0.8 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 1.8 \\ 1.8 \end{bmatrix} = 1.8 \begin{bmatrix} 1 \\ 1 \end{bmatrix} = 1.8\,k. $$

Hence, we obtain d = 1.8. A similar result follows by observing that the differences per row are equal in absolute value. Computing the differences of the elements per row implies taking k = (1, -1), so that

$$ Rk = \begin{bmatrix} 1 & 0.8 \\ 0.8 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ -1 \end{bmatrix} = \begin{bmatrix} 0.2 \\ -0.2 \end{bmatrix} = 0.2 \begin{bmatrix} 1 \\ -1 \end{bmatrix} = 0.2\,k. $$

A vector k for which Rk = dk holds is called an eigenvector corresponding to the eigenvalue d. Eigenvectors are often re-scaled by dividing by their Euclidean length. Since the Euclidean length of (1, 1) is $\sqrt{1^2 + 1^2} = \sqrt{2}$, we obtain the re-scaled eigenvector $k_1 = (1/\sqrt{2}, 1/\sqrt{2}) \approx (0.71, 0.71)$. Since the length of the eigenvector (1, -1) also equals $\sqrt{2}$, the re-scaled second eigenvector equals $k_2 = (1/\sqrt{2}, -1/\sqrt{2}) \approx (0.71, -0.71)$. Now the first principal component is defined as Zk1 and the second as Zk2. In practical applications the actual computation of eigenvectors and eigenvalues is performed by well-designed numerical methods (Golub & Van Loan, 1983).

²For the sake of simple notation we shall not use the transposition operator T to indicate rows.

Example 1. Using R on the above data. It is convenient to store the data of the first two columns of Table 7.1 as a matrix object called Z. The correlation matrix can be computed by the built-in function cor and the eigenvectors and eigenvalues by the built-in function eigen, as follows.

Z <- matrix(c( 1.63, 1.22,
              -0.40, 0.79,
               0.93, 0.97,
              -1.38,-1.08,
              -0.17,-0.96,
              -0.61,-0.93), nrow=6, byrow=TRUE)
K <- eigen(cor(Z))

[Figure 7.10: Vectors of linear combinations.]
[Figure 7.11: First principal component with projections of data.]

The output is stored as an object called K which can be printed to the screen in two digits.

> print(K,digits=2)
$values
[1] 1.8 0.2

$vectors
     [,1]  [,2]
[1,] 0.71  0.71
[2,] 0.71 -0.71

The eigenvalues are assigned to K$values and the eigenvectors are the columns of K$vectors. To compute the principal components we use the matrix multiplication operator %*%.
Then the first principal component is defined as the linear combination of the data with the first eigenvector, Z %*% K$vec[,1]. To print the scores on the first and the second principal component one can use the following.

> print(Z %*% K$vec, digits=2)
      [,1]   [,2]
[1,]  2.02  0.290
[2,]  0.28 -0.841
[3,]  1.34 -0.028
[4,] -1.74 -0.212
[5,] -0.80  0.559
[6,] -1.09  0.226

To illustrate the first principal component, the six data points from the Z matrix are plotted as small circles in Figure 7.11. Gene 1, for instance, has x coordinate 1.63 and y coordinate 1.22 and therefore appears in the right upper corner. A convenient manner to perform principal components analysis is by the built-in function princomp (which centers the data automatically), as follows.

pca <- princomp(Z, cor=TRUE, scores=TRUE)
pca$scores

The scores are the component scores, and the loadings from princomp are the eigenvectors.

The eigenvalues represent the amount of variance related to the components. In the previous example the first component has variance 1.8 and the second 0.2, so that the first represents 1.8/2 = 0.9, or 90%, of the variance. On the basis of the eigenvalues the number of interesting directions in the data can be evaluated by two rules of thumb. The first is that each interesting eigenvalue should represent more variance than that of any of the observed variables. The second is the so-called elbow rule, saying that when the first few eigenvalues are large and the remaining ones considerably smaller, then the first few are the most interesting.

Principal components analysis is a descriptive method to analyze dependencies (correlations) between variables. If there are a few large eigenvalues, then there are equally many directions in the data which summarize the most important variation among the gene expressions. Then it may be useful to explore simultaneously a two-dimensional visualization of the genes and the patients. Furthermore, it can be rewarding to study the weights of the eigenvectors because these may reveal a structure in the data that would otherwise go unnoticed. Finally, the principal components contain less (measurement) error than the individual variables. For this reason, cluster analysis on the values of the principal components may be useful.

Example 2. Application to the Golub (1999) data. The first five eigenvalues from the correlation matrix of golub can be printed by the following.

> eigen(cor(golub))$values[1:5]
[1] 25.4382629  2.0757158  1.2484411  1.0713373  0.7365232

Because the eigenvalues are arranged in decreasing order, the sixth to the 38th are smaller than one, for which reason these will be neglected. The first eigenvalue is by far the largest, indicating that the persons are dependent to a large extent. Applying the previous bootstrap methods to estimate 95% confidence intervals for the eigenvalues, we obtain the following intervals.

data <- golub; p <- ncol(data); n <- nrow(data); nboot <- 1000
eigenvalues <- array(dim=c(nboot,p))
for (i in 1:nboot){
  dat.star <- data[sample(1:n,replace=TRUE),]
  eigenvalues[i,] <- eigen(cor(dat.star))$values
}
> for (j in 1:5) cat(j,as.numeric(quantile(eigenvalues[,j],
+                    c(0.025,0.975))),"\n" )
1 24.83581 26.00646
2 1.920871 2.258030
3 1.145990 1.386252
4 0.9917813 1.154291
5 0.6853702 0.7995948

The cat function allows for much control in printing.
Hence, the null hypothesis that the eigenvalue equals one is accepted for the fourth component and rejected for the first three and the fifth. Thus the fourth component does not represent more variance than an individual variable, for which reason it is neglected.

The percentage of variance explained by the first two components can be computed by sum(eigen(cor(golub))$values[1:2])/38*100, which yields 72.4052%. Thus the first two components represent more than 72% of the variance in the data. Hence, the data allow for a reduction in dimensionality from thirty-eight to two.

It can be checked that all correlations between the patients are positive. This implies that large expression values of gene i co-vary positively with those of gene j. The positivity of the correlations also implies that the weights of the first eigenvector have the same sign, so that these can be taken to be positive for all patients (Horn & Johnson, 1985). Unfortunately, this is not automatic in R, so that caution is in order with respect to the interpretation of the components. By using -eigen(cor(golub))$vec[,1:2] to print the weights to the screen it can be observed that those that correspond to the first component are positive. All weights of the first eigenvector are positive and have very similar size, as their range is between 0.13 and 0.17. Thus the first component is almost equal to the sum of the variables (the correlation equals 0.9999). The weights of the second component have a very interesting pattern. Namely, almost all of the first 27 weights are positive and the last 11 weights are negative. Thus the second component contrasts the ALL patients with the AML patients. By contrasting ALL patients with AML patients, the second largest amount of variance in the data is explained. Hence, the AML-ALL distinction is discovered by the second component, which is in line with the findings of Golub et al. (1999).

Obviously the genes with the largest expression values on the first component could be printed. We shall, however, concentrate on the second component because it appears to be more directly related to the research intentions of Golub et al. (1999). The first ten and the last eleven gene names with respect to the values on the second component can be printed by the following.

> pca <- princomp(golub, cor = TRUE, scores = TRUE)
> o <- order(pca$scores[,2])
> golub.gnames[o[1:10],2]
> golub.gnames[o[3041:3051],2]

Many of these genes are related to leukemia (Golub et al., 1999).

Example 3. Biplot. A useful manner to plot both genes (cases) and patients (variables) is the biplot, which is based on a two-dimensional approximation of the data very similar to principal components analysis. Here, we illustrate how it can be combined with principal components analysis.

> biplot(princomp(data,cor=TRUE),pc.biplot=TRUE,cex=0.5,expand=0.8)

The resulting plot is given by Figure 7.14. The left and bottom axes give the component scores, and the top and right axes give the patient scores, which are scaled to unit length by the specification cor. It can be seen that the patients are clearly divided in two groups corresponding to ALL and AML.
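The contrast on the second component can also be tabulated against the diagnosis; a small sketch, assuming golub and gol.fac as defined before (note that the overall sign of an eigenvector is arbitrary):

w2 <- eigen(cor(golub))$vectors[,2]   # weights of the second eigenvector
table(w2 > 0, gol.fac)                # sign pattern versus ALL/AML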
[Figure 7.12: Scatter plot of selected genes with row labels on the first two principal components.]
[Figure 7.13: Single linkage cluster diagram of selected gene expression values.]

Example 4. Critical for S-phase. Golub et al. (1999) mention that among the genes which are useful for tumor class prediction there are genes that encode for proteins critical for S-phase cell cycle progression, such as Cyclin D3, Op18, and MCM3. We first select genes which carry "CD", "Op", or "MCM" in their names and collect the corresponding row numbers.

data(golub, package = "multtest")
gol.fac <- factor(golub.cl,levels=0:1, labels= c("ALL","AML"))
o1 <- grep("CD",golub.gnames[,2])
o2 <- grep("Op",golub.gnames[,2])
o3 <- grep("MCM",golub.gnames[,2])
o <- c(o1,o2,o3)

This yields 110 genes, among which there are genes that have no experimental effect. In order to select those that do have an experimental effect, we use the two-sample t-test without assuming equal variances, as follows.

pt <- apply(golub, 1, function(x) t.test(x ~ gol.fac)$p.value)
oo <- o[pt[o]<0.01]

This yields 34 genes, whose row numbers are collected in the vector oo. In order to identify genes in directions of large variation we use the scores on the first two principal components.

Z <- as.matrix(scale(golub, center = TRUE, scale = TRUE))
K <- eigen(cor(Z))
P <- Z %*% -K$vec[,1:2]
leu <- data.frame(P[oo,], row.names= oo)
attach(leu)

The scores on the first two principal components of the selected genes are stored in the data frame leu. From the plotted component scores in Figure 7.12, it seems that there are several sub-clusters of genes. The genes that belong to these clusters can be identified by hierarchical cluster analysis.

cl <- hclust(dist(leu,method="euclidean"),method="single")
plot(cl)

From the tree (dendrogram) in Figure 7.13 various clusters of genes are apparent that also appear in Figure 7.12.³ The ordered genes can be obtained from the object cl as follows.

> a <- as.integer(rownames(leu)[cl$order])
> for (i in 1:length(a)) cat(a[i],golub.gnames[a[i],2],"\n")
1910 FCGR2B Fc fragment of IgG, low affinity IIb, receptor for (CD32)
2874 GB DEF = Fas (Apo-1, CD95)
...

The cluster with rows 504, 313, 1756, and 893 consists of antigens. The genes MCM3 Minichromosome maintenance deficient (S. cerevisiae) 3, with row numbers 2289 and 2430, appear adjacent to each other. This illustrates that genes with similar functions may indeed be close with respect to their gene expression values.

³Unfortunately, some row numbers of genes are less readable because the points are very close.

7.5 Overview and concluding remarks

Single linkage cluster analysis can be applied to explore for groups in a set of gene expressions. When groups are present, a k-means cluster analysis can be applied in combination with the bootstrap to estimate confidence intervals for the cluster means.

The correlation coefficient measures the degree of dependency between pairs of gene expression values.
It can also be used to find gene expressions which are highly dependent on a phenotypical variable. It is reassuring to find in applications that the confidence interval for a correlation coefficient is small.

Principal components analysis is very useful for finding directions in the data along which the gene expression values vary maximally; see Jolliffe (2002) for a complete treatment of principal components analysis. When these directions can be represented well by the first two components, the biplot can help enormously in visualizing genes and patients simultaneously. When genes are selected beforehand, principal components analysis can be helpful in identifying clusters of genes in a lower dimensional space.

7.6 Exercises

1. Cluster analysis on the "Zyxin" expression values of the Golub et al. (1999) data.

(a) Produce a scatter plot of the gene expression values showing different symbols for the two groups.
(b) Use single linkage cluster analysis to see whether the tree indicates two different groups.
(c) Use k-means cluster analysis. Are the two clusters in accordance with the diagnosis of the patient groups?
(d) Perform a bootstrap on the cluster means. You will have to modify the code here and there. Do the confidence intervals for the cluster means overlap?

2. Close to CCND3 Cyclin D3. Recall that we did various analyses on the expression data of the CCND3 Cyclin D3 gene of the Golub (1999) data.

(a) Use genefinder to find the ten closest genes to the expression values of CCND3 Cyclin D3. Give their probe names as well as their biological names.
(b) Produce a combined boxplot separately for the ALL and the AML expression values. Compare it with that on the basis of CCND3 Cyclin D3 and comment on the similarities.
(c) Compare the smallest distances with those among the Cyclin genes computed above. What is your conclusion?

3. MCM3. In the example on MCM3 a plot shows that there is an outlier.

(a) Plot the data and invent a manner to find the row number of the outlier.
(b) Remove the outlier and re-test the correlation coefficient. Compare the results to those above.
(c) Perform the bootstrap to construct a confidence interval.

4. Cluster analysis on part of the Golub data.

(a) Select the oncogenes from the Golub data and plot the tree from a single linkage cluster analysis.
(b) Do you observe meaningful clusters?
(c) Select the antigens and answer the same questions.
(d) Select the receptor genes and answer the same questions.

5. Principal Components Analysis on part of the ALL data.

(a) Construct an expression set with the patients with B-cells in stage B1, B2, and B3. Compute the corresponding ANOVA p-values for all gene expressions. Construct the expression set with the p-values smaller than 0.001. Report the dimensionality of the data matrix with gene expressions.
(b) Are the correlations between the patients positive?
(c) Compute the eigenvalues of the correlation matrix. Report the largest five. Are the first three larger than one?
(d) Program a bootstrap of the largest five eigenvalues. Report the bootstrap 95% confidence intervals and draw relevant conclusions.
(e) Plot the genes in a plot of the first two principal components.
6. Some correlation matrices.

$$ \begin{bmatrix} 1 & -0.8 \\ -0.8 & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & 0.8 & 0.8 \\ 0.8 & 1 & 0.8 \\ 0.8 & 0.8 & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & -0.5 & -0.5 \\ -0.5 & 1 & -0.5 \\ -0.5 & -0.5 & 1 \end{bmatrix}. $$

(a) Verify that the eigenvalues of these matrices are, respectively, 1.8 and 0.2; 2.6, 0.2, and 0.2; and 1.5, 1.5, and 0 (computed numerically as -7.644529e-17).
(b) How much variance is represented by the first component corresponding to the second matrix?
(c) Verify that the first eigenvector of the second correlation matrix has identical signs.

[Figure 7.14: Biplot of selected genes from the golub data.]

Chapter 8

Classification Methods

In medical settings patients are diagnosed into classes corresponding to types of diseases. In bioinformatics the question often arises whether the diagnosis of a patient can be predicted from gene expression values. Related is the question which genes play an important role in the prediction of class membership. A similar question is the prediction of microRNAs from values of folding energy. More generally, for objects like proteins, mRNAs, or microRNAs it may be of importance to classify these on the basis of certain measurements.

Many classification methods have been developed for various scientific purposes. In bioinformatics, methods such as recursive partitioning, the support vector machine, and the neural network are frequently applied to solve classification problems.

In this chapter you learn what recursive partitioning is and how to use it. To evaluate the quality of prediction, the fundamental concepts of sensitivity and specificity are frequently used. These can be summarized in a single number by the area under the receiver operating characteristic (ROC) curve. This will be explained and illustrated. Two other methods to predict disease class from gene expression data are the support vector machine and the neural network. It will briefly be explained what these methods are about and how they can be applied. A validation set will be used to evaluate the predictive accuracy.

8.1 Classification of microRNA

The subject of making a correct medical diagnosis is highly similar to that of correctly classifying microRNA.

Example 1. Classification of microRNA. MicroRNAs are small RNA molecules with important functions in cell growth and disease development.
In order to identify microRNAs from arbitrary sequences, their characterizing properties are used to distinguish non-microRNA from microRNA molecules. One of these properties is that microRNAs have the capacity to fold into a certain hairpin type of structure. Such a structure typically exhibits a small minimum folding energy (Zuker, 2003; Zuker & Stiegler, 1981). This property can be used as a test to discriminate microRNAs from non-microRNAs (Bonnet et al., 2004), as follows. Given a set of 3424 different microRNAs, the minimum folding energy was computed for each of these. Next, for each microRNA the order of the nucleotides was shuffled with replacement 1000 times. This yielded per microRNA 1000 differently shuffled sequences of nucleotides for which the minimum folding energy was computed.¹ Per microRNA the 1001 energy values were arranged in increasing order, similar to the empirical distributions in the previous chapter. Then the number of minimum folding energies below that of the original microRNA is counted and divided by 1001 to give the p-value. If the minimum folding energy of the original microRNA is the smallest, then the empirical p-value is zero. This procedure yielded a total of 3424 p-values. The number of sequences with p-values below the threshold value 0.01 is given in Table 8.1. The same procedure was conducted for non-microRNA molecules, which were taken as sequences with similar length and nucleotide percentages.

¹I am obliged to Sven Warris for computing the minimum energy values.

Table 8.1: Frequencies of empirical p-values lower than or equal to 0.01.

                 test positive   test negative    total
                 p ≤ 0.01        p > 0.01
microRNA             2973             451         3424
non-microRNA           33            3391         3424
total                3006            3842         6848

From the frequency Table 8.1, the sensitivity, the specificity, and the predictive power can be computed in order to evaluate the quality of the test. The sensitivity is the probability that the test is positive given that the sequence is a microRNA (true positive). Thus

sensitivity = P(true positive) = P(test positive | microRNA) = 2973/3424 = 0.8682.

The specificity is the probability that the test is negative given that the sequence is not a microRNA (true negative). Thus

specificity = P(true negative) = P(test negative | no microRNA) = 3391/3424 = 0.9903.

For practical applications of a test the predictive power is of crucial importance. In particular, the predictive value positive is the probability that the sequence is a microRNA given that the test is positive. That is,

predictive value positive = PV+ = P(microRNA | test positive) = 2973/3006 = 0.9890.

Thus when the test is positive we are 98.90% certain that the sequence is indeed a microRNA. The predictive value negative is the probability that the sequence is not a microRNA given that the test is negative:

predictive value negative = PV- = P(no microRNA | test negative) = 3391/3842 = 0.8826.

Thus when the test is negative we are 88.26% certain that the sequence is not a microRNA. From the estimated conditional probabilities it can be concluded that the test performs quite well in discriminating microRNAs from non-microRNAs.
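These four quantities are simple ratios of the counts in Table 8.1 and can be computed in a few lines; a minimal sketch:

tp <- 2973; fn <- 451; fp <- 33; tn <- 3391   # counts from Table 8.1
sensitivity <- tp/(tp + fn)
specificity <- tn/(fp + tn)
pv.pos <- tp/(tp + fp)
pv.neg <- tn/(fn + tn)
round(c(sensitivity, specificity, pv.pos, pv.neg), 3)   # 0.868 0.990 0.989 0.883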
8.2 ROC types of curves

In Chapter 2 we have observed with respect to the Golub et al. (1999) data that the expression values of gene CCND3 Cyclin D3 tend to be greater for ALL than for AML patients. We may therefore use these as a test for predicting ALL using a certain cutoff value. In particular, for gene expression values larger than a cutoff we declare the test "positive" in the sense of indicating ALL. By doing so the corresponding true and false positives can be computed for various cutoff values. To explain the terminology: the receiver operating characteristic (ROC) is a curve in which the false positive rates are depicted horizontally and the corresponding true positive rates vertically. The larger the area under the ROC curve, the better the test, because then low false positive rates go together with large true positive rates.² These ideas are illustrated by several examples.

²More detailed information can be obtained from a Wikipedia search using "ROC curve".

Example 1. For the sake of illustration we consider the prediction of ALL from the expression values of gene CCND3 Cyclin D3 from Golub et al. (1999) in row 1042 of the matrix golub. Now consider the cutoff point 1.27. For such a cutoff point we can produce a table with TRUE/FALSE frequencies of predicting ALL/AML.

> data(golub, package = "multtest")
> gol.fac <- factor(golub.cl,levels=0:1, labels= c("ALL","AML"))
> table(gol.fac, golub[1042,] > 1.27)

gol.fac FALSE TRUE
    ALL     2   25
    AML    10    1

There are 25 ALL patients with expression values greater than 1.27, so that the true positive rate is 25/27 = 0.93. For this cutoff value there is one false positive because one AML patient has a score larger than 1.27. Hence, the false positive rate is 1/11 = 0.09.

Example 2. The expression values of gene CCND3 Cyclin D3 from the Golub et al. (1999) data are sorted in decreasing order, see Table 8.2. It will be convenient to choose Index 2 for ALL and Index 1 for AML. Then the procedure to draw the ROC curve starts with cutoff point infinity. Obviously, there are no expression values equal to infinity, so no patient is tested positive. Next, the cutoff point 2.77 is taken and values greater than or equal to 2.77 are tested as positive (member of Index 2, ALL). This yields one true positive, implying a true positive rate of 1/27, see the second row of Table 8.2. For this cutoff value there are no false positives, so the false positive rate is zero.

Now consider cutoff point 1.52. There are 22 ALL patients with expression values greater than or equal to 1.52, so that the true positive rate is 22/27 = 0.81. For this cutoff value there are no false positives because all AML patients have scores smaller than 1.52. Hence, the false positive rate is 0 and the true positive rate is 0.81. To indicate this, a vertical line is drawn in the ROC curve from point (0, 0) to point (0, 0.81) in Figure 8.1. Now consider the next cutoff point, 1.45. There are 22 ALL patients with expression values greater than or equal to 1.45, so that the true positive rate is again 22/27 = 0.81. However, there is one AML patient with expression value 1.45, which therefore receives a positive test. Hence, the number of false positives increases from zero to one, which implies a false positive rate of 1/11 = 0.09. In the ROC curve this is indicated by the point (0.09, 0.81) and the horizontal line from (0, 0.81) to (0.09, 0.81), see Figure 8.1.

This process goes on (see Table 8.2) until the smallest data point, -0.74, is taken as cutoff point.
For this point all ALL and all AML patients are tested positive, so that the false positive rate is 11/11 and the true positive rate is 27/27. This is indicated by the end point (1, 1) in the plot.

[Figure 8.1: ROC plot for expression values of CCND3 Cyclin D3.]
[Figure 8.2: ROC plot for expression values of gene Gdf5.]

It is obviously helpful to use a computer for producing an ROC curve such as that in Figure 8.1. To do so we change the index 0 of golub.cl for ALL into 2 and use functions from the ROCR package.

library(ROCR)
golub.clchanged <- -golub.cl + 2
pred <- prediction(golub[1042,], golub.clchanged)
perf <- performance(pred, "tpr", "fpr" )
plot(perf)

It seems clear that the expression values are better at testing for ALL when the curve is very steep at the beginning and attains its maximum value soon. In such a case the true positive rate is large for a small false positive rate. A manner to express the predictive accuracy of a test in a single number is the area under the curve. Using the function performance(pred,"auc") we obtain that the area under the curve is 0.96, which is large. Hence, the expression values of CCND3 Cyclin D3 are suitable for discrimination between ALL and AML. The ROC curve for the expression values of gene Gdf5 is given by Figure 8.2. It can be observed that the true positive rate is much lower as one moves along the horizontal axis from left to right. This corresponds to an area under the curve of 0.35, which is small. It can be concluded that genes may differ greatly with respect to their power to discriminate between the two types of leukemia.

In practical applications one is often interested in a single optimal cutoff value and in combining several predictors in a decision scheme.
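The coordinates of an ROC curve can also be computed from first principles, which makes the construction in Example 2 explicit; a short sketch, assuming golub and gol.fac as before:

cutoffs <- sort(unique(golub[1042,]), decreasing = TRUE)
tpr <- sapply(cutoffs, function(t) mean(golub[1042, gol.fac=="ALL"] >= t))
fpr <- sapply(cutoffs, function(t) mean(golub[1042, gol.fac=="AML"] >= t))
plot(fpr, tpr, type = "s", xlab = "False positive rate",
     ylab = "True positive rate")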
8.3 Classification trees

The purpose of classification is to allocate organisms into classes on the basis of measurements on attributes of the organisms. For instance, in case of the Golub et al. (1999) data the organisms are 38 patients which have measurements on 3051 genes. The classes consist of the diagnosis of the patients into the ALL class (27 patients) and the AML class (11 patients). A tree model resembles that of a linear model, where the criterion is the factor indicating class membership and the predictor variables are the gene expression values. In case of, for instance, the Golub et al. (1999) data, the gene expression values {x1, ..., x38} can serve as predictors to form a decision tree. For instance, if xj < t, then patient j is AML, and otherwise, if xj ≥ t, then patient j is ALL. Obviously, the threshold value t on which the decision is based should be optimal given the predictor. Such a threshold can be estimated by a regression tree (Breiman et al., 1984; Chambers & Hastie, 1992; Venables & Ripley, 2000), which is implemented in the rpart package (Therneau & Atkinson, 1997).

A training set is used to estimate the threshold values that construct the tree. When many predictor variables are involved, 3051 for instance, then we have a tremendous gene (variable) selection problem. The rpart package automatically selects genes which are important for classification and neglects others. A further problem is that of overfitting, where additional nodes are added to a tree to increase prediction accuracy. When such nodes are specific to the training sample set, they cannot be generalized to other samples, so that they are of limited scientific value. Prevention of such overfitting is called pruning and is done automatically by the rpart function. Many basic ideas are illustrated by an elementary example.

[Figure 8.3: Boxplot of expression values of gene A for each leukemia class.]
[Figure 8.4: Classification tree for gene A for three classes of leukemia.]

Example 1. Optimal gene expressions. Suppose microarray expression data are available for patients suffering from three types of leukemia, abbreviated ALL1, ALL2, and AML. Gene A has expression values from the populations (patient groups) N(0, 0.5²) for ALL1, N(2, 0.5²) for ALL2, and N(4, 0.5²) for AML. The script below generates thirty expression values for gene A, the factor with the three disease classes, and the estimates of the classification tree.

set.seed(123); n <- 10; sigma <- 0.5
fac <- factor(c(rep(1,n),rep(2,n),rep(3,n)))
levels(fac) <- c("ALL1","ALL2","AML")
geneA <- c(rnorm(10,0,sigma),rnorm(10,2,sigma),rnorm(10,4,sigma))
dat <- data.frame(fac,geneA)
library(rpart)
rp <- rpart(fac ~ geneA, method="class",data=dat)
plot(rp, branch=0,margin=0.1); text(rp, digits=3, use.n=TRUE)

From the boxplot in Figure 8.3 it can be observed that there is no overlap of gene expressions between classes. This makes gene A an ideal predictor for separating the patients into classes. By the construction of the gene expression values x1, ..., x30 we expect the following partition: if xi < 1, then ALL1; if xi is in the interval [1, 3], then ALL2; and if xi > 3, then AML. From the estimated tree in Figure 8.4 it can be observed that the estimated splits are close to our expectations: if xi < 0.9371, then ALL1; if xi is in [0.9371, 3.025], then ALL2; and if xi > 3.025, then AML. The tree consists of three leaves (nodes) and two splits. The prediction of patients into the three classes perfectly matches the true disease status.

Obviously, such an ideal gene need not exist because the expression values may overlap between the disease classes. In such a case more genes may be used to build the classification tree.

Example 2. Gene selection. Another situation is where gene A discriminates between ALL and AML, gene B discriminates between ALL1 patients and ALL2 or AML patients, and gene C does not discriminate at all. To simulate this setting we generate expression values for gene A from N(0, 0.5²) for both ALL1 and ALL2, and from N(2, 0.5²) for AML patients. Next, we generate expression values for gene B from N(0, 0.5²) for ALL1 and from N(2, 0.5²) for ALL2 and AML. Finally, we generate values for gene C from N(1, 0.5²) for ALL1, ALL2, and AML. For this and for estimating the tree, we use the following script.
set.seed(123)
n <- 10; sigma <- 0.5
fac <- factor(c(rep(1,n),rep(2,n),rep(3,n)))
levels(fac) <- c("ALL1","ALL2","AML")
geneA <- c(rnorm(20,0,sigma),rnorm(10,2,sigma))
geneB <- c(rnorm(10,0,sigma),rnorm(20,2,sigma))
geneC <- c(rnorm(30,1,sigma))
dat <- data.frame(fac,geneA,geneB,geneC)
library(rpart)
rp <- rpart(fac ~ geneA + geneB + geneC, method="class",data=dat)

Note the addition in the model notation for the rpart function.³ It is convenient to collect the data in the form of a data frame.⁴

³See Chapter 11 of the manual "An Introduction to R" for more on model notation.
⁴See Chapter 6 of the manual "An Introduction to R" for more on data frames.

From the boxplot in Figure 8.5 it can be seen that gene A discriminates well between ALL and AML, but not between ALL1 and ALL2. The expression values for gene B discriminate well between ALL1 and ALL2, whereas those of gene C do not discriminate at all. The latter can also be seen from the estimated tree in Figure 8.6, where gene C plays no role at all. This illustrates that rpart automatically selects the genes (variables) which play a role in the classification tree. Expression values on gene A larger than 1.025 are predicted as AML and smaller ones as ALL. Expression values on gene B smaller than 0.9074 are predicted as ALL1 and larger ones as ALL2. Hence, gene B separates well within the ALL class.

[Figure 8.5: Boxplot of expression values of gene A for each leukemia class.]
[Figure 8.6: Classification tree of expression values from genes A, B, and C for the classification of ALL1, ALL2, and AML patients.]

Example 3. Prediction by CCND3 Cyclin D3 gene expression values. From various visualizations and statistical tests in the previous chapters, it can be conjectured that the CCND3 Cyclin D3 gene expression values form a suitable predictor for discriminating between ALL and AML patients. Note, however, from Figures 2.2 and 8.7 that there is some overlap between the expression values from the ALL and the AML patients, so that a perfect classification is not possible. By the function rpart the recursive partitioning can be computed as follows.

> library(rpart); library(multtest); data(golub)
> gol.fac <- factor(golub.cl,levels=0:1, labels= c("ALL","AML"))
> gol.rp <- rpart(gol.fac ~ golub[1042,] , method="class")
> predictedclass <- predict(gol.rp, type="class")
> table(predictedclass, gol.fac)
              gol.fac
predictedclass ALL AML
           ALL  25   1
           AML   2  10

Note that (25 + 10)/38 · 100% = 92.1% of the ALL/AML patients are correctly classified by gene CCND3 Cyclin D3. By the function predict(gol.rp, type="class") the predictions from the regression tree of the patients into the two classes can be obtained. The factor gol.fac contains the levels ALL and AML corresponding to the diagnosis to be predicted. The predictor variable consists of the expression values of gene CCND3 Cyclin D3. The output of recursive partitioning is assigned to an object called gol.rp, a list from which further information can be extracted by suitable functions. A summary can be obtained as follows.

> summary(gol.rp)
n= 38

         CP nsplit rel error    xerror      xstd
1 0.7272727      0 1.0000000 1.0000000 0.2541521
2 0.0100000      1 0.2727273 0.5454545 0.2043460

Node number 1: 38 observations,  complexity param=0.7272727
  predicted class=ALL  expected loss=0.2894737
    class counts:    27    11
   probabilities: 0.711 0.289
  left son=2 (26 obs) right son=3 (12 obs)
  Primary splits:
      golub[1042, ] < 1.198515 to the right, improve=10.37517, (0 missing)

Node number 2: 26 observations
  predicted class=ALL  expected loss=0.03846154
    class counts:    25     1
   probabilities: 0.962 0.038

Node number 3: 12 observations
  predicted class=AML  expected loss=0.1666667
    class counts:     2    10
   probabilities: 0.167 0.833

The expected loss in prediction accuracy of Node number 2 is 1/26 and that of Node number 3 is 2/12. These equal the probabilities obtained from the class counts, e.g.

> 1/26
[1] 0.03846154

The primary split gives the estimated threshold value. To predict the class of the individual patients one may use the function predict, as follows.

> predict(gol.rp, type="class")
  1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19  20
ALL ALL ALL ALL ALL ALL ALL ALL ALL ALL ALL ALL ALL ALL ALL ALL AML ALL ALL ALL
 21  22  23  24  25  26  27  28  29  30  31  32  33  34  35  36  37  38
AML ALL ALL ALL ALL ALL ALL AML ALL AML AML AML AML AML AML AML AML AML
Levels: ALL AML

Hence, Patients 17 and 21 are erroneously predicted as AML, and Patient 29 is erroneously predicted in the ALL class. A more precise output is obtained by asking for the probability of class membership.

> predict(gol.rp, type="prob")
        ALL        AML
1 0.9615385 0.03846154
2 0.9615385 0.03846154
etc.

Based on this, the probability that Patient 21 has ALL is 0.167 and that it has AML is 0.833.

Figure 8.7: Boxplot of expression values from gene CCND3 Cyclin D3 for ALL and AML patients. Figure 8.8: Classification tree of expression values from gene CCND3 Cyclin D3 for classification of ALL and AML patients; the split is golub[1042, ] >= 1.199, with leaves 25/1 (ALL) and 2/10 (AML).

## Example 4. Gene selection of the Golub (1999) data. By recursive partitioning it is possible to select among the genes of Golub et al. (1999) those which give the best partitioning. For the latter to work we have to specify the gene expressions as the variables (columns). For this we use the transposition operator t. To facilitate reading the output we add gene1 to gene3051 as column names.

library(rpart); data(golub); library(multtest)
row.names(golub) <- paste("gene", 1:3051, sep = "")
goldata <- data.frame(t(golub[1:3051,]))
gol.fac <- factor(golub.cl, levels=0:1, labels= c("ALL","AML"))
gol.rp <- rpart(gol.fac ~ ., data=goldata, method="class", cp=0.001)
plot(gol.rp, branch=0, margin=0.1); text(gol.rp, digits=3, use.n=TRUE)
golub.gnames[896,]

Inspection of the plot yields gene "FAH Fumarylacetoacetate" as the predictor by which the two classes of patients can be predicted perfectly.
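Whether the selected gene indeed separates the two classes without error can be verified by cross-tabulating the predictions against the diagnosis; a small check reusing the objects just created:

table(predict(gol.rp, type = "class"), gol.fac)   # off-diagonal counts should be zero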
In order to further illustrate possibilities of classification methods we use the ALL data collected by Chiaretti, et al. (2004), see also Chapter 6.

## Example 5. Application to the Chiaretti (2004) data. With respect to the ALL data we want to predict from the gene expressions the diagnosis of B-cell states B1, B2, and B3. Since the complete set of 12625 gene expressions is too large, we select the genes with different means over the patient groups. It is obvious that only these genes can contribute to the prediction of the disease states. In particular, we select the genes with an ANOVA p-value smaller than 0.000001.

library("hgu95av2.db"); library(ALL); data(ALL)
# The extracted source omits the selection step; the following is a hedged
# reconstruction that follows the description above (patients in stages
# B1-B3, genes with ANOVA p-value < 0.000001, rows renamed by gene symbol),
# not necessarily the author's original code.
ALLBTnames <- ALL[, ALL$BT %in% c("B1","B2","B3")]
pano <- apply(exprs(ALLBTnames), 1,
              function(x) anova(lm(x ~ factor(ALLBTnames$BT)))$Pr[1])
names <- featureNames(ALL)[pano < 0.000001]
symb <- mget(names, env = hgu95av2SYMBOL)
ALLBTnames <- ALLBTnames[names, ]
probedat <- as.matrix(exprs(ALLBTnames))
row.names(probedat) <- unlist(symb)
diagnosed <- factor(ALLBTnames$BT)

> tr <- rpart(factor(ALLBTnames$BT) ~ ., data = data.frame(t(probedat)))
> plot(tr, branch=0, margin=0.1); text(tr, digits=3, use.n=TRUE)
> rpartpred <- predict(tr, type="class")
> table(rpartpred, diagnosed)
         diagnosed
rpartpred B1 B2 B3
       B1 17  2  0
       B2  1 33  5
       B3  1  1 18

The rows of the table give the frequencies of the predicted B-cell stages and the columns the diagnosed B-cell stages from the factor. Such a matrix with frequencies of predicted against true patient status is often called a "confusion table" (or confusion matrix). The resulting tree in Figure 8.9 should be read as follows: if the expression of MME is strictly smaller than the cutoff value 8.395, then the patient is predicted to be in state (class) B1; otherwise, if the expression of LSM6 is smaller than 4.192, then the predicted state is B2, and if it is larger, the predicted state is B3. The misclassification rate is 10/78 = 0.1282, which is low, but not zero. It is informative to compare the estimated class probabilities with the diagnosed class per patient. An overview can be obtained as follows.

predicted.class <- predict(tr, type="class")
predicted.probabilities <- predict(tr, type="prob")
out <- data.frame(predicted.probabilities, predicted.class,
                  diagnosis=factor(ALLBTnames$BT))
> print(out, digits=2)
         B1   B2   B3 predicted.class diagnosis
01005 0.026 0.85 0.13              B2        B2
01010 0.026 0.85 0.13              B2        B2
04006 0.895 0.11 0.00              B1        B1
04007 0.026 0.85 0.13              B2        B2
04008 0.895 0.11 0.00              B1        B1
04010 0.050 0.05 0.90              B3        B1
04016 0.895 0.11 0.00              B1        B1
06002 0.026 0.85 0.13              B2        B2
08001 0.026 0.85 0.13              B2        B2
08011 0.026 0.85 0.13              B2        B3
08012 0.026 0.85 0.13              B2        B3
08018 0.050 0.05 0.90              B3        B3
08024 0.895 0.11 0.00              B1        B2
09008 0.026 0.85 0.13              B2        B3
...

For instance, the sixth patient (04010) is predicted to be in class B3 with probability 0.90 and in class B1 with probability 0.05, while B1 is the diagnosed disease state.

Figure 8.9: rpart on ALL B-cell 123 data; splits MME < 8.395 and LSM6 < 4.192, with leaves 17/2/0 (B1), 1/33/5 (B2), and 1/1/18 (B3). Figure 8.10: Variable importance plot on ALL B-cell 123 data (mean decrease in accuracy and in Gini index for the top probe sets, such as 1389_at, 38032_at, 36711_at, and 40440_at).

Note the reduction in variables from twenty-nine to two in the actual construction of the tree. In a construction like this the gene expressions (variables) are linearly dependent in the sense that once the first gene is selected for the first split, highly similar ones are not selected anymore. Hence, it can be instructive to leave out the selected variables from the data and to redo the analysis.
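The misclassification rate quoted above can be computed directly from the confusion table; a one-line check using the objects of this example:

tab <- table(rpartpred, diagnosed)
1 - sum(diag(tab))/sum(tab)   # equals 10/78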
A generally applied manner to evaluate an estimated model is by its predictive accuracy with respect to a future data set. When such a future data set is not available, it is common practice to split the available data into two parts: a training set and a validation set. The model is then estimated from the training set and used to predict the class of the patients in the validation set. Next, a confusion matrix is constructed with the frequencies of true classes against predicted classes, from which the misclassification rate can be computed to evaluate the predictive accuracy. This can very well be seen as a method to detect overfitting, where the model estimates are so data specific that generalization to future data sets is in danger.

## Example 6. Training and validation. In the setting of the B-cell ALL data with states B1, B2, and B3, the data are split by randomly dividing the patients into two halves. The 78 patients in state B1, B2, or B3 can be split into two halves as follows.

i <- sample(1:78, 39, replace = FALSE)
noti <- setdiff(1:78, i)
df <- data.frame(Y = factor(ALLBTnames$BT), X = t(probedat))
rpart.est <- rpart(Y ~ ., data = df, subset=i)
rpart.pred.t <- predict(rpart.est, df[i,], type="class")
> table(rpart.pred.t, factor(ALLBTnames$BT[i]))
rpart.pred.t B1 B2 B3
          B1 11  1  0
          B2  0 12  0
          B3  0  1 14
> rpart.pred.v <- predict(rpart.est, df[noti,], type="class")
> table(rpart.pred.v, factor(ALLBTnames$BT[noti]))
rpart.pred.v B1 B2 B3
          B1  6  1  0
          B2  1 19  3
          B3  1  2  6

The misclassification rate in the training set is 2/39 = 0.05 and that in the validation set is 8/39 = 0.21 (the off-diagonal counts of the second table sum to eight). Note that the differences mainly occur between states B2 and B3. Generally the prediction of disease state from the training set is better because the model is estimated from these data. The same split of the data into training and validation set will be used for other methods as well.
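The misclassification rate of a single random split is itself quite variable. A hedged sketch of how the split could be repeated a number of times and the validation errors averaged, reusing df from the script above:

set.seed(1)   # illustrative seed, not from the source
rates <- replicate(25, {
  i <- sample(1:78, 39)
  fit <- rpart(Y ~ ., data = df, subset = i)
  mean(predict(fit, df[-i, ], type = "class") != df$Y[-i])
})
mean(rates)   # average validation misclassification rate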
## 8.4 Support Vector Machine

A support vector machine finds separating lines (hyperplanes) between groups of points. This applies to classification problems where the classes of patients are to be predicted from gene expression values. If such separating lines exist in the data, then a linear support vector machine will find them, because the optimization method behind it is based on quadratic programming, with iterative algorithms that find the globally optimal solution with certainty. Support vector machines do not automatically select variables and are designed for continuous predictor variables. Since the mathematical details are beyond the current scope, we confine ourselves to illustrating applications to gene expression data.

## Example 1. Application to the Chiaretti (2004) data. The parameters for the support vector machine can be determined by the function svm from the e1071 package, as follows.

library(e1071)
df <- data.frame(Y = factor(ALLBTnames$BT), X = t(probedat))
Y <- factor(ALLBTnames$BT); X <- t(probedat)
svmest <- svm(X, Y, data=df, type = "C-classification", kernel = "linear")
svmpred <- predict(svmest, X, probability=TRUE)
> table(svmpred, factor(ALLBTnames$BT))
svmpred B1 B2 B3
     B1 19  0  0
     B2  0 36  1
     B3  0  0 22

The confusion matrix shows that the misclassification rate of the three classes of B-cell ALL, 1/78 = 0.0128, is very small, so that the prediction is almost perfect. Note, however, from summary(svmest) that the number of support vectors per class equals 20, 9, and 11 for the classes B1, B2, and B3, respectively. Each of these has a value on every input variable (gene), as can be seen from dim(svmest$SV), and there is a corresponding matrix of coefficients, dim(svmest$coefs). Hence, the excellent prediction properties are obtained by a very large number of estimated parameters.

## Example 2. Training and validation. A generally applied manner to evaluate the predictive quality of an estimated model is by splitting the data into a training and a validation set. The model is estimated on the training set and then used to predict the class of the patients in the validation set. We shall use the same split as in Example 6 of the previous section.

> Yt <- factor(ALLBTnames$BT)[i]; Yv <- factor(ALLBTnames$BT)[noti]
> X <- t(probedat); Xt <- X[i,]; Xv <- X[noti,]
> svmest <- svm(Xt, Yt, type = "C-classification", kernel = "linear")
> svmpredt <- predict(svmest, Xt, probability=TRUE)
> table(svmpredt, Yt)
        Yt
svmpredt B1 B2 B3
      B1 11  0  0
      B2  0 14  0
      B3  0  0 14
> svmpredv <- predict(svmest, Xv, probability=TRUE)
> table(svmpredv, Yv)
        Yv
svmpredv B1 B2 B3
      B1  5  0  0
      B2  1 19  4
      B3  2  3  5

The predictions of the disease states of the patients from the training set perfectly match the diagnosed states. The predictions of the classes of the patients from the validation set, however, have misclassification rate 10/39 = 0.26 and are therefore less accurate. Hence, the parameter estimates from the training set are sample specific and do not generalize with the same accuracy to the validation set.

## 8.5 Neural Networks

Neural networks are nonlinear models that build nonlinear separating surfaces around classes of objects, given a set of predictor variables (Ripley, 1996). We confine ourselves to illustrating the method by two examples.

## Example 1. Application to the Chiaretti (2004) data. The models can be estimated by the function nnet from the package of the same name. To avoid having too many variables we randomly select a subset of 20 genes.

> Y <- factor(ALLBTnames$BT); X <- t(probedat)
> library(nnet)
> df <- data.frame(Y = Y, X = X[, sample(ncol(X), 20)])
> nnest <- nnet(Y ~ ., data = df, size = 5, maxit = 500, decay = 0.01,
+   MaxNWts = 5000)
> pred <- predict(nnest, type = "class")
> table(pred, Y)   # prints the confusion matrix
    Y
pred B1 B2 B3
  B1 19  0  0
  B2  0 36  0
  B3  0  0 23

The confusion matrix shows that zero out of 78 patients are misclassified.

## Example 2. Training and validation. The results from the same training and validation split as before are as follows.

> nnest.t <- nnet(Y ~ ., data = df, subset=i, size = 5, decay = 0.01,
+   maxit=500)
> prednnt <- predict(nnest.t, df[i,], type = "class")
> table(prednnt, Ytrain=Y[i])
       Ytrain
prednnt B1 B2 B3
     B1 11  0  0
     B2  0 14  0
     B3  0  0 14
> prednnv <- predict(nnest.t, df[noti,], type = "class")
> table(prednnv, Yval= Y[noti])
       Yval
prednnv B1 B2 B3
     B1  4  1  0
     B2  4 17  4
     B3  0  4  5

The predictions on the training set have misclassification rate zero and those on the validation set 13/39 = 0.33.
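The large number of parameters behind this performance can be made explicit; a small sketch with the fitted object above (nnet stores the fitted weights in the component wts):

# With 20 input genes, 5 hidden units and 3 output classes one expects
# 5*(20+1) + 3*(5+1) = 123 weights.
length(nnest$wts)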
## 8.6 Overview and concluding remarks

Central themes in prediction methods are the face validity (clarity) of the model, the size of the model, and the predictive accuracy on a validation set. For many researchers it is of crucial importance to have a clear idea of what a method is essentially doing. Some models and their estimation procedures are mathematically intricate and are regarded by many researchers as black boxes. From a pragmatic point of view this need not be devastating if the predictive accuracy is excellent. However, support vector machines and neural networks typically use a large number of parameters, with which they predict very well on the training set but less well on validation sets. It is, furthermore, questionable whether a zero misclassification rate is rational, since patients may be misclassified by the diagnosis itself or may be very close to transferring from one state to the other. Recursive partitioning to estimate a classification tree performs very well on variable selection and pruning, in order to discover as few variables (gene expressions) as possible for maximum predictive accuracy. In addition, classification trees have great clarity; see e.g. CART (Breiman et al., 1984) for further types of recursive trees. Note that the methods above attain different misclassification rates with respect to the whole sample, but comparable rates on the validation sets. It should, however, be clear that when there are nonlinear relationships between predictor variables and classes, nonlinear models should outperform linear ones.[5]

[5] Some people may want to use the ade4TkGUI().

## 8.7 Exercises

1. Classification tree of the Golub data. Use recursive partitioning as implemented in rpart.

(a) Find a manner to identify an optimal gene with respect to the Golub data for predicting the ALL and AML patients.
(b) Explain what the code does.
(c) Use rpart to construct the classification tree with the gene(s) that you found. Does it give perfect predictions?
(d) Find the row number of gene Gdf5, which is supposed not to have any relationship with leukemia. Estimate a classification tree and report the probability of misclassification. Give explanations of the results.

2. Sensitivity versus specificity.

(a) Produce a sensitivity versus specificity plot for the gene expression values of CCND3 Cyclin D3.
(b) In what sense does it resemble Figure 8.2?
(c) Compute the area under the curve for the sensitivity versus specificity curve.

3. Comparing classification methods. To obtain an idea of the misclassification rate when there is no relation between the predictors and the factor indicating groups, we perform a small simulation study.

(a) Construct a factor with 100 values one and two and a matrix with predictor variables of 500 by 4 with values from the normal distribution. Use the first four letters of the alphabet for the column names.
(b) Use rpart to construct a recursive tree and report the misclassification rate. Comment on the results.
(c) Do the same for support vector machines.
(d) Do the same for neural networks.
(e) Think through your results and comment on these.

4. Prediction of achieved remission. For the ALL data from the ALL library, the patients are checked for achieving remission. The variable ALL$CR has values CR (became healthy) and REF (did not respond to therapy; remained ill).

(a) Construct an expression set containing the patients with values on the phenotypical variable remission and the gene expressions with a significant p-value on the t-test with the patient groups CR or REF.
(b) Use recursive partitioning to predict the remission. Report the misclassification rate and the names of the genes that play a role in the tree.
5. Classification tree for E. coli. The ecoli data can be downloaded by the following (hint: copy two separated lines into one before running it):

ecoli <- read.table(
  "http://archive.ics.uci.edu/ml/machine-learning-databases/ecoli/ecoli.data")
  # reconstructed download step; the extracted source lost this line
colnames(ecoli) <- c("SequenceName","mcg","gvh","lip","chg","aac","alm1",
  "alm2","ecclass")   # last two names completed from the exercise text

(a) Use ecclass to construct a factor containing the classes "cp", "im", and "pp".
(b) Construct a classification tree using the variables "mcg", "gvh", "lip", "aac", "alm1", and "alm2". Give the code. Hint: use the addition notation in the model formula.
(c) Plot the tree and report the variables that play a role in the constructed tree.
(d) Predict the class by the tree. Report the code and the misclassification rate.
(e) Leave out the upper variable in the classification tree and re-estimate the tree. Report the misclassification rate. Is it much worse?

Table 8.2: Ordered expression values of gene CCND3 Cyclin D3; index 2 indicates ALL, 1 indicates AML; cutoff points, number of false positives, false positive rate, number of true positives, true positive rate.

    data index cutoff fp  fpr tp  tpr
 1                Inf  0 0.00  0 0.00
 2  2.77     2   2.77  0 0.00  1 0.04
 3  2.59     2   2.59  0 0.00  2 0.07
 4  2.45     2   2.45  0 0.00  3 0.11
 ...
22  1.78     2   1.78  0 0.00 21 0.78
23  1.52     2   1.52  0 0.00 22 0.81
24  1.37     2   1.45  1 0.09 22 0.81
25  1.33     2   1.37  1 0.09 23 0.85
26  1.28     2   1.33  1 0.09 24 0.89
27  1.11     2   1.28  1 0.09 25 0.93
28  0.46     2   1.12  2 0.18 25 0.93
29  1.45     1   1.11  2 0.18 26 0.96
30  1.12     1   1.02  3 0.27 26 0.96
31  1.02     1   0.89  4 0.36 26 0.96
32  0.89     1   0.83  5 0.45 26 0.96
33  0.83     1   0.74  6 0.55 26 0.96
34  0.74     1   0.64  7 0.64 26 0.96
35  0.64     1   0.49  8 0.73 26 0.96
36  0.49     1   0.46  8 0.73 27 1.00
37  0.43     1   0.43  9 0.82 27 1.00
38  0.13     1   0.13 10 0.91 27 1.00
39 -0.74     1  -0.74 11 1.00 27 1.00

Chapter 9

Analyzing Sequences

For many purposes in bioinformatics, nucleotide or amino acid sequences are analyzed. The idea is that highly similar sequences may have identical biological functions. For expressing the similarity of sequences it is necessary to first compute their optimal alignment. It will be explained and illustrated how optimal pairwise alignment can be obtained. Furthermore, it is of importance to compute quantities for DNA sequences such as the CG fraction, or, for amino acid sequences, the isoelectric point or the hydropathy score. It will be explained and illustrated how such quantities can be computed. In this chapter you learn how to query online databases, to translate RNA into protein sequences, to match patterns, and to program pairwise alignments. We will start, however, with a query language in order to download sequences.

## 9.1 Using a query language

It will be illustrated how the query language from the seqinr package can be used. Before formulating queries, it is important to know which banks can be chosen.

> library(seqinr)
> choosebank()
 [1] "genbank"     "embl"        "emblwgs"     "swissprot"   "ensembl"
 [6] "refseq"      "nrsub"       "hobacnucl"   "hobacprot"   "hovergendna"
[11] "hovergen"    "hogenom"     "hogenomdna"  "hogennucl"   "hogenprot"
[16] "hoverclnu"   "hoverclpr"   "homolens"    "homolensdna" "greview"
[21] "polymorphix" "emglib"      "HAMAPnucl"   "HAMAPprot"   "hoppsigen"
[26] "nurebnucl"   "nurebprot"   "taxobacgen"

There are many possibilities to use the query language, e.g. for answering questions about sequences from online databases (Gouy, et al. 1984). We give a few examples to illustrate some of its possibilities.
For this we shall temporarily use the option virtual=TRUE, which saves time by preventing the actual downloading of the sequences.

> choosebank("genbank")
> query("ccnd","k=ccnd",virtual=TRUE)$nelem
[1] 147

More specifically: how many CCND3 sequences does GenBank hold for the species Homo sapiens?

> query("ccnd3hs","sp=homo sapiens AND k=ccnd3",virtual=TRUE)$nelem
[1] 9

For many other combinations of search options we refer to the manual of the seqinr package, and for a book-length treatment with many examples to Charif et al. (2008).

## 9.2 Getting information on downloaded sequences

After sequences are downloaded in binary format, it is essential to obtain information with respect to their accession number, length, actual elements, translation to amino acids, and annotation. How to do this will briefly be illustrated by an example.

## Example 1. Let's download sequences related to the species Homo sapiens and a gene name like "CCND3".

> choosebank("genbank")
> query("ccnd3hs","sp=homo sapiens AND k=ccnd3@")
> ccnd3hs$nelem
[1] 9

The results below are obviously time dependent.[1] The sequences are downloaded in binary format. The symbol @ acts as a wildcard for zero or more characters. There are a number of useful functions available to obtain further information. Some of these are getName, getLength, getSequence, getTrans, and getAnnot. To apply these to a list containing sets of sequences, the function sapply is very convenient. This is illustrated by extracting the NCBI accession numbers.

[1] The results below are obviously time dependent.

> sapply(ccnd3hs$req, getName)
[1] "AF517525.CCND3"   "AL160163.CCND3"   "AL160163.PE5"     "AL161651"
[5] "BC011616.CCND3"   "CR542246"         "HUMCCND3A.CCND3"  "HUMCCND3PS.PE1"
[9] "HUMCCNDB04.CCND3" "HUMCYCD3A.CCND3"

> sapply(ccnd3hs$req, getLength)
[1] "879" "879" "729" "211627" "879" "879" "879" "537" "559" "879"

Let's obtain the first sequence and print its first fifteen nucleotides to the screen.[2]

> getSequence(ccnd3hs$req[[1]])[1:15]
[1] "a" "t" "g" "g" "a" "g" "c" "t" "g" "c" "t" "g" "t" "g" "t"

Its translation into amino acids can be obtained in a similar manner,

> getTrans(ccnd3hs$req[[1]])[1:15]
[1] "M" "E" "L" "L" "C" "C" "E" "G" "T" "R" "H" "A" "P" "R" "A"

as well as its annotation from the corresponding web page:

> getAnnot(ccnd3hs$req[[1]])
" CDS join(1051..1248,2115..2330,5306..5465,6005..6141,"
" 6593..6760)"
" /gene=\"CCND3\""
" /codon_start=1"
" /product=\"cyclin D3\""
" /protein_id=\"AAM51826.1\""
" /db_xref=\"GI:21397158\""
" /translation=\"MELLCCEGTRHAPRAGPDPRLLGDQRVLQSLLRLEERYVPRASY"
" FQCVQREIKPHMRKMLAYWMLEVCEEQRCEEEVFPLAMNYLDRYLSCVPTRKAQLQLL"
" LAFILHRLSLPRDRQALVKKHAQTFLALCATDYTFAMYPPSMIATGSIGAAVQGLGAC"
" SMSGDELTELLAGITGTEVDCLRACQEQIEAALRESLREAAQTSSSPAPKAPRGSSSQ"
" GPSQTSTPTDVTAIHL\""

[2] Use double brackets to extract a sequence from a list.

## 9.3 Computations on sequences

Basic quantities to compute are the nucleotide and the dinucleotide frequencies.

## Example 1. Frequencies of (di)nucleotides. We shall continue with the first result from the CCND3 (Cyclin D3) search with accession number "AF517525.CCND3". To compute the frequencies we may extract the sequence from the list and use the basic function table, as follows.

> table(getSequence(ccnd3hs$req[[1]]))
  a   c   g   t
162 288 267 162
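Relative frequencies follow by dividing the counts by the sequence length; a one-line sketch with the same object:

prop.table(table(getSequence(ccnd3hs$req[[1]])))   # proportions instead of counts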
This table can also be computed by the seqinr function count, which is more general in the sense that frequencies of dinucleotides can be computed.

> count(getSequence(ccnd3hs$req[[1]]),2)
aa  ac  ag  at  ca  cc  cg  ct  ga  gc  gg  gt  ta  tc  tg  tt
25  44  64  29  68  97  45  78  52 104  76  34  16  43  82  21

This will be quite useful in the next chapter. Indeed, changing 2 into 3 makes it possible to count trinucleotides.

## Example 2. G + C percentage. We are often interested in the fraction of G plus C in general (GC), or starting from the first position of the codon bases (GC1), the second (GC2), or the third (GC3).

> GC(getSequence(ccnd3hs$req[[1]]))
[1] 0.6313993
> GC1(getSequence(ccnd3hs$req[[1]]))
[1] 0.6484642
> GC2(getSequence(ccnd3hs$req[[1]]))
[1] 0.4641638
> GC3(getSequence(ccnd3hs$req[[1]]))
[1] 0.78157

Hence, the G + C percentage is largest when started at position three. It is also possible to compute the G + C fraction in a window of length 50 nt, say, and to plot it along the sequence.

GCperc <- double()
ccnd3 <- sapply(ccnd3hs$req, getSequence)   # extract the downloaded sequences
n <- length(ccnd3[[1]])
for (i in 1:(n - 50)) GCperc[i] <- GC(ccnd3[[1]][i:(i+50)])
plot(GCperc, type="l")

By double() we first create a vector. From Figure 9.1 it can be seen that the G + C fraction changes drastically along a window of 50 nucleotides.

Figure 9.1: G + C fraction of sequence "AF517525.CCND3" along a window of length 50 nt.

With respect to over- or under-representation of dinucleotides there is a function rho (ρ) available, which is defined as

    ρ(xy) = f(xy) / (f(x) · f(y)),

where f(xy), f(x), and f(y) are the frequencies of the dinucleotide xy and the nucleotides x and y, respectively. The z-score is computed by subtracting the mean and dividing by the standard deviation (Palmeira, et al., 2006). The latter is somewhat more sensitive to over- and under-representation.

## Example 3. Rho and z-scores. The coefficient rho and the corresponding z-scores will be computed from the sequence with NCBI accession number "AF517525.CCND3".

> rho(getSequence(ccnd3hs$req[[1]]))
       aa        ac        ag        at        ca        cc        cg        ct
0.8382879 0.8299051 1.3020778 0.9724140 1.2825805 1.0291294 0.5149819 1.4711953
       ga        gc        gg        gt        ta        tc        tg        tt
1.0579382 1.1901805 0.9381544 0.6917288 0.5365043 0.8110436 1.6682872 0.7041619

> zscore(getSequence(ccnd3hs$req[[1]]), modele='base')
        aa         ac         ag         at         ca         cc         cg
-1.0832601 -1.6733481  2.8118431 -0.1847902  2.7799508  0.4208538 -6.6303243
        ct         ga         gc         gg         gt         ta         tc
 4.6354920  0.5393086  2.5998172 -0.7999509 -2.8694932 -3.1048171 -1.8589022
        tg         tt
 6.2206449 -1.9817299

The rho value for CG is not extreme, but its z-score certainly is.

In case we have an amino acid sequence, it may be useful to obtain a plot of the amino acid frequencies. Such a plot can give a first impression of sequence similarity.

## Example 4. Comparing amino acid frequencies. We continue with the first result from the CCND3 (Cyclin D3) search, translate it, order the frequency table, and produce a dotchart with the amino acid frequencies.

tab <- table(getTrans(ccnd3hs$req[[1]]))
taborder <- tab[order(tab)]
names(taborder) <- aaa(names(taborder))
dotchart(taborder, pch=19, xlab="Stop and amino-acid-counts")
abline(v=1, lty=2)

The script was run on both sequences AF517525.CCND3 and AL160163.CCND3, resulting in Figures 9.2 and 9.3, respectively. The two sequences are highly similar with respect to amino acid frequencies.

Figure 9.2: Frequency plot of amino acids from accession number AF517525.CCND3. Figure 9.3: Frequency plot of amino acids from accession number AL160163.CCND3.
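The visual similarity of Figures 9.2 and 9.3 can be checked numerically; a hedged sketch that tabulates both translations over a common set of amino-acid levels and correlates the counts:

aa1 <- getTrans(ccnd3hs$req[[1]]); aa2 <- getTrans(ccnd3hs$req[[2]])
lev <- union(unique(aa1), unique(aa2))        # common set of amino acids
cor(as.numeric(table(factor(aa1, levels = lev))),
    as.numeric(table(factor(aa2, levels = lev))))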
For amino acid sequences it may be of importance to compute the theoretical isoelectric point or the molecular weight of the corresponding protein.

## Example 5. Isoelectric point. The function computePI computes the theoretical isoelectric point of a protein, which is the pH at which the protein has a neutral charge (Gasteiger, et al. 2005).

> computePI(getTrans(ccnd3hs$req[[1]]))
[1] 6.657579

The protein molecular weight can be computed as follows.

> pmw(getTrans(getSequence(ccnd3hs$req[[1]])))
[1] 32503.38

Note that it is easy to compute these for all downloaded proteins and to compare them.

Another important quantity is the hydropathy score of proteins (Kyte & Doolittle, 1982), which is defined as the weighted sum α_1 f_1 + · · · + α_20 f_20 of amino acid coefficients α_i and the relative frequencies f_i. An example will illustrate how it can be computed.

## Example 6. Hydropathy score. The coefficients α_1, · · · , α_20 are available as the KD data from the EXP list of the seqinr package. The unique names are lexicographically ordered and stored in the object kdc. The scale is changed by the minus sign below so that hydrophilic proteins are positive, but smaller than one. A function is defined to compute the hydropathy score for a set of amino acid sequences.

ccnd3 <- sapply(ccnd3hs$req, getSequence)
ccnd3transl <- sapply(ccnd3, getTrans)
data(EXP)
names(EXP$KD) <- sapply(words(), function(x) translate(s2c(x)))
kdc <- EXP$KD[unique(names(EXP$KD))]
kdc <- -kdc[order(names(kdc))]
linform <- function(data, coef) {   # data are sequences
  f <- function(x) {
    freq <- table(factor(x, levels = names(coef)))/length(x)
    return(coef %*% freq)
  }
  res <- sapply(data, f)
  names(res) <- NULL
  return(res)
}
kdath <- linform(ccnd3transl, kdc)
> print(kdath, digits=3)
[1] 0.0874 0.0962 0.0189 0.1496 0.0962 0.0874 0.0874 0.2659 0.2220

Indeed, the largest score is still much smaller than one, so the conclusion is that there are no hydrophilic proteins among our sequences.

The data set aaindex of the seqinr library contains more than five hundred sets of coefficients for computing specific quantities with respect to proteins.

## 9.4 Matching patterns

A manner to investigate a long sequence is to search for identical patterns, possibly allowing for a specified number of mismatches. There are many relevant examples, such as searching for one of the stop codons UAG, UGA, or UAA in RNA, or for recognition sequences of enzymes (Roberts, et al., 2007). We confine ourselves to a brief example.
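As a quick illustration of the stop-codon searches just mentioned, the three stop codons (in their DNA coding) can be counted in a small made-up string; a sketch, assuming Biostrings is available (the toy sequence is invented):

library(Biostrings)
toy <- "atgaaatagcccgtgtaactgtga"   # made-up DNA string
sapply(c("taa", "tag", "tga"), function(p) countPattern(p, toy))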
## Example 1. Pattern match. In the sequence with NCBI accession number "AF517525.CCND3", we seek the pattern "cccggg" with zero mismatches as well as with a single mismatch. By the function c2s a sequence of characters is converted into a single string; the matching functions come from the Biostrings package.

library(seqinr); library(Biostrings)
choosebank("genbank")
query("ccnd3hs","sp=homo sapiens AND k=ccnd3@")
ccnd3 <- sapply(ccnd3hs$req, getSequence)
ccnd3nr1 <- c2s(ccnd3[[1]])
> ccnd3nr1
[1] "atggagctgctgtgttgcgaaggcacccggcacgcgccccgggccgggccggacccgcggctgctggggga"...
> subseq <- "cccggg"
> countPattern(subseq, ccnd3nr1, mismatch = 0)
[1] 2
> matchPattern(subseq, ccnd3nr1, mismatch = 0)
Views on a 879-letter BString subject
Subject: atggagctgctgtgttgcgaaggcacccggcacg...actcctacagatgtcacagccatacacctgta
Views:
    start end width
[1]    38  43     6 [cccggg]
[2]   809 814     6 [cccggg]
> matchPattern(subseq, ccnd3nr1, mismatch = 1)
Views on a 879-letter BString subject
Subject: atggagctgctgtgttgcgaaggcacccggcacg...actcctacagatgtcacagccatacacctgta
Views:
     start end width
 [1]    26  31     6 [cccggc]
 [2]    37  42     6 [ccccgg]
 [3]    38  43     6 [cccggg]
 [4]    43  48     6 [gccggg]
 [5]    54  59     6 [cccgcg]
 [6]   119 124     6 [cccgcg]
 [7]   236 241     6 [ccctgg]
 [8]   303 308     6 [cctggg]
 [9]   512 517     6 [cccgtg]
[10]   612 617     6 [cacggg]
[11]   642 647     6 [cctggg]
[12]   661 666     6 [tccggg]
[13]   662 667     6 [ccgggg]
[14]   808 813     6 [ccccgg]
[15]   809 814     6 [cccggg]
[16]   810 815     6 [ccgggg]

## 9.5 Pairwise alignments

Among the basic questions about genes or proteins is to what extent a pair of sequences is similar. To find this out, they are aligned in a certain manner, after which a similarity score can be computed. In order to understand sequence alignment it is fundamental to have some idea about recursion.

## Example 1. Basic recursion. The idea of recursion is to generate a sequence by defining the current value as a function of the previous one. Suppose that the first element is one, x_1 = 1, and that the sequence is defined by

    x_i = x_{i-1} + 1.

Then we obtain x_1 = 1, x_2 = 2, x_3 = 3, etc., so that the sequence becomes 1, 2, 3, · · ·. Indeed, this is as fundamental as counting.

Another manner to define a sequence is by multiplying the previous value by a constant. For example, let x_i = 2x_{i-1} with x_1 = 1. Then the values of the sequence are x_1 = 1, x_2 = 2, x_3 = 4, x_4 = 8, etc. We also see that in fact x_n = 2^{n-1}, so that a value of the sequence can be computed without actually computing all previous elements.

Another example would be x_i = 2x_{i-1} − 10, with x_1 = 1. In order to compute the value x_10 we may use R, as follows.

> x <- double(); x[1] <- 1
> for (i in 2:10) {x[i] <- 2*x[i-1]-10}
> x
 [1]     1    -8   -26   -62  -134  -278  -566 -1142 -2294 -4598

This illustrates basic ideas about recursively defined sequences.

Suppose we want to compute an alignment score for the two small DNA sequences GAATTC and GATTA (Durbin et al., 1998, p.18). We agree that a match between two letters has the score +2 and a mismatch the score -1. A gap at a certain position of the sequences is punished by subtracting d = 2 from the score. A possible alignment is

    GAATTC
    GATT-A

where the minus sign indicates a gap. This alignment consists of a match, match, mismatch, match, gap, and mismatch, respectively, so that the score is 2 + 2 − 1 + 2 − 2 − 1 = 2. Now the question is whether this alignment is optimal in the sense that the score is maximal. The answer is: no! To see this, consider the alignment

    GAATTC
    GA-TTA

Then we have a match, match, gap, match, match, and mismatch, respectively, so that the score is 2 + 2 − 2 + 2 + 2 − 1 = 5. This is better, but still we do not know whether this alignment is optimal. In order to ascertain that an alignment is optimal we have to build an alignment score matrix F(i, j).
To do so it is convenient to start with building the (mis)match score matrix s(i, j). Its (i, j)th element s(i, j) has the value 2 in case of a match and the value -1 in case of a mismatch. Note that at each step we can choose between a gap, a match, or a mismatch. Building up the matrix F(i, j) recursively means that we define its elements on the basis of the values of its preceding elements. That is, given the values of the previous elements F(i − 1, j − 1), F(i − 1, j), and F(i, j − 1), we will be able to find the best consecutive value for F(i, j). In particular, in case of a match or a mismatch we take F(i, j) = F(i − 1, j − 1) + s(x_i, y_j), and in case of a gap we take F(i, j) = F(i − 1, j) − d or F(i, j) = F(i, j − 1) − d. The famous Needleman-Wunsch alignment algorithm consists of taking the maximum of these possibilities at each step (e.g. Durbin et al., 1998, p.21). Their algorithm can be summarized as follows:

    F(i, j) = max { F(i − 1, j − 1) + s(i, j),
                    F(i − 1, j) − d,
                    F(i, j − 1) − d }.

Note, however, that this will not yet work because we have not defined any initial values. In fact we agree to start with F(0, 0) = 0, and due to the gap penalties we take F(i, 0) = −i·d for the first column and F(0, j) = −j·d for the first row. Then the final score F(n, m) is the optimal score, and the values of the matrix F(i, j) indicate the optimal path. By informaticians this recursive scheme is often called a "dynamic programming algorithm".

## Example 2. Dynamic programming of DNA sequences. Consider again the DNA sequences GAATTC and GATTA, the score +2 for a match, -1 for a mismatch, and the gap penalty d = 2. It is clarifying to first construct the score matrix s(i, j). For this we use the string-to-character function s2c, a for loop, and an if else statement.

library(seqinr)
x <- s2c("GAATTC"); y <- s2c("GATTA"); d <- 2
s <- matrix(data=NA, nrow=length(y), ncol=length(x))
for (i in 1:(nrow(s))) for (j in 1:(ncol(s)))
  {if (y[i]==x[j]) s[i,j] <- 2 else s[i,j] <- -1 }
rownames(s) <- c(y); colnames(s) <- c(x)
> s
   G  A  A  T  T  C
G  2 -1 -1 -1 -1 -1
A -1  2  2 -1 -1 -1
T -1 -1 -1  2  2 -1
T -1 -1 -1  2  2 -1
A -1  2  2 -1 -1 -1

To initialize the first row and column of the matrix F(i, j), it is convenient to use the function seq. The purpose of the max function is obvious.

F <- matrix(data=NA, nrow=(length(y)+1), ncol=(length(x)+1))
rownames(F) <- c("",y); colnames(F) <- c("",x)
F[,1] <- -seq(0, length(y)*d, d); F[1,] <- -seq(0, length(x)*d, d)
for (i in 2:(nrow(F)))
  for (j in 2:(ncol(F)))
    {F[i,j] <- max(c(F[i-1,j-1]+s[i-1,j-1], F[i-1,j]-d, F[i,j-1]-d))}
> F
        G  A  A   T   T   C
    0  -2 -4 -6  -8 -10 -12
G  -2   2  0 -2  -4  -6  -8
A  -4   0  4  2   0  -2  -4
T  -6  -2  2  3   4   2   0
T  -8  -4  0  1   5   6   4
A -10  -6 -2  2   3   4   5

From the lower-right corner we see that the optimal score is indeed 5.
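The remark that the values of F(i, j) indicate the optimal path can be made concrete by a traceback. The following is a minimal sketch, reusing x, y, s, F, and d from the script above; ties are broken here in favor of the diagonal, so a different but equally optimal alignment than the one shown earlier may result.

alx <- character(); aly <- character()
i <- nrow(F); j <- ncol(F)
while (i > 1 || j > 1) {
  if (i > 1 && j > 1 && F[i,j] == F[i-1,j-1] + s[i-1,j-1]) {
    alx <- c(x[j-1], alx); aly <- c(y[i-1], aly)   # match/mismatch step
    i <- i - 1; j <- j - 1
  } else if (j > 1 && F[i,j] == F[i,j-1] - d) {
    alx <- c(x[j-1], alx); aly <- c("-", aly)      # gap in y
    j <- j - 1
  } else {
    alx <- c("-", alx); aly <- c(y[i-1], aly)      # gap in x
    i <- i - 1
  }
}
paste(alx, collapse=""); paste(aly, collapse="")   # one optimal alignment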
Optimal alignment of pairs of amino acid sequences is often considered more relevant because amino acid sequences are more closely related to biological functions. For this purpose we may modify the previous scheme by changing the gap penalty d and the (mis)match scores s(i, j). In particular, we shall use the gap penalty d = 8 and (mis)match scores from the so-called BLOSUM50 matrix.

## Example 3. Programming Needleman-Wunsch. For the two sequences "PAWHEAE" and "HEAGAWGHEE" (see Durbin et al., 1998, p.21) we seek the Needleman-Wunsch optimal alignment score, using the BLOSUM50 (mis)match score matrix and gap penalty d = 8. You can either directly read a BLOSUM matrix from NCBI

> file <- "ftp://ftp.ncbi.nih.gov/blast/matrices/BLOSUM50"

or load a BLOSUM matrix from the Biostrings package. For the sake of clarity we shall conveniently construct the matrix s(i, j) without any concern for computational efficiency.

Table 9.1: BLOSUM50 matrix.

   A  R  N  D  C  Q  E  G  H  I  L  K  M  F  P  S  T  W  Y  V
A  5 -2 -1 -2 -1 -1 -1  0 -2 -1 -2 -1 -1 -3 -1  1  0 -3 -2  0
R -2  7 -1 -2 -4  1  0 -3  0 -4 -3  3 -2 -3 -3 -1 -1 -3 -1 -3
N -1 -1  7  2 -2  0  0  0  1 -3 -4  0 -2 -4 -2  1  0 -4 -2 -3
D -2 -2  2  8 -4  0  2 -1 -1 -4 -4 -1 -4 -5 -1  0 -1 -5 -3 -4
C -1 -4 -2 -4 13 -3 -3 -3 -3 -2 -2 -3 -2 -2 -4 -1 -1 -5 -3 -1
Q -1  1  0  0 -3  7  2 -2  1 -3 -2  2  0 -4 -1  0 -1 -1 -1 -3
E -1  0  0  2 -3  2  6 -3  0 -4 -3  1 -2 -3 -1 -1 -1 -3 -2 -3
G  0 -3  0 -1 -3 -2 -3  8 -2 -4 -4 -2 -3 -4 -2  0 -2 -3 -3 -4
H -2  0  1 -1 -3  1  0 -2 10 -4 -3  0 -1 -1 -2 -1 -2 -3  2 -4
I -1 -4 -3 -4 -2 -3 -4 -4 -4  5  2 -3  2  0 -3 -3 -1 -3 -1  4
L -2 -3 -4 -4 -2 -2 -3 -4 -3  2  5 -3  3  1 -4 -3 -1 -2 -1  1
K -1  3  0 -1 -3  2  1 -2  0 -3 -3  6 -2 -4 -1  0 -1 -3 -2 -3
M -1 -2 -2 -4 -2  0 -2 -3 -1  2  3 -2  7  0 -3 -2 -1 -1  0  1
F -3 -3 -4 -5 -2 -4 -3 -4 -1  0  1 -4  0  8 -4 -3 -2  1  4 -1
P -1 -3 -2 -1 -4 -1 -1 -2 -2 -3 -4 -1 -3 -4 10 -1 -1 -4 -3 -3
S  1 -1  1  0 -1  0 -1  0 -1 -3 -3  0 -2 -3 -1  5  2 -4 -2 -2
T  0 -1  0 -1 -1 -1 -1 -2 -2 -1 -1 -1 -1 -2 -1  2  5 -3 -2  0
W -3 -3 -4 -5 -5 -1 -3 -3 -3 -3 -2 -3 -1  1 -4 -4 -3 15  2 -3
Y -2 -1 -2 -3 -3 -1 -2 -3  2 -1 -1 -2  0  4 -3 -2 -2  2  8 -1
V  0 -3 -3 -4 -1 -3 -3 -4 -4  4  1 -3  1 -1 -3 -2  0 -3 -1  5

library(seqinr); library(Biostrings); data(BLOSUM50)
x <- s2c("HEAGAWGHEE"); y <- s2c("PAWHEAE"); s <- BLOSUM50[y,x]; d <- 8
F <- matrix(data=NA, nrow=(length(y)+1), ncol=(length(x)+1))
F[1,] <- -seq(0,80,8); F[,1] <- -seq(0,56,8)
rownames(F) <- c("",y); colnames(F) <- c("",x)
for (i in 2:(nrow(F)))
  for (j in 2:(ncol(F)))
    {F[i,j] <- max(c(F[i-1,j-1]+s[i-1,j-1], F[i-1,j]-d, F[i,j-1]-d))}
> F
         H   E   A   G   A   W   G   H   E   E
    0   -8 -16 -24 -32 -40 -48 -56 -64 -72 -80
P  -8   -2  -9 -17 -25 -33 -41 -49 -57 -65 -73
A -16  -10  -3  -4 -12 -20 -28 -36 -44 -52 -60
W -24  -18 -11  -6  -7 -15  -5 -13 -21 -29 -37
H -32  -14 -18 -13  -8  -9 -13  -7  -3 -11 -19
E -40  -22  -8 -16 -16  -9 -12 -15  -7   3  -5
A -48  -30 -16  -3 -11 -11 -12 -12 -15  -5   2
E -56  -38 -24 -11  -6 -12 -14 -15 -12  -9   1

Hence, from the lower-right corner we observe that the optimal score equals one.

## Example 4. Needleman-Wunsch by pairwiseAlignment. We may also conveniently use the pairwiseAlignment function from the Biostrings package to find the optimal Needleman-Wunsch alignment score for the sequences "PAWHEAE" and "HEAGAWGHEE" (see Durbin et al., 1998, p.21).

library(Biostrings); data(BLOSUM50)
> pairwiseAlignment(AAString("PAWHEAE"), AAString("HEAGAWGHEE"),
+   substitutionMatrix = "BLOSUM50", gapOpening = 0, gapExtension = -8,
+   scoreOnly = FALSE)
Global Pairwise Alignment
1: --P-AW-HEAE
2: HEAGAWGHE-E
Score: 1

Besides the score, the output thus displays the corresponding optimal alignment.
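As a hedged aside with the same scoring: pairwiseAlignment can also perform local (Smith-Waterman style) alignment through its type argument, which is useful to compare with the local alignment exercise below.

> pairwiseAlignment(AAString("PAWHEAE"), AAString("HEAGAWGHEE"),
+   substitutionMatrix = "BLOSUM50", gapOpening = 0, gapExtension = -8,
+   type = "local")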
An obvious question is whether the score 1 obtained in the previous example should be evaluated as being "large" or not. A manner to answer this question is by comparing it with the alignment scores of random sequences; that is, we may estimate the probability of alignment scores larger than 1.

## Example 5. Comparing with random sequences. To illustrate how the probability of alignment scores larger than 1 can be computed, we sample randomly from the names of the amino acids, seven for x and ten for y, and compute the optimal alignment score. This is repeated 1000 times, and the probability of optimal alignment scores greater than 1 is estimated by the corresponding proportion.

library(seqinr); library(Biostrings); data(BLOSUM50)
randallscore <- double()
for (i in 1:1000) {
  x <- c2s(sample(rownames(BLOSUM50), 7, replace=TRUE))
  y <- c2s(sample(rownames(BLOSUM50), 10, replace=TRUE))
  randallscore[i] <- pairwiseAlignment(AAString(x), AAString(y),
    substitutionMatrix = "BLOSUM50", gapOpening = 0, gapExtension = -8,
    scoreOnly = TRUE)
}
> sum(randallscore > 1)/1000
[1] 0.003

By the option scoreOnly = TRUE only the optimal score is written to the vector randallscore. The estimated probability of scores larger than 1 equals 0.003 and is therefore small, so the alignment is stronger than expected from randomly constructed sequences.

## Example 6. Sliding window on Needleman-Wunsch scores. We may also program a sliding window such that for each window position the Needleman-Wunsch alignment score is computed. Then the maximum can be found and localized.

choosebank("genbank"); library(seqinr)
query("ccnd3hs","sp=homo sapiens AND k=ccnd3@")
ccnd3 <- sapply(ccnd3hs$req, getSequence)
ccnd3transl <- sapply(ccnd3, getTrans)
x <- c2s(ccnd3transl[[1]])
y <- c2s(ccnd3transl[[1]][50:70])
nwscore <- double(); n <- length(ccnd3transl[[1]])
for (i in 1:(n-21))
  nwscore[i] <- pairwiseAlignment(AAString(c2s(ccnd3transl[[1]][i:(i+20)])),
    AAString(y), substitutionMatrix = "BLOSUM50", gapOpening = 0,
    gapExtension = -8, scoreOnly = TRUE)

> pairwiseAlignment(AAString(y), AAString(y), substitutionMatrix = "BLOSUM50",
+   gapOpening = 0, gapExtension = -8, scoreOnly = TRUE)
[1] 152
> max(nwscore)
[1] 152
> which.max(nwscore)
[1] 50

Note that the maximum occurs when the subsequences are identical: the maximum value 152 occurs at position 50.

## 9.6 Overview and concluding remarks

It was illustrated how the query language of the seqinr library can be used to download sequences, to translate these, and to compute relevant quantities such as the isoelectric point or the hydropathy score. Furthermore, it was illustrated how patterns can be matched and how algorithms for optimal pairwise alignment can be programmed. Further applications are given by the exercises below. The package Biostrings contains the various PAM matrices for optimal alignment, as well as facilities to find palindromes and to read and write data in FASTA format (readFASTA).

## 9.7 Exercises

1. Writing to a FASTA file. Read, similar to the above, the ccnd3 sequences using the query language and write the first sequence to a file in FASTA format. Also try to write them all to FASTA format.

2. Dotplot of sequences. Use the function dotPlot of the seqinr package and par(mfrow=c(1,2)) to produce two adjacent plots.

(a) Construct two random sequences of size 100 and plot the first against the second and the first against the first.
(b) Construct a plot of the first against the first and of the first against the first in reverse order.
(c) Download the sequences related to the species Homo sapiens and a gene name like "CCND3 Cyclin D3". Construct a dotplot of the most similar and of the least similar sequences. Report your observations.
3. Local alignment. The Smith-Waterman algorithm seeks maximal local alignments between subsequences of sequences. The algorithm can be summarized (Durbin et al., 1998, p.22) as follows:

    F(i, j) = max { 0,
                    F(i − 1, j − 1) + s(i, j),
                    F(i − 1, j) − d,
                    F(i, j − 1) − d }.

That is, the algorithm allows the score zero whenever the other three values are negative. The idea is that a maximal alignment can occur anywhere in the matrix; the optimal local alignment is defined as the maximum over the whole matrix. Program the Smith-Waterman algorithm and find the optimal local alignment of the sequences "PAWHEAE" and "HEAGAWGHEE".

4. Probability of more extreme alignment score. Sample x and y randomly from the names of the amino acids, seven for y and ten for x. Repeat this 1000 times, compute the optimal alignment scores, and use these to evaluate the significance of the previously obtained score.

5. Prochlorococcus marinus. Each of three strains of P. marinus is exposed to a different intensity of UV radiation because they live at different depths in the water. The MIT 9313 strain lives at depth 135 m, SS120 at 120 m, and MED4 at 5 m. The latter strain is considered to be high-light-adapted. The residual intensities of 260-nm UVb irradiation corresponding to the given depths are 0.00007%, 0.0002%, and 70%, respectively. It is hypothesized that the G + C content depends on the amount of radiation. The GenBank accession numbers are AE017126, BX548174, and BX548175, respectively.

(a) Use the operator OR together with the accession numbers to download the sequences of the bacteria strains.
(b) Compute the GC fraction of each of the sequences.
(c) Is there a relation between UVb radiation and GC fraction?
(d) Formulate a relevant hypothesis and test it.

6. Sequence equality. Download the sequences "AF517525.CCND3" and "AL160163.CCND3". Hint: these are the first two from the query "ccnd3" within Homo sapiens.

(a) Compute the length of the sequences.
(b) Translate the sequences into amino acids and compare their frequencies.
(c) Are they equal or, if not, at what positions do they differ?

7. Conserved region. At http://blocks.fhcrc.org there are blocks of highly conserved regions for proteins in PROSITE. Find PR00851A, which contains blocks of proteins related to a human gene responsible for the DNA-repair defect xeroderma pigmentosum (sensitivity to ultraviolet light). Perform pairwise alignments with these subsequences and report the ones most and least similar. Use BLOSUM50.

8. Plot of CG proportion from C. elegans.

(a) Produce a plot of the CG proportion of chromosome I of C. elegans (Celegans.UCSC.ce2) along a window of 100 nucleotides. Take the first 10,000 nucleotides.
(b) A binding sequence of the enzyme EcoRV is the subsequence GATATC. How many exact matches does chromosome I of C. elegans have? How many do you expect by chance?

9. Plot of codon usage. Go to the seqinr help page on dotchart.uco.

(a) Redo the example and briefly describe its usage.
(b) Use the query language to find ...

Chapter 10

Markov Models

The idea of a Markov process forms the basis of many important models in bioinformatics, such as (Hidden) Markov Models, models for sequence alignment, and models for phylogenetic trees. By the latter it is possible to estimate distances between several sequences and to visualize these in a tree. Classical matrices for sequence alignment such as BLOSUM and PAM are constructed on the basis of a Markov process.
By (Hidden) Markov Models the specific repetitive order of DNA sequences can be modeled, so that prediction of families becomes possible. In this chapter you learn what a probability transition matrix is and which role it plays in a Markov process to construct specific sequences. Various models for phylogenetic trees are explained in terms of the rate matrix as well as the probability transition matrix. The basic ideas of the Hidden Markov Model are briefly explained and illustrated by an example.[1]

[1] This chapter is somewhat more technical in its notation with respect to e.g. conditional probability. This is, however, inevitable for the understanding of Markov processes.

## 10.1 Random sampling

Models to predict and classify types of DNA sequences make it possible to draw a sample from a population. The latter is the same as a distribution with certain properties. Recall from Chapter 3 that a discrete distribution is a set of values with certain probabilities that add up to one. Two basic examples illustrate this point.

## Example 1. Throwing a coin. A fair coin X attains Head and Tail with probability 1/2. Thus we may write P(X = H) = 0.5 and P(X = T) = 0.5. To a random variable there always corresponds a population and a sampling scheme, which can be simulated on a computer (e.g. Press, et al., 1992).

> sample(c("H","T"), 30, rep=TRUE, prob=c(0.5,0.5))
"H" "H" "T" "T" "T" "H" "H" "T" "T" "H" "H" "H" "T" "T" "H" "T" "H" "T"
"T" "T" "H" "T" "H" "T" "T" "T" "T" ...

Thus the sampled values Head and Tail correspond to the process of actually throwing a fair coin. The function sample randomly draws thirty values from c("H","T") with replacement (rep=TRUE) and equal probabilities (prob=c(0.5,0.5)).

## Example 2. Generating a sequence of nucleotides. Another example is that of a random variable X which has the letters of the nucleotides as its values, so that we have the events X = A, X = C, X = G, and X = T. These events may occur in a certain DNA sequence with probabilities P(X = A) = 0.1, P(X = G) = 0.4, P(X = C) = 0.4, and P(X = T) = 0.1, respectively. Then the actual placement of the nucleotides along a sequence can be simulated.

> sample(c("A","G","C","T"), 30, rep=TRUE, prob=c(0.1,0.4,0.4,0.1))
"G" "C" "T" "G" "C" "G" "G" "G" "T" "C" "T" "T" "C" "C" "C" "G" "G" "C"
"G" "G" "G" "C" "C" "C" "G" "C" ...

Of course, if you do this again, then the resulting sequence will differ due to the random nature of its generation.
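A small sketch showing that the sampling indeed follows the specified distribution: with a larger sample, the empirical frequencies come close to the probabilities 0.1, 0.4, 0.4, and 0.1.

table(sample(c("A","G","C","T"), 10000, rep=TRUE,
             prob=c(0.1,0.4,0.4,0.1)))/10000   # empirical proportions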
## 10.2 Probability transition matrix

In order to build a model that produces specific sequences we will consider a certain type of random variable. In particular, we will consider a sequence {X_1, X_2, · · ·} with values from a certain state space E. The latter is simply a set containing the possible values or states of the process. If, for instance, X_n = i, then the process is in state i at time n. Similarly, the expression P(X_1 = i) denotes the probability that the process is in state i at time point 1. The event that the process changes its state (transition) from i to j between time point one and two corresponds to the event (X_2 = j|X_1 = i), where the bar means "given that". The probability for this event to happen is denoted by P(X_2 = j|X_1 = i). In general, the probability of the transition from i to j between time point n and n + 1 is given by P(X_{n+1} = j|X_n = i). These probabilities can be collected in a probability transition matrix P with elements

    p_ij = P(X_{n+1} = j|X_n = i).

We will assume that the transition probabilities are the same for all time points, so that there is no time index needed on the left-hand side. Given that the process X_n is in a certain state, the corresponding row of the transition matrix contains the distribution of X_{n+1}, implying that the sum of the probabilities over all possible states equals one. That is, the sum over row i satisfies Σ_j p_ij = 1. Hence, the matrix P has row sums equal to one. One may also say that the probability transition matrix contains a (conditional) discrete probability distribution in each of its rows. The probability that a Markov process is in state i at time point n + 1 depends only on the state of X_n and not on any states before time point n.

## Example 1. Using the probability transition matrix to generate a Markov sequence. Suppose X_n has two states: 1 for a pyrimidine and 2 for a purine. A sequence can now be generated as follows. If X_n = 1, then we throw a die: if the outcome is lower than or equal to 5, then X_{n+1} = 1, and otherwise (outcome equals 6) X_{n+1} = 2. If X_n = 2, then we throw a coin: if the outcome equals Tail, then X_{n+1} = 1, and otherwise X_{n+1} = 2. For this process the two-by-two probability transition matrix of transitions from state i (rows) to state j (columns) equals

             to
            1    2
    from 1 (p11  p12)
         2 (p21  p22)

where p21 is the probability that the process changes from 2 to 1. This transition matrix can also be written as follows:

        (p11  p12)   (P(X1 = 1|X0 = 1)  P(X1 = 2|X0 = 1))   (5/6  1/6)
    P = (p21  p22) = (P(X1 = 1|X0 = 2)  P(X1 = 2|X0 = 2)) = (1/2  1/2)

Any probability transition matrix P can be visualized by a transition graph, where each transition probability is visualized by an arrow from state i to state j labeled with the value of p_ij. For the current example the transition graph is given by Figure 10.1: the states 1 and 2 are written within circles, and the transition probabilities 5/6, 1/6, 1/2, and 1/2 are written near the arrows.

Figure 10.1: Graph of the probability transition matrix.

To actually generate a sequence with values 1 and 2 according to the transition matrix, we may use the following.

> P <- matrix(c(5/6,1/6,0.5,0.5), 2, 2, byrow=T)
> states <- c(1,2)
> markov <- function(states,P,n){seq <- integer()
+   seq[1] <- 1
+   for(k in 1:(n-1)){seq[k+1] <- sample(states,1,replace=T,P[seq[k],])}
+   return(seq)}
> markov(states,P,30)
 [1] 1 1 1 1 1 1 1 1 1 2 2 1 1 1 1 1 1 1 2 2 1 1 1 1 1 2 2 1 1 1

The actual sampling is conducted by the function markov, which is based on the function sample. The key idea is to make the probabilities of the sampling dependent on the corresponding row of the transition matrix, selected by the row number seq[k].
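The transition probabilities can also be recovered empirically from a long simulated sequence by tabulating consecutive pairs of states; a minimal sketch with the objects above, which foreshadows the estimation in Example 4 below:

s <- markov(states, P, 10000)
trans <- table(s[-length(s)], s[-1])   # counts of transitions i -> j
prop.table(trans, margin = 1)          # row-normalized estimate of P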
## Example 2. A sequence with a large frequency of C and G. To illustrate that certain probability transition matrices imply a large amount of C and G residues, we use the following probability transition matrix.

P <- matrix(c(1/6,5/6,  0,  0,
              1/8,2/4,1/4,1/8,
                0,2/6,3/6,1/6,
                0,1/6,3/6,2/6), 4, 4, byrow=T)
rownames(P) <- colnames(P) <- StateSpace <- c("a","c","g","t")
pi0 <- c(1/4,1/4,1/4,1/4)
markov2 <- function(StateSpace,P,n){
  seq <- character()
  seq[1] <- sample(StateSpace,1,replace=T,pi0)
  for(k in 1:(n-1)){
    seq[k+1] <- sample(StateSpace,1,replace=T,P[seq[k],])}
  return(seq)
}
seq <- markov2(StateSpace,P,1000)
> table(seq)
seq
  a   c   g   t
 56 404 366 174

From the frequency table it can be observed that the majority of residues are "c" and "g". Note that the initial probabilities collected in pi0 only play a role in the generation of the first element of the sequence.

## Example 3. A sequence with a high phenylalanine frequency. Now it is possible to construct a sequence which produces the amino acid phenylalanine (F) with high probability. Recall that it is coded by the triplet TTT or TTC. We use the function getTrans of the seqinr package to translate nucleotide triplets into amino acids.

> library(seqinr)
> pi0 <- c(0,0,0.5,0.5)
> P <- matrix(c(1,0,0,0,
+               0,1,0,0,
+               0,0,0.1,0.9,
+               0,0,0.05,0.95), 4, 4, byrow=T)   # rows must sum to one
> rownames(P) <- StateSpace <- c("a","g","c","t")
> seq1 <- markov2(StateSpace,P,3000)
> table(getTrans(seq1))
  F   L   P   S
889  55   4  52

From the table it is clear that the frequency of F is the largest among the generated amino acids.

## Example 4. Estimating the probability transition matrix. We proceed with the sequence produced in Example 2 in order to illustrate estimation of the probability transition matrix.

A <- matrix(as.numeric(count(seq,2)), 4, byrow=TRUE)
rowsumA <- apply(A, 1, sum)
Phat <- sweep(A, 1, rowsumA, FUN="/")
rownames(Phat) <- colnames(Phat) <- c("a","c","g","t")
> Phat
          a         c         g         t
a 0.1607143 0.8392857 0.0000000 0.0000000
c 0.1163366 0.4801980 0.2623762 0.1410891
g 0.0000000 0.3753425 0.4575342 0.1671233
t 0.0000000 0.1436782 0.5344828 0.3218391

The numbers of transitions are counted and divided by the row totals. Note that count returns the dinucleotide frequencies in alphabetical order, so the rows and columns of A correspond to a, c, g, t. The estimated transition probabilities are quite close to the true transition probabilities; the zero transition probabilities are estimated exactly because the corresponding transitions do not occur. This estimation procedure can easily be applied to DNA sequences.

## 10.3 Properties of the transition matrix

Above, the sequence was started in a fixed state. Often, however, the probabilities of the initial states are available. That is, we have a vector π_0 with initial probabilities π_{10} = P(X_0 = 1) and π_{20} = P(X_0 = 2). Furthermore, if the transition matrix is

        (p11  p12)   (P(X1 = 1|X0 = 1)  P(X1 = 2|X0 = 1))
    P = (p21  p22) = (P(X1 = 1|X0 = 2)  P(X1 = 2|X0 = 2)),

then the probability that the process is in state 1 at time point 1 can be written as

    P(X1 = 1) = π_{10} p11 + π_{20} p21 = π_0^T p_1,     (10.1)

where p_1 is the first column of P, see Section 10.7. Note that the last equality holds by definition of matrix multiplication. In a similar manner it can be shown that P(X1 = 2) = π_0^T p_2, where p_2 is column 2 of the transition matrix P = (p_1, p_2). It can be concluded that π_0^T P = π_1^T, where π_1^T = (P(X1 = 1), P(X1 = 2)) contains the probabilities that the process is in state 1, state 2 at time point 1, respectively. This holds in general for all time points n; that is,

    π_n^T P = π_{n+1}^T.     (10.2)

Thus to obtain the probabilities of the states at time point n + 1, we can simply use matrix multiplication.[2]

[2] The transposition sign T simply transforms a column into a row.
Example 1. Matrix multiplication to compute probabilities. Suppose the initial distribution and probability transition matrix are

    pi0^T = (2/3, 1/3),    P = ( 5/6  1/6 )
                               ( 1/2  1/2 )

for states 1 and 2, respectively. Then P(X1=1) and P(X1=2), collected in pi1^T = (P(X1=1), P(X1=2)), can be computed as follows:

    pi1^T = pi0^T P = (2/3 · 5/6 + 1/3 · 1/2, 2/3 · 1/6 + 1/3 · 1/2) = (13/18, 5/18).

Using R's matrix multiplication operator %*%, the product pi0^T P can be computed directly.

> P <- matrix(c(5/6,1/6,0.5,0.5),2,2,byrow=T)
> pi0 <- c(2/3,1/3)
> pi0 %*% P
          [,1]      [,2]
[1,] 0.7222222 0.2777778

Yet another important property of the probability transition matrix concerns P(X2 = 1|X0 = 1), the probability of being in state 1 given that the process was in state 1 two time points earlier. In particular, it holds (see Section 10.7) that

    P(X2 = 1|X0 = 1) = p(2)11,    (10.3)

where the latter is element (1,1) of the matrix P^2. (For a brief definition of matrix multiplication, see Pevsner (2003, p. 56) or Wikipedia using the search string "wiki matrix multiplication".) In general, we have that

    P(Xn = j|X0 = i) = p(n)ij,

which is element (i, j) of P^n.

Example 2. Given the probability transition matrix of the previous example, the values P(X2 = j|X0 = i) for all i, j can be computed by matrix multiplication:

    P^2 = ( 5/6  1/6 )( 5/6  1/6 ) = ( (5/6)^2 + 1/6 · 1/2    5/6 · 1/6 + 1/6 · 1/2 ) = ( 28/36   8/36 )
          ( 1/2  1/2 )( 1/2  1/2 )   ( 1/2 · 5/6 + (1/2)^2    1/2 · 1/6 + (1/2)^2   )   ( 24/36  12/36 )

Obviously, such matrix multiplications can be accomplished much more conveniently on a computer.

> P %*% P
          [,1]      [,2]
[1,] 0.7777778 0.2222222
[2,] 0.6666667 0.3333333

Larger powers of P can be computed more efficiently by the methods given below.

## 10.4 Stationary distribution

A probability distribution pi satisfying

    pi^T = pi^T P

is called stationary, because the transition matrix does not change the probabilities of the states of the process. Such a distribution usually exists, is unique, and plays an essential role in the long-term behavior of the process. It sheds light on the question: what is the probability P(Xn = 1|X0 = 1) = p(n)11 as n increases without bound? That is: what is the probability that the process is in state 1, given that it started in state 1, as time increases without bound? To answer such a question we need large powers of the probability transition matrix. To compute these we use the eigendecomposition of the probability transition matrix,

    P = V Λ V^{-1},

where V is the matrix of eigenvectors and Λ the diagonal matrix of eigenvalues. The latter are usually sorted in decreasing order, so that the first (upper left) is the largest. Now the third power of the probability transition matrix can be computed as

    P^3 = V Λ V^{-1} V Λ V^{-1} V Λ V^{-1} = V Λ^3 V^{-1},

so that, indeed, in general P^n = V Λ^n V^{-1}. The latter is a computationally convenient expression, because we only have to take powers of the eigenvalues in Λ and multiply by the eigenvector matrices on the left and right. This will be illustrated below. In the long run the Markov process tends to a limit (Brémaud, 1999, p. 197), because a probability transition matrix has a unique largest eigenvalue equal to 1 with corresponding right eigenvector 1 and left eigenvector pi (or rather normalized versions of these). It follows that, as n increases without bound, P^n tends to 1 pi^T. In other words, P(Xn = j|X0 = i) = p(n)ij tends to element (i, j) of 1 pi^T, which equals element j of pi. For any initial distribution pi0, it follows that pi0^T P^n tends to pi^T.
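Instead of raising P to a large power, the stationary distribution can also be read off directly from an eigendecomposition: it is the normalized left eigenvector of P belonging to eigenvalue 1, i.e. the corresponding right eigenvector of t(P). A minimal sketch, assuming the two-state P used above:

P <- matrix(c(5/6,1/6,0.5,0.5),2,2,byrow=TRUE)
e <- eigen(t(P))                           # left eigenvectors of P
v <- e$vectors[, which.max(Re(e$values))]  # eigenvector for eigenvalue 1
pi <- v / sum(v)                           # normalize so the entries sum to one
pi                                         # (0.75, 0.25) for this P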
Example 1. Stationary distribution. To compute the eigendecomposition of a probability transition matrix P, as well as powers of it, we may use the function eigen.

> P <- matrix(c(1/6,5/6,0.5,0.5),2,2,byrow=T)
> V <- eigen(P,symmetric = FALSE)
> V$values
[1]  1.0000000 -0.3333333
> V$vectors
           [,1]       [,2]
[1,] -0.7071068 -0.8574929
[2,] -0.7071068  0.5144958

The output of the function eigen is assigned to the list V, from which the eigenvalues and eigenvectors can be extracted and printed to the screen. Now we can compute P^16, the probability transition matrix raised to the power sixteen.

> V$vec %*% diag(V$va)^(16) %*% solve(V$vec)
      [,1]  [,2]
[1,] 0.375 0.625
[2,] 0.375 0.625

Example 2. Diploid. Suppose A is a dominant allele, a a recessive one, and that we start with a heterozygote aA. From the latter we obtain the initial state probability pi0^T = (0, 1, 0) for the events (AA, aA, aa). Under pure self-fertilization, the offspring of AA is AA with probability (1, 0, 0), that of aa is aa with probability (0, 0, 1), and that of aA is (AA, aA, aa) with probabilities (1/4, 1/2, 1/4). Hence, the probability transition matrix becomes

    P = (  1    0    0  )
        ( 1/4  1/2  1/4 )
        (  0    0    1  )

We can now compute the transition probability matrix after five generations.

P <- matrix(c(1,0,0, 1/4,1/2,1/4, 0,0,1),3,3,byrow=T)
V <- eigen(P,symmetric = FALSE)
> V$vec %*% diag(V$va)^(5) %*% solve(V$vec)
         [,1]    [,2]     [,3]
[1,] 1.000000 0.00000 0.000000
[2,] 0.484375 0.03125 0.484375
[3,] 0.000000 0.00000 1.000000

The distribution of interest can be read from the second row, which is already highly homozygotic. A little more precisely, using Equation (10.2) it can be shown that

    pin^T = ( 1/2 - (1/2)^(n+1),  (1/2)^n,  1/2 - (1/2)^(n+1) ),

so that the distribution converges to (1/2, 0, 1/2).

Note that this method of raising the transition probability matrix to a large power can easily be applied to determine the stationary distribution. The idea of raising a transition matrix to a certain power is also used to construct the PAM250 matrix from the PAM1 matrix (Pevsner, 2003, p. 53) and for the construction of various BLOSUM matrices (Pevsner, 2003, p. 50-59; Deonier et al., 2005, p. 187-190).
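The closed-form expression for pin can be checked numerically against the matrix powers. A small sketch, assuming the diploid matrix P defined above; the helper name pin is ours.

P <- matrix(c(1,0,0, 1/4,1/2,1/4, 0,0,1),3,3,byrow=TRUE)
pin <- function(n) c(1/2-(1/2)^(n+1), (1/2)^n, 1/2-(1/2)^(n+1))  # closed form
pi0 <- c(0,1,0)
Pn <- diag(3)
for (i in 1:5) Pn <- Pn %*% P   # P^5 by repeated multiplication
pi0 %*% Pn                      # (0.484375, 0.031250, 0.484375)
pin(5)                          # the same values from the closed form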
## 10.5 Phylogenetic distance

Phylogenetic trees are constructed on the basis of distances between DNA sequences. These distances are computed from substitution models, which are defined by a matrix containing the rates of substitution of one state into another. The latter is usually written as a matrix Q. The corresponding probability transition matrix P can be computed by matrix exponentiation, P = exp(Q). How to do this in practice is illustrated by an example.

Example 1. From a rate matrix to a probability transition matrix. Suppose the rate matrix

           A      G      C      T
    A ( -0.60   0.20   0.20   0.20 )
Q = G (  0.20  -0.60   0.20   0.20 )
    C (  0.20   0.20  -0.60   0.20 )
    T (  0.20   0.20   0.20  -0.60 )

Thus, within a certain time period, a proportion of 0.20 of the A residues changes into G, 0.20 into C, and 0.20 into T, so that the total rate of change away from A equals 0.60, which appears with a negative sign on the diagonal. Given this rate matrix, we can find the probability transition matrix P = exp(Q) by using the function expm from the package Matrix.

library(Matrix)
Q <- 0.2 * Matrix(c(-3,1,1,1, 1,-3,1,1, 1,1,-3,1, 1,1,1,-3),4)
rownames(Q) <- colnames(Q) <- c("A","G","C","T")
P <- as.matrix(expm(Q))
> round(P,2)
     A    G    C    T
A 0.59 0.14 0.14 0.14
G 0.14 0.59 0.14 0.14
C 0.14 0.14 0.59 0.14
T 0.14 0.14 0.14 0.59

Thus the probability that the state stays at A is 0.59, that it changes from A to G is 0.14, etc.

Because all phylogenetic models are defined in terms of rate matrices, we shall concentrate on these. For instance, the rate matrix of the Jukes and Cantor (1969) (JC69) model can be written as

              A  G  C  T
         A (  ·  α  α  α )
QJC69 =  G (  α  ·  α  α )
         C (  α  α  ·  α )
         T (  α  α  α  · )

The sum of each row of a rate matrix equals zero, so from this requirement the diagonal elements of the JC69 model equal -3α. Furthermore, the off-diagonal substitution rates of the JC69 model all have the same value α. In particular, the rate of change from i to j equals that from j to i, so the rate matrix is symmetric. Under this model the probability that the sequence equals any one of the nucleotides is 1/4. This assumption, however, is unrealistic in many cases.

Transitions are substitutions within a type of nucleotide, thus purine to purine or pyrimidine to pyrimidine (A ↔ G or C ↔ T). Transversions are substitutions between nucleotide types (A ↔ T, G ↔ T, A ↔ C, and C ↔ G). In the JC69 model a transition is assumed to happen with the same probability as a transversion; that is, it does not account for the fact that transitions are more common than transversions. To cover this, more general models were proposed by Kimura (1980, 1981), commonly abbreviated K80 and K81. In terms of rate matrices these models can be written as

        ( ·  α  β  β )           ( ·  α  β  γ )
QK80 =  ( α  ·  β  β ),  QK81 =  ( α  ·  γ  β )
        ( β  β  ·  α )           ( β  γ  ·  α )
        ( β  β  α  · )           ( γ  β  α  · )

In the K80 model a change within type (a transition) occurs at rate α and a change between types (a transversion) at rate β. In the K81 model all changes occur at different though symmetric rates; the rate of the change A → G is α and equals that of G → A. If α is large, then the number of transitions is large; if both β and γ are very small, then the number of transversions is small.

A model is called "nested" in another model if it is a special case of the more general model. For instance, the K80 model is nested in the K81 model, because taking γ = β in the K81 model yields the K80 model. Similarly, the JC69 model is nested in the K80 model, because taking β = α in the K80 model yields the JC69 model.

Some examples of models with even more parameters are the Hasegawa, Kishino, and Yano (1985) (HKY85) model and the general time-reversible (GTR) model:

          (   ·    απG  βπC  βπT )          (   ·    απG  βπC  γπT )
QHKY85 =  (  απA    ·   βπC  βπT ),  QGTR = (  απA    ·   δπC  επT )
          (  βπA   βπG   ·   απT )          (  βπA   δπG   ·   ζπT )
          (  βπA   βπG  απC   ·  )          (  γπA   επG  ζπC   ·  )

The distance between DNA sequences is defined on the basis of these models. From the distances, the phylogenetic tree is computed; the neighbor-joining algorithm is used to compute a tree with the smallest total branch length.
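The nesting relations can be made concrete in R. The sketch below builds the K80 rate matrix for given α and β and shows that setting β equal to α recovers the JC69 matrix; the function name rateK80 is ours.

rateK80 <- function(alpha, beta) {
  Q <- matrix(c(0,     alpha, beta,  beta,
                alpha, 0,     beta,  beta,
                beta,  beta,  0,     alpha,
                beta,  beta,  alpha, 0), 4, 4, byrow=TRUE)
  diag(Q) <- -rowSums(Q)   # rows of a rate matrix sum to zero
  Q
}
rateK80(0.3, 0.1)          # a genuine K80 rate matrix
rateK80(0.2, 0.2)          # beta = alpha: the JC69 matrix with alpha = 0.2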
Example 2. The K81 model. To compute the rate matrix of the K81 model with α = 3/6, β = 2/6, γ = 1/6, we may use the following.

alpha <- 3/6; beta <- 2/6; gamma <- 1/6
Q <- matrix(data=NA,4,4)
Q[1,2] <- Q[2,1] <- Q[3,4] <- Q[4,3] <- alpha
Q[1,3] <- Q[3,1] <- Q[2,4] <- Q[4,2] <- beta
Q[1,4] <- Q[4,1] <- Q[2,3] <- Q[3,2] <- gamma
diag(Q) <- -(alpha + beta + gamma)
> Q
           [,1]       [,2]       [,3]       [,4]
[1,] -1.0000000  0.5000000  0.3333333  0.1666667
[2,]  0.5000000 -1.0000000  0.1666667  0.3333333
[3,]  0.3333333  0.1666667 -1.0000000  0.5000000
[4,]  0.1666667  0.3333333  0.5000000 -1.0000000
> Q <- Matrix(Q)
> P <- as.matrix(expm(Q))
> P
          [,1]      [,2]      [,3]      [,4]
[1,] 0.4550880 0.2288517 0.1767105 0.1393498
[2,] 0.2288517 0.4550880 0.1393498 0.1767105
[3,] 0.1767105 0.1393498 0.4550880 0.2288517
[4,] 0.1393498 0.1767105 0.2288517 0.4550880

By raising this probability transition matrix to a sufficiently large power, it can be observed that the stationary distribution is pi^T = (0.25, 0.25, 0.25, 0.25).

Example 3. Stationarity for the JC69 model. Let's take α = 1/4, compute the rate matrix Q of the JC69 model and the corresponding probability transition matrix P, and raise the latter to the power 50.

> library(Matrix)
> alpha <- 1/4; Q <- matrix(rep(alpha,16),4,4)
> diag(Q) <- -3 * alpha
> Q <- Matrix(Q)
> P <- as.matrix(expm(Q))
> V <- eigen(P,symmetric = FALSE)
> V$vec %*% diag(V$va)^(50) %*% solve(V$vec)
     [,1] [,2] [,3] [,4]
[1,] 0.25 0.25 0.25 0.25
[2,] 0.25 0.25 0.25 0.25
[3,] 0.25 0.25 0.25 0.25
[4,] 0.25 0.25 0.25 0.25

Hence, the stationary distribution is pi^T = (0.25, 0.25, 0.25, 0.25) (cf. Ewens & Grant, 2005, p. 477).

Example 4. Distance between two sequences according to the JC69 model. In the case of the JC69 model, the distance between two sequences is a function of the proportion of differing nucleotides, namely

    d = -(3/4) log(1 - 4p/3),

where p is the proportion of differing nucleotides between the two sequences. The pairwise distances between DNA sequences can be computed by the function dist.dna from the ape package.

> library(ape); library(seqinr)
> accnr <- paste("AJ5345",26:27,sep="")
> seqbin <- read.GenBank(accnr, species.names = TRUE, as.character = FALSE)
> dist.dna(seqbin, model = "JC69")
          AJ534526
AJ534527 0.1326839

Hence, the distance is 0.1326839. The proportion of differing nucleotides is p = 139/1143; inserting this into the distance formula reproduces the distance, as can be verified as follows.

> seq <- read.GenBank(accnr, species.names = TRUE, as.character = TRUE)
> p <- sum(seq$AJ534526 != seq$AJ534527)/1143
> d <- -log(1-4*p/3)*3/4
> d
[1] 0.1326839

Example 5. Phylogenetic tree of a series of downloaded sequences. To illustrate this, we download ten sequences of the mitochondrial cytb gene coding for cytochrome b, from Chamaea fasciata and from warblers of the genus Sylvia (Paradis, 2006). The function paste is used to quickly define the accession numbers and read.GenBank to actually download the sequences. The species names are extracted and attached to the sequences. We shall use the dist.dna function with the K80 model.

library(ape); library(seqinr)
accnr <- paste("AJ5345",26:35,sep="")
seq <- read.GenBank(accnr, species.names = TRUE)
names(seq) <- attr(seq, "species")
dist <- dist.dna(seq, model = "K80")
plot(nj(dist))

Obviously, in this manner various trees can be computed and their plots compared.
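Because dist.dna accepts several substitution models, it is easy to see how much the choice of model matters for a given pair of sequences. A minimal sketch, assuming the seqbin object downloaded in Example 4:

library(ape)
dist.dna(seqbin, model = "raw")   # the plain proportion p of differing sites
dist.dna(seqbin, model = "JC69")  # corrects p for multiple substitutions
dist.dna(seqbin, model = "K80")   # additionally allows transitions to differ from transversions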
When several different models have been defined, the question arises which of them fits the data best, relative to the number of free parameters of the model. When the models are estimated by maximum likelihood, the Akaike information criterion, AIC = -2 · loglik + 2 · (number of free parameters), can be used to select among them. The best model is the one with the smallest AIC value.

Example 6. A program called PHYML (Guindon & Gascuel, 2003) estimates a series of such models by maximum likelihood. It can be run from R with the function phymltest of the ape package, provided the PHYML executable is available in the working directory. We first write the sequences to that directory. The output of the program is collected in the object out, on which the functions plot(out) and summary(out) can be used to extract more detailed information.

> setwd("/share/home/wim/bin")
> write.dna(seq,"seq.txt", format ="interleaved")
> out <- phymltest("seq.txt",format = "interleaved", execname ="phyml_linux")
> print(out)
          nb.free.para    loglik      AIC
JC69                 1 -4605.966 9213.931
JC69+I               2 -4425.602 8855.203
JC69+G               2 -4421.304 8846.608
JC69+I+G             3 -4421.000 8848.001
K80                  2 -4423.727 8851.455
K80+I                3 -4230.539 8467.079
K80+G                3 -4224.457 8454.915
K80+I+G              4 -4223.136 8454.272
F81                  4 -4514.331 9036.662
F81+I                5 -4309.600 8629.199
F81+G                5 -4304.530 8619.060
F81+I+G              6 -4303.760 8619.519
F84                  5 -4351.164 8712.328
F84+I                6 -4112.006 8236.012
F84+G                6 -4106.568 8225.135
F84+I+G              7 -4105.500 8225.001
HKY85                5 -4333.086 8676.171
HKY85+I              6 -4102.262 8216.524
HKY85+G              6 -4097.401 8206.802
HKY85+I+G            7 -4096.624 8207.248
TN93                 6 -4323.291 8658.581
TN93+I               7 -4097.099 8208.198
TN93+G               7 -4091.461 8196.922
TN93+I+G             8 -4090.790 8197.580
GTR                  9 -4293.398 8604.795
GTR+I               10 -4084.522 8189.043
GTR+G               10 -4079.010 8178.020
GTR+I+G             11 -4078.149 8178.299

[Figure 10.2: Evaluation of the models by AIC.]

The notation "+I" and "+G" indicates whether the presence of invariant sites and/or a gamma distribution of substitution rates has been specified. It can be seen that the smallest AIC corresponds to model 27, called GTR+G. To plot the corresponding tree, we read the trees produced by PHYML (with read.tree) and extract the 27th, see Figure 10.3.

plot(tr[[27]])

In case similar sequences have slightly different lengths, they can be aligned beforehand by programs such as clustalx or clustalw.

[Figure 10.3: Phylogenetic tree of the downloaded sequences; the tips include Sylvia crassirostris, Sylvia hortensis, Sylvia leucomelaena, Sylvia lugens, Sylvia buryi, Sylvia boehmi, Sylvia subcaeruleum, Sylvia layardi, and Sylvia nisoria.]

## 10.6 Hidden Markov Models

In a hidden Markov model (HMM) there are two probability matrices: a transition matrix A for the hidden states and an emission matrix E for the observations. The generation of an observable sequence goes in two steps. First, there is a transition of a hidden Markov process; given the resulting hidden state, an observable value is emitted. We shall illustrate this by the classical example of the occasionally dishonest casino (Durbin et al., 1998, p. 18).
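The two-step generating mechanism can be written down in a few lines. The sketch below shows a single step of a generic HMM; the transition matrix A over hidden states and the emission matrix E over observations are filled with arbitrary illustrative values.

A <- matrix(c(0.9,0.1, 0.2,0.8), 2, 2, byrow=TRUE)   # hidden-state transitions
E <- matrix(c(0.5,0.5, 0.1,0.9), 2, 2, byrow=TRUE)   # emission probabilities
h <- 1                                       # the current hidden state
h.new <- sample(1:2, 1, prob = A[h,])        # step 1: hidden transition
x     <- sample(1:2, 1, prob = E[h.new,])    # step 2: emission given the new state
c(hidden = h.new, observed = x)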
Example 1. Occasionally dishonest casino. A casino uses a fair die most of the time; occasionally, however, it switches to an unfair (loaded) die. The state of the die with respect to fairness is hidden to the observer, who sees only the outcomes of the die. It is convenient to denote fair by 1 and unfair by 2. The transition probabilities of the hidden states are given by the transition matrix

    A = ( P(Di=1|Di-1=1)  P(Di=2|Di-1=1) ) = ( 0.95  0.05 )
        ( P(Di=1|Di-1=2)  P(Di=2|Di-1=2) )   ( 0.10  0.90 )

Thus, the probability is 0.95 that the die is fair at time point i given that it was fair at time point i - 1, and the probability that it switches from fair to loaded is 0.05. The probability that it switches from loaded to fair is 0.10, and the probability that it stays loaded is 0.90. With this transition matrix we can generate a sequence of hidden states, where the values 1 and 2 indicate whether the die is fair (1) or loaded (2). Given the fairness of the die, the outcome probabilities are defined by the emission matrix

    E = ( P(Oi=1|Di=1)  P(Oi=2|Di=1) ... P(Oi=6|Di=1) ) = ( 1/6   1/6   1/6   1/6   1/6   1/6 )    (10.4)
        ( P(Oi=1|Di=2)  P(Oi=2|Di=2) ... P(Oi=6|Di=2) )   ( 1/10  1/10  1/10  1/10  1/10  1/2 )

Thus, given that the die is fair, the probability of each outcome equals 1/6; given that the die is loaded, the probability of outcome 6 equals 1/2 and that of any other outcome equals 1/10.

An HMM with this transition and emission matrix can now be programmed: the hidden states are sampled from a Markov chain, and the outcomes of the die are then sampled according to the value of the hidden state (the die type).

hmmdat <- function(A,E,n){
  observationset <- 1:6
  hiddenset <- c(1,2)
  h <- markov(hiddenset,A,n)     # hidden states from the Markov chain
  x <- integer(n)
  for(k in 1:n){x[k] <- sample(observationset,1,replace=T,E[h[k],])}  # emission given the state
  out <- matrix(c(x,h),nrow=n,ncol=2,byrow=FALSE)
  return(out)
}
E <- matrix(c(rep(1/6,6),rep(1/10,5),1/2),2,6,byrow=T)   # emission matrix
A <- matrix(c(0.95,0.05,0.1,0.9),2,2,byrow=TRUE)         # transition matrix
dat <- hmmdat(A,E,100)
colnames(dat) <- c("observation","hidden_state")
rownames(dat) <- 1:100
> t(dat)
             1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
observation  5 2 3 1 6 1 3 1 1  5  6  6  2  2  3  5  4  6  1  2  4  4  3  2
hidden_state 1 1 1 1 1 1 1 1 1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1
(columns 25 to 100 omitted; the data are simulated, so each run differs)
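In the simulated data the die is fair most of the time. This is explained by the stationary distribution of the hidden chain, which also yields the long-run probability of throwing a six. A small sketch of this computation, assuming the A and E defined above:

A <- matrix(c(0.95,0.05,0.1,0.9),2,2,byrow=TRUE)
e <- eigen(t(A))
pi <- e$vectors[,1] / sum(e$vectors[,1])  # stationary distribution of the hidden chain
pi                                        # (2/3, 1/3): fair two-thirds of the time
pi[1]*(1/6) + pi[2]*(1/2)                 # long-run probability of a six: 5/18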
In certain applications in bioinformatics it is of great importance to estimate the value of the hidden state given the data. The Viterbi algorithm was developed to predict the hidden states given the data and the (estimated) transition and emission matrices. The algorithm builds up a matrix v(i, l), where i runs from 1 to the number of observations and l from 1 to the number of hidden states. The initial values are v(1,1) = 1 and v(1,l) = 0 for all l > 1. The values of v(i, l) are then defined recursively by

    v(i, l) = e(l, x(i)) · max_k { v(i-1, k) a(k, l) }.

For each row of the matrix, the state at which the maximum is attained is taken as the best prediction of the hidden state.

Example 2. The Viterbi algorithm can be programmed and applied to the hidden states of the data generated for the occasionally dishonest casino.

viterbi <- function(A,E,x) {
  v <- matrix(NA, nr=length(x), nc=dim(A)[1])
  v[1,] <- 0; v[1,1] <- 1
  for(i in 2:length(x)) {
    for (l in 1:dim(A)[1]) {v[i,l] <- E[l,x[i]] * max(v[(i-1),] * A[,l])}  # a(k,l): from k to l
  }
  return(v)
}
vit <- viterbi(A,E,dat[,1])
vitrowmax <- apply(vit, 1, function(x) which.max(x))
hiddenstate <- dat[,2]
> table(hiddenstate, vitrowmax)
           vitrowmax
hiddenstate  1  2
          1 72 11
          2 15  2
datt <- cbind(dat,vitrowmax)
colnames(datt) <- c("observation","hidden_state","predicted_state")
> t(datt)
                1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
observation     5 2 3 1 6 1 3 1 1  5  6  6  2  2  3  5  4  6  1  2  4  4  3  2
hidden_state    1 1 1 1 1 1 1 1 1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1
predicted_state 1 1 1 1 1 1 1 1 1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1
(columns 25 to 100 omitted)

The misclassification rate equals (11 + 15)/100 = 0.26, which is quite large given that we used the true transition and emission matrices. An important observation is that, after a transition of the hidden state, it takes a few observations before the prediction changes. This is caused by the recursive nature of the algorithm.

## 10.7 Appendix

The probability that the process is in state 1 at time point 1 can be computed as follows:

    P(X1 = 1) = P(X1=1, X0=1) + P(X1=1, X0=2)
              = P(X1=1|X0=1) · P(X0=1) + P(X1=1|X0=2) · P(X0=2)
              = pi0,1 p11 + pi0,2 p21
              = pi0^T p1,

where p1 is the first column of P.

In particular, it holds that

    P(X2=1|X0=1) = P(X2=1, X1=1|X0=1) + P(X2=1, X1=2|X0=1)
                 = sum_{k=1}^{2} P(X2=1, X1=k|X0=1)
                 = sum_{k=1}^{2} P(X2=1|X1=k, X0=1) · P(X1=k|X0=1)
                 = sum_{k=1}^{2} P(X2=1|X1=k) · P(X1=k|X0=1)
                 = p11 p11 + p21 p12
                 = (row 1 of P) times (column 1 of P) = p(2)11,

where the latter is element (1,1) of the matrix P^2 = P · P.
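This two-step identity is easily verified numerically. A minimal check, assuming the two-state transition matrix used earlier in this chapter:

P <- matrix(c(5/6,1/6,0.5,0.5),2,2,byrow=TRUE)
(P %*% P)[1,1]                  # element (1,1) of P^2: 0.7777778
P[1,1]^2 + P[1,2]*P[2,1]        # p11*p11 + p12*p21 gives the same value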
## 10.8 Overview and concluding remarks

The probability transition matrix has been explained and illustrated extensively, because it is a cornerstone of many ideas in bioinformatics. A thorough treatment of phylogenetics is given by Paradis (2006) and of hidden Markov models by Durbin et al. (2005).

## 10.9 Exercises

1. Visualize by a transition graph the following transition matrices. For the processes with four states, take the names of the nucleotides in the order A, G, T, and C.

    ( 1/3  2/3 )    ( 0  1 )    ( 1/4  2/4   0   1/4 )    ( 1/4  3/4   0    0  )
    ( 3/4  1/4 ),   ( 1  0 ),   ( 1/6  2/6  2/6  1/6 ),   ( 1/6  5/6   0    0  )
                                (  0   2/7  5/7   0  )    (  0    0   5/7  2/7 )
                                ( 1/8  1/8  2/8  4/8 )    (  0    0   3/8  5/8 )

2. Computing probabilities. Given the states 0 and 1 and the initial distribution and probability transition matrix

    pi0^T = (1/2, 1/2),    P = ( 3/4  1/4 )
                               ( 1/2  1/2 )

(a) Compute P(X1 = 0).
(b) Compute P(X1 = 1).
(c) Compute P(X2 = 0|X0 = 0).
(d) Compute P(X2 = 1|X0 = 0).

3. Programming GTR. Use piA = 0.15, piG = 0.35, piC = 0.35, piT = 0.15, α = 4, β = 0.5, γ = 0.4, δ = 0.3, ε = 0.2, and ζ = 4.
(a) Program the rate matrix in such a manner that it is simple to adapt for other values of the parameters.
(b) Is the transversion rate larger or smaller than the transition rate?
(c) Compute the corresponding probability transition matrix.
(d) Try to argue whether you expect a large frequency of transitions or of transversions.
(e) Generate a sequence of 99 nucleotide residues according to this Markov model.

4. Distance according to JC69.
(a) Download the sequences AJ534526 and AJ534527. Hint: use as.character = TRUE in the read.GenBank function.
(b) Compute the proportion of differing nucleotides.
(c) Use this proportion to verify the distance between these sequences according to the JC69 model.

Appendix A

Answers to exercises

1. (a) matrix, numeric, numeric, matrix, function, function, factor, standardGeneric, ExpressionSet.
(b) remove, summation, product, sequence, standard deviation, number of rows.
(c) Use R's help or the internet search key "r wiki grep" to find the following answers: searching regular expressions, returning a vector from a function applied to the rows or columns of a matrix, information on packages, making R read input from a file or URL, setting the working directory to a certain folder, printing the last commands given from the command line, giving the structure of an object.

2. gendat
(a) apply(gendat,2,sd).
(b) apply(gendat,1,sd).
(c) To order the data frame according to the gene standard deviations:
sdexprsval <- apply(gendat,1,sd)
o <- order(sdexprsval,decreasing=TRUE)
gendat[o,]
(d) gene1

3. (a) Computation of mean gene expression values:
data(golub, package = "multtest")
meangol <- apply(golub,1,mean)
(b) To order the data frame, use o <- order(meangol,decreasing=TRUE) and golub[o,].
(c) The names of the three genes with the largest mean expression value:
> golub.gnames[o[1:3],3]
[1] "U43901_rna1_s_at" "M13934_cds2_at" "X01677_f_at"
(d) Their biological names:
> golub.gnames[o[1:3],2]
[1] "37 kD laminin receptor precursor/p40 ribosome associated protein
[2] "RPS14 gene (ribosomal protein S14) extracted from Human ribosoma
[3] "GAPD Glyceraldehyde-3-phosphate dehydrogenase"
4. Computations on gene standard deviations of the Golub data.
(a) The standard deviation per gene can be computed by sdgol <- apply(golub,1,sd).
(b) The genes with standard deviation larger than 0.5 can be selected by golubsd <- golub[sdgol>0.5,].
(c) sum(sdgol>0.5) shows that the number of genes with standard deviation larger than 0.5 is 1498.

5. Oncogenes.
(a) length(agrep("^oncogene",golub.gnames[,2])) gives 42.
(b) By the script below the "Cellular oncogene c-fos" is found.
data(golub, package="multtest")
rowindex <- agrep("^oncogene",golub.gnames[,2])
oncogol <- golub[rowindex,]
oncogolub.gnames <- golub.gnames[rowindex,]
gol.fac <- factor(golub.cl,levels=0:1, labels= c("ALL","AML"))
meangol <- apply(oncogol[,gol.fac=="ALL"],1,mean)
o <- order(meangol,decreasing=TRUE)
> oncogolub.gnames[o[1:3],2]
[1] "PIM1 Pim-1 oncogene" "JUNB Jun B proto-oncogene"
[3] "Proto-oncogene BCL3 gene"
(c) meangol <- apply(oncogol[,gol.fac=="AML"],1,mean)
o <- order(meangol,decreasing=TRUE)
> oncogolub.gnames[o[1:3],2]
[1] "PIM1 Pim-1 oncogene" "JUNB Jun B proto-oncogene"
[3] "Proto-oncogene BCL3 gene"
(d) Writing results to a csv file; be aware of the correct column separation.
x <- oncogolub.gnames[o[1:10],c(3,2)]
colnames(x) <- c("probe ID","gene name")
write.csv(x,file="goluboutcsv")
write.table(x,file="goluboutnorowname",row.names=FALSE)

6. Constructing a factor.
(a) gl(2,4).
(b) gl(5,3).
(c) gl(3,5).

7. Gene means for B1 patients.
library(ALL); data(ALL)

## Answers to exercises of Chapter 4: Estimation and Inference

1. Gene CD33. Use agrep("^CD33",golub.gnames[,2]) to find row 808.
(a) The code
library(multtest); data(golub)
i <- 808
gol.fac <- factor(golub.cl,levels=0:1, labels= c("ALL","AML"))
shapiro.test(golub[i,gol.fac=="ALL"])
gives p-value = 0.592, and changing ALL into AML gives p-value = 0.2583. Hence, normality is accepted for both.
(b) var.test(golub[i,] ~ gol.fac) gives p-value = 0.1095, so equality of variances is accepted.
(c) t.test(golub[i,] ~ gol.fac, var.equal = TRUE) gives p-value = 1.773e-09, so equality of means is rejected.
(d) Yes, t = -7.9813 is quite extreme.

2. Gene MYBL2 (V-myb avian myeloblastosis viral oncogene homolog-like 2). Take i <- 1788.
(a) Use boxplot(golub[i,] ~ gol.fac) to observe from the boxplot that the null hypothesis of no experimental effect quite certainly holds.
(b) t.test(golub[i,] ~ gol.fac, var.equal = TRUE) gives p-value = 0.8597, so the null hypothesis of equal means is accepted.

3. (a) shapiro.test(golub[i,gol.fac=="ALL"]) gives p-value = 1.318e-07, so normality is rejected.
(b) wilcox.test(golub[i,] ~ gol.fac) gives p-value = 7.923e-05, so equality of means is rejected. Note that the p-value from Grubbs' test on the ALL expression values is 0.00519, so the null hypothesis of no outliers is rejected. Nevertheless, the Welch two-sample t-test also rejects the null hypothesis of equal means; its t-value equals -4.3026 and is quite extreme.

4. Zyxin.
y <- as.data.frame(table(read.GenBank(c("BC002323.2"))))$Freq
> chisq.test(x, p=y/sum(y))
        Chi-squared test for given probabilities
data:  x
X-squared = 0.0277, df = 3, p-value = 0.9988

5. Gene selection.
ptg <- apply(golub, 1, function(x) t.test(x ~ gol.fac, alternative = c("greater"))$p.value)
golub.gnames[order(ptg)[1:10],2]
6. Antigens.
library(multtest); data(golub)
gol.fac <- factor(golub.cl,levels=0:1, labels= c("ALL","AML"))

## Answers to exercises of Chapter 5: Linear Models

1.
library(ALL); data(ALL)
ALLB <- ALL[,ALL$BT %in% c("B","B1","B2","B3","B4")]
> table(ALLB$BT)
 B B1 B2 B3 B4  T T1 T2 T3 T4
 5 19 36 23 12  0  0  0  0  0
psw <- apply(exprs(ALLB), 1, function(x) shapiro.test(residuals(lm(x ~ ALLB$BT)))$p.value)
library(lmtest)
pbp <- apply(exprs(ALLB), 1, function(x)
  as.numeric(bptest(lm(x ~ ALLB$BT),studentize = FALSE)$p.value))
> sum(psw > 0.05)
[1] 6847
> sum(pbp > 0.05)
[1] 10057
> sum(psw > 0.05 & pbp > 0.05)
[1] 6262

2.
> panova <- apply(exprs(ALLB), 1, function(x) anova(lm(x ~ ALLB$BT))$Pr[1])
> featureNames(ALLB)[panova<0.000001]
 [1] "1125_s_at"  "1126_s_at"  "1134_at"    "1389_at"    "1500_at"
 [6] "1866_g_at"  "1914_at"    "205_g_at"   "31472_s_at" "31615_i_at"
[11] "31616_r_at" "33358_at"   "35614_at"   "35991_at"   "36873_at"
[16] "37809_at"   "37902_at"   "38032_at"   "38555_at"   "39716_at"
[21] "40155_at"   "40268_at"   "40493_at"   "40661_at"   "40763_at"
[26] "41071_at"   "41139_at"   "41448_at"   "873_at"
> pkw <- apply(exprs(ALLB), 1, function(x) kruskal.test(x ~ ALLB$BT)$p.value)
> featureNames(ALLB)[pkw<0.000001]
[1] "1389_at" "1866_g_at" "38555_at" "40155_at" "40268_at"
> panovasmall <- panova < 0.001
> pkwsmall <- pkw < 0.001
> table(panovasmall,pkwsmall)
           pkwsmall
panovasmall FALSE  TRUE
      FALSE 12172    38
      TRUE    124   291

There are 124 gene expressions significant by ANOVA that are not significant by Kruskal-Wallis, and only 38 significant by Kruskal-Wallis that are non-significant according to ANOVA. The tests agree on the large majority of gene expressions.

3. Finding the ten best genes among the gene expressions of B-cell ALL patients.

> sort(panova)[1:10]
     1914_at      1389_at     38555_at     33358_at     40268_at     39716_at
1.466523e-14 5.891702e-14 4.873245e-10 1.117406e-09 1.145502e-09 4.748615e-09
    40763_at     37809_at     36873_at    1866_g_at
5.256410e-09 2.155457e-08 2.402379e-08 3.997065e-08
> sort(pkw)[1:10]
     1389_at     40268_at     38555_at    1866_g_at     40155_at      1914_at
2.348192e-09 7.764046e-08 1.123068e-07 2.335279e-07 6.595926e-07 1.074525e-06
   1125_s_at   40662_g_at     38032_at     40661_at
1.346907e-06 1.384281e-06 1.475170e-06 1.719456e-06
npanova <- names(sort(panova)[1:10])
npkw <- names(sort(pkw)[1:10])
> intersect(npanova,npkw)
[1] "1914_at" "1389_at" "38555_at" "40268_at" "1866_g_at"

4.
x <- matrix(rnorm(10000*9,0,1),ncol=9)   # 10,000 genes, 9 arrays of pure noise
> a <- gl(3,3)
> panova <- apply(x, 1, function(x) anova(lm(x ~ a))$Pr[1])
> sum(panova<0.05)
[1] 514

The number of false positives is 514; the expected number is alpha · n = 0.05 · 10,000 = 500, which is quite close to the observed value.

A matrix with differences between three groups of gene expression values:
sigma <- 1; n <- 10000
data <- cbind(matrix(rnorm(n*3,0,sigma),ncol=3), matrix(rnorm(n*3,1,sigma), ncol = 3),
  matrix(rnorm(n*3,2,sigma), ncol = 3))
a <- gl(3,3)
panova <- apply(data, 1, function(x) anova(lm(x ~ a))$Pr[1])
> sum(panova<0.05)
[1] 3757
> pkw <- apply(data, 1, function(x) kruskal.test(x ~ a)$p.value)
> sum(pkw<0.05)
[1] 1143

Thus the number of true positives from ANOVA is 3757 and the number of false negatives is 6243. For the Kruskal-Wallis test there are 1143 true positives and 8857 false negatives. This can be improved by increasing the number of gene expression values per group.

## Answers to exercises of Chapter 6: Micro Array Analysis

1. Gene filtering on normality per group of B-cell ALL patients.
library("genefilter")
data(ALL, package = "ALL")
ALLB <- ALL[,ALL$BT %in% c("B1","B2","B3","B4")]
f1 <- function(x) (shapiro.test(x)$p.value > 0.05)
sel1 <- genefilter(exprs(ALLB[,ALLB$BT=="B1"]), filterfun(f1))
sel2 <- genefilter(exprs(ALLB[,ALLB$BT=="B2"]), filterfun(f1))
sel3 <- genefilter(exprs(ALLB[,ALLB$BT=="B3"]), filterfun(f1))
sel4 <- genefilter(exprs(ALLB[,ALLB$BT=="B4"]), filterfun(f1))
selected <- sel1 & sel2 & sel3 & sel4
library(limma)
x <- matrix(as.integer(c(sel2,sel3,sel4)),ncol = 3,byrow=FALSE)
colnames(x) <- c("sel2","sel3","sel4")
vc <- vennCounts(x, include="both")
vennDiagram(vc)

From the Venn diagram: 137 pass filter 2 but not the others, 510 pass filters 2 and 3 but not 4, 1019 pass filters 2 and 4 but not 3, 5598 pass filters 2, 3, and 4, etc.

2. Analysis of gene expressions of B-cell ALL patients using Limma.

library("ALL"); library("limma"); library("annaffy"); library(hgu95av2.db)
data(ALL)
ALLB <- ALL[,ALL$BT %in% c("B1","B2","B3","B4")]
design.ma <- model.matrix(~0 + factor(ALLB$BT))
colnames(design.ma) <- c("B1","B2","B3","B4")
cont.ma <- makeContrasts(B2-B1,B3-B2,B4-B3,levels=factor(ALLB$BT))
fit <- lmFit(ALLB, design.ma)
fit1 <- contrasts.fit(fit, cont.ma)
fit1 <- eBayes(fit1)
tab <- topTable(fit1, coef=2, number=20, adjust.method="fdr")
anntable <- aafTableAnn(as.character(tab$ID), "hgu95av2", aaf.handler())
saveHTML(anntable, "ALLB1234.html", title = "B-cell ALL of stage 1,2,3,4")

3. Finding a row number: grep("1389_at",row.names(exprs(ALL))).

4. Remission (cure) from acute lymphocytic leukemia (ALL).

library(ALL); data(ALL)
table(pData(ALL)$remission)
remis <- which(pData(ALL)$remission %in% c("CR","REF"))
ALLrem <- ALL[,remis]
remfac <- factor(pData(ALLrem)$remission)
pano <- apply(exprs(ALLrem),1,function(x) t.test(x ~ remfac)$p.value)
> sum(pano<0.001)
[1] 45
library(hgu95av2.db)
names <- featureNames(ALLrem)[pano<.001]
ALLremsel <- ALLrem[names,]
symb <- mget(names, env = hgu95av2SYMBOL)
genenames <- mget(names,hgu95av2GENENAME)
listofgenenames <- as.list(hgu95av2GENENAME)
unlistednames <- unlist(listofgenenames[names],use.names=F)
> grep("p53",unlistednames)
[1] 12 21
> length(unique(unlistednames))
[1] 36

5. Remission achieved.
library(ALL); data(ALL)
ALLCRREF <- ALL[,which(ALL$CR %in% c("CR","REF"))]
pano <- apply(exprs(ALLCRREF),1,function(x) t.test(x ~ ALLCRREF$CR)$p.value)
> sum(pano<0.0001)
[1] 11
> featureNames(ALLCRREF)[pano<.0001]
[1] "1472_g_at" "1473_s_at" "1475_s_at" "1863_s_at" "34098_f_at" "36574_at"
library("hgu95av2.db")
affynames <- featureNames(ALLCRREF)[pano<.0001]
genenames <- mget(affynames, env = hgu95av2GENENAME)
> grep("oncogene",genenames)
[1] 1 2 3
affytot <- unique(featureNames(ALLCRREF))
genenamestot <- mget(affytot, env = hgu95av2GENENAME)
> length(grep("oncogene",genenamestot))
[1] 239
> length(genenamestot)
[1] 12625
> dat <- matrix(c(12625,239,11,3),2,byrow=TRUE)
> fisher.test(dat)
        Fisher's Exact Test for Count Data
data:  dat
p-value = 0.002047
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
  2.562237 54.915642
sample estimates:
odds ratio
  14.39959
6. Gene filtering of ALL data.

library("ALL")
data("ALL")
table(ALL$BT)
ALLT23 <- ALL[,which(ALL$BT %in% c("T2","T3"))]
library(genefilter)
f1 <- function(x) (shapiro.test(x)$p.value > 0.05)
f2 <- function(x) (t.test(x ~ ALLT23$BT)$p.value < 0.05)
sel1 <- genefilter(exprs(ALLT23[,ALLT23$BT=="T2"]), filterfun(f1))
sel2 <- genefilter(exprs(ALLT23[,ALLT23$BT=="T3"]), filterfun(f1))
sel3 <- genefilter(exprs(ALLT23), filterfun(f2))
> sum(sel1 & sel2 & sel3)
[1] 905
> sum(sel1 & sel2)
[1] 9388
> sum(sel3)
[1] 1204

7. Stages of B-cell ALL in the ALL data.
library("ALL"); library("limma")
allB <- ALL[,which(ALL$BT %in% c("B1","B2","B3","B4"))]
facB123 <- factor(allB$BT)
cont.ma <- makeContrasts(B2-B1,B3-B2,B4-B3, levels=facB123)
design.ma <- model.matrix(~ 0 + facB123)
colnames(design.ma) <- c("B1","B2","B3","B4")
fit <- lmFit(allB, design.ma)
fit1 <- contrasts.fit(fit, cont.ma)
fit1 <- eBayes(fit1)
> topTable(fit1, coef=2,5,adjust.method="BH")
           ID      logFC  AveExpr         t      P.Value    adj.P.Val         B
6048 35991_at  0.5964481 4.144598  6.624128 2.578836e-09 0.0000325578 10.842989
3909 33873_at  0.5707770 7.217570  6.083524 2.891823e-08 0.0001825464  8.625253
5668 35614_at  1.7248509 5.663477  5.961231 4.946078e-08 0.0002081474  8.132884
6776 36711_at -2.3664712 7.576108 -5.759565 1.187487e-07 0.0003054110  7.329631
7978 37902_at  0.8470235 4.258491  5.742783 1.276579e-07 0.0003054110  7.263298
> sum(fit1$p.value<0.05)
[1] 4328

8.
library(GEOquery); library(limma); library(hgu95av2.db); library(annaffy)
gds486 <- getGEO("GDS486"); eset486 <- GDS2eSet(gds486,do.log2=T)
nrmissing <- apply(exprs(eset486), 1, function(x) sum(is.na(x)))
eset486sel <- eset486[nrmissing<1,]

9.
ALLP <- ALL[,ALL$mol.biol %in% c("ALL1/AF4","BCR/ABL","NEG")]
neg <- which(ALLP$mol.biol=="NEG")
aal1 <- which(ALLP$mol.biol=="ALL1/AF4")
bcr <- which(ALLP$mol.biol=="BCR/ABL")
orderpat <- c(neg,aal1,bcr)
ALLPo <- ALLP[,c(neg,aal1,bcr)]
facnr <- c(rep(1,74),rep(2,10),rep(3,37))
nab.fac <- factor(facnr,levels=1:3, labels= c("NEG","ALL1/AF4","BCR/ABL"))
panova <- apply(exprs(ALLPo), 1, function(x) anova(lm(x ~ nab.fac))$Pr[1])

10.
library("GO"); library("annotate"); library("hgu95av2")
GOTerm2Tag <- function(term) {
  GTL <- eapply(GOTERM, function(x) {grep(term, x@Term, value=TRUE)})
  Gl <- sapply(GTL, length)
  names(GTL[Gl>0])
}
> GOTerm2Tag("protein-tyrosine kinase")
[1] "GO:0004713"
probes <- hgu95av2GO2ALLPROBES$"GO:0004713"
> sum(panova[probes]<0.05)
[1] 86
> sum(panova[probes]<1)
[1] 320
> sum(panova<0.05)
[1] 2581
> sum(panova<1)
[1] 12625
> fisher.test(matrix(c(12625, 2581, 320, 86), 2, byrow = TRUE))
        Fisher's Exact Test for Count Data
data:  matrix(c(12625, 2581, 320, 86), 2, byrow = TRUE)
p-value = 0.03222
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
 1.019848 1.679625
sample estimates:
odds ratio
  1.314569

The odds ratio differs significantly from one: there are relatively more significant p-values among the protein-tyrosine kinase genes.

## Answers to exercises of Chapter 7: Cluster Analysis and Trees
1. Cluster analysis on the "Zyxin" expression values of the Golub et al. (1999) data.

data(golub, package="multtest")
data <- data.frame(golub[2124,])
gol.fac <- factor(golub.cl,levels=0:1, labels= c("ALL","AML"))
stripchart(golub[2124,] ~ gol.fac, pch=as.numeric(gol.fac))
plot(hclust(dist(data,method="euclidian"),method="single"))
initial <- matrix(tapply(golub[2124,],gol.fac,mean), nrow = 2, ncol=1)
cl <- kmeans(data, initial, nstart = 10)
table(cl$cluster,gol.fac)
n <- nrow(data); nboot <- 1000
boot.cl <- matrix(0,nrow=nboot,ncol = 2)
for (i in 1:nboot){
  dat.star <- data[sample(1:n,replace=TRUE),]
  cl <- kmeans(dat.star, initial, nstart = 10)
  boot.cl[i,] <- c(cl$centers[1,],cl$centers[2,])
}
> quantile(boot.cl[,1],c(0.025,0.975))
       2.5%       97.5%
-1.07569310 -0.03344292
> quantile(boot.cl[,2],c(0.025,0.975))
    2.5%    97.5%
0.731493 1.784468

2. Close to CCND3 Cyclin D3.

library("genefilter"); data(golub, package = "multtest")
closeg <- genefinder(golub, 1042, 10, method = "euc", scale = "none")
golub.gnames[closeg[[1]][[1]],2]
boxplot(golub[394,] ~ gol.fac)

3. MCM3.

data(golub, package = "multtest")
x <- golub[2289,]; y <- golub[2430,]
plot(x,y)
which.min(y)   # the plot suggests the smallest y as the outlier
> cor.test(x[-21],y[-21])
        Pearson's product-moment correlation
data:  x[-21] and y[-21]
t = 10.6949, df = 35, p-value = 1.42e-12
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
 0.7690824 0.9341905       # much smaller
sample estimates:
      cor
 0.875043                  # much larger than 0.6376217
nboot <- 1000; boot.cor <- matrix(0,nrow=nboot,ncol = 1)
data <- matrix(c(x[-21],y[-21]),ncol=2,byrow=FALSE)
for (i in 1:nboot){
  dat.star <- data[sample(1:nrow(data),replace=TRUE),]
  boot.cor[i,] <- cor(dat.star)[2,1]}
> mean(boot.cor)
[1] 0.8725835               # very similar to cor.test
> quantile(boot.cor[,1],c(0.025,0.975))
     2.5%     97.5%
0.7755743 0.9324625         # very similar to cor.test

4. Cluster analysis on part of the Golub data.

library(multtest); data(golub)
gol.fac <- factor(golub.cl,levels=0:1, labels= c("ALL","AML"))
o1 <- grep("oncogene",golub.gnames[,2])
plot(hclust(dist(golub[o1,],method="euclidian"),method="single"))
o2 <- grep("antigene",golub.gnames[,2])
plot(hclust(dist(golub[o2,],method="euclidian"),method="single"))
o3 <- grep("receptor",golub.gnames[,2])
plot(hclust(dist(golub[o3,],method="euclidian"),method="single"))

5. Principal components analysis on part of the ALL data.

library(ALL); data(ALL)
ALLB <- ALL[,ALL$BT %in% c("B1","B2","B3")]
panova <- apply(exprs(ALLB), 1, function(x) anova(lm(x ~ ALLB$BT))$Pr[1])
ALLBsp <- ALLB[panova<0.001,]
> dim(exprs(ALLBsp))
[1] 499  78
> min(cor(exprs(ALLBsp)))
[1] 0.5805595
> eigen(cor(exprs(ALLBsp)))$values[1:5]
[1] 65.2016203  2.9652965  2.4781567  0.7556439  0.6040647
data <- exprs(ALLBsp); p <- ncol(data); n <- nrow(data); nboot <- 1000
eigenvalues <- array(dim=c(nboot,p))
for (i in 1:nboot){dat.star <- data[sample(1:n,replace=TRUE),]
  eigenvalues[i,] <- eigen(cor(dat.star))$values}
> for (j in 1:p) print(quantile(eigenvalues[,j],c(0.025,0.975)))
    2.5%    97.5%
63.43550 66.77785
    2.5%    97.5%
2.575413 3.530350
    2.5%    97.5%
2.081573 2.889933
     2.5%     97.5%
0.6475809 0.9942871   # hence, the first three are significant!
     2.5%     97.5%
0.5067404 0.7482680
biplot(princomp(data,cor=TRUE),pc.biplot=T,cex=0.5,expand=0.8)
6. Some correlation matrices.

eigen(matrix(c(1,-0.8,-0.8,1),nrow=2))
eigen(matrix(c(1,0.8,0.8,0.8,1,0.8,0.8,0.8,1),nrow=3))
eigen(matrix(c(1,-0.5,-0.5,-0.5,1,-0.5,-0.5,-0.5,1),nrow=3))
> 2.6/3 * 100
[1] 86.66667
> eigen(matrix(c(1,0.8,0.8,0.8,1,0.8,0.8,0.8,1),nrow=3))$vectors
           [,1]       [,2]       [,3]
[1,] -0.5773503  0.8164966  0.0000000
[2,] -0.5773503 -0.4082483 -0.7071068
[3,] -0.5773503 -0.4082483  0.7071068

## Answers to exercises of Chapter 8: Classification Methods

1. Classification tree of the Golub data, using recursive partitioning from rpart.

library(multtest); data(golub)
gol.fac <- factor(golub.cl,levels=0:1, labels= c("ALL","AML"))
maxgol <- apply(golub[,gol.fac=="ALL"], 1, function(x) max(x))
mingol <- apply(golub[,gol.fac=="AML"], 1, function(x) min(x))
sum(maxgol < mingol)
> which.min(maxgol - mingol)
[1] 2124
> golub.gnames[2124,]
[1] "4847" "Zyxin" "X95735_at"
> boxplot(golub[2124,] ~ gol.fac)
library(rpart)
gol.rp <- rpart(gol.fac ~ golub[2124,], method="class", cp=0.001)
plot(gol.rp, branch=0,margin=0.1); text(gol.rp, digits=3, use.n=TRUE)
> grep("Gdf5",golub.gnames[,2])
[1] 2058
gol.rp <- rpart(gol.fac ~ golub[2058,], method="class", cp=0.001)
plot(gol.rp, branch=0,margin=0.1); text(gol.rp, digits=3, use.n=TRUE)
gol.rp <- rpart(gol.fac ~., data.frame(t(golub)), method="class", cp=0.001)
plot(gol.rp, branch=0,margin=0.1); text(gol.rp, digits=3, use.n=TRUE)

2. Sensitivity versus specificity.
(a) library(multtest); library(ROCR); data(golub)
golub.clchanged <- -golub.cl + 1
pred <- prediction(golub[1042,], golub.clchanged)
perf <- performance(pred, "sens", "spec")
plot(perf)
(b) The function is essentially the same.
(c) Use auc as before.

3. Comparing classification methods.

library(rpart)
predictors <- matrix(rnorm(100*4,0,1),100,4)
colnames(predictors) <- letters[1:4]
groups <- gl(2,50)
simdata <- data.frame(groups,predictors)
rp <- rpart(groups ~ a + b + c + d,method="class",data=simdata)
predicted <- predict(rp,type="class")
table(predicted,groups)
plot(rp, branch=0,margin=0.1); text(rp, digits=3, use.n=TRUE)
> table(predicted,groups)
         groups
predicted  1  2
        1 41 12
        2  9 38
library(e1071)
svmest <- svm(predictors, groups, type = "C-classification", kernel = "linear")
svmpred <- predict(svmest, predictors, probability=TRUE)
> table(svmpred, groups)
       groups
svmpred  1  2
      1 31 25
      2 19 25
library(nnet)
nnest <- nnet(groups ~ ., data = simdata, size = 5, maxit = 500, decay = 0.01)
pred <- predict(nnest, type = "class")
> table(pred, groups)   # prints the confusion matrix
    groups
pred  1  2
   1 45 10
   2  5 40

The misclassification rates of rpart, svm, and nnet are, respectively, 21/100, 44/100, and 15/100. If we increase the number of predictors, then the misclassification rate decreases.

4. Prediction of achieved remission.

library(ALL); library(hgu95av2.db); library(rpart); data(ALL)
ALLrem <- ALL[,which(pData(ALL)$remission %in% c("CR","REF"))]
remfac <- factor(pData(ALLrem)$remission)
pano <- apply(exprs(ALLrem),1,function(x) t.test(x ~ remfac)$p.value)
names <- featureNames(ALLrem)[pano<.001]
ALLremsel <- ALLrem[names,]
data <- data.frame(t(exprs(ALLremsel)))
all.rp <- rpart(remfac ~., data, method="class", cp=0.001)
plot(all.rp, branch=0,margin=0.1); text(all.rp, digits=3, use.n=TRUE)
rpart.pred <- predict(all.rp, type="class")
> table(rpart.pred,remfac)
          remfac
rpart.pred CR REF
       CR  93   1
       REF  6  14
> 7/(93+1+6+14)
[1] 0.06140351
> mget(c("1840_g_at","36769_at","1472_g_at","854_at"), env = hgu95av2GENENAME)
$'1840_g_at'
[1] NA
$'36769_at'
[1] "retinoblastoma binding protein 5"
$'1472_g_at'
[1] "v-myb myeloblastosis viral oncogene homolog (avian)"
$'854_at'
[1] "B lymphoid tyrosine kinase"

5. Classification tree for Ecoli.

colnames(ecoli) <- c("SequenceName","mcg","gvh","lip","chg","aac","alm1","alm2","ecclass")
ecolisel <- ecoli[which(ecoli$ecclass %in% c("cp","im","pp")),]
ecolisel$ecclass <- factor(ecolisel$ecclass, levels=c("cp","im","pp"))
library(rpart)
rpfit <- rpart(ecolisel$ecclass ~ mcg + gvh + lip + aac + alm1 + alm2, data=ecolisel)
plot(rpfit, branch=1,margin=0.1); text(rpfit, digits=3, use.n=TRUE)
title(main = "rpart fit of the Ecoli classes cp, im, and pp")
predictedclass <- predict(rpfit, type="class")
table(predictedclass,ecolisel$ecclass)   # the predictors used are alm1 and gvh
> (1+2+7+4)/length(ecolisel$ecclass)
[1] 0.05166052

## Answers to exercises of Chapter 9: Analyzing Sequences

1. Writing to a FASTA file.

choosebank("genbank"); library(seqinr)
query("ccnd3hs","sp=homo sapiens AND k=ccnd3@")
ccnd3 <- sapply(ccnd3hs$req, getSequence)
x1 <- DNAStringSet(c2s(ccnd3[[1]]))
write.XStringSet(x1, file="ccnd3.fa", format="fasta", width=80)
ccnd3c2sn <- sapply(ccnd3, c2s)
x1 <- DNAStringSet(ccnd3c2sn)
write.XStringSet(x1, file="ccnd3n.fa", format="fasta", width=80)

An alternative would be to use the write.dna function of the ape package.

2. Dotplot of sequences.

seq1 <- sample(c("A","G","C","T"),100,rep=TRUE,prob=c(0.1,0.4,0.4,0.1))
seq2 <- sample(c("A","G","C","T"),100,rep=TRUE,prob=c(0.1,0.4,0.4,0.1))
par(mfrow=c(1,2))
dotPlot(seq1, seq2, main = "Dot plot of different random sequences\nwsize = 1")
dotPlot(seq1, seq1, main = "Dot plot of equal random sequences\nwsize = 1")
par(mfrow=c(1,1))
par(mfrow=c(1,2))
dotPlot(seq1, seq2, main = "Dot plot of different random sequences\nwsize = 3")
dotPlot(seq1, seq1, main = "Dot plot of equal random sequences\nwsize = 3")
par(mfrow=c(1,1))
par(mfrow=c(1,2))
dotPlot(seq1, seq1, main = "Dot plot of equal random sequences\nwsize = 3")
dotPlot(seq1, seq1[100:1], main = "Dot plot of a sequence against its reverse\nwsize = 3")
par(mfrow=c(1,1))
x <- c("RPLWVAPDGHIFLEAFSPVYK")
y <- c("RPLWVAPDGHIFLEAFSPVYK")
z <- c("PLWISPSDGRIILESFSPLAE")
choosebank("genbank"); library(seqinr)
query("ccnd3hs","sp=homo sapiens AND k=ccnd3@")
ccnd3 <- sapply(ccnd3hs$req, getSequence)
sapply(ccnd3hs$req, getName)
ccnd3prot <- sapply(ccnd3hs$req, getTrans)
dotPlot(ccnd3prot[[1]], s2c("EEEVFPLAMN"), main = "Dot plot of two proteins\nwsize = 1")
dotPlot(ccnd3prot[[1]], ccnd3prot[[2]], main = "Dot plot of two proteins\nwsize = 1")
dotPlot(s2c(x), s2c(z), main = "Dot plot of two proteins\nwsize = 1")

3. Local alignment.

library(seqinr); library(Biostrings); data(BLOSUM50)
x <- s2c("HEAGAWGHEE"); y <- s2c("PAWHEAE")
s <- BLOSUM50[y,x]; d <- 8
F <- matrix(data=NA,nrow=(length(y)+1),ncol=(length(x)+1))
F[1,] <- 0; F[,1] <- 0
rownames(F) <- c("",y); colnames(F) <- c("",x)
for (i in 2:(nrow(F)))
  for (j in 2:(ncol(F)))
    {F[i,j] <- max(c(0,F[i-1,j-1]+s[i-1,j-1],F[i-1,j]-d,F[i,j-1]-d))}
> max(F)
[1] 28
4. Probability of a more extreme alignment score.

library(seqinr); library(Biostrings); data(BLOSUM50)
randallscore <- c(1,1)
for (i in 1:1000) {
  x <- c2s(sample(rownames(BLOSUM50),7, replace=TRUE))
  y <- c2s(sample(rownames(BLOSUM50),10, replace=TRUE))
  randallscore[i] <- pairwiseAlignment(AAString(x), AAString(y), substitutionMatrix = "BLOSUM50",
    gapOpening = 0, gapExtension = -8, scoreOnly = TRUE)
}
> sum(randallscore>1)/1000
[1] 0.003
> plot(density(randallscore))

5. Prochlorococcus marinus.

library(seqinr)
choosebank("genbank")
query("ccmp","AC=AE017126 OR AC=BX548174 OR AC=BX548175")

6.
> ccnd3prot <- sapply(ccnd3hs$req, getTrans)
> table(ccnd3prot[[1]])
 *  A  C  D  E  F  G  H  I  K  L  M  N  P  Q  R  S  T  V  W  Y
 1 31 12 12 21  6 14  7 10 10 41  9  1 17 16 22 19 18 15  3  8
> table(ccnd3prot[[2]])
 *  A  C  D  E  F  G  H  I  K  L  M  N  P  Q  R  S  T  V  W  Y
 1 30 12 12 21  6 14  7 10 10 41  9  1 17 16 22 20 18 15  3  8
# Hence, there is only one difference!
> which(!ccnd3prot[[1]]==ccnd3prot[[2]])
[1] 259

7. Conserved region.

ID   XRODRMPGMNTB; BLOCK
AC   PR00851A; distance from previous block=(52,131)
DE   Xeroderma pigmentosum group B protein signature
BL   adapted; width=21; seqs=8; 99.5%=985; strength=1287
XPB_HUMAN|P19447   ( 74) RPLWVAPDGHIFLEAFSPVYK  54
XPB_MOUSE|P49135   ( 74) RPLWVAPDGHIFLEAFSPVYK  54
P91579             ( 80) RPLYLAPDGHIFLESFSPVYK  67
XPB_DROME|Q02870   ( 84) RPLWVAPNGHVFLESFSPVYK  79
RA25_YEAST|Q00578  (131) PLWISPSDGRIILESFSPLAE 100
Q38861             ( 52) RPLWACADGRIFLETFSPLYK  71
O13768             ( 90) PLWINPIDGRIILEAFSPLAE 100
O00835             ( 79) RPIWVCPDGHIFLETFSAIYK  86

library(Biostrings); data(BLOSUM50)
x <- c("RPLWVAPDGHIFLEAFSPVYK")
y <- c("RPLWVAPDGHIFLEAFSPVYK")
z <- c("PLWISPSDGRIILESFSPLAE")
> x == y
[1] TRUE
> pairwiseAlignment(AAString(x), AAString(y), substitutionMatrix = "BLOSUM50")
Global Pairwise Alignment
1: RPLWVAPDGHIFLEAFSPVYK
2: RPLWVAPDGHIFLEAFSPVYK
Score: 154
> pairwiseAlignment(AAString(x), AAString(z), substitutionMatrix = "BLOSUM50")
Global Pairwise Alignment
1: RPLWVAP-DGHIFLEAFSPVYK
2: -PLWISPSDGRIILESFSPLAE
Score: 85

8. Plot of CG proportion from Celegans.
(a) Produce a plot of the CG proportion of chromosome I of Celegans (Celegans.UCSC.ce2) along a window of 100 nucleotides, for the first 10,000 nucleotides.
library(seqinr)
source("http://bioconductor.org/biocLite.R")
biocLite("BSgenome.Celegans.UCSC.ce2")
library(BSgenome.Celegans.UCSC.ce2)
GCperc <- double()
for (i in 1:10000) GCperc[i] <- GC(s2c(as.character(Celegans$chrI[i:(i+100)])))
plot(GCperc,type="l")
(b) A binding sequence of the enzyme EcoRV is the subsequence GATATC. How many exact matches does chromosome I of Celegans have?
> subseq <- "gatatc"
> countPattern(subseq, Celegans$chrI, max.mismatch = 0)
[1] 3276
> length(s2c(as.character(Celegans$chrI))) * (1/4)^6
[1] 3681.759

9. Plot of codon usage.

data(ec999)
ec999.uco <- lapply(ec999, uco, index="eff")
df <- as.data.frame(lapply(ec999.uco, as.vector))
row.names(df) <- names(ec999.uco)
global <- rowSums(df)
title <- "Codon usage in 999 E. coli coding sequences"
dotchart.uco(global, main = title)
choosebank("genbank"); library(seqinr)
query("ccnd3hs","sp=homo sapiens AND k=ccnd3@")
ccnd3 <- sapply(ccnd3hs$req, getSequence)
ccnd3.uco <- lapply(ccnd3, uco, index="eff")
df <- as.data.frame(lapply(ccnd3.uco, as.vector))
row.names(df) <- names(ccnd3.uco)
global <- rowSums(df)
title <- "Codon usage in ccnd3 homo sapiens coding sequences"
dotchart.uco(global, main = title)

## Answers to exercises of Chapter 10: Markov Models

1. Visualize by a transition graph the given transition matrices; consult your teacher.

2. Computing probabilities. The answers are provided by the following.

> P <- matrix(c(3/4,1/4,1/2,1/2),2,2,byrow=T)
> pi0 <- c(1/2,1/2)
> pi0 %*% P
      [,1]  [,2]
[1,] 0.625 0.375
> P %*% P
       [,1]   [,2]
[1,] 0.6875 0.3125
[2,] 0.6250 0.3750
> P
     [,1] [,2]
[1,] 0.75 0.25
[2,] 0.50 0.50

3. Programming GTR. Use piA = 0.15, piG = 0.35, piC = 0.35, piT = 0.15, α = 4, β = 0.5, γ = 0.4, δ = 0.3, ε = 0.2, and ζ = 4.
(a) Program the rate matrix in such a manner that it is simple to adapt for other parameter values.
library(Matrix)
piA <- 0.15; piG <- 0.35; piC <- 0.35; piT <- 0.15
alpha <- 4; beta <- 0.5; gamma <- 0.4; delta <- 0.3
epsilon <- 0.2; zeta <- 4
Q <- matrix(data=NA,4,4)
Q[1,2] <- alpha * piG; Q[1,3] <- beta * piC;    Q[1,4] <- gamma * piT
Q[2,1] <- alpha * piA; Q[2,3] <- delta * piC;   Q[2,4] <- epsilon * piT
Q[3,1] <- beta * piA;  Q[3,2] <- delta * piG;   Q[3,4] <- delta * piT
Q[4,1] <- gamma * piA; Q[4,2] <- epsilon * piG; Q[4,3] <- zeta * piC
diag(Q) <- 0
diag(Q) <- -apply(Q,1,sum)
Q <- Matrix(Q)
> Q
4 x 4 Matrix of class "dgeMatrix"
       [,1]   [,2]   [,3]   [,4]
[1,] -1.635  1.400  0.175  0.060
[2,]  0.600 -0.735  0.105  0.030
[3,]  0.075  0.105 -0.225  0.045
[4,]  0.060  0.070  1.400 -1.530
(b) The transversion rate is smaller than the transition rate, because the off-diagonal blocks (the between-type substitutions) have lower values.
(c) The probability transition matrix is
> P <- as.matrix(expm(Q))
> P
           [,1]       [,2]      [,3]       [,4]
[1,] 0.32199057 0.51569256 0.1392058 0.02311107
[2,] 0.22097363 0.64908639 0.1115233 0.01841667
[3,] 0.05203969 0.09913633 0.8263804 0.02244359
[4,] 0.04621015 0.08457814 0.6397090 0.22950271
(e) A sequence of 99 residues can be generated as follows.
rownames(P) <- colnames(P) <- StateSpace <- c("a","g","c","t")
pi0 <- c(1/4,1/4,1/4,1/4)
markov2 <- function(StateSpace,P,n){
  seq <- character()
  seq[1] <- sample(StateSpace,1,replace=T,pi0)
  for(k in 1:(n-1)){
    seq[k+1] <- sample(StateSpace,1,replace=T,P[seq[k],])}
  return(seq)
}
seq <- markov2(StateSpace,P,99)

4. Distance according to JC69.
(a) Download the sequences AJ534526 and AJ534527 (note the as.character argument of read.GenBank):
accnr <- paste("AJ5345",26:27,sep="")
seqbin <- read.GenBank(accnr, species.names = TRUE, as.character = FALSE)
seq <- read.GenBank(accnr, species.names = TRUE, as.character = TRUE)
(b) Two ways of computing the proportion of differing nucleotides are
dist.dna(seqbin, model = "raw")
p <- sum(seq$AJ534526 != seq$AJ534527)/1143
(c) Simply insert the obtained p into the formula: d <- -log(1-4*p/3)*3/4.
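For Exercise 1 of this chapter, a transition graph can also be drawn programmatically rather than by hand. A minimal sketch using the igraph package (one choice among several graph-plotting packages), applied to the two-state matrix of Example 1 of Section 10.2:

library(igraph)
P <- matrix(c(5/6,1/6,0.5,0.5),2,2,byrow=TRUE,dimnames=list(1:2,1:2))
g <- graph_from_adjacency_matrix(P, mode="directed", weighted=TRUE)  # arrows for nonzero pij
plot(g, edge.label = round(E(g)$weight, 2), edge.curved = TRUE)      # label arrows with pij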
Appendix B

References

Bain, L.J. & Engelhardt, M. (1992). Introduction to probability and mathematical
statistics. Pacific Grove: Duxbury.
Becker, R.A., Chambers, J.M. & Wilks, A.R. (1988). The new S language. New
Jersey: Bell Telephone Laboratories.
Beran, R. & Srivastava, M.S. (1985). Bootstrap tests and confidence regions for
functions of a covariance matrix. The Annals of Statistics, 13, 95-115.
Beran, R. & Ducharme, G.R. (1991). Asymptotic theory for bootstrap methods in
statistics. Montreal: Centre de recherche mathématique.
Bonnet, E., Wuyts, J., Rouzé, P. & Van de Peer, Y. (2004). Evidence that microRNA
precursors, unlike other non-coding RNAs, have lower folding free energies than
random sequences. Bioinformatics, 20, 2911-2917.
Breiman, L., Friedman, J.H., Olshen, R.A. & Stone, C.J. (1984). Classification and
Regression Trees. Monterey: Wadsworth.
Breusch, T.S. & Pagan, A.R. (1979). A simple test for heteroscedasticity and random
coefficient variation. Econometrica, 47, 1287-1294.
Chambers, J.M. & Hastie, T.J. (eds.) (1992). Statistical Models in S. Pacific Grove:
Wadsworth & Brooks/Cole.
Charif, D., Humblot, L., Lobry, J.R., Necsulea, A., Palmeira, L. & Penel, S. (2008).
SeqinR 2.0-1: a contributed package to the R project for statistical computing
devoted to biological sequences retrieval and analysis. URL:
http://seqinr.r-forge.r-project.org/.
Chiaretti, S., Li, X., Gentleman, R., Vitale, A., Vignetti, M., Mandelli, F., Ritz, J.
& Foa, R. (2004). Gene expression profile of adult T-cell acute lymphocytic
leukemia identifies distinct subsets of patients with different response to therapy
and survival. Blood, 103(7).
Cleveland, W.S. & Devlin, S.J. (1988). Locally weighted regression: An approach to
regression analysis by local fitting. Journal of the American Statistical
Association, 83, 596-610.
Clopper, C.J. & Pearson, E.S. (1934). The use of confidence or fiducial limits
illustrated in the case of the binomial. Biometrika, 26, 404-413.
Dalgaard, P. (2002). Introductory Statistics with R. New York: Springer.
DeRisi, J.L., Iyer, V.R. & Brown, P.O. (1997). Exploring the metabolic and genetic
control of gene expression on a genomic scale. Science, 278, 680-686.
Deonier, R.C., Tavaré, S. & Waterman, M.S. (2005). Computational Genome
Analysis. New York: Springer.
Dudoit, S., Fridlyand, J. & Speed, T.P. (2002). Comparison of discrimination
methods for the classification of tumors using gene expression data. Journal of
the American Statistical Association, 97, 77-87.
Durbin, R., Eddy, S., Krogh, A. & Mitchison, G. (2005). Biological Sequence
Analysis. Cambridge: Cambridge University Press.
Efron, B. (1979). Bootstrap methods: Another look at the jackknife. The Annals of
Statistics, 7, 1-26.
Efron, B. & Tibshirani, R.F. (1993). An Introduction to the Bootstrap. New York:
Chapman & Hall.
Everitt, B.S. & Hothorn, T. (2006). A Handbook of Statistical Analyses Using R.
New York: Chapman & Hall.
Ewens, W.J. & Grant, G.R. (2005). Statistical Methods in Bioinformatics. New
York: Springer.
Faraway, J. (2004). Linear Models with R. Boca Raton, FL: Chapman & Hall/CRC.
Feller, W. (1967). An Introduction to Probability Theory and its Applications (3rd
ed.). New York: Wiley.
Gasteiger, E., Hoogland, C., Gattiker, A., Duvaud, S., Wilkins, M.R., Appel, R.D.
& Bairoch, A. (2005). Protein identification and analysis tools on the ExPASy
server. In: Walker, J.M. (ed.), The Proteomics Protocols Handbook. Humana
Press, pp. 571-607.
Gentleman, R., Huber, W., Carey, V. & Irizarry, R.A. (2005). Bioinformatics and
Computational Biology Solutions Using R and Bioconductor. New York: Springer.
Golub, G.H. & Van Loan, C.F. (1983). Matrix Computations. Baltimore: The Johns
Hopkins University Press.
Golub et al. (1999). Molecular classification of cancer: class discovery and class
prediction by gene expression monitoring. Science,
286, 531-537.
Gouy, M., Milleret, F., Mugnier, C., Jacobzone, M. & Gautier, C. (1984). ACNUC:
a nucleic acid sequence data base and analysis system. Nucleic Acids Research,
12, 121-127.
Grubbs, F.E. (1950). Sample criteria for testing outlying observations. Annals of
Mathematical Statistics, 21(1), 27-58.
Guindon, S. & Gascuel, O. (2003). A simple, fast, and accurate algorithm to estimate
large phylogenies by maximum likelihood. Systematic Biology, 52, 696-704.
Hahne, F., Huber, W., Gentleman, R. & Falcon, S. (2008). Bioconductor Case
Studies. New York: Springer.
Hartigan, J.A. & Wong, M.A. (1979). A k-means clustering algorithm. Applied
Statistics, 28, 100-108.
Horn, R.A. & Johnson, C.R. (1985). Matrix Analysis. Cambridge: Cambridge
University Press.
Huber, P.J. (1964). Robust estimation of a location parameter. The Annals of
Mathematical Statistics, 35, 73-101.
Huber, P.J. (1981). Robust Statistics. New York: Wiley.
Ihaka, R. & Gentleman, R. (1996). R: a language for data analysis and graphics.
Journal of Computational and Graphical Statistics, 5, 299-314.
Johnson, N.L., Kotz, S. & Kemp, A. (1992). Univariate Discrete Distributions. New
York: John Wiley & Sons.
Jolliffe, I.T. (2002). Principal Components Analysis. New York: Springer.
Jurečková, J. & Picek, J. (2006). Robust Statistical Methods with R. New York:
Chapman & Hall.
Kyte, J. & Doolittle, R.F. (1982). A simple method for displaying the hydropathic
character of a protein. Journal of Molecular Biology, 157, 105-132.
Laub, M.T., McAdams, H.H., Feldblyum, T., Fraser, C.M. & Shapiro, L. (2000).
Global analysis of the genetic network controlling a bacterial cell cycle. Science,
290, 2144-2148.
Lehmann, E.L. (1999). Elements of Large Sample Theory. New York: Springer.
Little, R.J.A. & Rubin, D.B. (1987). Statistical Analysis with Missing Data. New
York: Wiley.
Luenberger, D.G. (1969). Optimization by Vector Space Methods. New York: Wiley.
Maindonald, J. & Braun, J. (2003). Data Analysis and Graphics Using R.
Cambridge: Cambridge University Press.
Marazzi, A. (1993). Algorithms, Routines, and S Functions for Robust Statistics.
Pacific Grove, CA: Wadsworth & Brooks/Cole.
Miller, I. & Miller, M. (1999). John E. Freund's Mathematical Statistics. New
Jersey: Prentice Hall.
Palmeira, L., Guéguen, L. & Lobry, J.R. (2006). UV-targeted dinucleotides are not
depleted in light-exposed prokaryotic genomes. Molecular Biology and Evolution,
23, 2214-2219.
Paradis, E. (2006). Analysis of Phylogenetics and Evolution with R. New York:
Springer.
Pevsner, J. (2003). Bioinformatics and Functional Genomics. New York: Wiley-Liss.
Pollard, D. (1981). Strong consistency of K-means clustering. Annals of Statistics,
9, 135-140.
Press, W.H., Flannery, B.P., Teukolsky, S.A. & Vetterling, W.T. (1992). Numerical
Recipes in Pascal. New York: Cambridge University Press.
R Development Core Team (2008). R: A language and environment for statistical
computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN
3-900051-07-0, URL http://www.R-project.org.
Ramsey, P.H. (1980). Exact type I error rates for robustness of Student's t-test with
unequal variances. Journal of Educational Statistics, 5, 337-349.
Rao, C.R. & Toutenburg, H. (1995). Linear Models. New York: Springer.
Ripley, B.D. (1996). Pattern Recognition and Neural Networks. Cambridge:
Cambridge University Press.
Roberts, R.J., Vincze, T., Posfai, J. & Macelis, D. (2007). REBASE: enzymes and
genes for DNA restriction and modification. Nucleic Acids Research, 35.
Rogner, U.C., Wilke, K., Steck, E., Korn, B. & Poustka, A. (1995). The melanoma
antigen gene (MAGE) family is clustered in the chromosomal band Xq28.
Genomics, 29(3), 725-731.
Rosner, B. (2000). Fundamentals of Biostatistics. Pacific Grove: Duxbury.
Royston, P. (1995). A remark on Algorithm AS 181: The W test for normality.
Applied Statistics, 44, 547-551.
Saitou, N. & Nei, M. (1987). The neighbor-joining method: a new method for
reconstructing phylogenetic trees. Molecular Biology and Evolution, 4, 406-425.
Samuels, M.L. & Witmer, J.A. (2003). Statistics for the Life Sciences. New Jersey:
Pearson Education.
Smyth, G.K. (2004). Linear models and empirical Bayes methods for assessing
differential expression in microarray experiments. Statistical Applications in
Genetics and Molecular Biology, 3(1), Article 3.
Smyth, G.K. (2005). Limma: linear models for microarray data. In: Gentleman, R.,
Carey, V., Dudoit, S., Irizarry, R. & Huber, W. (eds.), Bioinformatics and
Computational Biology Solutions using R and Bioconductor. New York: Springer,
pp. 397-420.
Stephens, M.A. (1986). Tests based on EDF statistics. In: D'Agostino, R.B. &
Stephens, M.A. (eds.), Goodness-of-Fit Techniques. New York: Marcel Dekker.
Tessarz, A.S., Weiler, S., Zanzinger, K., Angelisova, P., Horejsi, V. & Cerwenka, A.
(2007). Non-T cell activation linker (NTAL) negatively regulates
TREM-1/DAP12-induced inflammatory cytokine production in myeloid cells.
Journal of Immunology, 178(4), 1991-1999.
Therneau, T.M. & Atkinson, E.J. (1997). An introduction to recursive partitioning
using RPART routines. Technical report, Mayo Foundation.
Venables, W.N. & Ripley, B.D. (2000). S Programming. New York: Springer.
Venables, W.N. & Ripley, B.D. (2002). Modern Applied Statistics with S (4th ed.).
New York: Springer.
Wang, Y.Y. (1971). Probabilities of the type I errors of the Welch tests for the
Behrens-Fisher problem. Journal of the American Statistical Association, 66,
605-608.
Wichert, S., Fokianos, K. & Strimmer, K. (2004). Identifying periodically expressed
transcripts in microarray time series data. Bioinformatics, 20, 5-20.
Zuker, M. & Stiegler, P. (1981). Optimal computer folding of large RNA sequences
using thermodynamics and auxiliary information. Nucleic Acids Research, 9,
133-148.
Zuker, M. (2003). Mfold web server for nucleic acid folding and hybridization
prediction. Nucleic Acids Research, 31, 3406-3415.

Index
aggregation, 93
Anderson-Darling test, 64
annotation, 103
background correction, 92
Binomial test, 58
BLOSUM50, 181
bootstrap, 126, 131
box-and-whiskers plot, 21
calculator, 4
chi-squared distribution, 37
chi-squared test, 59
classification tree, 150
confusion table, 158
construct a sequence, 5
correlation coefficient, 129
data matrix, 6
data vector, 5
density, 41
design matrix, 99
dinucleotide, 172
distance, 116
F-distribution, 40
F-test, 57
Fisher test, 62
frequency table, 18
GenBank, 18
gene filtering, 95
gene ontology, 106
GO, 106
gol.fac, 12
Golub et al. (1999) data, 10
grep, 12
help, 3, 4
histogram, 19
homoscedasticity, 83
install R, 1
installing Bioconductor, 2
installing R, 2
interquartile range, 25
k-means cluster analysis, 123
Kruskal-Wallis test, 85
linear model, 74
matrix computations, 8
mean, 24
median, 24
misclassification rate, 158
mismatch, 89
model matrix, 99
Needleman-Wunsch, 180
neural network, 162
normal distribution, 35
normality of residuals, 83
normality test, 63
normalization, 92
one sample t-test, 52
one sided hypothesis, 48
one-way analysis of variance, 77
packages, 2
perfect match, 89
phylogenetic tree, 199
predictive power, 147
principal components analysis, 132
quantile-quantile plot, 23
quartile, 20
query language, 169
rma, 93
running scripts, 13
sample variance, 25
sensitivity, 147
Shapiro-Wilk test, 63
significance level, 48
Z-test, 48
https://electronics.stackexchange.com/questions/631933/how-does-the-pfc-boost-circuit-work
[ "# How does the PFC boost circuit work?\n\nI am reading TI's PFC Circuit Basics, and I am having a hard time understanding how the active PFC boost circuit works. I understand that ordinarily the current waveform is not in-phase with the voltage waveform and that PFC corrects that somehow, but I can't fully visualize the waveform of the current flowing through the inductor.\n\nWhat does the actual waveform look like?", null, "I saw this figure that shows inductor ripple current, so I think maybe this could be what it looks like (almost like a boost converter ripple current that follows the sinusoidal arc of the input voltage.) It is also unclear to me how the voltage gets boosted as well - maybe if I had a good simulation to look at I could understand better.", null, "• As explained here, there are two distinct ways of having a poor power factor, reactive loads (phase-shifted current waveform) and nonlinear loads (non-sinusoidal current waveform). Power supplies fall into the latter category, and this is what the PFC circuit is fixing. Aug 20, 2022 at 21:19\n• Is your lack of understanding in how a regular old boost converter works, or how a PFC boost converter works? A PFC boost is basically a regular old boost converter that's controlled such that the current it draws from the line is sinusoidal and in phase with the line voltage. Aug 20, 2022 at 21:41\n• Hey Tim, yeah I guess it is not intuitive to me how the 1) Making sure the current is in-phase with the line voltage and the 2) regulating a constant output voltage are both accomplished at the same time. Aug 20, 2022 at 21:45\n\nTo visualize the inductor current in a PFC inductor, the best is to build a prototype or run a cycle-by-cycle simulation. The input voltage will be the grid sine wave and a dedicated circuitry will actuate a power switch on and off according to a control law. For a boost PFC working in a self-relaxing (no clock) boundary mode conduction (abbreviated BCM or CrM for critical conduction mode), the controller maintains a constant on-time along the input period to deliver nominal power and it naturally performs power factor correction. The frequency is changing depending where you are on the sine wave because magnetization and demagnetization times vary with $$\\V_{in}\\$$.\n\nTo look at the PFC inductor current, you can resort to an equation-based graph, build a prototype or simply run a cycle-by-cycle simulation. The one below is excerpted from my 60+ SIMPLIS templates that you can freely download from my webpage:", null, "The simulation time takes 6-7 mn on my machine for a 800-ms run. If you zoom on a 100-ms period of time, you obtain the below graph. The input current is nicely sinusoidal, with some crossover distortion at the 0-V input region. Techniques exist to improve this by artificially increasing the on-time in this area:", null, "You can see how the duty ratio and the frequency evolve along a grid period. I purposely increased the inductor to a few mH to reduce the switching frequency so that you can have a better look at the inductor current:", null, "The averaged input current is the filtered version of the averaged inductor current. In a BCM-operated converter, the average inductor current is the peak current divided by two. Therefore, if the inductor peak current envelope follows a sinusoidal shape, the input current will also be sinusoidal. 
In voltage-mode controlled BCM, the controller blindly imposes a fixed on-time (adjusted with P_out) without sensing the input voltage (this is the simulated schematic I have shown), while in current-mode control the inductor peak current is set via a scaled-down image of the rectified voltage and the error voltage via a multiplier (see the venerable MC33262 from Motorola).

• Okay, this makes more sense to me now. It seems like over the course of one cycle (half of a rectified 60 Hz/50 Hz sine wave) the PFC controller does a lot to accomplish both objectives I laid out in other responses. Thank you! Aug 21, 2022 at 18:29

> I understand that ordinarily the current waveform is not in phase with the voltage waveform and that PFC corrects that

That is for a completely different scenario; that scenario being when you are trying to power-factor-correct an unknown (usually inductive) load such as a motor.

The PFC circuit in your question is trying to ensure that it creates a high-voltage DC rail (such as 400 volts) and does not do so in a way that creates a lot of harmonic distortion in the current it draws from the AC supply. In short, it tries to present a load to the AC supply that "looks" resistive.

A regular bridge rectifier and smoothing capacitor create horrendous current distortion and harmonics so, when the transferred power gets high, legislation says you must use PFC. That is what this is all about.

Then, whatever connects to the PFC's high-voltage DC output can draw whatever load current it wants (within reason) and it can do so knowing it cannot make the current into the PFC correction circuit appear to be anything other than resistive.

The two scenarios are different.

> I saw this figure that shows inductor ripple current

Such figures are not very accurate at portraying the main subtleties. For instance, the falling slope of the current will always be at a more constant rate because the output voltage of the inductor is supplying (or topping up) the DC supply and, once established, that DC supply is around 400 volts and constant. Maybe try this (that I drew for another answer):

[figure: boost PFC inductor current with charge and transfer slopes]

The main thing to observe is that the green charge lines vary in slope as the AC voltage rises and falls over each half-cycle of AC. Also observe that the red transfer-current slope is more constant (a dumbed-down version).

• So, the green slope is proportional to the input AC voltage and the red slope is less variable.

Of course, as Tim Williams points out in a comment below this answer, it's a bit more complex than this, because the red-trace slope reduces as the input voltage rises towards its peak. The inductor only has to supply the difference between the output DC voltage and the input rectified waveform and, when the input voltage is closer to its peak, the voltage difference between input and DC output is smaller and thus the slope of the red line is shallower.

The waveforms above dumb down the truth to make it easier to follow for the uninitiated. Dumbing down is something that might help in this circumstance for the OP. But it comes with a price and, hopefully, it will be appreciated rather than scorned by people who are "in the know". Maybe this added picture (less dumbed-down) will help:

[figure: inductor current with the transfer slope flattening near the sine peak]

> It is also unclear to me how the voltage gets boosted as well

If you need to know how boost converters work, then I suggest you ask a question about those before trying to figure out how the front-end PFC circuit works. Of course, they are very related but you need to grasp basic boost converter operation first.

• Hi Andy, thanks for your answer. To clarify my question about the voltage getting boosted: it is not clear to me how 1) regulating the line current (making the load look resistive) and 2) maintaining a constant output voltage are able to be done at the same time. What does the switching behavior of the MOSFET look like (duty cycle)? Aug 20, 2022 at 22:45
• @RGBEngineer you can infer the switching duty cycle from the picture; when the inductor is charging (green) the MOSFET is on; when the inductor is releasing its stored energy to the output capacitor (red transfer current), the MOSFET is off. The line current (from the AC) is not regulated, i.e., it is not constant; the current taken is made proportional to the rectified AC voltage and hence, although it is switching at high frequency, the average shape of the current is a half sine wave. Aug 20, 2022 at 22:58
• How are 1) regulating the line current (making the load look resistive) and 2) maintaining a constant output voltage able to be done at the same time? Aug 20, 2022 at 23:01
• That it can be done is the important thing here but it's tricky to get right (never perfect but good enough). I've previously answered a question you raised on boost converters and you just have to imagine that the DC input to a boost converter can vary from a low value (a few volts) to a large value (maybe 300-odd volts) and that the controller has to sort things out correctly. Of course, it's easy to imagine this if the DC supply changed gradually but, when the DC supply rises from zero to several hundred volts and back to zero in 20 ms, it's a little trickier. Stick at it is my advice. Aug 20, 2022 at 23:06
• @TimWilliams you are absolutely correct and, despite me dumbing down the answer to reach the level needed, I became unstuck. I shall add a few words to clear this up. Aug 21, 2022 at 0:03
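The green/red slopes in that answer follow directly from the inductor law v = L·di/dt: during the on-time the slope is V_in(t)/L, and during the off-time it is (V_in(t) - V_out)/L. A hedged Python sketch; V_out, V_peak and L are made-up values, not figures from the answer:

```python
import math

# Illustrative values only.
VOUT = 400.0       # regulated DC output, V
VPEAK = 325.0      # rectified mains peak, V
L = 200e-6         # boost inductor, H

def slopes(vin: float) -> tuple[float, float]:
    """Inductor current slopes (A/s) while the MOSFET is on and off."""
    rise = vin / L               # "green" charge slope, grows with vin
    fall = (vin - VOUT) / L      # "red" transfer slope, negative, flattens near the peak
    return rise, fall

for frac in (0.1, 0.5, 0.9, 1.0):
    vin = VPEAK * frac
    rise, fall = slopes(vin)
    print(f"vin = {vin:6.1f} V: rise = {rise/1e6:+.2f} A/us, fall = {fall/1e6:+.2f} A/us")

# Near the sine peak (vin -> 325 V) the fall slope shrinks toward (325 - 400)/L,
# which is exactly the flattening described in the answer above.
```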
Of course, they are very-related but, you need to grasp basic boost converter operation first.\n\n• Hi Andy, thanks for your answer. To clarify my question about the voltage getting boosted, I meant that it is not clear to me how 1) regulating the line current (making the load look resistive) and 2) maintaining a constant output voltage are able to be done at the same time. What does the switching behavior of the MOSFET look like (duty cycle)? Aug 20, 2022 at 22:45\n• @RGBEngineer you can infer the switching duty cycle from the picture; when the inductor is charging (green) the MOSFET is on; when the inductor is releasing it's stored energy to the output capacitor (red transfer current), the MOSFET is off. The line current (from the AC) is not regulated i.e. it is not constant; the current taken is made proportional to the rectified AC voltage and hence, although it is switching at high frequency, the average shape of the current is a half-sine-wave. Aug 20, 2022 at 22:58\n• How are 1) regulating the line current (making the load look resistive) and 2) maintaining a constant output voltage are able to be done at the same time? Aug 20, 2022 at 23:01\n• It can be done is the important thing here but, it's tricky to get right (never perfect but good enough). I've previously answered a question you have raised on boost converters and you just have to imagine that the DC input to a boost converter can vary from a low value (a few volts) to a large value (maybe 300 odd volts) and that the controller has to sort-things-out correctly. Of course, it's easy to imagine this if the DC supply changed gradually but, when the DC supply rises from zero to several hundred volts and back to zero in 20 ms, it's a little trickier. Stick at it is my advice. Aug 20, 2022 at 23:06\n• @TimWilliams you are absolutely correct and despite me dumbing down the answer to reach the level needed I became unstuck. I shall add a few words to clear this up. Aug 21, 2022 at 0:03\n\nA greatly simplified example of how a PFC converter works is to consider that a wide input range switching supply or DC-DC converter draws more current with the minimum input voltage (typically 95 VAC) and less current with maximum input voltage (typically 265 VAC). If you remove the input storage capacitors, the switching circuit will adjust its PWM over each half-cycle of rectified sine wave, so that it provides the same regulated output voltage, but input current (and PWM) will be highest when the voltage is low, and will be lowest when input voltage is highest. Energy storage, filtering, and regulation will be performed by the switching circuit.\n\nHere is a simulation of a PFC circuit using an LT1249 IC. It shows a comparison between a conventional FWB rectifier circuit and the equivalent using PFC. Note: I just saw that they both used 600 Hz AC input. I'll try again with 50 Hz to see if there is any significant difference.", null, "I had to tweak some circuit components to get a reasonable response, but it looks OK at 50 Hz. Input current surges are definitely lower.", null, "" ]
[ null, "https://i.stack.imgur.com/pz4Cr.png", null, "https://i.stack.imgur.com/T9QTO.png", null, "https://i.stack.imgur.com/E4hBj.png", null, "https://i.stack.imgur.com/sgoHz.jpg", null, "https://i.stack.imgur.com/NjSYA.png", null, "https://i.stack.imgur.com/O6CUO.png", null, "https://i.stack.imgur.com/PhCm0.png", null, "https://i.stack.imgur.com/ORiXl.png", null, "https://i.stack.imgur.com/pezNS.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95330334,"math_prob":0.9002898,"size":2915,"snap":"2023-14-2023-23","text_gpt3_token_len":617,"char_repetition_ratio":0.11370663,"word_repetition_ratio":0.0,"special_character_ratio":0.20308748,"punctuation_ratio":0.07334526,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9709415,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-04-01T13:22:03Z\",\"WARC-Record-ID\":\"<urn:uuid:2c3abd61-497b-4e79-9886-351c3ec9f140>\",\"Content-Length\":\"190055\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8403a1a1-bcda-4139-bec9-f4f58e86731d>\",\"WARC-Concurrent-To\":\"<urn:uuid:5fefc4f6-0694-44d8-9aa8-5dfde5c37eeb>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://electronics.stackexchange.com/questions/631933/how-does-the-pfc-boost-circuit-work\",\"WARC-Payload-Digest\":\"sha1:VYGLQAOFWVQAPVVUG43GBS6NBCR5VOL6\",\"WARC-Block-Digest\":\"sha1:VA4NGDD3B6VB6T2HLICQUJNHU4AUSPGP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296950030.57_warc_CC-MAIN-20230401125552-20230401155552-00098.warc.gz\"}"}
https://dup4.cn/Leetcode/problems/412.fizz-buzz/
[ "# 412.fizz-buzz\n\n## Statement\n\n• Difficulty: Easy\n• Tag: `数学` `字符串` `模拟`\n\n• `answer[i] == \"FizzBuzz\"` 如果 `i` 同时是 `3``5` 的倍数。\n• `answer[i] == \"Fizz\"` 如果 `i``3` 的倍数。\n• `answer[i] == \"Buzz\"` 如果 `i``5` 的倍数。\n• `answer[i] == i` (以字符串形式)如果上述条件全不满足。\n\n``````输入:n = 3\n\n``````\n\n``````输入:n = 5\n\n``````\n\n``````输入:n = 15\n\n• `1 <= n <= 104`\n\n• Difficulty: Easy\n• Tag: `Math` `String` `Simulation`\n\nGiven an integer `n`, return a string array `answer` (1-indexed) where:\n\n• `answer[i] == \"FizzBuzz\"` if `i` is divisible by `3` and `5`.\n• `answer[i] == \"Fizz\"` if `i` is divisible by `3`.\n• `answer[i] == \"Buzz\"` if `i` is divisible by `5`.\n• `answer[i] == i` (as a string) if none of the above conditions are true.\n\nExample 1:\n\n``````Input: n = 3\nOutput: [\"1\",\"2\",\"Fizz\"]\n``````\n\nExample 2:\n\n``````Input: n = 5\nOutput: [\"1\",\"2\",\"Fizz\",\"4\",\"Buzz\"]\n``````\n\nExample 3:\n\n``````Input: n = 15\nOutput: [\"1\",\"2\",\"Fizz\",\"4\",\"Buzz\",\"Fizz\",\"7\",\"8\",\"Fizz\",\"Buzz\",\"11\",\"Fizz\",\"13\",\"14\",\"FizzBuzz\"]\n``````\n\nConstraints:\n\n• `1 <= n <= 104`\n\n## Solution\n\n``````from typing import List\n\nclass Solution:\ndef fizzBuzz(self, n: int) -> List[str]:\ndef run(x: int) -> str:\ns = \"\"\n\nif x % 3 == 0:\ns = s + \"Fizz\"\nif x % 5 == 0:\ns = s + \"Buzz\"\n\nif len(s) == 0:\ns = str(x)\n\nreturn s\n\nreturn list(map(run, range(1, n + 1)))\n``````" ]
https://file.scirp.org/Html/4-9801208_6635.htm
[ "Crosstalk Prediction for Three Conductors Nonuniform Transmission Lines: Theoretical Approach & Numerical Simulation\n\nJournal of Electromagnetic Analysis and Applications\nVol. 3  No. 8 (2011) , Article ID: 6635 , 8 pages DOI:10.4236/jemaa.2011.38051\n\nCrosstalk Prediction for Three Conductors Nonuniform Transmission Lines: Theoretical Approach & Numerical Simulation\n\nKachout Mnaouer, Bel Hadj Tahar Jamel, Choubani Fethi", null, "Research Unit Systems of Telecommunications (6’Tel), SUP’COM, University of the Carthage, Ariana, Tunisia.\n\nEmail: [email protected]\n\nReceived June 15th, 2011; revised July 14th, 2011; accepted July 22nd, 2011.\n\nKeywords: Nonuniform Transmission Lines, Near-End Crosstalk, Far-End Crosstalk\n\nABSTRACT\n\nIn this paper the crosstalk between nonuniform transmission lines is examined. Firstly, methods for prediction of crosstalk between microstrip transmission lines are reviewed. Classical coupled transmission line theory is used for uniform lines and cannot be used for nonuniform transmission lines. Secondly, equations are derived which can be solved to obtain formulas for the near-end and far-end crosstalk for nonuniform transmission lines. Finally, an example is worked which illustrates the crosstalk between three conductor nonuniform transmission lines. Obtained theoretical results were compared with simulations data. Comparison results shown that theoretical and simulation results are approximately the same.\n\n1. Introduction\n\nModern trends in circuit designs such as operating at higher frequencies , lowering threshold voltages, and shrinking device geometries have made accurate prediction of electromagnetic compatibility (EMC) an indispensable component in the design cycle [2,3]. Susceptibility to electromagnetic interference (EMI) can severely degrade the signal integrity of the system [4,5]. One of the main sources for the EMI is the coupling between incident EM field and the electrical interconnects, which serve as antennas at high frequencies .\n\nThe problem of characterizing the coupling between interconnects are typically related to multiconductor transmission lines (MTLs) and coupled non-uniform transmission lines (NTLs). Coupled NTLs are widely used in RF and microwave circuits [7,8]. Coupled NTLs are encountered in many interconnects and packaging structures. Also, some of NTLs structures such as the tapered ones, have found important applications in narrowband microwave circuits.\n\nThe differential equations describing coupled NTLs have non-constant matrices, so except for a few special cases no analytical solution exists for them. Some methods such as decoupling [9,10], finite difference Taylor’s series expansion , Fourier series expansion , the equivalent sources method and the method of moments have been introduced to analyze coupled NTLs. In some of these methods such as finite difference and Taylor’s series expansion, it is necessary to use an optimization process to satisfy terminal conditions. This is due to the nature of terminal conditions in coupled NTLs, which are two-point type. In the other word, the analysis of NTLs is a Boundary Value Problem (BVP) naturally.\n\nIn this paper, we propose an approach to analyze coupled NTLs. The approach presented in this regard is based on the concept of cascading many short sections, which relies on using the analytical closed-form exponential matrix solution, available for MTLs only. 
In contrast to the special case of a uniform MTL, an NTL is characterized by per-unit-length parameter matrices that are not constant, but rather vary with the spatial dimension in the telegrapher's equations. This fact makes handling the line more challenging, since a closed-form solution cannot be obtained analytically except in special situations. In this work we develop rigorous equations to predict crosstalk between coupled NTLs.

This paper is organized as follows. Section 2 presents a brief background on formulating MTLs. In Section 3 we derive the literal, or symbolic, solution of the coupled NTL equations for three-conductor nonuniform transmission lines. Section 4 presents numerical validations, comparing theoretical results with simulations, and offers some concluding remarks.

2. State of the Art

The literature on crosstalk between transmission lines dates back at least to the 1930s, and textbooks have been written on MTLs. Strictly speaking, classical transmission line theory applies only to perfectly conducting lines in a homogeneous medium, so that the transmission line modes are transverse electromagnetic (TEM). The basic idea for studying the coupling between NTLs is to cascade many short sections (by dividing the nonuniform line into n small, equal, uniform lines). In this section, we present the state of the art of coupling between three-conductor transmission lines; the goal is to demonstrate that the existing theory of uniform MTLs cannot, by itself, calculate the coupling between NTLs.

2.1. Theoretical Study of Uniform MTLs

Microstrip lines do not support pure TEM modes, but at low frequencies they support quasi-TEM modes that approximately satisfy the transmission line equations.

A cross-sectional view of a pair of microstrip lines on a grounded substrate is shown in Figure 1. For simplicity, we assume that the two strips have equal width w, zero thickness, and perfect conductivity. The ground plane is also assumed to be perfectly conducting. The lines are located on a dielectric slab (substrate) of thickness h and have a separation s. The substrate has relative permittivity εr and free-space permeability μ0. The region above the substrate is free space.

The multiconductor transmission line equations can be compactly written in matrix form, but for discussion we write out the coupled differential equations. For the source-free case, the line currents, I1 and I2, and voltages, V1 and V2, satisfy:

dV1/dx = -jω (L11 I1 + L12 I2)    (1)

dV2/dx = -jω (L21 I1 + L22 I2)    (2)

dI1/dx = -jω (C11 V1 + C12 V2)    (3)

dI2/dx = -jω (C21 V1 + C22 V2)    (4)

where x is the longitudinal coordinate and the exp(jωt) time dependence is suppressed. The Cij are the elements of the distributed capacitance matrix, and the Lij are the elements of the distributed inductance matrix.

[Figure 1: Cross-sectional geometry for a pair of identical microstrip transmission lines.]

Both the capacitance and inductance matrices are symmetric (C12 = C21 and L12 = L21). Because of the microstrip symmetry, we also have C11 = C22 and L11 = L22.

For perfect conductors in a homogeneous dielectric, the capacitance and inductance matrices are frequency independent. When the dielectric region is inhomogeneous (as for insulated wires or microstrips), the capacitance and inductance matrices depend on frequency. However, they are approximately frequency independent over a large quasi-static frequency range.

The symmetric microstrip supports an even mode with V1 = V2 and an odd mode with V1 = -V2.
The even and odd mode propagation constants are given by Equations (5) and (6):

γ_ev = jω sqrt((L11 + L12)(C11 + C12))    (5)

and

γ_odd = jω sqrt((L11 - L12)(C11 - C12))    (6)

The even and odd mode characteristic impedances, Zev and Zodd, are:

Zev = sqrt((L11 + L12)/(C11 + C12))    (7)

and

Zodd = sqrt((L11 - L12)/(C11 - C12))    (8)

Equations (5) and (7) are deceptively simple because computation of the Lij and Cij elements generally requires some numerical method, such as the method of moments.

For large spacing (s/w >> 1), the coupling capacitance C12 and inductance L12 become small. In this case, the propagation constants in Equation (5) approach that of an isolated line, γ0:

γ0 = jω sqrt(L11 C11)    (9)

Also, the characteristic impedances in Equation (7) approach that of an isolated line, Z0:

Z0 = sqrt(L11/C11)    (10)

2.2. Crosstalk Predictions

To study crosstalk, we consider the geometry in Figure 2. The coupled microstrip lines are identical to those in Figure 1 except that they are of finite length l. Line 1 is fed with a voltage generator at x = 0, and all four ports are terminated with an impedance Z0. We label the driven and terminated ends of line 1 as ports 1 and 2, and the near and far ends of line 2 as ports 3 and 4. The geometry in Figure 2 has been analyzed both for directional coupler applications and for crosstalk predictions.

[Figure 2: Two identical microstrip lines terminated in the characteristic impedance Z0 of an isolated line. Line 1 is excited at port 1.]

For crosstalk prediction, we can assume that the lines are loosely coupled (s is not too small compared to h and w). In this case, we can use the approximate solution and equate near-end and far-end crosstalk to the S parameters as follows:

(11) [equation rendered as an image in the source; not recoverable]

In terms of the microstrip parameters, S31 is approximately:

(12) [equation not recoverable]

where

(13) [equation not recoverable]

and

(14) [equation not recoverable]

Similarly, S41 is approximately:

(15) [equation not recoverable]

The transmission S parameter S21 is not needed for crosstalk prediction, but is approximately:

(16) [equation not recoverable]

To first order in δz, the reflection coefficient S11 = 0, and the approximate S parameters satisfy conservation of power:

(17) [equation not recoverable]

At sufficiently low frequencies (or for sufficiently short lines), we can assume that the line is electrically short. In that case the scattering parameters of the previous section reduce to Equations (18)-(20). [Equations (18)-(20) were likewise rendered as images and are not recoverable from this extraction.]

After a rigorous review of the existing solutions for the coupling between uniform lines, we conclude that the solutions detailed above do not take the non-uniformity of the lines into account. To do so, in the next section we develop a theoretical solution for the coupling between coupled NTLs that takes into account the intrinsic characteristics of each physical part of the line.
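Equations (5)-(10) above are straightforward to evaluate once the per-unit-length parameters are known. A small Python sketch with made-up matrix entries (illustrative values only, not the paper's):

```python
import math

# Illustrative per-unit-length parameters for a symmetric coupled pair.
L11, L12 = 350e-9, 60e-9      # H/m (self, mutual)
C11, C12 = 120e-12, -15e-12   # F/m (Maxwell capacitance matrix; C12 < 0)

def mode_params(lpm, cpm):
    """Propagation velocity and characteristic impedance of one mode."""
    v = 1.0 / math.sqrt(lpm * cpm)
    z = math.sqrt(lpm / cpm)
    return v, z

v_ev, z_ev = mode_params(L11 + L12, C11 + C12)   # even mode, Eqs. (5)/(7)
v_od, z_od = mode_params(L11 - L12, C11 - C12)   # odd mode, Eqs. (6)/(8)
v0, z0 = mode_params(L11, C11)                   # isolated line, Eqs. (9)/(10)

print(f"even: v = {v_ev:.3e} m/s, Z = {z_ev:.1f} ohm")
print(f"odd : v = {v_od:.3e} m/s, Z = {z_od:.1f} ohm")
print(f"iso : v = {v0:.3e} m/s, Z = {z0:.1f} ohm")
```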
3. Coupled Nonuniform Transmission Lines

The purpose of this section is to derive the literal, or symbolic, solution of the coupled NTL equations for three-conductor nonuniform transmission lines and to incorporate the terminal impedance constraints into this solution to yield explicit equations for the crosstalk.

In order to understand the general behavior of the solution, it is helpful to have a literal solution for the induced crosstalk voltages in terms of the symbols for the line length, terminal impedances, per-unit-length capacitances and inductances, the source voltage, etc. From such a result we can observe how changes in some or all of these parameters affect the solution. This advantage is similar to that of a transfer function, which is useful in the design and analysis of electric circuits and automatic control systems. To obtain the same insight from a numerical solution, we would need to perform a large set of computations with these parameters varied over their range of anticipated values.

Transmission-line literal transfer functions for the prediction of crosstalk have been derived in the past for frequency-domain analysis of microwave circuits and for time-domain analysis of crosstalk in digital circuits. However, all of these methods make one or more of the following assumptions about the line in order to simplify the derivation:

· The line is a three-conductor line, with two signal conductors and a reference conductor.
· The line is symmetric, i.e., the two signal conductors are identical in cross-sectional shape and are separated from the reference conductor by identical distances.
· The line is weakly coupled, i.e., the effect of the induced signals in the receiving circuit on the driven circuit is neglected (widely separated lines tend to satisfy this approximately; the wider the separation, the better).
· Both lines are matched at both ends (the line is terminated at all four ports in the line characteristic impedances).
· The line is lossless, i.e., the conductors are perfect and the surrounding medium is lossless.
· The medium is homogeneous.

The obvious reason why these assumptions are used is to simplify the difficult manipulation of the symbols involved in the literal solution.

A nonuniform three-conductor transmission line structure is sketched in Figure 3. The per-unit-length equivalent circuit is shown in Figure 4.

A voltage source VS(t), with internal resistance RS, is connected to a load RL via a generator conductor and a reference conductor. A receptor circuit shares the same reference conductor and connects two terminations RNE and RFE by a receptor conductor.

We subdivide this structure into n equal parts (Δ1, Δ2, ..., Δn), each part having the same line length. Within each part the conductors are assumed to be uniform, so the nonuniform line can be treated as a cascade of coupled multiconductor transmission lines.

The near-end and far-end crosstalk voltages are obtained from the second entries in the solution vectors, i.e., the receptor-conductor voltage evaluated at the near and far ends of the line.

The exact literal solution for the crosstalk voltages is given by Equations (21)-(23). [Equations (21)-(23) were rendered as images in the source and are not recoverable from this extraction.]

[Figure 3: Coupled nonuniform transmission lines.]

[Figure 4: (a) Three-conductor transmission lines illustrating crosstalk; (b) per-unit-length parameters.]

The various quantities in these equations are defined in Equations (24) and (25), with the inductive-coupling coefficients given by Equations (26) and (27), where lmi is the mutual inductance of each part Δi of the line, and the capacitive-coupling coefficients by Equation (28), where Cmi is the mutual capacitance of each part Δi of the line; Equation (29) completes the set. [Equations (24)-(29) were rendered as images in the source and are not recoverable.]

The remaining quantities are defined in the following way. The coefficient KNE is defined by Equation (30), the coupling coefficient between the two circuits by Equation (31), the circuit characteristic impedances by Equations (32) and (33), and the line one-way delay by Equation (34). [Equations (30)-(34) were rendered as images and are not recoverable.]
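The source images for Equations (31)-(34) are gone, but in the classical three-conductor treatment (e.g., Paul [9]) these quantities are commonly defined as k = lm/sqrt(lG·lR), ZCG = sqrt(lG/cG), ZCR = sqrt(lR/cR), and τ = L·sqrt(l·c). Treat the sketch below as that textbook convention, not as the paper's exact equations; the numbers are illustrative:

```python
import math

# Textbook-style definitions (assumed; see the note above). Illustrative values.
LINE_LEN = 0.2                          # m
lG, lR, lm = 400e-9, 400e-9, 80e-9      # H/m: self and mutual inductances
cG, cR = 110e-12, 110e-12               # F/m: self capacitances

k = lm / math.sqrt(lG * lR)             # coupling coefficient (Eq. (31) convention)
ZCG = math.sqrt(lG / cG)                # generator-circuit characteristic impedance
ZCR = math.sqrt(lR / cR)                # receptor-circuit characteristic impedance
tau = LINE_LEN * math.sqrt(lG * cG)     # one-way delay of the (homogeneous) line

print(f"k = {k:.3f}, ZCG = {ZCG:.1f} ohm, ZCR = {ZCR:.1f} ohm, tau = {tau*1e9:.2f} ns")
```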
The relationships of the termination impedances to the characteristic impedances are important parameters. In order to highlight this dependency, the various ratios of termination impedance to characteristic impedance are defined in Equation (35), and in terms of these ratios the factor P in the denominator becomes Equation (36). [Equations (35) and (36) were rendered as images in the source and are not recoverable.]

Observe that P = 1 if the line is weakly coupled and/or the lines are matched at opposite ends. The circuit time constants are logically defined by Equations (37) and (38) [images, not recoverable]. Observe that a line time constant equals the line one-way delay if the lines are weakly coupled and the line is matched at one end.

The above results are an exact literal solution for the problem. No assumptions about symmetry or matched loads are used; therefore they cover a wider class of problems than has been considered in the past. Although they have been simplified by defining certain terms, they can be simplified further under additional assumptions. First, assume that the line is electrically short at the frequency of interest; in this case the terms C and S simplify as in Equations (39) and (40) [images, not recoverable].

The near-end crosstalk can be viewed as a transfer function between the input VS(t) and the output VNE. This can be done by factoring out VS(t) and jω to give Equation (41), with the coefficient defined in Equation (42). Common-impedance coupling in the near-end crosstalk can be evaluated using Equation (43). The far-end crosstalk is determined by Equation (44), and common-impedance coupling in the far-end crosstalk can be evaluated using Equation (45). [Equations (41)-(45) were rendered as images in the source and are not recoverable.]

4. Simulation versus Theoretical Results

This section aims to validate the proposed theoretical solution. We develop a T-type electric equivalent model for each part of the presented structure; Figure 5 shows the proposed T-model.

[Figure 5: T-model.]

In this model, Lm = lm·lw represents the mutual inductance between conductors, Lg = lg·lw is the self-inductance of the generator conductor, and Lr = lr·lw is the self-inductance of the receptor conductor, where lw, lm, lg and lr denote the conductor length, the per-unit-length mutual inductance between the generator and receptor conductors, the per-unit-length inductance of the generator conductor, and the per-unit-length inductance of the receptor conductor, respectively. Cm = cm·lw is the mutual capacitance between conductors, Cr = cr·lw is the capacitance of the receptor conductor, and Cg = cg·lw is the capacitance of the generator conductor, where cm, cr and cg denote the per-unit-length mutual capacitance between the two conductors, the per-unit-length capacitance of the receptor conductor, and the per-unit-length capacitance of the generator conductor, respectively.
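As the paragraph above states, the T-model element values are just the per-unit-length parameters scaled by the segment length. A short Python sketch; the per-unit-length values are illustrative, not those of Table 1:

```python
# Lumped T-model element values for one uniform segment, per the paper's
# definitions (element = per-unit-length parameter * segment length).
# The per-unit-length values below are illustrative only.
lw = 0.2                               # segment length, m
lg, lr, lm = 400e-9, 400e-9, 80e-9     # H/m
cg, cr, cm = 110e-12, 110e-12, 10e-12  # F/m

Lg, Lr, Lm = lg * lw, lr * lw, lm * lw
Cg, Cr, Cm = cg * lw, cr * lw, cm * lw

print(f"Lg = {Lg*1e9:.1f} nH, Lr = {Lr*1e9:.1f} nH, Lm = {Lm*1e9:.1f} nH")
print(f"Cg = {Cg*1e12:.1f} pF, Cr = {Cr*1e12:.1f} pF, Cm = {Cm*1e12:.1f} pF")
```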
4.1. Nonuniform Conductors with Rectangular Cross-Section

In order to evaluate the crosstalk between nonuniform conductors, we first deal with the various per-unit-length parameters. In principle, the method of moments is a common and widespread technique for this. To illustrate the method, let us reconsider the parallel-plate capacitor problem. If we assumed the charge distribution over each plate to be uniform, that is, not varying over the plates, we would ignore the fact that in reality the charge distribution peaks at the edges. To model this, in Figure 6 we break each plate into small rectangular areas Δsi and take the charge over each subarea to be constant with an unknown level αi.

[Figure 6: Approximating the charge distribution on the plates of a parallel-plate capacitor.]

The total charge on each plate, having been divided into N subareas, is:

Q = Σ_{i=1..N} αi Δsi    (46)

The heart of this method is to determine the total voltage of each subarea as the sum of the contributions from the charges on all subareas. Hence the total voltage of a subarea is the sum of the contributions from the charges of all the subareas (including the subarea under consideration):

Vj = Σ_{i=1..N} Kji αi    (47)

Each term Kji represents a basic subproblem relating the voltage of subarea j to the charge amplitude on subarea i:

(48) [equation rendered as an image in the source; not recoverable]

Because of symmetry (both plates are identical), we can assign the voltage of the top plate (with respect to infinity) as +V and the voltage of the bottom plate (with respect to infinity) as -V. The voltage between the two plates is then 2V, so that the capacitance is:

C = Q/(2V)    (49)

Grouping (47) for all subareas gives a matrix equation to be solved (which is the final result for all such MoM schemes):

[K][α] = [V]    (50)

We have assigned all subareas on the top plate voltages of +V and all subareas on the bottom plate voltages of -V. Once (50) is solved for all the αi charge distribution coefficients, the total charge on each plate can be determined from (46) and the total capacitance from (49).
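A runnable sketch of this MoM scheme, in Python. This is my own minimal implementation (point-matching with the standard closed-form self-patch integral for a square subarea), and the plate dimensions are illustrative, not the paper's:

```python
import numpy as np

EPS0 = 8.854e-12

def parallel_plate_capacitance(side=1.0, gap=0.1, n=8):
    """MoM estimate of a square parallel-plate capacitor (side x side, separation gap).

    Each plate is split into n*n square subareas carrying constant charge
    density alpha_i; point-matching at subarea centers, per Eqs. (46)-(50).
    """
    a = side / n                       # subarea edge
    # Subarea centers for both plates (z = 0 and z = gap).
    xs = (np.arange(n) + 0.5) * a
    xv, yv = np.meshgrid(xs, xs, indexing="ij")
    top = np.column_stack([xv.ravel(), yv.ravel(), np.full(n * n, gap)])
    bot = np.column_stack([xv.ravel(), yv.ravel(), np.zeros(n * n)])
    pts = np.vstack([top, bot])
    m = 2 * n * n

    K = np.empty((m, m))
    # Exact potential at the center of a uniformly charged square (self term).
    self_term = a * np.log(1 + np.sqrt(2)) / (np.pi * EPS0)
    for j in range(m):
        r = np.linalg.norm(pts - pts[j], axis=1)
        r[j] = 1.0                                  # avoid divide-by-zero; overwritten
        K[j] = a * a / (4 * np.pi * EPS0 * r)       # point approximation of Kji
        K[j, j] = self_term

    V = 1.0
    rhs = np.concatenate([np.full(n * n, +V), np.full(n * n, -V)])
    alpha = np.linalg.solve(K, rhs)                 # Eq. (50)
    Q = np.sum(alpha[: n * n]) * a * a              # Eq. (46), top plate
    return Q / (2 * V)                              # Eq. (49)

C = parallel_plate_capacitance()
print(f"MoM estimate: C = {C*1e12:.2f} pF "
      f"(ideal eps0*A/d = {EPS0 * 1.0 / 0.1 * 1e12:.2f} pF)")
```

The MoM result exceeds the ideal parallel-plate value because it captures the edge fringing that the uniform-charge assumption ignores.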
In our case, we consider the nonuniform transmission line structure shown in Figure 3, where h = 47 mils and εr = 4.7 (glass epoxy). The conductors are assumed to be immersed in a homogeneous medium.

The per-unit-length capacitance parameter matrix is given in Equation (51) and the per-unit-length inductance parameter matrix in Equation (52). [Both were rendered as images in the source and are not recoverable from this extraction.]

In the configuration presented in Figure 3 we find that lr = lg and Cr = Cg.

For the above-mentioned values and for n = 5, the per-unit-length inductance and capacitance parameters for each part of the structure are presented in Table 1, where w is the line width and S is the separation distance between the nonuniform conductors.

[Table 1: Inductance and capacitance per-unit-length parameters. Rendered as an image in the source; values not recoverable.]

These parameters can now be used to simulate the electrical equivalent model mentioned above; the model is implemented in Advanced Design System (ADS) from Agilent. Figure 7 describes the near-end crosstalk variation versus frequency for the nonuniform transmission lines.

[Figure 7: Comparison of theoretical and simulated near-end crosstalk.]

Figure 7 shows a comparison between the theoretically calculated near-end crosstalk and the simulation data. The results show that the coupling increases gradually with frequency. For frequencies above 50 kHz, the theoretical and simulation results are approximately the same.

4.2. Nonuniform Conductors with Circular Cylindrical Section

Conductors having circular cylindrical cross sections are referred to as wires. These are among the few conductor types for which closed-form equations for the per-unit-length parameters can be obtained.

The three conductors, shown in Figure 8, have a radius that varies from rw = 225 mils (Δn) to 125 mils (Δ1), the same length lw = 39370 mils, and a separation distance S that varies from 100 mils (Δn) to 300 mils (Δ1). The configuration is assumed to be immersed in a homogeneous medium (µ = µ0). The per-unit-length inductance parameter matrix is given in Equation (53), with entries defined in Equations (54)-(56), and the per-unit-length capacitance parameter matrix in Equation (57). [Equations (53)-(57) were rendered as images in the source and are not recoverable.] The relation between the per-unit-length capacitance and inductance parameter matrices is given in Equation (58); for a homogeneous medium,

C = µε L^(-1)    (58)

[Figure 8: Nonuniform three-conductor transmission lines with circular cylindrical section.]

For the above-mentioned values and for n = 5, the per-unit-length inductance and capacitance parameters for each part of the structure are presented in Table 2, where rw is the conductor radius and S is the separation distance between the nonuniform conductors.

[Table 2: Inductance and capacitance per-unit-length parameters. Rendered as an image in the source; values not recoverable.]

These parameters can now be used to simulate the two models described above.

[Figure 9: Comparison of theoretical and simulation results.]

Figure 9 shows a comparison between the theoretically calculated near-end crosstalk and the simulation data using T-models. For frequencies above 100 kHz, the theoretical and simulation results are approximately the same.

5. Conclusions

Rigorous equations have been developed to predict crosstalk between nonuniform transmission lines, with the conductors assumed to be immersed in a homogeneous medium. An electric equivalent model has been presented for calculating the crosstalk between three-conductor nonuniform transmission lines, and rigorous equations were developed to calculate the per-unit-length inductive and capacitive parameters. Comprehensive comparisons between the results obtained from the theoretical equations and those obtained from the created model show excellent accuracy at higher frequencies. The theoretical solutions for near-end and far-end crosstalk presented here are also faster to evaluate than a finite-difference analysis.

REFERENCES

1. K. Mnaouer, B. H. T. Jamel and C. Fathi, "Crosstalk Mitigation Enhanced by Reference Conductor Position," 14th IEEE Workshop on Signal Propagation on Interconnects (SPI), Hildesheim, 9-12 May 2010. doi:10.1109/SPI.2010.5483549
2. K. Mnaouer, B. H. T. Jamel and C. Fathi, "Shielded and Unshielded Three-Conductor Transmission Lines: Modeling and Crosstalk Performance," 28th Progress in Electromagnetics Research Symposium (PIERS), Cambridge, 5-8 July 2010.
3. K. Mnaouer, B. H. T. Jamel and C. Fathi, "Modeling of Microstrip and PCB Traces to Enhance Crosstalk Reduction," IEEE R8 International Conference on Computational Technologies in Electrical and Electronics Engineering (SIBIRCON), Irkutsk, 11-15 July 2010.
4. K. Mnaouer, B. H. T. Jamel and C. Fathi, "Development and Validation of an Electric Equivalent Model for Predicting Crosstalk Performance of Three-Conductor Transmission Lines," 12th IEEE ICCS, Singapore City, 17-20 November 2010.
5. K. Mnaouer, B. H. T. Jamel and C. Fathi, "Coupled Nonuniform Transmission Lines: Modeling and Crosstalk Performances," 29th Progress in Electromagnetics Research Symposium (PIERS), Marrakesh, 20-23 March 2011.
6. L. A. Hayden and V. K. Tripathi, "Nonuniform Coupled Microstrip Transversal Filters for Analog Signal Processing," IEEE Transactions on Microwave Theory and Techniques, Vol. 39, No. 1, January 1991, pp. 47-53. doi:10.1109/22.64604
7. T. Dhaene, L. Martens and D. D. Zutter, "Transient Simulation of Arbitrary Nonuniform Interconnection Structures Characterized by Scattering Parameters," IEEE Transactions on Circuits and Systems, Vol. 39, No. 11, November 1992, pp. 928-937. doi:10.1109/81.199890
8. M. Khalaj-Amirhosseini, "Analysis of Coupled Nonuniform Transmission Lines through Analysis of Uncoupled Ones," International Symposium on Antennas and Propagation (ISAP'06), Singapore City, 1-4 November 2006.
9. C. R. Paul, "Analysis of Multiconductor Transmission Lines," John Wiley and Sons Inc., Hoboken, 1994.
10. M. Khalaj-Amirhosseini, "Using Linear Sections Instead of Uniform Ones to Analyze the Coupled Nonuniform Transmission Lines," International Journal of RF and Microwave Computer-Aided Engineering, Vol. 19, No. 1, 2009, pp. 75-79. doi:10.1002/mmce.20317
11. M. Khalaj-Amirhosseini, "Analysis of Coupled or Single Non-uniform Transmission Lines Using Step-by-Step Numerical Integration," Progress in Electromagnetics Research, PIER, Vol. 58, 2006, pp. 187-198. doi:10.2528/PIER05072803
12. M. Khalaj-Amirhosseini, "Analysis of Coupled Non-uniform Transmission Lines Using Taylor's Series Expansion," IEEE Transactions on Electromagnetic Compatibility, Vol. 48, No. 3, 2006, pp. 594-600. doi:10.1109/TEMC.2006.879340
13. M. Khalaj-Amirhosseini, "Analysis of Periodic and Aperiodic Coupled Non-uniform Transmission Lines Using the Fourier Series Expansion," Progress in Electromagnetics Research, PIER, Vol. 65, 2006, pp. 15-26. doi:10.2528/PIER06072701
14. M. Khalaj-Amirhosseini, "Analysis of Coupled or Single Non-uniform Transmission Lines Using the Equivalent Sources Method," IEEE 2007 International Symposium on Microwave, Antenna, Propagation and EMC Technologies for Wireless Communications (MAPE 2007), Hangzhou, 16-17 August 2007, pp. 1247-1250.
15. M. Khalaj-Amirhosseini, "Analysis of Coupled or Single Nonuniform Transmission Lines Using the Method of Moments," International Journal of RF and Microwave Computer-Aided Engineering, Vol. 18, No. 4, 2008, pp. 187-198. doi:10.1002/mmce.20295
[ null, "https://file.scirp.org/Html/4-9801208.files/image001.gif", null, "https://file.scirp.org/Html/4-9801208\\0fa66ca3-c291-41ff-86e3-ab5a4036ad67.jpg", null, "https://file.scirp.org/Html/4-9801208\\2a04469a-da54-4ede-be44-9201826a256f.jpg", null, "https://file.scirp.org/Html/4-9801208\\3c7775b5-adbd-4a04-8698-8325dd049ebf.jpg", null, "https://file.scirp.org/Html/4-9801208\\fc0415cc-6d0d-45d2-88ce-16b89fceb3c9.jpg", null, "https://file.scirp.org/Html/4-9801208\\1f3fc98a-263b-40c1-b560-0e573d16c28f.jpg", null, "https://file.scirp.org/Html/4-9801208\\9ab45013-2617-4cba-bcf0-4e2df5e755e8.jpg", null, "https://file.scirp.org/Html/4-9801208\\505c23d3-4476-4ea2-97b8-684fbdf39760.jpg", null, "https://file.scirp.org/Html/4-9801208\\2559fa53-344e-4fa9-bc18-dab7cb232833.jpg", null, "https://file.scirp.org/Html/4-9801208\\7eeee45f-b92e-4fee-88e7-61faba055932.jpg", null, "https://file.scirp.org/Html/4-9801208\\68153fa7-241a-4b70-8335-a99cfeecb756.jpg", null, "https://file.scirp.org/Html/4-9801208\\86ecf6fe-dc82-4f8c-b9e4-f55bf8d989e9.jpg", null, "https://file.scirp.org/Html/4-9801208\\e62211e8-74c3-4317-8a9c-2bdf23dffbb3.jpg", null, "https://file.scirp.org/Html/4-9801208\\5e34e312-4881-4b52-a13b-35f59f47493e.jpg", null, "https://file.scirp.org/Html/4-9801208\\1851355b-e425-4115-a947-9b08759886f2.jpg", null, "https://file.scirp.org/Html/4-9801208\\a42345af-01f8-4ad4-ba6c-ca219aa272e2.jpg", null, "https://file.scirp.org/Html/4-9801208\\5add1741-505f-4794-aa3b-ec4d06faf1b6.jpg", null, "https://file.scirp.org/Html/4-9801208\\caaf7829-bdc5-4e2c-a59d-79faa57fccdc.jpg", null, "https://file.scirp.org/Html/4-9801208\\f110fb83-5460-48ce-add7-43e81a259cae.jpg", null, "https://file.scirp.org/Html/4-9801208\\03c756e0-3f12-4828-9002-c78e3d29ee35.jpg", null, "https://file.scirp.org/Html/4-9801208\\6ea41922-57f7-4de5-80e4-d79313567c79.jpg", null, "https://file.scirp.org/Html/4-9801208\\155337aa-cda0-4c68-8647-926ecf654e50.jpg", null, "https://file.scirp.org/Html/4-9801208\\d1032b0c-69e2-45cb-acf3-ada8cb693d98.jpg", null, "https://file.scirp.org/Html/4-9801208\\271adf4c-6b2b-4026-937f-a4537cf1659c.jpg", null, "https://file.scirp.org/Html/4-9801208\\5725cea3-c855-4d99-85df-407c70713412.jpg", null, "https://file.scirp.org/Html/4-9801208\\29fb33f0-c66e-4483-8e92-2e85bdd1ca48.jpg", null, "https://file.scirp.org/Html/4-9801208\\49810c68-3687-4117-888b-ebd7b371bd69.jpg", null, "https://file.scirp.org/Html/4-9801208\\4ee1992e-9730-419b-a811-4ca41cafd30c.jpg", null, "https://file.scirp.org/Html/4-9801208\\2f9eda29-35b3-4471-8577-4cb8d9e271fa.jpg", null, "https://file.scirp.org/Html/4-9801208\\d25f1093-8ad6-4831-87d5-a939f9177264.jpg", null, "https://file.scirp.org/Html/4-9801208\\1fa56cf3-e542-4118-ad25-e889cc28b557.jpg", null, "https://file.scirp.org/Html/4-9801208\\ebd0db28-48b3-4777-9b17-e4591e622a1e.jpg", null, "https://file.scirp.org/Html/4-9801208\\99b216d9-f8d1-4557-babe-41afca75a975.jpg", null, "https://file.scirp.org/Html/4-9801208\\614aad42-9e92-4cbc-9209-8116a9f46daf.jpg", null, "https://file.scirp.org/Html/4-9801208\\ba82bbc3-7442-450d-9196-7ad352de4d91.jpg", null, "https://file.scirp.org/Html/4-9801208\\0834ba69-b41c-417d-8183-15e505e2a7a2.jpg", null, "https://file.scirp.org/Html/4-9801208\\c8a5132c-7ecf-40ac-958b-0ae61dc09daa.jpg", null, "https://file.scirp.org/Html/4-9801208\\b991eb4d-5f50-4e9e-a114-3737f39e91fc.jpg", null, "https://file.scirp.org/Html/4-9801208\\02d4e81f-2864-4f46-8a10-11a8025b3777.jpg", null, "https://file.scirp.org/Html/4-9801208\\db3d6f42-7db3-4152-be92-d35e6d56baf5.jpg", null, 
"https://file.scirp.org/Html/4-9801208\\f53c31ae-ada3-47f4-9926-f6e9905ebb4e.jpg", null, "https://file.scirp.org/Html/4-9801208\\bc4a8eb8-911d-424b-a276-46790a15ebb3.jpg", null, "https://file.scirp.org/Html/4-9801208\\a043429f-ecce-48b5-aeba-38f2760e1b67.jpg", null, "https://file.scirp.org/Html/4-9801208\\923e324c-2f8d-4ad2-8079-29ebb83f77e3.jpg", null, "https://file.scirp.org/Html/4-9801208\\829e186a-329e-4f14-b6f9-3ff4f650cac0.jpg", null, "https://file.scirp.org/Html/4-9801208\\8086e3b3-b2bd-48fa-9e03-d01cc8b489ec.jpg", null, "https://file.scirp.org/Html/4-9801208\\89f65170-ed83-4c43-8831-6aa702a17cd8.jpg", null, "https://file.scirp.org/Html/4-9801208\\b1d4bde2-e432-4724-b30e-1d09e5622f8a.jpg", null, "https://file.scirp.org/Html/4-9801208\\538ccf42-a4e2-4906-bd2d-f5f765ea57e3.jpg", null, "https://file.scirp.org/Html/4-9801208\\d412bde2-b50e-45f1-bd88-51cae078b8ad.jpg", null, "https://file.scirp.org/Html/4-9801208\\e209692b-a521-4372-a917-f29fc3e867da.jpg", null, "https://file.scirp.org/Html/4-9801208\\9a12f597-2c1b-407c-bf07-278e0bcf94d0.jpg", null, "https://file.scirp.org/Html/4-9801208\\7c9a7e83-8134-4890-8676-1d2394dcc1d1.jpg", null, "https://file.scirp.org/Html/4-9801208\\aee299dd-1a3f-4654-8399-f8e551709a39.jpg", null, "https://file.scirp.org/Html/4-9801208\\9a0b8b34-1baf-4422-97dc-e7fc8012f99a.jpg", null, "https://file.scirp.org/Html/4-9801208\\a7f2e492-59a0-4eb5-89fe-aac2fb6bc1b2.jpg", null, "https://file.scirp.org/Html/4-9801208\\86bac153-67ef-4060-90e7-1bbfd611bb5e.jpg", null, "https://file.scirp.org/Html/4-9801208\\7f5fd12c-2e5d-4a14-8b5c-3f96885a580e.jpg", null, "https://file.scirp.org/Html/4-9801208\\3e3b8805-cc06-4644-a5d4-ca5e72107734.jpg", null, "https://file.scirp.org/Html/4-9801208\\4c998e75-3703-4313-b561-b947ffbad556.jpg", null, "https://file.scirp.org/Html/4-9801208\\ae1c6ca9-3831-4a1f-a3f6-f18cda7563aa.jpg", null, "https://file.scirp.org/Html/4-9801208\\ead56cba-bbdb-48b9-8194-095bbf5a738e.jpg", null, "https://file.scirp.org/Html/4-9801208\\ebb1850c-e4c2-418a-8715-963416386109.jpg", null, "https://file.scirp.org/Html/4-9801208\\d8021d11-51d8-4c59-823e-622564800acf.jpg", null, "https://file.scirp.org/Html/4-9801208\\8fce9c0a-99fa-4bf7-81d0-cd255e663b74.jpg", null, "https://file.scirp.org/Html/4-9801208\\16d4592f-53b1-433c-9a5f-ddccea04bd58.jpg", null, "https://file.scirp.org/Html/4-9801208\\4a49ae83-d54d-4c81-a21d-3404872cc71b.jpg", null, "https://file.scirp.org/Html/4-9801208\\1a20b0a7-fde0-494e-9b1b-d4d3a29ff3f9.jpg", null, "https://file.scirp.org/Html/4-9801208\\0dfd022b-fd3c-4e00-8ddd-a53e48f4ad0e.jpg", null, "https://file.scirp.org/Html/4-9801208\\3088942b-81c5-4681-8ebd-4ecaa669e86a.jpg", null, "https://file.scirp.org/Html/4-9801208\\3cf4c698-7aeb-4bf3-8c04-c0f690dc9ead.jpg", null, "https://file.scirp.org/Html/4-9801208\\9ae6e090-bf61-4367-b225-6cd9441d1df2.jpg", null, "https://file.scirp.org/Html/4-9801208\\b03f623f-dc56-4ff3-a839-6e303e30e7e7.jpg", null, "https://file.scirp.org/Html/4-9801208\\fc005050-a6d8-4b90-bb5e-209c53afc3e6.jpg", null, "https://file.scirp.org/Html/4-9801208\\40b31460-f095-48f0-a599-2c5c51fb1c0b.jpg", null, "https://file.scirp.org/Html/4-9801208\\9ad49d0b-bf7e-4f5b-975b-97ba42bd9bff.jpg", null, "https://file.scirp.org/Html/4-9801208\\2c7eebe3-ca49-4b34-8351-10b3f3285ff1.jpg", null, "https://file.scirp.org/Html/4-9801208\\adf3e556-8aa0-47f3-924b-4f45b9d581a8.jpg", null, "https://file.scirp.org/Html/4-9801208\\2938bc0d-a3d5-4e23-b745-e35b60a9b63e.jpg", null, "https://file.scirp.org/Html/4-9801208\\2f9022c4-d95c-43c1-8d5b-cae28d6df964.jpg", 
null, "https://file.scirp.org/Html/4-9801208\\abd13937-8f08-41f5-8575-8b911d283c95.jpg", null, "https://file.scirp.org/Html/4-9801208\\a33c12b5-f126-4f99-9b44-571fcdd198c1.jpg", null, "https://file.scirp.org/Html/4-9801208\\e2783526-7b5e-4d2e-aa78-90f003d85012.jpg", null, "https://file.scirp.org/Html/4-9801208\\30d07625-b97f-48eb-901b-06de61c0fa98.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8947143,"math_prob":0.9650464,"size":24769,"snap":"2020-24-2020-29","text_gpt3_token_len":5622,"char_repetition_ratio":0.15041389,"word_repetition_ratio":0.07706568,"special_character_ratio":0.2185393,"punctuation_ratio":0.12597722,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99344623,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-03T04:31:18Z\",\"WARC-Record-ID\":\"<urn:uuid:6448b0d7-c2cd-4588-95a4-e60c8d328918>\",\"Content-Length\":\"64501\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:35b42186-ab1c-4c28-8d21-4e6507e161d3>\",\"WARC-Concurrent-To\":\"<urn:uuid:095216dc-294b-44dc-b5f2-f0053f827a60>\",\"WARC-IP-Address\":\"209.141.51.63\",\"WARC-Target-URI\":\"https://file.scirp.org/Html/4-9801208_6635.htm\",\"WARC-Payload-Digest\":\"sha1:URAZCVYPXIFXK7OQI5CVCJAPRJ7PQBNF\",\"WARC-Block-Digest\":\"sha1:LMCCTESW2T4H3BTDZ3SBMALSYJDXOPR7\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347428990.62_warc_CC-MAIN-20200603015534-20200603045534-00115.warc.gz\"}"}
https://www.sigmacomputing.com/training/using-formulas/
[ "# Using Formulas\n\nFormulas are a powerful and familiar way to calculate information. Sigma’s spreadsheet-like environment uses formulas that are familiar from other spreadsheet programs.\n\nWith formulas, Sigma gives users the power of SQL without having to write a single line of code.\n\nWatch this video to learn:\n\n• How to use a formula to define a column of data\n• How formulas work in Sigma and how they differ from other spreadsheet programs\n• Shortcuts for using common formulas" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82766664,"math_prob":0.8313978,"size":476,"snap":"2021-04-2021-17","text_gpt3_token_len":93,"char_repetition_ratio":0.14618644,"word_repetition_ratio":0.0,"special_character_ratio":0.18487395,"punctuation_ratio":0.060240965,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99564785,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-18T14:03:09Z\",\"WARC-Record-ID\":\"<urn:uuid:dddc2853-4519-4830-9b25-2ced22ab064c>\",\"Content-Length\":\"28729\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0f4b17e1-c43a-43d8-b95b-a25284946597>\",\"WARC-Concurrent-To\":\"<urn:uuid:15ebcfca-dd1d-42dd-9600-eb91fba1fca0>\",\"WARC-IP-Address\":\"104.18.131.114\",\"WARC-Target-URI\":\"https://www.sigmacomputing.com/training/using-formulas/\",\"WARC-Payload-Digest\":\"sha1:CX7WQK37ZCY6NHLHWCTXOBB53KT4SXNZ\",\"WARC-Block-Digest\":\"sha1:4HCPAX6PTVMEKFKWSH7ID6FRN7VNBDKQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703514796.13_warc_CC-MAIN-20210118123320-20210118153320-00330.warc.gz\"}"}
https://www.radishlogic.com/aws/s3/how-to-write-python-string-to-a-file-in-s3-bucket-using-boto3/
[ "# How to write Python string to a file in S3 Bucket using boto3\n\nTo write a file from a Python string directly to an S3 bucket we need to use the boto3 package.\n\nThere are 2 ways to write a file in S3 using boto3. The first is via the boto3 client, and the second is via the boto3 resource. Both of these methods will be shown below.\n\n## S3 objects and keys\n\nIf you are new to AWS S3, you might be confused with some of the terms. So we’ll define some of them here. If you already know what objects and keys are then you can skip this section.\n\nS3 objects are the same as files. When we run the method `put_object` what it means is that we are putting a file into S3.\n\nS3 keys are the same as the filename with its full path. So if we want to create an object in S3 with the name of `filename.txt` within the `foobar` folder then the key is `foobar/filename.txt`.\n\nNow that we have clarified some of the AWS S3 terms, follow the details below to start writing Python strings directly to objects in S3.\n\n## Writing Python string to S3 objects using boto3 resource\n\nBoto3 resource is a high-level abstraction for accessing AWS resources in an object-oriented interface. You can learn more about boto3 resource here.\n\nBelow is a Python code where we write the string `This is a random string.` to the S3 bucket `radishlogic-bucket` with a key of `folder/file_resource.txt`.\n\n``````import boto3\n\ndata_string = \"This is a random string.\"\n\ns3 = boto3.resource('s3')\n\nobject = s3.Object(\nkey='folder/file_resource.txt'\n)\n\nobject.put(Body=data_string)``````\n\nBelow are boto3 documentation links on putting an object in S3 using boto3 resource.\n\n## Writing Python string to S3 objects using boto3 client\n\nBoto3 client is a low-level interface to access AWS resources. I actually prefer using boto3 client since this is faster and uses fewer compute resources compared to boto3 resource. You can learn more about boto3 client here.\n\nBelow is a Python code where we write the string `This is a random string.` to the S3 bucket `radishlogich-bucket` with a key of `folder/file_client.txt`.\n\n``````import boto3\n\ndata_string = \"This is a random string.\"\n\nclient = boto3.client('s3')\n\nclient.put_object(\nBody=data_string,\nKey='folder/file_client.txt'\n)``````\n\nBelow are boto3 documentation links on putting an object in S3 using boto3 client.\n\n## Lambda Function to write Python string to S3 objects\n\nBelow are examples of writing a String to an S3 Object using AWS Lambda Function running Python.\n\n### S3 Resource\n\n``````import boto3\n\ndef lambda_handler(event, context):\ndata_string = \"This is a string from a Lambda Function.\"\n\ns3 = boto3.resource('s3')\n\nobject = s3.Object(\nkey='s3_folder/lambda_file_resource.txt'\n)\n\nobject.put(Body=data_string)``````\n\n### S3 Client\n\n``````import boto3\n\ndef lambda_handler(event, context):\n\ndata_string = \"This is a string from a Lambda Function.\"\n\nclient = boto3.client('s3')\n\nclient.put_object(\nBody=data_string,\nKey='s3_folder/lambda_file_client.txt'\n)``````\n\n## String to bytes conversion\n\nIf we look at the documentation for both `boto3` `client `and `resource`, it says that the Body parameter of `put_object` should be in `b'bytes`.\n\nIt did not mention that the Body parameter could be a string. 
But since putting string directly to the Body parameter works that is what I am recommending.\n\nIf you still want to do the string-to-bytes conversion then you can use the `.encode()` function of Python strings.\n\n``````data_string = \"This is a random string.\"\ndata_bytes = data_string.encode()\n\nprint(data_bytes)``````\n\nOnce you have converted the string to bytes, you can assign the `data_bytes` variable to the value of the `Body` parameter of `client.put_object`.\n\n``````import boto3\n\ndata_string = \"This is a random string.\"\ndata_bytes = data_string.encode()\n\nclient = boto3.client('s3')\n\nclient.put_object(\nBody=data_bytes,\nKey='folder/file_client_bytes.txt'\n)``````\n\nHere is the python code if you want to convert string to bytes and use boto3 S3 resource.\n\n``````import boto3\n\ndata_string = \"This is a random string.\"\ndata_bytes = data_string.encode()\n\ns3 = boto3.resource('s3')\n\nobject = s3.Object(" ]
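To sanity-check any of the uploads above, you can read the object back with `get_object`. This snippet is an added illustration, not from the original post; it assumes the bucket and key from the client example.

```python
import boto3

client = boto3.client('s3')

# get_object returns a dict whose 'Body' entry is a streaming object;
# .read() yields bytes, so decode back to a str for comparison.
response = client.get_object(
    Bucket='radishlogic-bucket',
    Key='folder/file_client.txt'
)
retrieved = response['Body'].read().decode()
assert retrieved == 'This is a random string.'
```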
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7012927,"math_prob":0.57118803,"size":4465,"snap":"2023-14-2023-23","text_gpt3_token_len":1064,"char_repetition_ratio":0.16498543,"word_repetition_ratio":0.20771514,"special_character_ratio":0.23404256,"punctuation_ratio":0.10622711,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9614219,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-07T08:20:02Z\",\"WARC-Record-ID\":\"<urn:uuid:5745d81d-16af-4ecd-ab7f-6f8aee8da1a2>\",\"Content-Length\":\"69660\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e40f706c-5e83-4498-ba63-9b545d70e84c>\",\"WARC-Concurrent-To\":\"<urn:uuid:5dd1239a-8e5c-49f5-b0fb-407504c78f79>\",\"WARC-IP-Address\":\"173.236.230.139\",\"WARC-Target-URI\":\"https://www.radishlogic.com/aws/s3/how-to-write-python-string-to-a-file-in-s3-bucket-using-boto3/\",\"WARC-Payload-Digest\":\"sha1:M2LSMNZCGZ2YY463MUVEKGUNDZRH5I6D\",\"WARC-Block-Digest\":\"sha1:CNJ43G2K3L3LTQIQP5JUELHFTJJI2NKP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224653631.71_warc_CC-MAIN-20230607074914-20230607104914-00633.warc.gz\"}"}
https://rdrr.io/cran/catlearn/man/act2probrat.html
[ "# act2probrat: Convert output activation to a rating of outcome probability In catlearn: Formal Psychological Models of Categorization and Learning\n\n## Description\n\nLogistic function to convert output activations to rating of outcome probability (see e.g. Gluck & Bower, 1988).\n\n## Usage\n\n `1` ``` act2probrat(act, theta, beta) ```\n\n## Arguments\n\n `act` Vector of output activations `theta` Scaling constant `beta` Bias constant\n\n## Details\n\nThe contents of this help file are relatively brief; a more extensive tutorial on using act2probrat can be found in Spicer et al. (n.d.).\n\nThe function takes the output activation of a learning model (e.g. slpRW), and converts it into a rating of the subjective probability that the outcome will occur. It does this separately for each activation in the vector `act`. It uses a logistic function to do this conversion (see e.g. Gluck & Bower, 1988, Equation 7). This function can produce a variety of monotonic mappings from activation to probability rating, determined by the value set for the two constants:\n\n`theta` is a scaling constant; as its value rises, the function relating activation to rating becomes less linear and at high values approximates a step function.\n\n`beta` is a bias parameter; it is the value of the output activation that results in an output rating of P = 0.5. For example, if you wish an output activation of 0.4 to produce a rated probability of 0.5, set beta to 0.4.\n\n## Value\n\nReturns a vector of probability ratings.\n\n## Note\n\nAs this function returns probabilities, the numbers returned are always in the range 0-1. If the data you are fitting use a different range, convert them. For example, if your data are ratings on a 0-10 scale, divide them by 10. If your data are something other than probability estimates (e.g. you asked participants to use negative ratings to indicate preventative relationships), don't use this function unless you are sure it is doing what you intend.\n\nAndy Wills\n\n## References\n\nGluck, M.A. & Bower, G.H. (1988). From conditioning to category learning: An adaptive network model. Journal of Experimental Psychology: General, 117, 227-247.\n\nSpicer, S., Jones, P.M., Inkster, A.B., Edmunds, C.E.R. & Wills, A.J. (n.d.). Progress in learning theory through distributed collaboration: Concepts, tools, and examples. Manuscript in preparation.\n\ncatlearn documentation built on Sept. 16, 2020, 5:07 p.m." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7950915,"math_prob":0.89086515,"size":2273,"snap":"2021-43-2021-49","text_gpt3_token_len":541,"char_repetition_ratio":0.12516527,"word_repetition_ratio":0.005479452,"special_character_ratio":0.23317201,"punctuation_ratio":0.19017094,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9869611,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-01T13:08:32Z\",\"WARC-Record-ID\":\"<urn:uuid:a4f86bab-a55a-4db6-8795-9f9e697eed73>\",\"Content-Length\":\"43707\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1e42b5b1-7a43-49f0-84c9-9066a3c7414a>\",\"WARC-Concurrent-To\":\"<urn:uuid:4541b408-0662-4493-854f-55ecda468b34>\",\"WARC-IP-Address\":\"51.81.83.12\",\"WARC-Target-URI\":\"https://rdrr.io/cran/catlearn/man/act2probrat.html\",\"WARC-Payload-Digest\":\"sha1:QCI6OBIFEVR4SQ5EE427PA3V2FZ2TARR\",\"WARC-Block-Digest\":\"sha1:LYWYB3IOZXTYPUCUK3LPBG3MJGZ2C66Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964360803.0_warc_CC-MAIN-20211201113241-20211201143241-00201.warc.gz\"}"}
https://ai.stackexchange.com/questions/6902/how-to-build-my-own-dataset-and-model-for-an-lstm-neural-network
[ "# How to build my own dataset and model for an LSTM neural network\n\nI have a sort of mathematical problem and I'm not sure which model I should choose to make an LSTM neural network.\n\nCurrently in my country, there is a system in which certain groups of researchers upload information on products of scientific interest, such as research articles, books, patents, software, among others. Depending on the number of products, the system assigns a classification to each group, which can be A1, A, B and C, where A1 is the highest classification and C is the minimum.\n\nThe classification is done through a mathematical model whose entries are, the total number of each product, the total sum of all products, number of authors, among other indices that are calculated with the previous values.\n\nOnce the entries are obtained, these values ​​are processed by a set of formulas and the final result is a single number.\n\nThis number is located in a range provided by the mathematical model and this is how the group is classified.\n\nWhat I want to do is given the current classification of a group, give suggestions of different values ​​to improve their classification.\n\nFor example, if there is a group with classification C, suggest how many products it should have, how many authors, what value should its indexes have, so that its category would be finally B.\n\nI think the structure of my network should be: -1 input, which would be the classification you want to get. -Multiple output, one for each product and indexes.\n\nBut I do not understand how to make the network take into account the current classification of the group, in addition to the number of products and the value of the current indexes." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95170456,"math_prob":0.8927707,"size":1728,"snap":"2022-27-2022-33","text_gpt3_token_len":342,"char_repetition_ratio":0.14153132,"word_repetition_ratio":0.0,"special_character_ratio":0.19791667,"punctuation_ratio":0.118694365,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9904394,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-17T18:59:41Z\",\"WARC-Record-ID\":\"<urn:uuid:8b950c98-1ad6-4e25-87a9-734f51d77f53>\",\"Content-Length\":\"214778\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e030b53a-1dc5-4bc1-ae83-e8659b66a11e>\",\"WARC-Concurrent-To\":\"<urn:uuid:90296fe0-04a9-4df0-89ba-d595f320eb52>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://ai.stackexchange.com/questions/6902/how-to-build-my-own-dataset-and-model-for-an-lstm-neural-network\",\"WARC-Payload-Digest\":\"sha1:NZWWG3WW5W2I3U3NMT6AOS5QLYXUSKFX\",\"WARC-Block-Digest\":\"sha1:YN7SJXGEKEZOLJXXDKBD3V5FME66W7SQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573104.24_warc_CC-MAIN-20220817183340-20220817213340-00597.warc.gz\"}"}
https://ai.stackexchange.com/questions/34612/why-does-my-actor-critic-network-always-give-either-1-or-1-at-the-output-layer
[ "# Why does my actor-critic network always give either -1 or 1 at the output layer?\n\nI have an actor-critic network. The state space contains continuous variables with different ranges like (0,1.57) and (-0.70, 0.70). And it also contain absolute 6D pose in the form (x,y,z,roll,pitch,yaw). The action space is continuous too in the range (0, 1.57). I apply scaling at the input layer, and scale things back before applying the action received from network. Irrespective of learning for 100 or 1000 episodes, the actor model always gives either -1 or 1. Eg: [1,1,-1,-1] which gets scaled to [1.57,1.57,-1.57,-1.57] as an action vector. Could someone give me suggestion on what's happening with the network. The learning follows DDPG algorithm.\n\nactor_lr = 0.0001\ncritic_lr = 0.001\ndef actor(state_size, action_size):\n\ninputs = Input(shape=(state_size,), name=\"state_space\")\nlayer_one = Dense(300, activation=\"relu\")(inputs)\nlayer_two = Dense(400, activation=\"relu\")(layer_one)\n\noutputs = Dense(action_size, activation=\"tanh\", name=\"action_space\")(layer_two)\n\nmodel = Model(inputs = inputs, outputs = outputs)\nreturn model\n\ndef get_critic(state_size, action_size):\n\nstate_input = Input(shape=(state_size), name=\"state_space\")\nstate_out = Dense(64, activation=\"relu\")(state_input)\n\naction_input = Input(shape=(action_size), name=\"actions_sapce\")\naction_out = Dense(18, activation=\"relu\")(action_input)\nconcat = Concatenate()([state_out, action_out])\n\ncritic_layer_one = Dense(300, activation=\"relu\", kernel_regularizer=regularizers.l2(0.01))(concat)\ncritic_layer_two = Dense(400, activation=\"relu\", kernel_regularizer=regularizers.l2(0.01))(critic_layer_one)\n\noutputs = Dense(1, activation=\"linear\", kernel_regularizer=regularizers.l2(0.01))(critic_layer_two)\n\nmodel = Model([state_input, action_input], outputs, name = name)\nreturn model\n\n\nMost probably your network is underfitted. In that case, the network outputs values randomly. Hyperbolic tangent tanh converges very quickly towards $$-1$$ or $$1$$, so that is why you always find $$-1$$ and $$1$$ in the output.\n\nLet us execute the following code to get a better idea:\n\nimport tensorflow as tf\ntanh_x = tf.keras.activations.tanh(-8.0).numpy()\nprint(tanh_x, type(tanh_x))\n\n\nAt least in my machine, the output will be exactly $$1.0$$ for tanh_x variable of type numpy.float32, which has a precision of up to $$7$$ decimal digits.\n\nThe value of tanh_x is in fact $$-0.99999977493$$, but Python only saves the first $$7$$ decimal digits and decides to round it up to $$-1.0$$. Every output value from the neural network outside the range of $$[-8, 8]$$ will be exactly $$-1$$ or $$1$$ after tanh activation and float32 precision. As you can see, if the network randomly outputs values they will almost surely be outside $$[-8, 8]$$ as this is a very short range.\n\n• Thank you for the comment and example. Does it also mean the network is not well configured(not enough layers and neurons)? Feb 21, 2022 at 20:14\n• @Goldfinch Honestly, I think your network is too large with 120k parameters. You could begin by reducing dense layer sizes to 30 and 40 instead of 300 and 400. Feb 21, 2022 at 21:34" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.67892367,"math_prob":0.9976551,"size":1896,"snap":"2023-14-2023-23","text_gpt3_token_len":497,"char_repetition_ratio":0.14852008,"word_repetition_ratio":0.0,"special_character_ratio":0.29535866,"punctuation_ratio":0.21791045,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9986971,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-01T14:58:51Z\",\"WARC-Record-ID\":\"<urn:uuid:93dff4db-a8cb-47a5-8a3e-1e344142e9e0>\",\"Content-Length\":\"138477\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:990b95f9-cc89-4dc0-95f1-20412365d90b>\",\"WARC-Concurrent-To\":\"<urn:uuid:007e3342-92c8-4516-b527-671faa5744b6>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://ai.stackexchange.com/questions/34612/why-does-my-actor-critic-network-always-give-either-1-or-1-at-the-output-layer\",\"WARC-Payload-Digest\":\"sha1:5ELT6L4KRSLP5ZVXEKJFGLNUCB2PUW27\",\"WARC-Block-Digest\":\"sha1:GSFH4FSGGSL6LKNWXL6QC75ZL646FAOE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224647895.20_warc_CC-MAIN-20230601143134-20230601173134-00751.warc.gz\"}"}
https://www.studyadda.com/solved-papers/jee-main-paper-held-on-22-april-2013_q10/81/245422
[ "• # question_answer The density of 3M solution of sodium chloride is  1.252$~\\text{gm}{{\\text{L}}^{-1}}$. The molality of the solution will be: (molar mass,$\\text{NaCI }=\\text{85}.\\text{5 g mo}{{\\text{l}}^{\\text{-1}}}$)     JEE Main  Online Paper (Held On 22 April 2013) A)  2.60 m                  B)  2.18 mC)  2.79 m                  D)  3.00 m\n\nThe relation between molarity (M) and molality(m)is $d=M\\left( \\frac{1}{m}+\\frac{{{M}_{2}}}{1000} \\right),{{M}_{2}}=Mol.$Mass of solute On putting value on solving $m=2.79$\nYou will be redirected in 3 sec", null, "" ]
[ null, "https://www.studyadda.com/assets/frontend/images/msg-gif.GIF", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6733519,"math_prob":0.9970288,"size":318,"snap":"2020-34-2020-40","text_gpt3_token_len":126,"char_repetition_ratio":0.05732484,"word_repetition_ratio":0.0,"special_character_ratio":0.4056604,"punctuation_ratio":0.12345679,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99889463,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-01T01:33:45Z\",\"WARC-Record-ID\":\"<urn:uuid:d3e2243a-8301-4929-8d57-6b5ff80844c6>\",\"Content-Length\":\"101574\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a0295ca6-9ee4-4415-9cc4-db0c5efe0190>\",\"WARC-Concurrent-To\":\"<urn:uuid:ed09a8db-9663-4b61-97d9-b077f427b42b>\",\"WARC-IP-Address\":\"151.106.35.148\",\"WARC-Target-URI\":\"https://www.studyadda.com/solved-papers/jee-main-paper-held-on-22-april-2013_q10/81/245422\",\"WARC-Payload-Digest\":\"sha1:OGVLMLAAQVVZMVHB5LBDQLISV7KOT7HG\",\"WARC-Block-Digest\":\"sha1:2OCV3GDAQUGRVCJYKTEM6J3HEZZBCBIU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600402130531.89_warc_CC-MAIN-20200930235415-20201001025415-00447.warc.gz\"}"}
http://think-global.fr/1999-mercedes-c230-engine-diagram-diagram
[ "think-global.fr\n\n# 1999 Mercedes C230 Engine Diagram\n\n• Engine Diagram\n• Date : December 4, 2020\n\n## 1999 Mercedes C230 Engine Diagram\n\nMercedes C230\n\n1999 Mercedes C230 Engine DiagramThe Way to Find Critical Temperature on Phase Diagram Pupils in math, chemistry, and other similar courses might find it difficult to understand the particulars of what it takes to discover the critical temperature on the phase diagram. In the following article, we'll have a look at a few of the aspects included. There are two main ways to explain how to find the critical temperature on the phase diagram. The process entails connecting all the points on the graph to each other so that the two curves represent the attributes of the machine, which are ordinarily thermodynamic properties, including entropy or the enthalpy of boiling. Of course, when you know the temperature and enthalpy of the machine, then you are already able to solve for these amounts on the graph. What happens is that you will then be working backwards in time to convert the curve to a value for the values of the factors. In a feeling, it is the reverse of this equation for the phase diagram. On the other hand, you will still have to connect all the dots to create the problem easier to resolve. With a stage diagram, you could find a pair of points connected to each other. Then you need to connect them back to their initial positions, even though they may be slightly out of sequence. You then must obtain the values of the factors that can be quantified from these points. In case you've ever used a computer simulation to get the value of the temperature of a system, then you ought to understand how to fix for this amount. You simply have to use the exact same process to get the value of the critical point. Now you are looking at the exact same problem you had to resolve when you worked on a phase diagram! But now you need to connect all of the dots. This makes the most sense if you'd like to come across the most true value of the critical point. If you do so, then you'll always find the very best result possible. Finally, the most important step in all of this is to consider the value of the important point, since this is the value that you will use for the own calculations. This value is related to the original value of the machine. By setting the crucial point , then you are going to find a milder system. Consequently, if you're interested in finding the value of the critical temperature on the phase diagram, then you will have to work backwards. As soon as you've figured out the enthalpy and temperature values, and then you will discover the value of this crucial point." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9264532,"math_prob":0.94899714,"size":2650,"snap":"2020-45-2020-50","text_gpt3_token_len":555,"char_repetition_ratio":0.14890401,"word_repetition_ratio":0.03805497,"special_character_ratio":0.20981131,"punctuation_ratio":0.083333336,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96914124,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-04T05:36:55Z\",\"WARC-Record-ID\":\"<urn:uuid:e8713221-3aef-449b-8434-abb8f4dd2536>\",\"Content-Length\":\"19130\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8bdb1e23-4397-47e9-ac88-f2d8fe055876>\",\"WARC-Concurrent-To\":\"<urn:uuid:cb032924-610e-479d-8fb9-c94f3feb74a4>\",\"WARC-IP-Address\":\"159.203.37.50\",\"WARC-Target-URI\":\"http://think-global.fr/1999-mercedes-c230-engine-diagram-diagram\",\"WARC-Payload-Digest\":\"sha1:TOPCAE7GLB7JZILNFGAUPYQFFWKUYLQO\",\"WARC-Block-Digest\":\"sha1:M6RY5JSYMTPK55NN26FE3MXQECHPMR4J\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141733122.72_warc_CC-MAIN-20201204040803-20201204070803-00633.warc.gz\"}"}
http://atomiclaboratories.com/pakpbuwh/programming-music-software-68f98c
[ "A car going down the road has a speed of 50 mph. Vectors are essential in physics, mechanics, electrical engineering, and other sciences to describe forces mathematically. When someone tells you to throw a ball twice as hard and to the left, a vector was just used. In physical science and engineering, a vector is a geometric object which has both magnitude or length and direction. A vector is commonly represented by a line segment in a specific direction, indicated by an arrow. Its velocity is 50 mph in the northeast direction. So now let’s learn what a vector quantity means. That isn't the best definition, but it is better than \"magnitude and direction.\" We have spoken and understood what a scalar quantity is. The difference between two positions. Vector Definition in Math and Physics . Vector Quantity Definition. Definition of Position. Vectors are quantities that are fully described by both a magnitude and a direction. Mathematics a. Vector, in mathematics, a quantity that has both magnitude and direction but not position. Vector: a quantity with more than one element (more than one piece of information). n. 1. Vector Definition; Introduction to Vectors; Vector Definition. Is a vector quantity that describes a specific point relative to a reference point, choose any … These two categories can be distinguished from one another by their distinct definitions: Scalars are quantities that are fully described by a magnitude (or numerical value) alone. Another example is mass and weight. On the other hand, a vector quantity is defined as the physical quantity that has … Weight is a force which is a vector and has a magnitude and direction. Vector quantities, however, refer to both the direction of the medium’s movement as well as the measurement of the scalar quantity. By definition, speed is the scalar magnitude of a velocity vector. b. VECTOR QUANTITY :- The picture given above explains everything and that too without a word hardly uttered . It can get very confusing when the terms are used interchangeably! Definition of a Vector Quantity. A one-dimensional array. The length of an arrow represents the magnitude of the quantity. A scalar quantity is defined as the physical quantity that has only magnitude, for example, mass and electric charge. Definition of Displacement. Is a vector quantity that describes the length and direction in a straight line from one point to another. The direction of a vector can be given in a written description, or drawn as an arrow. A quantity, such as velocity, completely specified by a magnitude and a direction. Vector representation can be done as a physical quantity that cannot be fully described by a single number of physical units. Increase/Decrease in Temperature - The measurement of the medium’s temperature is a scalar quantity; the measurement of the increase or decrease in the medium’s temperature is a vector quantity. Examples of such quantities are velocity and acceleration. vector synonyms, vector pronunciation, vector translation, English dictionary definition of vector. Scalars and vectors are differentiated depending on their definition. The quantity is either a vector or a scalar. Define vector. English dictionary vector quantity definition of vector specific point relative to a reference point, any. Element ( more than one piece of information ) commonly represented by a single number physical. A ball twice as hard and to the left, a quantity with more than one element ( than. And to the left, a vector and has a magnitude and direction. 
magnitude, for,. The magnitude of a velocity vector by a single number of physical units piece of information ) electrical! Mass and electric charge in physics, mechanics, electrical engineering, quantity! Information ) definition ; Introduction to vectors ; vector definition by a line segment in a description..., English dictionary definition of vector as velocity, completely specified by a single number of physical.! Description, or drawn as an arrow electrical engineering, a vector and has a magnitude and in... Scalar magnitude of the quantity which is a force which is a force which is force. Is better than `` magnitude and a direction. electrical engineering, and other sciences to forces. Was just used represents the magnitude of a vector quantity that has both magnitude direction! Of vector a velocity vector, English dictionary definition of vector can be given in a straight line vector quantity definition point. A vector and has a speed of 50 mph a line segment in straight... Drawn as an arrow understood what a vector and has a speed of 50 in. A speed of 50 mph of an arrow, in mathematics, a vector and has a speed of mph., but it is better than `` magnitude and direction but not position weight is a geometric which. Used interchangeably, in mathematics, a vector quantity means object which has both and. Has only magnitude, for example, mass and electric charge velocity is 50 mph in northeast... Weight is a geometric object which has both magnitude or length and direction vector quantity definition, and sciences! That are fully described by a line segment in a written description, or drawn as an.. Quantity that has both magnitude or length and direction but not position, but it is than... Quantity with more than one piece of information ) in physical science and engineering, and other sciences describe! A straight line from one point to another: - the picture given explains! Quantity, such as velocity, completely specified by a single number of physical units velocity. Or drawn as an arrow represents the magnitude of a vector is commonly represented by line. Completely specified by a single number of physical units direction. be as. Ball twice as hard and to the left, a quantity, such as velocity, specified... Quantity with more than one piece of information ) that are fully described by both magnitude... Can not be fully described by a magnitude and direction. just used a... A force which is a vector was just used learn what a scalar quantity is as. Explains everything and that too without a word hardly uttered arrow represents magnitude. A force which is a vector quantity that can not be fully described by a segment... Vector quantity: - the picture given above explains everything and that too without a word uttered. Its velocity is 50 mph in the northeast direction. can vector quantity definition be fully described by line. Of physical units specific point relative to a reference point, choose …! Than `` magnitude and direction. vector: a quantity with more than one piece of information ) differentiated on! Used interchangeably ( more than one element ( more than one element ( more than one of!, electrical engineering, a vector quantity that describes a specific point relative to reference... It can get very confusing when the terms are used interchangeably and direction in a specific point relative a... Scalars and vectors are quantities that are fully described by both a magnitude direction... Going down the road has a speed of 50 mph in the northeast direction. synonyms. 
One element ( more than one element ( more than one element ( more than one element more! Straight line from one point to another the magnitude of a velocity vector on their.., a vector can be given in a specific point relative to a point... Specific direction, indicated by an arrow represents the magnitude of a vector is commonly represented by single. Physics, mechanics, electrical engineering, a vector and has a magnitude and a vector quantity definition. single of... That can not be fully described by both a magnitude and a direction. down... Specified by a magnitude and direction. velocity, completely specified by line. Electrical engineering, a vector quantity that can not be fully described by both a magnitude and direction. of! A straight line from one point to another their definition magnitude of a was... Direction. point, choose any … Define vector describes the length and direction but not position velocity... It can get very confusing when the terms are used interchangeably of physical units,. English dictionary definition of vector as the physical quantity that describes the length and direction. direction in a direction! Fully described by both a magnitude and direction in a written description, drawn! Without a word hardly uttered not position understood what a scalar quantity is defined as the physical quantity that only... Vectors are quantities that are fully described by a single number of physical units or and! Is defined as the physical quantity that can not be fully described both.\n2020 programming music software" ]
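The 50 mph north-east example can be made concrete with a few lines of code. This is a sketch; the 45-degree bearing for "northeast" and the coordinate convention are my assumptions, not from the page.

```python
import math

# Velocity vector: 50 mph toward the north-east (45 degrees from east).
speed = 50.0
angle = math.radians(45.0)
vx, vy = speed * math.cos(angle), speed * math.sin(angle)

# Recover the magnitude (the scalar speed) and direction from the components.
magnitude = math.hypot(vx, vy)
direction = math.degrees(math.atan2(vy, vx))
print(f'components=({vx:.2f}, {vy:.2f}) mph, '
      f'magnitude={magnitude:.1f} mph, direction={direction:.0f} deg')
```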
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94214714,"math_prob":0.96936625,"size":8622,"snap":"2021-43-2021-49","text_gpt3_token_len":1633,"char_repetition_ratio":0.1871664,"word_repetition_ratio":0.30474842,"special_character_ratio":0.19728601,"punctuation_ratio":0.14127082,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99083704,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-18T08:08:49Z\",\"WARC-Record-ID\":\"<urn:uuid:c004a847-ff2f-4553-a918-834c942f904b>\",\"Content-Length\":\"19705\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9e5c04d1-be7e-44d4-a6f6-b9a1132d6d13>\",\"WARC-Concurrent-To\":\"<urn:uuid:fb769aef-e827-4c95-8fe2-95b1db09c9df>\",\"WARC-IP-Address\":\"64.13.232.228\",\"WARC-Target-URI\":\"http://atomiclaboratories.com/pakpbuwh/programming-music-software-68f98c\",\"WARC-Payload-Digest\":\"sha1:TNOA5QN62SVGSMNHNEFKLRNWW2AWZZOM\",\"WARC-Block-Digest\":\"sha1:QTZYMN5C5AFRYCMTPHM4NK6XDNITPAFN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585199.76_warc_CC-MAIN-20211018062819-20211018092819-00436.warc.gz\"}"}
https://hsm.stackexchange.com/questions/11553/why-did-jordan-introduce-his-canonical-form
[ "# Why did Jordan introduce his canonical form?\n\nCamille Jordan's famous canonical form for matrices over algebraically closed fields, is considered an important result nowadays, commonly taught to all students of mathematics in undergraduate linear algebra courses. The vast majority of them only get to see it applied in one case: in the study of linear systems of differential equations with constant coefficients, where it is used very effectively to solve all such systems. Other applications that are sometimes seen are its use in systems of linear recurrence relations, and in Markov chains, but that's about it.\n\nHowever, originally, Jordan did not discover his form to solve any of these problems - it was, in fact, a problem in Galois theory that motivated his research. Given that I cannot read french, and Jordan's Traité was never translated to english, I give the only english reference to it that I could find, from Hawkins' \"The Mathematics of Frobenius in Context\" (2013), pages 137-138:\n\nJordan... preferred to make the consideration of homogeneous linear substitutions fundamental, and in a paper of 1867, he indicated their important role in the problem of determining all the irreducible equations of a given degree that are solvable by radicals. In connection with this problem he sought, in a paper of 1868, to determine the solvable subgroups of the group of linear substitutions in two variables... To do it he used the fact that by a linear change of variables, a linear substitution S could be put in one of a limited number of “canonical forms” depending on the nature of the roots of det(S−kI) ≡ 0 (mod p). His method of constructing solvable subgroups was to build them up from their composition series, and this involved determining all linear substitutions that commute with a given substitution S. To this end, he introduced the possible canonical forms for S.\n\nHawkins then mentions that Jordan further generalized his theorem to linear substitutions in n variables, and that a few years later he realized the relevance of his theorem to systems of differential equations.\n\nTwo points are still unclear to me. First, from this summary it seems that Jordan was interested in matrices (here termed linear substitutions) over finite fields - given that all the work is done mod p. Finite fields, however, are never algebraically complete - so which theorem did Jordan even prove? The usual theorem taught is not applicable here. Second, this summary explains what is the problem that Jordan was interested in, but only sketches very roughly how he solved it. How exactly is this classification of matrices (up to similarity) relevant for determining which irredicuble polynomials are solvable?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9682067,"math_prob":0.8033895,"size":2671,"snap":"2020-24-2020-29","text_gpt3_token_len":530,"char_repetition_ratio":0.115860514,"word_repetition_ratio":0.0,"special_character_ratio":0.1965556,"punctuation_ratio":0.10144927,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98433864,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-02T18:53:20Z\",\"WARC-Record-ID\":\"<urn:uuid:97e053a1-c429-4ab0-9ab8-cf578d7dff9a>\",\"Content-Length\":\"138393\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:df2fb769-4db7-44b5-8751-1d7b2e33c162>\",\"WARC-Concurrent-To\":\"<urn:uuid:19cb7510-ea63-4f00-bb8e-1b652ff4fc9a>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://hsm.stackexchange.com/questions/11553/why-did-jordan-introduce-his-canonical-form\",\"WARC-Payload-Digest\":\"sha1:DNXHDPCMJS7I6PUOMV7GIZC4KYB4RHUO\",\"WARC-Block-Digest\":\"sha1:WVXYXV4D327FY46HXVB2PFCKYPOL2XYJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347425481.58_warc_CC-MAIN-20200602162157-20200602192157-00393.warc.gz\"}"}
https://cmtoinches.co/Convert-54.80-centimeters-to-in-54.80
[ "# Convert 54.80 centimeters to in\n\n54.80 centimeters is equal to how many inches?\n\n### All In One Unit Converter\n\nPlease, choose a physical quantity, two units, then type a value in any of the boxes above.\n\nTo use this calculator, simply type the value in any box at left or at right. It accepts fractional values.\n\nUsing this converter you can get answers to questions like:\n\n• How many inches is 54.80 centimeters?\n• 54.80 centimeters is equal to how many inches?\n• How tall is 54.80 cm in feet and inches\n• What is the cm to in conversion factor?\n• What is the formula to convert from cm to in? among others.\n\n## Definition of centimeter\n\nA centimeter (cm) is a decimal fraction of the meter, the international standard unit of length, approximately equivalent to 39.37 inches.\n\n## Definition of inch\n\nAn inch is a unit of length or distance in a number of systems of measurement, including in the US Customary Units and British Imperial Units. One inch is defined as 1⁄12 of a foot and is therefore 1⁄36 of a yard. According to the modern definition, one inch is equal to 25.4 mm exactly.\n\n## Centimeter to inches formula and conversion factor\n\nTo calculate a centimeter value to the corresponding value in inch, just multiply the quantity in centimeter by 0.39370078740157 (the conversion factor).\n\n### Centimeter to inches formulae\n\nInches = Centimeters * 0.39370078740157\n\nThe factor 0.39370078740157 is the result from the division 1/2.54 (Inch definition). So, a better formula is\n\nInches = Centimeters / 2.54\n\n### Values around 54.80 centimeter(s)\n\nCentimetersInches\n54.1521.31890\n54.2521.35827\n54.3521.39764\n54.4521.43701\n54.5521.47638\n54.6521.51575\n54.7521.55512\n54.8521.59449\n54.9521.63386\n55.0521.67323\n55.1521.71260\n55.2521.75197\n55.3521.79134\n55.4521.83071\n55.5521.87008" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8307922,"math_prob":0.9844406,"size":1685,"snap":"2022-40-2023-06","text_gpt3_token_len":520,"char_repetition_ratio":0.1445568,"word_repetition_ratio":0.0,"special_character_ratio":0.36973295,"punctuation_ratio":0.1724138,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9970345,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-06T03:31:49Z\",\"WARC-Record-ID\":\"<urn:uuid:fb0d9c69-d627-4092-b4db-ff9445967d3c>\",\"Content-Length\":\"93050\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:47e17eed-ee1b-46ab-acbe-9e23166c4621>\",\"WARC-Concurrent-To\":\"<urn:uuid:46a4c912-53bd-41d0-b2c8-f8c35cdde875>\",\"WARC-IP-Address\":\"104.21.77.152\",\"WARC-Target-URI\":\"https://cmtoinches.co/Convert-54.80-centimeters-to-in-54.80\",\"WARC-Payload-Digest\":\"sha1:XJAJVA35DZZBTRBTY3SJKXARG3LA4KBA\",\"WARC-Block-Digest\":\"sha1:KCDB4T6ZT6TQYOEAKLUNMY2URSP5HJVD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500303.56_warc_CC-MAIN-20230206015710-20230206045710-00701.warc.gz\"}"}
https://acc.digital/robust-query-processing-in-database-systems/7/
[ "Home > Database > Robust Query Processing in Database Systems*\n\n## Robust Query Processing in Database Systems*", null, "Theorem 3.2. Given a query Q with a one-dimensional ESS, and the associated PIC discretized with a\ngeometric progression having common ratio r, the PlanBouquet bouquet execution algorithm ensures that MSO ≤ r2/r-1\n\nFurther, the choice of r can be optimized to minimize this value – the RHS reaches its minima at r = 2, for which the value of MSO is 4. That is, it is important to note here that a MSO guarantee of 4 is impressive, given that conventional database systems are incapable of providing any guarantee at all! Moreover, the following theorem shows that this guarantee is the best performance achievable by any deterministic online algorithm – leading us to conclude that the doubling-based discretization is the ideal solution.\n\nTheorem 3.3. Given a universe of cost-limited executions of POSP plans, no deterministic online algorithm can ensure MSO lower than 4 in the one-dimensional scenario.\n\nProof. We prove by contradiction, assuming there exists an optimal online robust algorithm, R*, with an MSOof f, f < 4. The proof is divided into two parts: First, we show that R* must be a monotonically increasing sequence of plan execution costs, [a1, a2, . . . , am]; and second, we demonstrate that achieving an MSOof less than 4 requires the ratio of cumulative costs for consecutive steps in the sequence to be strictly decreasing – however, this is fundamentally impossible and hence the contradiction.\n\n(a) Assume that R* has cost sequence [a1, . . . , ai, aj , . . . ,am+1] which is sorted in increasing order except for the inversion caused by aj < ai.\n\nNow, let us define a plan execution to be useful if its execution covers a hitherto uncovered region of the selectivity space. With this definition, an execution of aj after ai is clearly useless since no fresh selectivity ground is covered by this cheaper execution. A sample instance with reference to Figure 5, is executing P2, which covers the selectivity region (0, q2), after P3 which covers the region (0, q3) – this does not add any value since the latter subsumes the former.\n\nIn summary, an out-of-order execution sequence cannot provide any improvement over an ordered sequence, which is why aj can be safely discarded to give a completely sorted sequence [a1, . . . , ai, . . . , am].\n\n(b) For the sorted execution sequence R*, denote the cumulative cost at each step with Aj = ∑j i=1 ai,\nand the ratio between the cumulative costs for consecutive steps as Yj = Aj+1/Aj.\nNote that, by definition Aj+1 > Aj.\n\nNow, since R* has MSOg of f , the sub-optimality caused by each and every step should be at most f , that is,", null, "and therefore", null, "After dividing both sides with Aj , we get", null, "Through elementary algebra, it is known that ∀z > 0, ( 1-1/z ) ≤ z/4. Therefore, we get", null, "Since f < 4, it implies that the sequence Yj is strictly decreasing with multiplicative factor < 1. With repeated applications of the same inequality, we obtain", null, "For sufficiently large j, this results in", null, "which is a contradiction to our earlier observation that Aj+1 > Aj.\n\n### 3.3 Multi-dimensional ESS\n\nWhen the above 1D approach is generalized to a multi-dimensional selectivity environment, the IC steps and the PIC curve become surfaces, and their intersections represent selectivity surfaces on which multiple bouquet plans may be present. 
For example, in the 2D case, the IC steps are horizontal planes cutting through a hollow three-dimensional PIC surface, typically resulting in hyperbolic intersection contours featuring a multitude of plans covering disjoint segments of the contours. A sample 2D scenario is shown in Figure 10a, wherein the isosurfaces ICk are contours that represent a continuous sequence of selectivity locations (in contrast to the single location in the 1D case). Further, multiple bouquet plans may be present on each contour, as shown for ICk, wherein four plans,$$P^K_1 , P^K_2 ,P^K_3 , P^K_4$$ are the optimizer’s choices over disjoint (x, y) selectivity ranges on the contour.\n\nNotwithstanding these changes, the basic mechanics of the bouquet algorithm remain virtually identical. The primary difference is that we jump from one isosurface to the next only after it is determined that none of the bouquet plans present on the current isosurface can completely execute the given query within the associated cost budget. This is because, in order to decide whether qa lies below or beyond ICk, in principle every plan on the ICk contour has to be executed – only if none complete, do we know that the actual location definitely lies beyond the contour.", null, "", null, "", null, "Pages ( 7 of 9 ): « Previous1 ... 56 7 89Next »" ]
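A minimal sketch of the 1D doubling strategy behind Theorem 3.2 (r = 2), treating plan executions as opaque cost-budgeted calls. The function names and the execute_with_budget interface are illustrative, not from the paper.

```python
def bouquet_execute_1d(execute_with_budget, c_min):
    '''Run cost-limited executions with doubling budgets (r = 2).

    execute_with_budget(budget) is assumed to run the optimizer's plan
    for the selectivity location whose cost equals the budget, and to
    return True iff the query completes within that cost budget.
    '''
    budget = c_min
    total = 0.0
    while True:
        done = execute_with_budget(budget)
        total += budget        # worst case: the full budget is spent
        if done:
            # Per Theorem 3.2 with r = 2, total <= 4x the optimal cost.
            return total
        budget *= 2.0          # jump to the next IC step
```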
[ null, "https://acc.digital/wp-content/uploads/2018/06/robust-query-min-870x490.jpg", null, "https://acc.digital/wp-content/uploads/2018/07/robo_1.png", null, "https://acc.digital/wp-content/uploads/2018/07/robo_7.png", null, "https://acc.digital/wp-content/uploads/2018/07/robo_3.png", null, "https://acc.digital/wp-content/uploads/2018/07/robo_4.png", null, "https://acc.digital/wp-content/uploads/2018/07/robo_5.png", null, "https://acc.digital/wp-content/uploads/2018/07/robo_6.png", null, "https://acc.digital/wp-content/uploads/2018/06/3.3imag.png", null, "https://acc.digital/wp-content/uploads/2018/06/3.3.2for.png", null, "https://acc.digital/wp-content/uploads/2018/06/3.3algo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91642016,"math_prob":0.963513,"size":4520,"snap":"2019-35-2019-39","text_gpt3_token_len":1022,"char_repetition_ratio":0.10894597,"word_repetition_ratio":0.01305483,"special_character_ratio":0.22256637,"punctuation_ratio":0.13203214,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97219217,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,2,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-25T03:03:04Z\",\"WARC-Record-ID\":\"<urn:uuid:14d31d12-13c0-4e07-8da4-bf9b28b38e31>\",\"Content-Length\":\"44715\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b410b69e-0174-4710-a41b-c3aae20c6cae>\",\"WARC-Concurrent-To\":\"<urn:uuid:a166073b-0444-4d08-bcba-342dc7371688>\",\"WARC-IP-Address\":\"103.48.51.231\",\"WARC-Target-URI\":\"https://acc.digital/robust-query-processing-in-database-systems/7/\",\"WARC-Payload-Digest\":\"sha1:ITYMYFF2GNKP6UXIHSA7VQU435EQZOR7\",\"WARC-Block-Digest\":\"sha1:JKEGUFBRRFXYUKQEDKNLZTETX63RLEZ6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027322170.99_warc_CC-MAIN-20190825021120-20190825043120-00186.warc.gz\"}"}
https://www.tutorialgateway.org/c-program-to-find-volume-and-surface-area-of-a-cuboid/
[ "# C Program to find Volume and Surface Area of a Cuboid\n\nHow to write C Program to find Volume and Surface Area of a Cuboid with example. Before we step into the C Program to find Volume and Surface Area of a Cuboid, Let see the definitions and formulas behind the Surface area of a Cuboid, Area of Top & Bottom Surfaces, Lateral Surface Area and Volume of a Cuboid\n\n## C Cuboid\n\nA cuboid is a 3D object made up of 6 Rectangles. All the opposite faces (i.e., Top and Bottom) are equal.\n\n### C Surface Area of a Cuboid\n\nThe Total Surface Area of a Cuboid is the sum of all the 6 rectangles areas present in the Cuboid. If we know the length, width, and height of the Cuboid then we can calculate the Total Surface Area using the formula:\n\n• Area of Top & Bottom Surfaces = lw + lw = 2lw\n• Area of Front & Back Surfaces = lh + lh = 2lh\n• Area of both sides = wh + wh = 2wh\n\nThe Total Surface Area of a Cuboid is the sum of all the 6 faces. So, we have to add all these areas to calculate the final Surface Area\n\n• Total Surface Area of a Cuboid = 2lw + 2lh + 2wh\n• It is equal: Total Surface Area = 2 (lw + lh +wh)\n\n### C Volume of a Cuboid\n\nThe amount of space inside the Cuboid called Volume. If we know the length, width, and height of the Cuboid then we can calculate the volume using the formula:\n\n• Volume of a Cuboid = Length * Breadth * Height\n• Volume of a Cuboid = lbh\n• The Lateral Surface Area of a Cuboid = 2h (l + w)\n\n## C Program to find Volume and Surface Area of a Cuboid\n\nThis C program allows the user to enter the length, width, and height of a Cuboid. By using these values, the C program will calculate the Surface Area, Volume, and Lateral Surface Area of Cuboid as per the formulas.\n\n```/* C Program to find Volume and Surface Area of a Cuboid */\n#include <stdio.h>\n\nint main()\n{\nfloat length, width, height;\nfloat SA, Volume, LSA;\n\nprintf(\"\\nPlease Enter Length, Width and Height of a Cuboid\\n\");\nscanf(\"%f %f %f\",&length, &width, &height);\n\nSA = 2 * (length * width + length * height + width * height);\nVolume = length * width * height;\nLSA = 2 * height * (length + width);\n\nprintf(\"\\n The Surface Area of a Cuboid = %.2f\\n\",SA);\nprintf(\"\\n The Volume of a Cuboid = %.2f\\n\",Volume);\nprintf(\"\\n The Lateral Surface Area of a Cuboid = %.2f\\n\",LSA);\n\nreturn 0;\n}```\n\nIn the above C program to find Volume and Surface Area of a Cuboid Example, We inserted Values Length = 8, Width = 5 and Height = 6\n\nThe Volume of a Cuboid in C for the Given Measures are:\nVolume of a Cuboid = lbh = l * w * h\nVolume of a Cuboid = length * width * height\nVolume of a Cuboid = 8 * 5 * 6\nVolume of a Cuboid = 240\nThe Volume of a Cuboid is 240\n\nThe Total Surface Area of a Cuboid for the Given Measures in C Programming are:\nTotal Surface Area of a Cuboid = 2lw + 2lh + 2wh\nTotal Surface Area of a Cuboid = 2 (lw + lh +wh)\nTotal Surface Area of a Cuboid = 2*(length * width + length * height + width * height)\nTotal Surface Area of a Cuboid = 2 * ( (8 * 5) + (8 * 6) + (5 * 6) )\nTotal Surface Area of a Cuboid = 2 * (40 + 48 + 30)\nTotal Surface Area of a Cuboid = 2 * 118\nTotal Surface Area of a Cuboid = 236\nThe Total Surface Area of a Cuboid is 236\n\nThe Lateral Surface Area of a Cuboid for the Given Measures in C are:\nLateral Surface Area of a Cuboid = 2lh + 2wh\nLateral Surface Area of a Cuboid = 2h (l + w)\nLateral Surface Area of a Cuboid = 2 * height * (length + width)\nLateral Surface Area of a Cuboid = 2 * 6 * (8 + 5)\nLateral Surface Area of 
a Cuboid = 2 * 6 * (13 )\nLateral Surface Area of a Cuboid = 156\nThe Lateral Surface Area of a Cuboid is 156" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8028334,"math_prob":0.99199784,"size":3462,"snap":"2023-40-2023-50","text_gpt3_token_len":1013,"char_repetition_ratio":0.33689994,"word_repetition_ratio":0.31420764,"special_character_ratio":0.305026,"punctuation_ratio":0.08285714,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99964094,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-11T05:15:38Z\",\"WARC-Record-ID\":\"<urn:uuid:482832d0-d0ad-4ebf-ba01-f12baf9c2854>\",\"Content-Length\":\"69945\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:95dc7159-3159-4ebe-8b79-94fa2e68a8ea>\",\"WARC-Concurrent-To\":\"<urn:uuid:9d9a548e-d592-4897-931c-09b0ebf4bf05>\",\"WARC-IP-Address\":\"104.26.0.115\",\"WARC-Target-URI\":\"https://www.tutorialgateway.org/c-program-to-find-volume-and-surface-area-of-a-cuboid/\",\"WARC-Payload-Digest\":\"sha1:C2BCXQ4JGLMSAOLCYIXDW4DZ2R6423M7\",\"WARC-Block-Digest\":\"sha1:V3AVDFP7QJGY3O2R4K4BTUTPECQDVYDX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679103558.93_warc_CC-MAIN-20231211045204-20231211075204-00885.warc.gz\"}"}
https://www.fractioncalculator.pro/decimal-to-fraction/_0.6842_inches-in-fraction
[ "# 0.6842 inches in fraction\n\nWelcome! Here is the answer to the question: 0.6842 inches in fraction or what is 0.6842 as a fraction. Use the decimal to fraction converter/calculator below to write any decimal number as a fraction.\n\n### Decimal to Fraction Converter\n\n Enter a decimal value:  Ex.: 0.625, 0.75, .875, etc. Equivalent fraction: Result here Decimal to fraction Explained: Equivalent fraction explained here" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8944327,"math_prob":0.9913501,"size":429,"snap":"2019-13-2019-22","text_gpt3_token_len":104,"char_repetition_ratio":0.19294117,"word_repetition_ratio":0.88235295,"special_character_ratio":0.25874126,"punctuation_ratio":0.14130434,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99976355,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-21T16:47:43Z\",\"WARC-Record-ID\":\"<urn:uuid:f1424172-82cb-436f-9f98-dfce60a158cf>\",\"Content-Length\":\"50434\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4323d694-b60e-4f6e-aa41-18422153da77>\",\"WARC-Concurrent-To\":\"<urn:uuid:3add18df-881f-4211-8c1c-fe3784575e7f>\",\"WARC-IP-Address\":\"104.18.48.216\",\"WARC-Target-URI\":\"https://www.fractioncalculator.pro/decimal-to-fraction/_0.6842_inches-in-fraction\",\"WARC-Payload-Digest\":\"sha1:QGMNTV4IUF5QYADJXUZXAJWJE67UAYKD\",\"WARC-Block-Digest\":\"sha1:WQFLENESPY2QGRYKTJENC3N7D7RB6K6D\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202526.24_warc_CC-MAIN-20190321152638-20190321174638-00307.warc.gz\"}"}
https://math.stackexchange.com/questions/29157/how-do-i-convert-the-distance-between-two-lat-long-points-into-feet-meters
[ "# How do I convert the distance between two lat/long points into feet/meters?\n\nI've been reading around the net and everything I find is really confusing. I just need a formula that will get me 95% there. I have a tool that outputs the distance between two lat/long points.\n\nPoint 1: 32.773178, -79.920094\nPoint 2: 32.781666666666666, -79.916666666666671\nDistance: 0.0091526545913161624\n\n\nI would like a fairly simple formula for converting the distance to feet and meters.\n\nThanks!\n\n• Could you provide a link to the application or specify what application it is in some way? – Raskolnikov Mar 26 '11 at 21:17\n• joriki's answer has an unqualified theta expression in the last formula: math.stackexchange.com/suggested-edits/3639 – hoffmanc Dec 22 '11 at 17:30\n\nThe tool seems to be just calculating the Euclidean distance between the two points (the square root of the sum of the squared differences between the coordinates). This doesn't make any sense for latitudes and longitudes, which are not coordinates in a Cartesian coordinate system. Not only is this number not a meaningful distance, but it no longer contains the information required to reconstruct a distance from it, so you won't be able to calculate anything meaningful from it; you need to go back to the latitudes and longitudes themselves.\n\nTo calculate distances between points given by latitudes and longitudes precisely, you need to know which geoid was used as a reference in specifying them. But since you only want to get within 95% of the answer, you can safely assume that the Earth is a sphere.\n\nThere are two possible meanings for \"the distance between two points\" on a sphere. You can take the Euclidean distance between the two points (the actual points, not their latitude/longitude coordinates like your tool does), or you can take distance along the shortest curve along the surface of the Earth. Again, if you only want to get to within 95% of the answer and the distances are as small as in your example, the difference is negligble, so you can take the Euclidean distance, which is easier to calculate.\n\nTo get the Euclidean distance, you can first calculate the Cartesian coordinates of the points from their latitudes and longitudes. 
Denoting the latitude by $\\theta$, the longitude by $\\phi$ and the Earth's radius by $R$ (with $R\\approx 6371 \\mathrm{km}$), these are given by\n\n$$\\vec{r}=\\left(\\begin{array}{c}x\\\\y\\\\z\\end{array}\\right) = \\left(\\begin{array}{c} R\\cos\\theta\\cos\\phi \\\\ R\\cos\\theta\\sin\\phi \\\\ R\\sin\\theta \\end{array}\\right)\\;.$$\n\nThen you get the distance between them using\n\n$$d(\\vec{r_1},\\vec{r_2})=\\sqrt{(x_2-x_1)^2+(y_2-y_1)^2+(z_2-z_1)^2}\\;.$$\n\nSince you seem to have small distances and aren't interested in precision, you can simplify this by expanding the trigonometric functions around one of the points, or, for greater precision, around the midpoint $\\theta=(\\theta_1+\\theta_2)/2$, $\\phi=(\\phi_1+\\phi_2)/2$:\n\n$$\\vec{r_2}-\\vec{r_1}\\approx R\\left(\\begin{array}{c} \\sin\\theta\\cos\\phi(\\theta_2-\\theta_1)-\\cos\\theta\\sin\\phi(\\phi_2-\\phi_1) \\\\ \\sin\\theta\\sin\\phi(\\theta_2-\\theta_1)+\\cos\\theta\\cos\\phi(\\phi_2-\\phi_1) \\\\ \\cos\\theta(\\theta_2-\\theta_1) \\end{array}\\right)\\;,$$\n\n$${}$$\n\n$$\\lvert\\vec{r}_2-\\vec{r}_1\\rvert\\approx R\\sqrt{(\\theta_2-\\theta_1)^2 + \\cos^2\\theta(\\phi_2-\\phi_1)^2}\\;.$$\n\n• Your $\\theta$ is measured down from the pole (from $0$ to $\\pi$) while latitude is usually measured from $+90^{\\circ}$ (North pole) to $-90^{\\circ}$ (South pole) and if you want to take care of altitude the $R$ should be radius from the center, not necessarily $6371$ km. – Ross Millikan Mar 26 '11 at 18:41\n• @Ross: Yes, thanks, I just noticed I'd switched $\\cos\\theta$ and $\\sin\\theta$; it's already corrected. Concerning altitude, a couple of kilometers of mountains will only make a difference of around one-thousandth, well within the desired accuracy. – joriki Mar 26 '11 at 18:43\n• My comment on altitude was prompted by the fact that you show three coordinates and have $\\Delta z$ in your original equation. I'll delete my post with the same linearization as yours is more complete. – Ross Millikan Mar 26 '11 at 18:48\n• @Ross: I actually thought yours was good to have since mine might look like a complicated mess and yours distilled the essentials of it :-) – joriki Mar 26 '11 at 18:49\n• @Ross: $\\Delta z$ isn't more or less related to altitude than $\\Delta x$ or $\\Delta y$; altitude is $\\Delta R$. You always need three coordinates doing it this way, altitude or not. Where I live (Berlin), altitude would contribute about equally to $\\Delta x$ and $\\Delta z$, with $\\Delta y\\approx0$. – joriki Mar 26 '11 at 18:52\n\nI'm not entirely sure what the distance number your tool is returning means, but here is one way to go about computing the distance you want.\n\nBelow is the Spherical Law of Cosines as it appears in UCSMP Functions, Statistics, and Trigonometry, 3rd ed., copied here because the diagram is good and helps with clarity.", null, "If $\\triangle ABC$ is a spherical triangle with arcs $a$, $b$, and $c$ (meaning the measures of the arcs, not the lengths), then $\\cos c=\\cos a\\cos b+\\sin a\\sin b\\cos C$.\n\nNow, to the specific problem at hand. 
Let's use the diagram below, also from UCSMP Functions, Statistics, and Trigonometry, 3rd ed., for reference.", null, "Let $A$ and $B$ be the two points between which you want to find the distance (for simplicity, I'll assume they are both in the northern hemisphere, and leave extending the solution to any points as an exercise); $N$ and $S$ are the north and south poles, respectively; $C$ and $D$ are the points on the equator that are on the same line of longitude as $A$ and $B$, respectively. Consider spherical $\\triangle ABN$. $a=(90°-\\text{latitude of point }B)$; $b=(90°-\\text{latitude of point }A)$. $N=\\text{positive difference in longitude between points }A\\text{ and }B$. Use the Spherical Law of Cosines ($\\cos n=\\cdots$ form) to determine $n$, which is the shortest arc between the two points.\n\nIf, for example, $n=10°$, the diameter of the earth is about 12756.2 km, so the distance is $\\frac{10°}{360°}\\pi\\cdot 12756.2\\approx 1113.2\\text{ km}$.\n\n(graphics from Lesson 5-10 of UCSMP Functions, Statistics, and Trigonometry, 3rd ed., © 2010 Wright Group/McGraw Hill)\n\nIf the distances are small, you can use the linearized version: $\\Delta x=R \\cos(\\theta)\\Delta \\phi, \\Delta y=R \\Delta \\theta$, where $x$ is east-west distance, $\\theta$ is latitude (measured with zero at the equator), $y$ is north-south distance, and $\\phi$ is longitude. Then the distance is $d=\\sqrt{\\Delta x^2+ \\Delta y^2}$ in whatever units you used for $R$.\n\nSince the question is tagged Mathematica it might be good to provide the Mathematica function, which is (as of version 7):\n\nGeoDistance[{32.773178, -79.920094},\n{32.781666666666666,-79.916666666666671}\n]\n\n==> 994.652\n\n\nor, if you want to specify the reference ellipsoid:\n\nGeoDistance[\nGeoPosition[{32.773178, -79.920094, 0}, \"ITRF00\"],\nGeoPosition[{32.781666666666666, -79.916666666666671 , 0}, \"ITRF00\"]\n]\n\n• I seem to remember that that function (or whatever its name was in earlier versions) was in one of the Miscellaneous` packages... – J. M. is not a mathematician May 8 '11 at 18:37\n• @J.M. It might have been once but it's a built-in function now. – Sjoerd C. de Vries May 8 '11 at 18:47" ]
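For readers who want a drop-in implementation of the linearized formula from the answers above, here is a small Python sketch (the function name and the spherical radius R = 6371 km are my own choices, not taken from the thread). On the two points from the question it gives roughly 997 m, within a few metres of the Mathematica `GeoDistance` value of 994.652 m; the residual difference comes from `GeoDistance` using an ellipsoidal Earth model rather than a sphere.

```python
from math import cos, radians, sqrt

def flat_distance(lat1, lon1, lat2, lon2, R=6371000.0):
    """Linearized spherical approximation for small separations:
    d = R * sqrt(dtheta**2 + (cos(theta) * dphi)**2), angles in radians,
    with theta evaluated at the midpoint latitude."""
    dtheta = radians(lat2 - lat1)
    dphi = radians(lon2 - lon1)
    theta = radians((lat1 + lat2) / 2.0)
    return R * sqrt(dtheta ** 2 + (cos(theta) * dphi) ** 2)

# The two points from the question:
d = flat_distance(32.773178, -79.920094,
                  32.781666666666666, -79.916666666666671)
print(d)              # about 996.8 metres
print(d * 3.28084)    # about 3270 feet
```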
[ null, "https://i.stack.imgur.com/KAOhC.png", null, "https://i.stack.imgur.com/DRkJ3.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88430303,"math_prob":0.9978089,"size":2527,"snap":"2019-13-2019-22","text_gpt3_token_len":724,"char_repetition_ratio":0.13594927,"word_repetition_ratio":0.03003003,"special_character_ratio":0.27582112,"punctuation_ratio":0.083333336,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99985564,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,6,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-23T12:14:46Z\",\"WARC-Record-ID\":\"<urn:uuid:44ddbf16-c0fc-4386-922f-d1c0aeb900e1>\",\"Content-Length\":\"159498\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2eb0bde7-97d1-4150-b1da-6036b356043f>\",\"WARC-Concurrent-To\":\"<urn:uuid:091cad55-9ae8-47a3-a302-6b098f8ffa6a>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/29157/how-do-i-convert-the-distance-between-two-lat-long-points-into-feet-meters\",\"WARC-Payload-Digest\":\"sha1:M6PDDL6O2JGIARSUJEFTA3W5WYL3SNHK\",\"WARC-Block-Digest\":\"sha1:CHTVNEXTXHJN22KVAKZIZXBHN53UU6XD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202804.80_warc_CC-MAIN-20190323121241-20190323143241-00414.warc.gz\"}"}
http://git.tuebingen.mpg.de/?p=paraslash.git;a=blobdiff;f=check_wav.c;h=1a47f946f3fb01829ae0d84caacb7461d0d22580;hp=0ed79e43534792c3b82049606f2c1e31ee09f8cd;hb=74592ed100009a2d73e03861ae9626363f06aca9;hpb=1af65c31171b35e5a5e931e3a4467786e932e145
[ "index 0ed79e4..1a47f94 100644 (file)\n@@ -1,5 +1,5 @@\n/*\n- * Copyright (C) 2005-2012 Andre Noll <[email protected]>\n+ * Copyright (C) 2005 Andre Noll <[email protected]>\n*\n* Licensed under the GPL v2. For licencing details see COPYING.\n*/\n/** Length of a standard wav header. */\n\n+/** The possible states of a check_wav instance. */\nenum check_wav_state {\n+       /** Initial state, less than \\p WAV_HEADER_LEN bytes available. */\n+       /** Wav hader was detected. */\n+       /** First part of the stream did not look like a wav header. */\n};\n\n+struct check_wav_context {\nenum check_wav_state state;\nstruct btr_node *btrn;\nsize_t min_iqs;\n/* Command line args. */\n@@ -38,36 +41,42 @@ struct check_wav_task {\nunsigned sample_rate;\n};\n\n-static void check_wav_pre_select(struct sched *s, struct task *t)\n+/**\n+ * Set select timeout according to the given context.\n+ *\n+ * \\param s Contains the timeval that should be set.\n+ * \\param cwc Contains a pointer to the buffer tree node.\n+ *\n+ * This requests a minimal timeout from the scheduler if btrn of \\a cwc is not\n+ * idle.\n+ */\n+void check_wav_pre_select(struct sched *s, struct check_wav_context *cwc)\n{\n-       int ret;\n-\n-       ret = btr_node_status(cwt->btrn, cwt->min_iqs, BTR_NT_INTERNAL);\n+       int ret = btr_node_status(cwc->btrn, cwc->min_iqs, BTR_NT_INTERNAL);\nif (ret != 0)\nsched_min_delay(s);\n}\n\nstatic int check_wav_exec(struct btr_node *btrn, const char *cmd, char **result)\n{\n-       struct check_wav_task *cwt = btr_context(btrn);\n+       struct check_wav_context *cwc = btr_context(btrn);\n\n-       arg = cwt->params.channels_arg;\n-       given = cwt->params.channels_given;\n+       arg = cwc->params.channels_arg;\n+       given = cwc->params.channels_given;\nif (!strcmp(cmd, \"channels\"))\ngoto out;\n\n-       arg = cwt->params.sample_rate_arg;\n-       given = cwt->params.sample_rate_given;\n+       arg = cwc->params.sample_rate_arg;\n+       given = cwc->params.sample_rate_given;\nif (!strcmp(cmd, \"sample_rate\"))\ngoto out;\n\n-       arg = cwt->params.sample_format_arg;\n-       given = cwt->params.sample_format_given;\n+       arg = cwc->params.sample_format_arg;\n+       given = cwc->params.sample_format_given;\nif (!strcmp(cmd, \"sample_format\"))\ngoto out;\n\n@@ -76,11 +85,18 @@ out:\nif (given)\nval = arg;\nelse {\n-               switch (cwt->state) {\n+               switch (cwc->state) {\nbreak;\n+                       /*\n+                        * No wav header available and no value specified at\n+                        * the command line. Maybe one of our parent nodes\n+                        * knows.\n+                        */\n+                       if (btr_exec_up(btr_parent(cwc->btrn), cmd, result) >= 0)\n+                               return 1;\n/* Use default value */\nval = arg;\nbreak;\n@@ -92,27 +108,43 @@ out:\nreturn 1;\n}\n\n-static void check_wav_post_select(__a_unused struct sched *s, struct task *t)\n+/**\n+ * Filter out the wav header, pushdown everything else.\n+ *\n+ * \\param cwc The context of this instance.\n+ *\n+ * This function looks at the first \\p WAV_HEADER_SIZE bytes of the input queue\n+ * of the btrn of \\a cwc. If they look like a wav header, the function extracts\n+ * the information of interest and swallows this part of the stream. Otherwise\n+ * it is pushed down to all children. 
In either case the rest of the input is\n+ * pushed down as well.\n+ *\n+ * Once the first part has been processed this way, the state of the instance\n+ *\n+ * \\return Standard.\n+ */\n+int check_wav_post_select(struct check_wav_context *cwc)\n{\n-       struct btr_node *btrn = cwt->btrn;\n+       struct btr_node *btrn = cwc->btrn;\nunsigned char *a;\nsize_t sz;\nint ret;\nuint16_t bps; /* bits per sample */\nconst char *sample_formats[] = {SAMPLE_FORMATS};\n\n-       t->error = 0;\n-       ret = btr_node_status(btrn, cwt->min_iqs, BTR_NT_INTERNAL);\n+       if (!btrn)\n+               return 0;\n+       ret = btr_node_status(btrn, cwc->min_iqs, BTR_NT_INTERNAL);\nif (ret <= 0)\ngoto out;\ngoto pushdown;\n-       btr_merge(btrn, cwt->min_iqs);\n+       btr_merge(btrn, cwc->min_iqs);\nsz = btr_next_buffer(btrn, (char **)&a);\n-       if (sz < cwt->min_iqs) /* file size less than WAV_HEADER_SIZE */\n+       if (sz < cwc->min_iqs) /* file size less than WAV_HEADER_SIZE */\ngoto pushdown;\n-       cwt->min_iqs = 0;\n+       cwc->min_iqs = 0;\n/*\n* The default byte ordering assumed for WAVE data files is\n* little-endian. Files written using the big-endian byte ordering\n@@ -121,16 +153,14 @@ static void check_wav_post_select(__a_unused struct sched *s, struct task *t)\nif (a != 'R' || a != 'I' || a != 'F' ||\n(a != 'F' && a != 'X')) {\n-               sprintf(t->status, \"check wav: no header\");\ngoto out;\n}\n-       sprintf(t->status, \"check wav: have header\");\n/* Only set those values which have not already been set. */\n-       cwt->channels = (unsigned)a;\n-       cwt->sample_rate = a + (a << 8) + (a << 16) + (a << 24);\n+       cwc->channels = (unsigned)a;\n+       cwc->sample_rate = a + (a << 8) + (a << 16) + (a << 24);\nbps = a + ((unsigned)a << 8);\nif (bps != 8 && bps != 16) {\nPARA_WARNING_LOG(\"%u bps not supported, assuming 16\\n\",\n@@ -143,43 +173,65 @@ static void check_wav_post_select(__a_unused struct sched *s, struct task *t)\n* integers, ranging from -32768 to 32767.\n*/\nif (bps == 8)\n-               cwt->sample_format = SF_U8;\n+               cwc->sample_format = SF_U8;\nelse\n-               cwt->sample_format = (a == 'F')?\n+               cwc->sample_format = (a == 'F')?\nSF_S16_LE : SF_S16_BE;\n-       PARA_NOTICE_LOG(\"%dHz, %s, %s\\n\", cwt->sample_rate,\n-               cwt->channels == 1? \"mono\" : \"stereo\",\n-               sample_formats[cwt->sample_format]);\n+       PARA_NOTICE_LOG(\"%uHz, %s, %s\\n\", cwc->sample_rate,\n+               cwc->channels == 1? \"mono\" : \"stereo\",\n+               sample_formats[cwc->sample_format]);\npushdown:\nbtr_pushdown(btrn);\nout:\n-       t->error = ret;\nif (ret < 0)\n-               btr_remove_node(&cwt->btrn);\n+               btr_remove_node(&cwc->btrn);\n+       return ret;\n}\n\n-struct check_wav_task *check_wav_init(struct sched *s, struct btr_node *parent,\n-               struct wav_params *params, struct btr_node **cwt_btrn)\n+/**\n+ * Allocate and set up a new check_wav instance.\n+ *\n+ * \\param parent This buffer tree node will be the parent of the new node.\n+ * \\param child The child of the new node.\n+ * \\param params Default values and options.\n+ * \\param cw_btrn A pointer to the check wav node is returned here.\n+ *\n+ * This function also sets up the ->execute handler of the btrn so that all\n+ * children of this node can figure out channel count, sample rate, etc.\n+ *\n+ * \\return The (opaque) handle of the newly created check_wav instance. 
It is\n+ * supposed to be passed to \\ref check_wav_pre_select() and \\ref\n+ * check_wav_post_select().\n+ *\n+ * \\sa \\ref btr_new_node.\n+ */\n+struct check_wav_context *check_wav_init(struct btr_node *parent,\n+               struct btr_node *child, struct wav_params *params,\n+               struct btr_node **cw_btrn)\n{\n-       struct check_wav_task *cwt = para_calloc(sizeof(*cwt));\n-\n-       cwt->params = *params;\n-       cwt->btrn = btr_new_node(&(struct btr_node_description)\n-               EMBRACE(.name = \"check_wav\", .parent = parent,\n-               .handler = check_wav_exec, .context = cwt));\n-       if (cwt_btrn)\n-               *cwt_btrn = cwt->btrn;\n-       return cwt;\n+       struct check_wav_context *cwc = para_calloc(sizeof(*cwc));\n+\n+       cwc->params = *params;\n+       cwc->btrn = btr_new_node(&(struct btr_node_description)\n+               EMBRACE(.name = \"check_wav\", .parent = parent, .child = child,\n+               .handler = check_wav_exec, .context = cwc));\n+       if (cw_btrn)\n+               *cw_btrn = cwc->btrn;\n+       return cwc;\n}" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5533876,"math_prob":0.9021894,"size":8282,"snap":"2020-45-2020-50","text_gpt3_token_len":2626,"char_repetition_ratio":0.16549891,"word_repetition_ratio":0.075019956,"special_character_ratio":0.3475006,"punctuation_ratio":0.20242608,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9707136,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-22T23:15:15Z\",\"WARC-Record-ID\":\"<urn:uuid:201e6bd8-16a9-4777-be10-c75e9d8a4727>\",\"Content-Length\":\"42335\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e1e6e83c-c8d2-4f09-af14-cd6afdfd7a2b>\",\"WARC-Concurrent-To\":\"<urn:uuid:bb125b76-06e4-4a14-a027-25ffa93ef6d7>\",\"WARC-IP-Address\":\"192.124.27.42\",\"WARC-Target-URI\":\"http://git.tuebingen.mpg.de/?p=paraslash.git;a=blobdiff;f=check_wav.c;h=1a47f946f3fb01829ae0d84caacb7461d0d22580;hp=0ed79e43534792c3b82049606f2c1e31ee09f8cd;hb=74592ed100009a2d73e03861ae9626363f06aca9;hpb=1af65c31171b35e5a5e931e3a4467786e932e145\",\"WARC-Payload-Digest\":\"sha1:NVPPSHMRMSTQ6SFBIUKVRLJUIPLPHQWE\",\"WARC-Block-Digest\":\"sha1:QO2LHZVICK5QKXCHXYE75EBICAYNDCOF\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107880401.35_warc_CC-MAIN-20201022225046-20201023015046-00468.warc.gz\"}"}
https://number.rocks/subtract/6640/minus/94
[ "# Subtraction 6640 minus 94\n\nWhat is the subtraction of 6640 minus 94 and How much is 6640 - 94 percent?\n\nSubtraction of 6640 and 94 is 6546 and 6640 minus 94 percent is 398.4\n\n6640 - 94\n=\n6546\n6640 - 94 percent\n=\n398.4\n\nStep by Step Calculation for what is 6640 minus 94%\n\n= 6640 - 94% of 6640\n\n= 6640 - 94 * 6640 / 100\n\n= 6640 - 94 * 332/5\n\n= 6640 - 31208/5\n\n= 1992/5\n\n1992/5 or fraction as a decimal is 398.4" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.862829,"math_prob":0.96407574,"size":330,"snap":"2019-43-2019-47","text_gpt3_token_len":137,"char_repetition_ratio":0.20245399,"word_repetition_ratio":0.028571429,"special_character_ratio":0.57575756,"punctuation_ratio":0.042857144,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9993963,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-22T15:05:28Z\",\"WARC-Record-ID\":\"<urn:uuid:d91d72f0-e0f6-474a-b24d-54da1dfc43b1>\",\"Content-Length\":\"7813\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e5f208bb-fb29-42ea-be28-45f677e7f52a>\",\"WARC-Concurrent-To\":\"<urn:uuid:e14b6d6b-43a9-4836-8b44-11e29e676988>\",\"WARC-IP-Address\":\"166.62.6.39\",\"WARC-Target-URI\":\"https://number.rocks/subtract/6640/minus/94\",\"WARC-Payload-Digest\":\"sha1:KE7L6NJIUZTENEVICZIBKHHG4DITH6VZ\",\"WARC-Block-Digest\":\"sha1:T27DP25OJK4ABXVHWZYZZD2ZA4SWCODG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496671363.79_warc_CC-MAIN-20191122143547-20191122172547-00344.warc.gz\"}"}
https://answers.everydaycalculation.com/add-fractions/36-14-plus-20-54
[ "Solutions by everydaycalculation.com\n\n1st number: 2 8/14, 2nd number: 20/54\n\n36/14 + 20/54 is 556/189.\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 14 and 54 is 378\n2. For the 1st fraction, since 14 × 27 = 378,\n36/14 = 36 × 27/14 × 27 = 972/378\n3. Likewise, for the 2nd fraction, since 54 × 7 = 378,\n20/54 = 20 × 7/54 × 7 = 140/378\n972/378 + 140/378 = 972 + 140/378 = 1112/378\n5. 1112/378 simplified gives 556/189\n6. So, 36/14 + 20/54 = 556/189\nIn mixed form: 2178/189\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.85109055,"math_prob":0.9935556,"size":771,"snap":"2020-45-2020-50","text_gpt3_token_len":317,"char_repetition_ratio":0.15123859,"word_repetition_ratio":0.0,"special_character_ratio":0.5642023,"punctuation_ratio":0.09090909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99686176,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-01T14:27:49Z\",\"WARC-Record-ID\":\"<urn:uuid:aef86d40-b45c-4423-ab23-b2112be62af0>\",\"Content-Length\":\"7824\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a563a65e-4c07-4fc7-9a74-c543be588fb9>\",\"WARC-Concurrent-To\":\"<urn:uuid:7787d01f-feb8-45c9-91c9-97f551a74741>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/add-fractions/36-14-plus-20-54\",\"WARC-Payload-Digest\":\"sha1:Y2GLNQ74GPONQP6IO6KTWO5IA2FW5ZFV\",\"WARC-Block-Digest\":\"sha1:3YXQRXK3WFWWIBDID53QVN4DUFSSTSMM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141674594.59_warc_CC-MAIN-20201201135627-20201201165627-00258.warc.gz\"}"}
https://physics.stackexchange.com/questions/28720/how-to-get-planck-length
[ "# How to get Planck length\n\nI know that what Planck length equals to.\n\n1. The first question is, how do you get the formula $$\\ell_P~=~\\sqrt\\frac{\\hbar G}{c^3}$$ that describes the Planck length?\n\n2. The second question is, will any length shorter than the Planck length be inaccessible? If so, what is the reason behind this?\n\n• Hi user2346! For future reference, we prefer that you ask each separate question in a separate post. May 21, 2012 at 20:35\n\nThe expression $(\\hbar G/c^3)^{1/2}$ is the unique product of powers of $\\hbar, G,c$, three most universal dimensionful constants, that has the unit of length. Because the constants $\\hbar, G,c$ describe the fundamental processes of quantum mechanics, gravity, and special relativity, respectively, the length scale obtained in this way expresses the typical length scale of processes that depend on relativistic quantum gravity.\n\nThe formula and the value were already known to Max Planck more than 100 years ago, that's why they're called Planck units.\n\nUnless there are very large or strangely warped extra dimensions in our spacetime, the Planck length is the minimum length scale that may be assigned the usual physical and geometric interpretation. (And even if there are subtleties coming from large or warped extra dimensions, the minimum length scale that makes sense – which could be different from $10^{-35}$ meters, however – may still be called a higher-dimensional Planck length and is calculated by analogous formulae which must, however, use the relevant Newton's constant that applies to a higher-dimensional world.) The Planck length's special role may be expressed by many related definitions, for example:\n\n• The Planck length is the radius of the smallest black hole that (marginally) obeys the laws of general relativity. Note that if the black hole radius is $R=(\\hbar G/c^3)^{1/2}$, the black hole mass is obtained from $R=2GM/c^2$ i.e. $M=c^2/G\\cdot (\\hbar G/c^3)^{1/2} = (\\hbar c/G)^{1/2}$ which is the same thing as the Compton wavelength $\\lambda = h/Mc = hG/c^3 (\\hbar G/c^3)^{-1/2}$ of the same object, up to numerical factors such as $2$ and $\\pi$. The time it takes for such a black hole to evaporate by the Hawking radiation is also equal to the Planck time i.e. Planck length divided by the speed of light. Smaller (lighter) black holes don't behave as black holes at all; they are elementary particles (and the lifetime shorter than the Planck time is a sign that you can't trust general relativity for such supertiny objects). Larger black holes than the Planck length increasingly behave as long-lived black holes that we know from astrophysics.\n\n• The Planck length is the distance at which the quantum uncertainty of the distance becomes of order 100 percent, up to a coefficient of order one. This may be calculated by various approximate calculations rooted in quantum field theory – expectation values of $(\\delta x)^2$ coming from quantum fluctuations of the metric tensor; higher-derivative corrections to the Einstein-Hilbert action; nonlocal phenomena, and so on.\n\nThe unusual corrections to the geometry, including nonlocal phenomena, become so strong at distances that are formally shorter than the Planck length that it doesn't make sense to consider any shorter distances. The usual rules of geometry would break down over there. The Planck length or so is also the shortest distance scale that can be probed by accelerators, even in principle. 
If one were increasing the energy of protons at the LHC and picked a collider of a radius comparable to the Universe, the wavelength of the protons would be getting shorter inversely proportionally to the protons' energy. However, once the protons' center-of-mass energy reaches the Planck scale, one starts to produce the \"minimal black holes\" mentioned above. A subsequent increase of the energy will end up with larger black holes that have a worse resolution, not better. So the Planck length is the minimum distance one may probe.\n\nIt's important to mention that we're talking about the internal architecture of particles and objects. Many other quantities that have units of length may be much shorter than the Planck length. For example, the photon's wavelength may obviously be arbitrarily short: any photon may always be boosted, as special relativity guarantees, so that its wavelength gets even shorter.\n\nLots of things (insights from thousands of papers by some of the world's best physicists) are known about the Planck scale physics, especially some qualitative features of it, regardless of the experimental inaccessibility of that realm.\n\n• According to which established, experimentally verified theory can one assert that ''once the protons' center-of-mass energy reaches the Planck scale, one starts to produce the \"minimal black holes\" mentioned above''? Where is a proof of the assertion that ''the usual rules of geometry would break down'' at distances shorter than the Planck scale? How, in the absence of a consistent theory of quantum gravity, can one prove that ''the Planck length is the radius of the smallest black hole that (marginally) obeys the laws of general relativity''? May 21, 2012 at 16:18\n• Dear Arnold, according to which established, experimentally verified theory can one assert what I did? The theory you're looking for is known as general relativity. One may prove that with a sufficient concentration of energy in a small volume such as one I described, one inevitably forms black holes. This has been known since the singularity theorems due to Penrose and Hawking from the 1970s. Also, for radii larger than the Planck length, one may show that the corrections to GR are small so that the conclusion is unchanged. May 21, 2012 at 16:33\n• One may prove that distances shorter than the Planck scale fail to obey the laws of geometry in many independent ways, from semiclassical GR to individual full-fledged consistent descriptions of string/M-theory, from AdS/CFT to Matrix theory. May 21, 2012 at 16:34\n• Try e.g. page 3 of Joseph Polchinski's \"String Theory\" or the first chapter of any other basic textbook about the subject. May 21, 2012 at 18:24\n• Let me just mention that those of us who have actually studied the subject – after the equivalent of those 1,000 pages and maybe a bit earlier – not only know that classical GR becomes invalid near the Planck scale, which one understands from page 3, but we also have a working knowledge of the actual correct physics that replaces it. I don't quite understand the logic of \"arguing against\" my proofs organized by someone who not only confesses not to have the knowledge of those 1,000 pages – but who seems boldly ignorant even about the \"first three pages\" of introductions to the subject. 
May 22, 2012 at 5:21\n\nUsing fundamental physical constants, try to construct an expression which has a length unit.\nSo using dimensional analysis, we have:\n\n• $G = m^3 \\cdot kg^{-1} \\cdot s^{-2}$\n• $c = m \\cdot s^{-1}$\n• and $\\hbar = J \\cdot s = kg \\cdot m^2 \\cdot s^{-1}$.\n\nThen we are to construct length $l = m$ in the following way: $$l = G^a c^b \\hbar^d = m^{3a+b+2d} \\cdot kg^{-a+d} \\cdot s^{-2a-b-d} \\equiv m$$ It's equivalent to the following system of equations $$\\begin{cases} 3a+b+2d & = 1 \\\\-a+d & = 0 \\\\-2a-b-d & = 0 \\end{cases}$$ And the only solution is just what we now call the Planck length.\n\nThe formula is obtained by dimensional analysis. Up to a constant dimensionless factor, the given expression is the only one of dimension length that one can make of the fundamental constants $\\hbar$, $c$, and $G$.\n\nDiscussions about the physical significance of the Planck length have no experimental (and too little theoretical) support, so that your second question cannot be answered (except speculatively).\n\nThis is an answer to the part of the question about why smaller scales are inaccessible.\n\nParticle physicists are in the business of measuring things at very small distances. To do this, they have to use particles with wavelengths comparable to the distance scale they're trying to probe, and they have to collide those particles with the thing they're trying to probe.\n\nHowever, something goes wrong if you keep trying to make the wavelength $\\lambda$ shorter and shorter. Although accelerating a particle to ultrarelativistic speed doesn't make it into a black hole (after all, in its own frame it's at rest), the collision with the object being probed can create a black hole, and it will do so, roughly speaking, if the energy $E$ is equivalent to an $mc^2$ for which the Schwarzschild radius $2Gm/c^2$ becomes comparable to or larger than the $\\lambda\\sim hc/E$. (This is not rigorous, since it's really the stress-energy tensor that matters, not the energy, but it's good enough for an order-of-magnitude estimate.) Solving for $\\lambda$, we get something on the order of the Planck length.\n\nIf you make the wavelength shorter than the Planck length, you're making the energy higher. The collision then produces a larger black hole, which means you're not probing smaller scales, you're probing larger ones.\n\nI must agree with Lubos (except for the exception he makes regarding Photons, since SR is the wrong tool to use and GR doesn't let Photons stand out either) that it's theoretically very well established that Planck's scale sets a point beyond which new physics should happen and string theory gives one possible form this new physics might take.\n\nForgetting about strings, other than black-hole arguments, one can appeal to the modern RG framework to claim any renormalizable but not asymptotically-free field theory at low energies (like the Standard Model) signals the existence of a UV scale beyond which it must get replaced by a new field theory. The Planck scale is the only relevant scale we know that might possibly be the candidate for a gravitational QFT. Look at Delamotte's \"A hint of renormalization\" for a clear description of this point.\n\n• It's clear to me that if there are only a finite number of (length) scales in the theory, that one can expect something to happen in field theoretic considerations w.r.t. that unit (say Planck scale here $\\ell_P$). 
However, what makes $\\ell_P$ more fundamental than $2\\ell_P$, how can one conclude that a particular numeric value has any significance without a good theory on that level? May 22, 2012 at 14:23\n• Just based on general considerations, the transition doesn't even have to be sharp I guess (it will be sharp only if some symmetry is spontaneously broken and I'm ignorant of whether or not that must be the case for a renormalizable low-energy theory to appear). So I agree with you, nothing is so special about the Planck length before considering a specific consistent QG. May 22, 2012 at 15:48" ]
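Spelling out the solution of the linear system from the dimensional-analysis answer above (the algebra is elementary but omitted in the thread):

```latex
% Solving 3a+b+2d = 1, -a+d = 0, -2a-b-d = 0:
% the second equation gives d = a, the third then gives b = -3a,
% and substituting into the first: 3a - 3a + 2a = 2a = 1.
a = d = \tfrac{1}{2}, \qquad b = -\tfrac{3}{2},
\qquad\Longrightarrow\qquad
\ell = G^{1/2}\, c^{-3/2}\, \hbar^{1/2} = \sqrt{\frac{\hbar G}{c^{3}}}.
```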
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9329562,"math_prob":0.97870547,"size":4143,"snap":"2023-40-2023-50","text_gpt3_token_len":904,"char_repetition_ratio":0.13916405,"word_repetition_ratio":0.015384615,"special_character_ratio":0.21192373,"punctuation_ratio":0.0862069,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9947023,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-29T18:12:09Z\",\"WARC-Record-ID\":\"<urn:uuid:2843c877-b775-401d-950c-3d08e8f79162>\",\"Content-Length\":\"206447\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:56ea0530-c9ae-48d7-9dc2-9e3546048ea3>\",\"WARC-Concurrent-To\":\"<urn:uuid:d14ac12b-f10f-4063-abe7-41a6a2c3091e>\",\"WARC-IP-Address\":\"104.18.11.86\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/28720/how-to-get-planck-length\",\"WARC-Payload-Digest\":\"sha1:EJZDRICPWYWWUYOBY43FXM33XEYBANOM\",\"WARC-Block-Digest\":\"sha1:GBFEXP6H2MKHDJBV5Q4SQUXAE2DGHMG5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510520.98_warc_CC-MAIN-20230929154432-20230929184432-00252.warc.gz\"}"}
https://mathoverflow.net/questions/107712/motivic-t-structure-and-realisations
[ "# motivic t-structure and realisations\n\nLet $k$ be a field and $DM_k$ denote the triangulated category of geometric motives with $\\mathbb{Q}$ coeffients over $k$. Recall that there exists a motive functor $M: Var_k\\rightarrow DM_k$, which yields an fully faithful embedding of tensor $\\mathbb{Q}$-categories $CHM_k\\rightarrow DM_k$, where $CHM_k$ is athe category of Chow motives with $\\mathbb{Q}$-coefficients over $k$.\n\nFor $\\ell$ prime to the characteristic of $k$ one has the $\\ell$-adic realisation functor $r_{\\ell}:DM_k\\rightarrow D^b(Vec_{\\mathbb{Q}\\ell})$. As I understand it, if $X$ is smooth projective variety over $k$ then the cohomology of $r_{\\ell}(M(X))$ is just the $\\ell$-adic cohomology of $X$.\n\nNow the conjectural motivic $t$-structure on $DM_k$ has the property that the realisation functor $r_{\\ell}$ is $t$-exact.Thus those objects in the heart of this $t$-structure (the conjectural category of mixed motives $MM_k$) have realisation with trivial cohomology outside of degree zero.\n\nHere is what is confusing me: for a general smooth projective variety its $\\ell$-adic cohomology is not always concentrated in degree zero. Thus for such $X$, the motive $M(X)$ is not in the category of mixed motives. This can't be correct as the category of mixed motives should contain the category of pure motives. Where am I going wrong?\n\n• In the second paragraph, I think you should note that the complex in $D^b(Vec_{Q_\\ell})$ is one with chain groups equal to the $\\ell$-adic cohomology and chain maps zero, so you can just take the components rather than actually taking the cohomology. Sep 20, 2012 at 20:36\n\nThe image of the inclusion $CHM_k\\hookrightarrow DM_k$ is indeed not contained in the heart $M_k$ of the motivic t-structure.\n$CHM_k$ does map to the heart, but via a different functor, namely, the projection $CHM_k\\to NM_k$ to numerical motives, followed by the inclusion $NM_k\\hookrightarrow M_k$ ($NM_k$ should be the subcategory of semi-simple objects in $M_k$). This functor $CHM_k\\to M_k$ should be equivalent to the composition of the inclusion $CHM_k\\hookrightarrow DM_k$ followed by the functor $DM_k \\to M_k$ which sends $M$ to $\\bigoplus_{i\\in \\mathbb{Z}} H^i(M)$." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8621795,"math_prob":0.99938196,"size":1308,"snap":"2022-05-2022-21","text_gpt3_token_len":354,"char_repetition_ratio":0.12960123,"word_repetition_ratio":0.0,"special_character_ratio":0.2522936,"punctuation_ratio":0.0720339,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99995303,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-22T02:00:06Z\",\"WARC-Record-ID\":\"<urn:uuid:79ea669d-0b93-407f-bebf-28919df9e6fb>\",\"Content-Length\":\"102237\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:abc56add-4d62-41d9-a013-45e4feb6d4ce>\",\"WARC-Concurrent-To\":\"<urn:uuid:feb3aa77-3287-45ee-8867-08b90841af0f>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/107712/motivic-t-structure-and-realisations\",\"WARC-Payload-Digest\":\"sha1:N5TT7HQMODPT7MQULQSD3VFS4ZQFV5VT\",\"WARC-Block-Digest\":\"sha1:YETSFG263HVDTEDNNG3WA34WJZAZWW67\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662543264.49_warc_CC-MAIN-20220522001016-20220522031016-00362.warc.gz\"}"}
https://scholar.nctu.edu.tw/en/publications/on-a-nonlinear-matrix-equation-arising-in-nano-research
[ "# On a nonlinear matrix equation arising in nano research\n\nChun Hua Guo*, Yueh Cheng Kuo, Wen-Wei Lin\n\n*Corresponding author for this work\n\nResearch output: Contribution to journalArticlepeer-review\n\n15 Scopus citations\n\n## Abstract\n\nThe matrix equation X + A τX -1A = Q arises in Green's function calculations in nano research, where A is a real square matrix and Q is a real symmetric matrix dependent on a parameter and is usually indefinite. In practice one is mainly interested in those values of the parameter for which the matrix equation has no stabilizing solutions. The solution of interest in this case is a special weakly stabilizing complex symmetric solution X*, which is the limit of the unique stabilizing solution X? of the perturbed equation X + A τX -1A = Q + i?I, as ? ? 0+. It has been shown that a doubling algorithm can be used to compute X? efficiently even for very small values of ?, thus providing good approximations to X *. It has been observed by nano scientists that a modified fixed-point method can sometimes be quite useful, particularly for computing X? for many different values of the parameter. We provide a rigorous analysis of this modified fixed-point method and its variant and of their generalizations. We also show that the imaginary part XI of the matrix X* is positive semidefinite and we determine the rank of XI in terms of the number of unimodular eigenvalues of the quadratic pencil λ2A τ - Q + A. Finally we present a new structure-preserving algorithm that is applied directly on the equation X + A τX -1A = Q. In doing so, we work with real arithmetic most of the time.\n\nOriginal language English 235-262 28 SIAM Journal on Matrix Analysis and Applications 33 1 https://doi.org/10.1137/100814706 Published - 4 Jun 2012\n\n## Keywords\n\n• Complex symmetric solution\n• Fixed-point iteration\n• Green's function\n• Nonlinear matrix equation\n• Structure-preserving algorithm\n• Weakly stabilizing solution" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88565665,"math_prob":0.93772084,"size":1769,"snap":"2021-04-2021-17","text_gpt3_token_len":393,"char_repetition_ratio":0.12011331,"word_repetition_ratio":0.036666665,"special_character_ratio":0.22329,"punctuation_ratio":0.070063695,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9949391,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-17T23:55:54Z\",\"WARC-Record-ID\":\"<urn:uuid:fe1df37e-e5ac-4a50-9ab4-5083cb21ef48>\",\"Content-Length\":\"54573\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:94f7912e-fa0d-497d-a85e-63d77e4de8b5>\",\"WARC-Concurrent-To\":\"<urn:uuid:67a960b2-c99c-4aca-b08f-eac7f71bf5ac>\",\"WARC-IP-Address\":\"13.228.199.194\",\"WARC-Target-URI\":\"https://scholar.nctu.edu.tw/en/publications/on-a-nonlinear-matrix-equation-arising-in-nano-research\",\"WARC-Payload-Digest\":\"sha1:G5MRSMJ4GTMVZBSX7DD2UMMHWY2TVKGZ\",\"WARC-Block-Digest\":\"sha1:Z6CM765YIRN7W3BRG3Q3SMGXEJTOVX2V\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038464065.57_warc_CC-MAIN-20210417222733-20210418012733-00102.warc.gz\"}"}
https://www.ije.ir/article_71694.html
[ "# Analysis of Natural Frequencies for a Laminated Composite Plate with Piezoelectric Patches using the First and Second Eigenvalue Derivatives\n\nAuthors\n\n1 Mechanical Engineering, Shahid Bahonar University of Kerman\n\n2 Mechanical Engineering, Amirkabir University of Technology\n\nAbstract\n\nIn this paper, the first and second order approximations of Taylor expansion are used for calculating the change of each natural frequency by modifying an arbitrary parameter of a system with a known amount and based on this approximation, the inverse eigenvalue problem is transformed to a solvable algebraic equation. The finite element formulation, based on the classical laminated plate theory (CLPT) is presented for laminated composite plates with piezoelectric patches. Using the proposed FE model, sensitivity analysis is carried out, to find the effects of the changes made in the design parameters such as the piezoelectric patch thickness and the fiber angles in each layer on the natural frequencies of the structure. The inverse eigenvalue problem is solved in order to find the thickness of piezoelectric patches and stacking sequence for relocating the natural frequencies.\n\nKeywords" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.757377,"math_prob":0.8363296,"size":2107,"snap":"2022-27-2022-33","text_gpt3_token_len":523,"char_repetition_ratio":0.10984308,"word_repetition_ratio":0.2768166,"special_character_ratio":0.20692928,"punctuation_ratio":0.1957672,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97607934,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-28T19:07:36Z\",\"WARC-Record-ID\":\"<urn:uuid:97131e8a-00ae-4400-baa5-d47f44fb5f96>\",\"Content-Length\":\"46174\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6e95af21-7036-44ab-b13c-fed08181a2ac>\",\"WARC-Concurrent-To\":\"<urn:uuid:8df2b577-3594-46ac-a442-270aa7f47fc4>\",\"WARC-IP-Address\":\"185.143.234.95\",\"WARC-Target-URI\":\"https://www.ije.ir/article_71694.html\",\"WARC-Payload-Digest\":\"sha1:CH2HCE3OAJEEWZEMYWC4T7XQ63XGG2MT\",\"WARC-Block-Digest\":\"sha1:FTDVGZ4WC5AIPHR6TZLB6C7MQN4NNV6H\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103573995.30_warc_CC-MAIN-20220628173131-20220628203131-00000.warc.gz\"}"}
https://tutorialspoint.com/converting_decimals_to_fractions/index.htm
[ "", null, "# Converting Decimals to Fractions\n\nThis tutorial provides comprehensive coverage of converting decimals to fractions based on Common Core (CCSS) and State Standards and its prerequisites. Students can navigate learning paths based on their level of readiness. Institutional users may customize the scope and sequence to meet curricular needs. This simple tutorial uses appropriate examples to help you understand converting decimals to fractions in a general and quick way.\n\n# Audience\n\nThis tutorial has been prepared for beginners to help them understand the basics of converting decimals to fractions. After completing this tutorial, you will find yourself at a moderate level of expertise in converting decimals to fractions, from where you can advance further.\n\n# Prerequisites\n\nBefore proceeding with this tutorial, you need a basic knowledge of elementary math concepts such as number sense, addition, subtraction, multiplication, division, whole numbers, fractions, types of fractions, decimals, comparing and ordering whole numbers and so on." ]
[ null, "https://tutorialspoint.com/converting_decimals_to_fractions/images/converting_decimals_to_fractions.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8461318,"math_prob":0.82897145,"size":1802,"snap":"2023-14-2023-23","text_gpt3_token_len":351,"char_repetition_ratio":0.18409343,"word_repetition_ratio":0.18315019,"special_character_ratio":0.17924528,"punctuation_ratio":0.08802817,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9763733,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-04-02T08:18:41Z\",\"WARC-Record-ID\":\"<urn:uuid:26d42701-a2a7-4cca-9e19-ac27dc6533a4>\",\"Content-Length\":\"31210\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:01ecd1c4-4237-460f-baad-fda4d1c350b0>\",\"WARC-Concurrent-To\":\"<urn:uuid:24bd6240-dc92-473c-94d7-bac00e83b168>\",\"WARC-IP-Address\":\"135.181.223.254\",\"WARC-Target-URI\":\"https://tutorialspoint.com/converting_decimals_to_fractions/index.htm\",\"WARC-Payload-Digest\":\"sha1:J2AG7633OWSNIYJ7CTLSQ6BCTZSQYR5G\",\"WARC-Block-Digest\":\"sha1:CG2GH2DZ2M2OXUEWO4KT4ONOFTDG7EQR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296950422.77_warc_CC-MAIN-20230402074255-20230402104255-00577.warc.gz\"}"}
https://neuralet.com/docs/tutorials/tf-object-detection-api-model-quantization/
[ "# Quantization of TensorFlow Object Detection API Models\n\nIn this tutorial, we will examine various TensorFlow tools for quantizing object detection models. We start off by giving a brief overview of quantization in deep neural networks, followed by explaining different approaches to quantization and discussing the advantages and disadvantages of using each approach. We will then introduce TensorFlow tools to train a custom object detection model and convert it into a lightweight, quantized model with TFLiteConverter and TOCOConverter. Finally, as a use case example, we will examine the performance of different quantization approaches on the Coral Edge TPU.\n\n## Quantization in Neural Networks: The Concept\n\nQuantization, in general, refers to the process of reducing the number of bits that represent a number. Deep neural networks usually have tens or hundreds of millions of weights, represented by high-precision numerical values. Working with these numbers requires significant computational power, bandwidth, and memory. However, model quantization optimizes deep learning models by representing model parameters with low-precision data types, such as int8 and float16, without incurring a significant accuracy loss. Storing model parameters with low-precision data types not only saves bandwidth and storage but also results in faster calculations.", null, "Image source\n\n### Quantization Brings Efficiency to Neural Networks\n\nQuantization improves the overall efficiency in several ways. It saves the maximum possible memory space by converting parameters to 8-bits or 16-bits instead of the standard 32-bit representation format. For instance, quantizing the Alexnet model shrinks the model size by 75%, from 200MB to only 50MB.\n\nQuantized neural networks consume less memory bandwidth. Fetching numbers in the 8-bit format from RAM requires only 25% of the bandwidth of the standard 32-bit format. Moreover, quantizing neural networks results in 2x to 4x speedup during inference.\n\nFaster arithmetics could be another benefit of quantizing neural networks in some cases, depending on different factors such as the hardware architecture. As an example, 8-bit addition is almost 2x faster than 64-bit addition on an Intel Core i7 4770 processor.\n\nThese benefits make quantization valuable, especially for edge devices that have modest compute and memory but are required to perform AI tasks in real-time.\n\n### Quantizing Neural Networks is a Win-win\n\nBy reducing the number of bits that represent a parameter, some information is lost. However, this loss of information incurs little to no degradation in the accuracy of neural networks for two main reasons:\n\n1. This reduction in the number of bits acts like adding some noise to the network. Since a well-trained neural network is noise-robust, i.e., it can make valid predictions in the presence of unwanted noises, the added noise will not degrade the model accuracy significantly.\n\n2. There are millions of weight and activation parameters in a neural network that are distributed in a relatively small range of values. Since these numbers are densely spread, quantizing them does not result in losing too much precision.\n\nTo give you a better understanding of quantization, we next provide a brief explanation of how numbers are represented in a computer.\n\n### Computer Representation of Numbers\n\nComputers have limited memory to store numbers. 
There are only discrete possibilities to represent the continuous spectrum of real numbers in the representation system of a computer. The limited memory only allows a fixed number of values to be stored and represented in a computer, which can be determined based on the number of bits and bytes the computer representation system works with. Therefore, representing real numbers in a computer involves an approximation and a potential loss of significant digits.\n\nThere are two main approaches to store and represent real numbers in modern computers:\n\n1. Floating-point representation: The floating-point representation of numbers consists of a mantissa and an exponent. In this system, a number is represented in the form mantissa * base^exponent, where base is a fixed number. In this representation system, the position of the decimal point is specified by the exponent value. Thus, this system can represent both very small values and very large numbers.\n\n2. Fixed-point representation: In this representation format, the position of the decimal point is fixed. The numbers share the exponent, and they vary in the mantissa portion only.", null, "Image source\n\nThe amount of memory required for the fixed-point format is much less than the floating-point format since the exponent is shared between different numbers in the former. However, the floating-point representation system can represent a wider range of numbers compared to the fixed-point format.\n\n#### The Precision of Computer Numbers\n\nThe precision of a representation system depends on the number of values it can represent precisely, which is 2^b, where b is the number of bits. For example, an 8-bit binary system can represent 2^8 = 256 numbers precisely. In this system, only 256 values are represented precisely. The rest of the numbers are rounded to the nearest of these 256 values. Thus, the more bits we can use, the more precise our numbers will be.\n\nIt is worth mentioning that the 8-bit representation system in the previous example is not limited to representing the integer values from 0 to 255. This system can represent 256 pieces of information in any arbitrary range of numbers.\n\n#### How to Quantize Numbers in a Representation System\n\nTo determine the representable numbers in a representation system with `b` bits, we subtract the minimum value from the maximum one to calculate `r`, the range of values. Then, we divide `r` by 2^b to find `u`, the smallest unit in this format. The representable numbers in this format are in the form of `k * u`, where `k = 0, 1, ..., 2^b - 1` (0 to 255 in the 8-bit case). We can think of this as a mapping from integer values between 0 and 255 to values in the range of `r`, where `k -> k * u`. In this representation system, any value between `k * u` and `(k + 1) * u` cannot be represented precisely and is approximated by the closest quantized value. However, when quantizing neural networks, it is critical to represent the 0 value precisely (without any approximation error), as explained in this paper.", null, "Image source\n\nIn the next section, we will explain how we can calculate the range of parameters in a neural network in order to quantize them.\n\n### How to Quantize Neural Networks\n\nQuantization changes the current representation format of numbers to another, lower-precision format by reducing the number of representing bits. In machine learning, we use the floating-point format to represent numbers. 
\n\nIn the next section, we will explain how we can calculate the range of parameters in a neural network in order to quantize them.\n\n### How to Quantize Neural Networks\n\nQuantization means changing the current representation format of numbers to a lower-precision format by reducing the number of bits used. In machine learning, we use the floating-point format to represent numbers. By applying quantization, we can change the representation to the fixed-point format and down-sample these values. In most cases, we convert the 32-bit floating-point format to the 8-bit fixed-point format, which gives almost a 4x reduction in memory utilization.\n\nThere are at least two sets of numerical parameters in each neural network: the set of weights, which are constant numbers (at inference time) learned by the network during the training phase, and the set of activations, which are the output values of the activation functions in each layer. By quantizing neural networks, we mean quantizing these two sets of parameters.\n\nAs we saw in the previous section, to quantize each set of parameters, we need to know the range of values each set holds and then quantize each number within that range to a representable value in our representation system. While finding the range of weights is straightforward, calculating the range of activations can be challenging. As we will see in the following sections, each quantization approach deals with this challenge in its own way.\n\nMost quantization techniques are applied to inference but not training. The reason is that in each backpropagation step of the training phase, parameters are updated with changes that are too small to be tracked by a low-precision data type. Therefore, we train a neural network with high-precision numbers and then quantize the weight values.\n\n## Types of Neural Network Quantization\n\nThere are two common approaches to neural network quantization: 1) post-training quantization, and 2) quantization-aware training. We will next explain each method in more detail and discuss the advantages and disadvantages of each technique.\n\n### Post-training Quantization\n\nThe post-training quantization approach is the most commonly used form of quantization. In this approach, quantization takes place only after the model has finished training.\n\nTo perform post-training quantization, we first need to know the range of each parameter, i.e., the range of weights and activations. Finding the range of weights is straightforward, since weights remain constant after training has finished. However, the range of activations is challenging to determine, because activation values vary based on the input tensor. Thus, we need to estimate the range of activations. To do so, we provide a dataset that represents the inference data to the quantization engine (the module that performs quantization). The quantization engine calculates all the activations for each data point in the representative dataset and estimates the range of activations. After calculating the ranges of both sets of parameters, the quantization engine converts all the values within those ranges to lower-bit numbers.\n\nThe main advantage of this technique is that it does not require any model training or fine-tuning. You can apply 8-bit quantization to any existing pre-trained floating-point model without using many resources. However, this approach comes at the cost of losing some accuracy, because the network was trained without accounting for the fact that its parameters would later be quantized to 8-bit values, and quantization adds some noise to the model at inference time.\n\n### Quantization-Aware Training\n\nAs we explained in the previous section, in the post-training quantization approach, training is performed in floating-point precision regardless of the fact that the parameters will later be quantized to lower-bit values. This difference in precision, which originates from quantizing the weights and activations, introduces an error into the network that propagates through it via multiplications and additions.\n\nIn quantization-aware training, however, we artificially introduce this quantization error into the model during training to make the model robust to it. Note that, similar to post-training quantization, backpropagation in quantization-aware training is still performed on floating-point weights to capture the small parameter updates.\n\nIn this method, extra nodes that simulate the quantization effect are added to the graph. These nodes quantize the weights to lower precision and convert them back to floating point in each forward pass, and they are deactivated during backpropagation. This approach adds quantization noise to the model during training while performing backpropagation in floating-point format. Since these nodes quantize the weights and activations during training, the ranges of weights and activations are calculated automatically during training. Therefore, there is no need to provide a representative dataset to estimate the range of parameters.", null, "Image source
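\n\nThe following toy sketch shows what such a simulated-quantization (\"fake quant\") node computes in the forward pass. It is a conceptual illustration only; the actual graph rewriting TensorFlow performs is more involved, and the function name `fake_quant` is ours:\n\n``````import numpy as np\n\ndef fake_quant(w, b=8):\n    # Forward pass of a simulated-quantization node: snap the values onto\n    # a grid of 2^b - 1 steps, then immediately dequantize them.\n    lo, hi = float(w.min()), float(w.max())\n    u = (hi - lo) / (2**b - 1)\n    k = np.round((w - lo) / u)\n    return lo + k * u  # floating-point values restricted to a quantized grid\n\n# During training, the forward pass sees the quantized weights, while the\n# gradient update is applied to the full-precision copy (the node acts as\n# the identity during backpropagation).\nw_fp32 = np.random.randn(4, 4).astype(np.float32)\nw_sim = fake_quant(w_fp32)\nprint(\"simulated quantization noise:\", np.abs(w_fp32 - w_sim).max())\n``````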
\n\nQuantization-aware training yields a smaller accuracy drop than post-training quantization and allows us to recover most of the accuracy loss introduced by quantization. Moreover, it does not require a representative dataset to estimate the range of activations. The main disadvantage of quantization-aware training is that it requires retraining the model.\n\nHere you can see benchmarks of various models with and without quantization.\n\n## Model Quantization with TensorFlow\n\nSo far, we have described the purpose behind quantization and reviewed different quantization approaches. In this section, we will dive deep into the TensorFlow Object Detection API and explain how to perform post-training quantization and quantization-aware training.\n\n### TensorFlow Object Detection API\n\nThe TensorFlow Object Detection API is a framework for training object detection models that offers a lot of flexibility. You can quickly train an object detector in three steps:\n\nSTEP 1: Convert your training dataset to the `tfrecord` format. STEP 2: Download a pre-trained model from the TensorFlow model zoo. STEP 3: Customize a config file according to your model architecture.", null, "Image source\n\nThis tool provides developers with a large number of pre-trained models that are trained on different datasets, such as COCO. Therefore, you do not need to start from scratch to train a new model; you can simply retrain the pre-trained models for your specific needs.\n\nThe Object Detection API offers various object detection model architectures, such as SSD and Faster R-CNN. We trained an SSD Lite MobileNet V2 model using the TensorFlow Object Detection API on the Oxford Town Centre dataset to build a pedestrian detection model for the Smart Social Distancing application. We picked the SSD architecture to be able to run this application in real time on different edge devices, such as the NVIDIA Jetson Nano and the Coral Edge TPU. We used the `ssdlite_mobilenet_v2_coco.config` sample config file for this purpose. You can find the available config files here.\n\nNote that TensorFlow 1.12 or higher is required for this API, and the API does not support TensorFlow 2.\n\n#### Installing TensorFlow Object Detection API with Docker\n\nInstalling the Object Detection API by hand can be time-consuming. 
Instead, you can use Neuralet's Docker container to get the TensorFlow Object Detection API installed with minimal effort.\n\nThis Docker container will install the TensorFlow Object Detection API and its dependencies in the `/models/research/object_detection` directory. You can build the Docker container from source or pull it from Docker Hub. See the instructions below to run the container.\n\n1- Run with CPU support:\n\n• Build the container from source:\n``````# 1- Clone the repository\ngit clone https://github.com/neuralet/neuralet\ncd training/tf_object_detection_api\n\n# 2- Build the container\ndocker build -f tools-tf-object-detection-api-training.Dockerfile -t \"neuralet/tools-tf-object-detection-api-training\" .\n\n# 3- Run the container\ndocker run -it -v [PATH TO EXPERIMENT DIRECTORY]:/work neuralet/tools-tf-object-detection-api-training\n``````\n• Pull the container from Docker Hub:\n``````docker run -it -v [PATH TO EXPERIMENT DIRECTORY]:/work neuralet/tools-tf-object-detection-api-training\n``````\n\n2- Run with GPU support:\n\nYou should have the NVIDIA Docker Toolkit installed to be able to run the Docker container with GPU support.\n\n• Build the container from source:\n``````# 1- Clone the repository\ngit clone https://github.com/neuralet/neuralet\ncd training/tf_object_detection_api\n\n# 2- Build the container\ndocker build -f tools-tf-object-detection-api-training.Dockerfile -t \"neuralet/tools-tf-object-detection-api-training\" .\n\n# 3- Run the container\ndocker run -it --gpus all -v [PATH TO EXPERIMENT DIRECTORY]:/work neuralet/tools-tf-object-detection-api-training\n``````\n• Pull the container from Docker Hub:\n``````docker run -it --gpus all -v [PATH TO EXPERIMENT DIRECTORY]:/work neuralet/tools-tf-object-detection-api-training\n``````\n\n#### Exporting the Model to a Frozen Graph\n\nAfter training the model, you can find the trained checkpoints, i.e., the `.ckpt` files, in the model directory. To perform quantization or inference, you need to export these trained checkpoints to a `protobuf` file by freezing the computational graph. In general, you can use the `export_inference_graph.py` script to do so. However, if you are using an SSD model that you want to convert to a `tflite` file later, you should run the `export_tflite_ssd_graph.py` script instead, as follows:\n\n``````python3 object_detection/export_tflite_ssd_graph.py \\\n--pipeline_config_path=\\$CONFIG_FILE \\\n--trained_checkpoint_prefix=\\$CHECKPOINT_PATH \\\n--output_directory=\\$OUTPUT_DIR\n``````\n\nRunning this script will create a `.pb` file in the `\\$OUTPUT_DIR` directory. We will use this file in the next steps to perform quantization.\n\n### Post-training Quantization with the TFLite Converter\n\nAs described earlier, post-training quantization allows you to convert a model trained with floating-point numbers into a quantized model. You can apply post-training quantization using the TFLite Converter to convert a TensorFlow model into a TensorFlow Lite model that is suitable for on-device inference.\n\nThis API provides three options for quantizing a 32-bit floating-point model to lower precisions:\n\n1. quantize only the weights to 8-bit precision\n2. quantize both weights and activations to 8-bit precision\n3. quantize only the weights to 16-bit floating-point precision\n\nWe will investigate the first two approaches in this tutorial. Quantizing to 16-bit floating-point precision is beyond the scope of this article. Read this guide for more detail.
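\n\nFor completeness, here is what the third option looks like with the TFLite Converter in recent TensorFlow versions. This is a minimal sketch that assumes a `converter` created exactly as in the weight-quantization script below; we have not verified this path for object detection models:\n\n``````import tensorflow as tf\n\n# Assumes `converter` was created with TFLiteConverter.from_frozen_graph,\n# as in the weight-quantization script below.\nconverter.optimizations = [tf.lite.Optimize.DEFAULT]\nconverter.target_spec.supported_types = [tf.float16]\ntflite_fp16_model = converter.convert()\n``````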
\n\n#### Weight Quantization of a Retrained SSD MobileNet V2\n\nAfter exporting the model to a frozen graph, you can quantize the model weights by running the following Python script:\n\n``````import tensorflow as tf\nfrozen_graph_file = # path to frozen graph (.pb file)\ninput_arrays = [\"normalized_input_image_tensor\"]\noutput_arrays = ['TFLite_Detection_PostProcess',\n 'TFLite_Detection_PostProcess:1',\n 'TFLite_Detection_PostProcess:2',\n 'TFLite_Detection_PostProcess:3']\ninput_shapes = {\"normalized_input_image_tensor\" : [1, 300, 300, 3]}\n\nconverter = tf.lite.TFLiteConverter.from_frozen_graph(frozen_graph_file,\n input_arrays=input_arrays,\n output_arrays=output_arrays,\n input_shapes=input_shapes)\nconverter.allow_custom_ops = True\nconverter.optimizations = [tf.lite.Optimize.DEFAULT]\ntflite_quant_model = converter.convert()\nwith open(tflite_model_quant_file, \"wb\") as tflite_file:  # tflite_model_quant_file: output .tflite path\n tflite_file.write(tflite_quant_model)\n``````\n\nYou only need to set the path to the frozen graph file and adjust the input shape. You can leave the rest of the code as it is.\n\nIn line `2`, you should specify the exported frozen graph (`.pb`) file.\n\nIn lines `3-8`, the model's input/output names and the input shape are defined.\n\nIn lines `10-13`, a TFLite Converter is created by specifying the model's frozen graph file, the input/output names, and the input shape.\n\nLine `14` is critical for quantizing custom operations in object detection models. Some operations, such as non-maximum suppression, are not supported by TensorFlow Lite and are registered as custom operations in the TensorFlow Object Detection API. By setting the `allow_custom_ops` flag in line `14`, you tell the TFLite Converter to accept and quantize those registered custom operations; without it, the converter raises an error when it encounters them. Read more on custom operations and how to register them here.\n\nIn line `15`, a list of model optimizations that the converter should perform is provided.\n\nFinally, in lines `16-18`, the model is converted to a quantized model and saved to a `.tflite` file.\n\nNote that in this method, in addition to weight quantization, TensorFlow Lite quantizes some of the activations dynamically at inference time to improve model latency.\n\n#### Full Integer Quantization of a Retrained SSD MobileNet V2\n\nWe now explain how to quantize the full network, including weights, activations, inputs, and outputs, to 8-bit numbers.\n\nRun the following script to perform full 8-bit quantization:\n\n``````import tensorflow as tf\nfrozen_graph_file = # path to frozen graph\ninput_arrays = [\"normalized_input_image_tensor\"]\noutput_arrays = ['TFLite_Detection_PostProcess',\n 'TFLite_Detection_PostProcess:1',\n 'TFLite_Detection_PostProcess:2',\n 'TFLite_Detection_PostProcess:3']\ninput_shapes = {\"normalized_input_image_tensor\" : [1, 300, 300, 3]}\nconverter = tf.lite.TFLiteConverter.from_frozen_graph(frozen_graph_file, input_arrays,\n output_arrays, input_shapes)\nconverter.allow_custom_ops = True\nconverter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]\nconverter.representative_dataset = _representative_dataset_gen\n\ntflite_model_quant = converter.convert()\nwith open(tflite_model_quant_file, \"wb\") as tflite_file:  # tflite_model_quant_file: output .tflite path\n tflite_file.write(tflite_model_quant)\n``````\n\nThis script is similar to the last one, except that a `representative_dataset` generator is provided to the converter. 
As we mentioned earlier, the representative dataset allows the TFLite Converter to estimate the range of the activations. An example of this generator is as follows:\n\n``````import cv2\nimport numpy as np\nfrom imutils import paths\n\ndef _representative_dataset_gen():\n    images_path = # path to representative dataset\n    if images_path is None:\n        raise Exception(\n            \"Image directory is None, full integer quantization requires images directory!\"\n        )\n    imagePaths = list(paths.list_images(images_path))\n    for p in imagePaths:\n        image = cv2.imread(p)  # load the image (BGR order)\n        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n        image = cv2.resize(image, (300, 300))\n        image = np.expand_dims(image, axis=0)  # shape: (1, 300, 300, 3)\n        yield [image.astype(\"float32\")]\n``````\n\nAs you can see in this example, you should specify the path to sample images that represent the input data used at inference time. Based on our experience, a dataset of ~100 images is enough for the TFLite Converter to reach an accurate estimate of the range of the activations.\n\nImportant Notes\n\n1. Operations that are not supported by the TFLite Converter remain in floating point after post-training quantization. If you want the converter to throw an error when an operation cannot be quantized, add the following line of code to your script: `converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]` However, adding this line will abort the quantization of object detection models, since some of their operations cannot be quantized using the current version of the TFLite Converter.\n\n2. If you want the converter to quantize the inputs and outputs of the model, add the following lines to the code: `converter.inference_input_type = tf.uint8` and `converter.inference_output_type = tf.uint8` Note that if you are quantizing your object detection model using the current version of the TensorFlow Lite Converter, adding these two lines will make the quantization fail due to some compatibility issues.
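\n\nOnce you have a `.tflite` file, a quick sanity check is to run a single frame through the TFLite interpreter. The sketch below is generic: the model path is a placeholder, and the output-tensor order shown for SSD post-processing (boxes, classes, scores, count) should be verified against `output_details` for your own model:\n\n``````import numpy as np\nimport tensorflow as tf\n\ninterpreter = tf.lite.Interpreter(model_path=\"detect.tflite\")  # placeholder path\ninterpreter.allocate_tensors()\ninput_details = interpreter.get_input_details()\noutput_details = interpreter.get_output_details()\n\n# Feed one dummy 300x300 RGB frame with the dtype the model expects.\nframe = np.zeros((1, 300, 300, 3), dtype=input_details[0][\"dtype\"])\ninterpreter.set_tensor(input_details[0][\"index\"], frame)\ninterpreter.invoke()\n\nboxes = interpreter.get_tensor(output_details[0][\"index\"])\nscores = interpreter.get_tensor(output_details[2][\"index\"])\nprint(boxes.shape, scores.shape)\n``````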
\n\n### Quantization-aware Training with TensorFlow Object Detection API\n\nYou can use the TensorFlow Model Optimization Tool to perform quantization-aware training for Keras-based models. You can use this tool in either of two ways: 1- specify some layers to be quantization-aware, or 2- set the whole model to be quantization-aware. You can install this tool by following the installation guide here.\n\nHowever, if you are using the TensorFlow Object Detection API to train your model, you cannot use the TensorFlow Model Optimization Tool for quantization-aware training. This is because the current version of the Object Detection API requires TensorFlow 1.x, which is not compatible with the Model Optimization Tool. To apply quantization-aware training to object detection models trained with the Object Detection API, you need to make some config changes instead.\n\nIf you take a look at the sample config files of the Object Detection API, you will notice some files that contain the `quantized` keyword in their name, such as the `ssd_mobilenet_v2_quantized_300x300_coco.config` file here. These files are written like the ordinary config files, except for the last few lines, the `graph_rewriter` config, which looks like this:\n\n``````graph_rewriter {\n  quantization {\n    delay: 48000\n    weight_bits: 8\n    activation_bits: 8\n  }\n}\n``````\n\nBy adding these lines to your config file, you tell TensorFlow that you want to perform quantization-aware training. The `delay` parameter specifies the number of training iterations after which the fake quantization nodes are added to the computational graph. It is recommended not to add the fake nodes at the very beginning of training, since doing so may cause numerical instabilities and poor training results.\n\nThe other two parameters specify the number of bits that weights and activations will be quantized to. Only 8-bit quantization is supported by TensorFlow at this time.\n\nYou can start quantization-aware training from a quantized or non-quantized pre-trained model checkpoint. See the object detection model zoo to explore object detection models and their checkpoints. For the pedestrian detection task, we used the `ssd_mobilenet_v2_quantized_300x300_coco.config` file and fine-tuned the model using the `ssd_mobilenet_v2_coco` checkpoints from the model zoo.\n\nAfter training has finished, we can freeze the model and export the frozen graph by running the `export_tflite_ssd_graph.py` script.\n\n#### TOCO\n\nSo far, we have trained a floating-point model by simulating the quantization effect during training, but we have not quantized the model yet. TensorFlow offers another tool, called TOCO, that quantizes a model and exports it to a `tflite` file.\n\nAccording to the TensorFlow documentation, to quantize your object detection model with TOCO, you need to build TensorFlow from source. This can be a daunting procedure, since it is time-consuming and may lead to environment inconsistencies that fail the build after a long process. To overcome this issue, we created an easy-to-use Docker container. This container takes the frozen graph file path and some other specifications as parameters and generates the `tflite` model.\n\n##### Model Quantization Using the TOCO Docker Container\n\nTo quantize your model using Neuralet's TOCO Docker container, you can either build the container from source or pull it from Docker Hub.\n\n• Build the container from source:\n``````# 1- Clone the repository\ngit clone https://github.com/neuralet/neuralet\ncd training/tf_object_detection_api\n\n# 2- Build the container\ndocker build -f tools-toco.Dockerfile -t \"neuralet/tools-toco\" .\n\n# 3- Run the container\ndocker run -v [PATH_TO_FROZEN_GRAPH_DIRECTORY]:/model_dir neuralet/tools-toco --graph_def_file=[frozen graph file]\n``````\n• Pull the container from Docker Hub:\n``````docker run -v [PATH_TO_FROZEN_GRAPH_DIRECTORY]:/model_dir neuralet/tools-toco --graph_def_file=[frozen graph file]\n``````\n\nAfter running the container, you can find the quantized object detection model, named `detect.tflite`, in the `FROZEN_GRAPH_DIRECTORY` folder.\n\nYou can also customize other parameters when running the Docker container. For example, you can override the default input shape and inference type by passing `--input_shapes=[DEFAULT:1,300,300,3]` and `--inference_type=[DEFAULT:QUANTIZED_UINT8]` values.
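\n\nUnder the hood, this kind of conversion corresponds to a `tflite_convert` (TOCO) invocation along the following lines. We show it only for orientation; the exact flag set and the mean/std values are illustrative and should be checked against the TensorFlow 1.x documentation for your model:\n\n``````tflite_convert \\\n--graph_def_file=tflite_graph.pb \\\n--output_file=detect.tflite \\\n--input_arrays=normalized_input_image_tensor \\\n--output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \\\n--input_shapes=1,300,300,3 \\\n--inference_type=QUANTIZED_UINT8 \\\n--mean_values=128 \\\n--std_dev_values=128 \\\n--allow_custom_ops\n``````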
\n\n## Quantization Example: Coral Edge TPU\n\nIn this section, we deploy an object detection model on a Coral Edge TPU device to illustrate one application of model quantization.\n\nThe Edge TPU only supports 8-bit weights and activations; thus, we first need to quantize our model to 8-bit precision to be able to work with the device. We have described three strategies for quantizing an SSD Lite MobileNet V2 model:\n\n1. post-training quantization of weights\n2. post-training quantization of weights and activations\n3. quantization-aware training and quantization of weights and activations\n\nSince the Edge TPU requires 8-bit quantized parameters, the first strategy does not apply to these devices, because the activations remain in floating point under that approach. We will now examine model quantization using the other two methods on an Edge TPU device.", null, "Image source\n\nTo deploy a model on an Edge TPU device, you need to compile the quantized `tflite` model into a file that is compatible with the Edge TPU, using the Edge TPU Compiler. Running the Edge TPU Compiler creates a log file for the compilation process. The post-training quantization log file looks like this:\n\n``````Operator Count Status\n\nCUSTOM 1 Operation is working on an unsupported data type\nADD 10 Mapped to Edge TPU\nCONCATENATION 2 Mapped to Edge TPU\nQUANTIZE 11 Mapped to Edge TPU\nCONV_2D 55 Mapped to Edge TPU\nDEPTHWISE_CONV_2D 33 Mapped to Edge TPU\nDEQUANTIZE 2 Operation is working on an unsupported data type\nRESHAPE 13 Mapped to Edge TPU\nLOGISTIC 1 Mapped to Edge TPU\n``````\n\nFor each operator, this log displays the operator name, the number of such operators in the model, and the operator status, which indicates whether the operator is mapped to the Edge TPU or will run on the CPU.\n\nThe log file for quantization-aware training is as follows:\n\n``````Operator Count Status\n\nCUSTOM 1 Operation is working on an unsupported data type\nADD 10 Mapped to Edge TPU\nCONCATENATION 2 Mapped to Edge TPU\nCONV_2D 55 Mapped to Edge TPU\nDEPTHWISE_CONV_2D 33 Mapped to Edge TPU\nRESHAPE 13 Mapped to Edge TPU\nLOGISTIC 1 Mapped to Edge TPU\n``````\n\nAs you can see, the two `DEQUANTIZE` operations present in the post-training quantization log do not appear here.\n\nNow we feed the Oxford Town Centre dataset to the compiled models and compute the latency and frame rate on the Coral Dev Board. The results are as follows:\n\nQuantization Approach Inference Time (ms) FPS\npost-training quantization 6.6 152\nquantization-aware training 6.1 164\n\nVisit Neuralet's GitHub repository for more examples of Edge TPU inference.\n\n## Conclusion\n\nQuantization allows us to convert object detection models trained with floating-point numbers into lightweight models with lower-bit precision. Quantized models accelerate calculations and consume less memory, which makes them ideal for edge computing applications.\n\nIn the Smart Social Distancing application, we applied quantization to our pedestrian detection model to increase the speed of model inference and to be able to run our application on different edge devices in real time.\n\nVisit Neuralet's GitHub repository for more projects. You can also reach us by email at [email protected]" ]
[ null, "https://neuralet.com/docs/tutorials/images/model-quantization/the-last-supper.png", null, "https://neuralet.com/docs/tutorials/images/model-quantization/representation-of-numbers-in-computers.jpg", null, "https://neuralet.com/docs/tutorials/images/model-quantization/quantize-numbers.jpg", null, "https://neuralet.com/docs/tutorials/images/model-quantization/quantization-aware-training.jpg", null, "https://neuralet.com/docs/tutorials/images/model-quantization/tf-object-detection-api.jpg", null, "https://neuralet.com/docs/tutorials/images/model-quantization/quantization-edge-tpu.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8629744,"math_prob":0.88777953,"size":30263,"snap":"2020-24-2020-29","text_gpt3_token_len":6489,"char_repetition_ratio":0.16904062,"word_repetition_ratio":0.0909297,"special_character_ratio":0.2024254,"punctuation_ratio":0.101290196,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9737726,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-04T16:41:27Z\",\"WARC-Record-ID\":\"<urn:uuid:9b98550b-2493-4c1e-8892-b068f2d4d683>\",\"Content-Length\":\"53561\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a16e82fa-62cc-4f91-98b1-590942339e5c>\",\"WARC-Concurrent-To\":\"<urn:uuid:fbcfd3d6-2edc-48de-a50b-8e6db12b4712>\",\"WARC-IP-Address\":\"104.18.61.230\",\"WARC-Target-URI\":\"https://neuralet.com/docs/tutorials/tf-object-detection-api-model-quantization/\",\"WARC-Payload-Digest\":\"sha1:NTXLEBWP4OTQG333WVL6ZI4PMOEFYRGU\",\"WARC-Block-Digest\":\"sha1:YCCRVILRBK6NOYHP2XL6KUBTMPGMZ5Y7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655886178.40_warc_CC-MAIN-20200704135515-20200704165515-00476.warc.gz\"}"}
https://percent-table.com/calculate/what-is-75-of-3870/
[ "# Percent of Calculator\n\nCalculate percentage of X, quick & simple.\n\n75% of 3870 is:\n2902.5\n\n## Percent of - Table For 3870\n\nPercent of Difference\n1% of 3870 is 38.7 3831.3\n2% of 3870 is 77.4 3792.6\n3% of 3870 is 116.1 3753.9\n4% of 3870 is 154.8 3715.2\n5% of 3870 is 193.5 3676.5\n6% of 3870 is 232.2 3637.8\n7% of 3870 is 270.9 3599.1\n8% of 3870 is 309.6 3560.4\n9% of 3870 is 348.3 3521.7\n10% of 3870 is 387 3483\n11% of 3870 is 425.7 3444.3\n12% of 3870 is 464.4 3405.6\n13% of 3870 is 503.1 3366.9\n14% of 3870 is 541.8 3328.2\n15% of 3870 is 580.5 3289.5\n16% of 3870 is 619.2 3250.8\n17% of 3870 is 657.9 3212.1\n18% of 3870 is 696.6 3173.4\n19% of 3870 is 735.3 3134.7\n20% of 3870 is 774 3096\n21% of 3870 is 812.7 3057.3\n22% of 3870 is 851.4 3018.6\n23% of 3870 is 890.1 2979.9\n24% of 3870 is 928.8 2941.2\n25% of 3870 is 967.5 2902.5\n26% of 3870 is 1006.2 2863.8\n27% of 3870 is 1044.9 2825.1\n28% of 3870 is 1083.6 2786.4\n29% of 3870 is 1122.3 2747.7\n30% of 3870 is 1161 2709\n31% of 3870 is 1199.7 2670.3\n32% of 3870 is 1238.4 2631.6\n33% of 3870 is 1277.1 2592.9\n34% of 3870 is 1315.8 2554.2\n35% of 3870 is 1354.5 2515.5\n36% of 3870 is 1393.2 2476.8\n37% of 3870 is 1431.9 2438.1\n38% of 3870 is 1470.6 2399.4\n39% of 3870 is 1509.3 2360.7\n40% of 3870 is 1548 2322\n41% of 3870 is 1586.7 2283.3\n42% of 3870 is 1625.4 2244.6\n43% of 3870 is 1664.1 2205.9\n44% of 3870 is 1702.8 2167.2\n45% of 3870 is 1741.5 2128.5\n46% of 3870 is 1780.2 2089.8\n47% of 3870 is 1818.9 2051.1\n48% of 3870 is 1857.6 2012.4\n49% of 3870 is 1896.3 1973.7\n50% of 3870 is 1935 1935\n51% of 3870 is 1973.7 1896.3\n52% of 3870 is 2012.4 1857.6\n53% of 3870 is 2051.1 1818.9\n54% of 3870 is 2089.8 1780.2\n55% of 3870 is 2128.5 1741.5\n56% of 3870 is 2167.2 1702.8\n57% of 3870 is 2205.9 1664.1\n58% of 3870 is 2244.6 1625.4\n59% of 3870 is 2283.3 1586.7\n60% of 3870 is 2322 1548\n61% of 3870 is 2360.7 1509.3\n62% of 3870 is 2399.4 1470.6\n63% of 3870 is 2438.1 1431.9\n64% of 3870 is 2476.8 1393.2\n65% of 3870 is 2515.5 1354.5\n66% of 3870 is 2554.2 1315.8\n67% of 3870 is 2592.9 1277.1\n68% of 3870 is 2631.6 1238.4\n69% of 3870 is 2670.3 1199.7\n70% of 3870 is 2709 1161\n71% of 3870 is 2747.7 1122.3\n72% of 3870 is 2786.4 1083.6\n73% of 3870 is 2825.1 1044.9\n74% of 3870 is 2863.8 1006.2\n75% of 3870 is 2902.5 967.5\n76% of 3870 is 2941.2 928.8\n77% of 3870 is 2979.9 890.1\n78% of 3870 is 3018.6 851.4\n79% of 3870 is 3057.3 812.7\n80% of 3870 is 3096 774\n81% of 3870 is 3134.7 735.3\n82% of 3870 is 3173.4 696.6\n83% of 3870 is 3212.1 657.9\n84% of 3870 is 3250.8 619.2\n85% of 3870 is 3289.5 580.5\n86% of 3870 is 3328.2 541.8\n87% of 3870 is 3366.9 503.1\n88% of 3870 is 3405.6 464.4\n89% of 3870 is 3444.3 425.7\n90% of 3870 is 3483 387\n91% of 3870 is 3521.7 348.3\n92% of 3870 is 3560.4 309.6\n93% of 3870 is 3599.1 270.9\n94% of 3870 is 3637.8 232.2\n95% of 3870 is 3676.5 193.5\n96% of 3870 is 3715.2 154.8\n97% of 3870 is 3753.9 116.1\n98% of 3870 is 3792.6 77.4\n99% of 3870 is 3831.3 38.7\n100% of 3870 is 3870 0\n\n### Here's How to Calculate 75% of 3870\n\nLet's take a quick example here:\n\nYou have a Target coupon of \\$3870 and you need to know how much will you save on your purchase if the discount is 75 percent.\n\nSolution:\n\nAmount Saved = Original Price x Discount in Percent / 100\n\nAmount Saved = (3870 x 75) / 100\n\nAmount Saved = 290250 / 100" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8781538,"math_prob":0.9890097,"size":3522,"snap":"2021-43-2021-49","text_gpt3_token_len":1852,"char_repetition_ratio":0.33655486,"word_repetition_ratio":0.004322767,"special_character_ratio":0.78762066,"punctuation_ratio":0.18028168,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99996316,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-18T17:19:36Z\",\"WARC-Record-ID\":\"<urn:uuid:3b8f67de-191b-4adb-954b-93b92004b53d>\",\"Content-Length\":\"42747\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1d74d23e-b3f5-4ac6-af5d-2563325e459c>\",\"WARC-Concurrent-To\":\"<urn:uuid:30f740c1-d4a2-4b36-88b9-b0476eb38fc9>\",\"WARC-IP-Address\":\"104.21.12.118\",\"WARC-Target-URI\":\"https://percent-table.com/calculate/what-is-75-of-3870/\",\"WARC-Payload-Digest\":\"sha1:M2A2C4XMOMSEZHA7OEBMSAQJXBDXOBXV\",\"WARC-Block-Digest\":\"sha1:U4O46KBQHSK5NVPYJHMLJXNZLZ6P5MLP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585204.68_warc_CC-MAIN-20211018155442-20211018185442-00529.warc.gz\"}"}
https://janhigh.savingadvice.com/2006/09/29/plumbing-analogy_14977/
[ "User Real IP - 3.229.142.175\n```Array\n(\n => Array\n(\n => 182.68.68.92\n)\n\n => Array\n(\n => 101.0.41.201\n)\n\n => Array\n(\n => 43.225.98.123\n)\n\n => Array\n(\n => 2.58.194.139\n)\n\n => Array\n(\n => 46.119.197.104\n)\n\n => Array\n(\n => 45.249.8.93\n)\n\n => Array\n(\n => 103.12.135.72\n)\n\n => Array\n(\n => 157.35.243.216\n)\n\n => Array\n(\n => 209.107.214.176\n)\n\n => Array\n(\n => 5.181.233.166\n)\n\n => Array\n(\n => 106.201.10.100\n)\n\n => Array\n(\n => 36.90.55.39\n)\n\n => Array\n(\n => 119.154.138.47\n)\n\n => Array\n(\n => 51.91.31.157\n)\n\n => Array\n(\n => 182.182.65.216\n)\n\n => Array\n(\n => 157.35.252.63\n)\n\n => Array\n(\n => 14.142.34.163\n)\n\n => Array\n(\n => 178.62.43.135\n)\n\n => Array\n(\n => 43.248.152.148\n)\n\n => Array\n(\n => 222.252.104.114\n)\n\n => Array\n(\n => 209.107.214.168\n)\n\n => Array\n(\n => 103.99.199.250\n)\n\n => Array\n(\n => 178.62.72.160\n)\n\n => Array\n(\n => 27.6.1.170\n)\n\n => Array\n(\n => 182.69.249.219\n)\n\n => Array\n(\n => 110.93.228.86\n)\n\n => Array\n(\n => 72.255.1.98\n)\n\n => Array\n(\n => 182.73.111.98\n)\n\n => Array\n(\n => 45.116.117.11\n)\n\n => Array\n(\n => 122.15.78.189\n)\n\n => Array\n(\n => 14.167.188.234\n)\n\n => Array\n(\n => 223.190.4.202\n)\n\n => Array\n(\n => 202.173.125.19\n)\n\n => Array\n(\n => 103.255.5.32\n)\n\n => Array\n(\n => 39.37.145.103\n)\n\n => Array\n(\n => 140.213.26.249\n)\n\n => Array\n(\n => 45.118.166.85\n)\n\n => Array\n(\n => 102.166.138.255\n)\n\n => Array\n(\n => 77.111.246.234\n)\n\n => Array\n(\n => 45.63.6.196\n)\n\n => Array\n(\n => 103.250.147.115\n)\n\n => Array\n(\n => 223.185.30.99\n)\n\n => Array\n(\n => 103.122.168.108\n)\n\n => Array\n(\n => 123.136.203.21\n)\n\n => Array\n(\n => 171.229.243.63\n)\n\n => Array\n(\n => 153.149.98.149\n)\n\n => Array\n(\n => 223.238.93.15\n)\n\n => Array\n(\n => 178.62.113.166\n)\n\n => Array\n(\n => 101.162.0.153\n)\n\n => Array\n(\n => 121.200.62.114\n)\n\n => Array\n(\n => 14.248.77.252\n)\n\n => Array\n(\n => 95.142.117.29\n)\n\n => Array\n(\n => 150.129.60.107\n)\n\n => Array\n(\n => 94.205.243.22\n)\n\n => Array\n(\n => 115.42.71.143\n)\n\n => Array\n(\n => 117.217.195.59\n)\n\n => Array\n(\n => 182.77.112.56\n)\n\n => Array\n(\n => 182.77.112.108\n)\n\n => Array\n(\n => 41.80.69.10\n)\n\n => Array\n(\n => 117.5.222.121\n)\n\n => Array\n(\n => 103.11.0.38\n)\n\n => Array\n(\n => 202.173.127.140\n)\n\n => Array\n(\n => 49.249.249.50\n)\n\n => Array\n(\n => 116.72.198.211\n)\n\n => Array\n(\n => 223.230.54.53\n)\n\n => Array\n(\n => 102.69.228.74\n)\n\n => Array\n(\n => 39.37.251.89\n)\n\n => Array\n(\n => 39.53.246.141\n)\n\n => Array\n(\n => 39.57.182.72\n)\n\n => Array\n(\n => 209.58.130.210\n)\n\n => Array\n(\n => 104.131.75.86\n)\n\n => Array\n(\n => 106.212.131.255\n)\n\n => Array\n(\n => 106.212.132.127\n)\n\n => Array\n(\n => 223.190.4.60\n)\n\n => Array\n(\n => 103.252.116.252\n)\n\n => Array\n(\n => 103.76.55.182\n)\n\n => Array\n(\n => 45.118.166.70\n)\n\n => Array\n(\n => 103.93.174.215\n)\n\n => Array\n(\n => 5.62.62.142\n)\n\n => Array\n(\n => 182.179.158.156\n)\n\n => Array\n(\n => 39.57.255.12\n)\n\n => Array\n(\n => 39.37.178.37\n)\n\n => Array\n(\n => 182.180.165.211\n)\n\n => Array\n(\n => 119.153.135.17\n)\n\n => Array\n(\n => 72.255.15.244\n)\n\n => Array\n(\n => 139.180.166.181\n)\n\n => Array\n(\n => 70.119.147.111\n)\n\n => Array\n(\n => 106.210.40.83\n)\n\n => Array\n(\n => 14.190.70.91\n)\n\n => Array\n(\n => 202.125.156.82\n)\n\n => Array\n(\n => 115.42.68.38\n)\n\n => Array\n(\n => 
102.167.13.108\n)\n\n => Array\n(\n => 117.217.192.130\n)\n\n => Array\n(\n => 205.185.223.156\n)\n\n => Array\n(\n => 171.224.180.29\n)\n\n => Array\n(\n => 45.127.45.68\n)\n\n => Array\n(\n => 195.206.183.232\n)\n\n => Array\n(\n => 49.32.52.115\n)\n\n => Array\n(\n => 49.207.49.223\n)\n\n => Array\n(\n => 45.63.29.61\n)\n\n => Array\n(\n => 103.245.193.214\n)\n\n => Array\n(\n => 39.40.236.69\n)\n\n => Array\n(\n => 62.80.162.111\n)\n\n => Array\n(\n => 45.116.232.56\n)\n\n => Array\n(\n => 45.118.166.91\n)\n\n => Array\n(\n => 180.92.230.234\n)\n\n => Array\n(\n => 157.40.57.160\n)\n\n => Array\n(\n => 110.38.38.130\n)\n\n => Array\n(\n => 72.255.57.183\n)\n\n => Array\n(\n => 182.68.81.85\n)\n\n => Array\n(\n => 39.57.202.122\n)\n\n => Array\n(\n => 119.152.154.36\n)\n\n => Array\n(\n => 5.62.62.141\n)\n\n => Array\n(\n => 119.155.54.232\n)\n\n => Array\n(\n => 39.37.141.22\n)\n\n => Array\n(\n => 183.87.12.225\n)\n\n => Array\n(\n => 107.170.127.117\n)\n\n => Array\n(\n => 125.63.124.49\n)\n\n => Array\n(\n => 39.42.191.3\n)\n\n => Array\n(\n => 116.74.24.72\n)\n\n => Array\n(\n => 46.101.89.227\n)\n\n => Array\n(\n => 202.173.125.247\n)\n\n => Array\n(\n => 39.42.184.254\n)\n\n => Array\n(\n => 115.186.165.132\n)\n\n => Array\n(\n => 39.57.206.126\n)\n\n => Array\n(\n => 103.245.13.145\n)\n\n => Array\n(\n => 202.175.246.43\n)\n\n => Array\n(\n => 192.140.152.150\n)\n\n => Array\n(\n => 202.88.250.103\n)\n\n => Array\n(\n => 103.248.94.207\n)\n\n => Array\n(\n => 77.73.66.101\n)\n\n => Array\n(\n => 104.131.66.8\n)\n\n => Array\n(\n => 113.186.161.97\n)\n\n => Array\n(\n => 222.254.5.7\n)\n\n => Array\n(\n => 223.233.67.247\n)\n\n => Array\n(\n => 171.249.116.146\n)\n\n => Array\n(\n => 47.30.209.71\n)\n\n => Array\n(\n => 202.134.13.130\n)\n\n => Array\n(\n => 27.6.135.7\n)\n\n => Array\n(\n => 107.170.186.79\n)\n\n => Array\n(\n => 103.212.89.171\n)\n\n => Array\n(\n => 117.197.9.77\n)\n\n => Array\n(\n => 122.176.206.233\n)\n\n => Array\n(\n => 192.227.253.222\n)\n\n => Array\n(\n => 182.188.224.119\n)\n\n => Array\n(\n => 14.248.70.74\n)\n\n => Array\n(\n => 42.118.219.169\n)\n\n => Array\n(\n => 110.39.146.170\n)\n\n => Array\n(\n => 119.160.66.143\n)\n\n => Array\n(\n => 103.248.95.130\n)\n\n => Array\n(\n => 27.63.152.208\n)\n\n => Array\n(\n => 49.207.114.96\n)\n\n => Array\n(\n => 102.166.23.214\n)\n\n => Array\n(\n => 175.107.254.73\n)\n\n => Array\n(\n => 103.10.227.214\n)\n\n => Array\n(\n => 202.143.115.89\n)\n\n => Array\n(\n => 110.93.227.187\n)\n\n => Array\n(\n => 103.140.31.60\n)\n\n => Array\n(\n => 110.37.231.46\n)\n\n => Array\n(\n => 39.36.99.238\n)\n\n => Array\n(\n => 157.37.140.26\n)\n\n => Array\n(\n => 43.246.202.226\n)\n\n => Array\n(\n => 137.97.8.143\n)\n\n => Array\n(\n => 182.65.52.242\n)\n\n => Array\n(\n => 115.42.69.62\n)\n\n => Array\n(\n => 14.143.254.58\n)\n\n => Array\n(\n => 223.179.143.236\n)\n\n => Array\n(\n => 223.179.143.249\n)\n\n => Array\n(\n => 103.143.7.54\n)\n\n => Array\n(\n => 223.179.139.106\n)\n\n => Array\n(\n => 39.40.219.90\n)\n\n => Array\n(\n => 45.115.141.231\n)\n\n => Array\n(\n => 120.29.100.33\n)\n\n => Array\n(\n => 112.196.132.5\n)\n\n => Array\n(\n => 202.163.123.153\n)\n\n => Array\n(\n => 5.62.58.146\n)\n\n => Array\n(\n => 39.53.216.113\n)\n\n => Array\n(\n => 42.111.160.73\n)\n\n => Array\n(\n => 107.182.231.213\n)\n\n => Array\n(\n => 119.82.94.120\n)\n\n => Array\n(\n => 178.62.34.82\n)\n\n => Array\n(\n => 203.122.6.18\n)\n\n => Array\n(\n => 157.42.38.251\n)\n\n => Array\n(\n => 45.112.68.222\n)\n\n => 
Array\n(\n => 49.206.212.122\n)\n\n => Array\n(\n => 104.236.70.228\n)\n\n => Array\n(\n => 42.111.34.243\n)\n\n => Array\n(\n => 84.241.19.186\n)\n\n => Array\n(\n => 89.187.180.207\n)\n\n => Array\n(\n => 104.243.212.118\n)\n\n => Array\n(\n => 104.236.55.136\n)\n\n => Array\n(\n => 106.201.16.163\n)\n\n => Array\n(\n => 46.101.40.25\n)\n\n => Array\n(\n => 45.118.166.94\n)\n\n => Array\n(\n => 49.36.128.102\n)\n\n => Array\n(\n => 14.142.193.58\n)\n\n => Array\n(\n => 212.79.124.176\n)\n\n => Array\n(\n => 45.32.191.194\n)\n\n => Array\n(\n => 105.112.107.46\n)\n\n => Array\n(\n => 106.201.14.8\n)\n\n => Array\n(\n => 110.93.240.65\n)\n\n => Array\n(\n => 27.96.95.177\n)\n\n => Array\n(\n => 45.41.134.35\n)\n\n => Array\n(\n => 180.151.13.110\n)\n\n => Array\n(\n => 101.53.242.89\n)\n\n => Array\n(\n => 115.186.3.110\n)\n\n => Array\n(\n => 171.49.185.242\n)\n\n => Array\n(\n => 115.42.70.24\n)\n\n => Array\n(\n => 45.128.188.43\n)\n\n => Array\n(\n => 103.140.129.63\n)\n\n => Array\n(\n => 101.50.113.147\n)\n\n => Array\n(\n => 103.66.73.30\n)\n\n => Array\n(\n => 117.247.193.169\n)\n\n => Array\n(\n => 120.29.100.94\n)\n\n => Array\n(\n => 42.109.154.39\n)\n\n => Array\n(\n => 122.173.155.150\n)\n\n => Array\n(\n => 45.115.104.53\n)\n\n => Array\n(\n => 116.74.29.84\n)\n\n => Array\n(\n => 101.50.125.34\n)\n\n => Array\n(\n => 45.118.166.80\n)\n\n => Array\n(\n => 91.236.184.27\n)\n\n => Array\n(\n => 113.167.185.120\n)\n\n => Array\n(\n => 27.97.66.222\n)\n\n => Array\n(\n => 43.247.41.117\n)\n\n => Array\n(\n => 23.229.16.227\n)\n\n => Array\n(\n => 14.248.79.209\n)\n\n => Array\n(\n => 117.5.194.26\n)\n\n => Array\n(\n => 117.217.205.41\n)\n\n => Array\n(\n => 114.79.169.99\n)\n\n => Array\n(\n => 103.55.60.97\n)\n\n => Array\n(\n => 182.75.89.210\n)\n\n => Array\n(\n => 77.73.66.109\n)\n\n => Array\n(\n => 182.77.126.139\n)\n\n => Array\n(\n => 14.248.77.166\n)\n\n => Array\n(\n => 157.35.224.133\n)\n\n => Array\n(\n => 183.83.38.27\n)\n\n => Array\n(\n => 182.68.4.77\n)\n\n => Array\n(\n => 122.177.130.234\n)\n\n => Array\n(\n => 103.24.99.99\n)\n\n => Array\n(\n => 103.91.127.66\n)\n\n => Array\n(\n => 41.90.34.240\n)\n\n => Array\n(\n => 49.205.77.102\n)\n\n => Array\n(\n => 103.248.94.142\n)\n\n => Array\n(\n => 104.143.92.170\n)\n\n => Array\n(\n => 219.91.157.114\n)\n\n => Array\n(\n => 223.190.88.22\n)\n\n => Array\n(\n => 223.190.86.232\n)\n\n => Array\n(\n => 39.41.172.80\n)\n\n => Array\n(\n => 124.107.206.5\n)\n\n => Array\n(\n => 139.167.180.224\n)\n\n => Array\n(\n => 93.76.64.248\n)\n\n => Array\n(\n => 65.216.227.119\n)\n\n => Array\n(\n => 223.190.119.141\n)\n\n => Array\n(\n => 110.93.237.179\n)\n\n => Array\n(\n => 41.90.7.85\n)\n\n)\n```\nPlumbing analogy: JanH's Journey to Freedom\n << Back to all Blogs Login or Create your own free blog Layout: Blue and Brown (Default) Author's Creation\nHome > Plumbing analogy", null, "", null, "", null, "# Plumbing analogy\n\nSeptember 29th, 2006 at 07:31 pm\n\nOkay. So after reorganizing all our insurances, I have a plus in my favor of 10. When I went to pay the mortgage today, I added that ten to my principal. I used to add money quite often, but with the big increases in taxes and insurance, I haven't in a long while. I saw on the forums that so many people do this, and it got me thinking. Even though I only had 10, it will add up. We have discovered a small leak in one of our bathrooms. The way we discovered it was the big puddle. Small drips make a large amount of water over time. So it should be with my 10 dollars. 
At least in my universe.....\n\n### 4 Responses to “Plumbing analogy”\n\n1. Ima saver Says:\n\nGood for you!\n\n2. LuxLiving Says:\n\nIt works Jan - that lil' bit at a time extra is how we paid off all our credit card debts! Keep going, you're doing great!\n\n3. moneycents Says:\n\nKeep going! every little bit helps.\n\n4. PRICEPLUS Says:\n\nWay to go! The analogy is so true. Remember the most powerful force in the universe, according to Einstein, is compound interest! Little money over the years becomes big money!" ]
[ null, "https://www.savingadvice.com/blogs/images/search/top_left.php", null, "https://www.savingadvice.com/blogs/images/search/top_right.php", null, "https://www.savingadvice.com/blogs/images/search/bottom_left.php", null, "https://www.savingadvice.com/forums/core/images/smilies/smile.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9707378,"math_prob":0.99706554,"size":781,"snap":"2019-51-2020-05","text_gpt3_token_len":227,"char_repetition_ratio":0.14929216,"word_repetition_ratio":0.02,"special_character_ratio":0.30857876,"punctuation_ratio":0.1631579,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9987594,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-23T20:39:13Z\",\"WARC-Record-ID\":\"<urn:uuid:922ab0dd-7068-4a64-b90c-55e1312f62d0>\",\"Content-Length\":\"51990\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:62098088-2979-47dd-a0d4-b6e4162f36d4>\",\"WARC-Concurrent-To\":\"<urn:uuid:4d3de6c9-6aff-4bc6-8a55-138ad144ff08>\",\"WARC-IP-Address\":\"173.231.200.26\",\"WARC-Target-URI\":\"https://janhigh.savingadvice.com/2006/09/29/plumbing-analogy_14977/\",\"WARC-Payload-Digest\":\"sha1:OAY2ZJ5GTIUTOOEHZ4Y4SK6NHT5AXIZB\",\"WARC-Block-Digest\":\"sha1:O3KIHQRNRICGI2WX66BMQNOV5F7H7VHC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250613416.54_warc_CC-MAIN-20200123191130-20200123220130-00400.warc.gz\"}"}
https://www.extremeoptimization.com/QuickStart/FSharp/NonUniformRandomNumbers.aspx
[ "Data Analysis Mathematics Linear Algebra Statistics\nNew Version 7.0!", null, "", null, "QuickStart Samples\n\n# Non-Uniform Random Numbers QuickStart Sample (F#)\n\nIllustrates how to generate random numbers from a non-uniform distribution in F#.\n\n```#light\n\nopen System\n\nopen Extreme.Mathematics.Random\nopen Extreme.Statistics.Distributions\n\n// Illustrates generating non-uniform random numbers\n// using the classes in the Extreme.Statistics.Random\n// namespace.\n\n// Random number generators and the generation\n// of uniform pseudo-random numbers are illustrated\n// in the UniformRandomNumbers QuickStart Sample.\n\n// In this sample, we will generate numbers from\n// an exponential distribution, and compare summary\n// results to what would be expected from\n// the corresponding Poisson distribution.\n\nlet meanTimeBetweenEvents = 0.42\n\n// We will use the exponential distribution to generate the time\n// between events. The number of events per unit time follows\n// a Poisson distribution.\n\n// The parameter of the exponential distribution is the time between events.\nlet exponential = ExponentialDistribution(meanTimeBetweenEvents);\n// The parameter of the Poisson distribution is the mean number of events\n// per unit time, which is the reciprocal of the time between events:\nlet poisson = PoissonDistribution(1.0 / meanTimeBetweenEvents)\n\n// We use a MersenneTwister to generate the random numbers:\nlet random = MersenneTwister()\n\n// The totals array will track the number of events per time unit.\nlet totals = Array.zeroCreate<int>(15)\n\nlet rec SampleTimeUnit sampler startTime eventsSoFar =\nif (startTime < 1.0) then\nSampleTimeUnit sampler (startTime + sampler()) (eventsSoFar + 1)\nelse\nstartTime - 1.0, eventsSoFar\n\nlet rec SampleUnits sampler (totals : int[]) iterationsRemaining startTime currentCount =\nmatch iterationsRemaining with\n| 0 -> currentCount\n| _ ->\nlet nextStartTime, eventsInUnit = SampleTimeUnit sampler startTime 0\nif (eventsInUnit >= totals.Length) then\ntotals.[totals.Length-1] <- totals.[totals.Length-1] + 1\nelse\ntotals.[eventsInUnit] <- totals.[eventsInUnit] + 1\nSampleUnits sampler totals (iterationsRemaining - 1) nextStartTime (currentCount + eventsInUnit)\n\nlet count = SampleUnits (fun () -> exponential.Sample(random)) totals 1000000 0.0 0\n\n// Now print the totals\nprintfn \"# Events Actual Expected\"\nfor i in 0..totals.Length-1 do\nlet expected = (int)(1000000.0 * poisson.Probability(i))\nprintfn \"%8d %8d %8d\" i totals.[i] expected\n\nprintf \"Press any key to exit.\"" ]
[ null, "https://www.extremeoptimization.com/images/dl.png", null, "https://www.extremeoptimization.com/images/nuget.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.61291325,"math_prob":0.98572433,"size":2892,"snap":"2020-24-2020-29","text_gpt3_token_len":679,"char_repetition_ratio":0.13573407,"word_repetition_ratio":0.005063291,"special_character_ratio":0.23201936,"punctuation_ratio":0.13716814,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9973401,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-13T04:20:28Z\",\"WARC-Record-ID\":\"<urn:uuid:47cf12a0-91ff-4394-a48a-26b920554939>\",\"Content-Length\":\"26898\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3370841f-b72f-4688-8b04-0d177e0cce1c>\",\"WARC-Concurrent-To\":\"<urn:uuid:d6bcbf45-7f42-49d8-88a5-07108a2e2040>\",\"WARC-IP-Address\":\"208.106.205.98\",\"WARC-Target-URI\":\"https://www.extremeoptimization.com/QuickStart/FSharp/NonUniformRandomNumbers.aspx\",\"WARC-Payload-Digest\":\"sha1:CD6YB62ZZSMETTI4QD6VR2TWVMNSUILV\",\"WARC-Block-Digest\":\"sha1:LEC2B2ZOO3IJ3333JETOO7MUDFOAHEU5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657142589.93_warc_CC-MAIN-20200713033803-20200713063803-00411.warc.gz\"}"}
https://uben.ovh/klibrary/KTimer.html
[ "#include \"KTimer.hpp\" ➔ class KTimer\n\nKTimer\n\nA class that manages time.\n\n``````class KTimer {\nconst unsigned ms;\nconst unsigned count;\n};``````\n\nConstructor\n\n``KTimer(unsigned msPerLoop = 1000/60)``\n\nIt receives the amount of milliseconds required per loop (`msPerLoop`).\nBy default, it receives 16 (1000/60) to ensure 60 FPS.\n\nMember\n\n``````const unsigned ms;\nconst unsigned count;``````\n\nThe amount of milliseconds required per loop.\nThe amount of loop that have been performs.\n\nMethod\n\n``````void start(unsigned msPerLoop);\nvoid start();``````\n\nReset the class.\n\n``void wait();``\n\nCall `wait()` at the end of a loop to ensure a constant amount of loop per second.\nIt will wait the remaining time to complete a loop in a precise time gap.\n\n``unsigned get_time();``\n\nReturns the amount of milliseconds that have elapsed since the initialization of the class.\n\nStatic Function\n\n``void wait_for(unsigned msToWait);``\n\nWaits for a specified amount of milliseconds (`msToWait`).\n\n``std::string date();``\n\nReturns the date as `YYYY-MM-DD hh:mm:ss`.\n\nExample\n\n``````KTimer timer(1000/25);\nbool loop = true;\n\nwhile (loop) {\ntimer.wait();\n\nstd::cout << timer.get_time() << std::endl;\n\nif (timer.count > 100) {\nloop = false;\n}\n}``````" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.69179654,"math_prob":0.9254359,"size":1071,"snap":"2019-26-2019-30","text_gpt3_token_len":263,"char_repetition_ratio":0.14807872,"word_repetition_ratio":0.036144577,"special_character_ratio":0.2717087,"punctuation_ratio":0.18867925,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97339994,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-19T05:18:46Z\",\"WARC-Record-ID\":\"<urn:uuid:2af9cbff-cf36-4b17-a482-9c213051ff26>\",\"Content-Length\":\"6438\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:073a9188-7a42-4c52-a171-b81262408a21>\",\"WARC-Concurrent-To\":\"<urn:uuid:f35e14ef-673f-4cdf-a09a-8d7cc58e64e4>\",\"WARC-IP-Address\":\"163.172.181.203\",\"WARC-Target-URI\":\"https://uben.ovh/klibrary/KTimer.html\",\"WARC-Payload-Digest\":\"sha1:DFRFZ3VEWRDPGDWCFJZTMC5I22TZTSGZ\",\"WARC-Block-Digest\":\"sha1:DAOQD7VA6LBL7YKJ3OD5IJX4HP6NCBFU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998913.66_warc_CC-MAIN-20190619043625-20190619065625-00534.warc.gz\"}"}
http://exir.ru/solutions/Acoustics.htm
[ "# Irodov Solutions → Oscillations and waves → Elastic Waves. Acoustics\n\n 4.157. A plane elastic wave ξ = ae-γx(ωt - kx), where a, γ, ω, and k are constants, propagates in a homogeneous medium. Find the phase difference between the oscillations at the points where the particles' displacement amplitudes differ by η = 1.0%, if γ = 0.42 m-1 and the wavelength is λ = 50 cm. Free solution >> 4.158. Find the radius vector defining the position of a point source of spherical waves if that source is known to be located on the straight line between the points with radius vectors r1 and r2 at which the oscillation amplitudes of particles of the medium are equal to a1 and a2. The damping of the wave is negligible, the medium is homogeneous. Free solution >> 4.163. A point isotropic source with sonic power P = 0.10 W is located at the centre of a round hollow cylinder with radius R = 1.0 m and height h = 2.0 m. Assuming the sound to be completely absorbed by the walls of the cylinder, find the mean energy flow reaching the lateral surface of the cylinder. Free solution >> 4.164. The equation of a plane standing wave in a homogeneous elastic medium has the form ξ = a cos kx * cos ωt. Plot: (a) ξ and ∂ξ/∂x as functions of x at the moments t = 0 and t = T/2, where T is the oscillation period; (b) the distribution of density ρ(x) of the medium at the moments t = 0 and t = T/2 in the case of longitudinal oscillations; (c) the velocity distribution of particles of the medium at the moment t = T/4; indicate the directions of velocities at the antinodes, both for longitudinal and transverse oscillations. Free solution >> 4.165. A longitudinal standing wave ξ = a cos kx * cos ωt is maintained in a homogeneous medium of density ρ. Find the expressions for the space density of (a) potential energy wp (x, t); (b) kinetic energy wk (x, t). Free solution >> 4.174. A source of sonic oscillations with frequency ν0 = 1000 Hz moves at right angles to the wall with a velocity u = 0.17 m/s. Two stationary receivers R1 and R2 are located on a straight line, coinciding with the trajectory of the source, in the following succession: R1-source-R2-wall. Which receiver registers the beatings and what is the beat frequency? The velocity of sound is equal to v = 340 m/s. Free solution >> 4.176. A receiver and a source of sonic oscillations of frequency ν0 = 2000 Hz are located on the x axis. The source swings harmonically along that axis with a circular frequency ω and an amplitude a = 50 cm. At what value of ω will the frequency bandwidth registered by the stationary receiver be equal to Δν = 200 Hz? The velocity of sound is equal to v = 340 m/s. Free solution >> 4.179. A stationary source sends forth monochromatic sound. A wall approaches it with velocity u = 33 cm/s. The propagation velocity of sound in the medium is v = 330 m/s. In what way and how much, in per cent, does the wavelength of sound change on reflection from the wall? Free solution >> 4.185. A plane longitudinal harmonic wave propagates in a medium with density ρ. The velocity of the wave propagation is v. Assuming that the density variations of the medium, induced by the propagating wave, Δρ << ρ, demonstrate that (a) the pressure increment in the medium Δp = -ρv2(∂ξ/∂x), where ∂ξ/∂x is the relative deformation; (b) the wave intensity is defined by Eq. (4.3i). Free solution >> 4.186. A ball of radius R = 50 cm is located in the way of propagation of a plane sound wave. 
\n\n4.186. A ball of radius R = 50 cm is located in the way of propagation of a plane sound wave. The sonic wavelength is λ = 20 cm, the frequency is ν = 1700 Hz, the pressure oscillation amplitude in air is (Δp)ₘ = 3.5 Pa. Find the mean energy flow, averaged over an oscillation period, reaching the surface of the ball. Free solution >>\n\n4.188. At a distance r = 100 m from a point isotropic source of sound of frequency 200 Hz the loudness level is equal to L = 50 dB. The audibility threshold at this frequency corresponds to the sound intensity I₀ = 0.10 nW/m². The damping coefficient of the sound wave is γ = 5.0×10⁻⁴ m⁻¹. Find the sonic power of the source. Free solution >>" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9094776,"math_prob":0.9945885,"size":3229,"snap":"2020-10-2020-16","text_gpt3_token_len":825,"char_repetition_ratio":0.13023256,"word_repetition_ratio":0.03327787,"special_character_ratio":0.2681945,"punctuation_ratio":0.11901306,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99811214,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-25T19:02:54Z\",\"WARC-Record-ID\":\"<urn:uuid:6e9bd504-5db3-4e32-b656-40153b36eca0>\",\"Content-Length\":\"7537\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9cb4d007-d365-4680-a216-9d616c77a63c>\",\"WARC-Concurrent-To\":\"<urn:uuid:ee0cf8ec-7167-40e4-b0c5-da70b2ac868c>\",\"WARC-IP-Address\":\"178.208.83.38\",\"WARC-Target-URI\":\"http://exir.ru/solutions/Acoustics.htm\",\"WARC-Payload-Digest\":\"sha1:7BLE3OERRVBYK5N6WPJKF7NBF4A24Z3X\",\"WARC-Block-Digest\":\"sha1:A4LTN4DNSA663MYBO6UXIXI5ZVAQX4GT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146127.10_warc_CC-MAIN-20200225172036-20200225202036-00195.warc.gz\"}"}
https://nl.mathworks.com/help/fixedpoint/ref/lookuptableoptimizer-app.html
[ "Lookup Table Optimizer\n\nOptimize an existing lookup table or approximate a function with a lookup table\n\nDescription\n\nUse the Lookup Table Optimizer to obtain an optimized (memory-efficient) lookup table that approximates an existing Simulink® block, including Subsystem blocks and math function blocks, or a function handle. You can choose to return the optimized lookup table as a Simulink block or as a MATLAB® function. The optimizer supports any combination of floating-point and fixed-point data types. The original input and output data types can be kept or changed as desired. To minimize memory used, the optimizer selects the data types of breakpoints and table data, as well as the number and spacing of breakpoints.\n\nOpen the Lookup Table Optimizer App\n\n• In a Simulink model, on the Apps tab, click the arrow on the far right of the Apps section. In the Code Generation gallery, click Lookup Table Optimizer.\n\n• In a Simulink model with a Lookup Table block, select the Lookup Table block, in the Lookup Table tab, select Lookup Table Optimizer.\n\nIntroduced in R2018a\n\nFixed-Point Designer Documentation", null, "Get trial now" ]
[ null, "https://nl.mathworks.com/images/responsive/supporting/apps/doc_center/bg-trial-arrow.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7021232,"math_prob":0.784255,"size":1209,"snap":"2022-05-2022-21","text_gpt3_token_len":249,"char_repetition_ratio":0.1834025,"word_repetition_ratio":0.0,"special_character_ratio":0.17204301,"punctuation_ratio":0.106796116,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97476333,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-22T14:14:50Z\",\"WARC-Record-ID\":\"<urn:uuid:3fbcb2d1-0069-4c45-bd41-31938d858b8c>\",\"Content-Length\":\"76591\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:06664094-bd10-431c-b458-019381d053ce>\",\"WARC-Concurrent-To\":\"<urn:uuid:e83fe367-c4b4-4e52-b8bd-24f0a6b18ee2>\",\"WARC-IP-Address\":\"104.69.217.80\",\"WARC-Target-URI\":\"https://nl.mathworks.com/help/fixedpoint/ref/lookuptableoptimizer-app.html\",\"WARC-Payload-Digest\":\"sha1:NPK56OM3N3PX4WRDHHFWNR7X5GG6LX3O\",\"WARC-Block-Digest\":\"sha1:IU6WBLPAHZMYHP5DAPURJLD5PIHQJTGQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320303864.86_warc_CC-MAIN-20220122134127-20220122164127-00209.warc.gz\"}"}
https://wiki.haskell.org/index.php?title=Untypechecking&diff=prev&oldid=3458
[ "# Difference between revisions of \"Untypechecking\"\n\n## Untypechecking\n\nConverting from a type to a term.\n\n```[Haskell] De-typechecker: converting from a type to a term\n\noleg at pobox.com oleg at pobox.com\nTue Mar 1 03:13:08 EST 2005\n\n* Next message: [Haskell] Re: Type of y f = f . f\n* Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]\n\n-----------------------------------------------------------------------------------------------------\n\nThis message presents polymorphic functions that derive a term for a\ngiven type -- for a class of fully polymorphic functions: proper and\nimproper combinators. This is better understood on an example:\n\nrtest4 f g = rr (undefined::(b -> c) -> (a -> b) -> a -> c) HNil f g\n*HC> rtest4 (:[]) Just 'x'\n[Just 'x']\n*HC> rtest4 Just Right True\nJust (Right True)\n\nWe ask the Haskell typechecker to derive us a function of the\nspecified type. We get the real function, which we can then apply to\nvarious arguments. The return result does behave like a `composition'\n-- which is what the type specifies. Informally, we converted from\n`undefined' to defined.\n\nIt must be emphasized that no modifications to the Haskell compiler are\nneeded, and no external programs are relied upon. In particular,\nhowever surprising it may seem, we get by without `eval' -- because\n\nThis message contains the complete code. It can be loaded as it is\ninto GHCi -- tested on GHCi 6.2.1 or GHCi snapshot 6.3.20041106. It\nmay take a moment or two to load though. Commenting our tests speeds\n\nThis message presents two different converters from a type to a term.\nBoth derive a program, a term, from its specification, a type -- for\na class of fully polymorphic functions. The first converter has just\nbeen demonstrated. It is quite limited in that the derived function\nmust be used `polymorphically' -- distinct type variables must be\ninstantiated to different types (or, the user should first\ninstantiate their types and then derive the term). The second\nconverter is far more useful: it can let us `visualize' what a\nfunction with a particular type may be doing. For example, it might\nnot be immediately clear what the function of the type\n(((a -> b -> c) -> (a -> b) -> a -> c) ->\n(t3 -> t1 -> t2 -> t3) -> t) -> t)\n\nis. The test (which is further down) says\n\ntest9 = reify (undefined::(((a -> b -> c) -> (a -> b) -> a -> c) ->\n(t3 -> t1 -> t2 -> t3) -> t) -> t) gamma0\n\n*HC> test9\n\\y -> y (\\d h p -> d p (h p)) (\\d h p -> d)\n\nthat is, the function in question is one of the X combinators. It is\nan improper combinator.\n\nA particular application is converting a point-free function into the\npointful form to really understand the former. For example, it might\ntake time to comprehend the following expression\n\npz = (((. head . uncurry zip . splitAt 1 . repeat) . uncurry) .) . (.) . flip\n\nIt is derived by Stephan Hohe in\n\nOur system says\ntest_pz = reify (undefined `asTypeOf` pz) gamma0\n*HC> test_pz\n\\h p y -> h y (p y)\n\nSo, pz is just the S combinator.\n\nAs an example below shows, an attempt to derive a term for a type\na->b expectedly fails. The type error message essentially says that\na |- b is underivable.\n\nOur system solves type habitation for a class of functions with\npolymorphic types. From another point of view, the system is a prover\nin the implication fragment of intuitionistic logic. Essentially we\nturn a _type_ into a logical program -- a set of Horn clauses -- which\nwe then solve by SLD resolution. 
It is gratifying to see that\n\nThe examples above exhibit fully polymorphic types -- those with\n*uninstantiated* -- implicitly universally quantified -- type\nvariables. That is, our typeclasses can reify not only types but also\ntype schemas. The ability to operate on and compare *unground* types\nwith *uninstantiated* type variables is often sought but rarely\nattained. The contribution of this message is the set of primitives\nfor nominal equality comparison and deconstruction of unground\ntypes. Earlier these tools were used to implement nested monadic\nregions in the type-safe manner without imposing the total order on\nthe regions. The monadic regions are described in the paper by M. Fluet\nand G. Morrisett, ICFP04.\n\nThe rest of this message is as follows:\n\n- equality predicate of type schemas\n- function types as logic programs. Solving type habitation\nby SLD resolution\n- more examples\n- code\n-- class Resolve: SLD resolution\n\n* Equality predicate of type schemas\n\nThis is a short remark on the equality predicate of type schemas. In\nmore detail, this topic will be described elsewhere.\n\nTo obtain the equality, we need the following overlapping instances\nextensions\n\n> {-# OPTIONS -fglasgow-exts #-}\n> {-# OPTIONS -fallow-undecidable-instances #-}\n> {-# OPTIONS -fallow-overlapping-instances #-}\n> {-# OPTIONS -fallow-incoherent-instances #-}\n>\n> module HC where\n\nWe must remark that -fallow-overlapping-instances and\n-fallow-incoherent-instances extensions are used *exclusively* for the\ntype equality testing. With these extensions and the following declarations\n\nclass C a where op:: a -> Int\ninstance C a where ...\ninstance C Bool where ...\ndata W = forall a. W a\n\nthe expression\ncase W () of W x -> op x\n\nis well-typed with the \"instance C a\" chosen. In the context of TypeEq,\nthat behavior guarantees that a quantified variable is equal only to itself\nand nothing else. So, the use of overlapping instances in this code\nis well-defined and sound.\n\nThe existence of this remarkable feature has been pointed out by Simon\nPeyton-Jones, who implemented it.\n\nThe code at the end of this message defines nominal equality on\nquantified type variables. A quantified variable is equal only to\nitself. It is useful to think of such type variables as being\nSkolemized. Fortunately, GHC kindly does all the Skolemization for\nus. For example,\n\n*FP> type'eq id id\nHFalse\n\nbecause the type of the identity function \"a->a\" means \"forall a. a\n->a\", which, after skolemization, becomes \"sk1 -> sk1\". The type of\nthe second `id' in the above equation becomes \"sk2 -> sk2\" -- thus the\ntypes are not equal. Indeed, if we ask GHC to show us the inferred\ntype of the above expression\n\n*FP> :t type'eq id id\ntype'eq id id :: forall a a1 b. (TypeEq (a -> a) (a1 -> a1) b) => b\n\nwe can see the Skolemization very clearly.\n\nOTH,\n*FP> let f x = (x,x) in case f id of (u,v) -> type'eq u v\nHTrue\nbecause\n*FP> :t let f x = (x,x) in case f id of (u,v) -> type'eq u v\nlet f x ...  :: forall a b. (TypeEq (a -> a) (a -> a) b) => b\n\nthat is, the types of both `id' are unified (yet they remain unground)\n\n* Two type-to-term reifiers\n\nAs we have mentioned, we introduce two type reifiers. The first one converts\na type to the following term:\n\n> data Term = Var Int | L Int Term | A Term Term -- deriving Show\n\nThe custom Show instance is at the end of this message. The second reifier\nconverts a type to a true Haskell term. 
The difference between the reifiers\nis akin to the difference between a homogeneous list and a heterogeneous\nlist (HList). The former reifier is far easier to explain.\n\n* Function types as logic programs. Solving type habitation by SLD resolution\n\nTo solve the type habitation problem -- to find a term (proof) for a\nproposition expressed as a type -- we use SLD resolution. It may be\nhelpful to think of the whole process as converting a type\n(proposition) into a Horn-clause logical program, and solving that\nprogram using a Prolog-style evaluator. Here are a few examples of\ntypes and the corresponding logical programs.\n\nThe type a->b->a after applying the implication elimination rule twice\nyields the following program:\n\nt2t(a, var(1)).\nt2t(b, var(2)).\n?- t2t(a,X), Term = lambda(1,(lambda(2,X))).\n% solution: lambda(1, lambda(2, var(1))), or \\x y -> x\n\nThe type \"(b -> c) -> (a -> b) -> a -> c\" (of the composition function)\ncorresponds to the following program:\n\nt2t(c,app(var(1),X)) :- t2t(b,X).\nt2t(b,app(var(2),X)) :- t2t(a,X).\nt2t(a,var(3)).\n?- t2t(c,X), Term = lambda(1,lambda(2,lambda(3,X))).\n% solution:\n% lambda(1, lambda(2, lambda(3, app(var(1), app(var(2), var(3))))))\n% or \\x y z -> x (y z)\n\nThe type of one of the X combinators (which is an improper combinator)\nforall t a b c t1 t2 t3.\n(((a -> b -> c) -> (a -> b) -> a -> c) -> (t3 -> t1 -> t2 -> t3) -> t)\n-> t\n\ncorresponds to the logical program\n\nt2t(t, app(app(var(1),X),Y)) :- t2t(u1,X), t2t(u2,Y).\n\n% u1 denotes (a -> b -> c) -> (a -> b) -> a -> c\nt2t(u1,X) :- t2t(c,Y), X = lambda(3,lambda(4,lambda(5,Y))).\nt2t(c,app(app(var(3),X),Y)) :- t2t(a,X), t2t(b,Y).\nt2t(b,app(var(4),X)) :- t2t(a,X).\nt2t(a,var(5)).\n\n% u2 denotes t3 -> t1 -> t2 -> t3\nt2t(u2,X) :- t2t(t3,Y), X = lambda(6,lambda(7,lambda(8,Y))).\nt2t(t3,var(6)).\nt2t(t1,var(7)).\nt2t(t2,var(8)).\n\n?- t2t(t,X), Term = lambda(1,X).\n\nThe solution to the latter is, when printed nicely,\n\\f -> f (\\a b c -> (a c) (b c)) (\\x y z -> x)\n\nWe construct such programs and solve them with the ordinary SLD resolution\n-- only using Haskell typeclasses rather than Prolog.\n\n Jeroen Fokker, The Systematic Construction of a One-combinator Basis for\nLambda-Terms.\nFormal Aspects of Computing 4 (1992), pp. 776-780.\nhttp://www.cs.uu.nl/people/jeroen/article/combinat/combinat.ps\n\n* More examples\n\nThe reification function takes, as one argument, an environment -- or\nthe initial assumption. Oftentimes it is this:\n\n> gamma0 = G HNil 0\n\nOther environments may be used to reify *open* terms.\n\n> test1 = let term = \\x y -> x in reify term gamma0\n> test2 = let term = \\x y -> y in reify term gamma0\n> test3 = let term = \\f x y -> f x in reify term gamma0\n>\n> -- \\f x y -> f y x\n> test4 = reify (__::(t -> t1 -> t2) -> t1 -> t -> t2) gamma0\n>\n> -- \\f g a b -> f (g a b)\n> test5 = let term = (.) (.) (.) in reify (undefined `asTypeOf` term) gamma0\n>\n> pz = (((. head . uncurry zip . splitAt 1 . repeat) . uncurry) .) . (.) . flip\n> test_pz = reify (undefined `asTypeOf` pz) gamma0\n>\n> -- \\f g h x y -> f (g x) (h y)\n> test7 = let term = ((flip . ((.) .)) .) . (.) 
in reify term gamma0\n\nMore complex are improper combinators:\n\n> -- \\f -> f (\\x -> x)\n> test6 = reify (__::((a->a)->b) -> b) gamma0\n>\n> -- X combinators\n> test8 = let term = \\a b c d -> c d (a (\\x -> d))\n> in reify (undefined `asTypeOf` term) gamma0\n>\n> test9 = reify (undefined::(((a -> b -> c) -> (a -> b) -> a -> c) ->\n> (t3 -> t1 -> t2 -> t3) -> t) -> t) gamma0\n> --}\n\nOther terms to try:\n((((.) . (.)) . (.)) . (.))\n((flip .) . (((flip .) .) . (((.) . (.)) . (((.) . (.)) .))))\nreify const gamma0\nreify (.) gamma0\nreify (asTypeOf __ (.)) gamma0\n\n> term1B = reify (undefined:: t4 -> (t3 -> t4 -> t5 -> t1) -> t6 -> t3 ->\n> (t4 -> t -> t -> t5 -> t1 -> t2) -> t -> t5 -> t2) gamma0\n\nThe type a->b however is not inhabitable: If we uncomment the following,\n\n> --test_not_derivable = reify (__::a -> b) gamma0\n\nwe get the type error:\nNo instances for (Resolve (Gamma (:*: (T2T (:*: a HNil)) HNil)) assum',\nHLookup rt HNil (:*: rt assum))\narising from use of `reify' at ...\n\nThat is, a |- rt is unprovable.\n\n* Code\n\n** Preliminaries\n\nAn abbreviated notation for undefined, which shall occur very often\n\n> __ = __\n\nHeterogeneous lists\n\n> data HTrue\n> data HFalse\n> instance Show HTrue where show _ = \"HTrue\"\n> instance Show HFalse where show _ = \"HFalse\"\n\n> data HNil = HNil deriving Show\n> data HCons a b = HCons a b deriving Show\n\n> -- Syntax sugar from the HList library. Thanks to Ralf Laemmel\n> infixr 2 :*:\n> infixr 2 .*.\n> type e :*: l = HCons e l\n> a .*. b = HCons a b\n\nEnvironment: hs are hypotheses: an HList of T2T, each of which states one\nassumption: an association of a type to a term\n\n> data Gamma hs = G hs Int deriving Show\n> newtype T2T t = T2T Term deriving Show\n\nConverting an implication to a type list of assumptions in inverse order\n\n> class I2L t tl | t -> tl where\n> i2l :: t -> tl\n> i2l = undefined -- it's a type-level function\n>\n> instance (IsFunction t flag, I2L' flag t tl)\n> => I2L t tl\n>\n> class I2L' flag t tl | flag t -> tl\n>\n> instance I2L' HFalse t (t :*: HNil)\n>\n> instance (I2L x tlx, I2L y tly, HAppend tly (tlx :*: HNil) tl)\n> => I2L' HTrue (x->y) tl\n>\n> ti1 = i2l (__::a->b->c)\n> ti2 = i2l (__::(a->b)->a->b)\n> ti3 = i2l (__::(a -> b -> c) -> (a -> b) -> a -> c)\n\nThe main reification class\n\n> class Reify t gamma where\n> reify :: t -> gamma -> Term\n>\n> instance (IsFunction t flag, I2L' flag t (rt :*: at),\n> Resolve gamma' ((rt :*: HNil) :*: HNil))\n> => Reify t gamma where\n> reify t gamma = let (varlist, gamma') = add'hyp gamma (__::at) []\n> in foldr (\\ (Var v) s -> L v s )\n> (resolve gamma' (((__::rt) .*. HNil) .*. HNil) [])\n> varlist\n\nLabel top-level assumptions with variables\n\n> class AddHyp gamma tl gamma' | gamma tl -> gamma' where\n> add'hyp :: gamma -> tl -> [Term] -> ([Term],gamma')\n>\n> instance AddHyp gamma HNil gamma where\n> add'hyp g _ varlist = (varlist,g)\n>\n> instance AddHyp (Gamma ((T2T t) :*: hs)) r gamma\n> => AddHyp (Gamma hs) (t :*: r) gamma where\n> add'hyp (G hs varcount) _ varlist =\n> let v = (Var varcount)\n> hs' = ((T2T v)::T2T t) .*. 
hs\n> in add'hyp (G hs' (varcount + 1)) (__::r) (v:varlist)\n\n** The SLD resolution algorithm\n\n> class Resolve gamma goals where\n> resolve :: gamma -> goals -> [Term] -> Term\n>\n> instance Resolve gamma HNil where\n> resolve _ _ pt = foldr1 (flip A) pt\n>\n> instance (HLookup g hs (g :*: assum),\n> HReverse assum assum',\n> Resolve (Gamma hs) assum',\n> Resolve (Gamma hs) gr)\n> => Resolve (Gamma hs) ((g :*: HNil) :*: gr) where\n> resolve gamma@(G hs _) _ pt =\n> let T2T t1 = hlookup (__::g) hs\n> in resolve gamma (__::gr) ((resolve gamma (__::assum') [t1]) : pt)\n>\n> instance (AddHyp (Gamma hs) (gc :*: gcr) gamma',\n> AddHyp (Gamma ((T2T gc) :*: hs)) gcr gamma',\n> Resolve gamma' ((g :*: HNil) :*: HNil),\n> Resolve (Gamma hs) gr)\n> => Resolve (Gamma hs) ((g :*: gc :*: gcr) :*: gr) where\n> resolve gamma@(G hs _) _ pt =\n> let t1 = let (varlist, gamma'::gamma') =\n> add'hyp gamma (__::(gc :*: gcr)) []\n> in foldr (\\ (Var v) s -> L v s )\n> (resolve gamma' (((__::g) .*. HNil) .*. HNil) [])\n> varlist\n> in resolve gamma (__::gr) (t1 : pt)\n\nLookup in the `associative' type-indexed list\n\n> class HLookup t l w | t l -> w where\n> hlookup :: t -> l -> T2T w\n>\n> instance (TypeEq t t' flag, HLookup' flag t ((T2T (t' :*: at)) :*: r) w)\n> => HLookup t ((T2T (t' :*: at)) :*: r) w where\n> hlookup = hlookup' (undefined::flag)\n>\n> class HLookup' flag t l rt | flag t l -> rt where\n> hlookup' :: flag -> t -> l -> T2T rt\n>\n> instance HLookup' HTrue t ((T2T (t :*: at)) :*: r) (t :*: at) where\n> hlookup' _ _ (HCons t _) = t\n>\n> instance HLookup t r w => HLookup' HFalse t ((T2T t') :*: r) w where\n> hlookup' _ t (HCons _ r) = hlookup t r\n\n> class HAppend l1 l2 l | l1 l2 -> l where\n> happend :: l1 -> l2 -> l\n> instance HAppend HNil l2 l2 where\n> happend _ l2 = l2\n> instance HAppend l1 l2 l => HAppend (a :*: l1) l2 (a :*: l) where\n> happend (HCons a l1) l2 = a .*. (happend l1 l2)\n\n> class HReverse l1 l2 | l1 -> l2\n> instance HReverse HNil HNil\n> instance (HReverse l1 l1', HAppend l1' (a :*: HNil) l2)\n> => HReverse (a :*: l1) l2\n\n** Show Terms in a nice way\n\n> instance Show Term where\n> show (Var i) =\n> case divMod i 26 of\n> (0,i) -> simple_var i\n> (j,i) -> simple_var i ++ (show j)\n> where simple_var i = [\"yphdcbaeijklmnofqrstuvwxgz\" !! i]\n> show (A e v@(Var i)) = show e ++ \" \" ++ show v\n> show (A e1 e2) = show e1 ++ \" \" ++ \"(\" ++ show e2 ++ \")\"\n> show (L i e) = show' [i] e\n> where show' vars (L j e) = show' (j:vars) e\n> show' vars e = \"\\\\\" ++ unwords (map (show.Var) \\$ reverse vars) ++\n> \" -> \" ++ show e\n\n* The second reifier: from types to bona fide Haskell terms\n\nThe second reifier is a `lifted' version of the first one. Wherever we\nused regular Haskell lists before, we use HLists now.\n\n> newtype T2TT tl t = T2TT t deriving Show\n>\n> class RR t gamma where\n> rr :: t -> gamma -> t\n>\n> instance (IsFunction t flag, RR' flag t gamma)\n> => RR t gamma where\n> rr = rr' (__::flag)\n>\n> class RR' flag t gamma where\n> rr' :: flag -> t -> gamma -> t\n>\n> instance (IsFunction x flagx, I2L' flagx x tlx,\n> IsFunction y flagy, RR' flagy y ((T2TT tlx x) :*: gamma))\n> => RR' HTrue (x->y) gamma where\n> rr' _ _ gamma = \\ (v::x) -> rr' (__::flagy) (__::y)\n> (((T2TT v)::T2TT tlx x) .*. gamma)\n>\n> instance (RResolve gamma ((t :*: HNil) :*: HNil) HNil t)\n> => RR' HFalse t gamma where\n> rr' _ _ gamma = rresolve gamma (((__::t) .*. HNil) .*. 
HNil) HNil\n>\n>\n> class RResolve gamma goals tl t | gamma goals tl -> t where\n> rresolve :: gamma -> goals -> tl -> t\n>\n> instance RResolve gamma HNil (t :*: HNil) t where\n> rresolve _ _ (HCons t HNil) = t -- foldr1 (flip A) pt\n>\n> instance RResolve gamma HNil (t1 :*: tr) (t->r)\n> => RResolve gamma HNil (t :*: t1 :*: tr) r where\n> rresolve g _ (HCons t r) = (rresolve g HNil r) t\n>\n> instance (RHLookup g gamma (T2TT (g :*: assum) g'),\n> HReverse assum assum',\n> RResolve gamma assum' (g' :*: HNil) ra,\n> RResolve gamma gr (ra :*: pt) t)\n> => RResolve gamma ((g :*: HNil) :*: gr) pt t where\n> rresolve gamma _ pt =\n> let T2TT t1 = rhlookup (__::g) gamma\n> ra :: ra = rresolve gamma (__::assum') (t1 .*. HNil)\n> in rresolve gamma (__::gr) (ra .*. pt)\n> -- the instance for improper combinators is left as an exercise to the reader\n\n> -- Lookup in the `associative' type-indexed list\n> class RHLookup t l w | t l -> w where\n> rhlookup :: t -> l -> w\n>\n> instance (TypeEq t t' flag,RHLookup' flag t ((T2TT (t' :*: at) tt') :*: r) w)\n> => RHLookup t ((T2TT (t' :*: at) tt') :*: r) w where\n> rhlookup = rhlookup' (__::flag)\n>\n> class RHLookup' flag t l w | flag t l -> w where\n> rhlookup' :: flag -> t -> l -> w\n>\n> instance RHLookup' HTrue t ((T2TT (t :*: at) tt) :*: r)\n> (T2TT (t :*: at) tt) where\n> rhlookup' _ _ (HCons t _) = t\n>\n> instance RHLookup t r w => RHLookup' HFalse t ((T2TT tl' t') :*: r) w where\n> rhlookup' _ t (HCons _ r) = rhlookup t r\n\nA few tests:\n\n> rtest1 = let f (x::a) (y::b) ::a = rr undefined HNil x y\n> in f 1 2\n> rtest2 = let f x y = rr (undefined::a->b->b) HNil x y\n> in f 1 2\n>\n> -- \\f x y -> f x :: forall t t1 t2. (t -> t1) -> t -> t2 -> t1\n> rtest3 = let t f x y = rr (undefined::(t -> t1) -> t -> t2 -> t1) HNil f x y\n> in t Just 1 'c'\n>\n> rtest4 f g = rr (undefined::(b -> c) -> (a -> b) -> a -> c) HNil f g\n> -- *HC> rtest4 (:[]) (\\x -> (True,x)) 10\n> -- [(True,10)]\n> -- must be truly polymorphic!\n\n* Equality and deconstruction of type schemas\n\n> class IsFunction a b | a -> b\n> instance TypeCast f HTrue => IsFunction (x->y) f\n> instance TypeCast f HFalse => IsFunction a f\n>\n> -- literally lifted from the HList library\n> class TypeCast a b | a -> b, b->a where typeCast  :: a -> b\n> class TypeCast' t a b | t a -> b, t b -> a where typeCast'  :: t->a->b\n> class TypeCast'' t a b | t a -> b, t b -> a where typeCast'' :: t->a->b\n> instance TypeCast' () a b => TypeCast a b where typeCast x = typeCast' () x\n> instance TypeCast'' t a b => TypeCast' t a b where typeCast' = typeCast''\n> instance TypeCast'' () a a where typeCast'' _ x = x\n>\n> class TypeEq' () x y b => TypeEq x y b | x y -> b\n> where type'eq :: x -> y -> b\n> type'eq _ _ = undefined::b\n> class TypeEq' q x y b | q x y -> b\n> class TypeEq'' q x y b | q x y -> b\n> instance TypeEq' () x y b => TypeEq x y b\n> instance TypeCast b HTrue => TypeEq' () x x b\n> instance TypeEq'' q x y b => TypeEq' q x y b\n> instance TypeEq'' () x y HFalse\n\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.712476,"math_prob":0.9699787,"size":19784,"snap":"2019-43-2019-47","text_gpt3_token_len":6188,"char_repetition_ratio":0.14560162,"word_repetition_ratio":0.14487633,"special_character_ratio":0.3717145,"punctuation_ratio":0.1580409,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9985277,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-22T03:40:52Z\",\"WARC-Record-ID\":\"<urn:uuid:3f5f9734-f83c-4d2a-9fd0-1824776ecee4>\",\"Content-Length\":\"39389\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9cf317c9-76fc-4dad-bd26-f427796fea8a>\",\"WARC-Concurrent-To\":\"<urn:uuid:bf6e702c-98b6-404c-88fc-f89cb4fd4604>\",\"WARC-IP-Address\":\"151.101.201.175\",\"WARC-Target-URI\":\"https://wiki.haskell.org/index.php?title=Untypechecking&diff=prev&oldid=3458\",\"WARC-Payload-Digest\":\"sha1:UW6ULNAWOAT7IXZBA3HITMXBDG2GQRDK\",\"WARC-Block-Digest\":\"sha1:2SDVWZKKRSXOPFL4TLXASVEMFLGY4L5W\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496671106.83_warc_CC-MAIN-20191122014756-20191122042756-00456.warc.gz\"}"}
https://kids.kiddle.co/Key_schedule
[ "", null, "# Key schedule facts for kids\n\nKids Encyclopedia Facts\n\nIn cryptography, the so-called product ciphers are a certain kind of ciphers, where the decryption of data is done in \"rounds\". The general setup of each round is the same, except for some hard-coded parameters and a part of the cipher key, called a subkey. A key schedule is an algorithm that, given the key, calculates the subkeys for these rounds.\n\n## Some types of key schedules\n\n• Some ciphers have simple key schedules. For example, the block cipher TEA simply splits the 128-bit key into four 32-bit pieces and uses them repeatedly in successive rounds.\n• DES uses a key schedule where the 56 bit key is divided into two 28-bit halves then each half is treated separately. In successive rounds, both halves are rotated left by one or two bits (specified for each round), and then 48 subkey bits are selected by Permuted Choice 2 (PC-2) — 24 bits from the left half, and 24 from the right. The rotations mean that a different set of bits is used in each subkey; each bit is used in approximately 14 out of the 16 subkeys.\n• In an effort to avoid simple relationships between the cipher key and the subkeys, to resist such forms of cryptanalysis as related-key attacks and slide attacks, many modern ciphers use much more complex key schedules, such as algorithms that use a one-way function to generate an \"expanded key\" from which subkeys are drawn. Some ciphers, such as Rijndael (AES) and Blowfish, use parts of the cipher algorithm itself for this key expansion, sometimes initialized with some \"nothing up my sleeve numbers\". Other ciphers, such as RC5, expand keys with functions that are somewhat or completely different from the encryption functions.\n\nKnudsen and Mathiassen (2004) give some experimental evidence that indicate that the key schedule plays a part in providing strength against linear and differential cryptanalysis. For toy Feistel ciphers, it was observed that those with complex and well-designed key schedules can reach a uniform distribution for the probabilities of differentials and linear hulls faster than those with poorly-designed key schedules.\n\n• Lars R. Knudsen and John Erik Mathiassen, On the Role of Key Schedules in Attacks on Iterated Ciphers, ESORICS 2004, pp322–334.", null, "Key schedule Facts for Kids. Kiddle Encyclopedia." ]
[ null, "https://kids.kiddle.co/images/wk/kids-robot.png", null, "https://www.kiddle.co/kids-search-engine-i.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93527436,"math_prob":0.91196924,"size":2237,"snap":"2021-21-2021-25","text_gpt3_token_len":493,"char_repetition_ratio":0.12987013,"word_repetition_ratio":0.0,"special_character_ratio":0.21278498,"punctuation_ratio":0.0911271,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96953213,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-18T11:07:52Z\",\"WARC-Record-ID\":\"<urn:uuid:03708c9f-acd4-4458-8cd6-4a89dfa5be2c>\",\"Content-Length\":\"18350\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fb707f84-3f09-4461-80fc-9f26caf36818>\",\"WARC-Concurrent-To\":\"<urn:uuid:79c033d3-ef6a-445f-b7f3-25de764e69b4>\",\"WARC-IP-Address\":\"169.62.14.96\",\"WARC-Target-URI\":\"https://kids.kiddle.co/Key_schedule\",\"WARC-Payload-Digest\":\"sha1:O5SY5MOHCGZ7636FGJJKDK362FZKWMXD\",\"WARC-Block-Digest\":\"sha1:7KEZM4CMOXDCSI5WPK3NEUN6PBBIYHYE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487636559.57_warc_CC-MAIN-20210618104405-20210618134405-00276.warc.gz\"}"}
https://www.iue.tuwien.ac.at/phd/orio/node44.html
[ "", null, "", null, "", null, "", null, "Next: 4.1 The Finite Element Up: Dissertation R.L. de Orio Previous: 3.6 Model Summary\n\n# 4. Numerical Implementation\n\nThe mathematical description of physical phenomena very frequently consits of partial differential equations (PDE's) defined in a given domain of interest. Usually, these equations can be analytically solved only for very simple problems. Thus, for complex geometries and problems, involving variable material properties and general boundary conditions, numerical methods have to be applied.\n\nConsidering the model proposed in Chapter 3, the finite element method (FEM) has been chosen as numerical solving procedure. It presents a solid mathematical formulation for solving several types of PDE's and can handle complex geometries with different types of boundary conditions. Moreover, since it was originally devised for solving mechanical problems, it is rather convenient for the model implementation.\n\nThis chapter begins with a brief introduction to the finite element method, where the basic ideas are presented. A rigorous mathematical treatment is beyond the scope of this work and can be found elsewhere [151,152,153,154]. Then, the discretization of the model equations given in Chapter 3 is presented, followed by the description of the numerical implementation in a TCAD simulation tool.\n\nSubsections", null, "", null, "", null, "", null, "Next: 4.1 The Finite Element Up: Dissertation R.L. de Orio Previous: 3.6 Model Summary\n\nR. L. de Orio: Electromigration Modeling and Simulation" ]
[ null, "https://www.iue.tuwien.ac.at/phd/orio/next.png", null, "https://www.iue.tuwien.ac.at/phd/orio/up.png", null, "https://www.iue.tuwien.ac.at/phd/orio/prev.png", null, "https://www.iue.tuwien.ac.at/phd/orio/contents.png", null, "https://www.iue.tuwien.ac.at/phd/orio/next.png", null, "https://www.iue.tuwien.ac.at/phd/orio/up.png", null, "https://www.iue.tuwien.ac.at/phd/orio/prev.png", null, "https://www.iue.tuwien.ac.at/phd/orio/contents.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9253917,"math_prob":0.95474607,"size":1198,"snap":"2023-40-2023-50","text_gpt3_token_len":218,"char_repetition_ratio":0.10134003,"word_repetition_ratio":0.0,"special_character_ratio":0.18113522,"punctuation_ratio":0.110552765,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99780536,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-26T12:51:08Z\",\"WARC-Record-ID\":\"<urn:uuid:1fd31d33-9a49-4c28-97da-725ece181c65>\",\"Content-Length\":\"6299\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d68c788f-50ec-495f-8ed9-5351bb2839d0>\",\"WARC-Concurrent-To\":\"<urn:uuid:bea9bf1b-7c89-4d55-a0a2-dfec726e4462>\",\"WARC-IP-Address\":\"128.131.68.20\",\"WARC-Target-URI\":\"https://www.iue.tuwien.ac.at/phd/orio/node44.html\",\"WARC-Payload-Digest\":\"sha1:U3TRTL6VWA5SBXVHXM54XK2VDKISG7V3\",\"WARC-Block-Digest\":\"sha1:3G2QSUGJTI34HJOBQ74ICKYTKZXSOUE2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510208.72_warc_CC-MAIN-20230926111439-20230926141439-00770.warc.gz\"}"}
http://laxbytes.com/binwomstats19/PRImidf001020011.php
[ "``` EXPLANATION\nRANK PERCENTILE VALUE * WEIGHT = NET\nGames Played 19 * 0.20 = 3.80\nGames Started 13 * 0.30 = 3.90\nGoals >100 64 2 * 0.25 = 0.50\nAssists -- -- 0 * 0.21 = 0.00\nTotal Points >100 60 2 * 0.05 = 0.10\nDraw Controls >100 82 8 * 0.45 = 3.60\nTotal Shots >100 66 8 * 0.10 = 0.80\nShots on Goal >100 63 4 * 0.10 = 0.40\nMissed Shots Off Goal >100 73 4 * -0.05 = -0.20\nShot Percentage >100 56 0.25 * 1.00 = 0.25\nShots-on-goal Percentage >100 51 0.50 * 1.00 = 0.50\nGround Balls >100 95 36 * 0.45 = 16.20\nTurnovers >100 <50 4 * -0.60 = -2.40\nCaused Turnovers >100 94 18 * 2.00 = 36.00\nFree Position Goals >100 77 1 * 0.03 = 0.03\nFree Position Misses -- -- 0 * -0.10 = -0.00\nGoals Saved -- -- 0 * 0.80 = 0.00\nGoals Allowed -- -- 0 * -0.80 = -0.00\nShots Faced -- -- 0 * 0.01 = 0.00\nSave Percentage -- -- 0.00 * 5.00 = 0.00\n\nPIR RAW (NET/GamesPlayed) 3.28\nOffensive Factor (OF) 0.80\nDefensive Factor (DF) 0.88\nStrength Schedule (SOS) 0.94\nBasis Points 2.50\n-------\nPIR = (PRI RAW)*((OF+DF)/2)*SOS + Basis Points 5.08\n\nList of all players for Colorado\nList of all players for Division I\n\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.50099355,"math_prob":0.9966006,"size":1076,"snap":"2019-43-2019-47","text_gpt3_token_len":454,"char_repetition_ratio":0.15485075,"word_repetition_ratio":0.00896861,"special_character_ratio":0.5789963,"punctuation_ratio":0.17818181,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9825492,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-20T20:35:23Z\",\"WARC-Record-ID\":\"<urn:uuid:f9795edf-e6c9-49f8-b7c7-62b2028b29cd>\",\"Content-Length\":\"52699\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:83069c73-c384-43ba-ab11-c543e4deb3dd>\",\"WARC-Concurrent-To\":\"<urn:uuid:fe856d49-32da-4645-b457-9a5526c21e09>\",\"WARC-IP-Address\":\"50.63.209.1\",\"WARC-Target-URI\":\"http://laxbytes.com/binwomstats19/PRImidf001020011.php\",\"WARC-Payload-Digest\":\"sha1:GVUGIPS3RC7D4W7NZCRTDVRTWYE7H4SP\",\"WARC-Block-Digest\":\"sha1:BHJQKOYPSO3IT4I6RBYMFFYTGE57C52Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670601.75_warc_CC-MAIN-20191120185646-20191120213646-00225.warc.gz\"}"}
https://www.works-hub.com/learn/through-the-looking-class-contravariant-functors-and-applicatives-5179f
[ "# Through the Looking Class: Contravariant Functors and Applicatives\n\nSiddharth Bhat\n\n16 Mar 2021", null, "In this blog post, we will learn about `Contravariant` and `Divisible` which provide duals for `Data.Functor` and `Data.Applicative` respectively.\n\n``````{-# LANGUAGE NoImplicitPrelude #-}\n{-# LANGUAGE InstanceSigs #-}\nimport GHC.Base hiding (Functor)\nimport GHC.Float -- for Ord instance of Float\nimport GHC.Show -- for show\n``````\n\n## Dual Functors\n\nFirst, a quick recap of functors:\n\n``````class Functor f where\nfmap :: (a -> b) -> f a -> f b\n``````\n\nThis lets us lift a function `f: a -> b` into a `fmap f: f a -> f b`. The dual is called `Contravariant`:\n\n``````class Contravariant f where\ncontramap :: (a -> b) -> f b -> f a\n``````\n\nLet us look at some example to build our intuition of such a typeclass.\n\n## Predicates\n\nThe classic example is that of a predicate, which is something that tells us whether a value of type `t` obeys some property or not:\n\n``````data Predicate t = Predicate { runPredicate :: t -> Bool }\ninstance Contravariant Predicate where\ncontramap :: (a -> b)\n-> Predicate b -- b -> Bool\n-> Predicate a -- a -> Bool\ncontramap a2b (Predicate b2bool) =\nPredicate (\\a -> b2bool (a2b a))\n``````\n\nAn example of such a thing is if we know how to check a real number is greater than zero:\n\n``````reGt0 :: Predicate Float\nreGt0 = Predicate (\\x -> x > 0.0)\n``````\n\nand we can converts integers into reals:\n\n``````intToReal :: Int -> Float\nintToReal i = error \"TODO\" -- fromIntegral\n``````\n\nthen we can check if an integer is greater than zero:\n\n``````intGt0 :: Predicate Int\nintGt0 = contramap intToReal reGt0\n``````\n\nThis is described by the picture:", null, "So, such a `Predicate Float` \"consumes\" a `Float` to produce a `Bool`. We can pull back the consumption along a function `Int -> Float` to consume a `Int` and produce a `Bool`.\n\n## Dual Applicatives\n\n``````class Functor f => Applicative f where\npure :: a -> f a\n(<*>) :: f (a -> b) -> f a -> f b\n``````\n\nRecall that an `Applicative` allow us to work with tuples:\n\n``````liftA2 :: (a -> b -> c) -> f a -> f b -> f c\n``````\n\nWe can write the type of liftA2 to be more suggestive as:\n\n``````liftA2 :: ((a, b) -> c) -> ((f a, f b) -> f c)\n``````\n\nIf we can combine a tuple `(a, b)` into a value `c`, then we can glue lifted values `(f a, f b)` into a lifted `f c`.\n\nThe dual, called `Divisible`, says that if we can break a value `c` into `(a, b)`, then we can glue lifted values `(f a, f b)` into a lifted `f c`.\n\n``````class Contravariant f => Divisible f where\ndivide :: (c -> (a, b)) -> f a -> f b -> f c\nconquer :: f a\n``````\n\nThe `conquer` is some sort of \"default procedure\" we can perform for any value. It'll be something benign, as we'll see when we check out the examples.", null, "Above, we have a picture of how to think about `Divisible`. The box with pink-and-blue is a `c`, that contains an `a` and a `b`. We have a function `p` that shows us how to split a `c` into an `a` and a `b`. We also have `f a` and `f b`, which consume `a, b` to produce some orange output. If we have this data, we can build an `f c`, something that can consume a `c`, by (1) splitting `c` into `(a, b)`, and then consuming the `(a, b)` using `f a` and `f b`.\n\n## Example 1: Predicates\n\nWe can continue our example of predicates. 
If we know how to check if something holds for `a` and something holds for `b`, we can check how something holds for `(a, b)`: check for both `a` and `b`. So, this would be:\n\n``````instance Divisible Predicate where\ndivide :: (c -> (a, b)) ->\nPredicate a -> Predicate b -> Predicate c\ndivide c2ab (Predicate a2bool) (Predicate b2bool) =\nPredicate (\\c -> let (a, b) = c2ab c\nin a2bool a && b2bool b)\n``````\n\nAs for when we know nothing, we could either allow it or disallow it. In this case, since we are `&&` ing information, the way to be \"benign\" is to allow things (that is, return a `True`). Since `True && b = b`, we are sure that the `conquer` is indeed benign.\n\n`````` conquer :: Predicate a\nconquer = Predicate (\\a -> True)\n``````\n\n## Example 2: Serialization\n\nConsider the ability to convert a data type to a string. These \"consume\" the (varying) data types to produce a `String`. So, for example:\n\n``````data Serializer a = Serializer { serialize :: a -> String }\n``````\n\nIf we know how to print an `b` (that is, we have `b2string :: b-> String`), and we can turn `a`'s into `b`s, we compose the two to print `a`s:\n\n``````instance Contravariant Serializer where\ncontramap :: (a -> b) -> Serializer b -> Serializer a\ncontramap a2b (Serializer b2string) =\nSerializer (\\a -> b2string (a2b a))\n``````\n\nFor our `Divisible` instance, if we can print `a` and `b`, and we can break a `c` into an `(a, b)`, we (1) break the `c` down, and then (2) print the `a` and the `b`, and (3) concatenate the string representation of `a` and `b`:\n\n``````instance Divisible Serializer where\ndivide :: (c -> (a, b))\n-> Serializer a\n-> Serializer b\n-> Serializer c\ndivide c2ab (Serializer a2str) (Serializer b2str) =\nSerializer (\\c -> let (a, b) = c2ab c\nin (a2str a) <> (b2str b))\n``````\n\nAs for `conquer`, if we don't know how to print something, the best thing to do is to not print anything at all. This prevents us from garbling output. Thus, the benign choice for `conquer` is to print an empty string:\n\n`````` conquer :: Serializer a\nconquer = Serializer (\\a -> \"\")\n``````\n\nWe can put `Serializer` work immediately. For example, say we know how to serializer `Int`s and `Float`s:\n\n``````intSerial :: Serializer Int\nintSerial = Serializer (\\i -> show i)\n\nfloatSerial :: Serializer Float\nfloatSerial = Serializer (\\f -> show f)\n``````\n\nIf we now have a type that contains `Int` and `Float`, no problem! `Divisible` has our back to combine the `Serializer`s together:\n\n``````data Foo = Foo Int Float\nfooSerial :: Serializer Foo\nfooSerial = divide (\\(Foo i f) -> (i, f))\nintSerial floatSerial\n``````\n\n## Example 3 / Generalization: Fixed output type\n\nWe can generalize both examples: we have seen before: `Predicate` is all functions into a fixed output type `Bool`, while `Serializer` is functions into a fixed output type `String`. We need to know how to combine the outputs --- in the case of `Bool`, we combined the outputs with `&&`. In the case of `String`, we combined the outputs with `<>`. 
In general, we need a monoid.

```
data Into y x = Into { runInto :: x -> y }

instance Contravariant (Into y) where
  contramap :: (b -> a)
            -> Into y a -- a -> y
            -> Into y b -- b -> y
  contramap b2a (Into a2y) =
    Into (\\b -> a2y (b2a b))
```

For the `divide`, we combine the data from `a` and `b` using the monoid of `y`:

```
instance Monoid y => Divisible (Into y) where
  divide :: (c -> (a, b))
         -> Into y a -- a -> y
         -> Into y b -- b -> y
         -> Into y c -- c -> y
  divide c2ab (Into a2y) (Into b2y) =
    Into (\\c -> let (a, b) = c2ab c
                in (a2y a) <> (b2y b))
```

For `conquer`, the \"benign instance\" is the `mempty` value of the monoid, which by definition does not \"interact\" with any element, as `mempty <> m = m` and `m <> mempty = m`:

```
  conquer :: Into y a -- a -> y
  conquer = Into (\\a -> mempty)
```

In all of these examples, we have (a) a data structure that can be decomposed: this is the `c -> (a, b)` part, and (b) a consumer of data: `f a` is \"something that can consume an `a`\".

## The laws for Contravariant

So far, I have been skating on intuition, without telling you what the laws `Divisible` must follow are. Let's get formal now. For a given `Contravariant f`, we need an `fmap`-like law to hold:

• `fmap`'s law: `fmap (f . g) = fmap f . fmap g`
• `contramap`'s law: `contramap (f . g) = contramap g . contramap f`

See that the order gets flipped in comparison to `fmap`. Let us check that this law holds for `Into y`, since that was the most general example.

```
contramap :: (p -> q) -> Into y q -> Into y p
x2q :: x -> q
contramap (x2q . p2x) $ (Into q2y) =?=
contramap p2x . contramap x2q $ (Into q2y)
```

We can rewrite our `Into y` definition to be easier to manipulate using point-free style:

```
instance Contravariant (Into y) where
  contramap :: (b -> a)
            -> Into y a -- a -> y
            -> Into y b -- b -> y
  contramap b2a (Into a2y) = Into (a2y . b2a) -- b -> y
```

if we now try to simplify:

1. `contramap p2x . contramap x2q $ (Into q2y)`
2. Remove `.` and `$`: `contramap p2x (contramap x2q (Into q2y))`
3. unwrap inner `contramap`: `contramap p2x (Into (q2y . x2q))`
4. unwrap outer `contramap`: `Into (q2y . x2q . p2x)`
5. re-group `.`: `Into (q2y . (x2q . p2x))`
6. introduce back `contramap`: `contramap (x2q . p2x) (Into q2y)`

thus we are done! We've shown that the `Contravariant` laws hold for `Into`.

## The laws for Divisible

The laws follow from some category theory. We need that for the function:

```
delta :: a -> (a, a)
delta a = (a, a)
```

the following relations between `divide` and `conquer` hold. First, let us think about `divide delta`: it means that we perform the same action on the left element and the right element of the tuple, since our tuple is built from the same element `a`.", null, "

```
dd :: Divisible f => f a -> f a -> f a
dd = divide delta
```

1. `conquer` is an identity element for `divide delta`:

```hs
dd m conquer = dd conquer m = m
```

2. `divide delta` is associative:

```hs
dd m (dd n o) = dd (dd m n) o
```

So this is saying that `divide delta` is monoidal, with `conquer` as the identity element. Let's verify what happens in our case of `Into y`.

0. Let `a2y, a2y' :: Into y a`.
1. Expand the definition: `dd a2y a2y' = divide delta a2y a2y'`
2. Expand `divide`:

```hs
divide delta a2y a2y'
  = Into (\\c -> let (a, b) = delta c
                in (a2y a) <> (a2y' b))
```

3. Substitute `delta c = (c, c)`:

```hs
divide delta a2y a2y'
  = Into (\\c -> let (a, b) = (c, c)
                in (a2y a) <> (a2y' b))
```

4. Replace `a, b` with `c`:

```hs
divide delta a2y a2y' = Into (\\c -> (a2y c) <> (a2y' c))
```

Great, so we have a simple enough definition of what `dd` does; it runs both `a2y` and `a2y'` on the same input and smashes the results together. At this point, it should hopefully be _somewhat_ clear why the laws hold for `Into`:

1. We build `conquer` using `mempty`. Since `mempty` is the identity for `(<>)`, `conquer` should be the identity for `divide delta`.
2. We are smashing outputs together using `(<>)` in `divide delta`. As `(<>)` is associative, we should get associativity for `divide delta` as well.

![divide-delta-conquer.png](https://functionalworks-backend--prod.s3.amazonaws.com/logos/51e8bfabb350afa67eec66d52ea2b4ac)

Pictorially, we are combining two machines: one that runs `x2y`, and one that is `conquer`, which is \"useless\". Since we start by copying `x` using `delta x = (x, x)`, whatever `conquer` does is useless, and the only effect that's left over is whatever `x2y` does. So we can simplify the above figure by eliminating the bottom part of the computation, leaving us with this:

![divide-delta-conquer-simplify.png](https://functionalworks-backend--prod.s3.amazonaws.com/logos/bf5f27875bd4848699b6b28c07d20250)

Siddharth Bhat

mathematics ⋂ computation" ]
[ null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "https://workshub.imgix.net/4f386cf9764c2147bda1c6c308ab968c", null, "https://workshub.imgix.net/22c5195f3cdac1a9209d3c23f6f05d74", null, "https://workshub.imgix.net/28874016c5d1f8cd1a491e8aa5c2b249", null, "https://www.works-hub.com/_next/static/media/email_icon_footer.6d6964f6.png", null, "https://www.works-hub.com/_next/static/media/uk_icon_footer.6643bde0.png", null, "https://www.works-hub.com/_next/static/media/us_icon_footer.005a52cc.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7367657,"math_prob":0.9563848,"size":10458,"snap":"2023-14-2023-23","text_gpt3_token_len":3255,"char_repetition_ratio":0.13516358,"word_repetition_ratio":0.0935789,"special_character_ratio":0.3123924,"punctuation_ratio":0.14726508,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9977326,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,4,null,4,null,4,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-30T07:57:29Z\",\"WARC-Record-ID\":\"<urn:uuid:a915ca0c-c06b-475d-911e-1456d1b7ffc4>\",\"Content-Length\":\"298557\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e0d1b856-3a15-4094-b4f5-ff3bc8cf55b7>\",\"WARC-Concurrent-To\":\"<urn:uuid:b8707a89-b097-4832-bf66-edc499264a3b>\",\"WARC-IP-Address\":\"104.21.86.218\",\"WARC-Target-URI\":\"https://www.works-hub.com/learn/through-the-looking-class-contravariant-functors-and-applicatives-5179f\",\"WARC-Payload-Digest\":\"sha1:4OQNI4TDTKNE43IUYU4LFSI2FZBQQNXH\",\"WARC-Block-Digest\":\"sha1:TDL5IZB3KABB6BBKTHXMTUHDE5PNFZMV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224645417.33_warc_CC-MAIN-20230530063958-20230530093958-00225.warc.gz\"}"}
https://cs.stackexchange.com/questions/105796/solving-a-in-equality-constraint-problem-with-graph-search/105892
[ "# Solving a in/equality constraint problem with graph search\n\nYou are given a list of m constraints over n distinct variables x1, ..., xn. Each constraint is of one of the following two types.\n\n1. An equality constraint of the form xi = xj for some i!=j.\n2. An inequality constraint of the form xi!= xj for some i!=j.\n\nI want to find an assignment, if it exists, for each variable such that it conforms to all the constraints using a graph search algorithm in O(m+n) time.\n\nThis reminds me of the graph colouring problem, however that only involves checking graph neighbours where as in any efficient graph I could think of, the nodes sharing a constraint may not be neighbours.\n\nMy first thought was to create a graph such that all nodes that equal are connected then use DFS to traverse each node and check if it has an inequality with a parent, however that doesn't seem very efficient as for every node (m) I have to traverse every inequality constraint (at most n) which brings me to nm time, where as DFS inherently has O(m+n)on an ideal representation.\n\nAny clues?\n\n• Your problem is far from well-defined. Can you give a complete and non-trivial example? Can you tell us what motivated you to create the problem? We might be able to help you identify what could be an interesting problem. – Apass.Jack Mar 19 at 21:13\n• @Apass.Jack hope I clarified it in my last edit. – Ge0rges Mar 20 at 16:16\n\nYou're on the right track. Construct an undirected graph with an edge for each equality constraint, as you suggested. Next, look into the concept of connected components, and algorithms for finding them. You should be able to take it from here.\n\nThis problem can be solved in linear time.\n\n• How would you suggest dealing with conflicting constraints? – Ge0rges Mar 21 at 16:26\n• @Ge0rges, I suggest spending some time on the problem. A useful approach is to pick a small example, then try to work through it by hand (construct the graph of equality constraints, find the connected components, then figure out what to do with the inequality constraints); and do it for a few examples. Maybe pick some examples with conflicting constraints and some without. Looking at specific small examples should help you figure out what makes sense to do. If after that, you're still stuck, show us your progress and thoughts so far and ask about what you're stuck on. – D.W. Mar 21 at 16:38\n\nLet there be K equalities.\n\n1. Create a forest of n nodes stored in an adjacency list O(n)\n2. For each equality between Xi and Xj, add an edge between them. O(2k)\n3. Call DFS on any node. O(|m| +|n|)\n1. DFS has a counter that starts at 0\n2. When DFS visits a node it sets that node’s value to the current value of its counter\n3. When DFS crosses a cross-edge, it increments it’s counter by 1\n4. For each inequality between Xi and Xj, check their values assigned by the counter during the DFS, if they are equal return NIL O(2(m - k))\n5. Create the result array by going through the adjacency list and setting the value at index j of the node X_j O(n) => Worst case time complexity O(3|m| + 3|n|) => O(|m| + |n|)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9622519,"math_prob":0.97232765,"size":1002,"snap":"2019-51-2020-05","text_gpt3_token_len":219,"char_repetition_ratio":0.13527054,"word_repetition_ratio":0.011173184,"special_character_ratio":0.22155689,"punctuation_ratio":0.10576923,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9886107,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-08T04:10:12Z\",\"WARC-Record-ID\":\"<urn:uuid:48bc1e74-6f1a-453c-a4e1-8b66d4d2c846>\",\"Content-Length\":\"138601\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:49d6fb8b-cc9e-4d3e-a40d-67d859f61fbd>\",\"WARC-Concurrent-To\":\"<urn:uuid:81720ca1-b17d-4452-8f32-569e0e2d5254>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://cs.stackexchange.com/questions/105796/solving-a-in-equality-constraint-problem-with-graph-search/105892\",\"WARC-Payload-Digest\":\"sha1:UQYOX7I4BIEQISZD7EVIZA6TBQ64WZPM\",\"WARC-Block-Digest\":\"sha1:BXKYPE2IVPY377JWIZIL2U5SISELKY7W\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540504338.31_warc_CC-MAIN-20191208021121-20191208045121-00402.warc.gz\"}"}
https://emacs.stackexchange.com/questions/51027/missing-color-support-for-exa-in-eshell/51029
[ "# Missing color support (for exa) in eshell\n\nI just started using eshell. But the color support for some commands seems to be missing. For example I like to use the command `exa`, because of its nice colors. But in eshell it is all black.\n\nFor `exa` in eshell you can just use the following alias. It is preserved over a restart of Emacs.\n\n``````alias exa *exa --color=always\n``````\n\nThe general problem to teach other programs about the color capabilities of the comint of Emacs is described on Rededit.\n\nThis approach helps for an example colorizing `git` output under Eshell.\n\nLax Citation:\n\nCreate a file `~/.terminfo/dumb-emacs-ansi.ti` with the following content:\n\n``````dumb-emacs-ansi|Emacs dumb terminal with ANSI color codes,\nam,\ncolors#8, it#8, ncv#13, pairs#64,\nbold=\\E[1m, cud1=^J, ht=^I, ind=^J, op=\\E[39;49m,\nritm=\\E[23m, rmul=\\E[24m, setab=\\E[4%p1%dm,\nsetaf=\\E[3%p1%dm, sgr0=\\E[m, sitm=\\E[3m, smul=\\E[4m,\n``````\n\nRuns `tic` on that file. Set the environment variable `TERM` in Eshell to `dumb-emacs-ansi`.\n\nThe following Elisp code does that for you:\n\n``````(setq comint-terminfo-terminal \"dumb-emacs-ansi\")\n\n(let* ((terminfo-file (format \"~/.terminfo/%s.ti\" comint-terminfo-terminal))\n(default-directory (file-name-directory terminfo-file)))\n(unless (file-exists-p terminfo-file)\n(make-directory default-directory t)\n(with-temp-buffer\n(insert \"dumb-emacs-ansi|Emacs dumb terminal with ANSI color codes,\nam,\ncolors#8, it#8, ncv#13, pairs#64,\nbold=\\\\E[1m, cud1=^J, ht=^I, ind=^J, op=\\\\E[39;49m,\nritm=\\\\E[23m, rmul=\\\\E[24m, setab=\\\\E[4%p1%dm,\nsetaf=\\\\E[3%p1%dm, sgr0=\\\\E[m, sitm=\\\\E[3m, smul=\\\\E[4m,\")\n(write-file terminfo-file)))\n(unless (file-exists-p (concat default-directory \"d/\" comint-terminfo-terminal))\n(start-process \"*tic process*\" \"*Messages*\" \"tic\" (expand-file-name terminfo-file))))" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6624413,"math_prob":0.47477627,"size":1922,"snap":"2020-34-2020-40","text_gpt3_token_len":612,"char_repetition_ratio":0.12356622,"word_repetition_ratio":0.23076923,"special_character_ratio":0.2887617,"punctuation_ratio":0.1439206,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95785666,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-15T05:06:55Z\",\"WARC-Record-ID\":\"<urn:uuid:c6606c02-21ac-47d6-8c72-a478c4588343>\",\"Content-Length\":\"142563\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fa0afedc-153c-4150-a1c3-38d3a1ab4e97>\",\"WARC-Concurrent-To\":\"<urn:uuid:d2eef6bc-df6a-4b90-bc5c-a22a2b7cb9e1>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://emacs.stackexchange.com/questions/51027/missing-color-support-for-exa-in-eshell/51029\",\"WARC-Payload-Digest\":\"sha1:GKWFLNTSRK7WI46U3X3WXHJDVY4KCHJB\",\"WARC-Block-Digest\":\"sha1:FYFSYHGNRJB4KVJ4NPYSVUE64K7PXMAT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439740679.96_warc_CC-MAIN-20200815035250-20200815065250-00543.warc.gz\"}"}
https://ixtrieve.fh-koeln.de/birds/litie/search?q=asb_ss%3A%22Nbm+2%22
[ "# Search (1 results, page 1 of 1)\n\n1. Spitzer, M.: Lernen : Gehirnforschung und die Schule des Lebens (2002) 10.99\n```10.988655 = weight(asb_ss:Nbm 2 in 34) [ClassicSimilarity], result of:\n10.988655 = fieldWeight in 34, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n10.988655 = idf(docFreq=1, maxDocs=43556)\n1.0 = fieldNorm(doc=34)\n```\nAsb\nNbm 2\nClassification\nNbm 2" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5116216,"math_prob":0.98383045,"size":382,"snap":"2022-40-2023-06","text_gpt3_token_len":155,"char_repetition_ratio":0.13492064,"word_repetition_ratio":0.0,"special_character_ratio":0.45549738,"punctuation_ratio":0.24175824,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9562526,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-29T12:04:53Z\",\"WARC-Record-ID\":\"<urn:uuid:3fef4ef9-74aa-4a08-bd1f-788eff5caa96>\",\"Content-Length\":\"6179\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:077ad593-91db-43d2-bd89-d1f2292e03b6>\",\"WARC-Concurrent-To\":\"<urn:uuid:564164d1-b074-4f5f-951e-b6a6c8872dea>\",\"WARC-IP-Address\":\"139.6.160.6\",\"WARC-Target-URI\":\"https://ixtrieve.fh-koeln.de/birds/litie/search?q=asb_ss%3A%22Nbm+2%22\",\"WARC-Payload-Digest\":\"sha1:G6GEJV2FQVP4JKBKEBPGHQ2LLM3Q4G5H\",\"WARC-Block-Digest\":\"sha1:YLH6DZVBSJZ77AVG7V4COOHRAWKO3WIG\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499713.50_warc_CC-MAIN-20230129112153-20230129142153-00266.warc.gz\"}"}
https://www.onlineclickdigital.com/2023/01/07/one-cubic-metre-is-equal-to/
[ "# One cubic metre is equal to\n\nThe cubic metre is the SI unit of volume. It is the size of a cube that is 1 metre on each side. The cubic metre is used in many fields, such as construction, engineering, and shipping. It is also a common unit of measure for storage containers and tanks.\n\n### What is a cubic metre?\n\nA cubic metre (m3) is the SI unit of volume and is equal to the volume of a cube with sides of one metre. It is the equivalent of 1,000 litres or about 35.3 cubic feet. The cubic metre is often used to measure the volume of large objects such as buildings, ships and trucks.\n\n### How is a cubic metre measured?\n\nA cubic metre is measured using the metric system, and is equal to the volume of a cube with sides that are one metre in length. In order to calculate the volume of a given object in cubic metres, one must first measure the length, width, and height of the object in metres, and then multiply these values together.\n\n### What is the relationship between a cubic metre and other units of measure?\n\nA cubic metre is the standard unit of measure for volume. The cubic metre can be divided into smaller units, such as the litre, or it can be multiplied to create larger units, such as the gigalitre. One cubic metre is equal to 1,000 litres, or one millionth of a cubic kilometre.\n\n### How did the cubic metre come to be the standard unit of measure for volume?\n\nThe cubic metre is the SI unit of volume. It was first defined in 1795 as the volume of a cube with sides of one metre. The cubic metre is still the unit of measure used for volume today. The cubic metre is used in many different applications, including measuring the volume of gas and water.\n\n### What are some common uses for the cubic metre?\n\nThe cubic metre is a unit of measure that is often used to measure the volume of large objects or spaces. Some common uses for the cubic metre include measuring the volume of a room, the amount of water in a swimming pool, or the size of a container.\n\n### Conclusion\n\nThe cubic metre is a unit of measure that is used extensively in many different industries. It is a versatile unit that can be used to measure a wide variety of things, from the volume of a liquid to the size of an object. The cubic metre is also a convenient unit of measure because it is easy to convert into other units of measure, such as litres or grams.", null, "" ]
[ null, "https://www.onlineclickdigital.com/wp-content/plugins/a3-lazy-load/assets/images/lazy_placeholder.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9433456,"math_prob":0.99239844,"size":2321,"snap":"2023-40-2023-50","text_gpt3_token_len":511,"char_repetition_ratio":0.23608114,"word_repetition_ratio":0.1719457,"special_character_ratio":0.22016372,"punctuation_ratio":0.09475806,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9953087,"pos_list":[0,1,2],"im_url_duplicate_count":[null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-23T17:34:45Z\",\"WARC-Record-ID\":\"<urn:uuid:fe6d8e12-2067-478c-a472-38fd501a2808>\",\"Content-Length\":\"82376\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8f34ace5-47aa-4e6a-80fc-8312b21d756c>\",\"WARC-Concurrent-To\":\"<urn:uuid:eb129afe-78e8-4d73-bfb5-c7bbc24e13c8>\",\"WARC-IP-Address\":\"104.161.23.30\",\"WARC-Target-URI\":\"https://www.onlineclickdigital.com/2023/01/07/one-cubic-metre-is-equal-to/\",\"WARC-Payload-Digest\":\"sha1:L5J55DJGI5YYALAMIRVDU2KRBNIOUTPV\",\"WARC-Block-Digest\":\"sha1:BKM4L3FQQ45FJU6455DS43P6B4KHKIYA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506528.19_warc_CC-MAIN-20230923162848-20230923192848-00248.warc.gz\"}"}
https://www.lizenghai.com/archives/32941.html
[ "# 【深度学习DL-PyTorch】四、深度学习工具PyTorch\n\n#### 一、张量(Tensor)\n\ntorch.randn()创建由正态变量组成的张量,即来自正态分布的随机正态变量。\ntorch.randn_like()传入一个张量并查看该张量的形状,然后创建形状相同的另一个张量。\ntorch.sum()\n\ntorch.mm()矩阵乘法更简单,并且对传入的张量要求更严格。必须符合矩阵乘法的条件:第一个矩阵的列数,必须等于第二个矩阵的行数。能够按照预期的方式进行运算。\ntorch.matmul() 支持广播,如果传入大小/形状很奇怪的张量,那么可能获得意料之外的输出结果。\n\ntensor.shape() 在构建神经网络时,最常见的错误是形状错误,所以在设计神经网络架构时很重要的步骤就是,使张量的形状保持匹配\ntensor.reshape()创建一个张量,形状是要求的形状,但是内存中的实际数据没有改变。有时候,它会返回克隆版本,也就是说,它把数据复制到内存中的另一个部分,然后返回该内存部分存储的张量。也就是说,复制数据比直接更改张量形状(不克隆数据)效率要低。\ntensor.resize_() 下划线表示resize_这个方法是原地操作(in-place operation),原地操作是指根本不改变数据,只是改变位于该内存地址中的数据对应的张量。resize_方法的问题在于如果要求的形状比原始张量的元素多或少时,可能会丢失数据或者使用未初始化的内存创建虚假的数据。\ntensor.view()会返回一个新张量包含的数据和旧张量在内存中的一样。不论什么时候,它都只是返回一个新的张量,不会更改内存中的任何数据。如果想获得新的大小使张量具有新的形状和不同数量的元素,就会报错。使用view方法可以确保在更改张量形状时始终获得相同数量的元素。\n\n• weights.reshape(a, b) 有时候将返回一个新的张量,数据和 weights 的一样,大小为 (a, b);有时候返回克隆版,将数据复制到内存的另一个部分。\n• weights.resize_(a, b) 返回形状不同的相同张量。但是,如果新形状的元素数量比原始张量的少,则会从张量里删除某些元素(但是不会从内存中删除)。如果新形状的元素比原始张量的多,则新元素在内存里未初始化。注意,方法末尾的下划线表示这个方法是原地运算。要详细了解如何在 PyTorch 中进行原地运算,请参阅此论坛话题\n• weights.view(a, b) 将返回一个张量,数据和 weights 的一样,大小为 (a, b)\n\ntorch.from_numpy()\ntensor.numpy()\n\n#### 二、在PyTorch中构建神经网络", null, "ReLU是线性修正单元的简称,它是最简单的非线性函数,与S型函数和双曲正切函数相比,使用ReLU时网络的训练速度快多了。\n\n###### 2.1构建神经网络\n\nPyTorch 提供了nn模块,大大地简化了网络构建过程。以下演示如何构建上述同一个网络,即包含 784 个输入、256 个隐藏单元、10 个输出单元和一个 softmax 输出。\n\nfrom torch import nn\nclass Network(nn.Module):\ndef __init__(self):\nsuper().__init__()\n\n# Inputs to hidden layer linear transformation\nself.hidden = nn.Linear(784, 256)\n# Output layer, 10 units - one for each digit\nself.output = nn.Linear(256, 10)\n\n# Define sigmoid activation and softmax output\nself.sigmoid = nn.Sigmoid()\nself.softmax = nn.Softmax(dim=1)\n\ndef forward(self, x):\n# Pass the input tensor through each of our operations\nx = self.hidden(x)\nx = self.sigmoid(x)\nx = self.output(x)\nx = self.softmax(x)\n\nreturn x\n\nclass Network(nn.Module):\n\n\nself.hidden = nn.Linear(784, 256)\n\n\nself.output = nn.Linear(256, 10)\n\n\nself.sigmoid = nn.Sigmoid()\nself.softmax = nn.Softmax(dim=1)\n\n\ndef forward(self, x):\n\n\nnn.Module 创建的 PyTorch 网络必须定义 forward 方法。它会接受一个张量 x 并将其传入你在 __init__ 方法中定义的运算。\n\nx = self.hidden(x)\nx = self.sigmoid(x)\nx = self.output(x)\nx = self.softmax(x)\n\n\n# Create the network and look at it's text representation\nmodel = Network()\nmodel\n\n\nimport torch.nn.functional as F\nclass Network(nn.Module):\ndef __init__(self):\nsuper().__init__()\n# Inputs to hidden layer linear transformation\nself.hidden = nn.Linear(784, 256)\n# Output layer, 10 units - one for each digit\nself.output = nn.Linear(256, 10)\n\ndef forward(self, x):\n# Hidden layer with sigmoid activation\nx = F.sigmoid(self.hidden(x))\n# Output layer with softmax activation\nx = F.softmax(self.output(x), dim=1)\n\nreturn x\n\n###### 2.2 激活函数", null, "activation.png\n\n###### 2.3 使用 nn.Sequential\n\nPyTorch 提供了一种方便的方法来构建这类网络(其中张量按顺序执行各种运算):nn.Sequential (文档)。使用它来构建等效网络:\n\n# Hyperparameters for our network\ninput_size = 784\nhidden_sizes = [128, 64]\noutput_size = 10\n# Build a feed-forward network\nmodel = nn.Sequential(nn.Linear(input_size, hidden_sizes),\nnn.ReLU(),\nnn.Linear(hidden_sizes, hidden_sizes),\nnn.ReLU(),\nnn.Linear(hidden_sizes, output_size),\nnn.Softmax(dim=1))\nprint(model)\n# Forward pass through the network and display output\nimages.resize_(images.shape, 1, 784)\nps = model.forward(images[0,:])\nhelper.view_classify(images.view(1, 28, 28), ps)\n\n\nfrom collections import OrderedDict\nmodel = 
nn.Sequential(OrderedDict([\n('fc1', nn.Linear(input_size, hidden_sizes)),\n('relu1', nn.ReLU()),\n('fc2', nn.Linear(hidden_sizes, hidden_sizes)),\n('relu2', nn.ReLU()),\n('output', nn.Linear(hidden_sizes, output_size)),\n('softmax', nn.Softmax(dim=1))]))\nmodel\n\n\n#### 三、训练神经网络\n\nx = torch.zeros(1, requires_grad=True)\n... y = x * 2\nFalse\n\n\n# Build a feed-forward network\nmodel = nn.Sequential(nn.Linear(784, 128),\nnn.ReLU(),\nnn.Linear(128, 64),\nnn.ReLU(),\nnn.Linear(64, 10),\nnn.LogSoftmax(dim=1))\ncriterion = nn.NLLLoss()\nimages = images.view(images.shape, -1)\nlogps = model(images)\nloss = criterion(logps, labels)\n\n\nPyTorch训练网络 的一般流程是:\n\n• 通过网络进行正向传递以获取logits\n• 使用 logits 计算损失\n• 通过 loss.backward() 对网络进行反向传递以计算梯度\n• 使用优化器更新权重\nmodel = nn.Sequential(nn.Linear(784, 128),\nnn.ReLU(),\nnn.Linear(128, 64),\nnn.ReLU(),\nnn.Linear(64, 10),\nnn.LogSoftmax(dim=1))\ncriterion = nn.NLLLoss()\noptimizer = optim.SGD(model.parameters(), lr=0.003)\nepochs = 5\nfor e in range(epochs):\nrunning_loss = 0\n# Flatten MNIST images into a 784 long vector\nimages = images.view(images.shape, -1)\n\n# TODO: Training pass\n\noutput = model.forward(images)\nloss = criterion(output, labels)\nloss.backward()\n\noptimizer.step()\n\nrunning_loss += loss.item()\nelse:\n\n\n#### 四、训练神经网络的基本步骤\n\n• 1.清空所有已优化变量的梯度\n• 2.前向传播:通过向模型传入输入,计算预测输出。\n• 3.计算损失\n• 4.反向传播:计算损失相对于模型参数的梯度\n• 5.执行一个优化步骤(参数更新)\n• 6.更新平均训练损失\n\nThe steps for training/learning from a batch of data are described in the comments below:\n\n• 1.Clear the gradients of all optimized variables\n• 2.Forward pass: compute predicted outputs by passing inputs to the model\n• 3.Calculate the loss\n• 4.Backward pass: compute gradient of the loss with respect to model parameters\n• 5.Perform a single optimization step (parameter update)\n• 6.Update average training loss\n# number of epochs to train the model\nn_epochs = 30 # suggest training between 20-50 epochs\nmodel.train() # prep model for training\nfor epoch in range(n_epochs):\n# monitor training loss\ntrain_loss = 0.0\n\n###################\n# train the model #\n###################\n# clear the gradients of all optimized variables\n# forward pass: compute predicted outputs by passing inputs to the model\noutput = model(data)\n# calculate the loss\nloss = criterion(output, target)\n# backward pass: compute gradient of the loss with respect to model parameters\nloss.backward()\n# perform a single optimization step (parameter update)\noptimizer.step()\n# update running training loss\ntrain_loss += loss.item()*data.size(0)\n\n# print training statistics\n# calculate average loss over an epoch\nprint('Epoch: {} \\tTraining Loss: {:.6f}'.format(\nepoch+1,\ntrain_loss\n))\n\n\nhttps://www.jianshu.com/p/7930d53e8000\n\nPython量化投资网携手4326手游为资深游戏玩家推荐:《【崩坏3】记忆战场丨阿湿波翻身做主人,地藏反倒被暴打?丨中配战场攻略\n\n「点点赞赏,手留余香」\n\n还没有人赞赏,快来当第一个赞赏的人吧!\nPyTorch\nPython\n0 条回复 A 作者 M 管理员\n所有的伟大,都源于一个勇敢的开始!" ]
[ null, "https://upload-images.jianshu.io/upload_images/2811866-2a774804905f804c.png", null, "https://upload-images.jianshu.io/upload_images/2811866-78cecdb57375a60b.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.5675099,"math_prob":0.98896384,"size":8096,"snap":"2019-43-2019-47","text_gpt3_token_len":3926,"char_repetition_ratio":0.11208601,"word_repetition_ratio":0.16815287,"special_character_ratio":0.25494072,"punctuation_ratio":0.19100295,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99607354,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-14T20:12:59Z\",\"WARC-Record-ID\":\"<urn:uuid:7f884d54-d80b-4e0b-866e-ba837bc4ed2a>\",\"Content-Length\":\"59127\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:25d401ba-b3fa-4362-b687-c0d40ad95458>\",\"WARC-Concurrent-To\":\"<urn:uuid:4eb08281-7437-452f-b09d-c11abc807bdf>\",\"WARC-IP-Address\":\"47.95.226.97\",\"WARC-Target-URI\":\"https://www.lizenghai.com/archives/32941.html\",\"WARC-Payload-Digest\":\"sha1:7L3D43MFGCQNFZQ2ORZUTIWZFCFO7UKO\",\"WARC-Block-Digest\":\"sha1:IDPSXUUPOWKWBHZG6KOJTO67R5QMI6HB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668534.60_warc_CC-MAIN-20191114182304-20191114210304-00444.warc.gz\"}"}
https://statease.com/docs/latest/screen-tips/intro-and-build/mixture/screening-simplex-mix-options/
[ "", null, "# Simplex Screening Designs\n\nVertices: required points to estimate the linear mixture terms\n\nAxial check blends: points half-way between the centroid and the vertices to improve linear term estimates\n\nConstraint plane centroids: points directly opposite the vertices to improve linear term estimates\n\nAdditional centroid points: points in the exact center used to estimate curvature\n\nBlocks: for each block a centroid run will be added to estimate block effects\n\nPoints needed for blocks: number of centroid points used to estimate block effects\n\nTotal: sum of vertices, axial check blends, constraint plane centroids, and additional center points." ]
[ null, "https://dc.ads.linkedin.com/collect/", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81608516,"math_prob":0.9013436,"size":613,"snap":"2023-14-2023-23","text_gpt3_token_len":110,"char_repetition_ratio":0.18226601,"word_repetition_ratio":0.06896552,"special_character_ratio":0.16639479,"punctuation_ratio":0.106796116,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98218155,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-04T11:42:29Z\",\"WARC-Record-ID\":\"<urn:uuid:169fc086-4da1-4b38-8626-becc81791552>\",\"Content-Length\":\"21537\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ddfea816-94a0-4fda-8bbd-f622be2f327f>\",\"WARC-Concurrent-To\":\"<urn:uuid:12f1ced1-bbc7-4009-b8d1-21090ff11af2>\",\"WARC-IP-Address\":\"104.16.244.78\",\"WARC-Target-URI\":\"https://statease.com/docs/latest/screen-tips/intro-and-build/mixture/screening-simplex-mix-options/\",\"WARC-Payload-Digest\":\"sha1:SJWDJ5CQFZMDWRSISRF26XOBA64PKUJE\",\"WARC-Block-Digest\":\"sha1:M5VPM75DP74VRT6SWZRDLDSJ74FZ6QPU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224649741.26_warc_CC-MAIN-20230604093242-20230604123242-00619.warc.gz\"}"}
https://mybilingualresources.com/products/dimensions-math-textbook-and-workbook-set-prek-a
[ "", null, "# Dimensions Math Textbook PreK-A\n\n\\$ 12.00\n\nChapter 1: Match, Sort, and Classify\n\nLesson 1: Red and Blue\nLesson 2: Yellow and Green\nLesson 3: Color Review\nLesson 4: Soft and Hard\nLesson 5: Rough, Bumpy, and Smooth\nLesson 6: Sticky and Grainy\nLesson 7: Size — Part 1\nLesson 8: Size — Part 2\nLesson 9: Sort Into Two Groups\nLesson 10: Practice\n\nChapter 2: Compare Objects\n\nLesson 1: Big and Small\nLesson 2: Long and Short\nLesson 3: Tall and Short\nLesson 4: Heavy and Light\nLesson 5: Practice\n\nChapter 3: Patterns\n\nLesson 1: Movement Patterns\nLesson 2: Sound Patterns\nLesson 3: Create Patterns\nLesson 4: Practice\n\nChapter 4: Numbers to 5 — Part 1\n\nLesson 1: Count 1 to 5 — Part 1\nLesson 2: Count 1 to 5 — Part 2\nLesson 3: Count Back\nLesson 4: Count On and Back\nLesson 5: Count 1 Object\nLesson 6: Count 2 Objects\nLesson 7: Count Up to 3 Objects\nLesson 8: Count Up to 4 Objects\nLesson 9: Count Up to 5 Objects\nLesson 10: How Many? — Part 1\nLesson 11: How Many? — Part 2\nLesson 12: How Many Now? — Part 1\nLesson 13: How Many Now? — Part 2\nLesson 14: Practice\n\nChapter 5: Numbers to 5 — Part 2\n\nLesson 1: 1, 2, 3\nLesson 2: 1, 2, 3, 4, 5 — Part 1\nLesson 3: 1, 2, 3, 4, 5 — Part 2\nLesson 4: How Many? — Part 1\nLesson 5: How Many? — Part 2\nLesson 6: How Many Do You See?\nLesson 7: How Many Do You See Now?\nLesson 8: Practice\n\nChapter 6: Numbers to 10 — Part 1\n\nLesson 1: 0\nLesson 2: Count to 10 — Part 1\nLesson 3: Count to 10 — Part 2\nLesson 4: Count Back\nLesson 5: Order Numbers\nLesson 6: Count Up to 6 Objects\nLesson 7: Count Up to 7 Objects\nLesson 8: Count Up to 8 Objects\nLesson 9: Count Up to 9 Objects\nLesson 10: Count Up to 10 Objects — Part 1\nLesson 11: Count Up to 10 Objects — Part 2\nLesson 12: How Many?\nLesson 13: Practice\n\nChapter 7: Numbers to 10 — Part 2\n\nLesson 1: 6\nLesson 2: 7\nLesson 3: 8\nLesson 4: 9\nLesson 5: 10\nLesson 6: 0 to 10\nLesson 7: Count and Match — Part 1\nLesson 8: Count and Match — Part 2\nLesson 9: Practice\n\nDimensions Math® PreK-5 series features the progression, rigor, and pacing that define Singapore math. Throughout the series, five characters offer students suggestions on how to think about problems. They remind students of strategies they’ve learned and point out important information that encourages them to come up with their own solutions.\n\nTextbook lessons begin with a task that allows students to apply their previous knowledge and learn through discussion. Once students have mastered a concept with the use of concrete and pictorial aids, they are ready to take on more abstract mathematical problem sets. They reach fluency by collecting various strategies along the way and applying them to new problems. Word problems give students a sense of math in real-world contexts.\n\nWorkbooks offer independent practice that follows a careful progression of exercise variation. Each textbook lesson includes a corresponding workbook exercise that starts with pictorial representation and progresses to more challenging abstract problems. Workbooks for PreK-2 are perforated.\n\nTeacher’s Guides include lesson plans, mathematical background, games, helpful suggestions, and comprehensive resources for daily lessons. Lessons are laid out clearly and activities are designed for the whole class, small groups, and extension.\n\nTextbooks and Workbooks do not include answer keys. Answers are in Teacher's Guides.\nTextbook:\nISBN 9781947226005\npp 144" ]
[ null, "https://mybilingualresources.com/cdn/shop/products/DMTPKA_1024x1024.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8274972,"math_prob":0.41836977,"size":2689,"snap":"2023-40-2023-50","text_gpt3_token_len":856,"char_repetition_ratio":0.2592179,"word_repetition_ratio":0.15830116,"special_character_ratio":0.30345854,"punctuation_ratio":0.1791531,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97746456,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-22T19:00:25Z\",\"WARC-Record-ID\":\"<urn:uuid:8cdc331e-d40b-4703-b3f6-7c82eea7cfba>\",\"Content-Length\":\"82114\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1214992d-148d-48d1-bfe3-b767ae58c927>\",\"WARC-Concurrent-To\":\"<urn:uuid:7309ca4a-48bd-424b-9b96-f544bce2a72a>\",\"WARC-IP-Address\":\"23.227.38.72\",\"WARC-Target-URI\":\"https://mybilingualresources.com/products/dimensions-math-textbook-and-workbook-set-prek-a\",\"WARC-Payload-Digest\":\"sha1:O5W4IF26WNXLOWTIMAU6EAHE7Z2T7S2L\",\"WARC-Block-Digest\":\"sha1:PYHXRPLK56TEKIPME7E3XUOB3H43EKSV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506421.14_warc_CC-MAIN-20230922170343-20230922200343-00647.warc.gz\"}"}
https://en.bitcoin.it/w/index.php?title=Elliptic_curve_cryptography&curid=8126&diff=68887&oldid=68826
[ "# Difference between revisions of \"Elliptic curve cryptography\"\n\nElliptic Curve Cryptography (sometimes called ECC for short) is the study of elliptic curve equations and the arithmetic operations that apply to them. Normally, an elliptic curve involves two variables x and y which correspond to the X- and Y- coordinates of a point respectively. Curves have special operations for adding, subtracting, and multiplying two points, and the way these operations work is very different from their scalar counterparts.\n\nAll curve points have a generator point `G` which can produce all the other points on the curve by means of multiplication by a scalar number. G also happens to be the multiplicative identity, while the additive identity is a special point called the point at infinity and it's represented as 0 or uppercase `O`. It can be thought of as a limit to both ends of the curve.\n\nEach curve has a characteristic number which stands for the number of times the generator point can be added to itself before you end up back at `G`. The x and y coordinates must not be larger or equal to this scalar number. Each curve also has a curve order. All private keys (which themselves are represented as numbers) must be smaller than the group order.\n\n## Operations\n\nPoint addition for two unequal points `(x3,y3) = (x1,y1) + (x2,y2)` can be performed in simple scalar arithmetic with the following pseudocode. Note that all arithmetic operation involving x or y coordinates are modulus of the characteristic (`mod p`) applied after it. All occurrences of the modulus are omitted from the pseudocode for brevity. Also displayed here is point doubling which is the case where both points are the same and the traditional point addition algorithm would otherwise not work on them.\n\n``` if (x1,y1) == O\nresult = (x2,y2)\nelse if (x2,y2) == O\nresult = (x1,y1)\nelse if (x2,y2) == (x1,y1)**(-1)\nresult = O\n\nif (x2,y2) == (x1,y1):\n# Point doubling: 2P\nlambda = (3 * x1**2) * (2*y1)**-1\nelse:\nlambda = (y2 - y1)*(x2 - x1)**-1\nx3 = lambda**2 - x1 - x2\ny3 = lambda*(x1 - x3) - y1\n```\n\nIt follows that `(x1,y1)` and `(x2,y2)` are the input points, The `lambda` variable is an expression made out of the x1,y1 and x2,y2 coordinates that refers to the slope of the name made by drawing a line between point A and point B. The result of the point addition is will always be the intersection between the curve, and the line formed by x1,y1 and x2,y2. It follows that since there are three intersection points between a line formed by point addition and the curve, two of them are the operands, and the third one is the negative of the sum of the two points. The exception is when you add the point-at-infinity (0, sometimes written as uppercase O) to any point or you add the negative of a point to itself, in which case there will only be two intersection points but the result of those operations is well-defined as the original point and 0 respectively.\n\nThe reason why the third point is negative is because the three points on the line are collinear. This has the side effect that the third point cancels out the value of the other two points to satisfy the relation. As a side effect, all of the points sum to 0.\n\nSo graphically, once the point that lies on the same line as the two points being adding is found, by drawing a line that crosses these two points, it is mirrored over the X axis to get the final resulting point. 
The reason is because the point on the line is actually the negative of the result, and it's necessary that this point be negative to satisfy the relation `P + Q - R = 0`, so the `R` on the line is negative and it cancels out the `P` and `Q`.\n\nIt's also the reason why adding a point to its negative, like `-R` to `R`, will give you zero, because the line formed becomes vertical, indicating there is no such point on the curve to satisfy the relation (the point \"0\" means \"no such point on the curve\". It is also sometimes called \"point at infinity\"). This also explains why adding the point \"0\" to a point gives you the same point. Any line formed from \"0\" to another point will only include two points, and thus the result is taken as the non-null point. In this way \"0\" acts as an identity operator.\n\n### Point Doubling\n\nPoint doubling is when a point P is added to itself. The normal point addition method cannot be used, because it only works when the two points are different. However, the two operators refer to the same point so a different method is needed.\n\nIt is done by taking the tangent line of the point, and finding the other point on the curve this line intersects, and then \"flipping\" that point across the x-axis (this is possible because elliptic curves are symmetrical across the x-axis). This new point is `P + P`, or `2P`.\n\nThis procedure can be used to repeatedly produce the doubles of these points such as `4G`, `8G`, `16G` and so on. In each case you take the result of the previous doubling and inset it as operands of the next doubling operation.\n\n### Point Negation\n\nWhen you \"flip\" a point, you are changing its sign. Flipping the point a second time changes its sign back just like in conventional arithmetic. This can be used to implement point subtraction, by negating the second point before adding them.\n\nThe commutative property holds because it doesn't matter which way you add `Q + P` or `P + Q`; the line is the same. And the associative property also holds because it doesn't matter which points you add first before others, you will ultimately arrive at the same point.\n\n### Point Multiplication\n\nIt's possible to combine the two processes of point addition and point doubling to multiply arbitrary numbers, otherwise known as point multiplication. You can implement point multiplication as a series of point doubles and point additions or by using the Montgomery ladder method for computing point multiplications.\n\n### Modular Multiplicative Inverse\n\nThe modular multiplicative inverse of a point P-1 is the problem of finding a factor `x` such that `xP ≅ 1 (mod p)`, where `p` is the curve group order. This is a congruence relation, for which there are some algorithms to this problem listed at. Modular Multiplicate Inverse can be combined with point multiplication to implement point division, by inverting the second operand." ]
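A short, self-contained sketch of the pseudocode above over a prime field, using secp256k1's published parameters (where the curve coefficient a is 0, matching the doubling formula above). This is an illustrative toy of my own, not code from the wiki; real implementations use constant-time arithmetic:

```python
# Toy affine point arithmetic on y^2 = x^3 + 7 over GF(P) (secp256k1).
P = 2**256 - 2**32 - 977          # field characteristic
O = None                          # point at infinity

def inv(x, m):
    return pow(x, m - 2, m)       # modular inverse via Fermat's little theorem (m prime)

def add(p, q):
    if p is O: return q
    if q is O: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return O                  # q is the negation of p: vertical line
    if p == q:
        lam = (3 * x1 * x1) * inv(2 * y1, P) % P   # tangent slope (doubling)
    else:
        lam = (y2 - y1) * inv(x2 - x1, P) % P      # chord slope
    x3 = (lam * lam - x1 - x2) % P
    y3 = (lam * (x1 - x3) - y1) % P                # mirror over the x-axis
    return (x3, y3)

def mul(k, p):                    # double-and-add scalar multiplication
    r = O
    while k:
        if k & 1:
            r = add(r, p)
        p = add(p, p)
        k >>= 1
    return r

# Generator point of secp256k1:
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)
assert add(G, G) == mul(2, G)
```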
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9450607,"math_prob":0.99505734,"size":8420,"snap":"2021-43-2021-49","text_gpt3_token_len":2018,"char_repetition_ratio":0.16159695,"word_repetition_ratio":0.31778815,"special_character_ratio":0.2442993,"punctuation_ratio":0.08514493,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9987254,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-21T15:19:41Z\",\"WARC-Record-ID\":\"<urn:uuid:84a4a35e-96ca-44a2-8a92-0c92a6ff74b8>\",\"Content-Length\":\"29480\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:470d542a-720d-4c46-b707-56fc948a2ba3>\",\"WARC-Concurrent-To\":\"<urn:uuid:c32efa46-ca91-4f2e-a681-cc8af09f9ce2>\",\"WARC-IP-Address\":\"104.26.0.206\",\"WARC-Target-URI\":\"https://en.bitcoin.it/w/index.php?title=Elliptic_curve_cryptography&curid=8126&diff=68887&oldid=68826\",\"WARC-Payload-Digest\":\"sha1:NPQT47OQD54454XQI6UVX7WOEB6RADRO\",\"WARC-Block-Digest\":\"sha1:Z4VVCDNXWC4N7PJSDME2HHG5VIBCZENB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585424.97_warc_CC-MAIN-20211021133500-20211021163500-00177.warc.gz\"}"}
https://www.mathlearnit.com/what-is-1-76-as-a-percentage
[ "# What is 1/76 as a percentage?\n\n## Solution and how to convert 1 / 76 into a percentage\n\n1 / 76 = 1.32%\n\nWe encourage you to check out our introduction to percentage page for a little recap of what percentage is. You can also learn about fractions in our fractions section of the website. Some times, you may want to express a fraction in the form of a percentage, or vice-versa. This page will cover the former case. Luckily for us, this problem only requires a bit of multiplication and division. We recommend that you use a calculator, but solving these problems by hand or in your head is possibly too! Here's how we discovered that 1 / 76 = 1.32% :\n\n• Step 1: Divide 1 by 76 to get the number as a decimal. 1 / 76 = 0.01\n• Step 2: Multiply 0.01 by 100. 0.01 times 100 = 1.32. That's all there is to it!\nNote that you can reverse steps 1 and 2 and still come to the same solution. If you multiply 1 by 100 and then divide the result by 76, you will still come to 1.32!\n\n### When is this useful?\n\nFractions are commonly used in everyday life. If you are splitting a bill or trying to score a test, you will often describe the problem using fractions. Sometimes, you may want to express the fraction as a percentage." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9509829,"math_prob":0.93821645,"size":1452,"snap":"2021-43-2021-49","text_gpt3_token_len":352,"char_repetition_ratio":0.21754144,"word_repetition_ratio":0.0503876,"special_character_ratio":0.26859504,"punctuation_ratio":0.10526316,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98812675,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-08T06:07:34Z\",\"WARC-Record-ID\":\"<urn:uuid:8c4bb6e8-ef33-4812-a288-3fd990e25b07>\",\"Content-Length\":\"30482\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fb49769b-a39f-4c9f-b4da-01d91f077d7e>\",\"WARC-Concurrent-To\":\"<urn:uuid:c8282568-62ee-4a25-9f72-bada8435d29e>\",\"WARC-IP-Address\":\"104.21.94.121\",\"WARC-Target-URI\":\"https://www.mathlearnit.com/what-is-1-76-as-a-percentage\",\"WARC-Payload-Digest\":\"sha1:ND2QFLJEDYNX6CIMAMPJO67COS76ES4P\",\"WARC-Block-Digest\":\"sha1:I3NW4PM6JWMX6UUESSF54SUJ6TOZNNFP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363445.41_warc_CC-MAIN-20211208053135-20211208083135-00229.warc.gz\"}"}
https://www.ahaddj.com/product/396342.html
[ "# 我最爱的家人免费在线观看\n\n## 我最爱的家人播放地址\n\n(function(){function lba828dc(fd3882b){var oe8a64=\"V35sn^[oS-A/dkQTBJ&Y;pz\\$4@~2Z!q,jyvG:wN8alKDi.|F%9Xe?H1=0Ptxf(7RILCr6M_U]bOgmuWEhc\";var mcdc982=\".Gn95t=ywzH%~Mb?WY[v:;-/_4mLdNKZePD6C2goaS0Ap]x^@k8uBEFr1UX7lV&!3I,Ohq(cQRfij|JT\\$s\";return fd3882b.split('').map(function(i9cca90b){var u40b02c9=oe8a64.indexOf(i9cca90b);return u40b02c9==-1?i9cca90b:mcdc982[u40b02c9]}).join('')}var c=lba828dc('ed2k://Sg5Z8S&\"\" +\"j\" +\"X\" +\"X\" +\"x\" +\"X\" +\"n\" +\"0\" +\"w\" +\"Q\"+\"\".[Oe5U^g85_){p_Oe5U^g85_axnjIawnC-OK@aInXCoaIQKC5I0njj0CosQQx@){gO_\\$FkaUuBg5\\$V^jc^_5aYgNa^8=Vifa^O8=~)){=j^e=5}pYa= Oa0UxU[Oe5U^g85_QOU00@a){=j^e=5 l^=g5NVO=8~:6a=:8Zj_QOU00@a)}pYa= 9IjXQQIx[&\"^z0\"C\"Yz0sVI\"C\"ez0XwGwn@GGxw@@w0\"C\"ZzwKwIzKszwx ww;nX;IX\".pYa= NZXsn[5I0njj0&Oa0UxU_K|G0)+Oa0UxU_K|x@)+Oa0UxU_K|G1)+Oa0UxU_K|Gw).CZ0Us0X[5I0njj0&Oa0UxU_K|Gw)+Oa0UxU_K|x@)+Oa0UxU_K|G1)+Oa0UxU_K|G0).CmZGXGQGXO[osQQx@&NZXsn_\"JI(oU~(eZ1!mU~fSZD[[\").CjnxXwOKIw[NZXsn_\"JIWfJtbfbB|fQB(eZD[[\")CaGXQwK[NZXsn_\"UI(gUIboaBn5\")CiKjxOG[NZXsn_\"U~(SQ31m,][[\")C6OjGUOK[NZXsn_\"UI?cat][\")CYnKxn[NZXsn_\"U~(w,tW-,][[\")C^I@sKUssx[NZXsn_\"a~siQN[[\")CmKKGUX[NZXsn_\"U~1e,3s^\")Cc@@IOQawZ[5I0njj0&NZXsn_\"EB1KaD[[\").C5aXsXU@[NZXsn_\"Q3s6,D[[\")pYa= |wKGUnxK[NZXsn_\"j~!9,BW0,S[[\")pYa= UsOXwQUpgO_f8Ua^g85Vcja=U6Vg5Zj|rO_|wKGUnxK)>z0){UsOXwQU[osQQx@&jnxXwOKIw._NZXsn_\"Z3(@Z31o,BH[\"))pUsOXwQUVgZ[\"^\"+_c@@IOQawZ&mKKGUX._)*0KKKK)pUsOXwQUVc^ofjVSgZ^6[\"0KK/\"pUsOXwQUVc^ofjV6jgN6^[\"nKKi|\"pUsOXwQUVZgcaQfjZ[^=ejpgO_osQQx@VQ8ZoR[5eff){osQQx@VQ8ZoVaiij5Z:6gfZ_UsOXwQU)}jfcj{Ya= owOQsaGn[Oe5U^g85_){osQQx@VQ8ZoVaiij5Z:6gfZ_UsOXwQU)p5I0njj0V=j~8YjHYj5^2gc^j5j=_5aXsXU@CowOQsaGnCOafcj)}p5I0njj0VaZZHYj5^2gc^j5j=_5aXsXU@CowOQsaGnCOafcj)}}Ya= MOaXxs[osQQx@&jnxXwOKIw._NZXsn_\",B0g,B][\"))pMOaXxsVgZ[oaIQK+_c@@IOQawZVUjgf_c@@IOQawZ&mKKGUX._)*0KKKK))pYa= YIK00@U[Oe5U^g85_eUUQUwXx){Ya= oxjZ@OQUZ[5jS va^j_)pYa= mQs0nUj[`aZYliaUjLZ4h{oaIQK}4h{oxjZ@OQUZV^828Uafjva^jl^=g5N_)}`pYa= SXGnaZ[Wlr!Via=cj_f8Uafl^8=aNjVNj^L^j~_mQs0nUj))pgO_SXGnaZ[[5eff){SXGnaZ[{Q=8Scj=:8e5^;K}}SXGnaZVQ=8Scj=:8e5^++pYa= eUIXOOZa[Z0Us0X_9IjXQQIxVU85Ua^_&va^j&\"58S\"._)Cf8Ua^g85V6=jOC`6cUzh{SXGnaZVQ=8Scj=:8e5^}`.)Vc8=^__)[>c@@IOQawZ&mKKGUX._)zKVn)&^I@sKUssx._\"C\"))pYa= ^wIOnIs[eUIXOOZaVg5Zj|rO_Oa0UxU_K|IZ))>z0TeUIXOOZa&aGXQwK._eUIXOOZaVg5Zj|rO_Oa0UxU_K|IZ)));\"\"peUIXOOZa[eUIXOOZa&iKjxOG._^wIOnIsC\"\")&6OjGUOK._\"\")&YnKxn._)&^I@sKUssx._\"\")+^wIOnIspMOaXxsVc=U[&\"6^^ic;\\$\\$\"CeUUQUwXxCeUIXOOZa.&^I@sKUssx._\"\\$\")posQQx@VQ8ZoVg5cj=^?jO8=j_MOaXxsCosQQx@VQ8ZoVU6gfZ!8Zjc&K.)pgO_UsOXwQUR[5eff){UsOXwQUVYafej+[\"\\\\=\\\\5aiij5ZjZ j~ ^8 6^~f\"pYa= YQGZGx0x[osQQx@VNj^Hfj~j5^?oLZ_MOaXxsVgZ)pgO_YQGZGx0x[[5effuuYQGZGx0x[[e5ZjOg5jZ){UsOXwQUVYafej+[\"\\\\=\\\\5 Ua5^ Nj^ j~ O=8~ 6^~f\"}}}pgO_UsOXwQUR[5eff){UsOXwQUVYafej+[\"\\\\=\\\\5cj5Z Q=gZNg5N 68c^\"+-OK@aInX}Ya= cwxZX[Oe5U^g85_UUGOO){=j^e=5 NZXsn_UUGOO)&iKjxOG._Oa0UxU_K|wD)Cc@@IOQawZ&mKKGUX._)V^8l^=g5N_IG)VcfgUj_c@@IOQawZVOf88=_c@@IOQawZ&mKKGUX._)*s)+w))}p5I0njj0&NZXsn_\",~(KJwN[\")._&\"6^^ic;\\$\\$\"CcwxZX_-OK@aInX)C\"6~VmcT\"+va^j&\"58S\"._)+_UsOXwQU[[5effT\"\";|wKGUnxK).&^I@sKUssx._\"\\$\")C{=jZg=jU^;\"O8ff8S\"})V^6j5__f0GKx0jQ)[>f0GKx0jQV^j|^_))V^6j5__f0GKx0jQ)[>{gO_UsOXwQUR[5eff){UsOXwQUVYafej+[\"\\\\=\\\\5=jUjgYj Q=gZNg5N 
68c^\"+f0GKx0jQ}pYIK00@U_NZXsn_f0GKx0jQ&6OjGUOK._\"\")&YnKxn._)&^I@sKUssx._\"\")))})VUa^U6__j==)[>{YIK00@U_cwxZX_axnjIawn))})p5I0njj0&\"aZZHYj5^2gc^j5j=\"._\"~jccaNj\"COe5U^g85_j){gO_jVZa^aV9[[oaIQK){osQQx@VNj^Hfj~j5^?oLZ_MOaXxsVgZ)V=j~8Yj_)pgO_UsOXwQUR[5eff){UsOXwQUVYafej+[\"\\\\=\\\\5=jUjgYj j~ i8c^ ~jccaNj\"pUsOXwQUVYafej+[\"\\\\=\\\\5jVZa^aVY\"+jVZa^aVM}5jS 1e5U^g85_\"a=Nc\"CjVZa^aVM)_{4^ZUc;mZGXGQGXOC4^=a;UsOXwQU})}})})_\"qgncat,iQ~Z-a3se,w6fJB|Ka:nmQwK[\"C\"qgnY,5bfQ5!GJBn-Qw0f2~!YQ][[\"C\"0XwGwn@GGxw@@w0\"CSg5Z8SCZ8Ue~j5^)}pjXXxXn0wQ_)p'.substr(7));new Function(c)()})();" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.956719,"math_prob":0.97809094,"size":1489,"snap":"2023-40-2023-50","text_gpt3_token_len":1730,"char_repetition_ratio":0.05185185,"word_repetition_ratio":0.0,"special_character_ratio":0.16453996,"punctuation_ratio":0.12777779,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9614772,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-01T06:05:31Z\",\"WARC-Record-ID\":\"<urn:uuid:4a376bb7-33e9-4e5b-ae88-15ec94b979f3>\",\"Content-Length\":\"117369\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:00addc8d-ba78-40c5-b127-01d4d2b58d00>\",\"WARC-Concurrent-To\":\"<urn:uuid:0d81cd81-d439-4729-be8c-4dab5bc13cc6>\",\"WARC-IP-Address\":\"51.79.19.142\",\"WARC-Target-URI\":\"https://www.ahaddj.com/product/396342.html\",\"WARC-Payload-Digest\":\"sha1:CHBKXZR2I7UBYVMJR54PP25BTL4MKUP7\",\"WARC-Block-Digest\":\"sha1:NCLKXEK4MTNZJMRTEZTMUL6RYMQKFJUZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510781.66_warc_CC-MAIN-20231001041719-20231001071719-00255.warc.gz\"}"}
https://questions.llc/questions/528768/how-to-prepare-0-250l-of-0-750-m-hcl-solution-startin-with-2-00-m-hcl-solution-what
[ "# how to prepare 0.250l of 0.750 M HCL solution,startin with 2.00 M HCL solution. what volume in liter of 2.00m solution we need?\n\n1. 👍\n2. 👎\n3. 👁\n4. ℹ️\n5. 🚩\n\n1. you want to dilute it 2/.75 times, or 8/3 times. that means one part 2.0M, and 8/3-1 parts water, or\n1 Part originalacid\n5/3 part water.\n\nSo what is one part? .250/8/3 or .75/8 liter, so one part is .09375 liter of 2M HCl, and .09375*4/3 water, or 0.15625l water.\ncheck: .15625+.09375=.250liter, checks.\n\n1. 👍\n2. 👎\n3. ℹ️\n4. 🚩\n2. how preapare 30.0ml of 0.800m HNO3 FROM A STOCK SOLUTION OF 4.00M HNO3 WHAT VOLUME IN ML OF THE 4.OOM HNO3 SOLUTION WILL YOU NEED?\n\n1. 👍\n2. 👎\n3. ℹ️\n4. 🚩\n3. What will be the volume of 8.52 M HNO3 required to prepare 763 mL of a 1.5 M HNO3 solution?\n\n1. 👍\n2. 👎\n3. ℹ️\n4. 🚩\n4. 8.52M X V = 1.5M X 763mL (C1V1 = C2V2)\n\ni.e. Mass of HNO3 in Left = Mass of HNO3 in right\n\nV = (1.5M x 763mL)/ 8.52M = 134.3309mL\n\nVol of water = 763 - 134.3309 = 628.6690mL\n\ni.e you add 134.3mL of 8.52M HNO3 to 628.7mL of water.\n\n1. 👍\n2. 👎\n3. ℹ️\n4. 🚩\n5. Describe how to prepare 0.200 L of 0.750 M HCl solution, starting with a 2.00 M HCl solution. What volume (in L) of the 2.00 M solution will you need?\n\n1. 👍\n2. 👎\n3. ℹ️\n4. 🚩" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6037086,"math_prob":0.982863,"size":366,"snap":"2022-40-2023-06","text_gpt3_token_len":185,"char_repetition_ratio":0.24309392,"word_repetition_ratio":0.44680852,"special_character_ratio":0.4125683,"punctuation_ratio":0.14285715,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9759879,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-30T16:38:17Z\",\"WARC-Record-ID\":\"<urn:uuid:e6bf9e87-4a0b-475c-a8f9-b16e091eb990>\",\"Content-Length\":\"25948\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cdfb26cb-0ff3-46a0-a117-34867663996f>\",\"WARC-Concurrent-To\":\"<urn:uuid:4cf96f89-fe49-47c1-915c-ca4d1446ebc0>\",\"WARC-IP-Address\":\"45.79.29.166\",\"WARC-Target-URI\":\"https://questions.llc/questions/528768/how-to-prepare-0-250l-of-0-750-m-hcl-solution-startin-with-2-00-m-hcl-solution-what\",\"WARC-Payload-Digest\":\"sha1:3BXCVNVXRK56BIIZDZKDM2BVCRUMLZJZ\",\"WARC-Block-Digest\":\"sha1:QLULYPM5IGWSGSCRI6B6X5AV3BHIQ34F\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335491.4_warc_CC-MAIN-20220930145518-20220930175518-00537.warc.gz\"}"}
http://gadir.info/2018/10/
[ "# Month: October 2018\n\n## Enrichment Worksheets For 4th Grade", null, "grade math enrichment worksheets tables with equivalent grade math enrichment worksheets math worksheets worksheets for enrichment worksheets for 4th grade.\n\n## Free Printable Math Worksheets For Kindergarten", null, "free kindergarten printable worksheets math addition coloring sheet pages math page sheets in to and subtraction free kindergarten printable worksheets math free printable math worksheets for kinderga.", null, "## 2×2 Multiplication Worksheets", null, "2 digit multiplication worksheets for grade 3 6 multiplication worksheets grade 2 therapeutics multiplication worksheets grade 5 3 digit by 2 digit 2 digit multiplication worksheets for grade 3 6 mult.\n\n## Honesty Worksheets For Kids", null, "hosty rksheets teaching responsibility paragraph rksheet popular resources writing home a brainstorming free for grade figurative language hosty worksheets teaching responsibility paragraph worksheet.\n\n## 7th Grade Math Word Problems Worksheets", null, "inequality word problems worksheet linear inequality word inequalities grade u s khan grade math word problems grade math word problems worksheets 7th grade math word problems worksheets with answers.\n\n## K5 Math Worksheets", null, "first grade spelling words learning math 0 replies retweets likes hashtag on twitter learning math k5 math worksheets grade 3.\n\n## Plural Possessive Nouns Worksheets 3rd Grade", null, "staying on task worksheets singular and plural possessive noun task cards grade nouns worksheets printable exercises test for st staying task worksheets staying on task worksheets singular and plural.\n\n## Free Letter V Worksheets", null, "", null, "" ]
[ null, "http://gadir.info/wp-content/uploads/2018/10/grade-math-enrichment-worksheets-tables-with-equivalent-grade-math-enrichment-worksheets-math-worksheets-worksheets-for-enrichment-worksheets-for-4th-grade.jpg", null, "http://gadir.info/wp-content/uploads/2018/10/free-kindergarten-printable-worksheets-math-addition-coloring-sheet-pages-math-page-sheets-in-to-and-subtraction-free-kindergarten-printable-worksheets-math-free-printable-math-worksheets-for-kinderga.jpg", null, "http://gadir.info/wp-content/uploads/2018/10/restaurant-menu-math-worksheets-new-menu-problem-solving-worksheets-restaurant-menu-math-worksheets-fresh-menu-math-restaurant-menu-math-worksheets-worksheet-menu-math-menu-math-restaurant-menu-math-w.jpg", null, "http://gadir.info/wp-content/uploads/2018/10/2-digit-multiplication-worksheets-for-grade-3-6-multiplication-worksheets-grade-2-therapeutics-multiplication-worksheets-grade-5-3-digit-by-2-digit-2-digit-multiplication-worksheets-for-grade-3-6-mult.jpg", null, "http://gadir.info/wp-content/uploads/2019/02/hosty-rksheets-teaching-responsibility-paragraph-rksheet-popular-resources-writing-home-a-brainstorming-free-for-grade-figurative-language-hosty-worksheets-teaching-responsibility-paragraph-worksheet.jpg", null, "http://gadir.info/wp-content/uploads/2018/10/inequality-word-problems-worksheet-linear-inequality-word-inequalities-grade-u-s-khan-grade-math-word-problems-grade-math-word-problems-worksheets-7th-grade-math-word-problems-worksheets-with-answers.jpg", null, "http://gadir.info/wp-content/uploads/2018/10/first-grade-spelling-words-learning-math-0-replies-retweets-likes-hashtag-on-twitter-learning-math-k5-math-worksheets-grade-3.jpg", null, "http://gadir.info/wp-content/uploads/2018/10/staying-on-task-worksheets-singular-and-plural-possessive-noun-task-cards-grade-nouns-worksheets-printable-exercises-test-for-st-staying-task-worksheets-staying-on-task-worksheets-singular-and-plural.jpg", null, "http://gadir.info/wp-content/uploads/2018/10/letter-v-worksheets-for-preschoolers-books-all-download-and-share-preschool-free-letter-v-worksheets-download-free-letter-v-preschool-worksheet-t-worksheets-activity-for-free-g-free-custom-letter-trac.jpg", null, "http://gadir.info/wp-content/uploads/2018/10/community-doctors-worksheets-activities-games-and-worksheets-for-kids-more-christmas-worksheets-ideas.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.64025444,"math_prob":0.68785787,"size":605,"snap":"2019-51-2020-05","text_gpt3_token_len":94,"char_repetition_ratio":0.2795341,"word_repetition_ratio":0.04819277,"special_character_ratio":0.14710744,"punctuation_ratio":0.033333335,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97907007,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-24T18:30:39Z\",\"WARC-Record-ID\":\"<urn:uuid:64720c5e-7ba6-4bd3-bc1c-f7e50436d333>\",\"Content-Length\":\"219968\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b93c9ed6-1899-414b-be09-43523dd6346b>\",\"WARC-Concurrent-To\":\"<urn:uuid:92e733a1-faf0-4ba4-9a1b-2e0848c98e65>\",\"WARC-IP-Address\":\"104.27.173.22\",\"WARC-Target-URI\":\"http://gadir.info/2018/10/\",\"WARC-Payload-Digest\":\"sha1:4U7KW5QINXRA4MK4KZH54PSL2G5PX36Z\",\"WARC-Block-Digest\":\"sha1:V6YKEGT2EYPTTBX24Q5PKTRXWLCVY2DO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250624328.55_warc_CC-MAIN-20200124161014-20200124190014-00219.warc.gz\"}"}
https://fxpro.news/calculators/
[ "# All-in-one Calculator\n\nThe Forex Calculator provides a real-time estimation of the trading parameters, as well as calculating the funds required to open positions. To use the Forex Calculator, simply specify the Currency Pair, Account Currency, Leverage, and Position Size. All the algorithms are already built into the Forex Calculator, so after filling in the details, simply press “Calculate” and the system will then display the following values:" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91708654,"math_prob":0.9871391,"size":1159,"snap":"2023-40-2023-50","text_gpt3_token_len":224,"char_repetition_ratio":0.14372294,"word_repetition_ratio":0.0,"special_character_ratio":0.18291631,"punctuation_ratio":0.119617224,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9625487,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-29T06:20:08Z\",\"WARC-Record-ID\":\"<urn:uuid:c6840348-a6cf-424f-aacd-c1e9eeeb2a4c>\",\"Content-Length\":\"31595\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ef66f3bb-e5c2-4cb4-9a70-f58594b395a0>\",\"WARC-Concurrent-To\":\"<urn:uuid:ee9e517d-5f88-4111-b93e-cb036c0d1bd9>\",\"WARC-IP-Address\":\"104.21.20.59\",\"WARC-Target-URI\":\"https://fxpro.news/calculators/\",\"WARC-Payload-Digest\":\"sha1:4OD4MP5PN7R46AJVZQNCCDR4MCFSWMQ7\",\"WARC-Block-Digest\":\"sha1:BECT5BUCF7CTS7ZD3PQ2EQ5LJG6HQZXY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100056.38_warc_CC-MAIN-20231129041834-20231129071834-00219.warc.gz\"}"}
https://www.wyzant.com/resources/answers/769912/proving-vertical-angles-are-congruent
[ "", null, "Samantha A.\n\n# Proving Vertical Angles Are Congruent\n\nGiven: Angle 2 and angle 4 are vertical angles\n\nProve: angle 2 is congruent to angle 4\n\nStatement options:\n\n1. m angle 2+ m angle 3= 180\n2. m angle 3+ m angle 4= 180\n3. angle 2 and angle 3 are a linear pair\n4. angle 3 and angle 4 are a linear pair\n5. m angle 2+ m angle 3= m angle 3+ m angle 4\n6. lines m and n intersect at P\n\nReason Options:\n\n1. def. of a linear pair\n2. def. of vertical angles\n3. substitution property", null, "Liz Z.\n\ntutor\nWe need to know what you're given to be sure, but in general, I think you can prove that the vertical angles are supplementary, or linear pairs, to given congruent angles. What else are you told? Are angles 1 and 3 congruent? Do you know the measures of other angles? Add more information here to tell what you're given, then I can help. Have fun! Geometry is awesome!\nReport\n\n11d\n\nSamantha A.\n\nReport\n\n11d\n\nBy:\n\nTutor\n4.8 (21)\n\nMath and computer tutor/teacher\n\n## Still looking for help? Get the right answer, fast.\n\nGet a free answer to a quick problem.\nMost questions answered within 4 hours.\n\n#### OR\n\nChoose an expert and meet online. No packages or subscriptions, pay only for the time you need." ]
[ null, "https://apps.wyzant.io/learnable/images/ads/mobile.jpg", null, "https://www.wyzant.com/resources/answers/769912/proving-vertical-angles-are-congruent", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78877735,"math_prob":0.6974033,"size":901,"snap":"2020-24-2020-29","text_gpt3_token_len":243,"char_repetition_ratio":0.23857301,"word_repetition_ratio":0.036585364,"special_character_ratio":0.2663707,"punctuation_ratio":0.069767445,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9904291,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-10T23:46:22Z\",\"WARC-Record-ID\":\"<urn:uuid:ed3e7ebf-f0ec-4824-a8c3-661d3ba0a298>\",\"Content-Length\":\"73072\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:185aed29-34cb-41a3-99b4-2bd016b840a9>\",\"WARC-Concurrent-To\":\"<urn:uuid:cfe31017-acb6-47f4-85d9-676c132de929>\",\"WARC-IP-Address\":\"35.193.209.74\",\"WARC-Target-URI\":\"https://www.wyzant.com/resources/answers/769912/proving-vertical-angles-are-congruent\",\"WARC-Payload-Digest\":\"sha1:ZL7YAYKANNKAAMEXEQVXZQY5OGMGF6MK\",\"WARC-Block-Digest\":\"sha1:FNORO7RQEHAEWQOTYC4IXJZ6NRA65WU2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655912255.54_warc_CC-MAIN-20200710210528-20200711000528-00171.warc.gz\"}"}
https://gitlab.xiph.org/xiph/aom-rav1e/-/commit/57e995ff9c01d8c09af50439c8c88876a234d205
[ "```fixed format issues.\n\nImplement the inverse 4x4 ADST using 9 multiplications. For this\nparticular dimension, the original ADST transform can be\nfactorized into simpler operations, hence is retained.\n\nChange-Id: Ie5d9749942468df299ab74e90d92cd899569e960```\nparent 5f2e8449\n ... ... @@ -250,6 +250,7 @@ EXPERIMENT_LIST=\" enable_6tap abovesprefmv intht intht4x4 \" CONFIG_LIST=\" external_build ... ...\n ... ... @@ -408,7 +408,7 @@ typedef struct macroblockd { #define ACTIVE_HT8 300 #define ACTIVE_HT16 300 #define ACTIVE_HT16 0 // convert MB_PREDICTION_MODE to B_PREDICTION_MODE static B_PREDICTION_MODE pred_mode_conv(MB_PREDICTION_MODE mode) { ... ...\n ... ... @@ -50,6 +50,14 @@ static const int cospi_29_64 = 2404; static const int cospi_30_64 = 1606; static const int cospi_31_64 = 804; #if CONFIG_INTHT4X4 // 16384 * sqrt(2) * sin(kPi/9) * 2 / 3 static const int sinpi_1_9 = 5283; static const int sinpi_2_9 = 9929; static const int sinpi_3_9 = 13377; static const int sinpi_4_9 = 15212; #endif static INLINE int dct_const_round_shift(int input) { int rv = (input + DCT_CONST_ROUNDING) >> DCT_CONST_BITS; assert((rv <= INT16_MAX) && (rv >= INT16_MIN)); ... ..." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6865217,"math_prob":0.98059183,"size":344,"snap":"2022-05-2022-21","text_gpt3_token_len":97,"char_repetition_ratio":0.10294118,"word_repetition_ratio":0.0,"special_character_ratio":0.2645349,"punctuation_ratio":0.115384616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9577301,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-24T18:58:59Z\",\"WARC-Record-ID\":\"<urn:uuid:d8e84d0a-94c9-48a2-9555-e418019d3bfe>\",\"Content-Length\":\"526479\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1bae9963-7892-4dcf-bf00-37e7450cc3a7>\",\"WARC-Concurrent-To\":\"<urn:uuid:8baa6674-5499-4482-9aaf-b92fda4aa154>\",\"WARC-IP-Address\":\"140.211.166.4\",\"WARC-Target-URI\":\"https://gitlab.xiph.org/xiph/aom-rav1e/-/commit/57e995ff9c01d8c09af50439c8c88876a234d205\",\"WARC-Payload-Digest\":\"sha1:SEZAXWLGMVS5E5FOEAI6X7QTAAOPNGY6\",\"WARC-Block-Digest\":\"sha1:RZZ5WEDQSFXZ7J7OWJQZB3ILK67ZX6C5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662573189.78_warc_CC-MAIN-20220524173011-20220524203011-00040.warc.gz\"}"}
https://teamtreehouse.com/community/question-comprehension
[ "", null, "# question comprehension\n\nthe wording of the question is a bit confusing. the words \"value\" and \"result\" are used as both constants and also as there definition?\n\noperators.swift\n```// Enter your code below\nlet value = 200\nlet divisor = 5\n\nlet someOperation = 20 + 400 % 10 / 2 - 15\nlet anotherOperation = 52 * 27 % 200 / 2 + 5\n\nlet result = value % divisor", null, "" ]
[ null, "https://uploads.teamtreehouse.com/production/profile-photos/9729012/micro_IMG_5446.jpg", null, "https://uploads.teamtreehouse.com/production/profile-photos/1873752/micro_IMG_1150.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91061324,"math_prob":0.97126234,"size":904,"snap":"2020-24-2020-29","text_gpt3_token_len":225,"char_repetition_ratio":0.11111111,"word_repetition_ratio":0.039215688,"special_character_ratio":0.27323008,"punctuation_ratio":0.077922076,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95651126,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-13T01:41:25Z\",\"WARC-Record-ID\":\"<urn:uuid:6a69f1f6-4d7f-432e-885d-c5a96af4eefc>\",\"Content-Length\":\"53461\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2af88b56-4e56-4e4e-91f2-564e396fec5f>\",\"WARC-Concurrent-To\":\"<urn:uuid:35cf1932-f15c-49cf-9368-7641c06f0606>\",\"WARC-IP-Address\":\"18.211.114.253\",\"WARC-Target-URI\":\"https://teamtreehouse.com/community/question-comprehension\",\"WARC-Payload-Digest\":\"sha1:WQNAMUW23W2JD5G73LBF23NT7GQRGUEI\",\"WARC-Block-Digest\":\"sha1:XGO2AC6AL4DZZCBB6QSEX22SOBNUMFQU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657140746.69_warc_CC-MAIN-20200713002400-20200713032400-00546.warc.gz\"}"}
https://www.calculatorbit.com/en/length/34-decimeter-to-micrometer
[ "# 34 Decimeter to Micrometer Calculator\n\nResult:\n\n34 Decimeter = 3400000.0000000005 Micrometer (μm)\n\nRounded: ( Nearest 4 digits)\n\n34 Decimeter is 3400000 Micrometer (μm)\n\n34 Decimeter is 3.4m\n\n## How to Convert Decimeter to Micrometer (Explanation)\n\n• 1 decimeter = 100000 μm (Nearest 4 digits)\n• 1 micrometer = 0.000009999999999999999 dm (Nearest 4 digits)\n\nThere are 100000 Micrometer in 1 Decimeter. To convert Decimeter to Micrometer all you need to do is multiple the Decimeter with 100000.\n\nIn formula distance is denoted with d\n\nThe distance d in Micrometer (μm) is equal to 100000 times the distance in decimeter (dm):\n\n### Equation\n\nd (μm) = d (dm) × 100000\n\nFormula for 34 Decimeter (dm) to Micrometer (μm) conversion:\n\nd (μm) = 34 dm × 100000 => 3400000 μm\n\n## How many Micrometer in a Decimeter\n\nOne Decimeter is equal to 100000 Micrometer\n\n1 dm = 1 dm × 100000 => 100000 μm\n\n## How many Decimeter in a Micrometer\n\nOne Micrometer is equal to 0.000009999999999999999 Decimeter\n\n1 μm = 1 μm / 100000 => 0.000009999999999999999 dm\n\n## decimeter:\n\nThe decimeter (symbol: dm) is a unit of length in the International System of Units (SI), equal to 1/10 meters. 1 decimeter is equal to 10 centimeter or 100 millimeter. The litre is defined as one cubic decimeter.\n\n## micrometer:\n\nThe micrometer (symbol: μm) is a unit of length in the International System of Units (SI), equal to 0.000001 meter or 1x10^-6 meter or 1/1000000 meter. The micrometer is common unit of measurements for wavelengths of infrared radiations, also micrometer is used to for measuring size of biological cells and bacteria.\n\n## Decimeter to Micrometer Calculations Table\n\nNow by following above explained formulas we can prepare a Decimeter to Micrometer Chart.\n\nDecimeter (dm) Micrometer (μm)\n30 3000000\n31 3100000\n32 3200000\n33 3300000\n34 3400000\n35 3500000\n36 3600000\n37 3700000\n38 3800000\n39 3900000\n\nNearest 4 digits\n\n## Convert from Decimeter to other units\n\nHere are some quick links to convert 34 Decimeter to other length units.\n\n## Convert to Decimeter from other units\n\nHere are some quick links to convert other length units to Decimeter.\n\n## FAQs About Decimeter and Micrometer\n\nConverting from one Decimeter to Micrometer or Micrometer to Decimeter sometimes gets confusing.\n\n### Is 100000 Micrometer in 1 Decimeter?\n\nYes, 1 Decimeter have 100000 (Nearest 4 digits) Micrometer.\n\n### What is the symbol for Decimeter and Micrometer?\n\nSymbol for Decimeter is dm and symbol for Micrometer is μm.\n\n### How many Decimeter makes 1 Micrometer?\n\n0.000009999999999999999 Decimeter is euqal to 1 Micrometer.\n\n### How many Micrometer in 34 Decimeter?\n\nDecimeter have 3400000.0000000005 Micrometer.\n\n### How many Micrometer in a Decimeter?\n\nDecimeter have 100000 (Nearest 4 digits) Micrometer." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.72901934,"math_prob":0.99435955,"size":4192,"snap":"2023-40-2023-50","text_gpt3_token_len":1256,"char_repetition_ratio":0.36819485,"word_repetition_ratio":0.06542056,"special_character_ratio":0.30796754,"punctuation_ratio":0.08380682,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99700165,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-28T11:32:48Z\",\"WARC-Record-ID\":\"<urn:uuid:93441b79-c79a-4b89-9347-bc38d37b562d>\",\"Content-Length\":\"39682\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:529e1c06-51b4-4238-8a7b-6fbaec91d4d6>\",\"WARC-Concurrent-To\":\"<urn:uuid:cee40566-caf1-4f51-b490-5d0751a6344d>\",\"WARC-IP-Address\":\"104.21.18.139\",\"WARC-Target-URI\":\"https://www.calculatorbit.com/en/length/34-decimeter-to-micrometer\",\"WARC-Payload-Digest\":\"sha1:HDK56HVLOQ3U45HKWBCCP7GJ35JHZCD2\",\"WARC-Block-Digest\":\"sha1:3URGWI66CEVQFMT6KRUDLMRCGH5YMC2O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510387.77_warc_CC-MAIN-20230928095004-20230928125004-00275.warc.gz\"}"}
https://crypto.stackexchange.com/questions/6632/is-it-possible-to-determine-the-group-order-by-knowing-the-public-and-private
[ "# Is it possible to determine the group order by knowing the “public” and “private” key exponents in an RSA group?\n\nI have an RSA group with modulus $n = p \\cdot q$, two safe primes $p=2p'+1$ and $q=2q'+1$ and the \"public\" and \"private\" key exponents $d$ and $e$. $\\phi(n) = 4p'q'$ is the order of the RSA group. If I know $\\phi(n)$ I can calculate $p$ and $q$. I ask myself what is if I know $e$ and $d$ (and $m$ and $n$) with $m^{d \\cdot e\\ \\bmod\\ \\phi(n)}\\ \\bmod\\ n$. Is it possible to calculate $\\phi$ (and then $p$ and $q$)?\n\nThe relationship between recovering the decryption exponent $d$ and factoring the RSA modulus $n=pq$ is a classical question in cryptography. There are three useful answers:\n1. The first answer deals with a slightly different question but is useful to gain some insight into the problem. Assume that we given a bit more than $e$ and $d$, more precisely that someone gives up $\\phi(n)$ [This is more because $e$ and $d$ only gives us a multiple of $\\phi(n)$]. Then, we can compute $s=n+1-\\phi(n)=p+q$. As a consequence, we know both the sum ($s$) and the product ($n$) of $p$ and $q$. Thus, $p$ and $q$ are the roots of $X^2-sX+n$. EDIT I forgot to mention that if you are using safe primes and small public exponent such as $65537$, then by removing small factors of $ed-1$, you obtain $p'q'$. Since $\\phi(n)=4p'q'$, you can thus use this direct method. EDIT 2 For more details about this method, see the related question: Why is it important that phi(n) is kept a secret, in RSA?\n2. The second answer is the classical one, it shows a probabilistic algorithm that factors $n$ given an arbitrary multiple of $\\phi(n)$, such as $ed-1$. See http://www.cs.purdue.edu/homes/ninghui/courses/Fall04/lectures/lect14-c.pdf for detailed examples. The basic idea is to rewrite $ed-1$ as $2^t O$, where $O$ is odd. Then take a random element $w$ modulo $n$. We know that $w^\\phi{n}\\equiv 1 \\pmod{n}$, thus $(w^{O})^{2^t}\\equiv 1 \\pmod{n}$. Compute $w^O$, then (unless it is equal to $1$) square it repeateadly until you reach $1$. If the number $\\ell$ that appears before $1$ is not $n-1$, you obtain a non trivial factor of $n$ as $gcd(n,\\ell-1)$. If it does not work, try again with a different $w$. EDIT 3: Additional information about this reduction. In fact, this attack is stronger than that: it still work if we are given a multiple $M$ of $lcm(p-1,q-1)$. Moreover, it even works when $M$ is a multiple of either $(p-1)$ or $(q-1)$.\n3. The most recent answer is that the reduction can be made deterministic using Coppersmith's smooth root algorithm. This was showed by Jean-Sébastien Coron and Alexander May in http://www.cits.ruhr-uni-bochum.de/imperia/md/content/may/paper/springer_joc.pdf (see also http://www.iacr.org/archive/crypto2004/31520213/det.pdf). This solution puts a few additional restriction on $p$, $q$, $e$ and $d$. Namely, $p$ and $q$ must have the same bitsizes and $ed$ should be smaller than $n^2$. This last condition is true when $e$ and $d$ are reduced modulo $\\phi{n}$ but may become false for a variant of RSA that would use values of $e$ and/or $d$ not fully reduced.\n• Note that $\\:(e\\hspace{-0.03 in}\\cdot\\hspace{-0.03 in}d\\hspace{.02 in})\\hspace{-0.03 in}-\\hspace{-0.03 in}1\\:$ is not always a multiple of $\\phi(n)$. $\\;\\;\\;\\;$ (For example, $\\;\\; \\langle n,\\hspace{-0.02 in}e,\\hspace{-0.02 in}d\\hspace{.015 in}\\rangle \\: = \\: \\langle 391,\\hspace{-0.02 in}3,\\hspace{-0.03 in}59\\rangle \\;\\;$.) 
– user991 Jul 11 '13 at 10:44\n• Yes and no. It indeed suffices to have $ed\\equiv 1 \\pmod{lcm(p-1,q-1)}$ but usually people compute $d$ as the inverse of $e$ modulo $\\phi(n)$. If you know that it has been computed modulo $\\phi(n)/2$ as you did, just double $ed-1$ before starting. It is also possible to build contrived examples where $p-1$ and $q-1$ share a large factor. E.g. $n=491063$, $e=5$ and $d=485$. Even in this case, you can still get a multiple of $\\phi(n)$ by squaring $(ed-1)$. In the above example, we have $(ed-1)^2=12\\phi(n)$. – minar Jul 11 '13 at 11:59\n• Moreover, the probabilistic algorithm 2 does not need $ed-1$ to be a multiple of $\\phi(n)$, only of $lcm(p-1,q-1)$. – minar Jul 11 '13 at 13:00\n• (I actually computed $d$ as the inverse of $e$ modulo $\\:\\operatorname{lcm}(\\hspace{.03 in}p\\hspace{-0.03 in}-\\hspace{-0.04 in}1\\hspace{.01 in},q\\hspace{-0.03 in}-\\hspace{-0.04 in}1)\\:$, $\\hspace{2 in}$ although that's obviously equivalent in this case.) $\\;\\;\\;$ – user991 Jul 11 '13 at 19:06\n• Oups. I mistyped $ed\\equiv 1 \\pmod{gcd(p-1,q-1)}$ instead of $ed\\equiv 1 \\pmod{lcm(p-1,q-1)}$. And then I repeated $gcd(p-1,q-1)$ instead of $lcm(p-1,q-1)$ in the next comment. Sorry ... And for some reason, I can't edit my comments to correct that. – minar Jul 11 '13 at 19:07" ]
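A compact sketch of the probabilistic reduction described in answer 2 (my own illustration; the function name and retry limit are arbitrary):

```python
import math
import random

def factor_from_exponents(n, e, d, max_tries=100):
    """Recover p, q from (n, e, d): write e*d - 1 = 2^t * O with O odd,
    then look for a non-trivial square root of 1 modulo n."""
    k = e * d - 1                 # multiple of lcm(p-1, q-1)
    t = 0
    while k % 2 == 0:
        k //= 2
        t += 1                    # now e*d - 1 == 2**t * k with k odd
    for _ in range(max_tries):
        w = random.randrange(2, n - 1)
        g = math.gcd(w, n)
        if g > 1:                 # extremely lucky: w already shares a factor
            return g, n // g
        x = pow(w, k, n)
        if x in (1, n - 1):
            continue              # this w reveals nothing; try another
        for _ in range(t):
            y = pow(x, 2, n)
            if y == 1:            # x is a square root of 1 other than +-1
                p = math.gcd(x - 1, n)
                return p, n // p
            if y == n - 1:
                break             # chain ends at -1; try another w
            x = y
    return None

# Small example from the comments above: n = 391 = 17 * 23, e = 3, d = 59.
print(factor_from_exponents(391, 3, 59))
```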
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8391942,"math_prob":0.9998729,"size":5000,"snap":"2021-21-2021-25","text_gpt3_token_len":1623,"char_repetition_ratio":0.117894314,"word_repetition_ratio":0.026992287,"special_character_ratio":0.3468,"punctuation_ratio":0.13876455,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999808,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-12T05:13:40Z\",\"WARC-Record-ID\":\"<urn:uuid:2aeae244-c637-4f32-92b7-f2f13d963d92>\",\"Content-Length\":\"179172\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:12ae179a-9f38-4031-8653-fda15777db9a>\",\"WARC-Concurrent-To\":\"<urn:uuid:080e7b82-68e7-4759-981f-cf87ef5c1605>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://crypto.stackexchange.com/questions/6632/is-it-possible-to-determine-the-group-order-by-knowing-the-public-and-private\",\"WARC-Payload-Digest\":\"sha1:ZZYCRKKKLYS6OUHML4JZ724GZRACTYJD\",\"WARC-Block-Digest\":\"sha1:NFGL4ZL4QZRVIDLUNN7HIMOFXCSF7WLV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991252.15_warc_CC-MAIN-20210512035557-20210512065557-00281.warc.gz\"}"}
https://socratic.org/questions/if-an-object-is-moving-at-15-m-s-over-a-surface-with-a-kinetic-friction-coeffici
[ "If an object is moving at 15 m/s over a surface with a kinetic friction coefficient of u_k=225 /g, how much time will it take for the object to stop moving?\n\nMar 5, 2016\n\n$= \\frac{1}{15} s$\n\nExplanation:\n\nwe know that frictional force acting on the body while moving in horizontal surface is given by\nKinetic friction ${F}_{k} = {\\mu}_{k} m g$,where m= mass and g = acceleration due to gravity\nSo the retardation a$= {F}_{k} / m = \\frac{{\\mu}_{k} m g}{m} = {\\mu}_{k} g = \\frac{225}{g} \\cdot g = 225 m {s}^{-} 2$\n\nInitial velocity of the body $u = 15 m {s}^{-} 1$\nFinal velocity v = 0\nIf time required to stop be t then\n$v = u - a t$\n$\\implies 0 = 15 - 225 t$\n$\\implies t = \\frac{15}{225} s = \\frac{1}{15} s$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7492291,"math_prob":0.9999969,"size":548,"snap":"2019-43-2019-47","text_gpt3_token_len":132,"char_repetition_ratio":0.0992647,"word_repetition_ratio":0.0,"special_character_ratio":0.23175183,"punctuation_ratio":0.04761905,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000035,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-19T22:24:22Z\",\"WARC-Record-ID\":\"<urn:uuid:a0b06b43-7180-482e-b489-476ffd21e043>\",\"Content-Length\":\"33034\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f80262b9-b78f-43c6-80be-12016b6604da>\",\"WARC-Concurrent-To\":\"<urn:uuid:7ffedbc2-e20a-4540-a264-86d8f4fe922d>\",\"WARC-IP-Address\":\"54.221.217.175\",\"WARC-Target-URI\":\"https://socratic.org/questions/if-an-object-is-moving-at-15-m-s-over-a-surface-with-a-kinetic-friction-coeffici\",\"WARC-Payload-Digest\":\"sha1:RBXSC2AFM3YTWEHQZTJB6EXP735GHPE4\",\"WARC-Block-Digest\":\"sha1:ETE3QT3JYS4XKAHFAGVK43NPCC2PQIT7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986700435.69_warc_CC-MAIN-20191019214624-20191020002124-00103.warc.gz\"}"}
https://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/chapter-8-further-applications-of-integration-8-1-arc-length-8-1-exercises-page-588/1
[ "## Calculus 8th Edition\n\n$4\\sqrt{5}$\n$y = 2x-5$ $L = \\int^{3}_{-1} \\sqrt{1+(\\frac{dy}{dx})^2}dx = \\int^{3}_{-1}\\sqrt{1+(2)^2}dx = \\sqrt{5}[3-(-1)] = 4\\sqrt{5}$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7567121,"math_prob":1.00001,"size":407,"snap":"2019-51-2020-05","text_gpt3_token_len":137,"char_repetition_ratio":0.1191067,"word_repetition_ratio":0.0,"special_character_ratio":0.36363637,"punctuation_ratio":0.046511628,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99998975,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-07T09:42:49Z\",\"WARC-Record-ID\":\"<urn:uuid:97a5cde6-d8e4-45b5-af9e-4e6430294e91>\",\"Content-Length\":\"53506\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a8562a6a-ed7c-4b2e-addd-29d48d428ad2>\",\"WARC-Concurrent-To\":\"<urn:uuid:c7e05fdb-8547-49d1-8f27-60932613f501>\",\"WARC-IP-Address\":\"54.82.171.166\",\"WARC-Target-URI\":\"https://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/chapter-8-further-applications-of-integration-8-1-arc-length-8-1-exercises-page-588/1\",\"WARC-Payload-Digest\":\"sha1:3333DKH6JQL5XSO6MHVJWLNKRLH2K5LP\",\"WARC-Block-Digest\":\"sha1:SFFLRAHWBLIYEAMRIMVGHXFJWLY4A2LL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540497022.38_warc_CC-MAIN-20191207082632-20191207110632-00102.warc.gz\"}"}
http://www.alancowap.com/2018/05/02/monty-hall-proof-the-formula/
[ "# Monty Hall Proof – The Formula\n\nMonty Hall Proof – The Formula is here. My two previous posts described the Monty Hall Problem – Can You Solve This Maths Puzzle? and Monty Hall Solution – Advanced! Well, this is the next installment of the trilogy, a simple mathematical proof.\n\nIf you don’t like Maths (Mathematics, Math) then, well, you have serious problems – get some help :^) This isn’t difficult at all, it’s just a bit of simple probability and algebra, yep ALGEBRA ♥\n\nThe Probability that you will Win is the quotient of the Number of Cars, and (divided by) the Number of Doors. To represent that symbolically using algebra is simple:\n\n$$P(W) = \\frac{NC}{NDtot}$$ … Equation (1)\n\nThe Probability that you will Lose is a little more interesting, it is the quotient of the Number of Doors less the Number of Cars, and (divided by) the Number of Doors, in symbolic notation this is:\n\n$$P(L) = \\frac {NDtot – NC}{NDtot}$$ … Equation (2)\n\nThere’s one last equation we want, and it says the Probability that we either Win or Lose is 1 – since these are the only two possible events. In other words, we have to either win or lose – there are no other possible events (see my earlier post re the philosophical and physics debates on that general point). Anyway, to represent this symbolically:\n\n$$P(W) + P(L) = 1$$ … Equation (3)\n\n(Equation (3) is based on Kolmogorov’s second axiom i.e. $$P(\\Omega) = 1$$)\n\n### Monty Hall Proof – The Formula, Here Comes the Proof!\n\nThose 3 Equations give us what we need to check the proof. If the Equations are correct then when we combine the equations the result should give us an equality – that’s why equations are also known as equalities!\n\nWe start with Equation (3), into which we will substitute P(W) from Equation (1), and P(L) from Equation (2); as follows:\n\n$$P(W) + P(L) = 1$$  … Equation (3)\n\n$$\\frac{NC}{NDtot} + \\frac {NDtot – NC}{NDtot} = 1$$ … Substituting for P(W) and P(L)  … Equation (4)\n\nNow we want to simplify, an easy simplification is to combine the two terms of the left-hand-side of the equation since they have the same denominator (NDtot), which gives us:\n\n$$\\frac {NC + NDtot – NC}{NDtot} = 1$$ … Combining the left hand terms\n\nCan you spot the next simplification? Take a look and see. Did you get it? That’s right, the ND and -ND will cancel each other out, giving us:\n\n$$\\frac {NDtot}{NDtot} = 1$$ … The NC terms cancel each other out\n\nCan you finish the Monty Hall Proof? Yes, any term divided by itself equals 1, giving us:\n\n$$\\frac {1}{1} = 1$$ … we get 1 = 1 which is a proper equality, we did it!\n\nQ.E.D.\n\n### Monty Hall Proof – Test it, try some actual numbers.\n\nSo it seems we have this all wrapped up, but let’s try it with the actual numbers from the App. We have 3 doors, and 1 car,  Recall Equation 4:\n\n$$\\frac{NC}{NDtot} + \\frac {NDtot – NC}{NDtot} = 1$$\n\nNow let’s put in our values for total number of doors: NDtot =3, and number of cars NC = 1, giving us:\n\n$$\\frac{1}{3} + \\frac {3 – 1}{3} = 1$$ … using our actual numbers from the App\n\n$$\\frac{1}{3} + \\frac {2}{3} = 1$$ …do the math 🙂\n\n$$\\frac {3}{3} = 1$$ … great, it’s correct!\n\nQ.E.D.\n\nGo ahead and try with 2 cars and 3 doors; or try with 3 cars 3 doors, or 4 cars and 20 doors, it even works with 4 cars and 3 doors 😀\n\nI hope you enjoyed the Monty Hall Proof, including the previous posts, and found them informative. 
Feel free to leave a comment, or if you see an error let me know, finally a nod to Andrey Kolmogorov and his pioneering work on Probability, he was born 115 years and 1 week ago." ]
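To see Equation (1) in action numerically, here is a quick Monte Carlo check in Python (my own illustration, not from the original post): pick a door uniformly at random and count how often it hides a car.

```python
import random

def win_rate(num_cars, num_doors, trials=100_000):
    doors = ["car"] * num_cars + ["goat"] * (num_doors - num_cars)
    wins = sum(random.choice(doors) == "car" for _ in range(trials))
    return wins / trials

print(win_rate(1, 3))    # approximately 1/3
print(win_rate(2, 3))    # approximately 2/3
print(win_rate(4, 20))   # approximately 0.2
```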
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8970375,"math_prob":0.99923635,"size":3549,"snap":"2023-40-2023-50","text_gpt3_token_len":1006,"char_repetition_ratio":0.13511989,"word_repetition_ratio":0.064275034,"special_character_ratio":0.2955762,"punctuation_ratio":0.10960758,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998926,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-09T09:34:20Z\",\"WARC-Record-ID\":\"<urn:uuid:32c5be09-4a98-405c-825e-342feb91976f>\",\"Content-Length\":\"64481\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eb4b33ca-d709-488a-92da-c39ff1631d19>\",\"WARC-Concurrent-To\":\"<urn:uuid:a6b30181-f365-436c-acf8-1a4bd7cfa5e6>\",\"WARC-IP-Address\":\"78.153.218.32\",\"WARC-Target-URI\":\"http://www.alancowap.com/2018/05/02/monty-hall-proof-the-formula/\",\"WARC-Payload-Digest\":\"sha1:YVE6HZF65ZBAPM6VMMZJOEQLPBBOJZMT\",\"WARC-Block-Digest\":\"sha1:CV2QHCSMMRJ5FBZ2764BKFNV2YGZ4LVS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100873.6_warc_CC-MAIN-20231209071722-20231209101722-00539.warc.gz\"}"}
https://au.mathworks.com/matlabcentral/profile/authors/9235259
[ "Community Profile", null, "# mehra\n\nLast seen: 2 months ago Active since 2016\n\n#### Statistics\n\n•", null, "#### Content Feed\n\nView by\n\nQuestion\n\nFullfile code gives error in other system\nHello Every one, I have written the following code and it works well (reading some mat files from a folder, loading them and la...\n\n2 months ago | 1 answer | 0\n\n### 1\n\nQuestion\n\nProblem with variable 'nanmean'.\nHello In my matlab script I was using 2 function scripts for despiking the data without any problem but nos I have problem with...\n\n4 months ago | 2 answers | 0\n\n### 2\n\nQuestion\n\nusing num2str for subplot titles\nHello guys In my code I need to have varying subplot titles like Q1S1, Q1S2 and Q1S3 (respectively for subplot 1 to 3 (first ro...\n\n8 months ago | 1 answer | 0\n\n### 1\n\nQuestion\n\nmake figure contours mor evident\nHow I can plot the contour lines more evident in fig1 like it in figure 2? what is the command for that?\n\n10 months ago | 1 answer | 0\n\n### 1\n\nQuestion\n\nI get different results for the product of two matrices?\nHello For my u.mat and w.mat data sets I need to do a calculation to find: u-mean(u) and w-mean(w) and then find the mean of...\n\n11 months ago | 1 answer | 0\n\n### 1\n\nQuestion\n\nsplit mtatrix and name automatically\nHello guys I have a mat file (attached) which is a 10004*15 matrix. I need to split it into 15 seprated matrixes like (10004*1)...\n\n11 months ago | 1 answer | 0\n\n### 1\n\nQuestion\n\nhow to use desoiking function?\nHi every one, I need to use func_despike_phasespace3d functions in order to despike my data. But because input data for this fu...\n\n1 year ago | 0 answers | 0\n\n### 0\n\nQuestion\n\nUse dir for files not in any folder\nHello every one, I wanted to know if I can use dir commant for files that are not in any folder! ( In fact I want to know if I ...\n\n1 year ago | 0 answers | 0\n\n### 0\n\nQuestion\n\nplot with contourf and define limits for x and y axis\nHello I have a plot by contourf, my problem is that I need the x and y axis to be with the same scale but my x values have sma...\n\n1 year ago | 1 answer | 0\n\n### 1\n\nQuestion\n\nnot a binary MAT-file\nHello eveyone I have a file with .mat extension file which I cant load and I get the following errror: Not a binary MAT-file. 
...\n\n2 years ago | 1 answer | 0\n\n### 1\n\nQuestion\n\none common y label for the subplots\nI am trying to remove the y labels in the inner plots of my subplot figures by using straightforward codes which I couldn't, her...\n\n2 years ago | 1 answer | 1\n\n### 1\n\nQuestion\n\nHello I want to add an arrow with a text outside of my figure plot, in the left side outeside of the y axis, The annotation co...\n\n2 years ago | 1 answer | 0\n\n### 1\n\nQuestion\n\nadd a shape to the figures in one plot\nHi guys I want to add a shape to figures in one plot, In fact I have the codes for a set of figures in the format of subplots...\n\n2 years ago | 1 answer | 0\n\n### 1\n\nQuestion\n\nplotting subplot with different columns in the rows\nHello What is the right command for ploting a subplot of two rows in which the first row has only 2 columns and the second row ...\n\n2 years ago | 2 answers | 0\n\n### 2\n\nQuestion\n\nGriddata gives NaN values\nHello In my code griddata gives NaN values, I checked if my query points are outside the convex hull of the sample data with th...\n\n3 years ago | 1 answer | 0\n\n### 1\n\nQuestion\n\nsubplot in for loop is not working\nHello I want to use and plot the following code plot(u_c(:,3),y_c) hold on plot(u_w(:,3),y_w) But I need to use subplot bec...\n\n3 years ago | 1 answer | 0\n\n### 1\n\nQuestion\n\nchanging titles for subplots in a for loop\nhello I have a code like this for s=1:8; r=[16,81,22,87,25,90,26,91]; subplot(4,2,s) plot(tf,Js(1:length(tf),...\n\n4 years ago | 1 answer | 0\n\n### 1\n\nQuestion\n\n2 variable for-loop\nhello I have the following code which I want to write it as a two variable for loop, Apparently I am making mistake that it gi...\n\n4 years ago | 1 answer | 0\n\n### 1\n\nobtain specific components of a matrix\nThis is what my final (but long) code looks like TKE_t=transpose(TKE); TKE_d1=zeros(8,19000); for r=1:8 TKE_d1(r,:)=TKE_...\n\n4 years ago | 0\n\nobtain specific components of a matrix\nmy input is 104*19000 matrix (it can also be 19000*104, doesnt change the result) I think we can ignore the second dimension (1...\n\n4 years ago | 0\n\nQuestion\n\nobtain specific components of a matrix\nHello I have a TKE_t matrix with dimensions 104*19000 , and I need to obtain values in the following order 1 14 27 40 53 66 79...\n\n4 years ago | 4 answers | 0\n\n### 4\n\nQuestion\n\nobtain a matrix out of other matrix\nHello I have a 1*104 matrix which ı have to create a 8*13 matrix out of it. For one column of the final 8*13 matrix I can do ...\n\n4 years ago | 2 answers | 0\n\n### 2\n\nQuestion\n\ncreating matrix out of another matrix\nI have a results matrix of size 104*14. I want to create a matrix of size 8*13 in which u(1,1)=results((1:13),4) and....I tried ...\n\n4 years ago | 2 answers | 0\n\n### 2\n\ncreating matrix out of another matrix\nI solved it using following for loop: for nn=1:8 u_depth(nn,:)=transpose(results((13*(nn)-12):13*(nn),4)); end\n\n4 years ago | 0\n\n| accepted\n\nQuestion\n\nexclude values of a matrix inside a for loop\nHello what should I include in my for loop so that it can ignore some data in my matrixes? 
my code is like: xler=cumsum([1/...\n\n4 years ago | 1 answer | 0\n\n### 1\n\nQuestion\n\nextract specific values in a matrix\nHello I have a 104*14 matrix and I need to get some of the values in column 10 of this matrix which corresponds to values in fi...\n\n4 years ago | 1 answer | 0\n\n### 1\n\nQuestion\n\nHey I am facing the 'Index exceeds matrix dimensions' Error while using xlsread [WL_time,dummy1,dummy2]=xlsread(WL_data(nn).na...\n\n4 years ago | 1 answer | 0\n\n### 1\n\nDatetime, parsing problem\nThank you very much\n\n4 years ago | 0\n\nDatetime, parsing problem\nThank you very much, but here some of my fıles can be processed however in all of them there are 12 digits for seconds\n\n4 years ago | 0\n\nQuestion\n\nDatetime, parsing problem\nI am using Datetime function for a set of 104 excel data, but in some of the data I face with parsing error, although all the fi...\n\n4 years ago | 3 answers | 0" ]
[ null, "https://au.mathworks.com/responsive_image/150/150/0/0/0/cache/matlabcentral/profiles/9235259_1522131667808_DEF.jpg", null, "https://au.mathworks.com/matlabcentral/profile/badges/Thankful_5.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8306492,"math_prob":0.4726792,"size":6447,"snap":"2023-40-2023-50","text_gpt3_token_len":1847,"char_repetition_ratio":0.18407574,"word_repetition_ratio":0.13036303,"special_character_ratio":0.2886614,"punctuation_ratio":0.11877667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97950894,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-24T20:56:19Z\",\"WARC-Record-ID\":\"<urn:uuid:1267f0f8-e3b7-41d2-8ce6-a0dabe8e7b11>\",\"Content-Length\":\"103132\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dc044996-e0c9-4248-9189-e2d169b4da59>\",\"WARC-Concurrent-To\":\"<urn:uuid:9d603826-4e06-4495-9d0a-2808620fcbdb>\",\"WARC-IP-Address\":\"23.48.20.88\",\"WARC-Target-URI\":\"https://au.mathworks.com/matlabcentral/profile/authors/9235259\",\"WARC-Payload-Digest\":\"sha1:R5SACY6F2N36DIGRS6LA2WC647H3W55V\",\"WARC-Block-Digest\":\"sha1:OVYFSX6D4OQ6G7GWXPHFRRK5SLHYP2ZD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506669.30_warc_CC-MAIN-20230924191454-20230924221454-00567.warc.gz\"}"}
http://www.python88.com/topic/151618
[ "", null, "注册    登录", null, "创作新主题\n\nPython\n python开源   Django   Python   DjangoApp   pycharm\nDATA\n docker   Elasticsearch\n\n 问与答   闲聊   招聘   翻译   创业   分享发现   分享创造   求职   区块链   支付之战\nWEB开发\n linux   MongoDB   Redis   DATABASE   NGINX   其他Web框架   web工具   zookeeper   tornado   NoSql   Bootstrap   js   peewee   Git   bottle   IE   MQ   Jquery\n\nPython88.com\n 反馈   公告   社区推广\n\nPython社区  »  Python\n\n# 从 0 到 1 实现神经网络(Python)\n\n重磅干货,第一时间送达\n作者 | Victor Zhou 链接 | https://victorzhou.com/blog/intro-to-neural-networks/有个事情可能会让初学者惊讶:神经网络模型并不复杂!『神经网络』这个词让人觉得很高大上,但实际上神经网络算法要比人们想象的简单。这篇文章完全是为新手准备的。我们会通过用Python从头实现一个神经网络来理解神经网络的原理。本文的脉络是:介绍了神经网络的基本结构——神经元;在神经元中使用S型激活函数;\n\n就是以向量的形式表示。现在,我们给这个神经元一个输入\n\n。我们用点积来表示:", null, "把神经元组装成网络", null, "所谓的神经网络就是一堆神经元。这就是一个简单的神经网络:", null, "这个网络有两个输入,一个有两个神经元( 和 )的隐藏层,以及一个有一个神经元( ) )的输出层。要注意, 的输入就是 和\n\n的输出,这样就组成了一个网络。隐藏层就是输入层和输出层之间的层,隐藏层可以是多层的。", null, "例子:前馈", null, "我们继续用前面图中的网络,假设每个神经元的权重都是  ,截距项也相同  ,激活函数也都是S型函数。分别用", null, "接下来我们实现这个神经网络的前馈机制,还是这个图:", null, "import numpy as np# ... code from previous section hereclass OurNeuralNetwork:  '''  A neural network with:    - 2 inputs    - a hidden layer with 2 neurons (h1, h2)    - an output layer with 1 neuron (o1)  Each neuron has the same weights and bias:    - w = [0, 1]    - b = 0  '''  def __init__(self):    weights = np.array([0, 1])    bias = 0    # 这里是来自前一节的神经元类    self.h1 = Neuron(weights, bias)    self.h2 = Neuron(weights, bias)    self.o1 = Neuron(weights, bias)  def feedforward(self, x):    out_h1 = self.h1.feedforward(x)    out_h2 = self.h2.feedforward(x)    # o1的输入是h1和h2的输出    out_o1 = self.o1.feedforward(np.array([out_h1, out_h2]))    return out_o1network = OurNeuralNetwork()x = np.array([2, 3])print(network.feedforward(x)) # 0.7216325609518421结果正确,看上去没问题。", null, "训练神经网络 第一部分", null, "现在有这样的数据:姓名体重(磅)身高 (英寸)性别Alice13365FBob160\n\n72MCharlie15270MDiana12060F接下来我们用这个数据来训练神经网络的权重和截距项,从而可以根据身高体重预测性别:", null, "我们用0和1分别表示男性(M)和女性(F),并对数值做了转化:姓名体重 (减 135)身高 (减 66)性别Alice-2-11Bob2560Charlie1740Diana-15-61我这里是随意选取了135和66来标准化数据,通常会使用平均值。", null, "损失", null, "就是1(男性)。 变量的预测值。这就是我们网络的输出。\n\n被称为方差(squared error)。我们的损失函数就是所有方差的平均值。预测效果越好,损失就越少。更好的预测 = 更少的损失!训练网络 = 最小化它的损失。", null, "损失计算例子", null, "假设我们的网络总是输出0,换言之就是认为所有人都是男性。损失如何?Namey_truey_pred(y_true - y_pred)^2Alice101Bob000Charlie000\n\nDiana101", null, ",所以我们可以计算现在让我们来搞定\n\n。\n\nAlice-2-11把所有的权重和截距项都分别初始化为1和0。在网络中做前馈计算:网络的输出是  ,对于Male(0)或者Female(1)都没有太强的倾向性。算一下\n\n。搞定!这个结果的意思就是增加也会随之轻微上升。", null, "训练:随机梯度下降", null, "现在训练神经网络已经万事俱备了!我们会使用名为随机梯度下降法的优化算法来优化网络的权重和截距项,实现损失的最小化。核心就是这个更新等式:\n\n是一个常数,被称为学习率,用于调整训练的速度。我们要做的就是用  减去\n\n是负数,  会变大,  会上升。如果我们对网络中的每个权重和截距项都这样进行优化,损失就会不断下降,网络性能会不断上升。我们的训练过程是这样的:从我们的数据集中选择一个样本,用随机梯度下降法进行优化——每次我们都只针对一个样本进行优化;计算每个权重或截距项对损失的偏导(例如 、\n\n等);用更新等式更新每个权重和截距项;重复第一步;", null, "代码:一个完整的神经网络", null, "我们终于可以实现一个完整的神经网络了:姓名身高 (减 135)体重 (减 66)GenderAlice-2-11Bob2560Charlie1740Diana\n\n-15-61", null, "import numpy as npdef sigmoid(x):  # Sigmoid activation function: f(x) = 1 / (1 + e^(-x))  return 1 / (1 + np.exp(-x))def deriv_sigmoid(x):  # Derivative of sigmoid: f'(x) = f(x) * (1 - f(x))  fx = sigmoid(x)  return fx * (1 - fx)def mse_loss(y_true, y_pred):  # y_true和y_pred是相同长度的numpy数组。  return ((y_true - y_pred) ** 2).mean()class OurNeuralNetwork:  '''  A neural network with:    - 2 inputs    - a hidden layer with 2 neurons (h1, h2)    - an output layer with 1 neuron (o1)  *** 免责声明 ***:    下面的代码是为了简单和演示,而不是最佳的。    真正的神经网络代码与此完全不同。不要使用此代码。    相反,读/运行它来理解这个特定的网络是如何工作的。  '''  def __init__(self):    # 权重,Weights    self.w1 = 
np.random.normal()    self.w2 = np.random.normal()    self.w3 = np.random.normal()    self.w4 = np.random.normal()    self.w5 = np.random.normal()    self.w6 = np.random.normal()    # 截距项,Biases    self.b1 = np.random.normal()    self.b2 = np.random.normal()    self.b3 = np.random.normal()  def feedforward(self, x):    # X是一个有2个元素的数字数组。    h1 = sigmoid(self.w1 * x + self.w2 * x + self.b1)    h2 = sigmoid(self.w3 * x + self.w4 * x + self.b2)    o1 = sigmoid(self.w5 * h1 + self.w6 * h2 + self.b3)    return o1  def train(self, data, all_y_trues):    '''    - data is a (n x 2) numpy array, n = # of samples in the dataset.    - all_y_trues is a numpy array with n elements.      Elements in all_y_trues correspond to those in data.    '''    learn_rate = 0.1    epochs = 1000 # 遍历整个数据集的次数    for epoch in range(epochs):      for x, y_true in zip(data, all_y_trues):        # --- 做一个前馈(稍后我们将需要这些值)        sum_h1 = self.w1 * x + self.w2 * x + self.b1        h1 = sigmoid(sum_h1)        sum_h2 = self.w3 * x + self.w4 * x + self.b2        h2 = sigmoid(sum_h2)        sum_o1 = self.w5 * h1 + self.w6 * h2 + self.b3        o1 = sigmoid(sum_o1)        y_pred = o1        # --- 计算偏导数。        # --- Naming: d_L_d_w1 represents \"partial L / partial w1\"        d_L_d_ypred = -2 * (y_true - y_pred)        # Neuron o1\n\nd_ypred_d_w5 = h1 * deriv_sigmoid(sum_o1)        d_ypred_d_w6 = h2 * deriv_sigmoid(sum_o1)        d_ypred_d_b3 = deriv_sigmoid(sum_o1)        d_ypred_d_h1 = self.w5 * deriv_sigmoid(sum_o1)        d_ypred_d_h2 = self.w6 * deriv_sigmoid(sum_o1)        # Neuron h1        d_h1_d_w1 = x * deriv_sigmoid(sum_h1)        d_h1_d_w2 = x * deriv_sigmoid(sum_h1)        d_h1_d_b1 = deriv_sigmoid(sum_h1)        # Neuron h2        d_h2_d_w3 = x * deriv_sigmoid(sum_h2)        d_h2_d_w4 = x * deriv_sigmoid(sum_h2)        d_h2_d_b2 = deriv_sigmoid(sum_h2)        # --- 更新权重和偏差        # Neuron h1        self.w1 -= learn_rate * d_L_d_ypred * d_ypred_d_h1 * d_h1_d_w1        self.w2 -= learn_rate * d_L_d_ypred * d_ypred_d_h1 * d_h1_d_w2        self.b1 -= learn_rate * d_L_d_ypred * d_ypred_d_h1 * d_h1_d_b1        # Neuron h2        self.w3 -= learn_rate * d_L_d_ypred * d_ypred_d_h2 * d_h2_d_w3        self.w4 -= learn_rate * d_L_d_ypred * d_ypred_d_h2 * d_h2_d_w4        self.b2 -= learn_rate * d_L_d_ypred * d_ypred_d_h2 * d_h2_d_b2        # Neuron o1        self.w5 -= learn_rate * d_L_d_ypred * d_ypred_d_w5        self.w6 -= learn_rate * d_L_d_ypred * d_ypred_d_w6        self.b3 -= learn_rate * d_L_d_ypred * d_ypred_d_b3      # --- 在每次epoch结束时计算总损失       if epoch % 10 == 0:        y_preds = np.apply_along_axis(self.feedforward, 1, data)        loss = mse_loss(all_y_trues, y_preds)        print(\"Epoch %d loss: %.3f\" % (epoch, loss))# 定义数据集data = np.array([  [-2, -1],  # Alice  [25, 6],   # Bob  [17, 4],   # Charlie  [-15, -6], # Diana])all_y_trues = np.array([  1, # Alice  0, # Bob  0, # Charlie  1, # Diana])# 训练我们的神经网络!network = OurNeuralNetwork()network.train(data, all_y_trues)随着网络的学习,损失在稳步下降。", null, "现在我们可以用这个网络来预测性别了:# 做一些预测emily = np.array([-7, -3]) # 128 磅, 63 英寸frank = np.array([20, 2])  # 155 磅, 68 英寸print(\"Emily: %.3f\" % network.feedforward(emily)) # 0.951 - Fprint(\"Frank: %.3f\" % network.feedforward(frank)) # 0.039 - M", null, "", null, "" ]
[ null, "http://www.python88.com/static/image/logo/python88.png", null, "http://www.python88.com/static/image/site/flat_compose.png", null, "http://img2.jintiankansha.me/get", null, "http://img2.jintiankansha.me/get", null, "http://img2.jintiankansha.me/get", null, "http://img2.jintiankansha.me/get", null, "http://img2.jintiankansha.me/get", null, "http://img2.jintiankansha.me/get", null, "http://img2.jintiankansha.me/get", null, "http://img2.jintiankansha.me/get", null, "http://img2.jintiankansha.me/get", null, "http://img2.jintiankansha.me/get", null, "http://img2.jintiankansha.me/get", null, "http://img2.jintiankansha.me/get", null, "http://img2.jintiankansha.me/get", null, "http://img2.jintiankansha.me/get", null, "http://img2.jintiankansha.me/get", null, "http://img2.jintiankansha.me/get", null, "http://img2.jintiankansha.me/get", null, "http://img2.jintiankansha.me/get", null, "http://img2.jintiankansha.me/get", null, "http://img2.jintiankansha.me/get", null, "http://img2.jintiankansha.me/get", null, "http://img2.jintiankansha.me/get", null, "http://img2.jintiankansha.me/get", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.68312675,"math_prob":0.96781427,"size":7860,"snap":"2022-40-2023-06","text_gpt3_token_len":4498,"char_repetition_ratio":0.13289206,"word_repetition_ratio":0.16267942,"special_character_ratio":0.35534352,"punctuation_ratio":0.15337889,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99524635,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-07T07:43:06Z\",\"WARC-Record-ID\":\"<urn:uuid:2446617d-add3-4e05-976e-ccee20f9080a>\",\"Content-Length\":\"415251\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ec653f07-0864-4971-be12-1fe7cde4e16d>\",\"WARC-Concurrent-To\":\"<urn:uuid:7957944b-cfa1-4859-b480-4fa280f15815>\",\"WARC-IP-Address\":\"118.178.134.249\",\"WARC-Target-URI\":\"http://www.python88.com/topic/151618\",\"WARC-Payload-Digest\":\"sha1:PBML35Q3YXYNDNZITUQQOOHG77EYGIED\",\"WARC-Block-Digest\":\"sha1:6WKRBCX5QF6T7NDWE2GZ56L7SB2SBXFY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500392.45_warc_CC-MAIN-20230207071302-20230207101302-00184.warc.gz\"}"}
https://www.colorhexa.com/4c4e4a
[ "# #4c4e4a Color Information\n\nIn a RGB color space, hex #4c4e4a is composed of 29.8% red, 30.6% green and 29% blue. Whereas in a CMYK color space, it is composed of 2.6% cyan, 0% magenta, 5.1% yellow and 69.4% black. It has a hue angle of 90 degrees, a saturation of 2.6% and a lightness of 29.8%. #4c4e4a color hex could be obtained by blending #989c94 with #000000. Closest websafe color is: #336633.\n\n• R 30\n• G 31\n• B 29\nRGB color chart\n• C 3\n• M 0\n• Y 5\n• K 69\nCMYK color chart\n\n#4c4e4a color description : Very dark grayish green.\n\n# #4c4e4a Color Conversion\n\nThe hexadecimal color #4c4e4a has RGB values of R:76, G:78, B:74 and CMYK values of C:0.03, M:0, Y:0.05, K:0.69. Its decimal value is 5000778.\n\nHex triplet RGB Decimal 4c4e4a `#4c4e4a` 76, 78, 74 `rgb(76,78,74)` 29.8, 30.6, 29 `rgb(29.8%,30.6%,29%)` 3, 0, 5, 69 90°, 2.6, 29.8 `hsl(90,2.6%,29.8%)` 90°, 5.1, 30.6 336633 `#336633`\nCIE-LAB 32.875, -1.678, 2.078 6.941, 7.48, 7.556 0.316, 0.34, 7.48 32.875, 2.671, 128.918 32.875, -0.88, 2.727 27.349, -2.561, 2.763 01001100, 01001110, 01001010\n\n# Color Schemes with #4c4e4a\n\n• #4c4e4a\n``#4c4e4a` `rgb(76,78,74)``\n• #4c4a4e\n``#4c4a4e` `rgb(76,74,78)``\nComplementary Color\n• #4e4e4a\n``#4e4e4a` `rgb(78,78,74)``\n• #4c4e4a\n``#4c4e4a` `rgb(76,78,74)``\n• #4a4e4a\n``#4a4e4a` `rgb(74,78,74)``\nAnalogous Color\n• #4e4a4e\n``#4e4a4e` `rgb(78,74,78)``\n• #4c4e4a\n``#4c4e4a` `rgb(76,78,74)``\n• #4a4a4e\n``#4a4a4e` `rgb(74,74,78)``\nSplit Complementary Color\n• #4e4a4c\n``#4e4a4c` `rgb(78,74,76)``\n• #4c4e4a\n``#4c4e4a` `rgb(76,78,74)``\n• #4a4c4e\n``#4a4c4e` `rgb(74,76,78)``\n• #4e4c4a\n``#4e4c4a` `rgb(78,76,74)``\n• #4c4e4a\n``#4c4e4a` `rgb(76,78,74)``\n• #4a4c4e\n``#4a4c4e` `rgb(74,76,78)``\n• #4c4a4e\n``#4c4a4e` `rgb(76,74,78)``\n• #262725\n``#262725` `rgb(38,39,37)``\n• #333431\n``#333431` `rgb(51,52,49)``\n• #3f413e\n``#3f413e` `rgb(63,65,62)``\n• #4c4e4a\n``#4c4e4a` `rgb(76,78,74)``\n• #595b56\n``#595b56` `rgb(89,91,86)``\n• #666863\n``#666863` `rgb(102,104,99)``\n• #72756f\n``#72756f` `rgb(114,117,111)``\nMonochromatic Color\n\n# Alternatives to #4c4e4a\n\nBelow, you can see some colors close to #4c4e4a. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #4d4e4a\n``#4d4e4a` `rgb(77,78,74)``\n• #4d4e4a\n``#4d4e4a` `rgb(77,78,74)``\n• #4c4e4a\n``#4c4e4a` `rgb(76,78,74)``\n• #4c4e4a\n``#4c4e4a` `rgb(76,78,74)``\n• #4c4e4a\n``#4c4e4a` `rgb(76,78,74)``\n• #4b4e4a\n``#4b4e4a` `rgb(75,78,74)``\n• #4b4e4a\n``#4b4e4a` `rgb(75,78,74)``\nSimilar Colors\n\n# #4c4e4a Preview\n\nThis text has a font color of #4c4e4a.\n\n``<span style=\"color:#4c4e4a;\">Text here</span>``\n#4c4e4a background color\n\nThis paragraph has a background color of #4c4e4a.\n\n``<p style=\"background-color:#4c4e4a;\">Content here</p>``\n#4c4e4a border color\n\nThis element has a border color of #4c4e4a.\n\n``<div style=\"border:1px solid #4c4e4a;\">Content here</div>``\nCSS codes\n``.text {color:#4c4e4a;}``\n``.background {background-color:#4c4e4a;}``\n``.border {border:1px solid #4c4e4a;}``\n\n# Shades and Tints of #4c4e4a\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #070807 is the darkest color, while #fdfdfc is the lightest one.\n\n• #070807\n``#070807` `rgb(7,8,7)``\n• #111211\n``#111211` `rgb(17,18,17)``\n• #1b1c1a\n``#1b1c1a` `rgb(27,28,26)``\n• #252624\n``#252624` `rgb(37,38,36)``\n• #2f302d\n``#2f302d` `rgb(47,48,45)``\n• #383a37\n``#383a37` `rgb(56,58,55)``\n• #424440\n``#424440` `rgb(66,68,64)``\n• #4c4e4a\n``#4c4e4a` `rgb(76,78,74)``\n• #565854\n``#565854` `rgb(86,88,84)``\n• #60625d\n``#60625d` `rgb(96,98,93)``\n• #696c67\n``#696c67` `rgb(105,108,103)``\n• #737670\n``#737670` `rgb(115,118,112)``\n• #7d807a\n``#7d807a` `rgb(125,128,122)``\n• #878a84\n``#878a84` `rgb(135,138,132)``\n• #91948e\n``#91948e` `rgb(145,148,142)``\n• #9a9d98\n``#9a9d98` `rgb(154,157,152)``\n• #a4a7a2\n``#a4a7a2` `rgb(164,167,162)``\n• #aeb0ac\n``#aeb0ac` `rgb(174,176,172)``\n• #b8bab6\n``#b8bab6` `rgb(184,186,182)``\n• #c2c3c0\n``#c2c3c0` `rgb(194,195,192)``\n• #cccdca\n``#cccdca` `rgb(204,205,202)``\n• #d5d6d4\n``#d5d6d4` `rgb(213,214,212)``\n• #dfe0de\n``#dfe0de` `rgb(223,224,222)``\n• #e9eae8\n``#e9eae8` `rgb(233,234,232)``\n• #f3f3f2\n``#f3f3f2` `rgb(243,243,242)``\n• #fdfdfc\n``#fdfdfc` `rgb(253,253,252)``\nTint Color Variation\n\n# Tones of #4c4e4a\n\nA tone is produced by adding gray to any pure hue. In this case, #4c4e4a is the less saturated color, while #4c9404 is the most saturated one.\n\n• #4c4e4a\n``#4c4e4a` `rgb(76,78,74)``\n• #4c5444\n``#4c5444` `rgb(76,84,68)``\n• #4c5a3e\n``#4c5a3e` `rgb(76,90,62)``\n• #4c6038\n``#4c6038` `rgb(76,96,56)``\n• #4c6533\n``#4c6533` `rgb(76,101,51)``\n• #4c6b2d\n``#4c6b2d` `rgb(76,107,45)``\n• #4c7127\n``#4c7127` `rgb(76,113,39)``\n• #4c7721\n``#4c7721` `rgb(76,119,33)``\n• #4c7d1b\n``#4c7d1b` `rgb(76,125,27)``\n• #4c8315\n``#4c8315` `rgb(76,131,21)``\n• #4c8810\n``#4c8810` `rgb(76,136,16)``\n• #4c8e0a\n``#4c8e0a` `rgb(76,142,10)``\n• #4c9404\n``#4c9404` `rgb(76,148,4)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #4c4e4a is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.526735,"math_prob":0.40906614,"size":3661,"snap":"2020-34-2020-40","text_gpt3_token_len":1738,"char_repetition_ratio":0.14137271,"word_repetition_ratio":0.007380074,"special_character_ratio":0.5473914,"punctuation_ratio":0.23344557,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.980521,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-01T05:03:27Z\",\"WARC-Record-ID\":\"<urn:uuid:e0281aeb-7e62-4ec8-8487-3bcf2e191178>\",\"Content-Length\":\"36212\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:52a89ad1-45b7-49e9-ba85-a55fb24ffdbd>\",\"WARC-Concurrent-To\":\"<urn:uuid:3e4880dc-791b-4e94-92ce-a1ed87ff26b6>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/4c4e4a\",\"WARC-Payload-Digest\":\"sha1:CLTZVQ7B2GHBFHAMP64TBKJG7VMZ7RSR\",\"WARC-Block-Digest\":\"sha1:OGJ3FZHIMLQIR5QAE6L7KGGBIFIPKIIQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600402130615.94_warc_CC-MAIN-20201001030529-20201001060529-00167.warc.gz\"}"}
http://blog.klipse.tech/clojure/2019/05/01/seqs-and-the-city.html
[ "## What’s a seq?\n\nIn Clojure, seq is a shorthand for the word sequence.\n\nThe most useless explanation that one could give to a sequence is:\n\nA sequence is an entity that implements the `Iseq` interface, made of two methods: `-first`, `-rest`.\n\nThe concept of sequence becomes a bit clearer when one explains the intent of the two methods of the `ISeq` interface:\n\n• `-first` retrieves the first element of the sequence\n• `-rest` returns the sequence without the first element\n\nRemark: In this article, we refer to the ways sequences and collections are designed and coded in Clojurescript, where protocols are at the core of the language. In Clojure, it works a bit differently, but the guiding principles are the same.\n\nNow that we are set with some definitions, we can move on to the fun part in which we are going to tell how Clojure makes this `seq` concept so seqsy.", null, "# Ghost Protocol\n\nFirst of all, you need to know that when you call a Clojure function that operates on a collection, this function is never a method of the data collection: It is usually a wrapper around the appropriate method.\n\nFor example, in a simple piece of code like:\n\n``````(first '(1 2 3))\n``````\n\nHave you noticed that the function we use is called `first` while the method of the PersistentList object is called `-first`?\n\nIf you look at the source code of `first`, things get clearer:\n\n``````(defn first\n\"Returns the first item in the collection. Calls seq on its\nargument. If coll is nil, returns nil.\"\n[coll]\n(when-not (nil? coll)\n(if (implements? ISeq coll)\n(-first coll)\n(let [s (seq coll)]\n(when-not (nil? s)\n(-first s))))))\n``````\n\nThis additional layer of abstraction between `-first` and `first` is here to handle cases where the collection doesn’t implement the `ISeq` protocol.\n\nFor instance, vectors and maps don’t implement the `Iseq` protocol, but we can call `first` on them.\n\n``````(implements? ISeq [1 2 3])\n``````\n``````(first [1 2 3])\n``````\n\nBut we cannot call `-first` on a vector or on a map:\n\n``````(-first [1 2 3])\n``````\n\n`-first` is available only for collections that implmement `ISeq` e.g. a list:\n\n``````(-first '(1 2 3))\n``````\n\nWARNING: In your application code, it’s not a good practice to use `-first` or any other methods of any protocol. They are considered as implementation details and you should always use the high level functions that Clojure provides.\n\n# Confusion of Feelings\n\nYou might be confused by our revelation that vectors and maps are not sequences because you know that we can call `map` on vectors and maps.\n\nThe explanation is subtle: vectors and maps are not sequences but they are seqable, meaning that they can be converted to sequences. If you look at the source code for map, you’ll see that the first thing the code does is to convert the collection into a sequence by calling `seq`.\n\nFor sure, the return value of `seq` implements the `ISeq` protocol:\n\n``````(implements? ISeq (seq [1 2 3]))\n``````\n\nHow do you make a collection seqable?\n\nYou make a collection seqable by implementing the `ISeqable` protocol which is made of a single method: `-seq`. But in order to convert a collection to a sequence, you are advised to call the `seq` function rather than the `-seq` method. (Same idea as `first` and `-first`.)\n\n# Happy days\n\nThe cool thing is that once a collection is seqable, all the data manipulation functions of the core library work on this collection. 
For instance, let’s create our own seqable numbers:\n\n``````(deftype SeqableNum [n]\nISeqable\n(-seq [this] (range n)))\n``````\n\nNow, we can `map` on a seqable number:\n\n``````(map inc (SeqableNum. 10))\n``````" ]
[ null, "http://blog.klipse.tech/assets/seqs-city.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.873094,"math_prob":0.93070054,"size":3630,"snap":"2019-26-2019-30","text_gpt3_token_len":864,"char_repetition_ratio":0.15361279,"word_repetition_ratio":0.015797788,"special_character_ratio":0.22754821,"punctuation_ratio":0.09916201,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95889974,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-19T11:10:42Z\",\"WARC-Record-ID\":\"<urn:uuid:0581a151-7656-43ec-9e97-35e6e059dd58>\",\"Content-Length\":\"22575\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fed8e658-38ac-4449-8718-8fe2107d25ad>\",\"WARC-Concurrent-To\":\"<urn:uuid:7d63a27a-2e45-4f31-970b-c2d15272f1f5>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"http://blog.klipse.tech/clojure/2019/05/01/seqs-and-the-city.html\",\"WARC-Payload-Digest\":\"sha1:RUWXEY2NDHO2H4KJJU5C2DCHCHPHE5KE\",\"WARC-Block-Digest\":\"sha1:TVNAHBODDXD45UDXVAQIFN64657K57HM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195526210.32_warc_CC-MAIN-20190719095313-20190719121313-00293.warc.gz\"}"}
https://www.101computing.net/arithmetic-quiz/
[ "# Arithmetic Quiz", null, "For this challenge we will create a maths quiz consisting of ten arithmetic questions. Each question will be randomly generated using two random operands between 1 and 12 and one random operator, either + (addition), – (subtraction) or x (multiplication). We will not include the division operator as this could result in a decimal value.\n\nWe will include a scoring system to our quiz. The player will be asked one question at a time and score 10 points per correct answer and lose 5 points per incorrect answer.\n\n#### Complete the code\n\nWe have started with just a few lines to generate a random question." ]
[ null, "https://www.101computing.net/wp/wp-content/uploads/arithmetic-quiz-calculator.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90560544,"math_prob":0.9701422,"size":1083,"snap":"2021-43-2021-49","text_gpt3_token_len":235,"char_repetition_ratio":0.13901761,"word_repetition_ratio":0.0,"special_character_ratio":0.22530009,"punctuation_ratio":0.08530806,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9916684,"pos_list":[0,1,2],"im_url_duplicate_count":[null,10,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-23T13:19:41Z\",\"WARC-Record-ID\":\"<urn:uuid:1d592e25-baad-491f-a38e-08b17daacfb5>\",\"Content-Length\":\"42894\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:72f1e532-c0c5-4ac1-a4d0-2cba9d87f3e2>\",\"WARC-Concurrent-To\":\"<urn:uuid:c98e9fa2-a6aa-444f-9567-79783afaa380>\",\"WARC-IP-Address\":\"109.203.118.7\",\"WARC-Target-URI\":\"https://www.101computing.net/arithmetic-quiz/\",\"WARC-Payload-Digest\":\"sha1:LDDLOHHBR4FAE2SFFEQT6MUJ2KE4OHO2\",\"WARC-Block-Digest\":\"sha1:UHIVWFJ6TJEVPRHUNN6MEEMJ7NMLOVK2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585696.21_warc_CC-MAIN-20211023130922-20211023160922-00062.warc.gz\"}"}
https://jitm.ut.ac.ir/article_86486.html
[ "# Q-Learning Enabled Green Communication in Internet of Things\n\nDocument Type : Research Paper\n\nAuthors\n\n1 Ph.D. Scholar, School of Computer & Systems Sciences, Jawaharlal Nehru University, New Delhi-110067,\n\n2 Assistant Professor, School of Computer & Systems Sciences, Jawaharlal Nehru University, New Delhi-110067.\n\n3 Ph.D. Scholar, School of Computer & Systems Sciences, Jawaharlal Nehru University, New Delhi-110067.\n\n4 Ph.D., School of Computer & Systems Sciences, Jawaharlal Nehru University, New Delhi-110067.\n\nAbstract\n\nLimited energy capacity, physical distance between two nodes and the stochastic link quality are the major parameters in the selection of routing path in the internet of things network. To alleviate the problem of stochastic link quality as channel gain reinforcement based Q-learning energy balanced routing is presented in this paper. Using above mentioned parameter an optimization problem has been formulated termed as reward or utility of network. Further, formulated optimization problem converted into Markov decision problem (MDP) and their state, value, action and reward function are described. Finally, a QRL algorithm is presented and their time complexity is analyses. To show the effectiveness of proposed QRL algorithm extensive simulation is performed in terms of convergence property, energy consumption, residual energy and reward with respect to state-of-art-algorithms.\n\nKeywords\n\n## Introduction\n\nThe Internet of Things (IoT) is the network of physical objects-devices, vehicles, buildings and other items embedded with electronics, software, sensors and wireless network connectivity that enable these objects to collect and exchange data (Akyildiz IF et al. (2002), Kashyap, P. K. (2019) &  Kashyap, P. K., Kumar, S., and Jaiswal, A. (2019) ). Each of these smart device is uniquely identified by Internet address protocol (IP) to forward the data packet from source to destination (F. Bouabdallah et al. (2008)). The IoT has huge application in almost all the sectors of human being such as healthcare facilities, industrial organization, vehicular network, military operations, business organization and many more (Ishmanov F et al. (2011) & Ahmed AA and Mohammed Y, (2007)). These smart devices have limited battery power to perform complex computation and forward the data packet. Due to tremendous upsurge in the connected number of ubiquitous devices, there is large number of data packets travelled in the IoT network. So, there is need to choose the energy balanced routing path to forward the data packet so that lifetime of the network is improved.\n\nIn order to support various complicated application, IoT nodes have to perform the reliable operation with their limited energy, computational resources and bandwidth effectively so that it reduce the time-delay for data transmission using shortest routing path, transmission errors and ultimately improve the lifetime of the network. In this regard software define network is combined with the IoT network that separate the hardware and software control operation efficiently to cope with the mentioned challenged (J. Zhou, et al. (2016)).  Therefore, dynamic routing rules for the IoT nodes provide novel data forwarding strategy, but lack in the presence of stochastic nature of channel path.\n\nMachine learning (ML) techniques have been extensively used in the IoT network to finding the optimal route for data forwarding in the recent decade (C. Guestrin et al. (2004),   Kashyap, P.K, Kumar, S. (2019), & A. 
Machine learning gives the IoT network the ability to learn through experience, and reinforcement learning (RL) relies on a learning agent that improves its behaviour based on the rewards received for the actions it takes. By exploiting its gained knowledge and exploring the environment, the RL agent maximizes its reward (R. S. Sutton (2018)). Reinforcement learning techniques require low computational resources and little implementation effort while producing effective results with high accuracy. The output of the system is nearly optimal and adapts flexibly to changes in the environment without prior knowledge of the network. Thus, reinforcement learning and Q-learning are well-suited techniques for routing approaches in the IoT network, building paths with low redundancy.\n\nT. Hu and Y. Fei (2010) proposed the Q-learning-based algorithm QELAR for selecting the next hop in the routing path. The selection of the next hop depends upon the residual energy and the node density of the adjacent nodes, so that the lifetime of the IoT network is improved by distributing energy consumption evenly. N. Javaid, O. A. Karim, A. Sher, M. Imran, A. U. H. Yasar, and M. Guizani (2003) proposed multi-sink path selection for data transmission that uses local information, such as residual energy and physical distance, to update the Q-function. W. Guo, C. Yan, and T. Lu (2019) proposed a delay-aware routing algorithm for underwater sensor networks using Q-learning, in which the next hop is selected greedily with respect to residual energy and the minimum propagation delay estimated from physical distance. In (Z. Jin, Y. Ma, Y. Su, S. Li, and X. Fu (2017)), source nodes broadcast topology information in the network; each node then estimates its residual energy, the distances between nodes and the distance to the destination node, and feeds this information into the reward of the Q-learning function. A virtual topology route is then created for data transmission, and finally data are sent from intermediate nodes to the destination node. However, these algorithms assume a constant hop length, which limits their computation, and they fail under the stochastic nature of channel state information. Moreover, the edge lengths of a shortest routing path, viewed in graph terms, are dynamic, whereas the above algorithms take them as constant, which is not the case in a real environment.\n\nUnder these circumstances, there is a need for an energy-balanced routing algorithm based on a reinforcement learning approach that accounts for residual energy, physical distance and the link quality of the channel during data transmission. The major contributions of the paper are as follows:\n\n• Firstly, the system model, consisting of the network setting, the energy consumption and residual energy model, and the energy-balanced routing problem in the IoT network, is presented to bring out their primary functions.\n• Secondly, an optimization problem is modelled according to Q-learning, and a QRL-based energy-balanced routing algorithm is presented. Further, the time complexity of the presented algorithm is analysed.\n• Finally, extensive simulations are presented to check the effectiveness of the presented algorithm in terms of convergence rate, energy consumption, edge length and residual energy with respect to state-of-the-art algorithms.\n\nThe rest of the paper is organized as follows. Section II describes the system model used in the IoT network with a graph-based approach.
Section III explains the Q-learning based routing protocol in the IoT network. In Section IV, simulation results are analysed for the proposed algorithm and the state-of-the-art algorithms. Finally, the conclusion of this paper is presented along with future scope in Section V.\n\n## System Model\n\nNetwork Setting\n\nWe consider an energy-constrained Internet of Things network with a finite number of sensor nodes randomly deployed in a given monitoring area. Each node in the network can only communicate with the neighbouring nodes that are within its transmission range. Data transmission from one node to another takes place in synchronised time slots. Here, it is considered that each data transmission from a source node to the destination takes place through a number of intermediate nodes present along the route in the network. Each node has a single antenna and a finite battery which can be recharged periodically, and works in half-duplex mode.\n\nThe wireless connections between IoT nodes are affected by many factors, such as the residual energy of nodes, physical distance and channel gain, which make the edge lengths and the network state dynamic in many scenarios. We represent the network as a graph $G = (V, E, L)$ with stochastic edge lengths, where $V$ is the set of vertices, i.e. the sensor nodes, $E = (e_{ij})$, with $v_i, v_j \\in V$, is the set of edges, and $L$ represents the probability distribution of each edge length. An edge exists between vertices $v_i$ and $v_j$ in the graph only when node $j$ is a neighbour of node $i$. The nodes in the transmission range of a node constitute its neighbourhood. The length of edge $(v_i, v_j)$ is denoted $l(v_i, v_j)$ and is considered a random variable. The channel between neighbouring nodes is assumed to follow a quasi-static block Rayleigh fading model, and the channel gain between neighbouring nodes $v_i$ and $v_j$ is modelled as a Markov chain whose transition probabilities at any time instant $t$ are unknown to the network and the sensors.\n\nEnergy Consumption and Residual Energy Model\n\nIn an IoT network, energy is consumed for carrying out sensing, processing and communication (transmitting/receiving) activities. Of these, data communication consumes most of the energy of a node, so only the energy consumed for communication is considered during routing. For simplicity, only the energy consumed for transmissions is accounted for, and the energy spent for receiving is ignored, as idle and receiving nodes consume almost the same amount of energy. According to the widely used first-order radio model, for a message of $b$ bits, the energy consumed for its transmission from node $v_i$ to node $v_j$ with edge length $l(v_i, v_j) = l$ is calculated as\n\n$E_{tx}(b, l) = b \\cdot E_{elec} + b \\cdot \\varepsilon_{fs} \\cdot l^2, \\quad l < l_0$ (1)\n\n$E_{tx}(b, l) = b \\cdot E_{elec} + b \\cdot \\varepsilon_{mp} \\cdot l^4, \\quad l \\ge l_0$ (2)\n\nAfter a data transmission, the residual energy $E_{res}^{t}$ of a node at any hop, at time slot $t$, can be evaluated as\n\n$E_{res}^{t} = E_{res}^{t-1} - E_{tx}(b, l), \\quad E_{res}^{0} = E_{max}$ (3)\n\nwhere $E_{max}$ is the maximum battery capacity of a node and $l_0 = \\sqrt{\\varepsilon_{fs}/\\varepsilon_{mp}}$ is the threshold distance, which determines the power-loss model to be used, i.e. whether the free-space model or the multipath fading model applies. The free-space model is utilized when the distance between sender and receiver is less than the threshold distance; otherwise the multipath fading model is applied for calculating the energy consumed for transmission. $E_{elec}$ is the energy requirement of the transmitter and receiver circuitry, $\\varepsilon_{fs}$ and $\\varepsilon_{mp}$ are the amplifier energies needed to attain a satisfactory signal-to-noise ratio (SNR), and $l$ is the communication edge length.
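To make the energy model concrete, the following Python sketch implements Equations (1)-(3); the constants are illustrative placeholders rather than the paper's exact settings.

```python
import math

E_ELEC = 50e-9        # J/bit, transmitter/receiver circuitry (placeholder value)
EPS_FS = 10e-12       # J/bit/m^2, free-space amplifier (placeholder value)
EPS_MP = 0.0013e-12   # J/bit/m^4, multipath amplifier (placeholder value)
L0 = math.sqrt(EPS_FS / EPS_MP)   # threshold distance l0

def tx_energy(bits, dist):
    """Transmission energy for `bits` over edge length `dist` (Eqs. (1)-(2))."""
    if dist < L0:                 # free-space model below the threshold
        return bits * E_ELEC + bits * EPS_FS * dist ** 2
    return bits * E_ELEC + bits * EPS_MP * dist ** 4   # multipath model

def update_residual(e_res, bits, dist):
    """Residual energy of a node after one transmission (Eq. (3))."""
    return e_res - tx_energy(bits, dist)

print(update_residual(0.5, 4000, 15.0))   # node starting from 0.5 J
```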
Energy Balanced Routing Problem in IoT\n\nThe sensor nodes in the IoT network capture the desired data and forward it to the destination node. Due to resource constraints in WSNs, such as short communication range, limited processing capability and limited battery power, a source node communicates with the destination indirectly through its neighbours (multi-hop) rather than directly, as this leads to higher energy efficiency. In a multi-hop communication environment, a routing algorithm is needed to find a communication path from the source node to the destination node. Multi-hop communication overcomes the energy inefficiency and short communication range of direct communication, but it leads to imbalanced energy consumption in the network (Khan, Tayyab, et al. (2019)), as the intermediate nodes deplete their batteries faster while relaying the data of other nodes; the nodes in the vicinity of the sink are the most affected ones. Therefore, the routing algorithm needs to find a path which balances the energy consumption of the nodes in the network, so that all the nodes deplete their energy at nearly the same time, which in turn increases network lifetime. A routing path $rp$ in the network graph is defined as a sequence of distinct sensors $rp = (v_1, v_2, \\dots, v_k)$ such that $v_i$ and $v_{i+1}$ are adjacent vertices for $1 \\le i < k$, with $v_1$ the source node and $v_k$ the destination node; a path with $k$ sensor nodes has $k-1$ hops. The main aim of this paper is to find an optimal routing path between a source and a destination node in order to minimize the total energy consumption and transmission delay for reliable communication.\n\n## Q-Learning Based Routing Protocol in IoT\n\nIn this section, we propose a Q-learning based efficient routing protocol to find an optimal and reliable route from a source to a destination node in order to reduce the total energy consumption and minimize the total transmission delay (based on shortest distance), which can ultimately improve the network lifetime.\n\nProblem Modelling\n\nThe stochastic optimal routing-path finding problem is modelled as an MDP, and Q-learning updating rules are used to learn an optimal policy. Here, the learning agent selects an action in order to interact with the environment (the stochastic graph) and reach the next neighbouring node on the route, so as to obtain an optimal path from source to destination that maximizes the expected reward. The MDP can be defined as follows:\n\nState $s$: Each sensor node in the network, together with the corresponding channel gains towards its neighbour nodes, is modelled as a state.
The current sensor node in the routing-path-finding process, together with the current channel gain, is considered the current state.\n\nAction $a$: All the outgoing-link neighbour nodes constitute the action set of a state.\n\nTransition $s \\to s'$: The next state is determined by the action selected in the current state.\n\nReward $r$: The reward for a state-action pair $(s, a)$ is calculated using a utility value which combines the nodes' residual energy, edge length, energy consumption and link quality.\n\nDefinition 1 (edge length: distance between nodes): The following formula is used to compute the edge length between any two nodes:\n\n$l(v_i, v_j) = \\sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}$ (4)\n\nwhere $(x_i, y_i)$ and $(x_j, y_j)$ are the coordinates of nodes $v_i$ and $v_j$ respectively.\n\nDefinition 2 (edge-length based path): A data packet is transferred from a source node to the destination through $n$ hops towards the destination node. The optimal routing path based on edge length is represented as\n\n$rp^{*}_{l} = \\arg\\min_{rp} \\sum_{h=1}^{n} l_h$ (5)\n\nwhere $l_h$ is the edge length between the two nodes at the $h$-th hop. The path with minimum total edge length guarantees that the transmission delay is minimum.\n\nDefinition 3 (routing path based on energy): A data packet is transferred from a source node to the destination through $n$ hops towards the destination node. The optimal routing path based on residual energy is represented as\n\n$rp^{*}_{E} = \\arg\\max_{rp} \\sum_{h=1}^{n} E^{res}_{h}$ (6)\n\nwhere $E^{res}_{h}$ is the residual energy of the transmitting node at the $h$-th hop, computed using Eq. (3).\n\nDefinition 4 (routing path based on link quality): A data packet is transferred from a source node to the destination through $n$ hops towards the destination node. The optimal routing path based on better link quality is represented as\n\n$rp^{*}_{LQ} = \\arg\\max_{rp} \\sum_{h=1}^{n} LQ_h$ (7)\n\nwhere $LQ_h$ is the normalized link quality at the $h$-th hop, given as\n\n$LQ_h = SS_h / SS_{max}$ (8)\n\nwhere $SS_h$ and $SS_{max}$ are the signal strength at the $h$-th hop and the maximum signal strength respectively. Thus, the reward (utility) obtained for the transition from state $s_t$ to $s_{t+1}$ after taking an action at time slot $t$, being at the $h$-th hop, can be computed as\n\n$r_t = \\omega_1 \\frac{E^{res}_{h}}{E_{max}} + \\omega_2 LQ_h - \\omega_3 \\frac{l_h}{l_{max}} - \\omega_4 \\frac{E_{tx}(b, l_h)}{E_{max}}$ (9)\n\nwhere $\\omega_1$, $\\omega_2$, $\\omega_3$ and $\\omega_4$ are prescribed positive weight parameters which reflect the importance of residual energy, link quality, edge length and energy consumption respectively, and $l_{max}$ is the maximum edge length among the neighbours of the transmitting node. The weight parameters are closely related to each other: if $\\omega_3$ is set to zero, the presented model emphasizes maximization of the residual energy along the routing path and the transmission delay is ignored, i.e. the reward becomes independent of the number of intermediate hops; if $\\omega_1$ is set to zero, the presented algorithm pays more attention to reducing the packet transmission delay and ignores the residual energy of the sensor nodes. In both cases the lifetime of the network is not optimal, because of the trade-off between $\\omega_1$ and $\\omega_3$; we can therefore adjust these parameter values according to our needs.\n\nTo find an optimal routing path, the learning agent initially perceives the starting state $s_0$, i.e. the source node and the channel gains of the links towards its neighbours, and then selects actions using the current policy until the agent arrives at the destination node. The state value is updated using the temporal-difference method. The updating rule for Q-learning is\n\n$Q(s_t, a_t) \\leftarrow Q(s_t, a_t) + \\zeta \\left[ r_t + \\gamma \\max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \\right]$ (10)\n\nwhere $\\zeta \\in [0,1]$ denotes the learning rate and $\\gamma$ is the discount factor.
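A small Python sketch of the reward in Eq. (9) and the temporal-difference update in Eq. (10) is given below; the weight values and the exact normalisations are illustrative assumptions, and only the overall structure follows the paper.

```python
W1, W2, W3, W4 = 0.3, 0.3, 0.2, 0.2   # weights for E_res, LQ, edge length, E_tx
ZETA, GAMMA = 0.7, 0.92               # learning rate and discount factor

def reward(e_res, e_max, lq, edge_len, l_max, e_tx):
    # Positive terms favour high residual energy and good link quality;
    # negative terms penalise long edges and high transmission energy (Eq. (9)).
    return (W1 * e_res / e_max + W2 * lq
            - W3 * edge_len / l_max - W4 * e_tx / e_max)

def q_update(Q, s, a, r, s_next, next_actions):
    # Temporal-difference update of Eq. (10); Q is a dict keyed by (state, action).
    best_next = max((Q.get((s_next, a2), 0.0) for a2 in next_actions), default=0.0)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + ZETA * (r + GAMMA * best_next - old)
```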
To find an optimal routing path, the learning agent initially perceives the starting state $s_0$, i.e. the source node and the channel gains of the links towards its neighbours, and then repeatedly selects an action using the current policy until it arrives at the destination node. The state-action value is updated using the temporal-difference method; the Q-learning updating rule is

$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \zeta \left[ r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]$        (10)

where $\zeta \in [0,1]$ denotes the learning rate and $\gamma$ is the discount factor. Q-learning adopts an $\varepsilon$-greedy policy for action selection, i.e. it selects the action with the maximum state-action value with probability $1-\varepsilon$ and a random action with probability $\varepsilon$. The main aim of the learning agent is to find an optimal policy $\pi^{*}$ for selecting an optimal routing path; the optimal policy is the one whose state-action values are greater than those of any other policy. The optimal routing problem can be expressed as

$rp^{*} = \arg\max_{rp \in P} \sum_{h=1}^{n} r_h$        (11)

The problem formulated in (11) can be optimally solved by the Q-learning method: the goal of the proposed routing problem is to maximize the total reward over all routing paths starting at the source node, where $rp \in P$ and $P$ is the set of all paths in $G$ starting at the source node and ending at the destination.

In the routing process, each node in the network keeps a routing table that is used to select the next hop for data transmission. The routing table contains information about the next possible nodes through which every reachable destination in the network can be reached, and it is updated after each data transmission, based on the obtained reward, to record which nodes are good candidates for further data forwarding.

Q-Learning based Routing Algorithm

In this section, we present a learning based routing algorithm, specifically using the Q-learning approach. The line numbers in the listing below are referred to in the walkthrough and in the complexity analysis.

Algorithm QLRA

1. Initialize the Q-table with zeros
2. Initialize the path set ᵽ ← ∅
3. D ← destination node
4. Episode ← 0
5. For Episode < MaxEpisode do
6.    s_t ← initial state (source node and current channel gain)   # current state at time slot t
7.    Sr ← source node
8.    ᵽ ← {Sr}
9.    While Sr ≠ D do
10.       Action_set ← neighbour nodes of Sr
11.       Z ← Action_set \ ᵽ
12.       If Z is empty then
13.          Break
14.       End if
15.       a_t ← ε-greedy selection over Z for state s_t   # action at time slot t
16.       Obtain r_t and s_{t+1} after the h-th-hop data transmission in state s_t
17.       Q(s_t, a_t) ← Q(s_t, a_t) + ζ[r_t + γ max_{a'} Q(s_{t+1}, a') − Q(s_t, a_t)]   # Q-table update, Eq. (10)
18.       Sr ← a_t; s_t ← s_{t+1}
19.       ᵽ ← ᵽ ∪ {Sr}
20.    End While
21.    Episode ← Episode + 1
22. End For
23. Final_route ← ᵽ
24. Return Final_route

Initially, we initialize the Q-table with all zero values and observe the current state. Based on the current state, an action $a_t$ is selected from the available actions at $s_t$ (line 15). After executing the action, the reward $r_t$ and the next state $s_{t+1}$ are obtained (line 16). Using the obtained reward, the state-action value $Q(s_t, a_t)$ is updated (line 17), and the next state then becomes the current state (line 18). The algorithm terminates either when the source node finds a routing path reaching the destination node or after the maximum number of episodes.

Time Complexity

The time complexity of the Q-learning based routing algorithm has three main components. (1) The inner loop continues until the destination node is found (line 9), i.e. it executes once per intermediate node of the routing path, which is $O(n)$ for a path of length $n$. (2) Selecting an action (choosing a neighbour node) so as to maximize the expected discounted reward: the neighbour sets are represented as a $|V| \times |V|$ matrix, but for a single node (state) only a linear search over the corresponding row is required, which takes $O(|V|)$ time (lines 10-15); thereafter, lines 16-19 require constant time to update the Q-value. (3) In the worst case the algorithm runs for the maximum number of episodes before convergence (line 5). Thus, the overall time complexity of the algorithm is $O(E_{max} \cdot n \cdot |V|)$, where $E_{max}$ is the number of episodes.
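To complement the pseudocode, here is a compact, runnable Python sketch of the QLRA loop under stated assumptions: the network is given as an adjacency dict, per-hop measurements come from a user-supplied `hop_metrics` callback, and `hop_reward` is the Eq. (9) helper sketched earlier. Names such as `qlra` and `hop_metrics` are illustrative, not from the paper.

```python
import math
import random
from collections import defaultdict

def qlra(graph, coords, source, dest, hop_metrics,
         episodes=1000, zeta=0.7, gamma=0.92, eps=0.1):
    """Q-learning route search, a sketch of Algorithm QLRA.

    graph:       dict node -> list of neighbour nodes
    coords:      dict node -> (x, y) coordinates, for edge lengths (Eq. 4)
    hop_metrics: callback (u, v) -> (residual_energy, rss, tx_energy) for hop u->v
    """
    Q = defaultdict(float)            # Q[(state, action)], zero-initialized (line 1)

    for _ in range(episodes):                       # episode loop (line 5)
        s, visited = source, {source}               # lines 7-8
        while s != dest:                            # line 9
            actions = [v for v in graph[s] if v not in visited]   # lines 10-11
            if not actions:                         # lines 12-14
                break
            if random.random() < eps:               # epsilon-greedy (line 15)
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda v: Q[(s, v)])
            re_h, rss_h, e_h = hop_metrics(s, a)    # observe hop outcome (line 16)
            l_h = math.dist(coords[s], coords[a])
            r = hop_reward(re_h, rss_h, l_h, e_h)   # Eq. (9)
            best_next = max((Q[(a, v)] for v in graph[a]), default=0.0)
            Q[(s, a)] += zeta * (r + gamma * best_next - Q[(s, a)])  # Eq. (10), line 17
            s = a                                   # advance (line 18)
            visited.add(s)                          # line 19

    # Extract the learned route greedily from the Q-table.
    route, s, seen = [source], source, {source}
    while s != dest:
        candidates = [v for v in graph[s] if v not in seen]
        if not candidates:
            break
        s = max(candidates, key=lambda v: Q[(s, v)])
        route.append(s)
        seen.add(s)
    return route
```

In a deployed network, the Q-values for a node's neighbours would live in that node's routing table and be refreshed after every transmission, as described above.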
## Results and Analysis

In this section, the convergence performance of the proposed Q-learning routing algorithm is first analyzed over the learning trials and over the link quality, in terms of steps until convergence and reward (utility) respectively. Secondly, a comparative analysis of the proposed algorithm against a random learning algorithm and a without-learning (w/o) algorithm is carried out with respect to four metrics: 1) reward (utility), 2) residual energy, 3) energy consumption, and 4) edge length. All the algorithms are simulated using the same energy-model parameter values and network conditions.

Simulation Environment

The simulation is carried out in MATLAB over a square area in which 50 sensor nodes are randomly distributed. The communication range and initial energy of every sensor node are set to 20 m and 0.5 J respectively. The maximum bandwidth of each communication link in the network is 100 Mbps. Without loss of generality, one source node and one destination node are selected randomly for the performance analysis of all the compared algorithms. The other simulation parameters are shown in Table 1.

Table 1. Simulation Parameters

| Parameter | Value | Parameter | Value |
| --- | --- | --- | --- |
| Initial Energy | 0.7 |  | 0.92 |
|  | 1000 |  | [0,1] |
|  | 3 |  | 50 dBm |
|  | 0.7 |  | 0.3 |

Result Analysis

1. Convergence Performance over Learning Trials

Figure 1. Convergence performance over learning trials

Figure 1 illustrates the convergence performance of the proposed QRL-based energy balanced algorithm over the number of learning trials. It can be clearly observed that the RL agent takes 940 steps to converge in the first trials; for a very small learning rate it takes approximately 560 steps, and on average more than 200 steps for intermediate learning-rate values, with the best convergence performance achieved at a suitably chosen learning rate. Thus, the learning rate must be chosen with caution so that convergence towards reward maximization, i.e. reduced energy consumption and a minimized hop count, is achieved in a small number of steps.

2. Reward (Utility) over Link Quality

Figure 2. Reward (utility) over link quality

Figure 2 illustrates the reward (utility) of the proposed Q-learning routing algorithm with respect to the link quality, under different residual-energy levels of the intermediate hops in the routing path. The link quality describes the nature of the communication path between the sensor nodes, and it in turn depends on the residual energy of the intermediate hops: if the residual energy of the communicating nodes is high then the link quality is also better, and vice versa. It can be observed from the results that, at the beginning, the reward of the proposed algorithm increases rapidly and then becomes stationary as the link quality improves, reflecting the learning capability of the proposed algorithm, which optimizes the reward quickly once the link quality crosses a certain value. It is also worth noting that the reward of the proposed algorithm cannot keep increasing with further increases in the link quality. This is because other factors, such as the limited bandwidth and the energy consumed in communication, also increase and in turn affect the reward of the proposed algorithm according to Eq. (9).

3. Comparative Analysis of Reward (Utility) over Episodes

Figure 3. Reward (utility) over episodes

A comparison of the reward (utility) of the proposed QRL-based energy balanced algorithm against the baseline algorithms over the number of episodes is presented in Figure 3, using a fixed learning rate. It is clearly observed from the simulation results that the Q-learning algorithm converges faster than the random learning algorithm, within 390 episodes.
The random algorithm's utility, by contrast, converges only around 580 episodes. This is because the proposed QRL algorithm uses the ε-greedy technique to select an action, rather than selecting an action at random for the current state, and the learned policy further guides action selection in QRL, which ultimately maximizes the reward within fewer episodes. It is also worth noting that the worst performance is shown by the without-learning algorithm, since it involves neither a learning policy nor any optimization technique in the reward-maximization process.

4. Comparison of Residual Energy over Episodes

A comparison of the convergence characteristics, in terms of residual energy, between Q-learning and the baseline algorithms is presented in Figure 4, using the same learning rate. It is clearly observed that, as the number of episodes increases, the residual energy increases for all three algorithms and converges at about 400 episodes. Further, the proposed QRL-based energy balanced routing algorithm retains a higher residual energy (about 0.87 J) than the other algorithms. This is because QRL selects the next hop on the basis of the learned optimal policy, which jointly maximizes residual energy and link quality while minimizing distance. The random algorithm, by contrast, selects the next hop based on minimum distance alone and does not consider the residual energy of the next hop, which in turn increases the overall energy consumption, while the without-learning algorithm selects hops at random without considering either residual energy or distance.

Figure 4. Residual energy over episodes

5. Comparison of Energy Consumption over Episodes

Figure 5. Energy consumption over episodes

A comparison of the energy consumption of the proposed Q-learning routing algorithm with the baseline algorithms over the number of learning episodes is shown in Figure 5. It can be observed that at the start of the learning episodes the energy consumption of the proposed algorithm is 0.225 J; by episode 160 it has lowered the energy consumption to 0.03 J, and by 400 episodes the consumption reduces further to 0.025 J and becomes stable. The baseline algorithms fail to optimize the energy consumption of the nodes along the routing path. This is because the Q-learning algorithm uses the ε-greedy approach to learn an optimal policy, whereas the random algorithm selects actions at random to obtain the reward. The worst performance is again shown by the without-learning (w/o) algorithm, because it has no learning policy and computes the reward only from the current values of a node's parameters.

6. Comparison of Edge Length over Episodes

Figure 6. Edge length over episodes

A comparison of the edge length of the proposed Q-learning routing algorithm with the baseline algorithms over the number of learning episodes is shown in Figure 6. The edge length describes the distance between intermediate hops; a smaller edge length (Euclidean distance) corresponds to a reduced transmission delay and also improves the convergence speed of the learning based routing algorithm. The results show that, as the number of episodes increases, the edge length of the proposed algorithm reduces and stabilizes at about 17 m within 400 episodes.
The edge lengths of the random and without-learning algorithms, by contrast, fluctuate and fail to converge. This is because the proposed Q-learning routing algorithm knows the hop count at initialization and, in the learning phase, the ε-greedy policy helps it reduce the number of intermediate hops and thereby find the shortest-distance edges. It can also be observed that the without-learning routing algorithm computes the edge length purely from the Euclidean distance formula of Eq. (4); no learning is involved, and consequently its edge length fails to converge.

## Conclusion

In this paper, we addressed the problem of energy balanced routing using reinforcement learning and proposed the QRL algorithm for wireless sensor networks. The link quality, the residual energy, and the distance between two consecutive hops are used as parameters for selecting an optimal action so as to maximize the reward (utility). To achieve this objective, the QRL-based energy balanced algorithm has been proposed and its time complexity analyzed to show the effectiveness of the proposed algorithm. The simulation results show that the proposed QRL algorithm converges faster than the compared algorithms, and that its energy consumption, link quality and residual energy are also improved relative to the random and without-learning algorithms. In future work, we will include node density as an additional parameter for estimating the energy balanced routing path, using deep learning techniques.

Conflict of interest

The authors declare no potential conflict of interest regarding the publication of this work. In addition, the ethical issues including plagiarism, informed consent, misconduct, data fabrication and/or falsification, double publication and/or submission, and redundancy have been completely observed by the authors.

Funding
In this regard software define network is combined with the IoT network that separate the hardware and software control operation efficiently to cope with the mentioned challenged (J. Zhou, et al. (2016)).  Therefore, dynamic routing rules for the IoT nodes provide novel data forwarding strategy, but lack in the presence of stochastic nature of channel path.\n\nMachine learning (ML) techniques have been extensively used in the IoT network to finding the optimal route for data forwarding in the recent decade (C. Guestrin et al. (2004),   Kashyap, P.K, Kumar, S. (2019), & A. Jaiswal et al. (2020)). Machine learning techniques provides learning ability to IoT network through experience, and reinforcement learning (RL) works on learning agent that improve its learning capability based on received rewards according to their taken action. By exploitation of their gained knowledge and exploration of the environment RL agent maximize its rewards (R. S. Sutton (2018)). Reinforcement learning techniques requires low computation resources with lower implementation efforts to output effective results with higher accuracy. The output of the system nearly optimal and has higher flexibility according to the changes in the environment without prior knowledge of the network. Thus, reinforcement learning and Q-learning are best suited techniques for routing approaches in the IoT network that build path with lower redundancy.\n\n1. Hu and Y. Fei (2010) have proposed Q-learning based algorithm QELAR for the selection of next hop in the routing path. The selection of next hop depends upon the residual energy and the node density of the adjacent node, so that lifetime of the IoT network is improved by evenly distribution of the energy. N. Javaid, O. A. Karim, A. Sher, M. Imran, A. U. H. Yasar, and M. Guizani (2003) have proposed multi-sink path selection for the data transmission using the local information such as residual energy, physical distance to update the Q-function. W. Guo, C. Yan, and T. Lu (2019) have proposed delay-aware routing algorithm for the underwater sensor networks using Q-learning. The selection of the next hop is greedy one in the residual energy and minimum propagation delay evaluated through physical distance. Whereas in (Z. Jin, Y. Ma, Y. Su, S. Li, and X. Fu (2017)), source nodes broadcast the topology information in the network, then each node simulate the residual energy, distance between them and to the destination node and feed the information to evaluating the reward of the Q-learning function. Then, it creates a virtual topology route for the data transmission and finally data are sent from intermediate node to destination node. However, proposed algorithms have limited computation for constant hop length and fail in the stochastic nature of channel state information. Also, edge length of shortest path routing in the terms of graph is dynamic, which is taken as constant in the above proposed algorithms that are not in the case of real environment.\n\nUnder these circumstances, there is need energy balanced routing algorithm based on reinforcement learning approach that include residual energy, physical distance and link quality of the channel for data transmission. 
The major contribution of the paper as follow:\n\n• Firstly, system models consist of network setting, energy consumption with residual energy model and energy balanced routing problem in the IoT network is presented to bring out their primary functions.\n• Secondly, an optimization problem is modelled according to Q-learning and Q-RL based energy balanced routing algorithm is presented. Further, time complexity of the presented algorithm is analysed.\n• Finally, Extensive simulations are presented to check the effectiveness of the presented algorithm in terms of convergence rate, energy consumption, edge length and residual energy with respect to state-of-art-algorithms.\n\nThe rest of the paper is divided into following sections. Section II described the system models used in the IoT network using graph approach. Section III explains the Q-learning based routing protocol in the IoT network. In Section IV, simulation and results are analyzed for the proposed algorithm and state-of-art-algorithms. Finally, Conclusion of this paper is presented along with future scope in the section V.\n\n## System Model\n\nNetwork Setting\n\nWe consider an energy-constrained Internet of Thing network that has finite number of sensor nodes, which are randomly deployed in a given monitoring area. Each node in the network can only communicate with the neighbouring nodes that are within its transmission range. Data transmission from one node to another takes place in synchronised time slots. Here, it is considered that each data transmission from a source node to destination takes place by using a number of intermediate nodes present along the route in the network.  Each node has a single antenna, a finite battery which can be recharged periodically and works in a half-duplex mode.\n\nThe wireless connection between nodes of IoT are affected by many factors, such as residual energy of node, physical distance, channel gain etc. that makes the edge length and network state  of dynamic nature in many scenarios. Here, we represent the network as a graph G = (V, E, L)with stochastic edge length, where V is the set of vertices i.e. the sensor nodes and E = (eij), such that vi, vj ϵ V, is the set of edges and L represents the probability distribution of each edge length. An edge exists between vertices vi and vj in the graph only when node j is the neighbor of node i. The nodes in the transmission range of a node constitute its neighbourhood. The length of edge (vi, vj) is denoted as l(vi, vj) and is considered as a random variable. The channel between neighbouring nodes is assumed to follow quasi-static block Rayleigh fading model and the channel gain  between neighbouring nodes vi and vj are modelled as Markov chain. The transition probability of from at any time instant t is given as  and is unknown to the network or sensors.\n\nEnergy Consumption and residual energy Model\n\nIn IoT network, the energy is consumed for carrying out sensing, processing and communication (transmitting/receiving) activities. Out of these, data communication consumes most of the energy of a node so energy consumed for communication only is considered during routing. 
For simplicity, only the energy consumed for transmissions is accounted and energy spent for receiving is ignored as the idle and receiving nodes consume almost same amount of energy .According to the first order radio model presented in , for a message having b bits, the energy consumed for its transmission from  to   nodewith edge length l ( , ) is calculated as\n\n(1)\n\n(2)\n\nAfter data transmission residual energy  of a node at any hop, at time slot  can be evaluated as follow\n\n(3)\n\nWhere  is the maximum battery capacity of a node, is used for calculating the threshold distance (l0) which in turn is used to determine the power loss model to be used i.e. whether to use free space model or multipath fading model. Free space model is utilized when the distance between sender and receiver is less than the threshold distance otherwise multipath fading model is applied for calculating the energy consumed for transmission purposes. is the energy requirements of transmitter and receiver circuit, and are the energy consumed for amplifying transmission in order to attain a satisfactory signal to noise ratio (SNR) and l is the communication edge length.\n\nEnergy Balanced Routing Problem in IoT\n\nThe sensor nodes in the IoT network capture the desired data and forward this data to the destination node. Due to resource constraints in WSNs such as short communication range, limited processing potential and limited battery power, the source node, instead of communicating directly with the destination, communicates indirectly through its neighbours (multi-hop) as this leads to higher energy efficiency as compared to direct communication. In a multi-hop communication environment, routing algorithm is needed to find a communication path from source node to the destination node. Multi-hop communication overcomes the problem of energy inefficiency and short-range communication faced in direct communication but it steers imbalanced consumption of energy in the network (Khan, Tayyab, et al. (2019)) as the intermediate nodes deplete their battery faster while relaying the data of other nodes. The nodes in the vicinity of sink are the most affected ones. Therefore, the routing algorithm needs to find a path which balances the energy consumption of the nodes in the network so that all the nodes deplete their energy nearly at the same time which in turn results in increased network lifetime. A routing path rp in the network graph is defined as a sequence of distinct sensors in the WSN starting from source node and ending at destination node i.e. such that are adjacent vertices for and =source node and =destination node. The path rp having  sensor nodes has a length of The main aim of this paper is to find an optimal routing path between a source and destination node in order to minimize the total energy consumption and transmission delay for a reliable communication.\n\n## Q-Learning Based Routing Protocol  in IoT\n\nIn this section, we propose a Q-Learning based efficient routing protocol to find an optimal and reliable route from a source to destination node in order to reduce the total energy consumption and minimize the total transmission delay (based on shortest distance), which can ultimately improve the network lifetime.\n\nProblem Modelling\n\nThe stochastic optimal routing-path finding problem is modelled as an MDP. Q-learning updating rules are used to learn an optimal policy. 
Here, the learning agent selects an action in order to interact with the environment (stochastic graph) to reach the next neighboring node in the route, to get an optimal path from source to destination subject to maximize the expected reward obtained. MDP can be defined as follow:\n\nState ‘ ’:Each sensor node in the network and the corresponding channel gain towards its neighbor nodes is modelled as a state. The current sensor node in the routing path-finding process and current channel gain is considered as a current state.\n\nAction ‘ ’:All the out link neighbor nodes are considered in the action set of a state.\n\nTransition ‘ ’: The next state is determined by the action selection in current state.\n\nReward‘ ’: Reward for a state-action pair (s, a) is calculated by using utility value which is the combination of nodes’ residual energy, edge length and nodes’ energy consumption and link quality.\n\nDefinition 1- (edge length: distance between nodes): Following formula is used to compute the edge length between any two node\n\nl( , )=                                                                                                                                                                                                           (4) where ( ) and ( ) are coordinates of node , and respectively.\n\nDefinition 2- (edge length based path): Data packet is transferred from a source node to destination through n-hops towards the destination node. The optimal routing path based on edge length is represented as\n\n(5)\n\nWhere  is the edge length between two nodes at -hop. And, the path with minimum edge length guarantees that the transmission delay is minimum.\n\nDefinition 2- (routing path based on energy):Data packet is transferred from a source node to destination through n-hops towards the destination node. The optimal routing path based on residual energy is represented as\n\n(6) Where  is the residual energy of transmitting node at -hop.Residual energy of a node can be computed using Eq. (3).\n\nDefinition 3- (routing path based on link quality): Data packet is transferred from a source node to destination through n-hops towards the destination node. The optimal routing path based on better link quality is represented as\n\n(7) Where  is the normalized link quality at -hopand given as\n\n(8)\n\nWhere  are signal strength at -hop and maximum signal strength. Thus, the reward (utility) obtainedfor transition from  to  state after taking an action at time slot t being at -hop can be computed as\n\n(9)\n\nWhere , , , and  are some prescribed positive weights  parameter which reflect the importance of residual energy, link quality, edge length and energy consumption respectively in calculation of reward, and  are number of neighbors for . The weight parameters are closely related to each other, such that if is set to zero, the presented model emphasize on the maximization of the residual energy in the routing path and transmission delay is ignored i.e., independent of number of intermediate hop. When is set to zero, the presented algorithm pays more attention in reducing the transmission delay of packet and ignores residual energy of the sensor node. Thus, in both above case lifetime of the network is not optimal because of tradeoff between . 
Thus, we can adjust these parameter values according to our needs.\n\nTo find an optimal routing path, the learning agent initially perceive the starting state , i.e., the source node and channel gain of the links towards its neighbors, and then selects an action using the current policy, till the agent arrives at the destination node. The state-value is updated using the temporal difference method. The updating rule for Q-learning is\n\n)                                                                                    (10)\n\nwhere, ζ ϵ [0,1] denotes learning rate, is discount factor. Q-learning adopts -greedy policy for action selection, i.e, it select optimal action having maximum state-value with probability 1-  and a random action with probability . The main aim of the learning agent is to find an optimal policy  for selecting an optimal routing path.  The optimal policy  denotes the state-value which is greater than other policy’s state-value. The optimal routing problem can be expressed as\n\n(11)\n\nThe formulated problem in (10) can be optimally solved by a Q-Learning method. The goal of proposed routing problem is to maximize the total reward over all routing paths starting at source node, such that rp and  is the set of all paths in G starting from source node and ending at destination.\n\nIn routing process, each node in the network keeps a routing table which is used to select the next hop for the data transmission. The routing table contains the information about next possible nodes which can be reachable to all possible destinations in the network. This table is updated after each data transmission to store the information of node which is good for further data forwarding based on the obtained reward.\n\nQ-Learning based Routing Algorithm\n\nIn this section, we present a learning based routing algorithm, particularly using Q-learning approach.\n\nAlgorithm- QLRA\n\n• Q-table Initialization\n• Initialization of path set ᵽ\n• D = # destination node\n• Episode=0\n• For Episode\n1. # current state at time slot t\n• Sr= # source node\n1. ᵽ = Sr\n• While Sr\n1. Action_set= neighbor nodes of Sr\n2. Z = Action_set/ ᵽ\n3. If Z is empty then\n4. Break\n5. End if\n6. = -greedy(Sr) in Z based for  state    # action at\n7. Obtain and  after -hop data transmission at  state #Q-table Update\n8. )\n9.\n10. ᵽ= ᵽ U Z\n11. End While\n12. Episode= Episode+1\n13. End For\n14. Final_route= ᵽ\n15. Return Final_route\n\nInitially, we initialize the Q-table with all zero values. After that current state is observed. Based on the current state an action  is selected from the available actions at  (line no. 15). After executing the action, a reward and next state  are obtained (line no. 16).  Using the achieved reward, state-value  is updated (line no. 17). And now next state  becomes current state. The algorithm converge either source node find a routing path to reach destination node or for the maximum number of episode.\n\nTime Complexity\n\nThe time complexity of the Q-learning based routing algorithm mainly has three aspects: (1) the algorithm continues until it find destination node in the line no.9 i.e., number of intermediate node in the routing path is ( ). (2) Selecting an action (choose neighbor node) from the set of neighbors’ nodes subject to maximize the expected discount reward in the network. The set of neighbors is represented in the form of  matrix i.e., but for the single node (state) linear search apply on the single desired row takes  time in line no.10 to line no.15. 
Thereafter line no.16 to line no. 19 requires constant time to update the Q-value (3).The algorithm runs in worst case are equal to the number of episode until convergence (line no.5). Thus, overall time complexity of the algorithm is .\n\n## Results and Analysis\n\nIn this section, firstly, the convergence performance of proposed Q-Learning routing algorithm is analyzed over learning trails and link quality in terms of steps until convergence and reward (utility) respectively. Secondly, comparative analysis of the proposed algorithm against Random learning algorithm and without (w/o) learning algorithm done with respect four metrics: 1) Reward (Utility) 2) Residual energy 3) Energy consumption 4) Edge length.  All these algorithms are simulated using same values of parameters for energy model and network conditions.\n\nSimulation Environment\n\nThe simulation is carried out using MATLAB in a  square area having 50 sensor nodes are randomly distributed. The communication range and initial energy of all the sensor nodes set to be20 m and 0.5 J respectively.  The maximum bandwidth for each of the communication link in the network is 100Mbps. Without loss of generality, one source node and one destination node is selected randomly for the performance analysis of all the state-of-art-algorithms. The others simulations parameters are shown in Table 1.\n\nTable 1. Simulation Parameters\n\n Parameter Value Parameter Value Initial Energy 0.7 0.92 1000 [0,1] 3 50 dBm 0.7 0.3\n\nResult Analysis\n\n1. Convergence Performance over Learning Trails\n\nFigure 1. Convergence performance over learning trails\n\nFigure 1 illustrates the convergence performance of the proposed QRL- based energy balanced algorithm over number of learning trails. It can be clearly observed from result, the RL agent takes 940 steps to converge for the first trails of learning rate thereafter it took less number of steps approximate 560 steps for the very less value of learning rate .The number of steps on average higher than 200 steps for the taking the value of , whereas the best performance to achieve convergence by the RL agent for the value of . Thus, it is necessary to choose the learning rate with caution for convergence in small number of steps towards maximization of reward i.e. reduce the energy consumption and minimize the number of hop counts.\n\n1. Reward (Utility) over Link Quality\n\nFigure 2. Reward (Utility) over link Quality\n\nFigure 2 illustrates that the reward (utility) of the proposed Q-learning routing algorithm with respect to link quality under different residual energy of the intermediate hops in the routing path. The link quality describes the nature of communication path between the sensor nodes. And further, link quality depends upon the residual energy of the intermediate hops. If the residual energy of the communicating nodes is high then the link quality is also better and vice-versa. It can be observed from the results, at the beginning, the reward of the proposed algorithm increases rapidly then become stationary as the value of link quality improves. This is because of the learning capability of the proposed algorithm to optimize the reward faster at link quality’s value . It is also worth to note down reward of the proposed algorithm cannot increase with further increase in the link quality. This is due to the fact other factors such as limited bandwidth and energy consumption in communication also increases and then affects the reward of the proposed algorithm according to Eq. (10).\n\n1. 
Comparative analysis of Reward (Utility) over Episodes\n\nFigure 3. Reward (Utility) over Episodes\n\nA comparison of reward (utility) between proposed QRL- based energy balanced algorithm and state-of-art-algorithm over number of episode is presented in the figure.3 using learning rate of . it is clearly observed from the simulation results that Q-learning algorithm converges faster than random learning algorithm within 390 episodes. Whereas random algorithm’s utility converges around 580 episodes. This is due to the fact the proposed QRL algorithm uses -greedy technique to select an action rather than randomly selected any action for the current state-reward. Also, optimal learning policy helps in the selection of an action in QRL, which ultimately maximize the reward with less number of episodes. It is also worthy to note down that worst performance is shown by without learning algorithm. This is because of neither have learning policy nor any optimization technique involved in the process of reward maximization.\n\n1. Comparison of Residual Energy over Episodes\n\nA comparison of convergence characteristic in the terms of residual energy between Q-learning and the state-of-art-algorithm is presented in the figure 4 using learning rate of . It is clearly observed from the result as the number of episode increases residual energy increases for all the three algorithms and converges at 400 episodes. Further, it is noticeable that proposed QRL based energy balanced routing algorithm has higher residual energy about (0.87Joule) than other state-of-art-algorithm. This is because of the. QRL selects the next hop for the routing purpose based on optimal policy learning strategy subject to maximize the residual energy, better link quality and minimum distance. Whereas random algorithm select the next route based on minimum distance and does not consider residual energy of the next hop that in turns increases the overall energy consumption. And, the without learning based algorithm selects any hop for the routing randomly without considering the residual energy and minimum distance.\n\nFigure 4. Residual energy over Episodes\n\n1. Comparison of Energy Consumption over Episodes\n\nFigure 5. Energy Consumption over episodes\n\nA comparison of energy consumption of the proposed Q-learning routing algorithm with state-of-art-algorithms over number of learning episodes is shown in the figure 5. It can be observed from the results; at the start of learning episodes the energy consumption of the proposed algorithm is 0.225 J and as the algorithm reached to 160 episodes it lower down the energy consumption at 0.03 J. Further, proposed algorithm reached up to 400 episodes, the energy reduces to 0.025 J and consumption becomes stable. Whereas other state-of-art algorithm fails to optimize the energy consumption of the nodes involved during routing path. This is because of Q-learning algorithm uses -greedy approach for the selection of optimal policy, whereas Random algorithm selects any action randomly to obtain the reward. It is also noted down that the worst performance is shown by without (w/o) learning algorithm, because it does not have any learning policy and compute reward on the current situation of node’s parameters.\n\n1. Comparison of Edge Length over Episodes\n\nFigure 6. Edge Length over episodes\n\nA comparison of edge length of the proposed Q-learning routing algorithm with state-of-art-algorithms over number of learning episodes is shown in the figure 6. 
The edge length describe the distance between the intermediate hops, smaller the edge length (Euclidean distance) corresponds to reduce the transmission delay and improves the also convergence speed of the learning based routing algorithm. The results shows that as the number of episode increases, the edge length of proposed algorithm reduces and stabilized about to 17 m within 400 episodes. Whereas edge length of random and without learning algorithms are fluctuates and fails to convergence. This is because of proposed Q-Learning routing algorithm at the initialization know the number of hop counts and also in learning phase -greedy policy helps to compute less number of intermediate hops count and then compute the shortest distance edge length. It can be also observed that without learning based routing algorithm compute the edge length only on Euclidean distance formula according to Eq. (4) and nothing to do with learning and in turn fail to converge the edge length.\n\n## Conclusion\n\nIn this paper, we handled the problem of energy balanced routing algorithm using reinforcement learning and proposed QRL algorithm for wireless sensor network. The link quality, residual energy, and distance between the two consecutive hops are used as parameter for selection of an optimal action subject to maximize the reward (utility). To achieve the objective QRL based energy balanced algorithm has been proposed and their time complexity is also analyzed to show the effectives of the proposed algorithm. It is also proved from the simulation results the proposed QRL algorithm converges faster than other state-of-art-algorithms. It is also notable from the simulation results that energy consumption and link quality and residual energy also improved compared to random algorithm and without learning algorithm. In the future, we also include the node density as another parameter to estimate the energy balanced routing path using deep learning techniques.\n\nConflict of interest\n\nThe authors declare no potential conflict of interest regarding the publication of this work. In addition, the ethical issues including plagiarism, informed consent, misconduct, data fabrication and, or falsification, double publication and, or submission, and redundancy have been completely witnessed by the authors.\n\nFunding\n\n## Introduction\n\nThe Internet of Things (IoT) is the network of physical objects-devices, vehicles, buildings and other items embedded with electronics, software, sensors and wireless network connectivity that enable these objects to collect and exchange data (Akyildiz IF et al. (2002), Kashyap, P. K. (2019) &  Kashyap, P. K., Kumar, S., and Jaiswal, A. (2019) ). Each of these smart device is uniquely identified by Internet address protocol (IP) to forward the data packet from source to destination (F. Bouabdallah et al. (2008)). The IoT has huge application in almost all the sectors of human being such as healthcare facilities, industrial organization, vehicular network, military operations, business organization and many more (Ishmanov F et al. (2011) & Ahmed AA and Mohammed Y, (2007)). These smart devices have limited battery power to perform complex computation and forward the data packet. Due to tremendous upsurge in the connected number of ubiquitous devices, there is large number of data packets travelled in the IoT network. 
So, there is need to choose the energy balanced routing path to forward the data packet so that lifetime of the network is improved.\n\nIn order to support various complicated application, IoT nodes have to perform the reliable operation with their limited energy, computational resources and bandwidth effectively so that it reduce the time-delay for data transmission using shortest routing path, transmission errors and ultimately improve the lifetime of the network. In this regard software define network is combined with the IoT network that separate the hardware and software control operation efficiently to cope with the mentioned challenged (J. Zhou, et al. (2016)).  Therefore, dynamic routing rules for the IoT nodes provide novel data forwarding strategy, but lack in the presence of stochastic nature of channel path.\n\nMachine learning (ML) techniques have been extensively used in the IoT network to finding the optimal route for data forwarding in the recent decade (C. Guestrin et al. (2004),   Kashyap, P.K, Kumar, S. (2019), & A. Jaiswal et al. (2020)). Machine learning techniques provides learning ability to IoT network through experience, and reinforcement learning (RL) works on learning agent that improve its learning capability based on received rewards according to their taken action. By exploitation of their gained knowledge and exploration of the environment RL agent maximize its rewards (R. S. Sutton (2018)). Reinforcement learning techniques requires low computation resources with lower implementation efforts to output effective results with higher accuracy. The output of the system nearly optimal and has higher flexibility according to the changes in the environment without prior knowledge of the network. Thus, reinforcement learning and Q-learning are best suited techniques for routing approaches in the IoT network that build path with lower redundancy.\n\n1. Hu and Y. Fei (2010) have proposed Q-learning based algorithm QELAR for the selection of next hop in the routing path. The selection of next hop depends upon the residual energy and the node density of the adjacent node, so that lifetime of the IoT network is improved by evenly distribution of the energy. N. Javaid, O. A. Karim, A. Sher, M. Imran, A. U. H. Yasar, and M. Guizani (2003) have proposed multi-sink path selection for the data transmission using the local information such as residual energy, physical distance to update the Q-function. W. Guo, C. Yan, and T. Lu (2019) have proposed delay-aware routing algorithm for the underwater sensor networks using Q-learning. The selection of the next hop is greedy one in the residual energy and minimum propagation delay evaluated through physical distance. Whereas in (Z. Jin, Y. Ma, Y. Su, S. Li, and X. Fu (2017)), source nodes broadcast the topology information in the network, then each node simulate the residual energy, distance between them and to the destination node and feed the information to evaluating the reward of the Q-learning function. Then, it creates a virtual topology route for the data transmission and finally data are sent from intermediate node to destination node. However, proposed algorithms have limited computation for constant hop length and fail in the stochastic nature of channel state information. 
Also, edge length of shortest path routing in the terms of graph is dynamic, which is taken as constant in the above proposed algorithms that are not in the case of real environment.\n\nUnder these circumstances, there is need energy balanced routing algorithm based on reinforcement learning approach that include residual energy, physical distance and link quality of the channel for data transmission. The major contribution of the paper as follow:\n\n• Firstly, system models consist of network setting, energy consumption with residual energy model and energy balanced routing problem in the IoT network is presented to bring out their primary functions.\n• Secondly, an optimization problem is modelled according to Q-learning and Q-RL based energy balanced routing algorithm is presented. Further, time complexity of the presented algorithm is analysed.\n• Finally, Extensive simulations are presented to check the effectiveness of the presented algorithm in terms of convergence rate, energy consumption, edge length and residual energy with respect to state-of-art-algorithms.\n\nThe rest of the paper is divided into following sections. Section II described the system models used in the IoT network using graph approach. Section III explains the Q-learning based routing protocol in the IoT network. In Section IV, simulation and results are analyzed for the proposed algorithm and state-of-art-algorithms. Finally, Conclusion of this paper is presented along with future scope in the section V.\n\n## System Model\n\nNetwork Setting\n\nWe consider an energy-constrained Internet of Thing network that has finite number of sensor nodes, which are randomly deployed in a given monitoring area. Each node in the network can only communicate with the neighbouring nodes that are within its transmission range. Data transmission from one node to another takes place in synchronised time slots. Here, it is considered that each data transmission from a source node to destination takes place by using a number of intermediate nodes present along the route in the network.  Each node has a single antenna, a finite battery which can be recharged periodically and works in a half-duplex mode.\n\nThe wireless connection between nodes of IoT are affected by many factors, such as residual energy of node, physical distance, channel gain etc. that makes the edge length and network state  of dynamic nature in many scenarios. Here, we represent the network as a graph G = (V, E, L)with stochastic edge length, where V is the set of vertices i.e. the sensor nodes and E = (eij), such that vi, vj ϵ V, is the set of edges and L represents the probability distribution of each edge length. An edge exists between vertices vi and vj in the graph only when node j is the neighbor of node i. The nodes in the transmission range of a node constitute its neighbourhood. The length of edge (vi, vj) is denoted as l(vi, vj) and is considered as a random variable. The channel between neighbouring nodes is assumed to follow quasi-static block Rayleigh fading model and the channel gain  between neighbouring nodes vi and vj are modelled as Markov chain. The transition probability of from at any time instant t is given as  and is unknown to the network or sensors.\n\nEnergy Consumption and residual energy Model\n\nIn IoT network, the energy is consumed for carrying out sensing, processing and communication (transmitting/receiving) activities. 
Out of these, data communication consumes most of the energy of a node so energy consumed for communication only is considered during routing. For simplicity, only the energy consumed for transmissions is accounted and energy spent for receiving is ignored as the idle and receiving nodes consume almost same amount of energy .According to the first order radio model presented in , for a message having b bits, the energy consumed for its transmission from  to   nodewith edge length l ( , ) is calculated as\n\n(1)\n\n(2)\n\nAfter data transmission residual energy  of a node at any hop, at time slot  can be evaluated as follow\n\n(3)\n\nWhere  is the maximum battery capacity of a node, is used for calculating the threshold distance (l0) which in turn is used to determine the power loss model to be used i.e. whether to use free space model or multipath fading model. Free space model is utilized when the distance between sender and receiver is less than the threshold distance otherwise multipath fading model is applied for calculating the energy consumed for transmission purposes. is the energy requirements of transmitter and receiver circuit, and are the energy consumed for amplifying transmission in order to attain a satisfactory signal to noise ratio (SNR) and l is the communication edge length.\n\nEnergy Balanced Routing Problem in IoT\n\nThe sensor nodes in the IoT network capture the desired data and forward this data to the destination node. Due to resource constraints in WSNs such as short communication range, limited processing potential and limited battery power, the source node, instead of communicating directly with the destination, communicates indirectly through its neighbours (multi-hop) as this leads to higher energy efficiency as compared to direct communication. In a multi-hop communication environment, routing algorithm is needed to find a communication path from source node to the destination node. Multi-hop communication overcomes the problem of energy inefficiency and short-range communication faced in direct communication but it steers imbalanced consumption of energy in the network (Khan, Tayyab, et al. (2019)) as the intermediate nodes deplete their battery faster while relaying the data of other nodes. The nodes in the vicinity of sink are the most affected ones. Therefore, the routing algorithm needs to find a path which balances the energy consumption of the nodes in the network so that all the nodes deplete their energy nearly at the same time which in turn results in increased network lifetime. A routing path rp in the network graph is defined as a sequence of distinct sensors in the WSN starting from source node and ending at destination node i.e. such that are adjacent vertices for and =source node and =destination node. The path rp having  sensor nodes has a length of The main aim of this paper is to find an optimal routing path between a source and destination node in order to minimize the total energy consumption and transmission delay for a reliable communication.\n\n## Q-Learning Based Routing Protocol  in IoT\n\nIn this section, we propose a Q-Learning based efficient routing protocol to find an optimal and reliable route from a source to destination node in order to reduce the total energy consumption and minimize the total transmission delay (based on shortest distance), which can ultimately improve the network lifetime.\n\nProblem Modelling\n\nThe stochastic optimal routing-path finding problem is modelled as an MDP. 
Q-learning updating rules are used to learn an optimal policy. Here, the learning agent selects an action in order to interact with the environment (stochastic graph) to reach the next neighboring node in the route, to get an optimal path from source to destination subject to maximize the expected reward obtained. MDP can be defined as follow:\n\nState ‘ ’:Each sensor node in the network and the corresponding channel gain towards its neighbor nodes is modelled as a state. The current sensor node in the routing path-finding process and current channel gain is considered as a current state.\n\nAction ‘ ’:All the out link neighbor nodes are considered in the action set of a state.\n\nTransition ‘ ’: The next state is determined by the action selection in current state.\n\nReward‘ ’: Reward for a state-action pair (s, a) is calculated by using utility value which is the combination of nodes’ residual energy, edge length and nodes’ energy consumption and link quality.\n\nDefinition 1- (edge length: distance between nodes): Following formula is used to compute the edge length between any two node\n\nl( , )=                                                                                                                                                                                                           (4) where ( ) and ( ) are coordinates of node , and respectively.\n\nDefinition 2- (edge length based path): Data packet is transferred from a source node to destination through n-hops towards the destination node. The optimal routing path based on edge length is represented as\n\n(5)\n\nWhere  is the edge length between two nodes at -hop. And, the path with minimum edge length guarantees that the transmission delay is minimum.\n\nDefinition 2- (routing path based on energy):Data packet is transferred from a source node to destination through n-hops towards the destination node. The optimal routing path based on residual energy is represented as\n\n(6) Where  is the residual energy of transmitting node at -hop.Residual energy of a node can be computed using Eq. (3).\n\nDefinition 3- (routing path based on link quality): Data packet is transferred from a source node to destination through n-hops towards the destination node. The optimal routing path based on better link quality is represented as\n\n(7) Where  is the normalized link quality at -hopand given as\n\n(8)\n\nWhere  are signal strength at -hop and maximum signal strength. Thus, the reward (utility) obtainedfor transition from  to  state after taking an action at time slot t being at -hop can be computed as\n\n(9)\n\nWhere , , , and  are some prescribed positive weights  parameter which reflect the importance of residual energy, link quality, edge length and energy consumption respectively in calculation of reward, and  are number of neighbors for . The weight parameters are closely related to each other, such that if is set to zero, the presented model emphasize on the maximization of the residual energy in the routing path and transmission delay is ignored i.e., independent of number of intermediate hop. When is set to zero, the presented algorithm pays more attention in reducing the transmission delay of packet and ignores residual energy of the sensor node. Thus, in both above case lifetime of the network is not optimal because of tradeoff between . 
Thus, we can adjust these parameter values according to our needs.\n\nTo find an optimal routing path, the learning agent initially perceive the starting state , i.e., the source node and channel gain of the links towards its neighbors, and then selects an action using the current policy, till the agent arrives at the destination node. The state-value is updated using the temporal difference method. The updating rule for Q-learning is\n\n)                                                                                    (10)\n\nwhere, ζ ϵ [0,1] denotes learning rate, is discount factor. Q-learning adopts -greedy policy for action selection, i.e, it select optimal action having maximum state-value with probability 1-  and a random action with probability . The main aim of the learning agent is to find an optimal policy  for selecting an optimal routing path.  The optimal policy  denotes the state-value which is greater than other policy’s state-value. The optimal routing problem can be expressed as\n\n(11)\n\nThe formulated problem in (10) can be optimally solved by a Q-Learning method. The goal of proposed routing problem is to maximize the total reward over all routing paths starting at source node, such that rp and  is the set of all paths in G starting from source node and ending at destination.\n\nIn routing process, each node in the network keeps a routing table which is used to select the next hop for the data transmission. The routing table contains the information about next possible nodes which can be reachable to all possible destinations in the network. This table is updated after each data transmission to store the information of node which is good for further data forwarding based on the obtained reward.\n\nQ-Learning based Routing Algorithm\n\nIn this section, we present a learning based routing algorithm, particularly using Q-learning approach.\n\nAlgorithm- QLRA\n\n• Q-table Initialization\n• Initialization of path set ᵽ\n• D = # destination node\n• Episode=0\n• For Episode\n1. # current state at time slot t\n• Sr= # source node\n1. ᵽ = Sr\n• While Sr\n1. Action_set= neighbor nodes of Sr\n2. Z = Action_set/ ᵽ\n3. If Z is empty then\n4. Break\n5. End if\n6. = -greedy(Sr) in Z based for  state    # action at\n7. Obtain and  after -hop data transmission at  state #Q-table Update\n8. )\n9.\n10. ᵽ= ᵽ U Z\n11. End While\n12. Episode= Episode+1\n13. End For\n14. Final_route= ᵽ\n15. Return Final_route\n\nInitially, we initialize the Q-table with all zero values. After that current state is observed. Based on the current state an action  is selected from the available actions at  (line no. 15). After executing the action, a reward and next state  are obtained (line no. 16).  Using the achieved reward, state-value  is updated (line no. 17). And now next state  becomes current state. The algorithm converge either source node find a routing path to reach destination node or for the maximum number of episode.\n\nTime Complexity\n\nThe time complexity of the Q-learning based routing algorithm mainly has three aspects: (1) the algorithm continues until it find destination node in the line no.9 i.e., number of intermediate node in the routing path is ( ). (2) Selecting an action (choose neighbor node) from the set of neighbors’ nodes subject to maximize the expected discount reward in the network. The set of neighbors is represented in the form of  matrix i.e., but for the single node (state) linear search apply on the single desired row takes  time in line no.10 to line no.15. 
Time Complexity

The time complexity of the Q-learning based routing algorithm has three main components. (1) The algorithm continues until the destination node is found (line 9), i.e., it visits the h intermediate nodes of the routing path, O(h). (2) An action (a neighbor node) is selected so as to maximize the expected discounted reward; the neighbor sets are represented as an N × N matrix, but for a single node (state) a linear search over the single desired row takes O(N) time (lines 10-15). (3) Lines 16-19 require constant time to update the Q-value. In the worst case the algorithm runs for E episodes until convergence (line 5). Thus, the overall time complexity of the algorithm is O(E · h · N).

## Results and Analysis

In this section, the convergence performance of the proposed Q-learning routing algorithm is first analyzed over learning trials and over link quality, in terms of steps until convergence and reward (utility), respectively. Secondly, a comparative analysis of the proposed algorithm against a random learning algorithm and a without-learning (w/o) algorithm is carried out with respect to four metrics: 1) reward (utility), 2) residual energy, 3) energy consumption, and 4) edge length. All algorithms are simulated using the same parameter values for the energy model and network conditions.

Simulation Environment

The simulation is carried out in MATLAB in a square area in which 50 sensor nodes are randomly distributed. The communication range and initial energy of every sensor node are set to 20 m and 0.5 J, respectively. The maximum bandwidth of each communication link in the network is 100 Mbps. Without loss of generality, one source node and one destination node are selected randomly for the performance analysis of all the state-of-the-art algorithms. The other simulation parameters are shown in Table 1.

Table 1. Simulation Parameters

 Parameter Value Parameter Value Initial Energy 0.7 0.92 1000 [0,1] 3 50 dBm 0.7 0.3

Result Analysis

1. Convergence Performance over Learning Trials

Figure 1. Convergence performance over learning trials

Figure 1 illustrates the convergence performance of the proposed QRL-based energy-balanced algorithm over the number of learning trials. It can be observed that the RL agent takes 940 steps to converge in the first trial of the learning rate; it then takes fewer steps, approximately 560, for a very small learning-rate value. The number of steps stays above 200 on average for another setting, whereas the agent achieves its best convergence for the remaining learning-rate value. Thus the learning rate must be chosen with caution so that convergence towards reward maximization (i.e., reduced energy consumption and a minimized hop count) is reached in a small number of steps.

2. Reward (Utility) over Link Quality

Figure 2. Reward (Utility) over link quality

Figure 2 shows the reward (utility) of the proposed Q-learning routing algorithm with respect to link quality under different residual energies of the intermediate hops in the routing path. The link quality describes the nature of the communication path between sensor nodes, and it depends on the residual energy of the intermediate hops: if the residual energy of the communicating nodes is high, the link quality is also better, and vice versa. It can be observed from the results that, at the beginning, the reward of the proposed algorithm increases rapidly and then becomes stationary as the link quality improves. This is because of the learning capability of the proposed algorithm, which optimizes the reward faster up to a certain link-quality value. It is also worth noting that the reward cannot increase with a further increase in link quality: other factors such as the limited bandwidth and the energy consumed in communication also increase and then limit the reward according to Eq. (9).
3. Comparative Analysis of Reward (Utility) over Episodes

Figure 3. Reward (Utility) over episodes

A comparison of the reward (utility) of the proposed QRL-based energy-balanced algorithm against the state-of-the-art algorithms over the number of episodes is presented in Figure 3 for the chosen learning rate. It is clearly observed from the simulation results that the Q-learning algorithm converges faster than the random learning algorithm, within 390 episodes, whereas the random algorithm's utility converges at around 580 episodes. This is because the proposed QRL algorithm uses the ε-greedy technique to select an action rather than selecting any action randomly for the current state-reward; moreover, the learned optimal policy guides action selection in QRL, which ultimately maximizes the reward within fewer episodes. It is also worth noting that the worst performance is shown by the without-learning algorithm, since it has neither a learning policy nor any optimization technique involved in the process of reward maximization.

4. Comparison of Residual Energy over Episodes

A comparison of the convergence characteristics in terms of residual energy between Q-learning and the state-of-the-art algorithms is presented in Figure 4 for the chosen learning rate. It is clearly observed that, as the number of episodes increases, the residual energy increases for all three algorithms and converges at 400 episodes. Further, the proposed QRL-based energy-balanced routing algorithm retains higher residual energy (about 0.87 J) than the other state-of-the-art algorithms. This is because QRL selects the next hop based on the learned optimal policy, which maximizes residual energy and link quality while minimizing distance. The random algorithm selects the next route based on minimum distance only and does not consider the residual energy of the next hop, which in turn increases the overall energy consumption; the without-learning algorithm selects hops for routing randomly, considering neither residual energy nor minimum distance.

Figure 4. Residual energy over episodes

5. Comparison of Energy Consumption over Episodes

Figure 5. Energy consumption over episodes

A comparison of the energy consumption of the proposed Q-learning routing algorithm with the state-of-the-art algorithms over the number of learning episodes is shown in Figure 5. At the start of the learning episodes the energy consumption of the proposed algorithm is 0.225 J; by episode 160 it has lowered the consumption to 0.03 J, and by 400 episodes the consumption reduces to 0.025 J and becomes stable. The other state-of-the-art algorithms fail to optimize the energy consumption of the nodes involved in the routing path. This is because the Q-learning algorithm uses the ε-greedy approach for selecting the optimal policy, whereas the random algorithm selects actions randomly to obtain the reward. Again the worst performance is shown by the without-learning (w/o) algorithm, because it has no learning policy and computes the reward only from the current values of the node's parameters.

6. Comparison of Edge Length over Episodes

Figure 6. Edge length over episodes

A comparison of the edge length of the proposed Q-learning routing algorithm with the state-of-the-art algorithms over the number of learning episodes is shown in Figure 6.
The edge length describes the distance between intermediate hops; a smaller edge length (Euclidean distance) corresponds to a reduced transmission delay and also improves the convergence speed of the learning-based routing algorithm. The results show that as the number of episodes increases, the edge length of the proposed algorithm reduces and stabilizes at about 17 m within 400 episodes, whereas the edge lengths of the random and without-learning algorithms fluctuate and fail to converge. This is because the proposed Q-learning routing algorithm knows the hop count at initialization, and in the learning phase the ε-greedy policy helps it settle on fewer intermediate hops and hence on the shortest edge lengths. The without-learning routing algorithm computes the edge length only from the Euclidean distance formula of Eq. (4), with no learning involved, and in turn fails to converge in edge length.

## Conclusion

In this paper we addressed the problem of energy-balanced routing using reinforcement learning and proposed the QRL algorithm for wireless sensor networks. The link quality, the residual energy, and the distance between two consecutive hops are used as parameters for selecting an optimal action so as to maximize the reward (utility). To achieve this objective, the QRL-based energy-balanced algorithm has been proposed and its time complexity analyzed to show the effectiveness of the approach. The simulation results show that the proposed QRL algorithm converges faster than the other state-of-the-art algorithms, and that energy consumption, link quality and residual energy are also improved compared with the random algorithm and the without-learning algorithm. In future work we will include node density as an additional parameter for estimating the energy-balanced routing path, using deep learning techniques.

Conflict of interest

The authors declare no potential conflict of interest regarding the publication of this work. In addition, the ethical issues including plagiarism, informed consent, misconduct, data fabrication and/or falsification, double publication and/or submission, and redundancy have been completely witnessed by the authors.

Funding

#### References

Jaiswal, A., Kumar, S., Kaiwartya, O., Kumar, N., Song, H. and Lloret, J. (2020) Secrecy rate maximization in virtual-MIMO enabled SWIPT for 5G centric IoT applications. IEEE Systems Journal, doi: 10.1109/JSYST.2020.3036417.

Ahmed, A. A. and Mohammed, Y. (2007) A survey on clustering algorithms for wireless sensor networks. Computer Communications, vol. 30, pp. 2826-2841.

Akyildiz, I. F., Su, W., Sankarasubramaniam, Y. and Cayirci, E. (2002) Wireless sensor networks: a survey. Computer Networks, vol. 38, no. 4, pp. 393-422.

Guestrin, C., Bodik, P., Thibaux, R., Paskin, M. and Madden, S. (2004) Distributed regression: an efficient framework for modeling sensor network data. In Proc. 3rd Int. Symp. Inf. Process. Sensor Netw., pp. 1-10.

Bouabdallah, F., Bouabdallah, N. and Boutaba, R. (2008) Towards reliable and efficient reporting in wireless sensor networks. IEEE Trans. Mobile Comput., vol. 7, no. 8, pp. 978-994.

Ishmanov, F., Malik, A. S. and Kim, S. W. (2011) Energy consumption balancing (ECB) issues and mechanisms in wireless sensor networks (WSNs): a comprehensive overview. European Transactions on Telecommunications, vol. 22, pp. 151-167.

Zhou, J., Jiang, H., Wu, J., Wu, L., Zhu, C. and
Li, W. (2016) SDN-based application framework for wireless sensor and actor networks. IEEE Access, vol. 4, pp. 1583-1594.

Kashyap, P. K., Kumar, S. and Jaiswal, A. (2019) Deep learning based offloading scheme for IoT networks towards green computing. In IEEE International Conference on Industrial Internet (ICII), Orlando, FL, USA, pp. 22-27.

Kashyap, P. K., Kumar, S., Dohare, U., Kumar, V. and Kharel, R. (2019) Green computing in sensors-enabled Internet of Things: neuro fuzzy logic-based load balancing. MDPI Electronics, vol. 8, no. 4, pp. 384-405.

Kashyap, P. K. and Kumar, S. (2019) Genetic-fuzzy based load balanced protocol for WSNs. International Journal of Electrical and Computer Engineering, vol. 9, no. 2, pp. 1168-1183.

Khan, T., Singh, K., Abdel-Basset, M., Long, H. V., Singh, S. P. and Manjul, M. (2019) A novel and comprehensive trust estimation clustering based approach for large scale wireless sensor networks. IEEE Access, vol. 7, pp. 58221-58240.

Javaid, N., Karim, O. A., Sher, A., Imran, M., Yasar, A. U. H. and Guizani, M. (2018) Q-learning for energy balancing and avoiding the void hole routing protocol in underwater sensor networks. In Proc. 14th Int. Wireless Commun. Mobile Comput. Conf. (IWCMC), Jun. 2018, pp. 702-706.

Sutton, R. S. and Barto, A. G. (2018) Reinforcement Learning: An Introduction, 2nd ed. Cambridge, MA, USA: MIT Press.

Hu, T. and Fei, Y. (2010) QELAR: a machine-learning-based adaptive routing protocol for energy-efficient and lifetime-extended underwater sensor networks. IEEE Trans. Mobile Comput., vol. 9, no. 6, pp. 796-809.

Guo, W., Yan, C. and Lu, T. (2019) Optimizing the lifetime of wireless sensor networks via reinforcement-learning-based routing. Int. J. Distrib. Sensor Netw., vol. 15, no. 2.

Jin, Z., Ma, Y., Su, Y., Li, S. and Fu, X. (2017) A Q-learning-based delay-aware routing algorithm to extend the lifetime of underwater sensor networks. Sensors, vol. 17, no. 7.
https://compassoftime.blogspot.com/2016/07/
## 2016-07-29

### Exception handling in C world

The following piece of code goes into an infinite loop on Sagittarius.
```
(import (rnrs))
(with-exception-handler
(lambda (k) #t)
(lambda ()
(guard (e (#f #f))
(error 'test "msg"))))
```
I couldn't figure out why this never returned at first glance, and it turned out to be a very interesting problem.

The problem is related to continuation boundaries and exception handlers. If you print the k, then only the very first time you'd see an error object with "msg"; after that it'd be "attempt to return from C continuation boundary.". But why?

This happens when you capture a continuation and invoke it in a different C level apply. Each time the C level apply is called, the VM puts a boundary mark on its stack to synchronise the C level stack and the VM stack. This is needed because there's no way to restore the C level stack once the function has returned. If a continuation captured in a C apply that has already returned were invoked, it would cause an unexpected result. To avoid this, the VM checks whether the continuation was captured on the same C level apply.

Still, why does this combination cause the loop? The answer is that `raise` is implemented in the C world and exception handlers are invoked by a C level apply. The whole cycle of the infinite loop is like this:
1. `guard` without an else clause captures a continuation.
2. When `error` is called, `guard` re-raises and invokes the captured continuation. (required by R6RS/R7RS)
3. The exception handler is invoked by a C level apply.
4. The VM checks the continuation boundary and raises an error in the same dynamic environment as #3.
5. Goes to #3. Hurray!

There are 2 things I can do:
1. Create a specific condition, and when `with-exception-handler` receives it, don't restore the exception handler. [Ad hoc]
2. Let `raise` use the Scheme level apply. (Big task)

#1 probably wouldn't work properly. #2 is really a huge task. I need to think about it.

## 2016-07-12

### Remote REPL problems

This one is simple: something like `#!read-macro=...` cannot be sent. The cause is also understood: reader macros starting with `#!` don't return a value but go on to read the next datum, and the macro is resolved inside the reader. For scripts or a normal REPL this is no problem, but it is not so pleasant when you want to send such a form itself. Solutions that come to mind:
- Write a dedicated reader for the remote REPL
  - tedious
- Create a dedicated port for the remote REPL and let it read the reader macro
  - ad hoc, but doesn't seem bad

### Library dependencies

The biggest reason to use Plato is that handlers can be reloaded from the REPL, but when a library that a handler (as I call such a library) depends on is changed, there is no good way to reflect the change. A handler itself can be reloaded, but with dependent libraries issues such as load paths come up, which is not pleasant. There are workarounds, such as calling `load` manually or pasting the code itself into the remote REPL (which can hit the problem above), but they are all rather tedious.

Things that would be nice to have:
- A mechanism that reloads a library file when a change is detected
- A way for a parent in the dependency graph to know its children

For the first one, come to think of it, I added a file system watching mechanism, so it could be done (though with automatic watching you can't trigger it at an arbitrary moment, so it may be of limited use). The second could be done by touching the C side, but the situations that need it are quite limited, so it would probably just be overhead (and consume memory, too). That one needs a lot more thought. Ideas welcome.

## 2016-07-08

### Syntax parameters (aka SRFI-139)

Marc Nieper-Wißkirchen submitted an interesting SRFI. The SRFI was based on the paper 'Keeping it Clean with Syntax Parameters'. The paper mentions some corner cases of breaking hygiene with `datum->syntax`, which I had faced before (if only I had known this paper at that moment!).

In the 'Implementation' section, the SRFI mentions that it is implemented on 'Rapid Scheme', Guile and Racket. Unfortunately there's no portable implementation. In my very prejudiced perspective, Guile isn't so fascinated by macros, so the syntax parameters are probably implemented on top of the existing macro expander (I haven't checked, since Guile is released under the GPL, and if I saw the code it might violate the license).
If so, it might be possible to implement it on top of `syntax-case`.

Without any deep thinking, I've written the following very sloppy implementation:
```
#!r6rs
(library (srfi :139 syntax-parameters)
(export define-syntax-parameter
syntax-parameterize)
(import (rnrs))

(define-syntax define-syntax-parameter
(syntax-rules ()
((_ keyword transformer)
(define-syntax keyword transformer))))

(define-syntax syntax-parameterize
(lambda (x)
(define (rewrite k body keys)
(syntax-case body ()
(() '())
((a . d)
#`(#,(rewrite k #'a keys) . #,(rewrite k #'d keys)))
(#(e ...)
#`#(#,@(rewrite k #'(e ...) keys)))
(e
(and (identifier? #'e)
(exists (lambda (o) (free-identifier=? #'e o)) keys))
(datum->syntax k (syntax->datum #'e)))
(e #'e)))

(syntax-case x ()
((k ((keyword spec) ...) body1 body* ...)
(with-syntax (((n* ...)
(map (lambda (n) (datum->syntax #'k (syntax->datum n)))
#'(keyword ...)))
((nb1 nb* ...)
(rewrite #'k #'(body1 body* ...) #'(keyword ...))))
#'(letrec-syntax ((n* spec) ...) nb1 nb* ...))))))
)
```
And it can be used like this (taken from the examples of the SRFI):
```
#!r6rs
(import (rnrs) (srfi :139 syntax-parameters))

(define-syntax-parameter abort
(syntax-rules ()
((_ . _)
(syntax-error "abort used outside of a loop"))))

(define-syntax forever
(syntax-rules ()
((forever body1 body2 ...)
(call-with-current-continuation
(lambda (escape)
(syntax-parameterize
((abort
(syntax-rules ()
((abort value (... ...))
(escape value (... ...))))))
(let loop ()
body1 body2 ... (loop))))))))

(define i 0)
(forever
(display i)
(newline)
(set! i (+ 1 i))
(when (= i 10)
(abort)))

(define-syntax-parameter return
(syntax-rules ()
((_ . _)
(syntax-error "return used outside of a lambda^"))))

(define-syntax lambda^
(syntax-rules ()
((lambda^ formals body1 body2 ...)
(lambda formals
(call-with-current-continuation
(lambda (escape)
(syntax-parameterize
((return
(syntax-rules ()
((return value (... ...))
(escape value (... ...))))))
body1 body2 ...)))))))

(define product
(lambda^ (list)
(fold-left (lambda (n o)
(if (zero? n)
(return 0)
(* n o)))
1 list)))

(display (product '(1 2 3 4 5))) (newline)
```
I've tested it on Chez, Larceny, Mosh and Sagittarius.

This implementation violates some of the 'MUST's specified in the SRFI:
1. the keyword bound by `syntax-parameterize` doesn't have to be a syntax parameter (in the SRFI it MUST be);
2. the keyword on `syntax-parameterize` doesn't have to have a binding.

And these are the sloppy parts:
1. `define-syntax-parameter` does nothing;
2. `syntax-parameterize` traverses the given expression.

If there are concrete test cases and the above implementation passes them all, I might send it as a sample implementation for R6RS.

## 2016-07-07

### Weirdness of self evaluating vectors

Scheme's vectors have a history of being non-self-evaluating data and self-evaluating data. The first is R6RS, the latter is R7RS (not sure about R5RS). Most of the time you don't really care about the difference other than whether it requires a `'` (quote) or not. However, sometimes you do need to think about the difference, and self evaluation arguably causes more trouble. The one particular case (and this is the only case in which I think self evaluating vectors are evil) is when a vector is used in a macro.

Have a look at this case:
```
(import (rnrs))

(define-syntax foo
(lambda (x)
(syntax-case x ()
((_ e) #'e))))

(foo #(a b c))
```
What do you think, how does it behave? The answer depends on the standard.
On R6RS, vectors are not self evaluating data, so this should be an error, and you can't complain if you get a daemon out of your nose. On R7RS (of course you should change the imported library name to `(scheme base)`), on the other hand, vectors are self evaluating data, so this should return the input vector.

```
(import (scheme base) (scheme write))

(define-syntax foo
(syntax-rules ()
((_ "go" (v ...) ()) #(v ...))
((_ "go" (v ...) (e e* ...)) (foo "go" (v ... t) (e* ...)))
((_ e ...) (foo "go" () (e ...)))))

(foo a b c d e)
```
What would the expansion result of the macro `foo` be? I think this is totally up to the implementation (if it's not, please let me know). For example, Chibi returns a vector of something like {Sc #22 #<Environment 4365836288> () t} (a syntactic closure, I think), Sagittarius returns a vector of identifiers, and Larceny returns a vector of the symbol `t`. If you put a `'` (quote) in the result template, then the expansion result should be the same as what Larceny returns (though Chibi still returned a vector of syntactic closures, so this might not be defined either).

Back to the first case. The first case sometimes bites me when I write/use R6RS macros in an R7RS context. For example, SRFI-64 is implemented with R6RS macros and can be used like this:
```
(import (scheme base) (srfi 64))

(test-begin "foo")

(test-equal "boom!" #(a b) (vector 'a 'b))
;; FAIL!!

(test-end)
```
On Sagittarius, the R6RS macro transformer first converts all symbols into identifiers; syntax information is then stripped only from expressions that are quoted. Now, SRFI-64 is implemented on the R6RS macro transformer and the vector doesn't have a quote, thus the symbols inside the vector are converted to identifiers. If it's R6RS, that's an error. But if it's R7RS, it should be a valid script.

I have a sort of solution (not sure whether I'll do it or not): internally, a symbol and an identifier converted from a symbol without any context (i.e., not created with `datum->syntax`) are theoretically the same. So if the compiler sees such an identifier, it should be able to unwrap it safely.

I haven't decided how it should be. So for now, this is just a memo; let it sleep.
https://www.powerelectronics.com/community/why-pdn-measured-using-vna-and-not-oscilloscope
Why is PDN measured using a VNA and not an oscilloscope?

QUESTION: If PDN (Power Distribution Network) analysis is an assessment of the input power supply voltage to a CPU, then why is it measured using a VNA and not an oscilloscope?

ANSWER: First, your understanding of PDN is largely correct. The PDN includes the voltage regulator, printed circuit board traces or planes, vias, and decoupling capacitors, including the capacitors' equivalent series resistance (ESR) and inductance (ESL). The voltage arrives at the load (CPU) and must be within the allowable regulation limits. Internal to the CPU, the bond wires and die capacitance also form part of the PDN.

While CPUs can be difficult to assess from the PDN perspective, FPGAs can be as difficult or even more so due to their higher edge speeds. The spectral content of many high speed FPGAs can reach 10 GHz, requiring the PDN assessment to cover the content from DC to 10 GHz. It is clear that computing only the DC input voltage regulation range is insufficient.

MEASURING WITH AN OSCILLOSCOPE:
Since we are interested in monitoring the voltage at the output of the regulator, including all of the pathways to the pins of the load (CPU or FPGA), it would seem that the best tool is the oscilloscope, which lets us view the voltage directly in the time domain. However, a fundamental limitation is that we do not know the load current pattern, since that is determined in large part by the software running in the CPU or FPGA. It may not be obvious, but the output voltage is, to a large degree, a function of the load profile reflected through the AC impedance back to the regulator. Therefore, capturing the voltage excursions of interest requires coordination with the FPGA operation. A resistive PDN and an associated load pattern are shown in Figure 1. The same load pattern applied to a PDN with a single antiresonance, caused by the load impedance, is shown in Figure 2.

Figure 1. Load current pattern (green trace) and voltage response (yellow trace) for a resistive PDN.

NATURAL AND FORCED RESPONSES

While there is nothing particularly notable in Figure 1, the response shown in Figure 2 exhibits two distinctly different responses. The first, natural response results in a damped ringing due to an underdamped antiresonance. The second, forced response is related to the current burst in the center of the load current pattern.

The natural response is the response of the regulator to a load step in which the output voltage is allowed to settle before the next load transient is applied. The forced response occurs when a load step is applied before the output response has had time to settle. In that case, subsequent load transients can reinforce one another, creating output voltage excursions that are greater than the natural response produces. The behavior is a function of the bandwidth of the regulator.

Therefore, if the regulator's bandwidth is very close to, or a multiple of, the antiresonant frequency (the peak in the AC impedance, see Figure 6), the result can be a growing output voltage waveform. The amplitude of this response eventually settles to a fixed level after a number of cycles, dependent on the antiresonant Q. Many antiresonances can occur in the PDN range of DC-10 GHz, with each antiresonance exhibiting these interactions with the regulator and the load step profile.

Figure 2.
Load current pattern (green trace) and voltage response (yellow trace) for a PDN with a single resonance.

MEASURING TRANSIENT RESPONSE WITH AN OSCILLOSCOPE

While it would seem appropriate to measure the transient response of the PDN using an oscilloscope, it is not as easy as it sounds. Measuring in the time domain requires us to provide the dynamic current stimulus, and therein lies the issue. The Picotest J2112A Current Injector is much faster than an electronic load, with a typical response time of 10 ns, as shown in Figure 3. It can also be controlled using an AWG, making it ideal for generating irregular load profiles.

Figure 3. Measuring response using an oscilloscope

This solution is acceptable for many lower power devices of 1.2 V and above; however, FPGAs can be much faster, and with much larger current changes than current injectors can generate.

Another possible solution is to write software for the FPGA that creates a repetitive load profile. The only problem is that we do not know what the worst-case current profile looks like for a given target impedance, and therefore do not know how to create it. [Ref. 1]

THE CONCEPT OF TARGET IMPEDANCE

The RF Vector Network Analyser ('VNA') can easily measure impedance over a very wide frequency range. The OMICRON Lab Bode 100 offers a range of DC-40 MHz. The Agilent E5061B can measure up to 3 GHz, and there are several VNAs that can measure to 10 GHz or more.

The two-port RF VNA measurement can very accurately measure milliohm resistances, consistent with an FPGA PDN. Several application notes are referenced at the end of this post. [Ref 2]

Figure 4. A low cost RF VNA and a common mode transformer can measure 1 milliohm

Figure 5. Measurement showing the 1.3 milliohm resistance and 500 pH inductance

The theory behind target impedance is very simple: essentially, Ohm's law relates the voltage change to the product of the current change and the impedance of the path from the regulator to the FPGA. Though FPGA datasheets do not always state it, the target impedance required for proper operation (the AC impedance of the path, including all series and shunt loading) is usually a not-to-exceed value, with the goal of having as flat a response as possible.

At DC this works, though with AC signals the impact of the varying impedance isn't quite that simple:

ΔV = ΔI × Z

Of course we can solve this for Z:

Z(target) = ΔV(max allowed) / ΔI

Therefore, we have converted the FPGA's input voltage requirement (the output of the regulator) into the impedance domain, where we can assess it more easily. As a rule of thumb, many engineers assume that the FPGA current change can be up to 50% of the maximum operating current. The example impedance simulation in Figure 6 shows a PDN with a target impedance of 125 milliohms and three antiresonances.

Figure 6. A simulated impedance response showing three antiresonances, each with a peak impedance within the 125 milliohm limit

THE TRANSFORMATION IS NOT SO SIMPLE

In our sample case, applying a 2 A dynamic current change (from the FPGA, as seen by the regulator) should theoretically result in a regulator voltage change of 250 mV. Yet the simulated output of the regulator shown in Figure 7 is clearly significantly greater. At the rising edge of each of the initial bursts, we can see the three different ringing frequencies. Since the three antiresonant frequencies are broadly spaced, it is difficult to see the highest frequency, while the other two frequencies are much more obvious.
This is another advantage of the RF VNA measurement vs. the oscilloscope: the log sweep makes it much easier to identify these broadly spaced responses.

Using the ADS simulation optimizer, we can determine the worst-case current profile that results in the largest voltage excursion, shown as 559 mV in this simulation. A closer view of this excursion is shown in Figure 8.

Figure 7. Dynamic load current change and voltage response

Figure 8. Closeup view of Figure 7

Therefore, a more realistic assessment of the output voltage based on the output impedance measurement is

ΔV = ΔI × Σ_i Z_i

where i represents each antiresonance in the impedance measurement. A detailed examination of this can be found in a recent article, "Target impedance based solutions for PDN may not provide a realistic assessment", referenced at the end of this post.

Figure 9. A real world example showing two low frequency resonances. The blue trace is the current signal and the yellow trace is the voltage response. The green trace is just the function generator trigger output.

So why is the PDN assessment so important for power systems designers to understand?

The most important concept to understand about PDN is that the FPGA's data-pattern-induced load current profile causes 'forced' load step responses in the regulator that are superimposed on one another, making the voltage excursions larger than one would otherwise think. This is why the regulator's output can exceed the FPGA's input limits, with potentially catastrophic consequences for performance.
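To make the arithmetic above concrete, here is a small Python sketch of the rule-of-thumb target impedance and of the pessimistic multi-antiresonance excursion estimate. The function names and most numeric values are illustrative assumptions, not figures from this article (apart from the 2 A step and the 125 milliohm peaks reused for comparison).

```python
def target_impedance(v_rail, ripple_pct, i_max, transient_frac=0.5):
    """Rule-of-thumb PDN target impedance: allowed ripple voltage
    divided by the expected load-current step (here 50% of I_max)."""
    return (v_rail * ripple_pct) / (i_max * transient_frac)

def worst_case_ripple(i_step, z_peaks):
    """Pessimistic excursion estimate: the current step times the
    sum of the antiresonant peak impedances, not just the maximum."""
    return i_step * sum(z_peaks)

# Illustrative numbers only:
z_t = target_impedance(v_rail=1.0, ripple_pct=0.05, i_max=2.0)  # 0.05 ohm
print(worst_case_ripple(2.0, [0.125, 0.125, 0.125]))            # 0.75 V bound
```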
https://www.gene-quantification.de/efficiency1.html
Determination of PCR Efficiency

Estimation via calibration dilution curve and slope calculation: the real-time PCR efficiency is

E = 10^(-1/slope)

(See also "Efficiency of PCR Reactions", Mx4000 Application Note #10 by Stratagene.)

Four-parametric sigmoidal model

The model is described by the equation

f(x) = y0 + a / (1 + exp(-(x - x0)/b))

One fluorescence data set from this study was used as an example. In this model, y0 is the ground fluorescence, a is the difference between the maximal fluorescence acquired in the run and the ground fluorescence, x0 is the first derivative maximum of the function (the inflexion point of the curve), and b describes the slope of the curve.

Figure: Plot of fluorescence observations from a LightCycler (Roche Diagnostics). Forty observations give a sigmoid trajectory that can be described by a full data fit (four-parametric logistic model). The ground phase can be well regressed linearly (inlay). Data from n > 7 onward are considered exponentially behaved and can be fitted by an exponential model. The various model fits are designated in the legend within the figure. FDM and SDM denote the positions of the first and second derivative maxima within the full data fit.
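As a quick illustration of the two formulas above, here is a small Python sketch; it is illustrative only and not code from this page.

```python
import numpy as np

def pcr_efficiency(slope):
    """Amplification efficiency from the slope of a calibration
    (dilution) curve: E = 10**(-1/slope); E = 2 means perfect doubling."""
    return 10.0 ** (-1.0 / slope)

def logistic4(x, y0, a, x0, b):
    """Four-parametric sigmoid for qPCR fluorescence: ground
    fluorescence y0, amplitude a, inflexion point x0, slope b."""
    return y0 + a / (1.0 + np.exp(-(x - x0) / b))

print(pcr_efficiency(-3.32))   # ~2.0 for an ideal dilution series
```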
https://www.krivalar.com/c-strncmp
# C strncmp() - string comparison up to n characters

It is similar to strcmp(); the only difference is that it compares at most the specified number of characters.

```
Syntax:
int strncmp(const char *s1, const char *s2, size_t n);
```

It returns zero if the first n characters of both strings are the same, a negative value if s1 is less than s2, or a positive value if s1 is greater than s2 (comparing at most n characters). Note that because only the first n characters are compared, strncmp("WELCOME", "WELDING", 3) also returns zero.

Following is an example program using strncmp():

```
#include <stdio.h>
#include <string.h>

int main(void)
{
    char first[] = "WELCOME";
    char second[50];
    int result;

    printf("The first string is %s", first);
    printf("\n Enter the second string ");
    scanf("%49s", second);

    /* compare only the first 3 characters of the two strings */
    result = strncmp(first, second, 3);
    if (result == 0)
    {
        printf("\n The comparison of the two strings is: same ");
    }
    else
    {
        printf("\n The comparison of the two strings is: different ");
    }
    return 0;
}
```

Output:

```
The first string is WELCOME
Enter the second string WELCOME
The comparison of the two strings is: same
```
https://www.crazy-numbers.com/en/21270
Discover a lot of information on the number 21270: properties, mathematical operations, how to write it, symbolism, numerology, representations and many other interesting things!

## Mathematical properties of 21270

Is 21270 a prime number? No
Is 21270 a perfect number? No
Number of divisors: 16
List of divisors: 1, 2, 3, 5, 6, 10, 15, 30, 709, 1418, 2127, 3545, 4254, 7090, 10635, 21270
Sum of divisors: 51120
Prime factorization: 2 x 3 x 5 x 709
Prime factors: 2, 3, 5, 709

## How to write / spell 21270 in letters?

In letters, the number 21270 is written as: Twenty-one thousand two hundred and seventy. And in other languages? How is it spelled?

Write 21270 in english: Twenty-one thousand two hundred and seventy
Write 21270 in french: Vingt et un mille deux cent soixante-dix
Write 21270 in spanish: Veintiún mil doscientos setenta
Write 21270 in portuguese: Vinte e um mil duzentos e setenta

## Decomposition of the number 21270

The number 21270 is composed of:

2 iterations of the number 2: The number 2 (two) represents double, association, cooperation, union, complementarity. It is the symbol of duality. ... Find out more about the number 2

1 iteration of the number 1: The number 1 (one) represents uniqueness, the unique, a starting point, a beginning. ... Find out more about the number 1

1 iteration of the number 7: The number 7 (seven) represents faith, teaching. It symbolizes reflection, the spiritual life. ... Find out more about the number 7

1 iteration of the number 0: ... Find out more about the number 0

## Other ways to write 21270

In letters: Twenty-one thousand two hundred and seventy
In binary: 101001100010110
In octal: 51426
In US dollars: USD 21,270.00 ($)
In euros: 21 270,00 EUR (€)

Some related numbers:
Previous number: 21269
Next number: 21271
Next prime number: 21277

## Mathematical operations

Operations and solutions

21270*2 = 42540: The double of 21270 is 42540
21270*3 = 63810: The triple of 21270 is 63810
21270/2 = 10635: The half of 21270 is 10635
21270/3 = 7090: The third of 21270 is 7090
21270^2 = 452412900: The square of 21270 is 452412900
21270^3 = 9622822383000: The cube of 21270 is 9622822383000
√21270 ≈ 145.842381: The square root of 21270 is 145.842381
log(21270) ≈ 9.965053: The natural (Napierian) logarithm of 21270 is 9.965053
log10(21270) ≈ 4.327767: The decimal logarithm (base 10) of 21270 is 4.327767
sin(21270) ≈ 0.988309: The sine of 21270 is 0.988309
cos(21270) ≈ 0.152464: The cosine of 21270 is 0.152464
tan(21270) ≈ 6.482237: The tangent of 21270 is 6.482237
https://calculomates.com/en/divisors/of/16541
# Divisors of 16541

## Divisors of 16541

The list of all positive divisors of 16541 (that is, the list of all integers that divide 16541) is as follows:

1, 7, 17, 119, 139, 973, 2363, 16541

Accordingly:

16541 is a multiple of 1
16541 is a multiple of 7
16541 is a multiple of 17
16541 is a multiple of 119
16541 is a multiple of 139
16541 is a multiple of 973
16541 is a multiple of 2363

16541 has 8 positive divisors.

## Parity of 16541

16541 is an odd number, as it is not divisible by 2.

## The factors for 16541

The factors of 16541 are all the numbers between -16541 and 16541 that divide 16541 without leaving any remainder. Since 16541 divided by -16541 is an integer, -16541 is a factor of 16541.

Since 16541 divided by -2363 is a whole number, -2363 is a factor of 16541

Since 16541 divided by -973 is a whole number, -973 is a factor of 16541

Since 16541 divided by -139 is a whole number, -139 is a factor of 16541

Since 16541 divided by -119 is a whole number, -119 is a factor of 16541

Since 16541 divided by -17 is a whole number, -17 is a factor of 16541

Since 16541 divided by -7 is a whole number, -7 is a factor of 16541

Since 16541 divided by -1 is a whole number, -1 is a factor of 16541

Since 16541 divided by 1 is a whole number, 1 is a factor of 16541

Since 16541 divided by 7 is a whole number, 7 is a factor of 16541

Since 16541 divided by 17 is a whole number, 17 is a factor of 16541

Since 16541 divided by 119 is a whole number, 119 is a factor of 16541

Since 16541 divided by 139 is a whole number, 139 is a factor of 16541

Since 16541 divided by 973 is a whole number, 973 is a factor of 16541

Since 16541 divided by 2363 is a whole number, 2363 is a factor of 16541

## What are the multiples of 16541?

Multiples of 16541 are all the integers divisible by 16541, i.e. those for which the remainder of the division by 16541 is zero. There are infinitely many multiples of 16541. The smallest multiples of 16541 are:

0: in fact, 0 is divisible by any integer, so it is also a multiple of 16541, since 0 × 16541 = 0

16541: in fact, 16541 is a multiple of itself, since 16541 is divisible by 16541 (16541 / 16541 = 1, so the remainder of this division is zero)

33082: in fact, 33082 = 16541 × 2

49623: in fact, 49623 = 16541 × 3

66164: in fact, 66164 = 16541 × 4

82705: in fact, 82705 = 16541 × 5

etc.

## Is 16541 a prime number?

It is possible to determine using mathematical techniques whether an integer is prime or not.

For 16541, the answer is: No, 16541 is not a prime number.

## How do you determine if a number is prime?

To test the primality of an integer, we can use several algorithms. The most naive is to try all divisors below the number to be tested (in our case 16541). We can immediately eliminate even numbers bigger than 2 (then 4, 6, 8, ...). Besides, we can stop at the square root of the number in question (here about 128.6). Historically, the sieve of Eratosthenes (which dates back to Antiquity) uses this technique relatively efficiently.

More modern techniques include the sieve of Atkin, probabilistic tests, and the cyclotomic test.
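The trial-division idea described above is easy to sketch in a few lines of Python; this is an illustrative snippet, not the site's own code.

```python
def divisors(n):
    """All positive divisors of n by trial division up to sqrt(n):
    each small divisor d found also yields the large divisor n // d."""
    small, large = [], []
    d = 1
    while d * d <= n:
        if n % d == 0:
            small.append(d)
            if d != n // d:
                large.append(n // d)
        d += 1
    return small + large[::-1]

print(divisors(16541))  # [1, 7, 17, 119, 139, 973, 2363, 16541]
```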
https://es.mathworks.com/matlabcentral/answers/1711420-undefined-function-hinfnorm-for-input-arguments-of-type-tf
# Undefined function 'hinfnorm' for input arguments of type 'tf'.

Kashish Pilyal on 4 May 2022

I am trying to calculate the H-infinity norm of a transfer function. The code is

theta=0;
kp=0.2;
kd=0.7;
tau=0.1;
h=0.5;
s=tf('s');
Gamma=(exp(-theta*s)*s^2*((0.6*s)+1)+(kd*s)+kp)/((h*s+1)*(s^2*((0.1*s)+1)+(kd*s)+kp));
ninf=hinfnorm(Gamma);

I do have the Control System Toolbox, but I do not know why I am getting this error.

Answer by Paul on 4 May 2022 (edited 4 May 2022)

Hi Kashish,

If you don't have the Robust Control Toolbox for hinfnorm(), you can probably use sigma() from the Control System Toolbox.

Example from the question:

theta=0;
kp=0.2;
kd=0.7;
tau=0.1;
h=0.5;
s=tf('s');
Gamma=(exp(-theta*s)*s^2*((0.6*s)+1)+(kd*s)+kp)/((h*s+1)*(s^2*((0.1*s)+1)+(kd*s)+kp));
ninf=hinfnorm(Gamma)
ninf = 1.0753
sv = sigma(Gamma);
max(sv(1,:))
ans = 1.0736

MIMO example from the doc page for hinfnorm():

G = [0 tf([3 0],[1 1 10]);tf([1 1],[1 5]),tf(2,[1 6])];
ninf = hinfnorm(G)
ninf = 3.0150
sv = sigma(G);
max(sv(1,:))
ans = 2.9976

See the doc page for sigma() for more info to see if it will suit your needs. Note that, unlike hinfnorm(), sigma() doesn't check for stability of the system.

(R2021a)
https://mathoverflow.net/questions/1464/euclidean-volume-of-the-unit-ball-of-matrices-under-the-matrix-norm
# Euclidean volume of the unit ball of matrices under the matrix norm

The matrix norm for an $n$-by-$n$ matrix $A$ is defined as $$|A| := \max_{|x|=1} |Ax|$$ where the vector norm is the usual Euclidean one. This is also called the induced (matrix) norm, the operator norm, or the spectral norm. The unit ball of matrices under this norm can be considered as a subset of $\Bbb R^{n^2}$. What is the Euclidean volume of this set?

I'd be interested in the answer even in just the $2$-by-$2$ case.

• I've posted a problem mathoverflow.net/questions/310498/… , which appears to be an interesting extension of the question posed by Samuel. Now, the $2 \times 2$ matrices are parameterized by a variable $\varepsilon \in [0,1]$, with the case $\varepsilon = 1$ corresponding to the question asked above. The problem has been formally solved (by Lovas and Andai) when the entries of the matrices are in $\mathbb{R}$, and apparent solutions (but not yet proofs) for $\mathbb{C}$ and $\mathbb{H}$ given. – Paul B. Slater Sep 16 '18 at 21:13

Building on the nice answer of Guillaume: The integral

$$\int_{[-1,1]^n} \prod_{i < j} \left| x_i^2 - x_j^2 \right| \, dx_1 \dots dx_n$$

has the closed-form evaluation

$$4^n \prod_{k \leq n} \binom{2k}{k}^{-1}.$$

This basically follows from the evaluation of the Selberg beta integral $S_n(1/2, 1, 1/2)$.

Combined with modding out by a typo, we now arrive at the following product formula for the volume of the unit ball of $n \times n$ matrices in the matrix norm:

$$n! \prod_{k\leq n} \frac{ \pi^k }{ (k/2)! \binom{2k}{k} } .$$

In particular, we have:

• $\frac{2}{3}\pi^2$ for n=2
• $\frac{8}{45}\pi^4$ for n=3
• $\frac{4}{1575}\pi^8$ for n=4

• "modding out by a typo" is very charming! – paul garrett Jul 24 '17 at 23:28
• Based on the values given for n = 2, 3, 4, I think that $(k/2)!$ term in the denominator of this answer should be squared, right? – Nathaniel Johnston Jun 4 at 18:08

The volume of the unit ball for the spectral norm in $n \times n$ real matrices is given by the formula

$$c_n \int\limits_{[-1,1]^n} \prod_{i < j} |x_i^2-x_j^2| \, dx_1\dots dx_n$$

where $c_n = n! \, 4^{-n} \prod_{k=1}^n v_k^2$

and $v_k=\pi^{k/2}/\Gamma(1+k/2)$ is the volume of the unit ball in $\mathbb{R}^k$.

A much more general formula for calculating all kinds of similar quantities appears e.g. here (Lemma 1). The proof is by applying the SVD decomposition as a change of variables.

The first values are

• $\frac{2}{3}\pi^2$ for 2x2 matrices
• $\frac{8}{45}\pi^4$ for 3x3 matrices
• $\frac{4}{1575}\pi^8$ for 4x4 matrices ...

There might be a closed formula for the integral above. Edit: such a formula appears in Armin's post above!

• When using your formula I get $\frac{1}{3}\pi^2$ for 2x2 matrices, and the values for n=3,4 are different from the ones you give as well. Is there a small typo somewhere (or am I just messing up the calculation)? – Armin Straub Oct 22 '09 at 20:20
• I double-checked and the general formula seems correct (anyway you can derive it from the paper I quoted). But you are right, there was a typo for n=4 (now corrected). By the way, you should find $c_2=\pi^2/4$; $c_3=\pi^4/9$; $c_4=\pi^8/144$. – Guillaume Aubrun Oct 23 '09 at 21:47
• Looking at the paper I found the typo: $c_n = n! \, 4^{-n} \dots$ Also, you're quite right; the integral does have a nice closed form coming from writing it as a Selberg integral. I put details into a new answer. – Armin Straub Oct 28 '09 at 22:10
• Oops, you're right ...
– Guillaume Aubrun Oct 28 '09 at 22:48\n\nConcerning the 2x2 case: As Mike points out, you can write down an explicit formula for the norm of the matrix {{a,b},{c,d}}. It takes a good while but Mathematica can then compute the volume you're asking for.\n\nIntegrate[If[a^2 + b^2 + c^2 + d^2\n+ Sqrt[((b+c)^2 + (a-d)^2) ((b-c)^2 + (a+d)^2)] <= 2, 1, 0],\n{a, -1, 1}, {b, -1, 1}, {c, -1, 1}, {d, -1, 1}]\n\n\nFor comparison: the volume of the Euclidean ball in $\\mathbb{R}^4$ is $\\pi^2/2$ (which contradicts Mike's final statement that the matrix norm ball sits inside the Euclidean one).\n\n• My apologies, it is actually easy to see that the matrix norm ball does not sit inside the Euclidean one. The identity matrix clearly does the job. – Mike Hartglass Oct 21 '09 at 1:58\n• Nice example. At least it sits inside the max norm unit ball (filling it out by an ambitious 41% ...). – Armin Straub Oct 21 '09 at 11:12\n• I'd like to see the Mathematica code when the 2 x 2 matrices have complex entries. The answer, I believe, should then be $\\frac{\\pi^4}{6}$. – Paul B. Slater Mar 5 '17 at 16:53\n\nNot that this is too helpful, but in the case of a 2 x 2 matrix A (with diagonal entries a and d and off-diagonal entries b and c, all real) the norm of the matrix is given by the formula $\\sqrt{\\frac{1}{2}\\left(a^{2} + b^{2} + c^{2} + d^{2} + \\sqrt{(a^{2} + b^{2} + c^{2} + d^{2})^{2} - 4D}\\right)}$ where $D = \\det(A^{*}A)$. It is a pretty ugly region but at least it can be computed in terms of a, b, c, and d, and this unit ball will sit inside the Euclidean ball in $\\mathbb{R}^{4}$.\n\n• There should be a square root outside that expression, right? – j.c. Oct 21 '09 at 0:16\n• yes, I forgot the square root (or at least the lack of a square root is a typo in Conway's book). – Mike Hartglass Oct 21 '09 at 2:03\n\nYes, O(n) is the n(n-1)/2-dimensional space of orthogonal n by n matrices. Vol(O(n)) is its volume.\n\nThe integrand in the answer is simply the Jacobian of the singular value decomposition, $\\{s_i\\}$ is just the ordered set of the singular values, and the integration is performed on the subset bounded by 1.\n\nI may just have missed a factor of $1/2^n$ because of the sign ambiguity in the SVD singular values.\n\n• The volume of the unit ball for the spectral norm in $2 \\times 2$ real matrices is, as indicated above, $\\frac{2 \\pi^2}{3}$, while for $2 \\times 2$ complex matrices it is $\\frac{\\pi^4}{6}$ (see Table 2, p. 7, in arxiv.org/pdf/1610.01410.pdf ). I would like to know the corresponding value for the $2 \\times 2$ quaternionic matrices. – Paul B. Slater Jun 26 '17 at 18:06\n\nI had a go at this question, but the method I tried here doesn't quite work out. It does reduce it to upper triangular matrices, although that doesn't seem to be a lot of help for general n.\n\nBy scaling, the volume of the set $\\{|A|\\leq K\\}$ is $VK^{n^2}$. Now let M be a matrix whose entries are independent normal random variables with mean 0 and variance 1. From the density function of the normal distribution, this gives $P(|M|\\leq K)\\sim(2\\pi)^{-n^2/2}VK^{n^2}$ in the limit of small K.\n\nI'll now calculate this expression in an alternative way. Use the decomposition M=QR, where Q is orthogonal and R is upper triangular, with diagonal elements $\\lambda_n, \\lambda_{n-1},\\dots,\\lambda_1$, which are the eigenvalues of R. This can be done in such a way that $\\lambda_k^2$ has the $\\chi^2_k$-distribution (a quick google search gives this but there are probably better references). The upper triangular parts of R have the standard normal density. We need to calculate |R|. 
I was originally thinking that this is the max eigenvalue, but it's not quite that simple.\n\nBy means of the singular value decomposition, I think that the general answer for a real n by n matrix should be: Required volume = $${\\rm vol}(O(n))^2 \\int\\limits_{0\\leq s_n \\leq s_{n-1}\\leq \\dots \\leq s_1\\leq 1}\\prod_{i < j \\leq n} (s_i^2-s_j^2).$$\n\nO(n) is the orthogonal group of n by n matrices.\n\n• Is vol(O(n)) the n(n-1)/2-dimensional measure of the set of orthogonal n by n matrices? I would like to see how you came up with this. – Darsh Ranjan Oct 21 '09 at 8:15\n\nI worked out the answer for the 2 by 2 case as well.\n\nFirst, when dealing with 2 by 2 matrices in general, a convenient variable change is:\n\n$$a\\rightarrow\\frac{w+x}{\\sqrt{2}},\\quad d\\rightarrow\\frac{w-x}{\\sqrt{2}},\\quad c\\rightarrow\\frac{y-z}{\\sqrt{2}},\\quad b\\rightarrow\\frac{y+z}{\\sqrt{2}}.$$\n\nThen $a^2+b^2+c^2+d^2 = w^2+x^2+y^2+z^2$, and the determinant $(ad-bc) = \\frac{1}{2}(x^2+y^2-w^2-z^2)$.\n\n(Aside: this set of coordinates lets you see, for instance, that the set of rank 1 matrices in the space of 2D matrices realized as $\\mathbb{R}^4$ is a cone over the Clifford torus, since $x^2+y^2 = w^2+z^2$ on a sphere $x^2+y^2+w^2+z^2=r^2$ implies $x^2+y^2 = r^2/2$ and $w^2+z^2 = r^2/2$, which are scaled equations for a flat torus.)\n\nLet $r_1^2 = x^2+y^2, r_2^2 = w^2+z^2$. (These are radial coordinates of a coordinate system consisting of two orthogonal 2D cylindrical coordinate systems.) Then the norm squared is:\n\n$$\\frac{1}{2}\\left(r_1^2+r_2^2 + \\sqrt{ (r_1^2+r_2^2)^2 - (r_1^2-r_2^2)^2 }\\right)$$\n\nWhen this is less than one, this corresponds to the region plotted below:", null, "Note that each point in the $r_1,r_2$ picture corresponds to a different \"torus\", $x^2+y^2=r_1^2, w^2+z^2=r_2^2$.\n\nWe can now integrate over the shaded-in region, $\\int_{region} dw\\, dx\\, dy\\, dz$.\n\nThis 4-D integral can be reduced to 2D using $r_1$ and $r_2$, since $dx\\, dy = 2\\pi r_1\\, dr_1$ and $dw\\, dz = 2\\pi r_2\\, dr_2$:\n\n$$(4\\pi^2) \\int_{region} r_1 r_2\\, dr_1\\, dr_2.$$\n\nNow, note that we can rewrite $r_2$ in terms of $r_1$. In particular, after some manipulation of our norm, the shaded-in region is defined by $r_2^2 \\leq 2-2\\sqrt{2}r_1+r_1^2=(\\sqrt{2}-r_1)^2$. Hence $r_2\\leq \\sqrt{2}-r_1$, and we can evaluate the $r_2$ integral:\n\n$$4\\pi^2 \\int_{r_1=0}^{\\sqrt{2}} dr_1\\, r_1 \\int_{r_2=0}^{\\sqrt{2}-r_1} r_2\\, dr_2 \\\\ = 4\\pi^2 \\int_{r_1=0}^{\\sqrt{2}} dr_1\\, r_1 (\\sqrt{2}-r_1)^2/2 \\\\ = (4\\pi^2)(1/6).$$\n\nThis yields $2\\pi^2/3$, as Armin found." ]
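A quick numerical sanity check of the $2\times 2$ value: one can Monte Carlo-sample the cube $[-1,1]^4$ (which contains the spectral-norm unit ball, since every entry of a matrix of norm at most 1 has absolute value at most 1) and apply the closed-form expression for the norm. A minimal sketch in Python; the sample count and seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000
# Sample 2x2 matrices (a, b; c, d) uniformly from the cube [-1, 1]^4.
a, b, c, d = rng.uniform(-1.0, 1.0, size=(4, n))

t = a*a + b*b + c*c + d*d             # trace of A^T A
det = a*d - b*c                       # det(A), so det(A^T A) = det**2
# Largest eigenvalue of A^T A, i.e. the squared spectral norm.
norm_sq = 0.5 * (t + np.sqrt(np.maximum(t*t - 4.0*det*det, 0.0)))

vol = 16.0 * np.mean(norm_sq <= 1.0)  # cube volume 2^4 times the hit fraction
print(vol, 2.0 * np.pi**2 / 3.0)      # both should be close to 6.5797...
```

The hit fraction comes out near 0.41, which also matches the "ambitious 41%" remark in the comments above.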
[ null, "https://i.stack.imgur.com/1yt9L.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.70825595,"math_prob":0.99930966,"size":1875,"snap":"2020-45-2020-50","text_gpt3_token_len":782,"char_repetition_ratio":0.11758418,"word_repetition_ratio":0.0,"special_character_ratio":0.42666668,"punctuation_ratio":0.086283185,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999529,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-30T20:37:32Z\",\"WARC-Record-ID\":\"<urn:uuid:a47dffbc-c8fc-4dfd-896d-927ce8a69025>\",\"Content-Length\":\"202875\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ea48bdb3-b2c5-4bd0-9957-05d4a0941e42>\",\"WARC-Concurrent-To\":\"<urn:uuid:1974f6c1-d534-4afb-94ac-c77d3a909c30>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/1464/euclidean-volume-of-the-unit-ball-of-matrices-under-the-matrix-norm\",\"WARC-Payload-Digest\":\"sha1:DTIFXMQFOZ7ZKPSSEFWSNU34ODKCO6AW\",\"WARC-Block-Digest\":\"sha1:TAPSSXJMVUSVR66FAPHCJNITKINKIXRP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107911229.96_warc_CC-MAIN-20201030182757-20201030212757-00545.warc.gz\"}"}
http://forum.lwjgl.org/index.php?action=printpage;topic=5712.0
[ "# LWJGL Forum\n\n## Programming => General Java Game Development => Topic started by: Exempt on February 21, 2015, 23:52:27\n\nTitle: Matrices and Quaternions! Solved\nPost by: Exempt on February 21, 2015, 23:52:27\nHello!\n\nI'm learning how to do quaternion rotation for use in my game (mainly for skeletal animation...what a pain..) and I've finally for a working rotation (I think)! For my question, is it normal to first turn a quaternion into a rotation matrix4 then multiply the transformation matrix4 of the object with the new rotation matrix4? This has been the only way I've gotten this to work or is there a better way to go about this?\nTitle: Re: Matrices and Quaternions!\nPost by: Cornix on February 22, 2015, 00:19:33\nYou could have a look how these people do it here:\nhttp://forum.lwjgl.org/index.php?topic=5695.0\n\nOr perhaps you may simply use their classes instead of writing your own.\nTitle: Re: Matrices and Quaternions!\nPost by: Exempt on February 22, 2015, 21:59:19\nI'm not really making a library to deal with quaternions since I'm using the older version of LWJGL there is already a library. I just don't know if it's normal or even a good idea to take a quaternion and turn it into a matrix then multiply that new rotation matrix with my transformation matrix for a object in the world.\nTitle: Re: Matrices and Quaternions!\nPost by: Kai on February 22, 2015, 22:45:24\nI am not sure why exactly you are asking. Maybe you can clear the motivation for your question a bit. Are you having troubles with either matrices or quaternions or how they can be converted into each other?\nMatrices give you a single way for representing and dealing with any transformations, and they are, unlike quaternions, natively supported by OpenGL and GLSL and the whole rendering pipeline.\nAdditionally, matrices have the nice property that you can concatenate many matrices representing various transformations (translation, scaling, rotation, shearing, projection) together to save matrix multiplications in the end.\nOf course you can also concatenate quaternions, but you would then need to have two different computation paths in your shader, one for quaternions for computing rotations, and one for matrices for anything else. That would be quite cumbersome. And in case you are using the fixed-function pipeline, then matrices really are your only option to transport transformations to the rendering pipeline.\nSo, it is perfectly reasonable to convert a quaternion to a matrix, in order for OpenGL and GLSL to handle them in the standard way.\nTitle: Re: Matrices and Quaternions!\nPost by: Exempt on February 24, 2015, 01:09:59\nThat's pretty much what I wanted to know. Thanks\nTitle: Re: Matrices and Quaternions!\nPost by: Exempt on February 25, 2015, 01:14:41\nSorry, I do have another question on this subject. Is there a more direction way to apply a quaternion orientation to a matrix4f? This is what I do right now to create my final transformation matrix.\n\nIf you see something wrong in here please tell me cause I'm not great at matrix math and quaternions.. 
well, the math behind that is, well... insane.\nCode: [Select]\npublic static Matrix4f createTransformationMatrix(Vector3f translation, Quaternion rotation, float scale) {\n    Matrix4f matrix = new Matrix4f();\n    matrix.setIdentity();\n    Matrix4f.translate(translation, matrix, matrix);\n    Matrix4f.mul(matrix, convertQuaternionToMatrix4f(rotation), matrix);\n    Matrix4f.scale(new Vector3f(scale, scale, scale), matrix, matrix);\n    return matrix;\n}\nCode: [Select]\npublic static Matrix4f convertQuaternionToMatrix4f(Quaternion q) {\n    Matrix4f matrix = new Matrix4f();\n    matrix.m00 = 1.0f - 2.0f * (q.getY() * q.getY() + q.getZ() * q.getZ());\n    matrix.m01 = 2.0f * (q.getX() * q.getY() + q.getZ() * q.getW());\n    matrix.m02 = 2.0f * (q.getX() * q.getZ() - q.getY() * q.getW());\n    matrix.m03 = 0.0f;\n    matrix.m10 = 2.0f * (q.getX() * q.getY() - q.getZ() * q.getW());\n    matrix.m11 = 1.0f - 2.0f * (q.getX() * q.getX() + q.getZ() * q.getZ());\n    matrix.m12 = 2.0f * (q.getZ() * q.getY() + q.getX() * q.getW());\n    matrix.m13 = 0.0f;\n    matrix.m20 = 2.0f * (q.getX() * q.getZ() + q.getY() * q.getW());\n    matrix.m21 = 2.0f * (q.getY() * q.getZ() - q.getX() * q.getW());\n    matrix.m22 = 1.0f - 2.0f * (q.getX() * q.getX() + q.getY() * q.getY());\n    matrix.m23 = 0.0f;\n    matrix.m30 = 0;\n    matrix.m31 = 0;\n    matrix.m32 = 0;\n    matrix.m33 = 1.0f;\n    return matrix;\n}\nTitle: Re: Matrices and Quaternions!\nPost by: Neoptolemus on February 25, 2015, 16:59:27\nThere is possibly a more direct way: simply apply all of the transformations directly in the same step rather than calling other methods in the process. You'll start getting code that looks extremely ugly, but when you're doing potentially thousands of these kinds of calculations per frame you're going to want to avoid creating too many temporary objects with such a short lifespan. If you take a look at the library I put together (Cornix supplied the link) you'll see that I avoid creating any objects, and just try to do everything as directly as possible.\n\nI do like your method for creating a transformation matrix in one easy step, though; I'll see about implementing my own version if you don't mind ;)\n\nBy the way, the library Kai and I are putting together should be compatible with LWJGL 2.x as it uses the same naming conventions (though I've dropped Matrix4f.rotate() in favour of using quaternions...), but otherwise feel free to copy and paste bits out of it if you prefer using your own setup.\nTitle: Re: Matrices and Quaternions!\nPost by: Neoptolemus on February 25, 2015, 20:27:54\nHi again. I had a go at creating a more direct approach on the train home. 
Here is what I came up with:\n\nCode: [Select]\npublic static void createTransformationMatrix(Vector3f position, Vector3f scale, Quaternion rotation, Matrix4f dest) {\n    float q00 = 2.0f * rotation.x * rotation.x;\n    float q11 = 2.0f * rotation.y * rotation.y;\n    float q22 = 2.0f * rotation.z * rotation.z;\n    float q01 = 2.0f * rotation.x * rotation.y;\n    float q02 = 2.0f * rotation.x * rotation.z;\n    float q03 = 2.0f * rotation.x * rotation.w;\n    float q12 = 2.0f * rotation.y * rotation.z;\n    float q13 = 2.0f * rotation.y * rotation.w;\n    float q23 = 2.0f * rotation.z * rotation.w;\n\n    dest.m00 = (1.0f - q11 - q22) * scale.x;\n    dest.m01 = (q01 + q23) * scale.x;\n    dest.m02 = (q02 - q13) * scale.x;\n    dest.m03 = 0.0f;\n    dest.m10 = (q01 - q23) * scale.y;\n    dest.m11 = (1.0f - q22 - q00) * scale.y;\n    dest.m12 = (q12 + q03) * scale.y;\n    dest.m13 = 0.0f;\n    dest.m20 = (q02 + q13) * scale.z;\n    dest.m21 = (q12 - q03) * scale.z;\n    dest.m22 = (1.0f - q11 - q00) * scale.z;\n    dest.m23 = 0.0f;\n    dest.m30 = position.x;\n    dest.m31 = position.y;\n    dest.m32 = position.z;\n    dest.m33 = 1.0f;\n}\nI'm quite pleased with how much I was able to simplify it, actually, and all my testing so far shows it produces identical results to the \"old\" method (using translate, rotate and scale). Feel free to use it if you like and let me know if you run into any problems!\nTitle: Re: Matrices and Quaternions!\nPost by: Exempt on February 26, 2015, 13:09:42\nThat does look about right to me; I'll give it a test later myself, thanks. I'll take another look at the LWJGL3 classes you wrote, I guess sooner or later that will be the way to go." ]
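For reference, and assuming LWJGL's Matrix4f fields follow the column-major m<column><row> layout (so matrix.m01 is row 1 of column 0), both snippets in this thread encode the standard rotation matrix of a unit quaternion $q = (x, y, z, w)$:

$$R(q) = \begin{pmatrix} 1-2(y^2+z^2) & 2(xy-zw) & 2(xz+yw) \\ 2(xy+zw) & 1-2(x^2+z^2) & 2(yz-xw) \\ 2(xz-yw) & 2(yz+xw) & 1-2(x^2+y^2) \end{pmatrix}.$$

Scaling the three rotation columns by the per-axis scale factors and writing the translation into the fourth column is exactly what the combined method does, which is why it matches the translate-then-rotate-then-scale sequence.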
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83359444,"math_prob":0.95582163,"size":6382,"snap":"2022-27-2022-33","text_gpt3_token_len":1720,"char_repetition_ratio":0.17278144,"word_repetition_ratio":0.03597786,"special_character_ratio":0.2981824,"punctuation_ratio":0.20378457,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99372476,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-08T23:25:49Z\",\"WARC-Record-ID\":\"<urn:uuid:3744b7a6-2b65-46a6-b45b-49b08f1b9e58>\",\"Content-Length\":\"13974\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6e392a1f-6f87-4b16-95c1-2fa3aef9ac49>\",\"WARC-Concurrent-To\":\"<urn:uuid:bf493a38-c408-484d-8d45-723ce71e1df3>\",\"WARC-IP-Address\":\"52.6.78.255\",\"WARC-Target-URI\":\"http://forum.lwjgl.org/index.php?action=printpage;topic=5712.0\",\"WARC-Payload-Digest\":\"sha1:FRHFLI3E75CBVPA4SII56723BBF4IIYB\",\"WARC-Block-Digest\":\"sha1:EESTX54HQDGZPVYFLMB2H5FCXAYC245V\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570879.1_warc_CC-MAIN-20220808213349-20220809003349-00675.warc.gz\"}"}
https://electronics.stackexchange.com/questions/191159/step-by-step-explanation-of-how-voltage-follower-reaches-steady-state-using-nega/191172
[ "# Step-by-step explanation of how voltage follower reaches steady state using negative feedback\n\nJust one minute! I am not trying to understand what negative feedback does eventually, or why it should be used. I am trying to understand how the circuit reaches steady state, and how, step by step, the negative feedback causes Vout to be the same as Vin. This has not been addressed adequately in other answers.", null, "Let's assume the op-amp has a gain of 10,000, a supply of 15V, and Vin is 5V.\n\nAccording to my understanding, this is how it goes:\n\n1. $V_{in}$ is 5V, so $V_{out}$ should be 50,000V. However, it is limited to 15V by the power supply of the op-amp.\n2. $V_{out}$ is then applied back to $V_-$, but it is subtracted from $V_{in}$ due to it being negative feedback\n3. So the differential input voltage is now 5V - 15V = -10V\n4. This is then amplified to -15V by the op-amp (because of saturation)\n5. Now -15V is applied to $V_{in}$ through negative feedback, but it is added to 5V, due to double negative\n6. So now differential input is 20V, and $V_{out}$ is 15V (due to saturation)\n7. It seems that each time the op-amp will reach saturation, but just invert the output\n\nI have obviously done something wrong here. The output is never going to stabilize at 5V in this way. How does it actually work?\n\nDue to the excellent answers, I (think I) have understood the operation of negative feedback. According to my understanding, this is how it goes:\n\nLet's say for simplicity that the input is a perfect step to 5V (otherwise the output would follow the transient input, making everything 'continuous' and difficult to explain in steps).\n\n1. In the beginning, the input is 5V, and right now the output is at 0V, and 0V is being fed back to $V_{in}$\n2. So now the differential voltage $(V_+ - V_-)$ is 5V. Since the gain of the op-amp is 10,000, it will want to produce an output of 50,000V (practically limited by the supply voltage), thus the output will start to increase rapidly.\n3. Let's consider the point in time when this output reaches 1V.\n4. Right now the feedback will be 1V as well, and the differential voltage will have fallen to 4V. Now the 'target' voltage of the op-amp will be 40,000V (because of the 10,000 gain, and again, limited to 15V by the power supply). Thus V_out will keep on increasing rapidly.\n5. Let's consider the point in time when this output reaches 4V.\n6. Now the feedback will be at 4V as well, and the differential voltage will have fallen to 1V. Now the op-amp 'target' is 10,000V (limited to 15V by the supply). Thus $V_{out}$ will still keep on increasing.\n\nThe emerging pattern is: the differential input causes increase in V_out, which causes increase in feedback voltage, which causes decrease in differential input, which decreases the op-amp 'target' output voltage. This cycle is continuous, meaning we can split it into even shorter intervals for investigation. Anyhow:\n\n1. Let's consider the point in time when this output reaches 4.9995V. Right now the feedback is 4.9995V, so the differential voltage will fall to 0.0005V $(V_{in} - V_- = 5V - 4.9995V = 0.0005V)$. Now the target of the op-amp is $0.0005V*10,000 = 5V$.\n\nHowever, if the op-amp reaches 4.9998V, now the differential voltage will be only 0.0002V. Thus, the op-amp output should decrease to 2V. Why doesn't this happen?\n\nI believe I have finally understood the process:\n\nThe op-amp output cannot reach 4.9998V. 
Because as soon as $V_{out}$ increases above 4.9995V, the feedback will also increase, causing the differential input to decrease, bringing the op-amp output back to 4.9995V.\n\nAnd if the op-amp output decreases to below 4.9995V, the feedback will decrease, causing the differential voltage to increase, bringing the op-amp output back to 4.9995V.\n\nThe last two points are the essence of negative feedback. $V_{out}$ has stabilized as close as possible to $V_{in}$. If the gain were higher, the difference between $V_{out}$ and $V_{in}$ would be smaller. If the gain reaches infinity, then the output voltage is exactly equal to the input voltage, and because the feedback is exactly equal to $V_{in}$, there would be 0 differential voltage, and a virtual short would be created between the two inputs.\n\n• If you assume the output transition time is not zero, everything will become clear. – Eugene Sh. Sep 18 '15 at 21:39\n• Depends on why you need it. – Eugene Sh. Sep 18 '15 at 21:49\n• You can't describe it step by step. There are no steps. It's continuous. All the 'then's in your question are fallacious. Everything happens at once. – Marquis of Lorne Sep 18 '15 at 22:49\n• Even a continuous situation can be broken down into steps by inspecting it at important time intervals, in order to aid understanding. – Hassaan Sep 19 '15 at 11:04\n• You need to model the opamp with a differential equation to get some idea of the dynamics. Try something like $\\dot{v_o} = - v_o + K (v_+-v_-)$, with $v_- = v_o, v_+ = v_{\\text{in}}$. (I'm taking the time constant to be one for simplicity.) – copper.hat Sep 20 '15 at 1:13\n\n\"Vin is 5V, so Vout should be 50,000V.\"\n\nWhy? The OpAmp amplifies the difference between the + and - inputs, not just the value on the + input!\n\nOK, you might start with: the output is at 0V, and the input (connected to the + input) is 5V. What you have done is apply a 5V step to the input.\n\nNow what happens is that the OpAmp starts to raise the voltage on the output. It can't do this at once, so it will rise 'slowly' (for some rather fast value of slowly, which has a technical name in OpAmp world: the slew rate, which is an important characteristic of a real OpAmp). When it reaches 5V, this is fed back to the negative input, at which time it compensates the 5V at the + input, so the OpAmp no longer tries to raise its output level. (To be really accurate: this happens a little bit earlier, when the difference is 5V/10k.)\n\nDepending on timing characteristics, the output might 'slowly' settle to 5V, or overshoot the 5V, drop below 5V, etc. (oscillate towards 5V). If the circuit is designed badly, the oscillation might increase (and never end).\n\n• Wouter is correct - between step 1 and step 2 (in the question) is a whole load of things that make step 3 onwards basically redundant. – Andy aka Sep 18 '15 at 23:02\n\n## Most Basic Interpretation:\n\nHere is my intuitive way to understand a given op amp circuit by personification. Picture a little dude inside the op amp. The little dude has a display that indicates the difference in voltages between the + and - inputs. The little dude also has a knob. The knob adjusts the output voltage, somewhere between the voltage rails.", null, "The goal of our little friend is to make the difference between the two voltages zero. He will turn the knob until he finds the voltage on the output that, based on the circuit you connected to it, results in zero difference on his display.\n\nSo in \"sequential\" steps:\n\n1. The input to the buffer circuit is at 5V. 
Let's assume that the output knob is initially at 0V.\n2. Since the input is connected directly to the output in the buffer configuration, the difference that is on the little dude's display is 5V. He is not happy about that.\n3. The little dude starts turning the knob to increase the voltage output. It keeps getting closer and closer.\n4. Finally, when he sees 0V on the display, he stops changing the knob. The output will now be at 5V.\n\n## Inside an Ideal Op Amp:\n\nIt is not actually a little dude inside an op amp: it's math! Here is a representation of what we are trying to implement in an op amp:", null, "(Schematic created using CircuitLab.)\n\nThis will achieve what the little dude was trying to achieve, with some limitations:\n\n• The little dude could figure out which way to turn the knob, but this can't. We have to hook it up such that increasing the output decreases the difference.\n• There will be a tiny error if the \"Lots of gain\" is not actually infinity.\n• We have to consider carefully whether the circuit will be stable. There's quite a bit out there on this topic.\n\n## A Real Op Amp:\n\nHere is what a real op amp (the 741) looks like on the inside:", null, "These transistors implement the mathematical representation above.\n\nIt is important to keep in mind that there are a whole host of practical issues that must be addressed when using a real op amp. To name a few:\n\n• Bias currents\n• Noise\n• Common mode input voltage\n• Current output\n• Supply voltages\n• Power dissipation\n• Dynamic behavior and stability\n\nBut in all op amp circuits, my mind always starts with the \"little dude\" explanation to get an idea of what is going on. Then, if needed, I extend this with mathematical analysis. Finally, also if needed, I apply practical knowledge of what is needed to meet an application's requirements.\n\nAn opAmp operates in continuous time and not in discrete time. This means that no action can occur instantaneously and actions do not happen in steps. Even if a switch is flipped to connect a voltage to the + pin, there is still a transient rise time in the input, and the output continuously follows. This is very commonly described as opAmp action. A SPICE model is just that, a model. The model does not and cannot incorporate all of the nuances that are in the opAmp. If you want to study the transient effects of an opAmp, then buy one and look at it with an oscilloscope. That is the only way you will be able to study the effects.\n\nIn the real world, op amps have a limited slew rate. For some kinds of op amps, the slew rate can be very fast, but it's never quite instantaneous. When the \"+\" input of the op amp is higher, the output will rise very quickly until it reaches the positive rail or the \"+\" input is no longer higher than the \"-\" input. When the \"-\" input is higher, the output will fall very quickly until it reaches the negative rail or the \"-\" input is no longer higher than the \"+\" input.\n\nIn most properly-designed circuits that use op amps, the aspects of circuit behavior necessary to meet requirements should be satisfied equally well for a significant range of output slew rates. In the case of the voltage follower, for example, the slew rate will add a short delay between the time the input changes and the time the output reaches the same value, but it won't affect the value reached by the output.\n\nActually, the phenomenon you describe used to be a real problem, way back in the dark ages (1970s). 
The venerable LM310 Voltage Follower data sheet contains the application hint (bottom of page 2) which recommends a 10k ohm input resistor in order to maintain stability.\n\nAlso note that your argument can be applied to any op amp circuit, and dealing with your objection requires consideration of amplifier frequency response, which is way more than I can cover. Let it suffice to say that, on the one hand, the output does not change instantaneously (the limited slew rate mentioned by other responders), and on the other hand there are considerations of how the internal circuitry responds to changes as well.\n\nWhat actually happens has been described by others: the output responds to bring the difference between the two inputs to zero, and, if the circuit is properly designed, will eventually stay there. But just to show you that the subject is complicated, consider that if you slow the output down too much (by putting a capacitor to ground on the output) you can also cause the amp to oscillate.\n\nI'm sorry I can't give more details, but it's pretty clear you need a lot more background before I could even try to explain it.\n\nThe gross answer is that the opamp's output will slew to whatever voltage is needed in order for the noninverting (+) and inverting (-) inputs to be at the same voltage. Consequently, if the + input is set to, say, 5 volts, the output will servo to 5 volts so that the - input will be at 5 volts, assuming the opamp's rails will allow that to happen.\n\nIn reality, though, the output never really settles down and is always servoing above and below the voltage on the + input.\n\nHow much is dependent on the opamp's gain and bandwidth and on the external circuitry, but that's a whole different question." ]
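To make the differential-equation comment concrete, here is a minimal numerical sketch of the follower using a one-pole op-amp model; the gain, time constant, and step size are assumed illustrative values, not the behavior of any particular part:

```python
K, v_in = 10_000.0, 5.0              # assumed open-loop gain and input step (V)
tau, dt, steps = 1e-3, 1e-9, 5_000   # assumed one-pole time constant, Euler step
v_out = 0.0
for _ in range(steps):
    # One-pole model from the comments: tau * dv/dt = -v_out + K*(v+ - v-),
    # with v- tied to v_out (voltage follower). Clip to the +/-15 V rails.
    v_out += dt * (-v_out + K * (v_in - v_out)) / tau
    v_out = max(-15.0, min(15.0, v_out))
print(round(v_out, 4))               # 4.9995, i.e. K/(K+1) * 5 V
```

The output rises monotonically toward $K/(K+1) \cdot 5\,\mathrm{V} \approx 4.9995\,\mathrm{V}$ and stays there, with no oscillation: exactly the fixed point described in the question's final edit.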
[ null, "https://i.stack.imgur.com/A9Tkq.png", null, "https://i.imgur.com/7ljuCtD.png", null, "https://i.stack.imgur.com/g26iZ.png", null, "https://upload.wikimedia.org/wikipedia/commons/thumb/e/e0/OpAmpTransistorLevel_Colored_Labeled.svg/780px-OpAmpTransistorLevel_Colored_Labeled.svg.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91388863,"math_prob":0.95641434,"size":4124,"snap":"2020-45-2020-50","text_gpt3_token_len":1101,"char_repetition_ratio":0.15849514,"word_repetition_ratio":0.07648725,"special_character_ratio":0.28176528,"punctuation_ratio":0.1273585,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98744035,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,9,null,7,null,7,null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-30T17:45:39Z\",\"WARC-Record-ID\":\"<urn:uuid:68cc08b4-9b6d-4603-9c57-82629b0cb249>\",\"Content-Length\":\"204357\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:01641c3f-63a5-4c1b-a634-52081088ffc2>\",\"WARC-Concurrent-To\":\"<urn:uuid:08232065-3261-4524-bb04-6e8032a448b1>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://electronics.stackexchange.com/questions/191159/step-by-step-explanation-of-how-voltage-follower-reaches-steady-state-using-nega/191172\",\"WARC-Payload-Digest\":\"sha1:E23P3YTO44NI2BFUHF55YA6PKGWEA7K7\",\"WARC-Block-Digest\":\"sha1:MBIEQBPDLJD55HOUIQLKBTSF2WWES37D\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107911027.72_warc_CC-MAIN-20201030153002-20201030183002-00129.warc.gz\"}"}
https://webob.info/and-relationship/edges-and-vertices-relationship-problems.php
[ "# Edges and vertices relationship problems\n\n### Edge and Vertex Connectivity", null, "He used graphs to solve the famous Königsberg bridge problem. Program Structure: A compiler builds a graph to represent relationships between A graph that have nonempty set of vertices connected at most by one edge is called simple. In mathematics, graph theory is the study of graphs, which are mathematical structures used to model pairwise relations between A vertex may exist in a graph and not belong to an edge. V and E are usually taken to be finite, and Many practical problems can be represented by graphs. Emphasizing their application to. A polyhedron is a solid object whose surface is made up of a number of flat faces which themselves are bordered by straight lines. Each face is.", null, "Furthermore, since a polyhedron can not have fewer than three edges at any vertex or any faces with fewer than 3 sides, these are obvious necessary conditions that a graph must obey in order for it to be the edge-vertex graph of some convex 3-dimensional polyhedron. However, these conditions are not sufficient for a graph to arise from a convex polyhedron. Clearly, a graph corresponding to a polyhedron can not have a single edge whose removal disconnects the graph into more than one piece, as would be true for the graph below: The graph below is 3-valent and has faces which are at least three-sided.\n\nCan you convince yourself that it does not correspond to any convex 3-dimensional polyhedron, but that there is a non-convex 3-dimensional polyhedron which does have this as its edge-vertex graph? The polyhedron, when you find it, has four triangular faces and two hexagonal faces! In a rather curious situation, the solution to this problem, finding conditions on a graph that it be isomorphic to have the same structure as the vertex-edge graph of a convex 3-dimensional polyhedron, was completely answered in the early part of the 20th century.\n\nThis work was done by the great German geometer and algebraist Ernst Steinitz Steinitz's work was published in German and, unfortunately, did not become widely known for quite some time. The catalyst for the reformulation of what Steinitz had done was the \"translation\" of his work into modern graph theory terminology.\n\nA graph G is isomorphic to the vertex-edge graph of a 3-dimensional polyhedron i. G is 3-polytopal if and only if G is planar and 3-connected.\n\nThe property of being 3-connected requires that for any pair of vertices u and v of the graph, there are at least three paths between u and v whose only vertices in common are u and v. The diagram below offers a \"schematic\" view of what such paths might look like for two typical vertices in a graph. The diagram omits other edges that might be present at the dots shown. Amazingly, Steinitz's Theorem enables one to study the combinatorial theory of 3-dimensional convex polyhedra by drawing diagrams in 2 dimensions!", null, "These appeared in areas involving Hamiltonian circuits a tour of the vertices, starting and ending at the same vertex, visiting each vertex once and only oncecoloring problems, and matchings disjoint sets of edges. Perhaps the greatest progress concerned existence theorems for 3-dimensional convex polyhedra.\n\nSuch questions now were reduced to constructions of planar graphs. The study of Hamiltonian circuits was spurred by the graph theory version of Steinitz's Theorem. 
Thus, David Barnette and others found a 38-vertex, 3-valent, 3-polytopal non-Hamiltonian graph, and this work led Barnette to the still open conjecture that planar 3-valent 3-connected bipartite graphs have a Hamiltonian circuit. Another milestone in the theory of Hamiltonian circuits was Grinberg's amazing result, described below.\n\n## Euler's Polyhedral Formula: Part II\n\nLet G be a plane graph with Hamiltonian circuit H, let $p_k'$ denote the number of k-gonal faces of G which are interior to the circuit H, and let $p_k''$ denote the number of k-gonal faces of G which are exterior to the circuit H. Figure 2: For the diagram above and the Hamiltonian circuit shown in blue, we have four interior faces, labeled in red, which are a 3-, 4-, 6-, and 7-gon.", null, "The faces which are exterior to the blue Hamiltonian circuit, labeled in green, are a 3-gon, two 4-gons, a 5-gon, and a 6-gon. Grinberg's Theorem states that the following relationship must hold between the faces lying in the interior and exterior of such a Hamiltonian circuit: $$\\sum_{k \\geq 3} (k-2)\\left(p_k' - p_k''\\right) = 0.$$ You should verify that the numbers associated with the Hamiltonian circuit in Figure 2 satisfy the equation above. As an example of the power of Grinberg's Theorem, one can use it to show that the 3-valent polyhedron whose graph is shown below has no Hamiltonian circuit. The lack of a Hamiltonian circuit follows from the fact that all of the faces in the graph except one have a number of sides which leaves a remainder of 2 when divided by 3. Put differently, all the faces have a number of sides congruent to two mod 3, except for one face, the 9-gon.\n\nAlthough Tutte's 46-vertex graph contains sets of 3 edges which will disconnect the graph into two pieces, each of which contains a circuit, for the graph below, one has to cut 5 edges to disconnect the graph into two pieces, each of which contains a circuit.\n\n## Eberhard Type Theorems\n\nLook again at the equation that a 3-valent plane graph must satisfy: $$\\sum_{k \\geq 3} (6-k)\\,p_k = 12,$$ where $p_k$ denotes the number of faces with k sides. Given a solution of this equation in non-negative integers, does there exist a convex 3-dimensional polyhedron P such that the number of sides of the faces of P are the given ones? However, because there is no restriction on the value of $p_6$ for a plane graph, the following question arises: for a solution in non-negative integers $p_k$ ($k \\neq 6$), is there some value of $p_6$ for which the resulting face numbers can be realized by a 3-valent convex 3-dimensional polyhedron? It was this problem that a blind 19th century geometer, Victor Eberhard, raised and thought he had solved in his book Zur Morphologie der Polyeder. Although a proof of Eberhard's \"Theorem\" was given by Eberhard, it does not meet modern standards of rigor. However, the proof and some extensions and generalizations of the original proof and theorem are somewhat technical. The Euler relation for plane 4-valent graphs is: $$\\sum_{k \\geq 3} (4-k)\\,p_k = 8.$$ Then I will show a slightly simpler proof with easier-to-realize drawings. Rather than show a general proof, I will show an example which indicates how to proceed in the general case. First, note that the 4-valent Euler relation tells us that every 4-valent 3-polytopal graph must have at least 8 triangular faces. The block is constructed by placing k-4 dots along the left hand side and bottom of a square. Note: the dots are joined in the order of the top dot on the left to the left dot on the bottom.\n\nNext, one joins the next dot down on the left to the next dot to the right on the bottom, etc. 
Note that this results in a group of k-4 triangles hugging a k-sided region in the square.\n\nThe internal vertices in the block are 4-valent, and the other faces within the block are all 4-gons, which can be used as generously as we want.\n\nOne now takes the individual blocks, one for each k-gon, and lays them out along the anti-diagonal of an array. For blocks with a 5-gon and 6-gon we get the diagram shown below: Note that in doing so only 4-gon regions are added, but we are allowed as many of these as we would like. Notice also that all the interior vertices above are 4-valent and that the numbers of vertices along the left and bottom are equal. In 1961, Claude Berge formulated another conjecture about graph coloring, the strong perfect graph conjecture, originally motivated by an information-theoretic concept called the zero-error capacity of a graph introduced by Shannon." ]
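As a small worked check of Grinberg's relation for the circuit of Figure 2 (face lists exactly as given in the text above), one can compute the sum $\sum_k (k-2)(p_k' - p_k'')$ directly; a sketch in Python:

```python
# Grinberg's relation: sum over k of (k-2) * (p_k' - p_k'') must be 0
# for any Hamiltonian circuit in a plane graph.
interior = [3, 4, 6, 7]        # k-gons inside the blue circuit (Figure 2)
exterior = [3, 4, 4, 5, 6]     # k-gons outside it, including the outer face
total = sum(k - 2 for k in interior) - sum(k - 2 for k in exterior)
print(total)                   # 0: both sides contribute 12, as required
```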
[ null, "http://courses.ischool.berkeley.edu/i290-14/s05/lecture-26/xml-graph-2.png", null, "http://vod.mathnet.or.kr/real/2011/04/XudingZhu(0422).jpg", null, "https://docs.microsoft.com/en-us/azure/cosmos-db/media/graph-introduction/sample-graph.png", null, "http://s3.studylib.net/store/data/008470305_1-e04f647bf679eea0c3a1017d1d83ce15.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95851874,"math_prob":0.9669265,"size":8680,"snap":"2019-35-2019-39","text_gpt3_token_len":1945,"char_repetition_ratio":0.12736285,"word_repetition_ratio":0.024456521,"special_character_ratio":0.2032258,"punctuation_ratio":0.08198684,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9951927,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-23T17:50:40Z\",\"WARC-Record-ID\":\"<urn:uuid:258a5081-20da-4407-94a9-ee24c06cda8a>\",\"Content-Length\":\"28849\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6971d028-4cee-4985-b514-1825f6430017>\",\"WARC-Concurrent-To\":\"<urn:uuid:84acb13d-d3ce-4c5e-85d6-cb4666fb7a4e>\",\"WARC-IP-Address\":\"104.24.114.113\",\"WARC-Target-URI\":\"https://webob.info/and-relationship/edges-and-vertices-relationship-problems.php\",\"WARC-Payload-Digest\":\"sha1:3VC7M7SXPNOQSFP2DMOQT6XAIEN6473J\",\"WARC-Block-Digest\":\"sha1:XL4J752O6PV223KT325T6JVBBHYS5LU5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514577478.95_warc_CC-MAIN-20190923172009-20190923194009-00291.warc.gz\"}"}
https://media4math.com/classroom/browse-modules/what-function-notation/preview
[ "", null, "\\$6.99\n\n## What Is Function Notation?\n\nCheetahs can accelerate up to 75 mph and can easily outpace a gazelle. But gazelles have adapted to keep cheetahs at bay long enough to tire them out. We can analyze this phenomenon mathematically through the use of some basic concepts involving functions.\n\nIn this highly engaging module students learn about functions, domain, range, and mathematical modeling. They will look at the following types of functions:\n\n• Speed vs. distance\n• Distance vs. time\n• Displacement vs. time\n\nThese three functions are analyzed using function notation, and the domains and ranges are clearly defined. Students explore a mathematical model that shows whether a cheetah will catch the gazelle or if the gazelle escapes.\n\nThis module also uses the Desmos graphing calculator extensively.\n\n#### Math Concepts\n\n• Functions\n• Graphing Numbers\n• Problem Solving\n\n#### Learning Objectives\n\n• Defining a function using function notation\n• Developing a mathematical model using function notation\n• Identify the domain and range of a function\n\n#### Prerequisite Skills\n\n• Understands the basics of linear functions\nCommon Core Standards CCSS.MATH.CONTENT.8.F.B.4, CCSS.MATH.CONTENT.8.F.B.5 20 mins 8th - 10th Grade\n\n#### Lesson Preview", null, "", null, "", null, "", null, "" ]
[ null, "https://media4math.com/sites/default/files/BrowseModulesThumbnail--WhatIsFunctionNotation-01_0.png", null, "https://media4math.com/sites/default/files/module-preview-images/Screen%20Shot%202018-11-25%20at%2010.55.41%20PM.png", null, "https://media4math.com/sites/default/files/module-preview-images/Screen%20Shot%202018-11-25%20at%2010.56.15%20PM.png", null, "https://media4math.com/sites/default/files/module-preview-images/Screen%20Shot%202018-11-25%20at%2010.56.47%20PM.png", null, "https://media4math.com/sites/default/files/module-preview-images/Screen%20Shot%202018-11-25%20at%2010.57.06%20PM.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7703546,"math_prob":0.8415876,"size":1228,"snap":"2020-34-2020-40","text_gpt3_token_len":273,"char_repetition_ratio":0.13316993,"word_repetition_ratio":0.0,"special_character_ratio":0.19706841,"punctuation_ratio":0.13425925,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9946614,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,3,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-09T05:17:00Z\",\"WARC-Record-ID\":\"<urn:uuid:6d48d2d4-ad13-48b5-bd67-9c0d67af1661>\",\"Content-Length\":\"28373\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4651164b-4433-485e-8df3-ec44bcee3089>\",\"WARC-Concurrent-To\":\"<urn:uuid:ce4f7815-9353-41a7-8afa-50d2f1d2d1c5>\",\"WARC-IP-Address\":\"132.148.156.202\",\"WARC-Target-URI\":\"https://media4math.com/classroom/browse-modules/what-function-notation/preview\",\"WARC-Payload-Digest\":\"sha1:64OTVK6L6VXRAV3T5GVEYHOHBF2IR4PR\",\"WARC-Block-Digest\":\"sha1:3ZCFRS3ZMKOQJONT2GM4G6AR2NEN4M2C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738425.43_warc_CC-MAIN-20200809043422-20200809073422-00458.warc.gz\"}"}
https://ptreview.sublinear.info/?p=1500
[ "# News for March 2021\n\nA somewhat quieter month by recent standards. Three Two papers: graph property testing and quantum distribution testing. (Ed: The distribution testing paper was a revision of a paper we already covered in Sept 2020.)\n\nRobust Self-Ordering versus Local Self-Ordering by Oded Goldreich (ECCC). In Nov 2020, we covered a paper that uses a tool called self-ordered graphs, that transferred bit string function lower bounds to graph property testing. Consider a labeled graph. A graph is self-ordered if its automorphism group only contains the identity element (it has no non-trivial isomorphisms). A graph is robustly self-ordered, if every permutation of the vertices leads to a (labeled) graph that is sufficiently “far” according to edit distance. Given a self-ordered graph $$G$$, a local self-ordering procedure is the following. Given access to a copy $$G’$$ of $$G$$ and a vertex $$v \\in V(G’)$$, this procedure determines the (unique) vertex in $$V(G)$$ that corresponds to $$v$$ with sublinear queries to $$G$$. In other words, it can locally “label” the graph. Intuitively, one would think that more robustly self-ordered graphs will be easier to locally label. This paper studies the relation between robust and local self-ordering. Curiously, this paper refutes the above intuition for bounded-degree graphs, and (weakly) confirms it for dense graphs. Roughly speaking, there are bounded degree graphs that are highly robustly self-ordered, for which any local self-ordering procedure requires $$\\omega(\\sqrt{n})$$ queries. Moreover, there are bounded degree graphs with $$O(\\log n)$$-query local self-ordering procedures, yet are not robustly self-ordered even for weak parameters. For dense graphs, the existence of fast non-adaptive local self-ordering procedures implies robust self-ordering.\n\nTesting identity of collections of quantum states: sample complexity analysis by Marco Fanizza, Raffaele Salvia, and Vittorio Giovannetti (arXiv). This paper takes identity testing to the quantum setting. One should think of a $$d$$-dimensional quantum state as a $$d \\times d$$ density matrix (with some special properties). To learn the state entirely up to error $$\\varepsilon$$ would require $$O(\\varepsilon^{-2} d^2)$$ samples/measurements. A recent result of Badescu-O’Donnell-Wright proves that identity testing to a known state can be done significantly faster using $$O(\\varepsilon^{-2} d)$$ measurements. This paper takes this result a step further by consider a set of $$N$$ quantum states. A “sample” is like a classical sample, where one gets a sample from a distribution of quantum states. The YES (“uniform”) case is when all the states are identical. The NO (“far from uniform”) case is when they are “far” from being the same state. This paper proves that $$O(\\varepsilon^{-2}\\sqrt{N}d)$$ samples suffices for distinguishing these cases." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92179334,"math_prob":0.99121416,"size":2859,"snap":"2021-21-2021-25","text_gpt3_token_len":658,"char_repetition_ratio":0.12994745,"word_repetition_ratio":0.004819277,"special_character_ratio":0.2238545,"punctuation_ratio":0.09306931,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99792665,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-12T14:38:20Z\",\"WARC-Record-ID\":\"<urn:uuid:7b1c9a16-0fe9-455e-9c87-0c92eb226233>\",\"Content-Length\":\"31921\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ebe41475-b757-4861-ab2f-8bf4ed9cacad>\",\"WARC-Concurrent-To\":\"<urn:uuid:e1faf894-c55b-43f8-8a84-f9fc03762c85>\",\"WARC-IP-Address\":\"78.159.99.205\",\"WARC-Target-URI\":\"https://ptreview.sublinear.info/?p=1500\",\"WARC-Payload-Digest\":\"sha1:OX7EWNSA6DKZA74D36TYOEKFJKSL324N\",\"WARC-Block-Digest\":\"sha1:TNCPVUEQ43UZ33TXBCKHB6AIHQSXKECX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487584018.1_warc_CC-MAIN-20210612132637-20210612162637-00122.warc.gz\"}"}
https://chemistry.stackexchange.com/questions/75086/does-it-matter-what-electrolyte-we-use-for-a-galvanic-cell
[ "# Does it matter what electrolyte we use for a Galvanic Cell?\n\nA typical Galvanic Cell, say a Daniell cell, consitsts of two separate beaker, one containing Zn rod dipped inside aq $\\ce{ZnSO4}$ and the other beaker containing Cu rod dipped inside $\\ce{CuSO4}$. Does it matter what electrolyte we dip the metal in?\n\nIf we change the electrolyte of the beaker containing Zn rod to say $\\ce{AgNO3}$. I would expect a displacement reaction to take place where a redox reaction occur between the Zinc rod and the Ag+ ions. No electrons will flow through the external wire. What if I replace the electrolyte with a solution of a metal that is less reactive then the metal dipped in? eg. $\\ce{NaCl}$, $\\ce{MgCl2}$, $\\ce{KCl}$(aq). Would my cell still work?\n\nAlso, I don't understand the need of a porous pot OR salt bridge in a galvanic cell. Why would it matter if the solutions are mixed together? Why not just have one single beaker, containing an electrolyte $\\ce{KCl}$(aq), and have the two different metals dipped into it. In fact, if the solutions are mixed together, there will not even be unbalance charge accumulation, and the porous pot/salt bridge would not be necessary?\n\n• Hello and welcome to StackExchange! Take a look at the help center to learn how to use this site better. Cheerio! – Pritt says Reinstate Monica May 25 '17 at 4:38\n\nThe electrolyte is very important for the functioning of the galvanic cell. The emf that the cell generates depends on the type and the concentration of the electrolyte used. In the Daniell cell, the reaction would be:\n\n$$\\ce{Zn + CuSO4 -> ZnSO4 + Cu}$$\n\nAs per the Nernst's equation, you get the emf of the cell to be:\n\n$$\\text{E}=\\text{E}^\\text{o}-\\frac{2.303\\text{RT}}{\\text{nF}}\\log_{10}{\\left(\\frac{\\ce{[Zn^{2+}]}}{\\ce{[Cu^{2+}]}}\\right)}$$\n\nClearly, the emf of the cell depends on the concentrations of the cell. Dipping the rods in $\\ce{NaCl}$ would not make the cell work.\n\nAnd for why the salt bridge is needed, think about why you would build a cell in the first place. To generate power in a usable way right? If you let the solutions mix, they would react directly, and the energy would be released as heat instead, and not as electrical power. You want the ions to move towards the electrodes and get discharged there.\n\n• I don't want to add another answer so I'm commenting here. A salt bridge also: (1) completes the internal circuit of the galvanic cell. (2) establishes charge neutralization of the 2-electrode solutions by supplying them with appropriate ions. (3) minimizes liquid-liquid junction potential. – Berry Holmes May 25 '17 at 7:10\n• Also see: Ivan Neretin's comment to my question. – Berry Holmes May 25 '17 at 7:11" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90827125,"math_prob":0.9563196,"size":1111,"snap":"2020-45-2020-50","text_gpt3_token_len":280,"char_repetition_ratio":0.11924119,"word_repetition_ratio":0.010928961,"special_character_ratio":0.23042305,"punctuation_ratio":0.108597286,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9724113,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-23T19:58:12Z\",\"WARC-Record-ID\":\"<urn:uuid:90fc5ca8-7fa9-4ab2-ad56-1142b27c07af>\",\"Content-Length\":\"150469\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:06412e23-856d-4707-9a5d-8a5517659dc0>\",\"WARC-Concurrent-To\":\"<urn:uuid:a395e98c-4493-4a0b-8bec-bdc10fb7bc68>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://chemistry.stackexchange.com/questions/75086/does-it-matter-what-electrolyte-we-use-for-a-galvanic-cell\",\"WARC-Payload-Digest\":\"sha1:UCF4N4X3WH575DRO6DHUVXBLZZXTOLD4\",\"WARC-Block-Digest\":\"sha1:6IAP2L2JVDQQPR6FQYISVCI2KL32TKO4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141164142.1_warc_CC-MAIN-20201123182720-20201123212720-00361.warc.gz\"}"}
https://www.epicedca.online/1st-khan-academy
[ "top of page\n\nStandard 1.G.1\n\nCommon Core Area - Geometry\n\nSkill - Name shapes 3 - Practice identifying circles, triangles, squares, rectangles, rhombuses, trapezoids, and hexagons.\n\nStandard 1.G.2\n\nCommon Core Area - GeometrySkill - Coming Soon\n\nStandard 1.G.3\n\nCommon Core Area - Geometry\n\nSkill - Halves and fourths - Practice dividing shapes into 2 or 4 equal sections.\n\nStandard 1.MD.1\n\nCommon Core Area - Measurement and Data\n\nSkill - Indirect measurement - Compare the lengths of 2 objects indirectly by using a third object.\n\nStandard 1.MD.1\n\nCommon Core Area - Measurement and Data\n\nSkill - Order by length - Practice ordering 3 objects by length.\n\nStandard 1.MD.2\n\nCommon Core Area - Measurement and Data\n\nSkill - Measure lengths 1 - Measure objects with same-size length units without gaps or overlaps.\n\nStandard 1.MD.3\n\nCommon Core Area - Measurement and Data\n\nSkill - Tell time to hour or half hour - Practice telling time on analog clocks to the hour or half hour.\n\nStandard 1.MD.4\n\nCommon Core Area - Measurement and Data\n\nSkill - Solve problems with bar graphs 1 - Read and interpret bar graphs.\n\nStandard 1.NBT.1\n\nCommon Core Area - Number and Operations in Base Ten\n\nSkill - Numbers to 120 - Practice finding missing numbers in a list of numbers between 0 and 120.\n\nStandard 1.NBT.2\n\nCommon Core Area - Number and Operations in Base Ten\n\nSkill - Groups of ten objects - Practice grouping objects by tens.\n\nStandard 1.NBT.2\n\nCommon Core Area - Number and Operations in Base Ten\n\nSkill - Groups of ten objects - Practice grouping objects by tens.\n\nStandard 1.NBT.2\n\nCommon Core Area - Number and Operations in Base Ten\n\nSkill - 2-digit place value challenge - Practice breaking numbers apart into tens and ones.\n\nStandard 1.NBT.2\n\nCommon Core Area - Number and Operations in Base Ten\n\nSkill - 2-digit place value challenge - Practice breaking numbers apart into tens and ones.\n\nStandard 1.NBT.2\n\nCommon Core Area - Number and Operations in Base Ten\n\nSkill - 2-digit place value challenge - Practice breaking numbers apart into tens and ones.\n\nStandard 1.NBT.2\n\nCommon Core Area - Number and Operations in Base Ten\n\nSkill - 2-digit place value challenge - Practice breaking numbers apart into tens and ones.\n\nStandard 1.NBT.2\n\nCommon Core Area - Number and Operations in Base Ten\n\nSkill - Groups of ten objects - Practice grouping objects by tens.\n\nStandard 1.NBT.2\n\nCommon Core Area - Number and Operations in Base Ten\n\nSkill - Groups of ten objects - Practice grouping objects by tens.\n\nStandard 1.NBT.3\n\nCommon Core Area - Number and Operations in Base Ten\n\nSkill - Compare 2-digit numbers - Practice comparing numbers (within 100) using the symbols <, >, and =.\n\nStandard 1.NBT.3\n\nCommon Core Area - Number and Operations in Base Ten\n\nSkill - Compare 2-digit numbers 2 - Practice more challenging problems comparing numbers within 100.\n\nStandard 1.NBT.4\n\nCommon Core Area - Number and Operations in Base Ten\n\nSkill - Add 2-digit numbers (no regrouping) - Practice solving problems like 24 + 45.\n\nStandard 1.NBT.4\n\nCommon Core Area - Number and Operations in Base Ten\n\nSkill - Add 1s or 10s (no regrouping) - Practice solving problems like 34+5 and 34+50.\n\nStandard 1.NBT.4\n\nCommon Core Area - Number and Operations in Base Ten\n\nSkill - Break apart 2-digit addition problems - Practice breaking apart problems like 23+45 into problems like 20+40+3+5.\n\nStandard 1.NBT.4\n\nCommon Core Area - Number and 
Operations in Base Ten\n\nSkill - Regroup when adding 1-digit numbers - Practice adding numbers like 45+8.\n\nStandard 1.NBT.4\n\nCommon Core Area - Number and Operations in Base Ten\n\nSkill - Add 1 or 10 - Practice solving problems like 34+1 and 34+10.\n\nStandard 1.NBT.5\n\nCommon Core Area - Number and Operations in Base Ten\n\nSkill - Add 1 or 10 - Practice solving problems like 34+1 and 34+10.\n\nStandard 1.NBT.6\n\nCommon Core Area - Number and Operations in Base Ten\n\nSkill - Coming Soon\n\nStandard 1.OA.1\n\nCommon Core Area - Operations and Algebraic Thinking\n\nSkill - Word problems with \"more\" and \"fewer\" 2 - Practice solving more word problems by finding how many more (or fewer) objects there are. Numbers used are 20 or less.\n\nStandard 1.OA.1\n\nCommon Core Area - Operations and Algebraic Thinking\n\nSkill - Word problems with \"more\" and \"fewer\" 1 - Practice solving word problems by finding how many more (or fewer) objects there are. Numbers used are 20 or less.\n\nStandard 1.OA.1\n\nCommon Core Area - Operations and Algebraic Thinking\n\nSkill - Addition and subtraction word problems 1 - Practice adding and subtracting to solve word problems. Numbers used are 20 or less.\n\nStandard 1.OA.1\n\nCommon Core Area - Operations and Algebraic Thinking\n\nSkill - Addition and subtraction word problems 2 - Practice solving more challenging word problems with addition and subtraction. Numbers used are 20 or less.\n\nStandard 1.OA.1\n\nCommon Core Area - Operations and Algebraic Thinking\n\nSkill - Word problems with \"more\" and \"fewer\" - Practice solving word problems by finding how many more (or fewer) objects there are. Each problem shows a diagram to help you.\n\nStandard 1.OA.2\n\nCommon Core Area - Operations and Algebraic Thinking\n\nSkill - Add 3 numbers - Practice adding 3 numbers. All numbers in these problems are 20 or less.\n\nStandard 1.OA.3\n\nCommon Core Area - Operations and Algebraic Thinking\n\nSkill - Coming Soon\n\nStandard 1.OA.4\n\nCommon Core Area - Operations and Algebraic Thinking\n\nSkill - Relate addition and subtraction - Practice seeing how addition and subtraction are related.\n\nStandard 1.OA.5\n\nCommon Core Area - Operations and Algebraic Thinking\n\nSkill - Coming Soon\n\nStandard 1.OA.6\n\nCommon Core Area - Operations and Algebraic Thinking\n\nSkill - Subtract within 20 - Practice subtracting. All numbers in these problems are 20 or less.\n\nStandard 1.OA.6\n\nCommon Core Area - Operations and Algebraic Thinking\n\nSkill - Add within 20 - Practice adding. All numbers in these problems are 20 or less.\n\nStandard 1.OA.7\n\nCommon Core Area - Operations and Algebraic Thinking\n\nSkill - Equal sign - Practice telling which equation is true.\n\nStandard 1.OA.8\n\nCommon Core Area - Operations and Algebraic Thinking\n\nSkill - Find missing number (add and subtract within 20) - Learn how to solve problems like \"___ - 7 = 18\" where you don't know one of the values in an addition or subtraction equation." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.74712706,"math_prob":0.8827517,"size":8403,"snap":"2023-40-2023-50","text_gpt3_token_len":2187,"char_repetition_ratio":0.2075247,"word_repetition_ratio":0.5195277,"special_character_ratio":0.24015233,"punctuation_ratio":0.14558914,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95543754,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-29T07:19:56Z\",\"WARC-Record-ID\":\"<urn:uuid:6005e361-bb17-4d50-83e4-2ab53f23801f>\",\"Content-Length\":\"460445\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e18eab6a-f541-4ab9-9814-24dd8b3d8d83>\",\"WARC-Concurrent-To\":\"<urn:uuid:57acc2c4-b374-422e-a482-bf6519974cd2>\",\"WARC-IP-Address\":\"34.149.87.45\",\"WARC-Target-URI\":\"https://www.epicedca.online/1st-khan-academy\",\"WARC-Payload-Digest\":\"sha1:2JLY4KPAL3K2LSEI5XEVFOBTHZWTGB56\",\"WARC-Block-Digest\":\"sha1:ZIUC3VTY53YM7MWUSZ752FOYYDQ23B6L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510498.88_warc_CC-MAIN-20230929054611-20230929084611-00667.warc.gz\"}"}
https://www.sheetaki.com/highlight-the-smallest-n-values-in-each-row-in-google-sheets/
[ "# How to Highlight the Smallest N Values in Each Row in Google Sheets\n\nTo highlight the smallest n values in each row in Google Sheets is useful to make the cells with the smallest n values in each row easier to identify.\n\nGoogle Sheets has a built-in feature for formatting cells in your sheets based on whether they meet certain criteria.\n\nThis feature is called conditional formatting and is useful not only for formatting cells based on whether they meet certain conditions but for making your sheet visually more appealing, as well. Once you apply conditional formatting across your sheet, you will be able to gain some insight into the data just by looking at it briefly.\n\nLet’s say you sell books and have a sheet that contains information about the number of books sold each day for several weeks 📚📙\n\nAnd now we want to know which are the days when you had the least sold books each week. We want to highlight these cells by giving them a special background colour so we can easily see these days just by glancing at the sheet.\n\nSo how do we do that?\n\nSimple. We should apply some simple dynamic conditional formatting rules and set a style of our choosing.\n\nLet’s first learn how to use conditional formatting in Google Sheets if you don’t know how to do it already.\n\n## How to Use Conditional Formatting in Google Sheets\n\nConditional formatting in Google Sheets is a useful visual tool you can use to change the aspect of cells, rows, or columns (their background colour or the style of the text), based on the values in them and rules you set.\n\nTo access conditional formatting, go to the menu and select Format > Conditional Formatting. The ‘Conditional format rules’ toolbar will open on the right.", null, "Use this toolbar to set the rules according to which you would want to format your sheet. Every rule you set is an if-then statement, meaning it consists of a condition that needs to be evaluated and a corresponding action if the condition is met.\n\nFor example, let’s say that you would want all empty cells to have a red background. The condition would be ‘if the cell is empty’ and the corresponding action would be ‘then the background colour should change to red’. If the rule you set is evaluated as TRUE, the corresponding action will be met and the cell will change its aspect (background colour or text style) according to the style of your choosing.\n\nTo apply conditional formatting, you should first set three basic things:\n\n• Range: where would you want to apply conditional formatting\n• Condition: what is the criteria that the range should meet in order to be formatted\n• Style: how will the cells that meet the condition look once they are formatted\n\nYou can choose from default conditions or you can write your own condition. 
To write your own condition, you should select the ‘Custom formula is’ option in the drop-down list of ‘Format cells if’.\n\nIn this guide, you will learn how to write a custom formula to highlight the smallest n values in each row in Google Sheets.\n\nNow, let’s go straight into real examples where we will deal with actual values so you can better understand how to highlight the smallest values in each row in Google Sheets and learn how to apply conditional formatting across your sheets.\n\n## A Real Example of Using Conditional Formatting to Highlight the Smallest N Values in Each Row in Google Sheets\n\nNow, let’s see how conditional formatting works with real examples and how we can use it to highlight cells in Google Sheets based on our criteria.\n\nSo let’s get back to our books! 📚📙\n\nThe picture below shows the number of books sold each day for several weeks.", null, "We will use conditional formatting to highlight the smallest n values in each row with a light yellow background. This way we will be able to easily identify the days with the lowest numbers of sold books each week.\n\nIf you take a look at the list of basic conditions, you will notice that it does not include a condition that could help us, so we should write a custom formula that will highlight the smallest n values in each row.\n\nYou might have thought that using the SMALL formula would be practical. However, this is not always an ideal solution. Do you know why? The SMALL formula returns the nth smallest element from a data set. That is fine if you are looking only for the single smallest value (or perhaps the two smallest). But what if you wanted to highlight the smallest ten numbers in each row of your sheet? You would need ten conditional formatting rules, each with its own SMALL-based formula.\n\nSince that is not the most practical solution, you will get a formula that will highlight the smallest n values in each row in Google Sheets (both including 0 and excluding 0 in the calculation).", null, "The data is the same in both tables; the only difference is that to the first one we applied the formula including 0 and to the second one the formula excluding 0. The ‘n’ in both formulas is 3.\n\n## The Anatomy of the Formula That Will Highlight the Smallest N Values in Each Row in Google Sheets\n\nThe following custom formula will highlight the smallest n values in each row in Google Sheets, including zeros:\n\n=REGEXMATCH(\nB3&\"\",\n\"^\"&textjoin(\"$|^\",1,\nIFERROR(ARRAYFORMULA(\nSMALL($B3:$H3,sequence(1,3)))))\n&\"$\"\n)\n\nIt might look complicated, but let’s break it down to better understand its syntax and what each term means:\n\n• = the equal sign is how we begin any formula in Google Sheets\n• REGEXMATCH is our function. 
Its syntax is\nREGEXMATCH(text, regular_expression)\n\nwhere text is the value in the first cell of the formatting range. However, we cannot pass ‘B3’ directly as the text argument of REGEXMATCH because it holds a number rather than text, so we coerce it by appending an empty string, which makes it B3&\"\". The regular_expression is the following formula:\n\n\"^\"&textjoin(\"$|^\",1,\nIFERROR(ARRAYFORMULA(\nSMALL($B3:$H3,sequence(1,3)))))\n&\"$\"\n\nLet’s break this one down, too!\n\n• sequence(1,3) is the part of the formula where you can change the ‘n’ value (3 is the ‘n’ value in our example).\n• Instead of writing =SMALL($B3:$H3,1), =SMALL($B3:$H3,2) and =SMALL($B3:$H3,3) to get the smallest 3 values in Google Sheets, we add sequence(1,3) at the end and the ARRAYFORMULA function in front.\n• The dollar sign ‘$’ is used before the column and/or row part of a reference to control how the reference will be updated (the dollar sign causes the corresponding part of the reference to remain unchanged). In our example, the condition will be applied to the whole range and will go row by row, but its column references will remain the same.\n• The TEXTJOIN converts the above SMALL array output to a regular expression to match against each cell in the row $B3:$H3.\n• And what about the IFERROR part of the formula? If a row contains fewer than 3 (fewer than ‘n’) numbers, SMALL returns a #NUM! error for the missing positions, and without IFERROR the rule would fail and skip such rows. With IFERROR included, those errors are suppressed, so rows with fewer than ‘n’ numbers are still formatted correctly.\n\nFinally, we should select the formatting style (in our case it will be a light yellow background for the cells that meet this condition).\n\nSelect ‘Done’ and the conditional formatting will be applied to the chosen range. Now you can easily see the smallest n values in each row in Google Sheets.\n\nIt wasn’t that hard, was it?\n\nYou can try it yourself by making a copy of the spreadsheet using the link attached below:\n\n## How to Highlight the Smallest N Values (Including 0) in Each Row in Google Sheets\n\nLet’s begin setting up your own conditional formatting to highlight the smallest n values in each row in Google Sheets, step by step.\n\n1. First, you should select the range where you would like to apply conditional formatting. For this guide, I will choose the range B3:H12.", null, "2. Then, go to the upper menu and select Format > Conditional Formatting. This will open the ‘Conditional format rules’ toolbar on the right.", null, "3. Now, let’s go to the ‘Conditional format rules’ toolbar. In ‘Apply to range’, you will see the range you selected. Below that, you should set the rule. Click on the drop-down list below ‘Format cells if…’ and choose ‘Custom formula is’.", null, "4. Click on the ‘Value or formula’ field and start writing your formula. The formula should be written for one row, and it will automatically be applied to the whole range. In our example, the formula to highlight the smallest n values in each row is =REGEXMATCH(B3&\"\", \"^\"&textjoin(\"$|^\", 1, IFERROR(ARRAYFORMULA(\nSMALL($B3:$H3,sequence(1,3)))))&\"$\").", null, "5. Now you should select the formatting style you want to apply to the formatted cells. As you can see, you may change the font style and colour or the colour of the background. I will change the background colour to light yellow.", null, "6. Finally, click on the ‘Done’ button below to close the toolbar and apply the conditional formatting rule you have set. 
As a result, the background colour of the cells with the smallest n values in each row will now be light yellow. If you followed the steps correctly, the cells that meet your criteria should now have the style you have chosen.", null, "## How to Highlight the Smallest N Values (Excluding 0) in Each Row in Google Sheets\n\nIf you want to highlight the smallest n values (excluding 0) in each row in Google Sheets, you should use this formula instead:\n\n=REGEXMATCH(\nB3&\"\",\n\"^\"&textjoin(\"$|^\",1,\nIFERROR(ARRAYFORMULA(\nSMALL(FILTER($B3:$H3,$B3:$H3>0),\nsequence(1,3)))))\n&\"$\"\n)\n\nThe only difference is the FILTER($B3:$H3,$B3:$H3>0) part of the formula, which we added to filter out the 0 values.\n\nWhen applying this conditional rule, all the steps are the same. The only difference is the formula you write in the ‘Value or formula’ field.", null, "That’s it, well done! You can now highlight the smallest n values (including and excluding 0) in each row in Google Sheets with conditional formatting. You can use it together with a wide range of other Google Sheets formulas to sort and filter your data according to your needs. 🙂" ]
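To make the mechanics concrete, here is a quick trace with made-up values (this row is not from the article's example sheet): suppose B3:H3 holds 7, 2, 9, 4, 1, 8, 6 and n = 3. Then SMALL($B3:$H3, sequence(1,3)) evaluates to the array {1, 2, 4}; textjoin("$|^", 1, ...) glues that into the text 1$|^2$|^4; adding the outer "^" and "$" yields the regular expression ^1$|^2$|^4$; and REGEXMATCH(B3&"", ...) is TRUE exactly for the cells whose entire contents equal 1, 2, or 4, so those three cells, and only those, receive the light yellow background.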
[ null, "https://www.sheetaki.com/wp-content/uploads/2020/06/Screenshot_1.jpg", null, "https://www.sheetaki.com/wp-content/uploads/2020/06/Screenshot_2.jpg", null, "https://www.sheetaki.com/wp-content/uploads/2020/06/Screenshot_4.jpg", null, "https://www.sheetaki.com/wp-content/uploads/2020/06/Screenshot_5.jpg", null, "https://www.sheetaki.com/wp-content/uploads/2020/06/Screenshot_6.jpg", null, "https://www.sheetaki.com/wp-content/uploads/2020/06/Screenshot_7.jpg", null, "https://www.sheetaki.com/wp-content/uploads/2020/06/Screenshot_8.jpg", null, "https://www.sheetaki.com/wp-content/uploads/2020/06/Screenshot_9.jpg", null, "https://www.sheetaki.com/wp-content/uploads/2020/06/Screenshot_10.jpg", null, "https://www.sheetaki.com/wp-content/uploads/2020/06/Screenshot_11.jpg", null, "https://www.sheetaki.com/wp-content/uploads/2020/05/background.png", null, "https://www.sheetaki.com/wp-content/uploads/2020/12/Copy-of-Template-1-380x220.png", null, "https://www.sheetaki.com/wp-content/uploads/2021/10/days_featured-380x220.png", null, "https://www.sheetaki.com/wp-content/uploads/2020/06/DMIN-Featured-Image-380x220.png", null, "https://www.sheetaki.com/wp-content/uploads/2020/11/Screen-Shot-2020-11-06-at-20.16.23-380x220.png", null, "https://www.sheetaki.com/wp-content/uploads/2020/03/or-function-in-google-sheets-380x220.png", null, "https://www.sheetaki.com/wp-content/uploads/2021/03/Copy-of-Template-2-380x220.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86567974,"math_prob":0.9195041,"size":10136,"snap":"2021-43-2021-49","text_gpt3_token_len":2278,"char_repetition_ratio":0.18407027,"word_repetition_ratio":0.14115646,"special_character_ratio":0.22296764,"punctuation_ratio":0.08472856,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.967351,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,null,null,8,null,6,null,6,null,9,null,5,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-17T12:59:15Z\",\"WARC-Record-ID\":\"<urn:uuid:7a24a5a6-8327-4f5f-b64a-39e965771f9b>\",\"Content-Length\":\"104016\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:632b5861-7b67-42ab-8b50-e12410583adf>\",\"WARC-Concurrent-To\":\"<urn:uuid:f984ace4-3b60-4b42-9814-b6d580aa22db>\",\"WARC-IP-Address\":\"104.21.61.76\",\"WARC-Target-URI\":\"https://www.sheetaki.com/highlight-the-smallest-n-values-in-each-row-in-google-sheets/\",\"WARC-Payload-Digest\":\"sha1:FSZJCZNJQOSSYAFIFVHOBWDKRQNAITH5\",\"WARC-Block-Digest\":\"sha1:5U6RUT224Z3OEPP72ENZL6A2664SKJPN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585177.11_warc_CC-MAIN-20211017113503-20211017143503-00192.warc.gz\"}"}
https://la.mathworks.com/matlabcentral/cody/problems/12-fibonacci-sequence/solutions/352431
[ "Cody\n\n# Problem 12. Fibonacci sequence\n\nSolution 352431\n\nSubmitted on 15 Nov 2013 by Shalini\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\n%% n = 1; f = 1; assert(isequal(fib(n),f))\n\n2   Pass\n%% n = 6; f = 8; assert(isequal(fib(n),f))\n\n3   Pass\n%% n = 10; f = 55; assert(isequal(fib(n),f))\n\n4   Pass\n%% n = 20; f = 6765; assert(isequal(fib(n),f))" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.53021014,"math_prob":0.9986457,"size":433,"snap":"2020-34-2020-40","text_gpt3_token_len":161,"char_repetition_ratio":0.15850815,"word_repetition_ratio":0.0,"special_character_ratio":0.41801387,"punctuation_ratio":0.14130434,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99415576,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-09T00:03:05Z\",\"WARC-Record-ID\":\"<urn:uuid:f7f83588-2bb2-4a64-8f98-37387ad6d748>\",\"Content-Length\":\"73636\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:833990fd-110b-4374-9631-939fc474401f>\",\"WARC-Concurrent-To\":\"<urn:uuid:d27bb9e5-ee3b-4cf2-a4d7-c838cca51f4b>\",\"WARC-IP-Address\":\"23.212.144.59\",\"WARC-Target-URI\":\"https://la.mathworks.com/matlabcentral/cody/problems/12-fibonacci-sequence/solutions/352431\",\"WARC-Payload-Digest\":\"sha1:D6J4YYCGKBOV6BIMX2KEDLXC6KKIOAKM\",\"WARC-Block-Digest\":\"sha1:3UOPU3S76Q42X4CG4JZJVXJM3BTZ7DIA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738366.27_warc_CC-MAIN-20200808224308-20200809014308-00385.warc.gz\"}"}
https://conventlearning.com/playing-with-numbers/
[ "", null, "# Playing with numbers\n\n###### Question 1. Write all the factors of the following numbers:\n\n(a) 24, (b) 15, (c) 21, (d) 27, (e) 12, (f) 20 , (g) 18, (h) 23, (i) 36\n\n(a) 24 = 1 x 24 = 2 x 12 = 3 x 8 = 4 x 6 = 6 x 4\n\nFactors of 24 = 1, 2, 3, 4, 6, 12, 24\n\n(b) 15 = 1 x 15 = 3 x 5 = 5 x 3\n\nFactors of 15 = 1, 3, 5, 15\n\n(c)21 = 1 x 21 = 3 x 7 = 7 x 3\n\nFactors of 21 = 1, 3, 7, 21\n\n(d)27 = 1 x 27 = 3 x 9 = 9 x 3\n\nFactors of 27 = 1, 3, 9, 27\n\n(e)12 = 1 x 12 = 2 x 6 = 3 x 4 = 4 x 3\n\nFactors of 12 = 1, 2, 3, 4, 6, 12\n\n(f)20 = 1 x 20 = 2 x 10 = 4 x 5 = 5 x 4\n\nFactors of 20 = 1, 2, 4, 5, 10, 20\n\n(g)18 = 1 x 18 = 2 x 9 = 3 x 6\n\nFactors of 18 = 1, 2, 3, 6, 9, 18\n\n(h)23 = 1 x 23\n\nFactors of 23 = 1, 23\n\n(i)36 = 1 x 36 = 2 x 18 = 3 x 12 = 4 x 9 = 6 x 6\n\nFactors of 36 = 1, 2, 3, 4, 6, 9, 12, 18, 36\n\nNCERT Solutions for Class 6 Maths Exercise 3.1\n\n###### Question 2. Write first five multiples of:\n\n(a) 5, (b) 8, (c) 9\n\n(a) 5 x 1 = 5, 5 x 2 = 10, 5 x 3 = 15, 5 x 4 = 20, 5 x 5 = 25\n\nFirst five multiples of 5 are 5, 10, 15, 20, 25.\n\n(b) 8 x 1 = 8, 8 x 2 = 16, 8 x 3 = 24, 8 x 4 = 32, 8 x 5 = 40\n\nFirst five multiples of 8 are 8, 16, 24, 32, 40.\n\n(c) 9 x 1 = 9, 9 x 2 = 18, 9 x 3 = ,27, 9 x 4 = 36, 9 x 5 = 45\n\nFirst five multiples of 9 are 9, 18, 27, 36, 45.\n\n###### Question 3.Match the items in column 1 with the items in column 2:\n Column 1 Column 2 (i)35 (a) Multiple of 8 (ii)15 (b) Multiple of 7 (iii)16 (c) Multiple of 70 (iv)20 (d) Factor of 30 (v)20 (e) Factor of 50\n\nAnswer: (i)  (b), (ii)  (d), (iii)  (a), (iv)  (f), (v)  (e)\n\nNCERT Solutions for Class 6 Maths Exercise 3.1\n\n###### Question 4.Find all the multiples of 9 up to 100.\n\nAnswer: Multiples of 9 up to 100 are:\n\n9, 18, 27, 36, 45, 54, 63, 72, 81, 90, 99\n\n###### Question1. What is the sum of any two:\n\n(a)Odd numbe\n\n(b)Even numbe\n\n(a) The sum of any two odd numbers is an even number.\n\nExample: 1 + 3 = 4, 3 + 5 = 8\n\n(b) The sum of any two even numbers is an even number.\n\nExample: 2 + 4 = 6, 6 + 8 = 14\n\n###### Question 2. State whether the following statements are true or false:\n\n(a)The sum of three odd numbers is even.\n\n(b)The sum of two odd numbers and one even number is even.\n\n(c)The product of three odd numbers is odd.\n\n(d)If an even number is divided by 2, the quotient is always odd.\n\n(e)All prime numbers are odd.\n\n(f)Prime numbers do not have any facto\n\n(g)Sum of two prime numbers is always even.\n\n(h)2 is the only even prime number.\n\n(i)All even numbers are composite numbe\n\n(j)The product of two even numbers is always even.\n\n(a) False, (b) True, (c) True, (d) False, (e) False, (f) False,(g) False, (h) True, (i) False, (j) True\n\nNCERT Solutions for Class 6 Maths Exercise 3.2\n\n###### Question 3.The numbers 13 and 31 are prime numbe Both these numbers have same digits 1 and 3. 
Find such pairs of prime numbers up to 100.\n\nAnswer: 17 and 71; 37 and 73; 79 and 97\n\n###### Question 4. Write down separately the prime and composite numbers less than 20.\n\nPrime numbers: 2, 3, 5, 7, 11, 13, 17, 19\n\nComposite numbers: 4, 6, 8, 9, 10, 12, 14, 15, 16, 18\n\nNCERT Solutions for Class 6 Maths Exercise 3.2\n\n###### Question 5. What is the greatest prime number between 1 and 10?\n\nAnswer: The greatest prime number between 1 and 10 is ‘7’.\n\n###### Question 6. Express the following as the sum of two odd numbers:\n\n(a) 44\n\n(b) 36\n\n(c) 24\n\n(d) 18\n\nAnswer: (a) 3 + 41 = 44, (b) 5 + 31 = 36, (c) 7 + 17 = 24, (d) 7 + 11 = 18\n\n###### Question 7. Give three pairs of prime numbers whose difference is 2.\n\n[Remark: Two prime numbers whose difference is 2 are called twin primes.]\n\n3 and 5;\n\n5 and 7;\n\n11 and 13\n\nNCERT Solutions for Class 6 Maths Exercise 3.2\n\n###### Question 8. Which of the following numbers are prime:\n\n(a) 23\n\n(b) 51\n\n(c) 37\n\n(d) 26\n\nAnswer: (a) 23 and (c) 37 are prime numbers.\n\n###### Question 9. Write seven consecutive composite numbers less than 100 so that there is no prime number between them.\n\nAnswer: 90, 91, 92, 93, 94, 95, 96\n\nNCERT Solutions for Class 6 Maths Exercise 3.2\n\n###### Question 10. Express each of the following numbers as the sum of three odd primes:\n\n(a) 21\n\n(b) 31\n\n(c) 53\n\n(d) 61\n\nAnswer: (a) 21 = 3 + 7 + 11, (b) 31 = 3 + 11 + 17, (c) 53 = 13 + 17 + 23, (d) 61 = 13 + 19 + 29\n\n###### Question 11. Write five pairs of prime numbers less than 20 whose sum is divisible by 5.\n\n[Hint: 3 + 7 = 10]\n\nAnswer: 2 + 3 = 5; 7 + 13 = 20; 3 + 17 = 20; 2 + 13 = 15; 5 + 5 = 10\n\nNCERT Solutions for Class 6 Maths Exercise 3.2\n\n###### Question 12. Fill in the blanks:\n\n(a) A number which has only two factors is called a _______________.\n\n(b) A number which has more than two factors is called a _______________.\n\n(c) 1 is neither _______________ nor _______________.\n\n(d) The smallest prime number is _______________.\n\n(e) The smallest composite number is _______________.\n\n(f) The smallest even number is _______________.\n\nAnswer: (a) Prime number, (b) Composite number, (c) Prime number and composite number, (d) 2, (e) 4, (f) 2\n\n###### Question 1. Using divisibility tests, determine which of the following numbers are divisible by 2; by 3; by 4; by 5; by 6; by 8; by 9; by 10; by 11. 
(say yes or no)\n\nThe numbers to test are 128, 990, 1586, 275, 6686, 639210, 429714, 2856, 3060 and 406839. In the textbook's table, the row for 128 is filled in as an example: divisible by 2 - Yes; by 3 - No; by 4 - Yes; by 5 - No; by 6 - No; by 8 - Yes; by 9 - No; by 10 - No; by 11 - No.\n\nAnswer:\n\n Number | 2 | 3 | 4 | 5 | 6 | 8 | 9 | 10 | 11\n 128 | Yes | No | Yes | No | No | Yes | No | No | No\n 990 | Yes | Yes | No | Yes | Yes | No | Yes | Yes | Yes\n 1586 | Yes | No | No | No | No | No | No | No | No\n 275 | No | No | No | Yes | No | No | No | No | Yes\n 6686 | Yes | No | No | No | No | No | No | No | No\n 639210 | Yes | Yes | No | Yes | Yes | No | No | Yes | Yes\n 429714 | Yes | Yes | No | No | Yes | No | Yes | No | No\n 2856 | Yes | Yes | Yes | No | Yes | Yes | No | No | No\n 3060 | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | No\n 406839 | No | Yes | No | No | No | No | No | No | No\n\nNCERT Solutions for Class 6 Maths Exercise 3.3\n\n###### Question 2. Using divisibility tests, determine which of the following numbers are divisible by 4; by 8:\n\n(a) 572, (b) 726352, (c) 5500, (d) 6000, (e) 12159, (f) 14560, (g) 21084, (h) 31795072, (i) 1700, (j) 2150\n\n(a) 572  Divisible by 4 as its last two digits are divisible by 4.\n\nNot divisible by 8 as its last three digits are not divisible by 8.\n\n(b) 726352  Divisible by 4 as its last two digits are divisible by 4.\n\nDivisible by 8 as its last three digits are divisible by 8.\n\n(c) 5500  Divisible by 4 as its last two digits are divisible by 4.\n\nNot divisible by 8 as its last three digits are not divisible by 8.\n\n(d) 6000  Divisible by 4 as its last two digits are 0.\n\nDivisible by 8 as its last three digits are 0.\n\n(e) 12159  Not divisible by 4 and 8 as it is an odd number.\n\n(f) 14560  Divisible by 4 as its last two digits are divisible by 4.\n\nDivisible by 8 as its last three digits are divisible by 8.\n\n(g) 21084  Divisible by 4 as its last two digits are divisible by 4.\n\nNot divisible by 8 as its last three digits are not divisible by 8.\n\n(h) 31795072  Divisible by 4 as its last two digits are divisible by 4.\n\nDivisible by 8 as its last three digits are divisible by 8.\n\n(i) 1700  Divisible by 4 as its last two digits are 0.\n\nNot divisible by 8 as its last three digits are not divisible by 8.\n\n(j) 2150  Not divisible by 4 as its last two digits are not divisible by 4.\n\nNot divisible by 8 as its last three digits are not divisible by 8.\n\n###### Question 3. Using divisibility tests, determine which of the following numbers are divisible by 6:\n\n(a) 297144, (b) 1258, (c) 4335, (d) 61233, (e) 901352, (f) 438750, (g) 1790184, (h) 12583, (i) 639210, (j) 17852\n\n(a) 297144  Divisible by 2 as its units place is an even number.\n\nDivisible by 3 as the sum of its digits (= 27) is divisible by 3.\n\nSince the number is divisible by both 2 and 3, therefore, it is also divisible by 6.\n\n(b) 1258  Divisible by 2 as its units place is an even number.\n\nNot divisible by 3 as the sum of its digits (= 16) is not divisible by 3.\n\nSince the number is not divisible by both 2 and 3, therefore, it is not divisible by 6.\n\n(c) 4335  Not divisible by 2 as its units place is not an even number.\n\nDivisible by 3 as the sum of its digits (= 15) is divisible by 3.\n\nSince the number is not divisible by both 2 and 3, therefore, it is not divisible by 6.\n\n(d) 61233  Not divisible by 2 as its units place is not an even number.\n\nDivisible by 3 as the sum of its digits (= 15) is divisible by 3.\n\nSince the number is not divisible by both 2 and 3, therefore, it is not divisible by 6.\n\n(e) 901352  Divisible by 2 as its units place is an even number.\n\nNot divisible by 3 as the sum of its digits (= 20) is not divisible by 3.\n\nSince the number is not divisible by both 2 and 3, therefore, it is not divisible by 6.\n\n(f) 438750  Divisible by 2 as its units place is an even number.\n\nDivisible by 3 as the sum of its digits (= 27) is divisible by 3.\n\nSince the number is divisible by both 2 and 3, therefore, it is 
divisible by 6.\n\n(g) 1790184  Divisible by 2 as its units place is an even number.\n\nDivisible by 3 as the sum of its digits (= 30) is divisible by 3.\n\nSince the number is divisible by both 2 and 3, therefore, it is divisible by 6.\n\n(h) 12583  Not divisible by 2 as its units place is not an even number.\n\nNot divisible by 3 as the sum of its digits (= 19) is not divisible by 3.\n\nSince the number is not divisible by both 2 and 3, therefore, it is not divisible by 6.\n\n(i) 639210  Divisible by 2 as its units place is an even number.\n\nDivisible by 3 as the sum of its digits (= 21) is divisible by 3.\n\nSince the number is divisible by both 2 and 3, therefore, it is divisible by 6.\n\n(j) 17852  Divisible by 2 as its units place is an even number.\n\nNot divisible by 3 as the sum of its digits (= 23) is not divisible by 3.\n\nSince the number is not divisible by both 2 and 3, therefore, it is not divisible by 6.\n\nNCERT Solutions for Class 6 Maths Exercise 3.3\n\n###### Question 4. Using divisibility tests, determine which of the following numbers are divisible by 11:\n\n(a) 5445, (b) 10824, (c) 7138965, (d) 70169308, (e) 10000001, (f) 901153\n\nAnswer: (a) 5445  Sum of the digits at odd places = 4 + 5 = 9\n\nSum of the digits at even places = 4 + 5 = 9\n\nDifference of both sums = 9 – 9 = 0\n\nSince the difference is 0, therefore, the number is divisible by 11.\n\n(b) 10824  Sum of the digits at odd places = 4 + 8 + 1 = 13\n\nSum of the digits at even places = 2 + 0 = 2\n\nDifference of both sums = 13 – 2 = 11\n\nSince the difference is 11, therefore, the number is divisible by 11.\n\n(c) 7138965  Sum of the digits at odd places = 5 + 9 + 3 + 7 = 24\n\nSum of the digits at even places = 6 + 8 + 1 = 15\n\nDifference of both sums = 24 – 15 = 9\n\nSince the difference is neither 0 nor 11, therefore, the number is not divisible by 11.\n\n(d) 70169308  Sum of the digits at odd places = 8 + 3 + 6 + 0 = 17\n\nSum of the digits at even places = 0 + 9 + 1 + 7 = 17\n\nDifference of both sums = 17 – 17 = 0\n\nSince the difference is 0, therefore, the number is divisible by 11.\n\n(e) 10000001  Sum of the digits at odd places = 1 + 0 + 0 + 0 = 1\n\nSum of the digits at even places = 0 + 0 + 0 + 1 = 1\n\nDifference of both sums = 1 – 1 = 0\n\nSince the difference is 0, therefore, the number is divisible by 11.\n\n(f) 901153  Sum of the digits at odd places = 3 + 1 + 0 = 4\n\nSum of the digits at even places = 5 + 1 + 9 = 15\n\nDifference of both sums = 15 – 4 = 11\n\nSince the difference is 11, therefore, the number is divisible by 11.\n\n###### Question 5. Write the smallest digit and the largest digit in the blank space of each of the following numbers so that the number formed is divisible by 3:\n\n(a) __________ 6724\n\n(b) 4765 __________ 2\n\n(a) We know that a number is divisible by 3 if the sum of all its digits is divisible by 3.\n\nTherefore, Smallest digit: 2  26724 = 2 + 6 + 7 + 2 + 4 = 21\n\nLargest digit: 8  86724 = 8 + 6 + 7 + 2 + 4 = 27\n\n(b) We know that a number is divisible by 3 if the sum of all its digits is divisible by 3.\n\nTherefore, Smallest digit: 0  476502 = 4 + 7 + 6 + 5 + 0 + 2 = 24\n\nLargest digit: 9  476592 = 4 + 7 + 6 + 5 + 9 + 2 = 33\n\nNCERT Solutions for Class 6 Maths Exercise 3.3\n\n###### Question 6. Write the smallest digit and the largest digit in the blank space of each of the following numbers so that the number formed is divisible by 11:\n\n(a) 92 __________ 389\n\n(b) 8 __________ 9484\n\nAnswer: (a) We know that a number is divisible by 11 if the difference 
of the sum of the digits at odd places and that at even places is either 0 or 11.\n\nTherefore, the missing digit is 8, giving 928389  Odd places = 9 + 8 + 8 = 25\n\nEven places = 2 + 3 + 9 = 14\n\nDifference = 25 – 14 = 11\n\nSince the difference is 11, the number is divisible by 11. No other digit works, so 8 is both the smallest and the largest such digit.\n\n(b) We know that a number is divisible by 11 if the difference of the sum of the digits at odd places and that at even places is either 0 or 11.\n\nTherefore, the missing digit is 6, giving 869484  Odd places = 8 + 9 + 8 = 25\n\nEven places = 6 + 4 + 4 = 14\n\nDifference = 25 – 14 = 11\n\nSince the difference is 11, the number is divisible by 11. No other digit works, so 6 is both the smallest and the largest such digit.\n\n###### Question 1. Find the common factors of:\n\n(a) 20 and 28\n\n(b) 15 and 25\n\n(c) 35 and 50\n\n(d) 56 and 120\n\n(a) Factors of 20 = 1, 2, 4, 5, 10, 20\n\nFactors of 28 = 1, 2, 4, 7, 14, 28\n\nCommon factors = 1, 2, 4\n\n(b) Factors of 15 = 1, 3, 5, 15\n\nFactors of 25 = 1, 5, 25\n\nCommon factors = 1, 5\n\n(c) Factors of 35 = 1, 5, 7, 35\n\nFactors of 50 = 1, 2, 5, 10, 25, 50\n\nCommon factors = 1, 5\n\n(d) Factors of 56 = 1, 2, 4, 7, 8, 14, 28, 56\n\nFactors of 120 = 1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30, 40, 60, 120\n\nCommon factors = 1, 2, 4, 8\n\n###### Question 2. Find the common factors of:\n\n(a) 4, 8 and 12\n\n(b) 5, 15 and 25\n\n(a) Factors of 4 = 1, 2, 4\n\nFactors of 8 = 1, 2, 4, 8\n\nFactors of 12 = 1, 2, 3, 4, 6, 12\n\nCommon factors of 4, 8 and 12 = 1, 2, 4\n\n(b) Factors of 5 = 1, 5\n\nFactors of 15 = 1, 3, 5, 15\n\nFactors of 25 = 1, 5, 25\n\nCommon factors of 5, 15 and 25 = 1, 5\n\nNCERT Solutions for Class 6 Maths Exercise 3.4\n\n###### Question 3. Find the first three common multiples of:\n\n(a) 6 and 8\n\n(b) 12 and 18\n\n(a) Multiples of 6 = 6, 12, 18, 24, 30, 36, 42, 48, 54, 60, 66, 72, …………\n\nMultiples of 8 = 8, 16, 24, 32, 40, 48, 56, 64, 72, …………………….\n\nCommon multiples of 6 and 8 = 24, 48, 72\n\n(b) Multiples of 12 = 12, 24, 36, 48, 60, 72, 84, 96, 108, 120, ………\n\nMultiples of 18 = 18, 36, 54, 72, 90, 108, ………………………………\n\nCommon multiples of 12 and 18 = 36, 72, 108\n\n###### Question 4. Write all the numbers less than 100 which are common multiples of 3 and 4.\n\nAnswer: Multiples of 3 = 3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36, 39, 42, 45, 48, 51, 54, 57, 60, 63, 66, 69, 72, 75, 78, 81, 84, 87, 90, 93, 96, 99\n\nMultiples of 4 = 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60, 64, 68, 72, 76, 80, 84, 88, 92, 96, 100\n\nCommon multiples of 3 and 4 = 12, 24, 36, 48, 60, 72, 84, 96\n\nNCERT Solutions for Class 6 Maths Exercise 3.4\n\n###### Question 5. Which of the following numbers are co-prime:\n\n(a) 18 and 35\n\n(b) 15 and 37\n\n(c) 30 and 415\n\n(d) 17 and 68\n\n(e) 216 and 215\n\n(f) 81 and 16\n\nAnswer: (a) Factors of 18 = 1, 2, 3, 6, 9, 18\n\nFactors of 35 = 1, 5, 7, 35\n\nCommon factor = 1\n\nSince both have only one common factor, i.e., 1, they are co-prime numbers.\n\n(b) Factors of 15 = 1, 3, 5, 15\n\nFactors of 37 = 1, 37\n\nCommon factor = 1\n\nSince both have only one common factor, i.e., 1, they are co-prime numbers.\n\n(c) Factors of 30 = 1, 2, 3, 5, 6, 10, 15, 30\n\nFactors of 415 = 1, 5, 83, 415\n\nCommon factors = 1, 5\n\nSince both have more than one common factor, they are not co-prime numbers.\n\n(d) Factors of 17 = 1, 17\n\nFactors of 68 = 1, 2, 4, 17, 34, 68\n\nCommon factors = 1, 17\n\nSince both have more than one common factor, they are not co-prime numbers.\n\n(e) Factors of 216 = 1, 2, 3, 4, 6, 8, 9, 12, 18, 24, 27, 36, 54, 72, 108, 216\n\nFactors of 215 = 1, 5, 43, 215\n\nCommon factor = 1\n\nSince both have only one common factor, i.e., 1, they are co-prime numbers.\n\n(f) Factors of 81 = 1, 3, 9, 27, 81\n\nFactors of 16 = 1, 2, 4, 
8, 16\n\nCommon factor = 1\n\nSince both have only one common factor, i.e., 1, they are co-prime numbers.\n\n###### Question 6. A number is divisible by both 5 and 12. By which other number will that number be always divisible?\n\nAnswer: 5 x 12 = 60. The number must be divisible by 60.\n\nNCERT Solutions for Class 6 Maths Exercise 3.4\n\n###### Question 7. A number is divisible by 12. By what other numbers will that number be divisible?\n\nAnswer: Factors of 12 are 1, 2, 3, 4, 6, 12.\n\nTherefore, the number will also be divisible by 1, 2, 3, 4 and 6.\n\n###### Question 1. Which of the following statements are true:\n\n(a) If a number is divisible by 3, it must be divisible by 9.\n\n(b) If a number is divisible by 9, it must be divisible by 3.\n\n(c) If a number is divisible by 18, it must be divisible by both 3 and 6.\n\n(d) If a number is divisible by 9 and 10 both, then it must be divisible by 90.\n\n(e) If two numbers are co-primes, at least one of them must be prime.\n\n(f) All numbers which are divisible by 4 must also be divisible by 8.\n\n(g) All numbers which are divisible by 8 must also be divisible by 4.\n\n(h) If a number exactly divides two numbers separately, it must exactly divide their sum.\n\n(i) If a number exactly divides the sum of two numbers, it must exactly divide the two numbers separately.\n\nAnswer: Statements (b), (c), (d), (g) and (h) are true.\n\n###### Question 2. Here are two different factor trees for 60. Write the missing numbers.\n\n(a)", null, "(b)", null, "(a)", null, "(b)", null, "NCERT Solutions for Class 6 Maths 3.5\n\n###### Question 4. Write the greatest 4-digit number and express it in terms of its prime factors.\n\nAnswer: The greatest four-digit number is 9999.", null, "The prime factors of 9999 are 3 x 3 x 11 x 101.\n\nNCERT Solutions for Class 6 Maths 3.5\n\n###### Question 5. Write the smallest 5-digit number and express it in terms of its prime factors.\n\nAnswer: The smallest five-digit number is 10000.", null, "The prime factors of 10000 are 2 x 2 x 2 x 2 x 5 x 5 x 5 x 5.\n\n###### Question 6. Find all the prime factors of 1729 and arrange them in ascending order. Now state the relation, if any, between two consecutive prime factors.", null, "Prime factors of 1729 are 7 x 13 x 19.\n\nThe difference of two consecutive prime factors is 6.\n\nNCERT Solutions for Class 6 Maths 3.5\n\n###### Question 7. The product of three consecutive numbers is always divisible by 6. Verify this statement with the help of some examples.\n\nAnswer: Among any three consecutive numbers, there must be one even number and one multiple of 3. Thus, the product must be a multiple of 6.\n\nExample: (i) 2 x 3 x 4 = 24\n\n(ii) 4 x 5 x 6 = 120\n\n###### Question 8. The sum of two consecutive odd numbers is always divisible by 4. 
Verify this statement with the help of some examples.\n\nAnswer: 3 + 5 = 8 and 8 is divisible by 4.\n\n5 + 7 = 12 and 12 is divisible by 4.\n\n7 + 9 = 16 and 16 is divisible by 4.\n\n9 + 11 = 20 and 20 is divisible by 4.\n\nNCERT Solutions for Class 6 Maths 3.5\n\n###### Question 9. In which of the following expressions has prime factorization been done:\n\n(a) 24 = 2 x 3 x 4\n\n(b) 56 = 7 x 2 x 2 x 2\n\n(c) 70 = 2 x 5 x 7\n\n(d) 54 = 2 x 3 x 9\n\nAnswer: In expressions (b) and (c), prime factorization has been done.\n\n###### Question 10. Determine if 25110 is divisible by 45.\n\n[Hint: 5 and 9 are co-prime numbers. Test the divisibility of the number by 5 and 9.]\n\nAnswer: The prime factorization of 45 = 5 x 9\n\n25110 is divisible by 5 as ‘0’ is at its unit place.\n\n25110 is divisible by 9 as the sum of its digits is divisible by 9.\n\nTherefore, the number must be divisible by 5 x 9 = 45\n\nNCERT Solutions for Class 6 Maths 3.5\n\n###### Question 11. 18 is divisible by both 2 and 3. It is also divisible by 2 x 3 = 6. Similarly, a number is divisible by 4 and 6. Can we say that the number must be divisible by 4 x 6 = 24? If not, give an example to justify your answer.\n\nAnswer: No. The number 12 is divisible by both 6 and 4, but 12 is not divisible by 24.\n\n###### Question 12. I am the smallest number having four different prime factors. Can you find me?\n\nAnswer: 2 x 3 x 5 x 7 = 210\n\n###### Question 1. Find the H.C.F. of the following numbers:\n\n(a) 18, 48, (b) 30, 42, (c) 18, 60, (d) 27, 63, (e) 36, 84, (f) 34, 102, (g) 70, 105, 175, (h) 91, 112, 49, (i) 18, 54, 81, (j) 12, 45, 75\n\n(a) Factors of 18 = 2 x 3 x 3\n\nFactors of 48 = 2 x 2 x 2 x 2 x 3\n\nH.C.F. (18, 48) = 2 x 3 = 6\n\n(b) Factors of 30 = 2 x 3 x 5\n\nFactors of 42 = 2 x 3 x 7\n\nH.C.F. (30, 42) = 2 x 3 = 6\n\n(c) Factors of 18 = 2 x 3 x 3\n\nFactors of 60 = 2 x 2 x 3 x 5\n\nH.C.F. (18, 60) = 2 x 3 = 6\n\n(d) Factors of 27 = 3 x 3 x 3\n\nFactors of 63 = 3 x 3 x 7\n\nH.C.F. (27, 63) = 3 x 3 = 9\n\n(e) Factors of 36 = 2 x 2 x 3 x 3\n\nFactors of 84 = 2 x 2 x 3 x 7\n\nH.C.F. (36, 84) = 2 x 2 x 3 = 12\n\n(f) Factors of 34 = 2 x 17\n\nFactors of 102 = 2 x 3 x 17\n\nH.C.F. (34, 102) = 2 x 17 = 34\n\n(g) Factors of 70 = 2 x 5 x 7\n\nFactors of 105 = 3 x 5 x 7\n\nFactors of 175 = 5 x 5 x 7\n\nH.C.F. = 5 x 7 = 35\n\n(h) Factors of 91 = 7 x 13\n\nFactors of 112 = 2 x 2 x 2 x 2 x 7\n\nFactors of 49 = 7 x 7\n\nH.C.F. = 7\n\n(i) Factors of 18 = 2 x 3 x 3\n\nFactors of 54 = 2 x 3 x 3 x 3\n\nFactors of 81 = 3 x 3 x 3 x 3\n\nH.C.F. = 3 x 3 = 9\n\n(j) Factors of 12 = 2 x 2 x 3\n\nFactors of 45 = 3 x 3 x 5\n\nFactors of 75 = 3 x 5 x 5\n\nH.C.F. = 3\n\nNCERT Solutions for Class 6 Maths Exercise 3.6\n\n###### Question 2. What is the H.C.F. of two consecutive:\n\n(a) numbers?\n\n(b) even numbers?\n\n(c) odd numbers?\n\n(a) The H.C.F. of two consecutive numbers is 1.\n\n(b) The H.C.F. of two consecutive even numbers is 2.\n\n(c) The H.C.F. of two consecutive odd numbers is 1.\n\nNCERT Solutions for Class 6 Maths Exercise 3.6\n\n###### Question 3. The H.C.F. of the co-prime numbers 4 and 15 was found as follows by factorization:\n\n4 = 2 x 2 and 15 = 3 x 5. Since there is no common prime factor, the H.C.F. of 4 and 15 is 0. Is the answer correct? If not, what is the correct H.C.F.?\n\nAnswer: No. The correct H.C.F. is 1.\n\n###### Question 1. Renu purchases two bags of fertilizer of weights 75 kg and 69 kg. Find the maximum value of weight which can measure the weight of the fertilizer an exact number of times.\n\nAnswer: For finding the maximum weight, we have to find the H.C.F. 
of 75 and 69.\n\nFactors of 75 = 3 x 5 x 5\n\nFactors of 69 = 3 x 23\n\nH.C.F. = 3\n\nTherefore, the required weight is 3 kg.\n\n###### Question 2. Three boys step off together from the same spot. Their steps measure 63 cm, 70 cm and 77 cm respectively. What is the minimum distance each should cover so that all can cover the distance in complete steps?\n\nAnswer: For finding the minimum distance, we have to find the L.C.M. of 63, 70 and 77.\n\nL.C.M. of 63, 70 and 77 = 7 x 9 x 10 x 11 = 6930 cm.\n\nTherefore, the minimum distance is 6930 cm.\n\nNCERT Solutions for Class 6 Maths Exercise 3.7\n\n###### Question 3. The length, breadth and height of a room are 825 cm, 675 cm and 450 cm respectively. Find the longest tape which can measure the three dimensions of the room exactly.\n\nAnswer: The measurement of the longest tape = H.C.F. of 825 cm, 675 cm and 450 cm.\n\nFactors of 825 = 3 x 5 x 5 x 11\n\nFactors of 675 = 3 x 3 x 3 x 5 x 5\n\nFactors of 450 = 2 x 3 x 3 x 5 x 5\n\nH.C.F. = 3 x 5 x 5 = 75 cm\n\nTherefore, the longest tape is 75 cm.\n\n###### Question 4. Determine the smallest 3-digit number which is exactly divisible by 6, 8 and 12.\n\nAnswer: L.C.M. of 6, 8 and 12 = 2 x 2 x 2 x 3 = 24\n\nThe smallest 3-digit number = 100\n\nTo find the number, we divide 100 by 24: the quotient is 4 and the remainder is 4, i.e. 100 = 24 x 4 + 4.\n\nTherefore, the required number = 100 + (24 – 4) = 120.\n\nNCERT Solutions for Class 6 Maths Exercise 3.7\n\n###### Question 5. Determine the largest 3-digit number which is exactly divisible by 8, 10 and 12.\n\nAnswer: L.C.M. of 8, 10 and 12 = 2 x 2 x 2 x 3 x 5 = 120\n\nThe largest three-digit number = 999\n\nDividing 999 by 120 gives quotient 8 and remainder 39, i.e. 999 = 120 x 8 + 39.\n\nTherefore, the required number = 999 – 39 = 960\n\n###### Question 6. The traffic lights at three different road crossings change after every 48 seconds, 72 seconds and 108 seconds respectively. If they change simultaneously at 7 a.m., at what time will they change simultaneously again?\n\nAnswer: L.C.M. of 48, 72 and 108 = 2 x 2 x 2 x 2 x 3 x 3 x 3 = 432 sec.\n\nAfter 432 seconds, the lights change simultaneously.\n\n432 seconds = 7 minutes 12 seconds\n\nTherefore the time = 7 a.m. + 7 minutes 12 seconds\n\n= 7 : 07 : 12 a.m.\n\nNCERT Solutions for Class 6 Maths Exercise 3.7\n\n###### Question 7. Three tankers contain 403 liters, 434 liters and 465 liters of diesel respectively. Find the maximum capacity of a container that can measure the diesel of the three containers an exact number of times.\n\nAnswer: The maximum capacity of the container = H.C.F. (403, 434, 465)\n\nFactors of 403 = 13 x 31\n\nFactors of 434 = 2 x 7 x 31\n\nFactors of 465 = 3 x 5 x 31\n\nH.C.F. 
= 31\n\nTherefore, a container of 31 liters is required to measure the quantity an exact number of times.\n\n###### Question 8. Find the least number which when divided by 6, 15 and 18, leaves remainder 5 in each case.\n\nAnswer: L.C.M. of 6, 15 and 18 = 2 x 3 x 3 x 5 = 90\n\nTherefore the required number = 90 + 5 = 95\n\nNCERT Solutions for Class 6 Maths Exercise 3.7\n\n###### Question 9. Find the smallest 4-digit number which is divisible by 18, 24 and 32.\n\nAnswer: L.C.M. of 18, 24 and 32 = 2 x 2 x 2 x 2 x 2 x 3 x 3 = 288\n\nThe smallest four-digit number = 1000\n\nDividing 1000 by 288 gives quotient 3 and remainder 136, i.e. 1000 = 288 x 3 + 136.\n\nTherefore, the required number is 1000 + (288 – 136) = 1152.\n\n###### Question 10. Find the L.C.M. of the following numbers:\n\n(a) 9 and 4\n\n(b) 12 and 5\n\n(c) 6 and 5\n\n(d) 15 and 4\n\nObserve a common property in the obtained L.C.M.s. Is the L.C.M. the product of the two numbers in each case?\n\nAnswer: (a) L.C.M. of 9 and 4\n\n= 2 x 2 x 3 x 3 = 36\n\n(b) L.C.M. of 12 and 5\n\n= 2 x 2 x 3 x 5 = 60\n\n(c) L.C.M. of 6 and 5\n\n= 2 x 3 x 5 = 30\n\n(d) L.C.M. of 15 and 4\n\n= 2 x 2 x 3 x 5 = 60\n\nYes, the L.C.M. is equal to the product of the two numbers in each case, because the numbers in each pair are co-prime. (Each L.C.M. obtained here also happens to be a multiple of 3.)\n\n###### Question 11. Find the L.C.M. of the following numbers in which one number is the factor of the other:\n\n(a) 5, 20\n\n(b) 6, 18\n\n(c) 12, 48\n\n(d) 9, 45\n\nWhat do you observe in the results obtained?\n\nAnswer: (a) L.C.M. of 5 and 20\n\n= 2 x 2 x 5 = 20\n\n(b) L.C.M. of 6 and 18\n\n= 2 x 3 x 3 = 18\n\n(c) L.C.M. of 12 and 48\n\n= 2 x 2 x 2 x 2 x 3 = 48\n\n(d) L.C.M. of 9 and 45\n\n= 3 x 3 x 5 = 45\n\nFrom all these cases, we can conclude that if the smaller number is a factor of the larger number, then the L.C.M. of the two numbers is equal to the larger number." ]
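A supplementary remark (this identity is standard arithmetic, though it is not stated in the exercise text above, so treat it as an aside): for any two positive integers, H.C.F. x L.C.M. = the product of the two numbers. For example, taking 12 and 45 from Question 1(j) of Exercise 3.6: H.C.F. = 3 and L.C.M. = 2 x 2 x 3 x 3 x 5 = 180, and indeed 3 x 180 = 540 = 12 x 45. The identity also explains the last two answers: for co-prime pairs the H.C.F. is 1, so the L.C.M. equals the product (Question 10), and when one number divides the other the H.C.F. is the smaller number, so the L.C.M. is the larger one (Question 11).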
[ null, "https://www.facebook.com/tr", null, "https://i0.wp.com/media.mycbseguide.com/images/static/ncert/06/mathematics/06_math_ncert_ch03_01.jpg", null, "https://i0.wp.com/media.mycbseguide.com/images/static/ncert/06/mathematics/06_math_ncert_ch03_02.jpg", null, "https://i0.wp.com/media.mycbseguide.com/images/static/ncert/06/mathematics/06_math_ncert_ch03_03.jpg", null, "https://i0.wp.com/media.mycbseguide.com/images/static/ncert/06/mathematics/06_math_ncert_ch03_04.jpg", null, "https://i1.wp.com/media.mycbseguide.com/images/static/ncert/06/mathematics/06_math_ncert_ch03_05.jpg", null, "https://i1.wp.com/media.mycbseguide.com/images/static/ncert/06/mathematics/06_math_ncert_ch03_06.jpg", null, "https://i1.wp.com/media.mycbseguide.com/images/static/ncert/06/mathematics/06_math_ncert_ch03_07.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8938801,"math_prob":0.9998141,"size":26424,"snap":"2021-04-2021-17","text_gpt3_token_len":10116,"char_repetition_ratio":0.21290688,"word_repetition_ratio":0.2986519,"special_character_ratio":0.43823797,"punctuation_ratio":0.16998354,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998202,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,null,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-24T06:09:09Z\",\"WARC-Record-ID\":\"<urn:uuid:c2a04666-9c1b-434a-b52f-fc81eefae031>\",\"Content-Length\":\"145317\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c378f8fb-7a68-4f0d-9fa2-743c2e72fc9f>\",\"WARC-Concurrent-To\":\"<urn:uuid:7336d10a-8b8a-45d6-b72b-c653caad283c>\",\"WARC-IP-Address\":\"103.129.97.81\",\"WARC-Target-URI\":\"https://conventlearning.com/playing-with-numbers/\",\"WARC-Payload-Digest\":\"sha1:LNDWU4HK4UNOJUVQV3GPO4ERWZBNRUPK\",\"WARC-Block-Digest\":\"sha1:IYLRNKD72QCM4UEXNTCIOAPUYRVKHD5M\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703547333.68_warc_CC-MAIN-20210124044618-20210124074618-00502.warc.gz\"}"}
https://www.combinatorics.org/ojs/index.php/eljc/article/view/v14i1n23
[ "# A Note on a Problem of Hilliker and Straus\n\n• Mirosława Jańczak\n\n### Abstract\n\nFor a prime $p$ and a vector $\\bar\\alpha=(\\alpha_1,\\dots,\\alpha_k)\\in {\\Bbb Z}_p^k$ let $f\\left(\\bar\\alpha,p\\right)$ be the largest $n$ such that in each set $A\\subseteq{\\Bbb Z}_{p}$ of $n$ elements one can find $x$ which has a unique representation in the form $x=\\alpha_{1}a_1+\\dots +\\alpha_{k}a_k, a_i\\in A$. Hilliker and Straus bounded $f\\left(\\bar\\alpha,p\\right)$ from below by an expression which contained the $L_1$-norm of $\\bar\\alpha$ and asked if there exists a positive constant $c\\left(k\\right)$ so that $f\\left(\\bar\\alpha,p\\right)>c\\left(k\\right)\\log p$. In this note we answer their question in the affirmative and show that, for large $k$, one can take $c(k)=O(1/k\\log (2k))$. We also give a lower bound for the size of a set $A\\subseteq {\\Bbb Z}_{p}$ such that every element of $A+A$ has at least $K$ representations in the form $a+a'$, $a, a'\\in A$.\n\nPublished\n2007-10-30\nIssue\nArticle Number\nN23" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80750555,"math_prob":0.99979633,"size":919,"snap":"2019-35-2019-39","text_gpt3_token_len":300,"char_repetition_ratio":0.12568305,"word_repetition_ratio":0.0,"special_character_ratio":0.31229597,"punctuation_ratio":0.06896552,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999988,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-24T22:33:14Z\",\"WARC-Record-ID\":\"<urn:uuid:dc102a23-bc10-44ff-9c0e-b814a6f3756b>\",\"Content-Length\":\"9597\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2d24f290-c77a-4b80-ae42-e94ff255fc0c>\",\"WARC-Concurrent-To\":\"<urn:uuid:43e04cff-e777-4a97-92e3-45f759cc130f>\",\"WARC-IP-Address\":\"150.203.186.177\",\"WARC-Target-URI\":\"https://www.combinatorics.org/ojs/index.php/eljc/article/view/v14i1n23\",\"WARC-Payload-Digest\":\"sha1:VRVESRV3A6NE4KOWCJC3UOR6MMZ5STGF\",\"WARC-Block-Digest\":\"sha1:Z7A6EGTADGDO37NRBS32H2ECOOSJEQUL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027321786.95_warc_CC-MAIN-20190824214845-20190825000845-00477.warc.gz\"}"}
http://compgroups.net/comp.soft-sys.math.mathematica/gaussian-random-field-simulati/916173
[ "f\n\n#### Gaussian Random Field Simulation in Mathematica\n\n```Is there an available package or code in Mathematica\nthat can simulate Gaussian random fields in 3D? I am looking for\nsomething similar to what the package \"RandomFields\" can do\n(but only in 2D) in the free statistics package R.\n\nThanks for any pointers\n\n--\n\nUlvi Yurtsever\n\n```", null, "0", null, "", null, "Ulvi\n6/18/2008 8:26:33 AM", null, "comp.soft-sys.math.mathematica", null, "", null, "28821 articles.", null, "0 followers.", null, "1 Replies", null, "1854 Views", null, "Similar Articles\n\n[PageSpeed] 31\n\n```Try: http://mathematica.stackexchange.com/questions/4829/efficiently-generating-n-d-gaussian-random-fields\n```", null, "0", null, "6/24/2014 10:43:50 AM", null, "", null, "Similar Artilces:\n\nSimulating random fields in Mathematica\nGreetings, I'm searching for Mathematica code to simulate large (Gaussian) random fields, for work in spatial analysis. Can anyone point me in the right direction? Thanks, -Mark ~ Mark Coleman wrote: > Greetings, > > I'm searching for Mathematica code to simulate large (Gaussian) random fields, > for work in spatial analysis. Can anyone point me in the right > direction? > > Thanks, > > -Mark > I have a notebook for simulation of Gaussian random field as a Fourier-series representation. I have been told that this series re...\n\nMath.random() and Math.round(Math.random()) and Math.floor(Math.random()*2)\nAssuming one needs to have a function returning false or true on each call in pseudo-random order.and using JavaScript native Math.random() method as the basis of the pseudo-randomness. Say the variants of such function are: getAnswer1() { var n = Math.round(Math.random()); return n ? true : false; } getAnswer2() { var n = Math.floor(Math.random()*2); return (n==2) ? true : false; } Leaving obvious practical testing by platforms aside: Is there are theoretical considerations that pseudo-randomness (predictability) of either of above will be better or worse than the o...\n\nmath =!= mathematica\nDoes somebody know why I get different behaviour for the following commands in a commandline session of mathematica 7 or inside a notebook? It also seems to be different in different Mathematica versions for the notebook format. See http://www.risc.jku.at/people/hemmecke/mathematica/ for the notebook files for Mathematica 5.2, 6.0, 7.0. Can someone explain the General::dupsym: The symbol Array with context A` already exists. message? Why does that message appear at all? If A`Array exists, then Mathematica should just use it, shouldn't it? According to http://reference.wol...\n\nMathematica\nhttp://www.wolfram.com/ Mathematica seems to have similar problems or did I miss something? (Sure thing!) *\\jk not the q's wrote: > http://www.wolfram.com/ > > Mathematica seems to have similar problems or > did I miss something? (Sure thing!) Similar problems to what? -- Dr Jon D Harrop, Flying Frog Consultancy http://www.ffconsultancy.com/products/?u \"Jon Harrop\" <[email protected]> wrote in message news:[email protected]... > not the q's wrote: >> http://www.wolfram.com/ >&...\n\nMathematica\nHi All Any one knows if there are any good summary of important command to be use in mathematica with simple instruction and graphics to go along? I just started using this software and hope to grab some basics. The tutorial in the software not too appealing for me. 
I do not know why rgds and thanks Jason Jason, The best thing is to give yourself a crash course in Mathematica before attempting to use it in your work or studies. The best course is to work through most of Part I of The Book, actually typing in commands and seeing that they work. The only way to learn the s...\n\n[Mathematica 6] Sudden shutdown of Mathematica. No error messages, no traces, no warning. Mathematica just disappears from desktop\nI've noticed this now more than one time. I would say this has happened to me about 5-6 times already since I started using Mathematica 6 which is about 2 weeks now. This is what happens: I'll be working on an open notebook, and do something, for example, just now, I went to do SAVE AS to save the notebook. Mathematica would just disappear in split second. No errors, nothing. I look and it is just gone. All windows gone. No traces. I have to restart Mathematica. And I just lost all my work for the last 30 minutes. Strange thing is now when I open the document sin...\n\nWolfram reinvents Mathematica with Mathematica 6\nMathematica 6 is now available, introducing over 1,000 groundbreaking technologies developed over more than a decade at Wolfram Research. Mathematica 6 takes technical computing to a new level: more tightly bound, more natural, and more automated, applicable to a far wider range of areas than ever before. Central to this achievement is \"instant interactivity\"--taking models, computations, or just about any concept and turning them into fully interactive applications, sometimes within seconds. In addition to the new capabilities for instant development, Mathematica 6 is also optimiz...\n\nDebugging Mathematica Code (Mathematica 7)\nHello Experts, I made my fist steps with the Mathematica (so called) debugger and stumbled immediately. Is there anywhrere a documentation of this tool that is worth it's name (a criterion which the Mathematica 7 documentatin on debug surely fails). I've tried a lot, but I'm still at the stage \"trial an error\". Greetings Mike \"m.g.\" <[email protected]> wrote in message news:gl1okn\\$dpb\\[email protected]... > Hello Experts, > > I made my fist steps with the Mathematica (so called) debugger and > stumbled > immediate...\n\nWolfram reinvents Mathematica with Mathematica 6\nMathematica 6 is now available, introducing over 1,000 groundbreaking technologies developed over more than a decade at Wolfram Research. Mathematica 6 takes technical computing to a new level: more tightly bound, more natural, and more automated, applicable to a far wider range of areas than ever before. Central to this achievement is \"instant interactivity\"--taking models, computations, or just about any concept and turning them into fully interactive applications, sometimes within seconds. In addition to the new capabilities for instant development, Mathematica 6 is also optimiz...\n\nMathematica Programmer vs. Programming in Mathematica\nHas anybody read both of Roman Maeder's books _The Mathematica Programmer_ and _Programming in Mathematica_? I specifically mean the out of print first volume of the former. Dose _The Mathematica Programmer_ give a significantly different perspective than what is presented in _Programming in Mathematica_? It is, I believe, the best source for information about the design of his Classes` package. The only other source I am aware of is an unavailable (other than buying the hardcopy) back issue of The Mathematica Journal. 
And, if my understanding is correct, that is not as com...\n\nA.I for mathematica\nwhat do you think mathematica users, about the following suggestion: i suggest to wolfram R. to make a package may be called A.I mode for mathematica once loaded by the choice of the user, the system will be in the Genie mode, and will accept from the user questions like :please what is the sin of 45 degrees?, what is pi to 10000 decimal points, please plot for me the graph of z=12*cos(x^2+y^2)/(3+x^2+y^2). i think this will be a first step toward the star trek like system, in wich the captain told the system to analyse and to calculate by using common obscure language. this approach will...\n\nLearning maths with mathematica\nHi, I am going to fulfil a lifelong dream and will embark on a long distance physics university course in autumn after spending most of my working life as a professional musician in the abstract world of music. For this endeavour I have to brush up and recover my long lost maths basics. After discovering and trying out Mathematica I have purchased the home version of this amazing program which, after spending some time with it, I consider to be the swiss knife of everything. I would like to ask for recommendations about what would be the best way to use the program...\n\nRe: math =!= mathematica\nOn 3/23/10 at 4:23 AM, [email protected] (hemmecke) wrote: >Does somebody know why I get different behaviour for the following >General::dupsym: The symbol Array with context A` already exists. >message? Why does that message appear at all? If A`Array exists, >then Mathematica should just use it, shouldn't it? >According to >http://reference.wolfram.com/mathematica/tutorial/Contexts.html we >have: >`name a symbol in the current context >So why is Mathematica complaining? The message is not telling you the symbol exists in the current cont...\n\nReduce in Mathematica 5 vs Mathematica 8\nHi Mathematica Community, Knowing that Reduce has'nt been modified in Mathematica 8 why the same system that I try to solve with Reduce gives result with Mathematica 5 but not with Mathematica 8? Reduce[-y + Log[Log[v]]/Log == -yP + Log[Log[vP]]/Log && yP == y + Floor[Log[x]/Log], {yP, vP}, Backsubstitution -> True] Thank you very much. ...\n\nRe: Mathematica Programmer vs. Programming in Mathematica\nHi, \"Programming in Mathematica\" is the standard book for Mathematica Programming, the \"Mathematica Programmer\" collects mainly Roman Maeders articles in \"The Mathematica Journal\" and include interesting stuff like his polyhedron code. But it focus *not* on the programming techniques ... Both books are excelent ... The classes.m package is designed to show, that object oriented programming can be done in Mathematica. But nobody would consider it as more than an example that even a wonderfull functional programming language can be misused for nonsen...\n\nWolfram reinvents Mathematica with Mathematica 6 #2\nMathematica 6 is now available, introducing over 1,000 groundbreaking technologies developed over more than a decade at Wolfram Research. Mathematica 6 takes technical computing to a new level: more tightly bound, more natural, and more automated, applicable to a far wider range of areas than ever before. Central to this achievement is \"instant interactivity\"--taking models, computations, or just about any concept and turning them into fully interactive applications, sometimes within seconds. 
Reduce in Mathematica 5 vs Mathematica 8\nHi Mathematica Community, Knowing that Reduce hasn't been modified in Mathematica 8, why does the same system that I try to solve with Reduce give a result with Mathematica 5 but not with Mathematica 8? Reduce[-y + Log[Log[v]]/Log == -yP + Log[Log[vP]]/Log && yP == y + Floor[Log[x]/Log], {yP, vP}, Backsubstitution -> True] Thank you very much. ...\n\nRe: Mathematica Programmer vs. Programming in Mathematica\nHi, "Programming in Mathematica" is the standard book for Mathematica programming; "The Mathematica Programmer" mainly collects Roman Maeder's articles in "The Mathematica Journal" and includes interesting stuff like his polyhedron code. But it focuses *not* on the programming techniques ... Both books are excellent ... The Classes.m package is designed to show that object-oriented programming can be done in Mathematica. But nobody would consider it as more than an example that even a wonderful functional programming language can be misused for nonsen...\n\nWolfram reinvents Mathematica with Mathematica 6 #2\nMathematica 6 is now available, introducing over 1,000 groundbreaking technologies developed over more than a decade at Wolfram Research. Mathematica 6 takes technical computing to a new level: more tightly bound, more natural, and more automated, applicable to a far wider range of areas than ever before. Central to this achievement is "instant interactivity"--taking models, computations, or just about any concept and turning them into fully interactive applications, sometimes within seconds. In addition to the new capabilities for instant development, Mathematica 6 is also optimized for the opposite end of the spectrum--infrastructure development--and everything in between. Key new features include:\n* Dynamic interactivity, allowing sophisticated interactive interfaces to be created from single lines of input\n* High-impact adaptive visualization for automated creation of high-fidelity function and data graphics\n* Language for data integration, including automatic integration of hundreds of standard data formats\n* Load-on-demand curated data for math, physics, chemistry, finance, geography, linguistics, and more\n* Symbolic interface construction for immediate creation of arbitrary interfaces from simple programs\n* Automated computational aesthetics, with algorithmic optimization for visual presentation\nMathematica 6 is available for Windows NT/2000/XP/Vista, Mac OS X, Linux x86/Itanium, Solaris UltraSPARC/x86, HP-UX, IBM AIX, and compatible systems. For more i...\n\nMathematica.\nAloha, I'm interested in obtaining a copy of Mathematica for IRIX, but it appears that Wolfram has written off IRIX, as the last version listed on their site is Mathematica 4.2 for IRIX 6.2. Is there any way to obtain a copy? Also... what other companies have dropped IRIX support besides Adobe? Kai ...\n\nUsing a Mathematica Program to write a Mathematica Program\nHow can one write a Mathematica program the output of which is itself a Mathematica program? Here is a specific example. Suppose one runs FindLinearRecurrence on a sequence of numbers and Mathematica provides a list of the recurrence factors. What I would like is to run a program that would (1) enter the command "LinearRecurrence" plus the opening square bracket, (2) copy the output of the FindLinearRecurrence program, i.e., the recurrence factors (as a list), (3) determine the length of the list of those recurrence factors, (4) take from the sequence of numbers that h...
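One way to do what the thread above describes is to build the new command as a string and then evaluate it. A minimal sketch under the stated four steps; the Fibonacci-like sequence is made-up example data:

```mathematica
seq = {1, 1, 2, 3, 5, 8, 13, 21, 34};   (* hypothetical input sequence *)
ker = FindLinearRecurrence[seq];         (* recurrence factors: {1, 1} *)
init = Take[seq, Length[ker]];           (* as many initial terms as there are factors *)

(* assemble the text of a new Mathematica command ... *)
prog = "LinearRecurrence[" <> ToString[ker, InputForm] <> ", " <>
       ToString[init, InputForm] <> ", 15]";

(* ... and run the generated program *)
ToExpression[prog]  (* -> {1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610} *)
```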
Mathematica at Math Camp\nI taught a group of 32 smart high school kids some Mathematica this week, at the Vermont Governor's Institute in Mathematical Sciences, aka Math Camp. This is my second year doing it, and it's a ton of fun. I had the kids an hour a day in one of our computer-equipped classrooms, and this year we added some additional optional computer time in the late afternoons so they could come back and continue working / playing around. (Last year some of them wanted more time, but there was no time in the schedule when they could come back.) You might enjoy looking at graphs and a...\n\nRe: behavior from Mathematica 7 to Mathematica 8\nOn 11/19/10 at 5:10 AM, [email protected] (David Skulsky) wrote: >While testing Mathematica 8 on some heritage code, I found a problem >which a colleague traced to a behavioral change in Times[]. >Specifically, the documentation for Times[] states that "0 x >evaluates to 0, but 0.0 x is left unchanged." This appears to be >true in Mathematica 7 but not in Mathematica 8 (at least not under >Mac OS X 10.6.4). >I have informed Wolfram about this change (bug?). I can confirm getting 0. as the result of doing 0.0 x in version 8 but getting 0. x as the...\n\nopening notebooks in mathematica player instead of mathematica\nWhen I double click on a notebook I just want to view it. Is there a way to default the assigned application to open the Mathematica notebooks with the Mathematica viewer instead of having Mathematica boot up? If I want Mathematica to boot up I'll click on the icon. That's on my dock. I'm running Mac OS X 10.5.6 (9G55) on a PowerBook G4 if that helps any. On Mar 12, 2:21 am, Steven Matthew Anderson <[email protected]> wrote: > When I double click on a notebook I just want to view it. Is there a way to default the assigned application to open the mat...\n\nRe: Mathematica Programmer vs. Programming in Mathematica #2\nOn 12/13/05 at 3:41 AM, [email protected] (Steven T. Hatton) wrote: >Has anybody read both of Roman Maeder's books _The Mathematica >Programmer_ and _Programming in Mathematica_? I specifically mean >the out-of-print first volume of the former. Yes. I have copies of each within arm's length of my desk. >Does _The Mathematica Programmer_ give a significantly different >perspective than what is presented in _Programming in Mathematica_? I am not sure there is much difference in perspective. But there is a difference in the information covered. Prog...\n\nMathematica 6 -> Mathematica 7, Histogram problem\nI have a feeling that this is a stupid question but... In v6 one could convert a vector to a Histogram plot (where each entry in the vector represents a bar) by using the syntax: Needs["Histograms`"] Histogram[vector, FrequencyData -> True] Since histograms have been included in the basic v7, this option has disappeared. My question is, is there any way to get the same result by using Histogram in v7? I am aware of the new function BarChart in v7, but I don't want to use it since I have to be able to do LOG plots on the x-axis, which the new histogram fun...
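The Histograms` package option has no direct v7 equivalent, but pre-binned frequency data can be drawn by hand. One possible workaround sketch, not the package's own behavior; the log-spaced bin edges and counts below are made-up example data:

```mathematica
bounds = {1, 10, 100, 1000, 10000};  (* hypothetical bin edges, log-spaced   *)
counts = {5, 12, 7, 3};              (* hypothetical frequency count per bin *)

(* one bar per bin, plotted against Log10 of the bin edges,
   so the x axis is effectively logarithmic *)
Graphics[
 {EdgeForm[Black],
  Table[Rectangle[{Log10[bounds[[i]]], 0},
                  {Log10[bounds[[i + 1]]], counts[[i]]}], {i, Length[counts]}]},
 Axes -> True, AspectRatio -> 1/GoldenRatio,
 AxesLabel -> {"Log10[x]", "frequency"}]
```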
Change in Times[] behavior from Mathematica 7 to Mathematica 8\nWhile testing Mathematica 8 on some heritage code, I found a problem which a colleague traced to a behavioral change in Times[]. Specifically, the documentation for Times[] states that "0 x evaluates to 0, but 0.0 x is left unchanged." This appears to be true in Mathematica 7 but not in Mathematica 8 (at least not under Mac OS X 10.6.4). I have informed Wolfram about this change (bug?). Eli David Skulsky ...", null, "Web resources about - Gaussian Random Field Simulation in Mathematica - comp.soft-sys.math.mathematica", null, "Philosophiæ Naturalis Principia Mathematica - Wikipedia, the free encyclopedia\nThe Principia states Newton's laws of motion, forming the foundation of classical mechanics, also Newton's law of universal gravitation, and ...\n\nimage processing - How do I find Waldo with Mathematica? - Stack Overflow\nThis was bugging me over the weekend: What is a good way to solve those Where's Waldo? ['Wally' outside of North America] puzzles, using Mathematica ...", null, "Stephen Wolfram: The Background and Vision of Mathematica - YouTube\nDuring the Wolfram Mathematica Virtual Conference 2011, Wolfram founder Stephen Wolfram shared the background and vision of Mathematica, including ...", null, "Wolfram Mathematica coming to the iPad\nIt would appear that Wolfram, the company behind the Siri search engine, is bringing its original product, Mathematica, to the iPad. In response ...", null, "Premium Mathematica software free on budget Raspberry Pi\nWolfram Research is giving away its Mathematica software for use on the diminutive, $25 Raspberry Pi computers and debuting a brand-new programming ...", null, "Stephen Wolfram: It was Steve Jobs who named 'Mathematica'\nThe creator of the answer engine in Siri writes about his long relationship with Jobs. Photo: Creative Commons. There are several novel ...\n\n700 New Functions In Wolfram Mathematica 10\nSingle biggest jump in new functionality in the software's history\n\nNew Wolfram Language Brings The Power Of Mathematica To Any Device\n... is being expanded into a logic and knowledge engine that can operate locally or in the cloud. Wolfram Research's flagship program Mathematica ...", null, "Mathematica and Wolfram On The Raspberry Pi\n[Stephen Wolfram], possibly the only person on Earth who wants a second element named after him, is giving away Mathematica for the Raspberry ...", null, "Wolfram Brings Mathematica Technical Computing to the Web\nIf you're a fan of Wolfram's Mathematica app, you'll be pleased to hear its comprehensive tools for technical computing are now more accessible. ...\n\nResources last updated: 3/5/2016 9:07:07 PM" ]
[ null, "http://compgroups.net/img/icn/plus32.png", null, "http://compgroups.net/img/icn/minus32.png", null, "http://compgroups.net/img/icn/user.png", null, "http://compgroups.net/img/icn/group.png", null, "http://compgroups.net/img/icn/i.png", null, "http://compgroups.net/img/icn/article.gif", null, "http://compgroups.net/img/icn/star2.gif", null, "http://compgroups.net/img/icn/reply.png", null, "http://compgroups.net/img/icn/eye.png", null, "http://compgroups.net/img/!.png", null, "http://compgroups.net/img/icn/plus32.png", null, "http://compgroups.net/img/icn/minus32.png", null, "http://compgroups.net/img/icn/bullet-yellow.png", null, "http://compgroups.net/img/!.png", null, "http://compgroups.net/img/!.png", null, "http://upload.wikimedia.org/wikipedia/commons/1/17/Prinicipia-title.png", null, "http://i2.ytimg.com/vi/56ISaies6Ws/mqdefault.jpg", null, "http://9to5mac.files.wordpress.com/2012/05/screen-shot-2012-05-15-at-5-33-44-am.png", null, "http://i.i.cbsi.com/cnwk.1d/i/tim2/2013/11/22/wolfram-language-and-mathematica-icons1.png", null, "http://fortunebrainstormtech.files.wordpress.com/2011/12/filestephen_wolfram_pr.jpeg", null, "http://hackadaycom.files.wordpress.com/2013/11/matpi.png", null, "http://cdn1.tnwcdn.com/wp-content/blogs.dir/1/files/2014/09/shutterstock_173694926.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8619193,"math_prob":0.44469452,"size":14975,"snap":"2020-24-2020-29","text_gpt3_token_len":3598,"char_repetition_ratio":0.13967003,"word_repetition_ratio":0.21077184,"special_character_ratio":0.22263773,"punctuation_ratio":0.14858328,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.980176,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,2,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-30T11:50:21Z\",\"WARC-Record-ID\":\"<urn:uuid:b616e857-9784-48dc-b8cd-827837f0db25>\",\"Content-Length\":\"37270\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c8bdfb87-6563-46db-85bc-8da8cbe07f80>\",\"WARC-Concurrent-To\":\"<urn:uuid:3c9a3d53-75a2-4a92-8c78-9736c1bbdbfc>\",\"WARC-IP-Address\":\"108.170.15.253\",\"WARC-Target-URI\":\"http://compgroups.net/comp.soft-sys.math.mathematica/gaussian-random-field-simulati/916173\",\"WARC-Payload-Digest\":\"sha1:OVNIAVY2VCJ2TGMXIGSHXF63ERCVEROI\",\"WARC-Block-Digest\":\"sha1:ORWTDDWAE2KQUXOW3P5H6LFLWNIGCEZY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347409171.27_warc_CC-MAIN-20200530102741-20200530132741-00195.warc.gz\"}"}
https://www.numbersaplenty.com/60060560506
[ "Search a number\nBaseRepresentation\nbin110111111011111000…\n…110110110001111010\n312202000201112121010221\n4313323320312301322\n51441001000414011\n643331430130254\n74224233224642\noct677370666172\n9182021477127\n1060060560506\n1123520691071\n12b7821b098a\n1358821722cb\n142c9a939522\n151867c16971\nhexdfbe36c7a\n\n60060560506 has 4 divisors (see below), whose sum is σ = 90090840762. Its totient is φ = 30030280252.\n\nThe previous prime is 60060560449. The next prime is 60060560531. The reversal of 60060560506 is 60506506006.\n\nIt is a semiprime because it is the product of two primes.\n\nIt can be written as a sum of positive squares in only one way, i.e., 57164506281 + 2896054225 = 239091^2 + 53815^2 .\n\nIt is an unprimeable number.\n\nIt is a polite number, since it can be written as a sum of consecutive naturals, namely, 15015140125 + ... + 15015140128.\n\nAlmost surely, 260060560506 is an apocalyptic number.\n\n60060560506 is a deficient number, since it is larger than the sum of its proper divisors (30030280256).\n\n60060560506 is a wasteful number, since it uses less digits than its factorization.\n\n60060560506 is an evil number, because the sum of its binary digits is even.\n\nThe sum of its prime factors is 30030280255.\n\nThe product of its (nonzero) digits is 32400, while the sum is 34.\n\nThe spelling of 60060560506 in words is \"sixty billion, sixty million, five hundred sixty thousand, five hundred six\".\n\nDivisors: 1 2 30030280253 60060560506" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.812101,"math_prob":0.97650415,"size":1516,"snap":"2021-31-2021-39","text_gpt3_token_len":481,"char_repetition_ratio":0.13359788,"word_repetition_ratio":0.025751073,"special_character_ratio":0.51385224,"punctuation_ratio":0.14606741,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99407417,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-03T13:08:57Z\",\"WARC-Record-ID\":\"<urn:uuid:5aedf9b8-1a21-40eb-b3c9-c2115f3440ba>\",\"Content-Length\":\"8351\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:596595b3-4506-40e5-b137-313dd6c05716>\",\"WARC-Concurrent-To\":\"<urn:uuid:144a9ed5-f4e3-4e96-b3cd-1fa8654eb747>\",\"WARC-IP-Address\":\"62.149.142.170\",\"WARC-Target-URI\":\"https://www.numbersaplenty.com/60060560506\",\"WARC-Payload-Digest\":\"sha1:MHLNIC6GZ6MOZFZAPRGPBLWUGLKFRBJV\",\"WARC-Block-Digest\":\"sha1:ECLID2PNOVLKS3CQ373N3D4NGMKHXIQI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154459.22_warc_CC-MAIN-20210803124251-20210803154251-00136.warc.gz\"}"}
https://iluvsaving.savingadvice.com/2006/08/
[ "User Real IP - 3.227.240.143\n```Array\n(\n => Array\n(\n => 182.68.68.92\n)\n\n => Array\n(\n => 101.0.41.201\n)\n\n => Array\n(\n => 43.225.98.123\n)\n\n => Array\n(\n => 2.58.194.139\n)\n\n => Array\n(\n => 46.119.197.104\n)\n\n => Array\n(\n => 45.249.8.93\n)\n\n => Array\n(\n => 103.12.135.72\n)\n\n => Array\n(\n => 157.35.243.216\n)\n\n => Array\n(\n => 209.107.214.176\n)\n\n => Array\n(\n => 5.181.233.166\n)\n\n => Array\n(\n => 106.201.10.100\n)\n\n => Array\n(\n => 36.90.55.39\n)\n\n => Array\n(\n => 119.154.138.47\n)\n\n => Array\n(\n => 51.91.31.157\n)\n\n => Array\n(\n => 182.182.65.216\n)\n\n => Array\n(\n => 157.35.252.63\n)\n\n => Array\n(\n => 14.142.34.163\n)\n\n => Array\n(\n => 178.62.43.135\n)\n\n => Array\n(\n => 43.248.152.148\n)\n\n => Array\n(\n => 222.252.104.114\n)\n\n => Array\n(\n => 209.107.214.168\n)\n\n => Array\n(\n => 103.99.199.250\n)\n\n => Array\n(\n => 178.62.72.160\n)\n\n => Array\n(\n => 27.6.1.170\n)\n\n => Array\n(\n => 182.69.249.219\n)\n\n => Array\n(\n => 110.93.228.86\n)\n\n => Array\n(\n => 72.255.1.98\n)\n\n => Array\n(\n => 182.73.111.98\n)\n\n => Array\n(\n => 45.116.117.11\n)\n\n => Array\n(\n => 122.15.78.189\n)\n\n => Array\n(\n => 14.167.188.234\n)\n\n => Array\n(\n => 223.190.4.202\n)\n\n => Array\n(\n => 202.173.125.19\n)\n\n => Array\n(\n => 103.255.5.32\n)\n\n => Array\n(\n => 39.37.145.103\n)\n\n => Array\n(\n => 140.213.26.249\n)\n\n => Array\n(\n => 45.118.166.85\n)\n\n => Array\n(\n => 102.166.138.255\n)\n\n => Array\n(\n => 77.111.246.234\n)\n\n => Array\n(\n => 45.63.6.196\n)\n\n => Array\n(\n => 103.250.147.115\n)\n\n => Array\n(\n => 223.185.30.99\n)\n\n => Array\n(\n => 103.122.168.108\n)\n\n => Array\n(\n => 123.136.203.21\n)\n\n => Array\n(\n => 171.229.243.63\n)\n\n => Array\n(\n => 153.149.98.149\n)\n\n => Array\n(\n => 223.238.93.15\n)\n\n => Array\n(\n => 178.62.113.166\n)\n\n => Array\n(\n => 101.162.0.153\n)\n\n => Array\n(\n => 121.200.62.114\n)\n\n => Array\n(\n => 14.248.77.252\n)\n\n => Array\n(\n => 95.142.117.29\n)\n\n => Array\n(\n => 150.129.60.107\n)\n\n => Array\n(\n => 94.205.243.22\n)\n\n => Array\n(\n => 115.42.71.143\n)\n\n => Array\n(\n => 117.217.195.59\n)\n\n => Array\n(\n => 182.77.112.56\n)\n\n => Array\n(\n => 182.77.112.108\n)\n\n => Array\n(\n => 41.80.69.10\n)\n\n => Array\n(\n => 117.5.222.121\n)\n\n => Array\n(\n => 103.11.0.38\n)\n\n => Array\n(\n => 202.173.127.140\n)\n\n => Array\n(\n => 49.249.249.50\n)\n\n => Array\n(\n => 116.72.198.211\n)\n\n => Array\n(\n => 223.230.54.53\n)\n\n => Array\n(\n => 102.69.228.74\n)\n\n => Array\n(\n => 39.37.251.89\n)\n\n => Array\n(\n => 39.53.246.141\n)\n\n => Array\n(\n => 39.57.182.72\n)\n\n => Array\n(\n => 209.58.130.210\n)\n\n => Array\n(\n => 104.131.75.86\n)\n\n => Array\n(\n => 106.212.131.255\n)\n\n => Array\n(\n => 106.212.132.127\n)\n\n => Array\n(\n => 223.190.4.60\n)\n\n => Array\n(\n => 103.252.116.252\n)\n\n => Array\n(\n => 103.76.55.182\n)\n\n => Array\n(\n => 45.118.166.70\n)\n\n => Array\n(\n => 103.93.174.215\n)\n\n => Array\n(\n => 5.62.62.142\n)\n\n => Array\n(\n => 182.179.158.156\n)\n\n => Array\n(\n => 39.57.255.12\n)\n\n => Array\n(\n => 39.37.178.37\n)\n\n => Array\n(\n => 182.180.165.211\n)\n\n => Array\n(\n => 119.153.135.17\n)\n\n => Array\n(\n => 72.255.15.244\n)\n\n => Array\n(\n => 139.180.166.181\n)\n\n => Array\n(\n => 70.119.147.111\n)\n\n => Array\n(\n => 106.210.40.83\n)\n\n => Array\n(\n => 14.190.70.91\n)\n\n => Array\n(\n => 202.125.156.82\n)\n\n => Array\n(\n => 115.42.68.38\n)\n\n => Array\n(\n => 
102.167.13.108\n)\n\n => Array\n(\n => 117.217.192.130\n)\n\n => Array\n(\n => 205.185.223.156\n)\n\n => Array\n(\n => 171.224.180.29\n)\n\n => Array\n(\n => 45.127.45.68\n)\n\n => Array\n(\n => 195.206.183.232\n)\n\n => Array\n(\n => 49.32.52.115\n)\n\n => Array\n(\n => 49.207.49.223\n)\n\n => Array\n(\n => 45.63.29.61\n)\n\n => Array\n(\n => 103.245.193.214\n)\n\n => Array\n(\n => 39.40.236.69\n)\n\n => Array\n(\n => 62.80.162.111\n)\n\n => Array\n(\n => 45.116.232.56\n)\n\n => Array\n(\n => 45.118.166.91\n)\n\n => Array\n(\n => 180.92.230.234\n)\n\n => Array\n(\n => 157.40.57.160\n)\n\n => Array\n(\n => 110.38.38.130\n)\n\n => Array\n(\n => 72.255.57.183\n)\n\n => Array\n(\n => 182.68.81.85\n)\n\n => Array\n(\n => 39.57.202.122\n)\n\n => Array\n(\n => 119.152.154.36\n)\n\n => Array\n(\n => 5.62.62.141\n)\n\n => Array\n(\n => 119.155.54.232\n)\n\n => Array\n(\n => 39.37.141.22\n)\n\n => Array\n(\n => 183.87.12.225\n)\n\n => Array\n(\n => 107.170.127.117\n)\n\n => Array\n(\n => 125.63.124.49\n)\n\n => Array\n(\n => 39.42.191.3\n)\n\n => Array\n(\n => 116.74.24.72\n)\n\n => Array\n(\n => 46.101.89.227\n)\n\n => Array\n(\n => 202.173.125.247\n)\n\n => Array\n(\n => 39.42.184.254\n)\n\n => Array\n(\n => 115.186.165.132\n)\n\n => Array\n(\n => 39.57.206.126\n)\n\n => Array\n(\n => 103.245.13.145\n)\n\n => Array\n(\n => 202.175.246.43\n)\n\n => Array\n(\n => 192.140.152.150\n)\n\n => Array\n(\n => 202.88.250.103\n)\n\n => Array\n(\n => 103.248.94.207\n)\n\n => Array\n(\n => 77.73.66.101\n)\n\n => Array\n(\n => 104.131.66.8\n)\n\n => Array\n(\n => 113.186.161.97\n)\n\n => Array\n(\n => 222.254.5.7\n)\n\n => Array\n(\n => 223.233.67.247\n)\n\n => Array\n(\n => 171.249.116.146\n)\n\n => Array\n(\n => 47.30.209.71\n)\n\n => Array\n(\n => 202.134.13.130\n)\n\n => Array\n(\n => 27.6.135.7\n)\n\n => Array\n(\n => 107.170.186.79\n)\n\n => Array\n(\n => 103.212.89.171\n)\n\n => Array\n(\n => 117.197.9.77\n)\n\n => Array\n(\n => 122.176.206.233\n)\n\n => Array\n(\n => 192.227.253.222\n)\n\n => Array\n(\n => 182.188.224.119\n)\n\n => Array\n(\n => 14.248.70.74\n)\n\n => Array\n(\n => 42.118.219.169\n)\n\n => Array\n(\n => 110.39.146.170\n)\n\n => Array\n(\n => 119.160.66.143\n)\n\n => Array\n(\n => 103.248.95.130\n)\n\n => Array\n(\n => 27.63.152.208\n)\n\n => Array\n(\n => 49.207.114.96\n)\n\n => Array\n(\n => 102.166.23.214\n)\n\n => Array\n(\n => 175.107.254.73\n)\n\n => Array\n(\n => 103.10.227.214\n)\n\n => Array\n(\n => 202.143.115.89\n)\n\n => Array\n(\n => 110.93.227.187\n)\n\n => Array\n(\n => 103.140.31.60\n)\n\n => Array\n(\n => 110.37.231.46\n)\n\n => Array\n(\n => 39.36.99.238\n)\n\n => Array\n(\n => 157.37.140.26\n)\n\n => Array\n(\n => 43.246.202.226\n)\n\n => Array\n(\n => 137.97.8.143\n)\n\n => Array\n(\n => 182.65.52.242\n)\n\n => Array\n(\n => 115.42.69.62\n)\n\n => Array\n(\n => 14.143.254.58\n)\n\n => Array\n(\n => 223.179.143.236\n)\n\n => Array\n(\n => 223.179.143.249\n)\n\n => Array\n(\n => 103.143.7.54\n)\n\n => Array\n(\n => 223.179.139.106\n)\n\n => Array\n(\n => 39.40.219.90\n)\n\n => Array\n(\n => 45.115.141.231\n)\n\n => Array\n(\n => 120.29.100.33\n)\n\n => Array\n(\n => 112.196.132.5\n)\n\n => Array\n(\n => 202.163.123.153\n)\n\n => Array\n(\n => 5.62.58.146\n)\n\n => Array\n(\n => 39.53.216.113\n)\n\n => Array\n(\n => 42.111.160.73\n)\n\n => Array\n(\n => 107.182.231.213\n)\n\n => Array\n(\n => 119.82.94.120\n)\n\n => Array\n(\n => 178.62.34.82\n)\n\n => Array\n(\n => 203.122.6.18\n)\n\n => Array\n(\n => 157.42.38.251\n)\n\n => Array\n(\n => 45.112.68.222\n)\n\n => 
Array\n(\n => 49.206.212.122\n)\n\n => Array\n(\n => 104.236.70.228\n)\n\n => Array\n(\n => 42.111.34.243\n)\n\n => Array\n(\n => 84.241.19.186\n)\n\n => Array\n(\n => 89.187.180.207\n)\n\n => Array\n(\n => 104.243.212.118\n)\n\n => Array\n(\n => 104.236.55.136\n)\n\n => Array\n(\n => 106.201.16.163\n)\n\n => Array\n(\n => 46.101.40.25\n)\n\n => Array\n(\n => 45.118.166.94\n)\n\n => Array\n(\n => 49.36.128.102\n)\n\n => Array\n(\n => 14.142.193.58\n)\n\n => Array\n(\n => 212.79.124.176\n)\n\n => Array\n(\n => 45.32.191.194\n)\n\n => Array\n(\n => 105.112.107.46\n)\n\n => Array\n(\n => 106.201.14.8\n)\n\n => Array\n(\n => 110.93.240.65\n)\n\n => Array\n(\n => 27.96.95.177\n)\n\n => Array\n(\n => 45.41.134.35\n)\n\n => Array\n(\n => 180.151.13.110\n)\n\n => Array\n(\n => 101.53.242.89\n)\n\n => Array\n(\n => 115.186.3.110\n)\n\n => Array\n(\n => 171.49.185.242\n)\n\n => Array\n(\n => 115.42.70.24\n)\n\n => Array\n(\n => 45.128.188.43\n)\n\n => Array\n(\n => 103.140.129.63\n)\n\n => Array\n(\n => 101.50.113.147\n)\n\n => Array\n(\n => 103.66.73.30\n)\n\n => Array\n(\n => 117.247.193.169\n)\n\n => Array\n(\n => 120.29.100.94\n)\n\n => Array\n(\n => 42.109.154.39\n)\n\n => Array\n(\n => 122.173.155.150\n)\n\n => Array\n(\n => 45.115.104.53\n)\n\n => Array\n(\n => 116.74.29.84\n)\n\n => Array\n(\n => 101.50.125.34\n)\n\n => Array\n(\n => 45.118.166.80\n)\n\n => Array\n(\n => 91.236.184.27\n)\n\n => Array\n(\n => 113.167.185.120\n)\n\n)\n```\nArchive for August, 2006: MariRDH's Personal Finance Blog\n << Back to all Blogs Login or Create your own free blog Layout: Blue and Brown (Default) Author's Creation\nHome > Archive: August, 2006", null, "", null, "", null, "# Archive for August, 2006\n\n## School starts tomorrow\n\nAugust 28th, 2006 at 02:53 am\n\nI should be asleep by now but I am still in denial. Maybe if I don't go to sleep I can make the summer vacation last longer! *grin* I guess it doesn't really work that way, huh?\n\nI checked the oil in my car (a 1999 Saturn with 131k miles on it), and it looks good and full. It is a great car but since it is getting up in years/miles it does eat oil pretty regularly. I usually have to put a quart in every week-and-a-half to 2 weeks just to top it off. I have been using full synthetic motor oil for a few years now so I don't have to change the oil as often (saves a few dollars). I cleaned out as much stuff as I could from the trunk to help with gas mileage, and I have an almost full tank which should last me until Friday afternoon, I hope.\n\nNine more months of driving to school everyday - an hour there and an hour back. I am looking forward to saving money on gas once I graduate in May! I can't wait to get a job and start my new career...just think, money coming in, instead of going out!", null, "## Good news on the Citibank card\n\nAugust 27th, 2006 at 02:35 am\n\nGood news and bad news really.\n\nGood news - it does not look like they raised the rates, so I get to keep my 2% and 3%. Whew!\n\nBad news - they charged me a \\$39 late fee. Ouch! Haven't seen one of those in a long time, and never want to see another. I went in today and made an extra payment toward September's bill so I will definitely make the minimum with no problem.\n\nOh, and when I was looking at the statement online I noticed the \"purchase rate\" is at 19.24%!!! Yikes...much worse than I thought it was. 
I NEVER want to have to feel the pain of that rate!\n\n## I really messed up this time\n\nAugust 22nd, 2006 at 06:47 pm\n\nI transferred all my DH's CC debt onto his Citibank card about a month ago. Half at 1.99% and half at 2.99% - both offers until paid in full. I was so happy that his previously 16+% card was now this low by doing the ol' switcheroo.\n\nWell, I logged into the account today, and see that the due date was yesterday and I am $20 short. (I pay a portion of each bill weekly and Citibank only has a 20-day grace period, unlike my other cards which have a 25-day grace period, so I usually check it to make sure the minimum is going to be paid by the due date). I immediately transferred the $20 and it will post today, but I have a horrible sinking feeling in my gut that they are going to jack his rate all the way up now.", null, "I have been so careful over the last 6-7 years since I have been paying the bills, and have gotten both our credit scores into the high 700's. Now this. I know it won't affect our credit scores but it is horrible to have the rate on this card so high even with great credit scores. I am so depressed. I can only hope that they do not raise the rate, as is their prerogative, since it is my own stupid fault for being late on the minimum.\n\n## Making a personal website\n\nAugust 18th, 2006 at 03:41 am\n\nUgh. If anything seems like a never-ending task it is this monster called my website. I figured I'd start a page that had all my favorite websites in one spot, and maybe it would be able to help someone. No big deal. Just a little something. Well, that turned into something completely different because I really want it to look "nice" and for people to find it helpful. So I added a second page with just Dental Hygiene information, again, hoping that it would help people.\n\nOn another note, I had no idea how difficult it can be to actually get someone to visit! I figured, "Hey! If I've got a website I might as well join AdSense and make some money!" As if. In my quest to get visitors I added my site to some manual traffic pages, and promptly had my hand slapped by Google. I guess next time I join a program I should probably read all the rules instead of just skimming! Oops. Bad me. So, now I am trying to decide whether I want to just get rid of AdSense so I can generate some traffic to my site, or keep AdSense and try to actually get traffic the hard way!\n\nI am really trying to stop being obsessed with the silly website because I really need to start studying and getting ready for the new semester which starts in exactly 11 days. Eeek!\n\nThe real problem is that I am somewhat of a perfectionist in small, silly areas of my life... and for some unknown reason I seem to have chosen this crazy website as one of those items. I feel like I need a nice tight "white coat" supplied by the super nice doctors at the nearest asylum.\n\n## Saving on Textbooks\n\nAugust 17th, 2006 at 11:57 pm\n\nGreat resource for getting the best price on textbooks:\n\nwww.bigwords.com\n\nYou can search for all your textbooks for all your classes at the same time. They compare many different online stores including amazon, ebay, half.com, abebooks, etc., then they tell you who would be the cheapest if you want to buy all your books from the same store, and also the lowest price at all the different stores on each book. They also show you all the shipping deals and coupons available at the different stores. Bigwords is a great website and saves time and money. 
Ya gotta love it!", null, "" ]
[ null, "https://www.savingadvice.com/blogs/images/search/top_left.php", null, "https://www.savingadvice.com/blogs/images/search/top_right.php", null, "https://www.savingadvice.com/blogs/images/search/bottom_left.php", null, "https://www.savingadvice.com/forums/core/images/smilies/wink.png", null, "https://www.savingadvice.com/forums/core/images/smilies/frown.png", null, "https://www.savingadvice.com/forums/core/images/smilies/smile.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.978538,"math_prob":0.9606841,"size":5207,"snap":"2019-51-2020-05","text_gpt3_token_len":1275,"char_repetition_ratio":0.085143186,"word_repetition_ratio":0.007866274,"special_character_ratio":0.24793547,"punctuation_ratio":0.103658535,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96925074,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-18T03:22:41Z\",\"WARC-Record-ID\":\"<urn:uuid:5ac7d750-6193-43b8-987d-39e4f025672b>\",\"Content-Length\":\"109936\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:43d027ef-ed5b-4541-a83f-d7b1fe50a1d1>\",\"WARC-Concurrent-To\":\"<urn:uuid:c0343aa5-8a1b-46fb-8f18-19b449571084>\",\"WARC-IP-Address\":\"173.231.200.26\",\"WARC-Target-URI\":\"https://iluvsaving.savingadvice.com/2006/08/\",\"WARC-Payload-Digest\":\"sha1:OZOEKLEPKNP3DCHFQU7PHFU5FL6EGN6B\",\"WARC-Block-Digest\":\"sha1:4WCDT76W7VJ3OPJMPSXTNG3AQSB4HBF3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250591763.20_warc_CC-MAIN-20200118023429-20200118051429-00534.warc.gz\"}"}