URL
stringlengths
15
1.68k
text_list
sequencelengths
1
199
image_list
sequencelengths
1
199
metadata
stringlengths
1.19k
3.08k
https://books.google.cat/books?id=8DIDAAAAQAAJ&hl=ca&lr=
[ "### Quč opinen els usuaris -Escriviu una ressenya\n\nNo hem trobat cap ressenya als llocs habituals.\n\n### Continguts\n\n Secció 1 3 Secció 2 5 Secció 3 6 Secció 4 21\n Secció 5 22 Secció 6 Secció 7\n\n### Passatges populars\n\nPŕgina 21 - When a straight line standing on another straight line makes the adjacent angles equal to one another, each of the angles is called a right angle; and the straight line which stands on the other is called a perpendicular to it.\nPŕgina 31 - If two triangles have two angles of the one equal to two angles of the other, each to each ; and one side equal to one side, viz.\nPŕgina 41 - Parallelograms upon the same base, and between the same parallels, are equal to one another.\nPŕgina 13 - The angles at the base of an Isosceles triangle are equal to one another ; and if the equal sides be produced, the angles upon the other side of the base shall also be equal. Let ABC be an isosceles triangle, of which the side AB is equal to AC, and let the straight lines AB, AC...\nPŕgina 9 - Things which are double of the same, are equal to one another. 7. Things which are halves of the same, are equal to one another.\nPŕgina 35 - If a straight line meets two straight lines, so as to \" make the two interior angles on the same side of it taken \" together less than two right angles...\nPŕgina 39 - ... together with four right angles, are equal to twice as many right angles as the figure has sides.\nPŕgina 13 - J which the equal sides are opposite, shall be equal, each to each, viz. the angle ABC to the angle DEF, and the angle ACB to DFE.\nPŕgina 53 - IF the square described upon one of 'the sides of a triangle be equal to the squares described upon the other two sides of it ; the angle contained by these two sides is a right angle.\nPŕgina 22 - If, at a point in a straight line, two other straight lines, on the opposite sides of it, make the adjacent angles together equal to two right angles, these two straight lines shall be in one and the same straight line." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8024977,"math_prob":0.99646246,"size":2751,"snap":"2020-24-2020-29","text_gpt3_token_len":657,"char_repetition_ratio":0.19985439,"word_repetition_ratio":0.022403259,"special_character_ratio":0.21119593,"punctuation_ratio":0.094339624,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9965177,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-09T18:02:37Z\",\"WARC-Record-ID\":\"<urn:uuid:23f03652-edea-4fa4-940e-67d1341565a5>\",\"Content-Length\":\"51511\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d4ab7843-7ba7-4237-a1e2-81ccdbbc3012>\",\"WARC-Concurrent-To\":\"<urn:uuid:bb3e2403-65e1-4eba-8da5-16cfc5eb7bcf>\",\"WARC-IP-Address\":\"172.253.63.101\",\"WARC-Target-URI\":\"https://books.google.cat/books?id=8DIDAAAAQAAJ&hl=ca&lr=\",\"WARC-Payload-Digest\":\"sha1:ZQSNSLDHSMJAHUAGAEBZQPPTDDW7JN3F\",\"WARC-Block-Digest\":\"sha1:5ZDZ7KYKNL2QEPCMLVE4MP6QMHZ255HT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655900614.47_warc_CC-MAIN-20200709162634-20200709192634-00125.warc.gz\"}"}
https://www.hashbangcode.com/article/php-random-string-function
[ "# PHP Random String Function\n\n28th January 2011 - 3 minutes read time\n\nI was testing a string manipulation function today (which I will post some other time) and I wanted to create a random string of characters that I could feed into it, so I came up with the function below. I thought it was a neat use of the rand() and chr() PHP functions, so here it is.\n\n``````function random_string(\\$length = 50) {\n\\$string = '';\n\nfor (\\$i = 0; \\$i < \\$length; ++\\$i) {\n\n\\$type = rand(1, 5);\n\nswitch (\\$type) {\ncase 1:\n// lowercase letters\n\\$ascii_start = 65;\n\\$ascii_end = 90;\nbreak;\ncase 2:\n// uppercase leters\n\\$ascii_start = 97;\n\\$ascii_end = 122;\nbreak;\ncase 3:\n// Space\n\\$ascii_start = 32;\n\\$ascii_end = 32;\nbreak;\ncase 4:\n// numbers\n\\$ascii_start = 48;\n\\$ascii_end = 57;\nbreak;\ncase 5:\n// Punctuation\n\\$ascii_start = 33;\n\\$ascii_end = 47;\nbreak;\n}\n\n\\$string .= chr(rand(\\$ascii_start, \\$ascii_end));\n}\nreturn \\$string;\n}``````\n\nIt works by picking a type of character to use (eg, uppercase letter, number etc) and then selecting one at random using the chr() function. The chr() function takes a number as a parameter and will return the ascii character corresponding to that number. For example, given the number 65 the chr() function will return the string 'A'. It will loop over this selection process to build a string.\n\nHere are some examples of the sort of output this function produces.\n\n``````ye 82 C!4p \\$\\$r\" lg 3 Ed-W KGrX1% 21V V\"mENV YzA B\n1% .AjU C/7 E7 %3uplK g40-'\\$ i,j% E+JYh Ox AU7I\n%q. v\\$ ,#H5t *d %9Xv59* *oZ3Hj-'G1- 2*7 a+I8Jy& 0\n\\$)V, 7&g6o\\$3 u27 g\" p6 O* eU\"LG Th 9J,&3* zH)+*e\n&5 u\\$/l)L0 MZ2'H 1MrymE X h3 66 4AW )WT1f 0 zQtF\n- CAc2U'QU*1E5 -MfQ \\$ HMGJ0xg%,J0 q27r s 4oFz!74\"\nh Dx.h\"Cq1ANF0S- S8w!z%hS x%D8M'O(6a) 3r8H#\\$#./&i\na .!J (As3a!v&DXK0PIf1\\$B0JR Pp,KrM (/uUz22gm S% ,-\n*5j\\$,%0+ VSsz,a0oA7)' s9J\\$5/ R\"iK3cvz GDQn3DC'\"lc\n6 xK,r2 R/1Y\"y46S& s39#US p*h1+2R8,0yr6 -HYG 'N\"``````" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5385272,"math_prob":0.97856903,"size":1774,"snap":"2022-40-2023-06","text_gpt3_token_len":704,"char_repetition_ratio":0.14124294,"word_repetition_ratio":0.0,"special_character_ratio":0.38387823,"punctuation_ratio":0.15816326,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95739806,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-26T19:18:06Z\",\"WARC-Record-ID\":\"<urn:uuid:1f65fa01-1591-44ce-9722-b688cc842696>\",\"Content-Length\":\"47280\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:47e9b550-ba7f-4a2c-8d83-7f4def4afb04>\",\"WARC-Concurrent-To\":\"<urn:uuid:e5ba7111-7c69-4bf0-87c0-a8e0035ba6fa>\",\"WARC-IP-Address\":\"104.21.26.114\",\"WARC-Target-URI\":\"https://www.hashbangcode.com/article/php-random-string-function\",\"WARC-Payload-Digest\":\"sha1:ICGQY6QDSO56I2SWBLYSAX3XRCBPOYSE\",\"WARC-Block-Digest\":\"sha1:KZZYLJCQCJVM35ADTLTYNXDIZLC24VUO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334915.59_warc_CC-MAIN-20220926175816-20220926205816-00122.warc.gz\"}"}
https://en.wikipedia.org/wiki/Hodgkin%E2%80%93Huxley_model
[ "# Hodgkin–Huxley model", null, "Basic components of Hodgkin–Huxley-type models. Hodgkin–Huxley type models represent the biophysical characteristic of cell membranes. The lipid bilayer is represented as a capacitance (Cm). Voltage-gated and leak ion channels are represented by nonlinear (gn) and linear (gL) conductances, respectively. The electrochemical gradients driving the flow of ions are represented by batteries (E), and ion pumps and exchangers are represented by current sources (Ip).\n\nThe Hodgkin–Huxley model, or conductance-based model, is a mathematical model that describes how action potentials in neurons are initiated and propagated. It is a set of nonlinear differential equations that approximates the electrical characteristics of excitable cells such as neurons and cardiac myocytes. It is a continuous time model, unlike, for example, the Rulkov map.\n\nAlan Lloyd Hodgkin and Andrew Fielding Huxley described the model in 1952 to explain the ionic mechanisms underlying the initiation and propagation of action potentials in the squid giant axon. They received the 1963 Nobel Prize in Physiology or Medicine for this work.\n\n## Basic components\n\nThe typical Hodgkin–Huxley model treats each component of an excitable cell as an electrical element (as shown in the figure). The lipid bilayer is represented as a capacitance (Cm). Voltage-gated ion channels are represented by electrical conductances (gn, where n is the specific ion channel) that depend on both voltage and time. Leak channels are represented by linear conductances (gL). The electrochemical gradients driving the flow of ions are represented by voltage sources (En) whose voltages are determined by the ratio of the intra- and extracellular concentrations of the ionic species of interest. Finally, ion pumps are represented by current sources (Ip).[clarification needed] The membrane potential is denoted by Vm.\n\nMathematically, the current flowing through the lipid bilayer is written as\n\n$I_{c}=C_{m}{\\frac {{\\mathrm {d} }V_{m}}{{\\mathrm {d} }t}}$", null, "and the current through a given ion channel is the product\n\n$I_{i}={g_{i}}(V_{m}-V_{i})\\;$", null, "where $V_{i}$", null, "is the reversal potential of the i-th ion channel. Thus, for a cell with sodium and potassium channels, the total current through the membrane is given by:\n\n$I=C_{m}{\\frac {{\\mathrm {d} }V_{m}}{{\\mathrm {d} }t}}+g_{K}(V_{m}-V_{K})+g_{Na}(V_{m}-V_{Na})+g_{l}(V_{m}-V_{l})$", null, "where I is the total membrane current per unit area, Cm is the membrane capacitance per unit area, gK and gNa are the potassium and sodium conductances per unit area, respectively, VK and VNa are the potassium and sodium reversal potentials, respectively, and gl and Vl are the leak conductance per unit area and leak reversal potential, respectively. The time dependent elements of this equation are Vm, gNa, and gK, where the last two conductances depend explicitly on voltage as well.\n\n## Ionic current characterization\n\nIn voltage-gated ion channels, the channel conductance gi is a function of both time and voltage (gn(tV) in the figure), while in leak channels gi is a constant (gL in the figure). The current generated by ion pumps is dependent on the ionic species specific to that pump. The following sections will describe these formulations in more detail.\n\n### Voltage-gated ion channels\n\nUsing a series of voltage clamp experiments and by varying extracellular sodium and potassium concentrations, Hodgkin and Huxley developed a model in which the properties of an excitable cell are described by a set of four ordinary differential equations. Together with the equation for the total current mentioned above, these are:\n\n$I=C_{m}{\\frac {{\\mathrm {d} }V_{m}}{{\\mathrm {d} }t}}+{\\bar {g}}_{\\text{K}}n^{4}(V_{m}-V_{K})+{\\bar {g}}_{\\text{Na}}m^{3}h(V_{m}-V_{Na})+{\\bar {g}}_{l}(V_{m}-V_{l}),$", null, "${\\frac {dn}{dt}}=\\alpha _{n}(V_{m})(1-n)-\\beta _{n}(V_{m})n$", null, "${\\frac {dm}{dt}}=\\alpha _{m}(V_{m})(1-m)-\\beta _{m}(V_{m})m$", null, "${\\frac {dh}{dt}}=\\alpha _{h}(V_{m})(1-h)-\\beta _{h}(V_{m})h$", null, "where I is the current per unit area, and $\\alpha _{i}$", null, "and $\\beta _{i}$", null, "are rate constants for the i-th ion channel, which depend on voltage but not time. ${\\bar {g}}_{n}$", null, "is the maximal value of the conductance. n, m, and h are dimensionless quantities between 0 and 1 that are associated with potassium channel activation, sodium channel activation, and sodium channel inactivation, respectively. For $p=(n,m,h)$", null, ", $\\alpha _{p}$", null, "and $\\beta _{p}$", null, "take the form\n\n$\\alpha _{p}(V_{m})=p_{\\infty }(V_{m})/\\tau _{p}$", null, "$\\beta _{p}(V_{m})=(1-p_{\\infty }(V_{m}))/\\tau _{p}.$", null, "$p_{\\infty }$", null, "and $(1-p_{\\infty })$", null, "are the steady state values for activation and inactivation, respectively, and are usually represented by Boltzmann equations as functions of $V_{m}$", null, ". In the original paper by Hodgkin and Huxley, the functions $\\alpha$", null, "and $\\beta$", null, "are given by\n\n${\\begin{array}{lll}\\alpha _{n}(V_{m})={\\frac {0.01(10-V_{m})}{\\exp {\\big (}{\\frac {10-V_{m}}{10}}{\\big )}-1}}&\\alpha _{m}(V_{m})={\\frac {0.1(25-V_{m})}{\\exp {\\big (}{\\frac {25-V_{m}}{10}}{\\big )}-1}}&\\alpha _{h}(V_{m})=0.07\\exp {\\bigg (}{\\frac {-V_{m}}{20}}{\\bigg )}\\\\\\beta _{n}(V_{m})=0.125\\exp {\\bigg (}{\\frac {-V_{m}}{80}}{\\bigg )}&\\beta _{m}(V_{m})=4\\exp {\\bigg (}{\\frac {-V_{m}}{18}}{\\bigg )}&\\beta _{h}(V_{m})={\\frac {1}{\\exp {\\big (}{\\frac {30-V_{m}}{10}}{\\big )}+1}}\\end{array}}$", null, "while in many current software programs, Hodgkin–Huxley type models generalize $\\alpha$", null, "and $\\beta$", null, "to\n\n${\\frac {A_{p}(V_{m}-B_{p})}{\\exp {\\big (}{\\frac {V_{m}-B_{p}}{C_{p}}}{\\big )}-D_{p}}}$", null, "In order to characterize voltage-gated channels, the equations are fit to voltage clamp data. For a derivation of the Hodgkin–Huxley equations under voltage-clamp, see. Briefly, when the membrane potential is held at a constant value (i.e., voltage-clamp), for each value of the membrane potential the nonlinear gating equations reduce to equations of the form:\n\n$m(t)=m_{0}-[(m_{0}-m_{\\infty })(1-e^{-t/\\tau _{m}})]\\,$", null, "$h(t)=h_{0}-[(h_{0}-h_{\\infty })(1-e^{-t/\\tau _{h}})]\\,$", null, "$n(t)=n_{0}-[(n_{0}-n_{\\infty })(1-e^{-t/\\tau _{n}})]\\,$", null, "Thus, for every value of membrane potential $V_{m}$", null, "the sodium and potassium currents can be described by\n\n$I_{\\mathrm {Na} }(t)={\\bar {g}}_{\\mathrm {Na} }m(V_{m})^{3}h(V_{m})(V_{m}-E_{\\mathrm {Na} }),$", null, "$I_{\\mathrm {K} }(t)={\\bar {g}}_{\\mathrm {K} }n(V_{m})^{4}(V_{m}-E_{\\mathrm {K} }).$", null, "In order to arrive at the complete solution for a propagated action potential, one must write the current term I on the left-hand side of the first differential equation in terms of V, so that the equation becomes an equation for voltage alone. The relation between I and V can be derived from cable theory and is given by\n\n$I={\\frac {a}{2R}}{\\frac {\\partial ^{2}V}{\\partial x^{2}}},$", null, "where a is the radius of the axon, R is the specific resistance of the axoplasm, and x is the position along the nerve fiber. Substitution of this expression for I transforms the original set of equations into a set of partial differential equations, because the voltage becomes a function of both x and t.\n\nThe Levenberg–Marquardt algorithm, a modified Gauss–Newton algorithm, is often used to fit these equations to voltage-clamp data.[citation needed]\n\nWhile the original experiments treated only sodium and potassium channels, the Hodgkin Huxley model can also be extended to account for other species of ion channels.\n\n### Leak channels\n\nLeak channels account for the natural permeability of the membrane to ions and take the form of the equation for voltage-gated channels, where the conductance $g_{i}$", null, "is a constant.[citation needed]\n\n### Pumps and exchangers\n\nThe membrane potential depends upon the maintenance of ionic concentration gradients across it. The maintenance of these concentration gradients requires active transport of ionic species. The sodium-potassium and sodium-calcium exchangers are the best known of these. Some of the basic properties of the Na/Ca exchanger have already been well-established: the stoichiometry of exchange is 3 Na+: 1 Ca2+ and the exchanger is electrogenic and voltage-sensitive. The Na/K exchanger has also been described in detail, with a 3 Na+: 2 K+ stoichiometry.\n\n## Mathematical properties\n\nThe Hodgkin–Huxley model can be thought of as a differential equation with four state variables, v(t), m(t), n(t), and h(t), that change with respect to time t. The system is difficult to study because it is a nonlinear system and cannot be solved analytically. However, there are many numeric methods available to analyze the system. Certain properties and general behaviors, such as limit cycles, can be proven to exist.", null, "A simulation of the Hodgkin–Huxley model in phase space, in terms of voltage v(t) and potassium gating variable n(t). The closed curve is known as a limit cycle.\n\n### Center manifold\n\nBecause there are four state variables, visualizing the path in phase space can be difficult. Usually two variables are chosen, voltage v(t) and the potassium gating variable n(t), allowing one to visualize the limit cycle. However, one must be careful because this is an ad-hoc method of visualizing the 4-dimensional system. This does not prove the existence of the limit cycle.\n\nA better projection can be constructed from a careful analysis of the Jacobian of the system, evaluated at the equilibrium point. Specifically, the eigenvalues of the Jacobian are indicative of the center manifold's existence. Likewise, the eigenvectors of the Jacobian reveal the center manifold's orientation. The Hodgkin–Huxley model has two negative eigenvalues and two complex eigenvalues with slightly positive real parts. The eigenvectors associated with the two negative eigenvalues will reduce to zero as time t increases. The remaining two complex eigenvectors define the center manifold. In other words, the 4-dimensional system collapses onto a 2-dimensional plane. Any solution starting off the center manifold will decay towards the center manifold. Furthermore, the limit cycle is contained on the center manifold.", null, "The voltage v(t) (in millivolts) of the Hodgkin–Huxley model, graphed over 50 milliseconds. The injected current varies from −5 nanoamps to 12 nanoamps. The graph passes through three stages: an equilibrium stage, a single-spike stage, and a limit cycle stage.\n\n### Bifurcations\n\nIf the injected current $I$", null, "were used as a bifurcation parameter, then the Hodgkin–Huxley model undergoes a Hopf bifurcation. As with most neuronal models, increasing the injected current will increase the firing rate of the neuron. One consequence of the Hopf bifurcation is that there is a minimum firing rate. This means that either the neuron is not firing at all (corresponding to zero frequency), or firing at the minimum firing rate. Because of the all or none principle, there is no smooth increase in action potential amplitude, but rather there is a sudden \"jump\" in amplitude. The resulting transition is known as a classical canard phenomenon, or simply a canard.\n\n## Improvements and alternative models\n\nThe Hodgkin–Huxley model is regarded as one of the great achievements of 20th-century biophysics. Nevertheless, modern Hodgkin–Huxley-type models have been extended in several important ways:\n\nSeveral simplified neuronal models have also been developed (such as the FitzHugh–Nagumo model), facilitating efficient large-scale simulation of groups of neurons, as well as mathematical insight into dynamics of action potential generation." ]
[ null, "https://upload.wikimedia.org/wikipedia/commons/thumb/9/98/Hodgkin-Huxley.svg/350px-Hodgkin-Huxley.svg.png", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/8d2b282ed79a5aa53454391d91291c9eb10fd4bf", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/ed42987096b9c8185b5d5ca96b842d20e8d7804d", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f300b83673e961a9d48f3862216b167f94e5668c", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/58ee17a4f52f7defa01af4e77bae2c348cd76d64", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/8fde652312d9692d346ee7150c362c7679bb7e3f", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/057155f00703e829696e069d0c66131e2c02e453", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/e721bc5c172643c1ea4c02507e593f3950561b6b", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/e2d6115fcbd65351edd5b8176fc192cddd4a49f4", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/3b1fb627423abe4988b7ed88d4920bf1ec074790", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f111c43e5cfce37bcffc6121a19b81c6efd825ed", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/cae047eb7ad946282ec0525673c5099522c40be9", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/e85c6a80cc8fba91b2152ea2d89a3a59e2064c58", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/5c48aa9000af59f94d3022f58beadb61cea7d8b5", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/42e26c16ecbb2f91a41219f638a47d7592768f8b", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/5bf1e9d610710ec8906aaf72cf3c709d77e966af", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/35b37e1add7f4e051bd53376855f596352fab0d5", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/d4b1d64d7d5bc5f0a8490d9844565f5cb2b83802", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/63e356598f61d5afb9be7bfecb826a03ca7c3cf0", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/40836783b20c374654ab03ddd6b01586ad9b4fb8", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/b79333175c8b3f0840bfb4ec41b8072c83ea88d3", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7ed48a5e36207156fb792fa79d29925d2f7901e8", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f37c598ddc08a3021fe0e0213539ebeccda4f6fb", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/b79333175c8b3f0840bfb4ec41b8072c83ea88d3", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7ed48a5e36207156fb792fa79d29925d2f7901e8", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/fa00ce345e4ec8be99f1b6090e9af39d23b8000e", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7628fde7c11cc9053f49af7569f0807ffa6bf09c", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/3bba9ecee241689c0b7b5374c9567462a9b8931d", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/46507359f94faf96b2f83d41dc74bbce09630fd1", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/40836783b20c374654ab03ddd6b01586ad9b4fb8", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/8e1207ad30578ccb9b11c879d81cebac3c4b78fc", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/5af4592e4e8398c3d583a789d4d3c78967790f9c", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/132dafeb2d682786e7ee7c16c481542af530a8bd", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/2ce36142a0a1c6660e82bdf3ef3f1551317efe0c", null, "https://upload.wikimedia.org/wikipedia/commons/thumb/9/9e/Hodgkin_Huxley_Limit_Cycle.png/220px-Hodgkin_Huxley_Limit_Cycle.png", null, "https://upload.wikimedia.org/wikipedia/commons/c/c4/Hodgkins_Huxley_Plot.gif", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/535ea7fc4134a31cbe2251d9d3511374bc41be9f", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8484583,"math_prob":0.9966939,"size":13477,"snap":"2019-26-2019-30","text_gpt3_token_len":3195,"char_repetition_ratio":0.13656944,"word_repetition_ratio":0.057471264,"special_character_ratio":0.23781256,"punctuation_ratio":0.14205231,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995795,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74],"im_url_duplicate_count":[null,5,null,7,null,7,null,null,null,7,null,7,null,7,null,7,null,7,null,null,null,null,null,7,null,7,null,null,null,null,null,7,null,7,null,null,null,7,null,null,null,null,null,null,null,5,null,null,null,null,null,7,null,7,null,7,null,7,null,null,null,7,null,7,null,7,null,null,null,5,null,5,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-19T00:50:19Z\",\"WARC-Record-ID\":\"<urn:uuid:7ba2260c-b5d7-463d-8e6d-7d4439fea6a1>\",\"Content-Length\":\"150758\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:17fe4ff6-1857-4113-b03b-7bcaf9c9bcbb>\",\"WARC-Concurrent-To\":\"<urn:uuid:c618a6f2-7603-479a-88fe-6a7eadb0e6ab>\",\"WARC-IP-Address\":\"208.80.154.224\",\"WARC-Target-URI\":\"https://en.wikipedia.org/wiki/Hodgkin%E2%80%93Huxley_model\",\"WARC-Payload-Digest\":\"sha1:GAO5IKO2U36GD7BB3HE353HR7YI6467Q\",\"WARC-Block-Digest\":\"sha1:XHDN5OZDLOI3SA4G7M63NTYNLZOYOJKA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195525863.49_warc_CC-MAIN-20190718231656-20190719013656-00373.warc.gz\"}"}
https://socratic.org/questions/what-are-the-intercepts-of-7y-3y-2-x-9-x-2
[ "# What are the intercepts of -7y=3y-2(x-9) -x^2?\n\nJun 6, 2018\n\n$x = - 10 \\pm \\sqrt{19}$\n\n$y = - \\frac{9}{5}$\n\n#### Explanation:\n\nTo find the y-intercepts set x=0 and solve for y:\n\n$- 7 y = 3 y - 2 \\left(x - 9\\right) - {x}^{2}$\n\n$- 7 y = 3 y - 2 \\left(0 - 9\\right) - {0}^{2}$\n\n$- 7 y = 3 y - 2 \\left(- 9\\right)$\n\n$- 7 y = 3 y + 18$\n\n$- 7 y = 3 y + 18$\n\n$- 10 y = 18$\n\n$y = - \\frac{9}{5}$\n\nTo find the x-intercept(s) if they exist set y=0 and solve for x:\n\n$- 7 y = 3 y - 2 \\left(x - 9\\right) - {x}^{2}$\n\n$- 7 \\left(0\\right) = 3 \\left(0\\right) - 2 \\left(x - 9\\right) - {x}^{2}$\n\n$0 = - 2 \\left(x - 9\\right) - {x}^{2}$\n\n$0 = - {x}^{2} - 2 \\left(x - 9\\right)$\n\n$0 = - {x}^{2} - 2 x + 18$\n\n$0 = {x}^{2} + 2 x - 18$\n\nYou will need to complete the square or use the quadratic equation to find these roots:\n\n$x = - 10 \\pm \\sqrt{19}$\n\ngraph{-7y=3y-2(x-9) -x^2 [-20.58, 19.42, -4.8, 15.2]}" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.69441587,"math_prob":1.0000097,"size":467,"snap":"2019-51-2020-05","text_gpt3_token_len":151,"char_repetition_ratio":0.12095033,"word_repetition_ratio":0.0,"special_character_ratio":0.32976446,"punctuation_ratio":0.117117114,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999198,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-14T00:53:29Z\",\"WARC-Record-ID\":\"<urn:uuid:a49a9529-056b-4b62-9fcb-2fdc7c42f6f3>\",\"Content-Length\":\"33922\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:439f074c-2ca0-4b03-b160-53491726278b>\",\"WARC-Concurrent-To\":\"<urn:uuid:a4938d75-d980-46d3-b189-9bd5c04f2c97>\",\"WARC-IP-Address\":\"54.221.217.175\",\"WARC-Target-URI\":\"https://socratic.org/questions/what-are-the-intercepts-of-7y-3y-2-x-9-x-2\",\"WARC-Payload-Digest\":\"sha1:TQIPZDUINIRP2SIBK56ISX2CZVOZA62E\",\"WARC-Block-Digest\":\"sha1:GC4UF5KKYANMPXHGBWRFOJ7SI4HGA43N\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540569332.20_warc_CC-MAIN-20191213230200-20191214014200-00152.warc.gz\"}"}
https://bluecoatmaths.com/angles-revision/
[ "# Angles Revision\n\nLesson 1\n\nLearning Objective: to be able to calculate the angle sum of any polygon and solve problems involving interior angles.\n\nResources:\n\nAngles in Polygons Table – Print for students to complete\n\nPolygonsInteriorAnglesPPT\n\nLesson 2\n\nLearning Objective: to be able to calculate a missing exterior angles and solve problems involving both interior and exterior angles.\n\nResources:\n\nExteriorAnglesPPT\n\nThe MathsPad Interactive Whiteboard Tool for this key learning point is fantastic. It can be found here: Exterior Angles Tool" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8536083,"math_prob":0.5660782,"size":476,"snap":"2020-45-2020-50","text_gpt3_token_len":92,"char_repetition_ratio":0.13983051,"word_repetition_ratio":0.08450704,"special_character_ratio":0.17857143,"punctuation_ratio":0.09756097,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9689087,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-29T09:59:41Z\",\"WARC-Record-ID\":\"<urn:uuid:39056d69-efcc-49c1-8f75-c9c6dae748c9>\",\"Content-Length\":\"49148\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3a6d6b08-8523-4a3d-9515-ad1258500dbe>\",\"WARC-Concurrent-To\":\"<urn:uuid:2ed44d37-245b-4291-ab9e-828c2f112ffd>\",\"WARC-IP-Address\":\"192.0.78.25\",\"WARC-Target-URI\":\"https://bluecoatmaths.com/angles-revision/\",\"WARC-Payload-Digest\":\"sha1:WOGETNZ366Q6HABXART5A7ZHCZLZWSI6\",\"WARC-Block-Digest\":\"sha1:O5XXULLF2N4DYK2P64B3VUJR55TMX62X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141197593.33_warc_CC-MAIN-20201129093434-20201129123434-00401.warc.gz\"}"}
https://www.wgts-directory.com/insight/microsoft-dynamics-crm-and-power-bi/
[ "Microsoft Dynamics CRM和Power BI\n\n[fusion_builder_container background_color= \" \" background_image= \" \" background_parallax= \" none \" enable_mobile= \" no \" parallax_speed= \" 0.3″background_repeat= \" no-repeat \" background_position= \" left top \" video_url= \" video_aspect_ratio= \" 16:9″video_webm= \" \" video_mp4= \" \" video_ogv= \" \" video_preview_image= \" \" overlay_color= \" overlay_opacity= \" 0.5″video_mute = \" yes \" video_loop = \" yes \"褪色=“不”border_size = \" 0 px”border_color = \" \" border_style = \" \" padding_top = \" 20 \" padding_bottom = \" 20 \" padding_left = \" padding_right = \" \" hundred_percent =“不”equal_height_columns =“不”hide_on_mobile =“不”menu_anchor = \" class = \" id = \"] [fusion_builder_row] [fusion_builder_column类型=“1 _1”最后= =“是”“是”间距center_content = \" yes \" hide_on_mobile =“不”background_color = \"准确\" = \" background_repeat =“没有重演”background_position =“左前”hover_type =“没有”链接= \" \" border_position =“所有”border_size =“0 px”border_color = \" border_style =“固体”填充=“margin_top = \" margin_bottom = \" \" animation_type = \" 0 \" animation_direction =“向下”[/fusion_builder_column][fusion_builder_column type= \" 1_1″background_position= \" left top \" background_color= \" \" border_size= \" \" border_color= \" \" border_style= \" solid \" spacing= \" yes background_image= \" \"][/ fusion_youtube id= \" 1y5AQB_tvfo \" width= \" 600″height= \" 350″autoplay= \" no \" api_params= \" \" class= \" /][/fusion_builder_column][fusion_builder_column type= \" 1_1″background_position= \" left top \" background_color= \" \" border_size= \" \" border_color= \" \" border_style= \" solid \" spacing= \" yes background_image= \" \"]background_repeat =“没有重演”填充=“margin_top = \" 0 px”margin_bottom = \" 0 px”类= \" id = \" \" animation_type = \" \" animation_speed = \" 0.3 \" animation_direction =“左”hide_on_mobile =“不”center_content =“不”min_height = \"没有\"][fusion_separator style_type =“没有”top_margin =“20 px”bottom_margin = \" \" sep_color = \" \" border_size = = \" \" \"图标icon_circle = \" \"Icon_circle_color = \" \" width= \" \" align = \" center \" class= \" \" id= \" \" /][/fusion_builder_column][fusion_builder_column type= \" 1_1″last= \" yes \" spacing= \" yes \" center_content= \" no \" hide_on_mobile= \" no \" background_color= \" \" background_image= \" \" background_repeat= \" no-repeat \" background_position= \" left top \" hover_type= \" none \" link= \" \" border_position= \" all \"Border_size = \" 0px \" border_color= \" \" border_style= \" \" padding= \" \" margin_top= \" \" margin_bottom= \" \" animation_type= \" \" animation_direction= \" \" animation_speed= \" 0.1″animation_offset= \" \" class= \" \" id= \" \"][fusion_text]\n\n[/ fusion_text] [/ fusion_builder_column] [/ fusion_builder_row] [/ fusion_builder_container]" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5311654,"math_prob":0.8770662,"size":2651,"snap":"2022-05-2022-21","text_gpt3_token_len":812,"char_repetition_ratio":0.21193804,"word_repetition_ratio":0.17630854,"special_character_ratio":0.35194266,"punctuation_ratio":0.02232143,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9716893,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-24T14:42:27Z\",\"WARC-Record-ID\":\"<urn:uuid:c2af1120-9b77-4a4f-8e14-b49a9daefd40>\",\"Content-Length\":\"148870\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e449c943-2352-490a-905e-5ee5136961dd>\",\"WARC-Concurrent-To\":\"<urn:uuid:03104376-5a2a-45a6-8c4a-fedec310bc1f>\",\"WARC-IP-Address\":\"198.15.167.146\",\"WARC-Target-URI\":\"https://www.wgts-directory.com/insight/microsoft-dynamics-crm-and-power-bi/\",\"WARC-Payload-Digest\":\"sha1:GYSJCOS3ATBYYIZWATAMJU4TQEPLMP3F\",\"WARC-Block-Digest\":\"sha1:EKIMEHILQA6V6QY3NY3CTI7Q2JNY3Q4C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304570.90_warc_CC-MAIN-20220124124654-20220124154654-00558.warc.gz\"}"}
https://en.cppreference.com/w/cpp/thread/recursive_timed_mutex/unlock
[ "# std::recursive_timed_mutex::unlock\n\nC++\n Language Standard Library Headers Freestanding and hosted implementations Named requirements Language support library Concepts library (C++20) Diagnostics library Utilities library Strings library Containers library Iterators library Ranges library (C++20) Algorithms library Numerics library Input/output library Localizations library Regular expressions library (C++11) Atomic operations library (C++11) Thread support library (C++11) Filesystem library (C++17) Technical Specifications\n\n(C++11)\n(C++20)\n(C++20)\nthis_thread namespace\n get_id(C++11) yield(C++11)\n sleep_for(C++11) sleep_until(C++11)\nMutual exclusion\n mutex(C++11) recursive_mutex(C++11) shared_mutex(C++17)\n timed_mutex(C++11) recursive_timed_mutex(C++11) shared_timed_mutex(C++14)\nGeneric lock management\nCondition variables\n(C++11)\nSemaphores\nLatches and barriers\n(C++20)\n(C++20)\nFutures\n launch(C++11) future_status(C++11) future_error(C++11) future_category(C++11) future_errc(C++11)\n\nstd::recursive_timed_mutex\n Member functions recursive_timed_mutex::recursive_timed_mutex recursive_timed_mutex::~recursive_timed_mutex Locking recursive_timed_mutex::lock recursive_timed_mutex::try_lock recursive_timed_mutex::try_lock_for recursive_timed_mutex::try_lock_until recursive_timed_mutex::unlock Native handle recursive_timed_mutex::native_handle\n\n void unlock(); (since C++11)\n\nUnlocks the mutex if its level of ownership is 1 (there was exactly one more call to lock() than there were calls to unlock() made by this thread), reduces the level of ownership by 1 otherwise.\n\nThe mutex must be locked by the current thread of execution, otherwise, the behavior is undefined.\n\nThis operation synchronizes-with (as defined in std::memory_order) any subsequent lock operation that obtains ownership of the same mutex.\n\n(none)\n\n(none)\n\n(none)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8739028,"math_prob":0.6948892,"size":1322,"snap":"2019-43-2019-47","text_gpt3_token_len":295,"char_repetition_ratio":0.13657056,"word_repetition_ratio":0.2160804,"special_character_ratio":0.22012103,"punctuation_ratio":0.11061947,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96062785,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-14T20:24:49Z\",\"WARC-Record-ID\":\"<urn:uuid:d2b0562c-6bd1-4223-bc68-f184c79a0d7c>\",\"Content-Length\":\"53154\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:37bf65ff-b1c9-4eaa-acf4-ff8d22dded29>\",\"WARC-Concurrent-To\":\"<urn:uuid:c95c1280-e0d4-454a-8dba-172dd94735f9>\",\"WARC-IP-Address\":\"74.114.90.46\",\"WARC-Target-URI\":\"https://en.cppreference.com/w/cpp/thread/recursive_timed_mutex/unlock\",\"WARC-Payload-Digest\":\"sha1:W3TEA2HGJKHOHKXYENOIS4CXA45DZIXK\",\"WARC-Block-Digest\":\"sha1:JVYCBH6PVNNYLSADKR3QT2KJKMPRKXL7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668534.60_warc_CC-MAIN-20191114182304-20191114210304-00059.warc.gz\"}"}
https://ytenalizudos.ml/vampire/hilbert-space-boundary-value-problems-and-orthogonal-polynomials.php
[ "# Manual Hilbert Space, Boundary Value Problems and Orthogonal Polynomials\n\nChristian Remling Let G k,n be the k'th order jet group in n variables which consists of the set of k-jets of local diffeomorphisms of R n fixing the origin under the operation of composition. In coordinates, this group operation can be written explicitly using the chain rule.\n\nNow consider G 3,1 and G 2,1 and the obvious projection homomorphism from G 3,1 onto G 2, One very simple interpretation is possible, if I am not mistaken: Characterizing conformal nets, or more general harmonic nets having in mind the lift to minimal surfaces, it turns out that there exists a simple uniform relationship of four fundamental entities up to normalization for orthogonal trajectories: The sum of the changes of geodesic curvature It have an introduction to distribution theory and them apply it to finding Green's functions. I found a preview here Cheers.\n\nDox 3 3 silver badges 19 19 bronze badges. Lower bound for the eigenvalue. OK, here is the sketch of the bound I mentioned. How to determine the spectrum from the diagonal Green's function. Legendre differential equation with additional term. HeunC is a perfectly good \"analytical\" function.\n\nEigenfunctions of fourth-order differential operator. More precisely, what is the spectrum of this operator? Please edit you question to remove Liviu Nicolaescu Sturm Liouville problems for non-classical orthogonal polynomials. A reference in english for Bochner's theorem is section Ismail, Classical and quantum orthogonal polynomials in one variable, Encyclopedia of Mathematics and its Applications, Cambridge University Press, Cambridge, Carlo Beenakker Orthogonal Polynomials and Sturm Liouville operators.\n\nIt seems to be most usual to take a spectral theory viewpoint and it would be odd to avoid it because it is also most natural. One can give a proof of the spectral theorem for compact self-adjoint operators and motivate the introduction of compact operators by talking briefly about Green's functions and their relation to differential operators one can Josiah Park 1, 5 5 silver badges 25 25 bronze badges.\n\nMichael Renardy Non-self adjoint Sturm-Liouville problem. I made a little progress on this, and I am curious as to what people think. The whole motivation for this approach is that it seems much easier to solve a self-adjoint problem than a non-self adjoint one. Eric Gamliel 41 4 4 bronze badges.\n\nMy first remark would be that this equation doesn't look like one whose solutions could be expressed with some standard special functions, so that way of expressing solutions is out of the question. Zurab Silagadze Zeroes of Sturm-Liouville solutions as a function of the complex eigenvalue.\n\n1. Make: AVR Programming: Learning to Write Software for Hardware?\n2. The Trollope Critics.\n4. 1. Introduction!\n\nAaron Hoffman 1 1 gold badge 4 4 silver badges 8 8 bronze badges. Finally, we will show some regularity results for joint distributions of free variables, together with the main ideas of their proofs. From this, one can derive systems of nonlinear partial differential equations satisfied by tau, such as the Kadomstev-Petviashvili equation.\n\n## Holdings: Hilbert Space, Boundary Value Problems and Orthogonal Polynomials\n\nAs I have the honor to give an ILAS lecture, I take the liberty to leave the field of genuine operator theory and to move into linear algebra. The talk is about the question when certain matrices do generate a lattice, that is, a discrete subgroup of some finite-dimensional Euclidean space, and if this happens, which good properties these lattice have. The matrices considered come from equiangular tight frames.\n\nI promise a nice tour through some basics of equiangular lines, tight frames, and lattice theory. We will encounter lots of interesting vectors and matrices and enjoy some true treats in the intersection of discrete mathematics and finite-dimensional operator theory. Recently, S. Chandler-Wilde and D. Hewett have proposed a boundary integral equation approach for studying scattering problems involving fractal structures, in particular planar screens which are fractal or have a fractal boundary. This led them to consider, e.\n\nMoiola, study some properties of such spaces. In this talk I shall report on this and also on some answers to which we have arrived, using some current function spaces techniques, during our recent collaboration project. Besides, since the techniques involved in general work in a more general framework than the one presented above, I take the opportunity to dwell also on some relevant aspects of the modern theory of function spaces of Besov and Triebel-Lizorkin type which might also be useful in other settings.\n\nWe derive from variational principles a class of stochastic partial differential equations and show the existence of their solutions. We introduce a new framework for noncommutative convexity. We develop a noncommutative Choquet theory and prove an analogue of the Choquet-Bishop-de Leeuw theorem. This is joint work with Matthew Kennedy.\n\nThe boundary conditions are classical Dirichlet-Neumann mixed type. The object of the investigation is what happens with the above mentioned mixed boundary value problem when the thickness of the layer converges to zero. We shall begin by reviewing a compactification of the Markov space for an infinite transition matrix, introduced by Marcelo Laca and the speaker roughly 20 years ago. Given a continuous potential we will then consider the problem of characterizing the conformal measures on that space.\n\nIn the context of the Markov shifts mentioned above we will then explore the connections between conformal and DLR measures. It is the purpose of this presentation to explain certain aspects of Classical Fourier Analysis from the point of view of distribution theory. We will show how this setting of Banach Gelfand triples resp. In contrast to the Schwartz theory of tempered distributions it is expected that the mathematical tools can be also explained in more detail to engineers and physicists. We will discuss some examples of zeta-regularised spectral determinants of elliptic operators, focusing on the effect of the spatial dimension and the order of the operator.\n\nThe former case will be illustrated by the harmonic-oscillator, while for the latter we consider polyharmonic operators on bounded intervals. Bishop in the fifties as possible operators which might entail counterexamples for the Invariant Subspace Problem. In this talk we will analyse two classical theorems from a higher point of view. The first theorem is the famous Gohberg-Heinig inverse theorem for self-adjoint finite Toeplitz operator matrices.\n\nThe answers are remarkable similar to those of the classical problem see . The inverse problem for Ellis-Gohberg orthogonal Wiener class functions on the unite circle fits into this setting. We shall present the solution of the latter problem for matrix-valued Wiener class functions, and, if time permits, we shall also discuss the twofold version of the inverse problem. For several examples the problem is still open. The study is based on two different local-trajectory methods for the Banach and Hilbert space settings , with involving spectral measures, a lifting theorem and Mellin pseudodifferential operators with non-regular symbols.\n\nLinear matrix inequalities LMIs are ubiquitous in real algebraic geometry, semidefinite programming, control theory and signal processing.\n\nLMIs with dimension free matrix unknowns, called free LMIs, are central to the theories of completely positive maps and operator algebras, operator systems and spaces, and serve as the paradigm for matrix convex sets. The feasibility set of a free LMI is called a free spectrahedron.\n\nIn this talk, the bianalytic maps between a very general class of ball-like free spectrahedra examples of which include row or column contractions, and tuples of contractions and arbitrary free spectrahedra are characterized and seen to have an elegant algebraic form. They are all highly structured rational maps we call convexotonic.\n\n1. Digital Watermarking for Digital Media.\n2. Communication Under the Microscope: The Theory and Practice of Microanalysis!\n3. Beginning VB 2005 databases : from novice to professional;\n\nIn particular, this yields a classification of automorphism groups of ball-like free spectrahedra. The results depend on new tools in free analysis to obtain fine detail, geometric in nature locally and algebraic in nature globally, about the boundary of free spectrahedra. First of all I will shortly describe some facts concerning the fascinating prehistory and history of Hardy-type inequalities.\n\nAfter that I will present some fairly new discoveries how some Hardy-type inequalities are closely related to the concept of convexity. I will continue by presenting some facts from the further development of Hardy-type inequalities as presented in remarkable many papers and also some monographs see e. I will present some very new results and raise a number of open questions. The Hilbert transform is a singular integral operator that gives access to harmonic conjugate functions via a convolution of boundary values.\n\nThis operator is trivially a bounded linear operator in the space of square integrable functions. This is no longer obvious if we introduce a positive weight with respect to which we integrate the square. In this case, conditions on the weight need to be imposed, the so-called characteristic of the weight, both necessary and sufficient for boundedness.\n\n## Hilbert Space, Boundary Value Problems and Orthogonal Polynomials\n\nIt is a delicate question to find the exact way in which the operator norm grows with this characteristic. Interest was spiked by a classical question surrounding quasiconformality and the Beltrame equation. We give a brief historic perspective of the developments in this area of \"weights\" that spans about twenty years and that has changed our understanding of these important classical operators. We highlight a probabilistic and geometric perspective on these new ideas, giving dimensionless estimates of Riesz transforms on Riemannian manifolds with bounded geometry.\n\n### Introduction\n\nA typical question in numerical analysis is whether this sequence is stable. Several concepts were developed to study algebras of approximation sequences arising in this way. Two of these compactness and fractality will occur in this talk. Compact sequences play a role comparable to compact operators. Both concepts are related by the fact that the ideal of the compact sequences in a fractal algebra has a surprisingly simple structure: it is a dual algebra, i. In this talk, I consider algebras which are not fractal, but close to fractal algebras in the sense that every restriction has a fractal restriction.\n\nWe will discuss conditions which guarantee that these algebras again have a nice structure: they are isomorphic to a continuous field, and their compact sequences form an algebra with continuous trace. Effect algebras are important in mathematical foundations of quantum mechanics. We will present some recent results on symmetries of effect algebras. In the first part of this talk I will discuss both classical and contemporary results and applications of dilation theory in operator theory. I will present the solution of this problem, as well as a new application which came to us as a pleasant surprise of dilation theory to the continuity of the spectrum of the almost Mathieu operator from mathematical physics." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91949797,"math_prob":0.91541654,"size":11492,"snap":"2019-51-2020-05","text_gpt3_token_len":2276,"char_repetition_ratio":0.12038649,"word_repetition_ratio":0.0044994378,"special_character_ratio":0.17777584,"punctuation_ratio":0.08779011,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.978161,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-26T10:51:59Z\",\"WARC-Record-ID\":\"<urn:uuid:b44314c9-b09b-40d6-aa3e-ef319041c434>\",\"Content-Length\":\"22011\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f9cf49d5-91c5-4239-9507-b8ba1176ec1c>\",\"WARC-Concurrent-To\":\"<urn:uuid:e647d691-9884-4686-8056-07d98ae8a165>\",\"WARC-IP-Address\":\"104.28.11.180\",\"WARC-Target-URI\":\"https://ytenalizudos.ml/vampire/hilbert-space-boundary-value-problems-and-orthogonal-polynomials.php\",\"WARC-Payload-Digest\":\"sha1:5QARG2B3EZI2GMZBBZFGK3ZXBKMAZCZE\",\"WARC-Block-Digest\":\"sha1:7ZERNTF4W36IU3EZQBVUDHGFWOSVSKGH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251688806.91_warc_CC-MAIN-20200126104828-20200126134828-00031.warc.gz\"}"}
http://web.cs.wpi.edu/~cs4341/a17/HW/HW2/
[ "", null, "", null, "### CS4341 Introduction to Artificial Intelligence Homework 2 and Sample Exam Problems A Term 2017\n\n#### Prof. Carolina Ruiz and Ahmedul Kabir\n\nDue Date: Canvas submission by Monday, Sept. 25th, 2017 at 11 am.", null, "This homework assignment consists of 2 parts:\n1. Assigned problems from the textbook: You need to submit your solutions on Canvas.\n2. Sample exam problems: No need to submit your solutions but you should study them for Exam 2. Try to solve these problems first as this will help you with the HW problems too!\n\n1. #### Assigned HW problems from the textbook:\n\n• Read Chapters/Sections 6, 7.1-7.5, 8, 9.3-9.4 and 10 of the textbook.\n\n• Problems: Submit your answers on Canvas to the textbook problems listed below as \"Required for the HW\". Note that we have also listed other textbook problems that are required for the HW, but which would be good for you to try to solve in preparation for the exam (you don't need to submit solutions to these latter problems in your HW submission).\n\n• Chapter 6: (pp. 230-233)\n\n• Required for the HW:\n• 6.2\n• 6.4a\n• 6.9\n• 6.11\n• 6.16\n\n• Not required for the HW but useful to study for the exam:\n• 6.3\n• 6.6\n• 6.15\n\n• Chapter 7: (pp. 279-284)\n\n• Required for the HW:\n• 7.1\n• 7.4: parts a, b, c, i, j. [All the parts of this problem are good exercises; we are picking some just to keep the hw shorter.]\n• 7.10\n• 7.12. (Note that you need to covert those sentences to clausal form first.)\n• 7.17: part a\n\n• Not required for the HW but useful to study for the exam:\n• 7.5 (remember to prove both directions of the \"if and only if\")\n• 7.6\n• 7.7 (Note that the models must assign truth values each propositional letter A, B, C, and D, not just to the propositional letters appearing in the sentence. Also, note that A . B . C is shortcut for (A . B) . C, which is also equivalent to A . (B . C) )\n• 7.9\n• Determine whether the following sentence is valid, unsatisfiable, or neither. \"!\" means \"not. Justify your answer with either truth tables, equivalent rewrites of the sentence, or both.\n` (!! q => !p) => (p => !q) `\n\n• Chapter 8: (pp. 315-321)\n\n• Required for the HW:\n• 8.10\n• 8.11\n• 8.23\n\n• Not required for the HW but useful to study for the exam:\n• 8.20\n\n• Chapter 9: (pp. 360-365)\n\n• Required for the HW:\n• 9.9\n• 9.6 and 9.13a\n\n• Not required for the HW but useful to study for the exam:\n• 9.10\n\n• Chapter 10: (pp. 396-397)\n\n• Required for the HW:\n• 10.2\n• 10.3\n\n• Not required for the HW but useful to study for the exam:\n• 10.4\n\n2. #### Additional sample exam problems:\n\nPractice for the exam by solving the previous exams and homework problems from Prof. Ruiz's offerings of this course listed below. Solve the problems on your own before looking at the provided solutions. No need to submit your answers as part of your homework submission.\n\nSome of the problems below use another textbook's notation in which a letter precedeed by \"?\" means that the letter is a variable (and not a constant)). For example, \"?a\" is a variable.\n\n• From Graduate AI Course Spring 2013:\n• Homework 4 (with solutions): Problems 1 and 2.\n• Homework 5 (with solutions): Problem 1.\n\n• From B Term 2003:\n• Exam 1 (with solutions): Problems IV and V.\n• Project/HW 3 (with solutions): Part I.\n\n• From C Term 2002:\n• Exam 1 (with solutions): Problems II and III.\n• Project 2 (with solutions): Part I.\n\n• From D Term 2001:\n• Exam 1 (with solutions): Problems IV and V.\n• Exam 2 (with solutions): Problems 3 and 4.\n\n• From C Term 2000:\n• Exam 1 (with solutions): Problem III\n• Exam 2 (with solutions): Problems I and II\n• Homework 3 (with solutions): All problems\n• Homework 4 (with solutions): Part 1: Problems 1, 2, and 3\n\n• From A Term 1997:" ]
[ null, "http://web.cs.wpi.edu/images/cs_banner.gif", null, "http://web.cs.wpi.edu/images/hr.gif", null, "http://web.cs.wpi.edu/images/hr.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9469732,"math_prob":0.5558009,"size":1139,"snap":"2019-13-2019-22","text_gpt3_token_len":310,"char_repetition_ratio":0.2519824,"word_repetition_ratio":0.11111111,"special_character_ratio":0.30289727,"punctuation_ratio":0.16450216,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9567322,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-21T06:31:09Z\",\"WARC-Record-ID\":\"<urn:uuid:ee57f314-4c94-4ea6-b428-667aae91a6b0>\",\"Content-Length\":\"6626\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4e92e845-d819-4301-9575-2d94ab7812b8>\",\"WARC-Concurrent-To\":\"<urn:uuid:89ddeb88-ea39-4d6a-a3cb-99a6de00db98>\",\"WARC-IP-Address\":\"130.215.29.52\",\"WARC-Target-URI\":\"http://web.cs.wpi.edu/~cs4341/a17/HW/HW2/\",\"WARC-Payload-Digest\":\"sha1:UVX5GKD7XA4QVZ6GGKYAOAZBYYGZP4I4\",\"WARC-Block-Digest\":\"sha1:OR23GZGHHSZMX2VC7SMWWUPSWCYUWIVJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232256281.35_warc_CC-MAIN-20190521062300-20190521084300-00165.warc.gz\"}"}
https://plexim.com/academy/analog-electronics/integrator-amplifier
[ "# Integrator Amplifier\n\n## Working principle\n\nAn integrator produces an output voltage, which is proportional to the integral of the input voltage:", null, "### Formula derivation\n\nBecause of virtual ground and infinite impedance of the op-amp’s input terminals, all of the input current flows through R and C:", null, "", null, "", null, "", null, "", null, "An application for this circuit could be integrating water flow and measuring the total quantity of water that has passed by the flowmeter.\n\n### Experiment\n\nSet the AC voltage source’s amplitude and frequency to 0 and the DC voltage source’s value to 1 V. Observe the output. If a fixed voltage is applied to the input of an integrator, the output voltage will be a ramp with a constant slope of the negative input voltage multiplied by a factor of 1/RC." ]
[ null, "https://plexim.com/sites/default/files/equations/analog_elect_integrator_1_and_6.png", null, "https://plexim.com/sites/default/files/equations/analog_elect_integrator_2.png", null, "https://plexim.com/sites/default/files/equations/analog_elect_integrator_3.png", null, "https://plexim.com/sites/default/files/equations/analog_elect_integrator_4.png", null, "https://plexim.com/sites/default/files/equations/analog_elect_integrator_5.png", null, "https://plexim.com/sites/default/files/equations/analog_elect_integrator_1_and_6.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.830136,"math_prob":0.96800405,"size":717,"snap":"2021-21-2021-25","text_gpt3_token_len":148,"char_repetition_ratio":0.16409537,"word_repetition_ratio":0.0,"special_character_ratio":0.19246861,"punctuation_ratio":0.066176474,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97251564,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,6,null,6,null,6,null,6,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-05T21:46:47Z\",\"WARC-Record-ID\":\"<urn:uuid:ae7d9d47-5bca-4e69-9a4e-84f2ca3e28d2>\",\"Content-Length\":\"25857\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:afe57368-c292-4e73-84fb-5a9049fc8e44>\",\"WARC-Concurrent-To\":\"<urn:uuid:bf3d53a0-84a0-4e65-ba8a-53bda4649419>\",\"WARC-IP-Address\":\"134.119.234.170\",\"WARC-Target-URI\":\"https://plexim.com/academy/analog-electronics/integrator-amplifier\",\"WARC-Payload-Digest\":\"sha1:LLHTNQ2LNHDZSV7CEZV2RN374JVA3XG7\",\"WARC-Block-Digest\":\"sha1:NLQLYM5KBU2RVIJ6OSEAHEYRWFV6G25U\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988696.23_warc_CC-MAIN-20210505203909-20210505233909-00212.warc.gz\"}"}
https://www.beatthegmat.com/is-1-x-y-y-x-t301282.html?sid=3294eadbd97f9eb8d98b093500606a78
[ "## Is 1/(x - y) < y - x\n\n##### This topic has expert replies\nModerator\nPosts: 1805\nJoined: 29 Oct 2017\nThanked: 1 times\nFollowed by:5 members\n\n### Is 1/(x - y) < y - x\n\nby M7MBA » Fri Mar 16, 2018 4:03 am\n$$Is\\ \\ \\ \\ \\frac{1}{x-y} < y-x\\ .$$ $$(1)\\ \\ \\ \\frac{1}{x} < \\frac{1}{y}$$ $$(2)\\ \\ \\ 2x = 3y$$ The OA is the option C.\n\nExperts, can you help me here? How can I show that each statement alone is not sufficient? Thanks.\n\nLegendary Member\nPosts: 2663\nJoined: 14 Jan 2015\nLocation: Boston, MA\nThanked: 1153 times\nFollowed by:127 members\nGMAT Score:770\nby [email protected] » Fri Mar 16, 2018 7:46 am\nM7MBA wrote:$$Is\\ \\ \\ \\ \\frac{1}{x-y} < y-x\\ .$$ $$(1)\\ \\ \\ \\frac{1}{x} < \\frac{1}{y}$$ $$(2)\\ \\ \\ 2x = 3y$$ The OA is the option C.\n\nExperts, can you help me here? How can I show that each statement alone is not sufficient? Thanks.\nLet's rephrase the question.\n\nIf x > y, then x - y > 0 and y -x < 0. The left side of the inequality would be larger than the right side.\nIf x < y, then x - y < 0, and y - x > 0. Now the left side of the inequality would be smaller than the right side - this is exactly what we're asked.\n\nThus, our rephrased question: Is x < y?\n\n1) 1/x < 1/y\n\nCase 1: x = 2 and y = 1. x is not less than y, so the answer is NO\nCase 2: x = -1 and y = 1. x is less than y, so the answer is YES. Not Sufficient.\n\n2) 2x = 3y\nx = (3/2)y\n\nCase 1: y = 1 and x = 3/2. x < y? NO\nCase 2: y = -1 and x = -3/2. x < y? YES. Not Sufficient.\n\nTogether: Substitute x = (3/2)y into 1/x < 1/y to give us\n\n1/[(3/2)y] < 1/y\n(2/3) * 1/y < 1/y\n\nThis will only be true if y is positive. (If y were negative, 1/y would also be negative, and multiplying that value by 2/3 would make the result less negative.)\nIf y is positive, then and x = (3/2)y, x will be a larger positive number. If we know that x > y, the answer to the question is a definitive NO, and thus the statements together are sufficient to answer the question. The answer is C\nVeritas Prep | GMAT Instructor\n\nVeritas Prep Reviews\nSave \\$100 off any live Veritas Prep GMAT Course\n\n• Page 1 of 1" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86039746,"math_prob":0.99736285,"size":2019,"snap":"2021-31-2021-39","text_gpt3_token_len":710,"char_repetition_ratio":0.099255584,"word_repetition_ratio":0.26168224,"special_character_ratio":0.38583457,"punctuation_ratio":0.13304721,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99969625,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-26T00:43:48Z\",\"WARC-Record-ID\":\"<urn:uuid:ea335ca4-dec5-4342-afcb-9b8b8365aec1>\",\"Content-Length\":\"75875\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:14797063-80e5-4d9a-9b50-c4116dbd3114>\",\"WARC-Concurrent-To\":\"<urn:uuid:2ad757be-59f6-4fbd-b491-4c1a6a84adda>\",\"WARC-IP-Address\":\"104.21.46.133\",\"WARC-Target-URI\":\"https://www.beatthegmat.com/is-1-x-y-y-x-t301282.html?sid=3294eadbd97f9eb8d98b093500606a78\",\"WARC-Payload-Digest\":\"sha1:VWNCDVXVBBHDY4CVEJZOI7FPIZABUQLQ\",\"WARC-Block-Digest\":\"sha1:Q7XKDTSX56I3C5HN74MDFZS2YYQINCSN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057787.63_warc_CC-MAIN-20210925232725-20210926022725-00441.warc.gz\"}"}
https://export.arxiv.org/abs/2005.02311
[ "math.AP\n\n# Title: Solutions for nonlinear Fokker-Planck equations with measures as initial data and McKean-Vlasov equations\n\nAbstract: One proves the existence and uniqueness of a generalized (mild) solution for the nonlinear Fokker-Planck equation (FPE) \\begin{align*} &u_t-\\Delta (\\beta(u))+{\\mathrm{ div}}(D(x)b(u)u)=0, \\quad t\\geq0,\\ x\\in\\mathbb{R}^d,\\ d\\ne2, \\\\ &u(0,\\cdot)=u_0,\\mbox{in }\\mathbb{R}^d, \\end{align*} where $u_0\\in L^1(\\mathbb{R}^d)$, $\\beta\\in C^2(\\mathbb{R})$ is a nondecreasing function, $b\\in C^1$, bounded, $b\\ge0$, $D\\in {L^\\infty}(\\mathbb{R}^d;\\mathbb{R}^d)$, ${\\rm div}\\,D\\in L^2(\\mathbb{R}^d)+L^\\infty(\\mathbb{R}^d),$ with ${({\\rm div}\\, D)^-}\\in L^\\infty(\\mathbb{R}^d)$, $\\beta$ strictly increasing, if $b$ is not constant. Moreover, $t\\to u(t,u_0)$ is a semigroup of contractions in $L^1(\\mathbb{R}^d)$, which leaves invariant the set of probability density functions in $\\mathbb{R}^d$. If ${\\rm div}\\,D\\ge0$, $\\beta'(r)\\ge a|r|^{\\alpha-1}$, and $|\\beta(r)|\\le C r^\\alpha$, $\\alpha\\ge1,$ $d\\ge3$, then $|u(t)|_{L^\\infty}\\le Ct^{-\\frac d{d+(\\alpha-1)d}}\\ |u_0|^{\\frac2{2+(m-1)d}},$ $t>0$, and, if $D\\in L^2(\\mathbb{R}^d;\\mathbb{R}^d)$, the existence extends to initial data $u_0$ in the space $\\mathcal{M}_b$ of bounded measures in $\\mathbb{R}^d$. As a consequence for arbitrary initial laws, we obtain weak solutions to a class of McKean-Vlasov SDEs with coefficients which have singular dependence on the time marginal laws.\n Comments: 37 pages Subjects: Analysis of PDEs (math.AP); Probability (math.PR) MSC classes: 35B40, 35Q84, 60H10 Cite as: arXiv:2005.02311 [math.AP] (or arXiv:2005.02311v4 [math.AP] for this version)\n\n## Submission history\n\nFrom: Michael Röckner [view email]\n[v1] Tue, 5 May 2020 16:16:31 GMT (26kb)\n[v2] Thu, 7 May 2020 16:20:47 GMT (26kb)\n[v3] Thu, 14 May 2020 14:55:13 GMT (26kb)\n[v4] Tue, 29 Dec 2020 17:20:42 GMT (26kb)\n\nLink back to: arXiv, form interface, contact." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6561872,"math_prob":0.9998122,"size":1942,"snap":"2021-31-2021-39","text_gpt3_token_len":735,"char_repetition_ratio":0.13054696,"word_repetition_ratio":0.0,"special_character_ratio":0.369207,"punctuation_ratio":0.15151516,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99989164,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-22T21:15:39Z\",\"WARC-Record-ID\":\"<urn:uuid:8584db2a-76d9-40e2-9825-e6e53b0fa94e>\",\"Content-Length\":\"17614\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8dd27699-c85f-4296-b3a6-6c1184667ea0>\",\"WARC-Concurrent-To\":\"<urn:uuid:9230e258-4119-4210-aaf5-246bbe8cccd3>\",\"WARC-IP-Address\":\"128.84.21.203\",\"WARC-Target-URI\":\"https://export.arxiv.org/abs/2005.02311\",\"WARC-Payload-Digest\":\"sha1:BWZIEVSTAGG4GYOIF3LCUD6DSQEFI3AW\",\"WARC-Block-Digest\":\"sha1:UNL7D7W4N3C7ZW3Y4IUN454MVE4KL2F5\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057388.12_warc_CC-MAIN-20210922193630-20210922223630-00210.warc.gz\"}"}
http://sipnayan.com/2016/10/addition-of-dissimilar-fractions/
[ "# AF2 Addition of Dissimilar Fractions (LCM by Listing)\n\nIn the AF1, we learned how to add similar fractions. In this post, we are going to learn addition of dissimilar fractions.\n\nDissimilar fractions are fractions with different denominators. To be able to add dissimilar fractions, you need to get the Least Common Multiple (LCM) of the denominator of the dissimilar fractions you add, convert them to similar fractions using their LCM, and then add them.\n\nWatch the video above and then answer the exercises below.\n\nPractice Exercises\n\n1.) 1/3 + 1/5\n\n2.) 1/4 + 1/3\n\n3.) 2/5 + 1/10\n\n4.) 1/2 + 1/4 + 3/8 = 1\n\n5.) 1/6 + 1/2 + 1/4\n\nAnswers to Practice Exercises\n\n1.) 8/15\n\n2.) 7/12\n\n3.) 1/2\n\n4.)", null, "$1 \\frac{1}{8}$\n\n5.) 11/12", null, "" ]
[ null, "http://s0.wp.com/latex.php", null, "https://i0.wp.com/www.linkwithin.com/pixel.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8203979,"math_prob":0.98628384,"size":758,"snap":"2019-35-2019-39","text_gpt3_token_len":240,"char_repetition_ratio":0.19363396,"word_repetition_ratio":0.030303031,"special_character_ratio":0.32849604,"punctuation_ratio":0.12195122,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9890681,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-18T17:38:15Z\",\"WARC-Record-ID\":\"<urn:uuid:6d82dd4c-d6f2-4a33-bd90-50f7d9716a71>\",\"Content-Length\":\"43247\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:aaa18ac5-4a40-4fee-847c-5e4a9bfb7b89>\",\"WARC-Concurrent-To\":\"<urn:uuid:a20ae137-7d48-44b0-b94e-ef695f1ea359>\",\"WARC-IP-Address\":\"184.168.164.1\",\"WARC-Target-URI\":\"http://sipnayan.com/2016/10/addition-of-dissimilar-fractions/\",\"WARC-Payload-Digest\":\"sha1:ZVWRZKLUVKSHIFESCL4OUJKLL37RLQTJ\",\"WARC-Block-Digest\":\"sha1:KY7DU37ZABNQXCN62KULKP464A4OW7DV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027313987.32_warc_CC-MAIN-20190818165510-20190818191510-00195.warc.gz\"}"}
https://www.convertunits.com/from/measure/to/pint
[ "## ››Convert measure [ancient hebrew] to pint [US, liquid]\n\n measure pint\n\n Did you mean to convert measure to pint [US, liquid] pint [US, dry] pint [UK]\n\nHow many measure in 1 pint? The answer is 0.06145149025974.\nWe assume you are converting between measure [ancient hebrew] and pint [US, liquid].\nYou can view more details on each measurement unit:\nmeasure or pint\nThe SI derived unit for volume is the cubic meter.\n1 cubic meter is equal to 129.87012987013 measure, or 2113.3764099325 pint.\nNote that rounding errors may occur, so always check the results.\nUse this page to learn how to convert between measure [ancient hebrew] and pints.\nType in your own numbers in the form to convert the units!\n\n## ››Quick conversion chart of measure to pint\n\n1 measure to pint = 16.273 pint\n\n2 measure to pint = 32.546 pint\n\n3 measure to pint = 48.819 pint\n\n4 measure to pint = 65.09199 pint\n\n5 measure to pint = 81.36499 pint\n\n6 measure to pint = 97.63799 pint\n\n7 measure to pint = 113.91099 pint\n\n8 measure to pint = 130.18399 pint\n\n9 measure to pint = 146.45699 pint\n\n10 measure to pint = 162.72998 pint\n\n## ››Want other units?\n\nYou can do the reverse unit conversion from pint to measure, or enter any two units below:\n\n## Enter two units to convert\n\n From: To:\n\n## ››Metric conversions and more\n\nConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data. Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3\", 10 stone 4, cubic cm, metres squared, grams, moles, feet per second, and many more!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8351451,"math_prob":0.9900466,"size":1735,"snap":"2020-45-2020-50","text_gpt3_token_len":470,"char_repetition_ratio":0.231658,"word_repetition_ratio":0.0067114094,"special_character_ratio":0.29682997,"punctuation_ratio":0.15,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9654368,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-22T05:56:55Z\",\"WARC-Record-ID\":\"<urn:uuid:c58d10e4-cbc1-45cc-83be-a4b83b505b1e>\",\"Content-Length\":\"29110\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0dd8b2f9-0c83-4374-87b3-136290373ad8>\",\"WARC-Concurrent-To\":\"<urn:uuid:320b0487-865d-4a32-a06b-c8de8fc1dd0f>\",\"WARC-IP-Address\":\"54.80.109.124\",\"WARC-Target-URI\":\"https://www.convertunits.com/from/measure/to/pint\",\"WARC-Payload-Digest\":\"sha1:MWVKMQBRUB4NLLZIBZ7P57BSQLKKPUYB\",\"WARC-Block-Digest\":\"sha1:AECNUFNBXDC425JLHV2PKD7NS6LYUQDG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107878921.41_warc_CC-MAIN-20201022053410-20201022083410-00150.warc.gz\"}"}
https://www.gcflcm.com/lcm-of-42-and-48
[ "# What is the Least Common Multiple of 42 and 48?\n\nLeast common multiple or lowest common denominator (lcd) can be calculated in two way; with the LCM formula calculation of greatest common factor (GCF), or multiplying the prime factors with the highest exponent factor.\n\nLeast common multiple (LCM) of 42 and 48 is 336.\n\nLCM(42,48) = 336\n\nLCM Calculator and\nand\n\n## Least Common Multiple of 42 and 48 with GCF Formula\n\nThe formula of LCM is LCM(a,b) = ( a × b) / GCF(a,b).\nWe need to calculate greatest common factor 42 and 48, than apply into the LCM equation.\n\nGCF(42,48) = 6\nLCM(42,48) = ( 42 × 48) / 6\nLCM(42,48) = 2016 / 6\nLCM(42,48) = 336\n\n## Least Common Multiple (LCM) of 42 and 48 with Primes\n\nLeast common multiple can be found by multiplying the highest exponent prime factors of 42 and 48. First we will calculate the prime factors of 42 and 48.\n\n### Prime Factorization of 42\n\nPrime factors of 42 are 2, 3, 7. Prime factorization of 42 in exponential form is:\n\n42 = 21 × 31 × 71\n\n### Prime Factorization of 48\n\nPrime factors of 48 are 2, 3. Prime factorization of 48 in exponential form is:\n\n48 = 24 × 31\n\nNow multiplying the highest exponent prime factors to calculate the LCM of 42 and 48.\n\nLCM(42,48) = 24 × 31 × 71\nLCM(42,48) = 336" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8658833,"math_prob":0.99797547,"size":1193,"snap":"2021-31-2021-39","text_gpt3_token_len":376,"char_repetition_ratio":0.17577797,"word_repetition_ratio":0.08888889,"special_character_ratio":0.35624477,"punctuation_ratio":0.10714286,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999969,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-30T16:47:38Z\",\"WARC-Record-ID\":\"<urn:uuid:2d7ffed2-9ced-4ab5-808d-7d98d5809b66>\",\"Content-Length\":\"21734\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fa425745-0305-4d15-98fb-9c92d9c95c86>\",\"WARC-Concurrent-To\":\"<urn:uuid:c50d29ce-5c63-4b75-8f82-23aa460933a2>\",\"WARC-IP-Address\":\"104.154.21.107\",\"WARC-Target-URI\":\"https://www.gcflcm.com/lcm-of-42-and-48\",\"WARC-Payload-Digest\":\"sha1:K6BZU7ZC5ZWF2MSOH6N7W5D53FC2Z7NN\",\"WARC-Block-Digest\":\"sha1:EAQ7JY3OTO673W5EFIHPP6JSNNYXGJ6R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046153971.20_warc_CC-MAIN-20210730154005-20210730184005-00595.warc.gz\"}"}
https://www.geeksforgeeks.org/count-nodes-having-bitwise-xor-of-all-edges-in-their-path-from-the-root-equal-to-k/?ref=rp
[ "# Count nodes having Bitwise XOR of all edges in their path from the root equal to K\n\n• Last Updated : 18 Aug, 2021\n\nGiven a Binary Tree consisting of N nodes and two integers R and K. Each edge of the tree has a positive integer associated with it, given in the form {u, v, w} where the edge (u, v) has weight w. The task is to calculate the number of nodes S having Bitwise XOR of all edges in the path from root R to S is equal to K.\n\nExamples:\n\nInput: R = 1, K = 0, N = 7, Edges[][] = {{1, 2, 3}, {1, 3, 1}, {2, 4, 3}, {2, 5, 4}, {3, 6, 1}, {3, 7, 2}}\nOutput: 2\nExplanation:\nRepresentation of the given Binary Tree:", null, "The following pair of nodes have a Bitwise XOR of edges in the path connecting them as K = 0:\nPair 1: (1, 4) = (3 ^ 3) = 0\nPair 2: (1, 6) = (1 ^ 1) = 0\n\nInput: R = 1, K = 0, N = 9, Edges[][] = {{1, 2, 3}, {1, 3, 2}, {2, 4, 3}, {2, 5, 4}, {3, 6, 1}, {3, 7, 2}, {6, 8, 3}, {6, 9, 7}}\nOutput: 3\nExplanation:\nThe representation of given Binary Tree is as follows:", null, "The following pair of nodes have a Bitwise XOR of edges in the path connecting them as K = 0:\nPair 1: (1, 4) = (3 ^ 3) = 0\nPair 2: (1, 8) = (2 ^ 1 ^ 3) = 0\nPair 3: (1, 7) = (2 ^ 2) = 0\n\nApproach: The problem can be solved using the Depth First Search approach. Follow the steps below to solve the problem:\n\n1. Initialize the variable ans and xor with 0 to store the number of pairs and the current xor of edges.\n2. Traverse the given tree using Depth First Search starting from the given root vertex R.\n3. For every node u, visit its adjacent nodes.\n4. For each edge {u, v}, if xor is equal to K, increment ans by 1. Otherwise, for the current edge {u, v, w}, update xor as xor = (xor^w) where ^ is the bitwise XOR.\n5. After traversing, print the value stored in the counter ans as the number of pairs.\n\nBelow is the implementation of the above approach:\n\n## C++\n\n `// C++ program for the above approach``#include ``using` `namespace` `std;` `// Initialize the adjacency list``// to represent the tree``vector > adj;` `// Marks visited / unvisited vertices``int` `visited = { 0 };` `// Stores the required count of nodes``int` `ans = 0;` `// DFS to visit each vertex``void` `dfs(``int` `node, ``int` `xorr, ``int` `k)``{``    ``// Mark the current node``    ``// as visited``    ``visited[node] = 1;` `    ``// Update the counter xor is K``    ``if` `(node != 1 && xorr == k)``        ``ans++;` `    ``// Visit adjacent nodes``    ``for` `(``auto` `x : adj[node]) {` `        ``if` `(!visited[x.first]) {` `            ``// Calculate Bitwise XOR of``            ``// edges in the path``            ``int` `xorr1 = xorr ^ x.second;` `            ``// Recursive call to dfs function``            ``dfs(x.first, xorr1, k);``        ``}``    ``}``}` `// Function to construct the tree and``// print required count of nodes``void` `countNodes(``int` `N, ``int` `K, ``int` `R,``                ``vector > edges)``{` `    ``// Add edges``    ``for` `(``int` `i = 0; i < N - 1; i++) {``        ``int` `u = edges[i], v = edges[i],``            ``w = edges[i];``        ``adj[u].push_back({ v, w });``        ``adj[v].push_back({ u, w });``    ``}` `    ``dfs(R, 0, K);` `    ``// Print answer``    ``cout << ans << ``\"\\n\"``;``}` `// Driver Code``int` `main()``{``    ``// Given K and R``    ``int` `K = 0, R = 1;` `    ``// Given edges``    ``vector > edges``        ``= { { 1, 2, 3 }, { 1, 3, 1 },``            ``{ 2, 4, 3 }, { 2, 5, 4 },``            ``{ 3, 6, 1 }, { 3, 7, 2 } };` `    ``// Number of vertices``    ``int` `N = edges.size();` `    ``// Function call``    ``countNodes(N, K, R, edges);` `    ``return` `0;``}`\n\n## Java\n\n `// Java program for the``// above approach``import` `java.util.*;``class` `GFG{` `static` `class` `pair``{``  ``int` `first, second;``  ``public` `pair(``int` `first,``              ``int` `second) ``  ``{``    ``this``.first = first;``    ``this``.second = second;``  ``}   ``}``  ` `// Initialize the adjacency list``// to represent the tree``static` `Vector []adj =``       ``new` `Vector[``100005``];` `// Marks visited / unvisited``// vertices``static` `int` `visited[] =``       ``new` `int``[``100005``];` `// Stores the required``// count of nodes``static` `int` `ans = ``0``;` `// DFS to visit each``// vertex``static` `void` `dfs(``int` `node,``                ``int` `xorr,``                ``int` `k)``{``  ``// Mark the current node``  ``// as visited``  ``visited[node] = ``1``;` `  ``// Update the counter``  ``// xor is K``  ``if` `(node != ``1` `&&``      ``xorr == k)``    ``ans++;` `  ``// Visit adjacent nodes``  ``for` `(pair x : adj[node])``  ``{``    ``if` `(visited[x.first] != ``1``)``    ``{``      ``// Calculate Bitwise XOR of``      ``// edges in the path``      ``int` `xorr1 = xorr ^ x.second;` `      ``// Recursive call to dfs``      ``// function``      ``dfs(x.first, xorr1, k);``    ``}``  ``}``}` `// Function to construct the tree and``// print required count of nodes``static` `void` `countNodes(``int` `N, ``int` `K,``                       ``int` `R, ``int``[][] edges)``{``  ``// Add edges``  ``for` `(``int` `i = ``0``; i < N - ``1``; i++)``  ``{``    ``int` `u = edges[i][``0``],``        ``v = edges[i][``1``],``    ``w = edges[i][``2``];``    ``adj[u].add(``new` `pair(v, w ));``    ``adj[v].add(``new` `pair(u, w ));``  ``}` `  ``dfs(R, ``0``, K);` `  ``// Print answer``  ``System.out.print(ans + ``\"\\n\"``);``}` `// Driver Code``public` `static` `void` `main(String[] args)``{``  ``// Given K and R``  ``int` `K = ``0``, R = ``1``;``  ` `  ``for` `(``int` `i = ``0``; i < adj.length; i++)``    ``adj[i] = ``new` `Vector();``  ``// Given edges``  ``int``[][] edges = {{``1``, ``2``, ``3``},``                   ``{``1``, ``3``, ``1``}, ``                   ``{``2``, ``4``, ``3``},``                   ``{``2``, ``5``, ``4``},``                   ``{``3``, ``6``, ``1``},``                   ``{``3``, ``7``, ``2``}};` `  ``// Number of vertices``  ``int` `N = edges.length;` `  ``// Function call``  ``countNodes(N, K, R, edges);``}``}` `// This code is contributed by 29AjayKumar`\n\n## Python3\n\n `# Python3 program for the above approach` `# Initialize the adjacency list``# to represent the tree``adj ``=` `[[] ``for` `i ``in` `range``(``100005``)]` `# Marks visited / unvisited vertices``visited ``=` `[``0``] ``*` `100005` `# Stores the required count of nodes``ans ``=` `0` `# DFS to visit each vertex``def` `dfs(node, xorr, k):``    ` `    ``global` `ans``    ` `    ``# Mark the current node``    ``# as visited``    ``visited[node] ``=` `1` `    ``# Update the counter xor is K``    ``if` `(node !``=` `1` `and` `xorr ``=``=` `k):``        ``ans ``+``=` `1` `    ``# Visit adjacent nodes``    ``for` `x ``in` `adj[node]:``        ``if` `(``not` `visited[x[``0``]]):` `            ``# Calculate Bitwise XOR of``            ``# edges in the path``            ``xorr1 ``=` `xorr ^ x[``1``]` `            ``# Recursive call to dfs function``            ``dfs(x[``0``], xorr1, k)` `# Function to construct the tree and``# prrequired count of nodes``def` `countNodes(N, K, R, edges):` `    ``# Add edges``    ``for` `i ``in` `range``(N ``-` `1``):``        ``u ``=` `edges[i][``0``]``        ``v ``=` `edges[i][``1``]``        ``w ``=` `edges[i][``2``]``        ` `        ``adj[u].append([v, w])``        ``adj[v].append([u, w])` `    ``dfs(R, ``0``, K)` `    ``# Print answer``    ``print``(ans)` `# Driver Code``if` `__name__ ``=``=` `'__main__'``:``    ` `    ``# Given K and R``    ``K ``=` `0``    ``R ``=` `1` `    ``# Given edges``    ``edges ``=` `[ [ ``1``, ``2``, ``3` `],[ ``1``, ``3``, ``1` `],``              ``[ ``2``, ``4``, ``3` `],[ ``2``, ``5``, ``4` `],``              ``[ ``3``, ``6``, ``1` `],[ ``3``, ``7``, ``2` `] ]` `    ``# Number of vertices``    ``N ``=` `len``(edges)` `    ``# Function call``    ``countNodes(N, K, R, edges)` `# This code is contributed by mohit kumar 29`\n\n## C#\n\n `// C# program for the``// above approach``using` `System;``using` `System.Collections.Generic;``class` `GFG{` `public` `class` `pair``{``  ``public` `int` `first,``             ``second;``  ``public` `pair(``int` `first,``              ``int` `second) ``  ``{``    ``this``.first = first;``    ``this``.second = second;``  ``}   ``}``  ` `// Initialize the adjacency list``// to represent the tree``static` `List []adj =``       ``new` `List;` `// Marks visited / unvisited``// vertices``static` `int` `[]visited =``       ``new` `int``;` `// Stores the required``// count of nodes``static` `int` `ans = 0;` `// DFS to visit each``// vertex``static` `void` `dfs(``int` `node,``                ``int` `xorr,``                ``int` `k)``{``  ``// Mark the current node``  ``// as visited``  ``visited[node] = 1;` `  ``// Update the counter``  ``// xor is K``  ``if` `(node != 1 &&``      ``xorr == k)``    ``ans++;` `  ``// Visit adjacent nodes``  ``foreach` `(pair x ``in` `adj[node])``  ``{``    ``if` `(visited[x.first] != 1)``    ``{``      ``// Calculate Bitwise XOR of``      ``// edges in the path``      ``int` `xorr1 = xorr ^ x.second;` `      ``// Recursive call to dfs``      ``// function``      ``dfs(x.first, xorr1, k);``    ``}``  ``}``}` `// Function to construct the tree and``// print required count of nodes``static` `void` `countNodes(``int` `N, ``int` `K,``                       ``int` `R, ``int``[,] edges)``{``  ``// Add edges``  ``for` `(``int` `i = 0; i < N - 1; i++)``  ``{``    ``int` `u = edges[i,0];``     ``int`   `v = edges[i,1],``    ``w = edges[i,2];``    ``adj[u].Add(``new` `pair(v, w ));``    ``adj[v].Add(``new` `pair(u, w ));``  ``}` `  ``dfs(R, 0, K);` `  ``// Print answer``  ``Console.Write(ans + ``\"\\n\"``);``}` `// Driver Code``public` `static` `void` `Main(String[] args)``{``  ``// Given K and R``  ``int` `K = 0, R = 1;``  ` `  ``for` `(``int` `i = 0; i < adj.Length; i++)``    ``adj[i] = ``new` `List();``  ` `  ``// Given edges``  ``int``[,] edges = {{1, 2, 3},``                   ``{1, 3, 1}, ``                   ``{2, 4, 3},``                   ``{2, 5, 4},``                   ``{3, 6, 1},``                   ``{3, 7, 2}};` `  ``// Number of vertices``  ``int` `N = edges.GetLength(0);` `  ``// Function call``  ``countNodes(N, K, R, edges);``}``}` `// This code is contributed by 29AjayKumar`\n\n## Javascript\n\n ``\n\nOutput:\n\n`2`\n\nTime Complexity: O(N) where N is the number of nodes.\nAuxiliary Space: O(N)\n\nMy Personal Notes arrow_drop_up" ]
[ null, "https://media.geeksforgeeks.org/wp-content/uploads/20201013021524/1-660x495.png", null, "https://media.geeksforgeeks.org/wp-content/uploads/20201013021858/2-660x495.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5436956,"math_prob":0.9866553,"size":9114,"snap":"2022-40-2023-06","text_gpt3_token_len":3205,"char_repetition_ratio":0.12129528,"word_repetition_ratio":0.39627522,"special_character_ratio":0.4068466,"punctuation_ratio":0.20726916,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998528,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-04T01:04:00Z\",\"WARC-Record-ID\":\"<urn:uuid:c57cc304-d7a5-45b8-9eea-298b8ae6f7ce>\",\"Content-Length\":\"203116\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d815c5b2-2017-4cf7-8b2c-c3fcfcec9a38>\",\"WARC-Concurrent-To\":\"<urn:uuid:5fa585b7-a15a-40bb-8d94-38ee1cd2c231>\",\"WARC-IP-Address\":\"104.97.85.137\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/count-nodes-having-bitwise-xor-of-all-edges-in-their-path-from-the-root-equal-to-k/?ref=rp\",\"WARC-Payload-Digest\":\"sha1:WHVEYYFIS263FIRQ7OLVISQXXZA3VQZG\",\"WARC-Block-Digest\":\"sha1:HBKXDL3HBIW5CXKKVJPZGYIIFMMOY3JJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337446.8_warc_CC-MAIN-20221003231906-20221004021906-00075.warc.gz\"}"}
https://holidravel.com/challenge-can-you-solve-this-middle-school-math-problem-without-a-calculator/
[ "# Challenge: Can you solve this middle school math problem – without a calculator?\n\nSimple matters of addition are usually considered child’s play. What if you arrange a couple of laughably easy equations in a specific order and then add a logical question to them? Then they can become a real puzzler.\n\nTake a look at the math puzzle below and see what you’re doing with it. Also, check the clock before you start. How fast can you do this?\n\nThese are old classic mathematical problems when you were in middle or high school.\n\nThese tests are more fun when you find yourself trying to remember the math you learned as a child.\n\n### Can you figure out the solution for this mathematics quiz – without a calculator?\n\nOkay, now onto the challenge itself.\n\nIn the picture below, we see a mathematics problem made for middle schoolers.\n\nThe challenge: are you able to solve it without a calculator?", null, "Remember the order of operations. What part should you solve first?\n\nBelow the next picture, you will find the correct answer.\n\nA\n\nB\n\nC\n\nD\n\nE", null, "How do we know that 11 is the right answer?\n\nTo begin with, we need to solve the part inside the parenthesis, 12 ÷ 4 + 1, which equals 4.\n\nAfter that, the problem looks like this: 15 – 1 x 4. So the next step is to calculate the multiplication, which equals 4.\n\nSo the problem now is 15 – 4, which totals 11." ]
[ null, "https://i1.wp.com/wakeupyourmind.net/wp-content/uploads/2020/06/kid2-1.jpg", null, "https://i2.wp.com/wakeupyourmind.net/wp-content/uploads/2020/06/kid1.jpeg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9329102,"math_prob":0.9655782,"size":1525,"snap":"2023-14-2023-23","text_gpt3_token_len":350,"char_repetition_ratio":0.10979619,"word_repetition_ratio":0.0,"special_character_ratio":0.23147541,"punctuation_ratio":0.12772585,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9931376,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-22T20:27:14Z\",\"WARC-Record-ID\":\"<urn:uuid:0cd0aca0-20d5-4347-a46c-cbd869724aff>\",\"Content-Length\":\"73889\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ac512f37-d102-491b-b77d-030becaa7a1c>\",\"WARC-Concurrent-To\":\"<urn:uuid:8b48a001-ec6a-4f05-92a2-df55751fa567>\",\"WARC-IP-Address\":\"67.202.92.26\",\"WARC-Target-URI\":\"https://holidravel.com/challenge-can-you-solve-this-middle-school-math-problem-without-a-calculator/\",\"WARC-Payload-Digest\":\"sha1:VP7EOUT442QOHXDCGLIHJOKGLSTZQ6HB\",\"WARC-Block-Digest\":\"sha1:VKERVYVBZWYKVAOGGNZN3XROWZFO6IZI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296944452.74_warc_CC-MAIN-20230322180852-20230322210852-00600.warc.gz\"}"}
https://code.grnet.gr/projects/synnefo/repository/revisions/92d2d1ce9ea13c5d2a32e797f6da396f8e02049f/diff/snf-cyclades-app/synnefo/logic/backend.py
[ "544 544\n\n545 545\n``` kw['nics'] = [{\"name\": nic.backend_uuid,\n```\n546 546\n``` \"network\": nic.network.backend_id,\n```\n547\n``` \"ip\": nic.ipv4}\n```\n547\n``` \"ip\": nic.ipv4_address}\n```\n548 548\n``` for nic in nics]\n```\n549 549\n``` backend = vm.backend\n```\n550 550\n``` depend_jobs = []\n```\n551 551\n``` for nic in nics:\n```\n552\n``` network = Network.objects.select_for_update().get(id=nic.network.id)\n```\n552\n``` network = Network.objects.select_for_update().get(id=nic.network_id)\n```\n553 553\n``` bnet, created = BackendNetwork.objects.get_or_create(backend=backend,\n```\n554 554\n``` network=network)\n```\n555 555\n``` if bnet.operstate != \"ACTIVE\":\n```\n......\n714 714\n``` \"\"\"Create a network.\"\"\"\n```\n715 715\n\n716 716\n``` tags = network.backend_tag\n```\n717\n``` if network.dhcp:\n```\n718\n``` tags.append('nfdhcpd')\n```\n717\n``` subnet = None\n```\n718\n``` subnet6 = None\n```\n719\n``` gateway = None\n```\n720\n``` gateway6 = None\n```\n721\n``` for subnet in network.subnets.all():\n```\n722\n``` if subnet.ipversion == 4:\n```\n723\n``` if subnet.dhcp:\n```\n724\n``` tags.append('nfdhcpd')\n```\n725\n``` subnet = subnet.cidr\n```\n726\n``` gateway = subnet.gateway\n```\n727\n``` elif subnet.ipversion == 6:\n```\n728\n``` subnet6 = subnet.cidr\n```\n729\n``` gateway6 = subnet.gateway\n```\n719 730\n\n720 731\n``` if network.public:\n```\n721 732\n``` conflicts_check = True\n```\n......\n728 739\n``` # not support IPv6 only networks. To bypass this limitation, we create the\n```\n729 740\n``` # network with a dummy network subnet, and make Cyclades connect instances\n```\n730 741\n``` # to such networks, with address=None.\n```\n731\n``` subnet = network.subnet\n```\n732 742\n``` if subnet is None:\n```\n733 743\n``` subnet = \"10.0.0.0/24\"\n```\n734 744\n\n......\n742 752\n``` with pooled_rapi_client(backend) as client:\n```\n743 753\n``` return client.CreateNetwork(network_name=network.backend_id,\n```\n744 754\n``` network=subnet,\n```\n745\n``` network6=network.subnet6,\n```\n746\n``` gateway=network.gateway,\n```\n747\n``` gateway6=network.gateway6,\n```\n755\n``` network6=subnet6,\n```\n756\n``` gateway=gateway,\n```\n757\n``` gateway6=gateway6,\n```\n748 758\n``` mac_prefix=mac_prefix,\n```\n749 759\n``` conflicts_check=conflicts_check,\n```\n750 760\n``` tags=tags)\n```\n\nAlso available in: Unified diff" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5058583,"math_prob":0.7893408,"size":2120,"snap":"2022-05-2022-21","text_gpt3_token_len":732,"char_repetition_ratio":0.17533082,"word_repetition_ratio":0.0,"special_character_ratio":0.44386792,"punctuation_ratio":0.24808185,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98664933,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-18T23:11:24Z\",\"WARC-Record-ID\":\"<urn:uuid:d16ac808-ed5d-45f5-aeb1-d05bb5893c74>\",\"Content-Length\":\"15275\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fc52dc26-5bb8-4cae-903e-e6f50e0e8d1d>\",\"WARC-Concurrent-To\":\"<urn:uuid:781cc4df-2201-411f-baca-75b820072260>\",\"WARC-IP-Address\":\"194.177.210.147\",\"WARC-Target-URI\":\"https://code.grnet.gr/projects/synnefo/repository/revisions/92d2d1ce9ea13c5d2a32e797f6da396f8e02049f/diff/snf-cyclades-app/synnefo/logic/backend.py\",\"WARC-Payload-Digest\":\"sha1:SAGZC74YOZQZZTFH5XTBZ72A73BEMM5D\",\"WARC-Block-Digest\":\"sha1:CF7S6QKSAYJGN3XQXMY4J4QA57HSXCQZ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662522556.18_warc_CC-MAIN-20220518215138-20220519005138-00416.warc.gz\"}"}
https://cs.stackexchange.com/users/11424/thenotme?tab=summary
[ "### Questions (19)\n\n 8 If all edges are of equal weight, can one use BFS to obtain a minimal spanning tree? 5 Finding Kernel in DAG 4 Subset Sum for {1,…,n} 3 Find an MST in a graph with edge weights from {1,2} 3 Shortest even path that goes through a vertex\n\n### Reputation (561)\n\n +40 If all edges are of equal weight, can one use BFS to obtain a minimal spanning tree? +10 Find an MST in a graph with edge weights from {1,2} +10 Finding Kernel in DAG +10 Subset Sum for {1,…,n}\n\nThis user has not answered any questions\n\n### Tags (20)\n\n 0 algorithms × 6 0 spanning-trees × 2 0 graphs × 5 0 complexity-theory × 2 0 computability × 5 0 reductions × 2 0 turing-machines × 3 0 permutations 0 formal-languages × 2 0 numerical-algorithms" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7017881,"math_prob":0.98591846,"size":482,"snap":"2019-51-2020-05","text_gpt3_token_len":126,"char_repetition_ratio":0.23640168,"word_repetition_ratio":0.0,"special_character_ratio":0.3174274,"punctuation_ratio":0.024096385,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96776736,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-28T18:16:30Z\",\"WARC-Record-ID\":\"<urn:uuid:df5e1978-a7e3-41ae-b83d-9ef3f1f1847a>\",\"Content-Length\":\"125899\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a02e77cc-7267-44ea-be2b-6d2d733bd7c9>\",\"WARC-Concurrent-To\":\"<urn:uuid:eb060551-31ee-4660-a204-6595b1349d76>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://cs.stackexchange.com/users/11424/thenotme?tab=summary\",\"WARC-Payload-Digest\":\"sha1:2HLXA2O4T7EUKHPS5DCOJCV5YWWARGYZ\",\"WARC-Block-Digest\":\"sha1:IWE6NRBIGRN5IIGASJKMIMCFN7BYSVNG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251779833.86_warc_CC-MAIN-20200128153713-20200128183713-00504.warc.gz\"}"}
https://www.analyzemath.com/fractions/multiplication.html
[ "# Multiply Fractions\n\nExamples of fractions multiplication are presented along with detailed solutions and exercises with answers on are presented. A calculator to multiply fractions is included in this website.\n\n## How to Multiply Fractions? Rule\n\nTo multiply two fractions, we multiply the numerators together and the denominators together. Hence\n\n$\\dfrac{a}{b} \\times \\dfrac{c}{d} = \\dfrac{a \\times c}{b \\times d}$\n\n \n\n## Multiply Fractions: Examples with Detailed Solutions\n\nExample 1\nMultiply and simplfy, and express the final answer as a fraction.\n$\\dfrac{2}{3} \\times \\dfrac{7}{2}$ Solution to Example 1\nTo multiply fractions you multiply numerators and denominators\n$\\dfrac{2}{3} \\times \\dfrac{7}{2} = \\dfrac{2 \\times 7}{3 \\times 2}$\n\nDivide the numerator and the denominator by the common factor 2 and simplify\n$\\dfrac{ \\cancel{2} \\times 7}{3 \\times \\cancel{2}} = \\dfrac{7}{3}$\n\nExample 2\nMultiply and simplfy, and express the final answer as a fraction.\n$\\dfrac{11}{9} \\times \\dfrac{12}{25}$ Solution to Example 2\nMultiply numerators and denominators\n$\\dfrac{11}{9} \\times \\dfrac{12}{25} = \\dfrac{11 \\times 12}{9 \\times 25}$\n\n12 in the numeartor and 9 in the denominator have the greatest common factor equal to 3, hence divide 12 in numerator and 9 denominator by 3.\n$= \\dfrac{11 \\times (12\\div 3)}{(9\\div 3) \\times 25}$\n\nand simplify\n$\\dfrac{11 \\times 4}{3 \\times 25} = \\dfrac{44}{75}$\n\nExample 3 (Multiply a fraction by an integer)\nMultiply, simplfy and express the final answer as a fraction.\n$\\dfrac{2}{15} \\times 5$ Solution to Example 3\nMultiply numerator of first fraction by 5\n$\\dfrac{2}{15} \\times 5 = \\dfrac{2 \\times 5}{15}$\n\n5 in the numerator and 15 in the denominator have the greatest common factor equal to 5, hence divide 5 in the numerator and 15 in the denominator by $5$ to simplify\n$\\dfrac{2 \\times (5 \\div 5)}{15 \\div 5} = \\dfrac{2}{3}$\n\nExample 4 (Multiply a fraction by a decimal number)\nMultiply, simplfy and express the final answer as a fraction.\n$\\dfrac{2}{15} \\times 1.5$ Solution to Example 4\nRewrite the decimal number $1.5$ as a fraction\n$1.5 = \\dfrac{1.5}{1} = \\dfrac{1.5 \\times 10}{1 \\times 10} = \\dfrac{15}{10}$\n\nRewrite the given expression using fractions\n$\\dfrac{2}{15} \\times 1.5 = \\dfrac{2}{15} \\times \\dfrac{15}{10}$\n\nMultiply numerators and denominators\n$= \\dfrac{2 \\times 15}{15 \\times 10}$\n\nDivide numerator and denominator by $15$ and simplify\n$= \\dfrac{2}{10}$\n\nDivide numerator and denominator by $2$ and simplify\n$= \\dfrac{1}{5}$\n\n## Exercises with Answers: Multiply Fractions\n\nMultiply, simplify and express as fractions.\n\n1.     $\\dfrac{2}{3} \\times \\dfrac{7}{6}$\n\n2.     $\\dfrac{23}{30} \\times \\dfrac{33}{25}$\n\n3.     $5 \\times \\dfrac{2}{25}$\n\n4.     $0.14 \\times \\dfrac{40}{21}$\n\n1.     $\\dfrac{7}{9}$\n\n2.     $\\dfrac{253}{250}$\n\n3.     $\\dfrac{2}{5}$\n\n4.     $\\dfrac{4}{15}$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.68477374,"math_prob":0.99998677,"size":3050,"snap":"2022-05-2022-21","text_gpt3_token_len":1018,"char_repetition_ratio":0.23736048,"word_repetition_ratio":0.13894737,"special_character_ratio":0.36983606,"punctuation_ratio":0.0707635,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999997,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-25T04:39:23Z\",\"WARC-Record-ID\":\"<urn:uuid:ca9b7f4a-982f-4994-83d8-8ff984dc9b5e>\",\"Content-Length\":\"118129\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e404522e-db5c-450b-839a-4743a6d523c4>\",\"WARC-Concurrent-To\":\"<urn:uuid:269d895c-3e40-4a9a-83b0-e20e52d38f7d>\",\"WARC-IP-Address\":\"50.16.49.81\",\"WARC-Target-URI\":\"https://www.analyzemath.com/fractions/multiplication.html\",\"WARC-Payload-Digest\":\"sha1:GO7YDSGHKW2KHXSXGOSRBCCKWLUUXPXP\",\"WARC-Block-Digest\":\"sha1:COQEL3SWDHFOPIPEPO3EYY3LATJHN3IQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662578939.73_warc_CC-MAIN-20220525023952-20220525053952-00326.warc.gz\"}"}
https://fudedegisikuk.johnsonout.com/laplace-and-z-transforms-book-23918wh.php
[ "Last edited by Tygogar\nMonday, July 27, 2020 | History\n\n2 edition of Laplace and z-transforms found in the catalog.\n\nLaplace and z-transforms\n\nW. Bolton\n\n# Laplace and z-transforms\n\n## by W. Bolton\n\nWritten in English\n\nSubjects:\n• Laplace transformation.,\n• Z-transformation.\n\n• Edition Notes\n\nThe Physical Object ID Numbers Statement W. Bolton. Series Mathematics for engineers Pagination vii, 128 p. Number of Pages 128 Open Library OL21328172M ISBN 10 0582228190\n\nMost useful z-transforms can be expressed in the form X(z)= P(z) Q(z), where P(z)and Q(z)are polynomials in z. The values of z for which P(z)=0are called the zeros of X(z), and the values with Q(z)=0are called the poles. The zeros and poles completely specify X(z)to within a multiplicative constant. Example: right-sided exponential sequence. Students are scared of the more useful and intuitive Fourier Transform (FT) than of the Laplace Transform (LT). This fear is a refrain, from seeing these transforms as they should be seen.\n\nOn p. , the textbook, like many DSP books, denes the region of convergence or ROC to be: fithe set of all values of z for which X(z) attains a nite value.fl Writing each z in the polar form z = re|˚, on p. , the book says that: finding the ROC for X(z) is equivalent to determining. Video talks about th relationship between Laplace, Fourier and Z-Transforms as well as derives the Z-Transform from the Laplace Transform.\n\nFourier, Laplace, and z Transforms: Unique Insight into Continuous-Time and Discrete-Time Transforms. Their Definition and Applications (Technical LAP series) Dwight F. Mix. Paperback. \\$ An Introduction to Laplace Transforms and Fourier Series (Springer Undergraduate Mathematics Series) Phil Dyke. out Cited by: The book provides a comprehensive presentation of mathematical tools used by engineers to describe and analyze signals and systems. Topics include time domain transformations, frequency domain analysis, fourier transforms, laplace, and Z transforms. MATLAB and the Control System Toolbox is used to solve examples in the book. In addition, a.\n\nYou might also like\nStatus of interstate compacts for the disposal of low-level radioactive waste\n\nStatus of interstate compacts for the disposal of low-level radioactive waste\n\nProducts approved for sheep scab in Northern Ireland.\n\nProducts approved for sheep scab in Northern Ireland.\n\nTransatlantic episode\n\nTransatlantic episode\n\nEffects of a group assertiveness training workshop on anxiety, assertiveness, and health locus of control for registered nurses\n\nEffects of a group assertiveness training workshop on anxiety, assertiveness, and health locus of control for registered nurses\n\nFrank Auerbach\n\nFrank Auerbach\n\nCows of our planet\n\nCows of our planet\n\nBlue book of San Franciscans in public life\n\nBlue book of San Franciscans in public life\n\nSouth Yorkshire structure plan.\n\nSouth Yorkshire structure plan.\n\nPractical nursing review\n\nPractical nursing review\n\nMountain Branch, National Home for Disabled Volunteer Soldiers. Letter from the Secretary of the Treasury, transmitting a copy of a communication from the Secretary of War submitting an estimate of appropriation for Mountain Branch of National Home for Disabled Volunteer Soldiers.\n\nMountain Branch, National Home for Disabled Volunteer Soldiers. Letter from the Secretary of the Treasury, transmitting a copy of a communication from the Secretary of War submitting an estimate of appropriation for Mountain Branch of National Home for Disabled Volunteer Soldiers.\n\nApplied Laplace Transforms and z-Transforms for Scientists and Engineers: A Computational Approach Using A Mathematica Package Softcover reprint of the original 1st ed.\n\nEdition by Urs Graf (Author) › Visit Amazon's Urs Graf Page. Find all the books, read about the author, and more. Cited by: If you have some previous knowledge on Laplace and Z-transforms, the extra knowledge you can gain with this work is not worth its price.\n\nThe book is just pages and very simple. Read more. 5 people found this helpful. Helpful. Comment Report abuse. See all reviews from the United States/5(3). Download The Laplace Transform: Theory and Applications By Joel L. Schiff – The Laplace transform is a wonderful tool for solving ordinary and partial differential equations and has enjoyed much success in this realm.\n\nWith its success, however, a certain casualness has been bred concerning its application, without much regard for hypotheses and when they are valid. The book covers Dirichlet Series, Zeta Functions, the Laplace Transform, the Stieltjes Transform, Tauberian Theorems, and even covers the basics of Fractional Integrals and Fractional Derivatives.\n\nThe Laplace transform method is used to transform all time-dependent equations from the (r, z, t) domain to algebraic equations in the (r, z, s) domain. Step 2: The transformed potential variable, ϕ(r, z, s), is expanded in terms of a power series of s −1, where the coefficients of the series terms denoted by ψ(r, z) are functions of.\n\nLaplace and z-Transforms by W. Bolton,available at Book Depository with free delivery worldwide.3/5(2). The Laplace transforms of particular forms of such signals are.\n\nA unit step input which starts at a time t=0 and rises to the constant value 1 has a Laplace transform of 1/s. A unit impulse input which starts at a time t=0 and rises to the value 1 has a Laplace transform of A unit ramp input which starts at time t=0 and rises by 1 each second has a Laplace transform of 1/s 2.\n\nComparing the last two equations, we find the relationship between the unilateral Z-transform and the Laplace transform of the sampled signal: \\$\\$ X_q(s) = X(z) \\Big|_{z=e^{sT}} \\$\\$ The similarity between the Z and Laplace transforms is expanded upon in the theory of time scale calculus.\n\nknow Laplace Transforms of all the functions you are likely to encounter, so you have access to these online, and the packages have also an inversion routine to find a function f from a given f.˜ There are books with long lists of transforms of known functions and compositions of functions; we give some.\n\nof the Laplace transforms to cover the Z-transform, the discrete counterpart of the Laplace transform. With the use of the Z-transforms we can include examples of solutions to difference. Laplace and z-transforms Laplace and z-transform techniques and is intended to be part of MATH course.\n\nThese notes are freely composed from the sources given in the bibli-ography and are being constantly improved. Check the date above to see if this is Later Laplace2 independently used it in his book Theorie Ana.\n\nMary Attenborough, in Mathematics for Electrical Engineering and Computing, Introduction. In this chapter, we will present a quick review of Laplace transform methods for continuous and piecewise continuous systems and z transform methods for discrete systems. The widespread application of these methods means that engineers need increasingly to be able to apply these techniques in.\n\nLaplace Transform, Z Transform. INTRODUCTION. We tried to obtain a good answer for the Fourier and Laplace and Z - transforms relationship. Many of the explanations just mention that the relationship is that s=a+iw, so the Fourier transform becomes a special case of the laplace transform.\n\nIn the calculus we know that certain functions have power. The Laplace transform deals with differential equations, the s-domain, and the s-plane. Correspondingly, the z-transform deals with difference equations, the z-domain, and the z-plane.\n\nHowever, the two techniques are not a mirror image of each other; the s-plane is arranged in a rectangular coordinate system, while the z-plane uses a polar format.\n\nTitle: Table of Laplace and Z-transforms Author: Sami Terho Created Date: 1/26/ PM. The Laplace transform of a sum is the sum of a Laplace transforms. And in conjunction with the differentiation roll by which we knew that the Laplace transform of a derivative is s times the Laplace transform the function, the combination of linearity and the differentiation role allowed us to apply Laplace transforms to turn differential.\n\nLaplace and z-transforms Ali Sinan Sert¨oz [email protected] 11 April 1 Introduction These notes are intended to guide the student through problem solving using Laplace and z-transform techniques and is intended to be part of MATH Later Laplace2 independently used it in his book Th´eorie Analytique de Probabilit´es.\n\nApplied Laplace Transforms and z-Transforms for Scientists and Engineers: A Computational Approach using a Mathematica Package Urs Graf (auth.) The theory of Laplace transformation is an important part of the mathematical background required for engineers, physicists and mathematicians.\n\nof calculus books. Some entries for the special integral table appear in Table 1 and also in sectionTable 4.\n\nThe L-notation for the direct Laplace transform produces briefer details, as witnessed by the translation of Table 2 into Table 3 below. The reader is advised to move from Laplace integral notation to the L{notation as. Laplace, Fourier, and Z Transforms – Study Materials.\n\nIn this we have given Laplace, Fourier, and Z Transforms Study Materials for all competitive Exams like UPSC, MPPSC, APPSC, APSC, TNPSC, TSPSC etc.\n\nCandidates can download Laplace, Fourier, and Z Transforms Study Materials along with Previous Year Questions and Detailed solutions PDF from below mentioned links. † Multiplying z-transforms creates a cascade system, so factor-ing must create subsystems Example: † Since is a third-order polynomial, we should be able to factor it into a first degree and second degree polynomial † We can use the MATLAB function roots() to assist us >> p = roots([1 3 -2 1]) p = + i - i.\n\nThe book presents theory and applications of Laplace and z-transforms together with a Mathematica package developed by the author. The package substantially enhances the built-in Laplace and z-transforms facilities of Mathematica.\n\nThe emphasis lies on the computational and applied side, particularly in the fields of control engineering. Buy Fourier, Laplace, and z Transforms: Unique Insight into Continuous-Time and Discrete-Time Transforms. Their Definition and Applications (Technical LAP series Book 5): Read Kindle Store Reviews - hor: Dwight F.\n\nMix." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88458127,"math_prob":0.9046653,"size":9401,"snap":"2021-04-2021-17","text_gpt3_token_len":2066,"char_repetition_ratio":0.17175694,"word_repetition_ratio":0.06910569,"special_character_ratio":0.20253165,"punctuation_ratio":0.10918544,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9684033,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-22T23:06:11Z\",\"WARC-Record-ID\":\"<urn:uuid:b270f37f-124c-4a15-aa6a-4fd17f9bd7d3>\",\"Content-Length\":\"23084\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:84939acd-3945-4bf1-a077-85f077cea82a>\",\"WARC-Concurrent-To\":\"<urn:uuid:b87218f2-c128-47d6-acb9-e74bdd2e426c>\",\"WARC-IP-Address\":\"172.67.140.68\",\"WARC-Target-URI\":\"https://fudedegisikuk.johnsonout.com/laplace-and-z-transforms-book-23918wh.php\",\"WARC-Payload-Digest\":\"sha1:R3XOR75FNHE42G5JQQIGKA7UBDWC6HP7\",\"WARC-Block-Digest\":\"sha1:IU4D4NFAZHASHP47D2CPNKOSGKQCI5QU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703531429.49_warc_CC-MAIN-20210122210653-20210123000653-00553.warc.gz\"}"}
http://giorgioschultze.eu/02-15/31637-calculate_cycle_time_in_surface_grinding.html
[ "#### Sales Inquiry\n\nCalculate Cycle Time In Surface Grinding", null, "### CR4 - Thread: Cycle Time for Centerless and Internal Grinding\n\nApr 17, 2009 · Allow enough stock for the grinder to set up and kiss . first let us know abt how we are going to calculate the cycle time in Centerless and . Surface .", null, "### grinding cycle time calculator - eitmindia\n\nJan 07, 2013· how to calculate grinding cycle time in centerless grinding, How to calculate cycle time for cylindrical grinding machine, Dec 06, 2007 Best . calculate cycle time in surface grinding", null, "### calculate grinding cycle time - sethhukamchand\n\ncalculate cycle time in surface grinding - Cycle-Time Reduction in. was limited in applicability due to the need to estimate tool. \" Presently a research strated in cycle-time reduction of cylindrical . Read more", null, "### Surface Grinding Time Calculation Formula\n\nSurface Grinding Time Calculation Formula. . calculate cycle time in surface grinding . cycletime calculation in infeed grinding cycle time calculation formula .", null, "### machiningsolutionsllc\n\nBall Nose Surface Finish Calculator Turning Power Page Drilling Power Page . TIME IN CUT (MIN/PART) # PARTS ACTUAL TIME IN CUT PER INSERT - .", null, "### grinding time calculation - atelierpiggypee.nl\n\nCalculate Grinding Cycle Time A cylindrical grinder is a type of grinding machine that is used to shape the . time calculation formula for surface grinding .", null, "### cycle time calculation cylindrical grinding\n\ncycle time calculation cylindrical grinding – X-crusher Machine. How to calculate cycle time for cylindrical grinding machine?machining time calculator for grinding – Grinding Mill China. Posted at: December 7, cylindrical grinding time calculation – Gulin Mining. Read more", null, "### How to Determine Cycle Time, Takt Time, Lead Time\n\nHow to Determine Cycle Time, Takt Time, . To calculate takt time think touchdown, or T/D, since we simply divide the net available time by the customer demand.", null, "### Grinding Cycle Time Calculation - snmarketing\n\ncalculate cycle time in surface grinding. how to calculate cycle time of surface grinding machine. id grinding cycle time calculation MTM Crusher. id grinding cycle time calculation. >>Price! Get Price.", null, "### grinding cycle calculate - hometbmueblerias.mx\n\ncalculate cycle time in surface grinding. how to calculate grinding cycle time in centerless and surface grinding are demonstrated . can be up to 3 parts in .", null, "### Cycletime Calculation In Infeed Grinding\n\ncycletime calculation in infeed grinding Overview. cylindrical grinding calculate. . how to calculate cycle time of surface grinding machine .", null, "### calculate cycle time in surface grinding\n\nCycle-Time Reduction in. was limited in applicability due to the need to estimate tool. \" Presently a research . strated in cycle-time reduction of cylindrical plunge grinding. 2 RCBZ Method ... machine settings for this problem, and power and surface .", null, "### Calculate Grinding Cycle Time – Grinding Mill China\n\nCalculate grinding cycle time beltconveyers how to calculate cycle time for cylindrical grinding machine . how to calculate cycle time of surface grinding .", null, "### surface grinding time calculation formula - tisshoo\n\ncalculate cycle time in surface grinding how to calculate grinding cycle time incenterless and surface grinding are demonstrated can be up to3 parts in the middle .", null, "### calculate cycle time in surface grinding - Jaw Crusher in .\n\ncalculate grinding cycle time - mayukhportfolio. calculate grinding cycle time; calculate grinding cycle time. . The calculation below illustrates how to calculate an existing surface grinding process .", null, "### cycletime calculation in grinding m c - fairytime\n\ncalculate cycle time in surface grinding - m-e-geu. Calculate grinding cycle time beltconveyers how to calculate cycle time for cylindrical grinding machine, Chat Now calculate cycle time in surface grinding. Read More", null, "### Cylindrical Grinding Machining Time Calculations\n\ncalculate cycle time in surface grinding. how to calculate cycle time of surface grinding machine. id grinding cycle time calculation MTM Crusher. id grinding cycle .", null, "### cylindrical grinding formula - omegatron\n\nHow to calculate cycle time for cylindrical grinding machine . formulas for surface grinding machine mining equipment price what is the formula of cylindrical .", null, "### Turning Formula Calculator - calculates automatically for .\n\nTurning Formula Calculator for SFM, RPM, inches per rev, inches per minute, and metal removal rates. . SURFACE FEET PER MINUTE (SFM)", null, "### GRINDING FEEDS AND SPEEDS - ABRASIVE ENGINEERING\n\nFEEDS AND SPEEDS. QUESTION: Why can't . shortness of cycle time, . peripheral surface grinding operations on medium size machines with drive motors in the above .", null, "### calculate grinding cycle time - mayukhportfolio\n\ncalculate grinding cycle time; calculate grinding cycle time. . The calculation below illustrates how to calculate an existing surface grinding process .", null, "### Cutting machining time Calculation - wise tool\n\ncutting Machining time inclusive of setting up the machine, getting the tools, study of drawing. The actual Machining time taken and measuring and checking.", null, "### calculate grinding cycle time - mayukhportfolio\n\ncalculate grinding cycle time; calculate grinding cycle time. . The calculation below illustrates how to calculate an existing surface grinding process .", null, "### Performance Analysis of Cylindrical Grinding .\n\nGrinding process plays a major role in controlling the quality requirements of components in terms of dimension, form, finish and surface integrity. Apart from these requirements, the cycle-time and grinding costs are also considered as important for assessing the effectiveness of grinding. In brief, the entire", null, "### cycletime calculation in grinding m c - fairytime\n\ncalculate cycle time in surface grinding - m-e-geu. Calculate grinding cycle time beltconveyers how to calculate cycle time for cylindrical grinding machine, Chat Now calculate cycle time in surface grinding. Read More", null, "### cycle time calculation cylindrical grinding\n\ncycle time calculation process for cylindrical grinding. calculate cycle time in surface grinding - m-e-geu. Cylindrical Grinding Cycle Time Calculation cylindrical grinding, how to calculate an existing surface grinding process for cycle time, and calculate test . Contact Supplier", null, "### Production Time Calculator - CustomPart.Net\n\nThe production time for a manufacturing process is primarily determined from the cycle time, but must also account for the defect rate, machine uptime, and number of .", null, "### cycle time calculation cylindrical grinding\n\ncalculate cycle time in surface grinding - accmet . cycle time calculation cylindrical grinding – AC . Cylindrical Grinding Calculator . Enter value and click on calculate. Result will be displayed. . Contact Supplier", null, "### Gear Hobbing, Shaping and Shaving - A Guide to Cycle Time .\n\nproductivity of hobbing. Surface finish of the gear flanks is not . finished by grinding or honing. Cycle Time Formula The formula to calculate shaving cycle time is:", null, "### calculate grinding cycle time - deenamarine\n\ncalculate cycle time in surface grinding. how to calculate grinding cycle time in centerless grinding How to calculate cycle time for cylindrical grinding machine Dec 06, 2007 · Best Answer: To\n\nPrevious Page: How To Make A Concrete Crusher\n\n#### related information\n\n• Calculate Crush Stone Volume\n• How To Calculate Stone Crusher Capacity\n• How To Calculate Mill Capacity\n• How To Calculate Cane Mill Torque Ie Knm\n• Calculate Motor Power For A Roll Crusher\n• How To Calculate Ball Volume\n• Calculate Sand Quantity For Rcc M20 Grade Concrete\n• How To Calculate Cost Phospating Production Line Equipment\n• Formulas To Calculate Power On Conveyor Belt\n• How To Calculate Machin Hour Of Cement Crusher\n• Calculate Power Stone Crusher Mining\n• Crusher Run Calculate Coverage\n• How Calculate Separation Efficiency In Cement Mill\n• How To Calculate Epoxy For Pebble Patio Overlay\n• Calculate Power Consumption Of Hammer Mill\n• Calculate Cycle Time In Surface Grinding\n• How To Calculate Tons Needed Of Screenings\n• How Calculate Grinding Ball Size For Tube Mill\n• How To Calculate Coal Mill Capacity For Cement Plant\n• How To Calculate The Energy Consumption Of A Ball Mill" ]
[ null, "http://giorgioschultze.eu/images/Schultze/547.jpg", null, "http://giorgioschultze.eu/images/Schultze/260.jpg", null, "http://giorgioschultze.eu/images/Schultze/351.jpg", null, "http://giorgioschultze.eu/images/Schultze/82.jpg", null, "http://giorgioschultze.eu/images/Schultze/309.jpg", null, "http://giorgioschultze.eu/images/Schultze/355.jpg", null, "http://giorgioschultze.eu/images/Schultze/412.jpg", null, "http://giorgioschultze.eu/images/Schultze/155.jpg", null, "http://giorgioschultze.eu/images/Schultze/312.jpg", null, "http://giorgioschultze.eu/images/Schultze/172.jpg", null, "http://giorgioschultze.eu/images/Schultze/69.jpg", null, "http://giorgioschultze.eu/images/Schultze/380.jpg", null, "http://giorgioschultze.eu/images/Schultze/18.jpg", null, "http://giorgioschultze.eu/images/Schultze/326.jpg", null, "http://giorgioschultze.eu/images/Schultze/337.jpg", null, "http://giorgioschultze.eu/images/Schultze/371.jpg", null, "http://giorgioschultze.eu/images/Schultze/233.jpg", null, "http://giorgioschultze.eu/images/Schultze/251.jpg", null, "http://giorgioschultze.eu/images/Schultze/347.jpg", null, "http://giorgioschultze.eu/images/Schultze/176.jpg", null, "http://giorgioschultze.eu/images/Schultze/521.jpg", null, "http://giorgioschultze.eu/images/Schultze/390.jpg", null, "http://giorgioschultze.eu/images/Schultze/69.jpg", null, "http://giorgioschultze.eu/images/Schultze/437.jpg", null, "http://giorgioschultze.eu/images/Schultze/286.jpg", null, "http://giorgioschultze.eu/images/Schultze/195.jpg", null, "http://giorgioschultze.eu/images/Schultze/145.jpg", null, "http://giorgioschultze.eu/images/Schultze/138.jpg", null, "http://giorgioschultze.eu/images/Schultze/110.jpg", null, "http://giorgioschultze.eu/images/Schultze/381.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8311242,"math_prob":0.9887722,"size":6170,"snap":"2019-13-2019-22","text_gpt3_token_len":1171,"char_repetition_ratio":0.3306844,"word_repetition_ratio":0.33724654,"special_character_ratio":0.1834684,"punctuation_ratio":0.12316716,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9939704,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60],"im_url_duplicate_count":[null,1,null,1,null,3,null,3,null,2,null,1,null,1,null,1,null,1,null,2,null,2,null,1,null,3,null,1,null,1,null,1,null,1,null,3,null,1,null,1,null,1,null,1,null,2,null,2,null,2,null,1,null,1,null,1,null,1,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-22T02:47:04Z\",\"WARC-Record-ID\":\"<urn:uuid:913ffd81-8b45-45ab-9a7d-d4175c73de8d>\",\"Content-Length\":\"32978\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d9884ae3-2512-495f-8b8d-02ceae98406b>\",\"WARC-Concurrent-To\":\"<urn:uuid:071ef059-3efc-410c-9972-560721b7cd7d>\",\"WARC-IP-Address\":\"104.28.6.236\",\"WARC-Target-URI\":\"http://giorgioschultze.eu/02-15/31637-calculate_cycle_time_in_surface_grinding.html\",\"WARC-Payload-Digest\":\"sha1:F5AG6LOVAEV3FZJZRYEXZXCQQ2Q52VHG\",\"WARC-Block-Digest\":\"sha1:OFL2NMSN434PYWYDR3U3MUZE54MSPIUN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202589.68_warc_CC-MAIN-20190322014319-20190322040319-00374.warc.gz\"}"}
https://physics.stackexchange.com/questions/260953/friedmann-equations-with-w-1/260955
[ "# Friedmann equations with $w <-1$\n\nLet's consider a flat universe with an FRW metric with scale factor $a(t)$, with some matter content. The continuity equation $\\nabla_\\mu T^{\\mu\\nu}=0$ combined with assumptions of isotropy and homogeneity implies the equation $\\frac{d\\rho}{dt}=-3\\frac{\\dot a}{a}(\\rho+p)$. Now, if we consider equations of state of the form $p=w\\rho$, we can solve the above equation for $\\rho$ to obtain $\\rho\\propto a^{-3(1+w)}$. One usually considers the cases $w=-1,0,\\frac{1}{3}$, and there aren't many problems there (except $\\rho=const.$ for $w=-1$). But if one puts $w<-1$, it appears that the energy density increases with the scale factor! What's worse, a somewhat straightforward calculation would imply that the scale factor decreases with time. This kind of substance violates all the energy conditions, so I'm not sure how \"physical\" it is. I've tried integrating the Friedmann equations for $K\\neq 0$, but they seem a bit complex for an analytical solution. Does this imply that a substance with $w<-1$ can't exist in a FRW universe, or is one of my assumptions wrong ($K=0$ comes to mind)?\n\nUnder the assumptions that $a > 0$ and that the universe is expanding, we can derive some interesting results about the fate of such a universe.\n\nFrom the Friedmann equations alone, we may derive\n\n$$\\frac{d}{d \\tau} (\\rho a^3) = - P \\frac{d}{d \\tau} (a^3).$$\n\nFor $P = w \\rho$, as long as $w \\neq -1$, this yields\n\n$$\\rho \\propto \\frac{1}{a^{3(1 + w)}},$$\n\nexactly as you stated in your question. So, yes, if the universe is expanding and $w < -1$, then the energy density does increase with time!\n\nIn the flat case with no cosmological constant, $\\Lambda = K = 0$, we may integrate the first Friedmann equation $3 H^2 = 8 \\pi \\rho$, with this expression for $\\rho$, to yield\n\n$$a(\\tau) \\propto \\tau^{\\frac{2}{3(1+w)}},$$\n\nexactly as in the case $w > -1.$ The Hubble parameter is thus given by\n\n$$H = \\frac{\\dot{a}}{a} = \\frac{3}{3 (1 + w) \\tau}.$$\n\nIn an expanding universe, we have $H > 0.$ Since $1 + w < 0$, we must have $\\tau < 0$ in order for the universe to be expanding. But since $a(\\tau) \\propto \\tau^{\\frac{2}{3(1+w)}}$ and $w < -1,$ the scale factor diverges at $\\tau = 0.$ So the universe will suffer a \"big rip\" singularity at some finite time in the future.\n\nIn a contracting universe, we have $H < 0,$ and so $\\tau > 0.$ Since this is past the $\\tau=0$ singularity, such a universe must have originated at a \"big rip\" at some finite time in the past.\n\nSo if we assume that the universe has existed for finite time, then it must be contracting (as you stated in your question), and it must have originated with a divergent scale factor. On the other hand, if we assume that the universe is expanding, then it will meet a singularity in finite time as the scale factor diverges.\n\n• You mentioned \"But since $a(\\tau) \\propto \\tau^{\\frac{2}{3(1+w)}}$ and $w<−1$, the scale factor diverges at $\\tau=0$. So the universe will suffer a \"big rip\" singularity at some finite time in the future.\" If $a(\\tau)$ is divergent at $\\tau=0$, why would the big rip occur at a point in the future ($\\tau>0$) – IanDsouza Dec 29 '17 at 23:33\n• @IanDsouza In the first sentence of the same paragraph, I specified that (under our conventions) an expanding universe exists only in the regime $\\tau < 0$. So the point $\\tau = 0$ is in the future of an expanding universe. – user_35 Dec 30 '17 at 20:44\n• I see. But what is the physical significance of imposing the time to be negative? Is this just an imposition from the mathematics? – IanDsouza Dec 30 '17 at 21:02\n\nWhat you are commenting on is something called the big rip, or phantom energy. If you have $$\\frac{d\\rho}{dt} = -3\\frac{\\dot a}{a}(\\rho + p) = 0$$ then you have $p = -\\rho$, which is the sometimes commented on as negative pressure of dark energy. However for $p = w\\rho$ then $$\\frac{d\\rho}{dt} = -3\\frac{\\dot a}{a}(1 + w)\\rho$$ and we have for $\\dot a/a = \\sqrt{H}$ the Hubble parameter $$\\rho = \\rho_0exp\\left(-3\\sqrt{H}(1+w)t\\right)$$ for $w < -1$ something pathological happens in that the vacuum energy density grows exponentially with time.\n\nIf I put that in the FLRW equation result $$\\left(\\frac{\\dot a}{a}\\right)^2 = H = \\frac{8\\pi G\\rho}{3c^2}$$ and consider vacuum energy only then $a(t) = a_0exp(t\\sqrt{8\\pi G\\rho/3c^2})$. This is the exponential acceleration of the universe. However. if I put the $w < -1$ exponent for the energy density in this equation I get an exponential growth generated by an exponential growth! The universe tears itself to pieces.\n\n• I don't really see where your solution for $\\rho$ comes from, when I integrate the equation for $w\\neq 1$, I get $\\rho=\\rho_0 a^{-3(1+w)}$, and the Friedmann equation $H^2=\\left(\\frac{\\dot a}{a}\\right)^2=\\frac{8\\pi G}{3}\\rho$ integrates to a solution of the form $a(t)\\propto t^{\\frac{2}{3(1+w)}}$ (only the negative root though, the positive one turns out complex), so for $w<-1$ the universe shrinks instead of grows, at an accelerated rate even. Your solution would work for $H=const.$, but as the continuity and Friedmann eq. are coupled, I don't see how it can work. – blueshift Jun 6 '16 at 13:37" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8718186,"math_prob":0.9997651,"size":5226,"snap":"2021-04-2021-17","text_gpt3_token_len":1534,"char_repetition_ratio":0.11891995,"word_repetition_ratio":0.038946163,"special_character_ratio":0.31458095,"punctuation_ratio":0.088930935,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999901,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-28T16:30:15Z\",\"WARC-Record-ID\":\"<urn:uuid:10c7bfcf-d78c-4ee3-91ab-c2472d0d8085>\",\"Content-Length\":\"159868\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e6fd3f7d-ce2a-48c3-9f2e-a7508ba09222>\",\"WARC-Concurrent-To\":\"<urn:uuid:3990a3cf-e65e-467c-b09f-db59a0ceb9dc>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/260953/friedmann-equations-with-w-1/260955\",\"WARC-Payload-Digest\":\"sha1:QT77YS2623P4C4YNJU5S7SRF4TX73C42\",\"WARC-Block-Digest\":\"sha1:GI23H3QKGWRIN32QK4PDB3PGUDEYQ4WE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704847953.98_warc_CC-MAIN-20210128134124-20210128164124-00779.warc.gz\"}"}
https://edurev.in/studytube/Solutions-of-Sources-of-Energy--Page-No--157--Phys/339fb8fa-100b-4d83-a87a-239e60975783_t
[ "# Solutions of Sources of Energy (Page No- 157) - Physics By Lakhmir Singh, Class 10 Class 10 Notes | EduRev\n\n## Class 10 : Solutions of Sources of Energy (Page No- 157) - Physics By Lakhmir Singh, Class 10 Class 10 Notes | EduRev\n\nThe document Solutions of Sources of Energy (Page No- 157) - Physics By Lakhmir Singh, Class 10 Class 10 Notes | EduRev is a part of the Class 10 Course Class 10 Physics Solutions By Lakhmir Singh & Manjit Kaur.\nAll you need of Class 10 at this link: Class 10\n\nLakhmir Singh Physics Class 10 Solutions Page No:157\n\nQuestion 9: Where, in a nuclear power station, is uranium, used up ?\n\nSolution : Uranium is used up in the reactor.\n\nQuestion 10: State one use of nuclear fission reactions.\n\nSolution : Nuclear fission reactions are used for generating electricity at a nuclear power plant.\n\nQuestion 11: Name the unit which is commonly used for expressing the energy released in nuclear reactions.\n\nSolution : Million electron volt (MeV)\n\nQuestion 12: How many MeV are equivalent to 1 atomic mass unit (u) ?\n\nSolution : 1 atomic mass unit=931 MeV\n\nQuestion 13: Fill in the following blanks with suitable words :\n(a) Splitting of a heavy nucleus into two lighter nuclei is called…………\n(b) Uranium-235 atoms will split when hit by……… This is called……………….\n(c) Nuclear………. is used in nuclear power stations for the production of electricity.\n(d) In a nuclear power station, nuclear fission takes place in the…………\n\nSolution : (a) nuclear fission\n(b) neutrons; nuclear fission\n(c) fission\n(d) reactor\n\nQuestion 14: What is nuclear fission ? Explain with an example. Write the equation of the nuclear reaction involved.\n\nSolution : The process in which the heavy nucleus of a radioactive atom(such as uranium) splits up into smaller nuclei when bombarded with low energy neutrons, is called nuclear fission.\nE.g., When uranium-235 atoms are bombarded with slow moving neutrons, the heavy uranium nucleus breaks up to produce two medium-weighted atoms and 3 neutrons, with the emission of tremendous amount of energy.", null, "tremendous amount of energy\n\nQuestion 15: (a) What is nuclear fusion ? Explain with an example. Write the equation of the reaction involved.\n(b) Why are very high temperatures required for fusion to occur ?\n\nSolution : (a) The process in which two nuclei of light elements (like that of hydrogen) combine to form a heavy nucleus (like that of helium), is called nuclear fusion.\nWhen deuterium atoms are heated to an extremely high temperature under extremely high pressure, then two deuterium nuclei combine together to form a heavy nucleus of helium, and a neutron is emitted. A tremendous amount of energy is liberated in the process.", null, "tremendous amount of energy\n\n(b) Because very high energy is required to force the lighter nuclei (which repel each other) to fuse together to form a bigger nuclei.\n\nQuestion 16: What is the nuclear fuel in the sun ? Describe the process by which energy is released in the sun. Write the equation of the nuclear reaction involved.\n\nSolution : The nuclear fuel in the sun is hydrogen gas.\nThe sun can be considered as a big thermonuclear furnace where hydrogen atoms continuously get fused into helium atoms. Mass gets lost during these fusion reactions and energy is being produced.\nNuclear reaction:", null, "tremendous amount of energy\n\nQuestion 17: (a) Write Einstein’s mass-energy equation. Give the meaning of each symbol which occurs in it.\n(b) If 25 atomic mass units (u) of a radioactive material are destroyed in a nuclear reaction, how much energy is released in MeV ?\n\nSolution : (a) Einstein’s equation: E=mc2\nwhere E is the amount of energy produced if mass m is destroyed, and c is the speed of light in vacuum.\n(b) 1 atomic mass unit = 931 MeV\n25 atomic mass unit = 931 x 25 MeV = 23275 MeV\n23275 MeV of energy is released.\n\nQuestion 18: (a) What is the source of energy of ths sun and other stars ?\n(b) Describe the working of a hydrogen bomb.\n(c) What is common between the sun and a hydrogen bomb ?\n\nSolution : (a) Nuclear fusion reactions 0f hydrogen.\n(b) The hydrogen bomb consists of heavy isotopes of hydrogen (deuterium and tritium) alongwith lithium-6. The explosion of hydrogen bomb is done by using an atom bomb. When the aton bomb is exploded, its fission reaction produces a lot of heat which raises the temperature of deuterium and tritium to 107oC in a few microseconds. Then fusion reactions of deuterium and tritium take place producing a tremendous amount of energy. This explodes the hydrogen bomb. Lithium-6 is used to produce more tritium needed for fusion.\n(c) The source of energy is same for both the sun and the hydrogen atom, i.e. nuclear fusion.\n\nQuestion 19: (a) What will happen if slow moving neutrons are made to strike the atoms of a heavy element 92U235 ? What\nis the name of this process ?\n(b) Write a nuclear equation to represent the process which takes place.\n(c) Name one installation where such a process is utilised.\n\nSolution : (a) When slow moving neutrons are made to strike the atoms of a heavy element uranium-235, the heavy uranium nucleus breaks up to produce two medium-weighted atoms and 3 neutrons, with the emission of tremendous amount of energy. This process is called nuclear fission.", null, "tremendous amount of energy\n\n(c) Nuclear Power Station\n\nQuestion 20: (a) What are the advantages of nuclear energy ?\n(b) State the disadvantages of nuclear energy.\n\nSolution : (a) Advantages of nuclear energy:\n(i) It produces a large amount of useful energy from a very small amount of a nuclear fuel.\n(ii) Once the nuclear fuel is loaded into the reactor, the nuclear power plant can go on producing electricity for two to three years at a stretch. There is no need of feeding the fuel again and again.\n(iii) It does not produce gases like CO2 or SO2.\n(i) The waste products of nuclear fission reactions are radioactive which keep on emitting harmful radiations for thousands of years and are difficult to store or dispose safely.\n(ii) Very high cost of installation is required.\n(iii) There is a limited availability of uranium fuel.\n\nQuestion 21: The following questions are about the nuclear reactor of a power plant.\n(a) Which isotope of uranium produces the energy in the fuel rods ?\n(b) Will the fuel rods last for ever ?\n(c) Is the energy produced by nuclear fission or nuclear fusion ?\n(d) What is the purpose of using the graphite moderator ?\n(e) What is the function of boron rods in the nuclear reactor ?\n(f) Why is liquid sodium (or carbon dioxide gas) pumped through the reactor ?\n\nSolution : (a) Uranium-235\n(b) No\n(c) Nuclear fission\n(d) Moderator slows down the speed of neutrons to make them fit for causing fission.\n(e) Boron rods are used to absorb excess neutrons and prevent the fission reaction from going out of control.\n(f) Liquid sodium or carbon dioxide gas is used as a ‘coolant’ to transfer the heat produced to heat exchanger for converting water into steam.\n\nQuestion 22: In the reactor of a nuclear power plant, name the material which is used :\n(a) as a moderator\n(c) in the fuel rods\n(d) in the control rods\n(e) to carry away heat\n\nSolution : (a) Graphite\n(b) Concrete\n(c) Uranium-235\n(d) Boron\n(e) Liquid sodium\n\nQuestion 23: In the nuclear reactor of a power plant:\n(a) how do control rods control the rate of fission ?\n(b) how is heat removed from the reactor core, and what use is made of this heat ?\n\nSolution : (a) Control rods control the rate of fission by absorbing the excess neutrons and preventing the fission reaction from going out of control.\n(b) Heat is removed from the nuclear reactor core with the help of liquid sodium, which absorbs the heat and transfers it to the heat exhanger. This heat is used for converting water in the heat exchanger into steam, which is then used to produce electricity by rotating a turbine and its shaft connected to a generator.\n\nQuestion 24: How does inserting the control rods in the graphite core affect the fission in the reactor ?\n\nSolution : On inserting the control rods in the graphite core, the rods start absorbing the excess neutrons and maintain the rate of reaction as per requirement. The rods can be raised or lowered in the reactor from outside. The part which is inside the reactor absorbs neutrons.\n\nQuestion 25: What are the advantages and disadvantages of using nuclear fuel for generating electricity ?\n\nSolution : Advantages of using nuclear fuel: Electricity can be produced for almost two to three years with the same uranium fuel in a nuclear power plant.\nDisadvantages of using nuclear fuel: The nuclear wastes produced by the fission of uranium-235 during the generation of electricity are radioactive and extremely harmful.\n\nQuestion 26: (a) What is a nuclear reactor ? What is the fuel used in a nuclear reactor ?\n(b) With the help of a labelled diagram, describe the working of a nuclear power plant.\n(c) How is the working nuclear reactor of a power plant shut down in an emergency ?\n(d) Name five places in India where nuclear power plants are located.\n\nSolution : (a) Nuclear reactor is a device designed to maintain a chain reaction producing a steady flow of neutrons generated by the fission of heavy nuclei. Uranium-235 is used as a fuel in a nuclear reactor.", null, "In a nuclear power plant, the fission of uranium-235 is carried out in a reactor R. Uranium-235 rods are inserted in a graphite core which acts as a moderator to slow down the neutrons. Boron rods B absorb excess neutrons and controls the rate of reaction. Liquid sodium or carbon dioxide gas, which is pumped continuously through pipes embedded in reactor by using a pump P, is used as a ‘coolant’ to transfer the heat produced to heat exchanger for converting water into steam. The hot steam at high pressure goes into a turbine chamber and makes the turbine rotate. The shaft of the generator also rotates and drives a generator connected to it.\n(c) By inserting the boron control rods fully into the reactor.\n(d) Five places in India where nuclear powerplants are located are:\n(i) Tarapur.\n(ii) Kalpakkam.\n(iii) Narora.\n(iv) Kaprapur.\n(vi) kaiga.\n\nOffer running on EduRev: Apply code STAYHOME200 to get INR 200 off on our premium plan EduRev Infinity!\n\n94 docs\n\n,\n\n,\n\n,\n\n,\n\n,\n\n,\n\n,\n\n,\n\n,\n\n,\n\n,\n\n,\n\n,\n\n,\n\n,\n\n,\n\n,\n\n,\n\n,\n\n,\n\n,\n\n,\n\n,\n\n,\n\n;" ]
[ null, "https://cdn3.edurev.in/ApplicationImages/Temp/2991120_bd9cfc51-42cd-412b-a99f-5fbf82891c46_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/2991120_1c733ab0-cb70-42fa-85a9-3e5b56a2d032_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/2991120_1bcbc22b-70b1-4575-bd9b-0791745bce42_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/2991120_afa920a1-882f-44cc-8ef4-20a657ba0ef9_lg.png", null, "https://cdn3.edurev.in/ApplicationImages/Temp/2991120_b472e598-0263-42b2-a0aa-ca432baec0b2_lg.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89701456,"math_prob":0.83974534,"size":9585,"snap":"2021-04-2021-17","text_gpt3_token_len":2152,"char_repetition_ratio":0.16167414,"word_repetition_ratio":0.08921037,"special_character_ratio":0.22587377,"punctuation_ratio":0.09856828,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9623429,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-16T21:57:07Z\",\"WARC-Record-ID\":\"<urn:uuid:ff12651b-2651-4cb2-9fa7-f4a257ad8111>\",\"Content-Length\":\"317618\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ca2c3de2-3fee-45a9-93ba-50a6082efe62>\",\"WARC-Concurrent-To\":\"<urn:uuid:868e9084-487b-47fe-b6a7-7e7ce70225eb>\",\"WARC-IP-Address\":\"35.247.187.6\",\"WARC-Target-URI\":\"https://edurev.in/studytube/Solutions-of-Sources-of-Energy--Page-No--157--Phys/339fb8fa-100b-4d83-a87a-239e60975783_t\",\"WARC-Payload-Digest\":\"sha1:URGA2WIYRD2ANHKJYLFDA4Y2NWU7KGIM\",\"WARC-Block-Digest\":\"sha1:K4TLWU4OFZLESHU3I4LT6FJLQ24RIC4Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703507045.10_warc_CC-MAIN-20210116195918-20210116225918-00374.warc.gz\"}"}
https://www.cplusplus.com/forum/general/125453/
[ "### Help! To develop a program (For-loop)", null, "help\ni need to create a prgramme that does ..\n·\nDemonstrates\na ‘for’ loop that produces the\noutput: 3, 5, 9, 17, 33, 65. (Hint: the loop variable value is first doubled,\nthen reduced by 1)\n\nDemonstrates a ‘do’\n– ‘while’ loop that does exactly the same, just the other way round (Hint:\nfirst add 1 to the loop variable, then divide by 2). However, the 17 must not\nbe printed out, i.e. the output is: 65, 33, 9, 5, 3", null, "1)\n ``123456789101112131415161718192021`` ``````//series.cpp //## #include using std::cout; using std::endl; int main(){ for(int i=3;i<=66;i*=2){ //if i is equal to 3 just print i; else print i-1 cout<<(i==3?i:i-1)<<' '; if(i!=3)--i; //if i is not equal to 3 i-1 (decreases i by 1); }//end for cout<", null, "Why not simply put:\n `` `` ``for(int i = 3; i < 66; i = i * 2 - 1)``\n\nThen you can ignore the ternary operator and if statement.\n\nThe assignment is pretty straight forward could you show some code prash?\n\n just noticed I put + 1 instead of -1\n\nLast edited on", null, "You could accomplish it even faster by using\n ``12`` ``````for(unsigned int i = 3; i <= 65; (i *= 2) -= 1) cout << i << \" \";``````\n\nRemoving the need for an if statement.\n\nAs for the 2nd task, something like this should work\n ``12345678`` ``````unsigned int i = 65; do { if(i != 17) cout << i << \" \"; i = (i + 1) / 2; } while(i >= 3);``````", null, "I think that keeps (easy to understand) simple\nthe implementation.\nBut you are clever\npeople dudes ;D", null, "Thanks guys that was really helpful!! :)\nTopic archived. No new replies allowed." ]
[ null, "https://www.cplusplus.com/img/link.png", null, "https://www.cplusplus.com/img/link.png", null, "https://www.cplusplus.com/img/link.png", null, "https://www.cplusplus.com/img/link.png", null, "https://www.cplusplus.com/img/link.png", null, "https://www.cplusplus.com/img/link.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.77267355,"math_prob":0.94971853,"size":727,"snap":"2021-04-2021-17","text_gpt3_token_len":256,"char_repetition_ratio":0.08437068,"word_repetition_ratio":0.0,"special_character_ratio":0.3741403,"punctuation_ratio":0.21989529,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.994255,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-27T13:56:34Z\",\"WARC-Record-ID\":\"<urn:uuid:bd541785-0b44-4e96-8ab2-9756bdd23e47>\",\"Content-Length\":\"13626\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0eccf8aa-f124-45b6-9d5b-4e08d5f78f77>\",\"WARC-Concurrent-To\":\"<urn:uuid:f63f36bd-e0cb-4ff6-8341-71f8cf7d7e50>\",\"WARC-IP-Address\":\"144.217.110.12\",\"WARC-Target-URI\":\"https://www.cplusplus.com/forum/general/125453/\",\"WARC-Payload-Digest\":\"sha1:7IWZ77OHLATJOQQH22OABZ2DMERU6UOE\",\"WARC-Block-Digest\":\"sha1:EYNHT5E62AFWX7GOR5GD3BQEVCUUTJHI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704824728.92_warc_CC-MAIN-20210127121330-20210127151330-00560.warc.gz\"}"}
https://databasefaq.com/index.php/tag/functional-programming
[ "FAQ Database Discussion Community\n\n## Assigning unique variable from a data.frame\n\nr,function,functional-programming,unique,identity-column\nThis is a similiar question to this but my output results are different. Take the data: example <- data.frame(var1 = c(2,3,3,2,4,5), var2 = c(2,3,5,4,2,5), var3 = c(3,3,4,3,4,5)) Now I want to create example\\$Identity which take a value from 1:x for each unique var1 value I have used example\\$Identity <- apply(example[,1],...\n\n## To “combine” functions in javascript in a functional way?\n\njavascript,functional-programming\nI'm learning functional programming and I wonder if there is a way to \"combine\" functions like this: function triple(x) { return x * 3; } function plusOne(x) { return x + 1; } function isZero(x) { return x === 0; } combine(1); //1 combine(triple)(triple)(plusOne)(1); // 10 combine(plusOne)(triple)(isZero)(-1); // true If...\n\n## Semantically correct way to modify a Ruby hash - a functional approach?\n\nruby,hash,functional-programming\nI have to clean up a object in Ruby, and the way I have been doing that is by cloning a hash, and modifying an original. This was my original method: def remove_empty_stories(content) content.clone.each do |section,stories| for i in 0...stories.length do if stories[i].length < 25 content[section].delete_at(i) end end end content...\n\njava,scala,functional-programming\nI know in Scala and many other functional languages there are monads that is mostly just an interface realization(for example in Scala with flatMap[T] and unit[T] methods) is there are any Java-style interfaces that could be a monad?\n\n## How to get only specific elements of list in racket\n\nfunctional-programming,scheme,racket\nInput: '((\"may 001\" 75 72) (\"may 002\" 75 75) (\"may 003\" 70 73) (\"june 101\" 55 55) (\"june 104\" 55 54) (\"aug 201\" 220 220)) Desired output: '((\"may 001\" 75 72) (\"may 002\" 75 75) (\"may 003\" 70 73)) How do I achieve this? I only want the may terms....\n\n## Apply a list of parameters to a curried function\n\nfunctional-programming,sml,currying\nsimple task: all I want is a function to apply a list of parameters to a curried function. Let's say our function is the famous add one: fun add a b = a + b; Now all I want is a function to apply a list (say [1, 5]) to...\n\n## Order items based on an other field MySQL\n\nphp,mysql,functional-programming\nI have a music theme for wordpress which has album listing page, albums are sorted ASC or DESC. I want the albums to be sorted based on another field which is a post_modified As I am new in PHP and programming can not figure it out self. I am pasting...\n\n## Java 8 Lambda expressions for solving fibonacci (non recursive way)\n\njava,lambda,functional-programming,java-8\nI am a beginner in using Lambda expression feature in Java 8. Lambda expressions are pretty well useful in solving programs like Prime number check, factorial etc. However can they be utilized effectively in solving problems like Fibonacci where the current value depends on sum of previous two values. I...\n\n## In Scala, is there an equivalent of Haskell's “fromListWith” for Map?\n\nIn Haskell, there is a function called fromListWith which can generate a Map from a function (used to merge values with the same key) and a list: fromListWith :: Ord k => (a -> a -> a) -> [(k, a)] -> Map k a The following expression will be evaluated...\n\n## In simplest term, what is currying and why should this approach be favored over traditional programming paradigm?\n\nscala,functional-programming,software-engineering,currying\nI am having hard time understanding currying through several sources on web . Isn't there more intuitive example of currying? Also, what are its advantages over traditional programming paradigm ? Is currying possible to achieve on non-functional programmming?...\n\n## Why use Object.create inside this reduce callback?\n\njavascript,functional-programming\nSo while working on #19 of this fine tutorial - http://jhusain.github.io/learnrx/, I find that the exercise works without using Object.create. (See the commented-out lines) 1. So what is the point of creating that copy of the accumulatedMap? Other than showing that it is possible... function() { var videos = [...\n\n## Creating a filterable list with RxJS\n\njavascript,functional-programming,reactive-programming,rxjs,reactive-extensions-js\nI'm trying to get into reactive programming. I use array-functions like map, filter and reduce all the time and love that I can do array manipulation without creating state. As an exercise, I'm trying to create a filterable list with RxJS without introducing state variables. In the end it should...\n\n## confused about return array#map javascript [duplicate]\n\njavascript,arrays,functional-programming\nThis question already has an answer here: Can anyone explain why linebreaks make return statements undefined in JavaScript? 4 answers I have the code: function func1(){ return array.map(function(el){ return el.method(); }); } function func2(){ var confused = array.map(function(el){ return el.method(); }); return confused; } Why func1 return undefined while...\n\n## Evaluate a list of functions in Clojure\n\nclojure,functional-programming,clojurescript\nI have a list of functions which have no side-effects and take the same arguments. I need to evaluate each function in my list and put the results into another list. Is there a function in Clojure that does it?\n\n## Purity, Referential Transparency and State Monad\n\nscala,functional-programming\nI'm currently designing a numerical algorithm which as part of its operations requires updating a vector of doubles many times. Due to the fact that the algorithm has to be as space and time efficient as possible, I do not want to code the traditional type of FP code which...\n\n## How to implement this simple algorithm elegantly in Scala\n\nlist,scala,functional-programming\nI would like to have an elegant implementation of the method with the following (or similar) signature: def increasingSubsequences(xs: List[Int]): List[List[Int]] What it does is it splits the input sequence without reordering the elements so that every subsequence in the result is strictly increasing. I implemented it myself as follows:...\n\n## What is the type of the variable in do-notation here in Haskell?\n\nThe codes below looks quite clear: do x <- Just 3 y <- Just \"!\" Just (show x ++ y) Here the type of x is Num and y is String. (<- here is used to take actual value out of the Monad) However, this snippet looks not so clear...\n\n## Lazy concat in Immutable.js?\n\njavascript,functional-programming,immutable.js\nIs there a way to do a lazy concat of two sequences in Immutable.js? Specifically, I have this algorithm that flattens a tree into a breadth-first sequence: var Immutable = require('immutable'); var tree = { value: 'foo', children: [ { value: 'bar', children: [ { value: 'barOne'}, { value: 'barTwo'}...\n\n## F# pass by reference\n\nf#,functional-programming,pass-by-reference,out,ref\nI'm trying to pass by reference in F#. In C#, it is pretty easy using the ref and out keywords, but it seems in F#, it is not that simple. I just read this: http://davefancher.com/2014/03/24/passing-arguments-by-reference-in-f/, which suggests using a reference cell. (I suppose this means there is no analog to...\n\n## Using Diagrams library in haskell (draw binary trees)\n\nI am trying to use the Haskell Diagrams library for drawing binary trees. This is my tree type: data Tree a = Empty | Node { label :: a, left,right :: Tree a } leaf :: a -> Tree a leaf a = Node a Empty Empty This is a...\n\n## F# Observable - Converting an event stream to a list\n\nf#,functional-programming,reactive-programming,observable\nI was writing an unit test that verified the events fired from a class. I followed the standard \"IEvent<_>, Publish, Trigger inside an FSharp type\" pattern. Can you recommend the \"functional\" way to achieve that? Here are the options I can think of: Convert the event stream into a list...\n\n## Kind Classification of Types\n\nIn Benjamin Pierce's book on Types and Programming Languages he classifies the different kinds of types as follows: * the kind of proper types (like Bool and Bool -> Bool) * -> * the kind of type operators (i.e., functions from proper types to proper types) * -> * ->...\n\n## Why prefer Typeclass over Inheritance?\n\nscala,inheritance,functional-programming,typeclass\nAccording to this Erik Osheim's slide, he says the inheritance can solve the same problem as typeclass would, but mentions that inheritance has a problem called: brittle inheritance nightmare and says the inheritance is tightly coupling the polymorphism to the member types What is he means? In my opinion, Inheritance...\n\n## How do you represent nested types using the Scott Encoding?\n\nAn ADT can be represented using the Scott Encoding by replacing products by tuples and sums by matchers. For example: data List a = Cons a (List a) | Nil Can be encoded using the Scott Encoding as: cons = (λ h t c n . c h t) nil...\n\n## Erlang syntax error unclear\n\nfunction,variables,if-statement,functional-programming,erlang\nI just got started with Erlang. I am trying if statement. I found out one particular behavior which I do not understand. the following statement does work perfectly. some_comp(Arg1) -> if (cal(Arg1)>50000)->'reached'; true -> 'invalid' end. cal(Arg2)-> %% some calculation. However the following shows an error syntax near if: some_comp(Arg1)...\n\n## Elm: understanding foldp and mouse-clicks\n\nfunctional-programming,elm\nI'm currently learning Elm. relatively new to functional programming. i'm trying to understand this example from http://elm-lang.org/learn/Using-Signals.elm on counting mouse-clicks. they provide the following code: clickCount = foldp (\\click count -> count + 1) 0 Mouse.clicks They explain that foldp takes three arguments: a counter-incrementer, which we defined as an...\n\n## Is there any formula that maps an int on the range [0..πR²] to the (x,y) coordinates inside the circle of radius R?\n\nalgorithm,math,language-agnostic,functional-programming,formula\nThe formula: index(i,w,h) = (i%w, (i/w)%h) uniquely maps each integer i on the range [0..w*h] to a coordinate inside the rectangle of width w and height h. Is there any similar formula: index(i,r) = ? that uniquely maps each integer on the range [0..πR²] to a coordinate inside the circle...\n\n## Accessing call stack depth in Scheme\n\nfunctional-programming,scheme,tail-recursion,callstack\nIn order to demonstrate the effectiveness of tail recursion, I would like a way to access the depth of the call stack dynamically in Scheme. Is there a way to do this? If not, is there a way to do this in other major functional languages (OCaml, Haskell, etc.)?...\n\n## Determine the arity of a function handle and currying\n\nmatlab,functional-programming,currying,arity\nIs there any way to determine the arity of a function and/or curry functions in MATLAB? I can't find any documentation on the matter.\n\n## Does for..in loop keeps track of orders of its properties for Object?\n\njavascript,arrays,object,functional-programming,reduce\nI'm a beginner for programming in general, and JavaScript is my first language. While I was studying, I faced with this concern, and I'm not sure if my concern is appropriate. I thought for..in loop does not keep track of orders when we loop through an Object, because, unlike Arrays,...\n\n## functional way to accumulate pairs in java8\n\njava,functional-programming,java-8,java-stream\nHere's some imperative code that I'm trying to translate into functional programming code: public class Person { String name; Token token; public Person(String name, Token token) { this.name = name; this.token = token; } } public class Token { String id; boolean isValid; public Token(String id, boolean isValid) { this.id...\n\n## in clojure, function argument type mismatch\n\nclojure,functional-programming,lisp\nclojure, function argument is vector, but it takes a map without problem. (defn flower-colors [colors] (str \"The flowers are \" (:flower1 colors) \" and \" (:flower2 colors))) (flower-colors {:flower1 \"red\" :flower2 \"blue\"}) ;; -> \"The flowers are red and blue\" Function flower-colors suppose to take vector type argument, but with...\n\n## Functional way of doing a loop of operations on an array\n\njava,scala,functional-programming\nI currently have a Java program which does something like the following: int nvars = 10; long vars[] = new long[nvars]; for(int i = 0; i < nvars; i++) { vars[i] = someFunction(i); anotherFunction(vars[i]); } I am converting it into Scala code and have: val nvars: Int = 10 val...\n\n## Why is 'window.angular' used like so, in this function definition?\n\njavascript,angularjs,functional-programming\nI'm trying to understand an angularjs file I need to use to integrate with Django, and it has a weird syntax I'm not familiar with (bear in mind I'm a junior dev, so this may be your bread and butter)... It goes something like: (function(angular, undefined){ 'use script'; var djng_forms_module...\n\n## Deep changing values in a JavaScript object\n\njavascript,recursion,data-structures,functional-programming,mootools\nI have an object which contains an unknown number of other objects. Each (sub-)object may contain boolean values as strings and I want to change them to real boolean values. Here's an example object: var myObj = { my1stLevelKey1: \"true\", my1stLevelKey2: \"a normal string\", my1stLevelKey3: { my2ndLevelKey1: { my3rdLevelKey1: {...\n\n## Is this definition of a tail recursive fibonacci function tail-recursive?\n\nscala,f#,functional-programming,tail-recursion,continuation-passing\nI've seen around the following F# definition of a continuation-passing-style fibonacci function, that I always assumed to be tail recursive: let fib k = let rec fib' k cont = match k with | 0 | 1 -> cont 1 | k -> fib' (k-1) (fun a -> fib' (k-2)...\n\n## Is there a better way to implement a functional recursive findById in javascript than this?\n\njavascript,functional-programming\nI dislike loops, however this seems easy to solve with a loop and hard using functional programming. Here's the loop version: for(var i = 0; i < collection.length; i++) { var result = collection[i].findById(id); if (result) { return result; } } Since this is a common pattern, I expected to...\n\n## Apply successive filters to an array in Ruby\n\nruby,functional-programming\nI have an array of some objects, perhaps strings or numbers. I want to apply some arbitrary filters to this array, and I want to be able to specify the order of the filters. Here's an example (I know that in this toy example, order doesn't matter, but I want...\n\n## First word of binary string erlang\n\nfunctional-programming,erlang,pattern-matching\nI need to do something like this <<\"Some text in binary\">>, and it should return <<\"Some\">. How can i do it without split function, only by pattern matching and select/if cases with Erlang. Thank you.\n\n## Hook into GHC runtime system\n\nI have been looking at how transactional memory is implemented in Haskell, and I am not sure I understand how the STM operations exposed to the programmer hook into the runtime system functions written in C. In ghc/libraries/base/GHC/Conc/Sync.hs of the git repo, I see the following definitions: -- |A monad...\n\n## Java 8 streaming API using with Map\n\nlambda,functional-programming,java-8,java-stream\nI see this code snippet at my work. I am unable to get correct picture of what is going on here. I tried using debugger to get values, but debugger is not helpful here. public static void process (ErrorCat exc, String toFind) { Map<String, Function<Error, Error>> translate = new HashMap<>();...\n\n## Cycling through a vector whose elements are inputs to another function in R\n\nr,functional-programming,combinations\nI need to apply all the combinations of the elements of a vector to a particular function, and use these elements as inputs for this function as well. I would like it to be somewhat fast, but any combination of apply and its different flavors has proved fruitless so far....\n\n## How to handle initial nil value for reduce functions\n\nswift,functional-programming,reduce\nI would like to learn and use more functional programming in Swift. So, I've been trying various things in playground. I don't understand Reduce, though. The basic textbook examples work, but I can't get my head around this problem. I have an array of strings called \"toDoItems\". I would like...\n\n## How to effectively get indices of 1s for given binary string using Scala?\n\nscala,functional-programming,higher-order-functions\nSuppose we have a binary string such as 10010010. All I want is a function returning indices of 1s for that string: indicesOfOnes(\"10010010\") -> List(0, 3, 6) indicesOfOnes(\"0\") -> List() And what I implemented is: def indicesOfOnes(bs: String): List[Int] = { val lb = ListBuffer[Int]() bs.zipWithIndex.foreach { case (v, i)...\n\n## What is the intuition behind the checkerboard covering recursive algorithm and how does one get better at formulating such an algorithm?\n\npython,algorithm,recursion,functional-programming,induction\nYou may have heard of the classic checkerboard covering puzzle. How do you cover a checkerboard that has one corner square missing, using L-shaped tiles? There is a recursive approach to this as explained in the book \"Python Algorithms Mastering Basic Algorithms in the Python Language.\" The idea is to...\n\n## Is memoizing possible without side effects\n\nf#,functional-programming,memoization,side-effects\nI have some F# code that caches results for future lookup. My understanding is that dictionaries and other data structures that you add to require side effects. (i.e. changing the state of the dictionary) Is this correct? Is this considered impure or is this still in the model of side...\n\n## Idiomatic exception handling for socket connection\n\nscala,sockets,error-handling,functional-programming,try-with-resources\nI'm trying to understand how I can elegantly use scala.util.control.Exception package. To be more specific I want to convert this piece of Java code to functional way: public static boolean hostAvailabilityCheck() { try (Socket s = new Socket(SERVER_ADDRESS, TCP_SERVER_PORT)) { return true; } catch (IOException ex) { /* ignore */...\n\n## Swift: Avoid imperative For Loop\n\nswift,ios8,functional-programming\nWhat I'm trying to accomplish in imperative: var mapNames = [String]() var mapLocation = [String]() for valueMap in valueMaps { if let name = valueMap.name { mapNames.append(name) } if let location = valueMap.location { mapLocation.append(location) } } What's the best way using a high order function or perhaps an array...\n\n## What is Anamorphism - example in C#\n\nc#,functional-programming,catamorphism\nI am trying to wrap my head around the concept of anamorphism. In functional programming, an anamorphism is a generalization of the concept of unfolds on lists. Formally, anamorphisms are generic functions that can corecursively construct a result of a certain type and which is parameterized by functions that determine...\n\n## A little confusion with the sorted function in Haskell\n\nsorted :: Ord a => [a] -> Bool sorted xs = and [x <= y | (x,y) <- pairs xs] Can anyone explain to me what this random and is doing after =? It works when I compile it but it doesn't make logical sense to me. Is it because...\n\n## Sum of Fibonacci term using Functional Swift\n\nswift,functional-programming\nI'm trying to learn functional Swift and started doing some exercises from Project Euler. Even Fibonacci numbers Problem 2 Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be: 1, 2, 3, 5,...\n\n## Functional way of finding inverse ranges\n\njavascript,regex,functional-programming,templating\nI'm writing a templating utility for a library and trying to do it in a functional way (no mutable variables and ideally no intermediate state). This is mainly a learning exercise and I know there are libraries out there that do this already. Given the string: \"Hello, {{name}}! What are...\n\n## In underscore/lodash, how to avoid duplicate calculation in a `map` method?\n\njavascript,functional-programming,underscore.js,lodash\nHere is my code: var transformed = _(original).map(function (c) { return { lat: wgs2gcj(c.latitude, c.longitude).lat lng: wgs2gcj(c.latitude, c.longitude).lng } }); Let's say wgs2gcj is a function from a third-party library and will take a long time to compute. Is there a way to do the calculation only once?...\n\n## Grouping a range of integers to the answer of a function\n\njava,functional-programming,java-8,java-stream\nFor a range of integers, I would like to apply an (\"expensive\") operation, filter out only those integers with interesting answers, then group on the answer. This first snippet works, but it duplicates the operation (\"modulus 2\") both in code and computation: IntStream.range(1, 10).boxed() .filter(p -> (p % 2 !=...\n\n## Convert loop to Maybe monad\n\nc#,functional-programming,maybe\nRecently I tried applying Maybe monad pattern in my C# code using this library. What I found difficult to grasp was converting such a function into Maybe paradigm: public Maybe<object> DoSomething(IReader reader) { while (true) { var result = reader.Read(); if (result == null) return Maybe<object>.Nothing; if (result.HasValue) return new...\n\n## for vs map in functional programming\n\nscala,functional-programming\nI am learning functional programming using scala. In general I notice that for loops are not much used in functional programs instead they use map. Questions What are the advantages of using map over for loop in terms of performance, readablity etc ? What is the intention of bringing in...\n\n## Mapping a vector of one type to another using lambda\n\nc++,c++11,lambda,functional-programming\nI have a bit of code that looks like B Convert(const A& a) { B b; // implementation omitted. return b; } vector<B> Convert(const vector<A>& to_convert) { vector<B> ret; for (const A& a : to_convert) { ret.push_back(Convert(a)); } retun ret; } I was trying to rewrite this using lambdas but...\n\n## Idiomatic list construction\n\nI'm very new to Haskell and functional programming in general, so I don't really know how to make this code idiomatic: type Coord = Double data Point = Point Coord Coord Coord deriving Show type Polyline = [Point] -- Add a point to a polyline addPoint :: Polyline -> Point...\n\n## Having trouble stepping through function that reduces an array of functions\n\njavascript,functional-programming,reduce,higher-order-functions\nWhen using the reduce method on an array of functions I am having difficulty tracing through how reduce works on the array exactly. comboFunc(num, functionsArr) { return functionsArr.reduce(function (last, current) { return current(last); }, input); } so with functionsArr = [add, multi] and the functions add and multi being function...\n\n## Create array using _.each\n\njavascript,arrays,functional-programming,underscore.js,each\nI've this exercise to solve: Use _.each to create an array from 1, 1000 (inclusive) I really don't know how to do that and i'm thinking it's not possible... Can you help me?...\n\n## Turn object into array, add object as new element\n\njavascript,functional-programming,underscore.js,lodash\nTrying to transform an object of objects: var items: { item_a: { state: 'item_a status' }, item_b: { state: 'item_b status' } }; into an array of objects, whilst adding a new array element to the object (the object key): var items = [{ name: 'item_a', state: 'item_a status' },...\n\n## How are point-free functions actually “functions”?\n\nConal here argues that nullary-constructed types are not functions. However, point-free functions are described as such for example on Wikipedia, when they take no explicit arguments in their definitions, and it seemingly is rather a property of currying. How exactly are they functions? Specifically: how are f = map and...\n\n## Remove item from array using foreach - JavaScript [duplicate]\n\njavascript,functional-programming\nThis question already has an answer here: How do I remove an element in a list, using forEach? 2 answers Is it possible to remove something from an array using foreach? var array = [1,2,3,4,5,6,7,8]; array.forEach(function(data){ if (data == 4) { // do something here } }); console.log(array); ...\n\n## Why is there no “Functor” trait in Scala? [closed]\n\nIn Scala, the generic classes such as Future, Option and List all have methods map and flatMap. As I understand, all of them are like Functors in Haskell. I was just wondering why there isn't a trait (interface) called Functor in Scala.. Does anyone have ideas about this?...\n\nFunctional depth first search is lovely in directed acyclic graphs. In graphs with cycles however, how do we avoid infinite recursion? In a procedural language I would mark nodes as I hit them, but let's say I can't do that. A list of visited nodes is possible, but will be...\n\n## Java stream of optionals to optional stream\n\njava,functional-programming\nI need to convert Stream<Optional<Integer>> to Optional<Stream<Integer>>. The output Optional<Stream<Integer>> should be an empty value when at least one value ofStream<Optional<Integer>> is empty. Do you know any functional way to solve the problem? I tried to use collect method, but without success....\n\n## How to rewrite this code in Scala using a Functional Programming approach\n\nscala,functional-programming\nBelow is code snippet that does some URL normalization. How to rewrite it to use immutable variables only? Of course, without making it larger or more complex. private def normalizeUrl(url0: String) = { var url = url0 if (url.endsWith(\"/\")) { url = url.dropRight(1) } if (url.indexOf(':') < 0 || url.indexOf(':')...\n\n## Lazy functions evaluation in swift\n\nios,swift,if-statement,functional-programming,lazy-evaluation\nWondering is it possible to evaluate a simple if statement lazily. Below is an example that will print \"this is foo\" and \"this is bar\", but I really want to make it print only the first string: func foo() { println(\"this is foo\") } func bar() { println(\"this is bar\")...\n\nclojure,functional-programming\nI'm new in Clojure and i read that it is a functional language. It says that Clojure doesn't have variables, still when i find (def n 5), what's the difference between it and a variable? I can change the value of the var after, so is it really that different...\n\n## How to generate data idiomatically in f# inline in code\n\nf#,functional-programming,immutability,poker\nLets say i am attempting to implement some sort of poker program in f#. Firstly is this the correct use of the type system, massive newbie here. type Suit = | Hearts | Diamonds | Spades | Clubs type Card = { Suit:Suit Rank:int } type Hand = { Cards:List<Card>...\n\n## Joining a collection based on members of the type\n\njava,functional-programming,guava,method-chaining\nI have a class A and its members b and c. Now I construct the List<A> with this: add(new A().setb(\"abcd\").setc(\"123456\")); add(new A().setb(\"efgh\").setc(\"789101\")); add(new A().setb(\"ijkl\").setc(\"112345\")); I want to transform this List to string which looks like this abcd,123456 efgh,789101 ijkl,112345 Now the very obvious way would be to have a StringBuilder...\n\n## returning functions in R - when does the binding occur?\n\nr,function,functional-programming,lazy-evaluation\nAs in other functional languages, returning a function is a common case in R. for example, after training a model you'd like to return a \"predictor\" object, which is essentially a function, that given new data, returns predictions. There are other cases when this is useful, of course. My question...\n\n## Haskell function that accepts function or value, then calls function or returns value\n\nHow can I write a type declaration and function in Haskell that takes either a function (that itself takes no arguments) or a value. When given a function it calls the function. When given a value it returns the value. To give more context, I'm mostly curious how to...\n\n## Scala wrong forward reference\n\nscala,functional-programming\nI am working through some of the exercises in: Functional Programming in Scala specifically problem 5.2. The issue is that with the following code which I have pieced together from the answer key. sealed trait Stream[+A] { def take(n: Int): Stream[A] = this match { case Cons(hs, ts) if n...\n\n## Generating Custom Object from ArrayList in C# at runtime\n\nc#,asp.net,.net,dynamic,functional-programming\nI have the following content stored in ArrayList as pure string each line represents the value of an item in the list , is there a way to generate a dynamic object in the following style : [left operand is the property] = [right operand is the value of that...\n\n## A seemingly silly way of using the Stream API that leads to the need for Predicate\n\nfunctional-programming,java-8,java-stream\nA predicate on booleans seems a little silly to me (well, at least in the following scenario): static Set<A> aSet = ...; checkCondition(B b) { return aSet.stream() .map(b::aMethodReturningBoolean) .filter((Boolean check) -> check) .limit(1).count() > 0; } What I am doing is that given the object b, checking whether there is...\n\n## Computing a term of a list depending on all previous terms\n\nI have the following identity, that defines (implicitly) the number of partitions of positive integers (that is, the number of ways you can write the integer as the sum of ordered positive nonzero integers): Some notes: This is studied in the book Analytic Combinatorics by Flajolet and Sedjewick, and the...\n\n## How to rewrite Erlang combinations algorithm in Elixir?\n\nfunctional-programming,erlang,elixir\nI've been tinkering with Elixir for the last few weeks. I just came across this succinct combinations algorithm in Erlang, which I tried rewriting in Elixir but got stuck. Erlang version: comb(0,_) -> [[]]; comb(_,[]) -> []; comb(N,[H|T]) -> [[H|L] || L <- comb(N-1,T)]++comb(N,T). Elixir version I came up with...\n\n## How to get a Column from a Frame as a double array type in Deedle C#?\n\nc#,functional-programming,deedle\nI wish to extract a column from a frame as a new double array in C#. For example: double[] values = myFrame.GetColumn<double>(\"myColumnName\"); ...\n\n## How to re-create Underscore.js _.reduce method?\n\njavascript,functional-programming,underscore.js,reduce\nFor education purposes, I was trying to re-create Underscore.js's _.reduce() method. While I was able to do this in an explicit style using for loops. But this is far from ideal because it mutates the original list that was supplied as an argument, which is dangerous. I also realized that...\n\n## Easier way to apply multiple arguments in Haskell\n\nGiven value f with type :: Applicative f => f (a -> b -> c), What's the best way to map arguments to the inner function. So far I've found the following: (\\x -> x a b) <\\$> f (flip (\\$ a) b) <\\$> f (\\$ b) <\\$> (\\$ a)...\n\n## Is there a functional way to set a variable in javascript?\n\njavascript,functional-programming\nI am looking for something like this: set(\"variablename\", \"value\"); Why I need this? I want to write the following without creating a new function: navigator.getUserMedia({ audio: true }, success, fail); var success = function(stream) { var source = audioCtx.createMediaStreamSource(stream); } ...\n\n## why the interpreter tell me “This kind of expression is not allowed as right-hand side of `let rec'”\n\nfunctional-programming,ocaml\nI write a ocaml program that parse an arithmetic expression by parser combinator. type 'a parser = char list -> ('a * (char list)) list let return (x: 'a): 'a parser = fun input -> [x, input] let fail: 'a parser = fun _ -> [] let ( >>= )...\n\n## Apply a list of Functions to a Java stream's .map() method\n\njava,lambda,functional-programming,java-8,java-stream\nI map a stream of NameValuePairs with a lookupFunction (which returns a Function), like this: List<NameValuePair> paramPairs = getParamPairs(); List<NameValuePair> newParamPairs = paramPairs.stream() .map((NameValuePair nvp) -> lookupFunction(nvp.getName()).apply(nvp)) .flatMap(Collection::stream) .collect(toList()); But what if lookupFunction returned a Collection<Function> instead, and I wanted to perform a .map() with each of the returned Functions....\n\n## Why is `++` for Haskell List implemented recursively and costs O(n) time?\n\nAs I understood, a List in Haskell is a similar to a Linked-List in C language. So for expressions below: a = [1,2,3] b = [4,5,6] a ++ b Haskell implement this in a recursive way like this: (++) (x:xs) ys = x:xs ++ ys The time complexity for that...\n\n## Spray routing filter path parameter\n\nscala,functional-programming,akka,spray\ngiven this snippet of code val passRoute = (path(\"passgen\" / IntNumber) & get) { length => complete { if(length > 0){ logger.debug(s\"new password generated of length \\$length\") newPass(length) } else { logger.debug(\"using default length 8 when no length specified\") newPass(8) } } } How could I replace the if-else with...\n\n## Declaring a Ruby lambda with a yield\n\nruby,lambda,functional-programming\nI'm writing a method that splits an array of struct into a couple of different, arrays while also eliminating elements with nil values. What I want to write is: def my_func(some_data) f = lambda{|data| data.select{|m| yield(m).present? }.map { |m| [m.date, yield(m)]}} x = f.call(some_data) {|m| m.first_var} y = f.call(some_data) {|m|...\n\n## Is there a lazy functional (immutable) language where functions have intermediate variables+return?\n\nscope,functional-programming,lazy-evaluation\nI apologize if this has an obvious answer. I would like to find a lazy functional programming language where the following pseudo code makes sense: let f = function(x) { let y = x*x // The variables y and z let z = y*2 // are local return z }...\n\n## Lodash equals function\n\njavascript,functional-programming,underscore.js,lodash\nI need to make several operations with given map: var keys = { a: 1, b: 2, c: 1, d: 2, e: 2 } _.findKey(keys, function(value) { return value === 2; }); // \"b\" _.omit(keys, function(value) { return value === 2; }); // {a: 1, c: 1} I want to...\n\n## How is `each` different from `for-loop` when returning a value?\n\njavascript,functional-programming,find,each\nI have my each function which I created to emulate Underscore.js's _each() for my Javascript study. var each = function(list, iteratee) { if (Array.isArray(list)) { // list is array for (var i = 0; i < list.length; i++) { iteratee(list[i], i, list); } } else if (list.constructor === Object) {...\n\nfunctional-programming,sml\nI'm trying to write a bunch of functions in an SML file and then load them into the interpreter. I've been googling and came across this: http://www.smlnj.org/doc/interact.html Which has this section: Loading ML source text from a file The function use: string -> unit interprets its argument as a file...\n\n## OO way or functional way of string comparison conditionals in php\n\nphp,oop,functional-programming\nI am curling on a specific page that returns only html. To determine what page it returns, I simply try to stripos the result of the curl Like so: \\$result = curl_exec(\\$ch); if(stripos(\\$result, 'success') !== false) { // do something } else { if (stripos(\\$result, 'foo') !== false) { //...\n\n## First Object in Set> that satisfies a predicate\n\njava,functional-programming,java-8,future,rx-java\nAbstract idea I want to get the first value coming out of a set of Futures, that satisfies a given predicate. If a satisfying value is found, all other Futures should be cancelled. If no value is found after all Futures have returned the execution should be terminated (by returning...\n\n## LISP: read number from user and commpare with array index\n\nfunctional-programming,lisp,common-lisp\nHello guys I'm new In functional programming Really it is not Clear for me anyone can help me ? My Question just for getting the philosophy of writing on Functional programming language for example how I can write a program in Lisp language for reading the user inputs and compare...\n\n## Functional way of programming in Java 1.7 [closed]\n\njava,functional-programming\nIs it possible to code application in functional programming way without using Java 8 and other 3rd party library? I mean to ask that if we follow some design patterns can that help us achieve functional programming paradigm in Java 1.7 I am not much experienced with design patterns, i...\n\n## Why would my find method return undefined?\n\njavascript,functional-programming,find,each\nI'm recreating a number of Underscore.js methods to study JavaScript and programming in general. Below is my attempts to recreate Underscore's _.find() method. var find = function(list, predicate) { // Functional style _.each(list, function(elem){ if (predicate(elem)) { return elem; } }); }; var find = function(list, predicate) { // Explicit...\n\n## Properly implement F# Unit in C#\n\nc#,f#,functional-programming\nThis question is not about C#/F# compatibility as in this one. I'd like to know the proper way to implement a type like F# Unit in place of using void. Obviously I'll discard this result and I'll not expose this type to the outside world. Wondering if an empty class...\n\n## Trial division for primes with immutable collections in Scala\n\nalgorithm,scala,data-structures,functional-programming\nI am trying to learn Scala and functional programming ideology by rewriting basic exercises. Currently I have trouble with naive approach for generating primes \"trial division\". The trouble described below is that I could not rewrite well-known algorithm in functional style preserving efficiency, because I have no suitable immutable data...\n\n## Collapse similar case statements in Scala\n\nscala,functional-programming,pattern-matching\nIs there an elegant way to do something like the following example using just one case statement? foobar match { case Node(Leaf(key, value), parent, qux) => { // Do something with parent & qux } case Node(parent, Leaf(key, value), qux) => { // Do something with parent & qux (code...\n\n## Elm List type mismatch\n\nlist,functional-programming,elm\nI was following an (old?) tutorial and I got a type mismatch. Has the List library changed from 0.14.1 to 0.15? elmpage. Code: module Fibonacci where import List exposing (..) fibonacci : Int -> List Int fibonacci n = let fibonacci1 n acc = if n <= 2 then acc..." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82370776,"math_prob":0.6457035,"size":33423,"snap":"2021-21-2021-25","text_gpt3_token_len":7977,"char_repetition_ratio":0.16631258,"word_repetition_ratio":0.015130119,"special_character_ratio":0.24486132,"punctuation_ratio":0.19282511,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9677324,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-06T23:06:21Z\",\"WARC-Record-ID\":\"<urn:uuid:d1721793-0056-45bc-b897-057211b7d7bd>\",\"Content-Length\":\"83817\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0bda70e9-c89d-469b-93f0-521aa70dfe18>\",\"WARC-Concurrent-To\":\"<urn:uuid:e8f497e1-b12c-43b0-bd5b-4a678b44d810>\",\"WARC-IP-Address\":\"172.67.161.15\",\"WARC-Target-URI\":\"https://databasefaq.com/index.php/tag/functional-programming\",\"WARC-Payload-Digest\":\"sha1:OV6HYSZJVE5MMYQWCGCMZZG6OHNVTCZL\",\"WARC-Block-Digest\":\"sha1:I7EKC6TAR3ECH7PH4YC7XBRMIQXBXH2Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988763.83_warc_CC-MAIN-20210506205251-20210506235251-00424.warc.gz\"}"}
http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-core/25672?help
[ "```On Sun, Sep 20, 2009 at 3:29 AM, brian ford <brixen / gmail.com> wrote:\n> Hi,\n>\n> On Sat, Sep 19, 2009 at 1:04 AM, Charles Oliver Nutter\n>> On Sat, Sep 19, 2009 at 12:28 AM, Joel VanderWerf\n>> <vjoel / path.berkeley.edu> wrote:\n>>> Perhaps it should be the responsibility of users of numeric operators to\n>>> #floor explicitly when that is the intent, rather than rely on the (mostly\n>>> standard, sometimes convenient, but questionable) 1/2==0 behavior. Doing so\n>>> would make it easier to adapt the code to float, rational, or other numeric\n>>> types.\n>>>\n>>> In your proposal, would Rational(1,3) be the preferred notation, since\n>>> 1/3==0? Or would there be something else, 1//3 or ...?\n>>>\n>>> I've always thought of mathn as a kind of alternate ruby, not just another\n>>> core library, hence to be used with caution...\n>>\n>> I think Brian Ford expressed what I feel best...there should always be\n>> another method or operator. Using another operator or method is an\n>> explicit \"buy-in\" by the user--rather than a potential (at some\n>> undetermined time in the future) that everything you know about\n>> integral division in your program changes wildly. It should not be\n>> possible for any library to undermine the basic mathematical\n>> expectations of my program. Doing so, or expecting the user to do\n>> extra work to guarantee the *common case*, is a recipe for serious\n>> failure.\n>\n> There are a number of issues combined here, but I think they generally\n> reduce to these:\n>\n> 1. How do you model the abstractions that are number systems in the\n> abstractions that are classes and methods.\n> 2. Should the behavior of mathn be acceptable in the core language.\n>\n> We seem to think of the basic mathematical operations +, -, *, / as\n> being roughly equal. But of these four, division on the integers is\n> distinct. The set of integers is closed under addition, subtraction,\n> and multiplication. Given any two integers, you can add, subtract, or\n> multiply them and get an integer. But the result of dividing one\n> integer by another is not always an integer. The integers are not\n> closed under division. In mathematics, whether a set is closed under\n> an operation is a significant property.\n>\n> As such, there is nothing at all questionable about defining division\n> on the integers to be essentially floor(real(a)/real(b)) (where\n> real(x) returns the (mathematical) real number corresponding to the\n> value x, because the integers are embedded in the reals and the reals\n> are closed under division). You basically have five choices:\n>\n> 1. floor(real(a)/real(b))\n> 2. ceil(real(a)/real(b))\n> 3. round(real(a)/real(b)) where round may use floor or ceil\n> 4. real(a)/real(b)\n> 5. raise an exception\n>\n> In computer programming, there are a number of reasons for choosing 1,\n> 2 or 3 but basically it is because that's the only way to get the\n> \"closest\" integer (i.e. define division in a way that the integers are\n> closed under the operation) . Convention has selected option 1.\n> Numerous algorithms are implemented with the assumption that integral\n> division is implement as option 1. It's not right or wrong, but the\n> convention has certain advantages. In programming, we are typically\n> implementing algorithms, not just \"doing math\" in some approximations\n> of these abstractions called number systems. Any system for doing math\n> takes serious steps to implement the real number system in as\n> mathematically correct form as possible.\n>\n> My contention that we should always have two operators for integral\n> division is a compromise between the need to implement algorithms and\n> the desire to have concise \"operator\" notation for doing more\n> math-oriented computation. Given that programming in Ruby is more\n> about algorithms than it is about doing math, it's unreasonable to\n> expect (a/b).floor instead of a / b. At the same time, math-oriented\n> programs are not going to be happy with a.quo b. The reasons for\n> options 1 and 4 above are not mutually exclusive nor can one override\n> the other.\n\nActually in most languages which I've encountered, and that's quite a\nfew. Mixed mode arithmetic has been implemented by having some kind of\nrules on how to 'overload' arithmetic operators based on the\narguments, not by having different operator syntax.\n\nAnd those rules are usually based on doing conversions only when\nnecessary so as to preserve what information can be preserved given\nthe arguments,\n\nSo, for example\n\ninteger op integer - normally produces an integer for all of the\n'big four' + - * /\ninteger op float - normally produces a float, as does float op integer\n\nAs new numeric types are added, in languages which either include them\ninherently or allow them to be added, this pattern is usually\nfollowed.\n\nSmalltalk has the concept of generality of a number class. More\ngeneral classes can represent more numbers, albeit with some potential\nfor adding 'fuzziness' in the standard image Floats are the most\ngeneral, then Fractions, then equally LargePositiveIntegers and\nLargeNegativeIntegers (which together serve the same role as Bignum in\nRuby), then SmallInteger (Ruby's Fixnum).\n\n> The mathn library is clearly exploiting an implementation detail. Were\n> Ruby implemented like Smalltalk (or Rubinius), mathn would have never\n> been written as it is. The fact that it is even possible to load mathn\n> results from the fact that countless critical algorithms in MRI are in\n> the walled garden of C code. That's not true for your Ruby programs.\n> Any algorithm you implement that relies on the very reasonable\n> assumption of integral division will be broken by mathn.\n\nThe problem with mathn is that it introduces new numeric types, and\nalso changes the behavior of the existing types, particularly integer,\nso that when mathn is included\n\ninteger / integer produces a rational if the result can't be\nreduced to an integer.\n\nThis is at odds with most languages and, as Charles points out, it\neffectively changes the 'rules of physics' for other code which is\nlikely unaware that mathn has been introduced.\n\nIn Smalltalk, there is a standard Fraction class and integer division\ndoes in fact return a Fraction rather than an Integer. But that's\nknown and expected by Smalltalk programmers.\n\n>\n> You can say, \"but mathn is in the standard library, you have to\n> require it to use it\". But that ignores the fact that requiring the\n> library fundamentally changes assumptions that are at the very core of\n> writing algorithms. Essential computer programming and mathn can never\n> coexist without jumping through hoops.\n\nYes this is a problem IMHO. The difference between Ruby and Smalltalk\nhere is that one language starts out including Rationals/Fractions,\nand the other treats them as an optional add on which, when added,\nchanges the rules.\n\n> This is the point were some grandiose scheme like selector namespaces\n> are suggested. But I think the simple solution of two distinct\n> operators handily solves the problem of the messy facts of\n> mathematical number systems implemented in very untheoretic (i.e.\n> really real) silicon.\n>\n> As for which symbol to select, what about '/.' for real(a)/real(b).\n\nWell, first the problem we are talking about is Rationals, not Floats,\nand second, what happens as more numeric classes are introduced.\n\nAnother alternative would be to change mathn (or maybe make a new\nalternative mathn for compatibility for programs already using mathn)\nwhich\n\n1. Left 1 / 2 as producing the Integer 1\n2. Allowed explicit instantiation of Rationals\n* Rational.new(1,2) # i.e. make the new method public.\n* Change Object#Rational to always return a Rational for an\ninteger argument, with a denominator of 1.\n* Integer#to_rational which could be implemented as:\nclass Integer\ndef to_rational\nRational(self)\nend\nend\n\nThen rational arithmetic could be implemented so that\n\n5 / 3 => 1\n5.to_rational / 3 => 5 / 3\n5 / 3.to_rational => 5 / 3\n\n(5.to_r / 3).to_i => 1\n\nWhich would be in-line with standard arithmetic in Ruby IMHO.\n\nNote: it might be more ruby-like to name the coercion method to_r\ninstead of to_rational, but that might be confused by some as meaning\nsomething else like to real, although I don't really thing that that\nwould be that much of an issue.\n\n--\nRick DeNatale" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90259004,"math_prob":0.9136246,"size":8644,"snap":"2020-45-2020-50","text_gpt3_token_len":2355,"char_repetition_ratio":0.10856482,"word_repetition_ratio":0.002762431,"special_character_ratio":0.24028228,"punctuation_ratio":0.1299505,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97657883,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-23T06:36:34Z\",\"WARC-Record-ID\":\"<urn:uuid:9bc0f04a-9e43-497a-bd8e-388092bfc791>\",\"Content-Length\":\"18622\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4eceef1e-0238-4d16-8f69-ec66f9eb00b8>\",\"WARC-Concurrent-To\":\"<urn:uuid:e8f124c2-0502-4e2e-9641-594d794e774a>\",\"WARC-IP-Address\":\"133.44.98.95\",\"WARC-Target-URI\":\"http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-core/25672?help\",\"WARC-Payload-Digest\":\"sha1:AVD2QA5CJ67ETYJFKPCCD6JZE3ISXM7M\",\"WARC-Block-Digest\":\"sha1:YJ2DHOSZPKR2BXWL6LU7XIZTDBSPFWN4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107880656.25_warc_CC-MAIN-20201023043931-20201023073931-00693.warc.gz\"}"}
https://www.acmicpc.net/problem/27569
[ "시간 제한메모리 제한제출정답맞힌 사람정답 비율\n1 초 1024 MB666100.000%\n\n## 문제\n\nThe $n$th Champernowne word is obtained by writing down the first $n$ positive integers and concatenating them together. For example, the 10th Champernowne word is \"12345678910\".\n\nGiven two positive integers $n$ and $k$, count how many of the first $n$ Champernowne words are divisible by $k$.\n\n## 입력\n\nThe single line of input contains two integers, $n$ $(1 \\le n \\le 10^5)$ and $k$ $(1 \\le k \\le 10^9)$.\n\n## 출력\n\nOutput a single integer, which is a count of the first $n$ Champernowne words divisible by $k$.\n\n## 예제 입력 1\n\n4 2\n\n\n## 예제 출력 1\n\n2\n\n\n## 예제 입력 2\n\n100 7\n\n\n## 예제 출력 2\n\n14\n\n\n## 예제 입력 3\n\n314 159\n\n\n## 예제 출력 3\n\n4\n\n\n## 예제 입력 4\n\n100000 999809848\n\n\n## 예제 출력 4\n\n1" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.74747884,"math_prob":0.9916012,"size":1035,"snap":"2023-14-2023-23","text_gpt3_token_len":341,"char_repetition_ratio":0.14064015,"word_repetition_ratio":0.21857923,"special_character_ratio":0.33816424,"punctuation_ratio":0.054054055,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99181366,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-28T15:26:21Z\",\"WARC-Record-ID\":\"<urn:uuid:bb9379a7-1c7b-4a22-9dce-200fb80bf29b>\",\"Content-Length\":\"29333\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d9454dda-51f6-41bf-b693-5f6bb34806b7>\",\"WARC-Concurrent-To\":\"<urn:uuid:3b916c58-e0b8-4ed2-b569-f0fe104436e5>\",\"WARC-IP-Address\":\"52.192.13.175\",\"WARC-Target-URI\":\"https://www.acmicpc.net/problem/27569\",\"WARC-Payload-Digest\":\"sha1:ONGJYFQHKFUZW55CAMAXEWHOWTUYK2V3\",\"WARC-Block-Digest\":\"sha1:FZI5MEASAQ5NVFSEA4K524SV5Y74MLC6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296948867.32_warc_CC-MAIN-20230328135732-20230328165732-00628.warc.gz\"}"}
http://promos.amalie.com/muwa72945.html
[ "# Question: What is capacitor formula?\n\nContents\n\nCapacitance Equation The basic formula governing capacitors is: charge = capacitance x voltage. or. Q = C x V. We measure capacitance in farads, which is the capacitance that stores one coulomb (defined as the amount of charge transported by one ampere in one second) of charge per one volt.\n\n## What is the formula for capacitance of a capacitor?\n\nThe capacitance C of a capacitor is defined as the ratio of the maximum charge Q that can be stored in a capacitor to the applied voltage V across its plates. In other words, capacitance is the largest amount of charge per volt that can be stored on the device: C = Q V . 1 F = 1 C 1 V .\n\n## What is capacitor unit?\n\nThe unit of electrical capacitance is the farad (abbreviated F), named after the English physicist and chemist Michael Faraday. The capacitance C of a capacitor is the ratio of the charge Q stored in the capacitor to the applied dc voltage U: C = Q/U.\n\n## What is Q in capacitance formula?\n\nThe experiment shows that Q ∝ V, or Q = constant × V. This constant is called the capacitance, C, of the capacitor and this is measured in farads (F). So capacitance is charge stored per volt, and. farads = coulombsvolts.\n\n## What is inductor formula?\n\nVT = V1 + V2 +V3. We know that the voltage across an inductor is given by the equation. V = L di / dt.\n\n## How do you calculate capacitors?\n\nThe amount of charge stored in a capacitor is calculated using the formula Charge = capacitance (in Farads) multiplied by the voltage. So, for this 12V 100uF microfarad capacitor, we convert the microfarads to Farads (100/1,000,000=0.0001F) Then multiple this by 12V to see it stores a charge of 0.0012 Coulombs.\n\n## What is the symbol of capacitor?\n\nThe SI unit of capacitance is farad (Symbol: F).\n\n## Why is C Q V?\n\nOne plate of the capacitor holds a positive charge Q, while the other holds a negative charge -Q. The charge Q on the plates is proportional to the potential difference V across the two plates. The capacitance C is the proportional constant, Q = CV, C = Q/V.\n\n## What is inductor and its types?\n\nInductors, devices that transmit and measure current in relation to the amount of voltage applied, are essentially elecotromagnets that store and release an electrical current. As the current is applied, an inductor coil stores the current to establish a magnetic field.\n\n## What is a inductor symbol?\n\nThe symbol for inductance is a series of coils as shown below. The letter L is used in equations. The resistance of a material is the opposite or the inverse of the conductivity. The Ohm is named after German physicist Georg Ohm.\n\n## How do you solve a capacitor problem?\n\nCalculate the charge in each capacitor.For example: The voltage across all the capacitors is 10V and the capacitance value are 2F, 3F and 6F respectively.Charge in first capacitor is Q1 = C1*V1 = 2*10 = 20 C.Charge in first capacitor is Q2 = C2*V2 = 3*10 = 30 C.Charge in first capacitor is Q3 = C3*V3 = 6*10 = 60 C.23 May 2017\n\n## What is the purpose of a capacitor?\n\nA capacitor is an electronic component that stores and releases electricity in a circuit. It also passes alternating current without passing direct current. A capacitor is an indispensible part of electronic equipment and is thus almost invariably used in an electronic circuit.\n\n## What are the different types of capacitor?\n\nDifferent Types of CapacitorsElectrolytic Capacitor.Mica Capacitor.Paper Capacitor.Film Capacitor.Non-Polarized Capacitor.Ceramic Capacitor.29 Jul 2019\n\n## How do we calculate capacitors in series?\n\nWhen capacitors are connected one after another, they are said to be in series. For capacitors in series, the total capacitance can be found by adding the reciprocals of the individual capacitances, and taking the reciprocal of the sum.\n\n## Why DC is blocked by capacitor?\n\nA capacitor blocks DC as once it gets charged up to the input voltage with the same polarity then no further transfer of electrons can happen accept to replenish the slow discharge due to leakage if any. hence the flow of electrons which represents electric current is stopped.\n\n## Write us\n\n#### Find us at the office\n\nPicardi- Katzung street no. 53, 78168 Tegucigalpa, Honduras" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8993077,"math_prob":0.9715569,"size":4018,"snap":"2021-43-2021-49","text_gpt3_token_len":960,"char_repetition_ratio":0.19382162,"word_repetition_ratio":0.0114613185,"special_character_ratio":0.2269786,"punctuation_ratio":0.11153359,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99841136,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-21T08:07:23Z\",\"WARC-Record-ID\":\"<urn:uuid:ee68ccbb-78b3-4a1d-b3a2-5b386c510b36>\",\"Content-Length\":\"27727\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b110d270-2ff9-4d8e-9d67-67e9c6bc057a>\",\"WARC-Concurrent-To\":\"<urn:uuid:f1f90326-f592-4801-a624-7ab861e0fea5>\",\"WARC-IP-Address\":\"20.49.104.37\",\"WARC-Target-URI\":\"http://promos.amalie.com/muwa72945.html\",\"WARC-Payload-Digest\":\"sha1:YWSCSFAM3CYEDV46EY6SIU5ILPGII2SQ\",\"WARC-Block-Digest\":\"sha1:AETPHFQZWMSV66PXMJIXM3AJZ3OTAWON\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585382.32_warc_CC-MAIN-20211021071407-20211021101407-00156.warc.gz\"}"}
https://www.numbersaplenty.com/210233233
[ "Search a number\nBaseRepresentation\nbin11001000011111…\n…10011110010001\n3112122120221210001\n430201332132101\n5412304430413\n632510010001\n75131644610\noct1441763621\n9478527701\n10210233233\n11a8742381\n125a4a6901\n133472b082\n141dcc7877\n15136cb4dd\nhexc87e791\n\n210233233 has 16 divisors (see below), whose sum is σ = 254401920. Its totient is φ = 169710336.\n\nThe previous prime is 210233171. The next prime is 210233239. The reversal of 210233233 is 332332012.\n\nIt is a happy number.\n\nIt is a cyclic number.\n\nIt is a de Polignac number, because none of the positive numbers 2k-210233233 is a prime.\n\nIt is a Harshad number since it is a multiple of its sum of digits (19).\n\nIt is a Duffinian number.\n\nIt is a self number, because there is not a number n which added to its sum of digits gives 210233233.\n\nIt is not an unprimeable number, because it can be changed into a prime (210233239) by changing a digit.\n\nIt is a polite number, since it can be written in 15 ways as a sum of consecutive naturals, for example, 18441 + ... + 27577.\n\nIt is an arithmetic number, because the mean of its divisors is an integer number (15900120).\n\nAlmost surely, 2210233233 is an apocalyptic number.\n\nIt is an amenable number.\n\n210233233 is a deficient number, since it is larger than the sum of its proper divisors (44168687).\n\n210233233 is a wasteful number, since it uses less digits than its factorization.\n\n210233233 is an odious number, because the sum of its binary digits is odd.\n\nThe sum of its prime factors is 9336.\n\nThe product of its (nonzero) digits is 648, while the sum is 19.\n\nThe square root of 210233233 is about 14499.4218160587. The cubic root of 210233233 is about 594.6121644564.\n\nAdding to 210233233 its reverse (332332012), we get a palindrome (542565245).\n\nThe spelling of 210233233 in words is \"two hundred ten million, two hundred thirty-three thousand, two hundred thirty-three\"." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8463137,"math_prob":0.99111795,"size":1991,"snap":"2022-27-2022-33","text_gpt3_token_len":626,"char_repetition_ratio":0.18117766,"word_repetition_ratio":0.006024096,"special_character_ratio":0.44349572,"punctuation_ratio":0.1305483,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9951195,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-03T03:30:00Z\",\"WARC-Record-ID\":\"<urn:uuid:4dcabebe-9a0d-4946-ac76-e9e6e18e99b4>\",\"Content-Length\":\"9484\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6bb2c1bc-3931-4309-bc64-ffc360dd74f3>\",\"WARC-Concurrent-To\":\"<urn:uuid:6e300732-fea9-404d-9a3c-c8e92634f80b>\",\"WARC-IP-Address\":\"62.149.142.170\",\"WARC-Target-URI\":\"https://www.numbersaplenty.com/210233233\",\"WARC-Payload-Digest\":\"sha1:3AEB3CRYA5OX2NYT6NXCHU6EU3AX3C75\",\"WARC-Block-Digest\":\"sha1:CUD5IW7GP2GSPPHH6DEWIAGEDI65DTS5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104209449.64_warc_CC-MAIN-20220703013155-20220703043155-00189.warc.gz\"}"}
http://santiago.begueria.es/2010/10/generating-spatially-correlated-random-fields-with-r/
[ "## Generating spatially correlated random fields with R\n\nIn several occasions I needed to generate synthetic data with a desired level of spatial autocorrelation. Here I will show how to generate as many such fields as we need by using R, an open-source port of the S language for statical analysis. I will concentrate on two alternative ways of generating spatially correlated random fields (commonly known as unconditional Gaussian simulation), using the libraries `gstat` and `fields`.\n\n1. Unconditional Gaussian simulation using gstat\n\nSpatial modellers commonly use the term unconditional Gaussian simulation to refer to the process of generating spatially correlated random fields. For comparison purposes, the Figure 0 shows an example of a random field with no spatial correlation. The value of the measured property at one given cell is completely independent of the values of that property at neighbouring cells. This situation is very seldom (or I perhaps should better say never) found in nature, since spatially distributed variables always have a certain level of autocorrelation, i.e. co-variance between neighbours. This is often mentioned as the ‘geographical law’, stating that closer locations tend to have similar properties.\n\nGenerating spatially correlated random fields is interesting because it makes it possible testing different issues related to the statistical analysis of spatial data.\n\n1.1. Generating the spatial field\n\nWe are going to use the `gstat` library, so we start by loading it:\n\n```library(gstat)\n```\n\nWe create a 100 x 100 grid, and we convert it into a data frame (xyz structure) by taking all possible combinations of the x and y coordinates:\n\n```xy <- expand.grid(1:100, 1:100)\n```\n\nWe give names to the variables:\n\n```names(xy) <- c('x','y')\n```\n\nDefining the spatial model and performing the simulations.\n\nSecond, we define the spatial model as a gstat object:\n\n```g.dummy <- gstat(formula=z~1, locations=~x+y, dummy=T, beta=1, model=vgm(psill=0.025, range=5, model='Exp'), nmax=20)\n```\n\nwhere `formula` defines the dependent variable (z) as a linear model of independent variables. For ordinary and simple kriging we can use the formula `z~1`; for simple kriging it is necessary to define a beta parameter too (see below); for universal kriging, if `z` is linearly dependent on `x` and `y` use the formula `z~x+y`. We are using simple kriging here. `locations` define the data coordinates, e.g. `~x+y` in our case here. `dummy` is a logical value, and it needs to be TRUE for unconditional simulation. `beta` is used only for simple kriging, and is a vector with the trend coefficients (including an intercept); if no independent variables are defined the model only contains an intercept, i.e. the simple kriging mean. `model` defines the variogram model, as defined by a call to `vgm`. `vgm` allows defining the (partial) sill, range and nugget paramaters, as well as the variogram model type (e.g. exponential, gaussian, spherical, etc). Anisotropy can also be used. `nmax` defines the number of nearest observations that should be used for a kriging prediction or simulation.\n\nNow we are ready to make as many simulations as we like based on the gstat object (four simulations in this example):\n\n```yy <- predict(g.dummy, newdata=xy, nsim=4)\n```\n\n1.2. Displaying the simulations\n\nTo see one realisation of the simulations:\n\n```gridded(yy) = ~x+y\nspplot(obj=yy)\n```", null, "Figure 1. A spatially correlated random spatial field with mean=1, sill=0.025, range=5, exponential variogram model.\n\n`spplot`, from the library `sp`, provides lattice (trellis) plot methods for spatial data with attributes. It’s only compulsory parameter is `obj`, which must point to an object of class Spatial; gstat objects belong to this class, so there’s no need to do anything extra. It is possible to show all four simulations in a single trellis plot:\n\n```spplot(yy)\n```\n\n1.3. Complete code\n\nI include the complete code for convenience:\n\n```# unconditional simulations on a 100 x 100 grid using gstat\nlibrary(gstat)\n\n# create structure\nxy <- expand.grid(1:100, 1:100)\nnames(xy) <- c(\"x\",\"y\")\n\n# define the gstat object (spatial model)\ng.dummy <- gstat(formula=z~1, locations=~x+y, dummy=T, beta=1, model=vgm(psill=0.025,model=\"Exp\",range=5), nmax=20)\n\n# make four simulations based on the stat object\nyy <- predict(g.dummy, newdata=xy, nsim=4)\n\n# show one realization\ngridded(yy) = ~x+y\nspplot(yy)\n\n# show all four simulations:\nspplot(yy)\n```\n\n1.4. Variations\n\nBy modifying the range parameter in the variogram model it is possible to control the degree of spatial correlation. For example, by setting it at 15 instead of 5 we get a random field with a ‘coarser’ autocorrelation (Figure 2).\n\n```g.dummy <- gstat(formula=z~1, locations=~x+y, dummy=T, beta=1, model=vgm(psill=0.025, range=15, model='Exp'), nmax=20)\n```", null, "Figure 2. A spatially correlated random spatial field with mean=1, sill=0.025, range=15, exponential variogram model.\n\nFor including a linear trend surface in the simulation we can perform a universal kriging. For doing so it is necessary to specify it in the formula parameter as `~1+x+y`, and coefficients for the x and y components need to be specified in the beta parameter. For example, the following defines a model with a spatial trend in the y dimension (Figure 3):\n\n```g.dummy <- gstat(formula=z~1+y, locations=~y, dummy=T, beta=c(1,0,0.005), model=vgm(psill=0.025, range=15, model='Exp'), nmax=20)\n```", null, "Figure 3. A spatially correlated random spatial field with mean=1, sill=0.025, range=15, y_trend=0.005, exponential variogram model.\n\nThe following code defines a model with a trend in both dimensions:\n\n```g.dummy <- gstat(formula=z~1+x+y, locations=~x+y, dummy=T, beta=c(1,0.01,0.005), model=vgm(psill=0.025, range=15, model='Exp'), nmax=20)\n```\n\n2. Unconditional Gaussian simulation using fields\n\nSpatially correlated random fields can also be generated using the `fields` library. I put an example code here, and leave the details to the reader to investigate.\n\n```library(fields)\n# load Maunga Whau volcano (Mt Eden) elevation dataset (matrix format)\ndata(volcano)\n\n# reduce size\nvolcano2 <- volcano[10:55, 14:51]\nfilled.contour(volcano2, color.palette=terrain.colors, asp=1)\ncols <- length(volcano2[1,])\nrows <- length(volcano2[,1])\n\n# create dataframe (xyz format)\nX <- rep(1:cols, each=rows)\nY <- rep(1:rows, cols)\nZ <- as.vector(volcano2)\nvolcano.df <- data.frame(X,Y,Z,cellid=1:cols*rows)\nattach(volcano.df)\n\n# create a spatial autocorrelation signature\n# coordinate list\ncoords <- data.frame(X,Y)\n# distance matrix\ndist <- as.matrix(dist(coords))\n\n# create a correlation structure (exponential)\nstr <- -0.1 # strength of autocorrelation, inv. proportional to str\nomega1 <- exp(str*dist)\n\n# calculate correlation weights, and invert weights matrix\nweights <- chol(solve(omega1))\nweights_inv <- solve(weights)\n\n# create an autocorrelated random field\nset.seed(1011)\nerror <- weights_inv %*% rnorm(dim(dist))\n\n# create a variable as a linear function of the elevation\na <- 10\nb <- 0.5\nZ2 <- a + b*Z\n\n# add the autocorrelated error to the new variable\nZ3 <- Z2 + error\n\n# export data (xyz format)\nwrite.table(data.frame(X,Y,Z,Z2,Z3), file=\"data.txt\", row.names=F, sep=\"\\t\")\n```\n\n•", null, "James Hodden wrote:\n\nThank you for this clear explanation for how you generate random fields.\n\n•", null, "Felipe S. wrote:\n\nThank you very interesing post, by the way do you know how to make lognormal random fields simulation?\n\n•", null, "admin wrote:\n\nSequential Gaussian simulation would yield a normally distributed data at each point, i.e. with mean and standard deviation equal to the kriged mean and standard deviation. I am not sure about how to generate lognormal fields, but it is an interesting question and I will do a little research.\n\n•", null, "guru wrote:\n\nQue pequeño es el mundo! Buscando como simular procesos gaussianos en R y aquí te encuentro. Muchas gracias por las instrucciones!\n\n•", null, "Laura wrote:\n\nHi, I am working on simulating a gaussian random field and I want to incorporate a y trend (as in your example). I want this trend to be the same as in some data I have (higher values for larger y) – am I correct in thinking that you find the values for beta by looking at the coefficients of the linear model data~x+y? Doing this gives me beta=(0.73,0,-1.2) but I then get the trend in the wrong direction (higher values for smaller y). Where am I going wrong here?\n\n•", null, "admin wrote:\n\nHi Laura, it’s difficult to say with so little information. The procedure you describe is basically right: you fit a linear model and look at the beta coefficients from that regression. I do not know why you get a negative beta for y, you should at your data and try to figure out…\n\n•", null, "mike give wrote:\n\nHi, this gives one a good starting point. in geoR, how would one differentiate a regular lattice from a fine grid?\n\n•", null, "admin wrote:\n\nHi Mike, I don’t use geoR, I’m sorry.\n\n•", null, "pd wrote:\n\nHi Sbegueria;\n\nI’m new to this field, and hoping you can clarify some issues– although you don’t explicitly indicate this, but should I assume that GSTAT and FIELDS use ‘direct sequential simulation’ (a new concept for me) versus synthesizing a covariance matrix COV (and then using Cholesky, etc.)? If so, is one limited to generating a grid only with GSTAT, whereas for a COV matrix, any point distribution can work, but at the computational cost of Cholesky?\n\nthanks!\n\n•", null, "Rocio wrote:\n\nHi, the package produces simulation unconditional but i have the concern that algorithm used to simulation of those realizations.\n\n•", null, "Nish wrote:\n\nHi,\n\nThanks for this interesting post.Is that possible to generate spatially correlated multivariate random filed using gstat or any other package?\n\n•", null, "sbegueria wrote:\n\nHi Nish, although I’ve never tried it, in principle it should be possible via co-kriging. Please, let us know if you succeded.\n\n•", null, "Sebastian wrote:\n\nHi Santiago,\nExcellent post.\nI am using synthetic datasets in my doctoral research and I would appreciate if you could point me to papers where you (or others) have used the approach in this post to generate random fields.\n\nMuch appreciated\nSebastian\n\n•", null, "sbegueria wrote:\n\nSure, this is one example: Beguería S., Pueyo Y. (2009) A comparison of simultaneous autoregressive, generalized least squares models for dealing with spatial autocorrelation. Global Ecology and Biogeography 18, 273–279. – See more at: http://santiago.begueria.es/publications-4/#sthash.TwPR0NaM.dpuf.\nAlso, but under review: Bias in the variance of gridded datasets leads to misleading conclusions about changes in climate variability.\n\n•", null, "Sebastian wrote:\n\nHi Santiago,\nFirst of all, thank you so much for your reply. I don’t want to abuse your kindness but in your 2009 paper (the one you cited above) I can’t find any reference to generating synthetic spatially autocorrelated data with the method you show in this blog. Furthermore, the data mentioned is from Kissling & Carl (2008), which comes from the volcano dataset. There are also 2 datasets mentioned in which the spatial structure was degraded by sample reduction and coarsening.\nSeems that I am missing something on how you used “gstat” or “fields” in generating synthetic data.\n\n•", null, "sbegueria wrote:\n\nHi Sebastian, you’re totally right. We used the dataset of Kissling and Carl on our 2009 paper. Well, I know I used this method for some application when I wrote the post, but I don’t remember to have used it in any paper. There is one in press right now, but nothing you can refer to in your thesis, I’m sorry for that. Hopefully in some weeks there’ll be one example, when this paper will appear online.\n\n•", null, "Sebastian wrote:\n\n•", null, "Andrew Hill wrote:\n\nWould you know of code to do the reverse; if you have a general matrix, extract the spatial auto-correlation function?\n\n•", null, "sbegueria wrote:\n\nIn the case of spatial data, I’m most familiar with using the functions `variogram` and `fit.variogram` in the `gstat` package. You may want to have a look at the tutorial at https://cran.r-project.org/web/packages/gstat/vignettes/gstat.pdf.\n\n•", null, "parisa wrote:\n\nHi\nI need to cross validate after simulation, how can I do that?\nI used grf in package geor to simulate, I can cross validate its output but grf can’t simulate the exact formula like z#1 or z#x and so on unlike your commands.\nSo how can I cross validate your command?\n\nMy English isn’t good unfortunately.\n\n•", null, "Helen wrote:\n\nThank you very much~It’s great!\n\n•", null, "jn wrote:\n\nThank you for this post. It’s very useful. I’ve still one question – how do you generate the first figure (with zero spatial correlation)?\n\n•", null, "sbegueria wrote:\n\nHi jn, as JoshO also noted there is an error in the first figure (it obviously has spatial correlation in it), I have corrected this in the post.\n\n•", null, "JoshO wrote:\n\nThanks for this nice post. There are a few +/- important errors that you might want to fix (if that’s possible in a blog post): (1) Figure 0 does not, as claimed in the text “show[] an example of a random field with no spatial correlation.”; (2) The code block following the phrase “For example, the following defines a model with a spatial trend in the y dimension (Figure 3)” does not do what it claims. It sets formula=z~1+x+y, when it should be formula=z~1+y.\n\n•", null, "sbegueria wrote:\n\nHi JoshO. You’re right, I corrected both issues in the post, thanks a lot.\n\n• […] These steps are repeated until all observations are given a value (Bivand et al., 2013). Note that another interesting post tackles the simulation of these gaussian random fields in […]\n\n• […] ce que toutes les observations reçoivent une valeur (Bivand et al., 2013). A noter cet autre post qui s'intéresse de près à la simulation de champs gaussians aléatoires sur R (et qui a bien […]\n\n•", null, "sbegueria wrote:\n\nI’m not sure that I understood your question. You can incorporate the variogram parameters that match your data directly into the function.\n\n•", null, "raj wrote:\n\nthank you for the post. but what if i have the correlation matrix already and i want to insert it as a given with the mean and cov?\n\n•", null, "NRS wrote:\n\nI am trying to create a 10 x 10 matrix comprised only of whole numbers ranging from 1-10, with each integer represented 10 times (e.g., 10 1’s and 10 2’s), and a spatial correlation strength of 0.15. I’ve tried altering your code in both the ‘gstat’ and ‘fields’ packages, but cannot quite get the process to work. The closest that I have come is to constrain the matrix to numbers ranging from 1-10, but the output winds up including numbers with decimals. Do you have any suggestions? Thanks!\n\n•", null, "sbegueria wrote:\n\nHi Nathan. A fast solution could be to just round the values of your resulting field to have zero decimals with function `round` in R. Getting exactly 10 repetitions of each value between 1 and 10 is going to be much more difficult, and I’m not aware of any method to get that. Only approach that comes to mind is to repeat a number of simulated fields and reject all those that do not meet the criteria. There is a pretty efficient way to get a number of simulations with parameter `nsim` of gstat’s `predict` method).\n\n•", null, "Stephanie wrote:\n\nHi, thanks for this post! I have one question as I am completely new to this. I am generating landscapes with various levels of spatial autocorrelation, however, the “landscape” is already composed of cells varying from 0-.50. Is there any way to do exactly as what you explained, but with a raster that has predefined values? In your figures above, I see that the cells have a range of values of about .4 to 1.6.\n\n•", null, "sbegueria wrote:\n\nHi Stephanie. The parameter `beta` specifies the mean of your Gaussian field. In my example I set it to 1, and that’s why the data vary around that value. You can also control the standard deviation / variance of the field with the semivariogram parameters (`psill` and, if you want to use it, `nugget`).\n\n•", null, "Stephanie wrote:\n\nThanks a lot! This is exactly what I needed." ]
[ null, "http://santiago.begueria.es/wordpress/wp-content/uploads/2010/10/180px-Yy.jpg", null, "http://santiago.begueria.es/wordpress/wp-content/uploads/2010/10/180px-Yyr15.jpg", null, "http://santiago.begueria.es/wordpress/wp-content/uploads/2010/10/180px-Yytrend.jpg", null, "http://2.gravatar.com/avatar/b3f038a9b4020bb23123c3817138863a", null, "http://2.gravatar.com/avatar/8889b6ea30621fb5e4911aa0bc09f18e", null, "http://1.gravatar.com/avatar/7f3d6280ef1742f462e4a42d878f5716", null, "http://1.gravatar.com/avatar/42b40256452aab44d70b61b24c106281", null, "http://0.gravatar.com/avatar/9b8fb2faccadd9ad3c2a19495064a1ca", null, "http://1.gravatar.com/avatar/7f3d6280ef1742f462e4a42d878f5716", null, "http://0.gravatar.com/avatar/3e40dad4ab40b170f06fdd0b69990911", null, "http://1.gravatar.com/avatar/7f3d6280ef1742f462e4a42d878f5716", null, "http://0.gravatar.com/avatar/c84bc95801b0a1ce869a54137d6f644d", null, "http://2.gravatar.com/avatar/eee555aad932cf6dbc6284c34b317a26", null, "http://0.gravatar.com/avatar/c39aaacff5ec81f5232d35dc0d53a655", null, "http://1.gravatar.com/avatar/7f3d6280ef1742f462e4a42d878f5716", null, "http://0.gravatar.com/avatar/c2fc35ea7439a260a3cb42964b648d03", null, "http://1.gravatar.com/avatar/7f3d6280ef1742f462e4a42d878f5716", null, "http://0.gravatar.com/avatar/c2fc35ea7439a260a3cb42964b648d03", null, "http://1.gravatar.com/avatar/7f3d6280ef1742f462e4a42d878f5716", null, "http://0.gravatar.com/avatar/c2fc35ea7439a260a3cb42964b648d03", null, "http://2.gravatar.com/avatar/25cb255470e9a85c583d71618685c699", null, "http://1.gravatar.com/avatar/7f3d6280ef1742f462e4a42d878f5716", null, "http://2.gravatar.com/avatar/27726d4bd3ecbff1881f103f550727c7", null, "http://0.gravatar.com/avatar/fd14fa55a646578ca4abd3ba1864fb0f", null, "http://2.gravatar.com/avatar/8fc56e93b4935fb86725839a60088425", null, "http://1.gravatar.com/avatar/7f3d6280ef1742f462e4a42d878f5716", null, "http://1.gravatar.com/avatar/1e253bbe2b531090dd01a6c248107c89", null, "http://1.gravatar.com/avatar/7f3d6280ef1742f462e4a42d878f5716", null, "http://1.gravatar.com/avatar/7f3d6280ef1742f462e4a42d878f5716", null, "http://1.gravatar.com/avatar/4cd656fee3b91e79161d8d87605a50aa", null, "http://1.gravatar.com/avatar/df5e955d6442a17702853444fb9b71f4", null, "http://1.gravatar.com/avatar/7f3d6280ef1742f462e4a42d878f5716", null, "http://0.gravatar.com/avatar/041e91ac6dcefefb42c7bac96792b5ad", null, "http://1.gravatar.com/avatar/7f3d6280ef1742f462e4a42d878f5716", null, "http://0.gravatar.com/avatar/041e91ac6dcefefb42c7bac96792b5ad", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8391262,"math_prob":0.9031327,"size":14955,"snap":"2019-51-2020-05","text_gpt3_token_len":3644,"char_repetition_ratio":0.12019263,"word_repetition_ratio":0.018820576,"special_character_ratio":0.23771314,"punctuation_ratio":0.13464053,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9933641,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70],"im_url_duplicate_count":[null,5,null,5,null,5,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-16T03:13:27Z\",\"WARC-Record-ID\":\"<urn:uuid:1158c09f-0510-44b9-8ace-93d49f355b6b>\",\"Content-Length\":\"74820\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:05a576cc-8155-4a11-9644-d110886f829b>\",\"WARC-Concurrent-To\":\"<urn:uuid:2f9856d7-002c-4197-80be-e7c6290aea6c>\",\"WARC-IP-Address\":\"46.29.49.21\",\"WARC-Target-URI\":\"http://santiago.begueria.es/2010/10/generating-spatially-correlated-random-fields-with-r/\",\"WARC-Payload-Digest\":\"sha1:3LG5M64LTWLJEREK4TMV6HECNDSRH2CT\",\"WARC-Block-Digest\":\"sha1:ESNWF4L3J3AZ23VEXZ7CXZ2ZFTTWGKIE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541315293.87_warc_CC-MAIN-20191216013805-20191216041805-00409.warc.gz\"}"}
https://www.redhotpawn.com/forum/posers-and-puzzles/the-second-oldest-chess-riddle.38423
[ "", null, "The Second Oldest Chess Riddle", null, "Bishopcrw", null, "Posers and Puzzles", null, "15 Feb '06 18:49\n1.", null, "15 Feb '06 18:49\nNow that every one on the site is the wiser about how many squares on a chess board, I thought I would pose this question.\n\nHow many rectangles are on a chess board?\n\nP.M. your answers to me and I will let you know if you are correct or not.\n\nThen after a day or two I will post the solution.\n\nHave fun 😀\n2.", null, "15 Feb '06 19:51\nOriginally posted by Bishopcrw\nNow that every one on the site is the wiser about how many squares on a chess board, I thought I would pose this question.\n\nHow many rectangles are on a chess board?\n\nP.M. your answers to me and I will let you know if you are correct or not.\n\nThen after a day or two I will post the solution.\n\nHave fun 😀\n1296, of which 204 are squares.🙂\n3.", null, "15 Feb '06 20:03\nOriginally posted by excalibur 8\n1296, of which 204 are squares.🙂\nI concur.\n4.", null, "15 Feb '06 20:22\n5.", null, "15 Feb '06 20:26\nOriginally posted by Bishopcrw\nSo what? We're right. And for the record, I only concurred with the answer already given.\n6.", null, "15 Feb '06 20:39\nOriginally posted by Bishopcrw\nI am right, and you risk being a pedant.\n7.", null, "15 Feb '06 21:03\nIf there were ever a game for pedants it would be Chess.\nThe constant correction of the slightest flaws insearch of exacting perfection.\n\nMay be that is why my rating is so low😵\n\nYour skin might need a little thickening if you are going to make it in this or any other forum. For there are certainly a lot more people who are willing to correct you for much less.\n\nBut very well,\nYes, you did get the right answer. Very Good!😀\n\nSince this thread already had the spoiler post. If anyone does not think this is the correct answer and would like to discuss their answer and receive some clues as to what to fix please feel free to post.\n8.", null, "16 Feb '06 09:10\nNothing wrong with the thickness of my skin, and have no desire to 'make it' (whatever that means) in the forums.\n\nFor me, chess is simply entertaining, and reading these forums is sometimes educational and occasionally hilarious; in short, not to be taken too seriously. Thanks for the riddle.\n9.", null, "16 Feb '06 11:581 edit\nIn general, for positive integers m and n, how many rectangles are there on an mXn 'chessboard'?\n\n10.", null, "17 Feb '06 23:072 edits\n\nThere are 1296 rectangles on the chess board.\nThe reason there are so many more rectangles than just squares is due to two attributes.\n1). Dimension\n2). Orientation\n\nWe will now build on the squares puzzle.\nWe need to determine the dimention of the rectangles:\nSo we identify the sides of the board with x and y, respectively.\nFor a 1 by 2 rectangle, you can fit 8 on side x and 7 long on side y.\nfor a 1 by 3, there are 8 on x and 6 on y: as seen below\n8 * 7 = 56\n8 * 6 = 48\n\nTo simplify things a little we will do the following.\n8*(7+6+5+4+3+2+1) = 224\n7*(6+5+4+3+2+1) = 147\n\nNotice two things quickly\n1). the dimensions that would cause a square are excluded(8 by 8, or 7 by 7)\n2). the y dimension is one shorter than the previous line. This is because we already counted the 1x and 2y rectangles in the first line.\nThis pattern will continue through out.\n\n6*(5+4+3+2+1) = 90\n5*(4+3+2+1) = 50\n4*(3+2+1) = 24\n3*(2+1) = 9\n2*(1) = 2\n____________________________\nNow we add them up and get = 546\n\nThis is the total of rectangles with varying dimensions. And because they have varying dimensions they can also face the other direction on the board (Orientation), meaning we just counted all the x, y rectangles but now need all the y,x rectangles.\n\nSo we multiply by 2 = 1092\n\nAnd now we add in our squares of 204. Since the are the same in dimension in both x,y and y,x they were excluded from the above calculation. (and yes squares are rectangles, but not all rectangles are squares, Rectangle - geometrical shape with four right angles and opposite sides equal in length, squares also have adjacent sides equal in length)\n\nAnd get a total amount of rectangles of = 1296\n\n11.", null, "25 Feb '06 03:15\nOriginally posted by Bishopcrw\n\nThere are 1296 rectangles on the chess board.\nThe reason there are so many more rectangles than just squares is due to two attributes.\n1). Dimension\n2). Orientation\n\nWe will now build on the squares puzzle.\nWe need to determine the dimention of the rectangles:\nSo we identify the sides of the board with x and y, respectively.\n...[text shortened]... in length)\n\nAnd get a total amount of rectangles of = 1296\n\nThats why you win so many games!😵\n12.", null, "06 Mar '06 05:44\nOriginally posted by excalibur 8\nI am right, and you risk being a pedant.\nWell if he is just at risk, he should wear a condomint.\n13.", null, "08 Mar '06 17:57\nOriginally posted by Trains44\nThats why you win so many games!😵\nThank you Trains!\nAlhtough it didn't seem to help in our last game😛\n\nAs my wife tells me:\n\n\"Flattery will get you everywhere!\"\n\nCare for another?" ]
[ null, "https://www.redhotpawn.com/img/uisvg/forum-heading/puzzles.svg", null, "https://www.redhotpawn.com/img/uisvg/user-deco/substar-free.svg", null, "https://www.redhotpawn.com/img/uisvg/site/xpopinfo-link.svg", null, "https://www.redhotpawn.com/img/uisvg/site/clock.svg", null, "https://www.redhotpawn.com/img/uisvg/site/clock.svg", null, "https://www.redhotpawn.com/img/uisvg/site/clock.svg", null, "https://www.redhotpawn.com/img/uisvg/site/clock.svg", null, "https://www.redhotpawn.com/img/uisvg/site/clock.svg", null, "https://www.redhotpawn.com/img/uisvg/site/clock.svg", null, "https://www.redhotpawn.com/img/uisvg/site/clock.svg", null, "https://www.redhotpawn.com/img/uisvg/site/clock.svg", null, "https://www.redhotpawn.com/img/uisvg/site/clock.svg", null, "https://www.redhotpawn.com/img/uisvg/site/clock.svg", null, "https://www.redhotpawn.com/img/uisvg/site/clock.svg", null, "https://www.redhotpawn.com/img/uisvg/site/clock.svg", null, "https://www.redhotpawn.com/img/uisvg/site/clock.svg", null, "https://www.redhotpawn.com/img/uisvg/site/clock.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9418385,"math_prob":0.9497374,"size":2246,"snap":"2019-26-2019-30","text_gpt3_token_len":600,"char_repetition_ratio":0.13514718,"word_repetition_ratio":0.0,"special_character_ratio":0.29029384,"punctuation_ratio":0.08418891,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9707832,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-15T21:20:54Z\",\"WARC-Record-ID\":\"<urn:uuid:d7b2c786-a1f1-4124-bc24-f77d48c5c080>\",\"Content-Length\":\"47470\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d6c79894-a882-4272-8da5-346b50940621>\",\"WARC-Concurrent-To\":\"<urn:uuid:ec7fe24d-43c8-4c2e-9ff3-bf0f857bba99>\",\"WARC-IP-Address\":\"52.91.31.62\",\"WARC-Target-URI\":\"https://www.redhotpawn.com/forum/posers-and-puzzles/the-second-oldest-chess-riddle.38423\",\"WARC-Payload-Digest\":\"sha1:7U4GJT3LZBC3HZ5THKFCZPCIGPULML67\",\"WARC-Block-Digest\":\"sha1:O6U2XZ4ADK4QQD2B6UI4IHXSMCSRIL5X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627997335.70_warc_CC-MAIN-20190615202724-20190615224724-00396.warc.gz\"}"}
https://codex-africanus.readthedocs.io/en/0.2.1/rime-api.html
[ "Functions used to compute the terms of the Radio Interferometer Measurement Equation (RIME). It describes the response of an interferometer to a sky model.\n\n$V_{pq} = G_{p} \\left( \\sum_{s} E_{ps} L_{p} K_{ps} B_{s} K_{qs}^H L_{q}^H E_{qs}^H \\right) G_{q}^H$\n\nwhere for antenna $$p$$ and $$q$$, and source $$s$$:\n\n• $$G_{p}$$ represents direction-independent effects.\n• $$E_{ps}$$ represents direction-dependent effects.\n• $$L_{p}$$ represents the feed rotation.\n• $$K_{ps}$$ represents the phase delay term.\n• $$B_{s}$$ represents the brightness matrix.\n\nThe RIME is more formally described in the following four papers:\n\nNumpy¶\n\n predict_vis(time_index, antenna1, antenna2) Multiply Jones terms together to form model visibilities according to the following formula: phase_delay(lm, uvw, frequency[, convention]) Computes the phase delay (K) term: parallactic_angles(times, antenna_positions, …) Computes parallactic angles per timestep for the given reference antenna position and field centre. feed_rotation(parallactic_angles[, feed_type]) Computes the 2x2 feed rotation (L) matrix from the parallactic_angles. transform_sources(lm, parallactic_angles, …) Creates beam sampling coordinates suitable for use in beam_cube_dde() by: beam_cube_dde(beam, beam_lm_extents, …) Evaluates Direction Dependent Effects along a source’s path by interpolating the values of a complex beam cube at the source location. zernike_dde(coords, coeffs, noll_index) Computes Direction Dependent Effects by evaluating Zernicke Polynomials defined by coefficients coeffs and noll indexes noll_index at the specified coordinates coords. wsclean_predict(uvw, lm, source_type, flux, …) Predict visibilities from a WSClean sky model.\nafricanus.rime.predict_vis(time_index, antenna1, antenna2, dde1_jones=None, source_coh=None, dde2_jones=None, die1_jones=None, base_vis=None, die2_jones=None)[source]\n\nMultiply Jones terms together to form model visibilities according to the following formula:\n\n$V_{pq} = G_{p} \\left( B_{pq} + \\sum_{s} E_{ps} X_{pqs} E_{qs}^H \\right) G_{q}^H$\n\nwhere for antenna $$p$$ and $$q$$, and source $$s$$:\n\n• $$B_{{pq}}$$ represent base coherencies.\n• $$E_{{ps}}$$ represents Direction-Dependent Jones terms.\n• $$X_{{pqs}}$$ represents a coherency matrix (per-source).\n• $$G_{{p}}$$ represents Direction-Independent Jones terms.\n\nGenerally, $$E_{ps}$$, $$G_{p}$$, $$X_{pqs}$$ should be formed by using the RIME API functions and combining them together with einsum().\n\nParameters: time_index : numpy.ndarray Time index used to look up the antenna Jones index for a particular baseline with shape (row,). Obtainable via np.unique(time, return_inverse=True). antenna1 : numpy.ndarray Antenna 1 index used to look up the antenna Jones for a particular baseline. with shape (row,). antenna2 : numpy.ndarray Antenna 2 index used to look up the antenna Jones for a particular baseline. with shape (row,). dde1_jones : numpy.ndarray, optional $$E_{ps}$$ Direction-Dependent Jones terms for the first antenna. shape (source,time,ant,chan,corr_1,corr_2) source_coh : numpy.ndarray, optional $$X_{pqs}$$ Direction-Dependent Coherency matrix for the baseline. with shape (source,row,chan,corr_1,corr_2) dde2_jones : numpy.ndarray, optional $$E_{qs}$$ Direction-Dependent Jones terms for the second antenna. This is usually the same array as dde1_jones as this preserves the symmetry of the RIME. predict_vis will perform the conjugate transpose internally. shape (source,time,ant,chan,corr_1,corr_2) die1_jones : numpy.ndarray, optional $$G_{ps}$$ Direction-Independent Jones terms for the first antenna of the baseline. with shape (time,ant,chan,corr_1,corr_2) base_vis : numpy.ndarray, optional $$B_{pq}$$ base coherencies, added to source coherency summation before multiplication with die1_jones and die2_jones. shape (row,chan,corr_1,corr_2). die2_jones : numpy.ndarray, optional $$G_{ps}$$ Direction-Independent Jones terms for the second antenna of the baseline. This is usually the same array as die1_jones as this preserves the symmetry of the RIME. predict_vis will perform the conjugate transpose internally. shape (time,ant,chan,corr_1,corr_2) visibilities : numpy.ndarray Model visibilities of shape (row,chan,corr_1,corr_2)\n\nNotes\n\n• Direction-Dependent terms (dde{1,2}_jones) and Independent (die{1,2}_jones) are optional, but if one is present, the other must be present.\n• The inputs to this function involve row, time and ant (antenna) dimensions.\n• Each row is associated with a pair of antenna Jones matrices at a particular timestep via the time_index, antenna1 and antenna2 inputs.\n• The row dimension must be an increasing partial order in time.\nafricanus.rime.phase_delay(lm, uvw, frequency, convention='fourier')[source]\n\nComputes the phase delay (K) term:\n\n\\begin{align}\\begin{aligned}& {\\Large e^{-2 \\pi i (u l + v m + w (n - 1))} }\\\\& \\textrm{where } n = \\sqrt{1 - l^2 - m^2}\\end{aligned}\\end{align}\nParameters: LM coordinates of shape (source, 2) with L and M components in the last dimension. uvw : numpy.ndarray UVW coordinates of shape (row, 3) with U, V and W components in the last dimension. frequency : numpy.ndarray frequencies of shape (chan,) convention : {‘fourier’, ‘casa’} Uses the $$e^{-2 \\pi \\mathit{i}}$$ sign convention if fourier and $$e^{2 \\pi \\mathit{i}}$$ if casa. complex_phase : numpy.ndarray complex of shape (source, row, chan)\n\nNotes\n\nCorresponds to the complex exponential of the Van Cittert-Zernike Theorem.\n\nMeqTrees uses the CASA sign convention.\n\nafricanus.rime.parallactic_angles(times, antenna_positions, field_centre, backend='casa')[source]\n\nComputes parallactic angles per timestep for the given reference antenna position and field centre.\n\nParameters: times : numpy.ndarray Array of Mean Julian Date times in seconds with shape (time,), antenna_positions : numpy.ndarray Antenna positions of shape (ant, 3) in metres in the ITRF frame. field_centre : numpy.ndarray Field centre of shape (2,) in radians backend : {‘casa’, ‘test’}, optional Backend to use for calculating the parallactic angles. casa defers to an implementation depending on python-casacore. This backend should be used by default. test creates parallactic angles by multiplying the times and antenna_position arrays. It exist solely for testing. parallactic_angles : numpy.ndarray Parallactic angles of shape (time,ant)\nafricanus.rime.feed_rotation(parallactic_angles, feed_type='linear')[source]\n\nComputes the 2x2 feed rotation (L) matrix from the parallactic_angles.\n\n$\\begin{split}\\textrm{linear} \\begin{bmatrix} cos(pa) & sin(pa) \\\\ -sin(pa) & cos(pa) \\end{bmatrix} \\qquad \\textrm{circular} \\begin{bmatrix} e^{-i pa} & 0 \\\\ 0 & e^{i pa} \\end{bmatrix}\\end{split}$\nParameters: parallactic_angles : numpy.ndarray floating point parallactic angles. Of shape (pa0, pa1, ..., pan). feed_type : {‘linear’, ‘circular’} The type of feed feed_matrix : numpy.ndarray Feed rotation matrix of shape (pa0, pa1,...,pan,2,2)\nafricanus.rime.transform_sources(lm, parallactic_angles, pointing_errors, antenna_scaling, frequency, dtype=None)[source]\n\nCreates beam sampling coordinates suitable for use in beam_cube_dde() by:\n\n1. Rotating lm coordinates by the parallactic_angles\n3. Scaling by antenna_scaling\nParameters: LM coordinates of shape (src,2) in radians offset from the phase centre. parallactic_angles : numpy.ndarray parallactic angles of shape (time, antenna) in radians. pointing_errors : numpy.ndarray LM pointing errors for each antenna at each timestep in radians. Has shape (time, antenna, 2) antenna_scaling : numpy.ndarray antenna scaling factor for each channel and each antenna. Has shape (antenna, chan) frequency : numpy.ndarray frequencies for each channel. Has shape (chan,) dtype : numpy.dtype, optional Numpy dtype of result array. Should be float32 or float64. Defaults to float64 coords : numpy.ndarray coordinates of shape (3, src, time, antenna, chan) where each coordinate component represents l, m and frequency, respectively.\nafricanus.rime.beam_cube_dde(beam, beam_lm_extents, beam_freq_map, lm, parallactic_angles, point_errors, antenna_scaling, frequency)[source]\n\nEvaluates Direction Dependent Effects along a source’s path by interpolating the values of a complex beam cube at the source location.\n\nParameters: beam : numpy.ndarray Complex beam cube of shape (beam_lw, beam_mh, beam_nud, corr, corr). beam_lw, beam_mh and beam_nud define the size of the cube in the l, m and frequency dimensions, respectively. beam_lm_extents : numpy.ndarray lm extents of the beam cube of shape (2, 2). [[lower_l, upper_l], [lower_m, upper_m]]. beam_freq_map : numpy.ndarray Beam frequency map of shape (beam_nud,). This array is used to define interpolation along the (chan,) dimension. Source lm coordinates of shape (source, 2). These coordinates are: Scaled if the associated frequency lies outside the beam cube. Offset by pointing errors: point_errors Rotated by parallactic angles: parallactic_angles. Scaled by antenna scaling factors: antenna_scaling. parallactic_angles : numpy.ndarray Parallactic angles of shape (time, ant). point_errors : numpy.ndarray Pointing errors of shape (time, ant, chan, 2). antenna_scaling : numpy.ndarray Antenna scaling factors of shape (ant, chan, 2) frequency : numpy.ndarray Frequencies of shape (chan,). ddes : numpy.ndarray Direction Dependent Effects of shape (source, time, ant, chan, corr, corr)\n\nNotes\n\n1. Sources are clamped to the provided beam_lm_extents.\n2. Frequencies outside the cube (i.e. outside beam_freq_map) introduce linear scaling to the lm coordinates of a source.\nafricanus.rime.zernike_dde(coords, coeffs, noll_index)[source]\n\nComputes Direction Dependent Effects by evaluating Zernicke Polynomials defined by coefficients coeffs and noll indexes noll_index at the specified coordinates coords.\n\nDecomposition of a voxel beam cube into Zernicke polynomial coefficients can be achieved through the use of the eidos package.\n\nParameters: coords : numpy.ndarray Float coordinates at which to evaluate the zernike polynomials. Has shape (3, source, time, ant, chan). The three components in the first dimension represent l, m and frequency coordinates, respectively. coeffs : numpy.ndarray complex Zernicke polynomial coefficients. Has shape (ant, chan, corr_1, ..., corr_n, poly) where poly is the number of polynomial coefficients and corr_1, ..., corr_n are a variable number of correlation dimensions. noll_index : numpy.ndarray Noll index associated with each polynomial coefficient. Has shape (ant, chan, corr_1, ..., corr_n, poly). dde : numpy.ndarray complex values with shape (source, time, ant, chan, corr_1, ..., corr_n)\nafricanus.rime.wsclean_predict(uvw, lm, source_type, flux, coeffs, log_poly, ref_freq, gauss_shape, frequency)[source]\n\nPredict visibilities from a WSClean sky model.\n\nParameters: uvw : numpy.ndarray UVW coordinates of shape (row, 3) Source LM coordinates of shape (source, 2). Derived from the Ra and Dec fields. source_type : numpy.ndarray Strings defining the source type of shape (source,). Should be either \"POINT\" or \"GAUSSIAN\". Contains the Type field. flux : numpy.ndarray Source flux of shape (source,). Contains the I field. coeffs : numpy.ndarray Source Polynomial coefficients of shape (source, coeffs). Contains the SpectralIndex field. log_poly : numpy.ndarray Source polynomial type of shape (source,). If True, logarithmic polynomials are used. If False, standard polynomials are used. Contains the LogarithmicSI field. ref_freq : numpy.ndarray Source Reference frequency of shape (source,). Contains the ReferenceFrequency field. gauss_shape : numpy.ndarray Gaussian shape parameters of shape (source, 3) used when the corresponding source_type is \"GAUSSIAN\". The 3 components should contain the MajorAxis, MinorAxis and Orientation fields, respectively. frequency : numpy.ndarray Frequency of shape (chan,). visibilities : numpy.ndarray Complex visibilities of shape (row, chan, 1)\n\nCuda¶\n\n predict_vis(time_index, antenna1, antenna2) Multiply Jones terms together to form model visibilities according to the following formula: phase_delay(lm, uvw, frequency) Computes the phase delay (K) term: feed_rotation(parallactic_angles[, feed_type]) Computes the 2x2 feed rotation (L) matrix from the parallactic_angles. beam_cube_dde(beam, beam_lm_ext, …) Evaluates Direction Dependent Effects along a source’s path by interpolating the values of a complex beam cube at the source location.\nafricanus.rime.cuda.predict_vis(time_index, antenna1, antenna2, dde1_jones=None, source_coh=None, dde2_jones=None, die1_jones=None, base_vis=None, die2_jones=None)[source]\n\nMultiply Jones terms together to form model visibilities according to the following formula:\n\n$V_{pq} = G_{p} \\left( B_{pq} + \\sum_{s} E_{ps} X_{pqs} E_{qs}^H \\right) G_{q}^H$\n\nwhere for antenna $$p$$ and $$q$$, and source $$s$$:\n\n• $$B_{{pq}}$$ represent base coherencies.\n• $$E_{{ps}}$$ represents Direction-Dependent Jones terms.\n• $$X_{{pqs}}$$ represents a coherency matrix (per-source).\n• $$G_{{p}}$$ represents Direction-Independent Jones terms.\n\nGenerally, $$E_{ps}$$, $$G_{p}$$, $$X_{pqs}$$ should be formed by using the RIME API functions and combining them together with einsum().\n\nParameters: time_index : cupy.ndarray Time index used to look up the antenna Jones index for a particular baseline with shape (row,). Obtainable via cp.unique(time, return_inverse=True). antenna1 : cupy.ndarray Antenna 1 index used to look up the antenna Jones for a particular baseline. with shape (row,). antenna2 : cupy.ndarray Antenna 2 index used to look up the antenna Jones for a particular baseline. with shape (row,). dde1_jones : cupy.ndarray, optional $$E_{ps}$$ Direction-Dependent Jones terms for the first antenna. shape (source,time,ant,chan,corr_1,corr_2) source_coh : cupy.ndarray, optional $$X_{pqs}$$ Direction-Dependent Coherency matrix for the baseline. with shape (source,row,chan,corr_1,corr_2) dde2_jones : cupy.ndarray, optional $$E_{qs}$$ Direction-Dependent Jones terms for the second antenna. This is usually the same array as dde1_jones as this preserves the symmetry of the RIME. predict_vis will perform the conjugate transpose internally. shape (source,time,ant,chan,corr_1,corr_2) die1_jones : cupy.ndarray, optional $$G_{ps}$$ Direction-Independent Jones terms for the first antenna of the baseline. with shape (time,ant,chan,corr_1,corr_2) base_vis : cupy.ndarray, optional $$B_{pq}$$ base coherencies, added to source coherency summation before multiplication with die1_jones and die2_jones. shape (row,chan,corr_1,corr_2). die2_jones : cupy.ndarray, optional $$G_{ps}$$ Direction-Independent Jones terms for the second antenna of the baseline. This is usually the same array as die1_jones as this preserves the symmetry of the RIME. predict_vis will perform the conjugate transpose internally. shape (time,ant,chan,corr_1,corr_2) visibilities : cupy.ndarray Model visibilities of shape (row,chan,corr_1,corr_2)\n\nNotes\n\n• Direction-Dependent terms (dde{1,2}_jones) and Independent (die{1,2}_jones) are optional, but if one is present, the other must be present.\n• The inputs to this function involve row, time and ant (antenna) dimensions.\n• Each row is associated with a pair of antenna Jones matrices at a particular timestep via the time_index, antenna1 and antenna2 inputs.\n• The row dimension must be an increasing partial order in time.\nafricanus.rime.cuda.phase_delay(lm, uvw, frequency)[source]\n\nComputes the phase delay (K) term:\n\n\\begin{align}\\begin{aligned}& {\\Large e^{-2 \\pi i (u l + v m + w (n - 1))} }\\\\& \\textrm{where } n = \\sqrt{1 - l^2 - m^2}\\end{aligned}\\end{align}\nParameters: lm : cupy.ndarray LM coordinates of shape (source, 2) with L and M components in the last dimension. uvw : cupy.ndarray UVW coordinates of shape (row, 3) with U, V and W components in the last dimension. frequency : cupy.ndarray frequencies of shape (chan,) convention : {‘fourier’, ‘casa’} Uses the $$e^{-2 \\pi \\mathit{i}}$$ sign convention if fourier and $$e^{2 \\pi \\mathit{i}}$$ if casa. complex_phase : cupy.ndarray complex of shape (source, row, chan)\n\nNotes\n\nCorresponds to the complex exponential of the Van Cittert-Zernike Theorem.\n\nMeqTrees uses the CASA sign convention.\n\nafricanus.rime.cuda.feed_rotation(parallactic_angles, feed_type='linear')[source]\n\nComputes the 2x2 feed rotation (L) matrix from the parallactic_angles.\n\n$\\begin{split}\\textrm{linear} \\begin{bmatrix} cos(pa) & sin(pa) \\\\ -sin(pa) & cos(pa) \\end{bmatrix} \\qquad \\textrm{circular} \\begin{bmatrix} e^{-i pa} & 0 \\\\ 0 & e^{i pa} \\end{bmatrix}\\end{split}$\nParameters: parallactic_angles : cupy.ndarray floating point parallactic angles. Of shape (pa0, pa1, ..., pan). feed_type : {‘linear’, ‘circular’} The type of feed feed_matrix : cupy.ndarray Feed rotation matrix of shape (pa0, pa1,...,pan,2,2)\nafricanus.rime.cuda.beam_cube_dde(beam, beam_lm_ext, beam_freq_map, lm, parangles, pointing_errors, antenna_scaling, frequencies)[source]\n\nEvaluates Direction Dependent Effects along a source’s path by interpolating the values of a complex beam cube at the source location.\n\nParameters: beam : cupy.ndarray Complex beam cube of shape (beam_lw, beam_mh, beam_nud, corr, corr). beam_lw, beam_mh and beam_nud define the size of the cube in the l, m and frequency dimensions, respectively. beam_lm_extents : cupy.ndarray lm extents of the beam cube of shape (2, 2). [[lower_l, upper_l], [lower_m, upper_m]]. beam_freq_map : cupy.ndarray Beam frequency map of shape (beam_nud,). This array is used to define interpolation along the (chan,) dimension. lm : cupy.ndarray Source lm coordinates of shape (source, 2). These coordinates are: Scaled if the associated frequency lies outside the beam cube. Offset by pointing errors: point_errors Rotated by parallactic angles: parallactic_angles. Scaled by antenna scaling factors: antenna_scaling. parallactic_angles : cupy.ndarray Parallactic angles of shape (time, ant). point_errors : cupy.ndarray Pointing errors of shape (time, ant, chan, 2). antenna_scaling : cupy.ndarray Antenna scaling factors of shape (ant, chan, 2) frequency : cupy.ndarray Frequencies of shape (chan,). ddes : cupy.ndarray Direction Dependent Effects of shape (source, time, ant, chan, corr, corr)\n\nNotes\n\n1. Sources are clamped to the provided beam_lm_extents.\n2. Frequencies outside the cube (i.e. outside beam_freq_map) introduce linear scaling to the lm coordinates of a source.\n\n predict_vis(time_index, antenna1, antenna2) Multiply Jones terms together to form model visibilities according to the following formula: phase_delay(lm, uvw, frequency[, convention]) Computes the phase delay (K) term: parallactic_angles(times, antenna_positions, …) Computes parallactic angles per timestep for the given reference antenna position and field centre. feed_rotation(parallactic_angles, feed_type) Computes the 2x2 feed rotation (L) matrix from the parallactic_angles. transform_sources(lm, parallactic_angles, …) Creates beam sampling coordinates suitable for use in beam_cube_dde() by: beam_cube_dde(beam, beam_lm_extents, …) Evaluates Direction Dependent Effects along a source’s path by interpolating the values of a complex beam cube at the source location. zernike_dde(coords, coeffs, noll_index) Computes Direction Dependent Effects by evaluating Zernicke Polynomials defined by coefficients coeffs and noll indexes noll_index at the specified coordinates coords. wsclean_predict(uvw, lm, source_type, flux, …) Predict visibilities from a WSClean sky model.\nafricanus.rime.dask.predict_vis(time_index, antenna1, antenna2, dde1_jones=None, source_coh=None, dde2_jones=None, die1_jones=None, base_vis=None, die2_jones=None, streams=None)[source]\n\nMultiply Jones terms together to form model visibilities according to the following formula:\n\n$V_{pq} = G_{p} \\left( B_{pq} + \\sum_{s} E_{ps} X_{pqs} E_{qs}^H \\right) G_{q}^H$\n\nwhere for antenna $$p$$ and $$q$$, and source $$s$$:\n\n• $$B_{{pq}}$$ represent base coherencies.\n• $$E_{{ps}}$$ represents Direction-Dependent Jones terms.\n• $$X_{{pqs}}$$ represents a coherency matrix (per-source).\n• $$G_{{p}}$$ represents Direction-Independent Jones terms.\n\nGenerally, $$E_{ps}$$, $$G_{p}$$, $$X_{pqs}$$ should be formed by using the RIME API functions and combining them together with einsum().\n\nParameters: time_index : dask.array.Array Time index used to look up the antenna Jones index for a particular baseline with shape (row,). Obtainable via time.map_blocks(lambda a: np.unique(a, return_inverse=True)). antenna1 : dask.array.Array Antenna 1 index used to look up the antenna Jones for a particular baseline. with shape (row,). antenna2 : dask.array.Array Antenna 2 index used to look up the antenna Jones for a particular baseline. with shape (row,). dde1_jones : dask.array.Array, optional $$E_{ps}$$ Direction-Dependent Jones terms for the first antenna. shape (source,time,ant,chan,corr_1,corr_2) source_coh : dask.array.Array, optional $$X_{pqs}$$ Direction-Dependent Coherency matrix for the baseline. with shape (source,row,chan,corr_1,corr_2) dde2_jones : dask.array.Array, optional $$E_{qs}$$ Direction-Dependent Jones terms for the second antenna. This is usually the same array as dde1_jones as this preserves the symmetry of the RIME. predict_vis will perform the conjugate transpose internally. shape (source,time,ant,chan,corr_1,corr_2) die1_jones : dask.array.Array, optional $$G_{ps}$$ Direction-Independent Jones terms for the first antenna of the baseline. with shape (time,ant,chan,corr_1,corr_2) base_vis : dask.array.Array, optional $$B_{pq}$$ base coherencies, added to source coherency summation before multiplication with die1_jones and die2_jones. shape (row,chan,corr_1,corr_2). die2_jones : dask.array.Array, optional $$G_{ps}$$ Direction-Independent Jones terms for the second antenna of the baseline. This is usually the same array as die1_jones as this preserves the symmetry of the RIME. predict_vis will perform the conjugate transpose internally. shape (time,ant,chan,corr_1,corr_2) streams : {False, True} If True the coherencies are serially summed in a linear chain. If False, dask uses a tree style reduction algorithm. visibilities : dask.array.Array Model visibilities of shape (row,chan,corr_1,corr_2)\n\nNotes\n\n• Direction-Dependent terms (dde{1,2}_jones) and Independent (die{1,2}_jones) are optional, but if one is present, the other must be present.\n\n• The inputs to this function involve row, time and ant (antenna) dimensions.\n\n• Each row is associated with a pair of antenna Jones matrices at a particular timestep via the time_index, antenna1 and antenna2 inputs.\n\n• The row dimension must be an increasing partial order in time.\n\n• The ant dimension should only contain a single chunk equal to the number of antenna. Since each row can contain any antenna, random access must be preserved along this dimension.\n\n• The chunks in the row and time dimension must align. This subtle point must be understood otherwise invalid results will be produced by the chunking scheme. In the example below we have four unique time indices [0,1,2,3], and four unique antenna [0,1,2,3] indexing 10 rows.\n\n# Row indices into the time/antenna indexed arrays\ntime_idx = np.asarray([0,0,1,1,2,2,2,2,3,3])\nant1 = np.asarray( [0,0,0,0,1,1,1,2,2,3]\nant2 = np.asarray( [0,1,2,3,1,2,3,2,3,3])\n\nA reasonable chunking scheme for the row and time dimension would be (4,4,2) and (2,1,1) respectively. Another way of explaining this is that the first four rows contain two unique timesteps, the second four rows contain one unique timestep and the last two rows contain one unique timestep.\n\nSome rules of thumb:\n\n1. The number chunks in row and time must match although the individual chunk sizes need not.\n\n2. Unique timesteps should not be split across row chunks.\n\n3. For a Measurement Set whose rows are ordered on the TIME column, the following is a good way of obtaining the row chunking strategy:\n\nimport numpy as np\nimport pyrap.tables as pt\n\nms = pt.table(\"data.ms\")\ntimes = ms.getcol(\"TIME\")\nunique_times, chunks = np.unique(times, return_counts=True)\n\n4. Use aggregate_chunks() to aggregate multiple row and time chunks into chunks large enough such that functions operating on the resulting data can drop the GIL and spend time processing the data. Expanding the previous example:\n\n# Aggregate row\nutimes = unique_times.size\n# Single chunk for each unique time\ntime_chunks = (1,)*utimes\n# Aggregate row chunks into chunks <= 10000\naggregate_chunks((chunks, time_chunks), (10000, utimes))\n\nComputes the phase delay (K) term:\n\n\\begin{align}\\begin{aligned}& {\\Large e^{-2 \\pi i (u l + v m + w (n - 1))} }\\\\& \\textrm{where } n = \\sqrt{1 - l^2 - m^2}\\end{aligned}\\end{align}\nParameters: LM coordinates of shape (source, 2) with L and M components in the last dimension. uvw : dask.array.Array UVW coordinates of shape (row, 3) with U, V and W components in the last dimension. frequency : dask.array.Array frequencies of shape (chan,) convention : {‘fourier’, ‘casa’} Uses the $$e^{-2 \\pi \\mathit{i}}$$ sign convention if fourier and $$e^{2 \\pi \\mathit{i}}$$ if casa. complex_phase : dask.array.Array complex of shape (source, row, chan)\n\nNotes\n\nCorresponds to the complex exponential of the Van Cittert-Zernike Theorem.\n\nMeqTrees uses the CASA sign convention.\n\nComputes parallactic angles per timestep for the given reference antenna position and field centre.\n\nParameters: times : dask.array.Array Array of Mean Julian Date times in seconds with shape (time,), antenna_positions : dask.array.Array Antenna positions of shape (ant, 3) in metres in the ITRF frame. field_centre : dask.array.Array Field centre of shape (2,) in radians backend : {‘casa’, ‘test’}, optional Backend to use for calculating the parallactic angles. casa defers to an implementation depending on python-casacore. This backend should be used by default. test creates parallactic angles by multiplying the times and antenna_position arrays. It exist solely for testing. parallactic_angles : dask.array.Array Parallactic angles of shape (time,ant)\n\nComputes the 2x2 feed rotation (L) matrix from the parallactic_angles.\n\n$\\begin{split}\\textrm{linear} \\begin{bmatrix} cos(pa) & sin(pa) \\\\ -sin(pa) & cos(pa) \\end{bmatrix} \\qquad \\textrm{circular} \\begin{bmatrix} e^{-i pa} & 0 \\\\ 0 & e^{i pa} \\end{bmatrix}\\end{split}$\nParameters: parallactic_angles : numpy.ndarray floating point parallactic angles. Of shape (pa0, pa1, ..., pan). feed_type : {‘linear’, ‘circular’} The type of feed feed_matrix : numpy.ndarray Feed rotation matrix of shape (pa0, pa1,...,pan,2,2)\nafricanus.rime.dask.transform_sources(lm, parallactic_angles, pointing_errors, antenna_scaling, frequency, dtype=None)[source]\n\nCreates beam sampling coordinates suitable for use in beam_cube_dde() by:\n\n1. Rotating lm coordinates by the parallactic_angles\n3. Scaling by antenna_scaling\nParameters: LM coordinates of shape (src,2) in radians offset from the phase centre. parallactic_angles : dask.array.Array parallactic angles of shape (time, antenna) in radians. pointing_errors : dask.array.Array LM pointing errors for each antenna at each timestep in radians. Has shape (time, antenna, 2) antenna_scaling : dask.array.Array antenna scaling factor for each channel and each antenna. Has shape (antenna, chan) frequency : dask.array.Array frequencies for each channel. Has shape (chan,) dtype : numpy.dtype, optional Numpy dtype of result array. Should be float32 or float64. Defaults to float64 coords : dask.array.Array coordinates of shape (3, src, time, antenna, chan) where each coordinate component represents l, m and frequency, respectively.\nafricanus.rime.dask.beam_cube_dde(beam, beam_lm_extents, beam_freq_map, lm, parallactic_angles, point_errors, antenna_scaling, frequencies)[source]\n\nEvaluates Direction Dependent Effects along a source’s path by interpolating the values of a complex beam cube at the source location.\n\nParameters: beam : dask.array.Array Complex beam cube of shape (beam_lw, beam_mh, beam_nud, corr, corr). beam_lw, beam_mh and beam_nud define the size of the cube in the l, m and frequency dimensions, respectively. beam_lm_extents : dask.array.Array lm extents of the beam cube of shape (2, 2). [[lower_l, upper_l], [lower_m, upper_m]]. beam_freq_map : dask.array.Array Beam frequency map of shape (beam_nud,). This array is used to define interpolation along the (chan,) dimension. Source lm coordinates of shape (source, 2). These coordinates are: Scaled if the associated frequency lies outside the beam cube. Offset by pointing errors: point_errors Rotated by parallactic angles: parallactic_angles. Scaled by antenna scaling factors: antenna_scaling. parallactic_angles : dask.array.Array Parallactic angles of shape (time, ant). point_errors : dask.array.Array Pointing errors of shape (time, ant, chan, 2). antenna_scaling : dask.array.Array Antenna scaling factors of shape (ant, chan, 2) frequency : dask.array.Array Frequencies of shape (chan,). ddes : dask.array.Array Direction Dependent Effects of shape (source, time, ant, chan, corr, corr)\n\nNotes\n\n1. Sources are clamped to the provided beam_lm_extents.\n2. Frequencies outside the cube (i.e. outside beam_freq_map) introduce linear scaling to the lm coordinates of a source." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6076757,"math_prob":0.9806995,"size":32497,"snap":"2022-05-2022-21","text_gpt3_token_len":8740,"char_repetition_ratio":0.1551411,"word_repetition_ratio":0.72925556,"special_character_ratio":0.24436101,"punctuation_ratio":0.2253899,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9972455,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-25T16:52:37Z\",\"WARC-Record-ID\":\"<urn:uuid:ed0730e1-c483-42f7-95a5-5b655e792bf9>\",\"Content-Length\":\"144934\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6aae6424-9644-49a0-b4e4-bed747cce262>\",\"WARC-Concurrent-To\":\"<urn:uuid:57cd0b28-d14a-4da1-953c-e4c7553acd36>\",\"WARC-IP-Address\":\"104.17.33.82\",\"WARC-Target-URI\":\"https://codex-africanus.readthedocs.io/en/0.2.1/rime-api.html\",\"WARC-Payload-Digest\":\"sha1:FZTMKHMLCN76JG75UXPEEU5K4NTRISWU\",\"WARC-Block-Digest\":\"sha1:GAZ6CYBCLQ7YSJDZONTH2QZ4M2LHB5NH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304859.70_warc_CC-MAIN-20220125160159-20220125190159-00290.warc.gz\"}"}
https://negocio-nytta.com/articles/annualized-return-vs-cumulative-return-2015-11-03j2va711672adv
[ "Home\n\n# Cumulative incidence formula\n\n### Cumulative incidence epidemiology Britannic\n\n• Cumulative incidence, also called incidence proportion, in epidemiology, estimate of the risk that an individual will experience an event or develop a disease during a specified period of time. Cumulative incidence is calculated as the number of new events or cases of disease divided by the total number of individuals in the population at risk.\n• Cumulative Incidence Explained. Cumulative incidence, also known as incidence proportion is an epidemiology measure that describes the number of new disease onsets per number of people in the population at risk during a specified period of time. This indicator can be measured in cohorts (closed populations only) and requires follow-up of.\n• In contrast to prevalence, incidence is a measure of the occurrence of new cases of disease (or some other outcome) during a span of time.There are two related measures that are used in this regard: incidence proportion (cumulative incidence) and incidence rate. A useful way to think about cumulative incidence (incidence proportion) is that it is the probability of developing disease over a.\n\nDefinition of Cumulative Incidence The proportion of individuals who experience the event in a defined time period (E/N during some time T) = cumulative incidence. OR. E(vent)/N(umber) during some time T(ime) = cumulative incidence Time Period for Cumulative Incidence If the measure is a proportion of persons, it is unitless since it has to vary between 0 and 1 Standard errors for the cumulative incidence function can be obtained using the delta method, although the derivation is a bit more complicated that in the case of Greenwood's formula. In the Stata logs we study how long U.S. Supreme Court Justices serve on the court, treating death and retirement as competing risks, with th\n\n### Cumulative Incidence Formula Calculator - MDAp\n\n• Cumulative incidence; Incidence proportion is the proportion of an initially disease-free population that develops disease, becomes injured, or dies during a specified (usually limited) period of time. Synonyms include attack rate, risk, probability of getting disease, and cumulative incidence. Incidence proportion is a proportion because the.\n• Incidence Ratio Cumulative Incidence Ratio Incidence Density Ratio Hazard Ratio Ratio of two risks or rates. Provides a relative measure of the effect of the exposure. Risk (or Rate) Difference Cumulative Incidence Difference Incidence Density Difference Difference of two risks or rates. Provides an absolute measure of the effect of the exposure\n• Cumulative incidence (the proportion of a population at risk that will develop an outcome in a given period of time) provides a measure of risk, and it is an intuitive way to think about possible health outcomes. An incidence rate is less intuitive, because it is really an estimate of the instantaneous rate of disease, i.e. the rate at which.\n• ology: The term rate is often used loosely, to refer to any of the above measures of disease frequency (even though the only true rate is the incidence density rate • Odds: Both prevalence and incidence proportions may be addressed in terms of odds\n• Incidence provides information about the spread of disease. Cumulative incidence (CI) and incidence rate (IR) are different approaches to calculating incidence, based on the nature of followup time. Let's say that health-care professionals working in an intensive care unit have asked whether there has been an increase in the number of new.\n• A constant rate produces an exponential cumulative incidence (or survival) distribution. If you know the instantaneous incidence rate, you can derive the cumulative incidence/survival function or vice-versa. There is a formal mathematical relationship between rate and cumulative incidence. It is represented by this formula for a constant rate\n\n### Incidence: Risk, Cumulative Incidence (Incidence\n\nCumulative Incidence Calculator. Cumulative Incidence is also known as the incidence proportion. In epidemiology, it is a measure of the frequency of disease during a period (entire lifetime). Use our cumulative incidence calculator to know the incidence proportion for a given number of an incident rate for a disease in a duration Centers for Disease Control and Prevention 1600 Clifton Rd. Atlanta, GA 30333, USA 800-CDC-INFO (800-232-4636) TTY: (888) 232-6348, 24 Hours/Every Day - [email protected]\n\n### Cumulative Incidence - Person-Time - CTSPedi\n\n1. I have 9 time points, 53026 events and 98230 individuals. There are no lost-to followup or competing risks. I have attached a snapshot of my data. When I calculate it manually, I get Survival =.0.46 and cumulative incidence=0.54 (i.e. 53026/98230) but when I use proc lifetest I get Survival=0.48 and cumulative incidence=0.52\n2. Also called risk and cumulative incidence. A measure of association calculated for studies that observe incident cases of disease ( cohorts or RCTs ). Calculated as the incidence proportion in the exposed over the incidence proportion in the unexposed, or A/(A+B) / C/(C+D), from a standard 2x2 table\n3. Cumulative incidence is a measure of the probability or risk of event (e.g. disease recurrence). It indicates what proportion of the population will experience the event during the specified time period. The cumulative incidence ratio is typically the ratio of the cumulative incidence in a treated or exposed group of people to that in a control o\n4. Incidence proportion. Incidence proportion (IP), also known as cumulative incidence, is defined as the probability that a particular event, such as occurrence of a particular disease, has occurred before a given time.. It is calculated dividing the number of new cases during a given period by the number of subjects at risk in the population initially at risk at the beginning of the study\n\nEstimating and modelling cumulative incidence functions using time-dependent weights Paul C Lambert1;2 1Department of Health Sciences, University of Leicester, Leicester, UK 2Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden UK Stata Users Group, London, September 201 Formula to calculate incidence rate. Example: In a day, there were 40 new corona virus cases, the county's population is 40,000 people. Calculate the incidence rate. Therefore, the disease's incidence rate is 0.001. Share. Tweet. Reddit. Pinterest. Email. Prev Article. Next Article From incidence function to cumulative-incidence-rate / risk. part II Draft aug 15, 2012 With part I as orientation, this second of the pair addresses our main objective { demysifying the formula used to convert an incidence function t\n\n### Principles of Epidemiology Lesson 3 - Section\n\nAdditionally, Excel can calculate the CAGR using the RAG function. Input this formula into a new cell by entering the following: = RATE (C1, -A1, B1). Press Enter and Excel will display the answer. Method 3. Using CAGR to Predict Cumulative Growth. 1.Identify the values required to calculate CAGR What is cumulative incidence. Cumulative incidence investigates disease frequency at a certain period of time.It is often called competing risks case. Technically, for a given period, the cumulative incidence is the probability that an observation still included in the analysis at the beginning of this period will be affected by an event during the period. It is especially appropriate in the. For this calculation, create a cell for the formula and enter: =POWER(B1/A1,(1/C1))-1. Again, the answer should appear in this cell after you press enter. Additionally, Excel can calculate CAGR by using the RATE function. Input this formula into a new cell by entering the following: =RATE(C1,-A1,B1) Start studying Cumulative incidence vs Incidence rate vs prevalence. Learn vocabulary, terms, and more with flashcards, games, and other study tools\n\nIncidence Rate Formula. The following formula is used to calculate an incidence rate. IR = #NC / AP * 100. Where IR is the incidence rate (%) #NC is the number of new cases during the time period; AP is the average population during the time perio Cumulative Incidence. The cumulative incidence is the proportion of a fixed population that develops epilepsy in a certain time. In a US study, the cumulative incidence was 1.2% by the age of 24 years, 3% by 75 years, and 4.4% by 85 years. Almost identical figures were found in Sweden and Iceland\n\n### Epidemiologic Formulas and Terminolog\n\n1. 92296 Fundamentals of Epidemiology: Formula sheet Cumulative incidence = Number of individuals experiencing a NEW event during a time period Number of susceptible individuals at the BEGINNING of the time period Cumulative incidence is also referred to as ' incidence proportion '. Incidence density = Number of individuals experiencing a NEW event during a time period PERSON-TIME of.\n2. Overview. Cumulative incidence (sometimes referred to as incidence proportion) is a measure of disease frequency which counts the proportion of a candidate population that becomes diseased/develops disease over a specified period of time.Cumulative incidence measures occurrence of new cases of disease. It involves the transition from one state to another, such as non-disease to diseased state.\n3. A cumulative incidence plot is usually used to visualize the estimated probability of an event prior to a specified time. For example, Figure 1 is a cumulative incidence plot. In this plot, the y-axis is the cumulative incidence rate and x-axis is the duration of the study in months. The numbers at the bottom of the plot ar\n4. Cumulative incidence of atopic disorders in high risk infants fed whey hydrolysate, soy, and conventional cow milk formulas. Chandra RK(1), Hamed A. Author information: (1)Department of Pediatrics, Memorial University of Newfoundland, St. John's, Canada\n5. This sounds like an incidence rate not cumulative incidence, which is a risk (number of cases divided by original disease-free population). Presumably the 95% CI for cumulative incidence is the same as that for any proportion. If you are after the 95% confidence interval for a rate it is given by\n7. The cumulative incidence of acute rheumatic fever estimates the proportion of susceptible individuals in endemically exposed populations. Our figures of 2·7-5·7% susceptible are consistent with others in the literature. Such comparisons suggest that the major part of the variation in rheumatic fever incidence between populations is due to.\n\n### Relationship of Incidence Rate to Cumulative Incidence (Risk\n\n• To calculate the Compound Annual Growth Rate in Excel, there is a basic formula =((End Value/Start Value)^(1/Periods) -1.And we can easily apply this formula as following: 1.Select a blank cell, for example Cell E3, enter the below formula into it, and press the Enter key.See screenshot\n• Formula: Cumulative Incidence = 1 - e (-IR x D) Where, IR = Incident Rate D = Duration e = 2.7182818284590452 Example: Calculate the incidence proportion for the incident rate of disease of 5 in a period of 50 years. (i.e., at any moment in time during the period the rate is the same).The longer the time period, the more the cumulative.\n• 5. For example, take a look at the formula in cell C4. 6. At step 2, enter the IF function shown below (and drag it down to cell C7) to only display a cumulative sum if data has been entered. Explanation: if cell B2 is not empty (> means not equal to), the IF function in cell C2 displays a cumulative sum, else it displays an empty string. 7\n\nCompeting risks occur commonly in medical research. For example, both treatment-related mortality and disease recurrence are important outcomes of interest and well-known competing risks in cancer research. In the analysis of competing risks data, methods of standard survival analysis such as the Kaplan-Meier method for estimation of cumulative incidence, the log-rank test for comparison of. Measures - cumulative incidence Cumulative incidence (CI) - aka risk, incidence proportion (IP - Rothman) • The proportion of a closed population at risk that becomes diseased within a given period of time • Numerator: number of new cases of a disease or a condition (Rothman calls this A) • Denominator: number of persons in.\n\nAbstract The cumulative incidence ratio is a relative risk. Cumulative Incidence Ratio - Gail - 2005 - Major Reference Works - Wiley Online Library Skip to Article Conten The formula instructs Excel to do the following: if cell C2 is blank, then return an empty string (blank cell), otherwise apply the cumulative total formula. Now, you can copy the formula to as many cells as you want, and the formula cells will look empty until you enter a number in the corresponding row in column C Cumulative total on columns that can be sorted. Most commonly, the cumulative total pattern tends to be based on the date. That said, that pattern can be adapted to any column that can be sorted. The option for a column to be sorted is important because the code includes a less than or equal to condition to work properly\n\n### Terminology 101: Cumulative incidence and incidence rat\n\nThe authors use the term cumulative incidence rate, when indeed cumulative incidence is a proportion, not a rate (events expressed per unit time) [2, 3]. For included studies that only contained information on cumulative incidence, but not an actual incidence rate, the authors used cumulative incidence (over the study period) divided by. Similarly, there's single-event probability and there's cumulative probability, and if you're looking at data, you need to understand the difference. Probability of a Single Event. The simplest type of probability is the measure of the chance that a single event will occur. For example, let's say we have a fair, six-sided die", null, "where is the cumulative probability that an event has already occurred, is the rate at which that event occurs, and is the number of rolls that have occurred so far. Negative rolls are meaningless by definition, so the probability function is always zero there. In order to make sense of this formula, it's useful to rearrange it Compound interest, or 'interest on interest', is calculated with the compound interest formula. The formula for compound interest is P (1 + r/n)^(nt), where P is the initial principal balance, r is the interest rate, n is the number of times interest is compounded per time period and t is the number of time periods Then press Enter key to get the first result, and then select the formula cell and drag the fill handle down to the cells that you want to apply this formula, see screenshot: Note : With the above formula, the cumulative totals in the rows below the last value in column B shows the same cumulative total The cumulative default rate up to time period k is denoted and can be considered as the integral (sum) of the Incremental Default Rate. Formula. The cumulative default rate during period k, given an initial count of N 0 and incremental default rate N t D. See Als In cell B10, enter a formula using the CUMIPMT function to calculate the cumulative interest paid on the loan for Year 1 (payment 1 in cell B7 through payment 4 in cell B8). Use 0 as the type argument. Use absolute references for the rate, nper, and pv arguments, and use relative references for the start and end arguments", null, "Cumulative Return: A cumulative return is the aggregate amount an investment has gained or lost over time, independent of the period of time involved. Presented as a percentage, the cumulative. Real World Example of Incidence Rate. In 2013, a county in the United States with a population of 500,000 people may have had 20 new cases of tuberculosis (TB), resulting in an incidence rate of four cases per 100,000 people. This is higher than the overall TB incidence rate in the United States, which was 9,852 new TB cases in 2013, or three cases per 100,000 people The compound annual growth rate (CAGR) shows the rate of return of an investment over a certain period of time, expressed in annual percentage terms. Below is an overview of how to calculate it.\n\n### Rate and Cumulative Incidence - CTSPedi\n\nIf you are also R-user than predict.crr function computes the predicted cumulative incidence function for a given set of covariate values z. Predicted probability of dying within 3-years = the. The cumulative distribution function (CDF) calculates the cumulative probability for a given x-value. Formula. mean = np. variance = np(1 - p) The probability mass function (PMF) is: instantaneous rate of failure (hazard function). The exponential distribution is a special case of the Weibull distribution and the gamma distribution Analysis of studies comparing soy to a hydrolysed formula found a significant increase in infant (one study: RR 1.67, 95% CI 1.03, 2.69) and childhood allergy cumulative incidence (one study: RR 1.55, 95% CI 1.02, 2.35), infant eczema cumulative incidence (2 studies: typical RR 2.34, 95% CI 1.51, 3.62) and childhood food allergy period. In this step, we are calculating the crude, age-specific rates. A crude incidence rate is the number of new cancers of a specific site/type occurring in a specified population during a year, usually expressed as the number of cancers per 100,000 population at risk. It is calculated using the following formula In epidemiology, prevalence is the proportion of a particular population found to be affected by a medical condition (typically a disease or a risk factor such as smoking or seat-belt use) at a specific time. It is derived by comparing the number of people found to have the condition with the total number of people studied, and is usually expressed as a fraction, a percentage, or the number of.\n\n### Cumulative Incidence Calculator - Easycalculation\n\n1. A. Incidence proportion = attack rate = absolute risk = probability of developing a disease= cumulative incidence numberof newcases of disease ∈ a population ¿ of personsat risk of t hedisease (expressed as a %) - The incidence proportion of X disease in this study sample over the five years of the study was Y new infections per Z (e.g. 100) people. OR - There were Y new cases of Hep C for.\n2. For this example, we want to calculate cumulative principal payments over the full term of a 5-year loan of $5,000 with an interest rate of 4.5%. To do this, we set up CUMPRINC like this: rate - The interest rate per period. We divide the value in C6 by 12 since 4.5% represents annual interest: 3. Also called cumulative incidence or an incidence rate, this formula provides an estimate of the risk for developing a disease. The population at risk information for the United States, which is needed to calculate the TB incidence (or case) rate, is collected by the US Census 4. The cumulative total DAX formula pattern that I cover in this tutorial is a little different to the one you may have used in the past. This is because there's a different requirement here around how to ultimately calculate the Cumulative Total for the average daily run rate 5. us the cumulative principal paid. Cumulative interest paid at time CalcPds: =PdRate*(Period*Loan - ((Period^2-Period)/2) * PrinPmt) Until the final formula above, the term-loan calculations were easy. Let's conclude this article by exa The answer is 7.2%. If your XYZ shares grow at a 7.2% annual compound rate for 10 years, you will have doubled your investment and achieved a 100% cumulative rate of return. The math involved in this calculation is complex. If you would like dive into the details you can read more here: Calculate a Compound Annual Rate of Retur rate - The interest rate per period. nper - The total number of payments. pv - the loan amount Start - start month for the period end period - end month for the period Type - The timing of the payment, either at the beginning or end of the period. Numbers 0 or 1 represent the payment date. The number 0 represents payment at the end of the period, and the number 1 represents payment at. Relative risk is a statistical term used to describe the chances of a certain event occurring among one group versus another. It is commonly used in epidemiology and evidence-based medicine, where relative risk helps identify the probability of developing a disease after an exposure (e.g., a drug treatment or an environmental event) versus the chance of developing the disease in the absence of. Incidence, in epidemiology, occurrence of new cases of disease, injury, or other medical conditions over a specified time period, typically calculated as a rate or proportion. Examples of incident cases or events include a person developing diabetes, becoming infected with HIV, starting to smoke Cumulative incidence is a measure of frequency, as in epidemiology where it is a measure of disease frequency during a period of time. Cumulative incidence is the incidence calculated using a. ### How to calculate Incidence Rate - Centers for Disease 1. That amount is called the cumulative return. Calculating the cumulative return allows an investor to compare the amount of money he is making on different investments, such as stocks, bonds or real estate. To calculate the cumulative return, you need to know just a few variables 2. Formula for Cumulative Dividend. To calculate the dollar amount of a cumulative dividend, use the following formula: Where: Dividend Rate is the expected dividend payment expressed as a percentage on an annualized basis. Par Value is the face value for a share. Note: The dividend rate and par value can be found on a preferred stock prospectus 3. The rate of interest on such loans are reset periodically, as should be stated clearly in the terms of the loan. Hence, assumptions of what these future interest rates will be have to be made in order to calculate cumulative interest, or any metric of yield, rate, or cost for that matter 4. The incidence of atopic dermatitis (AD) is increasing worldwide. Clinical studies have observed reduced risks of AD among infants fed with 100% whey partially hydrolyzed infant formula (PHF-W) compared with intact protein cow's milk formula. To evaluate this potential relationship more comprehensive 5. Hello, I would like to calculate the cumulative rate of a query. See the image below: The correct sum of accumulated rates is 6.29% in the year 2016 - based on the formula highlighted in the image http://www.xlninja.com/2012/07/27/excel-cumulative-sum-formula/There are a couple of easy ways to add a running total to a range of data in your spreadsheet The cumulative total return is then: ($44.26 - $0.06607 ) /$0.06607 = 668.90 = 66,890% In mutual fund fact sheets and websites, the cumulative return can be quickly deduced from a graph that. Syntax of the CUMIPMT Formula. The generic formula for the CUMIPMT function is: =CUMIPMT(rate, nper, pv, start_period, end_period, type) The parameters of the CUMIPMT function are: rate - an interest rate for the period (the annual rate divided by 12) nper - the total number of payments for a loan; pv - the present value of a loa injuries per intern-month into a 12-month cumulative incidence or risk, and of what assumptions are involved. Indeed, few textbooks present, and fewer still explain, the formula linking incidence and risk. Increased understanding of this link is all the more critical nowadays, a What is the cumulative incidence of the disease for 1996? What is the mid-year size of the population in 1996 (i.e. as of July 1996)? In practice, calculating the total number of people at risk in a population is difficult, so a mid-year tally is used to estimate the total number at risk during the entire time period", null, "", null, "How to calculate 1) Amount of drug released and 2) Cumulative percentage release (%). The amount of drug conjugated particles in dialysis bag is 5 ml, Bath volume is 150 ml and sample withdrawn is. Cumulative means increasing or growing by accumulation or successive additions. Cumulative frequency is the total of a frequency and all frequencies so far in a frequency distribution. How to calculate cumulative frequency. The frequency distribution table below records number of plates with their colours It then calculates the cumulative return and the average return in three ways -- first the numeric average of the numbers you enter, and then the cumulative return divided by the number of years, and finally by taking the cumulative return and finding the single rate that would compound to that cumulative amount over your time period", null, "", null, "### Solved: How to calcuate the cumulative incidence and the s\n\nTo calculate the Cumulative Percentage I made the intermediate step of calculating individual category percentages. To do this I had to sum the Number column in B15 and code the cells in Column C individually. Is there a formula I could have entered in D3 to calculate the cumulative percentage without the intermediate step and simply copy. Hi Chinquee, In regard to your first point, YES I AGREE that shortcut works. Note 2-year cumulative = 1 - (2 yr Probability of Repay)^2; i.e., we need to square to get the compounding thru both years. I added a page behind the XLS on the member page; the red cell proves your point! I still like following Saunder's 'long way around' because it forces us to think of the spot rate curve as.\n\nThere are some practical scenarios where we need to find out running total with some predefined formula. Cumulative Total Definition. The cumulative total is also known as Running total. It is the sum of a sequence of numbers, which is updated each time a new number is added to the sequence. In simple words, we can say it represents the current. Calculating cumulative dividends per share First, determine the preferred stock's annual dividend payment by multiplying the dividend rate by its par value. Both of these can be found in the. In real-life research the loss rate would of course not normally be so uniform as this. Time Period: At Risk: Became Unavailable (Censored) Died: Survived; Year 1: 100: 3: 5? Year 2? 3: 10? Year 3? 3: 15? Year 4? 3: 20? Year 5? 3: 25? The question in a situation of this sort is: What shall we make of the subjects who become unavailable in a. Plotting a Cumulative Sum vs A Cumulative Sum Well Cumulative - Wrong value sometimes, but not always using Sum OVER Intersect All Previous Trying a simple OVER calculated column and am missing something I believe should be obvious\n\n### Introduction to 2 x 2 Tables, Epidemiologic Study Design\n\n1. Step 3: λ is the mean (average) number of events (also known as Parameter of Poisson Distribution). If you take the simple example for calculating λ => 1, 2,3,4,5. If you apply the same set of data in the above formula, n = 5, hence mean = (1+2+3+4+5)/5=3\n2. Cumulative Change. Okay, let's see what all of these represent N(t) = Our population at time t. N(0) = The initial population. (Can be thought of the initial height at time 0) = The rate of change of the population. All of this was from Calculus 154. What we're saying now is that if we integrate , we can retrieve the function N(t)\n3. Incidence and Prevalence: Examples Incidence/Incidence Rates Incidence Rates: Example Consider Chicken Pox, where the cumulative incidence rate is 20 percent per year, and 100 individuals are followed up. On average, after 6 months, 10 individuals will catch the disease. In diseases that happen only once (as with Chicken Pox), the 1\n4. If the cumulative risk in an age range is less than 10%, as is the case with most tumours, it can be approximated very well by the cumulative rate. Table 7.2 shows the correction needed to convert the cumulative rate into the cumulative risk. For values under 10%, the difference between the two is small\n5. Cumulative vs. Accumulative. Increasing or increased in quantity, degree, or force by successive additions is called Cumulative, whereas, gathering or growing by gradual increase is called Accumulative. Cumulative is the addition that comes up with successive contributors, while accumulative is the addition that happens gradually\n6. The formula for the hazard function of the Weibull distribution is $$h(x) = \\gamma x^{(\\gamma - 1)} \\hspace{.3in} x \\ge 0; \\gamma > 0$$ The following is the plot of the Weibull hazard function with the same values of γ as the pdf plots above. Cumulative Hazard Function The formula for the cumulative hazard function of the Weibull distribution i\n7. To have cumulative totals, just add up the values as you go. Example: Jamie has earned this much in the last 6 months: Month Earned; March: $120: April:$50: May: $110: June:$100: July: $50: August:$20: To work out the cumulative totals, just add up as you go. The first line is easy, the total earned so far is the same as Jamie earned that.\n\nThe incidence rate is calculated by dividing the number of initial cases by a population. In the example discussed above, if 2,000 individuals contracted or developed the disease during that year, the incidence rate is 0.1% (2,000/2,000,000). The population at risk is often used as the denominator for certain research purposes Also called the internal rate of return, the average annual return measures average return of an investment every year over a certain number of years instead of the total return amount at the end of that term. Like the cumulative return calculation, this is also expressed as a percentage Earned Value Formula #2 - Planned Value (PV) Planned Value (PV) is defined as budgeted cost for work planned to be done. This is also known as BCWS. In simple terms, as of today, what is the estimated value of the work planned to be done. Formula is PV= BAC * Planned % Complete. Where Planned % Complete= Given Amount/Total Amount Cumulative Incidence Rate also known as incidence rate, which often result s in confusion. often used interchangeably with risk, with a rate from 0 to 1 (or 0 to 100 percent). Referring back to the previous equation: New instances within a certain time perio", null, "", null, "", null, "MedCalc's free online Relative risk statistical calculator calculates Relative risk and Number needed to treat (NNT) with 95% Confidence Intervals from a 2x2 table RATE and IRR are also two other functions that you can find the compound annual growth rate with. To add the CAGR formula to Excel spreadsheets, check out this Tech Junkie guide CUMPRINC(rate,nper,pv,n1,n2,0) - Cumulative principal payment for the periods n1 through n2; EFFECT(nominal_rate,compounding_periods_per_year) - Calculates the effective annual interest rate. See the Excel help file on this function. These functions use similar definitions for the arguments: rate - The interest rate per period Completion rate (cumulative) = credits earned/credits attempted =25/30 = 83.33%: If you would like help calculating your completion rate, feel free to connect with a Warrior Success Center advisor. They can sit down with you and walk you through the process. Academic Status Resources\n\nWhen we multiply $5,000 x 4.3294 (the cumulative discount factor) we also get$21,647, not by coincidence. The Cumulative Discount Factor formula used is (1 - (1 + r)-t ) / r where r is the period interest rate expressed as a decimal and t is the specific year. For example, 6% is expressed as 6/100 or 0.06; t is the number of periods What is Cumulative Frequency in statistics. If the frequency of first class interval is added to the frequency of second class and this sum is added to third class and so on then frequencies so obtained are known as Cumulative Frequency (c.f.).; There are two types of cumulative frequencies (a) less than, (b) greater than A table which displays the manner in which cumulative frequencies are. A cancer incidence rate is the number of new cancers of a specific site/type occurring in a specified population during a year, usually expressed as the number of cancers per 100,000 population at risk. That is, The numerator of the incidence rate is the number of new cancers; the denominator is the size of the population. The number of new cancers may include multiple primary cancers. Y = cumulative average time per unit or batch. a = time taken to produce initial quantity. X = the cumulative units of production or, if in batches, the cumulative number of batches. b = the learning index or coefficient, which is calculated as: log learning curve percentage ÷ log2. So b for an 80 per cent curve would be log0.8 ÷ log2 = -0.322 Looking the data up on Yahoo! Finance, you find: Closing price on 3/13/1986: $28.00; Closing price on 9/30/2015:$44.26; Before we apply the formula for the cumulative return, we need to make one.\n\nDetails. If there is a data argument, then variables in the formula, codeweights, subset, id, cluster and istate arguments will be searched for in that data set.. The routine returns both an estimated probability in state and an estimated cumulative hazard estimate. The cumulative hazard estimate is the Nelson-Aalen (NA) estimate or the Fleming-Harrington (FH) estimate, the latter includes a. In the simplest terms, an age-adjusted breast cancer incidence rate of 124.5 cases per 100,000 women with a confidence interval of 122.5 - 126.6 cases per 100,000 means that there is a 95 percent chance that the rate was between 122.5 and 126.6 cases per 100,000 women I have a series of monthly return and need to create a formula that can compute it's cumulative return in the following manner: (1+return1)*(1+return2)*(1+return3))*(1+return4)*(1+return5) Subsequently, I would need to compute the average return by creating another formula in the following manner: ((1+return1)*(1+return2)*(1+return3)*(1+return4)*(1+return5))^(1/n) - 1, where n should be. Incidence is a measure of disease that allows us to determine a person's probability of being diagnosed with a disease during a given period of time. Therefore, incidence is the number of newly diagnosed cases of a disease. An incidence rate is the number of new cases of a disease divided by the number of persons at risk for the disease Cumulative and Non-Cumulative Preferred Stock. To ensure investors of the stream of dividends, most preferred stocks are cumulative preferred stock\n\n• Bagel pronunciation Yiddish.\n• How to set caller tune in Vodafone in Tamil.\n• Nys Global History and Geography Regents Exam January 2005.\n• Sterling Silver necklace Women's.\n• Insideout song.\n• Free Guatemala phone number.\n• Cupcakes and cashmere pants.\n• How to do the panorama mirror trick.\n• Cover your gray waterproof brush in wand.\n• How far is dallas from Baton Rouge.\n• Malt vinegar for skin.\n• Pharmatex sponge.\n• IPad email attachments not showing.\n• Virginia Center for Reproductive Medicine.\n• How much does it cost to replace shutter on Canon 5D Mark IV.\n• How to dress to attract a wealthy man.\n• ASICS shoes models.\n• Medicare Open Enrollment period 2020 extended.\n• Mewtwo Fire Red best moveset.\n• London broil temperature chart.\n• Throw away in spanish.\n• Battery calibration issues.\n• How to create an archive folder in Outlook.\n• Zoo PowerPoint Template.\n• Las Vegas helicopter tour reviews.\n• Do you need a permit to build a deck in Philadelphia.\n• Apple Developer app.\n• Carnations for sale Durban.\n• Windows 7 update 2020.\n• Remove name from car title Texas.\n• Today show Halloween 2018.\n• Ferries from France to Jersey.\n• Cost of living in Cape Town vs Johannesburg.\n• Halloween Special shows.\n• Blood test for colon cancer markers.\n• Why should we always wash your hands before handling food items.\n• Coolest things you can use your GI Bill for.\n• Printers compatible with Windows XP.\n• Three basic steps in market positioning." ]
[ null, "https://negocio-nytta.com/jsb/VQuc2OkHsYsGAXSKEIvZ5QHaEK.jpg", null, "https://negocio-nytta.com/jsb/cUpZ4reyd_G5ytrrj6vruwHaDQ.jpg", null, "https://negocio-nytta.com/jsb/jaTw1Ogo9TbRXLx4i63kQQHaHa.jpg", null, "https://negocio-nytta.com/jsb/FTtTvJreknW0KIMJ5lviogHaEK.jpg", null, "https://negocio-nytta.com/jsb/iPXyVbINvLPj_Hew8ydpBwHaHX.jpg", null, "https://negocio-nytta.com/jsb/I-kXKR4naYevFMdyWKMW0AHaFO.jpg", null, "https://negocio-nytta.com/jsb/mseBBpW2TkGiNlJFA-98JgHaKS.jpg", null, "https://negocio-nytta.com/jsb/Sr2aPUegDH_HuoBaG0Gn_gHaDH.jpg", null, "https://negocio-nytta.com/jsb/U4RHmat65hgwcKctDy5igAHaLH.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91854763,"math_prob":0.96708816,"size":20740,"snap":"2021-43-2021-49","text_gpt3_token_len":4566,"char_repetition_ratio":0.17766203,"word_repetition_ratio":0.030665888,"special_character_ratio":0.2246866,"punctuation_ratio":0.11293168,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9942807,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-27T12:42:43Z\",\"WARC-Record-ID\":\"<urn:uuid:5cec51d5-e6ae-487d-b6b2-ce776a241eb1>\",\"Content-Length\":\"45781\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2f2de791-d8b4-4c55-b8a4-d9d177de4a94>\",\"WARC-Concurrent-To\":\"<urn:uuid:747e8d5f-e80c-473b-8ffb-83e1b54e1846>\",\"WARC-IP-Address\":\"185.238.168.33\",\"WARC-Target-URI\":\"https://negocio-nytta.com/articles/annualized-return-vs-cumulative-return-2015-11-03j2va711672adv\",\"WARC-Payload-Digest\":\"sha1:ZXGMI2DI5W6VMRX7VJWTW3D2MCC5IO2Z\",\"WARC-Block-Digest\":\"sha1:IY76ELT4YRQALQISBWBIX7SXL5FA4PCH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588153.7_warc_CC-MAIN-20211027115745-20211027145745-00118.warc.gz\"}"}
https://jp.mathworks.com/matlabcentral/profile/authors/4675442
[ "Community Profile", null, "# Ricky\n\n2014 以来アクティブ\n\nWelder, mechanical engineer, custom bike builder computer scientist.\n\n#### Statistics\n\nAll\n•", null, "•", null, "•", null, "•", null, "•", null, "バッジを表示\n\n#### Content Feed\n\nIs it really a 5?\nA number containing at least one five will be passed to your function, which must return true or false depending upon whether th...\n\n5年弱 前\n\nPernicious Anniversary Problem\nSince Cody is 5 years old, it's pernicious. <http://rosettacode.org/wiki/Pernicious_numbers Pernicious number> is an integer whi...\n\n5年弱 前\n\nBasic electricity in a dry situation\nThis is a very hypothetical situation between two individuals in a very dry atmosphere. He came running in rubber boots when...\n\n5年弱 前\n\nEnergy of a photon\nGiven the frequency F of a photon in giga hertz. Find energy E of this photon in giga electron volts. Assume h, Planck's ...\n\n5年弱 前\n\nHow to subtract?\n* Imagine you need to subtract a few numbers using MATLAB. * You will not be using eval for this task. * Given two ASCII strin...\n\n5年弱 前\n\nDeploying Matlab for 32Bit Machines is No Longer Supported\nThis is somewhat more of an FYI than a question. However, there are a number of answers around regarding 32Bit applications that...\n\n5年以上 前 | 1 件の回答 | 0\n\n### 1\n\nIs my wife right?\nRegardless of input, output the string 'yes'.\n\nInspired by Problem 2008 created by Ziko. In mathematics, the persistence of a number is the *number of times* one must apply...\n\nSort a list of complex numbers based on far they are from the origin.\nGiven a list of complex numbers z, return a list zSorted such that the numbers that are farthest from the origin (0+0i) appear f...\n\nSumming digits\nGiven n, find the sum of the digits that make up 2^n. Example: Input n = 7 Output b = 11 since 2^7 = 128, and 1 + ...\n\nPizza!\nGiven a circular pizza with radius _z_ and thickness _a_, return the pizza's volume. [ _z_ is first input argument.] Non-scor...\n\nWho Has the Most Change?\nYou have a matrix for which each row is a person and the columns represent the number of quarters, nickels, dimes, and pennies t...\n\nReturn the 3n+1 sequence for n\nA Collatz sequence is the sequence where, for a given number n, the next number in the sequence is either n/2 if the number is e...\n\nBack and Forth Rows\nGiven a number n, create an n-by-n matrix in which the integers from 1 to n^2 wind back and forth along the rows as shown in the...\n\nMost nonzero elements in row\nGiven the matrix a, return the index r of the row with the most nonzero elements. Assume there will always be exactly one row th...\n\nFind the numeric mean of the prime numbers in a matrix.\nThere will always be at least one prime in the matrix. Example: Input in = [ 8 3 5 9 ] Output out is 4...\n\nRemove any row in which a NaN appears\nGiven the matrix A, return B in which all the rows that have one or more <http://www.mathworks.com/help/techdoc/ref/nan.html NaN...\n\nFinding Perfect Squares\nGiven a vector of numbers, return true if one of the numbers is a square of one of the other numbers. Otherwise return false. E...\n\nCreate times-tables\nAt one time or another, we all had to memorize boring times tables. 5 times 5 is 25. 5 times 6 is 30. 12 times 12 is way more th...\n\nFibonacci sequence\nCalculate the nth Fibonacci number. Given n, return f where f = fib(n) and f(1) = 1, f(2) = 1, f(3) = 2, ... Examples: Inpu...\n\nDetermine whether a vector is monotonically increasing\nReturn true if the elements of the input vector increase monotonically (i.e. each element is larger than the previous). Return f...\n\nSwap the first and last columns\nFlip the outermost columns of matrix A, so that the first column becomes the last and the last column becomes the first. All oth...\n\nMake a checkerboard matrix\nGiven an integer n, make an n-by-n matrix made up of alternating ones and zeros as shown below. The a(1,1) should be 1. Examp...\n\nFind all elements less than 0 or greater than 10 and replace them with NaN\nGiven an input vector x, find all elements of x less than 0 or greater than 10 and replace them with NaN. Example: Input ...\n\nTriangle Numbers\nTriangle numbers are the sums of successive integers. So 6 is a triangle number because 6 = 1 + 2 + 3 which can be displa...\n\nColumn Removal\nRemove the nth column from input matrix A and return the resulting matrix in output B. So if A = [1 2 3; 4 5 6]; and ...\n\nSelect every other element of a vector\nWrite a function which returns every other element of the vector passed in. That is, it returns the all odd-numbered elements, s...\n\nDetermine if input is odd\nGiven the input n, return true if n is odd or false if n is even." ]
[ null, "https://jp.mathworks.com/responsive_image/150/150/0/0/0/cache/matlabcentral/profiles/4675442.jpg", null, "https://jp.mathworks.com/images/responsive/supporting/matlabcentral/fileexchange/badges/first_review.png", null, "https://jp.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/commenter.png", null, "https://jp.mathworks.com/matlabcentral/profile/badges/Thankful_1.png", null, "https://jp.mathworks.com/matlabcentral/profile/badges/Badge_ScavengerHunt_Finisher.png", null, "https://jp.mathworks.com/images/responsive/supporting/matlabcentral/cody/badges/solver.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6230468,"math_prob":0.9881082,"size":5167,"snap":"2022-40-2023-06","text_gpt3_token_len":1850,"char_repetition_ratio":0.16056557,"word_repetition_ratio":0.031809144,"special_character_ratio":0.2796594,"punctuation_ratio":0.1475,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9971946,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,3,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-06T07:02:04Z\",\"WARC-Record-ID\":\"<urn:uuid:e54ea395-82ce-42cd-bc84-5866a1ecbd48>\",\"Content-Length\":\"115158\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:80df2e54-bba0-4b0c-8b02-adf09f126fe5>\",\"WARC-Concurrent-To\":\"<urn:uuid:b64ac3a7-d3b4-46f3-af55-27d43119fc63>\",\"WARC-IP-Address\":\"23.218.145.211\",\"WARC-Target-URI\":\"https://jp.mathworks.com/matlabcentral/profile/authors/4675442\",\"WARC-Payload-Digest\":\"sha1:BXRKWMPKFISPYPB2OGRNMITRQ747BFLY\",\"WARC-Block-Digest\":\"sha1:2TVODLUCDO6LKVAKQGDJDNW4BZM3BT3V\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337731.82_warc_CC-MAIN-20221006061224-20221006091224-00324.warc.gz\"}"}
http://lambda.jimpryor.net/git/gitweb.cgi?p=lambda.git;a=blob;f=from_list_zippers_to_continuations.mdwn;h=dcd11cec2181b4b98f37477e0fbd693ba889a7fa;hb=12830f839dda3c46101812c38f5cb6d3926aa623
[ "1 Refunctionalizing zippers: from lists to continuations\n2 ------------------------------------------------------\n4 If zippers are continuations reified (defuntionalized), then one route\n5 to continuations is to re-functionalize a zipper.  Then the\n6 concreteness and understandability of the zipper provides a way of\n7 understanding an equivalent treatment using continuations.\n9 Let's work with lists of `char`s for a change.  We'll sometimes write\n10 \"abSd\" as an abbreviation for\n11 `['a'; 'b'; 'S'; 'd']`.\n13 We will set out to compute a deceptively simple-seeming **task: given a\n14 string, replace each occurrence of 'S' in that string with a copy of\n15 the string up to that point.**\n17 We'll define a function `t` (for \"task\") that maps strings to their\n18 updated version.\n20 Expected behavior:\n22         t \"abSd\" ~~> \"ababd\"\n25 In linguistic terms, this is a kind of anaphora\n26 resolution, where `'S'` is functioning like an anaphoric element, and\n27 the preceding string portion is the antecedent.\n29 This task can give rise to considerable complexity.\n30 Note that it matters which 'S' you target first (the position of the *\n31 indicates the targeted 'S'):\n33             t \"aSbS\"\n34                 *\n35         ~~> t \"aabS\"\n36                   *\n37         ~~> \"aabaab\"\n39 versus\n41             t \"aSbS\"\n42                   *\n43         ~~> t \"aSbaSb\"\n44                 *\n45         ~~> t \"aabaSb\"\n46                    *\n47         ~~> \"aabaaabab\"\n49 versus\n51             t \"aSbS\"\n52                   *\n53         ~~> t \"aSbaSb\"\n54                    *\n55         ~~> t \"aSbaaSbab\"\n56                     *\n57         ~~> t \"aSbaaaSbaabab\"\n58                      *\n59         ~~> ...\n61 Apparently, this task, as simple as it is, is a form of computation,\n62 and the order in which the `'S'`s get evaluated can lead to divergent\n63 behavior.\n65 For now, we'll agree to always evaluate the leftmost `'S'`, which\n66 guarantees termination, and a final string without any `'S'` in it.\n68 This is a task well-suited to using a zipper.  We'll define a function\n69 `tz` (for task with zippers), which accomplishes the task by mapping a\n70 `char list zipper` to a `char list`.  We'll call the two parts of the\n71 zipper `unzipped` and `zipped`; we start with a fully zipped list, and\n72 move elements to the unzipped part by pulling the zipper down until the\n73 entire list has been unzipped, at which point the zipped half of the\n74 zipper will be empty.\n76         type 'a list_zipper = ('a list) * ('a list);;\n78         let rec tz (z : char list_zipper) =\n79           match z with\n80             | (unzipped, []) -> List.rev(unzipped) (* Done! *)\n81             | (unzipped, 'S'::zipped) -> tz ((List.append unzipped unzipped), zipped)\n82             | (unzipped, target::zipped) -> tz (target::unzipped, zipped);; (* Pull zipper *)\n84         # tz ([], ['a'; 'b'; 'S'; 'd']);;\n85         - : char list = ['a'; 'b'; 'a'; 'b'; 'd']\n87         # tz ([], ['a'; 'S'; 'b'; 'S']);;\n88         - : char list = ['a'; 'a'; 'b'; 'a'; 'a'; 'b']\n90 Note that the direction in which the zipper unzips enforces the\n93 One way to see exactly what is going on is to watch the zipper in\n94 action by tracing the execution of `tz`.  By using the `#trace`\n95 directive in the OCaml interpreter, the system will print out the\n96 arguments to `tz` each time it is called, including when it is called\n97 recursively within one of the `match` clauses.  Note that the\n98 lines with left-facing arrows (`<--`) show (both initial and recursive) calls to `tz`,\n99 giving the value of its argument (a zipper), and the lines with\n100 right-facing arrows (`-->`) show the output of each recursive call, a\n101 simple list.\n103         # #trace tz;;\n104         t1 is now traced.\n105         # tz ([], ['a'; 'b'; 'S'; 'd']);;\n106         tz <-- ([], ['a'; 'b'; 'S'; 'd'])       (* Initial call *)\n107         tz <-- (['a'], ['b'; 'S'; 'd'])         (* Pull zipper *)\n108         tz <-- (['b'; 'a'], ['S'; 'd'])         (* Pull zipper *)\n109         tz <-- (['b'; 'a'; 'b'; 'a'], ['d'])    (* Special 'S' step *)\n110         tz <-- (['d'; 'b'; 'a'; 'b'; 'a'], [])  (* Pull zipper *)\n111         tz --> ['a'; 'b'; 'a'; 'b'; 'd']        (* Output reversed *)\n112         tz --> ['a'; 'b'; 'a'; 'b'; 'd']\n113         tz --> ['a'; 'b'; 'a'; 'b'; 'd']\n114         tz --> ['a'; 'b'; 'a'; 'b'; 'd']\n115         tz --> ['a'; 'b'; 'a'; 'b'; 'd']\n116         - : char list = ['a'; 'b'; 'a'; 'b'; 'd']\n118 The nice thing about computations involving lists is that it's so easy\n119 to visualize them as a data structure.  Eventually, we want to get to\n120 a place where we can talk about more abstract computations.  In order\n121 to get there, we'll first do the exact same thing we just did with\n122 concrete zipper using procedures instead.\n124 Think of a list as a procedural recipe: `['a'; 'b'; 'c'; 'd']` is the result of\n125 the computation `'a'::('b'::('c'::('d'::[])))` (or, in our old style,\n126 `make_list 'a' (make_list 'b' (make_list 'c' (make_list 'd' empty)))`). The\n127 recipe for constructing the list goes like this:\n129 >       (0)  Start with the empty list []\n130 >       (1)  make a new list whose first element is 'd' and whose tail is the list constructed in step (0)\n131 >       (2)  make a new list whose first element is 'c' and whose tail is the list constructed in step (1)\n132 >       -----------------------------------------\n133 >       (3)  make a new list whose first element is 'b' and whose tail is the list constructed in step (2)\n134 >       (4)  make a new list whose first element is 'a' and whose tail is the list constructed in step (3)\n136 What is the type of each of these steps?  Well, it will be a function\n137 from the result of the previous step (a list) to a new list: it will\n138 be a function of type `char list -> char list`.  We'll call each step\n139 (or group of steps) a **continuation** of the previous steps.  So in this\n140 context, a continuation is a function of type `char list -> char\n141 list`.  For instance, the continuation corresponding to the portion of\n142 the recipe below the horizontal line is the function `fun (tail : char\n143 list) -> 'a'::('b'::tail)`. What is the continuation of the 4th step? That is, after we've built up `'a'::('b'::('c'::('d'::[])))`, what more has to happen to that for it to become the list `['a'; 'b'; 'c'; 'd']`? Nothing! Its continuation is the function that does nothing: `fun tail -> tail`.\n145 In what follows, we'll be thinking about the result list that we're building up in this procedural way. We'll treat our input list just as a plain old static list data structure, that we recurse through in the normal way we're accustomed to. We won't need a zipper data structure, because the continuation-based representation of our result list will take over the same role.\n147 So our new function `tc` (for task with continuations) takes an input list (not a zipper) and a also takes a continuation `k` (it's conventional to use `k` for continuation variables). `k` is a function that represents how the result list is going to continue being built up after this invocation of `tc` delivers up a value. When we invoke `tc` for the first time, we expect it to deliver as a value the very de-S'd list we're seeking, so the way for the list to continue being built up is for nothing to happen to it. That is, our initial invocation of `tc` will supply `fun tail -> tail` as the value for `k`. Here is the whole `tc` function. Its structure and behavior follows `tz` from above, which we've repeated here to facilitate detailed comparison:\n149         let rec tz (z : char list_zipper) =\n150             match z with\n151             | (unzipped, []) -> List.rev(unzipped) (* Done! *)\n152             | (unzipped, 'S'::zipped) -> tz ((List.append unzipped unzipped), zipped)\n153             | (unzipped, target::zipped) -> tz (target::unzipped, zipped);; (* Pull zipper *)\n155         let rec tc (l: char list) (k: (char list) -> (char list)) =\n156             match l with\n157             | [] -> List.rev (k [])\n158             | 'S'::zipped -> tc zipped (fun tail -> k (k tail))\n159             | target::zipped -> tc zipped (fun tail -> target::(k tail));;\n161         # tc ['a'; 'b'; 'S'; 'd'] (fun tail -> tail);;\n162         - : char list = ['a'; 'b'; 'a'; 'b']\n164         # tc ['a'; 'S'; 'b'; 'S'] (fun tail -> tail);;\n165         - : char list = ['a'; 'a'; 'b'; 'a'; 'a'; 'b']\n167 To emphasize the parallel, we've re-used the names `zipped` and\n168 `target`.  The trace of the procedure will show that these variables\n169 take on the same values in the same series of steps as they did during\n170 the execution of `tz` above: there will once again be one initial and\n171 four recursive calls to `tc`, and `zipped` will take on the values\n172 `\"bSd\"`, `\"Sd\"`, `\"d\"`, and `\"\"` (and, once again, on the final call,\n173 the first `match` clause will fire, so the the variable `zipped` will\n174 not be instantiated).\n176 We have not named the continuation argument `unzipped`, although that is\n177 what the parallel would suggest.  The reason is that `unzipped` (in\n178 `tz`) is a list, but `k` (in `tc`) is a function.  That's the most crucial\n179 difference between the solutions---it's the\n180 point of the excercise, and it should be emphasized.  For instance,\n181 you can see this difference in the fact that in `tz`, we have to glue\n182 together the two instances of `unzipped` with an explicit (and,\n183 computationally speaking, relatively inefficient) `List.append`.\n184 In the `tc` version of the task, we simply compose `k` with itself:\n185 `k o k = fun tail -> k (k tail)`.\n187 A call `tc ['a'; 'b'; 'S'; 'd']` would yield a partially-applied function; it would still wait for another argument, a continuation of type `char list -> char list`. So we have to give it an \"initial continuation\" to get started. As mentioned above, we supply *the identity function* as the initial continuation. Why did we choose that? Again, if\n188 you have already constructed the result list `\"ababd\"`, what's the desired continuation? What's the next step in the recipe to produce the desired result, i.e, the very same list, `\"ababd\"`?  Clearly, the identity function.\n190 A good way to test your understanding is to figure out what the\n191 continuation function `k` must be at the point in the computation when\n192 `tc` is applied to the argument `\"Sd\"`.  Two choices: is it\n193 `fun tail -> 'a'::'b'::tail`, or it is `fun tail -> 'b'::'a'::tail`?  The way to see if you're right is to execute the following command and see what happens:\n195     tc ['S'; 'd'] (fun tail -> 'a'::'b'::tail);;\n197 There are a number of interesting directions we can go with this task.\n198 The reason this task was chosen is because the task itself (as opposed\n199 to the functions used to implement the task) can be viewed as a\n200 simplified picture of a computation using continuations, where `'S'`\n201 plays the role of a continuation operator. (It works like the Scheme\n202 operators `shift` or `control`; the differences between them don't\n203 manifest themselves in this example.\n204 See Ken Shan's paper [Shift to control](http://www.cs.rutgers.edu/~ccshan/recur/recur.pdf),\n205 which inspired some of the discussion in this topic.)\n206 In the analogy, the input list portrays a\n207 sequence of functional applications, where `[f1; f2; f3; x]` represents\n208 `f1(f2(f3 x))`.  The limitation of the analogy is that it is only\n209 possible to represent computations in which the applications are\n210 always right-branching, i.e., the computation `((f1 f2) f3) x` cannot\n211 be directly represented.\n213 One way to extend this exercise would be to add a special symbol `'#'`,\n214 and then the task would be to copy from the target `'S'` only back to\n215 the closest `'#'`.  This would allow our task to simulate delimited\n216 continuations with embedded `prompt`s (also called `reset`s).\n218 The reason the task is well-suited to the list zipper is in part\n219 because the List monad has an intimate connection with continuations.\n220 We'll explore this next." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83454794,"math_prob":0.60766673,"size":10960,"snap":"2020-45-2020-50","text_gpt3_token_len":3002,"char_repetition_ratio":0.16036874,"word_repetition_ratio":0.07550077,"special_character_ratio":0.34270072,"punctuation_ratio":0.15367483,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9546736,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-05T02:19:10Z\",\"WARC-Record-ID\":\"<urn:uuid:4b5eacdf-ef77-44b9-be1c-3f7793b86c7a>\",\"Content-Length\":\"83561\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:57059c2e-7fe7-4c97-b07f-e6678dae7e93>\",\"WARC-Concurrent-To\":\"<urn:uuid:20c312b2-eabd-4587-901e-309be9ecb13e>\",\"WARC-IP-Address\":\"45.79.164.50\",\"WARC-Target-URI\":\"http://lambda.jimpryor.net/git/gitweb.cgi?p=lambda.git;a=blob;f=from_list_zippers_to_continuations.mdwn;h=dcd11cec2181b4b98f37477e0fbd693ba889a7fa;hb=12830f839dda3c46101812c38f5cb6d3926aa623\",\"WARC-Payload-Digest\":\"sha1:2NSNLLONCYXLYLOOPJQMR6KVS7IURJSJ\",\"WARC-Block-Digest\":\"sha1:MTVNXLNK5Z7HRREG6XS2QDYXHTWEGPWT\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141746033.87_warc_CC-MAIN-20201205013617-20201205043617-00595.warc.gz\"}"}
https://id.scribd.com/document/248019261/uni2
[ "Anda di halaman 1dari 8\n\n# The image.\n\n## its mathematical and\n\nphysical background\nlinearity\nLinearity also concerns more general\nelements of vector spaces, for instance, functions. The linear combination is a key\nconcept in linear mathematics, permitting the expression of a new element of a vector\nspace as a sum of known elements multiplied by coefficients (scalars, usually real numbers) .\nA general linear combination of two vectors x , y can b e written as ax + by, where a , b are\nscalars.\nFourier transform\n\nThe discrete Fourier transform is analogous to the continuous one and may be efficiently\ncomputed using the fast Fourier transform algorithm.\n\n## The properties of linearity, shift of position, modulation, convolution, multiplication, and\n\ncorrelation are analogous to the continuous case, with the difference of the discrete\nperiodic nature of the image and its transform.\n\nLet\n\nJJ\n\n## Considering implementation of the discrete Fourier transform, equation (11.8) can be\n\nmodified\n\nThe term in square brackets ... one-dimensional Fourier transform of the m-th line can be\ncomputed using standard fast Fourier transform (FFT) procedures (usually assuming\nN=2k ).\nEach line is substituted with its Fourier transform, and the one-dimensional discrete\nFourier transform of each column is computed.\n\n## The Fourier transform of a real function is a complex function\n\nwhere R(u,v) and I(u,v) are, respectively, the real and imaginary components of F(u,v).\n\nThe magnitude function |F(u,v)| is called the frequency spectrum of image f(m,n)\n\nThe phase spectrum (u,v) and power spectrum P(u,v) are used\n\ntwo crosses are visible; one is formed by the image borders and depicts the x and y axes\nin the spectrum.\n... the second is rotated by approximately 10 o anti-clockwise with respect to the first.\nThis cross comes from the image data its directions correspond to the main edge\ndirections present in the image.\nIt is important to realize that the frequency spectrum lines seem to be rotated by 90\ndegrees with respect to the edge image edge directions because of the perpendicular\nrelationship between image edges and the image intensity changes (in this case, the\nsinusoidal basis functions) assessing the frequency character of the edges.\n\n## Discrete cosine transform\n\nThere are four definitions of the discrete cosine transform, sometimes denoted DCT-I,\nDCT-II, DCT-III, and DCT-IV.\nThe most commonly used discrete cosine transform in image processing and compression\nis DCT-II - using equation (11.2) and a square N x N image, the discrete transform matrix\ncan be expressed as\n\nIn the two-dimensional case, the formula for a normalized version of the discrete cosine\ntransform (forward cosine transform DCT-II) may be written\n\n## and the inverse cosine transform is\n\nNote that the discrete cosine transform computation can be based on the Fourier\ntransform - all N coefficients of the discrete cosine transform may be computed using a\n2N -point fast Fourier transform.\n\n## Discrete cosine transform forms the basis of JPEG image compression.\n\nWavelets Transform\n\nWavelets represent another approach to decompose complex signals into sums of basis\nfunctions.\nFourier functions are localized in frequency but not in space, in the sense that they isolate\nfrequencies, but not isolate occurrences of those frequencies.\nSmall frequency changes in a Fourier transform will produce changes everywhere in the\ntime domain.\nWavelets are local in both frequency (via dilations) and time (via translations) and\ntherefore are able to analyze data at different scales or resolutions better than simple sines\nand cosines.\nSharp spikes and discontinuities normally take fewer wavelet bases to represent than if\nsine-cosine basis functions are used.\nThese types of signals generally have a more compact representation using wavelets than\nwith sine-cosine functions.\nIn the same way as Fourier analysis, wavelets are derived from a basis function called the\nMother function or analyzing wavelet.\nThe simplest Mother function is the Haar Mother function shown below.\n\nHaar wavelet coefficients. The coefficients in the upper left corner are related to a low resolution\nimage while the other panels correspond to high resolution features.\n\nWavelets are often used for data compression and image noise suppression.\nEIGEN-ANALYSIS\n\n## data to be expressed as a linear combination in\n\na new coordinate system consisting of orthogonal basis vectors. These basis vectors\nare the eigen-vectors, and the inherent orthogonality of eigen-vectors secures mutual\nindependence. For an n x n square regular matrix A,.\nOther orthogonal image transforms\n\n## Many other orthogonal image transforms exist.\n\nHadamard, Paley, Walsh, Haar, Hadamard-Haar, Slant, discrete sine transform, wavelets,\n...\n\n## The significance of image reconstruction from projections can be seen in computed\n\ntomography (CT), magnetic resonance imaging (MRI), positron emission tomography\n(PET), astronomy, holography, etc., where image formation is based on the Radon\ntransform.\n\n## Applications of orthogonal image transforms\n\nMany filters used in image pre-processing were presented - the convolution masks in\nmost cases were used for image filtering or image gradient computation.\n\nIn the frequency domain, such operations are usually called spatial frequency filtering.\n\n## (term-by-term multiplication, not a matrix multiplication)\n\nThe filtered image g can be obtained by applying the inverse Fourier transform to G.\n\nSome basic examples of spatial filtering are linear low-pass and high-pass frequency\nfilters." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8783564,"math_prob":0.9716142,"size":3636,"snap":"2019-35-2019-39","text_gpt3_token_len":710,"char_repetition_ratio":0.15418503,"word_repetition_ratio":0.0,"special_character_ratio":0.17656766,"punctuation_ratio":0.08681672,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99815154,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-15T06:40:48Z\",\"WARC-Record-ID\":\"<urn:uuid:4b6b34ae-e24f-47df-80ac-c3a66e46901a>\",\"Content-Length\":\"357445\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:16fb4b2f-95df-4d23-aa4d-fb909829c4e9>\",\"WARC-Concurrent-To\":\"<urn:uuid:bd805937-6cdc-474c-b841-7927481f9130>\",\"WARC-IP-Address\":\"151.101.250.152\",\"WARC-Target-URI\":\"https://id.scribd.com/document/248019261/uni2\",\"WARC-Payload-Digest\":\"sha1:TE7XUL2S4AIW2RFMHUGMTYZ7ZPWDRK3F\",\"WARC-Block-Digest\":\"sha1:YZGJUOP34PNPTQ5L6JGF5LZ77WT3B63O\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514570740.10_warc_CC-MAIN-20190915052433-20190915074433-00154.warc.gz\"}"}
https://scicomp.stackexchange.com/questions/27066/quadratic-programs-with-rank-deficient-positive-semidefinite-matrices
[ "Quadratic programs with rank deficient positive semidefinite matrices\n\nLet $A$ be a $n\\times n$ square symmetric matrix. In addition, $A\\succeq0$ and $\\mathrm{rank}(A)<n$. This means that all eigenvalues are non-negative, but also that there are some zero eigenvalues. I want to perform quadratic optimization of the functional $$\\frac{1}{2}x^HAx+b^Hx$$ under convex conditions $x_i\\geq0$ and $y^Hx=0$, where $b,y$ some fixed vectors.\n\nThe problem arises when, due to numerical rounding errors, the matrix has some very small negative spurious eigenvalues. Then, then functional becomes unbounded. I have tried this with SeDuMi and with matrix $$A=\\begin{bmatrix}10&17&25&-5&-9\\\\ 17 & 29 & 43 & -9 & -16\\\\ 25 & 43 & 65 &-15 &-26\\\\ -5 & -9 & -15 & 5 & 8\\\\ -9 & -16 & -26 & 8 & 13 \\end{bmatrix},\\quad b=-\\begin{bmatrix}1\\\\1\\\\1\\\\1\\\\1 \\end{bmatrix},\\quad y=\\begin{bmatrix} 1\\\\1\\\\1\\\\-1\\\\-1 \\end{bmatrix}$$ If you calculate the rank with MATLAB you will get $\\mathrm{rank}(A)=2$. However, the numerical calculation with the help of the eig() function gives following results:\n\n>> eig(A)\n\nans =\n\n-1.6661e-014\n-4.4496e-016\n4.2249e-015\n5.0536\n116.95\n\nSeDuMi (via Yalmip, excuted in Matlab 2007b) gives following error message:\n\nExiting: the solution is unbounded and at infinity;\nthe constraints are not restrictive enough.\n\nDo you have any idea, how to effectively address this problem numerically? I have tried diagonalizing $A$ and replacing the small negative eigenvalues with arbitrary small positive numbers, but this seems, well ... arbitrary, and I fear that it might produce numerical errors in the optimization. What do you think?\n\n• Does adding a small multiple of the identity helps? Jun 6 '17 at 13:01\n• I have tried replacing the negative eigenvalues with a small fixed number (e.g. a small multiple of the largest positive eigenvalue), which worked. Adding a multiple of the identity would be another option. However, the question is how to choose the size of the perturbation, so that the end result is not perturbed too much. Or is there a way to compensate for the perturbation a posteriori, to recover the exact solution? Jun 6 '17 at 14:17\n• If you add a multiple of the identity all the eigenvalues are shifted by the scaling constant. Then, I suppose that you can recover the \"original\" values that way. Jun 6 '17 at 14:23\n• To begin with, using an SDP solver (SeDuMi) to solve a simple convex QP is a bit overkill. Anyway, are you using a recent version of SeDuMi and YALMIP. I've tried it on my machine, and YALMIP+SeDuMi easily finds a solution without any issues. YALMIP has various safeguards to detect the low-rank but psd structure here (despite eig etc saying it is indefinite). YALMIP will exploit the rank-2 structure when setting up the resulting SOCP and thus the low-rank should be a non-issue Jun 8 '17 at 15:49\n• BTW, you're not using SeDuMi here. That error messsage is from quadprog. To use SeDuMI you explicitly have to select SeDuMi as a solver, as YALMIP picks the most simple solver possible (in your case, you have a quadratic program, and you have the QP solver quadprog installed, hence quadprog is used) Jun 8 '17 at 15:55\n\nTo ensure this does not drown in the comments, I make it an answer.\n\nThe solver used is not SeDuMi, as claimed in the question. The solver used is quadprog, and that solver (or more specifically, a severely outdated version of it), apparently had numerical issues on this particular instance. A recent version of quadprog, or any reasonably robust solver, solves this problem without issues.\n\n• Dear Johan, thank you very much for your reply, I didn't expect the creator of Yalmip to see this post! You are right, I forgot to set the solver settings to SeDuMi. When I used it, the problem was solved alright. By the way, since SeDuMi is \"overkill\", which solver would you recommend? I have a 2014 version of Yalmip... Jun 9 '17 at 7:41\n• First, don't use a 2014 version...For quadratic programming, you have excellent commercial solvers free for academia such as mosek, gurobi, mosek, but there are alternatives as listed here yalmip.github.io/tags/#quadratic-programming-solver Jun 9 '17 at 9:03\n\nThe following CVXPY script:\n\nfrom cvxpy import *\nimport numpy as np\n\n# optimization variables\nx = Variable(5)\n\n# matrix A and vectors b and c\nA = np.array([[10, 17, 25, -5, -9],\n[17, 29, 43, -9,-16],\n[25, 43, 65,-15,-26],\n[-5, -9,-15, 5, 8],\n[-9,-16,-26, 8, 13]])\nb = np.ones(5)\nc = np.array([ 1, 1, 1,-1,-1])\n\n# build optimization problem\nobjective = Minimize( 0.5 * quad_form(x, A) - b.T * x )\nconstraints = [ c.T * x == 0, x >= 0 ]\nprob = Problem(objective, constraints)\n\n# solve optimization problem\nprob.solve()\nprint \"x =\", x.value\n\nproduces the following vector\n\nx = [[ 4.44444306e-01]\n[ 2.85606990e-11]\n[ 1.71645999e-10]\n[ 2.22221718e-01]\n[ 2.22222588e-01]]\n\nwhich does satisfy the constraints. Perhaps the constraints are indeed \"restrictive enough\".\n\n• You are right that the unconstrained functional is unbounded from below. If we diagonalize, transform variables and complete the squares we can show that it is the sum of squares and a linear term (sth. like $d_1(\\tilde{x}_1-c_1)^2+d_2(\\tilde{x}_2-c_2)^2-x_3-x_4-x_5+c_0$. However, this functional is bounded below under the constraint $x_i\\geq 0$. I insist that the problem arises due to negative eigenvalues. If e.g. there exists $d_3<0$ (very small) we would have $d_1(\\tilde{x}_1-c_1)^2+d_2(\\tilde{x}_2-c_2)^2-|d_3|(\\tilde{x}_3-c_3)^2-x_4-x_5+c_0$ which becomes unbounded even if $x_i\\geq0$. Jun 8 '17 at 8:15\n• A further indication that negative eigenvalues are to blame is that when I slightly perturb the matrix (I replaced the negative eigenvalues with 2.6e-14), so that numerically it has only positive eigenvalues, SeDuMi is able to solve the problem and gives the same solution as your CVX Python code. Nevertheless, I want to thank you very much for your reply, since it cleared up a misunderstanding and made me realize that other numerical routines (like CVX in Python) might be more stable. Jun 8 '17 at 8:20\n• @BrysonofHeraclea You are correct. I have edited my answer. Jul 5 '17 at 13:16" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.79422456,"math_prob":0.97719085,"size":1557,"snap":"2022-05-2022-21","text_gpt3_token_len":481,"char_repetition_ratio":0.10173857,"word_repetition_ratio":0.0,"special_character_ratio":0.3320488,"punctuation_ratio":0.12418301,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9955082,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-23T18:27:07Z\",\"WARC-Record-ID\":\"<urn:uuid:1eeb2b34-90c2-4c35-b9ef-70e8102db7a7>\",\"Content-Length\":\"161548\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:65fb8ac2-4859-4d2f-817c-069f3c275866>\",\"WARC-Concurrent-To\":\"<urn:uuid:7d3700bf-45ff-4471-85ed-5a369dfaa592>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://scicomp.stackexchange.com/questions/27066/quadratic-programs-with-rank-deficient-positive-semidefinite-matrices\",\"WARC-Payload-Digest\":\"sha1:5JSNWQZYYLFKKT6SOXTF2INU7BRIUNRI\",\"WARC-Block-Digest\":\"sha1:YYIJ3FOCLY4GUQMH4UEBYCAWLEF43L2N\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304309.5_warc_CC-MAIN-20220123172206-20220123202206-00407.warc.gz\"}"}
https://en.m.wikibooks.org/wiki/Modular_Arithmetic/Euler%27s_Theorem
[ "# Modular Arithmetic/Euler's Theorem\n\n Modular Arithmetic ← Fermat's Little Theorem Euler's Theorem Lagrange's Theorem →\nEuler's Theorem\n\nIf $\\alpha$", null, "and $\\beta$", null, "are positive coprime integers, then,\n\n$\\alpha ^{\\phi (\\beta )}\\equiv 1{\\pmod {\\beta }}$", null, "$\\phi$", null, "denotes Euler's totient function. $\\phi (\\beta )$", null, "gives the number of positive integers up to $\\beta$", null, "that are relatively prime to $\\beta$", null, "." ]
[ null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/b79333175c8b3f0840bfb4ec41b8072c83ea88d3", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7ed48a5e36207156fb792fa79d29925d2f7901e8", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/e9f45534959493aef712ce78c04c9777745edbef", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/72b1f30316670aee6270a28334bdf4f5072cdde4", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/5fe2d3514aee58c2237e965b6533619e44934c7a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7ed48a5e36207156fb792fa79d29925d2f7901e8", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7ed48a5e36207156fb792fa79d29925d2f7901e8", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88394594,"math_prob":0.9997949,"size":349,"snap":"2022-27-2022-33","text_gpt3_token_len":81,"char_repetition_ratio":0.11304348,"word_repetition_ratio":0.0,"special_character_ratio":0.2034384,"punctuation_ratio":0.109375,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000029,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,5,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-17T22:39:35Z\",\"WARC-Record-ID\":\"<urn:uuid:db2f2a75-c73d-49ac-a249-33f034da41e2>\",\"Content-Length\":\"27865\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:50223c76-6087-408c-a565-fc49147275e5>\",\"WARC-Concurrent-To\":\"<urn:uuid:69913daa-b5f2-40db-b52e-7e7ebddf32c0>\",\"WARC-IP-Address\":\"208.80.154.224\",\"WARC-Target-URI\":\"https://en.m.wikibooks.org/wiki/Modular_Arithmetic/Euler%27s_Theorem\",\"WARC-Payload-Digest\":\"sha1:25AJK2ACZUBLVHJQNFZMSC66AJKNYB7G\",\"WARC-Block-Digest\":\"sha1:GQDSBSZQHGHSKIPQLWVQLL2SKJTOBSTS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573118.26_warc_CC-MAIN-20220817213446-20220818003446-00745.warc.gz\"}"}
http://whitecementspecialties.com/about_1.html
[ "# How to Formulate your Patio\n\n## 1-Decide on Size of your Patio\n\nWhat to design your own patio but don't know where to begin?\n\nFirst - decide how small, or large, you want your patio to be.\n\nBegin by measuring the length and the width and multiplying those two numbers together.\n\n(For example: A 10' x 10' patio would be 10x10 and would equal 100 square feet. A 12' x 10' patio would equal 120 square feet, etc.)\n\nWrite that number down and move one to the next step.\n\n### 2-Decide what size blocks you want to use\n\nBelow is the square footage of each size of patio stones that White Cement Specialties offers\n\n12x12=1 square foot\n\n16x16=1.78 square foot\n\n18x18=2.25 square foot\n\n20x20=2.78 square foot\n\n24x24=4 square foot\n\n16\" Hexagon=1.56 square foot\n\n18' Hexagon=1/86 square foot\n\n### 3-Divide the two number together\n\nNow that you have the two numbers, you will divide the patio number by the patio stone number.\n\n(Example: The owner of the 10x 10 patio decides that he wants to make his patio using the 12x12 size patio stones. He will take the number of his patio that equals 100 sq. ft. and divide it by the number of the patio stone size which is 1 sq. ft. and he discovers that he will need 100 of the 12x12 patio stones to create his patio.)\n\nA 12x10 foot patio will require 120 of the 12x12 patio stones, etc.\n\nWhen you receive an odd number, always round that number up and be prepared to do some cutting for the perfect patio.\n\n### We can also tell you how much sand or screenings you will need for your new patio or driveway and provide it for you.\n\nCopyright2005FrancineMilford" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90267193,"math_prob":0.97706586,"size":1408,"snap":"2019-13-2019-22","text_gpt3_token_len":378,"char_repetition_ratio":0.18162394,"word_repetition_ratio":0.0,"special_character_ratio":0.28551137,"punctuation_ratio":0.08940397,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96168077,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-22T03:09:19Z\",\"WARC-Record-ID\":\"<urn:uuid:93ded4df-88b2-4a11-97f7-7a33d9732080>\",\"Content-Length\":\"5610\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f586ec3d-9fca-4128-80c3-24caed1f910e>\",\"WARC-Concurrent-To\":\"<urn:uuid:515c0a6a-43fd-479f-9675-c5f879a27851>\",\"WARC-IP-Address\":\"64.136.20.63\",\"WARC-Target-URI\":\"http://whitecementspecialties.com/about_1.html\",\"WARC-Payload-Digest\":\"sha1:3SDTF3YN2IG5JUBAP5BEI3WKZDMYGFTL\",\"WARC-Block-Digest\":\"sha1:4M3NYUJO62PUZSQR74OBFGE6KW5PQ74L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232256724.28_warc_CC-MAIN-20190522022933-20190522044933-00178.warc.gz\"}"}
http://methods.sagepub.com/Reference/encyc-of-research-design/n434.xml
[ "# Standard Deviation\n\nEncyclopedia\nEdited by: Published: 2010\n\n• ## Subject Index\n\nIn the late 1860s, Sir Francis Galton formulated the law of deviation from an average, which has become one of the most useful statistical measures, known as the standard deviation, or SD as most often abbreviated. The standard deviation statistic is one way to describe the results of a set of measurements and, at a glance, it can provide a comprehensive understanding of the characteristics of the data set. Examples of some of the more familiar and easily calculated descriptors of a sample are the range, the median, and the mean of a set of data. The range provides the extent of the variation of the data, providing the highest and lowest scores but revealing nothing about the pattern of the data. And, either or ...\n\n• All\n• A\n• B\n• C\n• D\n• E\n• F\n• G\n• H\n• I\n• J\n• K\n• L\n• M\n• N\n• O\n• P\n• Q\n• R\n• S\n• T\n• U\n• V\n• W\n• X\n• Y\n• Z\n\n## Methods Map", null, "Research Methods\n\nCopy and paste the following HTML into your website" ]
[ null, "http://methods.sagepub.com/images/img-bg.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83728653,"math_prob":0.6562542,"size":1474,"snap":"2019-43-2019-47","text_gpt3_token_len":356,"char_repetition_ratio":0.13605443,"word_repetition_ratio":0.0,"special_character_ratio":0.20759837,"punctuation_ratio":0.08097166,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98941374,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-19T00:00:07Z\",\"WARC-Record-ID\":\"<urn:uuid:ce3b1468-edeb-4e86-9987-fe2f68b19912>\",\"Content-Length\":\"245668\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9b7af698-1cd4-4ba5-8bee-ff6e10b797a0>\",\"WARC-Concurrent-To\":\"<urn:uuid:9b28bb7e-b42f-4633-85f7-19980ab9d90f>\",\"WARC-IP-Address\":\"128.121.3.195\",\"WARC-Target-URI\":\"http://methods.sagepub.com/Reference/encyc-of-research-design/n434.xml\",\"WARC-Payload-Digest\":\"sha1:TRATPVEWTBCTTT5EDLNCQIUTP53JZHZT\",\"WARC-Block-Digest\":\"sha1:WCTBSZ4PUAQOZYD2LCFW4ZYQTEYOJGRE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496669868.3_warc_CC-MAIN-20191118232526-20191119020526-00224.warc.gz\"}"}
https://www.convertunits.com/from/square+megameter/to/square+mil
[ "## ››Convert square megametre to square mil\n\n square megameter square mil\n\nHow many square megameter in 1 square mil? The answer is 6.4516E-22.\nWe assume you are converting between square megametre and square mil.\nYou can view more details on each measurement unit:\nsquare megameter or square mil\nThe SI derived unit for area is the square meter.\n1 square meter is equal to 1.0E-12 square megameter, or 1550003100.0062 square mil.\nNote that rounding errors may occur, so always check the results.\nUse this page to learn how to convert between square megameters and square mil.\nType in your own numbers in the form to convert the units!\n\n## ››Quick conversion chart of square megameter to square mil\n\n1 square megameter to square mil = 1.5500031000062E+21 square mil\n\n2 square megameter to square mil = 3.1000062000124E+21 square mil\n\n3 square megameter to square mil = 4.6500093000186E+21 square mil\n\n4 square megameter to square mil = 6.2000124000248E+21 square mil\n\n5 square megameter to square mil = 7.750015500031E+21 square mil\n\n6 square megameter to square mil = 9.3000186000372E+21 square mil\n\n7 square megameter to square mil = 1.0850021700043E+22 square mil\n\n8 square megameter to square mil = 1.240002480005E+22 square mil\n\n9 square megameter to square mil = 1.3950027900056E+22 square mil\n\n10 square megameter to square mil = 1.5500031000062E+22 square mil\n\n## ››Want other units?\n\nYou can do the reverse unit conversion from square mil to square megameter, or enter any two units below:\n\n## Enter two units to convert\n\n From: To:\n\n## ››Definition: Square mil\n\nA square mil is defined as the area of a square with sides one mil in length.\n\nThe definition of a mil is as follows:\n\nA milliradian, often called a mil or mrad, is an SI derived unit for angular measurement which is defined as a thousandth of a radian (0.001 radian).\n\nCompass use of mils typically rounds 6283 to 6400 for simplification, but here we use the official definition.\n\n## ››Metric conversions and more\n\nConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data. Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3\", 10 stone 4, cubic cm, metres squared, grams, moles, feet per second, and many more!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7677294,"math_prob":0.9985704,"size":2470,"snap":"2020-34-2020-40","text_gpt3_token_len":680,"char_repetition_ratio":0.3325223,"word_repetition_ratio":0.04987531,"special_character_ratio":0.28582996,"punctuation_ratio":0.12266112,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9937987,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-14T14:32:25Z\",\"WARC-Record-ID\":\"<urn:uuid:21663694-578f-475e-b069-dcb7e113ad50>\",\"Content-Length\":\"27426\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d06b8491-9b2c-4144-89f0-72f822257f64>\",\"WARC-Concurrent-To\":\"<urn:uuid:ab4b264f-633d-47ab-9d4a-58492296104c>\",\"WARC-IP-Address\":\"18.210.85.219\",\"WARC-Target-URI\":\"https://www.convertunits.com/from/square+megameter/to/square+mil\",\"WARC-Payload-Digest\":\"sha1:VVHVALVNT2ND3O4V5XY562LGXD2HQZ3V\",\"WARC-Block-Digest\":\"sha1:HR2LD46MCDFBBEKMQ5CIPWJKOIAI27Y3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439739328.66_warc_CC-MAIN-20200814130401-20200814160401-00572.warc.gz\"}"}
http://apollo13cn.blogspot.com/2020/08/matrix-multiplication-graph-reachable.html
[ "## Friday, August 7, 2020\n\n### Matrix Multiplication & Graph Reachable\n\nI have been struggling to find a good way to represent graph structure. There are number of graph library where Node and Edge are defined. I found that matrix can be used to represent a graph.\n\nThe row and column of the matrix represent the node in a graph. The element in the matrix is connection between two nodes. In a graph, this connection is called Edge. The following diagram represents a graph with 3 nodes and 3 edges.\n\nIn a matrix representation, it can be written as a matrix. Row and column are named A, B, and C. If there is an edge between two nodes, the corresponding cell is set to 1. For example, edge AB shows 1 in cell (A, B) in the grid. We can denote this matrix as M\n\nMatrix \"multiplication\" to itself can reveal if a certain node can reach the other nodes in two steps. The \"multiplication\" operation is different from the linear algebra matrix multiplication.\n\nK = M ⊗ M\n\nEach element b in N can be calculated by the following formula, where N is the number of the nodes (or the column number of the matrix M).\n\nTwo nodes i and j. If there j is reachable from i, either of the following two conditions are met:\n• there is a direct edge between i and j\n• there is a node p which is reachable from i and p can reach j\nWritten as math language in the matrix M\n• M(i, j) = 1\n• 彐p, M(i, p) = 1 and M(p, j) = 1\nRepeat the \"multiplication\" operation logN times, it can calculate any node in a graph can be reachable from other nodes.\n\nFinally, I can move away from a complicated node/edge definition into a formal math world. Hopefully the existing linear algebra can help me down the road." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9319191,"math_prob":0.98191184,"size":1628,"snap":"2021-31-2021-39","text_gpt3_token_len":390,"char_repetition_ratio":0.13423645,"word_repetition_ratio":0.0,"special_character_ratio":0.23525798,"punctuation_ratio":0.10086455,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994649,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-23T11:35:54Z\",\"WARC-Record-ID\":\"<urn:uuid:7b0ec11b-38cb-49a7-8e80-0ba8ea147c6b>\",\"Content-Length\":\"69423\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:20e15410-9dcf-4459-bff3-0dc7e9b2c44c>\",\"WARC-Concurrent-To\":\"<urn:uuid:d8989107-aea1-48ea-b488-d498785f5a54>\",\"WARC-IP-Address\":\"142.250.81.193\",\"WARC-Target-URI\":\"http://apollo13cn.blogspot.com/2020/08/matrix-multiplication-graph-reachable.html\",\"WARC-Payload-Digest\":\"sha1:YY2FXZRQJJPYV34W3HWRGITPQZHZKJBU\",\"WARC-Block-Digest\":\"sha1:7AMKXZMIEJYIJ4NNN3SIH4COGPRYIJ6E\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057421.82_warc_CC-MAIN-20210923104706-20210923134706-00609.warc.gz\"}"}
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-5-polynomials-and-factoring-5-3-factoring-trinomials-of-the-type-ax2-bx-c-5-3-exercise-set-page-326/42
[ "## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)\n\n$(x+4)(x-2)$\nGrouping the first and second terms and the third and fourth terms, the given expression is equivalent to \\begin{array}{l}\\require{cancel} x^2+4x-2x-8 \\\\\\\\= (x^2+4x)-(2x+8) .\\end{array} Factoring the $GCF$ in each group results to \\begin{array}{l}\\require{cancel} x(x+4)-2(x+4) .\\end{array} Factoring the $GCF= (x+4)$ of the entire expression above results to \\begin{array}{l}\\require{cancel} (x+4)(x-2) .\\end{array}" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.61661667,"math_prob":0.99875903,"size":455,"snap":"2019-13-2019-22","text_gpt3_token_len":163,"char_repetition_ratio":0.14412417,"word_repetition_ratio":0.0,"special_character_ratio":0.33626375,"punctuation_ratio":0.04347826,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99922824,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-21T16:23:12Z\",\"WARC-Record-ID\":\"<urn:uuid:dee89ce7-28f7-4f1c-90d6-5133e4ae959b>\",\"Content-Length\":\"117326\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ab7440ae-a7a4-4d4b-87db-bf2a29a1595c>\",\"WARC-Concurrent-To\":\"<urn:uuid:e5669ee5-c6e5-4cac-8234-bf40efc3bd1b>\",\"WARC-IP-Address\":\"34.206.155.23\",\"WARC-Target-URI\":\"https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-5-polynomials-and-factoring-5-3-factoring-trinomials-of-the-type-ax2-bx-c-5-3-exercise-set-page-326/42\",\"WARC-Payload-Digest\":\"sha1:YKFRV3I7X76RHDRFNPBRM2LDIEYUUPTK\",\"WARC-Block-Digest\":\"sha1:AJCUGZPCYNYXCFGKX3MUB4YS52LLHOF7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202526.24_warc_CC-MAIN-20190321152638-20190321174638-00550.warc.gz\"}"}
https://tvst.arvojournals.org/article.aspx?articleid=2685740
[ "", null, "tvst\nOpen Access\nArticles  |   June 2018\nNeonatal Contrast Sensitivity and Visual Acuity: Basic Psychophysics\nAuthor Affiliations & Notes\n• Angela M. Brown\nThe Ohio State University, College of Optometry, Columbus, OH, USA\n• Faustina Ottie Opoku\nThe Ohio State University, College of Optometry, Columbus, OH, USA\n• Michael R. Stenger\nThe Ohio State University, College of Medicine, Columbus, OH, USA\n• Correspondence: Angela M. Brown, The Ohio State University, College of Optometry, 338 W 10th Ave, Columbus, OH 43210, USA. e-mail: [email protected]\nTranslational Vision Science & Technology June 2018, Vol.7, 18. doi:https://doi.org/10.1167/tvst.7.3.18\n• Views\n• PDF\n• Share\n• Tools\n×\n###### This feature is available to authenticated users only.\n• Get Citation\n\nAngela M. Brown, Faustina Ottie Opoku, Michael R. Stenger; Neonatal Contrast Sensitivity and Visual Acuity: Basic Psychophysics. Trans. Vis. Sci. Tech. 2018;7(3):18. doi: https://doi.org/10.1167/tvst.7.3.18.\n\n© ARVO (1962-2015); The Authors (2016-present)\n\n×\n• Supplements\nAbstract\n\nPurpose: This research was prospectively designed to determine whether a 0.083 cycles per degree (cy/deg) (20/7200) square-wave stimulus is a good choice for clinical measurement of newborn infants' contrast sensitivity and whether the contrast sensitivity function (CSF) of the newborn infant is band-pass. The results were retrospectively analyzed to determine whether the method of constant stimuli (MCS) and the descending method of limits (dLIM) yielded similar results.\n\nMethods: In across-subjects experimental designs, a pilot experiment used MCS (N = 47 visual acuity; N = 38 contrast sensitivity at 0.083 cy/deg), and a main experiment used dLIM (N = 22 visual acuity; N = 22 contrast sensitivity at 0.083 cy/deg; N = 21 at 0.301 cy/deg) to measure visual function in healthy newborn infants. Three candidate CSFs estimated maximum neonatal contrast sensitivity. MCS and dLIM psychometric functions were compared while taking the stimulus presentation protocols into account.\n\nResults: The band-pass CSF fit the data best, with a peak sensitivity near 0.31 at 0.22 cy/deg. However, the 0.083 cy/deg square-wave stimulus underestimated the best performance of newborn infants by less than 0.15 log10 units. MCS and dLIM data agreed well when the stimulus presentation contingencies were taken into account.\n\nConclusions: Newborn contrast sensitivity is well measured using a 0.083 cy/deg square-wave target, regardless of which CSF shape is correct. MCS and dLIM yield wholly comparable results, with no evidence to suggest effects of other factors such as infant inattention or examiner impatience.\n\nTranslational Relevance: These measurements open the way for clinical behavioral measurement of infant visual acuity and contrast sensitivity in the neonatal period.\n\nIntroduction\nThe main challenge in designing a test of contrast sensitivity is to choose the appropriate size or spatial frequency of the target. This is difficult because the normal patient's sine-wave contrast sensitivity function (CSF) has a single peak in the middle spatial frequency range. A stimulus that is too small or too large (with dominant Fourier components that are at too high or too low spatial frequency) will not reveal the patient's maximum contrast capability. In the case of infants and patients with disorders of the visual system, the challenge is to choose the stimulus size in the face of uncertainty about the patient's CSF.\nWe have been working to develop the Newborn Contrast Cards and the Newborn Acuity Cards. The Newborn Contrast Cards present a very low spatial frequency square-wave at variable contrast.1 We chose a square-wave stimulus because it has a higher contrast than a sine-wave at the same spatial frequency,2 by a factor of √2, and it is much easier to create printed square-wave stimuli than printed sine-waves. Furthermore, recent modeling efforts3 have suggested that even if the CSF is band-pass, the difference between the measured square-wave contrast sensitivity and the best sensitivity of which the patient is capable will be small. The Newborn Acuity Cards present a square-wave grating at maximum contrast and variable spatial frequency.4 Both card tests involve a centrally presented grating (Fig. 1) and a fixation-refixation behavior. The examiner performs a “yes/no” judgment for each card, similar to the method of the Teller Acuity Cards,5,6 and contrast sensitivity or visual acuity is based on those judgments. The present project is the next step toward the goal of establishing these cards for clinical use on newborn infants. We recognize that, before they can be adopted into clinical practice, research will be required to establish their repeatability across examiners and their validity in discriminating between healthy infants and those with visual or neurologic disorders.\nFigure 1\n\nExamples of the Newborn Acuity Cards and Newborn Contrast Cards. (A) The “easy” contrast card (0.083 cy/deg, 0.86 contrast); (B) a typical contrast card (0.303 cy/deg, 0.50 contrast); (C, D) acuity cards (0.50 and 1 cy/deg, both at 0.86 contrast). The luminance mismatch between the grating and the surrounding gray in this figure is an artifact of reproduction. In the actual stimuli, the space-averaged luminance match was excellent.\nFigure 1\n\nExamples of the Newborn Acuity Cards and Newborn Contrast Cards. (A) The “easy” contrast card (0.083 cy/deg, 0.86 contrast); (B) a typical contrast card (0.303 cy/deg, 0.50 contrast); (C, D) acuity cards (0.50 and 1 cy/deg, both at 0.86 contrast). The luminance mismatch between the grating and the surrounding gray in this figure is an artifact of reproduction. In the actual stimuli, the space-averaged luminance match was excellent.", null, "We have recently reported that the Michelson contrast sensitivity of the newborn infant is about 2.0 (contrast threshold = 0.497, or −0.303 log10 Michelson contrast), when measured using a 30 × 30-degree square-wave stimulus at 0.10 cycles per degree (cy/deg).1 Here, we investigate the appropriateness of that type of stimulus because studies of repeatability and validity cannot occur until the correct stimulus design and testing methods are chosen. We collected contrast sensitivity and visual acuity data on newborn infants using the method of constant stimuli (MCS) in a pilot study, and the descending method of limits (dLIM) in the main experiment. We extracted thresholds from individual infants' data sets using logistic regression. We then fitted the resulting data using three candidate shapes of the CSF to determine whether the underlying CSF is band-pass (single-peaked) or low-pass. We used those fits to estimate the maximum possible sensitivity of the newborn infant as well as the spatial frequency at which it occurs. In a secondary analysis, we used the results we obtained in a pilot study using the MCS to predict the results from the main experiment using the dLIM, according to the stimulus presentation schedule in each data set.\nMethods\nSubjects\nNewborn infants in the postpartum unit of the Ohio State University Wexner Medical Center participated in this study with the informed permission of a parent. An infant was eligible for participation if the mother was over age 18 and able to give informed permission and if the infant and mother were both healthy (Table 1). The mother was given a \\$10 gift card for a local business to thank her for her infant's participation. The research was approved by the Ohio State University Institutional Review Board for protection of human subjects, and conformed to the tenets of the Declaration of Helsinki.\nTable 1\n\nParticipant Demographics\nTable 1\n\nParticipant Demographics", null, "Stimuli\nThe stimuli were 30.5 × 61-cm cards, printed with a gray surrounding field, and a 24.8-cm square light-and-dark grating in the center of the face of the card (Figs. 1A, 1C, 1E). Each card had a peephole in the center of the grating through which the examiner could observe the infant's looking behavior. The stimuli were designed to be large enough that, when the infant viewed the center of the card at a distance of 38 cm, the left and right edges of the card were at an eccentricity of 38.75° of visual angle. This placed the edges of the cards and the examiner's hands outside the horizontal visual field extent of the newborn infant, which is about 30° of visual angle, when measured using a 99% contrast stimulus.7 It is therefore unlikely that the infant's refixation behavior was guided by the edges of the card or the examiner's (low-contrast) hands rather than the position of the grating itself. We provide further information in support of this below.\nAll the cards used in this study were manufactured by Precision Vision, Inc. (Woodstock, IL). Testing was under available light, and the average luminance of the cards during testing was 299 cd/m2 (SD: 90). The stimulus values are listed in Appendix Tables A1 and A2. Contrasts were calibrated using a photometer (Pritchard SpectraScan PR-670; Photo Research, Syracuse, NY), and all data analyses were based on the calibrated contrast values. There were three sets of contrast cards and two sets of acuity cards. The contrast cards varied from 0.123 to 0.695 Michelson contrast, and within each stimulus set their contrasts were separated in contrast by approximately 0.15 log10-unit steps. There was also a high-contrast card at 0.86 contrast. There were two sets of acuity cards, which were similar to the contrast stimuli in size and general description. Their contrast was 0.86, the maximum that could be produced using printing technology, and their spatial frequencies varied from 0.33 to 2.84 cy/deg (1.024–2.079 logMAR or 20/211–20/2400 Snellen) in approximately 0.15 log10-unit (half-octave) steps, plus a low-frequency grating of 0.025 cy/deg. Stimuli were sanitized daily using hospital wipes.\nProcedures\nTesting occurred in the participants' hospital rooms in the postpartum unit of the Ohio State University Wexner Medical Center. During testing, a research nurse held the infant in her arms, standing or sitting with the light coming from behind so that the light fell onto the cards while keeping the infant's eyes in shadow as much as possible. Two other adults (coauthors AMB and FOO) were involved in testing: the “examiner,” who presented the cards to the infant and judged whether the infant saw the grating, and the “assistant,” who randomized the order of the cards, gave them to the examiner in such a way that the examiner could not learn their values, and recorded the examiner's judgments.\nThe examiner presented the cards one at a time, while observing the infant's looking behavior through the peephole in the center of each card.1,4 At the beginning of a trial, the examiner placed the card along the infant's line of sight at a distance of 38 cm. If the infant continued to look at the central grating, that was the first indication that the infant may have seen the grating. The examiner then moved the card stepwise a few centimeters to the right or left, attempting to elicit a refixation of the displaced grating. The examiner continued to present the card, placing it and moving it as necessary, until the examiner could judge whether or not the infant saw the grating using a fixation-refixation criterion. While the method is similar to the familiar “fixation and following” method, the movements of the card were not large, and the eye movements rarely resembled smooth pursuit. Thus, examiners performed a “yes/no” psychophysical judgment, just as for the Teller Acuity Cards.5,6 The average test time was 12.28 minutes (SD: 5.08).\nExperiment I\nTo choose the range and spacing of the stimulus values for the main experiment, we performed a pilot study in which we measured contrast sensitivity and visual acuity using MCS.\nMethods\nFor this pilot experiment, we used three trial sets of Newborn Contrast Cards (Appendix Table A1), which had slightly different calibrated contrast values for each of the three stimulus sets. The first two corresponding sets of acuity cards had identical contrast and spatial frequency specifications, and the third set had slightly different spatial frequencies. Thirty-three infants were tested using only one stimulus type: 20 infants were tested using only the Newborn Acuity Cards, and 13 infants were tested using only the Newborn Contrast Cards. These tests always began with an “easy” card (in Table A1, these were card no. 9 or no. 10 for the contrast tests, card no. 16, no. 17, or no. 18 for the acuity tests). This easy card established that the infant could see “something,” and it allowed the examiner to familiarize herself with that infant's looking behavior and state of alertness. Next, the examiner attempted to present four or five additional stimuli, in a predetermined random order, while being kept unaware of the values of the cards. If the examiner was uncertain whether an infant saw a particular card, she could set it aside to be presented a second time, while she remained unaware of its value. If the infant was not awake for the second presentation, that card was scored as “not presented.” The test continued until the examiner had reached a decision on all the cards, producing a complete data set, or the infant fell asleep, producing an incomplete data set.\nTwenty-seven additional infants were tested using mixed sets of contrast and acuity stimuli. In those experiments, the first card to be presented was always an easy contrast card, and the subsequent cards were intermixed sets of 8 to 10 total cards (Newborn Contrast Cards plus Newborn Acuity Cards), which were presented in a predetermined random order. The examiner was kept unaware of the contrast value or spatial frequency of the grating on each card and whether the card was a contrast or an acuity card. We eventually abandoned this procedure because some infants fell asleep before completing the full set of stimuli. As described under Data Analysis, we included all the complete and incomplete data sets (except for one) in our nonparametric statistical analyses.\nA total of 60 infants participated in this pilot experiment, and there was a total of 39 contrast sensitivity tests and 47 visual acuity tests.\nData Analysis\nWe used logistic regression software (Mathematica version 11.1.0; Wolfram, Champaign, IL) and the Mathemataica LogitModelFit command to estimate, for each infant, the contrast or spatial frequency value that elicited 50% “yes” responses. The psychometric functions fitted by Mathematica to the log10-transformed data were constrained to pass through zero (“no”) at a visual acuity value of 30 cy/deg or a contrast value of 1%.\nOut of a total of 47 infants tested for visual acuity, logistic regression converged on a threshold estimate within the tested range for all but three infants. The data of 37 infants were complete and strictly monotonic, and the resulting threshold estimates were within the stimulus range. For those infants, we could also estimate threshold by linear interpolation between the log10 values of the last-seen and the first not-seen stimulus values. The difference between the logistic and the interpolated thresholds was always less than 0.0005 log10 units. The data sets of seven additional infants were successfully fit by logistic regression even though they were incomplete or nonmonotonic. The acuity of one infant was too high to be measured because she saw all of the stimuli that were presented and was assigned an out-of-range value for the purpose of nonparametric statistical analysis. The logistic functions fitted to the data of two additional nonmonotonic infants had such shallow slopes that their thresholds fell outside the range of tested data.\nOut of 39 infants tested with the Newborn Contrast Cards, seven infants' thresholds were not within the tested range, according to logistic regression analysis. Six of these infants saw all the stimuli they were presented, so logistic regression produced thresholds outside the tested range. The data of 10 infants were nonmonotonic, but all but one of these data sets were fit successfully. The data set that was not fit contained only a single observation and was discarded.\nAll but two of the 60 infants who participated (97%) showed evidence of not seeing at least one stimulus. This suggests that the grating itself controlled infant behavior, because if the examiner's hands or the edge of the card were controlling infant fixation-refixation behavior, “yes” responses would occur often even for very hard-to-see gratings.\nIn preliminary analyses, we performed two parametric analyses of variance under the general linear model, with threshold as the dependent measure, omitting the out-of-range data sets. These analyses were designed to reveal effects of card set or examiner, if they existed, as well as any effect of the mixed versus separate experimental designs, such as might occur if the experimenter were hurrying to complete two stimulus sets before the infant fell asleep. We note that AMB has been testing for 34 years, whereas FOO had only approximately 1 month's experience before this project started. None of these three factors were significant for either contrast sensitivity or visual acuity (examiner, contrast: P = 0.503; visual acuity: P = 0.090; card set, contrast: P = 0.130; visual acuity: P = 0.421; experimental design, Contrast: P = 0.739; visual acuity: P = 0.234). Therefore, for the rest of our analyses, we pooled the data across the mixed and separate experimental designs, without regard to examiner or which contrast card stimulus set was used.\nWe performed our main data analyses using nonparametric statistics, which can handle out-of-range data. This was important here because if we had selected only “good” data for analysis, we could easily bias the estimated central tendencies and underestimate the dispersion of the data. This strategy allowed us to include the data on every infant tested but one. Our measure of central tendency was the median, and the dispersion was the interquartile range of the thresholds (compare to ±0.67 standard deviations). Confidence intervals around the medians were estimated as the 95% intervals of 10,000 resamplings of 50% subsets of the thresholds (compare to ±2 standard errors of the mean).\nResults\nEvery infant tested in this pilot study demonstrated that he/she could see at least the easy card in the stimulus set. The median linear Michelson contrast threshold was 0.330 (median contrast sensitivity: 3.030), which is below the 0.47 contrast threshold we reported previously.1 We discuss this discrepancy below. The median linear visual acuity was 1.204 cy/deg (logMAR: 1.396; Snellen equivalent: 20/498), which is within the rather wide range of newborn visual acuities reported by others.8 The results are presented in the MCS rows in Table 2, as CSFs in Figure 2A, and as thresholds (diamonds) in Figures 3A, 3C\nTable 2\n\nMedian Results\nTable 2\n\nMedian Results", null, "Figure 2\n\nResults of experiment I (A) and experiment II (B). The data points are the medians and the error bars are the 95% confidence intervals of the medians. Continuous curves: the standard model of the adult CSF applied to square-waves.9,10 Dashed curves: the same model, only with the Minkowski pooling exponent set to 1 instead of 4. Dotted curves: a generic low-pass CSF, with log10(CS) being a linear function of the linear spatial frequency. The difference between the maximum of the CSF and the measured contrast sensitivity at 0.083 cy/deg was no more than 0.15 log10 units (a factor of 1.41), regardless of which model is correct. The fit of the standard (band-pass) model is much better than either low-pass model (see Discussion for details).\nFigure 2\n\nResults of experiment I (A) and experiment II (B). The data points are the medians and the error bars are the 95% confidence intervals of the medians. Continuous curves: the standard model of the adult CSF applied to square-waves.9,10 Dashed curves: the same model, only with the Minkowski pooling exponent set to 1 instead of 4. Dotted curves: a generic low-pass CSF, with log10(CS) being a linear function of the linear spatial frequency. The difference between the maximum of the CSF and the measured contrast sensitivity at 0.083 cy/deg was no more than 0.15 log10 units (a factor of 1.41), regardless of which model is correct. The fit of the standard (band-pass) model is much better than either low-pass model (see Discussion for details).", null, "Figure 3\n\nThreshold comparisons. The data points are the medians of individual subjects' thresholds (medians in panels A and C are also shown in Fig. 2). Error bars (when visible) are 95% sampling confidence intervals of the thresholds. Diamonds: MCS data. Circles: dLIM data. Squares: Predicted dLIM data (see Discussion). (A) Contrast sensitivities at 0.083 cy/deg from experiments I and II. (B) contrast sensitivities from Brown et al.1 (C) Visual acuities from experiments I and II. (D) Visual acuities from Brown and Yamamoto.4 Horizontal dashed lines: averages of all predicted or measured dLIM contrast sensitivity (A, B) and predicted or measured dLIM visual acuity (C, D).\nFigure 3\n\nThreshold comparisons. The data points are the medians of individual subjects' thresholds (medians in panels A and C are also shown in Fig. 2). Error bars (when visible) are 95% sampling confidence intervals of the thresholds. Diamonds: MCS data. Circles: dLIM data. Squares: Predicted dLIM data (see Discussion). (A) Contrast sensitivities at 0.083 cy/deg from experiments I and II. (B) contrast sensitivities from Brown et al.1 (C) Visual acuities from experiments I and II. (D) Visual acuities from Brown and Yamamoto.4 Horizontal dashed lines: averages of all predicted or measured dLIM contrast sensitivity (A, B) and predicted or measured dLIM visual acuity (C, D).", null, "Experiment II\nIn the main experiment, we measured the visual acuity or contrast sensitivity using dLIM. We chose this method for three reasons. First, it is the method that will most likely be used by clinicians.6,11 Second, it allows for a better completion rate than MCS, because under MCS a great amount of time is spent trying to coax a response from an infant using stimuli that are below that infant's threshold, with the result that infants often fell asleep before testing was completed. Finally, dLIM psychometric functions are necessarily monotonic, so individual thresholds can be extracted easily by both logistic regression and linear interpolation.\nMethods\nThe infants were recruited and consented as in experiment I (Table 1). We used three card types: visual acuity cards, contrast cards at 0.083 cy/deg that were similar to stimulus set 3 used in experiment I, and a set of higher spatial frequency cards at 0.303 cy/deg (e.g., Fig. 1B; calibrated values listed in Appendix Table A2). A total of 53 infants were tested. Forty-three infants were tested once, nine infants stayed awake long enough to be tested with two card types, and one infant stayed awake for all three tests.\nBefore testing an infant using dLIM, the assistant chose one of the three stimulus sets from a predetermined schedule and placed the cards in descending order, starting with easier stimuli (lower spatial frequency or higher contrast) and proceeding to harder stimuli (Table 3). Then she cut the deck so that the start card was unknown to the examiner (bold numbers Table 3 and Appendix Table A2). This procedure is similar to the random start card method often used in research with the Teller Acuity Cards.5,6 The nine possible orderings of three stimulus types (0.083 cy/deg, 0.303 cy/deg, and visual acuity) and three possible start cards were repeated twice, in different prerandomized orders; additional infants were tested for each data set, with the examiner choosing a start card at random, resulting in 22 visual acuity tests and 22 tests at 0.083 cy/deg and 21 tests 0.303 cy/deg. The roles of assistant and examiner were decided by a randomized schedule.\nTable 3\n\nOrder of Card Presentation in Experiment 2a\nTable 3\n\nOrder of Card Presentation in Experiment 2a", null, "As in experiment I, the examiner presented the cards one at a time while remaining unaware of the values of the cards. In a few cases, the start card was judged to be “not seen,” at which point the examiner requested a different easy card, generally taken from the bottom of the cut deck, then the descending procedure continued as usual. When a not-seen card was reached, the examiner could terminate the test. However, if the infant was still awake, the examiner could verify her judgment of the last-seen and the first-not-seen cards. Occasionally, the examiner would change her judgment based on these last trials.\nResults\nEvery infant was judged to see at least one stimulus. There were four incomplete data sets on infants who fell asleep after seeing one to three stimuli but before failing to see any stimulus. Logistic regression fit out-of-range thresholds to these data sets. It was possible to include all the data, both in range and out of range, because we used nonparametric statistics. All complete data sets were successfully fitted by both logistic regression analysis in Mathematica of the log10-transformed data and by interpolation to the geometric average of the last-seen and the first-not-seen stimulus values. The maximum difference between the 50% point on the logistic functions and interpolated values was less than 0.008 log10 units.\nThe median contrast threshold was 0.458 linear Michelson contrast at 0.083 cy/deg and 0.330 linear Michelson contrast at 0.303 cy/deg. The median visual acuity was 0.783 cy/deg (1.558 logMAR, 20/766 Snellen). These results are presented, along with their interquartile ranges, in Table 2. The log10 medians and 95% sampling intervals are shown as a CSF in Figure 2B and are the circles in Figures 3A and 3C. The contrast sensitivity at 0.083 and the visual acuity in experiment II were both less sensitive than the results of experiment I (Mann-Whitney U tests: P = 0.004 for contrast and P = 0.001 for acuity). We discuss this in depth in the Discussion.\nFor comparison to the present results, we reanalyzed the original data from two previous experiments,1,4 applying our 50% seeing criterion for threshold. The contrast sensitivity results of experiment II were similar to the reanalyzed dLIM contrast sensitivity data from Brown et al.1 (compare the circles in Figs. 3A, 3B). The visual acuity results of experiment II were similar to the reanalyzed dLIM results of Brown and Yamamoto4 (compare the circles in Figs. 3C, 3D).\nDiscussion\nThis project had two main goals. The first was to determine whether the 0.083 cy/deg grating was suitable for measuring newborn infant contrast sensitivity. That is, will the contrast sensitivity of an infant, when measured using this grating, be close to the best contrast sensitivity of which that infant is capable? The second was to determine whether the newborn infant's CSF was band-pass (with a sensitivity peak at a spatial frequency below the acuity limit) or low-pass (with sensitivity monotonically improving as spatial frequency is reduced).\nTo answer these questions, we fitted three alternative CSFs to the results of experiments I and II. The first (continuous curves in Fig. 2) was the shape of the adult CSF for square-waves, as predicted when the standard “modelfest” model of adult contrast sensitivity9 was applied to square-wave stimuli (see Ref. 10 for review). Notice that this predicted CSF is truncated on the low spatial frequency end at about 0.19 log10 units below the maximum value, as is commonly found empirically in adult CSFs for square-wave stimuli (e.g., see reviews in Refs. 2 and 12). The second “modified modelfest” function (dashed curves in Fig. 2) was from the same model as the continuous curve, except that the contributions of the spatial frequency-tuned channels were combined using a Minkowski exponent of 1 instead of the more typical value of 4. In the third “generic low-pass” model (dotted curves in Fig. 2), the logarithm of contrast sensitivity is a linear function of linear spatial frequency.13 We fitted these three models to the logistic contrast sensitivity and acuity data by displacing them relative to logarithmic spatial frequency and contrast axes. For experiment I, the curves all passed through the two median values, and the fits were uniquely determined. For experiment II, we used the nonparametric least median squares criterion14 for the fits to the logistic threshold data. The standard model fit the results of experiment II best. We evaluated this fit statistically by calculating the residual difference between each model results (for the three threshold data points) and the measured thresholds of 10,000 combinations of the thresholds of three randomly chosen infants, one from each threshold group. In 7523 of the 10,000 cases, the modelfest data fit best; in 2477 cases the low-pass model fit best; and the generic model never fit best. This result rejects the hypothesis that all three models were equally good fits at P < 10−6 on a binomial probability test.\nThe peak of the band-pass CSF was 0.232 contrast threshold (4.31 contrast sensitivity) at 0.234 cy/deg for the MCS results of experiment I and 0.311 contrast (3.15 contrast sensitivity) at 0.218 cy/deg for the dLIM data of experiment II. The largest difference between the peak CSF sensitivity and the CSF value at 0.083 cy/deg was for the band-pass model, where the difference was 0.148 log10 units (a factor of 1.41) in experiment I and 0.128 log10 units (a factor of 1.34) in experiment II. This result is quantitatively similar to the results of a similar analysis (see review in Ref. 10). In short, the choice of 0.083 cy/deg as a spatial frequency does not substantially underestimate infant contrast sensitivity.\nComparison to Previous Results\nTo place these results into the context of the published literature, Figure 4 shows the contrast sensitivity and visual acuity of infants from birth through age 4 months, measured psychophysically using grating stimuli that do not drift or flicker. Where necessary, the results from those studies were adjusted to reflect the 50% seeing criterion of the results we report here. The contrast sensitivity data (Fig. 4A) are the maximum measured values near the peak of the CSFs. There is controversy about the shape of the CSF at age 1 month,1518 with some authors reporting low-pass CSFs in one-month-old infants and band-pass CSFs in older infants.15,19 Therefore, some data in Figure 4A are lower-bound estimates because the maxima of the CSFs may have been below the spatial frequency ranges tested (gray symbols). See Movshon and Kiorpes17 for a discussion on this point. Here, we show that even when tested using square-waves, the spatial CSF is band-pass at birth. The overall contrast sensitivity results on newborn infants are generally what one would expect by extrapolating from the data on older infants.\nFigure 4\n\nThe present dLIM data (larger black disks: contrast data at 0.301 cy/deg, acuity data at 0.86 contrast) compared to data from the literature on infants age 4 months or younger (lines extending to the right indicate other data on infants over age 4 months). Black symbols, card data; white or gray symbols, forced-choice preferential looking (FPL) data. Data that were scored as the last-seen stimulus (e.g., see McDonald et al.5) are shown as one half “step” (typically, 0.075 log10 units) above their reported values so that all the data in both panels are defined at 75% correct (FPL) or 50% seeing (“yes/no”). For clarity, superimposed data points have been displaced by 2 days along the age axis. (A) Contrast sensitivity. Gray symbols, the maximum of the CSF is a lower bound on contrast sensitivity because the CSF was low-pass over the range of spatial frequencies tested. Data from Adams et al.20 (black squares); Adams and Courage21 (black diamond); Brown et al.1 (smaller black circle); Slater and Sykes22 (gray square); Atkinson et al.19 (white and gray circles); Banks and Salapatek15 (white and gray upright triangles); and Banks et al.23 (gray inverted triangles). (B) Visual acuity. Data from Allen24 (white circles11,25); Van Hof-Van Duin and Mohn26 (white diamonds); Gwiazda et al.27 (white triangles); McDonald et al.5 (black diamonds); Mayer et al.6 (black triangles), (black squares); Dobson8 (small black diamonds); Ipata et al.28 (small black inverted triangle), present data (black circle); Brown and Yamamoto4 (smaller black circle).\nFigure 4\n\nThe present dLIM data (larger black disks: contrast data at 0.301 cy/deg, acuity data at 0.86 contrast) compared to data from the literature on infants age 4 months or younger (lines extending to the right indicate other data on infants over age 4 months). Black symbols, card data; white or gray symbols, forced-choice preferential looking (FPL) data. Data that were scored as the last-seen stimulus (e.g., see McDonald et al.5) are shown as one half “step” (typically, 0.075 log10 units) above their reported values so that all the data in both panels are defined at 75% correct (FPL) or 50% seeing (“yes/no”). For clarity, superimposed data points have been displaced by 2 days along the age axis. (A) Contrast sensitivity. Gray symbols, the maximum of the CSF is a lower bound on contrast sensitivity because the CSF was low-pass over the range of spatial frequencies tested. Data from Adams et al.20 (black squares); Adams and Courage21 (black diamond); Brown et al.1 (smaller black circle); Slater and Sykes22 (gray square); Atkinson et al.19 (white and gray circles); Banks and Salapatek15 (white and gray upright triangles); and Banks et al.23 (gray inverted triangles). (B) Visual acuity. Data from Allen24 (white circles11,25); Van Hof-Van Duin and Mohn26 (white diamonds); Gwiazda et al.27 (white triangles); McDonald et al.5 (black diamonds); Mayer et al.6 (black triangles), (black squares); Dobson8 (small black diamonds); Ipata et al.28 (small black inverted triangle), present data (black circle); Brown and Yamamoto4 (smaller black circle).", null, "The band-pass shape of the CSF from the present experiments may be compared to the results of two other studies of the contrast response in newborn infants. Atkinson et al.29 measured newborn contrast responses using visually evoked potentials (VEPs) and showed low-pass CSFs using 10-Hz flickering stimuli. However, their results are not incompatible with ours because flickering stimuli yield low-pass CSFs with both psychophysical30 and VEP31 methods in subjects of all ages. In a psychophysical experiment, Slater and Sykes22 measured contrast threshold at 0.43 cy/deg (gray square, Fig. 4) and also a point of subjective equality (PSE) between visible square-wave gratings at 0.1 cy/deg and 0.43 cy/deg. Their PSE occurred when the 0.1 cy/deg grating was 0.21 log10 units lower in contrast, suggesting a low-pass underlying CSF. Figure 2 predicts that the contrast sensitivity at those two spatial frequencies should be essentially equal (difference < 0.032 log10 units). We are unsure why their results differ from ours, but we suspect this is related to their lower luminance levels (44 vs. 299 cd/m2), which would probably have moved the peak of the underlying CSF to lower spatial frequencies,30,32 producing the appearance of a low-pass function under their conditions.\nThe visual acuity values from the present experiment (the big black circle in Fig. 4B) are quite compatible with the known development of visual acuity in older infants.\nThe precision of the present results can also be compared with data from the literature. For contrast sensitivity, the literature reports 95% sampling confidence intervals (about 4 standard errors of the mean) for 1-month-old infants measured at 0.3 cy/deg to be 1.52 log10 units (MCS)15 and 0.474 (dLIM).20 For comparison, our full 95% sampling confidence interval at 0.3 cy/deg was 0.568 log10 units (dLIM) (Table 2). The visual acuity literature reports 95% sampling confidence intervals of 0.48—0.65 log10 units (newborn)8,28 and 0.304 log10 units (1-month-old infants),6 both measured using dLIM, whereas our 95% sampling confidence intervals were 0.328 log10 units (MCS) and 0.602 log10 units (dLIM) (Table 2). On the individual level, we calculated the range embraced by the 84th percentile and the 16th percentile of our individual threshold data as our best approximation to the ±1 SD in parametric statistics. For contrast sensitivity, the literature shows a ±SD range of 1.861 log10 units measured using MCS15 and 1.06 log10 units using dLIM,21 both at 0.3 cy/deg and both on 1-month-old infants, compared to a 16th to 84th percentile range of 0.433 log10 units for contrast at 0.3 cy/deg for the present data on newborns. For visual acuity, the literature showed ±SD range of 0.32 to 0.44 log10 units on newborns8,28 and 0.259 log10 units for 1-month-old infants,6 both measured using dLIM. This may be compared to 16th- to 84th-percentile ranges of 0.416 log10 units (MCS) and 0.433 log10 units (dLIM) for the present data sets. Thus, the precision of the present results, both in how precisely we know the means of our data and in the range of data values obtained, is wholly comparable to the results obtained by other investigators on newborn and one-month-old infants.\nPsychophysical Methods\nIn comparing the results obtained using MCS in experiment I and dLIM in experiment II, we found a consistent, statistically significant difference between the two methods, with MCS producing better performance than dLIM (Figs. 3A, 3C: compare the diamonds to the circles). Here, we account quantitatively for this discrepancy.\nConsider an ordered set of stimuli a, b, c, d, e, with a being easiest to see and e being the hardest to see. Let Pmcs(a), Pmcs(b), . . . Pmcs(e) be the probability that each stimulus is “presented” under MCS, and let Pdlim(a) . . . Pdlim(e) be the probably that it is presented under dLIM. Let S(a), S(b) . . . S(e) be the independent probability that each stimulus will be “seen” if it is presented (under either method). Data D(n) associated with a typical stimulus n will be\n$$\\def\\upalpha{\\unicode[Times]{x3B1}}$$$$\\def\\upbeta{\\unicode[Times]{x3B2}}$$$$\\def\\upgamma{\\unicode[Times]{x3B3}}$$$$\\def\\updelta{\\unicode[Times]{x3B4}}$$$$\\def\\upvarepsilon{\\unicode[Times]{x3B5}}$$$$\\def\\upzeta{\\unicode[Times]{x3B6}}$$$$\\def\\upeta{\\unicode[Times]{x3B7}}$$$$\\def\\uptheta{\\unicode[Times]{x3B8}}$$$$\\def\\upiota{\\unicode[Times]{x3B9}}$$$$\\def\\upkappa{\\unicode[Times]{x3BA}}$$$$\\def\\uplambda{\\unicode[Times]{x3BB}}$$$$\\def\\upmu{\\unicode[Times]{x3BC}}$$$$\\def\\upnu{\\unicode[Times]{x3BD}}$$$$\\def\\upxi{\\unicode[Times]{x3BE}}$$$$\\def\\upomicron{\\unicode[Times]{x3BF}}$$$$\\def\\uppi{\\unicode[Times]{x3C0}}$$$$\\def\\uprho{\\unicode[Times]{x3C1}}$$$$\\def\\upsigma{\\unicode[Times]{x3C3}}$$$$\\def\\uptau{\\unicode[Times]{x3C4}}$$$$\\def\\upupsilon{\\unicode[Times]{x3C5}}$$$$\\def\\upphi{\\unicode[Times]{x3C6}}$$$$\\def\\upchi{\\unicode[Times]{x3C7}}$$$$\\def\\uppsy{\\unicode[Times]{x3C8}}$$$$\\def\\upomega{\\unicode[Times]{x3C9}}$$$$\\def\\bialpha{\\boldsymbol{\\alpha}}$$$$\\def\\bibeta{\\boldsymbol{\\beta}}$$$$\\def\\bigamma{\\boldsymbol{\\gamma}}$$$$\\def\\bidelta{\\boldsymbol{\\delta}}$$$$\\def\\bivarepsilon{\\boldsymbol{\\varepsilon}}$$$$\\def\\bizeta{\\boldsymbol{\\zeta}}$$$$\\def\\bieta{\\boldsymbol{\\eta}}$$$$\\def\\bitheta{\\boldsymbol{\\theta}}$$$$\\def\\biiota{\\boldsymbol{\\iota}}$$$$\\def\\bikappa{\\boldsymbol{\\kappa}}$$$$\\def\\bilambda{\\boldsymbol{\\lambda}}$$$$\\def\\bimu{\\boldsymbol{\\mu}}$$$$\\def\\binu{\\boldsymbol{\\nu}}$$$$\\def\\bixi{\\boldsymbol{\\xi}}$$$$\\def\\biomicron{\\boldsymbol{\\micron}}$$$$\\def\\bipi{\\boldsymbol{\\pi}}$$$$\\def\\birho{\\boldsymbol{\\rho}}$$$$\\def\\bisigma{\\boldsymbol{\\sigma}}$$$$\\def\\bitau{\\boldsymbol{\\tau}}$$$$\\def\\biupsilon{\\boldsymbol{\\upsilon}}$$$$\\def\\biphi{\\boldsymbol{\\phi}}$$$$\\def\\bichi{\\boldsymbol{\\chi}}$$$$\\def\\bipsy{\\boldsymbol{\\psy}}$$$$\\def\\biomega{\\boldsymbol{\\omega}}$$$$\\def\\bupalpha{\\bf{\\alpha}}$$$$\\def\\bupbeta{\\bf{\\beta}}$$$$\\def\\bupgamma{\\bf{\\gamma}}$$$$\\def\\bupdelta{\\bf{\\delta}}$$$$\\def\\bupvarepsilon{\\bf{\\varepsilon}}$$$$\\def\\bupzeta{\\bf{\\zeta}}$$$$\\def\\bupeta{\\bf{\\eta}}$$$$\\def\\buptheta{\\bf{\\theta}}$$$$\\def\\bupiota{\\bf{\\iota}}$$$$\\def\\bupkappa{\\bf{\\kappa}}$$$$\\def\\buplambda{\\bf{\\lambda}}$$$$\\def\\bupmu{\\bf{\\mu}}$$$$\\def\\bupnu{\\bf{\\nu}}$$$$\\def\\bupxi{\\bf{\\xi}}$$$$\\def\\bupomicron{\\bf{\\micron}}$$$$\\def\\buppi{\\bf{\\pi}}$$$$\\def\\buprho{\\bf{\\rho}}$$$$\\def\\bupsigma{\\bf{\\sigma}}$$$$\\def\\buptau{\\bf{\\tau}}$$$$\\def\\bupupsilon{\\bf{\\upsilon}}$$$$\\def\\bupphi{\\bf{\\phi}}$$$$\\def\\bupchi{\\bf{\\chi}}$$$$\\def\\buppsy{\\bf{\\psy}}$$$$\\def\\bupomega{\\bf{\\omega}}$$$$\\def\\bGamma{\\bf{\\Gamma}}$$$$\\def\\bDelta{\\bf{\\Delta}}$$$$\\def\\bTheta{\\bf{\\Theta}}$$$$\\def\\bLambda{\\bf{\\Lambda}}$$$$\\def\\bXi{\\bf{\\Xi}}$$$$\\def\\bPi{\\bf{\\Pi}}$$$$\\def\\bSigma{\\bf{\\Sigma}}$$$$\\def\\bPhi{\\bf{\\Phi}}$$$$\\def\\bPsi{\\bf{\\Psi}}$$$$\\def\\bOmega{\\bf{\\Omega}}$$\\begin{equation}\\tag{1}{\\rm{D}}\\left( {\\bf{n}} \\right) = {\\rm{P}}\\left( {\\bf{n}} \\right) \\times {\\rm{S}}\\left( {\\bf{n}} \\right).\\end{equation}\nUnder MCS, Pmcs(n) will always be 1, because every stimulus in the stimulus set was presented as long as the infant was awake, thus\n\\begin{equation}\\tag{2}{{\\rm{D}}_{{\\rm{mcs}}}}\\left( {\\bf{n}} \\right) = {\\rm{S}}\\left( {\\bf{n}} \\right).\\end{equation}\nUnder dLIM, stimulus n will be presented only if every higher-valued stimulus value in the series is presented and seen. Taking stimulus d as an example, the data under dLIM [Ddlim(d)] can be predicted from the data under MCS [e.g., Dmcs(d)]:\n\\begin{equation}\\tag{3}{{\\rm{P}}_{{\\rm{dlim}}}}\\left( {\\bf{d}} \\right) = {{\\rm{D}}_{{\\rm{mcs}}}}\\left( {\\bf{a}} \\right) \\times {{\\rm{D}}_{{\\rm{mcs}}}}\\left( {\\bf{b}} \\right) \\times {{\\rm{D}}_{{\\rm{mcs}}}}\\left( {\\bf{c}} \\right).\\end{equation}\nSubstituting Plim(d) from Eq. 3 into Eq. 1,\n\\begin{equation}\\tag{4}{{\\rm{D}}_{{\\rm{dlim}}}}\\left( {\\bf{d}} \\right) = {{\\rm{D}}_{{\\rm{mcs}}}}\\left( {\\bf{a}} \\right) \\times {{\\rm{D}}_{{\\rm{mcs}}}}\\left( {\\bf{b}} \\right) \\times {{\\rm{D}}_{{\\rm{mcs}}}}\\left( {\\bf{c}} \\right) \\times {\\rm{S}}\\left( {\\bf{d}} \\right).\\end{equation}\nSubstituting from Eq. 2,\n\\begin{equation}\\tag{5}{{\\rm{D}}_{{\\rm{dlim}}}}\\left( {\\bf{d}} \\right) = {{\\rm{D}}_{{\\rm{mcs}}}}\\left( {\\bf{a}} \\right) \\times {{\\rm{D}}_{{\\rm{mcs}}}}\\left( {\\bf{b}} \\right) \\times {{\\rm{D}}_{{\\rm{mcs}}}}\\left( {\\bf{c}} \\right) \\times {{\\rm{D}}_{{\\rm{mcs}}}}\\left( {\\bf{d}} \\right)\\end{equation}\n(see Pelli et al.,33 Eq. 6, for a similar approach). Thus, the probability that typical stimulus n will be seen if it is presented will be lower under dLIM than it is under MCS.\nTo test this prediction, we pooled all the complete and incomplete MCS and dLIM data sets into their respective group psychometric functions (bold logistic curves, Figs. 5CF), where the abscissa was the stimulus value of contrast or spatial frequency and the ordinate was the total “yes” responses across all infants divided by the number of infants who were presented that stimulus. We also applied the same analysis to the data from Brown et al.1 (Figs. 5A, 5B).\nFigure 5\n\nGroup psychometric functions. Data collected using dLIM (right panels) are compared to data collected using MCS (left panels). White circles indicate pooled data on all infants (including incomplete data sets), with the area of each data point proportional to the number of observations. Bold smooth curves are weighted logistic functions fitted to the white data points. Black dots indicate predicted dLIM performance from simulations based on MCS data. Dashed lines in left panels are the analytic predictions from Eq. 5, fine smooth curves fitted to the simulations. (A, B) Results from Brown et al.1; (C–F) results of the present experiment.\nFigure 5\n\nGroup psychometric functions. Data collected using dLIM (right panels) are compared to data collected using MCS (left panels). White circles indicate pooled data on all infants (including incomplete data sets), with the area of each data point proportional to the number of observations. Bold smooth curves are weighted logistic functions fitted to the white data points. Black dots indicate predicted dLIM performance from simulations based on MCS data. Dashed lines in left panels are the analytic predictions from Eq. 5, fine smooth curves fitted to the simulations. (A, B) Results from Brown et al.1; (C–F) results of the present experiment.", null, "We predicted the dLIM psychometric functions from the MCS data using Eq. 5 (dashed lines in Figs. 5A, 5C, 5E). We confirmed the analysis by simulating 1000 dLIM experiments, each on 20 simulated infants. Each simulated infant started at 100% contrast or 0.01 cy/deg and contributed a “yes” or “no” response for each stimulus value in a descending sequence. The response for each stimulus was “yes” with probability equal to the observed fraction “yes” responses for the corresponding MCS contrast or acuity stimulus, interpolating along the logistic MCS curve if necessary. Each simulated dLIM data set remained “no” after the first “no” response was reached. The fraction “yes” in the simulation analysis agreed well with the analytic prediction from Eq. 5 (the black dots are the simulated data and are close to the dashed lines in Figs. 5A, 5C, 5E). We applied this analysis to the results of experiments I and II, as well as the MCS and dLIM data from Brown et al.1 The results were encouraging: the small black dots and the smooth curve drawn through them fall close to all the dLIM data (white circles in Figs. 5B, 5D, 5F).\nTo compare the simulated dLIM results to the dLIM data quantitatively, we fitted a logistic function to the psychometric data from each simulated infant to produce a simulated 50% “yes” threshold. Next, we collated these individual threshold results into simulated 20-infant experiments. The final results of the simulation were the medians and the 95% sampling intervals of the median results of 1000 simulated experiments. The predicted thresholds from the simulations (listed as the “sampling distributions” for the “dLIM pred” results in Table 2 and shown as squares in Fig. 3) agreed well with the empirical thresholds (circles in Fig. 3).\nThe predicted difference between dLIM and MCS is generally greater when there are many stimuli separated by steps that are small when compared to the steepness of the psychometric function (as in Figs. 5C, 5E). This is because there are many steps of probabilities between 0 and 1 to be multiplied together in Eq. 5. By comparison, there is little difference between MCS results and dLIM predictions or data when the step size is larger compared to the steepness of the psychometric function (as in Figs. 5A, 5B). This suggests that clinical measurement using dLIM will be closer to the MCS results if larger step sizes are used, for example steps of 0.301 log10 units in contrast or one octave of spatial frequency.\nMore generally, predicted dLIM performance in the present experiments was both qualitatively (fine lines in Figs. 5B, 5D, 5F) and quantitatively (compare the squares to the circles in Fig. 3) similar to the empirical dLIM results. Therefore, we see very little reason to invoke explanations of the difference between MCS and dLIM based on the effort of the examiner, the alertness of the infant, or the incomplete blinding of the examiner to the values of the stimuli being presented in dLIM. This should be good news to those who use dLIM in a clinical setting.\nConclusions\nThe visual acuity of the newborn infant is 0.783 to 1.204 cy/deg, depending on psychometric method, which is similar to the overall results obtained by others using a range of methods.8\nThe CSF of the newborn infant is band-pass, with a peak located near 0.23 cy/deg. The peak contrast threshold of the newborn infant is about 0.232 to 0.311 contrast, depending on the psychophysical method.\nA good square-wave spatial frequency for testing the contrast sensitivity of newborn infants is 0.083 cy/deg because it underestimates an infant's maximum contrast sensitivity by no more than 0.15 log10 units (a linear factor of 1.41), no matter what shape of CSF is assumed.\nMCS yields better performance than dLIM, but the results of the two methods predict similar performance once the contingent presentation of stimuli under dLIM is taken into account. The agreement between dLIM and MCS is better when fewer stimuli, separated by wider step sizes, are used. dLIM can be used clinically on newborn infants with confidence.\nAcknowledgments\nThe authors are grateful for the help of the nurses and research assistants of the Clinical Research Center and the nurses of the Women and Infants Service of the Ohio State University, without whom this research would not be possible. Precision Vision, Inc., manufactured the stimuli, Lisa Jones-Jordan provided statistical advice, and Delwin T. Lindsey provided advice and suggestions about the simulation analyses.\nSupported by Grants NEI 2R42EU022545-02A1 and UL1TR001070.\nDisclosure: A.M. Brown, None; F.O. Opoku, None; M.R. Stenger, None\nReferences\nBrown AM, Lindsey DT, Cammenga JG, Giannone PJ, Stenger MR. The contrast sensitivity of the newborn human infant newborn contrast sensitivity. Invest Ophthalmol Vis Sci. 2015; 56: 625–632.\nCampbell F, Robson J. Application of Fourier analysis to the visibility of gratings. J Physiol. 1968; 197: 551–566.\nHopkins GR, Douherty BE, Brown AM. The Ohio Contrast Cards: visual performance in a pediatric low-vision site. Optom Vis Sci. 2017; 94: 946.\nBrown AM, Yamamoto M. Visual acuity in newborn and preterm infants measured with grating acuity cards. Am J Ophthalmol. 1986; 102: 245–253.\nMcDonald M, Dobson V, Sebris SL, Daich L, Varner D, Teller DY. The acuity card procedure: a rapid test of infant acuity. Invest Ophthalmol Vis Sci. 1985; 26: 1158–1162.\nMayer DL, Beiser AS, Warner AF, Pratt EM, Raye KN, Lang JM. Monocular acuity norms for the Teller Acuity Cards between ages one month and four years. Invest Ophthalmol Vis Sci. 1995; 36: 671–685.\nLewis TL, Maurer D. The development of the temporal and nasal visual fields during infancy. Vision Res. 1992; 32: 903–911.\nDobson V, Schwartz TL, Sandstrom DJ, Michel L. Binocular visual acuity of neonates: the acuity card procedure. Dev Med Child Neurol. 1987; 29: 199–206.\nWatson AB, Ahumada AJ. A standard model for foveal detection of spatial contrast. J Vis. 2005; 5: 6–6.\nHopkins GR, Dougherty BE, Brown AM. The Ohio Contrast Cards: visual performance in a pediatric low-vision site. Optom Vis Sci. 2017; 94: 946–956.\nMayer DL, Dobson V. Grating acuity cards: validity and reliability in studies of human visual development. In: Dobbing J, ed. Developing Brain Behaviour: The Role of Lipids in Infant Formula. San Diego, CA: Academic; 1997: 253.\nCampbell FW, Howell ER, Johnstone JR. A comparison of threshold and suprathreshold appearance of gratings with components in the low and high spatial frequency range. J Physiol. 1978; 284: 193–201.\nCampbell FW, Green DG. Optical and retinal factors affecting visual resolution. J Physiol. 1965; 181: 576–593.\nRousseeuw PJ. Least median squares regression. J Am Statist Assoc. 1984; 79: 871–880.\nBanks M, Salapatek P. Acuity and contrast sensitivity in 1-, 2-, and 3-month-old human infants. Invest Ophthalmol Vis Sci. 1978; 17: 361–365.\nAtkinson J, Braddick O, Moar K. Development of contrast sensitivity over the first 3 months of life in the human infant. Vision Res. 1977; 17: 1037–1044.\nMovshon JA, Kiorpes L. Analysis of the development of spatial contrast sensitivity in monkey and human infants. J Opt Soc Am. 1988; 5: 2166–2172.\nPeterzell DH, Teller DY. Individual differences in contrast sensitivity functions: the lowest spatial frequency channels. Vision Res. 1996; 36: 3077–3085.\nAtkinson J, Braddick O, Moar K. Contrast sensitivity of the human infant for moving and static patterns. Vision Res. 1977; 17: 1045–1047.\nAdams RJ, Courage ML. Using a single test to measure human contrast sensitivity from early childhood to maturity. Vision Res. 2002; 42: 1205–1210.\nAdams RJ, Courage ML. Monocular contrast sensitivity in 3-to 36-month-old human infants. Optom Vis Sci. 1996; 73: 546–551.\nSlater A, Sykes M. Newborn infants' visual responses to square wave gratings. Child Dev. 1977; 545–554.\nBanks MS, Stephens BR, Hartmann EE. The development of basic mechanisms of pattern vision: spatial frequency channels. J Exp Child Psychol. 1985; 40: 501–527.\nAllen JL. Visual Acuity Development in Human Infants up to 6 Months of Age. Seattle, WA: University of Washington, 1979. Unpublished doctoral dissertation.\nTeller DY. The development of visual acuity in human and monkey infants. Trends Neurosci. 1981; 4: 21–24.\nVan Hof-Van Duin J, Mohn G. The development of visual acuity in normal fullterm and preterm infants. Vision Res. 1986; 26: 909–916.\nGwiazda J, Brill S, Mohindra I, Held R. Preferential looking acuity in infants from two to fifty-eight weeks of age. Optom Vis Sci. 1980; 57: 428–432.\nIpata A, Cioni G, Boldrini A, Bottai P, Van Hof-van Duin J. Visual acuity of low-and high-risk neonates and acuity development during the first year. Behav Brain Res. 1992; 49: 107–114.\nAtkinson J, Braddick OJ, French J. Contrast sensitivity of the human neonate measured by the visual evoked potential. Invest Ophthalmol Vis Sci. 1979; 18: 210–213.\nKelly D. Adaptation effects on spatio-temporal sine-wave thresholds. Vision Res. 1972; 12: 89–I N81.\nNorcia AM, Tyler CW, Hamer RD. Development of contrast sensitivity in the human infant. Vision Res. 1990; 30: 1475–1486.\nVan Nes F, Koenderink J, Nas H, Bouman M. Spatiotemporal modulation transfer in the human eye. J Opt Soc Am. 1967; 57: 1082–1088.\nPelli DG, Robson JG, Wilkins AJ. The design of a new letter chart for measuring contrast sensitivity. Clin Vis Sci. 1988; 2: 187–199.\nTable A1\n\nStimuli Used in Experiment I\nTable A1\n\nStimuli Used in Experiment I", null, "Table A2\n\nStimuli Used in Experiment II\nTable A2\n\nStimuli Used in Experiment II", null, "Figure 1\n\nExamples of the Newborn Acuity Cards and Newborn Contrast Cards. (A) The “easy” contrast card (0.083 cy/deg, 0.86 contrast); (B) a typical contrast card (0.303 cy/deg, 0.50 contrast); (C, D) acuity cards (0.50 and 1 cy/deg, both at 0.86 contrast). The luminance mismatch between the grating and the surrounding gray in this figure is an artifact of reproduction. In the actual stimuli, the space-averaged luminance match was excellent.\nFigure 1\n\nExamples of the Newborn Acuity Cards and Newborn Contrast Cards. (A) The “easy” contrast card (0.083 cy/deg, 0.86 contrast); (B) a typical contrast card (0.303 cy/deg, 0.50 contrast); (C, D) acuity cards (0.50 and 1 cy/deg, both at 0.86 contrast). The luminance mismatch between the grating and the surrounding gray in this figure is an artifact of reproduction. In the actual stimuli, the space-averaged luminance match was excellent.", null, "Figure 2\n\nResults of experiment I (A) and experiment II (B). The data points are the medians and the error bars are the 95% confidence intervals of the medians. Continuous curves: the standard model of the adult CSF applied to square-waves.9,10 Dashed curves: the same model, only with the Minkowski pooling exponent set to 1 instead of 4. Dotted curves: a generic low-pass CSF, with log10(CS) being a linear function of the linear spatial frequency. The difference between the maximum of the CSF and the measured contrast sensitivity at 0.083 cy/deg was no more than 0.15 log10 units (a factor of 1.41), regardless of which model is correct. The fit of the standard (band-pass) model is much better than either low-pass model (see Discussion for details).\nFigure 2\n\nResults of experiment I (A) and experiment II (B). The data points are the medians and the error bars are the 95% confidence intervals of the medians. Continuous curves: the standard model of the adult CSF applied to square-waves.9,10 Dashed curves: the same model, only with the Minkowski pooling exponent set to 1 instead of 4. Dotted curves: a generic low-pass CSF, with log10(CS) being a linear function of the linear spatial frequency. The difference between the maximum of the CSF and the measured contrast sensitivity at 0.083 cy/deg was no more than 0.15 log10 units (a factor of 1.41), regardless of which model is correct. The fit of the standard (band-pass) model is much better than either low-pass model (see Discussion for details).", null, "Figure 3\n\nThreshold comparisons. The data points are the medians of individual subjects' thresholds (medians in panels A and C are also shown in Fig. 2). Error bars (when visible) are 95% sampling confidence intervals of the thresholds. Diamonds: MCS data. Circles: dLIM data. Squares: Predicted dLIM data (see Discussion). (A) Contrast sensitivities at 0.083 cy/deg from experiments I and II. (B) contrast sensitivities from Brown et al.1 (C) Visual acuities from experiments I and II. (D) Visual acuities from Brown and Yamamoto.4 Horizontal dashed lines: averages of all predicted or measured dLIM contrast sensitivity (A, B) and predicted or measured dLIM visual acuity (C, D).\nFigure 3\n\nThreshold comparisons. The data points are the medians of individual subjects' thresholds (medians in panels A and C are also shown in Fig. 2). Error bars (when visible) are 95% sampling confidence intervals of the thresholds. Diamonds: MCS data. Circles: dLIM data. Squares: Predicted dLIM data (see Discussion). (A) Contrast sensitivities at 0.083 cy/deg from experiments I and II. (B) contrast sensitivities from Brown et al.1 (C) Visual acuities from experiments I and II. (D) Visual acuities from Brown and Yamamoto.4 Horizontal dashed lines: averages of all predicted or measured dLIM contrast sensitivity (A, B) and predicted or measured dLIM visual acuity (C, D).", null, "Figure 4\n\nThe present dLIM data (larger black disks: contrast data at 0.301 cy/deg, acuity data at 0.86 contrast) compared to data from the literature on infants age 4 months or younger (lines extending to the right indicate other data on infants over age 4 months). Black symbols, card data; white or gray symbols, forced-choice preferential looking (FPL) data. Data that were scored as the last-seen stimulus (e.g., see McDonald et al.5) are shown as one half “step” (typically, 0.075 log10 units) above their reported values so that all the data in both panels are defined at 75% correct (FPL) or 50% seeing (“yes/no”). For clarity, superimposed data points have been displaced by 2 days along the age axis. (A) Contrast sensitivity. Gray symbols, the maximum of the CSF is a lower bound on contrast sensitivity because the CSF was low-pass over the range of spatial frequencies tested. Data from Adams et al.20 (black squares); Adams and Courage21 (black diamond); Brown et al.1 (smaller black circle); Slater and Sykes22 (gray square); Atkinson et al.19 (white and gray circles); Banks and Salapatek15 (white and gray upright triangles); and Banks et al.23 (gray inverted triangles). (B) Visual acuity. Data from Allen24 (white circles11,25); Van Hof-Van Duin and Mohn26 (white diamonds); Gwiazda et al.27 (white triangles); McDonald et al.5 (black diamonds); Mayer et al.6 (black triangles), (black squares); Dobson8 (small black diamonds); Ipata et al.28 (small black inverted triangle), present data (black circle); Brown and Yamamoto4 (smaller black circle).\nFigure 4\n\nThe present dLIM data (larger black disks: contrast data at 0.301 cy/deg, acuity data at 0.86 contrast) compared to data from the literature on infants age 4 months or younger (lines extending to the right indicate other data on infants over age 4 months). Black symbols, card data; white or gray symbols, forced-choice preferential looking (FPL) data. Data that were scored as the last-seen stimulus (e.g., see McDonald et al.5) are shown as one half “step” (typically, 0.075 log10 units) above their reported values so that all the data in both panels are defined at 75% correct (FPL) or 50% seeing (“yes/no”). For clarity, superimposed data points have been displaced by 2 days along the age axis. (A) Contrast sensitivity. Gray symbols, the maximum of the CSF is a lower bound on contrast sensitivity because the CSF was low-pass over the range of spatial frequencies tested. Data from Adams et al.20 (black squares); Adams and Courage21 (black diamond); Brown et al.1 (smaller black circle); Slater and Sykes22 (gray square); Atkinson et al.19 (white and gray circles); Banks and Salapatek15 (white and gray upright triangles); and Banks et al.23 (gray inverted triangles). (B) Visual acuity. Data from Allen24 (white circles11,25); Van Hof-Van Duin and Mohn26 (white diamonds); Gwiazda et al.27 (white triangles); McDonald et al.5 (black diamonds); Mayer et al.6 (black triangles), (black squares); Dobson8 (small black diamonds); Ipata et al.28 (small black inverted triangle), present data (black circle); Brown and Yamamoto4 (smaller black circle).", null, "Figure 5\n\nGroup psychometric functions. Data collected using dLIM (right panels) are compared to data collected using MCS (left panels). White circles indicate pooled data on all infants (including incomplete data sets), with the area of each data point proportional to the number of observations. Bold smooth curves are weighted logistic functions fitted to the white data points. Black dots indicate predicted dLIM performance from simulations based on MCS data. Dashed lines in left panels are the analytic predictions from Eq. 5, fine smooth curves fitted to the simulations. (A, B) Results from Brown et al.1; (C–F) results of the present experiment.\nFigure 5\n\nGroup psychometric functions. Data collected using dLIM (right panels) are compared to data collected using MCS (left panels). White circles indicate pooled data on all infants (including incomplete data sets), with the area of each data point proportional to the number of observations. Bold smooth curves are weighted logistic functions fitted to the white data points. Black dots indicate predicted dLIM performance from simulations based on MCS data. Dashed lines in left panels are the analytic predictions from Eq. 5, fine smooth curves fitted to the simulations. (A, B) Results from Brown et al.1; (C–F) results of the present experiment.", null, "Table 1\n\nParticipant Demographics\nTable 1\n\nParticipant Demographics", null, "Table 2\n\nMedian Results\nTable 2\n\nMedian Results", null, "Table 3\n\nOrder of Card Presentation in Experiment 2a\nTable 3\n\nOrder of Card Presentation in Experiment 2a", null, "Table A1\n\nStimuli Used in Experiment I\nTable A1\n\nStimuli Used in Experiment I", null, "Table A2\n\nStimuli Used in Experiment II\nTable A2\n\nStimuli Used in Experiment II", null, "×\n\nThis PDF is available to Subscribers Only" ]
[ null, "https://tvst.arvojournals.org/UI/app/images/arvo_journals_logo-white.png", null, "https://tvst.arvojournals.org/Images/grey.gif", null, "https://tvst.arvojournals.org/Images/grey.gif", null, "https://tvst.arvojournals.org/Images/grey.gif", null, "https://tvst.arvojournals.org/Images/grey.gif", null, "https://tvst.arvojournals.org/Images/grey.gif", null, "https://tvst.arvojournals.org/Images/grey.gif", null, "https://tvst.arvojournals.org/Images/grey.gif", null, "https://tvst.arvojournals.org/Images/grey.gif", null, "https://tvst.arvojournals.org/Images/grey.gif", null, "https://tvst.arvojournals.org/Images/grey.gif", null, "https://tvst.arvojournals.org/Images/grey.gif", null, "https://tvst.arvojournals.org/Images/grey.gif", null, "https://tvst.arvojournals.org/Images/grey.gif", null, "https://tvst.arvojournals.org/Images/grey.gif", null, "https://tvst.arvojournals.org/Images/grey.gif", null, "https://tvst.arvojournals.org/Images/grey.gif", null, "https://tvst.arvojournals.org/Images/grey.gif", null, "https://tvst.arvojournals.org/Images/grey.gif", null, "https://tvst.arvojournals.org/Images/grey.gif", null, "https://tvst.arvojournals.org/Images/grey.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8746179,"math_prob":0.9481258,"size":1808,"snap":"2021-21-2021-25","text_gpt3_token_len":503,"char_repetition_ratio":0.14523281,"word_repetition_ratio":0.045751635,"special_character_ratio":0.29701328,"punctuation_ratio":0.11282051,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9585834,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-16T12:34:20Z\",\"WARC-Record-ID\":\"<urn:uuid:44395cee-b5b8-453f-ae52-cc34012f3870>\",\"Content-Length\":\"282603\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:65f09c5b-ed75-42f1-9248-aba120f90681>\",\"WARC-Concurrent-To\":\"<urn:uuid:c6fce98a-0aea-4581-bb04-68f276b739c7>\",\"WARC-IP-Address\":\"52.191.96.132\",\"WARC-Target-URI\":\"https://tvst.arvojournals.org/article.aspx?articleid=2685740\",\"WARC-Payload-Digest\":\"sha1:BN6YXCB6MLXBWDC4CDWS6Y7CA4KXPRVK\",\"WARC-Block-Digest\":\"sha1:HSCIJN4BOLCWJGDPSGVXWEBVOQL6A6RV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991269.57_warc_CC-MAIN-20210516105746-20210516135746-00401.warc.gz\"}"}
https://github.com/joseproenca/parameterised-connectors
[ "{{ message }}\n\n# joseproenca / parameterised-connectors Public\n\nScala library with support for a calculus of connectors with parameters that influence their interfaces.\n\nSwitch branches/tags\nNothing to show\n\n## Files\n\nFailed to load latest commit information.\nType\nName\nCommit time\n\n# Parameterised connectors", null, "This scala library investigates a language to compose connectors (or components).\n\nPrimitive blocks are blocks with input and output ports. Composition of blocks can be sequential (outputs to inputs) or parallel (appending inputs and outputs), and is defined in a pointfree style, i.e., without naming the ports. A type system guarantees that composition is correct.\n\nBoth connectors and types can be parameterised by integer and boolean variables, which determine the interface of the connector, i.e., how many input and output ports it has. The type checking uses a mix of constraint unification and constraint solving.\n\nThis project is a follow up and a simpler approach to the ideas experimented in https://github.com/joseproenca/connector-family, using a different construct to produce loops (traces instead of duals) and not considering connectors as parameters.\n\nTry it online with a new prototype visualiser, which uses a simplified engine without any constraint solving (just algebraic simplifications).\n\nThe following example shows how to quickly build and type-check a connector. To try the blocks of code below, the easiest way is to use `sbt` build tool by using the command `sbt console` and copy-paste this code into the console.\n\n```import paramConnectors.DSL._\n\ntypeOf( lam(\"x\":I, id^\"x\") )\n// returns the type: ∀x:I . x -> x\n\nfifo*id & drain\n// returns the connector with type information:\n// (fifo ⊗ id) ; drain\n// : 2 -> 0```\n\n## Visualising a connector\n\nLarger connectors can be hard to understand or debug. Therefore we provide a function that produces a simplified dot graph of a given connector, as exemplified in the example below.\n\n```import paramConnectors.DSL._\n\nval exrouter =\ndupl & dupl*id &\n(lossy*lossy & dupl*dupl & id*swap*id & id*id*merger)*id &\nid*id*drain\n// returns its type 1 -> 2\n\ndraw(exrouter)\n// returns a graph \"diagraph G { .... }\"```\n\nThe resulting string from `draw(exrouter)` can be compiled using dot, for example, using the online tool Viz.js. The produced graph is depicted below.", null, "## Running connectors\n\nA connector can be executed (simulated) using the scala engine developed within the PICC project. Follows an example of an execution that uses the `exrouter` defined above.\n\n```run(writer(\"some-data\") & exrouter & reader(1)*reader(1))\n// returns a message from one of the reader's instance\n// confirming the reception of \"some-data\".```\n\n## More examples\n\nThe examples below show more complex examples. Our library provides 3 main functions to type check connectors:\n\n• `typeOf` - returns the most general type after all steps (collect constraints, perform an unification algorithm, and perform constraint solver on remaining constraints.\n\n• `typeTree` - applies the type rules and collects the constraints, without checking if they hold.\n\n• `typeInstance` - performs the same steps as `typeOf` but provides an instance of the type, i.e., a type without constraints. This type can still be the most general type - it it is not a most general type, the type is annotated with `©` (standing for \"concrete\" type).\n\nAn extra function `debug` returns all intermediate steps during type-checking. The examples below show the usage of these functions with more complex examples.\n\n```val x:I = \"x\"\nval n:I = \"n\"\nval b:B = \"b\"\nval oneToTwo = Prim(\"oneToTwo\",1,2) // 1 input, 2 outputs\n// other primitives in the DSL:\n// id:1->1, lossy:1->1, fifo:1->1, merger:2->1, dupl:1->2, drain:2->0\n\ntypeOf( lam(x,oneToTwo^x)(2) )\n// returns 2 -> 4\n\ntypeOf( lam(x,(id^x) * (id^x)) & lam(n,fifo^n) )\n// returns ∀x:I,n:I . 2 * x -> 2 * x | n == (2 * x)\ntypeTree( lam(x,(id^x) * (id^x)) & lam(n,fifo^n) )\n// returns ∀x:I,n:I . (1^x) ⊗ (1^x) -> 1^n | (((1 * x) + (1 * x)) == (1 * n))\n// & (1 >= 0) & (1 >= 0)\n\ntypeOf( lam(b, b? fifo + drain) )\n// returns ∀b:B . if b then 1 else 2 -> if b then 1 else 0\ntypeOf( lam(b, b? fifo + drain) & id )\n// returns ∀b:B . 1 -> 1 | b\n\ntypeOf( lam(x,Tr(x - 1, sym(x - 1,1) & (fifo^x))))\n// returns ∀x:I . 1 -> 1\ntypeTree( lam(x,Tr(x - 1, sym(x - 1,1) & (fifo^x))))\n// returns ∀x:I . x1 -> x2 | ((x1 + (x - 1)) == ((x - 1) + 1))\n// & ((x2 + (x - 1)) == x)\n// & ((1 + (x - 1)) == x)\n// & (x1 >= 0) & (x2 >= 0)\n\ntypeOf( lam(n, id^x ^ x<--n) )\n// returns ∀n:I . x1 -> x2 | (n == ((n * n) + (-2 * x1)))\n// & (n == ((n * n) + (-2 * x2)))\n// & (x1 >= 0) & (x2 >= 0)\ntypeTree( lam(n, id^x ^ x<--n) )\n// returns ∀n:I . x1 -> x2 | (x1 == Σ{0 ≤ x < n}x) & (x2 == Σ{0 ≤ x < n}x)\n// & (x1 >= 0) & (x2 >= 0)\ntypeInstance(lam(n, id^x ^ x<--n) )\n// returns © 0 -> 0\n\ntypeOf( lam(n, id^x ^ x<--n)(3) )\n// returns 3 -> 3\ntypeTree( lam(n, id^x ^ x<--n)(3) )\n// returns x1 -> x2 | (x1 == Σ{0 ≤ x < 3}x) & (x2 == Σ{0 ≤ x < 3}x)\n// & (x1 >= 0) & (x2 >= 0)\n\ntypeOf( lam(x,Tr(x,id^3)) )\n// returns ∀x:I . (-1 * x) + 3 -> 3 + (-1 * x) | (((-1 * x) + 3) >= 0)\n// & ((3 + (-1 * x)) >= 0)\ntypeInstance( lam(x,Tr(x,id^3)) )\n// returns © 0 -> 0```\n\nEven more examples can be found in our test suite.\n\nObserve that an instance of the type of `lam(x,Tr(x,id^3))` is `© 0 -> 0`. The initial symbol means that this is a concrete solution, i.e., this is a particular instance of the type that satisfies the constraints. In practice, this means that when trying to solve the constraints multiple solutions were found for the variables of the type, and one particular was chosen. Whenever the `©` symbol does not appear when requesting an instance of a type we are guaranteed to have the most general type, as one would expect from a type.\n\nThe practical price to pay for knowing wether a type is concrete or not is a second run of the constraint solving, this time negating the previous assignment for the variables in the type.\n\nScala library with support for a calculus of connectors with parameters that influence their interfaces.\n\n## Releases\n\nNo releases published\n\n## Packages 0\n\nNo packages published" ]
[ null, "https://camo.githubusercontent.com/8e9b2769504964e07c859c9884e8ca3718c16c3c34bf83a9f6842bd609396223/68747470733a2f2f7472617669732d63692e6f72672f6a6f736570726f656e63612f706172616d65746572697365642d636f6e6e6563746f72732e7376673f6272616e63683d6d6173746572", null, "https://camo.githubusercontent.com/b02e99448b45474fe8cba6b88403f51b63d1f54a87875df0b8711820d3f2f709/68747470733a2f2f63646e2e7261776769742e636f6d2f6a6f736570726f656e63612f706172616d65746572697365642d636f6e6e6563746f72732f6d61737465722f6578726f757465722e737667", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6989775,"math_prob":0.98878884,"size":5591,"snap":"2021-43-2021-49","text_gpt3_token_len":1695,"char_repetition_ratio":0.13746196,"word_repetition_ratio":0.1262136,"special_character_ratio":0.33911642,"punctuation_ratio":0.13214286,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9932212,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-27T05:49:58Z\",\"WARC-Record-ID\":\"<urn:uuid:65863451-46b0-4e62-a426-bc2d28dc7deb>\",\"Content-Length\":\"202076\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:249b3932-d91e-42f4-848f-459bf1d121a2>\",\"WARC-Concurrent-To\":\"<urn:uuid:20ddfd98-49ca-4659-90b0-85170cc340b2>\",\"WARC-IP-Address\":\"140.82.113.3\",\"WARC-Target-URI\":\"https://github.com/joseproenca/parameterised-connectors\",\"WARC-Payload-Digest\":\"sha1:N5XMCZMYSI3BJ7KEXMRA4C2Y4NCW2XGZ\",\"WARC-Block-Digest\":\"sha1:3U5JFT34VV4DIEKXQE2XTNP543JKYK3F\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358118.13_warc_CC-MAIN-20211127043716-20211127073716-00188.warc.gz\"}"}
https://www.numbertowordsconverter.com/factors-of-501000/
[ "# What are all the factors, the prime factorization, and factor pairs of 501000?\n\nTo find the factors of 501000, divide 501000 by each number starting with 1 and working up to 501000\n\n## What is a factor in math ?\n\nFactors are the numbers you multiply together to get another number. For example, the factors of 15 are 3 and 5 because 3 × 5 = 15.\n\nThe factors of a number can be positive or negative, but they cannot be zero.\n\nThe factors of a number can be used to find out if the number is prime or not.\n\nA prime number is a number that has only two factors: itself and 1. For example, the number 7 is prime because its only factors are 7 and 1.\n\n## List all of the factors of 501000 ?\n\nTo calculate the factors of 501000 , you can use the division method.\n\n1. Begin by dividing 501000 by the smallest possible number, which is 2.\n\n2. If the division is even, then 2 is a factor of 501000.\n\n3. Continue dividing 501000 by larger numbers until you find an odd number that does not divide evenly into 501000 .\n\n4. The numbers that divide evenly into 501000 are the factors of 501000 .\n\nNow let us find how to calculate all the factors of Five hundred one thousand :\n\n501000 ÷ 1 = 501000\n501000 ÷ 2 = 250500\n501000 ÷ 3 = 167000\n501000 ÷ 4 = 125250\n501000 ÷ 5 = 100200\n501000 ÷ 6 = 83500\n501000 ÷ 8 = 62625\n501000 ÷ 10 = 50100\n501000 ÷ 12 = 41750\n501000 ÷ 15 = 33400\n501000 ÷ 20 = 25050\n501000 ÷ 24 = 20875\n501000 ÷ 25 = 20040\n501000 ÷ 30 = 16700\n501000 ÷ 40 = 12525\n501000 ÷ 50 = 10020\n501000 ÷ 60 = 8350\n501000 ÷ 75 = 6680\n501000 ÷ 100 = 5010\n501000 ÷ 120 = 4175\n501000 ÷ 125 = 4008\n501000 ÷ 150 = 3340\n501000 ÷ 167 = 3000\n501000 ÷ 200 = 2505\n501000 ÷ 250 = 2004\n501000 ÷ 300 = 1670\n501000 ÷ 334 = 1500\n501000 ÷ 375 = 1336\n501000 ÷ 500 = 1002\n501000 ÷ 501 = 1000\n501000 ÷ 600 = 835\n501000 ÷ 668 = 750\n501000 ÷ 750 = 668\n501000 ÷ 835 = 600\n501000 ÷ 1000 = 501\n501000 ÷ 1002 = 500\n501000 ÷ 1336 = 375\n501000 ÷ 1500 = 334\n501000 ÷ 1670 = 300\n501000 ÷ 2004 = 250\n501000 ÷ 2505 = 200\n501000 ÷ 3000 = 167\n501000 ÷ 3340 = 150\n501000 ÷ 4008 = 125\n501000 ÷ 4175 = 120\n501000 ÷ 5010 = 100\n501000 ÷ 6680 = 75\n501000 ÷ 8350 = 60\n501000 ÷ 10020 = 50\n501000 ÷ 12525 = 40\n501000 ÷ 16700 = 30\n501000 ÷ 20040 = 25\n501000 ÷ 20875 = 24\n501000 ÷ 25050 = 20\n501000 ÷ 33400 = 15\n501000 ÷ 41750 = 12\n501000 ÷ 50100 = 10\n501000 ÷ 62625 = 8\n501000 ÷ 83500 = 6\n501000 ÷ 100200 = 5\n501000 ÷ 125250 = 4\n501000 ÷ 167000 = 3\n501000 ÷ 250500 = 2\n501000 ÷ 501000 = 1\n\nAs you can see, the factors of 501000 are 1 , 2 , 3 , 4 , 5 , 6 , 8 , 10 , 12 , 15 , 20 , 24 , 25 , 30 , 40 , 50 , 60 , 75 , 100 , 120 , 125 , 150 , 167 , 200 , 250 , 300 , 334 , 375 , 500 , 501 , 600 , 668 , 750 , 835 , 1000 , 1002 , 1336 , 1500 , 1670 , 2004 , 2505 , 3000 , 3340 , 4008 , 4175 , 5010 , 6680 , 8350 , 10020 , 12525 , 16700 , 20040 , 20875 , 25050 , 33400 , 41750 , 50100 , 62625 , 83500 , 100200 , 125250 , 167000 , 250500 and 501000 .\n\n## How many factors of 501000 are there ?\n\nThe factors of 501000 are the numbers that can evenly divide 501000 . These numbers are 1 , 2 , 3 , 4 , 5 , 6 , 8 , 10 , 12 , 15 , 20 , 24 , 25 , 30 , 40 , 50 , 60 , 75 , 100 , 120 , 125 , 150 , 167 , 200 , 250 , 300 , 334 , 375 , 500 , 501 , 600 , 668 , 750 , 835 , 1000 , 1002 , 1336 , 1500 , 1670 , 2004 , 2505 , 3000 , 3340 , 4008 , 4175 , 5010 , 6680 , 8350 , 10020 , 12525 , 16700 , 20040 , 20875 , 25050 , 33400 , 41750 , 50100 , 62625 , 83500 , 100200 , 125250 , 167000 , 250500 and 501000.\n\nThus, there are a total of 64 factors of 501000\n\n## What are the factor pairs of 501000 ?\n\nFactor Pairs of 501000 are combinations of two factors that when multiplied together equal 501000. There are many ways to calculate the factor pairs of 501000 .\n\nOne easy way is to list out the factors of 501000 :\n1 , 2 , 3 , 4 , 5 , 6 , 8 , 10 , 12 , 15 , 20 , 24 , 25 , 30 , 40 , 50 , 60 , 75 , 100 , 120 , 125 , 150 , 167 , 200 , 250 , 300 , 334 , 375 , 500 , 501 , 600 , 668 , 750 , 835 , 1000 , 1002 , 1336 , 1500 , 1670 , 2004 , 2505 , 3000 , 3340 , 4008 , 4175 , 5010 , 6680 , 8350 , 10020 , 12525 , 16700 , 20040 , 20875 , 25050 , 33400 , 41750 , 50100 , 62625 , 83500 , 100200 , 125250 , 167000 , 250500 , 501000\n\nThen, pair up the factors:\n(1,501000),(2,250500),(3,167000),(4,125250),(5,100200),(6,83500),(8,62625),(10,50100),(12,41750),(15,33400),(20,25050),(24,20875),(25,20040),(30,16700),(40,12525),(50,10020),(60,8350),(75,6680),(100,5010),(120,4175),(125,4008),(150,3340),(167,3000),(200,2505),(250,2004),(300,1670),(334,1500),(375,1336),(500,1002),(501,1000),(600,835) and (668,750) These are the factor pairs of 501000 .\n\n## Prime Factorisation of 501000\n\nThere are a few different methods that can be used to calculate the prime factorization of a number. Two of the most common methods are listed below.\n\n1) Use a factor tree :\n\n1. Take the number you want to find the prime factorization of and write it at the top of the page\n\n2. Find the smallest number that goes into the number you are finding the prime factorization of evenly and write it next to the number you are finding the prime factorization of\n\n3. Draw a line under the number you just wrote and the number you are finding the prime factorization of\n\n4. Repeat step 2 with the number you just wrote until that number can no longer be divided evenly\n\n5. The numbers written on the lines will be the prime factors of the number you started with\n\nFor example, to calculate the prime factorization of 501000 using a factor tree, we would start by writing 501000 on a piece of paper. Then, we would draw a line under it and begin finding factors.\n\nThe final prime factorization of 501000 would be 2 x 2 x 2 x 3 x 5 x 5 x 5 x 167.\n\n2) Use a factorization method :\n\nThere are a few different factorization methods that can be used to calculate the prime factorization of a number.\n\nOne common method is to start by dividing the number by the smallest prime number that will divide evenly into it.\n\nThen, continue dividing the number by successively larger prime numbers until the number has been fully factorised.\n\nFor example, to calculate the prime factorization of 501000 using this method, we keep dividing until it gives a non-zero remainder.\n\n501000 ÷ 2 = 250500\n250500 ÷ 2 = 125250\n125250 ÷ 2 = 62625\n62625 ÷ 3 = 20875\n20875 ÷ 5 = 4175\n4175 ÷ 5 = 835\n835 ÷ 5 = 167\n167 ÷ 167 = 1\n\nSo the prime factors of 501000 are 2 x 2 x 2 x 3 x 5 x 5 x 5 x 167.\n\n## Frequently Asked Questions on Factors\n\n### What are all the factors of 501000 ?\n\nThe factors of 501000 are 1 , 2 , 3 , 4 , 5 , 6 , 8 , 10 , 12 , 15 , 20 , 24 , 25 , 30 , 40 , 50 , 60 , 75 , 100 , 120 , 125 , 150 , 167 , 200 , 250 , 300 , 334 , 375 , 500 , 501 , 600 , 668 , 750 , 835 , 1000 , 1002 , 1336 , 1500 , 1670 , 2004 , 2505 , 3000 , 3340 , 4008 , 4175 , 5010 , 6680 , 8350 , 10020 , 12525 , 16700 , 20040 , 20875 , 25050 , 33400 , 41750 , 50100 , 62625 , 83500 , 100200 , 125250 , 167000 , 250500 and 501000.\n\n### What is the prime factorization of 501000 ?\n\nThe prime factorization of 501000 is 2 x 2 x 2 x 3 x 5 x 5 x 5 x 167 or 23 x 31 x 53 x 1671, where 2 , 3 , 5 , 167 are the prime numbers .\n\n### What are the prime factors of 501000 ?\n\nThe prime factors of 501000 are 2 , 3 , 5 , 167 .\n\n### Is 501000 a prime number ?\n\nA prime number is a number that has only two factors 1 and itself.\n501000 it is not a prime number because it has the factors 1 , 2 , 3 , 4 , 5 , 6 , 8 , 10 , 12 , 15 , 20 , 24 , 25 , 30 , 40 , 50 , 60 , 75 , 100 , 120 , 125 , 150 , 167 , 200 , 250 , 300 , 334 , 375 , 500 , 501 , 600 , 668 , 750 , 835 , 1000 , 1002 , 1336 , 1500 , 1670 , 2004 , 2505 , 3000 , 3340 , 4008 , 4175 , 5010 , 6680 , 8350 , 10020 , 12525 , 16700 , 20040 , 20875 , 25050 , 33400 , 41750 , 50100 , 62625 , 83500 , 100200 , 125250 , 167000 , 250500 and 501000." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.560076,"math_prob":0.9995115,"size":7518,"snap":"2022-40-2023-06","text_gpt3_token_len":2989,"char_repetition_ratio":0.24487624,"word_repetition_ratio":0.4334294,"special_character_ratio":0.651237,"punctuation_ratio":0.2549463,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9993871,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-06T10:02:37Z\",\"WARC-Record-ID\":\"<urn:uuid:e9ee6b52-5ad1-47a7-9746-c2e36bd07398>\",\"Content-Length\":\"137332\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:641526ec-71a4-4595-9149-c75869914265>\",\"WARC-Concurrent-To\":\"<urn:uuid:6683b4e5-7d82-493d-9d1b-3f605c85a775>\",\"WARC-IP-Address\":\"104.21.47.67\",\"WARC-Target-URI\":\"https://www.numbertowordsconverter.com/factors-of-501000/\",\"WARC-Payload-Digest\":\"sha1:7X3UD2EQXIV3Q3YF2PNEVUEG6KEFGPIZ\",\"WARC-Block-Digest\":\"sha1:CI4SCGYDZYLUTEDPHAS2TIWPVK7JO7R4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500334.35_warc_CC-MAIN-20230206082428-20230206112428-00418.warc.gz\"}"}
https://www.usgs.gov/apps/TrendPowerTool/
[ "TrendPowerTool v1.0: Estimate statistical power to detect a population trend\n\nOverview and instructions\n\nA power analysis can be used to estimate the sample sizes needed for successful study or monitoring program. Statistical power is the probability of detecting a population trend of a particular magnitude. Simple calculations are often insufficient to estimate statistical power, so a simulation-based power analysis must be used. However, running a simulation-based power analysis requires technical expertise and can require extensive computing resources.\n\nTrendPowerTool is a web-based lookup app to provide guidance for ecological monitoring programs when resources are not available for a simulation-based power analysis. By drawing on results of 1.4 million scenarios that we simulated on a supercomputer, TrendPowerTool quickly and easily estimates the statistical power of a monitoring program to detect a population trend of a particular magnitude. TrendPowerTool provides a wide range of options for the input parameters, including the monitoring design and the spatial and temporal variation of the resource to be monitored.\n\nResults from TrendPowerTool are based on several assumptions that were necessary to develop a generalized power analysis. If your study system does not meet these assumptions, results from TrendPowerTool might not be accurate, and it would be better to conduct a full simulation-based power analysis with the appropriate model structure.\n\nIf you would like to explore a wide variety of scenarios that cannot easily be displayed on the \"Run Tool\" page, you might prefer to download the entire database of results from the simulations that inform TrendPowerTool. See the \"Input values\" tab for definitions of the columns in this file:\n\nReference for more information about the power analysis used to produce these results: Weiser, E.L., J.E. Diffendorfer, L. Lopez-Hoffman, D. Semmens, and W.E. Thogmartin. 2021. TrendPowerTool: A lookup tool for estimating the statistical power of a monitoring program to detect population trends. Conservation Science & Practice e445. https://doi.org/10.1111/csp2.445\nContact: Emily Weiser ([email protected]). Please report any errors in red text that show up where the plots should be; errors in bold orange text are expected and indicate a known incompatibility with the input values you've selected.\n\nDisclaimer:\nThis software is in the public domain because it contains materials that originally came from the U.S. Geological Survey, an agency of the United States Department of Interior. For more information, see the official USGS copyright policy (https://www2.usgs.gov/visual-id/credit_usgs.html#copyright).\nAlthough this software program has been used by the U.S. Geological Survey (USGS), no warranty, expressed or implied, is made by the USGS or the U.S. Government as to the accuracy and functioning of the program and related program material nor shall the fact of distribution constitute any such warranty, and no responsibility is assumed by the USGS in connection therewith.\nThis software is provided \"AS IS.\"\n\nThis research used resources provided by the Core Science Analytics, Synthesis, & Libraries (CSASL) Advanced Research Computing (ARC) group at the U.S. Geological Survey.\n\nLast updated 19 January 2021", null, "Figure 1. Estimates of statistical power obtained from simulations using exact values (Table 1) versus estimates from TrendPowerTool for four example datasets. Estimates were made for three population trends, shown as the percent change per year.\n\nTable 1. Summary statistics for four example datasets. Values in parentheses are the nearest available in TrendPowerTool.", null, "How to calculate input values from pilot data\n\nPopulation parameters\n1. Transform the response variable, e.g. count per unit effort, to approximately follow a normal distribution. For example, if your data are counts that follow a Poisson distribution, a log transformation might be appropriate. If your data cannot be transformed to follow a normal distribution (e.g. zero-inflated), this tool might not be appropriate.\n\n2. Center the response variable to a mean of 1, by dividing all (transformed) observations by the global mean. Use the transformed, centered responses for all following calculations.\n\n3. Box 1: Find the mean for each site across all years and visits to that site. Calculate the SD among sites in their mean values and choose the nearest value (SD_mu).\n\n4. Box 2: Calculate the site-specific population trends as the proportional change per year (e.g. using linear regression with an effect of year; Trend). Calculate the SD among sites in the trend and choose the nearest value (SD_trend).\n\n5. Boxes 3 and 4: For each site, calculate CV across years from a detrended time series to represent annual variation (Gibbs JP. 2000. Monitoring Populations. Pages 213–252 in L. Boitani and T. K. Fuller, editors. Research Technique in Animal Ecology - Controversies and Consequences. Columbia University Press, New York, New York, USA):\nFirst, for each site: 1) calculate the mean observation for each year with pilot data (if only one survey was conducted, this is simply the observation from that survey). This will be X[s,y] for each site s and year y. 2) Fit a linear regression for each site with X[s,] as the response and the year as the explanatory variable. 3) Get the annual coefficient of variation (CV) for this site by dividing the SD of the residuals from the model by the mean observation across all years, and take the absolute value: siteCV[s] = |SD(residuals)/mean(X[s,])|. Finally, average siteCV across sites and choose the nearest value in Box 3 (CV_yr). Also calculate the SD of siteCV across sites and choose the nearest value in Box 4 (SD_CV_yr).\n\n6. Box 5: Calculate observation error as a proportion of the mean (Obs_err). The appropriate calculation will vary depending on your sources of observation error. If you have several replicate surveys within a survey period, you could calculate the SD among surveys to estimate observer error. If you don't have replicate surveys or some other measure of error, you'll want to choose your best guess or a moderate value (not zero).\n\nMonitoring design\n7. Box 6: Choose the number of years that you plan to survey during the monitoring program. Alternatively, this can be the number of intervals between surveys. For example, if you plan to survey every 2 years over 20 years, you will have 10 intervals (year 2, 4, 6, 8... 20). Thus the value of this box would be set to 10. If you use an interval other than one year, be sure to adjust the trend as well: to monitor a 10% change over 20 years with annual surveys, the per-interval trend is 0.01 (to be selected in Box 8 below); but if you survey every 2 years, the per-interval trend is 0.02. If you're interested in annual change, the number of visits per year would not affect the survey interval. However, if you are monitoring a change within a single year (say flowering phenology), your interval could be days, weeks, or months. The number of intervals is an important component of sample size when detecting a trend over time.\n\n8. Box 7: Choose the number of sites that you plan to survey for the monitoring program. TrendPowerTool assumes that the same sites are monitored every year.\n\n9. Box 8: Choose the population trend that you want to detect, expressed as a proportional change per year (e.g. 0.05 is a 5% change per year), or per the interval of interest (e.g. per 2 years if surveys will be conducted every second year). If you want to detect a 50% total change over 10 years, that would be equivalent to a 5% change per year. Given the data standardization steps above, the power to detect a negative trend will be the same as the power to detect a positive trend. A smaller trend (closer to 0) will be more difficult to detect.\n\n10. Box 9: Choose the desired power, i.e. the probability of detecting a statistically significant trend in the correct direction (positive or negative). The default, 0.80, is often used but may not always be appropriate.\n\nIf you have insufficient data to calculate any of the above, test a range of values, or select \"Any\" to view the results for all values that are available in TrendPowerTool. (Select \"Any\" for only one parameter at a time.)", null, "The development of TrendPowerTool was motivated by work to design a national monitoring program for monarchs and milkweed. However, TrendPowerTool is not appropriate to apply to these monitoring targets because our observations of both monarchs and milkweed follow a zero-inflated distribution, which cannot be transformed to approximate normality. Instead, we built another power analysis that appropriately represented the distributions of observations." ]
[ null, "https://www.usgs.gov/apps/TrendPowerTool/Figure2.jpg", null, "https://www.usgs.gov/apps/TrendPowerTool/Table1.jpg", null, "https://www.usgs.gov/apps/TrendPowerTool/pic.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8643653,"math_prob":0.8957683,"size":4616,"snap":"2022-05-2022-21","text_gpt3_token_len":1063,"char_repetition_ratio":0.12597571,"word_repetition_ratio":0.030495552,"special_character_ratio":0.23375216,"punctuation_ratio":0.13983051,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95451325,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-24T19:16:25Z\",\"WARC-Record-ID\":\"<urn:uuid:8bf3a250-4bcc-4342-a48f-8e9bda2443a2>\",\"Content-Length\":\"30932\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:79186096-ddf0-4f13-86b2-84b8c314b030>\",\"WARC-Concurrent-To\":\"<urn:uuid:ae76cbc7-00c2-4350-b5f7-04809ceb042a>\",\"WARC-IP-Address\":\"216.137.37.20\",\"WARC-Target-URI\":\"https://www.usgs.gov/apps/TrendPowerTool/\",\"WARC-Payload-Digest\":\"sha1:P2BRKC2W7ASEFCBUJQSAUKGQ22SIHPT5\",\"WARC-Block-Digest\":\"sha1:5DGCIKKED5NQPE6RECMUWANSMNTAH6U4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320304600.9_warc_CC-MAIN-20220124185733-20220124215733-00288.warc.gz\"}"}
https://web2.0calc.com/questions/help-pls_64336
[ "+0\n\n# help pls\n\n0\n134\n5\n\nLet S be a cube. Compute the number of planes which pass through at least 3 vertices of S\n\nJul 2, 2021\n\n#1\n0\n\nSince any three points determine a plane, the answer is $\\binom{8}{3} = \\boxed{56}$.\n\nJul 2, 2021\n#4\n0\n\nWe can break up the planes based on how many vertices of the bottom face they pass through.\n\n$1$ plane passes through exactly $4$ vertices, and $0$ pass through exactly $3$ vertices.\n\nThe planes passing through $2$ vertices of the bottom face either pass through an edge or a diagonal of the bottom face. Each edge on the bottom face has $2$ valid planes passing through it and each diagonal has $3$ valid planes, for a total of $4 \\cdot 2 + 2 \\cdot 3 = 14$.\n\nThe valid planes passing through exactly $1$ vertex of the bottom face must intersect the top face on a specific diagonal, so there are $4$ such planes.\n\nThere is $1$ plane passing through $0$ vertices, so this makes a total of $1 + 14 + 4 + 1 =$ 20 total planes.\n\nJul 2, 2021" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8858992,"math_prob":0.99865776,"size":894,"snap":"2021-43-2021-49","text_gpt3_token_len":232,"char_repetition_ratio":0.18426967,"word_repetition_ratio":0.011976048,"special_character_ratio":0.28747204,"punctuation_ratio":0.072222225,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99961406,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-27T15:44:10Z\",\"WARC-Record-ID\":\"<urn:uuid:55d51135-b184-4d94-ac40-d4cad887bd34>\",\"Content-Length\":\"24149\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8dbb5f7b-2924-4ef4-8f85-b0e6d8c96ce1>\",\"WARC-Concurrent-To\":\"<urn:uuid:2b259d88-3ea3-495f-9d2d-f49a9c8828a3>\",\"WARC-IP-Address\":\"168.119.68.208\",\"WARC-Target-URI\":\"https://web2.0calc.com/questions/help-pls_64336\",\"WARC-Payload-Digest\":\"sha1:7HBEDPBH7LV2PMCL64ABXQ77XYMEUGIA\",\"WARC-Block-Digest\":\"sha1:OGHH6DIEBFO3N2QPYV3DNW2M7256JWUK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588216.48_warc_CC-MAIN-20211027150823-20211027180823-00237.warc.gz\"}"}
https://en.wikiversity.org/wiki/Electric_Circuit_Analysis/Nodal_Analysis
[ "# Electric Circuit Analysis/Nodal Analysis\n\n## Lesson Review 5 & 6:\n\nWhat you need to remember from Kirchhoff's Voltage & Current Law . If you ever feel lost, do not be shy to go back to the previous lesson & go through it again. You can learn by repitition.\n\n• Remember what was learned in Passive sign convention, You can go back and revise Lesson 1.\n• Define Kirchhoff's Voltage Law ( word-by-word ).\n• Kirchhoff's Voltage Law: $\\sum _{n}v_{n}=v_{4}+v_{1}+v_{2}+v_{3}=0$", null, "• Define Kirchhoff's Current Law ( word-by-word ).\n• Kirchhoff's Current Law: $\\sum _{n}i_{n}=i_{1}+i_{2}+i_{3}+i_{4}=0$", null, "This part of the course onwards will collaborate with the Mathematics Department extensively. Mathematical Theory will be kept minimal as mathematical tools are only used here as a means to an end. Links to relevant Mathematical theories will be supplied to assist the student.\n\n## Lesson 7: Preview\n\nThis Lesson is about Kirchhoff's Current Law. The student/User is expected to understand the following at the end of the lesson.\n\n• Use KCL at super nodes to formulate circuit equations.\n• Create matrix from circuit equations.\n• Solve for Unknown Node Voltages using Cramer's Rule.\n\n## Part 1: Pre-reading Material\n\nThe student is advised to read the following resources from the Mathematics department:\n\nThe following external link has an excellent summary on using Cramer's rule to solve linear equations:\n\nAfter you have satisfied yourself with the above resources, you can go to Part 2.\n\n## Part 2: Nodal analysis\n\nLet's start off with some useful definitions:\n\n• Node:\nA point in a circuit where terminals of atleast two electric components meet. This point can be on any wire, it is infinitely small and dimensionless.\n• Major Node:\nThis point is a node. A set of these nodes is used to create constraint equations.\n• Reference Node:\nThe node to which Voltages of other nodes is read with regard to. This can be seen as ground ( V = 0).\n• Branch:\nThis is a circuit element(s) that connect two nodes.\n\nBasic rule: The sum of the currents entering any point (Node) must equal the sum of the currents leaving.( From KCL in Lecture 6).\n\n## Part 3\n\nThe following is a general procedure for using Nodal Analysis method to solve electric circuit problems. The aim of this algorithm is to develop a matrix system from equations found by applying KCL at the major nodes in an electric circuit. Cramer's rule is then used to solve the unkown major node voltages.\n\nOnce the Node voltages are solved, normal circuit analysis methods ( Ohm's law; Voltage and Current Divider principles etc... ) can then be used to find whatever circuit entity is required.\n\nRemember to consult previous lessons if you are not confident using the circuit analysis techniques that will be employed in this lesson.\n\nManual Nodal Analysis Algorithm:\n\n1.) Choose a reference node. ( Rule of thumb: take the node with most branches connecting to it )\n\n2.) Identify and number major nodes. ( Usually 2 or 3 major Nodes )\n\n3.) Apply KCL to identified major nodes and formulate circuit equations.\n\n4.) Create matrix system from KCL equations obtained.\n\n5.) Solve matrix for unknown node voltages by using Cramer's Rule (Cramer's Rule is simpler, although you can still use the Gaussian method)\n\n6.) Use solved node voltages to solve for the desired circuit entity.\n\nThe above algorithm is very basic and useful for 2 x 2 and 3 x 3 size matrices. Generally as the number of major node voltages increase and the order of the matrix exceeds 3, numerical methods (beyond the scope of this course) are employed, sometimes with the aid of computers, to solve such circuit networks.\n\nLet's try an example to illustrate the above nodal analysis algorithm.\n\n## Part 4 : Example\n\nConsider Figure 7.1 with the following Parameters:\n\n$V_{1}=15V$", null, "$V_{2}=7V$", null, "$R_{1}=2\\Omega$", null, "$R_{2}=20\\Omega$", null, "$R_{3}=10\\Omega$", null, "$R_{4}=5\\Omega$", null, "$R_{5}=2\\Omega$", null, "$R_{6}=2\\Omega$", null, "Find current through $R_{3}$", null, "using Nodal Analysis method.\n\nSolution:", null, "Figure 7.2: Voltages at nodes\n\nThis is the same example we solved in Exercise 6, except that in this case we have added some resistors to increase the complexity of the circuit.\n\nFigure 7.2 shows voltages at nodes a, b, c and d.\n\nWe use node a as the common node (ground if you like). Thus $V_{a}=0V$", null, "as we did previously.\n\nCarefully follow the progression of the nodal analysis algorithm explained in Part 3 of this lesson as follows in Part 5 and 6.\n\n## Part 5 : Example (Continued)\n\nNow that we have labelled the currents flowing in this circuit using the Passive Sign Convention, and have identified nodes b; c and d as major nodes, we proceed as follows:\n\nKCL at node b:\n\n$i_{1}=i_{2}+i_{6}$", null, "Thus, by applying Ohm's Law to the above equation we get:\n\n${\\begin{matrix}\\ {\\frac {V_{1}-V_{b}}{R_{1}}}={\\frac {V_{b}-V_{c}}{R_{2}}}+{\\frac {V_{b}-V_{d}}{R_{6}}}\\end{matrix}}$", null, "Therefore\n\n${\\begin{matrix}\\ \\ V_{b}({\\frac {1}{R_{1}}}+{\\frac {1}{R_{2}}}+{\\frac {1}{R_{6}}})-V_{c}({\\frac {1}{R_{2}}})-V_{d}({\\frac {1}{R_{6}}})={\\frac {V_{1}}{R_{1}}}\\end{matrix}}$", null, "............... (1)\n\nKCL at node c:\n\n$i_{3}=i_{2}+i_{4}$", null, "Then, by applying Ohm's Law to the above equation we get:\n\n## Part 6 : Example (Continued)\n\n${\\begin{matrix}\\ {\\frac {V_{c}}{R_{3}}}={\\frac {V_{b}-V_{c}}{R_{2}}}+{\\frac {V_{d}-V_{c}}{R_{4}}}\\end{matrix}}$", null, "Therefore\n\n${\\begin{matrix}\\ \\ V_{b}(-{\\frac {1}{R_{2}}})+V_{c}({\\frac {1}{R_{2}}}+{\\frac {1}{R_{3}}}+{\\frac {1}{R_{4}}})+V_{d}(-{\\frac {1}{R_{4}}})=0\\end{matrix}}$", null, "............... (2)\n\nKCL at node d:\n\n$i_{4}=i_{5}+i_{6}$", null, "Applying Ohm's Law...\n\n${\\begin{matrix}\\ {\\frac {V_{d}-V_{c}}{R_{4}}}={\\frac {V_{2}-V_{d}}{R_{5}}}+{\\frac {V_{b}-V_{d}}{R_{6}}}\\end{matrix}}$", null, "Therefore\n\n${\\begin{matrix}\\ V_{b}(-{\\frac {1}{R_{6}}})-V_{c}({\\frac {1}{R_{4}}})+V_{d}({\\frac {1}{R_{4}}}+{\\frac {1}{R_{5}}}+{\\frac {1}{R_{6}}})={\\frac {V_{2}}{R_{5}}}\\end{matrix}}$", null, "............... (3)\n\n## Part 7:\n\nThe next step in this algorithm is to construct a matrix. To do that easily, we substitute all resistances in the above equations 1, 2 and 3 with their equivalent admittances - as follows:\n\n$G_{1}={\\frac {1}{R_{1}}}$", null, "etc thus equations 1, 2 and 3 will be re-written as follows:\n\n${\\begin{matrix}\\ V_{b}(G_{1}+G_{2}+G_{6})+V_{c}(-G_{2})+V_{d}(-G_{6})&=&(V_{1}\\times G_{1})....(1)\\\\\\ \\\\\\ V_{b}(-G_{2})+V_{c}(G_{2}+G_{3}+G_{4})+V_{d}(-G_{4})&=&0....(2)\\\\\\ \\\\\\ V_{b}(-G_{6})+V_{c}(-G_{4})+V_{d}(G_{4}+G_{5}+G_{6})&=&(V_{2}\\times G_{5})....(3)\\end{matrix}}$", null, "Now we can create a matrix with the above equations as follows:\n\n${\\begin{bmatrix}(G_{1}+G_{2}+G_{6})&-(G_{2})&-(G_{6})\\\\(-G_{2})&(G_{2}+G_{3}+G_{4})&(-G_{4})\\\\(-G_{6})&(-G_{4})&(G_{4}+G_{5}+G_{6})\\end{bmatrix}}.{\\begin{bmatrix}V_{b}\\\\V_{c}\\\\V_{d}\\end{bmatrix}}={\\begin{bmatrix}(V_{1}\\times G_{1})\\\\0\\\\(V_{2}\\times G_{5})\\end{bmatrix}}$", null, "The following matrix is the above with values substituted:\n\n$A.{\\vec {X}}={\\vec {Y}}$", null, "${\\begin{bmatrix}1.05&-0.05&-0.5\\\\-0.05&0.35&-0.2\\\\-0.5&-0.2&1.2\\end{bmatrix}}.{\\begin{bmatrix}V_{b}\\\\V_{c}\\\\V_{d}\\end{bmatrix}}={\\begin{bmatrix}7.5\\\\0\\\\3.5\\end{bmatrix}}$", null, "Now that we have arranged equations 1, 2 and 3 into a matrix we need to get determinants of the general matrix, and determinants of alterations of the general matrix as follows:\n\n## Part 8:\n\nRemember to read *Solutions/Cramer Also, use the provided link for details on working out the determinant of a 3 x 3 Matrix.\n\nSolving determinants of:\n\n• Matrix A : General matrix A from KCL equations\n• Matrix A1 : Genral Matrix A with Column 1 substituted by ${\\vec {Y}}$", null, ".\n• Matrix A2 : Genral Matrix A with Column 2 substituted by ${\\vec {Y}}$", null, ".\n• Matrix A3 : Genral Matrix A with Column 3 substituted by ${\\vec {Y}}$", null, ".\n\nAs follows:\n\n${\\begin{matrix}\\ det{\\begin{bmatrix}1.05&-0.05&-0.5\\\\-0.05&0.35&-0.2\\\\-0.5&-0.2&1.2\\end{bmatrix}}&=&detA\\\\\\ \\\\\\ &=&0.299\\end{matrix}}$", null, "${\\begin{matrix}\\ det{\\begin{bmatrix}7.5&-0.05&-0.5\\\\0&0.35&-0.2\\\\3.5&-0.2&1.2\\end{bmatrix}}&=&detA1\\\\\\ \\\\\\ &=&3.498\\end{matrix}}$", null, "${\\begin{matrix}\\ det{\\begin{bmatrix}1.05&7.5&-0.5\\\\-0.05&0&-.02\\\\-0.5&3.5&1.2\\end{bmatrix}}&=&detA2\\\\\\ \\\\\\ &=&2.023\\end{matrix}}$", null, "## Part 9:\n\n${\\begin{matrix}\\ det{\\begin{bmatrix}1.05&-0.05&7.5\\\\-0.05&0.35&0\\\\-0.5&-0.2&3.5\\end{bmatrix}}&=&detA3\\\\\\ \\\\\\ &=&2.665\\end{matrix}}$", null, "Now we can use the solved determinants to arrive at solutions for Node voltages $V_{b};V_{c}andV_{d}$", null, "as follows:\n\n1. $V_{b}={\\frac {detA1}{detA}}={\\frac {3.498}{0.299}}=11.717V$", null, "2. $V_{c}={\\frac {detA2}{detA}}={\\frac {2.023}{0.299}}=6.776V$", null, "3. $V_{d}={\\frac {detA3}{detA}}={\\frac {2.665}{0.299}}=8.928V$", null, "Now we can apply Ohm's law to solve for the current through $R_{3}$", null, "as follows:\n\n$I_{R_{3}}={\\frac {V_{c}}{R_{3}}}={\\frac {6.776V}{10\\Omega }}=+0.678A$", null, "As we previously saw, the positive sign in the above current tells us that the effective current flowing through $R_{3}$", null, "is in fact in the direction we chose when drawing up the circuit in figure 7.2.\n\nTo appreciate the algorithm we have just used, try solving the above problem using either KVL or KCL as we did in lessons 5 & 6 and see just how cumbersome the process would be.\n\nAs usual the following part is an exercise to test your self on the content discussed in this lesson. Please refer to Part 11 for further reading material and interesting related external links.\n\n## Part 10:Exercise 7\n\nConsider Figure 7.3 with the following parameters:\n\n$V_{s}=9V$", null, "$R_{1}=200\\Omega$", null, "$R_{2}=20\\Omega$", null, "$R_{3}=10k\\Omega$", null, "$R_{4}=5k\\Omega$", null, "$R_{5}=15k\\Omega$", null, "$R_{6}=1k\\Omega$", null, "Find the current through $R_{3}$", null, "using the Nodal Analysis method.\n\n## Part 11:\n\nOnce you finish your Exercises you can post your score here! To post your score just e-mail your course co-ordinator your name and score *Click Here", null, ".", null, "Action required: please create Category:Electric Circuit Analysis/Lectures and add it to Category:Lectures." ]
[ null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/8a3a1b742bd2a29267f68d54bcc036dfcb90c6fd", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/8305fab57dc5c8f4fb5a147d66a048e6d6940931", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/3fefd0e17a7a447c250096564220dc055b8b0d9a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/5c1fefe94eff67adab5fe77955f4362e8df38f3e", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/813541d1b0cfe743932857c9b28d879e501caf76", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/445a31969b787de781e4862c2d1d69d9176fb992", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/5ee7ce38716f220cad369c873615599d156d9da2", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/cf2c9199649ca555c5d9f1fa0e880bfe55c92e55", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/9837e6070ea2ad68717ca4d858e382edbd0db65c", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/912cf1e1f366e973f01a83e4908041c884b3f767", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a3b0bb30b2846df2cd6cbedc7a796388e339d0fc", null, "https://upload.wikimedia.org/wikiversity/en/thumb/4/4a/EE-102-L07-Fig2.jpg/250px-EE-102-L07-Fig2.jpg", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7ce60849559d3b37de686915b70a6e41fcb643f0", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7565a6e87a091a57ef537d99fbc7c83b97c80609", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/d26bf0e975d67f96a854d36cf9d0e832dbc38be0", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7ef0086427279827acba7d4441b6b1b4642c66ee", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/0293225b2f6671e44b8d71fe597fd8e1038a92ff", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/86406fe1f384e802de03f491f1354b8ba54c1408", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/fe71658b36a90e82e87c6be0388e2c7148eb98a3", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a2ee29d0ba1c99f0dfdceedd8777e48c70b8d8fc", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/1db1e7b988b579105a2bf73805a7be2669b06bba", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f1c53934f13a87d91d9b926f052a94861d59e256", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/36b75eb406856937ae6a0f51d6200b9043d2e290", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/4a35f17b44809bb83f3f029ea0a5d6200cf0c70a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/5f0a93fb88f01891c0000619881934efef470888", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f730755eccf362f8896dd18c0465812a1e8a95e1", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f257f2af513e793da2fc72efb0334a4d5aaa7992", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/e22a84638d168927de2db7306114b9b0e9e8da89", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/e22a84638d168927de2db7306114b9b0e9e8da89", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/e22a84638d168927de2db7306114b9b0e9e8da89", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7d990fea952715c0d3bd55aacadc819a37ac690c", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/cfdb23c055a7cf3d79b3b861cf4b737dda86f772", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/55fdecd76b26804f3464368c7a7960b396f93101", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c9ad5734d1a84d6f91a7dacf0627a82845a0b69a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/42135fbf1588b9d4a6a1673d0230b0901aaa5ffa", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/85ce27c794f8e731c5fe6e4e175615283403b3bc", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/d311724b4f7508303212ab36df514c35c3209dd8", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/5658b64e9267add0439269a22962cf10b164cff5", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a3b0bb30b2846df2cd6cbedc7a796388e339d0fc", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/16b8117b55520f07d3d874d0ac063629805b2673", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a3b0bb30b2846df2cd6cbedc7a796388e339d0fc", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/3b2ef188e712abfc51f714ac1377f71bee198227", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a3589583f59e49ee931e8919b71cc94a5fd43b6a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/445a31969b787de781e4862c2d1d69d9176fb992", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/b3dc209f1c1900f513ee6d98d0483de80d413bc3", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/2f7ff186aa47ddc53574ddfa2bfbd6ceadba3444", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/ae75799f692dbf7c7f1981c0ee17d1150bb9f465", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/b9c7a4475508644984d9ef4d0546181164e01241", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a3b0bb30b2846df2cd6cbedc7a796388e339d0fc", null, "https://upload.wikimedia.org/wikipedia/commons/thumb/f/f1/Crystal_Clear_app_xfmail.png/32px-Crystal_Clear_app_xfmail.png", null, "https://upload.wikimedia.org/wikipedia/commons/thumb/2/24/Warning_icon.svg/36px-Warning_icon.svg.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8909923,"math_prob":0.99946374,"size":4688,"snap":"2019-13-2019-22","text_gpt3_token_len":1046,"char_repetition_ratio":0.11720751,"word_repetition_ratio":0.01754386,"special_character_ratio":0.22824232,"punctuation_ratio":0.15755627,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999471,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,6,null,null,null,8,null,8,null,8,null,10,null,8,null,8,null,8,null,8,null,8,null,10,null,8,null,8,null,null,null,8,null,null,null,null,null,null,null,8,null,8,null,8,null,8,null,10,null,8,null,8,null,8,null,null,null,8,null,null,null,8,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-19T04:31:03Z\",\"WARC-Record-ID\":\"<urn:uuid:6ce1bd37-d5d5-4fd5-8094-e63ad37f2729>\",\"Content-Length\":\"172108\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:762bb1f9-0067-4bb2-bcfc-6b2b8c682694>\",\"WARC-Concurrent-To\":\"<urn:uuid:0e9b8afc-1840-4367-9ab9-a10540956003>\",\"WARC-IP-Address\":\"208.80.154.224\",\"WARC-Target-URI\":\"https://en.wikiversity.org/wiki/Electric_Circuit_Analysis/Nodal_Analysis\",\"WARC-Payload-Digest\":\"sha1:N5XUITIXEUGUGHXPZKWA5M7S25NRL6UT\",\"WARC-Block-Digest\":\"sha1:CDNOQZXN2WDUM62TDN5QOEHITTOXGAGV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912201885.28_warc_CC-MAIN-20190319032352-20190319054352-00519.warc.gz\"}"}
https://www.conceptispuzzles.com/ru/index.aspx?uri=puzzle/hitori/techniques
[ " Hitori решение\n ВходРегистрация", null, "# Hitori решение\n\nHitori is a number-elimination puzzle based on a square grid filled with numbers. The object is to shade squares so that no number appears in a row or column more than once. In addition, shaded (black) squares do not touch each other vertically or horizontally and all un-shaded (white) squares create a single continuous area when the puzzle is completed. In this example we have a 5x5 Hitori puzzle with five columns marked a through e, and five rows marked 1 through 5. It is clear that some rows and columns have the same numbers more than once and some numbers must be shaded… but how?", null, "", null, "## Starting techniques\n\nThe easiest way to begin a Hitori puzzle is to scan rows and columns using the starting techniques. After finding and solving these first clues, proceed to the next techniques to continue solving the puzzle. Here are some ways of using the starting techniques:\n\n#### 1. Searching for adjacent triplets:\n\nThe middle square of an adjacent triplet must always be un-shaded. Let’s look at column d which has a triplet of adjacent 5s. The first rule of Hitori says the same number cannot appear more than once in a row or a column so we need to shade some of the 5s. If we shade all three we violate the second rule which says shaded squares must not touch each other vertically or horizontally. This violation will also occur if we shade just the two top 5s or the two bottom 5s. So the only possibility left is to shade the top and bottom 5s, and mark a circle around the middle 5 to indicate it must remain un-shaded.", null, "", null, "#### 2. Square between a pair:\n\nA square between a pair of same-numbers must be un-shaded. In this example, the 4 in row 4 is placed between a pair of 2s. The second rule of Hitori says shaded squares must not touch each other vertically or horizontally implying that a shaded square is always surrounded by four un-shaded squares (or less, depending on whether it is located next to the edge or in a corner). If we try shading the 4 then the adjacent 2s must be un-shaded which is not permitted according to the first rule. Therefore the 4 in square b4 is marked with a circle to indicate it is un-shaded.", null, "", null, "#### 3. Pair induction:\n\nAnother interesting starting technique is pair induction, where a row or column contains three same-numbers but only two of them are an adjacent pair. In row 4 of this example there are three 5s, two of which are an adjacent pair and one which is single. If the single 5 is un-shaded, then according to the first rule of Hitori the adjacent pair of 5s must be shaded. But this would violate the second rule which says shaded squares must not touch each other vertically or horizontally. Therefore the 5 in square b4 must be shaded.", null, "", null, "## Basic techniques\n\nAfter solving the first clues using starting techniques there is usually a straightforward phase of shading and un-shading squares according to the three rules of Hitori. However, some hard puzzles may require the use of corner techniques or even advanced techniques right away. Here are some ways of using the basic techniques:\n\n#### 1. Shading squares in rows and columns:\n\nThe first rule of Hitori says same number cannot appear more than once in a row or a column. In this example square b5 contains an un-shaded 2 from the previous solution steps of the puzzle. This means the 2 in square b2 must be shaded to avoid conflict with this rule.", null, "", null, "The second rule of Hitori says shaded squares must not touch each other vertically or horizontally. In this example there are two shaded squares from the previous solution steps of the puzzle. This means we can un-shade all adjacent squares by marking them with circles.", null, "", null, "#### 3. Un-shading squares to avoid partitions:\n\nThe third rule of Hitori says un-shaded squares must create a single continuous area. In this example squares b1, c2, and d3 form a wall of shaded squares. This means squares e2 and e4 must be un-shaded to avoid partitioning the un-shaded area.", null, "", null, "## Corner techniques\n\nHitori puzzles many times create interesting situations in the corners, which require more advanced techniques to solve. It is always useful to take a close look at all four corners to see if these situations occur in the puzzle. Here are some ways of using the corner techniques:\n\n#### Corner technique 1:\n\nIf square a1 is un-shaded then the two adjacent squares in b1 and a2 must be shaded according to the first rule of Hitori. However, by shading squares b1 and a2, a wall is formed in the corner which partitions the un-shaded square a1 from the rest of the un-shaded area. Therefore square a1 must be shaded, and we can also un-shade squares b1 and a2.", null, "", null, "#### Corner technique 2:\n\nHere is another special corner situation. From the first rule, either square a1 or a2 must be shaded. If we shade square a2 then square b2 must be un-shaded, which in turn means square b1 must be shaded as well. We now have both a2 and b1 shaded which partitions the un-shaded square a1 from the rest of the puzzle. Therefore square a1 must be shaded, which leads to the un-shading of a2 and b1 and the shading of b2.", null, "", null, "The techniques described so far won’t be enough to solve medium and hard puzzles. For this you will need wits and advanced techniques to work out many special and interesting logic situations. Most advanced techniques are done by a process of assumption and conflict, where a square is assumed to be shaded (or un-shaded) and this assumption is proven wrong in the next step or two using logic. Here are some examples of advanced techniques to solve special situations. You will develop many more advanced techniques when you solve Hitori puzzles by yourself:\n\nIf we un-shade square a5, then squares a3 and c5 must be shaded which creates a partition wall and violation of the third rule of Hitori. Therefore square a5 must be shaded.", null, "", null, "", null, "If we shade square c5, then a partition violation will occur when the 3 in square e5 or a5 is shaded. Therefore square c5 must be un-shaded.", null, "", null, "", null, "If we shade square b4, then squares b3 and a4 must be un-shaded. This means squares a3 and a5 must be shaded, which will cause partitioning of square a4. Therefore square b4 must be un-shaded.", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "" ]
[ null, "https://www.conceptispuzzles.com/ru/picture/3/3468.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1219.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1220.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1221.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1222.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1223.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1224.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1225.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1226.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1227.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1228.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1229.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1230.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1231.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1232.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1233.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1234.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1235.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1236.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1237.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1238.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1239.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1240.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1241.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1242.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1243.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1244.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1245.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1246.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1247.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1248.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1249.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1250.gif", null, "https://www.conceptispuzzles.com/ru/picture/27/1251.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9296874,"math_prob":0.908557,"size":6717,"snap":"2022-40-2023-06","text_gpt3_token_len":1520,"char_repetition_ratio":0.20348577,"word_repetition_ratio":0.1015953,"special_character_ratio":0.21616793,"punctuation_ratio":0.083395384,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9818326,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68],"im_url_duplicate_count":[null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-29T09:24:59Z\",\"WARC-Record-ID\":\"<urn:uuid:f11cd1e9-c52c-4ba3-9a4d-c9d01a268d84>\",\"Content-Length\":\"32273\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:64a6d38f-5ffc-4230-a46b-e3e0ba30e3e2>\",\"WARC-Concurrent-To\":\"<urn:uuid:e5323344-7114-40cc-b79d-6b69acfe1d76>\",\"WARC-IP-Address\":\"209.190.55.50\",\"WARC-Target-URI\":\"https://www.conceptispuzzles.com/ru/index.aspx?uri=puzzle/hitori/techniques\",\"WARC-Payload-Digest\":\"sha1:B6BDJ6N5K5L3WHCK4YXFKRHOOLYH7YQM\",\"WARC-Block-Digest\":\"sha1:BWPFZ7PY7IOEF2ZEWGD4WNWXPJEQTKBN\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499710.49_warc_CC-MAIN-20230129080341-20230129110341-00014.warc.gz\"}"}
https://mathspace.co/textbooks/syllabuses/Syllabus-405/topics/Topic-7180/subtopics/Subtopic-95936/?activeTab=theory
[ "", null, "Lesson\n\nIf you look at lists of numbers like $1$1, $3$3, $5$5, $7$7, you might notice the numbers jump by the same amount each time. This is what we call a simple pattern. The amazing thing about patterns is that we don't just look at patterns - we can continue patterns!\n\nWatch the video below to see how to continue a number pattern.\n\nNow that we've learned how to continue a number pattern, try some questions yourself!\n\nWorked Examples\n\nExample 1\n\nWrite the next number in the pattern:\n\n1. $2$2, $4$4, $6$6, $8$8, $10$10, $12$12, $\\editable{}$\n\nExample 2\n\nComplete the following pattern.\n\n1. $3$3 $6$6 $9$9 $\\editable{}$ $\\editable{}$ $\\editable{}$ $\\editable{}$ $\\editable{}$\n\nExample 3\n\nComplete the following pattern.\n\n1. $36$36 $32$32 $28$28 $\\editable{}$ $\\editable{}$ $\\editable{}$ $\\editable{}$ $\\editable{}$\n\nOutcomes\n\nNA2-8\n\nFind rules for the next member in a sequential pattern." ]
[ null, "https://mathspace-production-static.mathspace.co/permalink/badges/v3/patterns-algebra.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8407962,"math_prob":0.99996054,"size":816,"snap":"2022-05-2022-21","text_gpt3_token_len":243,"char_repetition_ratio":0.26231527,"word_repetition_ratio":0.033333335,"special_character_ratio":0.35661766,"punctuation_ratio":0.12578617,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99797946,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-21T02:29:02Z\",\"WARC-Record-ID\":\"<urn:uuid:d2a68ff7-8f17-4f62-8e01-6238157485e8>\",\"Content-Length\":\"362978\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:345a58b7-b086-49d1-b7db-a8dd4f5474e8>\",\"WARC-Concurrent-To\":\"<urn:uuid:e5918ed1-ee6e-47ce-93cf-b6fbb44257a4>\",\"WARC-IP-Address\":\"104.22.57.207\",\"WARC-Target-URI\":\"https://mathspace.co/textbooks/syllabuses/Syllabus-405/topics/Topic-7180/subtopics/Subtopic-95936/?activeTab=theory\",\"WARC-Payload-Digest\":\"sha1:E7HSSSCDU27F3GZVXPA264UVODJ6MB3Y\",\"WARC-Block-Digest\":\"sha1:EQCLQJELOZ4P3VKES6JYDOKY4VBBRG6A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320302715.38_warc_CC-MAIN-20220121010736-20220121040736-00475.warc.gz\"}"}
https://dislib.readthedocs.io/en/release-0.7/dislib.model_selection.html
[ "# dislib.model_selection¶\n\nclass `dislib.model_selection.``GridSearchCV`(estimator, param_grid, scoring=None, cv=None, refit=True)[source]\n\nBases: `dislib.model_selection._search.BaseSearchCV`\n\nExhaustive search over specified parameter values for an estimator.\n\nGridSearchCV implements a “fit” and a “score” method.\n\nThe parameters of the estimator used to apply these methods are optimized by cross-validated grid-search over a parameter grid.\n\nParameters: estimator (estimator object.) – This is assumed to implement the scikit-learn estimator interface. Either estimator needs to provide a `score` function, or `scoring` must be passed. param_grid (dict or list of dictionaries) – Dictionary with parameters names (string) as keys and lists of parameter settings to try as values, or a list of such dictionaries, in which case the grids spanned by each dictionary in the list are explored. This enables searching over any sequence of parameter settings. scoring (callable, dict or None, optional (default=None)) – A callable to evaluate the predictions on the test set. It should take 3 parameters, estimator, x and y, and return a score (higher meaning better). For evaluating multiple metrics, give a dict with names as keys and callables as values. If None, the estimator’s score method is used. cv (int or cv generator, optional (default=None)) – Determines the cross-validation splitting strategy. Possible inputs for cv are: - None, to use the default 5-fold cross validation, - integer, to specify the number of folds in a KFold, - custom cv generator. refit (boolean, string, or callable, optional (default=True)) – Refit an estimator using the best found parameters on the whole dataset. For multiple metric evaluation, this needs to be a string denoting the scorer that would be used to find the best parameters for refitting the estimator at the end. Where there are considerations other than maximum score in choosing a best estimator, `refit` can be set to a function which returns the selected `best_index_` given `cv_results_`. The refitted estimator is made available at the `best_estimator_` attribute and permits using `predict` directly on this `GridSearchCV` instance. Also for multiple metric evaluation, the attributes `best_index_`, `best_score_` and `best_params_` will only be available if `refit` is set and all of them will be determined w.r.t this specific scorer. `best_score_` is not returned if refit is callable. See `scoring` parameter to know more about multiple metric evaluation.\n\nExamples\n\n```>>> import dislib as ds\n>>> from dislib.model_selection import GridSearchCV\n>>> from dislib.classification import RandomForestClassifier\n>>> import numpy as np\n>>> from sklearn import datasets\n>>>\n>>>\n>>> if __name__ == '__main__':\n>>> x = ds.array(x_np, (30, 4))\n>>> y = ds.array(y_np[:, np.newaxis], (30, 1))\n>>> param_grid = {'n_estimators': (2, 4), 'max_depth': range(3, 5)}\n>>> rf = RandomForestClassifier()\n>>> searcher = GridSearchCV(rf, param_grid)\n>>> searcher.fit(x, y)\n>>> searcher.cv_results_\n```\nVariables:\n• cv_results (dict of numpy (masked) ndarrays) –\n\nA dict with keys as column headers and values as columns, that can be imported into a pandas `DataFrame`. For instance the below given table:\n\nparam_kernel param_degree split0_test_score rank_t…\n’poly’ 2 0.80 2\n’poly’ 3 0.70 4\n’rbf’ 0.80 3\n’rbf’ 0.93 1\n\nwill be represented by a `cv_results_` dict of:\n\n```{\n'param_kernel': masked_array(data = ['poly', 'poly', 'rbf', 'rbf'],\nmask = [False False False False]...),\n'param_degree': masked_array(data = [2.0 3.0 -- --],\nmask = [False False True True]...),\n'split0_test_score' : [0.80, 0.70, 0.80, 0.93],\n'split1_test_score' : [0.82, 0.50, 0.68, 0.78],\n'split2_test_score' : [0.79, 0.55, 0.71, 0.93],\n...\n'mean_test_score' : [0.81, 0.60, 0.75, 0.85],\n'std_test_score' : [0.01, 0.10, 0.05, 0.08],\n'rank_test_score' : [2, 4, 3, 1],\n'params' : [{'kernel': 'poly', 'degree': 2}, ...],\n}\n```\n\nNOTES:\n\nThe key `'params'` is used to store a list of parameter settings dicts for all the parameter candidates.\n\nThe `mean_fit_time`, `std_fit_time`, `mean_score_time` and `std_score_time` are all in seconds.\n\nFor multi-metric evaluation, the scores for all the scorers are available in the `cv_results_` dict at the keys ending with that scorer’s name (`'_<scorer_name>'`) instead of `'_score'` shown above (‘split0_test_precision’, ‘mean_train_precision’ etc.).\n\n• best_estimator (estimator or dict) – Estimator that was chosen by the search, i.e. estimator which gave highest score (or smallest loss if specified) on the left out data. Not available if `refit=False`. See `refit` parameter for more information on allowed values.\n• best_score (float) – Mean cross-validated score of the best_estimator For multi-metric evaluation, this is present only if `refit` is specified.\n• best_params (dict) – Parameter setting that gave the best results on the hold out data. For multi-metric evaluation, this is present only if `refit` is specified.\n• best_index (int) – The index (of the `cv_results_` arrays) which corresponds to the best candidate parameter setting. The dict at `search.cv_results_['params'][search.best_index_]` gives the parameter setting for the best model, that gives the highest mean score (`search.best_score_`). For multi-metric evaluation, this is present only if `refit` is specified.\n• scorer (function or a dict) – Scorer function used on the held out data to choose the best parameters for the model. For multi-metric evaluation, this attribute holds the validated `scoring` dict which maps the scorer key to the scorer callable.\n• n_splits (int) – The number of cross-validation splits (folds/iterations).\n`fit`(x, y=None, **fit_params)[source]\n\nRun fit with all sets of parameters.\n\nParameters: x (ds-array) – Training data samples. y (ds-array, optional (default = None)) – Training data labels or values. **fit_params (dict of string -> object) – Parameters passed to the `fit` method of the estimator\nclass `dislib.model_selection.``RandomizedSearchCV`(estimator, param_distributions, n_iter=10, scoring=None, cv=None, refit=True, random_state=None)[source]\n\nBases: `dislib.model_selection._search.BaseSearchCV`\n\nRandomized search on hyper parameters.\n\nRandomizedSearchCV implements a “fit” and a “score” method.\n\nThe parameters of the estimator used to apply these methods are optimized by cross-validated search over parameter settings.\n\nIn contrast to GridSearchCV, not all parameter values are tried out, but rather a fixed number of parameter settings is sampled from the specified distributions. The number of parameter settings that are tried is given by n_iter.\n\nIf all parameters are presented as a list, sampling without replacement is performed. If at least one parameter is given as a distribution, sampling with replacement is used.\n\nParameters: estimator (estimator object.) – This is assumed to implement the scikit-learn estimator interface. Either estimator needs to provide a `score` function, or `scoring` must be passed. param_distributions (dict) – Dictionary with parameters names (string) as keys and distributions or lists of parameters to try. Distributions must provide a `rvs` method for sampling (such as those from scipy.stats.distributions). If a list is given, it is sampled uniformly. n_iter (int, optional (default=10)) – Number of parameter settings that are sampled. scoring (callable, dict or None, optional (default=None)) – A callable to evaluate the predictions on the test set. It should take 3 parameters, estimator, x and y, and return a score (higher meaning better). For evaluating multiple metrics, give a dict with names as keys and callables as values. If None, the estimator’s score method is used. cv (int or cv generator, optional (default=None)) – Determines the cross-validation splitting strategy. Possible inputs for cv are: - None, to use the default 5-fold cross validation, - integer, to specify the number of folds in a KFold, - custom cv generator. refit (boolean, string, or callable, optional (default=True)) – Refit an estimator using the best found parameters on the whole dataset. For multiple metric evaluation, this needs to be a string denoting the scorer that would be used to find the best parameters for refitting the estimator at the end. Where there are considerations other than maximum score in choosing a best estimator, `refit` can be set to a function which returns the selected `best_index_` given `cv_results_`. The refitted estimator is made available at the `best_estimator_` attribute and permits using `predict` directly on this `GridSearchCV` instance. Also for multiple metric evaluation, the attributes `best_index_`, `best_score_` and `best_params_` will only be available if `refit` is set and all of them will be determined w.r.t this specific scorer. `best_score_` is not returned if refit is callable. See `scoring` parameter to know more about multiple metric evaluation. random_state (int, RandomState instance or None, optional, default=None) – Pseudo random number generator state used for random sampling of params in param_distributions. This is not passed to each estimator. If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.\n\nExamples\n\n```>>> import dislib as ds\n>>> from dislib.model_selection import RandomizedSearchCV\n>>> import numpy as np\n>>> import scipy.stats as stats\n>>> from sklearn import datasets\n>>>\n>>>\n>>> if __name__ == '__main__':\n>>> # Pre-shuffling required for CSVM\n>>> p = np.random.permutation(len(x_np))\n>>> x = ds.array(x_np[p], (30, 4))\n>>> y = ds.array((y_np[p] == 0)[:, np.newaxis], (30, 1))\n>>> param_distributions = {'c': stats.expon(scale=0.5),\n>>> 'gamma': stats.expon(scale=10)}\n>>> searcher = RandomizedSearchCV(csvm, param_distributions, n_iter=10)\n>>> searcher.fit(x, y)\n>>> searcher.cv_results_\n```\nVariables:\n• cv_results (dict of numpy (masked) ndarrays) –\n\nA dict with keys as column headers and values as columns, that can be imported into a pandas `DataFrame`.\n\nFor instance the below given table\n\nparam_c param_gamma split0_test_score rank_test_score\n0.193 1.883 0.82 3\n1.452 0.327 0.81 2\n0.926 3.452 0.94 1\n\nwill be represented by a `cv_results_` dict of:\n\n```{\n'param_kernel' : masked_array(data = ['rbf', 'rbf', 'rbf'],\n'split0_test_score' : [0.82, 0.81, 0.94],\n'split1_test_score' : [0.66, 0.75, 0.79],\n'split2_test_score' : [0.82, 0.87, 0.84],\n...\n'mean_test_score' : [0.76, 0.84, 0.86],\n'std_test_score' : [0.01, 0.20, 0.04],\n'rank_test_score' : [3, 2, 1],\n'params' : [{'c' : 0.193, 'gamma' : 1.883}, ...],\n}\n```\n\nNOTE\n\nThe key `'params'` is used to store a list of parameter settings dicts for all the parameter candidates.\n\nThe `mean_fit_time`, `std_fit_time`, `mean_score_time` and `std_score_time` are all in seconds.\n\nFor multi-metric evaluation, the scores for all the scorers are available in the `cv_results_` dict at the keys ending with that scorer’s name (`'_<scorer_name>'`) instead of `'_score'` shown above. (‘split0_test_precision’, ‘mean_train_precision’ etc.)\n\n• best_estimator (estimator or dict) –\n\nEstimator that was chosen by the search, i.e. estimator which gave highest score (or smallest loss if specified) on the left out data. Not available if `refit=False`.\n\nFor multi-metric evaluation, this attribute is present only if `refit` is specified.\n\nSee `refit` parameter for more information on allowed values.\n\n• best_score (float) –\n\nMean cross-validated score of the best_estimator.\n\nFor multi-metric evaluation, this is not available if `refit` is `False`. See `refit` parameter for more information.\n\n• best_params (dict) –\n\nParameter setting that gave the best results on the hold out data.\n\nFor multi-metric evaluation, this is not available if `refit` is `False`. See `refit` parameter for more information.\n\n• best_index (int) –\n\nThe index (of the `cv_results_` arrays) which corresponds to the best candidate parameter setting.\n\nThe dict at `search.cv_results_['params'][search.best_index_]` gives the parameter setting for the best model, that gives the highest mean score (`search.best_score_`).\n\nFor multi-metric evaluation, this is not available if `refit` is `False`. See `refit` parameter for more information.\n\n• scorer (function or a dict) –\n\nScorer function used on the held out data to choose the best parameters for the model.\n\nFor multi-metric evaluation, this attribute holds the validated `scoring` dict which maps the scorer key to the scorer callable.\n\n• n_splits (int) – The number of cross-validation splits (folds/iterations).\n`fit`(x, y=None, **fit_params)[source]\n\nRun fit with all sets of parameters.\n\nParameters: x (ds-array) – Training data samples. y (ds-array, optional (default = None)) – Training data labels or values. **fit_params (dict of string -> object) – Parameters passed to the `fit` method of the estimator\nclass `dislib.model_selection.``KFold`(n_splits=5, shuffle=False, random_state=None)[source]\n\nBases: `object`\n\nK-fold splitter for cross-validation\n\nReturns k partitions of the dataset into train and validation datasets. The dataset is shuffled and split into k folds; each fold is used once as validation dataset while the k - 1 remaining folds form the training dataset.\n\nEach fold contains n//k or n//k + 1 samples, where n is the number of samples in the input dataset.\n\nParameters: n_splits (int, optional (default=5)) – Number of folds. Must be at least 2. shuffle (boolean, optional (default=False)) – Shuffles and balances the data before splitting into batches. random_state (int, RandomState instance or None, optional, default=None) – If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random. Used when `shuffle` == True.\n`get_n_splits`()[source]\n\nGet the number of CV splits that this splitter does.\n\nReturns: n_splits – The number of splits performed by this CV splitter. int\n`split`(x, y=None)[source]\n\nGenerates K-fold splits.\n\nParameters: x (ds-array) – Samples array. y (ds-array, optional (default=None)) – Corresponding labels or values. train_data (train_x, train_y) – The training ds-arrays for that split. If y is None, train_y is None. test_data (test_x, test_y) – The testing ds-arrays data for that split. If y is None, test_y is None." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6205473,"math_prob":0.94762635,"size":14214,"snap":"2022-05-2022-21","text_gpt3_token_len":3536,"char_repetition_ratio":0.1330753,"word_repetition_ratio":0.6046738,"special_character_ratio":0.25911075,"punctuation_ratio":0.18579236,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98030037,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-19T15:32:40Z\",\"WARC-Record-ID\":\"<urn:uuid:3d6d771d-a6b1-4992-a55b-a4e34462998d>\",\"Content-Length\":\"50602\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4cba7dd4-42a2-4be7-8261-0f4f2dc27121>\",\"WARC-Concurrent-To\":\"<urn:uuid:52ae57f4-b608-4a09-870c-20273a5dd8ed>\",\"WARC-IP-Address\":\"104.17.32.82\",\"WARC-Target-URI\":\"https://dislib.readthedocs.io/en/release-0.7/dislib.model_selection.html\",\"WARC-Payload-Digest\":\"sha1:GZKMUDXEPIXEOY35JE4DR2GLZQ7MNLT4\",\"WARC-Block-Digest\":\"sha1:LRQCYNZ2EQQGH4QDEUORAPSFPNAUSCPT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662529538.2_warc_CC-MAIN-20220519141152-20220519171152-00674.warc.gz\"}"}
https://metanumbers.com/55136
[ "## 55136\n\n55,136 (fifty-five thousand one hundred thirty-six) is an even five-digits composite number following 55135 and preceding 55137. In scientific notation, it is written as 5.5136 × 104. The sum of its digits is 20. It has a total of 6 prime factors and 12 positive divisors. There are 27,552 positive integers (up to 55136) that are relatively prime to 55136.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Even\n• Number length 5\n• Sum of Digits 20\n• Digital Root 2\n\n## Name\n\nShort name 55 thousand 136 fifty-five thousand one hundred thirty-six\n\n## Notation\n\nScientific notation 5.5136 × 104 55.136 × 103\n\n## Prime Factorization of 55136\n\nPrime Factorization 25 × 1723\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 2 Total number of distinct prime factors Ω(n) 6 Total number of prime factors rad(n) 3446 Product of the distinct prime numbers λ(n) 1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 0 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 55,136 is 25 × 1723. Since it has a total of 6 prime factors, 55,136 is a composite number.\n\n## Divisors of 55136\n\n1, 2, 4, 8, 16, 32, 1723, 3446, 6892, 13784, 27568, 55136\n\n12 divisors\n\n Even divisors 10 2 1 1\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 12 Total number of the positive divisors of n σ(n) 108612 Sum of all the positive divisors of n s(n) 53476 Sum of the proper positive divisors of n A(n) 9051 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 234.811 Returns the nth root of the product of n divisors H(n) 6.0917 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 55,136 can be divided by 12 positive divisors (out of which 10 are even, and 2 are odd). The sum of these divisors (counting 55,136) is 108,612, the average is 9,051.\n\n## Other Arithmetic Functions (n = 55136)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 27552 Total number of positive integers not greater than n that are coprime to n λ(n) 13776 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 5601 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 27,552 positive integers (less than 55,136) that are coprime with 55,136. And there are approximately 5,601 prime numbers less than or equal to 55,136.\n\n## Divisibility of 55136\n\n m n mod m 2 3 4 5 6 7 8 9 0 2 0 1 2 4 0 2\n\nThe number 55,136 is divisible by 2, 4 and 8.\n\n## Classification of 55136\n\n• Arithmetic\n• Deficient\n\n### Expressible via specific sums\n\n• Polite\n• Non-hypotenuse\n\n## Base conversion (55136)\n\nBase System Value\n2 Binary 1101011101100000\n3 Ternary 2210122002\n4 Quaternary 31131200\n5 Quinary 3231021\n6 Senary 1103132\n8 Octal 153540\n10 Decimal 55136\n12 Duodecimal 27aa8\n20 Vigesimal 6hgg\n36 Base36 16jk\n\n## Basic calculations (n = 55136)\n\n### Multiplication\n\nn×i\n n×2 110272 165408 220544 275680\n\n### Division\n\nni\n n⁄2 27568 18378.7 13784 11027.2\n\n### Exponentiation\n\nni\n n2 3039978496 167612254355456 9241469256142422016 509537648906668580274176\n\n### Nth Root\n\ni√n\n 2√n 234.811 38.0608 15.3235 8.87743\n\n## 55136 as geometric shapes\n\n### Circle\n\n Diameter 110272 346430 9.55037e+09\n\n### Sphere\n\n Volume 7.02093e+14 3.82015e+10 346430\n\n### Square\n\nLength = n\n Perimeter 220544 3.03998e+09 77974.1\n\n### Cube\n\nLength = n\n Surface area 1.82399e+10 1.67612e+14 95498.4\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 165408 1.31635e+09 47749.2\n\n### Triangular Pyramid\n\nLength = n\n Surface area 5.2654e+09 1.97533e+13 45018.4\n\n## Cryptographic Hash Functions\n\nmd5 08913a2e8cbcc8bf302a8554782add46 4b5ee132a8b95f5ed4ae5a260cc2a10aeef47560 6e771ae51ce48292f2bb11fbfeecf748e73d42b624c4aff3e3a6151d7668f841 2318b3e9cbd1a88a59b2d72993d336dc256730a7874759e4b059e595974456727efd9fc562186487e536d725cde47da2048d780042fac60e0f50bd2db3128835 b5566663261334146922f2c3ea310e8db5a7c33f" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.61464804,"math_prob":0.98410815,"size":4504,"snap":"2020-34-2020-40","text_gpt3_token_len":1586,"char_repetition_ratio":0.11888889,"word_repetition_ratio":0.025525525,"special_character_ratio":0.45670515,"punctuation_ratio":0.07662338,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99602246,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-04T17:43:36Z\",\"WARC-Record-ID\":\"<urn:uuid:ac41e016-eb30-4d39-b98e-5b8b74efed88>\",\"Content-Length\":\"47745\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:36c47aa6-5a61-4c2a-8209-db69ff6f9b4e>\",\"WARC-Concurrent-To\":\"<urn:uuid:37b4d131-337b-4f75-b036-3e73c8034edc>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/55136\",\"WARC-Payload-Digest\":\"sha1:UAU4Z2AOHWNUYLXU7NNWAS3PNLEQVTWP\",\"WARC-Block-Digest\":\"sha1:EYDCFAKSKIL6EVPVE6QUYACVZADNZLPF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735881.90_warc_CC-MAIN-20200804161521-20200804191521-00557.warc.gz\"}"}
https://docs.seldon.io/projects/alibi/en/latest/examples/anchor_tabular_adult.html
[ "# Anchor explanations for income prediction¶\n\nIn this example, we will explain predictions of a Random Forest classifier whether a person will make more or less than \\$50k based on characteristics like age, marital status, gender or occupation. The features are a mixture of ordinal and categorical data and will be pre-processed accordingly.\n\n:\n\nimport numpy as np\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.preprocessing import StandardScaler, OneHotEncoder\nfrom alibi.explainers import AnchorTabular\n\n\nThe fetch_adult function returns a Bunch object containing the features, the targets, the feature names and a mapping of categorical variables to numbers which are required for formatting the output of the Anchor explainer.\n\n:\n\nadult = fetch_adult()\n\n:\n\ndict_keys(['data', 'target', 'feature_names', 'target_names', 'category_map'])\n\n:\n\ndata = adult.data\n\n\nNote that for your own datasets you can use our utility function gen_category_map to create the category map:\n\n:\n\nfrom alibi.utils.data import gen_category_map\n\n\nDefine shuffled training and test set\n\n:\n\nnp.random.seed(0)\ndata_perm = np.random.permutation(np.c_[data, target])\ndata = data_perm[:,:-1]\ntarget = data_perm[:,-1]\n\n:\n\nidx = 30000\nX_train,Y_train = data[:idx,:], target[:idx]\nX_test, Y_test = data[idx+1:,:], target[idx+1:]\n\n\n## Create feature transformation pipeline¶\n\nCreate feature pre-processor. Needs to have ‘fit’ and ‘transform’ methods. Different types of pre-processing can be applied to all or part of the features. In the example below we will standardize ordinal features and apply one-hot-encoding to categorical features.\n\nOrdinal features:\n\n:\n\nordinal_features = [x for x in range(len(feature_names)) if x not in list(category_map.keys())]\nordinal_transformer = Pipeline(steps=[('imputer', SimpleImputer(strategy='median')),\n('scaler', StandardScaler())])\n\n\nCategorical features:\n\n:\n\ncategorical_features = list(category_map.keys())\ncategorical_transformer = Pipeline(steps=[('imputer', SimpleImputer(strategy='median')),\n('onehot', OneHotEncoder(handle_unknown='ignore'))])\n\n\nCombine and fit:\n\n:\n\npreprocessor = ColumnTransformer(transformers=[('num', ordinal_transformer, ordinal_features),\n('cat', categorical_transformer, categorical_features)])\npreprocessor.fit(X_train)\n\n:\n\nColumnTransformer(n_jobs=None, remainder='drop', sparse_threshold=0.3,\ntransformer_weights=None,\ntransformers=[('num',\nPipeline(memory=None,\nsteps=[('imputer',\ncopy=True,\nfill_value=None,\nmissing_values=nan,\nstrategy='median',\nverbose=0)),\n('scaler',\nStandardScaler(copy=True,\nwith_mean=True,\nwith_std=True))],\nverbose=False),\n[0, 8, 9...\nPipeline(memory=None,\nsteps=[('imputer',\ncopy=True,\nfill_value=None,\nmissing_values=nan,\nstrategy='median',\nverbose=0)),\n('onehot',\nOneHotEncoder(categorical_features=None,\ncategories=None,\ndrop=None,\ndtype=<class 'numpy.float64'>,\nhandle_unknown='ignore',\nn_values=None,\nsparse=True))],\nverbose=False),\n[1, 2, 3, 4, 5, 6, 7, 11])],\nverbose=False)\n\n\n## Train Random Forest model¶\n\nFit on pre-processed (imputing, OHE, standardizing) data.\n\n:\n\nnp.random.seed(0)\nclf = RandomForestClassifier(n_estimators=50)\nclf.fit(preprocessor.transform(X_train), Y_train)\n\n:\n\nRandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',\nmax_depth=None, max_features='auto', max_leaf_nodes=None,\nmin_impurity_decrease=0.0, min_impurity_split=None,\nmin_samples_leaf=1, min_samples_split=2,\nmin_weight_fraction_leaf=0.0, n_estimators=50,\nn_jobs=None, oob_score=False, random_state=None,\nverbose=0, warm_start=False)\n\n\nDefine predict function\n\n:\n\npredict_fn = lambda x: clf.predict(preprocessor.transform(x))\nprint('Train accuracy: ', accuracy_score(Y_train, predict_fn(X_train)))\nprint('Test accuracy: ', accuracy_score(Y_test, predict_fn(X_test)))\n\nTrain accuracy: 0.9655333333333334\nTest accuracy: 0.855859375\n\n\n## Initialize and fit anchor explainer for tabular data¶\n\n:\n\nexplainer = AnchorTabular(predict_fn, feature_names, categorical_names=category_map)\n\n\nDiscretize the ordinal features into quartiles\n\n:\n\nexplainer.fit(X_train, disc_perc=[25, 50, 75])\n\n\n## Getting an anchor¶\n\nBelow, we get an anchor for the prediction of the first observation in the test set. An anchor is a sufficient condition - that is, when the anchor holds, the prediction should be the same as the prediction for this instance.\n\n:\n\nidx = 0\nprint('Prediction: ', class_names[explainer.predict_fn(X_test[idx].reshape(1, -1))])\n\nPrediction: <=50K\n\n\nWe set the precision threshold to 0.95. This means that predictions on observations where the anchor holds will be the same as the prediction on the explained instance at least 95% of the time.\n\n:\n\nexplanation = explainer.explain(X_test[idx], threshold=0.95)\nprint('Anchor: %s' % (' AND '.join(explanation['names'])))\nprint('Precision: %.2f' % explanation['precision'])\nprint('Coverage: %.2f' % explanation['coverage'])\n\nAnchor: Marital Status = Separated AND Sex = Female\nPrecision: 0.96\nCoverage: 0.11\n\n\n## …or not?¶\n\nLet’s try getting an anchor for a different observation in the test set - one for the which the prediction is >50K.\n\n:\n\nidx = 6\nprint('Prediction: ', class_names[explainer.predict_fn(X_test[idx].reshape(1, -1))])\n\nexplanation = explainer.explain(X_test[idx], threshold=0.95)\nprint('Anchor: %s' % (' AND '.join(explanation['names'])))\nprint('Precision: %.2f' % explanation['precision'])\nprint('Coverage: %.2f' % explanation['coverage'])\n\nPrediction: >50K\n\nCould not find an anchor satisfying the 0.95 precision constraint. Now returning the best non-eligible anchor.\n\nAnchor: Capital Loss > 0.00 AND Marital Status = Married AND Relationship = Husband AND Age > 47.00 AND Race = White AND Sex = Male AND Country = United-States AND Workclass = Self-emp-not-inc AND Capital Gain <= 0.00\nPrecision: 0.64\nCoverage: 0.00\n\n\nNotice how no anchor is found!\n\nThis is due to the imbalanced dataset (roughly 25:75 high:low earner proportion), so during the sampling stage feature ranges corresponding to low-earners will be oversampled. This is a feature because it can point out an imbalanced dataset, but it can also be fixed by producing balanced datasets to enable anchors to be found for either class." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5912405,"math_prob":0.96115226,"size":6678,"snap":"2019-51-2020-05","text_gpt3_token_len":1628,"char_repetition_ratio":0.12496254,"word_repetition_ratio":0.0629275,"special_character_ratio":0.25920933,"punctuation_ratio":0.21972318,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99511683,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-26T08:10:48Z\",\"WARC-Record-ID\":\"<urn:uuid:791a90f4-4f18-4afb-bc55-0f267b013d5e>\",\"Content-Length\":\"48438\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cc6601d8-100e-40a1-b31f-ed954ab90b58>\",\"WARC-Concurrent-To\":\"<urn:uuid:dd937840-4761-4b59-b1b2-a211eabbc68f>\",\"WARC-IP-Address\":\"104.17.32.82\",\"WARC-Target-URI\":\"https://docs.seldon.io/projects/alibi/en/latest/examples/anchor_tabular_adult.html\",\"WARC-Payload-Digest\":\"sha1:4URMGQQGINHSYXTLKQSDZ32AC4CNVFUI\",\"WARC-Block-Digest\":\"sha1:2BGSAJN7EY2Z7X7YZMBTULCNFGQBEPKM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251687958.71_warc_CC-MAIN-20200126074227-20200126104227-00207.warc.gz\"}"}
https://www.physicsforums.com/threads/simplify-trigonometric-equation-problem.301990/
[ "# Simplify trigonometric equation problem!\n\n## Homework Statement\n\nSimplify the following. Write all trigonometric\nfunctions in terms of t.", null, "## The Attempt at a Solution\n\nI know that:\nsec(t+2pi)=sec(t)\n\n1+tan(t+3$$\\pi$$)= 1+tan(t)\n\nWhat about: csc(t-6$$\\pi$$)??? Will it be equal to csc(t)." ]
[ null, "data:image/svg+xml;charset=utf-8,%3Csvg xmlns%3D'http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg' width='300' height='111' viewBox%3D'0 0 300 111'%2F%3E", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8012731,"math_prob":0.99916744,"size":360,"snap":"2020-24-2020-29","text_gpt3_token_len":118,"char_repetition_ratio":0.101123594,"word_repetition_ratio":0.0,"special_character_ratio":0.30555555,"punctuation_ratio":0.1375,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9997688,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-04T12:57:16Z\",\"WARC-Record-ID\":\"<urn:uuid:8a28c620-f534-4489-aea3-39facda56e0f>\",\"Content-Length\":\"65007\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:605ac3c0-7d13-40b0-ae32-8f8a0e666e7f>\",\"WARC-Concurrent-To\":\"<urn:uuid:8896a804-8bf3-4004-99dc-f65808273660>\",\"WARC-IP-Address\":\"23.111.143.85\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/simplify-trigonometric-equation-problem.301990/\",\"WARC-Payload-Digest\":\"sha1:IOJJNJX45F467SBDWU3Z66675QFJR7NP\",\"WARC-Block-Digest\":\"sha1:73AZK643MDGBMSKJGKUY56FMQNOR255G\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655886121.45_warc_CC-MAIN-20200704104352-20200704134352-00296.warc.gz\"}"}
https://www.millersville.edu/math/programs/departmental-syllabi.php
[ "# Departmental Syllabi\n\n## Departmental Syllabi\n\nThese are the official departmental syllabi for the courses offered by the mathematics department, and list the topics covered as well as a typical text for the course.\n\nMATH 090 - Basic Mathematics\nMATH 100 - Survey of Mathematical Ideas\nMATH 101 - College Algebra\nMATH 102 - Survey of Mathematical Ideas in Non-European Cultures\nMATH 104 - Fundamentals of Math I\nMATH 105 - Fundamentals of Math II\nMATH 110 - Trigonometry\nMATH 130 - Elements of Statistics I\nMATH 151 - Calculus for the Management, Life and Social Sciences\nMATH 160 - Precalculus\nMATH 161 - Calculus I\nMATH 163 - Honors Calculus I\nMATH 204 - Algebraic Foundations for the Middle Level Teacher\nMATH 205 - Geometry for the Middle Level Teacher\nMATH 211 - Calculus II\nMATH 230 - Data Analysis and Probability for the Middle Level Teacher\nMATH 235 - Survey of Statistics\nMATH 236 - Elements of Statistics II\nMATH 301 - History of Mathematics\nMATH 310 - Introduction to Mathematical Proof\nMATH 311 - Calculus III\nMATH 312 - Software for Multivariable Calculus\nMATH 319 - Calculus and Actuarial Science Problem-Solving Seminar\nMATH 322 - Linear Algebra I\nMATH 333 - Introduction to Probability and Statistics\nMATH 335 - Mathematical Statistics I\nMATH 345 - Abstract Algebra I\nMATH 353 - Survey of Geometry\nMATH 355 - Transformational Geometry\nMATH 365 - Ordinary Differential Equations\nMATH 370 - Operations Research\nMATH 375 - Numerical Analysis\nMATH 393 - Number Theory\nMATH 395 - Introductory Combinatorics\nMATH 405 - Teaching of Mathematics in the Secondary School\nMATH 422 - Linear Algebra II\nMATH 435 - Mathematical Statistics II\nMATH 445 - Abstract Algebra II\nMATH 457 - Elementary Differential Geometry\nMATH 464 - Real Analysis I\nMATH 465 - Real Analysis II\nMATH 467 - Partial Differential Equations\nMATH 471 - Mathematical Modeling\nMATH 472 - Financial Mathematics\nMATH 483 - Point-set Topology\n\nMATH 502 - Linear Algebra for Teachers\nMATH 503 - Probability and Statistics for Teachers\nMATH 504 - Modern Algebra for Teachers\nMATH 506 - Modern Analysis for Teachers\nMATH 520 - Logic and the Foundations of Mathematics\nMATH 525 - Axiomatic Development of Number Systems\nMATH 535 - Statistical Methods I\nMATH 536 - Statistical Methods II\nMATH 537 - Statistical Problem Solving Seminar\nMATH 566 - Complex Variables\nMATH 577 - Problems in Applied Mathematics\nMATH 592 - Graph Theory\nMATH 603 - History of Mathematics\nMATH 610 - Problem Solving Seminar\nMATH 611 - The Psychology of Learning Mathematics\nMATH 612 - Diagnostic/Prescriptive Teaching of Mathematics\nMATH 614 - Current Issues in Middle School Mathematics\nMATH 615 - Current Issues in Secondary School Mathematics\nMATH 616 - Teaching Advanced Placement (AP) Calculus in the Secondary School\nMATH 617 - Curricular Innovations in Middle and Secondary School Mathematics\nMATH 642 - Linear Algebra\nMATH 645 - Abstract Algebra\nMATH 650 - Topics in Geometry\nMATH 664 - Real Variables\nMATH 670 - Operations Research\nMATH 672 - Mathematical Modeling in the Secondary School Curriculum\nMATH 675 - Numerical Analysis\nMATH 679 - Technology in the Secondary Mathematics Classroom\nMATH 683 - General Topology\nMATH 690 - Topics in Discrete Mathematics for Teachers\nMATH 691 - Combinatorics\nMATH 693 - Number Theory\nMATH 695 - Topics in Mathematics\nMATH 697 - Topics in Mathematics Education" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.59249365,"math_prob":0.6627026,"size":3407,"snap":"2020-45-2020-50","text_gpt3_token_len":875,"char_repetition_ratio":0.20981488,"word_repetition_ratio":0.021390375,"special_character_ratio":0.26093337,"punctuation_ratio":0.006085193,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99981576,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-25T14:09:39Z\",\"WARC-Record-ID\":\"<urn:uuid:3e4e6d72-444c-4152-b73a-632022bbe3d3>\",\"Content-Length\":\"300790\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cc28cbcd-18b6-402a-bdbd-9e57c15eadbb>\",\"WARC-Concurrent-To\":\"<urn:uuid:8360ad11-dc03-4f4b-96db-20a8ea370072>\",\"WARC-IP-Address\":\"3.135.89.185\",\"WARC-Target-URI\":\"https://www.millersville.edu/math/programs/departmental-syllabi.php\",\"WARC-Payload-Digest\":\"sha1:R5WXIU4MVA2N6XROTM34EESEB66OMO54\",\"WARC-Block-Digest\":\"sha1:UV2WOAFF4GR54OCVMGBKSBI3UZMCXJZ3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107889173.38_warc_CC-MAIN-20201025125131-20201025155131-00701.warc.gz\"}"}
http://gestureml.org/doku.php/customizing_gesture_dimensions
[ "Documentation\n\n# Gestures\n\n## Touch\n\n### Simple\n\n#### Temporal\n\n##### Triple-tap\nTop ↑\n\nRelated\n\nGestureML was created and is maintained by Ideum.\n\ncustomizing_gesture_dimensions\n\n# Customizing Gesture Dimensions\n\nA gesture dimension is a dynamically defined fundamental property of a gesture object. Each gesture dimension can be treated independently such that it can be considered orthogonal system of properties. A gesture dimension must be uniquely defined locally so that no other dimension in the gesture object can share the same property “id”. A gesture object must have at least one dimension. A gesture dimension can be used to manage a tracked property that needs to be monitored of individually manipulated through the gesture pipeline. Each gesture dimension has a separate path trough the gesture pipeline and is subject to separate limits and controls.\n\nThe following “n-drag” gesture object has two dimensions “dx” and “dy”.In this example “dx” and “dy” represent calculated values for the change in position in the x direction and y direction respectively. This change is captured over a single frame and so can be considered a measure of velocity in each spacial dimension.\n\n```<Gesture id=\"n-drag\" type=\"drag\">\n<match>\n<action>\n<initial>\n<cluster point_number=\"0\" point_number_min=\"1\" point_number_max=\"10\"/>\n</initial>\n</action>\n</match>\n<analysis>\n<algorithm class=\"kinemetric\" type=\"continuous\">\n<library module=\"drag\"/>\n<returns>\n<property id=\"dx\" result=\"dx\"/>\n<property id=\"dy\" result=\"dy\"/>\n</returns>\n</algorithm>\n</analysis>\n<mapping>\n<update dispatch_type=\"continuous\">\n<gesture_event type=\"drag\">\n<property ref=\"dx\" target=\"x\"/>\n<property ref=\"dy\" target=\"y\"/>\n</gesture_event>\n</update>\n</mapping>\n</Gesture>\n```\n\nThe following “n-manipulate” gesture has five gesture dimensions “dx”,“dy”,“dsx”,“dsy” and “dtheta”. These dimensions each manage a fundamental property of the manipulate gesture:\n\n“dx” is the velocity in the x direction,\n“dy” is the velocity in the y direction,\n“dsx” is change in the separation in the x direction,\n“dsy” is the change in separation in the y direction,\n“dtheta” is the change in rotation on the x axis\n\nEach dimension has in own thread or path through the processing pipeline.\n\n```<Gesture id=\"n-manipulate\" type=\"manipulate\">\n<match>\n<action>\n<initial>\n<cluster point_number=\"0\" point_number_min=\"1\" point_number_max=\"10\"/>\n</initial>\n</action>\n</match>\n<analysis>\n<algorithm class=\"kinemetric\" type=\"continuous\">\n<library module=\"manipulate\"/>\n<returns>\n<property id=\"dx\" result=\"dx\"/>\n<property id=\"dy\" result=\"dy\"/>\n<property id=\"dsx\" result=\"ds\"/>\n<property id=\"dsy\" result=\"ds\"/>\n<property id=\"dtheta\" result=\"dtheta\"/>\n</returns>\n</algorithm>\n</analysis>\n<mapping>\n<update dispatch_type=\"continuous\">\n<gesture_event type=\"manipulate\">\n<property ref=\"dx\" target=\"x\"/>\n<property ref=\"dy\" target=\"y\"/>\n<property ref=\"dsx\" target=\"scaleX\"/>\n<property ref=\"dsy\" target=\"scaleY\"/>\n<property ref=\"dtheta\" target=\"rotation\"/>\n</gesture_event>\n</update>\n</mapping>\n</Gesture>\n```\n\nIf the values stored in a dimension are set to zero the dimension is automatically set to inactive. If all dimensions on a gesture object are inactive then the gesture is set to inactive.", null, "" ]
[ null, "http://gestureml.org/lib/exe/indexer.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.60275435,"math_prob":0.8225731,"size":496,"snap":"2022-27-2022-33","text_gpt3_token_len":124,"char_repetition_ratio":0.1910569,"word_repetition_ratio":0.0,"special_character_ratio":0.15927419,"punctuation_ratio":0.027777778,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9736755,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-02T03:00:38Z\",\"WARC-Record-ID\":\"<urn:uuid:cacd4102-7c12-4152-91b8-cb53dc36adb8>\",\"Content-Length\":\"27787\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:58864a71-82dd-44be-b5f7-570ce2aee3cb>\",\"WARC-Concurrent-To\":\"<urn:uuid:63a31710-f605-4a31-8283-7451ae09d181>\",\"WARC-IP-Address\":\"52.33.215.255\",\"WARC-Target-URI\":\"http://gestureml.org/doku.php/customizing_gesture_dimensions\",\"WARC-Payload-Digest\":\"sha1:OHDXKCJ2YX4CADBTYYUJ5GKL3CSFEFFZ\",\"WARC-Block-Digest\":\"sha1:WPL4M446IIRT6LVBOS4Y35C3IGVBQLJH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103983398.56_warc_CC-MAIN-20220702010252-20220702040252-00208.warc.gz\"}"}
https://stats.stackexchange.com/questions/49226/how-to-interpret-f-measure-values
[ "# How to interpret F-measure values?\n\nI would like to know how to interpret a difference of f-measure values. I know that f-measure is a balanced mean between precision and recall, but I am asking about the practical meaning of a difference in F-measures.\n\nFor example, if a classifier C1 has an accuracy of 0.4 and another classifier C2 an accuracy of 0.8, then we can say that C2 has correctly classified the double of test examples compared to C1. However, if a classifier C1 has an F-measure of 0.4 for a certain class and another classifier C2 an F-measure of 0.8, what can we state about the difference in performance of the 2 classifiers ? Can we say that C2 has classified X more instances correctly that C1 ?\n\n• I'm not sure you can say much since the F-measure is function of both precision and recall: en.wikipedia.org/wiki/F1_score. You can do the math though and hold one (either precision or recall) constant and say something about the other. – Nick Feb 4 '13 at 16:35\n\nI cannot think of an intuitive meaning of the F measure, because it's just a combined metric. What's more intuitive than F-mesure, of course, is precision and recall.\n\nBut using two values, we often cannot determine if one algorithm is superior to another. For example, if one algorithm has higher precision but lower recall than other, how can you tell which algorithm is better?\n\nIf you have a specific goal in your mind like 'Precision is the king. I don't care much about recall', then there's no problem. Higher precision is better. But if you don't have such a strong goal, you will want a combined metric. That's F-measure. By using it, you will compare some of precision and some of recall.\n\nThe ROC curve is often drawn stating the F-measure. You may find this article interesting as it contains explanation on several measures including ROC curves: http://binf.gmu.edu/mmasso/ROC101.pdf\n\nThe importance of the F1 score is different based on the scenario. Lets assume the target variable is a binary label.\n\n• Balanced class: In this situation, the F1 score can effectively be ignored, the mis-classification rate is key.\n• Unbalanced class, but both classes are important: If the class distribution is highly skewed (such as 80:20 or 90:10), then a classifier can get a low mis-classification rate simply by choosing the majority class. In such a situation, I would choose the classifier that gets high F1 scores on both classes, as well as low mis-classification rate. A classifier that gets low F1-scores should be overlooked.\n• Unbalanced class, but one class if more important that the other. For e.g. in Fraud detection, it is more important to correctly label an instance as fraudulent, as opposed to labeling the non-fraudulent one. In this case, I would pick the classifier that has a good F1 score only on the important class. Recall that the F1-score is available per class.\n\nF-measure has an intuitive meaning. It tells you how precise your classifier is (how many instances it classifies correctly), as well as how robust it is (it does not miss a significant number of instances).\n\nWith high precision but low recall, you classifier is extremely accurate, but it misses a significant number of instances that are difficult to classify. This is not very useful.\n\nTake a look at this histogram.", null, "Ignore its original purpose.\n\nTowards the right, you get high precision, but low recall. If I only select instances with a score above 0.9, my classified instances will be extremely precise, however I will have missed a significant number of instances. Experiments indicate that the sweet spot here is around 0.76, where the F-measure is 0.87.\n\nThe F-measure is the harmonic mean of your precision and recall. In most situations, you have a trade-off between precision and recall. If you optimize your classifier to increase one and disfavor the other, the harmonic mean quickly decreases. It is greatest however, when both precision and recall are equal.\n\nGiven F-measures of 0.4 and 0.8 for your classifiers, you can expect that these where the maximum values achieved when weighing out precision against recall.\n\nFor visual reference take a look at this figure from Wikipedia:", null, "The F-measure is H, A and B are recall and precision. You can increase one, but then the other decreases.\n\n• I found the \"Crossed Ladders\" visualization to be a bit more straightforward - for me, it makes the equality of A=B resulting in the greatest H more intuitive – Coruscate5 Jul 17 '17 at 17:18\n\nThe formula for F-measure (F1, with beta=1) is the same as the formula giving the equivalent resistance composed of two resistances placed in parallel in physics (forgetting about the factor 2).\n\nThis could give you a possible interpretation, and you can think about both electronic or thermal resistances. This analogy would define F-measure as the equivalent resistance formed by sensitivity and precision placed in parallel.\n\nFor F-measure, the maximum possible is 1, and you loose resistance as soon as one among he two looses resistance as well (that is too say, get a value below 1). If you want to understand better this quantity and its dynamic, think about the physic phenomenon. For example, it appears that the F-measure <= max(sensitivity, precision).\n\nWith precision on the y-axis and recall on the x-axis, the slope of the level curve $F_{\\beta}$ at (1, 1) is $-1/\\beta^2$.\n\nGiven $$P = \\frac{TP}{TP+FP}$$ and $$R = \\frac{TP}{TP+FN}$$, let $\\alpha$ be the ratio of the cost of false negatives to false positives. Then total cost of error is proportional to $$\\alpha \\frac{1-R}{R} + \\frac{1-P}{P}.$$ So the slope of the level curve at (1, 1) is $-\\alpha$. Therefore, for good models using the $F_{\\beta}$ implies you consider false negatives $\\beta^2$ times more costly than false positives.\n\nyou can write the F-measure equation http://e.hiphotos.baidu.com/baike/s%3D118/sign=e8083e4396dda144de0968b38ab6d009/f2deb48f8c5494ee14c095492cf5e0fe98257e84.jpg in another way, like $$F_\\beta=1/((\\beta^2/(\\beta^2+1))1/r+(1/(\\beta^2+1))1/p)$$ so, when $β^2<1$, $p$ should be more important (or, larger, to get a higher $F_\\beta$).\n\nThe closest intuitive meaning of the f1-score is being perceived as the mean of the recall and the precision. Let's clear it for you :\n\nIn a classification task, you may be planning to build a classifier with high precision AND recall. For example, a classifier that tells if a person is honest or not.\n\nFor precision, you are able to usually tell accurately how many honest people out there in a given group. In this case, when caring about high precision, you assume that you can misclassify a liar person as honest but not often. In other words, here you are trying to identify liar from honest as a whole group.\n\nHowever, for recall, you will be really concerned if you think a liar person to be honest. For you, this will be a great loss and a big mistake and you don't want to do it again. Also, it's okay if you classified someone honest as a liar but your model should never (or mostly not to) claim a liar person as honest. In other words, here you are focusing on a specific class and you are trying not to make a mistake about it.\n\nNow, let take the case where you want your model to (1) precisely identify honest from a liar (precision) (2) identify each person from both classes (recall). Which means that you will select the model that will perform well on both metric.\n\nYou model selection decision will then try to evaluate each model based on the mean of the two metrics. F-Score is the best one that can describe this. Let's have a look on the formula:\n\nRecall: p = tp/(tp+fp)\n\nRecall: r = tp/(tp+fn)\n\nF-score: fscore = 2/(1/r+1/p)\n\nAs you see, the higher recall AND precision, the higher F-score.\n\nKnowing that F1 score is harmonic mean of precision and recall, below is a little brief about them.\n\nI would say Recall is more about false negatives .i.e, Having a higher Recall means there are less FALSE NEGATIVES.\n\n$$\\text{Recall}=\\frac{tp}{tp+fn}$$\n\nAs much as less FN or Zero FN means, your model prediction is really good.\n\nWhereas having higher Precision means, there are less FALSE POSITIVES $$\\text{Precision}=\\frac{tp}{tp+fp}$$\n\nSame here, Less or Zero False Positives means Model prediction is really good." ]
[ null, "https://i.stack.imgur.com/RcSgR.png", null, "https://upload.wikimedia.org/wikipedia/commons/thumb/f/f7/MathematicalMeans.svg/320px-MathematicalMeans.svg.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9588281,"math_prob":0.97202784,"size":1606,"snap":"2019-51-2020-05","text_gpt3_token_len":371,"char_repetition_ratio":0.13420723,"word_repetition_ratio":0.013651877,"special_character_ratio":0.23100872,"punctuation_ratio":0.100294985,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9950814,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,5,null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-17T13:31:22Z\",\"WARC-Record-ID\":\"<urn:uuid:d26e4609-135d-4352-9ead-e52d81b83fc1>\",\"Content-Length\":\"194650\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3ce63b21-da41-4bfd-84d2-896b99ecb3a8>\",\"WARC-Concurrent-To\":\"<urn:uuid:c2cb4e62-e78b-4428-bd52-905813dfa8fb>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/49226/how-to-interpret-f-measure-values\",\"WARC-Payload-Digest\":\"sha1:FRAP3UNUNFQDP7AZ6GT7UJBDQ7Z57Z6O\",\"WARC-Block-Digest\":\"sha1:CDGLSFQZ5ICINJILEE2QUGMNKIWYHVJD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250589560.16_warc_CC-MAIN-20200117123339-20200117151339-00413.warc.gz\"}"}
https://or.stackexchange.com/questions/143/why-are-integer-minimax-problems-hard?noredirect=1
[ "# Why are integer minimax problems hard?\n\nProblems that have a minimax-type structure are notoriously hard to solve. For example, the $$p$$-median problem from facility location (choose $$p$$ facilities to minimize demand-weighted distance to customers) does not have a minimax structure: \\begin{alignat}{2} \\text{minimize} \\quad & \\sum_i \\sum_j h_id_{ij}y_{ij} \\\\ \\text{subject to} \\quad & \\sum_j y_{ij} = 1 &\\quad& \\forall i \\\\ & y_{ij} \\le x_j && \\forall i, j \\\\ & \\sum_j x_j = p \\\\ & x_j, y_{ij} \\in \\{0,1\\} && \\forall i, j \\end{alignat} ($$h_i$$ = demand of customer $$i$$, $$d_{ij}$$ = distance from $$i$$ to $$j$$, $$x_j$$ = 1 if we open facility $$j$$, $$y_{ij}$$ = 1 if we assign $$i$$ to $$j$$.) Its cousin, the $$p$$-center problem (minimize the maximum distance from a customer to its assigned facility), does have a minimax structure: \\begin{alignat}{2} \\text{minimize} \\quad & u \\\\ \\text{subject to} \\quad & \\sum_j y_{ij} = 1 &\\quad& \\forall i \\\\ & y_{ij} \\le x_j && \\forall i, j \\\\ & \\sum_j x_j = p \\\\ & \\sum_j d_{ij}y_{ij} \\le u && \\forall i \\\\ & x_j, y_{ij} \\in \\{0,1\\} && \\forall i, j \\end{alignat} ($$u$$ = maximum distance to assigned facility, among all customers.)\n\nI solved a benchmark 88-node instance of the $$p$$-median problem with $$p=6$$ using CPLEX and it took 0.7 seconds. I solved the same instance of the $$p$$-center problem and it took CPLEX 1,607 seconds.\n\nSimilar effects happen, e.g., for stochastic facility location (minimize expected cost) vs. robust facility location (minimize maximum cost).\n\nI always tell my students, \"these minimax-type problems are just really hard for solvers to solve,\" but I don't have a good explanation for why. (I know their LP relaxations are weak, but again I don't know why this tends to happen.)\n\nSo, why are minimax problems harder to solve, computationally?\n\nAnd, are there any types of automatically generated cuts that one can turn on in commercial solvers that tend to help with minimax problems?\n\n• not fully related, but \" For many classes of problems, formulating the minimax problem leads to an integer program which apparently has no special structure and so is 'difficult' \" jstor.org/stable/pdf/… – independentvariable Jun 2 '19 at 15:34\n• @LarrySnyder610, do you have instances (LP, MPS) you may share? – Marco Lübbecke Feb 4 '20 at 21:52\n• @MarcoLübbecke I have AMPL .mod and .dat files, if that will do. – LarrySnyder610 Feb 4 '20 at 22:38\n\nI can see two reasons why branch-and-bound based solvers can have a hard time solving these problems:\n\n• the linear relaxation may be bad (as stated above);\n\n• these models have typically (exponentially) many optimal solutions, since the cost only depends on a single variable $$y_{ij}$$. Thus, you can move one customer to many centers without changing the cost of a solution.\n\nFor p-center, models that distribute the cost over several variables seem to be much more MIP-solver friendly. At least their linear relaxations are better (see for example Elloumi et al., 2004 ).\n\n• The dependence of the objective on a single variable is a really good insight, I had not thought of that. – LarrySnyder610 Jun 2 '19 at 18:30\n• A possible explanation for why a single variable is bad is that problems with fewer variables in the objective tend to have LP relaxations that are more dual degenerate. Degeneracy can be exploited by the MIP solvers to improve performance in some cases, but in general my experience tells me that more degenerate problems slow down the simplex algorithm and lead to longer overall solve times. – Philipp Christophel Jun 4 '19 at 7:21\n\nThis is going to be a hand-waving argument: perhaps this has been formalized in the literature someplace.\n\nI think the issue is that the linear relaxation is in some sense more compatible with the p-median objective than the p-center problem. Consider the following example (circles are customers; stars are facilities)", null, "For the left hand customers, the distance to the closest facility is 1; for the right hand customer, the closest facility (also on the right) is 3; the distance between the left column and the right column is 6.\n\nFor p-median, everything gets assigned to their closest facility. This is true for the linear relaxation as well.", null, "If you try to assign the right-hand customer to the left-hand facilities, it will cost 6 no matter how you assign it: it is cheaper to assign it to the right-hand facility at cost 3. Assigning fractional solutions doesn't \"cheat\": a good assignment will be pushed toward value 1.\n\nFor p-center, though, there is an advantage in the linear relaxation to fractionally assign things. In our example, the best integer assignment for the right-hand-side customer is still to its closest facility at cost 3. But fractionally, there is a value to assign things to the left hand side:", null, "If you put weight 1/3 on each of the dotted lines, then the p-center cost goes down to 2. In fact, the optimal thing to do is to is to assign weight .2 to each of the dotted lines, leaving .4 for the right hand facility, for a cost of 1.2.\n\nGiven the nasty fractional solution in this almost trivial example, it is not surprising that branch and bound has to work very hard to get to an integer solution.\n\nThis leads to the possibly-useful but not-tremendously-insightful cut of $$u \\ge d^*$$ where $$d^*$$ is the largest minimum distance among all customers to its closest facility.\n\n• Great example. Thanks. – LarrySnyder610 Jun 2 '19 at 17:00\n\nI will give you a little more insight based on my latest experience solving minimax (or maximin) integer programs. Sorry I will be a bit self-citing here.\n\nIndeed, the main reason that can explain the poor behavior of commercial solvers for solving those types of problems is the strong dependence on a single (or a very few) variable for the solution. In p-centre problems, it is the distance from the furthest node to its closest center. If you think a little, you will realize that a large number of the nodes are only noise and do not play any role in the optimization. Thus, the linear relaxation tends to behave poorly because the cost will tend to be split among several variables. In addition, the number of symmetries can be unreasonably large (for a given centre, there might be exponentially many possible assignments with the same cost), which is deadly for branch-and-bound.\n\nNow, I will argue that your point is not TOTALLY true. While plugging a minimax integer model into CPLEX or Gurobi can be a bad idea, there has been some recent research aiming at exploiting the issues described in the paragraph above and exploit them. Think of the p-centre problem. If you KNOW that a lot of points are only noise, why consider them? Chen & Chen (2009) showed that a decremental relaxation algorithm can be used to discard points that are deemed noisy. One then can solve the noise-free problem (small) and later assess whether this partial solution can be extended to a solution of the whole problem, which can be done very efficiently. Contardo, Iori and Kramer (2019, C&OR) showed that, when implemented correctly, and with the addition of a few acceleration mechanisms, problems containing up to 1M points can be solved to proven optimality.\n\nThe minimax diameter clustering problem (MMDCP) has a similar structure. In the MMDCP, one is given a set of n points, a dissimilarity matrix D and an integer k. One seeks to determine k clusters in such a way that the maximum distance between any two points within the same cluster, is minimized. This problem is NP-hard and problems with only a few thousand nodes could be handled by commercial solvers. Aloise and Contardo (2018, JOGO) showed that indeed a similar mechanism to the one used for the p-centre problem can be devised to sample points in an iterative manner. Problems with up to 600k can be solved to proven optimality.\n\nA few months ago I came up with a similar mechanism for discrete p-dispersion problems (pDP). In the pDP, one is given n points, a dissimilarity matrix D and an integer p. The objective is to select p points such that the minimum distance between any pair of selected points is maximized. I devised a clustering mechanism to reduce this problem to a series of smaller pDPs. Problems containing up to 100k nodes can be solved to proven optimality.\n\nSo here is my final opinion: If ever faced to a large-scale minimax (or maximin) integer problem, spend your time thinking on how to reduce that problem to the solution of smaller problems, so as to reduce the noise that makes you problem hard and with a high degree of symmetry. Then devise an algorithm to exploit those observations in an iterative manner.\n\nReferences\n\n• Chen & Chen (2009), New relaxation-based algorithms for the optimal solution of the continuous and discrete p-center problems. C&OR. Can be downloaded from R Chen's website for free\n\n• Aloise & Contardo (2018), A sampling-based exact algorithm for the solution of the minimax diameter clustering problem, JOGO\n\n• Contardo, Iori & Kramer (2019), A scalable exact algorithm for the vertex p-center problem, C&OR\n• Contardo (2019), Decremental clustering for the solution of p-dispersion problems to proven optimality. Cahier du GERAD G-2019-22\n• Very thorough, thank you! – LarrySnyder610 Jun 3 '19 at 15:59\n\nYou may find this paper (On the Complexity of Min-Max Optimization Problems and their Approximation interesting.\n\nAlso, only looking at the $$p$$-median and $$p$$-center examples you shared, I can say that the constraints of $$p$$-center problem (or its space), is equivalent to solving a $$p$$-median problem where $$h_i = 1$$. So, $$p$$-center is solving a series of $$p$$-median problems. Hopefully, this is not a wrong interpretation!\n\n• I agree that you can interpret $p$-center using a series of $p$-median (or even more accurately, max covering or set covering) problems, but of course CPLEX is not actually solving a series of those problem in order to solve $p$-median. – LarrySnyder610 Jun 2 '19 at 17:03" ]
[ null, "https://i.stack.imgur.com/fCvxr.png", null, "https://i.stack.imgur.com/wR85X.png", null, "https://i.stack.imgur.com/muIsF.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8237664,"math_prob":0.9814988,"size":1902,"snap":"2021-21-2021-25","text_gpt3_token_len":576,"char_repetition_ratio":0.12592202,"word_repetition_ratio":0.22857143,"special_character_ratio":0.3212408,"punctuation_ratio":0.110192835,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9932098,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-16T05:05:07Z\",\"WARC-Record-ID\":\"<urn:uuid:af973570-f886-4e11-9835-fe4b6bdb7e16>\",\"Content-Length\":\"193967\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a0da03af-7024-44c0-a545-4bac70039921>\",\"WARC-Concurrent-To\":\"<urn:uuid:3a9494c0-0237-4856-8042-0752d09ac98f>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://or.stackexchange.com/questions/143/why-are-integer-minimax-problems-hard?noredirect=1\",\"WARC-Payload-Digest\":\"sha1:AZLKNB2WJHWDQ3INBIGKT6YHBW3FLYCY\",\"WARC-Block-Digest\":\"sha1:RVU6473JVLQ4A3H5BBQ66QMSYUBFKR5B\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989690.55_warc_CC-MAIN-20210516044552-20210516074552-00358.warc.gz\"}"}
https://pandas.pydata.org/pandas-docs/version/1.0.5/reference/api/pandas.DataFrame.all.html
[ "# pandas.DataFrame.all¶\n\nDataFrame.all(self, axis=0, bool_only=None, skipna=True, level=None, **kwargs)[source]\n\nReturn whether all elements are True, potentially over an axis.\n\nReturns True unless there at least one element within a series or along a Dataframe axis that is False or equivalent (e.g. zero or empty).\n\nParameters\naxis{0 or ‘index’, 1 or ‘columns’, None}, default 0\n\nIndicate which axis or axes should be reduced.\n\n• 0 / ‘index’ : reduce the index, return a Series whose index is the original column labels.\n\n• 1 / ‘columns’ : reduce the columns, return a Series whose index is the original index.\n\n• None : reduce all axes, return a scalar.\n\nbool_onlybool, default None\n\nInclude only boolean columns. If None, will attempt to use everything, then use only boolean data. Not implemented for Series.\n\nskipnabool, default True\n\nExclude NA/null values. If the entire row/column is NA and skipna is True, then the result will be True, as for an empty row/column. If skipna is False, then NA are treated as True, because these are not equal to zero.\n\nlevelint or level name, default None\n\nIf the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.\n\n**kwargsany, default None\n\nAdditional keywords have no effect but might be accepted for compatibility with NumPy.\n\nReturns\nSeries or DataFrame\n\nIf level is specified, then, DataFrame is returned; otherwise, Series is returned.\n\nSeries.all\n\nReturn True if all elements are True.\n\nDataFrame.any\n\nReturn True if one (or more) elements are True.\n\nExamples\n\nSeries\n\n>>> pd.Series([True, True]).all()\nTrue\n>>> pd.Series([True, False]).all()\nFalse\n>>> pd.Series([]).all()\nTrue\n>>> pd.Series([np.nan]).all()\nTrue\n>>> pd.Series([np.nan]).all(skipna=False)\nTrue\n\n\nDataFrames\n\nCreate a dataframe from a dictionary.\n\n>>> df = pd.DataFrame({'col1': [True, True], 'col2': [True, False]})\n>>> df\ncol1 col2\n0 True True\n1 True False\n\n\nDefault behaviour checks if column-wise values all return True.\n\n>>> df.all()\ncol1 True\ncol2 False\ndtype: bool\n\n\nSpecify axis='columns' to check if row-wise values all return True.\n\n>>> df.all(axis='columns')\n0 True\n1 False\ndtype: bool\n\n\nOr axis=None for whether every value is True.\n\n>>> df.all(axis=None)\nFalse" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.61646795,"math_prob":0.78002834,"size":1919,"snap":"2021-43-2021-49","text_gpt3_token_len":473,"char_repetition_ratio":0.13159269,"word_repetition_ratio":0.026936026,"special_character_ratio":0.2558624,"punctuation_ratio":0.17766498,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97085756,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-03T12:53:24Z\",\"WARC-Record-ID\":\"<urn:uuid:7f8cc30f-444e-497b-80b7-3e44076b8f0d>\",\"Content-Length\":\"34314\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:638f79dd-4205-4c54-8a71-7ef268806c9e>\",\"WARC-Concurrent-To\":\"<urn:uuid:932ec99b-f3c1-46a4-b7e3-f1d251f1f6d8>\",\"WARC-IP-Address\":\"104.26.0.204\",\"WARC-Target-URI\":\"https://pandas.pydata.org/pandas-docs/version/1.0.5/reference/api/pandas.DataFrame.all.html\",\"WARC-Payload-Digest\":\"sha1:ORHYKNLP4MMBD3XIQNZBUSEBKLNJ7AA6\",\"WARC-Block-Digest\":\"sha1:COELAMJAUMSXNSOFSII5VINU4RLRL7OP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362879.45_warc_CC-MAIN-20211203121459-20211203151459-00626.warc.gz\"}"}
https://codereview.stackexchange.com/questions/202750/codility-binary-gap-solution-using-indices
[ "# Codility binary gap solution using indices\n\nI am seeking a review of my codility solution.\n\nWhilst the problem is fairly simple, my approach differs to many solutions already on this site. My submission passes 100% of the automated correctness tests.\n\nI am looking for feedback regarding the design of my solution, and any comments in relation to how this submission might be graded by a human.\n\nProblem description:\n\nA binary gap within a positive integer N is any maximal sequence of consecutive zeros that is surrounded by ones at both ends in the binary representation of N.\n\nFor example, number 9 has binary representation 1001 and contains a binary gap of length 2. The number 529 has binary representation 1000010001 and contains two binary gaps: one of length 4 and one of length 3. The number 20 has binary representation 10100 and contains one binary gap of length 1. The number 15 has binary representation 1111 and has no binary gaps. The number 32 has binary representation 100000 and has no binary gaps.\n\nWrite a function:\n\ndef solution(N)\n\nthat, given a positive integer N, returns the length of its longest binary gap. The function should return 0 if N doesn't contain a binary gap.\n\nFor example, given N = 1041 the function should return 5, because N has binary representation 10000010001 and so its longest binary gap is of length 5. Given N = 32 the function should return 0, because N has binary representation '100000' and thus no binary gaps.\n\nWrite an efficient algorithm for the following assumptions:\n\nN is an integer within the range [1..2,147,483,647].\n\nSolution:\n\ndef solution(N):\n\nbit_array = [int(bit) for bit in '{0:08b}'.format(N)]\nindices = [bit for bit, x in enumerate(bit_array) if x == 1]\n\nif len(indices) < 2:\nreturn 0\n\nlengths = [end - beg for beg, end in zip(indices, indices[1:])]\n\nreturn max(lengths) - 1\n\n\n3. you don't need to return early, as you can default max to 1.\n4. You shouldn't need the 8 in your format, as it shouldn't matter if there's padding or not, as it'll be filtered anyway.\ndef solution(N):" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8897457,"math_prob":0.9489211,"size":1784,"snap":"2022-05-2022-21","text_gpt3_token_len":422,"char_repetition_ratio":0.1735955,"word_repetition_ratio":0.013333334,"special_character_ratio":0.27017936,"punctuation_ratio":0.11614731,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9925271,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-21T13:37:39Z\",\"WARC-Record-ID\":\"<urn:uuid:d3aec0d2-2476-4444-bb0f-2e7412bfb312>\",\"Content-Length\":\"229266\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fcb7f413-9139-4a9e-b057-391540f25840>\",\"WARC-Concurrent-To\":\"<urn:uuid:d4e5560a-f5dd-44d8-a385-d27f99c713a6>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://codereview.stackexchange.com/questions/202750/codility-binary-gap-solution-using-indices\",\"WARC-Payload-Digest\":\"sha1:HXB36AB6O3DNHFXT4TMOCEAYE7MK52SJ\",\"WARC-Block-Digest\":\"sha1:SW4UXGZHA4O3G5SR4DOWXPRBUYWNJEZI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662539101.40_warc_CC-MAIN-20220521112022-20220521142022-00568.warc.gz\"}"}
https://www.knowpia.com/knowpedia/Circular_orbit
[ "BREAKING NEWS\n\n## Summary", null, "A circular orbit is depicted in the top-left quadrant of this diagram, where the gravitational potential well of the central mass shows potential energy, and the kinetic energy of the orbital speed is shown in red. The height of the kinetic energy remains constant throughout the constant speed circular orbit.", null, "At the top of the diagram, a satellite in a clockwise circular orbit (yellow spot) launches objects of negligible mass:\n(1 - blue) towards Earth,\n(2 - red) away from Earth,\n(3 - grey) in the direction of travel, and\n(4 - black) backwards of the direction of travel.\n\nDashed ellipses are orbits relative to Earth. Solid curves are perturbations relative to the satellite: in one orbit, (1) and (2) return to the satellite having made a clockwise loop on either side of the satellite. Unintuitively, (3) spirals farther and farther behind whereas (4) spirals ahead.\n\nA circular orbit is the orbit with a fixed distance around the barycenter, that is, in the shape of a circle.\n\nListed below is a circular orbit in astrodynamics or celestial mechanics under standard assumptions. Here the centripetal force is the gravitational force, and the axis mentioned above is the line through the center of the central mass perpendicular to the plane of motion.\n\nIn this case, not only the distance, but also the speed, angular speed, potential and kinetic energy are constant. There is no periapsis or apoapsis. This orbit has no radial version.\n\n## Circular acceleration\n\nTransverse acceleration (perpendicular to velocity) causes change in direction. If it is constant in magnitude and changing in direction with the velocity, circular motion ensues. Taking two derivatives of the particle's coordinates with respect to time gives the centripetal acceleration\n\n$a\\,={\\frac {v^{2}}{r}}\\,={\\omega ^{2}}{r}$", null, "where:\n\n• $v\\,$", null, "is orbital velocity of orbiting body,\n• $r\\,$", null, "is radius of the circle\n• $\\omega \\$", null, "is angular speed, measured in radians per unit time.\n\nThe formula is dimensionless, describing a ratio true for all units of measure applied uniformly across the formula. If the numerical value of $\\mathbf {a}$", null, "is measured in meters per second per second, then the numerical values for $v\\,$", null, "will be in meters per second, $r\\,$", null, "in meters, and $\\omega \\$", null, "in radians per second.\n\n## Velocity\n\nThe speed (or the magnitude of velocity) relative to the central object is constant::30\n\n$v={\\sqrt {GM\\! \\over {r}}}={\\sqrt {\\mu \\over {r}}}$", null, "where:\n\n• $G$", null, ", is the gravitational constant\n• $M$", null, ", is the mass of both orbiting bodies $(M_{1}+M_{2})$", null, ", although in common practice, if the greater mass is significantly larger, the lesser mass is often neglected, with minimal change in the result.\n• $\\mu =GM$", null, ", is the standard gravitational parameter.\n\n## Equation of motion\n\nThe orbit equation in polar coordinates, which in general gives r in terms of θ, reduces to:[clarification needed][citation needed]\n\n$r={{h^{2}} \\over {\\mu }}$", null, "where:\n\n• $h=rv$", null, "is specific angular momentum of the orbiting body.\n\nThis is because $\\mu =rv^{2}$", null, "## Angular speed and orbital period\n\n$\\omega ^{2}r^{3}=\\mu$", null, "Hence the orbital period ($T\\,\\!$", null, ") can be computed as::28\n\n$T=2\\pi {\\sqrt {r^{3} \\over {\\mu }}}$", null, "Compare two proportional quantities, the free-fall time (time to fall to a point mass from rest)\n\n$T_{ff}={\\frac {\\pi }{2{\\sqrt {2}}}}{\\sqrt {r^{3} \\over {\\mu }}}$", null, "(17.7% of the orbital period in a circular orbit)\n\nand the time to fall to a point mass in a radial parabolic orbit\n\n$T_{par}={\\frac {\\sqrt {2}}{3}}{\\sqrt {r^{3} \\over {\\mu }}}$", null, "(7.5% of the orbital period in a circular orbit)\n\nThe fact that the formulas only differ by a constant factor is a priori clear from dimensional analysis.[citation needed]\n\n## Energy\n\nThe specific orbital energy ($\\epsilon \\,$", null, ") is negative, and\n\n$\\epsilon =-{v^{2} \\over {2}}$", null, "$\\epsilon =-{\\mu \\over {2r}}$", null, "Thus the virial theorem:72 applies even without taking a time-average:[citation needed]\n\n• the kinetic energy of the system is equal to the absolute value of the total energy\n• the potential energy of the system is equal to twice the total energy\n\nThe escape velocity from any distance is 2 times the speed in a circular orbit at that distance: the kinetic energy is twice as much, hence the total energy is zero.[citation needed]\n\n## Delta-v to reach a circular orbit\n\nManeuvering into a large circular orbit, e.g. a geostationary orbit, requires a larger delta-v than an escape orbit, although the latter implies getting arbitrarily far away and having more energy than needed for the orbital speed of the circular orbit. It is also a matter of maneuvering into the orbit. See also Hohmann transfer orbit.\n\n## Orbital velocity in general relativity\n\nIn Schwarzschild metric, the orbital velocity for a circular orbit with radius $r$", null, "is given by the following formula:\n\n$v={\\sqrt {\\frac {GM}{r-r_{S}}}}$", null, "where $r_{S}={\\frac {2GM}{c^{2}}}$", null, "is the Schwarzschild radius of the central body.\n\n### Derivation\n\nFor the sake of convenience, the derivation will be written in units in which $c=G=1$", null, ".\n\nThe four-velocity of a body on a circular orbit is given by:\n\n$u^{\\mu }=({\\dot {t}},0,0,{\\dot {\\phi }})$", null, "($r$", null, "is constant on a circular orbit, and the coordinates can be chosen so that $\\theta ={\\frac {\\pi }{2}}$", null, "). The dot above a variable denotes derivation with respect to proper time $\\tau$", null, ".\n\nFor a massive particle, the components of the four-velocity satisfy the following equation:\n\n$\\left(1-{\\frac {2M}{r}}\\right){\\dot {t}}^{2}-r^{2}{\\dot {\\phi }}^{2}=1$", null, "We use the geodesic equation:\n\n${\\ddot {x}}^{\\mu }+\\Gamma _{\\nu \\sigma }^{\\mu }{\\dot {x}}^{\\nu }{\\dot {x}}^{\\sigma }=0$", null, "The only nontrivial equation is the one for $\\mu =r$", null, ". It gives:\n\n${\\frac {M}{r^{2}}}\\left(1-{\\frac {2M}{r}}\\right){\\dot {t}}^{2}-r\\left(1-{\\frac {2M}{r}}\\right){\\dot {\\phi }}^{2}=0$", null, "From this, we get:\n\n${\\dot {\\phi }}^{2}={\\frac {M}{r^{3}}}{\\dot {t}}^{2}$", null, "Substituting this into the equation for a massive particle gives:\n\n$\\left(1-{\\frac {2M}{r}}\\right){\\dot {t}}^{2}-{\\frac {M}{r}}{\\dot {t}}^{2}=1$", null, "Hence:\n\n${\\dot {t}}^{2}={\\frac {r}{r-3M}}$", null, "Assume we have an observer at radius $r$", null, ", who is not moving with respect to the central body, that is, their four-velocity is proportional to the vector $\\partial _{t}$", null, ". The normalization condition implies that it is equal to:\n\n$v^{\\mu }=\\left({\\sqrt {\\frac {r}{r-2M}}},0,0,0\\right)$", null, "The dot product of the four-velocities of the observer and the orbiting body equals the gamma factor for the orbiting body relative to the observer, hence:\n\n$\\gamma =g_{\\mu \\nu }u^{\\mu }v^{\\nu }=\\left(1-{\\frac {2M}{r}}\\right){\\sqrt {\\frac {r}{r-3M}}}{\\sqrt {\\frac {r}{r-2M}}}={\\sqrt {\\frac {r-2M}{r-3M}}}$", null, "This gives the velocity:\n\n$v={\\sqrt {\\frac {M}{r-2M}}}$", null, "Or, in SI units:\n\n$v={\\sqrt {\\frac {GM}{r-r_{S}}}}$", null, "" ]
[ null, "https://upload.wikimedia.org/wikipedia/commons/thumb/9/94/Gravity_Wells_Potential_Plus_Kinetic_Energy_-_Circle-Ellipse-Parabola-Hyperbola.png/250px-Gravity_Wells_Potential_Plus_Kinetic_Energy_-_Circle-Ellipse-Parabola-Hyperbola.png", null, "https://upload.wikimedia.org/wikipedia/commons/thumb/2/28/Counterintuitive_orbital_mechanics.svg/250px-Counterintuitive_orbital_mechanics.svg.png", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/dd69f3df5380ac9fa7c48bc89c4dea038be094f4", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/8b67d1fd725a759a151374b793113d7a78a65da4", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f08ce4d4c86c5b43f36c8435fb598da6471047c6", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/618fc4788f13fcdfe792ddf35ff04c61cfc68d8d", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/1a957216653a9ee0d0133dcefd13fb75e36b8b9d", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/8b67d1fd725a759a151374b793113d7a78a65da4", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f08ce4d4c86c5b43f36c8435fb598da6471047c6", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/618fc4788f13fcdfe792ddf35ff04c61cfc68d8d", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/64dbfbd4469bd36a45abcfa8d472ca02c4ff2e70", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f5f3c8921a3b352de45446a6789b104458c9f90b", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f82cade9898ced02fdd08712e5f0c0151758a0dd", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/4907d22d86fe6ef2dbcf647354a979ace131777f", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/183d080fad6c5e9292cd08bc28897f334189b75f", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/08de66385d483c859b5caa50363912a20c8fbd63", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/5ea3b4ec31fdf35ee7157fdba353cab74e53c67f", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/489be4be8f593b23c760b1792c791f379dbff4e9", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c0e0031d8cb4b8cdf797c135a8dfdb1863953ba1", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/8e6be5af8259cedf1aac5ae3f6a5fad65ec2ef87", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/7cc3266bd3e3e42019cbdc46bb00a3a7828afcdb", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/6227f59b847bafa8d943688f6c2061644ed3415f", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/ed718599828b6ca56ccbdac527a8cf4534261e32", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/587ec0778f84a86a9567b2e296aec908d895ff21", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/335ab1075cf1b9630b30ed615ee6454234d11b34", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/e5e76cdb0d6b54fa8474008d993ab6118356c09b", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/0d1ecb613aa2984f0576f70f86650b7c2a132538", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/214fd2ed1905699f1e642e662a48143952ac3362", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/53f7dca3663de203a8b8e9ae2f552867d733eb12", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/de3cbea945edfbb165d1529d782019861a4907e8", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/10ed05e0438b67af8796c9d2d933267e4bc0d545", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c263138a3e4ce7d72deef6ef08c090ace56c660f", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/abb957be64db8cf054f6c8d8c8719c09458d6b0a", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/de348b85ea9371d4278355de59ca49cf359f8be3", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/4c955d8460b74675948468de6b2899a346497616", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/eb69ae960a68f07bb8d949e39059732721fe5344", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/903d6f67085c92a5b573f3d4657f871e2f5a4a7e", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/875e54161c7d40eaf37954354f705daf522d0e96", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/b57e33cb450448b91ac6a14ecf5f1b12b64e6c43", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f019acb6e0f5f0cc346073abc1d5ca0103cfbfb9", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/f030f0071a0c98552ef8b087ecceb1117605c1cd", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c263138a3e4ce7d72deef6ef08c090ace56c660f", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/26caecdc6e976c004e457969276e411006cc854b", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/ae2a92bf796da4e288c8988f0c292015e84edd18", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c9683c3569f867f347f6479afae2294bc7ebf3c5", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/b1da49a209c0c0cbf6ac659f99d42ac10c5acc0e", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/214fd2ed1905699f1e642e662a48143952ac3362", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.87008685,"math_prob":0.9993074,"size":3578,"snap":"2021-04-2021-17","text_gpt3_token_len":756,"char_repetition_ratio":0.13122551,"word_repetition_ratio":0.00681431,"special_character_ratio":0.1998323,"punctuation_ratio":0.11225997,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99997795,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94],"im_url_duplicate_count":[null,null,null,5,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-21T15:22:41Z\",\"WARC-Record-ID\":\"<urn:uuid:f1f19bd2-8a7f-4e6f-b38c-f97168e83dec>\",\"Content-Length\":\"139019\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:287aad7c-32a9-4c3b-962f-1f8be304710e>\",\"WARC-Concurrent-To\":\"<urn:uuid:602209cd-d106-4a34-9296-5b135aa4b3b5>\",\"WARC-IP-Address\":\"47.88.12.99\",\"WARC-Target-URI\":\"https://www.knowpia.com/knowpedia/Circular_orbit\",\"WARC-Payload-Digest\":\"sha1:ZKRVE7MW4GGEVKA6RMHYESKTLVO7T7UD\",\"WARC-Block-Digest\":\"sha1:5LXUFH4M27FVOANTEPFB7ZKT7GLOXO5M\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039544239.84_warc_CC-MAIN-20210421130234-20210421160234-00067.warc.gz\"}"}
https://physics.stackexchange.com/questions/478040/not-able-to-clearly-understand-the-text-of-a-problem/478072
[ "# Not able to clearly understand the text of a problem [closed]", null, "I've not clearly understood what it's meant by the text of this problem. To me, it seems that I have to show why an object travels AB in less time than ACB. However, if an object travels from A to C, why doesn't it stop at C?\n\n## closed as off-topic by Bob D, John Rennie, Bill N, Kyle Kanos, tpg2114♦May 11 at 7:03\n\nThis question appears to be off-topic. The users who voted to close gave this specific reason:\n\n• \"Homework-like questions should ask about a specific physics concept and show some effort to work through the problem. We want our questions to be useful to the broader community, and to future users. See our meta site for more guidance on how to edit your question to make it better\" – Bob D, John Rennie, tpg2114\nIf this question can be reworded to fit the rules in the help center, please edit the question.\n\n• Would help if I fully understood which is B and C... – Rick May 5 at 13:06\n• The book doesn't show. If we consider the triangle in the figure, i think A is the top-left vertex, B is the top-right vertex and C the remaining vertex – AleQuercia May 5 at 13:18\n• You have to prove why an object travels faster along ACB than along AB according to the text, which says, \"Prove that wherever pt. C is chosen on the arc AB, an object will always get from A to B faster along the slopes ACB than along the original slope AB.\" If an object travels from A to C, it wouldn't stop at C because of the inertia of motion. – Tapi May 5 at 13:24\n• Okay thanks. So is the inertia what i didn't understood. How can i understand deeply why an object at C doesn't stop ? – AleQuercia May 5 at 13:41\n• No change in speed means there must be an elastic collision when it hits C. – Cuspy Code May 5 at 13:54\n\n## 1 Answer\n\nYou have to prove why an object travels faster along ACB than along AB according to the text, which says, \"Prove that wherever pt. C is chosen on the arc AB, an object will always get from A to B faster along the slopes ACB than along the original slope AB.\"\n\nIf an object travels from A to C, it wouldn't stop at C because of the inertia of motion. Moreover, it is a perfectly elastic collision when it hits C because the direction changes but the speed doesn't. So, the object will always move with the same speed through CB as it moved with along AC.\n\nThe speed when traveling from A to C is more than the speed any object would have when traveling from A to B (without external force). This is because the path from A to C is steeper than the path from A to B.\n\nMathematically, if you take the acceleration of the object to be 'a', and take its vertical component, it turns out to be $$a cos\\theta$$, $$\\theta$$ being the angle AB/AC makes with the normal (parallel to the side of the container). Now, $$cos\\theta$$ is more for AC than for AB, because $$cos \\theta$$ decreases as $$\\theta$$ increases. Hence, $$a cos\\theta$$ is more for AC, than for AB. Thus, acceleration is more for AC, making the journey through ACB faster than through AB." ]
[ null, "https://i.stack.imgur.com/zyXpi.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9476149,"math_prob":0.8726183,"size":2976,"snap":"2019-35-2019-39","text_gpt3_token_len":756,"char_repetition_ratio":0.122139975,"word_repetition_ratio":0.26296958,"special_character_ratio":0.25369623,"punctuation_ratio":0.10584518,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9709448,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-25T00:19:43Z\",\"WARC-Record-ID\":\"<urn:uuid:56ed93a5-2cf3-4ce7-8f34-84c1c5dbbf2b>\",\"Content-Length\":\"125970\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ecd19ba8-48ad-4d9a-9012-1b9a66088e41>\",\"WARC-Concurrent-To\":\"<urn:uuid:32053562-ff12-4578-83f8-fb1db9c13e08>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/478040/not-able-to-clearly-understand-the-text-of-a-problem/478072\",\"WARC-Payload-Digest\":\"sha1:NFRXUS5DY3RCFRBWN6ESZ5K2NLFEKMUA\",\"WARC-Block-Digest\":\"sha1:ZNMCRF7K4VKBTKTJDB7WAVPLOKZ7IHNE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027322160.92_warc_CC-MAIN-20190825000550-20190825022550-00121.warc.gz\"}"}
https://sei-international.github.io/NemoMod.jl/stable/variables/
[ "# Variables\n\nVariables are the outputs from calculating a scenario. They show the decisions taken to solve the optimization problem. When you calculate a scenario, you can choose which variables to output (see the varstosave argument of calculatescenario). NEMO will then save the selected variables in the scenario database. Each saved variable gets its own table with columns for its dimensions (labeled with NEMO's standard abbreviations - e.g., r for region), a value column (val), and a column indicating the date and time the scenario was solved (solvedtm).\n\n## Nodal vs. non-nodal variables\n\nMany NEMO outputs have \"nodal\" and \"non-nodal\" variants. Nodal variables show results for regions, fuels, technologies, storage, and years involved in transmission modeling - i.e., for cases where capacity, demand, and supply are simulated in a nodal network. To enable transmission modeling, you must define several dimensions and parameters: nodes, transmission lines, TransmissionModelingEnabled, TransmissionCapacityToActivityUnit, NodalDistributionDemand, NodalDistributionStorageCapacity, and NodalDistributionTechnologyCapacity. Non-nodal variables show results for cases where transmission modeling is not enabled.\n\n## Activity\n\n### Annual nodal generation\n\nTotal annual nodal production of a fuel excluding production from storage. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vgenerationannualnodal[n,f,y]\n\n### Annual renewable nodal generation\n\nTotal annual nodal production of a fuel from renewable sources, excluding production from storage. The renewability of production is determined by the RETagTechnology parameter. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vregenerationannualnodal[n,f,y]\n\n### Annual nodal production\n\nTotal annual nodal production of a fuel from all sources. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vproductionannualnodal[n,f,y]\n\n### Annual nodal use\n\nTotal annual nodal use of a fuel. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vuseannualnodal[n,l,f,y]\n\n### Annual non-nodal generation\n\nTotal annual non-nodal production of a fuel excluding production from storage. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vgenerationannualnn[r,f,y]\n\n### Annual renewable non-nodal generation\n\nTotal annual non-nodal production of a fuel from renewable sources, excluding production from storage. The renewability of production is determined by the RETagTechnology parameter. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vregenerationannualnn[r,f,y]\n\n### Annual non-nodal production\n\nTotal annual non-nodal production of a fuel from all sources. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vproductionannualnn[r,f,y]\n\n### Annual non-nodal use\n\nTotal annual non-nodal use of a fuel. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vuseannualnn[r,f,y]\n\n### Annual production by technology\n\nTotal annual production of a fuel by a technology, combining nodal and non-nodal production. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vproductionbytechnologyannual[r,t,f,y]\n\nAnnual trade of a fuel from region r to region rr. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vtradeannual[r,rr,f,y]\n\n### Annual use by technology\n\nAnnual use of a fuel by a technology. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vusebytechnologyannual[r,t,f,y]\n\n### Nodal production\n\nTotal nodal production of a fuel in a time slice, combining all technologies. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vproductionnodal[n,l,f,y]\n\n### Nodal rate of activity\n\nAmount of a technology's capacity in use in a time slice and node. NEMO multiplies the rate of activity by input activity ratios and output activity ratios to determine fuel use and production, respectively. Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vrateofactivitynodal[n,l,t,m,y]\n\n### Nodal rate of production by technology\n\nRate of time-sliced nodal production of a fuel by a technology. Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vrateofproductionbytechnologynodal[n,l,t,f,y]\n\n### Nodal rate of production\n\nRate of total nodal production of a fuel in a time slice, combining all technologies. Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vrateofproductionnodal[n,l,f,y]\n\n### Nodal rate of total activity\n\nNodal rate of activity summed across modes of operation. Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vrateoftotalactivitynodal[n,t,l,y]\n\n### Nodal rate of use by technology\n\nRate of time-sliced nodal use of a fuel by a technology. Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vrateofusebytechnologynodal[n,l,t,f,y]\n\n### Nodal rate of use\n\nRate of total nodal use of a fuel in a time slice, combining all technologies. Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vrateofusenodal[n,l,f,y]\n\n### Nodal use\n\nTotal nodal use of a fuel in a time slice, combining all technologies. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vusenodal[n,l,f,y]\n\n### Non-nodal production\n\nTotal non-nodal production of a fuel in a time slice, combining all technologies. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vproductionnn[r,l,f,y]\n\n### Non-nodal rate of production by technology by mode\n\nRate of time-sliced non-nodal production of a fuel by a technology operating in a mode. Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vrateofproductionbytechnologybymodenn[r,l,t,m,f,y]\n\n### Non-nodal rate of production by technology\n\nRate of time-sliced non-nodal production of a fuel by a technology. Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vrateofproductionbytechnologynn[r,l,t,f,y]\n\n### Non-nodal rate of production\n\nRate of total non-nodal production of a fuel in a time slice, combining all technologies. Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vrateofproductionnn[r,l,f,y]\n\n### Non-nodal rate of use by technology by mode\n\nRate of time-sliced non-nodal use of a fuel by a technology operating in a mode. Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vrateofusebytechnologybymodenn[r,l,t,m,f,y]\n\n### Non-nodal rate of use by technology\n\nRate of time-sliced non-nodal use of a fuel by a technology. Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vrateofusebytechnologynn[r,l,t,f,y]\n\n### Non-nodal rate of use\n\nRate of total non-nodal use of a fuel in a time slice, combining all technologies. Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vrateofusenn[r,l,f,y]\n\n### Non-nodal use\n\nTotal non-nodal use of a fuel in a time slice, combining all technologies. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vusenn[r,l,f,y]\n\n### Production by technology\n\nProduction of a fuel by a technology in a time slice, combining nodal and non-nodal production. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vproductionbytechnology[r,l,t,f,y]\n\n### Rate of activity\n\nAmount of a technology's capacity in use in a time slice (considering both nodal and non-nodal activity). NEMO multiplies the rate of activity by input activity ratios and output activity ratios to determine fuel use and production, respectively. Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vrateofactivity[r,l,t,m,y]\n\n### Rate of production\n\nRate of total production of a fuel in a time slice, combining all technologies and nodal and non-nodal production. Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vrateofproduction[r,l,f,y]\n\n### Rate of total activity\n\nRate of activity summed across modes of operation. Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vrateoftotalactivity[r,t,l,y]\n\n### Rate of use\n\nRate of total use of a fuel in a time slice, combining all technologies and nodal and non-nodal production. Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vrateofuse[r,l,f,y]\n\n### Total technology annual activity by mode\n\nNominal energy produced by a technology in a year when operating in the specified mode. Nominal energy is calculated by multiplying dispatched capacity by the length of time it is dispatched. This variable combines nominal energy due to both nodal and non-nodal activity. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vtotalannualtechnologyactivitybymode[r,t,m,y]\n\n### Total technology annual activity\n\nNominal energy produced by a technology in a year. Nominal energy is calculated by multiplying dispatched capacity by the length of time it is dispatched. This variable combines nominal energy due to both nodal and non-nodal activity. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vtotaltechnologyannualactivity[r,t,y]\n\n### Total technology model period activity\n\nNominal energy produced by a technology during all modeled years. Nominal energy is calculated by multiplying dispatched capacity by the length of time it is dispatched. This variable combines nominal energy due to both nodal and non-nodal activity. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vtotaltechnologymodelperiodactivity[r,t]\n\nTime-sliced trade of a fuel from region r to region rr. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vtrade[r,rr,l,f,y]\n\n### Use by technology\n\nUse of a fuel by a technology in a time slice, combining nodal and non-nodal use. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vusebytechnology[r,l,t,f,y]\n\n## Costs\n\n### Capital investment\n\nUndiscounted investment in new endogenously determined technology capacity, including capital and financing costs. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vcapitalinvestment[r,t,y]\n\n### Capital investment storage\n\nUndiscounted investment in new endogenously determined storage capacity, including capital and financing costs. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vcapitalinvestmentstorage[r,s,y]\n\n### Capital investment transmission\n\nUndiscounted investment in new endogenously determined transmission capacity, including capital and financing costs. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vcapitalinvestmenttransmission[tr,y]\n\n### Discounted capital investment\n\nDiscounted investment in new endogenously determined technology capacity, including capital and financing costs. NEMO discounts the investment to the first year in the scenario's database using the associated region's discount rate. This variable includes adjustments to account for non-modeled years when the calcyears argument of calculatescenario or writescenariomodel is invoked. See Calculating selected years for details. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vdiscountedcapitalinvestment[r,t,y]\n\n### Discounted capital investment storage\n\nDiscounted investment in new endogenously determined storage capacity, including capital and financing costs. NEMO discounts the investment to the first year in the scenario's database using the associated region's discount rate. This variable includes adjustments to account for non-modeled years when the calcyears argument of calculatescenario or writescenariomodel is invoked. See Calculating selected years for details. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vdiscountedcapitalinvestmentstorage[r,s,y]\n\n### Discounted capital investment transmission\n\nDiscounted investment in new endogenously determined transmission capacity, including capital and financing costs. NEMO discounts the investment to the first year in the scenario's database using the discount rate for the region containing the transmission line's first node. This variable includes adjustments to account for non-modeled years when the calcyears argument of calculatescenario or writescenariomodel is invoked. See Calculating selected years for details. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vdiscountedcapitalinvestmenttransmission[tr,y]\n\n### Emission penalty by emission\n\nUndiscounted cost of annual technology emissions. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vannualtechnologyemissionpenaltybyemission[r,t,e,y]\n\n### Emission penalty\n\nUndiscounted total emission costs associated with a technology (i.e., summing across emissions). Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vannualtechnologyemissionspenalty[r,t,y]\n\n### Discounted emission penalty\n\nDiscounted total emission costs associated with a technology (i.e., summing across emissions). NEMO discounts the costs to the first year in the scenario's database using the associated region's discount rate. This variable includes adjustments to account for non-modeled years when the calcyears argument of calculatescenario or writescenariomodel is invoked. See Calculating selected years for details. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vdiscountedtechnologyemissionspenalty[r,t,y]\n\n### Financing cost\n\nFinancing cost incurred for new endogenously built technology capacity. NEMO calculates this cost by assuming that capital costs for the capacity are financed at the technology's interest rate and repaid in equal installments over the capacity's lifetime. This variable provides the total financing cost over the lifetime, discounted to the capacity's installation year. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vfinancecost[r,t,y]\n\n### Financing cost storage\n\nFinancing cost incurred for new endogenously built storage capacity. NEMO calculates this cost by assuming that capital costs for the capacity are financed at the storage's interest rate and repaid in equal installments over the capacity's lifetime. This variable provides the total financing cost over the lifetime, discounted to the capacity's installation year. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vfinancecoststorage[r,s,y]\n\n### Financing cost transmission\n\nFinancing cost incurred for new endogenously built transmission capacity. NEMO calculates this cost by assuming that capital costs for the capacity are financed at the transmission line's interest rate and repaid in equal installments over the capacity's lifetime. This variable provides the total financing cost over the lifetime, discounted to the capacity's installation year. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vfinancecosttransmission[tr,y]\n\n### Model period cost by region\n\nSum of all discounted costs in a region during the modeled years. Includes technology, storage, and transmission costs. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vmodelperiodcostbyregion[r]\n\n### Operating cost\n\nSum of fixed and variable operation and maintenance costs for a technology. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: voperatingcost[r,t,y]\n\n### Operating cost transmission\n\nSum of fixed and variable operation and maintenance costs for a transmission line. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: voperatingcosttransmission[tr,y]\n\n### Discounted operating cost\n\nDiscounted operation and maintenance costs for a technology. NEMO discounts the costs to the first year in the scenario's database using the associated region's discount rate. This variable includes adjustments to account for non-modeled years when the calcyears argument of calculatescenario or writescenariomodel is invoked. See Calculating selected years for details. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vdiscountedoperatingcost[r,t,y]\n\n### Discounted operating cost transmission\n\nDiscounted operation and maintenance costs for a transmission line. NEMO discounts the costs to the first year in the scenario's database using the discount rate for the region containing the line's first node. This variable includes adjustments to account for non-modeled years when the calcyears argument of calculatescenario or writescenariomodel is invoked. See Calculating selected years for details. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vdiscountedoperatingcosttransmission[tr,y]\n\n### Fixed operating cost\n\nFixed operation and maintenance costs for a technology. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vannualfixedoperatingcost[r,t,y]\n\n### Variable operating cost\n\nVariable operation and maintenance costs for a technology. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vannualvariableoperatingcost[r,t,y]\n\n### Variable operating cost transmission\n\nVariable operation and maintenance costs for a transmission line. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vvariablecosttransmission[tr,y]\n\n### Variable operating cost transmission by time slice\n\nVariable operation and maintenance costs for a transmission line in a time slice. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vvariablecosttransmissionbyts[tr,l,f,y]\n\n### Salvage value\n\nUndiscounted residual value of capital investment remaining at the end of the modeling period. The DepreciationMethod parameter determines the approach used to calculate salvage value. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vsalvagevalue[r,t,y]\n\n### Salvage value storage\n\nUndiscounted residual value of capital investment storage remaining at the end of the modeling period. The DepreciationMethod parameter determines the approach used to calculate salvage value. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vsalvagevaluestorage[r,s,y]\n\n### Salvage value transmission\n\nUndiscounted residual value of capital investment transmission remaining at the end of the modeling period. The DepreciationMethod parameter determines the approach used to calculate salvage value. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vsalvagevaluetransmission[tr,y]\n\n### Discounted salvage value\n\nDiscounted residual value of capital investment remaining at the end of the modeling period. NEMO discounts the value to the first year in the scenario's database using the associated region's discount rate. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vdiscountedsalvagevalue[r,t,y]\n\n### Discounted salvage value storage\n\nDiscounted residual value of capital investment storage remaining at the end of the modeling period. NEMO discounts the value to the first year in the scenario's database using the associated region's discount rate. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vdiscountedsalvagevaluestorage[r,s,y]\n\n### Discounted salvage value transmission\n\nDiscounted residual value of capital investment transmission remaining at the end of the modeling period. NEMO discounts the value to the first year in the scenario's database using the discount rate for the region containing the transmission line's first node. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vdiscountedsalvagevaluetransmission[tr,y]\n\n### Total discounted cost\n\nSum of all discounted costs in a region and year (technology, storage, and transmission). This variable includes adjustments to account for non-modeled years when the calcyears argument of calculatescenario or writescenariomodel is invoked. See Calculating selected years for details. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vtotaldiscountedcost[r,y]\n\n### Total discounted storage cost\n\nSum of discounted storage costs: vdiscountedcapitalinvestmentstorage - vdiscountedsalvagevaluestorage. This variable includes adjustments to account for non-modeled years when the calcyears argument of calculatescenario or writescenariomodel is invoked. See Calculating selected years for details. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vtotaldiscountedstoragecost[r,s,y]\n\n### Total discounted technology cost\n\nSum of discounted technology costs: vdiscountedoperatingcost + vdiscountedcapitalinvestment + vdiscountedtechnologyemissionspenalty - vdiscountedsalvagevalue. This variable includes adjustments to account for non-modeled years when the calcyears argument of calculatescenario or writescenariomodel is invoked. See Calculating selected years for details. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vtotaldiscountedcostbytechnology[r,t,y]\n\n### Total discounted transmission cost\n\nSum of discounted transmission costs: vdiscountedcapitalinvestmenttransmission - vdiscountedsalvagevaluetransmission + vdiscountedoperatingcosttransmission. This variable includes adjustments to account for non-modeled years when the calcyears argument of calculatescenario or writescenariomodel is invoked. See Calculating selected years for details. Unit: scenario's cost unit.\n\n#### Julia code\n\n• Variable in JuMP model: vtotaldiscountedtransmissioncostbyregion[r,y]\n\n## Demand\n\n### Nodal annual demand\n\nNodal demand summed across time slices. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vdemandannualnodal[n,f,y]\n\n### Non-nodal annual demand\n\nNon-nodal demand summed across time slices. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vdemandannualnn[r,f,y]\n\n### Nodal demand\n\nTime-sliced nodal demand (time-sliced demand is defined with SpecifiedAnnualDemand and SpecifiedDemandProfile). Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vdemandnodal[n,l,f,y]\n\n### Non-nodal demand\n\nTime-sliced non-nodal demand (time-sliced demand is defined with SpecifiedAnnualDemand and SpecifiedDemandProfile). Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vdemandnn[r,l,f,y]\n\n### Non-nodal rate of demand\n\nRate of time-sliced non-nodal demand (time-sliced demand is defined with SpecifiedAnnualDemand and SpecifiedDemandProfile). Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vrateofdemandnn[r,l,f,y]\n\n## Emissions\n\n### Annual technology emissions by mode\n\nAnnual emissions produced by a technology operating in the specified mode. Unit: scenario's emissions unit.\n\n#### Julia code\n\n• Variable in JuMP model: vannualtechnologyemissionbymode[r,t,e,m,y]\n\n### Annual technology emissions\n\nAnnual emissions produced by a technology. Unit: scenario's emissions unit.\n\n#### Julia code\n\n• Variable in JuMP model: vannualtechnologyemission[r,t,e,y]\n\n### Annual emissions\n\nTotal emissions in a year. Includes any exogenously specified emissions (AnnualExogenousEmission parameter). Unit: scenario's emissions unit.\n\n#### Julia code\n\n• Variable in JuMP model: vannualemissions[r,e,y]\n\n### Model period emissions\n\nTotal emissions during all modeled years. Includes any exogenously specified emissions (AnnualExogenousEmission and ModelPeriodExogenousEmission parameters). Unit: scenario's emissions unit.\n\n#### Julia code\n\n• Variable in JuMP model: vmodelperiodemissions[r,e]\n\n## Reserve margin\n\n### Demand needing reserve margin\n\nTotal rate of production of fuels tagged with ReserveMarginTagFuel. This variable is an element in reserve margin calculations. Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vdemandneedingreservemargin[r,l,y]\n\n### Total capacity in reserve margin\n\nTotal technology capacity (combining all technologies) that counts toward meeting the region's reserve margin. Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vtotalcapacityinreservemargin[r,y]\n\n## Storage\n\n### Accumulated new storage capacity\n\nTotal endogenously determined storage capacity existing in a year. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vaccumulatednewstoragecapacity[r,s,y]\n\n### New storage capacity\n\nNew endogenously determined storage capacity added in a year. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vnewstoragecapacity[r,s,y]\n\n### Nodal rate of storage charge\n\nRate of energy stored in nodal storage. Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vrateofstoragechargenodal[n,s,l,y]\n\n### Nodal rate of storage discharge\n\nRate of energy released from nodal storage. Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vrateofstoragedischargenodal[n,s,l,y]\n\n### Nodal storage level time slice end\n\nEnergy in nodal storage at the end of the first hour in a time slice. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vstorageleveltsendnodal[n,s,l,y]\n\n### Nodal storage level time slice group 1 start\n\nEnergy in nodal storage at the start of a time slice group 1. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vstorageleveltsgroup1startnodal[n,s,tg1,y]\n\n### Nodal storage level time slice group 1 end\n\nEnergy in nodal storage at the end of a time slice group 1. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vstorageleveltsgroup1endnodal[n,s,tg1,y]\n\n### Nodal storage level time slice group 2 start\n\nEnergy in nodal storage at the start of a time slice group 2 within a time slice group 1. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vstorageleveltsgroup2startnodal[n,s,tg1,tg2,y]\n\n### Nodal storage level time slice group 2 end\n\nEnergy in nodal storage at the end of a time slice group 2 within a time slice group 1. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vstorageleveltsgroup2endnodal[n,s,tg1,tg2,y]\n\n### Nodal storage level year end\n\nEnergy in nodal storage at the end of a year. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vstoragelevelyearendnodal[n,s,y]\n\n### Non-nodal rate of storage charge\n\nRate of energy stored in non-nodal storage. Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vrateofstoragechargenn[r,s,l,y]\n\n### Non-nodal rate of storage discharge\n\nRate of energy released from non-nodal storage. Unit: region's energy unit / year.\n\n#### Julia code\n\n• Variable in JuMP model: vrateofstoragedischargenn[r,s,l,y]\n\n### Non-nodal storage level time slice end\n\nEnergy in non-nodal storage at the end of the first hour in a time slice. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vstorageleveltsendnn[r,s,l,y]\n\n### Non-nodal storage level time slice group 1 start\n\nEnergy in non-nodal storage at the start of a time slice group 1. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vstorageleveltsgroup1startnn[r,s,tg1,y]\n\n### Non-nodal storage level time slice group 1 end\n\nEnergy in non-nodal storage at the end of a time slice group 1. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vstorageleveltsgroup1endnn[r,s,tg1,y]\n\n### Non-nodal storage level time slice group 2 start\n\nEnergy in non-nodal storage at the start of a time slice group 2 within a time slice group 1. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vstorageleveltsgroup2startnn[r,s,tg1,tg2,y]\n\n### Non-nodal storage level time slice group 2 end\n\nEnergy in non-nodal storage at the end of a time slice group 2 within a time slice group 1. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vstorageleveltsgroup2endnn[r,s,tg1,tg2,y]\n\n### Non-nodal storage level year end\n\nEnergy in non-nodal storage at the end of a year. Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vstoragelevelyearendnn[r,s,y]\n\n### Storage lower limit\n\nMinimum energy in storage (determined by MinStorageCharge and storage capacity). Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vstoragelowerlimit[r,s,y]\n\n### Storage upper limit\n\nMaximum energy in storage (determined by storage capacity). Unit: region's energy unit.\n\n#### Julia code\n\n• Variable in JuMP model: vstorageupperlimit[r,s,y]\n\n## Technology capacity\n\n### Accumulated new capacity\n\nTotal endogenously determined technology capacity existing in a year. Unit: region's power unit.\n\n#### Julia code\n\n• Variable in JuMP model: vaccumulatednewcapacity[r,t,y]\n\n### New capacity\n\nNew endogenously determined technology capacity added in a year. Unit: region's power unit.\n\n#### Julia code\n\n• Variable in JuMP model: vnewcapacity[r,t,y]\n\n### Number of new technology units\n\nNumber of increments of new endogenously determined capacity added for a technology in a year. The size of each increment is set with the CapacityOfOneTechnologyUnit parameter. No unit.\n\n#### Julia code\n\n• Variable in JuMP model: vnumberofnewtechnologyunits[r,t,y]\n\n### Total annual capacity\n\nTotal technology capacity (endogenous and exogenous) existing in a year. Unit: region's power unit.\n\n#### Julia code\n\n• Variable in JuMP model: vtotalcapacityannual[r,t,y]\n\n## Transmission\n\n### Annual transmission\n\nNet annual transmission of a fuel from a node. Accounts for efficiency losses in energy received at the node. Unit: energy unit for region containing node.\n\n#### Julia code\n\n• Variable in JuMP model: vtransmissionannual[n,f,y]\n\n### Transmission built\n\nFraction of a candidate transmission line built in a year. No unit (ranges between 0 and 1). This variable will have an integral value if you do not select the continuoustransmission option when calculating a scenario (see calculatescenario).\n\n#### Julia code\n\n• Variable in JuMP model: vtransmissionbuilt[tr,y]\n\n### Transmission by line\n\nFlow of a fuel through a transmission line (i.e., from the line's first node [n1] to its second node [n2]) in a time slice. Unit: megawatts.\n\n#### Julia code\n\n• Variable in JuMP model: vtransmissionbyline[tr,l,f,y]\n\n### Transmission exists\n\nFraction of a transmission line existing in a year. No unit (ranges between 0 and 1).\n\n#### Julia code\n\n• Variable in JuMP model: vtransmissionexists[tr,y]\n\n### Voltage angle\n\nVoltage angle at a node in a time slice. NEMO only calculates this variable if you enable direct current optimized power flow modeling (see TransmissionModelingEnabled). Unit: radians.\n\n#### Julia code\n\n• Variable in JuMP model: vvoltageangle[n,l,y]" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.751127,"math_prob":0.40131778,"size":29363,"snap":"2023-40-2023-50","text_gpt3_token_len":6981,"char_repetition_ratio":0.23130897,"word_repetition_ratio":0.633218,"special_character_ratio":0.18880905,"punctuation_ratio":0.15925284,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9846543,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T15:11:35Z\",\"WARC-Record-ID\":\"<urn:uuid:5a86001e-4e80-4e7d-ba25-5a0d41346d47>\",\"Content-Length\":\"105028\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7d1b0c75-1d3f-4373-8fd2-b60f3a6cf3cf>\",\"WARC-Concurrent-To\":\"<urn:uuid:e0a762f7-92b8-49d1-9ae3-09a84bf29b03>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"https://sei-international.github.io/NemoMod.jl/stable/variables/\",\"WARC-Payload-Digest\":\"sha1:2YLVO44LKSARUIKITXWVWQPGT3O4KLW7\",\"WARC-Block-Digest\":\"sha1:4IO7XABRMLJRIB3QBROZHS3AURR6JW5Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510300.41_warc_CC-MAIN-20230927135227-20230927165227-00044.warc.gz\"}"}
https://www.jlqwer.com/posts/6217.html
[ "# 两数求和(1)\n\n``1 2``\n\n``3``\n\n``````program p1000(Input,Output);\nvar\na,b,sum:integer;\nbegin\nsum := a+b;\nwrite(sum);\nend.``````\n\n``````#include <iostream>\nusing namespace std;\nint main()\n{\nint a,b;\ncin>>a>>b;\ncout<<a+b<<endl;\nreturn 0;\n}``````" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.67088413,"math_prob":0.8856877,"size":349,"snap":"2021-31-2021-39","text_gpt3_token_len":194,"char_repetition_ratio":0.023188407,"word_repetition_ratio":0.0,"special_character_ratio":0.36103153,"punctuation_ratio":0.21686748,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9615611,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-23T18:29:33Z\",\"WARC-Record-ID\":\"<urn:uuid:7ec79835-5352-4a5b-b2be-5d5801c1f512>\",\"Content-Length\":\"34589\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9bc6ad4b-5db7-48f5-8c2c-c6983c0f2f14>\",\"WARC-Concurrent-To\":\"<urn:uuid:21b2dcf0-f868-49ca-b9ab-c6b281ea6d18>\",\"WARC-IP-Address\":\"49.234.16.84\",\"WARC-Target-URI\":\"https://www.jlqwer.com/posts/6217.html\",\"WARC-Payload-Digest\":\"sha1:FDJMN4NVLG43CP2W4MWCGQISW3YWOH4M\",\"WARC-Block-Digest\":\"sha1:POAZWH2Y5G2KQ2NGER6MAOI6NSFYQOEK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057427.71_warc_CC-MAIN-20210923165408-20210923195408-00571.warc.gz\"}"}
https://math.stackexchange.com/questions/2188712/question-on-difference-between-closedness-and-completeness-of-hilbert-spaces-and?noredirect=1
[ "# Question on difference between closedness and completeness of Hilbert spaces and subspaces of Hilbert spaces?\n\nin Stein's book, we have the following definitions: 1)A subspace S of a Hilbert space H is closed if whenever $(f_n) ⊂ S$ converges to some f ∈ H, then f also belongs to S. 2) The set L^2 is complete if every Cauchy sequence $(f_n)$ in $L^2(\\mathbb R)$ converges to a function $f \\in L^2(\\mathbb R).$\n\nBy assumption a Hilbert space H is complete. Now what is the difference between a subspace S of H being complete and being closed? Isn't any convergent (in the norm) sequence of functions, a cauchy sequence? So checking for closedness S is equivalent to checking for completness of S, ie whether every cauchy sequence in S conveges in S(in the norm)?\n\nI will take any help I can get as this is really confusing me. Thanks.\n\n• Yes, a closed subset of a complete metric space is complete, and conversely, if a subset of a complete metric space is complete, then it's closed. – quasi Mar 16 '17 at 0:59\n• Thanks, so completeness is stronger than closedness? ie if a metric space is complete then it is closed? And so a closed subset of a Hilbert space is Hilbert? – user172377 Mar 16 '17 at 1:13\n• Basically, a subspace of a hilbert space is complete iff it is closed? – user172377 Mar 16 '17 at 1:17\n• Yes, because the hilbert space is complete. – quasi Mar 16 '17 at 1:21\n• Closedness is a relative property, relative to a containing space. Completeness is a property of the space itself. – quasi Mar 16 '17 at 1:22\n\nIf you have $S$ a subspace of a Hilbert space $H$, then $S$ being complete is equivalent to $S$ being closed in $H$.\n\nHowever, completeness is an absolute property, and closedness is a relative property.\n\nFor example, let $c_{00}:=\\{x \\in \\mathbb{R}^{\\mathbb{N}} \\mid x(n) \\neq 0 \\text{ for finitely many terms}\\}$, with inner product $\\langle x,y \\rangle$ given by $\\sum_\\limits n x(n)y(n)$.\n\nConsider the sequence $y_n$ in $c_0$ given by $(y_n)(m)=1/m$ for $m \\leq n$ and $0$ for $m>n$. It is clear that $y_n$ is Cauchy. However, $y_n$ can't converge to any $x \\in c_{00}$, since the distance from $y_n$ to any given $x$ only grows up after $n$ sufficiently large.\n\nNote that the argument was entirely in $c_{00}$. This illustrate that completeness is an intrinsic property.\n\nHowever, we can see $c_{00}$ as a subspace of the Hilbert space $l^2:=\\{x \\in \\mathbb{R}^{\\mathbb{N}} \\mid \\sum\\limits_n (x(n))^2<\\infty\\}$. Now, my sequence $y_n$ previously defined clearly converges to $x$ given by $x(n)=1/n$. And $x$ is clearly not in $c_{00}$, hence $c_{00}$ is not closed (and therefore not complete by the statement in the yellow bar).\n\nThis may seem more straightforward, however it comes at a cost. We must know beforehand some manageable Hilbert space in which our candidate for completeness lives. This is not so easy in general.\n\nNote also that the advantage of completeness relies heavily on the fact that you can assure a sequence will converge by analysing itself, not finding a candidate beforehand. The situation is analogous: the desire for an intrinsic process.\n\nLet's consider extensively a more elementary example, since you seem to know the concept of metric spaces in the comments. Consider the example discussed above: $\\big((0,1),d\\big)$, where $d(x,y)=|x-y|$.\n\nLet $x_n=1/n$. Note that $d(x_n,x_m)=|1/n-1/m|< 1/\\min\\{n,m\\}$. Therefore, $x_n$ is Cauchy. Indeed, given $\\epsilon>0$, take $N$ such that $1/N < \\epsilon$. Therefore, if $n,m>N$, then $\\min\\{n,m\\}>N$ and $d(x_n,x_m)=|1/n-1/m|< 1/\\min\\{n,m\\}<1/N < \\epsilon$.\n\nHowever, $x_n$ does not converge. Indeed, for any $x \\in (0,1)$, we have that there exists $N$ such that $x> 1/N$, and therefore, for $n>N$, $d(x_n,x)=|x-1/n|=x-1/n>x-1/N$. That is, for $\\epsilon:=x-1/N$, there exists no $n$ etc etc.\n\nIt follows, by definition, that $\\big((0,1),d\\big)$ is not complete.\n\nNote that if $(0,1)$ is a subset of another metric space $(X,d')$ for which $d'|_{(0,1) \\times (0,1)}=d$, then given a sequence $x_n \\in (0,1)$ and an element $x\\in (0,1)$,\n\n$x_n$ is a Cauchy sequence in $\\big((0,1),d\\big)$ if and only if $x_n$ is a Cauchy sequence in $\\big(X,d').$\n\nand\n\n$x_n$ converges to $x$ in $\\big((0,1),d\\big)$ if and only if $x_n$ converges to $x$ in $\\big(X,d').$\n\nIt doesn't matter who $X$ is, as long as the induced metric is the metric on $X$. This is precisely why we call completeness an \"absolute\", or \"intrinsic\" property: it doesn't depend on where we are, as long as it induces the structure we originally have (obviously, otherwise it would be senseless to compare).\n\nClosedness is very sensitive to the metric/topology of the ambient space. $(0,1)$ is closed on $\\big((0,1),d\\big)$ (as is any metric space as a subset of itself), but $(0,1)$ is not closed on $\\big(\\mathbb{R},d_{can}\\big)$ for example, even though the metric is the induced one.\n\nTo be very explicit, when we talk about completeness, the following phrase is meaningful:\n\nThe metric space $(X,d)$ is complete.\n\nWhen talking about closedness, we need the following phrase in order to have an entire meaningful information:\n\nThe subset $A \\subset X$ is closed in (X,d).\n\n• I see, so you're advocating it is easier to work with completeness than closedness? Then if we are in a Hilbert space which is complete, why not just throw away closedness idea or definition in the first place and simply work with completeness property? Or the other way around? Jonas explained why closedness might be easier but you say completeness is usually easier to determine? Also could you take a look at my question posed to quasi in the top of the page comments? I would like to hear as much input as possible. – user172377 Mar 16 '17 at 1:34\n• @Socchi Firstly, I am not advocating it is easier one way or the other. It often depends on context and the informations you have at hand. I will add more info to the answer, hang on. – Aloizio Macedo Mar 16 '17 at 1:49\n• @Socchi I've made an update. Please see if there are any doubts left. – Aloizio Macedo Mar 16 '17 at 2:13\n• Thank you so much for the explanations! It is well appreciated. – user172377 Mar 16 '17 at 3:38\n\nYes, for a subspace $X$ of a complete metric space $Y$, $X$ is complete as a metric space with the restricted metric if and only $X$ is closed as a subset of $Y$. Even if $Y$ is not complete, completeness of $X$ with the restricted metric implies that $X$ is closed in $Y$, because, as you said, convergent sequences are Cauchy. (But if $Y$ is not complete, then closedness of $X$ does not imply completeness, as even taking $X=Y$ shows.)\n\nBut when we're working with complete spaces, why make the distinction? (Is that part of your question?) First of all, the equivalence only applies once you're already sitting in a complete space, so for our \"big\" space $Y$, we need to use the concept of completeness. But then for a subspace $X$ of a complete space $Y$, why not check completeness instead of closedness? I would say that part of the reason is that it's easier to work with and show closedness, and then you get completeness for free when it is convenient. To show closedness you start by assuming a sequence converges and then just have to show the limit is in your subspace. Working with completeness for subspaces in general would add another unnecessary step of pointing out that there is a limit of a Cauchy sequence because of completeness of the big space, then you still have to show the limit is also in the subspace.\n\n• Thanks that clears up some of my questions, yes I meant why not just work with completeness all the way through if working in a Hilbert space. Can you take a look at my other question that I wrote as a comment to quasi? – user172377 Mar 16 '17 at 1:31\n• @Socchi: I just responded directly to your comment above, but I also indirectly address that case in the parenthetical at the end of the first paragraph. Every space is closed in itself. However a given metric space may or may not be complete, which as Aloizio Macedo points out is an \"intrinsic\" property not depending on which space we're contained in, but depending only on our space and its metric. – Jonas Meyer Mar 16 '17 at 1:35" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8756243,"math_prob":0.9975029,"size":3595,"snap":"2020-34-2020-40","text_gpt3_token_len":1106,"char_repetition_ratio":0.10554163,"word_repetition_ratio":0.021543985,"special_character_ratio":0.32267037,"punctuation_ratio":0.14481898,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994624,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-20T17:40:53Z\",\"WARC-Record-ID\":\"<urn:uuid:34436566-a31d-4f11-b47c-11e4f45b8029>\",\"Content-Length\":\"172338\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5d9caad4-4ad8-41f5-9238-b47f76f61192>\",\"WARC-Concurrent-To\":\"<urn:uuid:e24e8b03-1a3f-47d2-9b4d-b78bb2559215>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2188712/question-on-difference-between-closedness-and-completeness-of-hilbert-spaces-and?noredirect=1\",\"WARC-Payload-Digest\":\"sha1:ECZXW46IZWNX62KSNZDGNJBN4W3DNJUG\",\"WARC-Block-Digest\":\"sha1:6SXQHZRGWSEWZMFD64VC3VVEJPE7OWR7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400198287.23_warc_CC-MAIN-20200920161009-20200920191009-00402.warc.gz\"}"}
https://www.x-mol.com/paper/cs/tag/152/journal/71602
[ "• arXiv.cs.SC Pub Date : 2020-11-22\nDingkang Wang; Hesong Wang; Fanghui Xiao\n\nIn this paper, we characterized the relationship between Groebner bases and u-bases: any minimal Groebner basis of the syzygy module for n univariate polynomials with respect to the term-over-position monomial order is its u-basis. Moreover, based on the gcd computation, we construct a free basis of the syzygy module by the recursive way. According to this relationship and the constructed free basis\n\n更新日期:2020-11-25\n• arXiv.cs.SC Pub Date : 2020-11-21\nAlexey Ovchinnikov; Anand Pillay; Gleb Pogudin; Thomas Scanlon\n\nStructural identifiability is a property of an ODE model with parameters that allows for the parameters to be determined from continuous noise-free data. This is natural prerequisite for practical identifiability. Conducting multiple independent experiments could make more parameters or functions of parameters identifiable, which is a desirable property to have. How many experiments are sufficient\n\n更新日期:2020-11-25\n• arXiv.cs.SC Pub Date : 2020-11-18\nShinichi Tajima; Katsusuke Nabeshima\n\nGrothendieck point residue is considered in the context of computational complex analysis. A new effective method is proposed for computing Grothendieck point residues mappings and residues. Basic ideas of our approach are the use of Grothendieck local duality and a transformation law for local cohomology classes. A new tool is devised for efficiency to solve the extended ideal membership problems\n\n更新日期:2020-11-19\n• arXiv.cs.SC Pub Date : 2020-11-17\nEvans Doe Ocansey; Carsten Schneider\n\nA non-trivial symbolic machinery is presented that can rephrase algorithmically a finite set of nested hypergeometric products in appropriately designed difference rings. As a consequence, one obtains an alternative representation in terms of one single product defined over a root of unity and nested hypergeometric products which are algebraically independent among each other. In particular, one can\n\n更新日期:2020-11-18\n• arXiv.cs.SC Pub Date : 2020-11-16\nStephen Melczer; Marc Mezzarobba\n\nWe prove solution uniqueness for the genus one Canham variational problem arising in the shape prediction of biomembranes. The proof builds on a result of Yu and Chen that reduces the variational problem to proving non-negativity of a sequence defined by a linear recurrence relation with polynomial coefficients. We combine rigorous numeric analytic continuation of D-finite functions with classic bounds\n\n更新日期:2020-11-17\n• arXiv.cs.SC Pub Date : 2020-11-12\nMaysum Panju; Kourosh Parand; Ali Ghodsi\n\nWe describe a neural-based method for generating exact or approximate solutions to differential equations in the form of mathematical expressions. Unlike other neural methods, our system returns symbolic expressions that can be interpreted directly. Our method uses a neural architecture for learning mathematical expressions to optimize a customizable objective, and is scalable, compact, and easily\n\n更新日期:2020-11-16\n• arXiv.cs.SC Pub Date : 2020-11-12\nRonaldo Garcia; Dan Reznik\n\nWe study more invariants in the elliptic billiard, including those manifested by self-intersected orbits and inversive polygons. We also derive expressions for some entries in \"Eighty New Invariants of N-Periodics in the Elliptic Billiard\" (2020), arXiv:2004.12497.\n\n更新日期:2020-11-16\n• arXiv.cs.SC Pub Date : 2020-11-08\n\nLU-factorization of matrices is one of the fundamental algorithms of linear algebra. The widespread use of supercomputers with distributed memory requires a review of traditional algorithms, which were based on the common memory of a computer. Matrix block recursive algorithms are a class of algorithms that provide coarse-grained parallelization. The block recursive LU factorization algorithm was obtained\n\n更新日期:2020-11-12\n• arXiv.cs.SC Pub Date : 2020-11-08\nFoyez Alauddin\n\nQuadratization is a transform of a system of ODEs with polynomial right-hand side into a system of ODEs with at most quadratic right-hand side via the introduction of new variables. It has been recently used as a pre-processing step for new model order reduction methods, so it is important to keep the number of new variables small. Several algorithms have been designed to search for a quadratization\n\n更新日期:2020-11-12\n• arXiv.cs.SC Pub Date : 2020-11-04\nMaysum Panju; Ali Ghodsi\n\nWhen neural networks are used to solve differential equations, they usually produce solutions in the form of black-box functions that are not directly mathematically interpretable. We introduce a method for generating symbolic expressions to solve differential equations while leveraging deep learning training methods. Unlike existing methods, our system does not require learning a language model over\n\n更新日期:2020-11-06\n• arXiv.cs.SC Pub Date : 2020-11-04\nHoon Hong; James Rohal; Mohab Safey El Din; Eric Schost\n\nA semi-algebraic set is a subset of the real space defined by polynomial equations and inequalities having real coefficients and is a union of finitely many maximally connected components. We consider the problem of deciding whether two given points in a semi-algebraic set are connected; that is, whether the two points lie in the same connected component. In particular, we consider the semi-algebraic\n\n更新日期:2020-11-06\n• arXiv.cs.SC Pub Date : 2020-11-03\nFredrik JohanssonLFANT\n\nCalcium is a C library for real and complex numbers in a form suitable for exact algebraic and symbolic computation. Numbers are represented as elements of fields $\\mathbb{Q}(a_1,\\ldots,a_n)$ where the extensions numbers $a_k$ may be algebraic or transcendental. The system combines efficient field operations with automatic discovery and certification of algebraic relations, resulting in a practical\n\n更新日期:2020-11-04\n• arXiv.cs.SC Pub Date : 2020-11-02\nMarc MezzarobbaPEQUAN\n\nWe develop a toolbox for the error analysis of linear recurrences with constant or polynomial coefficients, based on generating series, Cauchy's method of majorants, and simple results from analytic combinatorics. We illustrate the power of the approach by several nontrivial application examples. Among these examples are a new worst-case analysis of an algorithm for computing Bernoulli numbers, and\n\n更新日期:2020-11-03\n• arXiv.cs.SC Pub Date : 2020-10-30\nGabriel HondetDEDUCTEAM, Inria, LSV, ENS Paris Saclay, CNRS; Frédéric BlanquiDEDUCTEAM, Inria, LSV, ENS Paris Saclay, CNRS\n\nDedukti is a type-checker for the $\\lambda$$\\Pi$-calculus modulo rewriting, an extension of Edinburgh's logicalframework LF where functions and type symbols can be defined by rewrite rules. It thereforecontains an engine for rewriting LF terms and types according to the rewrite rules given by the user.A key component of this engine is the matching algorithm to find which rules can be fired. In thispaper\n\n更新日期:2020-11-02\n• arXiv.cs.SC Pub Date : 2020-10-29\nMichael Kerber; Alexander Rolle\n\nMulti-parameter persistent homology is a recent branch of topological data analysis. In this area, data sets are investigated through the lens of homology with respect to two or more scale parameters. The high computational cost of many algorithms calls for a preprocessing step to reduce the input size. In general, a minimal presentation is the smallest possible representation of a persistence module\n\n更新日期:2020-10-30\n• arXiv.cs.SC Pub Date : 2020-10-26\nRamachandran Anantharaman; Virendra Sule\n\nThis paper proposes a symbolic representation of non-linear maps $F$ in $\\ff^n$ in terms of linear combination of basis functions of a subspace of $(\\ff^n)^0$, the dual space of $\\ff^n$. Using this representation, it is shown that the inverse of $F$ whenever it exists can also be represented in a similar symbolic form using the same basis functions (using different coefficients). This form of representation\n\n更新日期:2020-10-30\n• arXiv.cs.SC Pub Date : 2020-10-28\nXavier Dahan\n\nLet T(x) in k[x] be a monic non-constant polynomial and write R=k[x]/(T) the quotient ring. Consider two bivariate polynomials a(x, y), b(x, y) in R[y]. In a first part, T = p^e is assumed to be the power of an irreducible polynomial p. A new algorithm that computes a minimal lexicographic Groebner basis of the ideal (a, b, p^e), is introduced. A second part extends this algorithm when T is general\n\n更新日期:2020-10-30\n• arXiv.cs.SC Pub Date : 2020-10-23\nHamid Rahkooy; Cristian Vargas Montero\n\nWe study binomiality of the steady state ideals of chemical reaction networks. Considering rate constants as indeterminates, the concept of unconditional binomiality has been introduced and an algorithm based on linear algebra has been proposed in a recent work for reversible chemical reaction networks, which has a polynomial time complexity upper bound on the number of species and reactions. In this\n\n更新日期:2020-10-30\n• arXiv.cs.SC Pub Date : 2020-10-23\nJianjun Wei; Liangyu Chen\n\nWe present an optimized algorithm calculating determinant for multivariate polynomial matrix on GPU. The novel algorithm provides precise determinant for input multivariate polynomial matrix in controllable time. Our approach is based on modular methods and split into Fast Fourier Transformation, Condensation method and Chinese Remainder Theorem where each algorithm is paralleled on GPU. The experiment\n\n更新日期:2020-10-26\n• arXiv.cs.SC Pub Date : 2020-10-21\nDhananjay Ashok; Joseph Scott; Sebastian Wetzel; Maysum Panju; Vijay Ganesh\n\nWe present a novel Auxiliary Truth enhanced Genetic Algorithm (GA) that uses logical or mathematical constraints as a means of data augmentation as well as to compute loss (in conjunction with the traditional MSE), with the aim of increasing both data efficiency and accuracy of symbolic regression (SR) algorithms. Our method, logic-guided genetic algorithm (LGGA), takes as input a set of labelled data\n\n更新日期:2020-10-26\n• arXiv.cs.SC Pub Date : 2020-10-20\nMatthew Francis-Landau; Tim Vieira; Jason Eisner\n\nWe present a scheme for translating logic programs, which may use aggregation and arithmetic, into algebraic expressions that denote bag relations over ground terms of the Herbrand universe. To evaluate queries against these relations, we develop an operational semantics based on term rewriting of the algebraic expressions. This approach can exploit arithmetic identities and recovers a range of useful\n\n更新日期:2020-10-26\n• arXiv.cs.SC Pub Date : 2020-10-20\nNiclas Kruff; Christoph Lüders; Ovidiu Radulescu; Thomas Sturm; Sebastian Walcher\n\nWe present a symbolic algorithmic approach that allows to compute invariant manifolds and corresponding reduced systems for differential equations modeling biological networks which comprise chemical reaction networks for cellular biochemistry, and compartmental models for pharmacology, epidemiology and ecology. Multiple time scales of a given network are obtained by scaling, based on tropical geometry\n\n更新日期:2020-10-26\n• arXiv.cs.SC Pub Date : 2020-10-19\nFrédéric Bihan; Alicia Dickenstein; Jens Forsgård\n\nWe present an optimal version of Descartes' rule of signs to bound the number of positive real roots of a sparse system of polynomial equations in n variables with n+2 monomials. This sharp upper bound is given in terms of the sign variation of a sequence associated to the exponents and the coefficients of the system.\n\n更新日期:2020-10-20\n• arXiv.cs.SC Pub Date : 2020-10-18\nChristoph Koutschan; Elaine Wong\n\nWe discuss the strategies and difficulties of determining a recurrence which a certain polynomial (in the form of a symbolic multiple sum) satisfies. The polynomial comes from an analysis of integral estimators derived via quasi-Monte Carlo methods.\n\n更新日期:2020-10-20\n• arXiv.cs.SC Pub Date : 2020-10-16\nBert Jüttler; Niels Lubbes; Josef Schicho\n\nWe present a method for computing projective isomorphisms between rational surfaces that are given in terms of their parametrizations. The main idea is to reduce the computation of such projective isomorphisms to five base cases by modifying the parametric maps such that the components of the resulting maps have lower degree. Our method can be used to compute affine, Euclidean and M\\\"obius isomorphisms\n\n更新日期:2020-10-19\n• arXiv.cs.SC Pub Date : 2020-10-14\nDong Lu; Dingkang Wang; Fanghui Xiao\n\nThis paper is concerned with the factorization and equivalence problems of multivariate polynomial matrices. We present some new criteria for the existence of matrix factorizations for a class of multivariate polynomial matrices, and obtain a necessary and sufficient condition for the equivalence of a square polynomial matrix and a diagonal matrix. Based on the constructive proof of the new criteria\n\n更新日期:2020-10-16\n• arXiv.cs.SC Pub Date : 2020-10-14\nDong Lu; Dingkang Wang; Fanghui Xiao\n\nThis paper is concerned with factor left prime factorization problems for multivariate polynomial matrices without full row rank. We propose a necessary and sufficient condition for the existence of factor left prime factorizations of a class of multivariate polynomial matrices, and then design an algorithm to compute all factor left prime factorizations if they exist. We implement the algorithm on\n\n更新日期:2020-10-16\n• arXiv.cs.SC Pub Date : 2020-10-14\nDong Lu; Dingkang Wang; Fanghui Xiao\n\nA new necessary and sufficient condition for the existence of minor left prime factorizations of multivariate polynomial matrices without full row rank is presented. The key idea is to establish a relationship between a matrix and its full row rank submatrix. Based on the new result, we propose an algorithm for factorizing matrices and have implemented it on the computer algebra system Maple. Two examples\n\n更新日期:2020-10-16\n• arXiv.cs.SC Pub Date : 2020-10-13\nRui Guo; Ivor Simpson; Thor Magnusson; Chris Kiefer; Dorien Herremans\n\nMany of the music generation systems based on neural networks are fully autonomous and do not offer control over the generation process. In this research, we present a controllable music generation system in terms of tonal tension. We incorporate two tonal tension measures based on the Spiral Array Tension theory into a variational autoencoder model. This allows us to control the direction of the tonal\n\n更新日期:2020-10-14\n• arXiv.cs.SC Pub Date : 2020-10-12\nD. V. Gribanov; N. Yu. Zolotykh\n\nLet a polyhedron $P$ be defined by one of the following ways: (i) $P = \\{x \\in R^n \\colon A x \\leq b\\}$, where $A \\in Z^{(n+k) \\times n}$, $b \\in Z^{(n+k)}$ and $rank\\, A = n$; (ii) $P = \\{x \\in R_+^n \\colon A x = b\\}$, where $A \\in Z^{k \\times n}$, $b \\in Z^{k}$ and $rank\\, A = k$. And let all rank order minors of $A$ be bounded by $\\Delta$ in absolute values. We show that the short rational generating\n\n更新日期:2020-10-13\n• arXiv.cs.SC Pub Date : 2020-10-12\nDaniel F. Scharler; Hans-Peter Schröcker\n\nWe present an algorithm to compute all factorizations into linear factors of univariate polynomials over the split quaternions, provided such a factorization exists. Failure of the algorithm is equivalent to non-factorizability for which we present also geometric interpretations in terms of rulings on the quadric of non-invertible split quaternions. However, suitable real polynomial multiples of split\n\n更新日期:2020-10-13\n• arXiv.cs.SC Pub Date : 2020-10-11\nSarika Jain\n\nMost of the existing techniques to product discovery rely on syntactic approaches, thus ignoring valuable and specific semantic information of the underlying standards during the process. The product data comes from different heterogeneous sources and formats giving rise to the problem of interoperability. Above all, due to the continuously increasing influx of data, the manual labeling is getting\n\n更新日期:2020-10-13\n• arXiv.cs.SC Pub Date : 2020-10-11\nTatsuya Hagino\n\nA theory of data types based on category theory is presented. We organize data types under a new categorical notion of F,G-dialgebras which is an extension of the notion of adjunctions as well as that of T-algebras. T-algebras are also used in domain theory, but while domain theory needs some primitive data types, like products, to start with, we do not need any. Products, coproducts and exponentiations\n\n更新日期:2020-10-13\n• arXiv.cs.SC Pub Date : 2020-10-09\nVincent Neiger; Clément Pernet\n\nThis paper describes an algorithm which computes the characteristic polynomial of a matrix over a field within the same asymptotic complexity, up to constant factors, as the multiplication of two square matrices. Previously, to our knowledge, this was only achieved by resorting to genericity assumptions or randomization techniques, while the best known complexity bound with a general deterministic\n\n更新日期:2020-10-12\n• arXiv.cs.SC Pub Date : 2020-10-07\nFeras A. Saad; Martin C. Rinard; Vikash K. Mansinghka\n\nWe present the Sum-Product Probabilistic Language (SPPL), a new system that automatically delivers exact solutions to a broad range of probabilistic inference queries. SPPL symbolically represents the full distribution on execution traces specified by a probabilistic program using a generalization of sum-product networks. SPPL handles continuous and discrete distributions, many-to-one numerical transformations\n\n更新日期:2020-10-08\n• arXiv.cs.SC Pub Date : 2020-10-07\nSören Laue; Matthias Mitterreiter; Joachim Giesen\n\nComputing derivatives of tensor expressions, also known as tensor calculus, is a fundamental task in machine learning. A key concern is the efficiency of evaluating the expressions and their derivatives that hinges on the representation of these expressions. Recently, an algorithm for computing higher order derivatives of tensor expressions like Jacobians or Hessians has been introduced that is a few\n\n更新日期:2020-10-08\n• arXiv.cs.SC Pub Date : 2020-10-05\nJohannes Siegele; Martin Pfurner; Hans-Peter Schröcker\n\nIn this paper we investigate factorizations of polynomials over the ring of dual quaternions into linear factors. While earlier results assume that the norm polynomial is real (\"motion polynomials\"), we only require the absence of real polynomial factors in the primal part and factorizability of the norm polynomial over the dual numbers into monic quadratic factors. This obviously necessary condition\n\n更新日期:2020-10-06\n• arXiv.cs.SC Pub Date : 2020-10-02\nCharlotte Hardouin; Michael F Singer\n\nWe refine necessary and sufficient conditions for the generating series of a weighted model of a quarter plane walk to be differentially algebraic. In addition, we give algorithms based on the theory of Mordell-Weil lattices, that, for each weighted model, yield polynomial conditions on the weights determining this property of the associated generating series.\n\n更新日期:2020-10-05\n• arXiv.cs.SC Pub Date : 2020-09-29\nMatt KaufmannUniv. of Texas at Austin; J Strother MooreUniv. of Texas at Austin\n\nIterative algorithms are traditionally expressed in ACL2 using recursion. On the other hand, Common Lisp provides a construct, loop, which -- like most programming languages -- provides direct support for iteration. We describe an ACL2 analogue loop$of loop that supports efficient ACL2 programming and reasoning with iteration. 更新日期:2020-09-30 • arXiv.cs.SC Pub Date : 2020-09-29 David M. RussinoffArm We present a methodology for formal verification of arithmetic RTL designs that combines sequential logic equivalence checking with interactive theorem proving. An intermediate model of a Verilog module is hand-coded in Restricted Algorithmic C (RAC), a primitive subset of C augmented by the integer and fixed-point register class templates of Algorithmic C. The model is designed to be as abstract and 更新日期:2020-09-30 • arXiv.cs.SC Pub Date : 2020-09-26 Ye Liu; Yao Wan; Lifang He; Hao Peng; Philip S. Yu Generative commonsense reasoning which aims to empower machines to generate sentences with the capacity of reasoning over a set of concepts is a critical bottleneck for text generation. Even the state-of-the-art pre-trained language generation models struggle at this task and often produce implausible and anomalous sentences. One reason is that they rarely consider incorporating the knowledge graph 更新日期:2020-09-29 • arXiv.cs.SC Pub Date : 2020-09-22 Thomas ProkoschInstitute for Informatics, Ludwig-Maximilian University of Munich, Germany A distributed logic programming language with support for meta-programming and stream processing offers a variety of interesting research problems, such as: How can a versatile and stable data structure for the indexing of a large number of expressions be implemented with simple low-level data structures? Can low-level programming help to reduce the number of occur checks in Robinson's unification 更新日期:2020-09-23 • arXiv.cs.SC Pub Date : 2020-09-22 Tuan Nguyen QuocNational Institute of Informatics; Katsumi InoueNational Institute of Informatics; Chiaki SakamaWakayama University Algebraic characterization of logic programs has received increasing attention in recent years. Researchers attempt to exploit connections between linear algebraic computation and symbolic computation in order to perform logical inference in large scale knowledge bases. This paper proposes further improvement by using sparse matrices to embed logic programs in vector spaces. We show its great power 更新日期:2020-09-23 • arXiv.cs.SC Pub Date : 2020-09-22 Paul TarauUniversity of North Texas; Valeria de PaivaTopos Institute The problem we want to solve is how to generate all theorems of a given size in the implicational fragment of propositional intuitionistic linear logic. We start by filtering for linearity the proof terms associated by our Prolog-based theorem prover for Implicational Intuitionistic Logic. This works, but using for each formula a PSPACE-complete algorithm limits it to very small formulas. We take a 更新日期:2020-09-23 • arXiv.cs.SC Pub Date : 2020-09-15 Spencer BreinerNIST; Blake PollardNIST; Eswaran SubrahmanianCMU; Olivier Marie-RosePrometheus Computing This paper applies operads and functorial semantics to address the problem of failure diagnosis in complex systems. We start with a concrete example, developing a hierarchical interaction model for the Length Scale Interferometer, a high-precision measurement system operated by the US National Institute of Standards and Technology. The model is expressed in terms of combinatorial/diagrammatic structures 更新日期:2020-09-22 • arXiv.cs.SC Pub Date : 2020-09-08 Khalil Ghorbal; Andrew Sogokon Set positive invariance is an important concept in the theory of dynamical systems and one which also has practical applications in areas of computer science, such as formal verification, as well as in control theory. Great progress has been made in understanding positively invariant sets in continuous dynamical systems and powerful computational tools have been developed for reasoning about them; 更新日期:2020-09-22 • arXiv.cs.SC Pub Date : 2020-09-11 Jérémy BerthomieuPolSys; Mohab Safey El DinPolSys Assuming sufficiently many terms of a n-dimensional table defined over a field are given, we aim at guessing the linear recurrence relations with either constant or polynomial coefficients they satisfy. In many applications, the table terms come along with a structure: for instance, they may be zero outside of a cone, they may be built from a Gr{\\\"o}bner basis of an ideal invariant under the action 更新日期:2020-09-14 • arXiv.cs.SC Pub Date : 2020-09-07 Rolf Drechsler Only by formal verification approaches functional correctness can be ensured. While for many circuits fast verification is possible, in other cases the approaches fail. In general no efficient algorithms can be given, since the underlying verification problem is NP-complete. In this paper we prove that for different types of adder circuits polynomial verification can be ensured based on BDDs. While 更新日期:2020-09-08 • arXiv.cs.SC Pub Date : 2020-09-04 Yuki IshiharaXLIM; Tristan VacconXLIM; Kazuhiro Yokoyama Let K be a field equipped with a valuation. Tropical varieties over K can be defined with a theory of Gr{\\\"o}bner bases taking into account the valuation of K. Because of the use of the valuation, the theory of tropical Gr{\\\"o}bner bases has proved to provide settings for computations over polynomial rings over a p-adic field that are more stable than that of classical Gr{\\\"o}bner bases. In this article 更新日期:2020-09-08 • arXiv.cs.SC Pub Date : 2020-09-03 Vladimir P. Gerdt; Daniel Robertz; Yuri A. Blinkov For a wide class of polynomially nonlinear systems of partial differential equations we suggest an algorithmic approach that combines differential and difference algebra to analyze s(trong)-consistency of finite difference approximations. Our approach is applicable to regular solution grids. For the grids of this type we give a new definition of s-consistency for finite difference approximations which 更新日期:2020-09-05 • arXiv.cs.SC Pub Date : 2020-09-03 Xavier DahanXLIM; Tristan VacconXLIM Newton's method is an ubiquitous tool to solve equations, both in the archimedean and non-archimedean settings -- for which it does not really differ. Broyden was the instigator of what is called \"quasi-Newton methods\". These methods use an iteration step where one does not need to compute a complete Jacobian matrix nor its inverse. We provide an adaptation of Broyden's method in a general non-archimedean 更新日期:2020-09-05 • arXiv.cs.SC Pub Date : 2020-09-02 Jean-Charles FaugèrePolSys; George LabahnSCG; Mohab Safey El DinPolSys; Éric SchostSCG; Thi Xuan VuPolSys, SCG Let$\\mathbf{K}$be a field and$\\phi$,$\\mathbf{f} = (f_1, \\ldots, f_s)$in$\\mathbf{K}[x_1, \\dots, x_n]$be multivariate polynomials (with$s < n$) invariant under the action of$\\mathcal{S}_n$, the group of permutations of$\\{1, \\dots, n\\}$. We consider the problem of computing the points at which$\\mathbf{f}$vanish and the Jacobian matrix associated to$\\mathbf{f}, \\phi$is rank deficient provided 更新日期:2020-09-03 • arXiv.cs.SC Pub Date : 2020-09-02 George LabahnSCG; Mohab Safey El DinPolSys; Éric SchostSCG; Thi Xuan VuPolSys, SCG Let$\\mathbf{K}$be a field of characteristic zero with$\\overline{\\mathbf{K}}$its algebraic closure. Given a sequence of polynomials$\\mathbf{g} = (g_1, \\ldots, g_s) \\in \\mathbf{K}[x_1, \\ldots , x_n]^s$and a polynomial matrix$\\mathbf{F} = [f_{i,j}] \\in \\mathbf{K}[x_1, \\ldots, x_n]^{p \\times q}$, with$p \\leq q$, we are interested in determining the isolated points of$V_p(\\mathbf{F},\\mathbf{g})$更新日期:2020-09-03 • arXiv.cs.SC Pub Date : 2020-08-31 Jose CapcoJKU; Mohab Safey El DinPolSys; Josef SchichoRISC Answering connectivity queries in semi-algebraic sets is a long-standing and challenging computational issue with applications in robotics, in particular for the analysis of kinematic singularities. One task there is to compute the number of connected components of the complementary of the singularities of the kinematic map. Another task is to design a continuous path joining two given points lying 更新日期:2020-09-01 • arXiv.cs.SC Pub Date : 2020-08-25 G. DuchampLIPN; Viincel Hoang Ngoc Minh; Vu Nguyen Dinh A Chen generating series, along a path and with respect to$m$differential forms, is a noncommutative series on$m$letters and with coefficients which are holomorphic functions over a simply connected manifold in other words a series with variable (holomorphic) coefficients. Such a series satisfies a first order noncommutative differential equation which is considered, by some authors, as the universal 更新日期:2020-08-26 • arXiv.cs.SC Pub Date : 2020-08-24 Christopher Doris We describe a new arithmetic system for the Magma computer algebra system for working with$p$-adic numbers exactly, in the sense that numbers are represented lazily to infinite$p$-adic precision. This is the first highly featured such implementation. This has the benefits of increasing user-friendliness and speeding up some computations, as well as forcibly producing provable results. We give theoretical 更新日期:2020-08-26 • arXiv.cs.SC Pub Date : 2020-08-24 Przemysław Koprowski The group of singular elements was first introduced by Helmut Hasse and later it has been studied by numerous authors including such well known mathematicians as: Cassels, Furtw\\\"{a}ngler, Hecke, Knebusch, Takagi and of course Hasse himself; to name just a few. The aim of the present paper is to present algorithms that explicitly construct groups of singular and$S$-singular elements (modulo squares) 更新日期:2020-08-25 • arXiv.cs.SC Pub Date : 2020-08-24 Huu Phuoc Le; Mohab Safey El Din; Timo de Wolff Let$\\mathbb{R}$be the field of real numbers. We consider the problem of computing the real isolated points of a real algebraic set in$\\mathbb{R}^n$given as the vanishing set of a polynomial system. This problem plays an important role for studying rigidity properties of mechanism in material designs. In this paper, we design an algorithm which solves this problem. It is based on the computations 更新日期:2020-08-25 • arXiv.cs.SC Pub Date : 2020-08-20 Alin Bostan; Ryuhei Mori We present a simple and fast algorithm for computing the$N$-th term of a given linearly recurrent sequence. Our new algorithm uses$O(\\mathsf{M}(d) \\log N)$arithmetic operations, where$d$is the order of the recurrence, and$\\mathsf{M}(d)$denotes the number of arithmetic operations for computing the product of two polynomials of degree$d\\$. The state-of-the-art algorithm, due to Charles Fiduccia\n\n更新日期:2020-08-21\n• arXiv.cs.SC Pub Date : 2020-08-07\nPhilipp RümmerUppsala University, Sweden\n\nCHC-COMP-20 is the third competition of solvers for Constrained Horn Clauses. In this year, 9 solvers participated at the competition, and were evaluated in four separate tracks on problems in linear integer arithmetic, linear real arithmetic, and arrays. The competition was run in the first week of May 2020 using the StarExec computing cluster. This report gives an overview of the competition design\n\n更新日期:2020-08-10\nContents have been reproduced by permission of the publishers.\ndown\nwechat\nbug" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8492976,"math_prob":0.8721646,"size":32617,"snap":"2020-45-2020-50","text_gpt3_token_len":7376,"char_repetition_ratio":0.15328243,"word_repetition_ratio":0.029479891,"special_character_ratio":0.20026366,"punctuation_ratio":0.10536603,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9642778,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-26T18:09:49Z\",\"WARC-Record-ID\":\"<urn:uuid:d5bb590a-18fc-4bc2-b092-513cc3d9604e>\",\"Content-Length\":\"481347\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3420a87d-59eb-4826-96f0-6d420b8ebd9e>\",\"WARC-Concurrent-To\":\"<urn:uuid:bf2fbe3d-dd8b-4cf9-9492-066c2802f6fd>\",\"WARC-IP-Address\":\"121.199.0.116\",\"WARC-Target-URI\":\"https://www.x-mol.com/paper/cs/tag/152/journal/71602\",\"WARC-Payload-Digest\":\"sha1:MELJJPTRFOVYGYDRJYKZSGPAGYD6ZMRF\",\"WARC-Block-Digest\":\"sha1:4SYPRWAU4RHLUCZ6FTTVI6MASGAEOQVT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141188899.42_warc_CC-MAIN-20201126171830-20201126201830-00114.warc.gz\"}"}
https://www.semanticscholar.org/paper/Chiral-Homology-of-elliptic-curves-and-Zhu's-Ekeren-Heluani/816983d4feefb45eb474f6c75b48e70961714da5
[ "Corpus ID: 55804272\n\n# Chiral Homology of elliptic curves and Zhu's algebra\n\n@article{Ekeren2018ChiralHO,\ntitle={Chiral Homology of elliptic curves and Zhu's algebra},\nauthor={Jethro van Ekeren and Reimundo Heluani},\njournal={arXiv: Quantum Algebra},\nyear={2018}\n}\n• Published 30 March 2018\n• Mathematics\n• arXiv: Quantum Algebra\nWe study the chiral homology of elliptic curves with coefficients in a conformal vertex algebra. Our main result expresses the nodal curve limit of the first chiral homology group in terms of the Hochschild homology of the Zhu algebra of V. A technical result of independent interest regarding the equivalence between the associated graded with respect to Li's filtration and the arc space of the C_2 algebra is presented.\n8 Citations\nCosets of free field algebras via arc spaces\nUsing the invariant theory of arc spaces, we find minimal strong generating sets for certain cosets C of affine vertex algebras inside free field algebras that are related to classical Howe duality.Expand\nOn the nilpotent orbits arising from admissible affine vertex algebras\n• Mathematics\n• 2020\nWe give a simple description of the closure of the nilpotent orbits appearing as associated varieties of admissible affine vertex algebras in terms of primitive ideals.\nSome remarks on associated varieties of vertex operator superalgebras\n• H. Li\n• Mathematics, Physics\n• European Journal of Mathematics\n• 2021\nWe study several families of vertex operator superalgebras from a jet (super)scheme point of view. We provide new examples of vertex algebras which are \"chiralizations\" of their Zhu's PoissonExpand\nSingular Support of a Vertex Algebra and the Arc Space of Its Associated Scheme\n• Mathematics\n• Representations and Nilpotent Orbits of Lie Algebraic Systems\n• 2019\nAttached to a vertex algebra $${\\mathcal V}$$ are two geometric objects. The associated scheme of $${\\mathcal V}$$ is the spectrum of Zhu’s Poisson algebra $$R_{{\\mathcal V}}$$. The singular supportExpand\nFurther 𝑞-series identities and conjectures relating false theta functions and characters\n• Mathematics\n• 2020\nIn this short note, a companion of , we discuss several families of $q$-series identities in connection to false and mock theta functions, characters of modules of vertex algebras, and \"sum ofExpand\nJet schemes, Quantum dilogarithm and Feigin-Stoyanovsky's principal subspaces\n• Mathematics\n• 2020\nWe analyze the structure of the infinite jet algebra, or arc algebra, associated to level one Feigin-Stoyanovsky's principal subspaces. For $A$-series, we show that their Hilbert series can beExpand\nA question of Joseph Ritt from the point of view of vertex algebras\n• Mathematics\n• 2020\nLet $k$ be a field of characteristic zero. This paper studies a problem proposed by Joseph F. Ritt in 1950. Precisely, we prove that (1) If $p\\geq 2$ is an integer, for every integerExpand\nThe singular support of the Ising model\n• Mathematics\n• 2020\nWe prove a new Fermionic quasiparticle sum expression for the character of the Ising model vertex algebra, related to the Jackson-Slater $q$-series identity of Rogers-Ramanujan type and to Nahm sumsExpand" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8164648,"math_prob":0.8722684,"size":5986,"snap":"2021-43-2021-49","text_gpt3_token_len":1392,"char_repetition_ratio":0.1706787,"word_repetition_ratio":0.023153253,"special_character_ratio":0.2006348,"punctuation_ratio":0.05090138,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99140954,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-08T10:38:43Z\",\"WARC-Record-ID\":\"<urn:uuid:0b8639bb-c4aa-4284-ad5f-4a75f947fe20>\",\"Content-Length\":\"301149\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6f9da5ad-6de7-47d6-9c73-29073dde57e6>\",\"WARC-Concurrent-To\":\"<urn:uuid:749563df-0938-4afc-a6cd-debac73679a5>\",\"WARC-IP-Address\":\"13.32.208.7\",\"WARC-Target-URI\":\"https://www.semanticscholar.org/paper/Chiral-Homology-of-elliptic-curves-and-Zhu's-Ekeren-Heluani/816983d4feefb45eb474f6c75b48e70961714da5\",\"WARC-Payload-Digest\":\"sha1:SSA4MHIPOSEP2ONEFITCPFOEWGUJWJ4G\",\"WARC-Block-Digest\":\"sha1:G2FLCN7HJAVDEVBA5U3JGQVMLVZM4AHQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363465.47_warc_CC-MAIN-20211208083545-20211208113545-00037.warc.gz\"}"}
https://pytorch.org/docs/stable/_modules/torchvision/transforms/transforms.html
[ "Shortcuts\n\n# Source code for torchvision.transforms.transforms\n\nimport torch\nimport math\nimport random\nfrom PIL import Image\ntry:\nimport accimage\nexcept ImportError:\naccimage = None\nimport numpy as np\nimport numbers\nimport types\nfrom collections.abc import Sequence, Iterable\nimport warnings\n\nfrom . import functional as F\n\n__all__ = [\"Compose\", \"ToTensor\", \"PILToTensor\", \"ConvertImageDtype\", \"ToPILImage\", \"Normalize\", \"Resize\", \"Scale\",\n\"CenterCrop\", \"Pad\", \"Lambda\", \"RandomApply\", \"RandomChoice\", \"RandomOrder\", \"RandomCrop\",\n\"RandomHorizontalFlip\", \"RandomVerticalFlip\", \"RandomResizedCrop\", \"RandomSizedCrop\", \"FiveCrop\", \"TenCrop\",\n\"LinearTransformation\", \"ColorJitter\", \"RandomRotation\", \"RandomAffine\", \"Grayscale\", \"RandomGrayscale\",\n\"RandomPerspective\", \"RandomErasing\"]\n\n_pil_interpolation_to_str = {\nImage.NEAREST: 'PIL.Image.NEAREST',\nImage.BILINEAR: 'PIL.Image.BILINEAR',\nImage.BICUBIC: 'PIL.Image.BICUBIC',\nImage.LANCZOS: 'PIL.Image.LANCZOS',\nImage.HAMMING: 'PIL.Image.HAMMING',\nImage.BOX: 'PIL.Image.BOX',\n}\n\ndef _get_image_size(img):\nif F._is_pil_image(img):\nreturn img.size\nelif isinstance(img, torch.Tensor) and img.dim() > 2:\nreturn img.shape[-2:][::-1]\nelse:\nraise TypeError(\"Unexpected type {}\".format(type(img)))\n\n[docs]class Compose(object):\n\"\"\"Composes several transforms together.\n\nArgs:\ntransforms (list of Transform objects): list of transforms to compose.\n\nExample:\n>>> transforms.Compose([\n>>> transforms.CenterCrop(10),\n>>> transforms.ToTensor(),\n>>> ])\n\"\"\"\n\ndef __init__(self, transforms):\nself.transforms = transforms\n\ndef __call__(self, img):\nfor t in self.transforms:\nimg = t(img)\nreturn img\n\ndef __repr__(self):\nformat_string = self.__class__.__name__ + '('\nfor t in self.transforms:\nformat_string += '\\n'\nformat_string += ' {0}'.format(t)\nformat_string += '\\n)'\nreturn format_string\n\n[docs]class ToTensor(object):\n\"\"\"Convert a PIL Image or numpy.ndarray to tensor.\n\nConverts a PIL Image or numpy.ndarray (H x W x C) in the range\n[0, 255] to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0]\nif the PIL Image belongs to one of the modes (L, LA, P, I, F, RGB, YCbCr, RGBA, CMYK, 1)\nor if the numpy.ndarray has dtype = np.uint8\n\nIn the other cases, tensors are returned without scaling.\n\"\"\"\n\n[docs] def __call__(self, pic):\n\"\"\"\nArgs:\npic (PIL Image or numpy.ndarray): Image to be converted to tensor.\n\nReturns:\nTensor: Converted image.\n\"\"\"\nreturn F.to_tensor(pic)\n\ndef __repr__(self):\nreturn self.__class__.__name__ + '()'\n\nclass PILToTensor(object):\n\"\"\"Convert a PIL Image to a tensor of the same type.\n\nConverts a PIL Image (H x W x C) to a torch.Tensor of shape (C x H x W).\n\"\"\"\n\ndef __call__(self, pic):\n\"\"\"\nArgs:\npic (PIL Image): Image to be converted to tensor.\n\nReturns:\nTensor: Converted image.\n\"\"\"\nreturn F.pil_to_tensor(pic)\n\ndef __repr__(self):\nreturn self.__class__.__name__ + '()'\n\nclass ConvertImageDtype(object):\n\"\"\"Convert a tensor image to the given dtype and scale the values accordingly\n\nArgs:\ndtype (torch.dtype): Desired data type of the output\n\n.. note::\n\nWhen converting from a smaller to a larger integer dtype the maximum values are **not** mapped exactly.\nIf converted back and forth, this mismatch has no effect.\n\nRaises:\nRuntimeError: When trying to cast :class:torch.float32 to :class:torch.int32 or :class:torch.int64 as\nwell as for trying to cast :class:torch.float64 to :class:torch.int64. These conversions might lead to\noverflow errors since the floating point dtype cannot store consecutive integers over the whole range\nof the integer dtype.\n\"\"\"\n\ndef __init__(self, dtype: torch.dtype) -> None:\nself.dtype = dtype\n\ndef __call__(self, image: torch.Tensor) -> torch.Tensor:\nreturn F.convert_image_dtype(image, self.dtype)\n\n[docs]class ToPILImage(object):\n\"\"\"Convert a tensor or an ndarray to PIL Image.\n\nConverts a torch.*Tensor of shape C x H x W or a numpy ndarray of shape\nH x W x C to a PIL Image while preserving the value range.\n\nArgs:\nmode (PIL.Image mode_): color space and pixel depth of input data (optional).\nIf mode is None (default) there are some assumptions made about the input data:\n- If the input has 4 channels, the mode is assumed to be RGBA.\n- If the input has 3 channels, the mode is assumed to be RGB.\n- If the input has 2 channels, the mode is assumed to be LA.\n- If the input has 1 channel, the mode is determined by the data type (i.e int, float,\nshort).\n\n\"\"\"\ndef __init__(self, mode=None):\nself.mode = mode\n\n[docs] def __call__(self, pic):\n\"\"\"\nArgs:\npic (Tensor or numpy.ndarray): Image to be converted to PIL Image.\n\nReturns:\nPIL Image: Image converted to PIL Image.\n\n\"\"\"\nreturn F.to_pil_image(pic, self.mode)\n\ndef __repr__(self):\nformat_string = self.__class__.__name__ + '('\nif self.mode is not None:\nformat_string += 'mode={0}'.format(self.mode)\nformat_string += ')'\nreturn format_string\n\n[docs]class Normalize(object):\n\"\"\"Normalize a tensor image with mean and standard deviation.\nGiven mean: (mean,...,mean[n]) and std: (std,..,std[n]) for n\nchannels, this transform will normalize each channel of the input\ntorch.*Tensor i.e.,\noutput[channel] = (input[channel] - mean[channel]) / std[channel]\n\n.. note::\nThis transform acts out of place, i.e., it does not mutate the input tensor.\n\nArgs:\nmean (sequence): Sequence of means for each channel.\nstd (sequence): Sequence of standard deviations for each channel.\ninplace(bool,optional): Bool to make this operation in-place.\n\n\"\"\"\n\ndef __init__(self, mean, std, inplace=False):\nself.mean = mean\nself.std = std\nself.inplace = inplace\n\n[docs] def __call__(self, tensor):\n\"\"\"\nArgs:\ntensor (Tensor): Tensor image of size (C, H, W) to be normalized.\n\nReturns:\nTensor: Normalized Tensor image.\n\"\"\"\nreturn F.normalize(tensor, self.mean, self.std, self.inplace)\n\ndef __repr__(self):\nreturn self.__class__.__name__ + '(mean={0}, std={1})'.format(self.mean, self.std)\n\n[docs]class Resize(object):\n\"\"\"Resize the input PIL Image to the given size.\n\nArgs:\nsize (sequence or int): Desired output size. If size is a sequence like\n(h, w), output size will be matched to this. If size is an int,\nsmaller edge of the image will be matched to this number.\ni.e, if height > width, then image will be rescaled to\n(size * height / width, size)\ninterpolation (int, optional): Desired interpolation. Default is\nPIL.Image.BILINEAR\n\"\"\"\n\ndef __init__(self, size, interpolation=Image.BILINEAR):\nassert isinstance(size, int) or (isinstance(size, Iterable) and len(size) == 2)\nself.size = size\nself.interpolation = interpolation\n\ndef __call__(self, img):\n\"\"\"\nArgs:\nimg (PIL Image): Image to be scaled.\n\nReturns:\nPIL Image: Rescaled image.\n\"\"\"\nreturn F.resize(img, self.size, self.interpolation)\n\ndef __repr__(self):\ninterpolate_str = _pil_interpolation_to_str[self.interpolation]\nreturn self.__class__.__name__ + '(size={0}, interpolation={1})'.format(self.size, interpolate_str)\n\n[docs]class Scale(Resize):\n\"\"\"\nNote: This transform is deprecated in favor of Resize.\n\"\"\"\ndef __init__(self, *args, **kwargs):\nwarnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\nsuper(Scale, self).__init__(*args, **kwargs)\n\n[docs]class CenterCrop(object):\n\"\"\"Crops the given PIL Image at the center.\n\nArgs:\nsize (sequence or int): Desired output size of the crop. If size is an\nint instead of sequence like (h, w), a square crop (size, size) is\n\"\"\"\n\ndef __init__(self, size):\nif isinstance(size, numbers.Number):\nself.size = (int(size), int(size))\nelse:\nself.size = size\n\ndef __call__(self, img):\n\"\"\"\nArgs:\nimg (PIL Image): Image to be cropped.\n\nReturns:\nPIL Image: Cropped image.\n\"\"\"\nreturn F.center_crop(img, self.size)\n\ndef __repr__(self):\nreturn self.__class__.__name__ + '(size={0})'.format(self.size)\n\n\"\"\"Pad the given PIL Image on all sides with the given \"pad\" value.\n\nArgs:\npadding (int or tuple): Padding on each border. If a single int is provided this\nis used to pad all borders. If tuple of length 2 is provided this is the padding\non left/right and top/bottom respectively. If a tuple of length 4 is provided\nthis is the padding for the left, top, right and bottom borders\nrespectively.\nfill (int or tuple): Pixel fill value for constant fill. Default is 0. If a tuple of\nlength 3, it is used to fill R, G, B channels respectively.\nThis value is only used when the padding_mode is constant\npadding_mode (str): Type of padding. Should be: constant, edge, reflect or symmetric.\nDefault is constant.\n\n- constant: pads with a constant value, this value is specified with fill\n\n- edge: pads with the last value at the edge of the image\n\n- reflect: pads with reflection of image without repeating the last value on the edge\n\nFor example, padding [1, 2, 3, 4] with 2 elements on both sides in reflect mode\nwill result in [3, 2, 1, 2, 3, 4, 3, 2]\n\n- symmetric: pads with reflection of image repeating the last value on the edge\n\nFor example, padding [1, 2, 3, 4] with 2 elements on both sides in symmetric mode\nwill result in [2, 1, 1, 2, 3, 4, 4, 3]\n\"\"\"\n\nassert isinstance(fill, (numbers.Number, str, tuple))\nassert padding_mode in ['constant', 'edge', 'reflect', 'symmetric']\nraise ValueError(\"Padding must be an int or a 2, or 4 element tuple, not a \" +\n\nself.fill = fill\n\ndef __call__(self, img):\n\"\"\"\nArgs:\nimg (PIL Image): Image to be padded.\n\nReturns:\n\"\"\"\n\ndef __repr__(self):\n\n[docs]class Lambda(object):\n\"\"\"Apply a user-defined lambda as a transform.\n\nArgs:\nlambd (function): Lambda/function to be used for transform.\n\"\"\"\n\ndef __init__(self, lambd):\nassert callable(lambd), repr(type(lambd).__name__) + \" object is not callable\"\nself.lambd = lambd\n\ndef __call__(self, img):\nreturn self.lambd(img)\n\ndef __repr__(self):\nreturn self.__class__.__name__ + '()'\n\nclass RandomTransforms(object):\n\"\"\"Base class for a list of transformations with randomness\n\nArgs:\ntransforms (list or tuple): list of transformations\n\"\"\"\n\ndef __init__(self, transforms):\nassert isinstance(transforms, (list, tuple))\nself.transforms = transforms\n\ndef __call__(self, *args, **kwargs):\nraise NotImplementedError()\n\ndef __repr__(self):\nformat_string = self.__class__.__name__ + '('\nfor t in self.transforms:\nformat_string += '\\n'\nformat_string += ' {0}'.format(t)\nformat_string += '\\n)'\nreturn format_string\n\n[docs]class RandomApply(RandomTransforms):\n\"\"\"Apply randomly a list of transformations with a given probability\n\nArgs:\ntransforms (list or tuple): list of transformations\np (float): probability\n\"\"\"\n\ndef __init__(self, transforms, p=0.5):\nsuper(RandomApply, self).__init__(transforms)\nself.p = p\n\ndef __call__(self, img):\nif self.p < random.random():\nreturn img\nfor t in self.transforms:\nimg = t(img)\nreturn img\n\ndef __repr__(self):\nformat_string = self.__class__.__name__ + '('\nformat_string += '\\n p={}'.format(self.p)\nfor t in self.transforms:\nformat_string += '\\n'\nformat_string += ' {0}'.format(t)\nformat_string += '\\n)'\nreturn format_string\n\n[docs]class RandomOrder(RandomTransforms):\n\"\"\"Apply a list of transformations in a random order\n\"\"\"\ndef __call__(self, img):\norder = list(range(len(self.transforms)))\nrandom.shuffle(order)\nfor i in order:\nimg = self.transforms[i](img)\nreturn img\n\n[docs]class RandomChoice(RandomTransforms):\n\"\"\"Apply single transformation randomly picked from a list\n\"\"\"\ndef __call__(self, img):\nt = random.choice(self.transforms)\nreturn t(img)\n\n[docs]class RandomCrop(object):\n\"\"\"Crop the given PIL Image at a random location.\n\nArgs:\nsize (sequence or int): Desired output size of the crop. If size is an\nint instead of sequence like (h, w), a square crop (size, size) is\nof the image. Default is None, i.e no padding. If a sequence of length\n4 is provided, it is used to pad left, top, right, bottom borders\nrespectively. If a sequence of length 2 is provided, it is used to\ndesired size to avoid raising an exception. Since cropping is done\nafter padding, the padding seems to be done at a random offset.\nfill: Pixel fill value for constant fill. Default is 0. If a tuple of\nlength 3, it is used to fill R, G, B channels respectively.\nThis value is only used when the padding_mode is constant\npadding_mode: Type of padding. Should be: constant, edge, reflect or symmetric. Default is constant.\n\n- constant: pads with a constant value, this value is specified with fill\n\n- edge: pads with the last value on the edge of the image\n\n- reflect: pads with reflection of image (without repeating the last value on the edge)\n\npadding [1, 2, 3, 4] with 2 elements on both sides in reflect mode\nwill result in [3, 2, 1, 2, 3, 4, 3, 2]\n\n- symmetric: pads with reflection of image (repeating the last value on the edge)\n\npadding [1, 2, 3, 4] with 2 elements on both sides in symmetric mode\nwill result in [2, 1, 1, 2, 3, 4, 4, 3]\n\n\"\"\"\n\nif isinstance(size, numbers.Number):\nself.size = (int(size), int(size))\nelse:\nself.size = size\nself.fill = fill\n\n@staticmethod\ndef get_params(img, output_size):\n\"\"\"Get parameters for crop for a random crop.\n\nArgs:\nimg (PIL Image): Image to be cropped.\noutput_size (tuple): Expected output size of the crop.\n\nReturns:\ntuple: params (i, j, h, w) to be passed to crop for random crop.\n\"\"\"\nw, h = _get_image_size(img)\nth, tw = output_size\nif w == tw and h == th:\nreturn 0, 0, h, w\n\ni = random.randint(0, h - th)\nj = random.randint(0, w - tw)\nreturn i, j, th, tw\n\ndef __call__(self, img):\n\"\"\"\nArgs:\nimg (PIL Image): Image to be cropped.\n\nReturns:\nPIL Image: Cropped image.\n\"\"\"\n\n# pad the width if needed\nif self.pad_if_needed and img.size < self.size:\n# pad the height if needed\nif self.pad_if_needed and img.size < self.size:\n\ni, j, h, w = self.get_params(img, self.size)\n\nreturn F.crop(img, i, j, h, w)\n\ndef __repr__(self):\n\n[docs]class RandomHorizontalFlip(torch.nn.Module):\n\"\"\"Horizontally flip the given image randomly with a given probability.\nThe image can be a PIL Image or a torch Tensor, in which case it is expected\nto have [..., H, W] shape, where ... means an arbitrary number of leading\ndimensions\n\nArgs:\np (float): probability of the image being flipped. Default value is 0.5\n\"\"\"\n\ndef __init__(self, p=0.5):\nsuper().__init__()\nself.p = p\n\ndef forward(self, img):\n\"\"\"\nArgs:\nimg (PIL Image or Tensor): Image to be flipped.\n\nReturns:\nPIL Image or Tensor: Randomly flipped image.\n\"\"\"\nif torch.rand(1) < self.p:\nreturn F.hflip(img)\nreturn img\n\ndef __repr__(self):\nreturn self.__class__.__name__ + '(p={})'.format(self.p)\n\n[docs]class RandomVerticalFlip(torch.nn.Module):\n\"\"\"Vertically flip the given PIL Image randomly with a given probability.\nThe image can be a PIL Image or a torch Tensor, in which case it is expected\nto have [..., H, W] shape, where ... means an arbitrary number of leading\ndimensions\n\nArgs:\np (float): probability of the image being flipped. Default value is 0.5\n\"\"\"\n\ndef __init__(self, p=0.5):\nsuper().__init__()\nself.p = p\n\ndef forward(self, img):\n\"\"\"\nArgs:\nimg (PIL Image or Tensor): Image to be flipped.\n\nReturns:\nPIL Image or Tensor: Randomly flipped image.\n\"\"\"\nif torch.rand(1) < self.p:\nreturn F.vflip(img)\nreturn img\n\ndef __repr__(self):\nreturn self.__class__.__name__ + '(p={})'.format(self.p)\n\n[docs]class RandomPerspective(object):\n\"\"\"Performs Perspective transformation of the given PIL Image randomly with a given probability.\n\nArgs:\ninterpolation : Default- Image.BICUBIC\n\np (float): probability of the image being perspectively transformed. Default value is 0.5\n\ndistortion_scale(float): it controls the degree of distortion and ranges from 0 to 1. Default value is 0.5.\n\nfill (3-tuple or int): RGB pixel fill value for area outside the rotated image.\nIf int, it is used for all channels respectively. Default value is 0.\n\"\"\"\n\ndef __init__(self, distortion_scale=0.5, p=0.5, interpolation=Image.BICUBIC, fill=0):\nself.p = p\nself.interpolation = interpolation\nself.distortion_scale = distortion_scale\nself.fill = fill\n\ndef __call__(self, img):\n\"\"\"\nArgs:\nimg (PIL Image): Image to be Perspectively transformed.\n\nReturns:\nPIL Image: Random perspectivley transformed image.\n\"\"\"\nif not F._is_pil_image(img):\nraise TypeError('img should be PIL Image. Got {}'.format(type(img)))\n\nif random.random() < self.p:\nwidth, height = img.size\nstartpoints, endpoints = self.get_params(width, height, self.distortion_scale)\nreturn F.perspective(img, startpoints, endpoints, self.interpolation, self.fill)\nreturn img\n\n@staticmethod\ndef get_params(width, height, distortion_scale):\n\"\"\"Get parameters for perspective for a random perspective transform.\n\nArgs:\nwidth : width of the image.\nheight : height of the image.\n\nReturns:\nList containing [top-left, top-right, bottom-right, bottom-left] of the original image,\nList containing [top-left, top-right, bottom-right, bottom-left] of the transformed image.\n\"\"\"\nhalf_height = int(height / 2)\nhalf_width = int(width / 2)\ntopleft = (random.randint(0, int(distortion_scale * half_width)),\nrandom.randint(0, int(distortion_scale * half_height)))\ntopright = (random.randint(width - int(distortion_scale * half_width) - 1, width - 1),\nrandom.randint(0, int(distortion_scale * half_height)))\nbotright = (random.randint(width - int(distortion_scale * half_width) - 1, width - 1),\nrandom.randint(height - int(distortion_scale * half_height) - 1, height - 1))\nbotleft = (random.randint(0, int(distortion_scale * half_width)),\nrandom.randint(height - int(distortion_scale * half_height) - 1, height - 1))\nstartpoints = [(0, 0), (width - 1, 0), (width - 1, height - 1), (0, height - 1)]\nendpoints = [topleft, topright, botright, botleft]\nreturn startpoints, endpoints\n\ndef __repr__(self):\nreturn self.__class__.__name__ + '(p={})'.format(self.p)\n\n[docs]class RandomResizedCrop(object):\n\"\"\"Crop the given PIL Image to random size and aspect ratio.\n\nA crop of random size (default: of 0.08 to 1.0) of the original size and a random\naspect ratio (default: of 3/4 to 4/3) of the original aspect ratio is made. This crop\nis finally resized to given size.\nThis is popularly used to train the Inception networks.\n\nArgs:\nsize: expected output size of each edge\nscale: range of size of the origin size cropped\nratio: range of aspect ratio of the origin aspect ratio cropped\ninterpolation: Default: PIL.Image.BILINEAR\n\"\"\"\n\ndef __init__(self, size, scale=(0.08, 1.0), ratio=(3. / 4., 4. / 3.), interpolation=Image.BILINEAR):\nif isinstance(size, (tuple, list)):\nself.size = size\nelse:\nself.size = (size, size)\nif (scale > scale) or (ratio > ratio):\nwarnings.warn(\"range should be of kind (min, max)\")\n\nself.interpolation = interpolation\nself.scale = scale\nself.ratio = ratio\n\n@staticmethod\ndef get_params(img, scale, ratio):\n\"\"\"Get parameters for crop for a random sized crop.\n\nArgs:\nimg (PIL Image): Image to be cropped.\nscale (tuple): range of size of the origin size cropped\nratio (tuple): range of aspect ratio of the origin aspect ratio cropped\n\nReturns:\ntuple: params (i, j, h, w) to be passed to crop for a random\nsized crop.\n\"\"\"\nwidth, height = _get_image_size(img)\narea = height * width\n\nfor _ in range(10):\ntarget_area = random.uniform(*scale) * area\nlog_ratio = (math.log(ratio), math.log(ratio))\naspect_ratio = math.exp(random.uniform(*log_ratio))\n\nw = int(round(math.sqrt(target_area * aspect_ratio)))\nh = int(round(math.sqrt(target_area / aspect_ratio)))\n\nif 0 < w <= width and 0 < h <= height:\ni = random.randint(0, height - h)\nj = random.randint(0, width - w)\nreturn i, j, h, w\n\n# Fallback to central crop\nin_ratio = float(width) / float(height)\nif (in_ratio < min(ratio)):\nw = width\nh = int(round(w / min(ratio)))\nelif (in_ratio > max(ratio)):\nh = height\nw = int(round(h * max(ratio)))\nelse: # whole image\nw = width\nh = height\ni = (height - h) // 2\nj = (width - w) // 2\nreturn i, j, h, w\n\ndef __call__(self, img):\n\"\"\"\nArgs:\nimg (PIL Image): Image to be cropped and resized.\n\nReturns:\nPIL Image: Randomly cropped and resized image.\n\"\"\"\ni, j, h, w = self.get_params(img, self.scale, self.ratio)\nreturn F.resized_crop(img, i, j, h, w, self.size, self.interpolation)\n\ndef __repr__(self):\ninterpolate_str = _pil_interpolation_to_str[self.interpolation]\nformat_string = self.__class__.__name__ + '(size={0}'.format(self.size)\nformat_string += ', scale={0}'.format(tuple(round(s, 4) for s in self.scale))\nformat_string += ', ratio={0}'.format(tuple(round(r, 4) for r in self.ratio))\nformat_string += ', interpolation={0})'.format(interpolate_str)\nreturn format_string\n\n[docs]class RandomSizedCrop(RandomResizedCrop):\n\"\"\"\nNote: This transform is deprecated in favor of RandomResizedCrop.\n\"\"\"\ndef __init__(self, *args, **kwargs):\nwarnings.warn(\"The use of the transforms.RandomSizedCrop transform is deprecated, \" +\nsuper(RandomSizedCrop, self).__init__(*args, **kwargs)\n\n[docs]class FiveCrop(object):\n\"\"\"Crop the given PIL Image into four corners and the central crop\n\n.. Note::\nThis transform returns a tuple of images and there may be a mismatch in the number of\ninputs and targets your Dataset returns. See below for an example of how to deal with\nthis.\n\nArgs:\nsize (sequence or int): Desired output size of the crop. If size is an int\ninstead of sequence like (h, w), a square crop of size (size, size) is made.\n\nExample:\n>>> transform = Compose([\n>>> FiveCrop(size), # this is a list of PIL Images\n>>> Lambda(lambda crops: torch.stack([ToTensor()(crop) for crop in crops])) # returns a 4D tensor\n>>> ])\n>>> #In your test loop you can do the following:\n>>> input, target = batch # input is a 5d tensor, target is 2d\n>>> bs, ncrops, c, h, w = input.size()\n>>> result = model(input.view(-1, c, h, w)) # fuse batch size and ncrops\n>>> result_avg = result.view(bs, ncrops, -1).mean(1) # avg over crops\n\"\"\"\n\ndef __init__(self, size):\nself.size = size\nif isinstance(size, numbers.Number):\nself.size = (int(size), int(size))\nelse:\nassert len(size) == 2, \"Please provide only two dimensions (h, w) for size.\"\nself.size = size\n\ndef __call__(self, img):\nreturn F.five_crop(img, self.size)\n\ndef __repr__(self):\nreturn self.__class__.__name__ + '(size={0})'.format(self.size)\n\n[docs]class TenCrop(object):\n\"\"\"Crop the given PIL Image into four corners and the central crop plus the flipped version of\nthese (horizontal flipping is used by default)\n\n.. Note::\nThis transform returns a tuple of images and there may be a mismatch in the number of\ninputs and targets your Dataset returns. See below for an example of how to deal with\nthis.\n\nArgs:\nsize (sequence or int): Desired output size of the crop. If size is an\nint instead of sequence like (h, w), a square crop (size, size) is\nvertical_flip (bool): Use vertical flipping instead of horizontal\n\nExample:\n>>> transform = Compose([\n>>> TenCrop(size), # this is a list of PIL Images\n>>> Lambda(lambda crops: torch.stack([ToTensor()(crop) for crop in crops])) # returns a 4D tensor\n>>> ])\n>>> #In your test loop you can do the following:\n>>> input, target = batch # input is a 5d tensor, target is 2d\n>>> bs, ncrops, c, h, w = input.size()\n>>> result = model(input.view(-1, c, h, w)) # fuse batch size and ncrops\n>>> result_avg = result.view(bs, ncrops, -1).mean(1) # avg over crops\n\"\"\"\n\ndef __init__(self, size, vertical_flip=False):\nself.size = size\nif isinstance(size, numbers.Number):\nself.size = (int(size), int(size))\nelse:\nassert len(size) == 2, \"Please provide only two dimensions (h, w) for size.\"\nself.size = size\nself.vertical_flip = vertical_flip\n\ndef __call__(self, img):\nreturn F.ten_crop(img, self.size, self.vertical_flip)\n\ndef __repr__(self):\nreturn self.__class__.__name__ + '(size={0}, vertical_flip={1})'.format(self.size, self.vertical_flip)\n\n[docs]class LinearTransformation(object):\n\"\"\"Transform a tensor image with a square transformation matrix and a mean_vector computed\noffline.\nGiven transformation_matrix and mean_vector, will flatten the torch.*Tensor and\nsubtract mean_vector from it which is then followed by computing the dot\nproduct with the transformation matrix and then reshaping the tensor to its\noriginal shape.\n\nApplications:\nwhitening transformation: Suppose X is a column vector zero-centered data.\nThen compute the data covariance matrix [D x D] with torch.mm(X.t(), X),\nperform SVD on this matrix and pass it as transformation_matrix.\n\nArgs:\ntransformation_matrix (Tensor): tensor [D x D], D = C x H x W\nmean_vector (Tensor): tensor [D], D = C x H x W\n\"\"\"\n\ndef __init__(self, transformation_matrix, mean_vector):\nif transformation_matrix.size(0) != transformation_matrix.size(1):\nraise ValueError(\"transformation_matrix should be square. Got \" +\n\"[{} x {}] rectangular matrix.\".format(*transformation_matrix.size()))\n\nif mean_vector.size(0) != transformation_matrix.size(0):\nraise ValueError(\"mean_vector should have the same length {}\".format(mean_vector.size(0)) +\n\" as any one of the dimensions of the transformation_matrix [{}]\"\n.format(tuple(transformation_matrix.size())))\n\nself.transformation_matrix = transformation_matrix\nself.mean_vector = mean_vector\n\ndef __call__(self, tensor):\n\"\"\"\nArgs:\ntensor (Tensor): Tensor image of size (C, H, W) to be whitened.\n\nReturns:\nTensor: Transformed image.\n\"\"\"\nif tensor.size(0) * tensor.size(1) * tensor.size(2) != self.transformation_matrix.size(0):\nraise ValueError(\"tensor and transformation matrix have incompatible shape.\" +\n\"[{} x {} x {}] != \".format(*tensor.size()) +\n\"{}\".format(self.transformation_matrix.size(0)))\nflat_tensor = tensor.view(1, -1) - self.mean_vector\ntransformed_tensor = torch.mm(flat_tensor, self.transformation_matrix)\ntensor = transformed_tensor.view(tensor.size())\nreturn tensor\n\ndef __repr__(self):\nformat_string = self.__class__.__name__ + '(transformation_matrix='\nformat_string += (str(self.transformation_matrix.tolist()) + ')')\nformat_string += (\", (mean_vector=\" + str(self.mean_vector.tolist()) + ')')\nreturn format_string\n\n[docs]class ColorJitter(torch.nn.Module):\n\"\"\"Randomly change the brightness, contrast and saturation of an image.\n\nArgs:\nbrightness (float or tuple of float (min, max)): How much to jitter brightness.\nbrightness_factor is chosen uniformly from [max(0, 1 - brightness), 1 + brightness]\nor the given [min, max]. Should be non negative numbers.\ncontrast (float or tuple of float (min, max)): How much to jitter contrast.\ncontrast_factor is chosen uniformly from [max(0, 1 - contrast), 1 + contrast]\nor the given [min, max]. Should be non negative numbers.\nsaturation (float or tuple of float (min, max)): How much to jitter saturation.\nsaturation_factor is chosen uniformly from [max(0, 1 - saturation), 1 + saturation]\nor the given [min, max]. Should be non negative numbers.\nhue (float or tuple of float (min, max)): How much to jitter hue.\nhue_factor is chosen uniformly from [-hue, hue] or the given [min, max].\nShould have 0<= hue <= 0.5 or -0.5 <= min <= max <= 0.5.\n\"\"\"\n\ndef __init__(self, brightness=0, contrast=0, saturation=0, hue=0):\nsuper().__init__()\nself.brightness = self._check_input(brightness, 'brightness')\nself.contrast = self._check_input(contrast, 'contrast')\nself.saturation = self._check_input(saturation, 'saturation')\nself.hue = self._check_input(hue, 'hue', center=0, bound=(-0.5, 0.5),\nclip_first_on_zero=False)\n\[email protected]\ndef _check_input(self, value, name, center=1, bound=(0, float('inf')), clip_first_on_zero=True):\nif isinstance(value, numbers.Number):\nif value < 0:\nraise ValueError(\"If {} is a single number, it must be non negative.\".format(name))\nvalue = [center - float(value), center + float(value)]\nif clip_first_on_zero:\nvalue = max(value, 0.0)\nelif isinstance(value, (tuple, list)) and len(value) == 2:\nif not bound <= value <= value <= bound:\nraise ValueError(\"{} values should be between {}\".format(name, bound))\nelse:\nraise TypeError(\"{} should be a single number or a list/tuple with lenght 2.\".format(name))\n\n# if value is 0 or (1., 1.) for brightness/contrast/saturation\n# or (0., 0.) for hue, do nothing\nif value == value == center:\nvalue = None\nreturn value\n\n@staticmethod\[email protected]\ndef get_params(brightness, contrast, saturation, hue):\n\"\"\"Get a randomized transform to be applied on image.\n\nArguments are same as that of __init__.\n\nReturns:\nTransform which randomly adjusts brightness, contrast and\nsaturation in a random order.\n\"\"\"\ntransforms = []\n\nif brightness is not None:\nbrightness_factor = random.uniform(brightness, brightness)\n\nif contrast is not None:\ncontrast_factor = random.uniform(contrast, contrast)\n\nif saturation is not None:\nsaturation_factor = random.uniform(saturation, saturation)\n\nif hue is not None:\nhue_factor = random.uniform(hue, hue)\n\nrandom.shuffle(transforms)\ntransform = Compose(transforms)\n\nreturn transform\n\ndef forward(self, img):\n\"\"\"\nArgs:\nimg (PIL Image or Tensor): Input image.\n\nReturns:\nPIL Image or Tensor: Color jittered image.\n\"\"\"\nfn_idx = torch.randperm(4)\nfor fn_id in fn_idx:\nif fn_id == 0 and self.brightness is not None:\nbrightness = self.brightness\nbrightness_factor = torch.tensor(1.0).uniform_(brightness, brightness).item()\n\nif fn_id == 1 and self.contrast is not None:\ncontrast = self.contrast\ncontrast_factor = torch.tensor(1.0).uniform_(contrast, contrast).item()\n\nif fn_id == 2 and self.saturation is not None:\nsaturation = self.saturation\nsaturation_factor = torch.tensor(1.0).uniform_(saturation, saturation).item()\n\nif fn_id == 3 and self.hue is not None:\nhue = self.hue\nhue_factor = torch.tensor(1.0).uniform_(hue, hue).item()\n\nreturn img\n\ndef __repr__(self):\nformat_string = self.__class__.__name__ + '('\nformat_string += 'brightness={0}'.format(self.brightness)\nformat_string += ', contrast={0}'.format(self.contrast)\nformat_string += ', saturation={0}'.format(self.saturation)\nformat_string += ', hue={0})'.format(self.hue)\nreturn format_string\n\n[docs]class RandomRotation(object):\n\"\"\"Rotate the image by angle.\n\nArgs:\ndegrees (sequence or float or int): Range of degrees to select from.\nIf degrees is a number instead of sequence like (min, max), the range of degrees\nwill be (-degrees, +degrees).\nresample ({PIL.Image.NEAREST, PIL.Image.BILINEAR, PIL.Image.BICUBIC}, optional):\nAn optional resampling filter. See filters_ for more information.\nIf omitted, or if the image has mode \"1\" or \"P\", it is set to PIL.Image.NEAREST.\nexpand (bool, optional): Optional expansion flag.\nIf true, expands the output to make it large enough to hold the entire rotated image.\nIf false or omitted, make the output image the same size as the input image.\nNote that the expand flag assumes rotation around the center and no translation.\ncenter (2-tuple, optional): Optional center of rotation.\nOrigin is the upper left corner.\nDefault is the center of the image.\nfill (n-tuple or int or float): Pixel fill value for area outside the rotated\nimage. If int or float, the value is used for all bands respectively.\nDefaults to 0 for all bands. This option is only available for pillow>=5.2.0.\n\n\"\"\"\n\ndef __init__(self, degrees, resample=False, expand=False, center=None, fill=None):\nif isinstance(degrees, numbers.Number):\nif degrees < 0:\nraise ValueError(\"If degrees is a single number, it must be positive.\")\nself.degrees = (-degrees, degrees)\nelse:\nif len(degrees) != 2:\nraise ValueError(\"If degrees is a sequence, it must be of len 2.\")\nself.degrees = degrees\n\nself.resample = resample\nself.expand = expand\nself.center = center\nself.fill = fill\n\n@staticmethod\ndef get_params(degrees):\n\"\"\"Get parameters for rotate for a random rotation.\n\nReturns:\nsequence: params to be passed to rotate for random rotation.\n\"\"\"\nangle = random.uniform(degrees, degrees)\n\nreturn angle\n\ndef __call__(self, img):\n\"\"\"\nArgs:\nimg (PIL Image): Image to be rotated.\n\nReturns:\nPIL Image: Rotated image.\n\"\"\"\n\nangle = self.get_params(self.degrees)\n\nreturn F.rotate(img, angle, self.resample, self.expand, self.center, self.fill)\n\ndef __repr__(self):\nformat_string = self.__class__.__name__ + '(degrees={0}'.format(self.degrees)\nformat_string += ', resample={0}'.format(self.resample)\nformat_string += ', expand={0}'.format(self.expand)\nif self.center is not None:\nformat_string += ', center={0}'.format(self.center)\nif self.fill is not None:\nformat_string += ', fill={0}'.format(self.fill)\nformat_string += ')'\nreturn format_string\n\n[docs]class RandomAffine(object):\n\"\"\"Random affine transformation of the image keeping center invariant\n\nArgs:\ndegrees (sequence or float or int): Range of degrees to select from.\nIf degrees is a number instead of sequence like (min, max), the range of degrees\nwill be (-degrees, +degrees). Set to 0 to deactivate rotations.\ntranslate (tuple, optional): tuple of maximum absolute fraction for horizontal\nand vertical translations. For example translate=(a, b), then horizontal shift\nis randomly sampled in the range -img_width * a < dx < img_width * a and vertical shift is\nrandomly sampled in the range -img_height * b < dy < img_height * b. Will not translate by default.\nscale (tuple, optional): scaling factor interval, e.g (a, b), then scale is\nrandomly sampled from the range a <= scale <= b. Will keep original scale by default.\nshear (sequence or float or int, optional): Range of degrees to select from.\nIf shear is a number, a shear parallel to the x axis in the range (-shear, +shear)\nwill be apllied. Else if shear is a tuple or list of 2 values a shear parallel to the x axis in the\nrange (shear, shear) will be applied. Else if shear is a tuple or list of 4 values,\na x-axis shear in (shear, shear) and y-axis shear in (shear, shear) will be applied.\nWill not apply shear by default\nresample ({PIL.Image.NEAREST, PIL.Image.BILINEAR, PIL.Image.BICUBIC}, optional):\nAn optional resampling filter. See filters_ for more information.\nIf omitted, or if the image has mode \"1\" or \"P\", it is set to PIL.Image.NEAREST.\nfillcolor (tuple or int): Optional fill color (Tuple for RGB Image And int for grayscale) for the area\noutside the transform in the output image.(Pillow>=5.0.0)\n\n\"\"\"\n\ndef __init__(self, degrees, translate=None, scale=None, shear=None, resample=False, fillcolor=0):\nif isinstance(degrees, numbers.Number):\nif degrees < 0:\nraise ValueError(\"If degrees is a single number, it must be positive.\")\nself.degrees = (-degrees, degrees)\nelse:\nassert isinstance(degrees, (tuple, list)) and len(degrees) == 2, \\\n\"degrees should be a list or tuple and it must be of length 2.\"\nself.degrees = degrees\n\nif translate is not None:\nassert isinstance(translate, (tuple, list)) and len(translate) == 2, \\\n\"translate should be a list or tuple and it must be of length 2.\"\nfor t in translate:\nif not (0.0 <= t <= 1.0):\nraise ValueError(\"translation values should be between 0 and 1\")\nself.translate = translate\n\nif scale is not None:\nassert isinstance(scale, (tuple, list)) and len(scale) == 2, \\\n\"scale should be a list or tuple and it must be of length 2.\"\nfor s in scale:\nif s <= 0:\nraise ValueError(\"scale values should be positive\")\nself.scale = scale\n\nif shear is not None:\nif isinstance(shear, numbers.Number):\nif shear < 0:\nraise ValueError(\"If shear is a single number, it must be positive.\")\nself.shear = (-shear, shear)\nelse:\nassert isinstance(shear, (tuple, list)) and \\\n(len(shear) == 2 or len(shear) == 4), \\\n\"shear should be a list or tuple and it must be of length 2 or 4.\"\n# X-Axis shear with [min, max]\nif len(shear) == 2:\nself.shear = [shear, shear, 0., 0.]\nelif len(shear) == 4:\nself.shear = [s for s in shear]\nelse:\nself.shear = shear\n\nself.resample = resample\nself.fillcolor = fillcolor\n\n@staticmethod\ndef get_params(degrees, translate, scale_ranges, shears, img_size):\n\"\"\"Get parameters for affine transformation\n\nReturns:\nsequence: params to be passed to the affine transformation\n\"\"\"\nangle = random.uniform(degrees, degrees)\nif translate is not None:\nmax_dx = translate * img_size\nmax_dy = translate * img_size\ntranslations = (np.round(random.uniform(-max_dx, max_dx)),\nnp.round(random.uniform(-max_dy, max_dy)))\nelse:\ntranslations = (0, 0)\n\nif scale_ranges is not None:\nscale = random.uniform(scale_ranges, scale_ranges)\nelse:\nscale = 1.0\n\nif shears is not None:\nif len(shears) == 2:\nshear = [random.uniform(shears, shears), 0.]\nelif len(shears) == 4:\nshear = [random.uniform(shears, shears),\nrandom.uniform(shears, shears)]\nelse:\nshear = 0.0\n\nreturn angle, translations, scale, shear\n\ndef __call__(self, img):\n\"\"\"\nimg (PIL Image): Image to be transformed.\n\nReturns:\nPIL Image: Affine transformed image.\n\"\"\"\nret = self.get_params(self.degrees, self.translate, self.scale, self.shear, img.size)\nreturn F.affine(img, *ret, resample=self.resample, fillcolor=self.fillcolor)\n\ndef __repr__(self):\ns = '{name}(degrees={degrees}'\nif self.translate is not None:\ns += ', translate={translate}'\nif self.scale is not None:\ns += ', scale={scale}'\nif self.shear is not None:\ns += ', shear={shear}'\nif self.resample > 0:\ns += ', resample={resample}'\nif self.fillcolor != 0:\ns += ', fillcolor={fillcolor}'\ns += ')'\nd = dict(self.__dict__)\nd['resample'] = _pil_interpolation_to_str[d['resample']]\nreturn s.format(name=self.__class__.__name__, **d)\n\n[docs]class Grayscale(object):\n\"\"\"Convert image to grayscale.\n\nArgs:\nnum_output_channels (int): (1 or 3) number of channels desired for output image\n\nReturns:\nPIL Image: Grayscale version of the input.\n- If num_output_channels == 1 : returned image is single channel\n- If num_output_channels == 3 : returned image is 3 channel with r == g == b\n\n\"\"\"\n\ndef __init__(self, num_output_channels=1):\nself.num_output_channels = num_output_channels\n\ndef __call__(self, img):\n\"\"\"\nArgs:\nimg (PIL Image): Image to be converted to grayscale.\n\nReturns:\nPIL Image: Randomly grayscaled image.\n\"\"\"\nreturn F.to_grayscale(img, num_output_channels=self.num_output_channels)\n\ndef __repr__(self):\nreturn self.__class__.__name__ + '(num_output_channels={0})'.format(self.num_output_channels)\n\n[docs]class RandomGrayscale(object):\n\"\"\"Randomly convert image to grayscale with a probability of p (default 0.1).\n\nArgs:\np (float): probability that image should be converted to grayscale.\n\nReturns:\nPIL Image: Grayscale version of the input image with probability p and unchanged\nwith probability (1-p).\n- If input image is 1 channel: grayscale version is 1 channel\n- If input image is 3 channel: grayscale version is 3 channel with r == g == b\n\n\"\"\"\n\ndef __init__(self, p=0.1):\nself.p = p\n\ndef __call__(self, img):\n\"\"\"\nArgs:\nimg (PIL Image): Image to be converted to grayscale.\n\nReturns:\nPIL Image: Randomly grayscaled image.\n\"\"\"\nnum_output_channels = 1 if img.mode == 'L' else 3\nif random.random() < self.p:\nreturn F.to_grayscale(img, num_output_channels=num_output_channels)\nreturn img\n\ndef __repr__(self):\nreturn self.__class__.__name__ + '(p={0})'.format(self.p)\n\n[docs]class RandomErasing(object):\n\"\"\" Randomly selects a rectangle region in an image and erases its pixels.\n'Random Erasing Data Augmentation' by Zhong et al. See https://arxiv.org/pdf/1708.04896.pdf\n\nArgs:\np: probability that the random erasing operation will be performed.\nscale: range of proportion of erased area against input image.\nratio: range of aspect ratio of erased area.\nvalue: erasing value. Default is 0. If a single int, it is used to\nerase all pixels. If a tuple of length 3, it is used to erase\nR, G, B channels respectively.\nIf a str of 'random', erasing each pixel with random values.\ninplace: boolean to make this transform inplace. Default set to False.\n\nReturns:\nErased Image.\n\n# Examples:\n>>> transform = transforms.Compose([\n>>> transforms.RandomHorizontalFlip(),\n>>> transforms.ToTensor(),\n>>> transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),\n>>> transforms.RandomErasing(),\n>>> ])\n\"\"\"\n\ndef __init__(self, p=0.5, scale=(0.02, 0.33), ratio=(0.3, 3.3), value=0, inplace=False):\nassert isinstance(value, (numbers.Number, str, tuple, list))\nif (scale > scale) or (ratio > ratio):\nwarnings.warn(\"range should be of kind (min, max)\")\nif scale < 0 or scale > 1:\nraise ValueError(\"range of scale should be between 0 and 1\")\nif p < 0 or p > 1:\nraise ValueError(\"range of random erasing probability should be between 0 and 1\")\n\nself.p = p\nself.scale = scale\nself.ratio = ratio\nself.value = value\nself.inplace = inplace\n\n@staticmethod\ndef get_params(img, scale, ratio, value=0):\n\"\"\"Get parameters for erase for a random erasing.\n\nArgs:\nimg (Tensor): Tensor image of size (C, H, W) to be erased.\nscale: range of proportion of erased area against input image.\nratio: range of aspect ratio of erased area.\n\nReturns:\ntuple: params (i, j, h, w, v) to be passed to erase for random erasing.\n\"\"\"\nimg_c, img_h, img_w = img.shape\narea = img_h * img_w\n\nfor _ in range(10):\nerase_area = random.uniform(scale, scale) * area\naspect_ratio = random.uniform(ratio, ratio)\n\nh = int(round(math.sqrt(erase_area * aspect_ratio)))\nw = int(round(math.sqrt(erase_area / aspect_ratio)))\n\nif h < img_h and w < img_w:\ni = random.randint(0, img_h - h)\nj = random.randint(0, img_w - w)\nif isinstance(value, numbers.Number):\nv = value\nelif isinstance(value, torch._six.string_classes):\nv = torch.empty([img_c, h, w], dtype=torch.float32).normal_()\nelif isinstance(value, (list, tuple)):\nv = torch.tensor(value, dtype=torch.float32).view(-1, 1, 1).expand(-1, h, w)\nreturn i, j, h, w, v\n\n# Return original image\nreturn 0, 0, img_h, img_w, img\n\ndef __call__(self, img):\n\"\"\"\nArgs:\nimg (Tensor): Tensor image of size (C, H, W) to be erased.\n\nReturns:\nimg (Tensor): Erased Tensor image.\n\"\"\"\nif random.uniform(0, 1) < self.p:\nx, y, h, w, v = self.get_params(img, scale=self.scale, ratio=self.ratio, value=self.value)\nreturn F.erase(img, x, y, h, w, v, self.inplace)\nreturn img", null, "## Docs\n\nAccess comprehensive developer documentation for PyTorch\n\nView Docs\n\n## Tutorials\n\nGet in-depth tutorials for beginners and advanced developers\n\nView Tutorials" ]
[ null, "https://www.googleadservices.com/pagead/conversion/795629140/", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5111398,"math_prob":0.9799734,"size":43197,"snap":"2020-45-2020-50","text_gpt3_token_len":11493,"char_repetition_ratio":0.15419166,"word_repetition_ratio":0.30256066,"special_character_ratio":0.29756695,"punctuation_ratio":0.2354143,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9964852,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-21T19:42:09Z\",\"WARC-Record-ID\":\"<urn:uuid:fb61b5d8-754e-4aad-94b9-68039c884a9c>\",\"Content-Length\":\"224868\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b8f8b829-2a29-42e4-8691-f5cf690ce645>\",\"WARC-Concurrent-To\":\"<urn:uuid:104c9269-9f4b-4c85-b596-7ab236d03538>\",\"WARC-IP-Address\":\"185.199.108.153\",\"WARC-Target-URI\":\"https://pytorch.org/docs/stable/_modules/torchvision/transforms/transforms.html\",\"WARC-Payload-Digest\":\"sha1:ZPKSW4ZOWM7SOZPEIZWPUCA7OWHNW27J\",\"WARC-Block-Digest\":\"sha1:MU6TTHB6PAKGHZOUIBJRUGL3XPQQGG7W\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107877420.17_warc_CC-MAIN-20201021180646-20201021210646-00358.warc.gz\"}"}
https://www.aimsciences.org/article/doi/10.3934/dcdsb.2013.18.1533
[ "", null, "", null, "", null, "", null, "• Previous Article\nExponential stability for a class of linear hyperbolic equations with hereditary memory\n• DCDS-B Home\n• This Issue\n• Next Article\nA note on the analysis of asymptotic mean-square stability properties for systems of linear stochastic delay differential equations\nAugust  2013, 18(6): 1533-1554. doi: 10.3934/dcdsb.2013.18.1533\n\n## Invariance and monotonicity for stochastic delay differential equations\n\n 1 Department of Mechanics and Mathematics, Kharkov National University, Kharkov, 61022, Ukraine 2 Institut für Mathematik, Technische Universität Berlin, Str. des 17. Juni 136, 10623 Berlin, Germany\n\nReceived  January 2012 Revised  March 2012 Published  March 2013\n\nWe study invariance and monotonicity properties of Kunita-type sto-chastic differential equations in $\\mathbb{R}^d$ with delay. Our first result provides sufficient conditions for the invariance of closed subsets of $\\mathbb{R}^d$. Then we present a comparison principle and show that under appropriate conditions the stochastic delay system considered generates a monotone (order-preserving) random dynamical system. Several applications are considered.\nCitation: Igor Chueshov, Michael Scheutzow. Invariance and monotonicity for stochastic delay differential equations. Discrete & Continuous Dynamical Systems - B, 2013, 18 (6) : 1533-1554. doi: 10.3934/dcdsb.2013.18.1533\n##### References:\n L. Arnold, \"Random Dynamical Systems,\", Springer Monographs in Mathematics, (1998).", null, "Google Scholar L. Arnold and I. Chueshov, Order-preserving random dynamical systems: Equilibria, attractors, applications,, Dynam. Stability Systems, 13 (1998), 265.", null, "Google Scholar L. Arnold and I. Chueshov, Cooperative random and stochastic differential equations,, Discrete Continuous Dynam. Systems - A, 7 (2001), 1.", null, "Google Scholar B. Bergé, I. Chueshov and P. Vuillermot, On the behavior of solutions to certain parabolic SPDE's driven by Wiener processes,, Stochastic Process. Appl., 92 (2001), 237.  doi: 10.1016/S0304-4149(00)00082-X.", null, "", null, "Google Scholar I. Chueshov, Order-preserving random dynamical systems generated by a class of coupled stochastic semilinear parabolic equations,, in, (2000), 711.", null, "Google Scholar I. Chueshov, \"Monotone Random Systems: Theory and Applications,\", Lecture Notes in Mathematics, 1779 (2002).  doi: 10.1007/b83277.", null, "", null, "Google Scholar I. Chueshov and M. Scheutzow, On the structure of attractors and invariant measures for a class of monotone random systems,, Dynamical Systems: An International Journal, 19 (2004), 127.  doi: 10.1080/1468936042000207792.", null, "", null, "Google Scholar I. Chueshov and B. Schmalfuss, Qualitative behavior of a class of stochastic parabolic PDEs with dynamical boundary conditions,, Discrete Continuous Dynam. Systems - A, 18 (2007), 315.  doi: 10.3934/dcds.2007.18.315.", null, "", null, "Google Scholar I. Chueshov and P. Vuillermot, Long-time behavior of solutions to a class of stochastic parabolic equations with homogeneous white noise: Stratonovitch's case,, Probab. Theory Rel. Fields, 112 (1998), 149.  doi: 10.1007/s004400050186.", null, "", null, "Google Scholar I. Chueshov and P. Vuillermot, Long-time behavior of solutions to a class of stochastic parabolic equations with homogeneous white noise: Itô's case,, Stoch. Anal. Appl., 18 (2000), 581.  doi: 10.1080/07362990008809687.", null, "", null, "Google Scholar I. Chueshov and P. Vuillermot, Non-random invariant sets for some systems of parabolic stochastic partial differential equations,, Stoch. Anal. Appl., 22 (2004), 1421.  doi: 10.1081/SAP-200029487.", null, "", null, "Google Scholar H. Crauel and F. Flandoli, Attractors for random dynamical systems,, Probab. Theory Relat. Fields, 100 (1994), 365.  doi: 10.1007/BF01193705.", null, "", null, "Google Scholar A. Es-Sarhir, M. Scheutzow and O. van Gaans, Invariant measures for stochastic functional differential equations with superlinear drift term,, Diff. Integral Equations, 23 (2010), 189.", null, "Google Scholar M. Hairer, J. Mattingly and M. Scheutzow, Asymptotic coupling and a general form of Harris' theorem with applications to stochastic delay equations,, Probab. Theory Relat. Fields, 149 (2011), 223.  doi: 10.1007/s00440-009-0250-6.", null, "", null, "Google Scholar P. Imkeller and M. Scheutzow, On the spatial asymptotic behaviour of stochastic flows in Euclidean space,, Ann. Probab., 27 (1999), 109.  doi: 10.1214/aop/1022677255.", null, "", null, "Google Scholar K. Itô and M. Nisio, On stationary solutions of a stochastic differential equation,, J. Math. Kyoto Univ., 4 (1964), 1.", null, "Google Scholar M. Krasnosel'skii, \"Positive Solutions of Operator Equations,\", Noordhoff Ltd. Groningen, (1964).", null, "Google Scholar M. Krasnosel'skii, Je. A. Lifshits and A. Sobolev, \"Positive Linear Systems. The Method of Positive Operators,\", Sigma Series in Applied Mathematics, 5 (1989).", null, "Google Scholar H. Kunita, \"Stochastic Flows and Stochastic Differential Equations,\", Cambridge Studies in Advanced Mathematics, 24 (1990).", null, "Google Scholar R. Martin and H. Smith, Abstract functional differential equations and reaction-diffusion systems,, Trans. AMS, 321 (1990), 1.  doi: 10.2307/2001590.", null, "", null, "Google Scholar R. Martin, Jr. and H. Smith, Reaction-diffusion systems with time delays: Monotonicity, invariance, comparison and convergence,, J. Reine Angew. Math., 413 (1991), 1.", null, "Google Scholar S. Mohammed, Nonlinear flows of stochastic linear delay equations,, Stochastics, 17 (1986), 207.  doi: 10.1080/17442508608833390.", null, "", null, "Google Scholar S. Mohammed, The Lyapunov spectrum and stable manifolds for stochastic linear delay equations,, Stochastics and Stochastic Reports, 29 (1990), 89.   Google Scholar S. Mohammed and M. Scheutzow, Spatial estimates for stochastic flows in Euclidean space,, Ann. Probab., 26 (1998), 56.  doi: 10.1214/aop/1022855411.", null, "", null, "Google Scholar S. Mohammed and M. Scheutzow, The stable manifold theorem for non-linear stochastic systems with memory. I. Existence of the semiflow,, J. Functional Anal., 205 (2003), 271.  doi: 10.1016/j.jfa.2002.04.001.", null, "", null, "Google Scholar G. Ochs, \"Weak Random Attractors,\", Institut für Dynamische Systeme, (1999).   Google Scholar M. Scheutzow, Qualitative behaviour of stochastic delay equations with a bounded memory,, Stochastics, 12 (1984), 41.  doi: 10.1080/17442508408833294.", null, "", null, "Google Scholar M. Scheutzow, Comparison of various concepts of a random attractor: A case study,, Arch. Math., 78 (2002), 233.  doi: 10.1007/s00013-002-8241-1.", null, "", null, "Google Scholar B. Schmalfuss, Backward cocycles and attractors for stochastic differential equations,, in, (1992), 185.   Google Scholar G. Seifert, Positively invariant closed sets for systems of delay differential equations,, J. Diff. Eqs., 22 (1976), 292.", null, "Google Scholar H. Smith, \"Monotone Dynamical Systems. An Introduction to the Theory of Competitive and Cooperative Systems,\", Mathematical Surveys and Monographs, 41 (1995).", null, "Google Scholar T. Tibor, H.-O. Walther and J. Wu, The structure of an attracting set defined by delayed and monotone positive feedback,, CWI Quarterly, 12 (1999), 315.   Google Scholar\n\nshow all references\n\n##### References:\n L. Arnold, \"Random Dynamical Systems,\", Springer Monographs in Mathematics, (1998).", null, "Google Scholar L. Arnold and I. Chueshov, Order-preserving random dynamical systems: Equilibria, attractors, applications,, Dynam. Stability Systems, 13 (1998), 265.", null, "Google Scholar L. Arnold and I. Chueshov, Cooperative random and stochastic differential equations,, Discrete Continuous Dynam. Systems - A, 7 (2001), 1.", null, "Google Scholar B. Bergé, I. Chueshov and P. Vuillermot, On the behavior of solutions to certain parabolic SPDE's driven by Wiener processes,, Stochastic Process. Appl., 92 (2001), 237.  doi: 10.1016/S0304-4149(00)00082-X.", null, "", null, "Google Scholar I. Chueshov, Order-preserving random dynamical systems generated by a class of coupled stochastic semilinear parabolic equations,, in, (2000), 711.", null, "Google Scholar I. Chueshov, \"Monotone Random Systems: Theory and Applications,\", Lecture Notes in Mathematics, 1779 (2002).  doi: 10.1007/b83277.", null, "", null, "Google Scholar I. Chueshov and M. Scheutzow, On the structure of attractors and invariant measures for a class of monotone random systems,, Dynamical Systems: An International Journal, 19 (2004), 127.  doi: 10.1080/1468936042000207792.", null, "", null, "Google Scholar I. Chueshov and B. Schmalfuss, Qualitative behavior of a class of stochastic parabolic PDEs with dynamical boundary conditions,, Discrete Continuous Dynam. Systems - A, 18 (2007), 315.  doi: 10.3934/dcds.2007.18.315.", null, "", null, "Google Scholar I. Chueshov and P. Vuillermot, Long-time behavior of solutions to a class of stochastic parabolic equations with homogeneous white noise: Stratonovitch's case,, Probab. Theory Rel. Fields, 112 (1998), 149.  doi: 10.1007/s004400050186.", null, "", null, "Google Scholar I. Chueshov and P. Vuillermot, Long-time behavior of solutions to a class of stochastic parabolic equations with homogeneous white noise: Itô's case,, Stoch. Anal. Appl., 18 (2000), 581.  doi: 10.1080/07362990008809687.", null, "", null, "Google Scholar I. Chueshov and P. Vuillermot, Non-random invariant sets for some systems of parabolic stochastic partial differential equations,, Stoch. Anal. Appl., 22 (2004), 1421.  doi: 10.1081/SAP-200029487.", null, "", null, "Google Scholar H. Crauel and F. Flandoli, Attractors for random dynamical systems,, Probab. Theory Relat. Fields, 100 (1994), 365.  doi: 10.1007/BF01193705.", null, "", null, "Google Scholar A. Es-Sarhir, M. Scheutzow and O. van Gaans, Invariant measures for stochastic functional differential equations with superlinear drift term,, Diff. Integral Equations, 23 (2010), 189.", null, "Google Scholar M. Hairer, J. Mattingly and M. Scheutzow, Asymptotic coupling and a general form of Harris' theorem with applications to stochastic delay equations,, Probab. Theory Relat. Fields, 149 (2011), 223.  doi: 10.1007/s00440-009-0250-6.", null, "", null, "Google Scholar P. Imkeller and M. Scheutzow, On the spatial asymptotic behaviour of stochastic flows in Euclidean space,, Ann. Probab., 27 (1999), 109.  doi: 10.1214/aop/1022677255.", null, "", null, "Google Scholar K. Itô and M. Nisio, On stationary solutions of a stochastic differential equation,, J. Math. Kyoto Univ., 4 (1964), 1.", null, "Google Scholar M. Krasnosel'skii, \"Positive Solutions of Operator Equations,\", Noordhoff Ltd. Groningen, (1964).", null, "Google Scholar M. Krasnosel'skii, Je. A. Lifshits and A. Sobolev, \"Positive Linear Systems. The Method of Positive Operators,\", Sigma Series in Applied Mathematics, 5 (1989).", null, "Google Scholar H. Kunita, \"Stochastic Flows and Stochastic Differential Equations,\", Cambridge Studies in Advanced Mathematics, 24 (1990).", null, "Google Scholar R. Martin and H. Smith, Abstract functional differential equations and reaction-diffusion systems,, Trans. AMS, 321 (1990), 1.  doi: 10.2307/2001590.", null, "", null, "Google Scholar R. Martin, Jr. and H. Smith, Reaction-diffusion systems with time delays: Monotonicity, invariance, comparison and convergence,, J. Reine Angew. Math., 413 (1991), 1.", null, "Google Scholar S. Mohammed, Nonlinear flows of stochastic linear delay equations,, Stochastics, 17 (1986), 207.  doi: 10.1080/17442508608833390.", null, "", null, "Google Scholar S. Mohammed, The Lyapunov spectrum and stable manifolds for stochastic linear delay equations,, Stochastics and Stochastic Reports, 29 (1990), 89.   Google Scholar S. Mohammed and M. Scheutzow, Spatial estimates for stochastic flows in Euclidean space,, Ann. Probab., 26 (1998), 56.  doi: 10.1214/aop/1022855411.", null, "", null, "Google Scholar S. Mohammed and M. Scheutzow, The stable manifold theorem for non-linear stochastic systems with memory. I. Existence of the semiflow,, J. Functional Anal., 205 (2003), 271.  doi: 10.1016/j.jfa.2002.04.001.", null, "", null, "Google Scholar G. Ochs, \"Weak Random Attractors,\", Institut für Dynamische Systeme, (1999).   Google Scholar M. Scheutzow, Qualitative behaviour of stochastic delay equations with a bounded memory,, Stochastics, 12 (1984), 41.  doi: 10.1080/17442508408833294.", null, "", null, "Google Scholar M. Scheutzow, Comparison of various concepts of a random attractor: A case study,, Arch. Math., 78 (2002), 233.  doi: 10.1007/s00013-002-8241-1.", null, "", null, "Google Scholar B. Schmalfuss, Backward cocycles and attractors for stochastic differential equations,, in, (1992), 185.   Google Scholar G. Seifert, Positively invariant closed sets for systems of delay differential equations,, J. Diff. Eqs., 22 (1976), 292.", null, "Google Scholar H. Smith, \"Monotone Dynamical Systems. An Introduction to the Theory of Competitive and Cooperative Systems,\", Mathematical Surveys and Monographs, 41 (1995).", null, "Google Scholar T. Tibor, H.-O. Walther and J. Wu, The structure of an attracting set defined by delayed and monotone positive feedback,, CWI Quarterly, 12 (1999), 315.   Google Scholar\n Junyi Tu, Yuncheng You. Random attractor of stochastic Brusselator system with multiplicative noise. Discrete & Continuous Dynamical Systems - A, 2016, 36 (5) : 2757-2779. doi: 10.3934/dcds.2016.36.2757 Chunhong Li, Jiaowan Luo. Stochastic invariance for neutral functional differential equation with non-lipschitz coefficients. Discrete & Continuous Dynamical Systems - B, 2019, 24 (7) : 3299-3318. doi: 10.3934/dcdsb.2018321 Fuke Wu, Peter E. Kloeden. Mean-square random attractors of stochastic delay differential equations with random delay. Discrete & Continuous Dynamical Systems - B, 2013, 18 (6) : 1715-1734. doi: 10.3934/dcdsb.2013.18.1715 Ludwig Arnold, Igor Chueshov. Cooperative random and stochastic differential equations. Discrete & Continuous Dynamical Systems - A, 2001, 7 (1) : 1-33. doi: 10.3934/dcds.2001.7.1 Yuncheng You. Random attractor for stochastic reversible Schnackenberg equations. Discrete & Continuous Dynamical Systems - S, 2014, 7 (6) : 1347-1362. doi: 10.3934/dcdss.2014.7.1347 Zhaojuan Wang, Shengfan Zhou. Random attractor and random exponential attractor for stochastic non-autonomous damped cubic wave equation with linear multiplicative white noise. Discrete & Continuous Dynamical Systems - A, 2018, 38 (9) : 4767-4817. doi: 10.3934/dcds.2018210 Min Zhao, Shengfan Zhou. Random attractor for stochastic Boissonade system with time-dependent deterministic forces and white noises. Discrete & Continuous Dynamical Systems - B, 2017, 22 (4) : 1683-1717. doi: 10.3934/dcdsb.2017081 Zhaojuan Wang, Shengfan Zhou. Random attractor for stochastic non-autonomous damped wave equation with critical exponent. Discrete & Continuous Dynamical Systems - A, 2017, 37 (1) : 545-573. doi: 10.3934/dcds.2017022 Shengfan Zhou, Min Zhao. Fractal dimension of random attractor for stochastic non-autonomous damped wave equation with linear multiplicative white noise. Discrete & Continuous Dynamical Systems - A, 2016, 36 (5) : 2887-2914. doi: 10.3934/dcds.2016.36.2887 María J. Garrido–Atienza, Kening Lu, Björn Schmalfuss. Random dynamical systems for stochastic partial differential equations driven by a fractional Brownian motion. Discrete & Continuous Dynamical Systems - B, 2010, 14 (2) : 473-493. doi: 10.3934/dcdsb.2010.14.473 Boling Guo, Yongqian Han, Guoli Zhou. Random attractor for the 2D stochastic nematic liquid crystals flows. Communications on Pure & Applied Analysis, 2019, 18 (5) : 2349-2376. doi: 10.3934/cpaa.2019106 Felix X.-F. Ye, Hong Qian. Stochastic dynamics Ⅱ: Finite random dynamical systems, linear representation, and entropy production. Discrete & Continuous Dynamical Systems - B, 2019, 24 (8) : 4341-4366. doi: 10.3934/dcdsb.2019122 Bixiang Wang. Stochastic bifurcation of pathwise random almost periodic and almost automorphic solutions for random dynamical systems. Discrete & Continuous Dynamical Systems - A, 2015, 35 (8) : 3745-3769. doi: 10.3934/dcds.2015.35.3745 Wenqiang Zhao. Pullback attractors for bi-spatial continuous random dynamical systems and application to stochastic fractional power dissipative equation on an unbounded domain. Discrete & Continuous Dynamical Systems - B, 2019, 24 (7) : 3395-3438. doi: 10.3934/dcdsb.2018326 Yanfeng Guo, Jinqiao Duan, Donglong Li. Approximation of random invariant manifolds for a stochastic Swift-Hohenberg equation. Discrete & Continuous Dynamical Systems - S, 2016, 9 (6) : 1701-1715. doi: 10.3934/dcdss.2016071 Neville J. Ford, Stewart J. Norton. Predicting changes in dynamical behaviour in solutions to stochastic delay differential equations. Communications on Pure & Applied Analysis, 2006, 5 (2) : 367-382. doi: 10.3934/cpaa.2006.5.367 Xiangnan He, Wenlian Lu, Tianping Chen. On transverse stability of random dynamical system. Discrete & Continuous Dynamical Systems - A, 2013, 33 (2) : 701-721. doi: 10.3934/dcds.2013.33.701 Defei Zhang, Ping He. Functional solution about stochastic differential equation driven by $G$-Brownian motion. Discrete & Continuous Dynamical Systems - B, 2015, 20 (1) : 281-293. doi: 10.3934/dcdsb.2015.20.281 Seyedeh Marzieh Ghavidel, Wolfgang M. Ruess. Flow invariance for nonautonomous nonlinear partial differential delay equations. Communications on Pure & Applied Analysis, 2012, 11 (6) : 2351-2369. doi: 10.3934/cpaa.2012.11.2351 Michael Scheutzow. Exponential growth rate for a singular linear stochastic delay differential equation. Discrete & Continuous Dynamical Systems - B, 2013, 18 (6) : 1683-1696. doi: 10.3934/dcdsb.2013.18.1683\n\n2018 Impact Factor: 1.008" ]
[ null, "https://www.aimsciences.org:443/style/web/images/white_google.png", null, "https://www.aimsciences.org:443/style/web/images/white_facebook.png", null, "https://www.aimsciences.org:443/style/web/images/white_twitter.png", null, "https://www.aimsciences.org:443/style/web/images/white_linkedin.png", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/crossref.jpeg", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null, "https://www.aimsciences.org:443/style/web/images/math-review.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6166335,"math_prob":0.7999653,"size":16825,"snap":"2019-51-2020-05","text_gpt3_token_len":5174,"char_repetition_ratio":0.19731288,"word_repetition_ratio":0.73819745,"special_character_ratio":0.34680536,"punctuation_ratio":0.2844497,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9723036,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-12T21:17:17Z\",\"WARC-Record-ID\":\"<urn:uuid:94378b6d-6137-4985-a6de-370460f5d479>\",\"Content-Length\":\"112196\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5e087c77-6878-4afe-a2f6-cfc0499359cd>\",\"WARC-Concurrent-To\":\"<urn:uuid:f6d40ecf-0008-4c00-bc82-853cf64902f4>\",\"WARC-IP-Address\":\"216.227.221.143\",\"WARC-Target-URI\":\"https://www.aimsciences.org/article/doi/10.3934/dcdsb.2013.18.1533\",\"WARC-Payload-Digest\":\"sha1:KJYHIIRNKHAK64LEWIOD2TQDMXOFSJCP\",\"WARC-Block-Digest\":\"sha1:AZZOF4LEWZHR4AEKBRBRBQ53MBAWTWWS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540547165.98_warc_CC-MAIN-20191212205036-20191212233036-00262.warc.gz\"}"}
https://patents.justia.com/patent/20120182452
[ "# IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM\n\nAn image processing device includes a texture direction determining unit that determines a texture direction of an image, a defective pixel detecting unit that calculates a pixel value average for each of pixel groups including a plurality of pixels, and detects a defective pixel position on the basis of difference information of the pixel value average according to an arrangement direction of the pixel groups, and a correction unit that corrects, as a correction target, a pixel value at the defective pixel position detected on the basis of the difference information in the same pixel group arrangement direction as the texture direction determined by the texture direction determining unit.\n\nDescription\nBACKGROUND\n\nThe present disclosure relates to an image processing device, an image processing method, and a program, and more particularly, to an image processing device, an image processing method, and a program, which correct a signal output from a defective pixel included in a solid-state imaging device to suppress image quality deterioration caused by the defective pixel.\n\nGenerally, in a solid-state imaging element such as a CCD (Charge Coupled Device) and a CMOS (Complementary Metal Oxide Semiconductor) sensor, defective pixels may be included.\n\nThat is, in such a solid-state imaging element, a defective pixel may be caused by a partial crystal defect of a semiconductor and output an abnormal imaging signal, thereby causing image quality deterioration. For example, there is a black spot image defective pixel or a white spot defective pixel. Various signal processing methods and circuit configurations have been proposed for correcting a signal output from the defective pixel.\n\nIn the related art disclosing a technique of correcting a signal output from a defective pixel, there are, for example, Japanese Unexamined Patent Application Publication No 2009-290653, Japanese Patent No. 4343988, and Japanese Patent No. 4307318.\n\nSUMMARY\n\nIn the related art described above, however, when detective pixels continuously occur or when a defect occurs at a circuit part (for example, a transistor) shared by a plurality of pixels and defects occur in the plurality of pixels, it is difficult to acquire a texture direction accurately and therefore it is difficult to sufficiently correct the defective pixels.\n\nWhen the correction process disclosed in the related art is applied, only the center pixel is to be corrected in a neighboring area used in the defect correction. For example, when a signal process is performed at a later stage using the neighboring area used for the defect correction and the defect is included in pixels other than the center pixel, the defective pixel which has not been corrected may influence the signal process of the later stage and it is difficult to obtain the correction effect. Accordingly, in the method of the related art, it is necessary to perform the signal process of the later stage after the defect correction is performed on all the pixels once, which causes a problem of an increase of a circuit scale or a delay of a process speed.\n\nIt is desirable to provide an image processing device, an image processing method, and a program that perform a determination process in which an influence of a defect is reduced when determining a direction of a texture so as to efficiently perform detection and correction of a defect even when there are a plurality of defective pixels in the neighboring area.\n\nAccording to a first embodiment of the present disclosure, there is provided an image processing device including a texture direction determining unit that determines a texture direction of an image, a defective pixel detecting unit that calculates a pixel value average for each of pixel groups including a plurality of pixels and detects a defective pixel position on the basis of difference information of the pixel value average according to an arrangement direction of the pixel groups, and a correction unit that corrects, as a correction target, a pixel value at the defective pixel position detected on the basis of the difference information in the same pixel group arrangement direction as the texture direction determined by the texture direction determining unit.\n\nIn the image processing device according to the embodiment of the present disclosure, the defective pixel detecting unit may calculate the pixel value average for each pixel group including the plurality of pixels sharing a pixel value reading circuit and detect the defective pixel position on the basis of the difference information of the pixel value average according to the arrangement direction of the pixel group.\n\nIn the image processing device according to the embodiment of the present disclosure, the defective pixel detecting unit may determine whether or not a pixel is a defective pixel in accordance with a difference value from a reference value (safe_mW), the reference value being an average value of a plurality of pixel groups in a flat area where a difference from the average value of a plurality of pixel group units is small in a plurality of adjacent arrangements of pixel groups in the same arrangement direction.\n\nIn the image processing device according to the embodiment of the present disclosure, the texture direction determining unit may perform a process of determining one of the four directions of the horizontal, vertical, upper right, and lower right directions as the texture direction, and the defective pixel detecting unit may detect the defective pixel position on the basis of the difference information of the pixel value average according to the arrangement direction of the pixel groups in the four directions of horizontal, vertical, upper right, and lower right.\n\nIn the image processing device according to the embodiment of the present disclosure, the texture determining unit may calculate a plurality of differential values based on pixel values of pixels arranged in a predetermined direction included in a neighboring area centered on a pixel for attention, sort the plurality of differential values, select only data with small values, calculate statistics, and determine the texture direction on the basis of comparison of the statistics.\n\nIn the image processing device according to the embodiment of the present disclosure, the correction unit performs a process of determining the pixel value at the defective pixel position with reference to a neighboring pixel in the texture direction as a reference pixel on the basis of the pixel value of the reference pixel.\n\nIn the image processing device according to the embodiment of the present disclosure, as a process target, the defective pixel detecting unit may calculate a pixel value average for each pixel group including a plurality of pixels in the same pixel group arrangement direction as the texture direction determined by the texture direction determining unit, and detect the defective pixel position on the basis of the difference information of the pixel value averages according to the arrangement direction of the pixel groups.\n\nAccording to a second embodiment of the present disclosure, there is provided an image processing method performed in an image processing device, the method including causing a texture direction determining unit to determine a texture direction of an image, causing a defective pixel detecting unit to calculate a pixel value average for each of pixel groups including a plurality of pixels and to detect a defective pixel position on the basis of difference information of the pixel value average according to an arrangement direction of the pixel group, and causing a correction unit to correct, as a correction target, the defective pixel at the defective pixel position detected on the basis of the difference information in the same pixel group arrangement direction as the texture direction determined by the texture direction determining unit.\n\nAccording to a third embodiment of the present disclosure, there is provided a program for causing an image processing device to execute an image process including causing a texture direction determining unit to determine a texture direction of an image, causing a defective pixel detecting unit to calculate a pixel value average for each of pixel groups including a plurality of pixels and to detect a defective pixel position on the basis of difference information of the pixel value average according to an arrangement direction of the pixel group, and causing a correction unit to correct, as a correction target, the defective pixel at the defective pixel position detected on the basis of the difference information in the same pixel group arrangement direction as the texture direction determined by the texture direction determining unit.\n\nThe program according to the embodiment of the present disclosure is, for example, a program which can be provided with a computer-readable storage medium or communication medium usable in a general-purpose system which can execute a variety of program code. Such a computer-readable program is provided to perform a process according to the program on a computer system.\n\nOther characteristics and advantages of the present disclosure will be clarified by detailed description based on examples of the present disclosure and the accompanying drawings. In the specification, a system is referred to as a logical group configuration of a plurality of devices, and the devices are not necessarily provided in the same casing.\n\nAccording to an embodiment of the present disclosure, a configuration of performing detection and correction of influence of a defective pixel on a captured image is realized. Specifically, a texture direction of an image is determined, a pixel value average is calculated for each pixel group including a plurality of pixels, and a defective pixel position is detected on the basis of difference information of the pixel value average according to an arrangement direction of the pixel group. A pixel value at the defective pixel position detected in the same pixel group arrangement direction as the texture direction is corrected as a correction target. Defective pixel detection is performed at a position in the texture direction, for example, on each pixel group sharing a reading circuit to efficiently detect the defective pixel position.\n\nBRIEF DESCRIPTION OF THE DRAWINGS\n\nFIG. 1 is a diagram illustrating an example of a configuration of an image processing device according to an embodiment of the present disclosure.\n\nFIG. 2 is a diagram illustrating an example of a configuration of an image processing unit of the image processing device according to the embodiment of the present disclosure.\n\nFIG. 3 is a flowchart illustrating a process sequence performed by the image processing device according to the embodiment of the present disclosure.\n\nFIG. 4 is a diagram illustrating an example of a configuration of a defective pixel correcting process of the image processing device according to the embodiment of the present disclosure.\n\nFIG. 5 is a diagram illustrating a texture direction determining process performed in the defective pixel correction process of the image processing device according to the embodiment of the present disclosure.\n\nFIG. 6 is a diagram illustrating a horizontal direction differential value calculating process performed in the texture direction determining process performed by the image processing device according to the embodiment of the present disclosure.\n\nFIG. 7 is a diagram illustrating a vertical direction differential value calculating process performed in the texture direction determining process performed by the image processing device according to the embodiment of the present disclosure.\n\nFIG. 8 is a diagram illustrating an upper right direction differential value calculating process performed in the texture direction determining process performed by the image processing device according to the embodiment of the present disclosure.\n\nFIG. 9 is a diagram illustrating a lower right direction differential value calculating process performed in the texture direction determining process performed by the image processing device according to the embodiment of the present disclosure.\n\nFIG. 10 is a diagram specifically illustrating a process performed by a defective pixel detecting unit in a defect correcting unit of the image processing device according to the embodiment of the present disclosure.\n\nFIG. 11 is a diagram illustrating an example of a process performed by the defective pixel detecting unit in the defect correcting unit of the image processing device according to the embodiment of the present disclosure.\n\nFIG. 12 is a diagram illustrating a modified example of the defective pixel detecting unit in the defect correcting unit of the image processing device according to the embodiment of the present disclosure.\n\nDETAILED DESCRIPTION OF EMBODIMENTS\n\nHereinafter, an image processing device, an image processing method, and a program according to an embodiment of the present disclosure will be described in detail with reference to the drawings. An embodiment described herein is implemented by an imaging device system. First, a configuration and an operation of an overall system will be described, and a process according an embodiment of the present disclosure will be described in detail. The description is provided as follows.\n\n1. Example of Configuration of Image Processing Device\n\n2. Details of Image Process according to Embodiment of Present Disclosure\n\n3. Details of Configuration and Process of Defect Correcting Unit\n\n4. Details of Process performed by Neighboring Area Extracting Unit in Defect Correcting Unit\n\n5. Details of Process performed by Texture Direction Determining Unit in Defect Correcting Unit\n\n6. Details of Process performed by Defective Pixel Detecting Unit in Defect Correcting Unit\n\n7. Process of Defective Pixel Correcting Unit\n\n1. Example of Configuration of Image Processing Device\n\nFIG. 1 is an overall diagram of an imaging device (digital video camera) that is an example of an image processing device according to an embodiment of the present disclosure. The imaging device mainly includes an optical system, a signal processing system, a recording system, a display system, and a control system. Incident light passing through the optical system including a lens and the like reaches an imaging element 101 such as CMOS. The light first reaches light receiving elements of a CMOS imaging face, and is converted into an electrical signal by photoelectric conversion in the light receiving element. Noise is removed by a correlation double sampling circuit (CDS) 102, the signal is converted into digital data by a digitizing process of an analogue-to-digital (A/D) converter 103, then the digital data is temporarily stored in an image memory of a DSP 104, and various signal processes are performed in the DSP 104.\n\nIn the imaging state, a timing generator (TG) 114 controls the signal processing system to keep image reading at a regular frame rate.\n\nThe A/D convertor 103 outputs a pixel stream at a regular rate to the DSP 104 where various image processes are performed, and then the image data is transmitted to an LCD driver 112, a CODEC 105, or both of them. The LCD driver 112 converts the image data transmitted from the DSP 104 into an analog signal, and the analog signal is output to and displayed on an LCD 113. For example, the LCD 113 serves as a finder of a camera. The CODEC 105 performs encoding of the image data transmitted from the DSP 104, and the encoded image data is recorded in a memory 106.\n\nThe memory 106 may be a recording device using a semiconductor, a magnetic recording medium, an optical magnetic recording medium, an optical recording medium, and the like.\n\nFor example, a CPU 115 performs an overall process control of an imaging process and an image process according to a program stored in a storage unit in advance.\n\nAn input unit 116 is an operation unit operated by a user.\n\nThe above description is description of an overall system of a digital video camera of the embodiment.\n\nIn the imaging device shown in FIG. 1, for example, the DSP 104 mainly performs the processes according to an embodiment of the present disclosure. Hereinafter, an image process according to an embodiment of the present disclosure will be described in detail. In the following embodiment, the DSP 104 performs the image process according to an embodiment of the present disclosure, but other hardware or software than the DSP 104 may be used to perform the process according to an embodiment of the present disclosure. The other constituent element, for example, the CMOS 101 may perform the image process.\n\n2. Details of Image Process According to Embodiment of Present Disclosure\n\nAs described above, for example, the image process according to an embodiment of the present disclosure can be performed by the DSP 104.\n\nAccordingly, in the configuration of the embodiment described hereinafter, an example will be described in which an operation unit in the DSP 104 sequentially performs operations in the image process in accordance with predetermined program code on a stream of the image signal input to the DSP 104.\n\nIn the embodiment described hereinafter, each process unit in the program will be described as a functional block, and a sequence of performing each process will be described using a flowchart. However, an embodiment of the present disclosure may be implemented by hardware, for example, by mounting a hardware circuit realizing a process equivalent to the process executed by the functional block described hereinafter, in addition to being implemented by the form of the program described in the embodiment.\n\nFIG. 2 is a block diagram illustrating an example of a configuration of an image processing unit performing the image process according to an embodiment of the present disclosure. As described above, the image processing unit is configured by, for example, the DSP 104 shown in FIG. 1. In FIG. 2, mosaic image 117, Y image 124, and C image 125 represented by two parallel horizontal lines indicate data or memories storing data, and the other configurations including defect correcting unit 118 to YC conversion unit 123 represent processes performed in the image processing unit or process units.\n\nAs shown in FIG. 2, the image processing unit includes a defect correcting unit 118, a white balance unit 119, a demosaic unit 120, a matrix unit 121, a gamma correcting unit 122, and a YC conversion unit 123. The mosaic image 117 represents an input image to the image processing unit, that is, an image signal input to the DSP 104 digitized by the A/D convertor 103 shown in FIG. 1.\n\nThe mosaic image 117 is provided by storing an intensity signal (pixel value) of any of colors R, G, and B in a corresponding pixel of the imaging element 101 as shown in FIG. 1, and the color arrangement is, for example, a primary color system Bayer arrangement.\n\nThe Y image 124 and the C image 125 are images output from the image processing unit. Those images correspond to a YCbCr image signal output from the DSP 104 and input to the CODEC 105 as shown in FIG. 1.\n\nThe processes performed by the units of the image processing unit shown in FIG. 2 will be described.\n\nThe defect correcting unit 118 corrects a pixel value of a defective pixel position into an accurate value in the mosaic image 117 input from the A/D converter 103 shown in FIG. 1.\n\nA white balance unit 119 sets a proper coefficient in response to a color of each pixel intensity such that a color balance of an achromatic color photography subject area is an achromatic color with respect to the defect-corrected mosaic image.\n\nThe demosaic unit 120 performs an interpolation process so as to have the same intensities of R, G, and B at the pixel positions of the mosaic image subjected to the white balance adjustment. Outputs from the demosaic unit 120 are three images in which pixel values of three colors of R, G, and B are individually set at the pixel positions.\n\nThe matrix unit 121 applies a 3-row and 3-column linear matrix to which a coefficient is set in advance to the pixels [R, G, and B] of the outputs of the demosaic unit 120, and converts them into pixel values of three primary colors (intensity values R_m, G_m, and B_m). The coefficient of the linear matrix is an important design item to exhibit optimal color representation, but the embodiment of the present disclosure relates to the defect correcting process. The matrix process is applied after the defect correcting process, and thus a specific value of a linear matrix coefficient may be designed irrespective of the embodiment of the present disclosure.\n\nThe outputs of the matrix unit 121 are three images corresponding to three colors of color-corrected R_m, G_m, and B_m. After the matrix process, the gamma correcting unit 122 performs gamma correction on a color-corrected 3-channel image.\n\nThe YC conversion unit 123 performs a YC matrix process and band restriction for a chroma component on the gamma-corrected 3-channel image to generate the Y image 124 and the C image 125.\n\nNext, a sequence of the process performed by the image processing unit shown in FIG. 2 will be described with reference to a flowchart shown in FIG. 3.\n\nFirst, in Step S101, the image processing unit acquires a mosaic image based on an output signal of the image processing element 101. The image is the mosaic image 117 shown in FIG. 2.\n\nThen, in Step S102, the defect correcting unit 118 performs the defect correcting process on the mosaic image.\n\nThen, in Step S103, the white balance unit 119 performs the white balance process on the defect-corrected mosaic image.\n\nThen, in Step S104, the demosaic unit 120 performs the demosaic process of setting all the intensities (pixel values) of R, G, and B at the pixel positions of the mosaic image subjected to the white balance process.\n\nThen, in Step S105, the matrix unit 121 applies the linear matrix to the pixels of the 3-channel images and obtains an RGB 3-channel image.\n\nThen, in Step S106, the gamma correcting unit 122 performs the gamma correction on the pixels of the 3-channel image color-corrected by the matrix process.\n\nThen, in Step S107, the YC conversion unit 123 performs YC conversion on the gamma-corrected 3-channel image to generate the Y image 124 and the C image 125.\n\nLast, in Step S108, the generated Y image 124 and C image 125 are output.\n\nAs described above, the operation of the image processing unit is completed.\n\n3. Details of Configuration and Process of Defect Correcting Unit\n\nNext, the image correcting process which is a main part of the embodiment of the present disclosure and is performed in the defect correcting unit 118 will be described in detail.\n\nFIG. 4 shows a block diagram illustrating an internal configuration of the defect correcting unit 118.\n\nAs shown in FIG. 4, the defect correcting unit 118 mainly includes a neighboring area extracting unit 201, a texture direction determining unit 202, a defective pixel detecting unit 203, and a defective pixel correcting unit 204.\n\nThe neighboring area extracting unit 201 cuts out a neighboring area 211 in specific size including a position of a pixel for attention and its neighboring area, from the mosaic image 117 input to the defect correcting unit 118, that is, the mosaic image 117 based on the output signal of the imaging element 101 shown in FIG. 1. In the embodiment, the neighboring area 211 is a rectangular area of 7×7 pixels centered on the position of the pixel for attention.\n\nThe texture direction determining unit 202 determines a direction of performing the process of detecting the defective pixel in a plurality of directions, at the position of the pixel for attention set to the center position of the rectangular area of 7×7 pixels. In the embodiment, the plurality of determination directions of the texture direction determining unit 202 are four directions of:\n\nHorizontal Direction (H direction),\n\nVertical Direction (V direction),\n\nUpper Right Direction (A direction), and\n\nLower Right Direction (D direction).\n\nThe defective pixel detecting unit 203 performs the detection of the defective pixel according to the direction of performing the process of detecting the defective pixel determined by the texture direction determining unit 202.\n\nThe defective pixel correcting unit 204 corrects the pixel value of the defective pixel position detected by the defective pixel detecting unit 203, using the pixels of the neighboring area 211.\n\nThe defect correcting unit 118 performs the correction of the defective pixel by such a series of processes. Hereinafter, an example of specific processes of the process units constituting the defect correcting unit 118 will be sequentially described.\n\n4. Details of Process Performed by Neighboring Area Extracting Unit in Defect Correcting Unit\n\nFirst, details of the process performed by the neighboring area extracting unit 201 in the defect correcting unit 118 will be described.\n\nThe neighboring area extracting unit 201 performs an operation of securing an access to pixel information in the 7×7 rectangular area in the vicinity of the position of the pixel for attention. As a specific method thereof, various methods can be applied. For example, when the embodiment of the present disclosure is realized as software, the pixel values in the neighboring 7×7 rectangular area centered on the position of the pixel for attention are secured in a memory in the form of arrangement in which the pixel values are associated with, for example, coordinate positions.\n\nWhen the embodiment of the present disclosure is realized as hardware, a signal processing system of a general imaging device is often implemented such that a signal from a sensor flows sequentially as data in a first-order series of pixel intensities with a horizontal line. In this case, generally, an access to the pixels of the horizontal line adjacent in the vertical direction is secured using a delay line capable of storing the pixel intensities (pixel values) of one horizontal line.\n\nAt least six delay lines are prepared to secure the access to the 7×7 rectangular area.\n\n5. Details of Process performed by Texture Direction Determining Unit in Defect Correcting Unit\n\nNext, details of the process performed by the texture direction determining unit 202 in the defect correcting unit 118 will be described.\n\nFIG. 5 is a diagram illustrating a detailed configuration and operation of the texture direction determining unit 202.\n\nThe texture direction determining unit 202 has the following differential value calculating units performing pixel value analysis of the neighboring area (in the example, 7×7 pixel area) of the pixel for attention:\n\n(1) Horizontal Direction Differential Value Calculating Unit 311 calculating Differential Value in Horizontal Direction,\n\n(2) Vertical Direction Differential Value Calculating Unit 312 calculating Differential Value in Vertical Direction,\n\n(3) Upper Right Direction Differential Value Calculating Unit 313 calculating Differential Value in Upper Right Direction, and\n\n(4) Lower Right Direction Differential Value Calculating Unit 314 calculating Differential Value in Lower Right Direction.\n\nIn the embodiment, the neighboring area is an area of 7×7 pixels centered on the pixel for attention, and analysis is performed on the neighboring area.\n\nThe texture direction determining unit 202 further includes statistic calculating units 321a to 321d calculating statistics from the differential values calculated regarding four directions, and a statistic comparing unit 331 determining the direction of the texture in the neighboring area 211 by comparing the statistics of the directions.\n\nThe process of the horizontal direction differential value calculating unit 311 calculating the differential value in the horizontal direction will be described with reference to the drawings.\n\nFIG. 6 is a diagram illustrating the process of the horizontal direction differential value calculating unit 311. FIG. 6 shows the neighboring area of 7×7 pixels centered on the pixel for attention.\n\nThe horizontal direction is represented by the x coordinate, and the vertical direction is represented by the y coordinate. The center of the neighboring area of 7×7 pixels, that is, (x, y)=(4, 4) corresponds to the position of the pixel for attention.\n\nThe pixel used in the texture direction determination is represented by W. At the pixel position (x, y), the horizontal direction differential value gradH(x, y) is acquired by the following formula:\n\nwhere abs( ) is a function of acquiring an absolute value, w(x−1, y) is a pixel value (intensity) of W at the coordinate position (x−1, y), and w(x+1, y) is a pixel value (intensity) of W at the coordinate position (x+1, y).\n\nThe horizontal direction differential value calculating unit 311 acquires the differential value gradH at the pixel positions indicated by symbol O in FIG. 6.\n\nFor example, at the position of symbol O of the position of (x, y)=(2, 1), the differential value gradH is acquired on the basis of the pixel values (intensities) of both neighboring W in the horizontal direction.\n\nThat is, the differential value gradH(2, 1) is calculated using the pixel values of two W pixels of (x, y)=(1, 1) and (x, y)=(3, 1).\n\nThe differential values gradH are acquired at eighteen pixel positions indicated by symbol O in FIG. 6.\n\nNext, the process of the vertical direction differential value calculating unit 312 calculating the differential value in the vertical direction will be described.\n\nFIG. 7 is a diagram illustrating the process of the vertical direction differential value calculating unit 312. FIG. 7 shows the neighboring area of 7×7 pixels centered on the pixel for attention.\n\nThe horizontal direction is represented by the x coordinate, and the vertical direction is represented by the y coordinate. The center of the neighboring area of 7×7 pixels, that is, (x, y)=(4, 4) corresponds to the position of the pixel for attention.\n\nThe pixel used in the texture direction determination is represented by W. At the pixel position (x, y), the vertical direction differential value gradV(x, y) is acquired by the following formula:\n\nwhere abs( ) is a function of acquiring an absolute value, w(x, y−1) is a pixel value (intensity) of W at the coordinate position (x, y−1), and w(x, y+1) is a pixel value (intensity) of W at the coordinate position (x, y+1).\n\nThe vertical direction differential value calculating unit 312 acquires the differential value gradV at the pixel positions indicated by symbol O in FIG. 7.\n\nFor example, at the position of symbol O of the position of (x, y)=(1, 2), the differential value gradV is acquired on the basis of the pixel values (intensities) of both neighboring W in the vertical direction.\n\nThat is, the differential value gradV(1, 2) is calculated using the pixel values of two W pixels of (x, y)=(1, 1) and (x, y)=(1, 3).\n\nThe differential values gradV are acquired at eighteen pixel positions indicated by symbol O in FIG. 7.\n\nNext, the process of the upper right direction differential value calculating unit 313 calculating the differential value in the upper right direction will be described.\n\nFIG. 8 is a diagram illustrating the process of the upper right direction differential value calculating unit 313. FIG. 8 shows the neighboring area of/pixels centered on the pixel for attention.\n\nThe horizontal direction is represented by the x coordinate, and the vertical direction is represented by the y coordinate. The center of the neighboring area of 7×7 pixels, that is, (x, y)=(4, 4) corresponds to the position of the pixel for attention.\n\nThe pixel used in the texture direction determination is represented by W. At the pixel position (x, y), the upper right direction differential value gradA(x, y) is acquired by the following formula:\n\nwhere abs( ) is a function of acquiring an absolute value, w(x, y) is a pixel value (intensity) of W at the coordinate position (x, y), and w(x+1, y−1) is a pixel value (intensity) of W at the coordinate position (x+1, y−1).\n\nThe upper right direction differential value calculating unit 313 acquires the differential value gradA at the W pixel positions within a dotted line in FIG. 8.\n\nFor example, at the position of symbol W of the position of (x, y)=(1, 3), the differential value gradA is acquired on the basis of the pixel values (intensities) of the pixel itself and the upper right adjacent W.\n\nThat is, the differential value gradA(1, 3) is calculated using the pixel values of two W pixels of (x, y)=(1, 3) and (x, y)=(2, 2).\n\nThe differential values gradA are acquired at eighteen W pixel positions within a dotted line shown in FIG. 8.\n\nNext, the process of the lower right direction differential value calculating unit 314 calculating the differential value in the lower right direction will be described.\n\nFIG. 9 is a diagram illustrating the process of the lower right direction differential value calculating unit 314. FIG. 9 shows the neighboring area of 7×7 pixels centered on the pixel for attention.\n\nThe horizontal direction is represented by the x coordinate, and the vertical direction is represented by the y coordinate. The center of the neighboring area of 7×7 pixels, that is, (x, y)=(4, 4) corresponds to the position of the pixel for attention.\n\nThe pixel used in the texture direction determination is represented by W. At the pixel position (x, y), the lower right direction differential value gradD(x, y) is acquired by the following formula:\n\nwhere abs( ) is a function of acquiring an absolute value, w(x, y) is a pixel value (intensity) of W at the coordinate position (x, y), and w(x+1, y+1) is a pixel value (intensity) of W at the coordinate position (x+1, y+1).\n\nThe lower right direction differential value calculating unit 314 acquires the differential value gradD at the W pixel position within a dotted line in FIG. 9.\n\nFor example, at the position of symbol W of the position of (x, y)=(1, 1), the differential value gradD is acquired on the basis of the pixel values (intensities) of the pixel itself and the lower right adjacent W.\n\nThat is, the differential value gradD(1, 1) is calculated using the pixel values of two W pixels of (x, y)=(1, 1) and (x, y)=(2, 2).\n\nThe differential values gradD are acquired at eighteen W pixel positions within a dotted line shown in FIG. 9.\n\nNext, the processes of the statistic calculating units 321a to 321d will be described.\n\nThe statistic calculating units 321a to 321d perform sorting, on the basis of the magnitudes of the differential values, on the differential values gradH, gradV, gradA, and gradD respectively calculated by the differential value calculating units of:\n\n(1) Horizontal Direction Differential Value Calculating Unit 311,\n\n(2) Vertical Direction Differential Value Calculating Unit 312,\n\n(3) Upper Right Direction Differential Value Calculating Unit 313, and\n\n(4) Lower Right Direction Differential Value Calculating Unit 314.\n\nAs described with reference to FIG. 6 to FIG. 9, the following differential values are calculated by the pixel analysis of the 7×7 neighboring area centered on one pixel for attention:\n\n(1) Eighteen Horizontal Direction Differential Values gradH of Horizontal Direction Differential Calculating Unit 311,\n\n(2) Eighteen Vertical Direction Differential Values gradV of Vertical Direction Differential Calculating Unit 312,\n\n(3) Eighteen Upper Right Direction Differential Values gradA of Upper Right Direction Differential Calculating Unit 313, and\n\n(4) Eighteen Lower Right Direction Differential Values gradD of Right down Direction Differential Calculating Unit 314.\n\nThe statistic calculating units 321a to 321d perform sorting on the basis of the magnitudes of the differential values as the statistics based on the plurality of differential values, and calculate the average differential values mHgrad, mVgrad, mAgrad, and mDgrad using the differential values to the n-th value in ascending order.\n\nHerein, n is a value equal to or less than the length N of the sorted differential values. In the example, N is 18.\n\nThe value of n is determined depending on the magnitude of a continuous defect assumed to be included in the neighboring area 211. For example, when the continuous defective pixels included in the neighboring area pixels are 2×2 pixels, the differential values which are not accurately acquired by the influence of the defect are maximal 4 pixels, and thus n=N−4.\n\nNext, the process of the statistic comparing unit 331 will be described.\n\nThe statistic comparing unit 331 compares the statistics mHgrad, mVgrad, mAgrad, and mDgrad calculated by the statistic calculating units 321a to 321d in the horizontal, vertical, upper right, and lower right directions as described above, and determines a direction with the smallest statistic as the texture direction.\n\nFor example, when the mHgrad is the smallest value, the texture direction of the neighboring area 211 is determined as the horizontal direction.\n\nThe texture direction determining unit 202 in the defect correcting unit 118 determines the texture direction as described above, and outputs texture direction information (dir) as determination information to the defective pixel detecting unit 203.\n\nThe texture direction information (dir) corresponds to a direction with the smallest brightness change or pixel value change.\n\n6. Details of Process performed by Defective Pixel Detecting Unit in Defect Correcting Unit\n\nNext, details of the process performed by the defective pixel detecting unit 203 in the defect correcting unit 118 will be described with reference to FIG. 10.\n\nFIG. 10 is a diagram illustrating a process performed by the defective pixel detecting unit 203 and a configuration thereof.\n\nThe defective pixel detecting unit 203 performs the process on the pixels constituting the neighboring area 211 (in the example, the area of 7×7 pixels centered on the pixel for attention) for each sharing pixel group formed of a plurality of pixels sharing a pixel output reading circuit (sharing FD). The sharing pixel group is a group of pixels sharing the reading circuit.\n\nThe sharing pixel statistic calculating unit 411 calculates a statistic for each sharing pixel group formed of a plurality of pixels sharing the pixel output reading circuit (sharing FD).\n\nIn addition, the defective pixel detecting unit 203 includes the following units that use the statistics calculated by the sharing pixel statistic calculating unit 411 and detect pixels estimated to be defective in the respective directions in the neighboring area 211. That is, the following defect detecting units corresponding to four directions are provided:\n\n(1) Horizontal Direction Defect Detecting Unit 421 performing Defect Detection in Horizontal Direction,\n\n(2) Vertical Direction Defect Detecting Unit 422 performing Defect Detection in Vertical Direction,\n\n(3) Upper Right Direction Defect Detecting Unit 423 performing Defect Detection in Upper Right Direction, and\n\n(4) Lower Right Direction Defect Detecting Unit 424 performing Defect Detection in Lower Right Direction.\n\nIn addition, the defective pixel detecting unit 203 includes a defective pixel position selecting unit 431 that inputs the texture direction information (dir) determined by the texture direction determining unit 202 described above and selects a correction target defective pixel position to be a correction target from the pixel estimated to be the defective pixels by the defective detecting units 421 to 424 at the preceding stage.\n\nFor example, as a sharing pixel configuration of the solid-state imaging device, there is a pixel configuration in units of 8 pixels.\n\nFIG. 11 shows an example of a configuration in which 8 pixels connected by a solid line indicate the sharing pixels.\n\nThat is, the example shown in FIG. 11 describes a configuration of a sharing pixel group including 8 pixels and using one pixel output reading circuit (sharing FD).\n\nIn the case of a sharing pixel pattern shown in FIG. 11, the pixel group of the sharing pixels (8 pixels) including the pixel for attention is FD01.\n\nPixel groups of the sharing pixels in the neighboring left, lower left, down, lower right, and right directions of the pixel group of the sharing pixels FD01 including the pixel for attention are sequentially indicated by FD00, FD10, FD11, FD12, and FD22.\n\nThat is, in the 7×7 pixel area centered on the pixel for attention, the pixel groups of the sharing pixels (8 pixels) such as FD00, FD01, and FD02 from the upper left side and FD10, FD11, and FD12 from the lower left side are set.\n\nWhen a pixel output reading circuit is shared by a sharing pixel group, there is a case where one pixel of the group is defective, or a case where an output value is set to a value away from a normal value due to a defect of the reading circuit. For example, all the constituent pixels of the pixel group FD00 shown in FIG. 11 may be defective pixels.\n\nHereinafter, an example of a process when one of the pixel groups FD01 and FD11 is defective will be described.\n\nFirst, the sharing pixel statistic calculating unit 411 calculates a pixel value average value of W pixels for each sharing pixel group in the neighboring area 211. In the example, the neighboring area 211 is the 7×7 pixel area centered on the pixel for attention.\n\nFor example, mW01 is an average value of the W pixels included in the sharing pixel group (FD01) including the center pixel surrounded by a dotted line in the pixel area formed of 7×7 pixels shown in FIG. 11. The sharing pixel group is a group of the pixels sharing the reading circuit as described above.\n\nSimilarly, mW00 is the average value of W pixels included in the pixel sharing group (FD00);\n\nmW01 is the average value of W pixels included in the pixel sharing group (FD01);\n\nmW02 is the average value of W pixels included in the pixel sharing group (FD02);\n\nmW10 is the average value of W pixels included in the pixel sharing group (FD10);\n\nmW11 is the average value of W pixels included in the pixel sharing group (FD11); and\n\nmW12 is the average value of W pixels included in the pixel sharing group (FD12).\n\nAn average value group formed of six average values (mW00 to mW12) of the respective sharing pixel groups is an average value group 1.\n\nThat is, the average value group 1 is formed of the average values of total 2×3 pixel sharing groups: the pixel sharing group (FD01) including the pixel for attention (center pixel surrounded with the dotted line in the 7×7 pixel area shown in FIG. 11) and the adjacent sharing pixel groups of the left (FD00), lower left (FD10), down (FD11), lower right (FD12), and right (FD02) directions thereof.\n\nThe average value group 1 is calculated corresponding to the six sharing pixel groups FD00 to FD12 shown in FIG. 11. Alternatively, a configuration using an average value group 2 may be formed by using the average values of total 2×3 pixel sharing groups: the sharing pixel group (FD01) including the pixel for attention, and the adjacent sharing pixel groups of the left (FD00), upper left (not shown, above FD00), up (not shown, above FD01), upper right (not shown, above FD02), and right (FD02).\n\nWhether to use the average value group 1 or the average value group 2 is determined depending on a phase of the pixel for attention and a pattern of the sharing pixel. That is, the average value of the W pixels sharing the FD00 is represented by mW00, and the average value of the W pixels sharing another FD is represented in the same manner.\n\nThe process of the horizontal direction defect detecting unit 421 will be described. First, the horizontal direction defect detecting unit 421 calculates horizontal direction difference values (gradients: gH00 to gH11) of the following sharing pixel groups on the basis of 2×3 average values (mW00 to mW12) calculated by the sharing pixel statistic calculating unit 411 of the preceding stage. That is, gH00=abs(mW00−mW01), gH01=abs(mW01−mW02), gH10=abs(mW10−mW11), and gH11=abs(mW11−mW12) are calculated.\n\nThe difference values (gradients) correspond to first-order differential values.\n\nThe horizontal direction defect detecting unit 421 calculates the following difference average values on the basis of the horizontal direction difference values (gradients: gH00 to gH11) of the respective sharing pixel groups:\n\nAverage Value of Upper Horizontal Direction Difference Values (gradients: gH00 and gH01)\n\ngH0=(gH00+gH01)/2\n\nAverage Value of Lower Horizontal Direction Difference Values (gradients: gH10 and gH11)\n\ngH1=(gH10+gH11)/2\n\nThe horizontal direction defect detecting unit 421 compares the following:\n\nDifference Average Value (gH0=(gH00+gH01)/2) that is Average Value of Upper Horizontal Direction Difference Values (gradients), and\n\nDifference Average Value (gH1=(gH10+gH11)/2) that is Average Value of Lower Horizontal Direction Difference Values (gradients).\n\nThe horizontal direction defect detecting unit 421 calculates the horizontal direction defect detecting average value (safe_mW) as follows in accordance with the comparison result.\n\n(a) if gH0<gH1,\n\nsafemW=(mW00+mW01+mW02)/3, and\n\n(b) if gH0>gH1,\n\nsafemW=(mW10+mW11+mW12)/3.\n\nAccording to any of the above (a) and (b), the horizontal direction defect detecting average value safe_mW is acquired.\n\nThe process is a process of selecting a flat area with a smaller change in pixel values from the upper and lower pixel areas, and calculating the average value as the horizontal direction defect detecting average value (safe_mW).\n\nThen, the horizontal direction defect detecting unit 421 determines that the sharing pixel group FD01 has a defect, when the difference average value (gH0) of the upper horizontal direction difference values (gradients) of the upper sharing pixel groups including the sharing pixel group (FD01) of the defect determination target is equal to or larger than a predetermined threshold value and the following condition formula is satisfied:\n\nabs(mW01−safemW)>abs(mW00−safemW)\n\nThe determination process is a process of determining that the sharing pixel group (FD01) of the defect determination target has a defect, when the difference average value (gH0) of the upper horizontal direction difference values (gradients) of the upper sharing pixel groups including the sharing pixel group (FD01) of the defect determination target is equal to or larger than a predetermined threshold value and the difference between the pixel value average (mW01) of the sharing pixel group (FD01) of the defect determination target and the horizontal direction defect detecting average value (safe_mW) is larger than the difference between the pixel value average (mW00) of the left sharing pixel group (FD00) adjacent to the sharing pixel group (FD01) in the horizontal direction and the horizontal direction defect detecting average value (safe_mW).\n\nWhen the difference average value (gH1) of the lower horizontal direction difference values (gradients) of the lower sharing pixel groups including the sharing pixel group (FD11) of the defect determination target is equal to or larger than a predetermined threshold value and the following condition formula is satisfied, the sharing pixel group (FD11) is determined to have a defect.\n\nabs(mW11−safemW)>abs(mW10−safemW)\n\nThe determination process is a process of determining that the sharing pixel group (FD11) of the defect determination target has a defect, when the difference average value (gH1) of the lower horizontal direction difference values (gradients) is equal to or larger than a predetermined threshold value and the difference between the pixel value average (mW11) of the sharing pixel group (FD11) of the defect determination target and the horizontal direction defect detecting average value (safe_mW) is larger than the difference between the pixel value average (mW10) of the left sharing pixel group (FD10) adjacent to the sharing pixel group (FD11) and the horizontal direction defect detecting average value (safe_mW).\n\nNext, the process of the vertical direction defect detecting unit 422 will be described.\n\nThe vertical direction defect detecting unit 422 calculates vertical direction difference values (gradients: gV0 and gV1) of the following sharing pixel groups on the basis of 2×3 average values (mW00 to mW12) calculated by the sharing pixel statistic calculating unit 411 of the preceding stage. That is, the vertical direction defect detecting unit 422 calculates the vertical direction difference value (gradient) of the sharing pixel groups (FD00 and FD10) of the left column: gV0, gV0=abs(mW00−mW10), and the vertical direction difference value (gradient) of the sharing pixel groups (FD01 and FD11) of the right column: gV1, gV1=abs(mW01−mW11).\n\nFurthermore, the vertical direction defect detecting unit 422 compares two vertical direction difference values (gradients: gV0 and gV1) of the respective sharing pixel groups, and calculates the following vertical direction defect detecting average value (safe_mW) on the basis of the comparison result.\n\n(a) if gV0<gV1,\n\nsafemW=(mW00+mW10)/2, and\n\n(b) if gV0>gV1,\n\nsafemW=(mW01+mW11)/2.\n\nAccording to any of the above (a) and (b), the vertical direction defect detecting average value safe_mW is acquired.\n\nThe process is a process of selecting a flat area with a smaller change in pixel values from pixel areas of the columns of the sharing pixel groups in two vertical columns adjacent to each other, that is, the left column including the sharing pixel groups (FD00 and FD10) and the right column including the sharing pixel groups FD01 and FD11, and calculating the average value as the vertical direction detect detecting average value (safe_mW).\n\nThen, the vertical direction defect detecting unit 422 determines that the sharing pixel group (FD01) has a defect, when the vertical direction difference value (gradient) gV1 of the column including the sharing pixel group (FD01) of the defect determination target is equal to or larger than a predetermined threshold value and the following condition formula is satisfied:\n\nabs(mW01−safemW)>abs(mW11−safemW)\n\nThe determination process is a process of determining that the sharing pixel group (FD01) of the defect determination target has a defect, when the vertical direction difference value (gradient) (gV1) of the vertical direction column including the sharing pixel group (FD01) of the defect determination target is equal to or larger than a predetermined threshold value and the difference between the pixel value average (mW01) of the sharing pixel group (FD01) of the defect determination target and the vertical direction defect detecting average value (safe_mW) is larger than the difference between the pixel value average (mW11) of the lower sharing pixel group (FD11) adjacent to the sharing pixel group (FD01) in the vertical direction and the vertical direction defect detecting average value (safe_mW).\n\nThen, the vertical direction defect detecting unit 422 determines that the sharing pixel group (FD11) has a defect, when the vertical direction difference value (gradient) gV1 of the column including the sharing pixel group (FD11) of the defect determination target is equal to larger than a predetermined threshold value and the following condition formula is satisfied:\n\nabs(mW11−safemW)>abs(mW01−safemW)\n\nThe determination process is a process of determining that the sharing pixel group (FD11) of the defect determination target has a defect, when the vertical direction difference value (gradient) (gV1) of the vertical direction column including the sharing pixel group (FD11) of the defect determination target is equal to or larger than a predetermined threshold value and the difference between the pixel value average (mW11) of the sharing pixel group (FD11) of the defect determination target and the vertical direction defect detecting average value (safe_mW) is larger than the difference between the pixel value average (mW01) of the upper sharing pixel group (FD01) adjacent to the sharing pixel group (FD11) in the vertical direction and the vertical direction defect detecting average value (safe_mW).\n\nNext, the process of the upper right direction defect detecting unit 423 will be described.\n\nThe upper right direction defect detecting unit 423 calculates upper right direction difference values (gradients: gA0 and gA1) of the following sharing pixel groups on the basis of 2×3 average values (mW00 to mW12) calculated by the sharing pixel statistic calculating unit 411 of the preceding stage. That is, the upper right direction defect detecting unit 423 calculates the upper right direction difference value (gradient) of the sharing pixel groups (FD10 and FD01) adjacent in the upper right direction: gA0, gA0=abs(mW10−mW01), and the upper right direction difference value (gradient) of the sharing pixel groups (FD11 and FD02) adjacent in the upper right direction: gA1, gA1=abs(mW11−mW02).\n\nThe upper right direction defect detecting unit 423 compares two upper right direction difference values (gradients: gA0 and gA1) of the respective sharing pixel groups, and calculates the following upper right direction defect detecting average value (safe_mW) on the basis of the comparison result.\n\n(a) if gA0<gA1,\n\nsafemW=(mW10+mW01)/2, and\n\n(b) if gA0>gA1,\n\nsafemW=(mW11+mW02)/2.\n\nAccording to any of the above (a) and (b), the upper right direction defect detecting average value safe_mW is acquired.\n\nThe process is a process of selecting a flat area with a smaller change in pixel values from the adjacent data of sharing pixel groups in two upper right direction lines, that is, the sharing pixel groups (FD10 and FD01) and the sharing pixel groups (FD11 and FD02), and calculating the average value as the upper right direction defect detecting average value (safe_mW).\n\nThen, the upper right direction defect detecting unit 423 determines that the sharing pixel group FD01 has a defect, when the upper right direction difference value (gradient) gA0 of the sharing pixel groups including the sharing pixel group (FD01) of the defect determination target is equal to or larger than a predetermined threshold value and the following condition formula is satisfied:\n\nabs(mW01−safemW)>abs(mW10−safemW)\n\nThe determination process is a process of determining that the sharing pixel group (FD01) of the defect determination target has a defect, when the upper right direction difference value (gradient) (gA1) in the upper right direction including the sharing pixel group (FD01) of the defect determination target is equal to or larger than a predetermined threshold value and the difference between the pixel value average (mW01) of the sharing pixel group (FD01) of the defect determination target and the upper right direction defect detecting average value (safe_mW) is larger than the difference between the pixel value average (mW10) of the lower left sharing pixel group (FD10) adjacent to the sharing pixel group (FD01) in the upper right direction and the upper right direction defect detecting average value (safe_mW).\n\nThen, the upper right direction defect detecting unit 423 determines that the sharing pixel group FD11 has a defect, when the upper right direction difference value (gradient) gA1 of the sharing pixel groups in the upper right direction including the sharing pixel group (FD11) of the defect determination target is equal to or larger than a predetermined threshold value and the following condition formula is satisfied:\n\nabs(mW11−safemW)>abs(mW02−safemW)\n\nThe determination process is a process of determining that the sharing pixel group (FD11) of the defect determination target has a defect, when the upper right direction difference value (gradient) (gA1) in the upper right direction including the sharing pixel group (FD11) of the defect determination target is equal to or larger than a predetermined threshold value and the difference between the pixel value average (mW11) of the sharing pixel group (FD11) of the defect determination target and the upper right direction defect detecting average value (safe_mW) is larger than the difference between the pixel value average (mW02) of the upper right sharing pixel group (FD02) adjacent to the sharing pixel group (FD11) in the upper right direction and the upper right direction defect detecting average value (safe_mW).\n\nNext, the process of the lower right direction defect detecting unit 424 will be described.\n\nThe lower right direction defect detecting unit 424 calculates lower right direction difference values (gradients: gD0 and gD1) of the following sharing pixel groups on the basis of 2×3 average values (mW00 to mW12) calculated by the sharing pixel statistic calculating unit 411 of the preceding stage. That is, the lower right direction defect detecting unit 424 calculates the lower right direction difference value (gradient) of the sharing pixel groups (FD00 and FD11) adjacent in the lower right direction: gD0, gD0=abs(mW00−mW11), and the lower right direction difference value (gradient) of the sharing pixel groups (FD01 and FD12) adjacent in the lower right direction: gD1, gD1=abs(mW01−mW12).\n\nThe lower right direction defect detecting unit 424 compares two lower right direction difference values (gradients: gD0 and gD1) of the respective sharing pixel groups, and calculates the following lower right direction defect detecting average value (safe_mW) on the basis of the comparison result.\n\n(a) if gD0<gD1,\n\nsafemW=(mW00+mW11)/2, and\n\n(b) if gD0>gD1,\n\nsafemW=(mW01+mW12)/2.\n\nAccording to any of the above (a) and (b), the lower right direction defect detecting average value safe_mW is acquired.\n\nThe process is a process of selecting a flat area with a smaller change in pixel values from the adjacent data of sharing pixel groups in two lower right direction lines, that is, the pixel sharing groups (FD00 and FD11) and the pixel sharing groups (FD01 and FD12), and calculating the average value as the vertical direction defect detecting average value (safe_mW).\n\nThen, the lower right direction defect detecting unit 424 determines that the pixel sharing group FD01 has a defect, when the lower right direction difference value (gradient) gD1 of the sharing pixel groups in the lower right direction including the sharing pixel group (FD01) of the defect determination target is equal to or larger than a predetermined threshold value and the following condition formula is satisfied.\n\nabs(mW01−safemW)>abs(mW12−safemW)\n\nThe determination process is a process of determining that the sharing pixel group (FD01) of the defect determination target has a defect, when the lower right direction difference value (gradient) (gD1) of the sharing pixel groups in the lower right direction including the sharing pixel group (FD01) of the defect determination target is equal to or larger than a predetermined threshold value and the difference between the pixel value average (mW01) of the sharing pixel group (FD01) of the defect determination target and the lower right direction defect detecting average value (safe_mW) is larger than the difference between the pixel value average (mW12) of the lower right sharing pixel group (FD12) adjacent to the sharing pixel group (FD01) in the lower right direction and the lower right direction defect detecting average value (safe_mW).\n\nThen, the lower right direction defect detecting unit 424 determines that the sharing pixel group FD11 has a defect, when the lower right direction difference value (gradient) gD0 of the sharing pixel groups including the sharing pixel group (FD11) of the defect determination target is equal to larger than a predetermined threshold value and the following condition formula is satisfied:\n\nabs(mW11−safemW)>abs(mW00−safemW)\n\nThe determination process is a process of determining that the sharing pixel group (FD11) of the defect determination target has a defect, when the lower right direction difference value (gradient) (gD0) of the sharing pixel groups in the lower right direction including the sharing pixel group (FD11) of the defect determination target is equal to or larger than a predetermined threshold value and the difference between the pixel value average (mW11) of the sharing pixel group (FD11) of the defect determination target and the lower right direction defect detecting average value (safe_mW) is larger than the difference between the pixel value average (mW00) of the upper left sharing pixel group (FD00) adjacent to the sharing pixel group (FD11) in the lower right direction and the lower right direction defect detecting average value (safe_mW).\n\nNext, the process of the defective pixel position selecting unit 431 will be described.\n\nThe defective pixel position selecting unit 431 selects the detection result of the texture direction (dir) determined by the texture direction determining unit 202 described above, from the pixel positions detected by the defect detecting units 421 to 424 of the respective directions described above.\n\nThat is, when the texture direction (dir) determined by the texture direction determining unit 202 is the horizontal direction (H), the detection result of the horizontal direction defect detecting unit 421 is selected. That is, when the horizontal direction defect detecting unit 421 determines that the sharing pixel group FDxy (for example, FD01) has a defective pixel, the sharing pixel group FDxy is selected as the sharing pixel group having the defective pixel of the correction target.\n\nWhen the texture direction (dir) determined by the texture direction determining unit 202 is the vertical direction (V), the detection result of the vertical direction defect detecting unit 422 is selected. That is, when the vertical direction defect detecting unit 422 determines that the sharing pixel group FDxy (for example, FD01) has a defective pixel, the sharing pixel group FDxy is selected as the sharing pixel group having the defective pixel of the correction target.\n\nWhen the texture direction (dir) determined by the texture direction determining unit 202 is the upper right direction (A), the detection result of the upper right direction defect detecting unit 423 is selected. That is, when the upper right direction defect detecting unit 423 determines that the sharing pixel group FDxy (for example, FD01) has a defective pixel, the sharing pixel group FDxy is selected as the sharing pixel group including the defective pixel of the correction target.\n\nWhen the texture direction (dir) determined by the texture direction determining unit 202 is the lower right direction (D), the detection result of the lower right direction defect detecting unit 424 is selected. That is, when the lower right direction defect detecting unit 424 determines that the sharing pixel group FDxy (for example, FD01) has a defective pixel, the sharing pixel group FDxy is selected as the sharing pixel group including the defective pixel of the correction target.\n\nThe selection process of the correction target defective pixel performed by the defective pixel position selecting unit 431 is a process of determining that only a pixel at a position in a direction corresponding to the texture direction (dir) determined by the texture direction determining unit 202 has a high probability of being an actually defective pixel, from the pixels estimated as pixels having a defect by the defect detecting units 421 to 424, and selecting the pixel as the correction target.\n\nThe selection process will be described.\n\nIn the defect detecting units 421 to 424, the sharing pixel group with a large change in pixel values in the corresponding direction is set as the sharing pixel group probably having a defective pixel, and the pixel group is determined as the group including the defective pixel.\n\nHowever, the defective pixel determined by the defect detecting units 421 to 424 may not be a defective pixel to be corrected and may output a true value.\n\nThe defective pixel position selecting unit 431 selects a sharing pixel group including a defective pixel to be corrected from the sharing pixel groups including pixels determined as the defective pixels by the defect detecting units 421 to 423. For the selection process, the texture direction information (dir) is applied.\n\nThe texture direction is originally a direction with a small change in pixel values.\n\nIn the defect detecting units 421 to 424, the sharing pixel group with a large change in pixel values in the corresponding direction is estimated as the group probably having a defective pixel, and these are determined as the defective pixels.\n\nThe defective pixel position selecting unit 431 selects only a sharing pixel group corresponding to the texture direction from the outputs of the defect detecting units 421 to 424, as a sharing pixel group having an actual defective pixel to be corrected. The other groups are determined to be highly probably outputting the actual pixel value and are excepted from the correction target.\n\nIn the configuration of the defective pixel detecting unit 203 described with reference to FIG. 10, the pixel group estimated to have a defective pixel is determined in the detecting units of the horizontal direction defect detecting unit 421, the vertical direction defect detecting unit 422, the upper right direction defect detecting unit 423, and the lower right direction defect detecting unit 424, and then one of the outputs of the detecting units 421 to 424 is selected and set as the correction target using the texture direction information in the defective pixel position selecting unit 431.\n\nAlternatively, for example, in accordance with the texture direction information, the defect detecting unit performing the defect detection in the same direction as the texture direction may be selectively operated from the defect detecting units 421 to 424.\n\nFor example, as shown in FIG. 12, whether to operate any of the defect detecting units 421 to 424 is determined using a switch 432 that is controlled in accordance with the texture direction information in the defective pixel position selecting unit 431 so as to operate any one of the detecting units 421 to 424.\n\nIn the embodiment, although the example of using the first-order differential value (gradient) in the defect detection is described, for example, a second-order differential value (Laplacian) may be used in addition to the first-order differential value (gradient).\n\n7. Process of Defective Pixel Correcting Unit\n\nThe process of the defective pixel correcting unit 204 shown in FIG. 4 will be described.\n\nThe defective pixel correcting unit 204 performs correction on the defective pixel detected by the defective pixel detecting unit 203 described above.\n\nThe correcting unit 204 determines the correction pixel value using the neighboring pixels on the basis of the texture direction determined by the texture direction determining unit 202, with respect to the detected defective pixel.\n\nAs a method of determining the correction pixel value, various methods can be applied, for example, a correction process referring to the pixel values of the neighboring pixels can be applied. For example, a correction process such as a method of replacing the pixel value of the defective pixel with the pixel value of the pixel closest to the defective pixel position is performed in the texture direction. In addition, a process of selecting from 2×3 average values calculated by the sharing pixel statistic calculating unit 411 may be applied in the texture direction.\n\nThe present disclosure has been described in detail with reference to the specific embodiment. However, it is obvious that a person skilled in the art can correct and modify the embodiment within the scope which does not deviate from the main concept of the present disclosure. That is, the embodiment of the present disclosure is disclosed as an example, and thus should not be interpreted as limiting. To determine the main concept of the present disclosure, it is preferable to refer to Claims.\n\nThe series of processes described in the specification may be performed by hardware, software, or combination of both. When the process is performed by software, a program recording the process sequence may be installed and executed in a memory of a computer installed in dedicated hardware, or the program may be installed and executed in a general-purpose computer which can perform various processes. For example, the program may be recorded in advance in a recording medium. In addition to the installation from the recording medium to the computer, the program may be received through a network such as LAN (Local Area Network) and the Internet and may be installed in a recording medium such as a built-in hard disk.\n\nVarious processes described in the specification may not only be performed in time series according to the description, but also be performed in parallel or individually in accordance with the performance of the device performing the processes or as necessary. The system in the specification is a logical group configuration of a plurality of devices, and the constituent devices are not limited to being provided in the same casing.\n\nThe present disclosure contains subject matter related to that disclosed in Japanese Priority Patent Application JP 2011-006464 filed in the Japan Patent Office on Jan. 14, 2011, the entire contents of which are hereby incorporated by reference.\n\nIt should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.\n\n## Claims\n\n1. An image processing device comprising:\n\na texture direction determining unit that determines a texture direction of an image;\na defective pixel detecting unit that calculates a pixel value average for each of pixel groups including a plurality of pixels, and detects a defective pixel position on the basis of difference information of the pixel value average according to an arrangement direction of the pixel groups; and\na correction unit that corrects, as a correction target, a pixel value at the defective pixel position detected on the basis of the difference information in the same pixel group arrangement direction as the texture direction determined by the texture direction determining unit.\n\n2. The image processing device according to claim 1, wherein\n\nthe defective pixel detecting unit calculates the pixel value average for each pixel group including the plurality of pixels sharing a pixel value reading circuit, and detects the defective pixel position on the basis of the difference information of the pixel value average according to the arrangement direction of the pixel group.\n\n3. The image processing device according to claim 1, wherein\n\nthe defective pixel detecting unit determines whether or not a pixel is a defective pixel in accordance with a difference value from a reference value (safe_mW), the reference value being an average value of a plurality of pixel groups in a flat area where a difference from the average value of the plurality of pixel groups is small in a plurality of adjacent arrangements of pixel groups in the same arrangement direction.\n\n4. The image processing device according to claim 1,\n\nwherein the texture direction determining unit performs a process of determining one of four directions of horizontal, vertical, upper right, and lower right as the texture direction, and\nwherein the defective pixel detecting unit detects the defective pixel position on the basis of the difference information of the pixel value average according to the arrangement direction of the pixel groups in the four directions of horizontal, vertical, upper right, and lower right.\n\n5. The image processing device according to claim 1, wherein\n\nthe texture determining unit calculates a plurality of differential values based on pixel values of pixels arranged in a predetermined direction included in a neighboring area centered on a pixel for attention, sorts the plurality of differential values, selects only data with small values, calculates statistics, and determines the texture direction on the basis of comparison of the statistics.\n\n6. The image processing device according to claim 1, wherein\n\nthe correction unit performs a process of determining the pixel value at the defective pixel position with reference to a neighboring pixel in the texture direction as a reference pixel on the basis of the pixel value of the reference pixel.\n\n7. The image processing device according to claim 1, wherein\n\nthe defective pixel detecting unit calculates, as a process target, a pixel value average for each pixel group including a plurality of pixels in the same pixel group arrangement direction as the texture direction determined by the texture direction determining unit, and detects the defective pixel position on the basis of the difference information of the pixel value averages according to the arrangement direction of the pixel groups.\n\n8. An image processing method performed in an image processing device, the method comprising:\n\ncausing a texture direction determining unit to determine a texture direction of an image;\ncausing a defective pixel detecting unit to calculate a pixel value average for each of pixel groups including a plurality of pixels and to detect a defective pixel position on the basis of difference information of the pixel value average according to an arrangement direction of the pixel groups; and\ncausing a correction unit to correct, as a correction target, a pixel value at the defective pixel position detected on the basis of the difference information in the same pixel group arrangement direction as the texture direction determined by the texture direction determining unit.\n\n9. A program for causing an image processing device to execute an image process comprising:\n\ncausing a texture direction determining unit to determine a texture direction of an image;\ncausing a defective pixel detecting unit to calculate a pixel value average for each of pixel groups including a plurality of pixels and to detect a defective pixel position on the basis of difference information of the pixel value average according to an arrangement direction of the pixel groups; and\ncausing a correction unit to correct, as a correction target, the detective pixel position detected on the basis of the difference information in the same pixel group arrangement direction as the texture direction determined by the texture direction determining unit.\nPatent History\nPublication number: 20120182452\nType: Application\nFiled: Jan 5, 2012\nPublication Date: Jul 19, 2012\nPatent Grant number: 8698923\nInventor: Fumihito YASUMA (Tokyo)\nApplication Number: 13/344,152\nClassifications\nCurrent U.S. Class: Defective Pixel (e.g., Signal Replacement) (348/246); 348/E05.078\nInternational Classification: H04N 5/217 (20110101);" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8780258,"math_prob":0.96645963,"size":66301,"snap":"2021-43-2021-49","text_gpt3_token_len":13456,"char_repetition_ratio":0.27876073,"word_repetition_ratio":0.49033678,"special_character_ratio":0.20636189,"punctuation_ratio":0.07187338,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9666734,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-22T20:40:47Z\",\"WARC-Record-ID\":\"<urn:uuid:e6753631-3462-454f-990a-9c719fe2ce11>\",\"Content-Length\":\"147974\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dd430c21-0a35-480e-86f9-6229ea005545>\",\"WARC-Concurrent-To\":\"<urn:uuid:9b58f35b-fd11-4306-9c41-f5def934c453>\",\"WARC-IP-Address\":\"52.202.107.78\",\"WARC-Target-URI\":\"https://patents.justia.com/patent/20120182452\",\"WARC-Payload-Digest\":\"sha1:FUPWZB5QJVQBZ6E4KLM4LLLCZSR7LAG3\",\"WARC-Block-Digest\":\"sha1:A5EQUA6OK7OWL56UM7LCPTLFBWQKVATQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585518.54_warc_CC-MAIN-20211022181017-20211022211017-00437.warc.gz\"}"}
https://www.k5learning.com/free-math-worksheets/fifth-grade-5/multiplication-division/multiply-columns-2-digit-4-digit
[ "# Multiply 4-digit by 2-digit numbers\n\n## Math worksheets: Multiply 4-digit by 2-digit numbers\n\nBelow are six versions of our grade 5 math worksheet on multiplying 4-digit by 2-digit numbers. These worksheets are pdf files.\n\n## More division worksheets\n\nExplore all of our division worksheets, from simple division facts to long division of large numbers.\n\n## More multiplication worksheets\n\nFind all of our multiplication worksheets, from basic multiplication facts to multiplying multi-digit whole numbers in columns.\n\nWhat is K5?\n\nK5 Learning offers free worksheets, flashcards and inexpensive workbooks for kids in kindergarten to grade 5. Become a member to access additional content and skip ads." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8860102,"math_prob":0.90599406,"size":309,"snap":"2022-27-2022-33","text_gpt3_token_len":67,"char_repetition_ratio":0.16721311,"word_repetition_ratio":0.0,"special_character_ratio":0.21035598,"punctuation_ratio":0.1,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9832575,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-01T23:03:24Z\",\"WARC-Record-ID\":\"<urn:uuid:d10e8c56-ca34-4108-aff4-deca3d0a500d>\",\"Content-Length\":\"86349\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c4469252-5b35-463a-8f8f-7a3c1fe45d8a>\",\"WARC-Concurrent-To\":\"<urn:uuid:77cb3a86-89df-43af-a1bb-0e52cab282a0>\",\"WARC-IP-Address\":\"146.75.38.132\",\"WARC-Target-URI\":\"https://www.k5learning.com/free-math-worksheets/fifth-grade-5/multiplication-division/multiply-columns-2-digit-4-digit\",\"WARC-Payload-Digest\":\"sha1:L3Y7B7WWJC6RU6J5J3IXQSRY7PFTC6HJ\",\"WARC-Block-Digest\":\"sha1:2ACBIDSKGNLKO22BBGEXEXQH7YVZCVZV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103947269.55_warc_CC-MAIN-20220701220150-20220702010150-00762.warc.gz\"}"}
https://www.chemistryhive.com/calculating-moles-moles-and-compounds/
[ "//Calculating Moles: Moles and Compounds", null, "## Moles and Compounds\n\nThe idea of the mole has not been only used with elements and atoms but also used with compounds.", null, "1 mole of water H2O (left) and 1 mole of ethanol C2H5OH (right) in measuring cylinders\n\nIf you asked what will be the mass of 1 mole of water (H2O) molecules? (where, Ar for H = 1 and O = 16)\n\nLet’s find the solution.\n\nFrom the formula of water, H2O, we can see that 1 mole of water molecules contains 2 moles of hydrogen (H) atoms and 1 mole of oxygen (O) atoms.So we get the mass of 1 mole of water molecules is = (2 × 1) + (1 × 16) = 18 g\n\nThe mass of 1 mole of any compound is called its molar mass. One can write the molar mass of a compound without any units. Then it is called the relative formula mass, often called the relative molecular mass (Mr ).\n\nSo the relative formula mass (RFM) of water is = 18.\n\nLet’s learn and understand more about moles and compounds!\n\n###### Example 1\n\nWhat will be the mass of  a) one mole and b) the relative formula mass (RFM) of ethanol, C2H5OH?  (Ar For H = 1, C = 12 and O = 16)\n\na) 1 mole of C2H5OH contains 2 moles of carbon atoms, 6 moles of hydrogen atoms and 1 mole of oxygen atoms. So, the mass of one mole of ethanol is\n\n= (2 × 12) + (6 × 1) + (1 × 16)\n\n= 46 g\n\nb) The RFM of ethanol is 46.\n\n###### Example 2\n\nWhat will be the mass of a) 1 mole and b) the RFM of nitrogen gas, N2? (Ar for N = 14)\n\na) We know Nitrogen (N2) is a diatomic gas. Each nitrogen molecule contains two atoms of nitrogen.\n\nSo, the mass of 1 mole of nitrogen is\n\n= N2 = 2 × 14\n\n= 28 g\n\nb) The RFM of N2 is 28.\n\nThe mass of any compound found in any number of moles can be calculated using the following formula", null, "###### Example 3\n\nCalculate the mass of a) 3 moles and b) 0.4 moles of carbon dioxide gas, CO2 . (Ar for C = 12 and O = 16)\n\n1 mole of CO2 contains 1 mole of carbon atoms and 2 moles of oxygen atoms.\n\nSo the mass of 1 mole of CO2 is\n\n= (1 × 12) + (2 × 16)\n\n= 44 g\n\na) The mass of 3 moles of CO2 is\n\n= number of moles × mass of 1 mole of CO2\n\n= 3 × 44\n\n= 132 g\n\nSimilarly,\n\nb) the mass of 0.4 mole of CO2 is = 0.4 × 44 = 17.6 g\n\nIf you know the mass of a compound then you can calculate the number of moles of the compound using the following relationship", null, "###### Example 4\n\nWe can calculate the number of moles of magnesium oxide, MgO in  a) 80 g and b) 10 g of the compound. (Ar for O = 16 and Mg = 24)\n\n1 mole of MgO contains 1 mole of magnesium atoms and 1 mole of oxygen atoms.\n\nSo, the mass of 1 mole of MgO\n\n= (1 × 24) + (1 × 16)\n\n= 40 g\n\na)The number of moles of MgO in 80 g", null, "= 80 g ÷ 40g = 2\n\nSimilarly\n\nb) the number of moles of MgO in 10 g is\n\n= 10 g ÷ 40g\n\n= 0.25 mole\n\n`Source- Cambridge IGCSE Chemistry 3rd Edition`" ]
[ null, "https://www.chemistryhive.com/wp-content/uploads/2018/11/Moles-and-Compounds-.png", null, "http://www.chemistryhive.com/wp-content/uploads/2018/11/Screenshot_14.png", null, "http://www.chemistryhive.com/wp-content/uploads/2018/11/Screenshot_15.png", null, "http://www.chemistryhive.com/wp-content/uploads/2018/11/Screenshot_16.png", null, "http://www.chemistryhive.com/wp-content/uploads/2018/11/Screenshot_17.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8678623,"math_prob":0.99902034,"size":2697,"snap":"2023-14-2023-23","text_gpt3_token_len":889,"char_repetition_ratio":0.19606388,"word_repetition_ratio":0.12541807,"special_character_ratio":0.3362996,"punctuation_ratio":0.06688963,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99996006,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,5,null,8,null,8,null,8,null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-21T12:04:14Z\",\"WARC-Record-ID\":\"<urn:uuid:993a2dba-950b-4aa4-9cc5-86768f09da7c>\",\"Content-Length\":\"75185\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:90fbbd4d-ed68-4ce5-91c6-358e2611fbf5>\",\"WARC-Concurrent-To\":\"<urn:uuid:14543dac-23d3-4b3a-9713-f5305dda35d9>\",\"WARC-IP-Address\":\"162.244.93.3\",\"WARC-Target-URI\":\"https://www.chemistryhive.com/calculating-moles-moles-and-compounds/\",\"WARC-Payload-Digest\":\"sha1:KCHNU4IPP3R5WN2IOS65A76CGAC2RN5Y\",\"WARC-Block-Digest\":\"sha1:AWDK6TY5P747NQ54WRJVV52U2YRV55SW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943695.23_warc_CC-MAIN-20230321095704-20230321125704-00096.warc.gz\"}"}
https://vbdotnetforums.com/threads/exporting-from-access-to-text-file.41453/
[ "#### gwbasic\n\n##### New member\nHi, this is my problem: i've some data in an access database, i collect this data from the table and when i try to record the data in the text file, with the fields separated with a \"!\", the registration fails, better, doesn't register all the datas that are in the table.\n\nThis is the code:\n\nVB.NET:\n``````Dim tw As TextWriter = File.AppendText(Application.StartupPath & \"\\DStampa.csv\")\nDim objDataTable As DataTable\nDim TotaleDatiStampa As String = \"\"\n\nobjDataTable = objData.QueryDatabase(\"SELECT * FROM SStato ORDER BY Cam, Dal\")\nIf objDataTable.Rows.Count <> 0 Then\nTotaleDatiStampa = \"\"\nFor i = 0 To objDataTable.Rows.Count - 1\nIf Not IsDBNull(objDataTable.Rows(i).Item(\"Cam\")) Then TotaleDatiStampa = objDataTable.Rows(i).Item(\"Cam\").ToString & \"!\" Else TotaleDatiStampa = \"!\"\nIf Not IsDBNull(objDataTable.Rows(i).Item(\"Stato\")) Then TotaleDatiStampa = TotaleDatiStampa & objDataTable.Rows(i).Item(\"Stato\").ToString & \"!\" Else TotaleDatiStampa = TotaleDatiStampa & \"!\"\nIf Not IsDBNull(objDataTable.Rows(i).Item(\"Dal\")) Then TotaleDatiStampa = TotaleDatiStampa & CDate(objDataTable.Rows(i).Item(\"Dal\")).ToShortDateString & \"!\" Else TotaleDatiStampa = TotaleDatiStampa & \"!\"\nIf Not IsDBNull(objDataTable.Rows(i).Item(\"Al\")) Then TotaleDatiStampa = TotaleDatiStampa & CDate(objDataTable.Rows(i).Item(\"Al\")).ToShortDateString & \"!\" Else TotaleDatiStampa = TotaleDatiStampa & \"!\"\nIf Not IsDBNull(objDataTable.Rows(i).Item(\"MieNote\")) Then TotaleDatiStampa = TotaleDatiStampa & objDataTable.Rows(i).Item(\"MieNote\").ToString & \"!\" Else TotaleDatiStampa = TotaleDatiStampa & \"!\"\nIf i < objDataTable.Rows.Count - 1 Then i = i + 1\nNext\nEnd If\ntw.WriteLine(TotaleDatiStampa)\ntw.Close()\nobjDataTable = Nothing``````\nNow, i did not understand why also if the table has two rows, in the text file i register only one.\nSeems that the problem is that the saving of rows in the db access is not yet completed, because when i bypass in debug mode the code, perche' quando vado in debug, sometime the loop say me that are present two rows and not one.\n\nHelp me, because i'vent no idea on the solution.\n\nThanx to all.\n\n#### jmcilhinney\n\n##### VB.NET Forum Moderator\nStaff member\nThe first point to note is that you should basically NEVER increment a For loop counter explicitly, so get rid of this line for a start:\nVB.NET:\n``If i < objDataTable.Rows.Count - 1 Then i = i + 1``\nThe next point to note is that you can simplify this:\nVB.NET:\n`````` If objDataTable.Rows.Count <> 0 Then\nTotaleDatiStampa = \"\"\nFor i = 0 To objDataTable.Rows.Count - 1\nIf Not IsDBNull(objDataTable.Rows(i).Item(\"Cam\")) Then TotaleDatiStampa = objDataTable.Rows(i).Item(\"Cam\").ToString & \"!\" Else TotaleDatiStampa = \"!\"\nIf Not IsDBNull(objDataTable.Rows(i).Item(\"Stato\")) Then TotaleDatiStampa = TotaleDatiStampa & objDataTable.Rows(i).Item(\"Stato\").ToString & \"!\" Else TotaleDatiStampa = TotaleDatiStampa & \"!\"\nIf Not IsDBNull(objDataTable.Rows(i).Item(\"Dal\")) Then TotaleDatiStampa = TotaleDatiStampa & CDate(objDataTable.Rows(i).Item(\"Dal\")).ToShortDateString & \"!\" Else TotaleDatiStampa = TotaleDatiStampa & \"!\"\nIf Not IsDBNull(objDataTable.Rows(i).Item(\"Al\")) Then TotaleDatiStampa = TotaleDatiStampa & CDate(objDataTable.Rows(i).Item(\"Al\")).ToShortDateString & \"!\" Else TotaleDatiStampa = TotaleDatiStampa & \"!\"\nIf Not IsDBNull(objDataTable.Rows(i).Item(\"MieNote\")) Then TotaleDatiStampa = TotaleDatiStampa & objDataTable.Rows(i).Item(\"MieNote\").ToString & \"!\" Else TotaleDatiStampa = TotaleDatiStampa & \"!\"\nIf i < objDataTable.Rows.Count - 1 Then i = i + 1\nNext\nEnd If``````\ndown to this:\nVB.NET:\n``````For Each row As DataRow In objDataTable.Rows\nTotaleDatiStampa = String.Format(\"{0}!{1}!{2}!{3}!{4}\", _\nrow(\"Cam\"), _\nrow(\"Stato\"), _\nrow(\"Dal\"), _\nrow(\"Al\"), _\nrow(\"MieNote\"))\nNext``````\nAny null values will automatically be converted to empty strings when the formatting is done. Much neater, yes?\n\nNow, with regards to the problem, you're only actually writing to the file once, at the end, after the loop. That means that you will only ever write the last record. If you want to write every record then you need to write inside the loop.\n\nIt's also worth noting that the StreamWriter itself supports formatting, so you don't even need that TotaleDatiStampa variable. In the above code, you just replace:\nVB.NET:\n``TotaleDatiStampa = String.Format``\nwith:\nVB.NET:\n``tw.WriteLine``\nand the formatted text will be written directly to the file.\n\n#### gwbasic\n\n##### New member\njmcilhinney, very good piece of code. You've drastically reduced the lines of code with only one.\n\nTnx for your interest. I've seen that the problem not was generated only by the loop but also by a wrong registration of the dates in the db, so, when the datatable try to read the rows it load only some rows and not all.\n\nI hope that is this the real problem; so, if i see that the other persist, i'll try a solution with you in the forum.\n\nThanks!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5996217,"math_prob":0.9207981,"size":2397,"snap":"2020-34-2020-40","text_gpt3_token_len":659,"char_repetition_ratio":0.22983703,"word_repetition_ratio":0.3976261,"special_character_ratio":0.24196912,"punctuation_ratio":0.1784897,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9579045,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-22T02:33:29Z\",\"WARC-Record-ID\":\"<urn:uuid:c05e7028-05cd-4e51-a014-df47eae02e81>\",\"Content-Length\":\"64909\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:154b149f-58bc-4c4f-8e08-d9e68135c18c>\",\"WARC-Concurrent-To\":\"<urn:uuid:89f93166-1b1b-4929-abeb-4dc11039f0cb>\",\"WARC-IP-Address\":\"209.9.229.33\",\"WARC-Target-URI\":\"https://vbdotnetforums.com/threads/exporting-from-access-to-text-file.41453/\",\"WARC-Payload-Digest\":\"sha1:S6G44MUWSWFWCYDYDAPCNOOLDVPFY4LP\",\"WARC-Block-Digest\":\"sha1:X7VJSMG4IFVVMRLGI6LJKGFKONPRRLJ6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400202686.56_warc_CC-MAIN-20200922000730-20200922030730-00345.warc.gz\"}"}
https://physics.stackexchange.com/questions/164776/are-the-jacobi-equation-and-the-geodesic-deviation-equation-related
[ "# Are the Jacobi equation and the geodesic deviation equation related?\n\nOn page 111 in his book Riemannian Geometry, Manfredo Do Carmo states what he calls the Jacobi equation\n\n\\begin{equation} \\frac{D^2J}{dt^2} + R(\\gamma'(t),J(t))\\gamma'(t) = 0 \\end{equation}\n\nwhere $\\gamma$ is a geodesic. Note, here we are dealing with Riemannian manifolds. Here's the Wikipedia page on the Jacobi equation\n\nOn page 34 in their book Graviation, Misner Thorne and Wheeler state what they call the equation of geodesic deviation:\n\n\\begin{equation} \\frac{D^2\\xi^\\alpha}{d\\tau^2}+R^{\\alpha}_{\\phantom{x}\\beta\\gamma\\delta}\\frac{dx^\\beta}{d\\tau}\\xi^{\\gamma}\\frac{dx^\\delta}{d\\tau} = 0 \\end{equation}\n\nMy Question:\n\nAre these equations related?\n\nBoth seem to involve measuring geodesic deviations somehow. Misner Thorne and Wheeler frequently refer to this idea called a separation vector $\\xi$. I got the impression this was somehow related to Jacobi fields, but I'd like someone to confirm this is true and explain how specifically Jacobi fields and the Jacobi equation are related to the geodesic deviation equation, if at all.\n\nRemark:\n\nSurprisingly, Misner Thorne and Wheeler don't mention Jacobi fields or the Jacobi equation once in their massive volume on gravity. But, as I am new to this field, I am not sure whether Jacobi fields work with pseudo-Riemannian manifolds. Obviously, general relativity emphasizes pseudo-Riemannian instead of Riemannian, so if Jacobi fields are more applicable to Riemannian that may explain why they don't appear int MTW's book.\n\nDo Carmo never refers to a \"separation vector\", at least no where I have seen and not in the index. I can't tell if separation vector is a commonly used term or just by MTW. Do Carmo obviously talks extensively about Jacobi fields, devoting a whole chapter to it.\n\n• You mind moving this to Phys SE? I feel like geodesic deviation belongs there, not here. I'll post my answer when you move it. Feb 12, 2015 at 20:02\n• That's fine. Probably makes more sense. I can flag it. Feb 12, 2015 at 20:06\n• Cool. Do you know anything about geodesic congruences? If not, I'll explain it in my answer. Feb 12, 2015 at 20:07\n• No, that would be great! Feb 12, 2015 at 20:12\n\nThe separation vector is a Jacobi field because it obeys the Jacobi equation.\n\nHere I will derive geodesic deviation from scratch because I find MTW's derivation hard to follow. (Much like everything else in that book.)\n\nDefinition 1. Consider a family of timelike geodesics, having the property that in a sufficiently small open region of the Lorentz manifold $(M,g)$ precisely one geodesic passes through every point. Such a collection is called a congruence. The tangent field to this set of curves, parameterized by proper time $s$, is denoted by $u$ and is normalized as $\\langle u,u\\rangle=-1$.\n\nLet $\\gamma(t)$ be some curve transversal to the congruence. This means that the tangent $\\dot\\gamma$ is never parallel to $u$ in the region under consideration. Imagine that every point on the curve $\\gamma(t)$ moves a distance $s$ along the geodesics which passes through that point. Let the resulting point be $H(s,t)$. This defines a map $H:O\\longrightarrow M$ for some open $O\\subset \\mathbb{R}^2$. For each $t$, the curve $s\\longmapsto H(s,t)$ is a timelike geodesic with tangent vectors $u(H(s,t))$. We say that $u$ defines a vector field $u\\circ H$ along which the map $H$ is tangential, i.e. of the form $$u\\circ H=H_*\\circ\\frac{\\partial}{\\partial s}\\quad\\text{or}\\quad u\\big(H(s,t)\\big)=T_{(s,t)}H\\cdot\\frac{\\partial}{\\partial s}$$ In other words, $u$ and $\\partial/\\partial s$ are $H$-related. We shall use for $u\\circ H$ the same letter $u$. Let $v$ be the tangential vector field along $H$ belonging to $\\partial/\\partial t$ $$v=H_*\\circ\\frac{\\partial }{\\partial t}$$ The vectors $v(s,t)$ are tangent to the curves $t\\longmapsto H(s,t)$ ($s$ fixed), and represent the separation of points which are moved with the same proper time along the neighboring geodesics of the congruence (beginning at arbitrary starting points). This is shown in the following diagram (taken from Straumann, General Relativity ):", null, "Since $\\partial/\\partial s$ and $\\partial/\\partial t$ commute, the tangential fields $u$ and $v$ also commute. To see this, recall that if the vectors of two Lie brackets are $H$-related, then the Lie brackets themselves are $H$-related.\n\nTo get the distance between curves, we create a projection operator $id+u\\otimes u$ onto the subspace of the tangent space orthogonal to $u$. Thus the relevant infinitesimal separation vector $n$ is $$n=v+\\langle v,u\\rangle u$$ Note that $\\langle n,u\\rangle=\\langle v,u\\rangle-\\langle v,u\\rangle=0$, so $n$ is indeed perpendicular to $u$. We now show that $n$ is also Lie transported. We have \\begin{align*} \\mathcal{L}_un&=[u,n]=[u,v]+[u,\\langle v,u\\rangle u]=\\big(u\\langle v,u\\rangle\\big)u\\\\ u\\langle v,u\\rangle&=\\frac{\\partial}{\\partial s}\\langle v,u\\rangle \\end{align*} The normalization condition $\\langle u,u\\rangle=-1$ implies $$0=\\frac{\\partial}{\\partial t}\\langle u,u\\rangle=2\\langle \\nabla_v u,u\\rangle$$ Furthermore, since $\\nabla_vu=\\nabla_uv$, it follows that $$\\frac{\\partial}{\\partial s}\\langle v,u\\rangle=\\langle \\nabla_uu,v\\rangle+\\langle u,\\nabla_uv\\rangle=\\langle u,\\nabla_vu\\rangle=0$$ This shows that indeed $$\\tag{1} \\mathcal{L}_un=0$$\n\nNext, we consider $$\\nabla^2_uv=\\nabla_u\\nabla_uv=\\nabla_u\\nabla_vu=[\\nabla_u,\\nabla_v]u$$ However, the Riemann tensor for three vectors $X,Y,Z$ is $R(X,Y)Z=[\\nabla_X,\\nabla_Y]Z-\\nabla_{[X,Y]}Z$. Due to $[u,v]=0$, we obtain $$\\nabla^2_uv=R(u,v)u$$ This is called the Jacobi equation for the field $v$.\n\nWe now show that $n$ also satisfies this equation. From (1) it follows that \\begin{equation} \\nabla_un=\\nabla_uv+\\big(u\\langle v,u\\rangle\\big)u+\\langle v,u\\rangle \\nabla_uu=\\nabla_uv \\end{equation} Furthermore, \\begin{equation} R(u,n)u=R(u,v)u+\\langle v,u\\rangle R(u,u)u=R(u,v)u \\end{equation} Putting this all together, we see that $n$ satisfies the Jacobi equation \\begin{equation} \\nabla^2_un=R(u,n)u \\end{equation} The Jacobi field $n$ is everywhere perpendicular to $u$. In physics we call this the equation for geodesic deviation. For a given $u$ the right hand side defines at each point $p\\in M$ a linear map $n\\longmapsto R(u,n)u$ of the subspace of $T_pM$ perpendicular to $u$.\n\nIn more familiar notation, we have $$\\frac{D^2 n^a}{D\\tau^2}=R^a_{\\;bcd}u^b n^c u^d$$ So why even bother calling this equation a Jacobi equation? Perhaps there is some general result of Jacobi fields that we wish to apply to GR? This is indeed the case.\n\nDefinition 2. Let $\\gamma$ be a geodesic. A pair of points $p$, $q\\in\\gamma$ are said to be conjugate if there exists a Jacobi field $n$ which is not identically zero but vanishes at both $p$ and $q$.\n\nThere exists a very powerful theorem regarding conjugate points (suitably generalized here to spacetime, but a similar version holds in positive-definite spaces):\n\nTheorem 1. Let $\\gamma$ be a smooth timelike curve connecting two points $p$, $q\\in M$. Then the necessary and sufficient condition that $\\gamma$ locally maximizes proper between $p$ and $q$ over smooth one parameter variations is that $\\gamma$ be a geodesic with no point conjugate to $p$ between $p$ and $q$.\n\nThis theorem is the basis for the singularity theorems of GR. Since MTW doesn't discuss these theorems (AFAIK, haven't read the whole thing), they don't need to mention Jacobi fields or conjugate points. Indirectly, this property of Jacobi fields is used in the justification for Big Bang cosmology.\n\nFor more in-depth discussions, I refer you to\n\n1. Wald, General Relativity (1984)\n\n2. Hawking & Ellis, The Large Scale Structure of Spacetime (1973)\n\n(in that order) and references therein.\n\n• That was an excellent introduction to the idea of a separation vector (and much more). I have read (or tried to lol) do Carmo's chapter on Jacobi fields, but nothing connected visually for me in a way I could see the relation to MTW's separation vector. This did that much better. By the way, I'm glad I'm not the only one who finds that book unreadable. I thought I might being over critical but now I think its justified. Its nice visually but poorly organized and too all over the place. Weinberg's book was a nice change and much more succinct and organized. Feb 13, 2015 at 5:32\n• Hey champ, I got another one for you: physics.stackexchange.com/q/159308/66165 didn't get an answer that I felt answered the question. Feb 14, 2015 at 7:47" ]
[ null, "https://i.stack.imgur.com/bSqO7.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7920337,"math_prob":0.9978754,"size":5527,"snap":"2023-40-2023-50","text_gpt3_token_len":1655,"char_repetition_ratio":0.14430563,"word_repetition_ratio":0.002621232,"special_character_ratio":0.26759544,"punctuation_ratio":0.108205594,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999503,"pos_list":[0,1,2],"im_url_duplicate_count":[null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-30T11:04:10Z\",\"WARC-Record-ID\":\"<urn:uuid:e1d02844-52f2-46b8-ac89-33e81766ee36>\",\"Content-Length\":\"175600\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:01730d47-8342-4c67-8c22-e923db0f986a>\",\"WARC-Concurrent-To\":\"<urn:uuid:78ac4ea1-b9c8-4692-b75b-371e01cd5fa2>\",\"WARC-IP-Address\":\"104.18.43.226\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/164776/are-the-jacobi-equation-and-the-geodesic-deviation-equation-related\",\"WARC-Payload-Digest\":\"sha1:OJUAO3KSXF3ULBQRQWPMPYWH4NRU55VU\",\"WARC-Block-Digest\":\"sha1:VKD6R2SGJ6RFWOFJSEXANO7SDICL7QYK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100184.3_warc_CC-MAIN-20231130094531-20231130124531-00265.warc.gz\"}"}
https://jamiekuppens.com/post/wtf-is-a-scene-graph/
[ "# WTF is a Scene Graph?\n\n• 7 min read\nWritten by\n\nNot too long ago I started to dabble in low-level graphics programming again. I set a goal of creating a basic scene that can be defined in a similar way to production-grade game engines like Unity and friends. I felt comfortable with basic OpenGL at the time, however the guides that I followed never told me how to create a hierarchical scene.\n\nIn addition to this many examples made assumptions about the 3D environment that cannot be made when using a scene graph. I will be glossing over implementation specific details related to the graphics pipeline in this article and focus on scene graph itself, you can view the full source on GitHub if you want to see how things fit together.\n\n## What is a Scene Graph?\n\nA scene graph is a tree data structure with nodes. Each node contains transformation matrices that define their position in 3D space; these transformation matrices are generated using local position, rotation, and scale values and incorporating parent transformations to create a world transformation. To keep things brief, this article does not go over OpenGL or theory of how coordinate systems work; you can get a more detailed explanation of these things at https://learnopengl.com/ if you’re not already familiar.\n\nThis visualization shows a very basic scenegraph where there is a single car with spinning wheels. Each node in the tree has a local position, rotation, and scale. Each node’s position is relative to its parent so if the car moves (via any transform), the wheels and driver stay in the same position relative to the car. We’re going to go over a basic scene graph implementation that allows us to represent this kind of hierarchy.\n\n## Traversing the Tree\n\nEach time a frame is being rendered, the `updateWorldTransform` method is called on the root node. This method will calculate the local and world transformation matrices if needed. First and foremost the local transform needs to be created using matrix translate, rotate, and scale functions from GLM. Once the local transform is acquired it can be multiplied against its parent’s world transform using the cross product, this being the actual location in world space where the node is located.\n\n``````void Node::updateWorldTransform() {\nif (dirty) {\nauto parent = this->parent.lock();\nthis->updateLocalTransform();\nif (parent.get() != nullptr) {\nworldTransform = parent.get()->worldTransform * localTransform;\n} else {\nworldTransform = glm::mat4() * localTransform;\n}\n\ndirty = false;\nfor (auto node : children) {\nnode.get()->markDirty();\nnode.get()->updateWorldTransform();\n}\n} else {\nfor (auto node : children) {\nnode.get()->updateWorldTransform();\n}\n}\n}\n\n// TODO: Use quaternions to save on matrix multiplications.\nvoid Node::updateLocalTransform() {\nif (dirty) {\nauto transform = glm::mat4();\ntransform = glm::translate(transform, position);\ntransform = glm::rotate(transform, rotation.y, glm::vec3(0.0, 1.0, 0.0));\ntransform = glm::rotate(transform, rotation.x, glm::vec3(1.0, 0.0, 0.0));\ntransform = glm::rotate(transform, rotation.z, glm::vec3(0.0, 0.0, 1.0));\ntransform = glm::scale(transform, scale);\nlocalTransform = transform;\n}\n}``````\n\nYou’ll see that children nodes are referenced and the `updateWorldTransform` method is called recursively on them. This is done because any changes to the current node’s world transform needs to change the child’s transform as children are positioned relative to their parents. If this code was omitted and we used the car example, the wheels would stay put if the car chassis moved forward.\n\nYou may also note that there is a check against `dirty`. This is an optional optimization that the walker uses so it knows whether a transformation needs to be computed or not. The class uses setters to automatically set the dirty flag when certain values are changed so this happens transparently. The tree is still traversed in-case any children are marked dirty as the traversal starts at the root. There are probably faster ways to do this when working with larger scene graphs.\n\n## Adding and Removing Nodes\n\nAdding nodes is as simple as passing a `shared_ptr` into a local `children` vector. Once added the dirty flag is set so the walker creates world transforms for the children as soon as possible.\n\nRemoving nodes is as simple as removing the parent’s shared pointer to the node and letting RAII take care of the rest. The children have a weak pointer to the parent to prevent cyclic references (and cross references between nodes should use `weak_ptr`).\n\n## Making the Camera Work\n\nGenerating the transforms for the camera mostly the same sans a few additions. A view and projection matrix are created alongside the local and world transforms. Generating the view matrix from the world transform is tricky when using `lookAt` to generate the matrix. The `lookAt` function requires an up vector and one needs to be calculated as the camera can have any arbitrary rotation and we can’t just make it point up (FPS style) like a lot of other guides do. Lots of this code is based off of work from A Camera Implementation in C.\n\n``````PerspectiveCamera::PerspectiveCamera(std::string name, glm::vec3 position,\nglm::vec3 rotation, glm::vec3 scale, float fov, float aspect,\nfloat near, float far) {\nthis->name = name;\nthis->position = position;\nthis->rotation = rotation;\nthis->scale = scale;\nthis->projectionMatrix = glm::perspective(fov, aspect, near, far);\n}\n\nvoid PerspectiveCamera::updateWorldTransform() {\nif (dirty) {\nauto parent = this->parent.lock();\nthis->updateLocalTransform();\nglm::mat4 parentWorldTransform;\nif (parent.get() != nullptr) {\nparentWorldTransform = parent.get()->worldTransform;\nworldTransform = parent.get()->worldTransform * localTransform;\n} else {\nworldTransform = glm::mat4() * localTransform;\n}\n\n// Get a normalized rotation vector (the forward vector) that points\n// in the direction the camera is facing that's 1 unit in length.\n// Also generate the up and right vectors from the forward vector.\nconst auto decomposed = this->getDecomposedTransform();\nconst auto eulerRotation = glm::eulerAngles(decomposed.rotation);\nforward = -glm::normalize(glm::vec3(\n-sin(eulerRotation.y),\nsin(eulerRotation.x) * cos(eulerRotation.y),\ncos(eulerRotation.x) * cos(eulerRotation.y)\n));\nup = glm::cross(\nglm::cross(forward, glm::vec3(0.0f, 1.0f, 0.0f)),\nforward\n);\nright = glm::cross(forward, up);\n\n// Create new right and up vectors based on the roll.\nright = glm::normalize(\nright * cosf(rotation.z * M_PI) +\nup * sinf(rotation.z * M_PI)\n);\nup = glm::cross(forward, right);\nup.y = -up.y;\n\n// Adding the rotation results in a point 1 unit in front of the\n// camera which is then passed to the lookAt function to create a\n// view matrix. The projection matrix is pre-baked for performance.\ntarget = decomposed.translation + forward;\nviewMatrix = glm::lookAt(decomposed.translation, target, up);\nviewProjectionMatrix = projectionMatrix * viewMatrix;\n\ndirty = false;\nfor (auto node : children) {\nnode.get()->markDirty();\nnode.get()->updateWorldTransform();\n}\n} else {\nfor (auto node : children) {\nnode.get()->updateWorldTransform();\n}\n}\n}``````\n\nForward, up, and right vectors are initially created using world transform data; however since the up and right vectors are created using the forward vector, the roll is not accounted for as a 3 dimensional vector has no concept of roll. To get around this a new right vector is created by rotating it using sin and cos, then the up vector can be recalculated by crossing the right and forward vectors since we know all the vectors should be perpendicular.\n\nA target vector is also used to generate the view matrix which is a point 1 unit in front of the camera. This position can be calculated by adding the forward vector to the world position (all values in forward sum up to 1). With that we have the information required to generate the view matrix with `lookAt`. The projection matrix is created in the constructor and the view-projection matrix is generated by multiplying the view and projection matrices. The view-projection matrix is cached so it doesn’t need to be recalculated every frame.\n\nIn order to make objects appear from the camera’s point of view, their world transform needs to be multiplied with the camera’s view-projection matrix before drawing. This can be cached as well but is not done in this example to keep things simple.\n\n``````scenegraph.get()->updateWorldTransform();\n\nglEnable(GL_DEPTH_TEST);\nglClearColor(0.3, 0.6, 0.8, 1.0);\nglClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);\n\n// Draw the parent cube.\nauto modelViewProjectionMatrix = camera.get()->viewProjectionMatrix * parentThing.get()->worldTransform;\ntextureTest.bind();\nglBindVertexArray(vao);\nglDrawArrays(GL_TRIANGLES, 0, 36);\n// Draw the child cube.\nmodelViewProjectionMatrix = camera.get()->viewProjectionMatrix * childThing.get()->worldTransform;" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8437036,"math_prob":0.9550514,"size":9949,"snap":"2022-40-2023-06","text_gpt3_token_len":2200,"char_repetition_ratio":0.15072902,"word_repetition_ratio":0.068797775,"special_character_ratio":0.2254498,"punctuation_ratio":0.17269292,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9732211,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-26T12:05:22Z\",\"WARC-Record-ID\":\"<urn:uuid:74b9583a-6193-495e-8468-bc3f8834bb89>\",\"Content-Length\":\"51492\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c835e9eb-cc53-4575-a6f4-58cd682990f7>\",\"WARC-Concurrent-To\":\"<urn:uuid:06b21223-797c-4d2d-90b4-68ffa5a507b1>\",\"WARC-IP-Address\":\"45.55.210.5\",\"WARC-Target-URI\":\"https://jamiekuppens.com/post/wtf-is-a-scene-graph/\",\"WARC-Payload-Digest\":\"sha1:K453RPPP445QX6ICFEWXIE7MK4WQPNHE\",\"WARC-Block-Digest\":\"sha1:YZG7N7HBVP5YFIBC2QE5WAXESAVXUSXX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334871.54_warc_CC-MAIN-20220926113251-20220926143251-00348.warc.gz\"}"}
https://en.m.wikiversity.org/wiki/Monetary_Circuit_Model/Profits
[ "# Monetary Circuit Model/Profits\n\nIn this lesson, we will derive the income of capitalists - profits - in our simple economy. We do this by assuming a constant wage share of surplus. In more advanced models, this assumption is relaxed and wage share can vary cyclically.\n\n(My knowledge of this topic is somewhat shaky. Steve Keen's papers omit a lot of explanation of this area, so my interpretation may not be 100% correct. Corrections are welcome.)\n\nSurplus here is in the Sraffian sense of revenues minus direct costs (not including labour) for each individual firm.\n\nThe \"wage rate\", fwr, paid by firms depends on two factors. First is the wage share received by workers, 1 - σ, where σ (sigma) is the share of surplus going to capitalists. Second is the turnover period, τ (tau), the length of time it takes for the firm to recoup its initial investment in production. The initial investment is the stock of firm deposits, fd. If we take the inverse of the turnover period, 1/τ, we have the number of times the firm recoups on its initial investment in a year. Multiplying this by the initial investment, fd, gives the firm revenue over the course of a year and multiplying this again by the wage share, 1- σ, gives the wage bill. Since the wage bill is also given by fwr*fd, we can derive τ from fwr and an assumed wage share:\n\n$fwr={\\frac {1-\\sigma }{\\tau }}$", null, "Given our fwr parameter of 2 and an assumed wage share (1 - σ) of 60%, we can calculate τ to be 0.3. Since all firm revenue that does not go to wages is distributed as profit (Π), we can now determine it using τ (turnover period) and σ (profit share):\n\n$\\Pi ={\\frac {\\sigma }{\\tau }}F_{d}$", null, "The result (plus interest earned on deposits and banker income) is this:\n\nAll classes are able to earn a continuous positive income despite a fixed initial stock of money. This is because that initial stock is recycled over and over in the economy - one person's expenditure is another's income.\n\nIn the next lesson, we will look at what happens when we apply a shock to the model in the form of a credit crunch." ]
[ null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/c32e47372570bf7016798d1c826c493b68453e13", null, "https://wikimedia.org/api/rest_v1/media/math/render/svg/a686036116d4faad408b3da7fe852b491a449bbd", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9324474,"math_prob":0.9627831,"size":1989,"snap":"2023-40-2023-50","text_gpt3_token_len":457,"char_repetition_ratio":0.11788413,"word_repetition_ratio":0.0,"special_character_ratio":0.22574158,"punctuation_ratio":0.10173697,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99570316,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-11T12:09:13Z\",\"WARC-Record-ID\":\"<urn:uuid:529eead9-808a-4e3d-a8d4-38ab53483969>\",\"Content-Length\":\"26232\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:81a92675-aaaf-4119-a071-a21b36a5f346>\",\"WARC-Concurrent-To\":\"<urn:uuid:a440a1b7-6a73-4f65-90c3-0806383cbc7c>\",\"WARC-IP-Address\":\"208.80.154.224\",\"WARC-Target-URI\":\"https://en.m.wikiversity.org/wiki/Monetary_Circuit_Model/Profits\",\"WARC-Payload-Digest\":\"sha1:S2BARVYFBKPOP4ZJNFFGTB7KV5WW7B44\",\"WARC-Block-Digest\":\"sha1:5JCH46D7FGBU2NS4OMZDNHFKS7LRBNLA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679511159.96_warc_CC-MAIN-20231211112008-20231211142008-00099.warc.gz\"}"}
https://numberworld.info/8836
[ "# Number 8836\n\n### Properties of number 8836\n\nCross Sum:\nFactorization:\n2 * 2 * 47 * 47\nDivisors:\n1, 2, 4, 47, 94, 188, 2209, 4418, 8836\nCount of divisors:\nSum of divisors:\n15799\nPrime number?\nNo\nFibonacci number?\nNo\nBell Number?\nNo\nCatalan Number?\nNo\nBase 2 (Binary):\nBase 3 (Ternary):\nBase 4 (Quaternary):\nBase 5 (Quintal):\nBase 8 (Octal):\nBase 16 (Hexadecimal):\nBase 32:\n8k4\nsin(8836)\n0.96359416815138\ncos(8836)\n-0.2673691812918\ntan(8836)\n-3.6039836883809\nln(8836)\n9.08658956454\nlg(8836)\n3.9462557071994\nsqrt(8836)\n94\nSquare(8836)\n\n### Number Look Up\n\nLook Up\n\n8836 (eight thousand eight hundred thirty-six) is a very special figure. The cross sum of 8836 is 25. If you factorisate the figure 8836 you will get these result 2 * 2 * 47 * 47. The number 8836 has 9 divisors ( 1, 2, 4, 47, 94, 188, 2209, 4418, 8836 ) whith a sum of 15799. 8836 is not a prime number. The figure 8836 is not a fibonacci number. The number 8836 is not a Bell Number. The figure 8836 is not a Catalan Number. The convertion of 8836 to base 2 (Binary) is 10001010000100. The convertion of 8836 to base 3 (Ternary) is 110010021. The convertion of 8836 to base 4 (Quaternary) is 2022010. The convertion of 8836 to base 5 (Quintal) is 240321. The convertion of 8836 to base 8 (Octal) is 21204. The convertion of 8836 to base 16 (Hexadecimal) is 2284. The convertion of 8836 to base 32 is 8k4. The sine of the number 8836 is 0.96359416815138. The cosine of the figure 8836 is -0.2673691812918. The tangent of 8836 is -3.6039836883809. The square root of 8836 is 94.\nIf you square 8836 you will get the following result 78074896. The natural logarithm of 8836 is 9.08658956454 and the decimal logarithm is 3.9462557071994. You should now know that 8836 is very impressive number!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.77435476,"math_prob":0.9769577,"size":1908,"snap":"2019-35-2019-39","text_gpt3_token_len":669,"char_repetition_ratio":0.16911764,"word_repetition_ratio":0.24299066,"special_character_ratio":0.4528302,"punctuation_ratio":0.1482412,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99825275,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-18T07:01:05Z\",\"WARC-Record-ID\":\"<urn:uuid:82f3878a-f3b0-416c-9516-b4f26cc484d8>\",\"Content-Length\":\"13333\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2c13dd22-8bb6-448e-933f-7fc5ebf0ab22>\",\"WARC-Concurrent-To\":\"<urn:uuid:22f595dc-7212-4b0c-8f82-107dcf356fbd>\",\"WARC-IP-Address\":\"176.9.140.13\",\"WARC-Target-URI\":\"https://numberworld.info/8836\",\"WARC-Payload-Digest\":\"sha1:W2LRSKY42H2VZ2TGB7FQXXVGMSMFSEDN\",\"WARC-Block-Digest\":\"sha1:3PDXT6BJYWOIXIWUVQEC22VTYORC5NCC\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027313715.51_warc_CC-MAIN-20190818062817-20190818084817-00447.warc.gz\"}"}
https://algosim.org/doc/PrimePi.html
[ " PrimePi – Algosim documentation\nAlgosim documentation: PrimePi\n\n# PrimePi\n\nThe prime-counting function.\n\n## Syntax\n\n• `PrimePi(n)`\n\n• `n` is an integer\n\n## Description\n\nIf `n` is an integer, then `PrimePi(n)` is the number of prime number less than or equal to `n`, that is,\n\n``PrimePi(n) = count(SequenceVector(n), IsPrime)`.`\n\n## Examples\n\n`PrimePi(1000000)`\n`78498`" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.53329116,"math_prob":0.9747366,"size":267,"snap":"2023-14-2023-23","text_gpt3_token_len":79,"char_repetition_ratio":0.16730037,"word_repetition_ratio":0.0,"special_character_ratio":0.28089887,"punctuation_ratio":0.12,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99461865,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-30T17:10:55Z\",\"WARC-Record-ID\":\"<urn:uuid:3d0dad17-8d60-4ab9-a1c3-6daadee9dc93>\",\"Content-Length\":\"2317\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0afdc879-34fd-41b0-8dd8-71c232ed201b>\",\"WARC-Concurrent-To\":\"<urn:uuid:df0e9c64-4b6f-4b9d-819f-308562c51a2d>\",\"WARC-IP-Address\":\"93.188.2.54\",\"WARC-Target-URI\":\"https://algosim.org/doc/PrimePi.html\",\"WARC-Payload-Digest\":\"sha1:WHROFLCCWMF3WZCS5QCWL7XHLPV4BYWN\",\"WARC-Block-Digest\":\"sha1:SESSOYAC2X7TFUB7WJ3GGZYD5J4EKKDC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224646076.50_warc_CC-MAIN-20230530163210-20230530193210-00669.warc.gz\"}"}
https://discuss.streamlit.io/t/plot-a-dataframe-with-values-sorted/1243
[ "", null, "# Plot a dataframe with values sorted\n\n``````tmp = df.groupby(\"someCol\")[\"someOtherCol\"].mean().sort_values()\nst.dataframe(tmp) # (1)\nst.bar_chart(tmp) # (2)\n``````\n\nThe dataframe (well technically a `Series`) in (1) is rendered correctly and the values are ordered. But, the plot is ordered by the lexicographic order of the index (i.e. the values in `someCol`). How can I control this?\n\nHi @drorata, welcome to the forum! We implement `bar_chart` with Altair with the following spec:\n\n`````` chart = (\ngetattr(alt.Chart(data), \"mark_\" + chart_type)()\n.encode(\nalt.X(\"index\", title=\"\"),\nalt.Y(\"value\", title=\"\"),\nalt.Color(\"variable\", title=\"\", type=\"nominal\"),\nalt.Tooltip([\"index\", \"value\", \"variable\"]),\nopacity=opacity,\n)\n.interactive()\n``````\n\nwhere `chart_type` is `\"bar\"`\nYou can find this in https://github.com/streamlit/streamlit/blob/develop/lib/streamlit/elements/altair.py#L52\n\nYou can use altair directly with\n\n``````st.altair_chart(chart)\n``````\n\nand you can customize `chart` as you want by following the Altair spec. I don’t have your dataset, but this SortedBarChart example may be what you are looking for: https://altair-viz.github.io/gallery/bar_chart_sorted.html\n\nHope this helps.\n\nBest,\n\nMatteo\n\n1 Like" ]
[ null, "https://aws1.discourse-cdn.com/business7/uploads/streamlit/original/2X/7/7cbf2ca198cd15eaaeb2e177a37b2c1c8c9a6e33.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6328198,"math_prob":0.7160659,"size":785,"snap":"2020-34-2020-40","text_gpt3_token_len":202,"char_repetition_ratio":0.108834825,"word_repetition_ratio":0.0,"special_character_ratio":0.25859872,"punctuation_ratio":0.22727273,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95933366,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-07T23:35:30Z\",\"WARC-Record-ID\":\"<urn:uuid:23398d38-c7f8-4e13-81bd-7fb31f7a5615>\",\"Content-Length\":\"15979\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:11468be7-7fa5-47e5-908a-c83d5fb279d3>\",\"WARC-Concurrent-To\":\"<urn:uuid:23956832-ddbb-47fc-bea2-b4c872735eeb>\",\"WARC-IP-Address\":\"72.52.80.21\",\"WARC-Target-URI\":\"https://discuss.streamlit.io/t/plot-a-dataframe-with-values-sorted/1243\",\"WARC-Payload-Digest\":\"sha1:PIXLR7ELFTJ76UB7V5QNX3KUBUN7I2HQ\",\"WARC-Block-Digest\":\"sha1:HI5PADZHYRDKFJ5U2ED5EN77OXMQUHJJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439737233.51_warc_CC-MAIN-20200807231820-20200808021820-00115.warc.gz\"}"}
https://www.esaral.com/q/construct-a-quadrilateral-abcd-in-which-ab-7-7-cm-34631
[ "", null, "# Construct a quadrilateral ABCD in which AB = 7.7 cm,\n\nQuestion:\n\nConstruct a quadrilateral ABCD in which AB = 7.7 cm, BC = 6.8 cm, CD = 5.1 cm, AD = 3.6 cm and ∠C = 120°.\n\nSolution:\n\nSteps of construction:\n\nStep I: Draw DC $=5.1 \\mathrm{~cm} .$\n\nStep II : Construct $\\angle \\mathrm{DCB}=120^{\\circ}$.\n\nStep III : With C as the centre and radius $6.8 \\mathrm{~cm}$, cut off BC $=6.8 \\mathrm{~cm} .$\n\nStep IV : With B as the centre and radius $7.7 \\mathrm{~cm}$, draw an arc.\n\nStep V : With D as the centre and radius $3.6 \\mathrm{~cm}$, draw an arc to intersect the arc drawn in Step IV at A.\n\nStep VI : Join $A B$ and $A D$ to obtained the required quadrilateral.", null, "" ]
[ null, "https://www.facebook.com/tr", null, "https://img-nm.mnimgs.com/img/study_content/content_ck_images/images/3%2Cq(65).png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.73628014,"math_prob":0.998601,"size":600,"snap":"2023-14-2023-23","text_gpt3_token_len":209,"char_repetition_ratio":0.16275167,"word_repetition_ratio":0.026785715,"special_character_ratio":0.37166667,"punctuation_ratio":0.20394737,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999216,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-09T17:38:37Z\",\"WARC-Record-ID\":\"<urn:uuid:18321ec4-2836-484d-91df-2ec4867d07ee>\",\"Content-Length\":\"25695\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:710a5e6d-8157-4444-9197-1a18f46ef653>\",\"WARC-Concurrent-To\":\"<urn:uuid:ede57110-1259-41de-aa87-ed42560feca4>\",\"WARC-IP-Address\":\"104.21.61.187\",\"WARC-Target-URI\":\"https://www.esaral.com/q/construct-a-quadrilateral-abcd-in-which-ab-7-7-cm-34631\",\"WARC-Payload-Digest\":\"sha1:CRPUXJXMFT6VMWRCD2YFMLVBDWFYPJ2P\",\"WARC-Block-Digest\":\"sha1:OR5MSIXN6F5MQVIROX22YQ6QNQORBDX3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224656788.77_warc_CC-MAIN-20230609164851-20230609194851-00764.warc.gz\"}"}
https://www.colorhexa.com/3f3d3e
[ "# #3f3d3e Color Information\n\nIn a RGB color space, hex #3f3d3e is composed of 24.7% red, 23.9% green and 24.3% blue. Whereas in a CMYK color space, it is composed of 0% cyan, 3.2% magenta, 1.6% yellow and 75.3% black. It has a hue angle of 330 degrees, a saturation of 1.6% and a lightness of 24.3%. #3f3d3e color hex could be obtained by blending #7e7a7c with #000000. Closest websafe color is: #333333.\n\n• R 25\n• G 24\n• B 24\nRGB color chart\n• C 0\n• M 3\n• Y 2\n• K 75\nCMYK color chart\n\n#3f3d3e color description : Very dark grayish pink.\n\n# #3f3d3e Color Conversion\n\nThe hexadecimal color #3f3d3e has RGB values of R:63, G:61, B:62 and CMYK values of C:0, M:0.03, Y:0.02, K:0.75. Its decimal value is 4144446.\n\nHex triplet RGB Decimal 3f3d3e `#3f3d3e` 63, 61, 62 `rgb(63,61,62)` 24.7, 23.9, 24.3 `rgb(24.7%,23.9%,24.3%)` 0, 3, 2, 75 330°, 1.6, 24.3 `hsl(330,1.6%,24.3%)` 330°, 3.2, 24.7 333333 `#333333`\nCIE-LAB 25.987, 1.075, -0.314 4.588, 4.742, 5.231 0.315, 0.326, 4.742 25.987, 1.12, 343.721 25.987, 0.987, -0.49 21.776, -0.501, 1.002 00111111, 00111101, 00111110\n\n# Color Schemes with #3f3d3e\n\n• #3f3d3e\n``#3f3d3e` `rgb(63,61,62)``\n• #3d3f3e\n``#3d3f3e` `rgb(61,63,62)``\nComplementary Color\n• #3f3d3f\n``#3f3d3f` `rgb(63,61,63)``\n• #3f3d3e\n``#3f3d3e` `rgb(63,61,62)``\n• #3f3d3d\n``#3f3d3d` `rgb(63,61,61)``\nAnalogous Color\n• #3d3f3f\n``#3d3f3f` `rgb(61,63,63)``\n• #3f3d3e\n``#3f3d3e` `rgb(63,61,62)``\n• #3d3f3d\n``#3d3f3d` `rgb(61,63,61)``\nSplit Complementary Color\n• #3d3e3f\n``#3d3e3f` `rgb(61,62,63)``\n• #3f3d3e\n``#3f3d3e` `rgb(63,61,62)``\n• #3e3f3d\n``#3e3f3d` `rgb(62,63,61)``\n• #3e3d3f\n``#3e3d3f` `rgb(62,61,63)``\n• #3f3d3e\n``#3f3d3e` `rgb(63,61,62)``\n• #3e3f3d\n``#3e3f3d` `rgb(62,63,61)``\n• #3d3f3e\n``#3d3f3e` `rgb(61,63,62)``\n• #181718\n``#181718` `rgb(24,23,24)``\n• #252425\n``#252425` `rgb(37,36,37)``\n• #323031\n``#323031` `rgb(50,48,49)``\n• #3f3d3e\n``#3f3d3e` `rgb(63,61,62)``\n• #4c4a4b\n``#4c4a4b` `rgb(76,74,75)``\n• #595658\n``#595658` `rgb(89,86,88)``\n• #666364\n``#666364` `rgb(102,99,100)``\nMonochromatic Color\n\n# Alternatives to #3f3d3e\n\nBelow, you can see some colors close to #3f3d3e. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #3f3d3f\n``#3f3d3f` `rgb(63,61,63)``\n• #3f3d3e\n``#3f3d3e` `rgb(63,61,62)``\n• #3f3d3e\n``#3f3d3e` `rgb(63,61,62)``\n• #3f3d3e\n``#3f3d3e` `rgb(63,61,62)``\n• #3f3d3e\n``#3f3d3e` `rgb(63,61,62)``\n• #3f3d3e\n``#3f3d3e` `rgb(63,61,62)``\n• #3f3d3e\n``#3f3d3e` `rgb(63,61,62)``\nSimilar Colors\n\n# #3f3d3e Preview\n\nThis text has a font color of #3f3d3e.\n\n``<span style=\"color:#3f3d3e;\">Text here</span>``\n#3f3d3e background color\n\nThis paragraph has a background color of #3f3d3e.\n\n``<p style=\"background-color:#3f3d3e;\">Content here</p>``\n#3f3d3e border color\n\nThis element has a border color of #3f3d3e.\n\n``<div style=\"border:1px solid #3f3d3e;\">Content here</div>``\nCSS codes\n``.text {color:#3f3d3e;}``\n``.background {background-color:#3f3d3e;}``\n``.border {border:1px solid #3f3d3e;}``\n\n# Shades and Tints of #3f3d3e\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. In this example, #030303 is the darkest color, while #f8f8f8 is the lightest one.\n\n• #030303\n``#030303` `rgb(3,3,3)``\n• #0d0d0d\n``#0d0d0d` `rgb(13,13,13)``\n• #171617\n``#171617` `rgb(23,22,23)``\n• #212021\n``#212021` `rgb(33,32,33)``\n• #2b2a2a\n``#2b2a2a` `rgb(43,42,42)``\n• #353334\n``#353334` `rgb(53,51,52)``\n• #3f3d3e\n``#3f3d3e` `rgb(63,61,62)``\n• #494748\n``#494748` `rgb(73,71,72)``\n• #535052\n``#535052` `rgb(83,80,82)``\n• #5d5a5b\n``#5d5a5b` `rgb(93,90,91)``\n• #676465\n``#676465` `rgb(103,100,101)``\n• #716d6f\n``#716d6f` `rgb(113,109,111)``\n• #7b7779\n``#7b7779` `rgb(123,119,121)``\n• #858183\n``#858183` `rgb(133,129,131)``\n• #8e8b8c\n``#8e8b8c` `rgb(142,139,140)``\n• #989596\n``#989596` `rgb(152,149,150)``\n• #a29fa0\n``#a29fa0` `rgb(162,159,160)``\n• #aba9aa\n``#aba9aa` `rgb(171,169,170)``\n• #b5b2b4\n``#b5b2b4` `rgb(181,178,180)``\n• #bfbcbe\n``#bfbcbe` `rgb(191,188,190)``\n• #c8c6c7\n``#c8c6c7` `rgb(200,198,199)``\n• #d2d0d1\n``#d2d0d1` `rgb(210,208,209)``\n``#dcdadb` `rgb(220,218,219)``\n• #e5e4e5\n``#e5e4e5` `rgb(229,228,229)``\n• #efeeef\n``#efeeef` `rgb(239,238,239)``\n• #f8f8f8\n``#f8f8f8` `rgb(248,248,248)``\nTint Color Variation\n\n# Tones of #3f3d3e\n\nA tone is produced by adding gray to any pure hue. In this case, #3f3d3e is the less saturated color, while #78043e is the most saturated one.\n\n• #3f3d3e\n``#3f3d3e` `rgb(63,61,62)``\n• #44383e\n``#44383e` `rgb(68,56,62)``\n• #49333e\n``#49333e` `rgb(73,51,62)``\n• #4d2f3e\n``#4d2f3e` `rgb(77,47,62)``\n• #522a3e\n``#522a3e` `rgb(82,42,62)``\n• #57253e\n``#57253e` `rgb(87,37,62)``\n• #5c203e\n``#5c203e` `rgb(92,32,62)``\n• #601c3e\n``#601c3e` `rgb(96,28,62)``\n• #65173e\n``#65173e` `rgb(101,23,62)``\n• #6a123e\n``#6a123e` `rgb(106,18,62)``\n• #6f0d3e\n``#6f0d3e` `rgb(111,13,62)``\n• #73093e\n``#73093e` `rgb(115,9,62)``\n• #78043e\n``#78043e` `rgb(120,4,62)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #3f3d3e is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.50271535,"math_prob":0.8848073,"size":3667,"snap":"2020-34-2020-40","text_gpt3_token_len":1737,"char_repetition_ratio":0.13895714,"word_repetition_ratio":0.007380074,"special_character_ratio":0.5486774,"punctuation_ratio":0.23522854,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98134357,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-29T05:42:09Z\",\"WARC-Record-ID\":\"<urn:uuid:9c5fdd2e-d68b-4b56-bc35-58d623dd9a95>\",\"Content-Length\":\"36215\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:85032d9d-4826-461e-bcee-d43d55bafd92>\",\"WARC-Concurrent-To\":\"<urn:uuid:0342098f-6b01-48fc-85d6-1c0d6eb23686>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/3f3d3e\",\"WARC-Payload-Digest\":\"sha1:2CLNYG4EYV5ILGQR22M6CZR62N7JRUTV\",\"WARC-Block-Digest\":\"sha1:BIHOY4VJK625AYLYBHY4E5NTP4OMO424\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401624636.80_warc_CC-MAIN-20200929025239-20200929055239-00400.warc.gz\"}"}
https://answerologyreloaded.com/266530/drain-gallons-water-minutes-many-hours-drain-gallons-water
[ "A tub can drain 24 gallons of water in 5 minutes. How many hours will it take to drain 139 gallons of water?\n\n31 views", null, "by anonymous\n\n+1 vote\n\n4.6 hrs or  4 hrs 18 minutes\n\nby (593,420 points)\n\n42\n\nby (2,156,350 points)\n+1\n\nYou're on to something.\n\n+1 vote\n\nHave you pulled the plug yet?  In your scenario - 30 minutes and changed\n\nby (2,935,370 points)\n+1 vote\n\nOMG.  This is not difficult.\n\nThink of it in terms of 24 gallon units.  How many 24 gallon units are in 139 gallons?  Divide 139 by 24.  There are in round numbers, about 5.8 twenty-four gallon units.\n\nEach 24 gallon units takes 5 minutes to drain.  So multiply 5.8 units by 5 minutes per unit and you about 29 minutes.  Rounding it to 30 minutes and you have half an hour.  For exact numbers, your calculator should work as good as mine.\n\nby (1,346,070 points)\n\nI got 4.8 hrs like lady4u.\n\nFor every 5 mins the tub drains 24 gals of water. So in 10 mins it will drain 48 gals of water.  Multiply that 10 by 6, to make it an hour, and multiply the 48 by 6 as well, you get 288 gals. Divide that by 60, and its 4.8.\n\nWell I was wrong. That was for 288 gals.   It would take about 28 mins.  I made a table.  !!\n\nby (732,260 points)\n+1 vote\n\nDivide 24 by 5 to get the amount of gallons per minute = the tub drains at 4.8  gallons per minute\n\nDivide 139 by 4.8 to get the amount of minutes to drain the tub = 28.95833333333333\n\nTherefore it will take less than an hour to drain.\n\nby (180 points)" ]
[ null, "https://answerologyreloaded.com/", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9825239,"math_prob":0.92770594,"size":338,"snap":"2019-26-2019-30","text_gpt3_token_len":114,"char_repetition_ratio":0.11676647,"word_repetition_ratio":0.0,"special_character_ratio":0.35798818,"punctuation_ratio":0.17894737,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95470196,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-18T01:25:35Z\",\"WARC-Record-ID\":\"<urn:uuid:53eb09ff-81e2-4807-983b-888d3f36d626>\",\"Content-Length\":\"39366\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3aef019b-e9f3-4873-8f0a-251bba512b5d>\",\"WARC-Concurrent-To\":\"<urn:uuid:4f687563-e1a0-4c03-b3ad-f80a4e8734ed>\",\"WARC-IP-Address\":\"185.2.4.36\",\"WARC-Target-URI\":\"https://answerologyreloaded.com/266530/drain-gallons-water-minutes-many-hours-drain-gallons-water\",\"WARC-Payload-Digest\":\"sha1:2W4LJ5CGOBQZPIM2UKYGMQXO3EBUR7GU\",\"WARC-Block-Digest\":\"sha1:YQJZGPA5FSD27IDFE2Z4WCEIW6TIMMGQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998600.48_warc_CC-MAIN-20190618003227-20190618025227-00352.warc.gz\"}"}
https://sophuc.com/excel-arabic-function/
[ "# How to use the Excel ARABIC function\n\nARABIC function is a mathematical conversion operator used to convert the roman numerals as text to ARABIC numerals. ARABIC numerals to ROMAN numerals are shown below.\n\nSyntax: =ARABIC( text )\n\nThe ARABIC function syntax has the following arguments:\n\n• Text (Required): A string enclosed in quotation marks, an empty string (“”), or a reference to a cell containing text.\n\nExample: Let’s look at some Excel ARABIC function examples and explore how to use the ARABIC function as a worksheet function in Microsoft Excel:", null, "Syntax:  =ARABIC(C2)\n\nResult:", null, "Based on the Excel spreadsheet above, the following ARABIC examples would return:\n\nSyntax: =ARABIC(C3)\nResult: 2\n\nSyntax: =ARABIC(C4)\nResult: 3\n\nSyntax: =ARABIC(C5)\nResult: 6\n\nSyntax: =ARABIC(C6)\nResult: 10\n\nSyntax: =ARABIC(C7)\nResult: 1050\n\nSyntax: =ARABIC(C8)\nResult: 1990\n\nREAD:  How to use the Excel MMULT function" ]
[ null, "https://sophuc.com/wp-content/uploads/2020/04/Excel-ARABIC-function.png", null, "https://sophuc.com/wp-content/uploads/2020/04/Excel-ARABIC-function-1.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.52341276,"math_prob":0.8094547,"size":811,"snap":"2020-45-2020-50","text_gpt3_token_len":228,"char_repetition_ratio":0.23048328,"word_repetition_ratio":0.0,"special_character_ratio":0.24537608,"punctuation_ratio":0.16993465,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9856627,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-24T07:11:50Z\",\"WARC-Record-ID\":\"<urn:uuid:699d49aa-7f2f-465b-bb71-d416e42d3b32>\",\"Content-Length\":\"45340\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:80ea66ba-3ef5-4a11-b572-461d0364bfbc>\",\"WARC-Concurrent-To\":\"<urn:uuid:b2360223-c600-4594-889f-3dd3aecdf405>\",\"WARC-IP-Address\":\"172.96.191.202\",\"WARC-Target-URI\":\"https://sophuc.com/excel-arabic-function/\",\"WARC-Payload-Digest\":\"sha1:S5ZWQUDWBCJLYX4V5ZIDZICGZP773IDF\",\"WARC-Block-Digest\":\"sha1:HJ5VV7DOMJFSNFELUIQPWMR5TCL3W7MK\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141171126.6_warc_CC-MAIN-20201124053841-20201124083841-00230.warc.gz\"}"}
https://fdocument.org/document/phy-321-classical-mechanics-i-homework-solutions-2-chapter-2-solutions-1-consider.html
[ "• date post\n\n02-Nov-2019\n• Category\n\n## Documents\n\n• view\n\n56\n\n1\n\nEmbed Size (px)\n\n### Transcript of PHY 321, Classical Mechanics I, Homework Solutions · PDF file2 Chapter 2 Solutions 1....\n\n• Homework Solutions PHY 321 - Classical Mechanics I\n\nwww.pa.msu.edu/courses/phy321\n\nInstructor: Scott Pratt, [email protected]\n\n1 Chapter 1 Solutions\n\n1. All physicists must become comfortable with thinking of oscillatory and wave mechanics in terms of expressions that include the form eiωt.\n\n(a) a)\n\ncosωt = 1− 1\n\n2 (ωt)2 +\n\n1\n\n4! (ωt)4 +\n\n1\n\n6! (ωt)6 · · ·\n\nsinωt = (ωt)− 1\n\n3! (ωt)3 +\n\n1\n\n5! (ωt)5 · · ·\n\n(b) b)\n\neiωt = 1 + iωt + 1\n\n2 (iωt)2 +\n\n1\n\n3 (iωt)3 +\n\n1\n\n4! (iωt)4 · · ·\n\n= 1 + i2 1\n\n2 (ωt)2 + i4\n\n1\n\n4! (ωt)4 + i6\n\n1\n\n6! (ωt)6 · · ·+ iωt + i3\n\n1\n\n3! (ωt)3 + i5\n\n1\n\n5! (ωt)5 · · ·\n\n(c) c) Use fact that i2 = −1 and i4 = 1 then add expressions above for even and odd terms of expansions to see eiωt = cosωt + i sinωt.\n\n(d) d)\n\neiπ = cos(π) + i sin(π) = −1 (1) ln(−1) = iπ. (2)\n\n2. Find the angle between the vectors b⃗ = (1, 2, 4) and c⃗ = (4, 2, 1) by evaluating their scalar product.\n\nb⃗ · c⃗ = 1 + 4 + 4 = 12, |⃗b| = √ 1 + 4 + 16 =\n\n√ 21, |c⃗| =\n\n√ 16 + 4 + 1 =\n\n√ 21(3)\n\ncos θbc = b⃗ · c⃗ |⃗b| |c⃗|\n\n= 12\n\n21 , (4)\n\nθbc = cos −1(4/7). (5)\n\n3. Use the chain rule to show that\n\nd\n\ndt (r⃗ · s⃗) =\n\ndr⃗\n\ndt · s⃗ + r⃗ ·\n\nds⃗\n\ndt .\n\nSolution\n\nd\n\ndt\n\n∑ i\n\nrisi = ∑ i\n\ndri\n\ndt si + ri\n\ndsi\n\ndt (6)\n\n= dr⃗\n\ndt · s⃗ + r⃗ ·\n\nds⃗\n\ndt . (7)\n\nhttp://www.pa.msu.edu/courses/phy321\n\n• 4. Multiply the rotation matrix in Example 1 by its transpose to show that the matrix is unitary, i.e. you get the unit matrix.\n\nSolution:\n\n cosϕ sinϕ 0− sinϕ cosϕ 0 0 0 1\n\n cosϕ − sinϕ 0sinϕ cosϕ 0 0 0 1\n\n =\n\n cos2 ϕ + sin2 ϕ − cosϕ sinϕ + cos sinϕ 0− cosϕ sinϕ + cos sinϕ cos2 ϕ + sin2 ϕ 0 0 0 1\n\n =\n\n 1 0 00 1 0 0 0 1\n\n 5. Find the matrix for rotating a coordinate system by 90 degrees about the x axis.\n\nSolution:\n\nU =\n\n ê′1 · ê1 ê′1 · ê2 ê′1 · ê3ê′2 · ê1 ê′2 · ê2 ê′2 · ê3 ê′3 · ê1 ê′3 · ê2 ê′3 · ê3\n\n , Uij = cos θij =\n\n 1 0 00 0 −1 0 1 0\n\n The signs of the off-diagonal term would switch depending on whether the coordinate system changes or the object.\n\n6. Consider a parity transformation which reflects about the x = 0 plane. Find the matrix that performs the transformation. Find the matrix that performs the inverse transformation.\n\nSolution: The matrix needs to flip the x components of any vector while leaving the y and z components unchanged.  −1 0 00 1 0\n\n0 0 1\n\n 7. Show that the scalar product of two vectors is unchanged if both undergo the same rotation. Use\n\nthe fact that the rotation matrix is unitary, Uij = U −1 ji .\n\nSolution:\n\nx⃗ · y⃗ = xiyy x⃗′ · y⃗′ = UijxjUikyk\n\n= xjU t jiUikyk\n\n= xjU −1 ji Uikyk\n\n= xjδjkyk = xjyj.✓\n\n• 8. Show that the product of two unitary matrices is a unitary matrix. Solution:\n\n(UV )tUV =? 1\n\n(UV )tij = UjkVki\n\nUjkVkiUjmVmn =?δin\n\n= V tikU t kjUjmVmn\n\n= V tikVkb = δin.✓\n\n9. Show that ∑ k\n\nϵijkϵklm = δilδjm − δimδjl.\n\nSolution: For any k in the sum, because i ̸= j and l ̸= m, and because there are only two indices available because none can be equal to k, the only terms possible are δijδjm and δimδjl. One then multiplies the matrices out for i = 1, j = 2, l = 1,m = 2, i = 2, j = 3, l = 2,m = 3, and i = 1, j = 3, l = 1,m = 3. There are also solutions with i ← j and l ↔ m, but if you show these four, you know the ones with the indices switched will just switch the overall sign due to the antisymmetry of ϵ. To show the first case (i = 1, j = 2, l = 1,m = 2),∑\n\nk\n\nϵ12kϵ12k = 1, δ11δ22− δ12δ21 = 1.\n\nThe others are similar.\n\n10. Consider a cubic volume V = L3 defined by 0 < x < L, 0 < y < L and 0 < z < L. Consider a vector A⃗ that depends arbitrarily on x, y, z. Show how Guass’s law,∫\n\nV dv∇ · A⃗ =\n\n∫ S dS⃗ · A⃗,\n\nis satisfied by direct integration.\n\nSolution:\n\n∫ V dv∇ · A⃗ =\n\n∫ Lx 0\n\ndx\n\n∫ Ly 0\n\ndy\n\n∫ Lz 0\n\ndz (∂xAx + ∂yAy + ∂zLz)\n\n=\n\n∫ Ly 0\n\ndy\n\n∫ Lz 0\n\ndz [Ax(Lx, y, z)−Ax(0, y, z)]\n\n+\n\n∫ Lx 0\n\ndx\n\n∫ Lz 0\n\ndz [Ay(x, Ly, z)−Ay(x, 0, z)]\n\n+\n\n∫ Lx 0\n\ndx\n\n∫ Lz 0\n\ndy [Az(x, y, Lz)−Az(x, y, 0)]\n\n=\n\n∫ S dS⃗ · A⃗.\n\n• 11. Find min or max of z = 3x2 − 4y2 + 12xy − 6x + 24 Solution:\n\n∂xz = 6x + 12y − 6 = 0, (8) ∂yz = −8y + 12x = 0 (9)\n\n−32y + 12 = 0, y = 3/8, (10) x = 1/4. (11)\n\nTo find whether max or min,\n\nz(1/4, 3/8) = −6.375 (12) z(0, 0) = −6. (13)\n\nSuggests minimum. However, if you look more carefully, you will find this is a saddle point (Eigenvalues of ∂i∂jz matrix are mix of positive and negative. – You won’t be expected to do this). You can also try a few sample points and see that some values are higher than the xy point found above and some are lower.\n\n12. A real n−dimensional symmetric matrix λ can always be diagonalized by a unitary transfor- mation, i.e. there exists some unitary matrix U such that,\n\nUijλjkU −1 km = λ̃im =\n\n λ̃11 0 · · · 0 0 λ̃22 · · · 0 ...\n\n. . . ...\n\n0 · · · · · · λ̃nn\n\n . (14) The values λ̃ii are referred to as eigenvalues. The set of n eigenvalues are unique, but their ordering is not – there exists a unitary transformation that permutes the indices.\n\nConsider a function f(x1, · · · , xn) that has the property,\n\n∂if(x⃗)|x⃗=0 = 0, (15)\n\nfor all i. Show that if this function is a minimum, and not a maximum or an inflection point, that the n eigenvalues of the matrix\n\nλij ≡ ∂i∂jf(x⃗)|x⃗=0 , (16)\n\nmust be positive. Solution: You can always calculate δf in the new coordinate system, and because f is a scalar, you can write\n\nδf = 1\n\n2 δriδrj\n\n∂ri\n\n∂rj f(r⃗)\n\n= 1\n\n2 δr′iδr\n\n′ j\n\n∂r′i\n\n∂r′j f(r⃗′)\n\n= 1\n\n2\n\n∑ i\n\nδr′iδr ′ iλ̃ii.\n\n4 Because you can adjust the components of δr′ individually, then each component of λ̃, i.e. each eigenvalue of λ, must be positive.\n\n• 13. (a) For the unitary matrix U that diagonalizes λ as shown in the previous problem. Show that each row of the unitary matrix represents an orthogonal unit vector by using the definition of a unitary matrix.\n\n(b) Show that the vector\n\nx (k) i ≡ Uki = (Uk1, Uk2, · · · , Ukn), (17)\n\nhas the property that\n\nλijx (k) j = λ̃kkx\n\n(k) i . (18)\n\nThese vectors are known as eigenvectors, as they have the property that when multiplied by λ the resulting vector is proportional ( same direction) as the original vector. Because one can transform to a basis, using U , where λ is diagonalized, in the new basis the eigen- vectors are simply the unit vectors.\n\nSolution:\n\nU tjiUik = δik (19)\n\nx (j) i ≡ U\n\nt ji = Uij, (20)\n\nx (j) i x\n\n(k) i = δik.✓ (21)\n\n• 2 Chapter 2 Solutions\n\n1. Consider a bicyclist with air resistance proportional to v2 and rolling resistance proportional to v, so that\n\ndv\n\ndt = −Bv2 − Cv.\n\nIf the cyclist has initial velocity v0 and is coasting on a flat course, a) find her velocity as a function of time, and b) find her position as a function of time.\n\nSolution:\n\nt = − ∫ v v0\n\ndv′ 1\n\nBv′2 + Cv′\n\nBt = − 1\n\nβ\n\n∫ v v0\n\ndv′ (\n\n1\n\nv′ −\n\n1\n\nv′ + β\n\n) , β ≡ C/B\n\n= −1 β\n\nln\n\n( v(v0 + β)\n\nv0(v + β)\n\n) ,\n\nv = v0e\n\n−Ct\n\n1 + (Bv0/C)(1− e−Ct) ,\n\nx =\n\n∫ dt v(t) =\n\n−1 C\n\n∫ du\n\nv0\n\n1 + Bv0(1− u)/C , u ≡ e−Ct\n\n= 1\n\nB ln (1 + Bv0(1− u)/c)\n\n= 1\n\nB ln [ 1 + Bv0(1− e−Ct)/C\n\n] .\n\n2. For Eq. (36) show that in the limit where γ → 0 one finds t = 2v0y/g. Solution: Eq. (36) is\n\n0 = − gt\n\nγ +\n\nv0y + g/γ\n\nγ\n\n( 1− e−γt\n\n) .\n\nExpand the exponential to 2nd order in a Taylor expansion,\n\n0 = − gt\n\nγ +\n\nv0y + g/γ\n\nγ\n\n( γt−\n\n1\n\n2 γ2t2 − · · ·\n\n) 0 = v0yt−\n\n1\n\n2 gt2,\n\nt = 2v0y/g.\n\n3. The motion of a charged particle in an electromagnetic field can be obtained from the Lorentz equation. If the electric field vector is E and the magnetic field is B, the force on a particle of mass m that carries a charge q and has a velocity v\n\nF = qE + qv × B\n\nwhere we assume v\n\n• (a) If there is no electric field and if the particle enters the magnetic field in a direct" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7441853,"math_prob":0.9989962,"size":7559,"snap":"2022-40-2023-06","text_gpt3_token_len":3088,"char_repetition_ratio":0.10350761,"word_repetition_ratio":0.06573589,"special_character_ratio":0.38325176,"punctuation_ratio":0.12742858,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9989065,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-26T10:11:32Z\",\"WARC-Record-ID\":\"<urn:uuid:e1f330cb-1f8e-4b7f-9f5d-2a8dc2e79040>\",\"Content-Length\":\"103922\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ac8e8970-ad18-40a8-a31c-47029693968b>\",\"WARC-Concurrent-To\":\"<urn:uuid:70a587d4-eaf1-4427-ac5f-857a2a8087a7>\",\"WARC-IP-Address\":\"172.67.195.50\",\"WARC-Target-URI\":\"https://fdocument.org/document/phy-321-classical-mechanics-i-homework-solutions-2-chapter-2-solutions-1-consider.html\",\"WARC-Payload-Digest\":\"sha1:KVZ4UVX73XHMPLXKLZ527YDSKRZB7IIO\",\"WARC-Block-Digest\":\"sha1:X4LCCWONNFS2SIFAVHF62EOPOBJZAEEK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334855.91_warc_CC-MAIN-20220926082131-20220926112131-00112.warc.gz\"}"}
https://www.cloudhadoop.com/2018/08/understanding-functional-programming
[ "# Understanding Functional programming basics in javascript with examples\n\nIn this post, We will discuss different programming paradigm related to functional programming, how it will be implemented in javascript.\n\n#### Functional Programming\n\nIt is one of programming type to handle computations and mathematical applications. Haskell, Clojure, Python, Erlang, and Scala are complete functional programming languages. Javascript is not strictly functional programming languages but it supports some of the techniques in it.\n\n#### Applicative Programming\n\nIt is one more pattern for a function as parameter other function and executes for each element in an array or collection. To make it happen, javascript introduced First class objects/citizens.\n\nJavascript already implemented this paradigm with Array and String methods like forEach method, filter, every(), Map() methods. These methods are iterated each element and apply function at each element.\n\nWe will see examples in the next sections\nFunctions are also objects in javascript\nFunctions also contain properties\n\nlet us see simple function creation\n\n``````console.log(testmethod); // returns function code\nconsole.log(testmethod.name); // returns testmethod\nfunction testmethod() {\nconsole.log('Howdie');\n\n}\n``````\n\naAter running the aboe code, It just returns function. Let u\nFunctions also has properties, testmethod.name prints name of the function\n\n#### First Class Functions\n\nWhat are First Class Functions?\n\nFirst class functions in any programming are called for functions if Functions can be treated the same as variables. whatever you do with variables, you can do everything with javascript functions also.\n\nthat means functions are saved to variables, passed as a parameter to function and returned from the function.\n\nIt is also called  First class citizens or First class objects of Javascript\n\nWe will see detailed tutorials using functions as equal to using a variable  with examples below\n\n• Functions can be assigned to variables/constants/objects/arrays\n• Functions can be passed as a parameter to a function\n• Functions can be returned from a Function\n\n#### Function saved to variables/constants/objects/arrays\n\nHere we assigned the anonyms function to the variable. we are calling the function with that variable. Named functions can also be called like this. Functions and variables/values are treated the same in javascript\nFunction assign to a variable\nThe function also can be assigned to variables\n\n``````let functionVariable = function method() {}\n\n``````\n\nAssign to a constant\nThe function also can be assigned to constants\n\n``````let functionVariable = function method() {}\nconst function1 = function() {\nconsole.log(\"function printed\");\n}\nfunction1();\n\n``````\n\nFunction assign to Object\nThis is an example of assigning or storing function like a variable in Object\n\n``````let obj = { method : function(){} }\n\n``````\n\nFunction save to array\nThis is a syntax of storing function like an element in arrays\n\n``````myarray.push(function method() {})\n\n``````\n\n#### Passing as a Function argument to other functions\n\nThis example, passing a function as an argument to function Here function2 is passed as an argument to display function function2 is a callback function\n\n``````function function2() {\nreturn \"function2 \";\n}\nfunction display(helloMessage, name) {\nconsole.log(function2() + name);\n}\ndisplay(function2);\n\n``````\n\n#### Function returns function like a variable value\n\n``````function mainfunction() {\nreturn function(){\nreturn \"return function as value\"}\n}\n\n``````\n\nFrom the above functions examples, We can say javascript in functions are treated as a variable. we can easily say first class citizens.\n\nHigh Order Functions are functions which takes a function as an argument and return the function as parameters\n\n#### Data is immutable in Functional programming\n\nIn the functional programming, state or data is immutable meaning not able to update it.\n\nFor a variable,\n\nIf you want to change data in a variable, you need to create a new variable. To achieve this use const keyword for a declaration\n\nFor arrays\n\nyou need to create a new array and concat existing array You can use Object.assign() method or spread operator to achieve this.\n\nFor objects\n\nData in the object will never be changed, So you need to use Object.assign for cloning the object and update it.\n\nIn functional programming data is never changed, You need to create a copy of data to change it.\n\nfunctions provide copy of data and javascript native map and reduce methods are supported for returning copy/creation of existing data.\n\nSimilar Posts\nSubscribe\nYou'll get a notification every time a post gets published here.\n\nRelated posts" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.79975766,"math_prob":0.81729203,"size":3991,"snap":"2020-34-2020-40","text_gpt3_token_len":765,"char_repetition_ratio":0.19638826,"word_repetition_ratio":0.025931928,"special_character_ratio":0.19543974,"punctuation_ratio":0.0966325,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9588813,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-10T16:36:32Z\",\"WARC-Record-ID\":\"<urn:uuid:d244807c-e6fb-4c95-b221-afa35e21275e>\",\"Content-Length\":\"32077\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8cb75432-2678-4bcd-adfc-84563b268c17>\",\"WARC-Concurrent-To\":\"<urn:uuid:1d6d95c0-b3a4-451e-8883-d44c066cf2e1>\",\"WARC-IP-Address\":\"192.81.212.192\",\"WARC-Target-URI\":\"https://www.cloudhadoop.com/2018/08/understanding-functional-programming\",\"WARC-Payload-Digest\":\"sha1:6W3ZVTEEMRH4R43RKFBKENTMCVVTSQWD\",\"WARC-Block-Digest\":\"sha1:KUSMBKH4OOVNMBOPGOJKCTTJHEYI63NK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439736057.87_warc_CC-MAIN-20200810145103-20200810175103-00118.warc.gz\"}"}
https://hungary.pure.elsevier.com/en/publications/spring-constant-of-microcantilevers-in-fundamental-and-higher-eig
[ "# Spring constant of microcantilevers in fundamental and higher eigenmodes\n\nJ. Kokavecz, A. Mechler\n\nResearch output: Contribution to journalArticle\n\n10 Citations (Scopus)\n\n### Abstract\n\nMicrocantilever beams are versatile force sensors used for, among others, microaccelerometry, microelectromechanical systems, and surface force measurements, the most prominent application being atomic force microscopic imaging and force spectroscopy. Bending of the cantilever is used for simple force measurements, while changes in the amplitude or frequency of the fundamental resonance are used to detect small interaction forces or brief perturbations. Spring constants needed for quantitative measurements are determined by \"reversing\" the force measurements, using either Hooke's law or the oscillation of the beam. The equality of the Hookian and the oscillating spring constant is generally assumed; however, consistent differences in experimental results suggest otherwise. In this work, we introduce a theoretical formula to describe the relationship between these two spring constants for an Euler-Bernoulli beam. We show that the two spring constants are not equal, although the percentage difference stays in the range of a single digit. We derive a general formula for the determination of effective spring constants of arbitrary eigenmodes of the cantilever beam. We demonstrate that all overtones can be treated with a linear spring - effective mass approach, where the mass remains the same for higher eigenmodes.\n\nOriginal language English 172101 Physical Review B - Condensed Matter and Materials Physics 78 17 https://doi.org/10.1103/PhysRevB.78.172101 Published - Nov 4 2008\n\n### Fingerprint\n\nForce measurement\nSurface measurement\nCantilever beams\nMEMS\nSpectroscopy\nImaging techniques\nEuler-Bernoulli beams\nSensors\nreversing\ndigits\ncantilever beams\nguy wires\nmicroelectromechanical systems\nharmonics\nperturbation\noscillations\nsensors\nspectroscopy\n\n### ASJC Scopus subject areas\n\n• Condensed Matter Physics\n• Electronic, Optical and Magnetic Materials\n\n### Cite this\n\nIn: Physical Review B - Condensed Matter and Materials Physics, Vol. 78, No. 17, 172101, 04.11.2008.\n\nResearch output: Contribution to journalArticle\n\n@article{dd78c62bcbdf4b30b1f567198285b4c7,\ntitle = \"Spring constant of microcantilevers in fundamental and higher eigenmodes\",\nabstract = \"Microcantilever beams are versatile force sensors used for, among others, microaccelerometry, microelectromechanical systems, and surface force measurements, the most prominent application being atomic force microscopic imaging and force spectroscopy. Bending of the cantilever is used for simple force measurements, while changes in the amplitude or frequency of the fundamental resonance are used to detect small interaction forces or brief perturbations. Spring constants needed for quantitative measurements are determined by {\"}reversing{\"} the force measurements, using either Hooke's law or the oscillation of the beam. The equality of the Hookian and the oscillating spring constant is generally assumed; however, consistent differences in experimental results suggest otherwise. In this work, we introduce a theoretical formula to describe the relationship between these two spring constants for an Euler-Bernoulli beam. We show that the two spring constants are not equal, although the percentage difference stays in the range of a single digit. We derive a general formula for the determination of effective spring constants of arbitrary eigenmodes of the cantilever beam. We demonstrate that all overtones can be treated with a linear spring - effective mass approach, where the mass remains the same for higher eigenmodes.\",\nauthor = \"J. Kokavecz and A. Mechler\",\nyear = \"2008\",\nmonth = \"11\",\nday = \"4\",\ndoi = \"10.1103/PhysRevB.78.172101\",\nlanguage = \"English\",\nvolume = \"78\",\njournal = \"Physical Review B-Condensed Matter\",\nissn = \"0163-1829\",\npublisher = \"American Physical Society\",\nnumber = \"17\",\n\n}\n\nTY - JOUR\n\nT1 - Spring constant of microcantilevers in fundamental and higher eigenmodes\n\nAU - Kokavecz, J.\n\nAU - Mechler, A.\n\nPY - 2008/11/4\n\nY1 - 2008/11/4\n\nN2 - Microcantilever beams are versatile force sensors used for, among others, microaccelerometry, microelectromechanical systems, and surface force measurements, the most prominent application being atomic force microscopic imaging and force spectroscopy. Bending of the cantilever is used for simple force measurements, while changes in the amplitude or frequency of the fundamental resonance are used to detect small interaction forces or brief perturbations. Spring constants needed for quantitative measurements are determined by \"reversing\" the force measurements, using either Hooke's law or the oscillation of the beam. The equality of the Hookian and the oscillating spring constant is generally assumed; however, consistent differences in experimental results suggest otherwise. In this work, we introduce a theoretical formula to describe the relationship between these two spring constants for an Euler-Bernoulli beam. We show that the two spring constants are not equal, although the percentage difference stays in the range of a single digit. We derive a general formula for the determination of effective spring constants of arbitrary eigenmodes of the cantilever beam. We demonstrate that all overtones can be treated with a linear spring - effective mass approach, where the mass remains the same for higher eigenmodes.\n\nAB - Microcantilever beams are versatile force sensors used for, among others, microaccelerometry, microelectromechanical systems, and surface force measurements, the most prominent application being atomic force microscopic imaging and force spectroscopy. Bending of the cantilever is used for simple force measurements, while changes in the amplitude or frequency of the fundamental resonance are used to detect small interaction forces or brief perturbations. Spring constants needed for quantitative measurements are determined by \"reversing\" the force measurements, using either Hooke's law or the oscillation of the beam. The equality of the Hookian and the oscillating spring constant is generally assumed; however, consistent differences in experimental results suggest otherwise. In this work, we introduce a theoretical formula to describe the relationship between these two spring constants for an Euler-Bernoulli beam. We show that the two spring constants are not equal, although the percentage difference stays in the range of a single digit. We derive a general formula for the determination of effective spring constants of arbitrary eigenmodes of the cantilever beam. We demonstrate that all overtones can be treated with a linear spring - effective mass approach, where the mass remains the same for higher eigenmodes.\n\nUR - http://www.scopus.com/inward/record.url?scp=56349155070&partnerID=8YFLogxK\n\nUR - http://www.scopus.com/inward/citedby.url?scp=56349155070&partnerID=8YFLogxK\n\nU2 - 10.1103/PhysRevB.78.172101\n\nDO - 10.1103/PhysRevB.78.172101\n\nM3 - Article\n\nAN - SCOPUS:56349155070\n\nVL - 78\n\nJO - Physical Review B-Condensed Matter\n\nJF - Physical Review B-Condensed Matter\n\nSN - 0163-1829\n\nIS - 17\n\nM1 - 172101\n\nER -" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8212468,"math_prob":0.745938,"size":5133,"snap":"2020-10-2020-16","text_gpt3_token_len":1110,"char_repetition_ratio":0.1251706,"word_repetition_ratio":0.79698217,"special_character_ratio":0.20358464,"punctuation_ratio":0.117577195,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97144383,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-21T23:11:32Z\",\"WARC-Record-ID\":\"<urn:uuid:e55aa928-3ff2-4a15-9017-666136ab2bde>\",\"Content-Length\":\"42498\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c8816f5e-1380-4e51-82e8-18ab2faf3791>\",\"WARC-Concurrent-To\":\"<urn:uuid:77a634b6-b006-4f33-b867-1f7619e57fd7>\",\"WARC-IP-Address\":\"52.209.51.54\",\"WARC-Target-URI\":\"https://hungary.pure.elsevier.com/en/publications/spring-constant-of-microcantilevers-in-fundamental-and-higher-eig\",\"WARC-Payload-Digest\":\"sha1:3V4BVSONLTOJQ7FUFM5FEVFZKV5QEIH3\",\"WARC-Block-Digest\":\"sha1:L5645GZRRGK5KMKWGRTOVOVKYHRLTWR3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145538.32_warc_CC-MAIN-20200221203000-20200221233000-00298.warc.gz\"}"}
https://hindi.examsdaily.in/nimcet-syllabus-pdf
[ "# NIMCET Syllabus 2021 PDF – Download Entrance Test Pattern @ nimcet.in!!!\n\n0\n937\n\nNIMCET Syllabus 2021 PDF – Download Entrance Test Pattern @ nimcet.in. National Institute Of Technology Raipur has going to conduct the NIT MCA Common Entrance Test (NIMCET) admission to the MCA programme on the official site; That’s why we have attached the Syllabus and Exam Patter for Registered Candidates.\n\n### NIMCET Syllabus 2021\n\n Name of the Board National Institute Of Technology Raipur Exam Name T MCA Common Entrance Test (NIMCET) Exam Date 23.03.2021 Admission for MCA Result Date 08.06.2021 Status Syllabus Available\n\n### NIMCET Exam Date 2021:\n\nNIMCET is scheduled to take place on May 23, 2021 and the result announcement will be announced on June 8, 2021. The NIMCET2021 admit card is an important identity document and must be carried to the exam hall, else candidates will not be allowed to appear for the entrance test.\n\n### NIMCET Exam Pattern 2021:\n\nNIMCET-2021 test will be conducted with only one question paper containing 120 multiple choice questions covering the following subjects. Multiple Choice Questions will be written in English Language only and will not be translated into any other language.\n\n Subject Total number of questions Total Marks Allotted Mathematics 50 200 Analytical Ability and Logical Reasoning 40 160 General English 20 80 Computer Awareness 10 40 Total 120 480\n\n### NIMCET Syllabus 2021:\n\n#### Mathematics Syllabus\n\nThere will be a total of 50 questions from this section\n\nSet Theory: Concept of sets – Union, Intersection, Cardinality, Elementary counting; permutations and combinations.\n\nProbability and Statistics: Basic concepts of probability theory, Averages, Dependent and independent events, frequency distributions, measures of central tendencies and dispersions.\n\nAlgebra: Fundamental operations in algebra, expansions, factorization, simultaneous linear /quadratic equations, indices, logarithms, arithmetic, geometric and harmonic progressions, determinants and matrices.\n\nCoordinate Geometry: Rectangular Cartesian coordinates, distance formulae, equation of a line, and the intersection of lines, pair of straight lines, equations of a circle, parabola, ellipse, and hyperbola.\n\nCalculus: Limit of functions, continuous function, differentiation of function, tangents and normals, simple examples of maxima and minima. Integration of functions by parts, by substitution and by partial fraction, definite integrals, applications of definite integrals to areas.\n\nVectors: the Position vector, addition and subtraction of vectors, scalar and vector products and their applications to simple geometrical problems and mechanics.\n\nTrigonometry: Simple identities, trigonometric equations properties of triangles, solution of triangles, heights and distances, general solutions of trigonometric equations.\n\n#### Analytical Ability and Logical Reasoning:\n\nThere will be a total of 40 questions from this section. The questions in this section will cover\n\n• logical situations\n• questions based on the facts given in the passage.\n\n#### Computer Awareness:\n\nThere will be a total of 10 questions from this section:\n\nComputer Basics: Organization of a computer, Central Processing Unit (CPU), the structure of instructions in CPU, input/output devices, computer memory, and back-up devices.\n\nData Representation: Representation of characters, integers and fractions, binary and hexadecimal representations, binary arithmetic: addition, subtraction, multiplication, division, simple arithmetic and two’s complement arithmetic, floating-point representation of numbers, Boolean algebra, truth tables, Venn diagrams.\n\n#### General English:\n\nThere will be a total of 20 questions from this section:\n\n• Comprehension\n• Vocabulary\n• Basic English Grammar\n• Word power\n• Synonyms and Antonyms\n• Meaning of words and phrases\n• Technical writing.\n\nOfficial Site\n\n For", null, "Online Test Series Click Here To Join", null, "Whatsapp Click Here To Subscribe", null, "Youtube Click Here To Join", null, "Telegram Channel Click Here To Join", null, "Facebook Click Here\nWhat is the Exam Date for NIMCET 2021?\n\nNIMCET is scheduled to take place on May 23, 2021.\n\nWhat is the result date for NIMCET 2021?\n\nThe Result will be Released on 08.06.2021." ]
[ null, "https://examsdaily.in/wp-content/uploads/2020/02/ot.png", null, "https://examsdaily.in/wp-content/uploads/2019/07/whats.png", null, "https://examsdaily.in/wp-content/uploads/2019/07/you.png", null, "https://examsdaily.in/wp-content/uploads/2019/07/tele.png", null, "https://hindi.examsdaily.in/wp-content/uploads/2020/06/fb.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.79557353,"math_prob":0.59063226,"size":4090,"snap":"2021-31-2021-39","text_gpt3_token_len":897,"char_repetition_ratio":0.1174743,"word_repetition_ratio":0.034305315,"special_character_ratio":0.21124694,"punctuation_ratio":0.15306123,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96397936,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,8,null,8,null,8,null,null,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-21T02:24:09Z\",\"WARC-Record-ID\":\"<urn:uuid:26ce264c-0d55-4619-89eb-33792ff77d94>\",\"Content-Length\":\"127878\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7c6b5933-11fc-4ba9-94fd-c63ba321b8e2>\",\"WARC-Concurrent-To\":\"<urn:uuid:22642368-7ac9-4d7c-8a26-efc71dd63c66>\",\"WARC-IP-Address\":\"13.127.205.166\",\"WARC-Target-URI\":\"https://hindi.examsdaily.in/nimcet-syllabus-pdf\",\"WARC-Payload-Digest\":\"sha1:2BFOGKXOB65XBCTFXF5MGDMPVRBQHPUB\",\"WARC-Block-Digest\":\"sha1:UBZEXNKHDL3ZCBAPHT42R62DS6OAXGVR\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057131.88_warc_CC-MAIN-20210921011047-20210921041047-00234.warc.gz\"}"}
https://alphaarchitect.com/2022/11/optimal-trend-following-rules-in-two-state-regime-switching-models/
[ "• Valeriy Zakamulin and Javier Giner\n• Optimal Trend Following Rules in Two-State Regime-Switching Models\n• Working paper, University of Agder and University of La Laguna\n• A version of this paper can be found here\n\n## What are the Motivations?\n\nAcademic research on trend-following investing has almost exclusively been focused on testing the profitability of various trading rules. Most of these rules are based on moving averages of past prices. The most popular is the Simple Moving Average (SMA). Less commonly used types of moving averages are the Linear Moving Average (LMA) and Exponential Moving Average (EMA). Each moving average is computed using an averaging window of a particular size. Besides, a trend-following rule can be based on either a single moving average or a combination of moving averages. For example, a moving average crossover is a rule constructed using two moving averages: one with short window size and another with long window size. There are SMA, LMA, and EMA crossovers. A popular Moving Average Convergence/Divergence (MACD) rule is based on three EMAs. Last but not least, there is a Momentum (MOM) rule that can also be considered as a specific moving average rule. However, all existing trend-following rules are ad-hoc rules whose optimality has never been justified theoretically.\n\n## What are the Research Questions?\n\nThe goal of our research is to examine the optimal trend-following rules when the stock returns follow a two-state process that randomly switches between bull and bear markets. A two-state regime-switching process is a widely accepted model for stock returns that can reproduce a number of stylized facts: fat tails, negative skewness, volatility clustering, short-term momentum, and medium-term mean reversion.\n\n## What are the Theoretical Findings?\n\nFirst, we consider the case where the returns follow a Markov switching process. We show that in this case it is optimal to follow the trend using the EMA. A great advantage of a Markov switching model is its analytical tractability. However, this model oversimplifies the complexity of the real-life return process. In particular, a serious limitation of a Markov model is that the state termination probability does not depend on the time already spent in that state. In other words, there is no duration dependence. By contrast, many empirical studies document that the bull and bear states of the market exhibit positive duration dependence. A positive duration dependence means that the longer a bull (bear) market lasts, the higher its probability of ending.\n\nSecond, we consider the case where the returns follow a semi-Markov switching process where the stock market states exhibit positive duration dependence. A severe obstacle in this case is that a semi-Markov model lacks analytical tractability. Besides, all numerical computations rely on using complicated recursive algorithms. Our analysis relies on a semi-Markov model realized as an expanded-state Markov model suggested by Giner and Zakamulin (2021). This model allows some analytical tractability, and the numerical computations are of the same complexity as that of a conventional Markov model. Our results show that the optimal trend-following rule in a semi-Markov model is somewhat similar to the MACD rule.\n\n## Does the Empirical Evidence Support the Theoretical Results?\n\nYes, we demonstrate that our theoretical semi-Markov model is in good agreement with the empirical data, and our theoretically optimal trend-following rule outperforms the popular trend-following rules used by investment professionals and academics. In particular, in our empirical application, we start with fitting our semi-Markov model to the monthly returns of the S&P 500 and Dow Jones Industrial Average indices. Then we simulate the trend-following strategy using the theoretically optimal trading rule and demonstrate that the estimated performance of this strategy is better than that of the popular 10-month SMA rule and the 12-month MOM rule.\n\nFor each market index, the table below reports the descriptive statistics and performance of alternative trading strategies. Our main observations are as follows.\n\n• First, regardless of the choice of a market index, all trend-following rules generate positive and statistically significant alpha.\n• Second, all rules have an economically significantly higher Sharpe ratio than the buy-and-hold strategy. However, only the theoretically optimal rule has a Sharpe ratio that is statistically significantly higher than that of the buy-and-hold strategy regardless of the choice of a market index.\n• Third, there are clear indications that the 10-month SMA rule outperforms the 12-month MOM rule, whereas the optimal rule outperforms both alternative trend-following rules. In particular, the theoretically optimal rule has a higher mean return, skewness, Sharpe ratio, and alpha than the SMA and MOM rules. Besides, the optimal rule has a lower standard deviation of returns than the SMA and MOM rules. However, the differences are relatively small and not statistically significant.", null, "The results are hypothetical results and are NOT an indicator of future results and do NOT represent returns that any investor actually attained. Indexes are unmanaged, do not reflect management or trading fees, and one cannot invest directly in an index.\n\n## References\n\nGiner and Zakamulin (2021), “Momentum and Mean Reversion in a Semi-Markov Model for Stock Returns”, working paper available at the SSRN" ]
[ null, "data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%201200%20424'%3E%3C/svg%3E", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91593355,"math_prob":0.86145216,"size":5330,"snap":"2023-40-2023-50","text_gpt3_token_len":1053,"char_repetition_ratio":0.13030417,"word_repetition_ratio":0.034739453,"special_character_ratio":0.18330206,"punctuation_ratio":0.09169363,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97845167,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-22T18:47:22Z\",\"WARC-Record-ID\":\"<urn:uuid:c2fa6efe-11c0-466f-9f3c-c1451eba0417>\",\"Content-Length\":\"405644\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2917ac82-2a20-46da-8210-cc8d555df2b4>\",\"WARC-Concurrent-To\":\"<urn:uuid:f54c494e-43d3-443f-93c2-16c6b6a35c28>\",\"WARC-IP-Address\":\"104.26.5.49\",\"WARC-Target-URI\":\"https://alphaarchitect.com/2022/11/optimal-trend-following-rules-in-two-state-regime-switching-models/\",\"WARC-Payload-Digest\":\"sha1:W6GZH7U7BKIKSV2R6AMPL22PITTQ6PXP\",\"WARC-Block-Digest\":\"sha1:EUBSFYH37LUUHZX5RFSEKFON65PCZLO3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506421.14_warc_CC-MAIN-20230922170343-20230922200343-00843.warc.gz\"}"}
https://forums.codeguru.com/showthread.php?536325-Lehmer-gcd-algorithm&s=92fa2e4bb1098c4c8a4eb6e601d65e43&p=2239731
[ "CodeGuru Home VC++ / MFC / C++ .NET / C# Visual Basic VB Forums Developer.com\n\n1.", null, "Junior Member", null, "Join Date\nApr 2013\nPosts\n1\n\n## Lehmer gcd algorithm\n\nHey guys!\n\nI have a problem with programming a Lehmer's gcd algorithm. It works fine but the problem is that it's slower than the basic euclidean algorithm. Can anyone tell me where is the problem.\nThis is my code in java:\n\nprivate static BigInteger LEHMER(BigInteger A0, BigInteger A1) {\n\nBigInteger compareNum = new BigInteger(\"4294967296\"); //4294967296 = math.pow(2,32)\n\nwhile( A1.compareTo(compareNum) >= 0) {\nint h = A0.bitCount()+1 - 32;\nBigInteger a0 = A0.shiftRight(h);\nBigInteger a1 = A1.shiftRight(h);\n\nBigInteger u0 = new BigInteger(\"1\");\nBigInteger u1 = new BigInteger(\"0\");\nBigInteger v0 = new BigInteger(\"0\");\nBigInteger v1 = new BigInteger(\"1\");\n\nbreak;\n\nBigInteger r = a0.subtract(q0.multiply(a1)); a0 = a1; a1 = r;\nr = u0.subtract(q0.multiply(u1)); u0 = u1; u1 = r;\nr = v0.subtract(q0.multiply(v1)); v0 = v1; v1 = r;\n}\n\nif (v0.equals(BigInteger.ZERO)) {\nBigInteger R = A0.remainder(A1); A0 = A1; A1 = R;\n}\nelse {\nA0 = R; A1 = T;\n}\n}\n\nwhile (A1.compareTo(BigInteger.ZERO) > 0) {\nBigInteger R = A0.remainder(A1); A0 = A1; A1 = R;\n}\n\nreturn A0;\n}\n\nThe basic Euclidean algorithm:\n\nprivate static BigInteger EA(BigInteger a, BigInteger b) {\n\nwhile ( !b.equals(BigInteger.ZERO) ) {\nBigInteger q = a.divide(b);\nBigInteger r = a.subtract(q.multiply(b)); a = b; b = r;\n}\n\nreturn a;\n}", null, "", null, "Reply With Quote\n\n2.", null, "Junior Member", null, "Join Date\nJul 2021\nPosts\n1\n\n## Re: Lehmer gcd algorithm\n\nHi,\nLooks good.\nSeems, for u0, u1, v0, v1 you do not need to use BigInteger type, just use primitive int (unsigned?) or long (you may extract 63 bits in this case).", null, "", null, "Reply With Quote\n\n3. ## Re: Lehmer gcd algorithm\n\nWell, good reply on the question asked more than eight years back!", null, "", null, "", null, "Reply With Quote\n\n####", null, "Posting Permissions\n\n• You may not post new threads\n• You may not post replies\n• You may not post attachments\n• You may not edit your posts\n•", null, "" ]
[ null, "https://forums.codeguru.com/images/statusicon/user-offline.png", null, "https://forums.codeguru.com/images/reputation/reputation_pos.png", null, "https://forums.codeguru.com/images/misc/progress.gif", null, "https://forums.codeguru.com/clear.gif", null, "https://forums.codeguru.com/images/statusicon/user-offline.png", null, "https://forums.codeguru.com/images/reputation/reputation_pos.png", null, "https://forums.codeguru.com/images/misc/progress.gif", null, "https://forums.codeguru.com/clear.gif", null, "https://forums.codeguru.com/images/smilies/biggrin.gif", null, "https://forums.codeguru.com/images/misc/progress.gif", null, "https://forums.codeguru.com/clear.gif", null, "https://forums.codeguru.com/images/buttons/collapse_40b.png", null, "https://www.codeguru.com/imagesvr_ce/7083/windows_mobile_banner.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5170325,"math_prob":0.9913739,"size":2233,"snap":"2021-31-2021-39","text_gpt3_token_len":690,"char_repetition_ratio":0.18483625,"word_repetition_ratio":0.05202312,"special_character_ratio":0.35244066,"punctuation_ratio":0.23045267,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99942136,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-03T14:39:28Z\",\"WARC-Record-ID\":\"<urn:uuid:99ff6593-e1dd-4bb3-815e-74afb967189a>\",\"Content-Length\":\"86093\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a79ab645-6614-4281-8e34-7076242e2535>\",\"WARC-Concurrent-To\":\"<urn:uuid:691462ad-8551-4a00-9b66-9e6e6a4858f2>\",\"WARC-IP-Address\":\"3.143.14.125\",\"WARC-Target-URI\":\"https://forums.codeguru.com/showthread.php?536325-Lehmer-gcd-algorithm&s=92fa2e4bb1098c4c8a4eb6e601d65e43&p=2239731\",\"WARC-Payload-Digest\":\"sha1:5KKFJGZWB3CHH4NDIPKJVHTYEWRLGZC3\",\"WARC-Block-Digest\":\"sha1:M2RTHZYA6V6FQYWRORAV7KRK2OKBJ37X\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154459.22_warc_CC-MAIN-20210803124251-20210803154251-00409.warc.gz\"}"}
https://www.reference.com/math/prime-factorization-256-a1623494ba87800b
[ "What Is the Prime Factorization of 256?\n\nThe prime factorization of 256 is two to the eighth power, or 2^8. The prime factors of a given number are the prime numbers that, when multiplied together, yield the given number. Prime numbers are numbers that are evenly divisible by only themselves and one.\n\nTo find the prime factors of 256, start by finding two numbers that multiply together to make 256, such as 8 and 32. Neither of these numbers is prime, so both must be further broken down into prime numbers. Eight equals four times two. Two is prime, but four is equal to two times two. Thirty-two equals eight times four. Eight is two times two times two, and four is two times two. Two, eight times, is the prime factor of 256.\n\nSimilar Articles" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9571132,"math_prob":0.9997627,"size":690,"snap":"2019-43-2019-47","text_gpt3_token_len":158,"char_repetition_ratio":0.1909621,"word_repetition_ratio":0.0,"special_character_ratio":0.23913044,"punctuation_ratio":0.13422818,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999113,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-14T05:51:35Z\",\"WARC-Record-ID\":\"<urn:uuid:264b633d-c278-4b8a-9649-e36c37c072a6>\",\"Content-Length\":\"169506\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ed883e46-2e61-4628-a41a-7a72d5fac0ea>\",\"WARC-Concurrent-To\":\"<urn:uuid:2d889bcd-53d4-4294-8abb-0325108a9815>\",\"WARC-IP-Address\":\"151.101.202.114\",\"WARC-Target-URI\":\"https://www.reference.com/math/prime-factorization-256-a1623494ba87800b\",\"WARC-Payload-Digest\":\"sha1:XD4ZMF7GLP2GSKP2RAOBWVHIXEQC2WFS\",\"WARC-Block-Digest\":\"sha1:RYPZE33B5R5SXAUTT4P6YPVRPJ63MBSW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986649232.14_warc_CC-MAIN-20191014052140-20191014075140-00111.warc.gz\"}"}
http://midwaylabsusa.com/blog/calories-burn-speed-running
[ "", null, "# How many calories do we burn speed running?", null, "# Burning Calories\n\nThe caloric expenditure of the different activities is one of the most important indicators of the effectiveness of the exercises. Their estimation when done with adequate accuracy. Provides important information for dietary adequacy, weight loss programs. It also quantifies the intensity of exercise. In occupational medicine, the measurement of caloric expenditure facilitates the adequacy of the workday. As well as the number of breaks, and the characterization of the tolerance degree of the worker. Estimating the caloric expenditure of the race has already become a habit of the runners, and the more modern treadmills show in the panel the approximate caloric expenditure for different speeds.\n\n## Oxygen Consumption\n\nThe physiological principle of this calculation are reference standards of the measurement of the oxygen consumption of the race. The consumption of oxygen directly reflects the production of energy, that is, the caloric expenditure. When we consume 1 liter of oxygen to \"burn\" our energy substrates (mainly fat and carbohydrate) we produce about 5 Calories. When we run we multiply our resting oxygen consumption which is called MET. This consumption is constant and equals 3.5 ml of oxygen per pound of body weight per minute. There is a very adequate estimate to consider that for each Km / h of running speed we spend 1 MET. So when we run at 10 km / h speed the oxygen consumption is 10 times rest, ie 10 MET. For simplicity we can use a rule of thumb that calculates how many calories per minute we are spending while running, introducing 2 variables: race speed and body weight. With a calculator, make the following calculation: CALORIC EXPENDITURE IN CALORIES / MIN = SPEED (KM / H) X WEIGHT (Kg) x 0.0175 To give an example, an individual of 78 kg running at a speed of 8 km / h will be spending: 8 x 78 x 0.0175 = 10.92 Calories per minute. A 1-hour run at this speed will therefore have spent 10.92 x 60 min = 637.2 Calories. It is important to note that this calculation is valid for running in the plane. Any slope or slope in the course changes this value. For those who usually travel a certain distance in each workout, the calculation of the time allows the estimated average speed developed and thus calculate the caloric expenditure according to the formula above." ]
[ null, "https://www.facebook.com/tr", null, "http://midwaylabsusa.com/uploads/blog/destaque/2017/04/shutterstock_340082210.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9073868,"math_prob":0.9920457,"size":2323,"snap":"2019-13-2019-22","text_gpt3_token_len":496,"char_repetition_ratio":0.13799052,"word_repetition_ratio":0.0,"special_character_ratio":0.21179509,"punctuation_ratio":0.09259259,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9802489,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-23T03:24:08Z\",\"WARC-Record-ID\":\"<urn:uuid:0cf4e09f-3832-40a6-a502-4b9b0c3bef87>\",\"Content-Length\":\"29494\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c24682ed-5d40-4ac2-86ec-7d9ba66c10f8>\",\"WARC-Concurrent-To\":\"<urn:uuid:fab64256-cd45-42f5-8927-0656d35d3c2d>\",\"WARC-IP-Address\":\"107.180.51.31\",\"WARC-Target-URI\":\"http://midwaylabsusa.com/blog/calories-burn-speed-running\",\"WARC-Payload-Digest\":\"sha1:24QPYB3HO3IA6NN724GGMSMPW75BNKWZ\",\"WARC-Block-Digest\":\"sha1:M25IXS6A2TILNJMYSTYGATNBIUJLUEGZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202711.3_warc_CC-MAIN-20190323020538-20190323042538-00556.warc.gz\"}"}
http://arxiv-export-lb.library.cornell.edu/abs/2201.08062v1
[ "math.NA\n\n# Title: A stabilizer-free $C^0$ weak Galerkin method for the biharmonic equations\n\nAbstract: In this article, we present and analyze a stabilizer-free $C^0$ weak Galerkin (SF-C0WG) method for solving the biharmonic problem. The SF-C0WG method is formulated in terms of cell unknowns which are $C^0$ continuous piecewise polynomials of degree $k+2$ with $k\\geq 0$ and in terms of face unknowns which are discontinuous piecewise polynomials of degree $k+1$. The formulation of this SF-C0WG method is without the stabilized or penalty term and is as simple as the $C^1$ conforming finite element scheme of the biharmonic problem. Optimal order error estimates in a discrete $H^2$-like norm and the $H^1$ norm for $k\\geq 0$ are established for the corresponding WG finite element solutions. Error estimates in the $L^2$ norm are also derived with an optimal order of convergence for $k>0$ and sub-optimal order of convergence for $k=0$. Numerical experiments are shown to confirm the theoretical results.\n Comments: 26 pages, 1 figure Subjects: Numerical Analysis (math.NA) MSC classes: 65N15, 65N30 Cite as: arXiv:2201.08062 [math.NA] (or arXiv:2201.08062v1 [math.NA] for this version)\n\n## Submission history\n\nFrom: Peng Zhu [view email]\n[v1] Thu, 20 Jan 2022 08:58:36 GMT (64kb,D)\n\nLink back to: arXiv, form interface, contact." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8705737,"math_prob":0.98569185,"size":1268,"snap":"2022-27-2022-33","text_gpt3_token_len":336,"char_repetition_ratio":0.087025315,"word_repetition_ratio":0.0,"special_character_ratio":0.24447949,"punctuation_ratio":0.08898305,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99271494,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-29T02:53:22Z\",\"WARC-Record-ID\":\"<urn:uuid:e4b904b4-9305-4d11-9a58-28f6dec6246b>\",\"Content-Length\":\"16618\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:71b23929-df27-45a4-b7c8-348c9244fe92>\",\"WARC-Concurrent-To\":\"<urn:uuid:9b26d942-e67e-44d6-8ac9-cafd18d2bc7b>\",\"WARC-IP-Address\":\"128.84.21.203\",\"WARC-Target-URI\":\"http://arxiv-export-lb.library.cornell.edu/abs/2201.08062v1\",\"WARC-Payload-Digest\":\"sha1:LSAQWDWAAIOMQMTRCY3QKKZU44PFFHBA\",\"WARC-Block-Digest\":\"sha1:47G3CEYBPZ3BU5OCRHZLZDCIVU77PC7T\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103620968.33_warc_CC-MAIN-20220629024217-20220629054217-00373.warc.gz\"}"}
https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2013-October/084608.html
[ "# [gmx-users] change of bilayer structure during NVT equilibration\n\nshahab shariati shahab.shariati at gmail.com\nSun Oct 6 16:03:55 CEST 2013\n\n```Dear gromacs users\n\nMy system contains DOPC + CHOLESTEROLO + WATER in a rectangular box.\n\nI did energy minimization successfully with following mdp file.\n--------------------------------------------------------------------------------------\n; Parameters describing what to do, when to stop and what to save\nintegrator = steep ; Algorithm (steep = steepest descent\nminimization)\nemtol = 1000.0 ; Stop minimization when the maximum force <\n1000.0 kJ/mol/nm\nemstep = 0.01 ; Energy step size\nnsteps = 50000 ; Maximum number of (minimization) steps to\nperform\n\n; Parameters describing how to find the neighbors of each atom\nnstlist = 1 ; Frequency to update the neighbor list and\nlong range forces\nns_type = grid ; Method to determine neighbor list (simple,\ngrid)\nrlist = 1.2 ; Cut-off for making neighbor list (short range\nforces)\ncoulombtype = PME ; Treatment of long range electrostatic\ninteractions\nrcoulomb = 1.2 ; Short-range electrostatic cut-off\nrvdw = 1.2 ; Short-range Van der Waals cut-off\npbc = xyz ; Periodic Boundary Conditions\n---------------------------------------------------------------------------------------\n\nAfter energy minimization, I saw obtained file (em.gro) by VMD. All things\nwere true\nand intact.\n\nI did equilibration in NVT ensemble with following mdp file.\n--------------------------------------------------------------------------------------\ntitle = NVT equilibration\n; Run parameters\nintegrator = md ; leap-frog integrator\nnsteps = 7500000 ; 2 * 7500000 = 15 ns\ndt = 0.002 ; 2 fs\n; Output control\nnstxout = 1000 ; save coordinates every 0.2 ps\nnstvout = 1000 ; save velocities every 0.2 ps\nnstxtcout = 1000\nnstenergy = 1000 ; save energies every 0.2 ps\nnstlog = 1000 ; update log file every 0.2 ps\n; Bond parameters\ncontinuation = no ; first dynamics run\nconstraint_algorithm = lincs ; holonomic constraints\nconstraints = all-bonds ; all bonds (even heavy atom-H bonds)\nconstrained\nlincs_iter = 1 ; accuracy of LINCS\nlincs_order = 4 ; also related to accuracy\n; Neighborsearching\nns_type = grid ; search neighboring grid cels\nnstlist = 5 ; 10 fs\nrlist = 1.2 ; short-range neighborlist cutoff (in\nnm)\nrcoulomb = 1.2 ; short-range electrostatic cutoff (in\nnm)\nrvdw = 1.2 ; short-range van der Waals cutoff (in\nnm)\n; Electrostatics\ncoulombtype = PME ; Particle Mesh Ewald for long-range\nelectrostatics\npme_order = 4 ; cubic interpolation\nfourierspacing = 0.16 ; grid spacing for FFT\n; Temperature coupling is on\ntcoupl = V-rescale ; modified Berendsen thermostat\ntc-grps = CHOL_DOPC SOL ; three coupling groups - more accurate\ntau_t = 0.1 0.1 ; time constant, in ps\nref_t = 323 323 ; reference temperature, one for each\ngroup, in K\n; Pressure coupling is off\npcoupl = no ; no pressure coupling in NVT\n; Periodic boundary conditions\npbc = xyz ; 3-D PBC\n; Dispersion correction\nDispCorr = EnerPres ; account for cut-off vdW scheme\n; Velocity generation\ngen_vel = yes ; assign velocities from Maxwell\ndistribution\ngen_temp = 323 ; temperature for Maxwell distribution\ngen_seed = -1 ; generate a random seed\n; COM motion removal\n; These options remove motion of the protein/bilayer relative to the\nsolvent/ions\nnstcomm = 5\ncomm-mode = Linear\ncomm-grps = CHOL_DOPC SOL\n---------------------------------------------------------------------------------------\n\nAfter 15 ns NVT equilibration, I saw obtained file (nvt.gro) by VMD.\n\nUnfortunately, rectangular shape of box was converted to cylinder shape.\nDOPC and CHOL\nmolecules moved from center of box to environs of box.\n\nWhat is reason of this issue?\n\nShould I use new parameters in mdp file?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.65056556,"math_prob":0.9436551,"size":3862,"snap":"2019-26-2019-30","text_gpt3_token_len":1020,"char_repetition_ratio":0.15966822,"word_repetition_ratio":0.003338898,"special_character_ratio":0.3314345,"punctuation_ratio":0.15517241,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9587202,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-24T01:26:35Z\",\"WARC-Record-ID\":\"<urn:uuid:37accdb7-055d-49d7-b340-e7afa1711879>\",\"Content-Length\":\"7450\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a7bff189-92c2-40fe-b57c-827e94714f86>\",\"WARC-Concurrent-To\":\"<urn:uuid:d2fb6eef-2e9f-4c6c-88cd-92b9d00cff47>\",\"WARC-IP-Address\":\"130.237.48.102\",\"WARC-Target-URI\":\"https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2013-October/084608.html\",\"WARC-Payload-Digest\":\"sha1:JPVMWS3YXMFXAJHJDNFVSTTALU6GO55L\",\"WARC-Block-Digest\":\"sha1:FXDY2UO2DSSOZP5XITL6FKB4CBWOLNRM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195530246.91_warc_CC-MAIN-20190723235815-20190724021815-00184.warc.gz\"}"}
https://barbourdesign.wordpress.com/tag/hamid-naderi-yeganeh/
[ "The connection between mathematics and art dates back thousands of years. From cathedrals to ancient tilings to oriental rugs, mathematics have been fundamental in geometric designs that are now revered and often emulated. In honor of Common Core testing that is taking place here in New York State this week, we thought it fitting to look at the work of Iranian mathematical artist Hamid Naderi Yeganeh. These often delicately intricate works are quite remarkable, and more astounding is that Yeganeh writes computer programs based on mathematical equations to produce them. Though Yeganeh’s mathematical descriptions are way over our heads (example below), the aesthetic and conceptual allure of these works is certainly not lost on us. The results are stunning, and just proof that math can be beautiful.\n\nThis first image shows 9,000 ellipses. For each k=1,2,3,…,9000 the foci of the k-th ellipse are:\nA(k)+iB(k)+C(k)e^(300πik/9000)\nand\nA(k)+iB(k)-C(k)e^(300πik/9000)\nand the eccentricity of the k-th ellipse is D(k), where\nA(k)=sin(12πk/9000)cos(8πk/9000),\nB(k)=cos(12πk/9000)cos(8πk/9000),\nC(k)=(1/14)+(1/16)sin(10πk/9000),\nD(k)=(49/50)-(1/7)(sin(10πk/9000))^4.", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "" ]
[ null, "https://barbourdesign.files.wordpress.com/2016/04/yeganeh-01.jpg", null, "https://barbourdesign.files.wordpress.com/2016/04/yeganeh-02.jpg", null, "https://barbourdesign.files.wordpress.com/2016/04/yeganeh-03.jpg", null, "https://barbourdesign.files.wordpress.com/2016/04/yeganeh-06.jpg", null, "https://barbourdesign.files.wordpress.com/2016/04/yeganeh-04.jpg", null, "https://barbourdesign.files.wordpress.com/2016/04/yeganeh-05.jpg", null, "https://barbourdesign.files.wordpress.com/2016/04/yeganeh-07.jpg", null, "https://barbourdesign.files.wordpress.com/2016/04/yeganeh-08.jpg", null, "https://barbourdesign.files.wordpress.com/2016/04/yeganeh-09.jpg", null, "https://barbourdesign.files.wordpress.com/2016/04/yeganeh-10.jpg", null, "https://barbourdesign.files.wordpress.com/2016/04/yeganeh-11.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89893585,"math_prob":0.9914144,"size":1199,"snap":"2019-43-2019-47","text_gpt3_token_len":339,"char_repetition_ratio":0.11882845,"word_repetition_ratio":0.0,"special_character_ratio":0.2885738,"punctuation_ratio":0.10204082,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99101657,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-16T22:57:42Z\",\"WARC-Record-ID\":\"<urn:uuid:52deb93a-e7cc-477e-bd30-6d67c3e64bd0>\",\"Content-Length\":\"72452\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9c510e50-9fd2-4afe-916c-512829c7f193>\",\"WARC-Concurrent-To\":\"<urn:uuid:98f67cc8-b3f2-4e46-8a1f-7cfa2cccde2c>\",\"WARC-IP-Address\":\"192.0.78.13\",\"WARC-Target-URI\":\"https://barbourdesign.wordpress.com/tag/hamid-naderi-yeganeh/\",\"WARC-Payload-Digest\":\"sha1:JLG26MBVRAR7A5BVT56ZM6W3RIVRFNZB\",\"WARC-Block-Digest\":\"sha1:R6KTX7DMFSAVAYMX3COR6H73ZUK253GX\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986670928.29_warc_CC-MAIN-20191016213112-20191017000612-00381.warc.gz\"}"}
https://justaaa.com/chemistry/248508-a-980-l-container-holds-a-mixture-of-two-gases-at
[ "Question\n\n# A 9.80-L container holds a mixture of two gases at 53\n\nA 9.80-L container holds a mixture of two gases at 53\n\ngiven\n\nvolume of container = 9.8\n\ntemperature = 53 C = 53 + 273 = 326 K\n\nnow\n\ntotal pressure = partial pressure of A (PA) + partial pressure of B (PB)\n\nso\n\ntotal pressure = 0.165 + 0.829\n\ntotal pressure (P) = 0.994 atm\n\nwe know that\n\nPV = nRT\n\nso\n\n0.994 x 9.80 = n x 0.0821 x 326\n\nn = 0.364\n\nso\n\nthe total moles = 0.364\n\nnow\n\n0.18 moles of third gas is added\n\nso\n\nnew total moles = 0.364 + 0.18 = 0.544\n\nalso given\n\nvolume and temperature remains constant\n\nso use\n\nPV = nRT\n\nP x 9.8 = 0.544 x 0.0821 x 326\n\nP = 1.4856\n\nso the total pressure becomes 1.4856 atm\n\n#### Earn Coins\n\nCoins can be redeemed for fabulous gifts." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80936784,"math_prob":0.99992824,"size":649,"snap":"2023-40-2023-50","text_gpt3_token_len":247,"char_repetition_ratio":0.15348837,"word_repetition_ratio":0.0,"special_character_ratio":0.42681047,"punctuation_ratio":0.125,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996803,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-10T00:30:40Z\",\"WARC-Record-ID\":\"<urn:uuid:43d3b452-99e3-4b72-b10a-f72fd892fa9e>\",\"Content-Length\":\"40070\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bad57d0f-8f83-48f5-9755-a682159b44e3>\",\"WARC-Concurrent-To\":\"<urn:uuid:b133bb0b-b50c-440c-a213-4683803c84ce>\",\"WARC-IP-Address\":\"172.67.222.198\",\"WARC-Target-URI\":\"https://justaaa.com/chemistry/248508-a-980-l-container-holds-a-mixture-of-two-gases-at\",\"WARC-Payload-Digest\":\"sha1:VVUKSYQBBIB7V47R6X2DKYPM2WSJZUP7\",\"WARC-Block-Digest\":\"sha1:QHCGSGJ7IH6L5WLQ2X5EB6TJIRSWCMCA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100989.75_warc_CC-MAIN-20231209233632-20231210023632-00145.warc.gz\"}"}
http://www.woodmann.com/forum/showthread.php?3066-Blum-Blum-Shub-questions
[ "1. ## Blum-Blum-Shub questions\n\nHi\ni have read the blum blum shub algo in applied cryptography(Bruce Schiner).\nAs i have understand the program should be like this:\n//p=7 q=19 n=133\n#include<iostream.h>\nmain(){\nint n=133;\nint s=11;\nint x;\nint b;\nx=((s*s)%n); //x0=s^2 mod n\nfor (int i=0;i<14;i++)\n{\nx[i]=((x[i-1]*x[i-1])% n);\nb[i]=x[i]&1;\ncout<< b[i];\n}\n}\nThis program give me a 14 digit random that i can use for the\nPassword.My question is if I want to produce 10000 random passwords I have to give p and q 10000 different number that\nproduce n witch is a blum integer.The problem is that this\nnumber should be produced randomly i mean i have to produce\na big quantity of prime number p and q each congruent to 3 modulo 4 and different from the latest produced number.\nAs my program should produce n batch of card and each batch\ncontain 10000 different password I don't know how to produce\np and q.\n\nakimp3", null, "2. No, you only need one n ever.\n\nn=p*q\n\np and q should be really big (like 512 bits)\n\nYou will need a bignum library to implement this code in such a way that it can't be broken. I suggest using Wei Dai's crypto++ library:\n\nhttp://www.eskimo.com/~weidai\n\nBBS will generate one bit per modular squaring. You need 14 log_2 10 bits = 47 bits for a 14-digit decimal number.\n\nSo for every 47 output bits, convert the bits from a 47-digit binary number into a 14-digit decimal number. Anyone who can predict your numbers can make a lot more money breaking into banks & other such stuff.\n\nAn even better way of getting your numbers is a true rng, not a pseudo-rng, based on thermal noise or something similar.", null, "3. ## thanks\n\nHi\nthank you very much for your help.\nI have understand completly my mistakes.\nthanks\nbye", null, "4. ## anyone have test value for Blum-Blum-Shum?\n\nHi\nit.I have used Miracl library(the one used by tE in RSA tools).\nI think that my program is correct but I don't have any test data\nthat i can check it,do you have any sample data for the\nBlum-Blum-Shub algo?\n\nI have attached my source code and my exe file\nto this post if anyone could tell me if everythings is correct.\n@Mike:\nabout the 202 bit that you told me i have a little question\ni want a 14 digit password each digit is beetween 0-9 so\ni think that 46 bit is nedded could you please tell me if I\nam wrong?\n\nakimp3", null, "5. I don't have test vectors, sorry. And 47 bits is right. I can't remember how I got 202. It's obviously way off.", null, "####", null, "Posting Permissions\n\n• You may not post new threads\n• You may not post replies\n• You may not post attachments\n• You may not edit your posts\n•" ]
[ null, "http://www.woodmann.com/forum/images/misc/progress.gif", null, "http://www.woodmann.com/forum/images/misc/progress.gif", null, "http://www.woodmann.com/forum/images/misc/progress.gif", null, "http://www.woodmann.com/forum/images/misc/progress.gif", null, "http://www.woodmann.com/forum/images/misc/progress.gif", null, "http://www.woodmann.com/forum/images/buttons/collapse_40b.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.77966493,"math_prob":0.7505055,"size":940,"snap":"2020-34-2020-40","text_gpt3_token_len":276,"char_repetition_ratio":0.110042736,"word_repetition_ratio":0.0,"special_character_ratio":0.31276596,"punctuation_ratio":0.08372093,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9718582,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-10T02:33:39Z\",\"WARC-Record-ID\":\"<urn:uuid:dca38115-8f06-4d30-877f-d52b96a109c7>\",\"Content-Length\":\"53080\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0d040332-9193-4c1d-876e-17c05cd2666c>\",\"WARC-Concurrent-To\":\"<urn:uuid:145dc0b6-6ee2-4d18-92ca-eaebfe64526b>\",\"WARC-IP-Address\":\"185.62.190.110\",\"WARC-Target-URI\":\"http://www.woodmann.com/forum/showthread.php?3066-Blum-Blum-Shub-questions\",\"WARC-Payload-Digest\":\"sha1:JN6BJAJQBPXG7ASYG74U7E2P3PGTPQ2G\",\"WARC-Block-Digest\":\"sha1:VLODBYAUNJ3TZLOFU4LXQFLR3PLN5PRT\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738603.37_warc_CC-MAIN-20200810012015-20200810042015-00373.warc.gz\"}"}
https://ham.stackexchange.com/questions/9759/why-is-the-neper-a-useful-unit-for-transmission-line-calculations?noredirect=1
[ "# Why is the neper a useful unit for transmission line calculations?\n\nSWR Measured at the Transmitter versus SWR at the Antenna says the neper is \"a more convenient unit for transmission line calculations\".\n\nWhy exactly? What is a neper, and what about it makes it more convenient than the more common decibel? Are there some examples of equations which are more complicated in decibels?\n\nA neper, just like a decibel, is a logarithmic expression of ratios. The decibel uses the base-10, or decadic, logarithm while the neper uses the natural, or Euler constant, logarithm.\n\nThe decibel is strictly defined as the ratio of two powers.\n\n$$dB=10\\log_{10}\\left(\\frac{P_1}{P_2}\\right) \\tag 1$$\n\nWhile it is common to see a decibel formula based on voltage or current, such a ratio is only valid if the impedance of the two terms is the same.\n\nThe neper is simply defined as the ratio of voltage or current (or more generally, 'field' values):\n\n$$Np=\\ln\\left(\\frac{V_1}{V_2}\\right) \\text{ or } \\ln\\left(\\frac{I_1}{I_2}\\right) \\tag 2$$\n\nIt can be shown that 1 neper is equal to $20\\log_{10}(e)$ or about 8.6858 decibels.\n\nOne way to think about a natural logarithm is that it can be used to calculate how much time it takes to get a certain growth. The inverse function, ex, can be used to calculate growth given a certain amount of time. In fact e is sometimes referred to as the universal rate of growth.\n\nThis has many applications in the area of electronics. As an example, the formula for voltage across a capacitor as a function of time (growth/decay) as it discharges through a resistor makes use of this relationship:\n\n$$V_C{(t)}=V_0*e^\\left({\\frac{-t}{RC}}\\right) \\tag 3$$\n\nwhere R is the discharge resistance in ohms, C is the capacitance in Farads, t is the time in seconds, and V0 is the initial voltage across the capacitor.\n\nSimilarly transmission line equations have the notion of growth or decay of voltage and current as a function of time or length. Transmission line attenuation is such as example. The attenuation of a transmission line ($\\alpha$) is generally given as nepers/meter or nepers/kilometer. Thus for a given length of transmission line, the attenuated voltage at any point along the line is simply given as:\n\n$$V_{(l)}=\\frac{V_0}{e^{\\alpha l}} \\tag 4$$\n\nwhere V0 is the original voltage, $\\alpha$ is the attenuation in nepers/meter, and l is the point in the transmission line in meters from the initial voltage V0.\n\nThe same form of the calculation using attenuation expressed in dB/meter results in:\n\n$$V_{(l)}=\\frac{V_0}{e^{\\left(\\frac{\\alpha l}{8.6858}\\right)}} \\tag 5$$\n\nwhere $\\alpha$ is expressed in dBs/meter.\n\nAn alternative form to equation 5 would be:\n\n$$V_{(l)}=\\frac{V_0}{10^{\\left(\\frac{\\alpha l}{20}\\right)}} \\tag 6$$\n\nwhere again $\\alpha$ is expressed in dBs/meter.\n\nThus it can be seen that equation 4 is slightly simpler in form compared to equation 5 or 6. The simplicity of equation 4 also makes the derivation of the \"rate of growth\" clearer. There is no need to ask, for example, what is the 8.6858 factor doing there?\n\n• never heard of neper before this post... thanks ! Jan 17 '18 at 13:57\n• Excellent reference post, Glenn & Phil. I haven't encountered this before. And am not sure where I might. But at least now I will know it is real and can refer back here if necessary. Apr 21 '18 at 0:51" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8809021,"math_prob":0.99864787,"size":2658,"snap":"2021-31-2021-39","text_gpt3_token_len":726,"char_repetition_ratio":0.12170309,"word_repetition_ratio":0.01909308,"special_character_ratio":0.2712566,"punctuation_ratio":0.08646616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99986064,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-20T04:11:17Z\",\"WARC-Record-ID\":\"<urn:uuid:82d3b247-2e65-4fd2-94bd-4f2717ed1ea5>\",\"Content-Length\":\"167848\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8736139b-2041-49c3-9c8f-2453a7230aa3>\",\"WARC-Concurrent-To\":\"<urn:uuid:45027658-0b4f-4f4d-ba19-d4b9afb09c21>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://ham.stackexchange.com/questions/9759/why-is-the-neper-a-useful-unit-for-transmission-line-calculations?noredirect=1\",\"WARC-Payload-Digest\":\"sha1:DNORWBXLHRECRR4V62HFQDGFDAOKR2RU\",\"WARC-Block-Digest\":\"sha1:BRGZUOHXOX4LFQTPI2RCHIOVOEUTAUIG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057018.8_warc_CC-MAIN-20210920040604-20210920070604-00579.warc.gz\"}"}
https://hackernoon.com/how-to-use-pointers-in-c-rrn3wnt
[ "How To Use Pointers in C by@zeppsan\n\n# How To Use Pointers in C\n\nA pointer is a kind of datatype that keeps track of wherein the computer’s memory a specific value is stored. Pointers are often used when working with bigger sets of data that need to be manipulated or when memory management is very important. The pointer had a rumor for being hard to learn and unnecessary since modern languages handle them for you. But understanding how pointers work is essential for a programmer since they will use them all the time without even thinking about it. To understand this article, you need to know the basics of variables in programming, preferably C.", null, "To understand this article, you need to know the basics of variables in programming, preferably C since that is what I will be using.\n\nWhile attending my first programming course at university, there was a fear among us students about what was to come. The pointer. The pointer had a rumor for being hard to learn and unnecessary since modern languages handle them for you. But understanding how pointers work, is essential for a programmer since they will use them all the time without even thinking about it.\n\nWhat is a pointer?\n\nA pointer is a kind of datatype that keeps tracks of wherein the computer’s memory a specific value is stored. Think of the pointer as a handle for a variable and without this handle, the computer would lose track of it.\n\nThe difference between pointers and a normal data type such as\n\n``int``\nis that a pointer can only hold a memory address while an\n``int``\nholds an actual value. This might sound weird, but follow along and I'll make it more clear.\n\nWhen and why are pointers used?\n\nPointers are often used when working with bigger sets of data that need to be manipulated or when memory management is very important. Pointers will reduce the amount of memory required to do certain things such as working with recursive functions or when passing big data around.\n\nAs you might know, when sending a variable to a function it creates a local copy of the variable in that function instance which could result in big memory waste further down the road. So instead of passing around bigger variable/struct that could be ex. 100 Bytes, we can instead pass around a pointer that contains a reference to the variable/structs memory location. Since a pointer is 8 bytes, this is often beneficial because they use a lot less memory.\n\nHow to create a pointer?\n\nTo kickstart your understanding, I will jump straight into the code and explain afterward what is happening.\n\n``````int main() {\n\n// Declaring and initializing the variable Luke\nint Luke = 22;\n\n// Declaring the pointer Henry\nint* Henry;\n\n// Initializing the pointer Henry\nHenry = &Luke;\n\nprintf(\"The pointer to the variable Luke is: %p\", Henry);\n\nreturn 0;\n}``````\n\nFor demonstration purposes we will use:\n\n``Luke the variable``\nand\n``Henry the pointer``\nto understand the concept more easily.\n\nAt first, I created a normal int named Luke and assigned the value 22 to it, just as one would normally do.\n\nI then created another\n\n``int``\nbut with an\n``*``\n(asterisk) immediately after. This signals the computer that the int variable that we just created named Henry, is a pointer that will point at a specific int value in the computer's memory. This could be done with any data type:\n\n``````#include <stdio.h>\n\nint main() {\nchar* name;\nint* age;\n}\n``````\n\nWhen declared, we never gave Henry the pointer any directives about where to point, so right now, Henry is pointing in a random direction. At this point, Henry is whats called a wild pointer. To give Henry somewhere to point, we can assign him the memory location of Luke the variable, by typing\n\n``Henry = &Luke``\n.\n\nWhen telling Henry where to point, you might have realized that I used a\n\n``&``\n(Ampersand) before Luke the variable. The ampersand is used in front of a variable to get its address in memory. When this step is done, we have successfully created a pointer to the memory address where the value 22 is stored.\n\nThe output of the function above would have been something like:\n\n``The pointer to the variable Liks is: 00B3F98C``\n\nThis example was made using an int, however, the same technique applies for floats, chars, structs, and so on.\n\nHow does a pointer work?\n\nNow that we know how to create a pointer, let's try to more deeply understand how they actually work, else there would be no point ;)", null, "``````#include <stdio.h>\n\nint main() {\nint Luke = 22;\n\nint* Henry;\n\nreturn 0;\n}``````\n\nThe image & code above are equivalent. We declared and initialized the variable Luke and declared the pointer Henry. Henry is now a wild pointer and to make him point at the same memory address as Luke, we simply set Henry = &Luke as before.", null, "By doing this we have successfully made Henry aware of where in the memory Luke stores his value.\n\nHow to use a pointer?\n\nUntil now you have learned how to declare and initialize a pointer but not how to use it really.\n\nLet's say that we want to print the value that Henry is pointing at, how would we do such a thing? Take a look below:\n\n``````#include <stdio.h>\n\nint main() {\n\nint Luke = 22;\nint* Henry = &Luke;\n\nprintf(\"The value at the memory location that Henry points at is : %d\", *Henry);\n\nreturn 0;\n}``````\n\nAs you might has noticed I used the\n\n``*``\nin front of the pointer Henry when printing him. This is how we \"dereference\" a pointer, with other words, actually looking at the value the pointer is pointing at. In this case, we would print the value 22 since that is what Henry is pointing at.\n\nIf you are struggling to understand why Henry is pointing at the value 22, I would recommend you to take another look at the images & code snippets above.\n\nPointers in functions\n\nA pointer can be sent as a function parameter and then be manipulated inside the function and stay that way without having to return the value afterward. Below I will show the difference in working with/without pointers.\n\nWithout pointer:\n\n``````#include <stdio.h>\n\nint main() {\nint Luke = 22;\nLuke = changeValue(Luke);\n}\n\nint changeValue(int value) {\nvalue = 31;\nreturn value;\n}``````\n\nWith pointer:\n\n``````int main() {\nint Luke = 22;\nchangeValue(&Luke);\nprintf(\"%d\", Luke);\n}\n\nvoid changeValue(int* value) {\n*value = 31;\n}``````\n\nIn the example with using a pointer, we sent the memory address of Luke to the function\n\n``changeValue()``\nand then manipulated the value at the memory location by dereferencing the pointer and setting the value. By doing this, Luke's value is now changed without having to return the value.\n\nMakes it possible to return more values\n\nBy using pointers, it's possible to return more than one value from a function. Since we can change the value of a variable by manipulating its memory, we could can do stuff like this:\n\n## Summary:\n\nPointers is a data type that holds memory addresses to values in memory, often other variables or structs. Pointers are used to reduce memory usage and speed up the execution time in applications, especially when working with bigger sets of data.\n\nTo create a pointer you will have to put an * after the data type declaration, ex\n\n``int* age;``\n. This is how we tell the computer that we are creating a pointer and not a variable. The same operator is used when dereferencing a pointer to get the actual value that the pointer is pointing at.\n\nWhen getting the memory address of a variable/struct we use the & operator before the variable name, ex\n\n``&age``\n.\n\nPointers are very useful when working with bigger sets of data or applications that rely on recursive functions. This is because pointers can increase the execution speed of those programs.\n\n## Final words\n\nI hope that this increased your knowledge about pointers and why they are used. The reason for writing this is because I see people struggling with pointers a lot and thought I should try to pedagogically teach how they work.\n\nProgramming is not impossible to learn, it is all about time and dedication. Everyone can code, all other statements are wrong.\n\nThanks for me, I'm Eric, a computer science student at Mälardalens Högskola, Sweden.", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "", null, "Join Hacker Noon" ]
[ null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "https://hackernoon.com/emojis/heart.png", null, "https://hackernoon.com/emojis/heart.png", null, "https://hackernoon.com/emojis/heart.png", null, "https://hackernoon.com/emojis/heart.png", null, "https://hackernoon.com/emojis/light.png", null, "https://hackernoon.com/emojis/light.png", null, "https://hackernoon.com/emojis/light.png", null, "https://hackernoon.com/emojis/light.png", null, "https://hackernoon.com/emojis/boat.png", null, "https://hackernoon.com/emojis/boat.png", null, "https://hackernoon.com/emojis/boat.png", null, "https://hackernoon.com/emojis/boat.png", null, "https://hackernoon.com/emojis/money.png", null, "https://hackernoon.com/emojis/money.png", null, "https://hackernoon.com/emojis/money.png", null, "https://hackernoon.com/emojis/money.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92922807,"math_prob":0.9492854,"size":7346,"snap":"2021-43-2021-49","text_gpt3_token_len":1604,"char_repetition_ratio":0.15608826,"word_repetition_ratio":0.02607362,"special_character_ratio":0.22052819,"punctuation_ratio":0.1048218,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95291865,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-28T04:44:57Z\",\"WARC-Record-ID\":\"<urn:uuid:de8b9b71-ceed-475c-98be-c341f435379b>\",\"Content-Length\":\"239040\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:637784ce-811c-42ec-b6a3-ff0a2f6aacf2>\",\"WARC-Concurrent-To\":\"<urn:uuid:75f1d643-6e6f-4dfd-956d-05f06d8ffe2c>\",\"WARC-IP-Address\":\"104.21.54.123\",\"WARC-Target-URI\":\"https://hackernoon.com/how-to-use-pointers-in-c-rrn3wnt\",\"WARC-Payload-Digest\":\"sha1:3NDKLVG3ANGNVOHZSCAVPHNPLUTY6JLB\",\"WARC-Block-Digest\":\"sha1:RBB2ONT2A5PMOCYXIW7IJFYXGMVBYIML\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588257.34_warc_CC-MAIN-20211028034828-20211028064828-00128.warc.gz\"}"}